I repeat the above: eh…what???
Where did my table go???
Please tell me I didn’t delete the table? It’s a test system, and I took a backup before starting, but it’s a whole lot of hassle to recreate the table.
Click through for the full story.
This past weekend, after one of my presentations on Analysis Services at SQL Saturday 520, an attendee described a performance issue in his environment that he thought could be related to locking and blocking, and asked if there was a way to confirm the cause. The short answer is yes, absolutely!
While I wouldn’t say this is a common issue across *all* Analysis Services environments, it’s definitely common amongst the subset of environments that do intra-day processing – that is, when the SSAS database is processed during hours of the day when users might be simultaneously querying it.
This is a very interesting article. I’d never thought about SSAS taking locks, but it makes perfect sense.
The small font size may be difficult for some people to read. The easiest and fastest way to change it is to click somewhere in the formula bar and scroll the mouse wheel while holding down the Ctrl key. However, the font size will revert to the default the next time you open SSDT.
If you wish to change the font size permanently, you can use the Tools menu. Click Tools –> Options –> Environment –> Fonts and Colors and choose “Business Intelligence Designers” from the drop-down menu. Here you can change the font size. Your changes will not take effect until you restart SSDT.
Unfortunately, the surrounding chrome doesn’t change size, so there are hard limits on how far you can scale up.
I can hear you yawning already – yet another blog post on getting data from multiple Excel workbooks in Power Query and Power BI. Just about everyone who has ever written a blog post on Power BI has written about this subject, including me. However, there’s a twist this time: what if your Excel workbooks are stored in SharePoint or OneDrive For Business? If they are, then your dataset may not refresh successfully after you have published it unless you load your data in a particular way.
Read the whole thing if you work with Power BI.
According to the Stack Overflow documentation, these are the categories of questions that may be closed by community users:
- off topic
- too broad
- primarily opinion-based

Not everyone in the Stack Overflow community is able to close a question. In fact, users need a certain amount of reputation, expressed in points (more details here).
Calculating the overall closure rate for the site is easy. Just use the original “questions_2016” dataset and count how many questions have the “Closed Date” field populated. Over 10% of the questions asked in 2016 have been closed so far.
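As a sketch of that calculation in T-SQL, assuming a table shaped like the “questions_2016” dataset (the table and column names here are placeholders, not the dataset’s actual schema):

```sql
-- COUNT(column) counts only non-NULL values, so counting ClosedDate
-- gives the number of closed questions directly.
SELECT
    COUNT(*)                             AS TotalQuestions,
    COUNT(ClosedDate)                    AS ClosedQuestions,
    100.0 * COUNT(ClosedDate) / COUNT(*) AS ClosurePercent
FROM dbo.questions_2016;
```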
If you’re interested in learning more about data analysis, walk through the exercise as well and play around with the data set too. Hat tip, R-Bloggers.
Unlike the sys.dm_exec_requests view in SQL Server, the sys.dm_pdw_exec_requests view keeps up to 10,000 records with information about a request even after it has finished executing. This capability is very useful, as you can track specific query executions for as long as their records remain among the 10,000 kept by the view; as time passes, the oldest records are phased out in favor of more recent ones.
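A hedged sketch of looking up a finished query in that history (the LIKE filter text is a placeholder – adjust it to the query you’re hunting for):

```sql
-- Most recent matching requests, including ones that have already completed.
SELECT TOP 10
       request_id,
       [status],
       submit_time,
       end_time,
       total_elapsed_time,
       command
FROM sys.dm_pdw_exec_requests
WHERE command LIKE '%FactSales%'   -- placeholder search text
ORDER BY submit_time DESC;
```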
This is an interesting look at some of the differences between Azure SQL Data Warehouse and a “normal” SQL Server installation. Good reading.
For many years it’s been a best practice never to put an ORDER BY in a view. The idea is that a view, just like any other query, shouldn’t have an inherent order; if you want the data from a view ordered, query the view with an ORDER BY clause. In fact, if you put an ORDER BY in a view, you’ll get an error:
Msg 1033, Level 15, State 1, Procedure MyView, Line 4 [Batch Start Line 2]
The ORDER BY clause is invalid in views, inline functions, derived tables, subqueries, and common table expressions, unless TOP, OFFSET or FOR XML is also specified.
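A minimal repro of both behaviors, using sys.objects as a stand-in table:

```sql
-- Fails with Msg 1033:
CREATE VIEW dbo.MyView AS
SELECT [name] FROM sys.objects ORDER BY [name];
GO

-- Compiles, but the ORDER BY is silently ignored when the view is queried:
CREATE VIEW dbo.MyTopView AS
SELECT TOP 100 PERCENT [name] FROM sys.objects ORDER BY [name];
GO

-- The only guaranteed ordering comes from the outer query:
SELECT [name] FROM dbo.MyTopView ORDER BY [name];
```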
I knew about the TOP 100 PERCENT bit, but had no idea that order was outright ignored. Read the comments for additional information.
Now that we can separate data for each group, we can fit a model to each tibble with map() from the purrr package (also tidyverse). We’re going to add the results to our existing tibble using mutate() from the dplyr package (again, tidyverse). Here’s a generic version of our pipe with adjustable parts in caps:
Read the whole thing. Hat tip, R-Bloggers.
There have been a few discussions on Stack Overflow recently about how to manage deployments in uncontrolled environments, specifically data migrations. The questions were asked from an SSDT perspective; I don’t think SSDT is a great choice for these uncontrolled environments, and they impose additional requirements that need extra thought and care when creating release scripts (whether manually or using a tool).
Ed has some interesting thoughts here, and I agree with the idea that SQL Server Data Tools deployment scripts are not the best choice when you have people changing schema all around you in unexpected ways.
Creating a DACPAC is easy*. Within SSMS, you simply right-click the database node and select Tasks, then Extract Data-Tier Application. I needed a database reference to the SSISDB for some reporting we were building out, so I clicked along my merry way.
Looks like the command line works just fine, at least.
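For reference, the command-line route means SqlPackage.exe’s Extract action; a sketch along these lines (server name and output path are placeholders):

```shell
SqlPackage.exe /Action:Extract ^
    /SourceServerName:MyServer ^
    /SourceDatabaseName:SSISDB ^
    /TargetFile:C:\temp\SSISDB.dacpac
```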