The order in which calculations are done – not just reading from left to right, but remembering that things like multiplication and division happen before addition and subtraction. My son tells me that kids nowadays are taught something called “BIDMAS”, which stands for “Brackets, Indices, Division, Multiplication, Addition, Subtraction”. Or it can be BODMAS – Brackets, Orders, Division, Multiplication, Addition, Subtraction. (“Order” is just another way of describing indices – i.e. x^y.)
Unsurprisingly, there are similar rules for Boolean operators.
It’s a valuable lesson, and one often learned the hard way.
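In T-SQL, those precedence rules mean that AND binds more tightly than OR, which regularly catches people out in WHERE clauses. A minimal sketch (the Tickets table and its columns are hypothetical):

```sql
-- AND is evaluated before OR, so this does NOT mean
-- "open or pending tickets owned by Alice":
SELECT * FROM Tickets
WHERE Status = 'Open' OR Status = 'Pending' AND Owner = 'Alice';
-- It is actually evaluated as:
--   Status = 'Open' OR (Status = 'Pending' AND Owner = 'Alice')

-- Brackets (the "B" in BIDMAS) make the intent explicit:
SELECT * FROM Tickets
WHERE (Status = 'Open' OR Status = 'Pending') AND Owner = 'Alice';
```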
Once you’ve got into the habit of using Extended Properties to document your database, there are obvious benefits:
- You can explain why you added that index or modified that constraint.
- You can describe exactly what that rather obscure column does.
- You can add a reasoned explanation to the use of a table.
You will often need these explanations because, sadly, DDL code isn’t ‘self-documenting’, and human memory is fallible. Extended Properties are easily searched because they are all exposed in one system view.
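For instance, a column description can be attached with sp_addextendedproperty and later found again in sys.extended_properties – the schema, table, and column names below are hypothetical, purely for illustration:

```sql
-- Attach a comment to a (hypothetical) column.
EXEC sys.sp_addextendedproperty
     @name = N'Description',
     @value = N'Customer''s preferred invoicing currency (ISO 4217 code).',
     @level0type = N'SCHEMA', @level0name = N'dbo',
     @level1type = N'TABLE',  @level1name = N'Customer',
     @level2type = N'COLUMN', @level2name = N'CurrencyCode';

-- All such comments are then searchable from one system view:
SELECT OBJECT_NAME(major_id) AS object_name, name, value
FROM sys.extended_properties
WHERE class_desc = 'OBJECT_OR_COLUMN';
```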
Once the database becomes sizeable, it is great to add explanations to procedures, functions and views. Extended Properties are useful when exploring that metadata, but not quite so essential there, because comments are preserved along with the source code. Tables, however, are a big problem, because SQL Server throws away the script that produced the table, along with all its comments. This happens because there are many ways to alter parts of a table without scripting the entire table – how could one infallibly fold all those ALTER statements back into a preserved script? It’s tricky. Table scripts that you get from SSMS, or script via SMO, are therefore synthesised from the system tables, without those comments or even Extended Properties.
Extended Properties are useful, but I think the lack of tooling around them prevented widespread adoption. Now that there are a few tools which support them (including SSMS’s data classification mechanism), I wonder if these will get a second look.
It looks like, internally, Query Store is referred to as plan_persist. That makes sense, given that the Query Store persists query plans to your database’s storage. Let’s take a look at those catalog views alongside their clustered and nonclustered indexes. I’ve modified the query a bit at this point, to group together the key columns.
This lets you see how the Query Store authors expected us to use these tables. Which isn’t always how people use them…
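For reference, here is a sketch of the kind of metadata query being described – my own approximation, not the author’s exact query. It lists the plan_persist_* internal tables behind Query Store and their indexes, grouping each index’s key columns into a single row:

```sql
-- List Query Store's internal tables (type 'IT', named plan_persist_*)
-- together with their indexes and key columns.
SELECT o.name AS table_name,
       i.name AS index_name,
       i.type_desc,
       STUFF((SELECT ', ' + c.name
              FROM sys.index_columns ic
              JOIN sys.columns c
                ON c.object_id = ic.object_id
               AND c.column_id = ic.column_id
              WHERE ic.object_id = i.object_id
                AND ic.index_id  = i.index_id
              ORDER BY ic.key_ordinal
              FOR XML PATH('')), 1, 2, '') AS key_columns
FROM sys.objects o
JOIN sys.indexes i ON i.object_id = o.object_id
WHERE o.type = 'IT'                 -- internal tables
  AND o.name LIKE 'plan_persist%'   -- Query Store's internal name
ORDER BY o.name, i.index_id;
```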
In both of these scenarios you will not be able to save the package. So what the heck are you supposed to do?! Here’s where my tunnel vision (and panic) sets in. How was I supposed to get my SSAS objects processed?
I could always script out my processing tasks using SSMS and drop them into a SQL Agent job step. But I have multiple environments and multiple cubes, so each one would have to be hard-coded. Not a great idea, so scratch that.
Click through to learn the best way to fix this.
Create the three measures shown below, then add them to the report along with a month slicer. You can change the month in the slicer and verify that the measure values change for the selected month.
Sales (Selected Month) = SUM ( Sales[Sales] )
Sales Last Year = CALCULATE ( SUM ( Sales[Sales] ), SAMEPERIODLASTYEAR ( 'Date'[Date] ) )
Sales YTD = TOTALYTD ( SUM ( Sales[Sales] ), 'Date'[Date] )
This is the first time I’ve seen a What If parameter in use. Very interesting.