
Month: March 2016

Database File Info

Mike Fal shows us how to get database file information using PowerShell:

To do this, I dove into the SMO object model. This gets a little .Net-y, but the good news is there’s lots of properties we can use to get the information we are looking for. If we look at both the DataFile and LogFile classes, there are properties readily available for us. Both classes have UsedSpace and Size properties (both measured in KB), from which we can derive both available space and percentage used. All it takes is wrapping some collection and formatting logic around these objects and we’re good to go. You can see my full function up on GitHub.

This is a nice example of using the PowerShell pipeline to build an end product, in this case an HTML report of data and log file usage.
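Mike’s approach goes through SMO from PowerShell; as a point of comparison, here is a rough T-SQL sketch of the same arithmetic for the current database. This is my own illustration rather than Mike’s function, and since FILEPROPERTY returns pages, the values are multiplied by 8 to land in KB like SMO’s UsedSpace and Size.

-- Size and used space per file, in KB, with derived free space and percent used.
SELECT
    name AS logical_name,
    type_desc,
    size * 8 AS size_kb,
    FILEPROPERTY(name, 'SpaceUsed') * 8 AS used_kb,
    (size - FILEPROPERTY(name, 'SpaceUsed')) * 8 AS available_kb,
    CAST(100.0 * FILEPROPERTY(name, 'SpaceUsed') / size AS decimal(5, 2)) AS percent_used
FROM sys.database_files;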


DBCC Extended Checks

The SQL Server CSS team has a new post on DBCC performance improvements:

Starting with SQL Server 2016, additional checks on filtered indexes, persisted computed columns, and UDT columns will not be run by default to avoid the expensive expression evaluation(s). This change greatly reduces the duration of CHECKDB against databases containing these objects. However, the physical consistency checks of these objects are always completed. Only when the EXTENDED_LOGICAL_CHECKS option is specified will the expression evaluations be performed, in addition to the logical checks already included in that option (indexed views, XML indexes, and spatial indexes).

For filtered indexes, CHECKDB has also been improved to skip records that do not qualify as being indexed by the target NC index.

This is telling me that we’ll want to have two separate CHECKDB processes, one which regularly runs CHECKDB (or CHECKTABLE) and one which occasionally runs CHECKDB with EXTENDED_LOGICAL_CHECKS.
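As a rough sketch of what those two jobs might look like (the database name and NO_INFOMSGS are my own placeholders, not from the CSS post):

-- Routine run: physical checks plus the default logical checks.
DBCC CHECKDB (N'YourDatabase') WITH NO_INFOMSGS;

-- Occasional run: also performs the expensive expression evaluations for
-- filtered indexes, persisted computed columns, and UDT columns.
DBCC CHECKDB (N'YourDatabase') WITH EXTENDED_LOGICAL_CHECKS, NO_INFOMSGS;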


Table Partitioning

Slava Murygin walks through partitioning:

Usually new partition ranges are added to the end of a function/scheme, but in my example I’ll demonstrate how to split a middle partition.
First we create a new filegroup, then mark it as next used within the scheme, and then split the function:

All records that are less than 1/1/2015 and more than 1/1/2014 were moved to the new partition.

Partitioning is one of those topics I run into frequently enough to need to know it, but not frequently enough to memorize it; every time it feels like I’m starting from scratch.
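For reference, here is a minimal sketch of the kind of mid-range split Slava describes, with hypothetical database, filegroup, scheme, and function names (which rows land in the new partition depends on your boundary values and on RANGE LEFT/RIGHT):

-- Create the new filegroup and give it a file.
ALTER DATABASE YourDatabase ADD FILEGROUP FG2014;
ALTER DATABASE YourDatabase
    ADD FILE (NAME = N'YourDatabase_2014', FILENAME = N'D:\Data\YourDatabase_2014.ndf')
    TO FILEGROUP FG2014;

-- Mark it as the next filegroup the partition scheme will use...
ALTER PARTITION SCHEME ps_OrderDate NEXT USED FG2014;

-- ...then split the existing range at the new boundary.
ALTER PARTITION FUNCTION pf_OrderDate() SPLIT RANGE ('2015-01-01');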


Improving SMO Performance

Cody Konior shows how to use GetDefaultInitFields in SMO to improve performance, and also explains a bug when you try to do this:

It turns out [this type] has no default properties set, and the exception seems to occur after you try to set some. Interestingly, though, the way that boolean flag works is to set default properties on each SMO type, which means the solution is to turn it on for everything and then only disable it for this specific type, retaining any of the performance improvements throughout the rest of SMO.

Anything that makes SMO faster gets a vote of approval from me.


Using The YEAR Function

Kendra Little looks at the performance ramifications of using the YEAR() function:

But if we hover over that index seek, we can see in the tooltip that there’s a hidden predicate that is NOT a seek predicate. This is a hidden filter. And because this is SQL Server 2016, we can see “Number of Rows Read” — it had to read 9.3 million rows to count 1.9 million rows. It didn’t realize the 2006 rows were together; it checked all the females and examined the FakeBirthDateStamp column for each row.

Built-in functions can cause SARGability issues.
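As a minimal illustration of the rewrite that keeps the predicate SARGable (the table name is a placeholder; the column follows Kendra’s example):

-- Non-SARGable: wrapping the column in YEAR() hides the predicate from the seek.
SELECT COUNT(*)
FROM dbo.People
WHERE Gender = 'F'
  AND YEAR(FakeBirthDateStamp) = 2006;

-- SARGable: an open-ended range on the bare column lets the index seek to 2006.
SELECT COUNT(*)
FROM dbo.People
WHERE Gender = 'F'
  AND FakeBirthDateStamp >= '20060101'
  AND FakeBirthDateStamp <  '20070101';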


Switching To Identities

James Anderson shows how to use a table SWITCH to convert a table without an identity column into one with an identity column:

The SWITCH statement can instantly ‘move’ data from one table to another table. It does this by updating some metadata to say that the new table is now the owner of the data instead of the old table. This is very useful as there is no physical data movement to cause the stresses mentioned earlier. There are a lot of rules enforced by SQL Server before it will allow this to work. Essentially each table must have the same columns with the same data types and NULL settings, they need to be in the same filegroup, and the new table must be empty. See here for a more detailed look at these rules.

If you can take a downtime, this is pretty easy.  Otherwise, making sure that the two tables are in sync until the switchover occurs is a key problem to keep in mind.
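A minimal sketch of the switch itself, with hypothetical table names (the target must mirror the source’s columns, types, NULLability, and filegroup, differing only in the identity property):

-- Target table: identical shape to dbo.Orders, but with an identity column, and empty.
CREATE TABLE dbo.Orders_New
(
    OrderID   int IDENTITY(1, 1) NOT NULL,
    OrderDate datetime2 NOT NULL
);

-- Metadata-only move of every row from the old table into the new one.
ALTER TABLE dbo.Orders SWITCH TO dbo.Orders_New;

-- Reseed so new inserts continue past the largest switched-in value.
DBCC CHECKIDENT (N'dbo.Orders_New', RESEED);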


System Database Restrictions

Dave Mason ran into a system database restriction regarding the number of files:

One thing I recently learned is that you can’t add files to [model]. I was hoping to impact the number of files a database has when it is created. It’s common for databases to be created with just the standard mdf/ldf pair of files. Then the db is loaded with varying amounts of data. After a certain threshold, it makes more sense to have multiple data files, based on the environment. Sure, I could add files later and “balance” the data across the files. But it would be nice to *not* have to do this; some downtime may be required and LOB data presents its own challenges.

There aren’t too many restrictions on the model database, but this is a biggie.
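The restriction is easy to demonstrate for yourself; an attempt like the following (file name and path are placeholders) is simply rejected rather than adding the file:

-- SQL Server does not allow adding files or filegroups to [model].
ALTER DATABASE model
    ADD FILE (NAME = N'model_extra', FILENAME = N'D:\Data\model_extra.ndf');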


Plan Handles

Andy Mallon talks about plan handles (sql_handle):

If you’ve ever looked at sys.sysprocesses or sys.dm_exec_requests (or a number of other DMVs), you’ve noticed there is a column called “sql_handle” that contains some binary gobbledygook. Books Online gives the (un)helpful definition as “Hash map of the SQL text of the request.”

“Binary gobbledygook” is probably the best description of the plan handle available.
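To turn that gobbledygook into something readable, the usual pattern is to hand sql_handle to sys.dm_exec_sql_text; a minimal example (not from Andy’s post):

-- Resolve each running request's sql_handle to the text of its batch.
SELECT
    r.session_id,
    r.status,
    t.text AS batch_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t;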


Finding Lost Customers

Meagan Longoria has lost customers and is trying to find them:

I was testing some calculations in my tabular model when I noticed that my Lost Customers calculation wasn’t working as expected. This was rather interesting to me since the calculation I was using was from DAXPatterns.com.  After some experimentation, I determined that the difference between my data situation, and the situation that fit the DAX Pattern was that my customer dimension was a Type 2 Slowly Changing Dimension. That meant I couldn’t use the customer key to identify a unique customer, since each customer could have multiple customer keys  (surrogate keys) with the same customer number (business key). Specifically, if a customer made a purchase in December 2015, then changed their name, then made a purchase in January and February, my calculation was counting them as lost because it was not recognizing that the customer with the new name and same customer number was actually the same customer.

She ends up with two solutions, each with different trade-offs.  Since I know so little DAX, looking at two different ways of solving this problem is great because it gives me more insight into the language’s semantics.


MDX For Beginners

Dan King continues his series on MDX:

So obviously we brought back Internet Sales for all years captured in the cube.  Now we need to understand the WHERE clause and how to use it properly in MDX.

The “WHERE” clause in MDX is used to define another slicer and limit the attributes from the defined slicer. The slicer used in the “WHERE” clause cannot be the same slicer used when defining your rows, which is why I use the date dimension.  The MDX query below depicts the WHERE clause and syntax:

I liked Dan’s introductory-level presentation on learning MDX, and this blog series is following along those same lines.
