Press "Enter" to skip to content

Day: December 15, 2015

Fractals In SSMS

Slava Murygin gives us a script to generate fractals:

That is the most difficult operation. First, SSMS can’t show more than 5,000 separate objects at the same time. In order to show more, we have to construct a “MULTIPOLYGON” or “GEOMETRYCOLLECTION”. That is the only way to fit more objects onto the SSMS screen; however, it is still limited.
In order to combine triangles into a single object, we divide them into buckets (Line 106).
In this example, I make the number of buckets approximately equal to the number of objects within each bucket. Using a lower number of buckets will increase processing speed but produce fewer colors. All objects in one collection will have the same color.
Also, I wrapped the last query in an extra CTE to have more flexibility in forming the results.

This is a fun post showing some of the power and limitations of geometry types in SQL Server and their display in SSMS.
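
As a minimal sketch of the trick being described (illustrative only, not Slava’s script), packing several triangles into one GEOMETRYCOLLECTION makes SSMS treat them as a single spatial object, rendered in a single color:

-- Illustrative only: three triangles combined into one displayable object
SELECT geometry::STGeomFromText(
    'GEOMETRYCOLLECTION(
        POLYGON((0 0, 1 0, 0.5 1, 0 0)),
        POLYGON((1 0, 2 0, 1.5 1, 1 0)),
        POLYGON((2 0, 3 0, 2.5 1, 2 0)))', 0) AS Bucket;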


Trace Flag 834 And Columnstore

Chris Bell warns us against having Trace Flag 834 turned on in an instance which contains columnstore indexes:

[I]t is not recommended to have trace flag 834 on when using columnstore indexes in your databases.

Since the 834 trace flag is a global-level flag and columnstores are in individual databases, I wrote the script below to go through and check if you have any columnstore indexes, and then check if the trace flag is enabled.

Chris also has a helpful script to see if your instance has this issue.
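
As a rough sketch of the kind of check involved (Chris’s actual script is more thorough), you can combine DBCC TRACESTATUS with a look at sys.indexes, where index types 5 and 6 are clustered and nonclustered columnstore indexes:

-- Is trace flag 834 (large-page allocations) enabled globally?
DBCC TRACESTATUS(834, -1);

-- Does the current database contain any columnstore indexes?
SELECT OBJECT_NAME(object_id) AS TableName, name AS IndexName, type_desc
FROM sys.indexes
WHERE type IN (5, 6);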


Check Your Automated Backups

Cody Konior shows us a few cases in which automated backup tools (like Ola Hallengren’s scripts) won’t actually back up databases:

We love Ola Hallengren’s Maintenance Solution but you should always, always double-check either the msdb backup history or the master.dbo.CommandLog table to make sure any important backup was taken. This is especially important if you trigger it manually and are relying on human input to get the parameters right.

Here are three easy-to-miss cases where the scripts won’t back up a database. These absolutely, definitely aren’t bugs; they’re idiosyncrasies of the underlying backup command and (sometimes) of how the script works. But they’re also much easier to miss in the verbose output of the script.

The moral of the story is to check your automated backup routines and make sure that they are doing what you expect.
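
As one illustration of that double-check (a sketch, not Cody’s code), you can compare each database against its most recent full backup recorded in msdb:

-- Find each database's last completed full backup; NULL means none on record.
SELECT d.name AS DatabaseName,
       MAX(b.backup_finish_date) AS LastFullBackup
FROM sys.databases d
LEFT JOIN msdb.dbo.backupset b
    ON b.database_name = d.name
   AND b.type = 'D'  -- D = full database backup
WHERE d.name <> 'tempdb'
GROUP BY d.name
ORDER BY LastFullBackup;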


Master Data Services Workflow

Reza Rad shows us how to introduce workflow concepts into MDS 2016:

This feature was introduced in SQL Server 2016 CTP 3.0. You can save your pending changes in a set called a Change Set. This set can then be modified, deleted, or applied. You can also add a requirement for approval to the change set, which then requires an entity administrator to approve or reject it. This is a great feature for MDS and would help with the workflow process of making changes in master data. Let’s now have a closer look at how this feature works in action.

I don’t use Master Data Services, but I could see this being very useful at a large company with a lot of people modifying master data.


Check Bulk Insert Errors

Tom Staab points out bulk insert allows up to 10 errors by default:

The issue was that the last row in a text file contained the row count, so he needed to bulk import all but that last row.

My solution was to set maxerrors to 1 so the import would ignore the last row due to the error. Any other row with an error would still fail the import. This reminded me of one of my least favorite defaults in SQL Server, so I decided to write about it here as well. A lot of people don’t realize this, but by default a bulk insert will only fail after 10 errors. Why not 0? I’ve never understood that. If you don’t change the default and then bulk insert 20 rows of data from a file, it will only fail if over half of the rows cause an error.

Keep track of error incidence and what that means for your data. The default of 10 errors does seem rather strange.
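
To make the behavior concrete, here is a hedged sketch of the approach Tom describes; the table and file names are hypothetical:

-- MAXERRORS = 1 tolerates exactly one bad row (the trailing row-count line);
-- any additional error still fails the import. The default, if omitted, is 10.
BULK INSERT dbo.ImportTarget  -- hypothetical table
FROM 'C:\Data\input.txt'      -- hypothetical file whose last line is a row count
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n',
    MAXERRORS = 1
);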


CHECKSUM And BINARY_CHECKSUM Collisions

Jovan Popovic shows us how fragile CHECKSUM and BINARY_CHECKSUM are:

You can find patterns of inputs that return the same values. This is similar to the modulo operator, which returns the same remainder if you find the right pattern of numbers.

As an example, if you run the following queries, you will find that these functions return 0 for all strings containing only the letter ‘a’ that have lengths of 32, 64, 96, or any n * 32 characters:
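
The excerpt elides those queries, but a sketch along the same lines shows the pattern:

-- Per the post, both functions return 0 whenever 'a' repeats a multiple of 32 times.
SELECT CHECKSUM(REPLICATE('a', 32))        AS checksum_32,
       CHECKSUM(REPLICATE('a', 64))        AS checksum_64,
       BINARY_CHECKSUM(REPLICATE('a', 96)) AS binary_checksum_96;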

This isn’t the only place where you can get data collisions with these two functions; because they return ints, they’re limited to 32 bits. Jeff Preshing has a great chart on hash collision probabilities, showing that with a 32-bit hash, once you have 77,163 separate values, there’s a 50% chance two will collide.

Use HASHBYTES instead. The likelihood of MD5 collisions is quite low (with its 128-bit output, you would need roughly 2.6 x 10^18 records before a 1% chance of collision), SHA1 even lower (1.71 x 10^23 records before a 1% chance of collision), and SHA2 256 or 512 ridiculously low.
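
For comparison, a minimal HASHBYTES call (SHA2_256 requires SQL Server 2012 or later) distinguishes the inputs that CHECKSUM collapses:

-- Distinct 256-bit hashes for the strings that CHECKSUM maps to the same value.
SELECT HASHBYTES('SHA2_256', REPLICATE('a', 32)) AS hash_32,
       HASHBYTES('SHA2_256', REPLICATE('a', 64)) AS hash_64;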
