Press "Enter" to skip to content

Day: December 23, 2015

Finding Login Permissions

Andy Galbraith has a new permissions script:

I recently was tasked with this ticket:

Please add new login Domain\Bob to server MyServer.  Grant the login the same permissions as Domain\Mary.

On the face of it, this seems relatively straightforward, right?  It is the kind of request that we all get from time to time, whether as an ad-hoc task or as part of a larger project, such as a migration.

The catch of course is that it isn’t that easy – how do you know what permissions Mary has?

Andy’s script looks good.  For bonus points, compare it to fn_my_permissions.
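To see what that comparison looks like in practice, here is a minimal sketch using fn_my_permissions together with EXECUTE AS; the login name is hypothetical and mirrors the ticket above, and you need permission to impersonate the login:

EXECUTE AS LOGIN = 'Domain\Mary';
-- List the server-level permissions the impersonated login effectively holds
SELECT * FROM sys.fn_my_permissions(NULL, 'SERVER');
REVERT;

Run the same pattern in each database (with 'DATABASE' as the securable class) to see the database-level permissions as well.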

Tuning SQL Server Backups

Derik Hammer has a post on tuning SQL Server backups:

Finally, do not forget about your memory. To backup or restore a database you have to load data pages into memory. We will talk more about memory below and how the internal buffer pool comes into play and can cause operating system paging or out of memory conditions.

Derik shows the various knobs and switches available, and I want to emphasize one thing:  optimizing backup statements involves testing different scenarios.  You can make good guesses as to the appropriate MAXTRANSFERSIZE or BUFFERCOUNT, but even then, test different combinations and find what works best for each database.
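As a starting point for that kind of testing, here is a hedged example of a backup statement with those knobs exposed; the file paths and the specific values are hypothetical, not recommendations:

BACKUP DATABASE [MyDatabase]
TO DISK = N'X:\Backup\MyDatabase_1.bak',
   DISK = N'X:\Backup\MyDatabase_2.bak'   -- striping across two files
WITH COMPRESSION,
     MAXTRANSFERSIZE = 4194304,           -- 4 MB per transfer
     BUFFERCOUNT = 50,
     STATS = 10;

Vary one setting at a time, measure elapsed time and throughput, and keep notes per database; the best combination on one server rarely carries over unchanged to another.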

FOR JSON WITHOUT_ARRAY_WRAPPER

Jovan Popovic introduces us to the WITHOUT_ARRAY_WRAPPER option:

This option enables you to remove the square brackets [ and ] that surround JSON text generated by the FOR JSON clause. I will use the following example:

SELECT 2015 as year, 12 as month, 15 as day
FOR JSON PATH, WITHOUT_ARRAY_WRAPPER

This query will return:

{ "year":2015, "month":12, "day":15 }

However, without this option, the following text would be returned:

[{ "year":2015, "month":12, "day":15 }]

Jovan also points out important changes to FOR JSON output between SQL Server 2016 CTP 3.1 and CTP 3.2.
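One handy use of the option is assigning a single JSON object to a variable without the surrounding brackets; this is a small sketch along the lines of Jovan's example:

DECLARE @json nvarchar(max) =
    (SELECT 2015 AS year, 12 AS month, 15 AS day
     FOR JSON PATH, WITHOUT_ARRAY_WRAPPER);

SELECT @json;  -- { "year":2015, "month":12, "day":15 }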

Restoring CDC-Enabled Databases

Mark Broadbent shows us how to restore databases with Change Data Capture enabled:

This automatically poses the question of how it is possible to restore a backup chain with CDC? On a database restore, in order to apply differential backups and transaction logs the NORECOVERY clause is required to prevent SQL Server from performing database recovery.

If this option is required but KEEP_CDC in conjunction with it is incompatible, surely this means point in time restores are not possible for restores that require CDC tables to be retained?

Wrong!

The answer is a bit surprising, and my guess is that most database administrators are totally unaware of this restoration quirk.
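Read Mark's post for the full explanation; as a rough sketch of the kind of restore chain in question, assuming the quirk is that KEEP_CDC only needs to appear on the final restore that brings the database online (all names and times here are hypothetical):

RESTORE DATABASE [MyDatabase]
FROM DISK = N'X:\Backup\MyDatabase_Full.bak'
WITH NORECOVERY;

RESTORE LOG [MyDatabase]
FROM DISK = N'X:\Backup\MyDatabase_Log1.trn'
WITH NORECOVERY;

-- The final restore recovers the database and keeps the CDC metadata
RESTORE LOG [MyDatabase]
FROM DISK = N'X:\Backup\MyDatabase_Log2.trn'
WITH STOPAT = N'2015-12-23T12:00:00', RECOVERY, KEEP_CDC;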

Learning R

Grant Fritchey is learning R:

Awesome. Fixed that algorithm problem, right?

Wrong.

That’s because algorithms are not the problem… the only problem. The real problem is data preparation. A lot of the examples you’ll read online are very straight forward with nice neat data sets. That’s because they were carefully groomed and prepared. Here I am looking at the wooly wild real data and I’m utterly lost in how to properly prepare this so that it’s appropriately set up as a continuous distribution (or a distribution at all). WOOF! The reason this is so hard is because I actually don’t understand the data fundamentals of the problem I’m trying to solve in exactly the way needed to solve the problem. More cogitation is necessary.

Just because you can write R code doesn’t mean you are a data scientist.  Grant has the right mindset, but this post is fair warning that R’s complexity isn’t so much in its being a DSL, but rather in the domain itself.

Type 2 SCDs With Biml

Meagan Longoria has a great post on Type 2 Slowly Changing Dimensions:

The most common mistake I see in SCD 2 packages, whether using the built-in transformation or creating your own data flow, is that people use OLEDB commands to perform updates one row at a time rather than writing updates to a staging table and performing a set-based update on all rows.  If your dimension is small, the performance from row by row updates may be acceptable, but the overhead associated with using a staging table and performing set-based update will probably be negligible. So why not keep a consistent pattern for all type 2 dimensions and require no changes if the dimension grows?

Spot on.
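For the relational side of that pattern, the set-based expiration step might look something like this hedged sketch, with hypothetical table and column names:

-- Expire the current dimension rows for any business key with a new version staged
UPDATE d
SET    d.RowIsCurrent = 0,
       d.RowEndDate   = s.RowStartDate
FROM   dbo.DimCustomer AS d
       JOIN staging.DimCustomer_Changes AS s
         ON s.CustomerBusinessKey = d.CustomerBusinessKey
WHERE  d.RowIsCurrent = 1;

The new versions then go in with a single INSERT from the same staging table, so the number of statements stays constant no matter how large the dimension grows.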
