
April 2016

Echoing Variable Values

Bill Fellows shows how to send informational messages in SSIS and gives an example in Biml:

To aid in debugging, it’s helpful to have a “flight recorder” running to show you the state of variables. When I was first learning to program, the debugger I used was a lot of PRINT statements. “Verify your inputs before you assume anything” is a distillation of my experience debugging.

While some favor using MessageBox, I hate finding the popup window, closing it and then promptly forgetting what value was displayed. In SSIS, I favor raising events, FireInformation specifically, to emit the value of variables. This approach allows me to see the values in both the Progress/Execution Results tab as well as the Output window.

There’s value in putting in code like this as part of your generic processing.  Flip the debug bit to true whenever you need this detailed information.  You can also call the method multiple times, such as before and after a block where you expect the value to change.
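
Bill’s example is in Biml and SSIS script code, but the same “flight recorder” idea carries over to plain T-SQL. As a rough sketch of the analogous pattern (my illustration, not Bill’s code), a debug bit plus RAISERROR at severity 0 emits informational messages without interrupting the batch:

-- Emit variable state as informational messages, gated behind a debug flag.
DECLARE @Debug bit = 1;           -- flip to 0 to silence the flight recorder
DECLARE @RowsProcessed int = 42;  -- example variable worth recording

IF @Debug = 1
    -- Severity 0 plus WITH NOWAIT sends the message immediately instead of
    -- waiting for the output buffer to flush the way PRINT can.
    RAISERROR('RowsProcessed before update: %d', 0, 1, @RowsProcessed) WITH NOWAIT;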

Comments closed

Power BI KPIs And Colorblindness

Meagan Longoria points out that the choice of red and green for KPIs isn’t ideal:

And which colors do we love to use with KPIs? Red and green, of course! Color is a very powerful tool in data viz. We use it to indicate meaning and to draw attention to something important. KPI boxes are used to display key metrics in an efficient manner. These key metrics are usually rather important, and our users need to be able to see their status at a glance.

I quite like the design of the KPI boxes in Power BI, but for some reason they were created without the ability to adjust the color associated with the state (good/bad). Shown below, they use the common red/green color scheme.

It sounds like Microsoft is already working on fixing the issue.

Comments closed

Where Was I?

Shane O’Neill shows how to see the queries you’ve run in Management Studio:

Suddenly you’re not sure if you really ran the SELECT statement at all.
Maybe you ran the insert statement and 2089 rows were marked to never be seen again!
Or maybe that other table only had 2089 rows in it and you’ve now deleted every one!!

Now this blog post is not going to deal with fail-safes for preventing those scenarios because 1) you should already know how to do that, and 2) if you don’t know, then maybe back away until you research it… It’s only going to deal with a nice little way you can figure out what it was that you just ran.

I don’t think this will go into my everyday processes, but it’s handy to have when you absolutely need to make sure you’re running the correct line in a script.
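
If you just want a taste of the technique, one common way to see recently executed statements is to read them back out of the plan cache. This is a sketch of the general approach, not necessarily Shane’s exact script:

-- List the most recently executed statements still in the plan cache.
SELECT TOP (20)
    qs.last_execution_time,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.last_execution_time DESC;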

Comments closed

Messing With Statistics

Erik Darling shows how to fake stats:

One thing I’ve found is that the inflated counts don’t seem to change anything for Identities, or Primary Keys. You’ll always get very reasonable plans and estimates regardless of how high you set row and page counts for those. Regular old clustered indexes are fair game.

Some really interesting things can start to happen to execution plans when SQL thinks there’s this many rows in a table. The first is that SQL will use a rare (in my experience) plan choice: Index Intersection. You can think of this like a Key Lookup but with two nonclustered indexes rather than from one nonclustered index to the clustered index.

This is very useful when you don’t have many rows in dev, can’t put many rows in dev, and can’t restore a stats-only database from prod.
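
The mechanism behind the trick is the undocumented ROWCOUNT and PAGECOUNT options of UPDATE STATISTICS. A minimal sketch, with dbo.Votes as a placeholder table name (dev/test only):

-- Lie to the optimizer about how big the table is.
UPDATE STATISTICS dbo.Votes WITH ROWCOUNT = 1000000000, PAGECOUNT = 10000000;

-- Put the real row and page counts back when you're done experimenting.
DBCC UPDATEUSAGE (0, 'dbo.Votes') WITH COUNT_ROWS;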

Comments closed

More On String Splitting

Aaron Bertrand has a follow-up post on STRING_SPLIT():

So here, the JSON and STRING_SPLIT methods took about 10 seconds each, while the Numbers table, CLR, and XML approaches took less than a second. Perplexed, I investigated the waits, and sure enough, the four methods on the left incurred significant LATCH_EX waits (about 25 seconds) not seen in the other three, and there were no other significant waits to speak of.

And since the latch waits were greater than total duration, it gave me a clue that this had to do with parallelism (this particular machine has 4 cores). So I generated test code again, changing just one line to see what would happen without parallelism:

There’s a lot going on in that post, so I recommend checking it out.
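
If you haven’t tried the function yet, here’s the basic shape of STRING_SPLIT (SQL Server 2016+), plus the kind of one-line change Aaron describes, using a MAXDOP hint to take parallelism out of the picture:

DECLARE @list nvarchar(max) = N'alpha,beta,gamma,delta';

SELECT value
FROM STRING_SPLIT(@list, N',')
OPTION (MAXDOP 1);  -- force a serial plan for this statement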

Comments closed

IT Pro Cloud Essentials Program

Grant Fritchey talks about a new Microsoft program:

You need to start working on adding Azure knowledge to your skill set. If you have access to an MSDN license, getting into Azure is easy because of the credits available. However, not everyone works for a company that provides MSDN or has purchased a license. In that case, getting into Azure, just for testing and learning could be expensive (I frequently “spend” $150/month with my MSDN credits). However, Microsoft is very serious about getting everyone moved into this space. They’ve launched a new free program called IT Pro Cloud Essentials. Not only does it give you Azure credit, but you also get access to O365, another set of skills and understanding you need to acquire.

Also check out their Visual Studio Dev Essentials program.  Its Azure credit is only $25 a month, but it offers you SQL Server 2014 Developer Edition (and will offer 2016 as well).

Comments closed

Finding Nested Stored Procedures

Michael J. Swart has a script to find nested stored procedures:

AdventureWorks seems just fine to me. Only four instances of procedures calling procedures. I looked at the database I work with most. Hundreds of procedures (representing 15% of the procedures) call other procedures. On the other end of the spectrum is Stack Overflow. I understand that they don’t use stored procedures at all.

Check out the comments for more notes.
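
For a rough idea of how such a script can work (a sketch, not Michael’s exact code), sys.sql_expression_dependencies will show procedures that reference other procedures:

-- Find stored procedures that call other stored procedures.
SELECT
    OBJECT_SCHEMA_NAME(d.referencing_id) + N'.' + OBJECT_NAME(d.referencing_id) AS calling_procedure,
    OBJECT_SCHEMA_NAME(d.referenced_id) + N'.' + OBJECT_NAME(d.referenced_id) AS called_procedure
FROM sys.sql_expression_dependencies AS d
WHERE OBJECTPROPERTY(d.referencing_id, 'IsProcedure') = 1
  AND OBJECTPROPERTY(d.referenced_id, 'IsProcedure') = 1;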

Comments closed

Truncating Log Files

Sean McCown has a script to truncate log files:

I want to show you a great piece of code to truncate and shrink all your log files. The biggest question you always ask is why should you shrink your log files? It’s been drilled into everyone’s head that shrinking log files is bad because you can cause too many VLFs, and of course there’s the zeroing out that happens when they grow.
OK, so let’s answer that question. There are a couple reasons you’d want to shrink all the files to a small size.

Regularly shrinking log files in production isn’t a particularly great thing, but as Sean points out, there are valid reasons for doing this.
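
The core mechanics for a single database look like this (a sketch with placeholder names; Sean’s script handles discovering and looping over every log file for you):

USE YourDatabase;

-- Shrink the transaction log's physical file down to roughly 100 MB.
-- N'YourDatabase_log' is the logical file name from sys.database_files.
DBCC SHRINKFILE (N'YourDatabase_log', 100);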

Comments closed

ENCRYPTION_SCAN Locks

Suresh Kandoth explains the ENCRYPTION_SCAN lock in a non-TDE scenario:

There are three types of operations that acquire a lock with the resource_type of DATABASE and resource_subtype of ENCRYPTION_SCAN:

– Encryption scan performed during TDE enable/disable

– Bulk allocations that happen as part of bcp/bulk insert/select-into/index operations, etc.

– Sort spills that are done as part of sort operators in the query plan

These locks are taken to serialize operations like bulk allocations and sorts with the encryption scan.

Read the whole thing.  It looks like this isn’t by itself a significant issue, but it is interesting to see this lock type against a database without TDE.
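
If you want to watch for these yourself, the lock shows up in sys.dm_tran_locks; this sketch filters down to the subtype Suresh describes:

-- Look for database-level locks with the ENCRYPTION_SCAN subtype.
SELECT resource_database_id, request_mode, request_status, request_session_id
FROM sys.dm_tran_locks
WHERE resource_type = 'DATABASE'
  AND resource_subtype = 'ENCRYPTION_SCAN';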

Comments closed

Field And Record

Michael Swart introduces Field & Record magazine:

I’m definitely a descriptivist. Language is always changing and if a word or phrase gets adopted widely enough, it is no longer “wrong” (whatever that means).

So when I hear “Field” and “Record” they’re acceptable to me. But if I’m explaining something, I don’t want to distract from the thing I’m saying. And from that point of view, I try to use “Row” and “Column” because I don’t know anyone who blinks at those terms.

Entity and Attribute or bust.  That’s my philosophy.

Comments closed