Press "Enter" to skip to content

Curated SQL Posts

Disk Usage By Table Report

Kenneth Fisher shows off my favorite built-in SSMS report:

Every now and again you need to know how big a table is. Or several tables. Or all of the tables. Number of rows is frequently handy when you’re going to create a new index or otherwise modify the table. The amount of space used by the indexes can be helpful in deciding how much space you need to do a re-index. The tables with the most unused space are nice to know about if you have a problem with ever-growing heaps.

In the past my go-to solution here was sp_spaceused. It’s a really handy procedure.

USE AdventureWorks2014;
GO
EXEC sp_spaceused 'Person.Person';
GO

Great information but it has a few problems. You can only run it for one table at a time (sp_msforeachtable is a workaround, if undocumented), the file sizes aren’t consistent (sometimes KB, sometimes MB or even GB), and it only returns the name of the object but not the schema. So if there is the same table name under multiple schemas it can get tricky.

Read on for how to access and use this report.
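
If you want the same numbers for every table at once, with the schema included and the sizes in consistent units, something like the following DMV query works as a rough sketch (my query, not the one behind the SSMS report):

-- Rough sketch (mine, not the report's query): row counts and space per table,
-- schema-qualified and always reported in KB.
SELECT s.name AS schema_name,
       t.name AS table_name,
       SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS row_count,
       SUM(ps.reserved_page_count) * 8 AS reserved_kb,
       SUM(ps.used_page_count) * 8 AS used_kb
FROM sys.dm_db_partition_stats AS ps
JOIN sys.tables AS t ON t.object_id = ps.object_id
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
GROUP BY s.name, t.name
ORDER BY reserved_kb DESC;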


Assigning Tags In Azure

Melissa Coates shows how to assign tags to resources in Azure using Powershell:

You can assign tags for resource groups, as well as individual resources which support Azure Resource Manager. The individual resources do not automatically inherit tags from the resource group parent. A maximum of 15 key/value pairs can be assigned (though you could store concatenated values or embedded JSON in a single tag value as a workaround). You may want to assign tags at just the resource group level and use custom queries to “inherit” at the resource level. Alternatively, you may want to assign tags to the individual resources directly, particularly if you want to see them clearly on the standard “download usage” billing report.

Since the key/value pairs are just free-form text, watch out for uniformity issues. To improve consistency, you can utilize policies to require tags and/or apply defaults if you’d like (for example, you might want to enforce a “Created By” tag). Tags can be set in the ARM template when you initially deploy a resource (which is best so that no billing occurs without proper tagging), or afterwards to existing resources via the portal, PowerShell, or CLI.

Melissa also shows how to query those tags using Powershell.


Understanding Azure Billing

Denny Cherry explains some of the oddities behind Azure billing:

Some things stand out to me when looking at this bill. First is the service that is being billed. It doesn’t say how many DWs we’re being billed for (more on that later).

Something else odd is that the SQL DW shows being billed for 1220 hours. Now in theory the maximum number of hours any single service can be billed for in a single month is 744, as that’s 24 (hours) * 31 (days), but here we’re being billed for 1220 hours. Now we know for a fact that this SQL DW is powered off at times, so how are we using this resource for more than 31 days in a month?

The last oddity is the rate. $1.46 an hour. This points us to the reason for all of these strange numbers. That’s the rate to run a SQL DW at DW100 for one hour.
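
A hedged guess at the arithmetic (mine, not something the bill spells out): if the hours column is normalized to DW100 units, a warehouse running at a larger scale accrues multiple billed hours per clock hour. If this one ran at DW400, for example, that would be 4 billed hours per clock hour, so 1220 billed hours works out to 1220 / 4 = 305 actual hours, comfortably under the 744-hour ceiling, and 1220 × $1.46 ≈ $1,781 for the month.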

Click through for Denny’s explanation.


When CHECKDB Doesn’t Catch Corruption

Paul Randal explains one condition when DBCC CHECKDB misses corruption:

An interesting situation was discussed online recently which prompted me to write this post. A fellow MVP was seeing periodic corruption messages in his error log, but DBCC CHECKDB on all databases didn’t find any corruptions. A subsequent restart of the instance caused the problem to go away.

My diagnosis? Memory corruption. Something had corrupted a page in memory – maybe it was bad memory chips or a memory scribbler (something that writes into SQL Server’s buffer pool, like a poorly-written extended stored procedure), or maybe a SQL Server bug. Whatever it was, restarting the instance wiped the buffer pool clean, removing the corrupt page.

So why didn’t DBCC CHECKDB encounter the corrupt page?

Read on for the rest of the story.
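
In the meantime, a small hedged sketch (my addition, not Paul’s remediation): it never hurts to confirm that CHECKSUM page verification is enabled and to see whether the instance has already logged any suspect pages.

-- Page verification setting per database (CHECKSUM catches a corrupt page
-- when it is read back from disk, not while it only lives in memory).
SELECT name, page_verify_option_desc
FROM sys.databases;

-- Any pages SQL Server has already flagged as suspect.
SELECT database_id, file_id, page_id, event_type, error_count, last_update_date
FROM msdb.dbo.suspect_pages;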


Dealing With Newlines In Reports

Shane O’Neill shows how to deal with newlines in data:

I have mentioned before that we can use CHAR(10) and CHAR(13) for new lines and carriage returns in SQL Server, so I’ll leave it as an exercise for the reader to create a table with these “troublesome” bits of information in it (plus if you came here from Google, I assume you already have a table with them in it).

For me, I’ve just created a single table dbo.NewLineNotes that has a single entry with a new line in it.

Read on for more.
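
In the meantime, here is a minimal sketch of the kind of setup Shane describes (only dbo.NewLineNotes comes from the post; the columns are my own guesses), along with one common way to flatten the value for a report:

-- Minimal sketch; only dbo.NewLineNotes comes from the post, the columns are mine.
CREATE TABLE dbo.NewLineNotes
(
    NoteId INT IDENTITY(1, 1) PRIMARY KEY,
    Note   VARCHAR(1000) NOT NULL
);

INSERT INTO dbo.NewLineNotes (Note)
VALUES ('Line one' + CHAR(13) + CHAR(10) + 'Line two');

-- One common way to flatten the value for single-line report output
-- (not necessarily Shane's approach): strip the CR and turn the LF into a space.
SELECT NoteId,
       REPLACE(REPLACE(Note, CHAR(13), ''), CHAR(10), ' ') AS NoteForReport
FROM dbo.NewLineNotes;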


2016 Best Practices By Default

Wayne Sheffield shows how SQL Server 2016 helps you follow best practices:

tempdb

Ahh, tempdb. In SQL Server, that database does so much… it’s where temporary tables and table variables are stored. It’s where row version activity is stored. It’s where running queries will spill out temporary workspace needed for the query. And on, and on, and on. On busy systems, having this database running optimally is of vital importance. And there are several best practices that need to be observed here.

Read on for several tips.
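
One of those practices is multiple, equally sized tempdb data files, which SQL Server 2016’s setup now configures for you by default. A quick sanity check of what you ended up with (my query, not Wayne’s):

-- Quick check (mine, not from Wayne's post) of tempdb's file count, sizes,
-- and growth settings.
SELECT name,
       type_desc,
       size * 8 / 1024 AS size_mb,
       growth,               -- a percentage when is_percent_growth = 1, otherwise 8 KB pages
       is_percent_growth
FROM tempdb.sys.database_files;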


Powershell Pattern Matching

Klaas Vandenberghe shows some of the pattern matching and regular expression functionality within Powershell:

This is like grep if you ever encountered that. The Select-String cmdlet finds a -pattern in text.
It has an -allmatches switch to … guess what 🙂

Get-ChildItem D:\Myscripts -File -Recurse *.sql | Select-String -pattern "drop\s+(database|table|login)"

will get us all our sql scripts in which we drop a database, table or login.
The returned MatchInfo object holds the matching parts of the script text, and also the name of the file and the line number where the match was found.

Get-ChildItem | Select-String is my most frequently used Powershell pipeline.


Neural Networks From Scratch

Ilia Karmanov explains neural nets and shows how to build one in R:

Hence, my motivation for this post is two-fold:

  1. Understanding (by writing from scratch) the leaky abstractions behind neural-networks dramatically shifted my focus to elements whose importance I initially overlooked. If my model is not learning I have a better idea of what to address rather than blindly wasting time switching optimisers (or even frameworks).
  2. A deep-neural-network (DNN), once taken apart into lego blocks, is no longer a black-box that is inaccessible to other disciplines outside of AI. It’s a combination of many topics that are very familiar to most people with a basic knowledge of statistics. I believe they need to cover very little (just the glue that holds the blocks together) to get an insight into a whole new realm.

Starting from a linear regression we will work through the maths and the code all the way to a deep-neural-network (DNN) in the accompanying R-notebooks. Hopefully to show that very little is actually new information.

This is pretty detailed.  Karmanov mentions Andrej Karpathy, whose Hacker’s guide to Neural Networks is also a must-read on the topic.
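
To give a flavor of the first lego block (my notation, not Karmanov’s): linear regression fit by gradient descent already contains the whole training loop in miniature, namely a prediction, a loss, a gradient, and an update.

\hat{y} = Xw + b, \qquad L(w, b) = \frac{1}{2n}\,\lVert \hat{y} - y \rVert^2

w \leftarrow w - \eta \cdot \frac{1}{n} X^{\top}(\hat{y} - y), \qquad b \leftarrow b - \eta \cdot \frac{1}{n}\sum_{i}(\hat{y}_i - y_i)

Swap in a non-linearity and stack a few of these layers and you have the DNN the notebooks build up to.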


Schema-Only Optimized Tables Can Still Roll Back

Chris Adkin investigates whether schema-only memory-optimized tables are logged and whether they support transactions the way other tables do:

The statement “There is zero logging when DURABILITY=SCHEMA_ONLY” is not factually correct; it’s more like a minimally logged operation. What is surprising is that logging as advertised for the in-memory engine should result in far fewer log records than the equivalent workload for the legacy engine; clearly this is not the case in this particular example, and it is something I need to dig into somewhat deeper. Also note that the version of SQL Server being used is SQL Server 2016 SP1 CU3, which should be stable. One final point: in order to make sure that fn_dblog and fn_dblog_xtp produced clean results for me each time, I took the quick and dirty option of re-creating my test database each time.

This post definitely ranks in the “Microsoft did this right” category.
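
If you want to poke at this yourself, here is a rough sketch (mine, not Chris’s exact repro) of a schema-only memory-optimized table plus the two log readers he mentions; it assumes a database that already has a MEMORY_OPTIMIZED_DATA filegroup:

-- Rough sketch (not Chris's exact repro); assumes the database already has
-- a MEMORY_OPTIMIZED_DATA filegroup.
CREATE TABLE dbo.SchemaOnlyDemo
(
    Id      INT IDENTITY(1, 1) NOT NULL PRIMARY KEY NONCLUSTERED,
    Payload VARCHAR(100) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);

INSERT INTO dbo.SchemaOnlyDemo (Payload)
VALUES ('row 1'), ('row 2');

-- Compare what the disk-based and in-memory (both undocumented) log readers report.
SELECT COUNT(*) AS dblog_records FROM fn_dblog(NULL, NULL);
SELECT COUNT(*) AS dblog_xtp_records FROM fn_dblog_xtp(NULL, NULL);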


Running Totals With Window Functions

Bert Wagner shows the best method to calculate a running total in SQL Server 2012 or later:

Before SQL Server 2012, the solution to generating a running total involved cursors, CTEs, nested subqueries, or cross applies. This StackOverflow thread has a variety of solutions if you need to solve this problem in an older version of SQL Server.

However, SQL Server 2012’s introduction of window functions makes creating a running total incredibly easy.

Enhanced window functions were one of 2012’s killer features on the T-SQL developer side.  Bert’s post doesn’t cover window ranges and sizes, as the defaults work for him, but Steve Stedman has a good post on the topic if you want more details.
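
For a taste of the pattern (my sketch, not Bert’s exact example; dbo.Orders and its columns are illustrative):

-- Minimal running-total sketch. The explicit ROWS frame avoids the default
-- RANGE frame, which behaves differently with ties and is slower.
SELECT OrderDate,
       OrderTotal,
       SUM(OrderTotal) OVER (ORDER BY OrderDate
                             ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS RunningTotal
FROM dbo.Orders;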
