Press "Enter" to skip to content

Curated SQL Posts

SQLCover Updated

Ed Elliott has an update to SQLCover, his code coverage tool:

This includes a few minor fixes but also support for SQL Azure, so if you run your tests in a v12 database or higher, you can now get an idea of code coverage from that.

If you are interested in using this but don’t know where to start, there is a PowerShell script in the download (https://the.agilesql.club/SQLCover/download.php); you will also want reportgenerator (https://github.com/danielpalme/ReportGenerator/releases/tag/v2.4.5.0) to turn the raw coverage output into a readable report.

Check out the tool and add a few database tests…or any database tests…
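If “any database tests” sounds daunting, here is a minimal sketch of a tSQLt test, since tSQLt is the unit testing framework SQLCover is most commonly paired with.  The function under test, dbo.GetOrderTotal, is hypothetical:

    -- Minimal tSQLt test sketch; dbo.GetOrderTotal is a hypothetical function under test
    EXEC tSQLt.NewTestClass 'OrderTests';
    GO
    CREATE PROCEDURE OrderTests.[test GetOrderTotal returns 0 for an unknown order]
    AS
    BEGIN
        DECLARE @actual money = dbo.GetOrderTotal(-1);  -- hypothetical scalar function
        EXEC tSQLt.AssertEquals @Expected = 0, @Actual = @actual;
    END;
    GO
    EXEC tSQLt.Run 'OrderTests';

Run tests like this through the PowerShell script mentioned above and the coverage numbers come out the other side.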


More RAM In Standard Edition Requested

Erik Darling wants Standard Edition to support more RAM:

Microsoft needs to make money. I get it. There’s no such thing as a free etc. But do they really need to make Enterprise licensing money off of people who will never use a single Enterprise feature? Should a small shop with a lot of data really have to make a $5000 jump per core just to cache another 128-256GB of data? That seems unreasonable to me. RAM is cheap. Licensing is not.

I wouldn’t suggest à la carte pricing, because licensing is already complicated enough. What could make sense is offering higher memory limits to shops with Software Assurance. Say up to 512GB on Standard Edition. That way, Microsoft can still manage to keep the lights on, and smaller shops that don’t need all the pizzazz and razzmatazz of Enterprise Edition can still hope to cache a reasonable amount of their data.

It’s an interesting argument.  I’m always sympathetic to having more features in Standard Edition (and I’m counting stretched limits as a feature here), as not every company can afford Enterprise Edition.
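For a sense of scale, here is a minimal sketch, using standard DMVs, of checking how much data you currently have cached; compare the result against your edition’s documented cap (128GB for SQL Server 2014 Standard Edition, for instance).  Note that scanning sys.dm_os_buffer_descriptors can take a while on servers with a lot of memory:

    -- How much of the buffer pool is in use right now, and on which edition?
    SELECT SERVERPROPERTY('Edition') AS edition,
           COUNT_BIG(*) * 8 / 1024 / 1024 AS buffer_pool_gb  -- 8KB pages -> GB
    FROM sys.dm_os_buffer_descriptors;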


Script Those Indexes

Kendra Little provides a T-SQL script to script out all indexes on a database:

Sometimes you need to script out all the indexes in a database.

Maybe you’re concerned something has changed since they were last checked in.

Or maybe the indexes aren’t checked into source control, and you’re working on fixing that. (Important!)

Either way, sometimes you need to do it, and it’s not fun through the GUI. I needed to write some fresh demo code for this recently, and I needed it to give the details for partitioned tables using data compression, and I thought I’d share.

The fact that the built-in Generate Scripts wizard does not include compression settings is annoying, but Kendra’s script covers them.  For bonus points, use PowerShell to regenerate the scripts automatically as indexes change and check them into your source control system of choice.
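This isn’t Kendra’s script, but as a minimal sketch of where those missing compression details live, you can join sys.indexes to sys.partitions and read data_compression_desc per partition:

    -- Compression setting per index and partition; Generate Scripts omits this
    SELECT s.name AS schema_name,
           t.name AS table_name,
           i.name AS index_name,
           i.type_desc,
           p.partition_number,
           p.data_compression_desc
    FROM sys.indexes AS i
        INNER JOIN sys.tables AS t
            ON i.object_id = t.object_id
        INNER JOIN sys.schemas AS s
            ON t.schema_id = s.schema_id
        INNER JOIN sys.partitions AS p
            ON p.object_id = i.object_id
           AND p.index_id = i.index_id
    WHERE i.type > 0  -- exclude heaps
    ORDER BY s.name, t.name, i.index_id, p.partition_number;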


Think Set-Based

Paul Randal explains why you should think in a set-based manner when writing T-SQL queries:

This explanation involves a similar problem, but one not involving SQL Server. Imagine you need to acquire twelve 4′ x 8′ plywood sheets from your local home improvement store.

You could drive to and from the store twelve times, and each time you need to go into the store, purchase the sheet, and wait for a staff member to become available to load the sheet into your pickup truck, then drive home and unload the sheet.

Or you could drive to the store once and purchase all twelve sheets in one go, with maybe four staff members making three trips each out to your pickup, carrying one sheet each time. Or even just one staff member making twelve trips out to your pickup.

Which method is more efficient? Multiple trips to the store or one trip to the store, no matter how many staff members are available to carry the sheets out?

On the SQL Server side, the APPLY operator is a method for bridging set-based operations with procedural thought patterns.  It’s not the perfect answer for everything, but there are some fantastic use cases (like simplifying calculations via chained APPLY operators).
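As a minimal sketch of that chained-APPLY pattern (the table and columns are hypothetical), each CROSS APPLY names an intermediate calculation that later steps can reference instead of repeating the expression:

    -- Chained CROSS APPLY: build calculations in steps rather than nesting them
    -- dbo.OrderLines, Quantity, and UnitPrice are hypothetical
    SELECT ol.OrderID,
           calc1.LineTotal,
           calc2.TotalWithTax
    FROM dbo.OrderLines AS ol
        CROSS APPLY (SELECT ol.Quantity * ol.UnitPrice AS LineTotal) AS calc1
        CROSS APPLY (SELECT calc1.LineTotal * 1.08 AS TotalWithTax) AS calc2;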


New R And RTVS

R 3.3.0 is now available:

As this is a major release, you’ll need to re-install any packages you were using (and perhaps wait a little while until package authors make any compatibility fixes needed for version 3.3.0). If you’re on the Windows platform, Tal Galili’s installr package automates the process for you. If you are using the checkpoint package (on any platform) you can simply increment the checkpoint date to anytime after May 2, 2016.

(For Microsoft R Open users, the next version to be released will be MRO 3.2.5, and MRO 3.3.0 will follow soon thereafter.)

For more information about R 3.3.0, including the detailed list of changes and bug fixes, follow the link to the announcement from the R Core Group below.

David Smith also notes that R Tools for Visual Studio 0.3 has been released:

R Tools for Visual Studio, the open-source extension to Visual Studio that provides an IDE for the R language, has been upgraded to include several new features.

The latest update, RTVS 0.3, now includes:

  • An R package manager, allowing you to review, install, and uninstall packages using a convenient user interface.

  • The Variable Explorer now allows you to open data-frames for viewing in an Excel workbook.

  • New toolbar buttons to run selected code, source the current script, import data from a URL or file, and start/stop a Shiny app.

This is a great time to get interested in R.  If you’re familiar with Visual Studio, Microsoft is making great strides toward integrating R into it nicely.


HBase’s Failure To Catch On

Matt Asay has an interesting article on how HBase started as a big thing but has fizzled since:

Ex-Googler (and current Amazon Web Services employee) Tim Bray argues “there is a real cost to this continuous widening of the base of knowledge a developer has to have to remain relevant.” RedMonk analyst Stephen O’Grady takes this a step further: “It could be that we’re approaching the too-much-of-a-good-thing stage. In which case, the logical outcome will be a gradual slowing of fragmentation followed by gradual consolidation.”

In other words, niche data stores that do one thing really well are giving way to more generally applicable databases that can serve a broader range of enterprise needs.

The second part of Keep’s sentence above, however, spells out another reason HBase is struggling: It’s really hard to use.

I have a statement which is 90% serious and 10% joke:  a database product is truly mature once it supports SQL.  So what’s the answer for HBase?  The current attempt at an answer is Phoenix, which is…SQL for HBase.


Traces Can Be Expensive

Matt Slocum warns against having too many server-side traces running:

Before the client took the last-ditch step of just restarting SQL Server, I checked traces.  There were 9 user traces collecting a ton of trace events.

I manually killed them all and suddenly performance returned to normal.  Phew!  Crisis averted.

As a follow up from that issue, I created a script that stops and then deletes all user traces.  We are discussing setting up a job to run this script periodically to keep traces at bay and I am educating the group on proper trace discipline (and Extended Events).

Even Extended Events aren’t free, so the same advice applies.
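Matt’s script isn’t reproduced in the excerpt, but a minimal sketch of stopping and deleting every user trace would look something like this (sys.traces flags the default trace so you can leave it alone):

    -- Stop and delete all user-defined server-side traces
    DECLARE @traceid int;

    DECLARE trace_cur CURSOR LOCAL FAST_FORWARD FOR
        SELECT id
        FROM sys.traces
        WHERE is_default = 0;  -- skip the default trace

    OPEN trace_cur;
    FETCH NEXT FROM trace_cur INTO @traceid;

    WHILE @@FETCH_STATUS = 0
    BEGIN
        EXEC sp_trace_setstatus @traceid, 0;  -- 0 = stop the trace
        EXEC sp_trace_setstatus @traceid, 2;  -- 2 = close it and delete its definition
        FETCH NEXT FROM trace_cur INTO @traceid;
    END;

    CLOSE trace_cur;
    DEALLOCATE trace_cur;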


Scraping And Importing Web Data

Jon Morisi shows how to scrape a website and load the result into a SQL Server table:

Next, save this as a CSV file.

Now jump into SQL Server Management Studio, drill down to your database (you may want to create a new, empty database for your snarfing), right-click, and start the Import and Export Wizard via “Import Data”.

This is the one-off solution.  If you need to do it regularly, read up on creating scrapers and use Integration Services to load.
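For the repeatable version, a minimal sketch that skips the wizard entirely is BULK INSERT into a staging table; the table name and file path here are placeholders:

    -- Load the scraped CSV without the Import/Export wizard
    BULK INSERT dbo.ScrapedData       -- hypothetical staging table
    FROM 'C:\scrapes\results.csv'     -- hypothetical path to the saved CSV
    WITH (
        FIRSTROW = 2,                 -- skip the header row
        FIELDTERMINATOR = ',',
        ROWTERMINATOR = '\n'
    );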


Beware Manually Edited SQLPS Files

Cody Konior makes mention of an issue if you’ve manually edited your SqlPSPostscript.ps1 file:

The SQLPS module has been slow to load for years now and has finally been fixed in the April 2016 release. But most of us couldn’t wait a few years and edited the SqlPSPostscript.ps1 file that sits in the module directory.

If you’ve done this on one of the SSMS previews (not SQL 2014, as it’s in a different directory) and then upgraded to the April preview, you’ll be missing something, two things actually: aliases for the renamed Encode-SqlName and Decode-SqlName, which won’t get put into your modified SqlPSPostScript.ps1 file. So if you use these, your older scripts may break.

There’s an easy fix, though, so all’s well.


Cool Storage

James Serra talks about “cool storage” in Azure Blob Storage:

The access tiers available for blob storage accounts are “hot” and “cool”.  In general, hot data is classified as data that is accessed very frequently and needs to be highly durable and available.  On the other hand, cool data is data that is infrequently accessed and long-lived.  Cool data can tolerate a slightly lower availability, but still requires high durability and similar time to access and throughput characteristics as hot data.  For cool data, a slightly lower availability SLA and higher access costs are acceptable tradeoffs for much lower storage costs.  Azure Blob storage now addresses this need for differentiated storage tiers for data with different access patterns and pricing models.  So you can now choose between Cool and Hot access tiers to store your less frequently accessed cool data at a lower storage cost, and store more frequently accessed hot data at a lower access cost.  The Access Tier attribute of hot or cool is set at the account level and applies to all objects in that account.  So if you want to have both a hot access tier and a cool access tier, you will need two accounts.  If there is a change in the usage pattern of your data, you can also switch between these access tiers at any time.

It looks like there shouldn’t be a performance difference between the two; it’s more of a cost difference, meaning you might be able to save money by choosing your tier wisely.
