Press "Enter" to skip to content

Author: Kevin Feasel

Azure SQL Database Security

James Serra has a number of links to Azure SQL Database security features:

Life would be so much easier if we could just trust everyone, but since we can’t, we need solid security for our databases.  Azure SQL Database has many security features to help you sleep well at night:

Most of these are exactly the same as the on-premises product—at least the SQL Server 2016 version—but it goes to show that Azure SQL Database has grown up quite a bit.


Documenting Permissions

Chris Bell has a new script to audit SQL Server permissions:

I wanted the script to do a few things. Tell me who is in an AD group that was granted rights. It is one thing to see the group name, but that doesn’t really tell me who has access. I also wanted the output to be a little more user readable, so I formatted the output some. There are other things I did too, but you can have fun reading through the code and comparing the two sources.

Knowing who’s allowed to do what is key to a successful security posture.  This script won’t show you object-level permissions, but it at least gives you an idea of role and group membership.
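
To give a sense of what such an audit script builds on, here’s a minimal T-SQL sketch (mine, not Chris’s): the catalog views expose server role membership, and xp_logininfo expands a Windows group into its individual members.  The group name below is a hypothetical placeholder.

    -- Who is in each server-level role?
    SELECT r.name AS ServerRole,
           m.name AS MemberName,
           m.type_desc AS MemberType
    FROM sys.server_role_members AS srm
    JOIN sys.server_principals AS r ON r.principal_id = srm.role_principal_id
    JOIN sys.server_principals AS m ON m.principal_id = srm.member_principal_id
    ORDER BY r.name, m.name;

    -- Expand an AD group into its members (group name is hypothetical):
    EXEC xp_logininfo @acctname = 'CONTOSO\SQLAdmins', @option = 'members';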


CSVs With Line Breaks In Power BI

Chris Webb talks about one of the banes of my existence:  flat files with “rogue” line breaks:

But what if you can’t fix the source data? I was asked this question the other week, and since I had been asked about it before and not come up with a good answer, I decided to spend some time researching the problem.

What I found was that it was relatively easy to write some M code that gave me the correct results, but very hard to write code that performed acceptably well on a large data set (I was testing on a CSV file containing almost half a million rows). Here’s the code for the function I ended up with:

It’s nice to see that Power Query & Power BI have methods to get around this sort of issue, but it sounds like even those methods are limited in value.


Database Maintenance

SQLWayne describes his maintenance routines:

The most critical thing as a SQL Server DBA is to ensure that your databases can be restored in the event of the loss of a server for whatever reason: disk crash, fire in the server room, tribble invasion, whatever.  To do this, not only do you have to back up your databases, you also have to test restores!  Create a database and restore the backups of your production DB to it.  It’s the safest way to make sure that everything works.  This test restore can be automated to run every night, but that’s outside the scope of what I want to talk about right now.

There are lots of places where problems can creep in; this is just one part of how you’ll need to monitor your systems.  This is how I’ve done things for a number of years, and thus far it has served me well.

Depending upon your instance count, average database size, maintenance windows, and so on, some of these things may or may not work, but the principle is the same:  protect the data, and automate your processes to protect that data.  This is a good article to read for ideas; from there, dig into other administrative blog posts, videos, and books to become better versed in the tools and techniques available to protect your data.
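
As a rough illustration of the backup-and-test-restore idea, here’s a minimal T-SQL sketch.  The file paths and logical file names ('Production' and 'Production_log') are placeholders; you’d substitute your own.

    -- Quick sanity check that the backup media is readable:
    RESTORE VERIFYONLY FROM DISK = N'X:\Backups\Production_Full.bak';

    -- Restore the production backup to a scratch database:
    RESTORE DATABASE RestoreTest
    FROM DISK = N'X:\Backups\Production_Full.bak'
    WITH MOVE N'Production' TO N'X:\RestoreTest\RestoreTest.mdf',
         MOVE N'Production_log' TO N'X:\RestoreTest\RestoreTest_log.ldf',
         REPLACE, STATS = 10;

    -- Confirm the restored copy is consistent:
    DBCC CHECKDB (RestoreTest) WITH NO_INFOMSGS;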


Luhn Testing In T-SQL

Phil Factor shows us the Luhn algorithm, a quick test to determine if a credit card number is potentially valid:

There are many ways of doing it in SQL (and Rosetta Code is a good place to view solutions in various other languages). I believe that Peter Larsson holds the record for the fastest calculation of the Luhn test for a sixteen-digit credit card, with this code. As it stands, it isn’t a general solution, but it can be modified for different lengths of bank card.

Phil has two interesting T-SQL functions in the code and wants to find more.
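
For reference, here is a straightforward (and deliberately unoptimized) T-SQL implementation of the Luhn check; it’s a generic sketch, not Peter Larsson’s version.  Working from the rightmost digit, you double every second digit, subtract 9 from any doubled value over 9, and sum the results; a valid number’s sum is divisible by 10.

    CREATE FUNCTION dbo.IsLuhnValid (@CardNumber VARCHAR(19))
    RETURNS BIT
    AS
    BEGIN
        DECLARE @Sum INT = 0,
                @Pos INT = LEN(@CardNumber),
                @Digit INT,
                @DoubleIt BIT = 0;

        WHILE @Pos >= 1
        BEGIN
            SET @Digit = CAST(SUBSTRING(@CardNumber, @Pos, 1) AS INT);
            IF @DoubleIt = 1
            BEGIN
                SET @Digit = @Digit * 2;
                IF @Digit > 9 SET @Digit = @Digit - 9;
            END;
            SET @Sum = @Sum + @Digit;
            SET @DoubleIt = 1 - @DoubleIt;  -- double every other digit, from the right
            SET @Pos = @Pos - 1;
        END;

        RETURN CASE WHEN @Sum % 10 = 0 THEN 1 ELSE 0 END;
    END;
    GO

    -- Returns 1 for this well-known test card number:
    SELECT dbo.IsLuhnValid('4012888888881881');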


Using The Import-Export Wizard

James Anderson uses the Import-Export wizard to import data:

Whilst working on an upcoming post I realised that I needed a decent data set on which to test some of the new visualisations in SSRS 2016. I remembered this post on open data sets by Kendra Little that mentioned Data.gov. There are a lot of different data sets here but I settled on a set describing social media usage by organisations in New York.

The Import-Export wizard is nice for these types of one-off data loads—it’s the gateway into SSIS.


Unit Testing A Function

Steve Jones walks through a practical example of unit testing T-SQL with tSQLt:

However I wanted to add some tests. Does this really work? What if I don’t have a backslash? I thought the best way to do this was with a few tSQLt tests, which I quickly built. The entire process was 5-10 minutes, which isn’t a lot longer than if I had been running random tests myself with a variety of strings.

The advantage of tests is that if I come up with a new case, or another potential bug, I copy the test over, change the string and I have a new test, plus all the regressions. I’m not depending on my memory to run the test cases.

I first put the code in a function, which makes it easier to test.

tSQLt is a great tool for database unit testing.
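
To give a flavor of the pattern, here’s a minimal sketch along the same lines; the function and test names below are hypothetical stand-ins, not Steve’s actual code.  The function pulls the file name off the end of a path, and the tests cover both the backslash and no-backslash cases.

    -- Hypothetical function under test: return everything after the last backslash.
    CREATE FUNCTION dbo.GetFileName (@Path NVARCHAR(260))
    RETURNS NVARCHAR(260)
    AS
    BEGIN
        RETURN CASE
                   WHEN @Path LIKE '%\%'
                       THEN RIGHT(@Path, CHARINDEX('\', REVERSE(@Path)) - 1)
                   ELSE @Path
               END;
    END;
    GO

    EXEC tSQLt.NewTestClass 'GetFileNameTests';
    GO

    CREATE PROCEDURE GetFileNameTests.[test returns portion after last backslash]
    AS
    BEGIN
        DECLARE @Actual NVARCHAR(260) = dbo.GetFileName(N'C:\Backups\db.bak');
        EXEC tSQLt.AssertEqualsString @Expected = N'db.bak', @Actual = @Actual;
    END;
    GO

    CREATE PROCEDURE GetFileNameTests.[test returns whole string when no backslash]
    AS
    BEGIN
        DECLARE @Actual NVARCHAR(260) = dbo.GetFileName(N'db.bak');
        EXEC tSQLt.AssertEqualsString @Expected = N'db.bak', @Actual = @Actual;
    END;
    GO

    EXEC tSQLt.Run 'GetFileNameTests';

Adding a new case is then just a copy of one test procedure with a different input string, and every run re-checks the old cases for free.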


Power BI Plus Fitbit

Rob Farley visualizes his Fitbit data with Power BI:

I added a column:

    RollinAvgSteps =
        AVERAGEX(
            FILTER(
                fitbit_export_20160214,
                EARLIER(fitbit_export_20160214[Date]) >= fitbit_export_20160214[Date]
            ),
            fitbit_export_20160214[Steps]
        )

…which takes the average of my steps to date. There are a bunch of ways to achieve this, but this is the way that I chose. And you can see that the average line is (happily) improving! Oh, and because I pulled down the extract on the 14th, there’s a dip at the end. My numbers were much healthier by the end of the day, and despite spending way too long NOT walking, I did about 7244 steps that day.

You can see the result at http://bit.ly/RobFitbit

I like the rolling average that Rob added in.


VARCHAR(1)

Kenneth Fisher warns against low VARCHAR sizes:

The first thing you’ll notice is that a single space is stored the same way in both columns. With an empty string, on the other hand, we see a difference. Char columns are fixed length. So even though we inserted an empty string into it we get back a single space.

The next major difference is that varchar columns require an extra two bytes of storage. So a varchar(1) column actually uses three bytes, not just the one byte that char(1) does.

This is exactly the type of scenario row-level compression improves.
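
If you want to see the empty-string behavior for yourself, here’s a quick demo (the table name is arbitrary).  Note that the extra two bytes for varchar live in the row’s variable-length section, so DATALENGTH won’t show them directly.

    CREATE TABLE dbo.WidthDemo (c CHAR(1), v VARCHAR(1));
    INSERT dbo.WidthDemo VALUES ('', ''), (' ', ' ');

    -- The char column pads the empty string to a single space (DATALENGTH = 1 in both rows);
    -- the varchar column keeps the zero-length string (DATALENGTH = 0, then 1).
    SELECT c, v,
           DATALENGTH(c) AS CharBytes,
           DATALENGTH(v) AS VarcharBytes
    FROM dbo.WidthDemo;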
