Managing Database Changes with Scripts

Ed Elliott continues a series on automated database deployments with a quick change process:

In my blog post here https://the.agilesql.club/blogs/ed-elliott/2019-06-10/steps-to-automated-database-deployments I described the steps you need to go through so you can build up your confidence that you are capable of deploying databases using automation. I mean, after all, knowing that it is possible to automate your deployments and having confidence that they will succeed are two very different things.

Even with the best tooling in the world, automated database deployments are still a struggle, and there is one key thing that you can do, no matter what tools you choose, and that is to make the deployments re-runnable. (Insert discussion here on the word idempotent and how it means re-runnable but sounds far cooler and intellectual). If you make your deployments re-runnable then you can, by their very definition, re-run them.

Click through for two options. I definitely prefer option number 1 as well.
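To make "re-runnable" concrete, here is a minimal sketch of the idea (my own example rather than either of Ed's options, and the table, column, and procedure names are made up): guard each change so that running the script a second time is a no-op.

    IF NOT EXISTS
    (
        SELECT 1
        FROM sys.columns
        WHERE object_id = OBJECT_ID(N'dbo.Customer')
            AND name = N'LoyaltyTier'
    )
    BEGIN
        -- Only add the column if it is not already there, so a re-run changes nothing.
        ALTER TABLE dbo.Customer ADD LoyaltyTier tinyint NOT NULL
            CONSTRAINT DF_Customer_LoyaltyTier DEFAULT (0);
    END;
    GO

    -- CREATE OR ALTER (SQL Server 2016 SP1 and later) is naturally re-runnable for modules.
    CREATE OR ALTER PROCEDURE dbo.Customer_Get
        @CustomerID int
    AS
    BEGIN
        SELECT c.CustomerID, c.LoyaltyTier
        FROM dbo.Customer c
        WHERE c.CustomerID = @CustomerID;
    END;
    GO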

Analyzing Data by the Numbers

I am close to wrapping up my series on forensic accounting techniques:

Round number analysis focuses on the final digits of a number, specifically looking at how many 0s there are at the end after rounding off the decimal. We’re interested in the percentage of records in each rank (or Type). For example, $10,820 is a Type 1 because there is a single 0 at the end. $8,300 is a Type 2 because there are two 0s at the end. $11,000 is a Type 3. And $9,009 is a Type 0 because there are no 0s at the end.

With that in mind, how do we do this kind of calculation in SQL Server? With one of my favorite operators: APPLY.

Click through to watch me look at round numbers, last digits, and first digits.
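If you want a feel for the mechanics before watching, here is a stripped-down sketch of the Type calculation, just to show the shape of it. The dbo.Invoices table and Amount column are hypothetical.

    SELECT
        r.NumberType,
        NumberOfRecords = COUNT(*),
        PercentOfRecords = 100.0 * COUNT(*) / SUM(COUNT(*)) OVER ()
    FROM dbo.Invoices i
        CROSS APPLY
        (
            -- Round off the decimal portion first.
            SELECT RoundedAmount = CAST(ROUND(i.Amount, 0) AS bigint)
        ) a
        CROSS APPLY
        (
            -- Count trailing zeroes to get the Type.
            SELECT NumberType = CASE
                WHEN a.RoundedAmount % 1000 = 0 THEN 3 -- e.g., $11,000
                WHEN a.RoundedAmount % 100 = 0 THEN 2  -- e.g., $8,300
                WHEN a.RoundedAmount % 10 = 0 THEN 1   -- e.g., $10,820
                ELSE 0                                 -- e.g., $9,009
            END
        ) r
    GROUP BY r.NumberType
    ORDER BY r.NumberType;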

Dropping Database Objects with Aplomb

Pamela Mooney has a two-part series on dropping database objects. Part one includes a big setup script:

Some months ago, a fellow DBA came to me and expressed her concern over the normal housecleaning process that occurred at her company.  Developers or product owners would submit changes for objects which were no longer used, only to find that sometimes, they were.  No damage had been done, but the possibility of creating an emergent situation was there.

I could well understand her concern.  Most of us who have been DBAs for any length of time have probably come across a similar scenario.  I thought it would be fun to write a process that reduced the risk of dropping database objects and made rollbacks a snap.

Part 2 handles the actual drops:

Now, the objects in the table will be dropped after 120 days.  But what if you need them to be dropped before (or after)?  Both options work, but I’ll show you the before, and also what happens if there is a problem with the drop.

Check it out and drop with impunity. Or at least the bare minimum of punity.
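The broad shape of the safety net, in a nutshell: flagged objects sit in a holding table, and a scheduled job only drops the ones that have aged past the cutoff. The sketch below is my own simplified version with a hypothetical table, not Pamela's actual scripts, and the rollback comment is one plausible approach rather than her exact mechanism.

    -- Hypothetical holding table of objects flagged for removal. A scheduled job
    -- could script out each object's definition before dropping it (one plausible
    -- rollback aid) and then drop whatever has aged out.
    SELECT otd.SchemaName, otd.ObjectName, otd.ObjectType, otd.DateFlagged
    FROM dbo.ObjectsToDrop otd
    WHERE otd.DroppedDate IS NULL
        AND DATEDIFF(DAY, otd.DateFlagged, SYSDATETIME()) >= 120;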

Contrasting Flink with Kafka Streams

Sourabh Verma contrasts Apache Flink with Kafka Streams:

Initially, I would like you all to focus on a few questions before comparing the frameworks:
1. Is there any comparison or similarity between Flink and Kafka?
2. What could be better in Flink than in Kafka?
3. Is it the problem or the system requirements that lead you to choose one over the other?

I’m generally happy with both technologies as well as Spark Streaming. But as Sourabh points out, there are differences to keep in mind.

Against Surrogate Keys on Junction Tables

Lukas Eder explains the costs of surrogate keys on tables intended to join multiple tables together:

There is really no point in adding another column FILM_ACTOR_ID or ID for an individual row in this table, even if a lot of ORMs and non-ORM-defined schemas will do this, simply for “consistency” reasons (and in a few cases, because they cannot handle compound keys).

Now, the presence or absence of such a surrogate key is usually not too relevant in every day work with this table. If you’re using an ORM, it will likely make no difference to client code. If you’re using SQL, it definitely doesn’t. You just never use that additional column.

But in terms of performance, it might make a huge difference!

Lukas makes a good argument here.
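For reference, the sort of table in question looks like this (Sakila-style names, and a sketch rather than Lukas's exact DDL; it assumes actor and film tables already exist):

    CREATE TABLE dbo.film_actor
    (
        actor_id int NOT NULL,
        film_id int NOT NULL,
        -- The relationship itself is the key; no FILM_ACTOR_ID surrogate required.
        CONSTRAINT PK_film_actor PRIMARY KEY (actor_id, film_id),
        CONSTRAINT FK_film_actor_actor FOREIGN KEY (actor_id) REFERENCES dbo.actor (actor_id),
        CONSTRAINT FK_film_actor_film FOREIGN KEY (film_id) REFERENCES dbo.film (film_id)
    );

    -- A second index in the opposite column order covers lookups by film.
    CREATE INDEX IX_film_actor_film ON dbo.film_actor (film_id, actor_id);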

Thoughts on Certification

Eugene Meidinger is certifiable:

This being a complex topic, I thought I’d lay out the various factors to give a more comprehensive answer than you can easily fit in a tweet.

So the first two questions we need to answer are “Why do certs exist?” and “Why do people take them?”. Without these, we can’t give a good answer to whether you should take them. Certifications often exist for reasons that have nothing to do with your personal best interest. It is necessary to understand that fact.

Giving the economist’s spin, certifications are imperfect signals of reputation. When you know nothing else about a candidate, business partner, vendor, or ranting homeless person on the street, that cert can let you update your priors about the person. The exclusivity of the certification goes a long way in building credence: the MCM (or MCSM) has such a positive reputation even years after its cancellation because it was so difficult an exam that the only way a person could pass is if that person really knew the topic extremely well. By contrast, the old MCSE certifications from the early 2000s were a joke because anybody could memorize a brain dump, spit out answers, and get a cert.

The economist in me also says that certifications tend to be a net drain because you’re spending time on an imperfect signal when there are probably better imperfect signals out there. Your blog, YouTube/Twitch channel (assuming you’re not just playing Slay the Spire all day), and GitHub repo are going to tell me more about your interests and technical capabilities.

Read what Eugene has to say. I think we agree on the broad strokes, but I’m probably more in the “not worth it” camp than he is, with the exception of cases where it’s necessary to land a business contract (e.g., needing to be a Microsoft Gold Partner).

Disable Lightweight Pooling

Randolph West explains why enabling lightweight pooling in SQL Server is almost always a bad idea:

When can I enable lightweight pooling then?
Don’t. But if you must, these are the conditions under which Microsoft suggests it may be useful:
– Large multi-processor servers are in use.
– All servers are running at or near maximum capacity.
– A lot of context switching occurs (greater than 20,000 per second).

We can measure context switching with a performance counter in Performance Monitor on Windows, so the last two items on this list can be monitored. Specifically, use the Context Switches/sec counter to track context switching.

I’m sure there were a few customers who benefited from this, but I’ve neither seen nor heard of a case where it did actually help.
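If you want to check where you stand, the setting itself is easy to inspect (a quick sketch; the counter Randolph mentions lives in Performance Monitor rather than inside SQL Server):

    -- 0 = thread mode (the default and almost always the right answer); 1 = fiber mode.
    EXEC sys.sp_configure N'show advanced options', 1;
    RECONFIGURE;
    EXEC sys.sp_configure N'lightweight pooling';

    -- The 20,000-per-second guideline refers to the Context Switches/sec counter
    -- under the System object in Performance Monitor, measured at the OS level.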

Dealing With Massive Database Counts

Edward Harrison describes some of the issues you run into when working on a SQL Server instance with thousands of databases:

One of the headaches we frequently have to deal with is the excessive amount of time it takes for SQL Servers with the discussed design to go through any planned patches. We have seen this while carrying out some planned maintenance for one of our customers, where the work simply comprised failing over a SQL Server instance from the active node to the passive node so that the former active node could be patched. The passive node had had its SQL Server patched earlier in the day while the active node was still on the older patch level.

This SQL Server had 10,000 databases, and all that we needed to do was click the buttons to fail over the SQL Server and then wait for it to come online on the other node. In the end, this failover took nearly four hours to complete because upgrade scripts had to be applied to each of the 10,000 databases.

Even a couple thousand databases on an instance can slow things down considerably. Consider an operation which takes 10 milliseconds per database, like laying out a database in Object Explorer in SSMS. With 10K databases, that 10ms activity (note that I’m making up the number) would take 100 seconds. And these are often blocking tasks, meaning you’re sitting there spinning your wheels for 100 seconds before you can do the thing you wanted to do. The pain is real. As for server activities, most administrative functions (backups, CHECKDB, index rebuilding, etc.) are done one database at a time, meaning you have a lot of time spent on this stuff.
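To make the one-at-a-time point concrete, most maintenance routines reduce to a loop like this (just a sketch; real maintenance scripts are considerably smarter), so total runtime grows linearly with database count:

    DECLARE @DatabaseName sysname;

    DECLARE DatabaseCursor CURSOR LOCAL FAST_FORWARD FOR
        SELECT d.name
        FROM sys.databases d
        WHERE d.database_id > 4; -- skip the system databases

    OPEN DatabaseCursor;
    FETCH NEXT FROM DatabaseCursor INTO @DatabaseName;

    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- Per-database work (backup, CHECKDB, index maintenance) goes here and runs
        -- serially: 10,000 databases means 10,000 trips through this loop.
        FETCH NEXT FROM DatabaseCursor INTO @DatabaseName;
    END;

    CLOSE DatabaseCursor;
    DEALLOCATE DatabaseCursor;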

Saving To Excel From Azure Data Studio

Bob Pusateri shows us how you can export to Excel from Azure Data Studio:

In SQL Server Management Studio, there’s no single-step way to save a result set to Excel. Most commonly I will just copy/paste a result set into a spreadsheet, but depending on the size of the result set and the types of data involved, that doesn’t always play nicely.

But Azure Data Studio does it WAY better, trust me. If you want that result set in a spreadsheet, just save it as one and poof – you have an Excel file!

Considering that Excel is the most popular BI tool, it makes sense to support it.

Things To Know About Databricks UAP

Kara Annanie has five things you should know about the Databricks Unified Analytics Platform:

4. A Spark Dataframe is not the same as a Pandas/R Dataframe
Spark Dataframes are specifically designed to use distributed memory to perform operations across a cluster whereas Pandas/R Dataframes can only run on one computer. This means that you need to use a Spark Dataframe to realize the benefits of the cluster when coding in Python or R within Databricks.

This is a nice set of tips for people getting started with Spark and Databricks.
