Dropping Database Objects with Aplomb

Pamela Mooney has a two-part series on dropping database objects. Part 1 includes a big setup script:

Some months ago, a fellow DBA came to me and expressed her concern over the normal housecleaning process that occurred at her company.  Developers or product owners would submit changes for objects which were no longer used, only to find that sometimes, they were.  No damage had been done, but the possibility of creating an emergent situation was there.

I could well understand her concern.  Most of us who have been DBAs for any length of time have probably come across a similar scenario.  I thought it would be fun to write a process that reduced the risk of dropping database objects and made rollbacks a snap.

Part 2 handles the actual drops:

Now, the objects in the table will be dropped after 120 days.  But what if you need them to be dropped before (or after)?  Both options work, but I’ll show you the before, and also what happens if there is a problem with the drop.

Check it out and drop with impunity. Or at least the bare minimum of punity.
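For flavor, here is a minimal sketch of the general pattern the series follows: queue an object for deletion, drop it only once the waiting period has elapsed, and log failures instead of halting. The table and column names below are my own invention for illustration, not Pamela's actual schema.

-- Hypothetical holding table for objects scheduled to be dropped.
CREATE TABLE dbo.ObjectsToDrop
(
    SchemaName   sysname NOT NULL,
    ObjectName   sysname NOT NULL,
    QueuedOn     datetime2(0) NOT NULL DEFAULT SYSDATETIME(),
    DroppedOn    datetime2(0) NULL,
    ErrorMessage nvarchar(4000) NULL
);
GO

-- Drop anything queued more than 120 days ago, logging failures and moving on.
DECLARE @schema sysname, @object sysname, @sql nvarchar(max);

DECLARE drop_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT SchemaName, ObjectName
    FROM dbo.ObjectsToDrop
    WHERE DroppedOn IS NULL
      AND QueuedOn < DATEADD(DAY, -120, SYSDATETIME());

OPEN drop_cursor;
FETCH NEXT FROM drop_cursor INTO @schema, @object;

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'DROP TABLE ' + QUOTENAME(@schema) + N'.' + QUOTENAME(@object) + N';';

    BEGIN TRY
        EXEC sys.sp_executesql @sql;
        UPDATE dbo.ObjectsToDrop
           SET DroppedOn = SYSDATETIME()
         WHERE SchemaName = @schema AND ObjectName = @object;
    END TRY
    BEGIN CATCH
        -- Record the problem (e.g., a foreign key still referencing the table).
        UPDATE dbo.ObjectsToDrop
           SET ErrorMessage = ERROR_MESSAGE()
         WHERE SchemaName = @schema AND ObjectName = @object;
    END CATCH;

    FETCH NEXT FROM drop_cursor INTO @schema, @object;
END

CLOSE drop_cursor;
DEALLOCATE drop_cursor;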

Contrasting Flink with Kafka Streams

Sourabh Verma contrasts Apache Flink with Kafka Streams:

Initially, I would like you all to focus on a few questions before comparing the frameworks:
1. Is there any comparison or similarity between Flink and Kafka?
2. What could be better in Flink over Kafka?
3. Is it the problem or the system requirements that push you to use one over the other?

I’m generally happy with both technologies as well as Spark Streaming. But as Sourabh points out, there are differences to keep in mind.

Against Surrogate Keys on Junction Tables

Lukas Eder explains the costs of surrogate keys on tables intended to join multiple tables together:

There is really no point in adding another column FILM_ACTOR_ID or ID for an individual row in this table, even if a lot of ORMs and non-ORM-defined schemas will do this, simply for “consistency” reasons (and in a few cases, because they cannot handle compound keys).

Now, the presence or absence of such a surrogate key is usually not too relevant in everyday work with this table. If you’re using an ORM, it will likely make no difference to client code. If you’re using SQL, it definitely doesn’t. You just never use that additional column.

But in terms of performance, it might make a huge difference!

Lukas makes a good argument here.
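To make the shape concrete: Lukas works from the Sakila schema, where film_actor links films to actors. A T-SQL rendering of his recommendation, keyed on the composite of the two foreign keys with no surrogate, would look something like this (the constraint and index names are mine, and it assumes actor and film tables already exist):

-- Junction table keyed on the two foreign keys themselves, no surrogate ID.
CREATE TABLE dbo.film_actor
(
    actor_id    int NOT NULL,
    film_id     int NOT NULL,
    last_update datetime2(0) NOT NULL DEFAULT SYSDATETIME(),
    CONSTRAINT pk_film_actor PRIMARY KEY (actor_id, film_id),
    CONSTRAINT fk_film_actor_actor FOREIGN KEY (actor_id) REFERENCES dbo.actor (actor_id),
    CONSTRAINT fk_film_actor_film  FOREIGN KEY (film_id)  REFERENCES dbo.film (film_id)
);

-- The reversed column order covers navigation from the film side, so both
-- join directions can be served from an index without touching extra columns.
CREATE INDEX ix_film_actor_film ON dbo.film_actor (film_id, actor_id);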

Thoughts on Certification

Eugene Meidinger is certifiable:

This being a complex topic, I thought I’d lay out the various factors to give a more comprehensive answer than you can easily fit in a tweet.

So the first two questions we need to answer are “Why do certs exist?” and “Why do people take them?”. Without these, we can’t give a good answer to whether you should take them. Certifications often exist for reasons that have nothing to do with your personal best interest. It is necessary to understand that fact.

Giving the economist’s spin, certifications are imperfect signals of reputation. When you know nothing else about a candidate, business partner, vendor, or ranting homeless person on the street, that cert can let you update your priors about the person. The exclusivity of the certification goes a long way in building credence: the MCM (or MCSM) has such a positive reputation even years after its cancellation because it was so difficult an exam that the only way a person could pass was if that person really knew the topic extremely well. By contrast, the old MCSE certifications from the early 2000s were a joke because anybody could memorize a brain dump, spit out answers, and get a cert.

The economist in me also says that certifications tend to be a net drain because you’re spending time on an imperfect signal when there are probably better imperfect signals out there. Your blog, YouTube/Twitch channel (assuming you’re not just playing Slay the Spire all day), and GitHub repo are going to tell me more about your interests and technical capabilities.

Read what Eugene has to say. I think we agree on the broad strokes but I’m probably more in the “not worth it” camp than he is with the exception of cases where it’s necessary to land a business contract (e.g., needing to be a Microsoft Gold Partner).

Disable Lightweight Pooling

Randolph West explains why enabling lightweight pooling in SQL Server is almost always a bad idea:

When can I enable lightweight pooling then?
Don’t. But if you must, these are the conditions under which Microsoft suggests it may be useful:
– Large multi-processor servers are in use.
– All servers are running at or near maximum capacity.
– A lot of context switching occurs (greater than 20,000 per second).

We can measure context switching with the Context Switches/sec counter in Performance Monitor on Windows, so the last two items on this list are straightforward to monitor.

I’m sure there were a few customers who benefited from this, but I’ve neither seen nor heard of a case where it did actually help.
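If you do find yourself in that rare scenario, the switch itself is a normal sp_configure call. Lightweight pooling is an advanced option and a static one, so it only takes effect after the instance restarts:

-- 'lightweight pooling' is hidden behind advanced options.
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Check the current value first.
EXEC sys.sp_configure 'lightweight pooling';

-- Enable it (again: almost certainly don't). Requires an instance restart.
EXEC sys.sp_configure 'lightweight pooling', 1;
RECONFIGURE;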

Dealing With Massive Database Counts

Edward Harrison describes some of the issues you run into when working on a SQL Server instance with thousands of databases:

One of the headaches we frequently have to deal with is the excessive amount of time it takes for SQL Servers with the discussed design to go through any planned patches. We have seen this while carrying out some planned maintenance for one of our customers, where the work simply comprised failing a SQL Server instance over from the active node onto the passive node so that the former active node could be patched. The passive node had had its SQL Server patched earlier in the day while the active node was still on the older patch level.

This SQL Server had 10,000 databases and all that we needed to do was to simply click the buttons to fail over the SQL Server and then wait for it to come online on the other node. In the end, this failover took nearly four hours to complete because upgrade scripts had to be applied to each of the 10,000 databases.

Even a couple thousand databases on an instance can slow things down considerably. Consider an operation which takes 10 milliseconds per database, like laying out the database list in Object Explorer in SSMS. With 10K databases, that 10ms activity (note that I’m making up the number) would take 100 seconds. And these are often blocking tasks, meaning you’re sitting there spinning your wheels for 100 seconds before you can do the thing you wanted to do. The pain is real. And for server activities, most administrative functions (backups, CHECKDB, index rebuilding, etc.) are done one database at a time, meaning you spend a lot of time waiting on this stuff.
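If you want to feel that serial cost for yourself, here is a quick illustrative loop (my own sketch, not from Edward's post) that touches every online user database one at a time, the same way most maintenance jobs do:

-- Visit each online user database serially and time the whole pass.
DECLARE @db sysname, @sql nvarchar(max), @start datetime2(3) = SYSDATETIME();

DECLARE db_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT name
    FROM sys.databases
    WHERE database_id > 4            -- skip the system databases
      AND state_desc = N'ONLINE';

OPEN db_cursor;
FETCH NEXT FROM db_cursor INTO @db;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- Stand-in for real per-database work (backup, CHECKDB, upgrade script, ...).
    SET @sql = N'USE ' + QUOTENAME(@db) + N'; SELECT COUNT(*) FROM sys.objects;';
    EXEC sys.sp_executesql @sql;

    FETCH NEXT FROM db_cursor INTO @db;
END

CLOSE db_cursor;
DEALLOCATE db_cursor;

SELECT DATEDIFF(MILLISECOND, @start, SYSDATETIME()) AS total_ms;

Multiply whatever per-database time you observe by ten thousand and the four-hour failover stops being surprising.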

Saving To Excel From Azure Data Studio

Bob Pusateri shows us how you can export to Excel from Azure Data Studio:

In SQL Server Management Studio, there’s no single-step way to save a result set to Excel. Most commonly I will just copy/paste a result set into a spreadsheet, but depending on the size of the result set and the types of data involved, that doesn’t always play nicely.

But Azure Data Studio does it WAY better, trust me. If you want that result set in a spreadsheet, just save it as one and poof – you have an Excel file!

Considering that Excel is the most popular BI tool, it makes sense to support it.

Things To Know About Databricks UAP

Kara Annanie has five things you should know about the Databricks Unified Analytics Platform:

4. A Spark Dataframe is not the same as a Pandas/R Dataframe
Spark Dataframes are specifically designed to use distributed memory to perform operations across a cluster whereas Pandas/R Dataframes can only run on one computer. This means that you need to use a Spark Dataframe to realize the benefits of the cluster when coding in Python or R within Databricks.

This is a nice set of tips for people getting started with Spark and Databricks.

Updating Power BI Reports Based On Website Changes

Kasper de Jonge shows how you can scrape a webpage using an M function and check for specific updates:

A quick hack today. I got this question from someone who needed to be able to update a report and show users that something had changed. This is easy when you have access to a database and can add data to it, but in this case that was not possible.
So I came up with a hacky (and great :P) way to do this, and wanted to share it in case it comes in handy in your box of tricks :).

It is a little bit hacky, but much less so on a website with a last updated date visible someplace.

When SLEEP_TASK Waits Are Important

Ginger Keys gives an example where the SLEEP_TASK wait indicates a performance problem:

Ordinarily SLEEP_TASK is a nonspecific wait type in SQL Server which occurs when a task sleeps while waiting for a generic event to occur, according to Microsoft documentation.  This wait type can usually be safely ignored, however on some occasions it can happen when a script does not execute completely or hangs up for long periods of time.

The SLEEP_TASK wait means that a thread is waiting on a resource or for some event to occur; it could indicate background task scheduling, a query plan exchange operator that isn’t tracked by CXPACKET, or a hashing operation that spills to tempdb.

Read the whole thing. For a bit more information, check out the SQLskills description of this wait type.
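If you want to see where your instance stands, the cumulative numbers live in sys.dm_os_wait_stats; compare them against a baseline rather than reading the raw figure in isolation:

-- Cumulative SLEEP_TASK waits since instance startup (or since wait stats were cleared).
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms,
       max_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = N'SLEEP_TASK';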
