
Curated SQL Posts

New Features in Python 3.9

Harini Guptha walks us through some of what’s upcoming in Python 3.9:

The latest beta preview, Python 3.9.0b5, was released on July 20th, 2020. Yes, you heard it right. This is the last of the five planned beta release previews. This beta release gives the community an opportunity to test the new features.

If you are planning to go for a Python certification, make sure that you check out which functionalities are deprecated and which features are newly added in this release. It’s always good to keep yourself up to date with the latest information. Without further ado, let’s go through the new features of Python 3.9.0b5.

Click through for the list.


Mentoring from Paul Randal

Paul Randal is offering up mentorship time:

If I remember correctly, I think I helped 8 or more people decide to change jobs for a better work environment suited to their goals, and several people go it alone as consultants. It was hugely satisfying to help so many people with their careers and lives, in a non-technical capacity.

Now it’s time to do it again, as I haven’t done any public mentoring since 2015, so this blog post serves as a call for prospective mentees!

Please read the rest of this post carefully, so you’re clear how this works. We’re making a time commitment to each other so I want to be up-front about a few things.

I was one of those 54 mentees back in 2015 and can recommend it. I will say, though, that you get out of it exactly what you put in—this isn’t some “I want to advance my career” easy mode.


Tracking Resource Utilization by User

Brent Ozar uses most of Resource Governor:

You’ve got a bunch of users on the same SQL Server, and you wanna know who’s using the most resources. SQL Server doesn’t make this easy. You can query some management views to see which queries or databases have been using the most resources, but you can’t intuitively tell which USERS are.

To solve this problem, we’re going to use Resource Governor.

Wait. Come back.

I’ve always liked the idea behind Resource Governor, and since about SQL Server 2016, it has been quite a useful feature, because “make some queries slow down” can absolutely be the right answer when those queries are harming the performance of queries which matter more.
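For a sense of the moving parts involved, here’s a minimal sketch of routing logins into per-user workload groups and then reading usage from the Resource Governor DMVs. This isn’t necessarily Brent’s exact approach, and the pool, group, function, and login names below are made up:

```sql
-- Run in master. Pool, group, function, and login names are hypothetical.
CREATE RESOURCE POOL UserPool_Alice;
CREATE WORKLOAD GROUP UserGroup_Alice USING UserPool_Alice;
GO

-- Classifier function: routes each login to its own workload group.
CREATE FUNCTION dbo.rg_classifier()
RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    RETURN CASE SUSER_SNAME()
               WHEN N'Alice' THEN N'UserGroup_Alice'
               ELSE N'default'
           END;
END;
GO

ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.rg_classifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO

-- Cumulative requests and CPU per workload group (and therefore per user).
SELECT wg.name, wg.total_request_count, wg.total_cpu_usage_ms
FROM sys.dm_resource_governor_workload_groups AS wg;
```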


How a 60GB Database Backup Became 1TB

Garry Bargsley points out the importance of a tiny flag:

I put on my investigator hat and began looking around. I started with the Windows File Server to see what was actually on the drive in question. Just as I thought, three SQL backup files were in their proper folder. Although there were only three files, something else caught my attention: one backup file was 800GB and another 1TB. That was strange, as I didn’t think the source databases were that big. Sure enough, I looked, and one database is 60GB and the other is 45GB.

Something is not right here!! So, next, I ran a RESTORE HEADERONLY against one of the backup files. What did I see?

Read on to learn what Garry saw, and then what Garry didn’t see.
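If you haven’t used it before, RESTORE HEADERONLY lists one row per backup set stored in a file, which makes this kind of mystery easy to spot. The sketch below uses made-up paths and database names, and the flag it shows (WITH INIT) is just the usual suspect when a backup file keeps growing, since the default behavior appends a new backup set to the file each time:

```sql
-- Hypothetical path and database name.
-- One row per backup set in the file; many rows means backups have been appending.
RESTORE HEADERONLY FROM DISK = N'\\FileServer\Backups\SalesDB.bak';

-- WITH INIT overwrites the existing backup sets instead of appending (NOINIT is
-- the default), so the file stays roughly the size of a single backup.
BACKUP DATABASE SalesDB
    TO DISK = N'\\FileServer\Backups\SalesDB.bak'
    WITH INIT, COMPRESSION;
```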


Transaction Modes in SQL Server

I have a video and blog post out:

What I want to do in today’s post is to cover the different sorts of transaction modes and get into the debate about whether you should use explicit transactions or rely on auto-committed transactions for data modification in SQL Server. This came from an interesting discussion at work, where some of the more recent database engineers were curious about our company policy around transaction modes and understanding the whys behind it. I didn’t come up with the policy, but my thinking isn’t too far off from the people who did.

But before I get too far off course, let’s briefly lay out some of the basics around transactions.

Read on for a good deal of info on the different transaction modes, including a bit on why implicit transactions (as opposed to autocommit transactions) are a bad thing in SQL Server.
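As a quick reference for the terms involved, here’s a minimal sketch of the three modes; the table and column names are made up:

```sql
-- Autocommit (the default): each statement is its own transaction.
UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountID = 1;

-- Explicit: you define the boundaries, so related changes succeed or fail together.
BEGIN TRANSACTION;
    UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountID = 1;
    UPDATE dbo.Accounts SET Balance = Balance + 100 WHERE AccountID = 2;
COMMIT TRANSACTION;

-- Implicit: SQL Server quietly opens a transaction on the first qualifying statement
-- and holds it (and its locks) open until you remember to COMMIT or ROLLBACK.
SET IMPLICIT_TRANSACTIONS ON;
UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountID = 1;
COMMIT;
SET IMPLICIT_TRANSACTIONS OFF;
```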


Transaction Isolation Levels in SQL Server

Dan Jackson walks us through the different transaction isolation levels in SQL Server and what they mean for us:

We will start with a definition and then evolve it: the isolation level specifies how much one transaction must be protected from resource or data modifications made by other transactions.

Consider the case where user A is trying to read a list of products out of a table; meanwhile, user B comes along and changes some of the product information in the table. As part of that same transaction, user A comes back to read the product table again, but it has changed. Do you want user A to read the new information or not?

Isolation levels allow you to decide what would happen in scenarios like the one I’ve just described and so it should come as no surprise that they are described in terms of which concurrency side effects they allow.

Read on for a description of typically-undesirable side effects and the isolation levels which prevent them.
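To make the user A / user B scenario concrete, here’s a small sketch of one possible answer to that question, using a hypothetical dbo.Products table:

```sql
-- Under REPEATABLE READ, the shared locks from the first SELECT are held until the
-- transaction ends, so user B's UPDATE is blocked and user A sees the same rows twice.
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;

BEGIN TRANSACTION;
    SELECT ProductID, ListPrice FROM dbo.Products WHERE CategoryID = 3;

    -- ...user B attempts an UPDATE here and has to wait...

    SELECT ProductID, ListPrice FROM dbo.Products WHERE CategoryID = 3;
COMMIT TRANSACTION;

-- Other levels trade consistency for concurrency:
-- SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;  -- allows dirty reads
-- SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;      -- also prevents phantom rows
```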


TOP and Ordering

Erik Darling is in the middle of a back-to-basics series on performance tuning:

And you see, once you set up a query to return the TOP N rows, there’s an expectation that users get to choose the order they start seeing rows in. As long as we stick to columns whose ordering is supported by an index, things will be pretty stable.

Once we go outside that, a TOP can be rough on a query.

Read on for an example of what happens when that type of thing goes wrong.
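To picture the pattern, here’s a generic sketch with made-up table and index names (not Erik’s demo):

```sql
-- With an index leading on the ORDER BY column, SQL Server can read the first
-- 50 rows in index order and stop early.
CREATE INDEX IX_Posts_CreationDate ON dbo.Posts (CreationDate);

SELECT TOP (50) Id, Title, CreationDate
FROM dbo.Posts
ORDER BY CreationDate DESC;

-- Order by a column no index supports and the engine may scan and sort the whole
-- table just to find those 50 rows; that's where TOP gets rough.
SELECT TOP (50) Id, Title, Score
FROM dbo.Posts
ORDER BY Score DESC;
```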


So You Want to Fail Over a SQL Managed Instance

Danimir Ljepava takes us through user-initiated failover of SQL Managed Instances:

In August 2020, we released a new feature, user-initiated manual failover, which allows you to manually trigger a failover on SQL Managed Instance using PowerShell or CLI commands, or by invoking an API call.

Manually initiated failover on a managed instance is the equivalent of the automated failover for high availability and software patches that the service initiates on its own. Manually invoking a failover on MI helps test end-to-end applications for fault resiliency to automatic failovers in case of planned or unplanned events before deploying to production. In addition to testing how failover impacts existing database sessions, it can also help verify whether it changes end-to-end performance due to changes in network latency. In some cases, if performance issues are encountered on SQL MI, manually invoking a failover to a new node can help mitigate the performance issue.

Read on to see how you can perform failover and how you can confirm that it worked.


Learning from a Hadoop Outage

Sandhya Ramu and Vasanth Rajamani have an after-action report:

For companies and organizations, failure tends to be far more illuminating than success, and the lingering effects of a failure can be harmful if the team moves too quickly and does not resolve the issue in a thorough and transparent manner. We recently ran into a large incident that involved data loss in our big data ecosystem, and by reflecting on our diagnosis and response, we hope that our learnings from this impactful incident will be insightful.

Here’s what happened: roughly 2% of the machines across a handful of racks were inadvertently reimaged. This was caused by procedural gaps in our Hadoop infrastructure’s host life cycle management. Compounding our woes, the incident happened on our business-critical production cluster.

Read on to understand what happened and why. It’s a lesson in the importance of having a disaster recovery plan and testing it.


An Introduction to Spark Streaming

Sarfaraz Hussain has started a series on Spark Streaming. The first post gives an introduction to the topic:

The philosophy behind the development of Structured Streaming is that:

“We as end users should not have to reason about streaming.”

What that means is that we as end users should only write batch-like queries, and it is Spark’s job to figure out how to run them on a continuous stream of data and continuously update the result as new data flows in.

Sarfaraz then follows this up with a bit on the structure of a streaming query:

So, as new data comes in, Spark breaks it into micro-batches (based on the processing trigger), processes each batch, and writes the results out to the Parquet file.

It is Spark’s job to figure out whether the query we have written executes on batch data or streaming data. Since, in this case, we are reading data from a Kafka topic, Spark will automatically figure out how to run the query incrementally on the streaming data.

Check them both out.
