Press "Enter" to skip to content

Curated SQL Posts

Deleting A Small Number Of Rows From A Big Table

Brent Ozar has a tip for deleting a relatively small percentage of data from a very large table:

Say you’ve got a table with millions or billions of rows, and you need to delete some rows. Deleting ALL of them is fast and easy – just do TRUNCATE TABLE – but things get much harder when you need to delete a small percentage of them, say 5%.

It’s especially painful if you need to do regular archiving jobs, like deleting the oldest 30 days of data from a table with 10 years of data in it.

The trick is making a view that contains the top, say, 1,000 rows that you want to delete, and then deleting from the view.
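
Here's the shape of it, sketched against a hypothetical dbo.Comments table (the table, column, and cutoff date are illustrative; Brent's demo walks through the real thing):

CREATE VIEW dbo.Comments_ToDelete
AS
SELECT TOP 1000 *
FROM dbo.Comments
ORDER BY CreationDate;
GO

-- Each pass deletes at most 1,000 of the oldest rows, keeping each
-- transaction (and its locking and log growth) small; repeat until done.
DELETE dbo.Comments_ToDelete
WHERE CreationDate < '2011-01-01';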

Read on for a demo.

Using Powershell To Find Linked Server References

Drew Furgiuele shows us how we can use the sqlserver Powershell module to find linked server references in code:

SQL Search is doing exactly what it’s designed to do here: it’s finding every object that matches that string, HumanResources. It’s unfortunate that it also happens to be the name of my linked server and a schema in my database, but such is life, right? It’s returning every object it hits a match on, which includes a bunch of views, plus one of those views actually contains my linked server reference.

I love SQL Search

Let me be clear: I’m not here to gang up on the fine folks at Redgate. This tool is beautiful and I love it. Otherwise, how else could we quickly search for objects in our databases? The alternative would be either querying system views for object definitions or using cursors to call sp_helptext over and over, and then trying to do pattern matching. Same as what SQL Search does.

There’s no easy way to sort this wheat from chaff, is there? This might be a starting point; let’s narrow down the search to the objects we might need to look at. Then, we’ll manually script each one out, one at a time. That sounds an awful lot like a manual process. “If only there was a way to automate this checking”, he asked, sarcastically.
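
A rough sketch of the automated idea, using SMO from the sqlserver module to scan view definitions for four-part names (the server name and pattern are illustrative; the post goes further and wraps the idea in a proper cmdlet):

Import-Module SqlServer
$server = New-Object Microsoft.SqlServer.Management.Smo.Server 'localhost'

foreach ($db in $server.Databases | Where-Object { -not $_.IsSystemObject }) {
    foreach ($view in $db.Views | Where-Object { -not $_.IsSystemObject }) {
        # A linked server reference shows up as a four-part name, which is
        # what separates it from a plain HumanResources schema reference
        if ($view.TextBody -match 'HumanResources\.\w+\.\w+\.\w+') {
            [pscustomobject]@{
                Database = $db.Name
                View     = '{0}.{1}' -f $view.Schema, $view.Name
            }
        }
    }
}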

Click through for a very interesting cmdlet.

Synchronous Kafka With Spring Request-Reply

Gaurav Gupta shows how to use Spring-Kafka to implement a request-reply pattern:

The behavior of request-reply is consistent even if you were to create, say, three partitions of the request topic and set the concurrency to three in the consumer factory. The replies from all three consumers still go to the single reply topic. The container at the listening end is able to do the heavy lifting of matching the correlation IDs.

Kafka’s real advantage still comes from distributed, asynchronous processing, but if you have a use case where you absolutely need synchronous processing, you can do that in Kafka as well.
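
For flavor, a minimal client-side sketch using Spring's ReplyingKafkaTemplate (the topic name and the template bean wiring are my assumptions, not from the article):

import java.util.concurrent.TimeUnit;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.springframework.kafka.requestreply.ReplyingKafkaTemplate;
import org.springframework.kafka.requestreply.RequestReplyFuture;

public class PingClient {

    private final ReplyingKafkaTemplate<String, String, String> template;

    public PingClient(ReplyingKafkaTemplate<String, String, String> template) {
        this.template = template;
    }

    public String ping() throws Exception {
        // The template stamps a correlation ID and reply-topic header on the record
        ProducerRecord<String, String> record = new ProducerRecord<>("requests", "ping");
        RequestReplyFuture<String, String, String> future = template.sendAndReceive(record);

        // Blocks until the listener container matches a reply by correlation ID
        ConsumerRecord<String, String> reply = future.get(10, TimeUnit.SECONDS);
        return reply.value();
    }
}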

Scalar Function Blocking

Erik Darling notes that scalar functions can cause multi-table blocking:

Someone had tried to be clever. Looking at the code running, if you’ve been practicing SQL Server for a while, usually means one thing.

A Scalar Valued Function was running!

In this case, someone had added that function as a computed column to the Users table.
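
The shape of the anti-pattern looks something like this (a hypothetical reconstruction, not Erik's actual function, assuming Stack Overflow-style Users and Badges tables):

CREATE FUNCTION dbo.CountBadges (@UserId INT)
RETURNS BIGINT
WITH SCHEMABINDING
AS
BEGIN
    -- Data access inside a scalar UDF: it runs row by row, forces a serial
    -- plan, and takes locks on Badges from queries that only touch Users
    RETURN (SELECT COUNT_BIG(*) FROM dbo.Badges WHERE UserId = @UserId);
END;
GO

ALTER TABLE dbo.Users ADD BadgeCount AS dbo.CountBadges(Id);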

Spoilers: this was a bad idea.

Accessing SQL Server From Scala

Sidharth Khattri shows how to use Slick, a functional database access library for Scala, to connect to SQL Server:

Now moving on to our FRM (Functional Relational Mapping) and repository setup, the following import will be used for the MS SQL Server Slick driver’s API

import slick.jdbc.SQLServerProfile.api._

And thereafter the FRM will look the same as the rest of the FRMs delineated in the official Slick documentation. For the example on this blog, let’s use the following table structure:

CREATE TABLE user_profiles (
    id INT IDENTITY (1, 1) PRIMARY KEY,
    first_name VARCHAR(100) NOT NULL,
    last_name VARCHAR(100) NOT NULL
)

whose functional relational mapping will look like this:

class UserProfiles(tag: Tag) extends Table[UserProfile](tag, "user_profiles") {
  def id: Rep[Int] = column[Int]("id", O.PrimaryKey, O.AutoInc)
  def firstName: Rep[String] = column[String]("first_name")
  def lastName: Rep[String] = column[String]("last_name")
  def * : ProvenShape[UserProfile] = (id, firstName, lastName) <> (UserProfile.tupled, UserProfile.unapply) // scalastyle:ignore
}
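
To round out the picture, a hypothetical usage sketch (the UserProfile case class and the "sqlserver" config name are my assumptions, not from the post):

import scala.concurrent.Await
import scala.concurrent.duration._
import slick.jdbc.SQLServerProfile.api._

case class UserProfile(id: Int, firstName: String, lastName: String)

object UserProfileRepository {
  // Connection details live under "sqlserver" in application.conf
  val db = Database.forConfig("sqlserver")
  val userProfiles = TableQuery[UserProfiles]

  def findByLastName(name: String): Seq[UserProfile] =
    Await.result(db.run(userProfiles.filter(_.lastName === name).result), 10.seconds)
}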

I’m definitely going to need to learn more about this.

Powershell: Validating Parameters Using Private Functions

Mike Robbins shows how to split out validation from your primary function within Powershell:

They responded by asking if it was possible to move the custom message that Throw returns to the private function. At first, I didn’t think this would be possible, but decided to try the code to make an accurate determination instead of just assuming it wasn’t possible.

I’ve now learned something else, which makes the whole process of moving the validation from the ValidateScript block to a private function much more user friendly, which is what I think the person who asked the question was trying to accomplish.

If you have several parameters with somewhat complex validation logic, this makes maintenance a lot easier.
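
A minimal sketch of the pattern (the function and parameter names here are hypothetical, not Mike's):

function Test-ValidPath {
    # Private helper: returns $true on success, throws a friendly message otherwise
    [CmdletBinding()]
    param (
        [string]$Path
    )
    if (Test-Path -Path $Path) {
        return $true
    }
    throw "'$Path' does not exist. Please provide a valid path."
}

function Get-MyData {
    [CmdletBinding()]
    param (
        # ValidateScript hands each argument to the private function,
        # which either passes it or surfaces the custom error message
        [ValidateScript({ Test-ValidPath -Path $_ })]
        [string]$Path
    )
    Get-ChildItem -Path $Path
}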

Running dbachecks In Parallel

Claudio Silva shows how you can run multiple instances of dbachecks concurrently:

Imagine that I want to check for databases in Full Recovery Model on the production environment, and I want to start (in parallel) a new check for the development environment where I want to check for Simple Recovery Model. If this setting is not changed in the correct time frame, we can end up checking for Full Recovery Model on the development environment, where we want the Simple Recovery Model.

The first time I tried to run tests for some environments in parallel, that had the need to change some configs, I didn’t realise this detail, so I ended up with many more failed tests than expected! The bell rang when the majority of the failed tests were from a specific test…the one whose value I had changed.
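
To make the hazard concrete, here is a naive parallel attempt (a sketch: the instance names are invented, and I'm assuming the stock recovery model check):

# Each job imports dbachecks and sets the expected recovery model before checking.
# The catch: Set-DbcConfig persists the value for the current user, so the two
# runs can overwrite each other's setting if the timing is wrong. See the post
# for how to keep the configurations isolated.
$prod = Start-Job {
    Import-Module dbachecks
    Set-DbcConfig -Name policy.recoverymodel.type -Value Full
    Invoke-DbcCheck -SqlInstance ProdSql01 -Check RecoveryModel
}
$dev = Start-Job {
    Import-Module dbachecks
    Set-DbcConfig -Name policy.recoverymodel.type -Value Simple
    Invoke-DbcCheck -SqlInstance DevSql01 -Check RecoveryModel
}
Receive-Job -Job $prod, $dev -Wait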

Read the whole thing before you start running tasks in parallel or even running multiple copies of dbachecks in separate Powershell windows.

The Evolution Of Hadoop

Holden Ackerman has an interesting analysis of Qubole customers’ adoption of Hadoop 2:

In Qubole’s 2018 Data Activation Report, we did a deep-dive analysis of how companies are adopting and using different big data engines. As part of this research, we found some fascinating details about Hadoop that we will detail in the rest of this blog.

A common misconception in the market is that Hadoop is dying. However, when you hear people refer to this, they often mean “MapReduce” as a standalone resource manager and “HDFS” as being the primary storage component that is dying. Beyond this, Hadoop as a framework is a core base for the entire big data ecosystem (Apache Airflow, Apache Oozie, Apache HBase, Apache Spark, Apache Storm, Apache Flink, Apache Pig, Apache Hive, Apache NiFi, Apache Kafka, Apache Sqoop…the list goes on).

I clipped this portion rather than the direct analysis because I think it’s an important point: the Hadoop ecosystem is thriving as the matter of primary importance switches from what was important a decade ago (batch processing of large amounts of data on servers with direct attached storage) to what is important today (a combination of batch and streaming processing of large amounts of data on virtualized and often cloud-based servers with network-attached flash storage).

How Spark Works: RDDs And DAGs

Shubham Agarwal gets into the way that Spark translates operations on Resilient Distributed Datasets into execution plans:

When we do a transformation on any RDD, it gives us a new RDD. But it does not start the execution of those transformations. The execution is performed only when an action is performed on the new RDD and gives us a final result.

So once you perform any action on an RDD, the Spark context gives your program to the driver.

The driver creates the DAG (directed acyclic graph) or execution plan (job) for your program. Once the DAG is created, the driver divides this DAG into a number of stages. These stages are then divided into smaller tasks and all the tasks are given to the executors for execution.
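
A quick Scala illustration of that laziness (the app name and values are arbitrary):

import org.apache.spark.sql.SparkSession

object LazyDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("LazyDemo").master("local[*]").getOrCreate()
    val sc = spark.sparkContext

    val numbers = sc.parallelize(1 to 1000000) // source RDD; nothing executes yet
    val evens   = numbers.filter(_ % 2 == 0)   // transformation: returns a new RDD, still lazy
    val doubled = evens.map(_ * 2L)            // transformation: just extends the lineage

    // Action: the driver builds the DAG, splits it into stages and tasks,
    // and hands the tasks to executors. Only now does any work happen.
    val total = doubled.sum()
    println(total)

    spark.stop()
  }
}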

Click through for more details.
