Press "Enter" to skip to content

Author: Kevin Feasel

Using Azure Data Catalog

Melissa Coates has some good advice if you start using Azure Data Catalog:

Register only data sources that users interact with. Usually the first priority is to register data sources that the users see: for instance, the reporting database or DW that you want users to go to rather than the original source data. Depending on how you want to use the data catalog, you might also want to register the original source. In that case you probably want to hide it from business users so it’s not confusing. Which leads me to the next tip…

Use security capabilities to hide unnecessary sources. The Standard (paid) version will allow you to have some sources registered but only discoverable by certain users & hidden from other users (i.e., asset level authorization). This is great for sensitive data like HR. It’s also useful for situations when, say, IT wants to document certain data sources that business users don’t access directly.

This is a good set of advice.

Control Nuggets

Meagan Longoria discusses control nuggets in BimlScript:

Control nuggets start with <# and end with #>. Just like text nuggets, control nuggets can be a single line or multiple lines. And they can contain simple or complex logic.

There were actually control nuggets in the text nugget example from the previous post. The variable declarations at the top of the file (lines 4 – 10) are control nuggets.

Below is another example file that uses code nuggets. I like to separate my design patterns, project connections, and package generation into separate files. The BimlScript shown below is from a slightly altered “caller file”, the file that I would execute to create the SSIS packages that should follow a specified design pattern. This caller file is generating Type 1 slowly changing dimensions.

Meagan also includes a couple of Biml scripts to explain control nuggets, so check those out.

TempDB Configuration

Sander Stad looks at the new TempDB defaults in 2016:

The initial size is set to 8 MB. I would never create TempDB data files with a size of 8 MB, which in my opinion is really small for a data file, even for small systems.

In most cases I would look at the size of the TempDB disk (because you create a separate disk for TempDB, right…) and fill up the entire disk with data files. In this case I would create 4 x 5 GB files to leave some space for the log file.

To properly set the initial size of TempDB, you still need to go through some checks to see how much your system might need.

Read the whole thing.
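
To put rough numbers on that layout, here is a minimal sketch of the four-file configuration Sander describes. The logical file names below are assumptions based on a default SQL Server 2016 install on a four-core instance; check sys.master_files on your own instance and adjust the sizes to fit your TempDB drive.

-- Resize the four TempDB data files to 5 GB each and leave the rest of the drive for the log.
-- Logical file names (tempdev, temp2, temp3, temp4, templog) are assumed; verify them first.
ALTER DATABASE tempdb MODIFY FILE (NAME = N'tempdev', SIZE = 5GB);
ALTER DATABASE tempdb MODIFY FILE (NAME = N'temp2',   SIZE = 5GB);
ALTER DATABASE tempdb MODIFY FILE (NAME = N'temp3',   SIZE = 5GB);
ALTER DATABASE tempdb MODIFY FILE (NAME = N'temp4',   SIZE = 5GB);
ALTER DATABASE tempdb MODIFY FILE (NAME = N'templog', SIZE = 2GB);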

Basic XML Querying

Steve Jones discusses a couple of peculiarities of XML querying within SQL Server:

This doesn’t work.

The reason this doesn’t work is that XML is case sensitive. Meaning ORDERID != OrderID. The former is in the query, the latter in the XML document. If I change the query, this works (note I have OrderID below).

Like Steve, I’m not a big fan of doing XML processing within SQL Server, but if it’s a necessary part of your workload, it’s worth knowing.
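
To see the case-sensitivity issue in miniature, here is a small sketch against a hypothetical document (not Steve’s exact example); the path you pass to value() has to match the document’s casing:

DECLARE @x XML = N'<Order><OrderID>42</OrderID></Order>';

-- Returns NULL: XML names are case sensitive, so ORDERID does not match OrderID.
SELECT @x.value('(/Order/ORDERID)[1]', 'INT') AS WrongCase;

-- Returns 42: the path matches the document's casing exactly.
SELECT @x.value('(/Order/OrderID)[1]', 'INT') AS RightCase;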

Aggregating And Joining Using Kafka Streams

Michael Noll digs into Kafka Streams, showing how to enrich data and collect aggregates:

The stream of user click events is considered to be a record stream, where each data record represents a self-contained datum.  In contrast, the stream of user geo-location updates is interpreted as a changelog stream, where each data record represents an update (i.e. any previous data records having the same record key will be replaced by the latest update).  In Kafka Streams, a record stream is represented via the so-called KStream interface and a changelog stream via the KTable interface.  Going from the high-level view to the technical view, this means that our streaming application will demonstrate how to perform a join operation between a KStream and a KTable, i.e. it is an example of a stateful computation.  This KStream-KTable join also happens to be Kafka Streams’ equivalent of performing a table lookup in a streaming context, where the table is updated continuously and concurrently.  Specifically, for each user click event in the KStream, we will lookup the user’s region (e.g. “europe”) in the KTable in order to subsequently compute the total number of user clicks per region.

Let’s showcase the beginning (input) and the end (expected output) of this data pipeline with some example data.

This article is fairly detailed, but it covers a rather interesting topic in a good way.

Using Sqoop To Transfer Data

Sai Sriparasa shows how to use Sqoop to transfer data from a Hadoop cluster into a relational database:

Here are a few best practices for exporting with Sqoop:

  • Options file—As commands with Sqoop export and Sqoop import tend to be bigger in size, I recommend storing the commands in an options file. By keeping it in an options file, you can even make it part of a version control pipeline to monitor changes to the command.

  • Field termination—With Sqoop export, I recommend providing field termination metadata using the “--fields-terminated-by” option. Also, other formatting options such as “lines-terminated-by”, “enclosed-by”, “escaped-by”, etc., can be used as required.

  • Mapper tuning—When an export job is submitted, Sqoop creates a Java class and submits a MapReduce job based on input splits; then, each mapper connects to the database to export the data. The default number of mappers is 4, so I recommend tuning the number of mappers depending on the availability of processors on the cluster. Too many mappers might cause the load to increase on the database. We recommend that you monitor the number of connections and keep track of processlist on MySQL.

  • Staging table—The Sqoop export job is broken down into multiple transactions based on the mappers. Each transaction is therefore atomic and does not have any dependencies on other transactions. I recommend using the “--staging-table” option that acts as the buffer table for the separate transactions. After all transactions have been committed, a single transaction move is made to move the data to the final destination. Use the “--clear-staging-table” option to clean up the staging table after the export job.

There’s a lot in here which is Amazon-specific and there are a couple of things you’d have to change to deploy to SQL Server, but there’s a lot of useful information here.  I like that Sai shows how to use the Hadoop credential API instead of doing something silly like saving your password in plaintext.

Getting Started With Spark

Denny Lee announces a new Spark intro guide:

We are proud to introduce the Getting Started with Apache Spark on Databricks Guide. This step-by-step guide illustrates how to leverage the Databricks’ platform to work with Apache Spark. Our just-in-time data platform simplifies common challenges when working with Spark: data integration, real-time experimentation, and robust deployment of production applications.

Databricks provides a simple, just-in-time data platform designed for data analysts, data scientists, and engineers. Using Databricks, this step-by-step guide helps you solve real-world Data Sciences and Data Engineering scenarios with Apache Spark. It will help you familiarize yourself with the Spark UI, learn how to create Spark jobs, load data and work with Datasets, get familiar with Spark’s DataFrames and Datasets API, run machine learning algorithms, and understand the basic concepts behind Spark Streaming.

If you are at all interested in distributed databases, Spark is a must-learn.

Buffer Pools And Availability Groups

Joey D’Antoni answers the question: what happens to pages in the buffer pool when you have an AG failover?

Recently at SQL Saturday Philadelphia, we started discussing failover as it relates to mirroring and Always On Availability Groups. Specifically, we were wondering what would happen if you had a relatively busy readable secondary replica (which would have a lot of pages in the buffer pool on the secondary instance) and if those pages would be flushed from cache or anything like that. So I reached out to the product group, and Kevin Farlee from Microsoft was extremely helpful.

The answer is not what I was expecting.
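
If you want to check the behavior in your own environment, one option (my sketch, not something from Joey’s post) is to snapshot what each database holds in the buffer pool on the readable secondary before and after a failover and compare:

-- Rough sketch: count 8 KB buffer pool pages per database on the current instance.
-- Run on the readable secondary before and after failover to see how much cache survives.
SELECT
    DB_NAME(database_id) AS database_name,
    COUNT_BIG(*) AS cached_pages,
    COUNT_BIG(*) * 8 / 1024 AS cached_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY cached_pages DESC;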

Kill Remote Processes

Michael Bourgon has a Powershell script to kill remote processes:

Simple problem: we had to replace the config files for an app that hits the database, pointing it at a CNAME.

However, the file could be in use because the app was running. And the app only loads the file on startup. So we came up with this, which worked quite well. Note that we use WMI (see my other posts on it!) to grab the application info, then a WMI method to terminate the app.

Click through for the script.

Filtered Indexes For Uniqueness

Shane O’Neill answers one of my favorite interview questions:

I used to think that this would be a complex requirement, possibly requiring a TRIGGER or two to check the inserted value against whatever is already there; but there is a way to have this functionality and have it the way that SQL Server normally would enforce uniqueness on a column: by using a UNIQUE INDEX.

In case you’re thinking…

“Oh, a unique index doesn’t check what’s already there, is that it?”

I’m afraid that’s not the case.

This is one of my favorite uses of filtered indexes:  “limited” uniqueness.  In other words, I’m okay with an unlimited number of NULL values but all non-NULL values need to be unique.
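
A minimal sketch of the pattern, using a hypothetical table, looks like the following; the filter keeps NULLs out of the index, so uniqueness applies only to rows that actually have a value:

-- Hypothetical table: EmployeeNumber must be unique when present,
-- but any number of rows may leave it NULL.
CREATE TABLE dbo.Employees
(
    EmployeeID     INT IDENTITY(1,1) PRIMARY KEY,
    EmployeeNumber VARCHAR(20) NULL
);

-- The WHERE clause excludes NULLs from the index, so only non-NULL values
-- must be unique; any number of NULL rows insert without error.
CREATE UNIQUE NONCLUSTERED INDEX UX_Employees_EmployeeNumber
    ON dbo.Employees (EmployeeNumber)
    WHERE EmployeeNumber IS NOT NULL;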
