Press "Enter" to skip to content

Author: Kevin Feasel

Indirect Checkpoint And Non-Yielding Scheduler Problems

Parikshit Savjani has a post describing an issue you might experience with indirect checkpoints in SQL Server 2012 and later:

One of the scenarios where a skewed distribution of dirty pages in the DPList is common is tempdb. Starting with SQL Server 2016, indirect checkpoint is turned ON by default, with target_recovery_time set to 60 for the model database. Since the tempdb database is derived from model during startup, it inherits this property from the model database and has indirect checkpoint enabled by default. As a result of the skewed DPList distribution in tempdb, depending on the workload, you may experience excessive spinlock contention and exponential backoffs on the DPList in tempdb. In scenarios where the DPList has grown very long (20k-30k pages), the recovery writer may produce a non-yielding scheduler dump as it iterates through the long list, tries to acquire the spinlock, and waits with exponential backoff when the spinlock is held by IOC routines removing pages.

This is worth taking a close read.
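
As a point of reference (and not a recommendation from the post), you can see which databases are running with indirect checkpoint, and change the setting, with plain T-SQL:

    -- A target_recovery_time_in_seconds above 0 means indirect checkpoint is on.
    SELECT name, target_recovery_time_in_seconds
    FROM sys.databases;

    -- Illustration only: setting it back to 0 reverts tempdb to automatic checkpoints.
    ALTER DATABASE [tempdb] SET TARGET_RECOVERY_TIME = 0 SECONDS;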

Bacpacing In Azure

Derik Hammer shows how to use a bacpac file to deploy an existing database to Azure SQL Database:

The recommended method for working with Azure is always PowerShell. The Azure portal and SSMS are tools there for your convenience but they do not scale well. If you have multiple databases to migrate, potentially from multiple servers, using PowerShell will be much more efficient. Scripting your Azure work makes it repeatable and works towards the Infrastructure as Code concept.

In this demonstration, the following steps will be used.

  1. Export the bacpac file to a local directory with sqlpackage.exe.

  2. Copy the bacpac to Azure Blob Storage with AzCopy.exe.

  3. Use the PowerShell AzureRM module and cmdlets to create an Azure SQL Database from the bacpac file.

Derik shows the point-and-click way as well as the PowerShell way.
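
As a rough sketch of the PowerShell path (every name here is a placeholder: server, database, storage account, credentials, and installation paths; the import cmdlet comes from the AzureRM module of that era):

    # Placeholder credentials and storage key.
    $storageKey    = '<storage account key>'
    $adminPassword = Read-Host -AsSecureString -Prompt 'Azure SQL admin password'

    # 1. Export the bacpac locally with sqlpackage.exe.
    & 'C:\Program Files\Microsoft SQL Server\140\DAC\bin\sqlpackage.exe' /Action:Export `
        /SourceServerName:localhost /SourceDatabaseName:AdventureWorks `
        /TargetFile:C:\Temp\AdventureWorks.bacpac

    # 2. Copy the bacpac up to Azure Blob Storage with AzCopy.exe.
    & 'C:\Program Files (x86)\Microsoft SDKs\Azure\AzCopy\AzCopy.exe' `
        /Source:C:\Temp /Dest:https://mystorage.blob.core.windows.net/bacpacs `
        /DestKey:$storageKey /Pattern:AdventureWorks.bacpac

    # 3. Create the Azure SQL Database from the bacpac with the AzureRM cmdlets.
    Login-AzureRmAccount
    New-AzureRmSqlDatabaseImport -ResourceGroupName 'MyResourceGroup' -ServerName 'myazuresqlserver' `
        -DatabaseName 'AdventureWorks' -StorageKeyType 'StorageAccessKey' -StorageKey $storageKey `
        -StorageUri 'https://mystorage.blob.core.windows.net/bacpacs/AdventureWorks.bacpac' `
        -AdministratorLogin 'sqladmin' -AdministratorLoginPassword $adminPassword `
        -Edition 'Standard' -ServiceObjectiveName 'S0' -DatabaseMaxSizeBytes 5GB
    # Poll the returned operation with Get-AzureRmSqlDatabaseImportExportStatus to watch progress.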

Remote Installation Of PowerShell On SQL Server 2017

Tracy Boggiano has a script to install the SQL Server 2017 PowerShell module via PowerShell remoting:

Once the file is copied to the server locally, you can run the below script from your local machine to install SQL Server PowerShell 2017 for all users on the server. The Get-CMSHosts function can be found on my blog here. You will need to download PsExec from here, extract it to a location on your local computer, and provide the path.

NOTE: This is a hotfix that requires a RESTART. Be careful in PRODUCTION.

Click through for the script.
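
The general shape of the remote install (with a made-up installer name, placeholder server and paths, and obviously not Tracy's exact script) looks something like this:

    # Placeholder server, paths, and installer name; see Tracy's post for the real script.
    $server = 'SQLSERVER01'

    # Copy the downloaded installer to the remote server over the admin share.
    Copy-Item -Path 'C:\Installers\SqlServerPowerShellModule.msi' -Destination "\\$server\C$\Temp\"

    # Run the installer on the remote box as SYSTEM via PsExec.
    & 'C:\Tools\PsExec.exe' "\\$server" -s msiexec.exe /i 'C:\Temp\SqlServerPowerShellModule.msi' /quiet /norestart

    # It behaves like a hotfix, so schedule the restart deliberately.
    Restart-Computer -ComputerName $server -Confirm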

Polybase Design Patterns On Azure SQL DW

Simon Whiteley continues his Polybase on Azure SQL Data Warehouse series.  First, he covers data loading patterns:

That’s enough about data loading for now, there’s another major use case for Polybase that we haven’t yet discussed. Many data processing solutions have a huge, unwieldy overnight batch job that performs aggregates, lookups, analytics and various other calculations.

However, it is often the case that this is not timely enough for many business requirements. This is where Polybase can help.

If we have an External Table over the newest files, this will read these new records at query time. We can write a view that combines the External Table with our batch-produced table. This will obviously go a little slower than usual, given it has to read data from flat files each time; however, the results returned will be up to date.
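
A minimal sketch of that pattern, with made-up table and column names, and assuming the external table over the new flat files already exists:

    -- dbo.FactSales is the batch-produced table; ext.FactSalesNew is a Polybase
    -- external table over the newest flat files.
    CREATE VIEW dbo.vFactSalesCurrent
    AS
    SELECT SaleDate, CustomerId, Amount
    FROM dbo.FactSales
    UNION ALL
    SELECT SaleDate, CustomerId, Amount
    FROM ext.FactSalesNew;

Queries against the view pick up newly landed rows immediately, at the cost of reading them from flat files each time.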

Simon then covers the Create Table As Select statement:

In order to utilise SQLDW effectively, we write SQL for our transformations, rather than relying on external tools such as SSIS. This ensures the work is being done by our compute nodes and, therefore, can be scaled up to increase performance.

General best practice, therefore, would be to write stored procedures for each of the data movements we want to occur. This allows us to add in auditing, logging, etc. But what we’re interested in here is the core data movement itself.

Writing a traditional INSERT statement isn’t the fastest way to get data into a table. There is a special syntax which creates a new table and inserts into it, and which is automatically configured for optimal bulk loading: this is the CTAS, or “Create Table as Select,” statement.
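
For illustration (table name, distribution column, and source are placeholders), a CTAS in Azure SQL Data Warehouse looks like this:

    -- CTAS creates and bulk loads the new table in a single operation.
    CREATE TABLE dbo.FactSales_New
    WITH
    (
        DISTRIBUTION = HASH(CustomerId),
        CLUSTERED COLUMNSTORE INDEX
    )
    AS
    SELECT SaleDate, CustomerId, Amount
    FROM ext.FactSalesNew;

The WITH clause carries the distribution and index choices, which is what lets the compute nodes do the load in parallel.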

This is a pair of interesting posts from Simon.

Self-Analysis Of SQL Server Dump Files

Arun Sirpal walks through the SQL Server Diagnostics preview:

Notice the region to upload – if you are using a work machine, I would suggest getting authorisation. The great thing here is that this is GDPR compliant.

Once ready, hit the upload button; it goes through three phases: upload, analysis, and a recommendation.

It sends your dump files to an external service, which is important enough to point out.  If you want more details on the product, Rony Chatterjee has a FAQ.

New T-SQL Functions

Dennes Torres shows off four T-SQL functions coming in SQL Server 2017:

Translate

Translate does the work of several replace functions, simplifying some queries.

The function is called ‘Translate’ because of its main objective: transforming one kind of information into another by doing a bunch of replaces.

For example, GeoJSON and WKT are two different formats for coordinates. In GeoJSON, a coordinate is represented using the format ‘[137.4, 72.3]’, while in WKT a point is represented using the format ‘(137.4 72.3)’.

We would need several ‘Replace’s to transform GeoJSON format into WKT format and the reverse. The ‘Translate’ function can do this easily.
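
A quick example of the GeoJSON-to-WKT case (note that TRANSLATE works character for character, so the comma becomes a space and leaves a double space behind):

    -- Each character in the second argument is replaced by the character in the
    -- same position of the third: '[' -> '(', ',' -> ' ', ']' -> ')'.
    SELECT TRANSLATE('[137.4, 72.3]', '[,]', '( )');
    -- Returns '(137.4  72.3)'; one REPLACE tidies up the double space left by the comma.
    SELECT REPLACE(TRANSLATE('[137.4, 72.3]', '[,]', '( )'), '  ', ' ');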

I knew the first three, but Translate sneaked right past me.

Real-Time Streaming ETL With Kafka Streams

Yeva Byzek has a tutorial using Kafka and Kafka Streams to perform real-time ETL:

Let’s consider an application that does some real-time stateful stream processing with the Kafka Streams API. We’ll run through a specific example of the end-to-end reference architecture and show you how to:

  • Run a Kafka source connector to read data from another system (a SQLite3 database), then modify the data in-flight using Single Message Transforms (SMTs) before writing it to the Kafka cluster

  • Process and enrich the data from a Java application using the Kafka Streams API (e.g. count and sum)

  • Run a Kafka sink connector to write data from the Kafka cluster to another system (AWS S3)
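
As a very rough sketch of the Kafka Streams step in the list above (topic names, key and value types, and configuration are all placeholder choices, not code from the post), a count-and-sum topology might look like:

    import java.util.Properties;

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.Consumed;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.Produced;

    public class CountAndSum {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "count-and-sum-demo");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

            StreamsBuilder builder = new StreamsBuilder();

            // Order amounts keyed by customer id, read from a placeholder topic.
            KStream<String, Long> amounts =
                    builder.stream("order-amounts", Consumed.with(Serdes.String(), Serdes.Long()));

            // Count of orders per customer, written to an output topic.
            amounts.groupByKey().count().toStream()
                   .to("order-counts", Produced.with(Serdes.String(), Serdes.Long()));

            // Sum of order amounts per customer, written to another output topic.
            amounts.groupByKey().reduce(Long::sum).toStream()
                   .to("order-sums", Produced.with(Serdes.String(), Serdes.Long()));

            new KafkaStreams(builder.build(), props).start();
        }
    }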

Read the whole thing.

PoshBot

Warren Frame shows off ChatOps with PoshBot:

We’re going to cover the basics to get up and running with PoshBot:

  • Create a Slack bot
  • Create a PoshBot configuration
  • Run PoshBot as a service
  • Write a PoshBot plugin
  • Use PoshBot

This might seem like a lot of work, but the configuration and service are a one-time thing – writing plugins is just like writing PowerShell functions and modules!
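
As a rough sketch of what the configuration and a plugin look like (the Slack token, bot admin handle, paths, and the example command are all placeholders; Warren's post has the real walkthrough):

    # Minimal PoshBot configuration pointing at a Slack bot token.
    Import-Module PoshBot
    $backend = New-PoshBotSlackBackend -Configuration @{ Name = 'SlackBackend'; Token = 'xoxb-your-token' }
    $config  = New-PoshBotConfiguration -BotAdmins @('yourslackhandle') -CommandPrefix '!' `
                   -PluginDirectories @('C:\PoshBot\Plugins')
    New-PoshBotInstance -Configuration $config -Backend $backend | Start-PoshBot

    # A plugin is an ordinary PowerShell module; this hypothetical function becomes a bot command.
    function Get-DiskFree {
        [PoshBot.BotCommand(CommandName = 'diskfree')]
        [cmdletbinding()]
        param(
            [parameter(Mandatory)]
            [string]$ComputerName
        )
        Get-CimInstance -ClassName Win32_LogicalDisk -ComputerName $ComputerName |
            Select-Object -Property DeviceID, @{ N = 'FreeGB'; E = { [math]::Round($_.FreeSpace / 1GB, 1) } }
    }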

One of my mad scientist co-workers has put together a similar bot and it tells our DBA team how servers are doing.  It’s quite useful for system reconnaissance, particularly when all you have is a phone and a trouble ticket.

Basics Of Neural Nets

Leila Etaati has a new series on neural nets in R:

In a neural network, we have some hidden nodes that do the main job! They find the best value for the output using functions we call “activation functions.” For instance, in the picture below, node C is a hidden node that takes the values from nodes A and B. As you can see, the weight (the better path) related to node B is shown as a thick line; that means node B may lead to better results, so node C gets its input values from node B, not node A.

If you have time, also check out the linked YouTube videos.

Envisioning Neural Nets As Org Charts

Maiia Bakhova describes the layout of a neural net as similar to a chain of command within an organization:

We can observe a lot in common with a corporate chain of command. As we see, middle managers are the hidden layers which do the bulk of the job. We have a similar information flow and processing, which is analogous to forward propagation and backward propagation.
What is left now is to explain that dealing with a sigmoid function at each node is too costly, so it is mostly reserved for the CEO level.

That’s a metaphor I hadn’t heard before.
