Press "Enter" to skip to content

Month: November 2018

Azure SQL Database Supports R Integration

David Smith notes that Azure SQL Database now has (in preview) support for R:

Azure SQL Database, the database-as-a-service based on Microsoft SQL Server, now offers R integration. (The service is currently in preview; details on how to sign up for the preview are provided in that link.) While you’ve been able to run R in SQL Server in the cloud since the release of SQL Server 2016 by running a virtual machine, Azure SQL Database is a fully-managed instance that doesn’t require you to set up and maintain the underlying infrastructure. You just choose the size and scale of the database you want to manage, and then connect to it like any other SQL Server instance. (If you want to learn how to set up an Azure SQL database, this Microsoft Learn module is a good place to start.)

Python and Java are not yet supported, but I’d imagine that they’ll be on the way too.
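
Under the covers this is the same external script mechanism that SQL Server Machine Learning Services uses, so a quick way to confirm R is wired up (a minimal sketch, assuming the preview is enabled on your database) looks something like this:

-- Quick smoke test: return the R runtime version as a result set
EXEC sp_execute_external_script
    @language = N'R',
    @script   = N'OutputDataSet <- data.frame(RVersion = R.version.string)'
WITH RESULT SETS ((RVersion NVARCHAR(200)));

If the preview is active, this returns a one-row result set with the version of the R runtime.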


Kafka And Handling Missing Topics

The folks at Redglue show what happens when you send a message to a Kafka broker on a non-existent topic:

Now let’s produce messages to a non-existent topic called redglue_nonexistent:

root@kafka1:~# kafka-console-producer --broker-list 127.0.0.1:9092 --topic redglue_nonexistent
I maybe don't exists
[2018-11-28 14:22:12,454] WARN [Producer clientId=console-producer] Error while fetching metadata with correlation id 1 : {redglue_nonexistent=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)

Obviously there is a WARNING saying that the topic doesn’t exist, but it allows you to “send” messages to that specific topic

Read on to see what happens.


Connection Failed With Error 772

Jack Vamvas investigates an error when trying to connect to SQL Server 2016 on Windows Server 2016:

Question: I’ve upgraded an application with a built-in Database API. When attempting to establish a SQL Server database connection, this error appears: Connection failed – SQL Server Error 772 – TCPIP Socket

Upon investigation, the application was using the native drivers to connect to a SQL Server 2016 instance on Windows Server 2016.

As part of the testing, I downloaded the ODBC 13.1 SQL Server drivers, independent of the application, and tested a DSN connection to the same SQL Server; it connected OK. I then created a DSN with the native drivers and the error reappeared.

What is going on? How can I fix this issue?

Read on for the solution and keep those drivers up to date.
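
If you’re troubleshooting something similar, it can help to compare what the working and failing clients negotiate. This isn’t from Jack’s post; it’s just a hedged diagnostic sketch against the standard DMVs, showing the driver, protocol version, and encryption for sessions that do manage to connect:

-- Driver and connection details for current user sessions
SELECT s.session_id,
       s.host_name,
       s.program_name,
       s.client_interface_name,   -- e.g. ODBC, .Net SqlClient Data Provider
       s.client_version,
       c.net_transport,
       c.protocol_version,
       c.encrypt_option,
       c.auth_scheme
FROM sys.dm_exec_sessions AS s
JOIN sys.dm_exec_connections AS c
    ON c.session_id = s.session_id
WHERE s.is_user_process = 1;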


Implementing A Change Tracking Solution In SQL Server

Jon Shaulis shows us how we can use Change Tracking to detect when rows get modified:

This allows you to detect changes in a lightweight manner via the Transaction Log in SQL Server in combination with T-SQL. Change Data Capture is more about auditing or creating a historical view and Temporal Tables are the next step up from there which became available in 2016 versions of SQL Server. Change Tracking is primarily used for finding only things that have changed. Not necessarily why, how, or who changed it, but what has changed and what it is now.

So why would you want this technology implemented? I find this technology is best suited for tasks where I want as light of a footprint as possible and I want to bring over incremental changes.

Click through for a long and complete walkthrough. If you’re thinking of implementing change tracking, this is a good link to check out.
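
As a rough sketch of the moving parts (the database, table, and key column names here are placeholders, not Jon’s examples), enabling and consuming Change Tracking looks something like this:

-- Enable Change Tracking at the database level, then per table
ALTER DATABASE SalesDemo
    SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);

ALTER TABLE dbo.Orders
    ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = ON);

-- Pull everything that changed since the synchronization version you stored last time
DECLARE @last_sync_version BIGINT = 0;

SELECT ct.OrderID,                 -- primary key column of the tracked table
       ct.SYS_CHANGE_OPERATION,    -- I, U, or D
       ct.SYS_CHANGE_VERSION
FROM CHANGETABLE(CHANGES dbo.Orders, @last_sync_version) AS ct;

-- Persist this value and use it as @last_sync_version on the next incremental load
SELECT CHANGE_TRACKING_CURRENT_VERSION();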


Adding Constraints In The CREATE TABLE Statement

Steve Jones shows how you can add constraints in your CREATE TABLE statement:

A good habit to get into is to explicitly name your constraints. I try to do this when I create tables to be sure that a) I have a PK and b) it’s named the same for all environments.

I can create a PK inline, with a simple table like this:

CREATE TABLE Batting
   (
        BattingKey INT NOT NULL CONSTRAINT BattingPK PRIMARY KEY
        , PlayerID INT
        , BattingDate DATETIME
        , AB TINYINT
        , H TINYINT
        , HR TINYINT
   )
;

This gives a primary key, named “BattingPK”, that I can easily see inline with the column.

Steve also gives an alternative formulation which works well for composite keys.  You can also add constraints after the CREATE TABLE statement, but if you are creating temp tables and want to take advantage of temp table reuse, the constraints have to be created as part of the table (and cannot have names).  For additional fun, since SQL Server 2014 you can create indexes as part of the CREATE TABLE statement as well; that was needed for memory-optimized tables because, back in that version, you couldn’t add new indexes after the fact.
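
For reference, here is what a named composite key declared at the table level looks like, along with an inline index; the table and index names below are made up for illustration rather than taken from Steve’s post:

CREATE TABLE BattingByGame
   (
        PlayerID INT NOT NULL
        , BattingDate DATETIME NOT NULL
        , AB TINYINT
        , H TINYINT
        -- table-level constraint, so the key can span multiple columns
        , CONSTRAINT BattingByGamePK PRIMARY KEY CLUSTERED (PlayerID, BattingDate)
        -- inline index syntax, available since SQL Server 2014
        , INDEX BattingByGameDateIX NONCLUSTERED (BattingDate)
   )
;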


Resumable Online Index Creation In SQL Server 2019

Monica Rathbun tries out resumable online index creation in SQL Server 2019:

SQL Server 2019 brings a very exciting new feature that is long overdue. Resumable online index create is one of my favorite new things. This, paired with the Resumable Index Rebuilds introduced with SQL Server 2017, really gives database administrators much more control over index processes.

Have you ever started to build a new index on a very large table, only to have users call and complain that their process is hung, not completing, or the system is slow? That’s when you realize you’re the cause, because you tried to sneak in a new index. I have many times, because creating a new index can impact performance and can be a problematic process for users when you have little or no downtime window available. When you kill the create process, it rolls back, requiring you to start from the beginning the next time. With resumable online index creation you now have the ability to pause and restart the build at the point it was paused. You can see where this can be very handy.

Click through for a demo and discussion on what options are available.
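
The basic pattern is below; the table, column, and index names are placeholders, but the options are the documented RESUMABLE syntax:

-- Build the index online and make it resumable, capping any single run at 60 minutes
CREATE INDEX IX_LargeTable_CustomerID
    ON dbo.LargeTable (CustomerID)
    WITH (ONLINE = ON, RESUMABLE = ON, MAX_DURATION = 60 MINUTES);

-- From another session, pause the running build...
ALTER INDEX IX_LargeTable_CustomerID ON dbo.LargeTable PAUSE;

-- ...then pick it up again later (or abandon it entirely with ABORT)
ALTER INDEX IX_LargeTable_CustomerID ON dbo.LargeTable RESUME;

-- Check progress and state of paused or running resumable operations
SELECT name, state_desc, percent_complete
FROM sys.index_resumable_operations;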


A Case When CASE Isn’t The Right Case

Adrian Buckman notes differences in the two ways of using CASE statements:

It looks so clean compared to the first example! But it wasn’t until I tested the second method out that I realised that the behaviour of the two CASE expressions is different, as outlined in Books Online:

The CASE expression has two formats:
The simple CASE expression compares an expression to a set of simple expressions to determine the result.
The searched CASE expression evaluates a set of Boolean expressions to determine the result.
Both formats support an optional ELSE argument.

I put together some examples to illustrate the difference when evaluating NULL using the two CASE expressions. The query returns the column ‘Databasename’ from the derived list values clause; example 1 has a NULL value and example 2 has a value of ‘SQLUndercover’, as you will see below:

Adrian looks into a scenario in which the two CASE expressions return different results, and digs into execution plans to find out why.
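
The short version of the pitfall: the simple form turns each WHEN into an equality comparison, so WHEN NULL can never match. A minimal repro (the table variable and values below just mirror the scenario described above; they are not Adrian’s exact script):

DECLARE @Examples TABLE (Databasename NVARCHAR(128));
INSERT INTO @Examples VALUES (NULL), (N'SQLUndercover');

SELECT Databasename,
       -- simple CASE: effectively Databasename = NULL, which is never true
       CASE Databasename WHEN NULL THEN N'No database' ELSE Databasename END AS SimpleCase,
       -- searched CASE: IS NULL evaluates the way you would expect
       CASE WHEN Databasename IS NULL THEN N'No database' ELSE Databasename END AS SearchedCase
FROM @Examples;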


Quick Search Within Visual Studio Code

Chrissy LeMaire has an extension which allows for quick searches of highlighted text on a few sites:

We recently released a VS Code extension that lets you highlight terms and search dbatools.io, Microsoft Docs, Google, StackOverflow, DuckDuckGo, Technet, or Thwack right from your code! It’s called dbatools simple search and you can find it in the Extension Marketplace.

I’ve also confirmed that it does work with Azure Data Studio; you just need to download the vsix extension from the Extension Marketplace and install it to get context menu search support.


Using Source Control For Those Database Queries

Caitlin Hudon shares a few SQL Truths and explains why tracking database queries in source control is important:

If I could teach SQL to analysts who plan to work in industry data science, I’d start by sharing a few SQL Truths I’ve learned, and why I recommend tracking SQL queries in git. Here goes:

  1. You will *always* need that query again
  2. Queries are living artifacts that change over time
  3. If it’s useful to you, it’s useful to others (and vice versa)

Focusing on these points has led to my continuous adoption of a query library — a git repository for saving and sharing commonly (and uncommonly) used queries, all while tracking any changes made to these queries over time.

This is separate from keeping database objects (like table or procedure definitions) in source control.


Reading Changes From The Transaction Log

Marek Masko shows us how to read through the transaction log to understand a data change operation:

Another solution that can be used to track changes executed against your database is to read the Transaction Log file or Transaction Log backups. The transaction log file (and its backups) contains every transaction executed against your database. The only requirement for this is to have the database in the Full recovery model. In the Simple recovery model, every committed transaction can be very quickly overwritten by another one.

Also, this is something that you get for free without the need to enable and configure any additional SQL Server functionality. Of course, besides the database backups, but you already do database backups, right?

To test this approach, you have to make some preparations. You need to set the database’s recovery model to Full. As a first step, check the database properties to ensure that it is appropriately configured. As a second step, you need to create a full database backup. From this point, the database is in the Full recovery model and every transaction is fully logged. Thanks to this, you are able to read logged transactions from the Transaction Log file. The same applies to reading from a Transaction Log backup; to do this, you need to create such a backup after you execute the database schema changes.

I think I’ve only done this once or twice, but it’s a good technique to know about.
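
The usual tools here are the undocumented fn_dblog (for the active log) and fn_dump_dblog (for log backups). As a hedged sketch rather than Marek’s exact queries, something like this pulls the interesting operations out of the current log:

-- fn_dblog is undocumented and unsupported, so treat it as a read-only diagnostic aid
SELECT [Current LSN],
       Operation,               -- e.g. LOP_INSERT_ROWS, LOP_MODIFY_ROW, LOP_DELETE_ROWS
       [Transaction ID],
       AllocUnitName            -- which table or index the change touched
FROM fn_dblog(NULL, NULL)       -- NULL, NULL = no LSN range filter
WHERE Operation IN (N'LOP_INSERT_ROWS', N'LOP_MODIFY_ROW', N'LOP_DELETE_ROWS');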
