Press "Enter" to skip to content

Month: September 2018

Batch Mode On Rowstore

Kevin Farlee announces another query processing improvement in SQL Server 2019:

In the SQL Server 2019 preview, we are further expanding query processing capabilities with several new features under the Intelligent Query Processing (QP) feature family.  In this blog post we’ll discuss one of these Intelligent QP features that is now available in public preview, batch mode on rowstore. This feature unlocks the advantages of batch mode execution in cases where there is no columnstore participating in the query.

Batch mode is a different execution mode primarily targeted at analytics queries, which are characterized as scanning many rows and doing significant aggregations, sorts, and group-by operations across these rows.  Until now, batch mode has been reserved for queries which involve columnstore indexes.

Performing scans and calculations using batches of ~900 rows at a time rather than row by row is much more efficient for analytic-type queries.  For queries that can take advantage of it, batch mode can easily make queries execute many times faster than the same query against the same data in row mode.

Until now, the workaround was to create an empty filtered columnstore index on the rowstore table.  The native feature is more architecturally pleasing and means one less hack to explain.
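
For reference, the trick looked something like the sketch below (table, column, and index names here are made up), whereas in 2019 the feature lights up once the database is at compatibility level 150:

    -- Pre-2019 workaround: an empty filtered columnstore index on the rowstore table.
    -- The contradictory WHERE clause guarantees the index never contains a row,
    -- but its mere presence lets the optimizer consider batch mode plans.
    CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_BatchModeTrick
        ON dbo.FactSales (SaleID)
        WHERE SaleID = -1 AND SaleID = -2;

    -- SQL Server 2019 preview: batch mode on rowstore requires compatibility level 150.
    ALTER DATABASE CURRENT SET COMPATIBILITY_LEVEL = 150;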

Comments closed

The Evolution Of Polybase

Asad Khan gets into improvements in SQL Server 2019:

  • Break down data silos and deliver one view across all of your data using data virtualization. Starting in SQL Server 2016, PolyBase has enabled you to run a T-SQL query inside SQL Server to pull data from your data lake and return it in a structured format—all without moving or copying the data. Now in SQL Server 2019, we’re expanding that concept of data virtualization to additional data sources, including Oracle, Teradata, MongoDB, PostgreSQL, and others. Using the new PolyBase, you can break down data silos and easily combine data from many sources using virtualization to avoid the time, effort, security risks and duplicate data created by data movement and replication. New elastically scalable “data pools” and “compute pools” make querying virtualized data lightning fast by caching data and distributing query execution across many instances of SQL Server.

Just in time for me to scramble to update Polybase slides for Conference Season…
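
If you haven’t worked with PolyBase external tables before, the 2019 pattern looks roughly like the sketch below. Every object name here is a placeholder, and the exact LOCATION formats are worth double-checking against the CTP documentation:

    -- All names below are hypothetical; assumes a database master key already exists.
    CREATE DATABASE SCOPED CREDENTIAL OracleCredential
        WITH IDENTITY = 'oracle_read_user', SECRET = '<password>';

    CREATE EXTERNAL DATA SOURCE OracleSales
        WITH (LOCATION = 'oracle://oracle-host:1521', CREDENTIAL = OracleCredential);

    CREATE EXTERNAL TABLE dbo.OracleOrders
    (
        order_id   INT,
        order_date DATE,
        amount     DECIMAL(18, 2)
    )
    WITH (LOCATION = '[XE].[SALES].[ORDERS]', DATA_SOURCE = OracleSales);

    -- Queried like any local table, with no data movement required.
    SELECT order_date, SUM(amount) AS total_amount
    FROM dbo.OracleOrders
    GROUP BY order_date;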

Comments closed

What’s In SQL Server 2019 CTP 2.0?

Aaron Bertrand gives us the highlights:

  • Certificate Management in Config Manager: View and validate all of your certificates from a single interface, and manage and deploy certificate changes across all of the replicas in an Availability Group or all of the nodes in a Failover Cluster Instance.

  • Built-in data classification: A new ADD SENSITIVITY CLASSIFICATION statement helps you identify and automatically audit sensitive data, a huge step up from the previous SSMS wizard (which just used extended properties).
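
As a quick sketch of the new statement (table and column names here are hypothetical):

    -- Hypothetical table and columns; LABEL and INFORMATION_TYPE are free-form metadata.
    ADD SENSITIVITY CLASSIFICATION TO
        dbo.Customers.Email,
        dbo.Customers.PhoneNumber
    WITH (LABEL = 'Confidential', INFORMATION_TYPE = 'Contact Info');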

Aaron also digs into the engine a bit:

APPROX_COUNT_DISTINCT

This new aggregate function is designed for data warehouse scenarios, and is an equivalent for COUNT(DISTINCT()). Instead of performing expensive distinct sort operations to determine actual counts, it relies instead on statistics to get something relatively accurate. You should find that the margin of error is within 2% of the precise count, 97% of the time, which is usually fine for high-level analytics, values that populate a dashboard, or quick estimates.

On my system I created a table with integer columns ranging from 100 to 1,000,000 unique values, and string columns ranging from 100 to 100,000 unique values. There were no indexes other than a clustered primary key on the leading integer column. Here are the results of COUNT(DISTINCT()) vs. APPROX_COUNT_DISTINCT() against those columns, so you can see where it is off by a bit (but always well within 2%):
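
Aaron’s results table isn’t reproduced here, but the comparison boils down to running the two aggregates side by side; a minimal sketch against a hypothetical table:

    -- dbo.Orders and CustomerID are hypothetical stand-ins for the test table.
    SELECT
        COUNT(DISTINCT CustomerID)        AS exact_distinct,
        APPROX_COUNT_DISTINCT(CustomerID) AS approx_distinct
    FROM dbo.Orders;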

By the way, APPROX_COUNT_DISTINCT() is a really good idea, and I’m glad it’s here.

Comments closed

SQL Operations Studio Is Now Azure Data Studio

David Hiltenbrand notes a name change:

Will SQL Operations Studio upgrade automatically to Azure Data Studio? 

NO! Although they’re effectively the same thing currently, you do need to install Azure Data Studio separately from your existing sqlops install. You can install the new Azure Data Studio after downloading it from here: https://aka.ms/getazuredatastudio. The docs also include a helpful section, Move User Settings, that will help you migrate any custom settings you don’t want to lose from your sqlops configuration.

Personally, I’m not a big fan of the name change.  But Grant Fritchey clues us in on the reason behind it:

The core concept here is to have a development tool that gives you a common framework for working with data, not just SQL data, but CosmosDB and others. Further, a tool that you can run where you work. Do you have a Mac? Cool. Use Azure Data Studio. Running Linux? Cool. Use Azure Data Studio. Still on Windows with me? We also get Azure Data Studio.

I do get the benefit of a tool which can hit different data sources, including something which is not SQL-based.  But the “Azure” in the name throws me.  I’ll still connect to my on-prem and AWS-based SQL Servers with it though.

Comments closed

R From The Year 2000

Colin Gillespie takes us down memory lane with some old, old code:

Last week I spent some time reminiscing about my PhD and looking through some old R code. This trip down memory lane led to some of my old R scripts that amazingly still run. My R scripts were fairly simple and just created a few graphs. However now that I’ve been programming in R for a while, with hindsight (and also things have changed), my original R code could be improved.

I wrote this code around April 2000. To put this into perspective,

  • R mailing list was started in 1997
  • R version 1.0 was released on Feb 29, 2000
  • The initial release of Git was in 2005
  • Twitter started in 2006
  • StackOverflow was launched in 2008

Basically, sharing code and getting help was much more tricky than today – so cut me some slack!

It’s a good sign when an arbitrary task becomes easier to understand as a language evolves.  And I’m glad they dumped the underscore assignment operator.

Comments closed

Reticulate: Python-R Interop

Adnan Fiaz walks us through an example of using the reticulate library to call Python from R:

So what exactly does reticulate do? Its goal is to facilitate interoperability between Python and R. It does this by embedding a Python session within the R session which enables you to call Python functionality from within R. I’m not going to go into the nitty gritty of how the package works here; RStudio have done a great job in providing some excellent documentation and a webinar. Instead I’ll show a few examples of the main functionality.

Just like R, the House of Python was built upon packages. Except in Python you don’t load functionality from a package through a call to library() but instead you import a module. reticulate mimics this behaviour and opens up all the goodness from the module that is imported.

This is a good intro to a package which is already useful but I think will be even better over time as R & Python interoperability becomes the norm.  H/T R-Bloggers

Comments closed

Working With Data Frames In R

Dave Mason has a couple of blog posts on data frames.  First, the basics:

Conceptually, a dataset is a grid or table of data elements. It consists of rows, which we specifically call “observations”, and of columns, which are called “variables”. (Observations may also be referred to as “instances”. Variables may also be referred to as “properties”.) The data frame in R is designed for data sets. As the R documentation tells us, data frames are “used as the fundamental data structure by most of R’s modeling software”.

The function we’ll be working with primarily in this post is the data.frame() function. I have read that in R programming, creating data frames with this function is rather uncommon. Most of the time, data frames are created by invoking other functions that read data from an external data source (like a file or a database table) with a data frame as the return type. But for simplicity, data.frame() will serve our purposes.

Then, subsetting data frames:

Adding columns to a data frame is easy–easy compared to adding rows. We’ll get to that. To add a column, first create a vector. The class doesn’t matter. But the number of elements does–it has to match the number of observations in the data frame. Now that we have our vector, here are some options to add it as a new column to a data frame: use the $ shortcut, use double brackets with the new column name, bind the vector to the dataframe with cbind().

The data frame (or tibble, if using the tidyverse version) is probably the single most important data type in R for getting work done.

Comments closed

Writing Audit Logs To Azure Event Hubs

Ronit Reger announces that Azure SQL Database auditing logs can now go to Azure Log Analytics or Azure Event Hubs:

Azure Log Analytics plays a central role in monitoring and management of your Azure environment. It enables collecting telemetry and other data from a variety of sources across Azure, and provides a query language and analytics engine for deep analysis and insights on the operation of applications and resources. For more information on the Log Analytics platform, see What is Azure Log Analytics.

With native support for saving SQL audit logs directly to Log Analytics, log data from all of your database resources can be gathered and stored in a single central location. The logs can now be analyzed using the rich analysis tools provided by the platform, which can provide deeper visibility and advanced cross-resource analytics.

In addition, SQL Server audit logs (from on-premises SQL Servers or SQL Servers on a VM) can also be collected in Log Analytics via OMS agent integration, as described in this article. Thus, you can manage and analyze all of your database audit logs, whether from the cloud or on-premises, in a single central location using the power of Azure Log Analytics.

This looks useful.

Comments closed

Keeping Headers Visible When Scrolling In SSRS

Ginger Keys shows us how to keep tablix headers visible when going through a SQL Server Reporting Services report:

When scrolling through the pages of a SQL Server Reporting Services (SSRS) report, it is very useful to be able to see the column headers throughout the report.  So let’s say you have successfully created an SSRS report using Visual Studio, and everything looks wonderful…except the headers on your columns disappear when you scroll down the page.  You have even set the properties of your Tablix to “Keep Headers Visible While Scrolling”, but it still doesn’t work!  Trying to keep the column headings visible while you scroll down the page of your SSRS report can be a frustrating endeavor.  The following steps will demonstrate how to make it work.

I always thought “Keep Headers Visible While Scrolling” should have been renamed to “Don’t Do Anything About Headers But Let Me Think You Did Something So I Can Look Like I Don’t Know What I’m Talking About When I Tell Customers That The Report Headers Should Stay Visible While Scrolling” but I guess that might have been too long of a property description.

Comments closed

Switching Object Schemas

Steve Jones shows you a quick way of switching a database object’s schema:

I haven’t had the need to move an object from one schema to another in years. Really since SQL Server 2000. I wrote about deleting a user that owns a schema recently, but that’s often a first step. The next thing I might need to do is actually move objects from that schema to a new one.

I actually ran across this command when I was looking at how to move the schema to a new user. There’s actually a parameter for ALTER SCHEMA that will move objects.
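
The clause Steve is referring to is TRANSFER; a minimal sketch with made-up schema and object names:

    -- Move dbo.SalesHistory into a (new) Archive schema.
    CREATE SCHEMA Archive;
    GO
    ALTER SCHEMA Archive TRANSFER dbo.SalesHistory;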

This doesn’t pop up too often for me at least, but it’s good to remember if you’re using schemas as a method of categorizing data.

Comments closed