Press "Enter" to skip to content

Author: Kevin Feasel

New SSMS And SSDT

In this blast of new things, Andy Leonard notes that new versions of Management Studio and Data Tools are available:

It’s Release Day! 🙂

New versions of SQL Server Data Tools (SSDT) are available here. SSDT 16.5 and 17.0 (RC1) are available. Also available are Data-Tier Application Framework (DacFx) versions 16.5 and 17.0 (RC1).

New versions of SQL Server Management Studio (SSMS) are available here. SSMS 16.5 and 17.0 (RC1) are both available.

It’s going to be a busy couple of days for some people…

Comments closed

New Features In 2016 SP1

Niko Neugebauer looks at new functionality released as part of SQL Server 2016 SP1:

CREATE OR ALTER. I almost cried when I found out that it was implemented. I was asking, begging, threatening, crying for years to get this into SQL Server. Now, I can finally have future project deployments for those who are not using SSDT running with far fewer problems.
Now we can modify and deploy objects like Stored Procedures, Triggers, User-Defined Functions, and Views without any fear. Just “Make it so!”

Yeah, they’re not as big as “every Enterprise Edition development feature over the past decade is now available to anybody” but there are some nice additions here.
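If you haven't seen the syntax yet, here is a minimal sketch of CREATE OR ALTER against a hypothetical procedure. It creates the object if it does not exist and alters it in place if it does:

-- Works for stored procedures, triggers, functions, and views as of SQL Server 2016 SP1.
-- dbo.GetActiveCustomers and dbo.Customers are hypothetical names used for illustration.
CREATE OR ALTER PROCEDURE dbo.GetActiveCustomers
AS
BEGIN
    SET NOCOUNT ON;

    SELECT CustomerID, CustomerName
    FROM dbo.Customers
    WHERE IsActive = 1;
END;
GO

No more IF EXISTS ... DROP boilerplate in front of every object definition.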

Comments closed

Ola’s Jobs On Linux

Joey D’Antoni gives a few pointers around SQL Server on Linux, including running Ola Hallengren’s scripts on a system without a SQL Server Agent:

Aside from a couple of DMVs that show you Linux-specific performance information, everything in SQL Server on Linux is the same. Some of the HA and DR functionality is not complete, and the SQL Agent is not done; however, you can use cron (and if you’re familiar with Linux, you should learn about cron—I’ll have another post on that next week).

These are some good notes, so check it out.

Comments closed

Installing SQL Server On Ubuntu

James Anderson installs SQL Server on Ubuntu:

I’m installing Ubuntu 16 on a Hyper-V VM. I’ll be using a Generation 1 Hyper-V VM as I’ve had much better luck installing Linux VMs on these. I’ve had boot and performance issues with Linux on Generation 2 VMs.

After installing Ubuntu I followed the instructions in the SQL Server Linux documentation and recorded my progress below.

For Ubuntu we need to register the SQL Server repository before we can use the apt-get package manager to download and install SQL Server.

This step-by-step walkthrough shows that installing SQL Server on Linux is pretty easy:  it’s just a single package (RPM or DEB depending upon your flavor of Linux).

Comments closed

SQL Server 2016 SP1

Parikshit Savjani notes that SQL Server 2016 SP1 is available:

The following table compares the list of features which were previously only available in Enterprise edition and which are now enabled in Standard, Web, Express, and LocalDB editions with SQL Server 2016 SP1. This consistent programmatic surface area allows developers and ISVs to develop and build applications leveraging the following features, which can be deployed against any edition of SQL Server installed in the customer environment. The scale and high availability limits do not change, and remain as-is for lower editions as documented in this MSDN article.

This is huge.  With SQL Server 2016 SP1, you can get data compression, In-Memory OLTP, partitioning, database snapshots, Polybase, Always Encrypted, and a lot more in Standard edition.  If you’re on Standard Edition today, this is a must-upgrade—some of these have been Enterprise-only features for nearly a decade and they were a huge part of the appeal for paying for Enterprise.  My question is, what are they going to announce to make people want to keep buying Enterprise Edition?
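As a quick illustration, something like the following now runs on Standard Edition once SP1 is applied (the table name is hypothetical):

-- Page compression was an Enterprise-only feature prior to SQL Server 2016 SP1.
-- dbo.SalesOrders is a hypothetical table used for illustration.
ALTER TABLE dbo.SalesOrders
REBUILD WITH (DATA_COMPRESSION = PAGE);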

Comments closed

Preemptive Scheduling

Ewald Cress looks at preemptive scheduling:

Cooperative scheduling is a relay race: you simply don’t stop without passing over the baton. If you write code which reaches a point where it may have to wait to acquire a resource, this waiting behaviour must be implemented by registering your desire with the resource, and then passing over control to a sibling worker. Once the resource becomes available, it or its proxy lets the scheduler know that you aren’t waiting anymore, and in due course a sibling worker (as the outgoing bearer of the scheduler’s soul) will hand the baton back to you.

This is complicated stuff, and not something that just happens by accident. The textbook scenario for such cooperative waiting is the traditional storage engine’s asynchronous disk I/O behaviour, mediated by page latches. Notionally, if a page isn’t in buffer cache, you want to call some form of Read() method on a database file, a method which only returns once the page has been read from disk. The issue is that other useful work could be getting done during this wait.

Read on for a detailed example looking at xp_cmdshell.
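If you want a sense of how often your own instance switches to preemptive mode, the accumulated waits are easy to pull from the wait stats DMV; preemptive waits all carry a PREEMPTIVE_OS_ prefix:

-- Cumulative preemptive waits since the last service restart (or stats clear).
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE N'PREEMPTIVE_OS_%'
  AND wait_time_ms > 0
ORDER BY wait_time_ms DESC;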

Comments closed

New MPP For Big Data

James Serra notes that there will be a Microsoft Professional Program for Big Data:

A few months back, Microsoft started the Microsoft Professional Program for Data Science (note the program name change from Microsoft Professional Degree to Microsoft Professional Program, or MPP).  This is online learning via edX.org as a way to learn the skills and get the hands-on experience that a data science role requires.  You may audit any courses, including the associated hands-on labs, for free.  However, to receive credit towards completing the data science track in the Microsoft Professional Program, you must obtain a verified certificate for a small fee for each of the ten courses you successfully complete in the curriculum.  The course schedule is presented in a suggested order, to guide you as you build your skills, but this order is only a suggestion.  If you prefer, you may take them in a different order.  You may also take them simultaneously or one at a time, so long as each course is completed within its specified session dates.

Look for it sometime next year.

Comments closed

Understanding The Cardinality Estimator

SQL Scotsman is working on a very interesting series on statistics and the different cardinality estimators.  So far, this is a three-part series.  Part one is an overview:

A few of those assumptions changed in the new  SQL Server 2014/2016 CE, namely:

  • Independence becomes Correlation: In the absence of existing multi-column statistics, the legacy CE views the distribution of data contained across different columns as uncorrelated with one another. This assumption of independence often does not reflect the reality of a typical SQL Server database schema, where implied correlations do actually exist. The new CE uses an increased correlation assumption for multiple predicates and an exponential back-off algorithm to derive cardinality estimates.

  • Simple Join Containment becomes Base Join Containment: Under the legacy CE, the assumption is that non-join predicates are somehow correlated, which is called “Simple Containment”. For the new Cardinality Estimator, these non-join predicates are assumed to be independent (called “Base Containment”), and so this can translate into a reduced row estimate for the join. At a high level, the new CE derives the join selectivity from base-table histograms without scaling down using the associated filter predicates. Instead, the new CE computes join selectivity using base-table histograms before applying the selectivity of non-join filters.
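To put a number on the exponential back-off in the first bullet above: the new CE takes the most selective predicate at full strength and progressively dampens the rest, so for three predicates the combined selectivity is roughly s1 * sqrt(s2) * s3^(1/4), rather than the straight product the legacy CE would use. A quick sketch with made-up selectivities:

-- Hypothetical selectivities for three predicates, most selective first: 0.1, 0.2, 0.3.
SELECT 0.1 * SQRT(0.2) * SQRT(SQRT(0.3)) AS new_ce_selectivity,    -- roughly 0.033
       0.1 * 0.2 * 0.3 AS legacy_ce_selectivity;                   -- 0.006

The dampened estimate is several times larger than the legacy one, which is exactly the kind of difference that can flip a plan.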

Part two looks at trace flag 9481:

When To Use Trace Flag 9481

Query Scope:  You’ve moved (migrated/upgraded) to SQL Server 2014 / 2016, your databases are at compatibility level 120 / 130 and using the new CE, your workload is performing well overall but there are a few regressions where a small number of queries actually perform worse.  Use Trace Flag 9481 on a per query basis as a temporary measure until you can tune / rewrite the query so it performs well without the hint.
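The per-query mechanism is the QUERYTRACEON hint, which reverts just that statement to the legacy CE (the table and column names here are made up):

-- QUERYTRACEON has historically required sysadmin-level permissions.
SELECT o.OrderID, o.OrderDate
FROM dbo.Orders AS o
WHERE o.CustomerID = 42
OPTION (QUERYTRACEON 9481);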

Part three discusses database scoped configurations in SQL Server 2016:

The problem with lowering the database compatibility level is that you can’t leverage the new engine functionality available under the latest compatibility level.

This problem was solved in SQL Server 2016 with the introduction of Database Scoped Configurations, which gives you the ability to make several database-level configuration changes for properties that were previously configured at the instance level.  In particular, the LEGACY_CARDINALITY_ESTIMATION database scoped configuration allows you to set the cardinality estimation model independent of the database compatibility level. This option allows you to leverage all new functionality provided with compatibility level 130 but still use the legacy CE in the odd chance that the latest CE causes severe query regressions across your workload.
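For reference, flipping a database back to the legacy CE while staying at compatibility level 130 is a single statement:

-- Run in the context of the database you want to affect.
ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;

-- And to return to the new cardinality estimator:
ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = OFF;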

The article on statistics is quite long for a blog post and a great read.  I’m looking forward to reading more.

Comments closed

Processing Azure Analysis Services

Bill Anton shows how to process an Azure Analysis Services tabular model:

This post contains a list of various methods that can be used to process (i.e. load data into) an Azure AS tabular model. As you will see – not much has changed from the regular on-premise version (which is a very good thing as it softens the learning curve).

Read on if you’re looking at putting an Analysis Services model into Azure.

Comments closed

External Tables To Hadoop

I have a post looking at creating external tables in Polybase to hit a Hadoop folder:

The DATA_SOURCE and FILE_FORMAT options are easy:  pick your external data source and external file format of choice.

The last major section deals with rejection.  We’re going from a semi-structured system to a structured system, and sometimes there are bad rows in our data, as there are no strict checks of structure before inserting records.  The Hadoop mindset is that there are two places in which you can perform data quality checks:  in the original client (pushing data into HDFS) and in any clients reading data from HDFS.  To make things simpler for us, the Polybase engine will outright reject any records which do not adhere to the quality standards you define when you create the table.  For example, let’s say that we have an Age column for each of our players, and that each age is an integer.  If the first row of our file has headers, then the first row will literally read “Age” and conversion to integer will fail.  Polybase rejects this row (removing it from the result set stream) and increments a rejection counter.  What happens next depends upon the reject options.
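For a sense of where those options live, here is a rough sketch of an external table definition; the data source, file format, location, and column names are all hypothetical:

-- Assumes an external data source and external file format have already been created.
CREATE EXTERNAL TABLE dbo.Players
(
    PlayerName NVARCHAR(100),
    Age INT
)
WITH
(
    LOCATION = '/baseball/players/',
    DATA_SOURCE = HadoopDataSource,   -- hypothetical external data source
    FILE_FORMAT = CsvFileFormat,      -- hypothetical external file format
    REJECT_TYPE = VALUE,
    REJECT_VALUE = 5                  -- fail the query after five rejected rows
);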

Creating an external table is pretty easy once you have the foundation prepared.

Comments closed