
Category: Versions

Azure SQL Database Compatibility Level Change

Joe Sack reports that new Azure SQL Databases will have a compatibility level of 140 by default:

Once this new database compatibility default goes into effect, if you still wish to use database compatibility level 130 (or lower), please follow the instructions detailed here: View or Change the Compatibility Level of a Database.  For example, you may wish to ensure that new databases created in Azure SQL Database use the same compatibility level as other databases in Azure SQL Database to ensure consistent query optimization behavior across development, QA and production versions of your databases. We recommend that database configuration scripts explicitly designate COMPATIBILITY_LEVEL rather than rely on the defaults, in order to ensure consistent application behavior.

For new databases supporting new applications, we recommend using the latest compatibility level (140).  For pre-existing databases running at lower compatibility levels, the recommended workflow for upgrading the query processor to a higher compatibility level is detailed in the article, Change the Database Compatibility Mode and Use the Query Store.  Note that this article refers to compatibility level 130 and SQL Server, but the same methodology applies for moves to 140 for SQL Server and Azure SQL DB.

It’s good to hear, and as Joe mentions, you have the ability to move back down to 130 if you need it.
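
If you want to designate the level explicitly in your deployment scripts, as the quote recommends, it is a one-line ALTER DATABASE either way. A minimal sketch, assuming a database named MyDatabase:

  -- Check the current compatibility level
  SELECT name, compatibility_level
  FROM sys.databases
  WHERE name = N'MyDatabase';

  -- Pin the database to the new default...
  ALTER DATABASE [MyDatabase] SET COMPATIBILITY_LEVEL = 140;

  -- ...or move it back down to 130 if you hit regressions
  ALTER DATABASE [MyDatabase] SET COMPATIBILITY_LEVEL = 130;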


SQL Server 2017 RC2

Microsoft has announced Release Candidate 2 of SQL Server 2017, hot on the heels of RC1:

Microsoft is pleased to announce availability of SQL Server 2017 Release Candidate 2 (RC2), which is now available for download.

The release candidate represents an important milestone for SQL Server.  Development of the new version of SQL Server along most dimensions needed to bring the industry-leading performance and security of SQL Server to Windows, Linux, and Docker containers is complete.  We are continuing to work on performance and stress testing of SQL Server 2017 to get it ready for your most demanding Tier 1 workloads, as well as some final bug fixes.

There are no new features and the Windows release notes are empty, but there are some Linux release notes as they firm up that offering before launch.


Downgrading SQL Server

Jonathan Kehayias is going the opposite direction of the Jeffersons:

At some point in your career working with SQL Server, you will run into a situation where the wrong edition of SQL Server has been installed on a server and will need to change the edition for licensing reasons.  Whether it is Enterprise Edition where Standard Edition should have been installed, Enterprise Edition where Developer Edition should have been used, or my favorite, Evaluation Edition where the 180 day trial has expired and Enterprise Edition isn’t going to be used, the only route available for downgrading the edition is to uninstall and reinstall SQL Server entirely.  SQL Server Setup makes upgrading editions a piece of cake with SKUUPGRADE as a command line option for going from Standard/Developer/Evaluation to Enterprise, but anything else requires a full uninstall and reinstall to change the SKU/Edition and then restore all of the system and user databases to the new instance, which typically means a lot of work.  I hate having to restore system databases and avoid having to do it if possible, so here is how I do this process and minimize the work required:

No matter what, you are going to have to do an uninstall and reinstall of the SQL Server instance to downgrade the SKU.  However, you can save yourself some time and the headache of trying to restore the system databases if you are careful about what you do.  I have done plenty of SKU downgrades in the past and the easiest way to do it, and I am not saying this is the Microsoft supported way but that it works if done correctly, is to:

Jonathan has an 11-point checklist that’s well worth checking out.  Though hopefully, may all your editions be Enterprise…
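
As a quick sanity check before the uninstall and again after the reinstall, the edition and build are easy to confirm from T-SQL. A minimal sketch (not part of Jonathan's checklist):

  -- Confirm which edition and build the instance is actually running
  SELECT SERVERPROPERTY('Edition')        AS Edition,
         SERVERPROPERTY('ProductVersion') AS ProductVersion,
         SERVERPROPERTY('ProductLevel')   AS ProductLevel;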


SQL Server 2017 RC1 Now Available

The SQL Server team announces RC1 of SQL Server 2017:

In SQL Server 2017 RC1, there were several feature enhancements of note:

  • SQL Server on Linux Active Directory integration – With RC1, SQL Server on Linux supports Active Directory Authentication, which enables domain-joined clients on either Windows or Linux to authenticate to SQL Server using their domain credentials and the Kerberos protocol. Check out the getting started instructions.

  • Transport Layer Security (TLS) to encrypt data – SQL Server on Linux can use TLS to encrypt data that is transmitted across a network between a client application and an instance of SQL Server. SQL Server on Linux supports the following TLS protocols: TLS 1.2, 1.1, and 1.0. Check out the getting started instructions.

  • Machine Learning Services enhancements – In RC1, we add more model management capabilities for R Services on Windows Server, including External Library Management. The new release also supports Native Scoring.

  • SQL Server Analysis Services (SSAS) – In addition to the enhancements to SSAS from previous CTPs of SQL Server 2017, RC1 adds additional Dynamic Management Views, enabling dependency analysis and reporting. See the Analysis Services blog for more information.

  • SQL Server Integration Services (SSIS) on Linux – The preview of SQL Server Integration Services on Linux now adds support for any Unicode ODBC driver, if it follows ODBC specifications. (ANSI ODBC drivers are not supported.)

  • SQL Server Integration Services (SSIS) on Windows Server – RC1 adds support for SSIS scale out in highly available environments. Customers can now enable Always On for SSIS, setting up Windows Server failover clustering for the scale out master.

Linux AD support is big.
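
As a side note on the Machine Learning Services item, external library management surfaces in T-SQL as CREATE EXTERNAL LIBRARY. A hedged sketch, with the package name and file path purely illustrative:

  -- Register an R package for use by Machine Learning Services
  CREATE EXTERNAL LIBRARY ggplot2
  FROM (CONTENT = 'C:\packages\ggplot2.zip')
  WITH (LANGUAGE = 'R');

  -- Remove it again if it is no longer needed
  DROP EXTERNAL LIBRARY ggplot2;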


Cloudera Enterprise 5.12 GA

Fred Koopmans announces that Cloudera Enterprise 5.12 is now generally available:

Data Science & Engineering

  • Cloudera Data Science Workbench enhancements include:

    • GPU Support: Cloudera Data Science Workbench now enables popular deep learning frameworks to run on GPUs, both on-premises and in the cloud.

    • Embedded Web UIs: Users can work with the Apache Spark Web UI for Spark sessions. Other interactive web applications like TensorBoard, Shiny, and Plotly now appear directly in the workbench.

    • Enhanced Job Scheduling: Cloudera Data Science Workbench users can now schedule jobs directly from external schedulers or orchestration systems via the new Jobs API.

Read on for more enhancements.


Test The DBATools Beta

Chrissy LeMaire wants you to test the beta of dbatools:

Before the official release of bagofbobbish to master and the PowerShell Gallery, we need help finding bugs. Then, we’ll need some time to resolve those bugs. Hopefully this can be done before community members show off dbatools at a few key SQLSaturdays around the world this Saturday, July 8th.

We would really appreciate it if you would download the beta from GitHub and (in a test environment) see if you can find anything that doesn’t work as expected.

If you find any bugs, please file a report on GitHub. You can also reach out to us in the Slack channel.

Currently, there aren’t any webpages for the commands listed in this post, but all commands have help, so when you need help, simply type Get-Help commandName -Examples or Get-Help commandName -Full.

Get testing.  There are a lot of new commands, so if you haven’t checked out dbatools in a while, give it a go.  Also, congrats to Rob Sewell for his newly minted MVP status.


Using Availability Groups For Upgrades

Adrian Buckman has a fun post on upgrading to SQL Server 2017 (CTP) using Availability Groups to minimize downtime:

Don’t panic; this is still going as planned, as this is totally expected, and this is the reason why:

We are now in a situation where we have the primary server running 2017 but one (for us) or possibly more than one (for you) replica still running on 2016. It’s not possible for the 2016 server to synchronize, as its databases have not been upgraded yet; they will therefore be stuck in recovery, but we are about to fix that very soon.

This is a viable upgrade option:  we did it when upgrading from 2014 to 2016.  There are a lot of steps, but in the end, it worked fine.
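
Adrian’s post has the full step-by-step; the heart of it is watching synchronization state while the replicas run mixed versions and then failing over manually. A rough sketch of the kind of checks involved (not his exact script), assuming an availability group named AG1:

  -- Watch database synchronization state per replica
  SELECT ar.replica_server_name,
         drs.database_id,
         drs.synchronization_state_desc,
         drs.synchronization_health_desc
  FROM sys.dm_hadr_database_replica_states AS drs
  JOIN sys.availability_replicas AS ar
      ON ar.replica_id = drs.replica_id;

  -- Once the upgraded replica is ready, fail over to it (run on that replica)
  ALTER AVAILABILITY GROUP [AG1] FAILOVER;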


The Most Useless SQL Server Feature

Adam Machanic put out a poll on Twitter, asking for the most useless SQL Server feature:

It was at this point that I realized just how many candidates there are for “most useless” things lying around the product. So I decided to create my own tweet. I asked for the most useless feature, anytime between version 7.0 (which I would call the beginning of SQL Server’s “modern era”) and now. I received quite a few suggestions, and so I have decided to catalog them here—along with a bit of personal commentary on each one.

The list that follows is mostly unordered and culled straight from what I received on Twitter. Hopefully I haven’t missed anything due to Twitter’s weird threading and “priority” mechanisms. And please let me know in the comments if your favorite useless feature is missing, or you’d like to add a comment and/or argument about one of these. Perhaps we can find some way to turn these dark and ugly corners into things of beauty? Well, we shall see…

I almost completely agree with Adam’s opinions on this long list.  I’d emphasize, though, that In-Memory OLTP is by no means useless.


Rebuilding Full-Text Catalogs

Thomas Rushton ran into an issue with full-text indexing component versions:

Restoring 27 databases; they all restored properly, but 15 of them gave a warning along these lines:

Warning: Wordbreaker, filter, or protocol handler used by catalog ‘FOOBARBAZ’ does not exist on this instance. Use sp_help_fulltext_catalog_components and sp_help_fulltext_system_components to check for mismatching components. Rebuild catalog is recommended.

Read on for the solution.
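
If the component mismatch is indeed the culprit, the usual remedy is to compare component versions and rebuild the affected catalogs. A minimal sketch, using the catalog name from the warning:

  -- Compare the components the catalogs reference with what the instance provides
  EXEC sp_help_fulltext_catalog_components;
  EXEC sp_help_fulltext_system_components 'all';

  -- Rebuild the catalog flagged in the restore warning
  ALTER FULLTEXT CATALOG [FOOBARBAZ] REBUILD;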


What’s New In Hadoop 3.0?

Shubham Sinha explains some of the changes coming to Hadoop:

Integrating EC with HDFS can maintain the same fault-tolerance with improved storage efficiency. As an example, a 3x replicated file with 6 blocks will consume 6*3 = 18 blocks of disk space. But with an EC (6 data, 3 parity) deployment, it will only consume 9 blocks (6 data blocks + 3 parity blocks) of disk space, which works out to a storage overhead of only 50% instead of 200%.

Since erasure coding incurs additional overhead when reconstructing data, because of the remote reads involved, it is generally used for storing less frequently accessed data. Before deploying erasure coding, users should consider all of its overheads: storage, network, and CPU.

To support erasure coding effectively in HDFS, some changes were made to the architecture. Let us take a look at those architectural changes.

There are some nice features coming to Hadoop version 3.
