Press "Enter" to skip to content

Day: November 9, 2016

Elastic Database Jobs

Mark Vaillancourt looks at Elastic Database Jobs in Azure:

The new Elastic Database Jobs are designed to echo the functionality that folks working with SQL Server are accustomed to on-prem with SQL Agent. But it's even better than that. There are many features that are just baked in that you no longer have to worry about. I've presented on the new Elastic Jobs as part of a larger presentation on the overall Elastic tools associated with Azure SQL Database a handful of times. That presentation is called Azure SQL Database Elastic Boogie and references Marcia Griffiths' hit song Electric Boogie (The Electric Slide). Yeah. I know. That will explain the use of the word boogie all over the place.

Even with it just being a very new private preview, my experience has been a great one. Huge kudos to Debra and her team on that.

This sounds pretty good.  I really like the dynamic resolution portion and wish that on-prem SQL Agent jobs could do the same out of the box.
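For a sense of what that dynamic resolution looks like, here is a minimal sketch using the jobs-schema stored procedures from the Elastic Jobs agent's job database. The server, credential, and maintenance procedure names are hypothetical, and the private preview's surface may differ from this:

```sql
-- Run in the Elastic Jobs agent's job database.
-- Targeting the whole server means member databases are resolved
-- dynamically at execution time rather than enumerated up front.
EXEC jobs.sp_add_target_group
    @target_group_name = N'BoogieServerGroup';

EXEC jobs.sp_add_target_group_member
    @target_group_name       = N'BoogieServerGroup',
    @target_type             = N'SqlServer',
    @refresh_credential_name = N'RefreshCredential',   -- hypothetical credential
    @server_name             = N'boogieserver.database.windows.net';

EXEC jobs.sp_add_job
    @job_name    = N'NightlyMaintenance',
    @description = N'Maintenance across every database on the server';

EXEC jobs.sp_add_jobstep
    @job_name          = N'NightlyMaintenance',
    @command           = N'EXEC dbo.IndexMaintenance;',  -- hypothetical procedure
    @credential_name   = N'JobCredential',               -- hypothetical credential
    @target_group_name = N'BoogieServerGroup';
```

Because the target is the server rather than a fixed list of databases, a database you add next week gets picked up automatically on the next run.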


Breaking Out Result Sets With Biml

Ben Weissman uses OFFSET-FETCH to split out large tables into separate files:

This post uses objects and annotations from our previous post “Export to Flatfiles with Biml”. Please use the code from that post as a prerequisite.

In the previous post, we’ve exported the whole database to flatfiles with one file per table. But what if we want to split large tables into multiple files? One easy way to do that would be to retrieve the data using OFFSET-FETCH NEXT from SQL Server.

Read on for more.
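As a rough illustration of the paging pattern itself (the table and column names here are hypothetical), each output file would pull one chunk like this, incrementing the offset per file:

```sql
DECLARE @PageSize bigint = 1000000,  -- rows per file
        @Offset   bigint = 0;        -- increment by @PageSize for each file

SELECT *
FROM dbo.SalesOrderDetail            -- placeholder table name
ORDER BY SalesOrderDetailID          -- OFFSET-FETCH requires an ORDER BY
OFFSET @Offset ROWS
FETCH NEXT @PageSize ROWS ONLY;
```

One thing to keep in mind: the ORDER BY needs to be deterministic (ideally the primary key) so rows don't repeat or go missing across chunks.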


TMSCHEMA DMVs

Meagan Longoria wants Azure Analysis Services documentation:

It would be great to get these DMVs documented similarly to the MDSCHEMA DMVs, as they are quite useful for tasks like documenting your tabular model.  Since the TMSCHEMA DMVs work in Azure Analysis Services as well, I have logged this request on the Azure AS User Voice for that. Please lend me a vote so we can make this information more easily available.

Please vote on this.
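If you haven't played with them, the TMSCHEMA DMVs are queried like any other Analysis Services DMV, for example from an MDX query window in Management Studio. A few examples; note that DMV queries support only a restricted SELECT syntax, so no joins:

```sql
-- Run against a tabular (compatibility level 1200+) or Azure AS instance.
SELECT * FROM $SYSTEM.TMSCHEMA_TABLES;
SELECT * FROM $SYSTEM.TMSCHEMA_COLUMNS;
SELECT * FROM $SYSTEM.TMSCHEMA_MEASURES;
```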


Free Trial Of Azure SQL Data Warehouse

James Serra notes that there is a free one-month trial of Azure SQL Data Warehouse:

You can use this one-month free trial to do POCs and try out SQL DW up to 200 DWU and 2TB of data. You must sign up by December 31st, 2016. Please note that once the one-month free trial is over, you will start getting billed at general availability pricing rates. For more information on the free trial, and to sign up, go here.

This is great because you can quickly run out of credits otherwise.


Memory-Optimized Tables

Sunil Agarwal explains that memory-optimized tables are more than just “in memory” tables:

In my many conversations with customers during Microsoft events, people often confuse the terms ‘In Memory’ and ‘Memory-Optimized’, and many think that they are one and the same. If you continue reading this blog, you will realize that they are somewhat related but can lead to very different performance/scalability.

To understand this, let us travel back in time a few years, when the sizes of OLTP databases were much larger than the memory available on the server. For example, your OLTP database could be 500GB while your server box has 128GB of memory. We all know the familiar strategy to address this: storing data/indexes in pages. SQL Server uses 8KB pages and brings pages in and out of memory as needed, deploying complex heuristics implemented as part of the Buffer Pool. When running a query, if the page containing the requested row(s) is not in memory, an explicit physical IO is done to bring it into memory. This impacts query performance negatively. Today, you can buy a server-class machine with, say, 1TB of physical memory that can keep your full 500GB database in memory. This will indeed improve the performance of your workload by removing the bottleneck of the IO path. This is what I refer to as ‘your database is in memory’. However, the more important question to ask is: ‘Is your database optimized for memory?’

Read on for more details.
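To make the distinction concrete: a memory-optimized table is something you declare explicitly, not a regular table whose pages happen to fit in the buffer pool. A minimal sketch, with a hypothetical table name and bucket count (the database also needs a MEMORY_OPTIMIZED_DATA filegroup first):

```sql
CREATE TABLE dbo.SessionState
(
    SessionId int NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    UserData  nvarchar(4000) NULL,
    CreatedAt datetime2      NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```

The rows live in latch- and lock-free in-memory structures rather than 8KB pages, which is where the scalability difference Sunil describes comes from.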


Dynamic Data Masking For Lower Environments

Joey D’Antoni shows how to use Dynamic Data Masking to help prevent sensitive production data from getting to lower environments:

Well, at PASS Summit, both in our booth and during my presentation on security in Azure DB, another idea came up—exporting data from production to development, while not releasing any sensitive data. This is a very common scenario—many DBAs have to export sensitive data from prod to dev, and frequently it is done in an insecure fashion.

Doing this requires a little bit of trickery, as dynamic data masking does not work for administrative users. So you will need a second user.

Read on for details.
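Here's a minimal sketch of the pattern, with hypothetical table and user names: mask the sensitive column, then run the export as a non-privileged user, since masking is bypassed for administrative users:

```sql
-- Mask an email column with the built-in email() masking function.
ALTER TABLE dbo.Customers
    ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()');

-- A second, non-privileged user to perform the export.
CREATE USER ExportUser WITHOUT LOGIN;
GRANT SELECT ON dbo.Customers TO ExportUser;

-- Verify: this returns masked values (e.g. aXX@XXXX.com).
EXECUTE AS USER = 'ExportUser';
SELECT Email FROM dbo.Customers;
REVERT;
```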
