Press "Enter" to skip to content

Month: November 2016

Elastic Database Jobs

Mark Vaillancourt looks at Elastic Database Jobs in Azure:

The new Elastic Database Jobs are designed to echo the functionality that folks working with SQL Server are accustomed to on-prem with SQL Agent. But it’s even better than that. There are many features baked right in that you no longer have to worry about. I’ve presented on the new Elastic Jobs a handful of times as part of a larger presentation on the overall Elastic tools associated with Azure SQL Database. That presentation is called Azure SQL Database Elastic Boogie and references Marcia Griffiths’ hit song Electric Boogie (The Electric Slide). Yeah. I know. That will explain the use of the word boogie all over the place.

Even though this is a very new private preview, my experience has been a great one. Huge kudos to Debra and her team on that.

This sounds pretty good.  I really like the dynamic resolution portion and wish that on-prem SQL Agent jobs could do the same out of the box.

Breaking Out Result Sets With Biml

Ben Weissman uses OFFSET-FETCH to split out large tables into separate files:

This post uses objects and annotations from our previous post “Export to Flatfiles with Biml”. Please use the code from that post as a prerequisite.

In the previous post, we exported the whole database to flatfiles with one file per table. But what if we want to split large tables into multiple files? One easy way to do that is to retrieve the data using OFFSET-FETCH NEXT from SQL Server.

Read on for more.
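
To sketch the underlying technique (the table and key column below are hypothetical placeholders, not the ones from Ben’s post): each output file receives one page of rows, and OFFSET-FETCH requires an ORDER BY to page deterministically.

-- A minimal paging sketch; dbo.SalesOrders and SalesOrderID are hypothetical.
DECLARE @PageSize INT = 500000;
DECLARE @PageNumber INT = 0;  -- increment once per output file

SELECT *
FROM dbo.SalesOrders
ORDER BY SalesOrderID  -- a deterministic sort key is required for stable pages
OFFSET (@PageNumber * @PageSize) ROWS
FETCH NEXT @PageSize ROWS ONLY;

In the Biml scenario, you would generate one data flow per page, substituting the appropriate offset into each source query.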

Free Trial Of Azure SQL Data Warehouse

James Serra notes that there is a free one-month trial of Azure SQL Data Warehouse:

You can use this one-month free trial to do POCs and try out SQL DW with up to 200 DWU and 2TB of data.  You must sign up by December 31st, 2016.  Please note that once the one-month free trial is over, you will start getting billed at general availability pricing rates.  For more information on the free trial, and to sign up, go here.

This is great because you can quickly run out of credits otherwise.

Memory-Optimized Tables

Sunil Agarwal explains that memory-optimized tables are more than just “in memory” tables:

In my many conversations with customers during Microsoft events, people often confuse the terms ‘In Memory’ and ‘Memory-Optimized’, and many think that they are one and the same. If you continue reading this blog, you will realize that they are somewhat related but can lead to very different performance and scalability.

To understand this, let us travel back in time a few years, when OLTP databases were much larger than the memory available on the server. For example, your OLTP database could be 500 GB while your server has 128 GB of memory. The familiar strategy to address this is to store data and indexes in pages. SQL Server uses 8 KB pages and brings pages into and out of memory as needed, deploying complex heuristics implemented as part of the buffer pool. When running a query, if the page containing the requested row(s) is not in memory, an explicit physical IO is done to bring it into memory. This impacts query performance negatively. Today, you can buy a server-class machine with, say, 1 TB of physical memory that can keep your full 500 GB database in memory. This will indeed improve the performance of your workload by removing the bottleneck of the IO path. This is what I refer to as ‘your database is in memory’. However, the more important question to ask is ‘Is your database optimized for memory?’

Read on for more details.
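
To make the distinction concrete, here is a minimal sketch of a table that is optimized for memory rather than merely resident in it (the table and columns are hypothetical, and the database first needs a MEMORY_OPTIMIZED_DATA filegroup):

-- Memory-optimized tables use lock- and latch-free structures instead of
-- buffer pool pages; rows are reached through in-memory hash or range indexes.
CREATE TABLE dbo.SessionState
(
    SessionID INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    Payload VARBINARY(8000) NULL,
    LastAccess DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);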

Dynamic Data Masking For Lower Environments

Joey D’Antoni shows how to use Dynamic Data Masking to help prevent sensitive production data from getting to lower environments:

Well at PASS Summit, both in our booth and during my presentation on security in Azure DB, another idea came up—exporting data from production to development, while not releasing any sensitive data. This is a very common scenario—many DBAs have to export sensitive data from prod to dev, and frequently it is done in an insecure fashion.

Doing this requires a little bit of trickery, as dynamic data masking does not work for administrative users. So you will need a second user.

Read on for details.
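
As a rough sketch of the second-user approach (the table, column, and user names here are hypothetical):

-- Mask a sensitive column; masking functions such as email() are built in.
ALTER TABLE dbo.Customers
    ALTER COLUMN EmailAddress ADD MASKED WITH (FUNCTION = 'email()');

-- Create a non-privileged user to perform the export; admins see unmasked data.
CREATE USER ExportUser WITHOUT LOGIN;
GRANT SELECT ON dbo.Customers TO ExportUser;

-- Queries run as ExportUser return masked values (e.g., aXXX@XXXX.com).
EXECUTE AS USER = 'ExportUser';
SELECT CustomerID, EmailAddress FROM dbo.Customers;
REVERT;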

Elasticsearch 5.0

Itamar Syn-hershko looks at the new functionality in the latest version of Elasticsearch:

One fundamental feature of Elasticsearch is scoring – or results ranking by relevance. The part that handles it is a Lucene component called Similarity. ES 5.0 now makes Okapi BM25 the default similarity, and that’s quite an important change. The default has long been tf/idf, which is simpler to understand but easier to fool with rogue results. BM25 is a probabilistic approach to ranking that almost always gives better results than the more vanilla tf/idf. I’ve been recommending that customers use BM25 over tf/idf for a long time now, and we also rely on it at Forter for doing quite a lot of interesting stuff. Overall, a good move by ES, and I can finally retire a years-long piece of advice. Britta Weber has a great talk explaining the difference, and BM25 in particular – definitely a recommended watch.

This is one of several search-related features in the latest version.  Looks like a solid release.
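
For reference, the standard Okapi BM25 scoring function for a document D and a query Q with terms q_1, …, q_n is as follows (Lucene’s implementation differs in small details, such as how it computes IDF):

\[
\mathrm{score}(D, Q) = \sum_{i=1}^{n} \mathrm{IDF}(q_i) \cdot \frac{f(q_i, D)\,(k_1 + 1)}{f(q_i, D) + k_1 \left(1 - b + b \cdot \frac{|D|}{\mathrm{avgdl}}\right)}
\]

Here f(q_i, D) is the term’s frequency in D, |D| is the document length, avgdl is the average document length, and k_1 and b are tuning parameters (commonly k_1 ≈ 1.2 and b = 0.75). The k_1 term saturates term frequency, so stuffing a document with a term yields diminishing returns, which is one reason BM25 is harder to fool than plain tf/idf.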

RStudio 1.0

RStudio has officially hit 1.0:

While RStudio has been an enormously useful IDE for R since day 1, it’s officially been in “beta” status all of this time. But last week, RStudio released the first official production version, RStudio 1.0. Check out that link for the release history of RStudio and all that’s been added to it over the last 6 years, but this release also adds major new functionality, including:

R Tools for Visual Studio is certainly making strides, but RStudio is the gold standard for R IDEs.

Installing Polybase

I have a post on installing Polybase:

Java gets updated due to security vulnerabilities approximately once every three days, so I won’t link to any particular version.  You only need to get the Java Runtime Environment (JRE), not the Java Development Kit (JDK).  Anyhow, once you have that installed, you can safely install SQL Server.

In the Polybase configuration section, you have the option of making this a standalone Polybase instance or enlisting it as part of a scale-out group.  In my case, I want to leave this as a standalone Polybase machine because I do not have this machine on a Windows domain, and you need domain accounts for Polybase scale-out to work correctly.  Later in the series, we’ll give multi-node Polybase a shot.

This is the easiest installation scenario, but it’s a start.
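
Once setup completes, a quick sanity check and the first configuration step look something like this (the connectivity value 7 is just one example; the right value depends on your target data source):

-- Returns 1 if the Polybase feature was installed on this instance.
SELECT SERVERPROPERTY('IsPolyBaseInstalled') AS IsPolyBaseInstalled;

-- Set the external connectivity level; a SQL Server service restart
-- (along with the two Polybase services) is required for it to take effect.
EXEC sp_configure @configname = 'hadoop connectivity', @configvalue = 7;
RECONFIGURE;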
