Month: May 2017

Azure MySQL Backups

Grant Fritchey focuses on an area where Azure’s MySQL Platform as a Service offering really makes sense:

Why Is MySQL Platform as a Service Important?

I am going to answer this question. There are a lot of advantages to creating, using and developing against data storage within a PaaS offering. One of the biggest for me is backups. Microsoft is automatically taking backups of the MySQL databases you create within Azure. These are real, full backups. Microsoft validates the backups. As I write this, you’ll have the ability to restore your entire database, to any point in time, at intervals of five minutes, over a 35 day preceding period. By programming against a MySQL database within Azure, you are gaining protection of the information you’re storing within your database, and you don’t have to do anything to benefit from this. It’s all part of the service.

Read the whole thing.

Sub-Second Hive Analytics

Carter Shanklin and Slim Bouguerra have started a series on using Hive and Druid to obtain sub-second SQL queries over terabytes of data:

We’ll show how the Hive/Druid integration delivers ultra-fast SQL analytics that can be consumed from your favorite BI tool to get accelerated business results.  And we will show benchmark results of BI queries running in just milliseconds over a 1TB dataset.

 WHAT IS DRUID?

Druid is a high-performance, column-oriented, distributed data store, which is well suited for user-facing analytic applications and real-time architectures. Druid is included as a technical preview in HDP 2.6 and you can read more about Druid on our project page, or at the project website.

This first post is mostly about Druid, which sounds like it could eventually become a very interesting technology for implementing Kimball-style warehouse models, were it not for the whole “Joins?  We don’t need no steenkin’ joins” philosophy.  But when used as one engine component (as mentioned in the post), I can see it being quite useful.
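
If you want a feel for what the integration looks like in practice, a Druid-backed table is created from Hive using the storage handler that ships with HDP 2.6. Here is a rough HiveQL sketch; the table, columns, and property values are illustrative rather than taken from the post:

    CREATE TABLE ssb_druid
    STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler'
    TBLPROPERTIES (
      "druid.segment.granularity" = "MONTH",
      "druid.query.granularity"   = "DAY"
    )
    AS
    SELECT
      CAST(d_date AS TIMESTAMP) AS `__time`,  -- Druid requires a __time column
      c_city,
      c_nation,
      lo_revenue
    FROM ssb_denormalized;

Once the data lands in Druid, ordinary Hive SQL (and therefore any BI tool talking to Hive) queries the table, with Druid's column-oriented segments doing the heavy lifting.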

Keeping Up With Analytics

Jen Underwood discusses the need to stay relevant in analytics and shares some tips on how to do so:

Although most analytics applications today still leverage older data warehouse and OLAP technologies on-premises, the pace of the cloud shift is significantly increasing. Infrastructure is getting better and is almost invisible in mature markets. Cloud fears are subsiding as more organizations witness the triumphs of early adopters. Instant, easy cloud solutions continue to win the hearts and minds of non-technical users. Cloud also accelerates time to market allowing for innovation at faster speeds than ever before. As data and analytics professionals, be sure to make time to learn a variety of cloud and hybrid analytics tools.

Exploring novel technologies across various ecosystems in the cloud world is usually as simple as spinning up a cloud image or service to get started. There are literally zillions of free and low cost resources for learning. As you dive into a new world of data, you will find common analytics architectures, design patterns, and types of technologies (hybrid connectivity, storage, compute, microservices, IoT, streaming, orchestration, database, big data, visualization, artificial intelligence, etc.) being used to solve problems.

It’s worth reading the whole thing.

Basic Data Tidying

Sarah Dutkiewicz tidies up a data set in R:

Looking at this data, the first thing I thought was untidy. There has to be a better way. When I think of tidy data, I think of the tidyr package, which is used to help make data tidy, easier to work with. Specifically, I thought of the spread() function, where I could break things up. Once data was spread into appropriate columns, I figure I can operate on the data a bit better.

Sarah has also made the data set available in case you’re interested in following along.
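
Sarah works in R, but if you think in SQL terms, tidyr's spread() is roughly a PIVOT: key/value rows become one column per key. A hypothetical T-SQL analogue (the table and column names here are made up, not from her data set) looks like this:

    SELECT Speaker, [Beginner], [Intermediate], [Advanced]
    FROM (
        SELECT Speaker, SessionLevel, SessionCount
        FROM dbo.RawSessions
    ) AS src
    PIVOT (
        SUM(SessionCount) FOR SessionLevel IN ([Beginner], [Intermediate], [Advanced])
    ) AS p;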

Interpreting Regression Coefficients

Steph Locke explains what beta values on parameters in a regression actually signify:

When we read the list of coefficients, here is how we interpret them:

  • The intercept is the starting point – so if you knew no other information it would be the best guess.

  • Each coefficient multiplies the corresponding column to refine the prediction from the estimate. It tells us how much one unit in each column shifts the prediction.

  • When you use a categorical variable, in R the intercept represents the default position for a given value in the categorical column. Every other value then gets a modifier to the base prediction.

Linear regression is easy, but the real value here is Steph’s explanation of logistic regression coefficients.
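
For reference, the model those bullets describe, and the logistic variant Steph goes on to explain, can be written out explicitly (the symbols below are generic, not taken from her post):

    \hat{y} = \beta_0 + \beta_1 x_1 + \dots + \beta_k x_k

    \log\frac{p}{1-p} = \beta_0 + \beta_1 x_1 + \dots + \beta_k x_k

In the logistic case each coefficient shifts the log-odds by \beta_i per unit of x_i, so exp(\beta_i) is the multiplicative change in the odds, which is why those coefficients take a little more care to read.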

Azure SQL Data Warehouse Max Size Property

Joey D’Antoni says not to freak out if you look at the Max Size property on an Azure SQL Data Warehouse database:

One of the promises of Azure SQL Data Warehouse is the ability to have petabyte scale: to quickly scale data, and have that data scale independently of compute resources. So when one of my clients emailed me yesterday with this screenshot, needless to say I was concerned.

As you can see, the properties screen shows a max size of 5 GB.

Click through for the reason why.
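
If you would rather check from T-SQL than trust the SSMS properties dialog, the reported cap is available via DATABASEPROPERTYEX. A quick sketch, assuming the MaxSizeInBytes property is surfaced for your instance:

    -- Run in the SQL Data Warehouse database itself
    SELECT DATABASEPROPERTYEX(DB_NAME(), 'MaxSizeInBytes') AS MaxSizeInBytes;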

Dealing With 404 Errors In Power BI Query Editor

Callum Green shows how to deal with a scenario when you try to retrieve data for a particular row but get a 404 error:

The error message is a little misleading but let’s save the debugging debate for another day. The key observation is “Guildford” data is not available, simply because it comes after “Camberley” in the list. Whilst we want to see errors in a Query, we do not want them causing data loss.

Resolution

As I mentioned at the beginning of this article, using the Remove Errors function would prevent the loss of Guildford data. However, the user needs to handle errors as Unknown Members and conform to a typical Kimball Data Warehouse.

I am sure there are many ways to fulfil the requirement, but here is how I approached it:

Read on for the resolution.
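
Callum's fix lives in the Power BI Query Editor, but the "Unknown Member" idea he mentions is the standard Kimball pattern: rows whose lookup fails map to a reserved Unknown row instead of being thrown away. On the warehouse side it looks roughly like this (hypothetical table and column names):

    -- Reserve a surrogate key for the Unknown member
    INSERT INTO dbo.DimTown (TownKey, TownName) VALUES (-1, N'Unknown');

    -- Failed lookups fall back to the Unknown member instead of being dropped
    SELECT s.PostCode,
           COALESCE(t.TownKey, -1) AS TownKey
    FROM staging.AddressFeed AS s
    LEFT JOIN dbo.DimTown AS t
        ON t.TownName = s.Town;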

Creating Graph Objects In SQL Server

Steve Jones creates a simple graph relationship in SQL Server 2017:

What does all that mean? No idea. Clearly there is JSON that’s returned here and can be deserialized to gather meanings. Is this useful? I think graphs solve a certain set of problems very well, and more efficiently than relational systems. Certainly I could implement a graph structure relationally, but at scale I’m not sure the queries would be as easy to write or run as quickly.

I don’t know if I’d use a graph structure in any of the problems we try to solve in the SQLServerCentral app, but who knows. Maybe we would if we could.

Steve leaves this with more questions than answers, but he does give a very simple repro script if you want to futz about with graphs.
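
Steve's repro is worth grabbing; if you just want the shape of the syntax, a minimal sketch of SQL Server 2017 graph objects (not his script, and the names here are made up) looks like this:

    CREATE TABLE dbo.Person (PersonId INT PRIMARY KEY, PersonName NVARCHAR(100)) AS NODE;
    CREATE TABLE dbo.Follows AS EDGE;

    INSERT INTO dbo.Person VALUES (1, N'Anna'), (2, N'Ben');

    -- Edges are inserted by looking up the $node_id pseudo-column on each side
    INSERT INTO dbo.Follows ($from_id, $to_id)
    SELECT p1.$node_id, p2.$node_id
    FROM dbo.Person AS p1, dbo.Person AS p2
    WHERE p1.PersonId = 1 AND p2.PersonId = 2;

    -- MATCH walks the graph: who does Anna follow?
    SELECT p2.PersonName
    FROM dbo.Person AS p1, dbo.Follows AS f, dbo.Person AS p2
    WHERE MATCH(p1-(f)->p2)
      AND p1.PersonName = N'Anna';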

Adaptive Join Internals

Dmitry Pilugin digs deep into how adaptive joins work in SQL Server 2017:

We have three types of physical join algorithms in SQL Server: hash, nested loops and merge. Adaptive join allows SQL Server to automatically choose the actual physical algorithm on the fly between the first two – hash (HM) and nested loops (NL).

NL has two join strategies: naive nested loops join (the inner loop scans the whole inner table or index) and index nested loops join (an index on the join column of the inner table is used to find the necessary rows, and those rows are then applied to the outer row; this is also called Nested Loops Apply). Typically, the second one performs very well if you have a rather small input on the outer side and an indexed, rather big input on the inner side.

HM is more universal and uses hash algorithms to match rows, so no indexes are necessary. You may refer to my blog post Hash Join Execution Internals for more details.

Adaptive Join starts execution as a Hash Join. It consumes all the input of the build phase and looks at the adaptive join threshold; if the number of rows is greater than or equal to this threshold, it will continue as a hash join. However, if the number of rows is less than this threshold, it will switch to a NL.

If you want to get a better understanding of how adaptive joins work, Dmitry’s post is a great start.
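
To see the operator for yourself, here is a minimal setup where SQL Server 2017 can choose an adaptive join (hypothetical tables, not Dmitry's demo); adaptive joins require batch mode, which in 2017 in turn requires a columnstore index on one of the tables:

    -- Batch mode (and therefore adaptive joins) needs a columnstore index in SQL Server 2017
    CREATE NONCLUSTERED COLUMNSTORE INDEX ncci_SalesOrderDetail
        ON dbo.SalesOrderDetail (ProductID, OrderQty);

    SELECT d.ProductID, SUM(d.OrderQty) AS TotalQty
    FROM dbo.SalesOrderDetail AS d
    JOIN dbo.Product AS p
        ON p.ProductID = d.ProductID
    WHERE d.ProductID = 870   -- vary the predicate to land above or below the threshold
    GROUP BY d.ProductID;

In the actual execution plan, the Adaptive Join operator exposes the threshold row count and the join type it ended up using, which is the behavior Dmitry walks through in detail.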
