Base Versus Simple Containment

Joe Obbish takes a crack at explaining the difference between base containment and simple containment for cardinality estimation:

We know that the first query will return 500k rows and the second query will return 0 rows. However, can SQL Server know that? Each statistics object only contains information about its own column. There’s no correlation between the UNIQUE_ID and MOD_FILTER columns, so there isn’t a way for SQL Server to know that the queries will return different numbers of rows. The query optimizer can create an estimate based on the filters in the WHERE clause and on the histograms of the join columns, but there’s no foolproof way to do that calculation. The presence of the filters introduces uncertainty into the estimate, even with statistics that perfectly describe the data for each column. Containment is all about the modeling assumption that SQL Server has to make to resolve that uncertainty.

It’s an interesting post aimed at getting you to think like a simplified cardinality estimator.  SQL Server doesn’t behave exactly like this, but it’s a good mental reference point.
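To make the setup concrete, here is a sketch of the shape of the queries under discussion. The table names and filter values are illustrative, not Joe’s exact demo:

```sql
-- Both tables share a UNIQUE_ID join column, and MOD_FILTER slices each
-- table into disjoint subsets. Single-column statistics exist on each column.

-- Query 1: matching filters, so every surviving row finds a join partner.
SELECT *
FROM dbo.TABLE_A AS a
INNER JOIN dbo.TABLE_B AS b
    ON a.UNIQUE_ID = b.UNIQUE_ID
WHERE a.MOD_FILTER = 0
  AND b.MOD_FILTER = 0;

-- Query 2: disjoint filters, so no surviving row finds a join partner.
SELECT *
FROM dbo.TABLE_A AS a
INNER JOIN dbo.TABLE_B AS b
    ON a.UNIQUE_ID = b.UNIQUE_ID
WHERE a.MOD_FILTER = 0
  AND b.MOD_FILTER = 1;
```

Because the statistics carry no correlation between the columns, the optimizer estimates both queries the same way; what differs is the assumption it falls back on. Roughly speaking, simple containment (the legacy cardinality estimator) assumes the rows that survive the filters still join to each other, while base containment (the new estimator) derives the join selectivity from the base tables before the filters apply.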

Querying Elasticsearch

Swatee Chand has a tutorial on querying Elasticsearch:

In Elasticsearch, the aggregations framework is responsible for providing aggregated data based on a search query. Aggregations can be composed together in order to build complex summaries of the data. For a better understanding, consider an aggregation a unit of work: it builds analytic information over a set of documents available in Elasticsearch. Various types of aggregations are available, each with its own purpose and output. For simplicity, they are generalized into four major families:

  1. Bucketing

    Here each bucket is associated with a key and a document criterion. Whenever the aggregation is executed, all the bucket criteria are evaluated on every document. Each time a criterion matches, the document is considered to “fall in” the relevant bucket.

  2. Metric

    Metric aggregations are responsible for keeping track of and computing metrics over a set of documents.

  3. Matrix

    Matrix aggregations operate on multiple fields, producing a matrix result out of the values extracted from the requested document fields. Matrix aggregations do not support scripting.

  4. Pipeline

    Pipeline aggregations aggregate the output of other aggregations and their associated metrics.

If you deal with Elasticsearch (or have log data you want to query), this tutorial will give you an idea of what you can do.
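If it helps to think in SQL terms, a bucketing aggregation with a nested metric maps loosely onto a GROUP BY with an aggregate function. This is a hedged analogy rather than Elasticsearch syntax, and the table and column names are made up:

```sql
-- Rough SQL analogue of an Elasticsearch terms (bucketing) aggregation
-- with a nested avg (metric) sub-aggregation: each distinct status value
-- forms a bucket, and the metric is computed per bucket.
SELECT
    [status]              AS bucket_key,        -- the bucket's key
    COUNT(*)              AS doc_count,         -- documents falling into the bucket
    AVG(response_time_ms) AS avg_response_time  -- the per-bucket metric
FROM dbo.LogEvents
GROUP BY [status];
```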

Azure Functions Basics

Vincent-Philippe Lauzon explains the basics of Azure Functions:

In general, serverless refers to an economic model where we pay for the compute resources we use, as opposed to paying for “servers”.

Wait…  isn’t that what the Cloud is about?

Well, yes, on a macro-scale it is, but serverless brings it to a micro-scale.

In the cloud we can provision a VM, for example, run it for 3 hours and pay for 3 hours.  But we can’t pay for 5 seconds of compute on a VM because it won’t have time to boot.

A lot of compute services have a “server-full” model.  In Azure, for instance, a Web App comes with a number of instances.  Each instance has a VM associated with it.  We do not manage that VM, but we pay for its compute regardless of the number of requests it processes.

In a serverless model, we pay for micro-transactions.

This is the first part in a series and is aimed at giving a conceptual explanation.

Conditional Formatting With Power BI Line Charts

Daniil Maslyuk shows how to perform conditional formatting on a line chart in Power BI:

Have you ever wished you could change the line colour depending on the overall trend? For example, if your sales increase over time, the line is green; if there is a decline, then the line is red. While this functionality is not yet natively available in Power BI Desktop, it does not mean this cannot be done! In this article, I am going to show you how to achieve this effect.

Read on to see how he does it.

Source Control In SQL Operations Studio

Drew Furgiuele reminds me why I prefer Mercurial over Git:

Of course, this just scratches the surface of using source control and Git. Now that you have a working example, I encourage you to read more, especially about branching and merging and your inevitable merge conflicts. I think branching is incredibly important if you like to tweak your scripts, because it gives you a separate copy of your code to work on and test; when you’re satisfied, you merge your changes back to your master branch (or trunk).

Once you get the basics down, it’s really easy to start contributing to other projects too! You could fork a project, work on it, and then submit a pull request to the owners to add features. Or maybe someone will find code you made available and do the same. It’s amazing. At a very minimum, this will get you up to speed on how you can introduce an SCM practice for SQL-related scripts at your workplace if you don’t have one, or how you can start using it if you do (and don’t already).

Click through to see how Drew integrates Git with SQL Operations Studio.  Spoilers:  it’s pretty easy, given the relationship between SqlOps and Visual Studio Code.

Against Simply Closing Connections With Rollback

Kenneth Fisher shows how to close all database connections and immediately roll them back, but then explains why you shouldn’t default to that:

But, let’s take a minute and think about this a bit closer. Should we? Should we dump everyone out of the database? Are you sure?

  • Well, it’s dev and we’ve talked to the development team. No one is doing anything right now.
  • It’s prod so we’ve been extra careful. We arranged an outage with the business and confirmed that now is a good time.

Well, in theory, we’ve been told no one is using the database. However, that’s theory; this is reality, and in fact someone is in the database. But what are they doing? Is it just a leftover connection? Did someone leave a query window open and forget about it? Does the application create a connection and hold on to it 24×7 even when it’s not doing anything?

Click through for more thoughts on the topic.
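For reference, the pattern under discussion (whether or not it matches Kenneth’s exact script) is to flip the database into single-user mode, rolling back everyone else’s open transactions immediately, and then flip it back when you are done. The database name below is a placeholder:

```sql
-- Kick every other session out of the database and immediately roll back
-- their open transactions. Apply the caution Kenneth describes first.
ALTER DATABASE [YourDatabase] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
GO

-- ...perform the restore, schema change, or other exclusive work...

-- Let everyone back in.
ALTER DATABASE [YourDatabase] SET MULTI_USER;
GO
```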

Getting Wait Stats From The Query Store

Erin Stellato shows how to retrieve wait stats per query from Query Store:

In SQL Server 2016 a new DMV is exposed, sys.dm_exec_session_wait_stats, which provides information about waits for an existing, active session. If you know the session_id, you can track waits for a query when it starts and when it completes (snapshot the information at the beginning and end of the query and then diff the information). The challenge is that you have to know the session_id for the query, and you have to set up data capture in advance – which isn’t trivial when you’re in the midst of a high priority issue.

Wait statistics information exists in an actual execution plan starting in SQL Server 2016 SP1. Only the top 10 waits are captured, and there are limitations in terms of what this data represents. For example, CXPACKET is ignored and not included in the output, but it will be included in 2016 SP2 and 2017 CU3 and up – where irrelevant parallelism waits are instead captured by CXCONSUMER (which will not be included in actual plan waits).

So how can we see what a specific query is truly waiting on? We can use Query Store! SQL Server 2017 includes the capture of wait statistics information in Query Store, and this functionality is also available in Azure SQL Database. Wait statistics are tied to a query plan, and are captured over time, just like the runtime statistics. The addition of wait statistics information in Query Store was the number one feature request after its initial release, and all that information together creates powerful troubleshooting capabilities.

Read on for a demo-filled post.  It’s hard to overstate just how useful this information is for query tuning.
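To make the snapshot-and-diff technique concrete, here is a minimal sketch, run from the session you are measuring (the temp table name is arbitrary):

```sql
-- Snapshot the session's cumulative waits before running the query.
SELECT wait_type, wait_time_ms
INTO #waits_before
FROM sys.dm_exec_session_wait_stats
WHERE session_id = @@SPID;

-- ...run the query you want to measure...

-- Diff: what accumulated while the query ran.
SELECT w.wait_type,
       w.wait_time_ms - ISNULL(b.wait_time_ms, 0) AS wait_time_ms_delta
FROM sys.dm_exec_session_wait_stats AS w
LEFT JOIN #waits_before AS b
    ON w.wait_type = b.wait_type
WHERE w.session_id = @@SPID
ORDER BY wait_time_ms_delta DESC;
```

And once wait statistics land in Query Store (SQL Server 2017 and Azure SQL Database), a query along these lines pulls top wait categories per query text from the catalog views. This is a sketch of the general shape, not Erin’s exact query:

```sql
SELECT TOP (20)
    qt.query_sql_text,
    ws.wait_category_desc,
    SUM(ws.total_query_wait_time_ms) AS total_wait_ms
FROM sys.query_store_wait_stats AS ws
JOIN sys.query_store_plan AS p
    ON ws.plan_id = p.plan_id
JOIN sys.query_store_query AS q
    ON p.query_id = q.query_id
JOIN sys.query_store_query_text AS qt
    ON q.query_text_id = qt.query_text_id
GROUP BY qt.query_sql_text, ws.wait_category_desc
ORDER BY total_wait_ms DESC;
```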

Design Concepts: Cognitive Load

Meagan Longoria explains the concept of cognitive load and how it relates to building reports:

Our first design concept is cognitive load, which comes from cognitive psychology and instructional design. Cognitive Load Theory says that when we present our audience with information, we are asking them to use brain power to process it. That brain power (aka working memory) is limited, so we need to be intentional about the information we present to them.

In order to commit information to memory and be able to recall it later, that information must go through sensory memory and working memory and then be encoded into long-term memory.

This concept drives a lot of good advice in dashboard and report design, particularly the idea that if an element does not directly help a person learn the important information you are trying to convey, it’s not worth having on the report or dashboard.
