
Month: October 2018

How Join Hints Affect Adaptive Joins

Grant Fritchey looks at the combination of adaptive joins and query hints which specify join type:

I’ve highlighted the interesting bit. “Actual Number of Locally Aggregated Rows” is part of aggregation push down, explained by the amazing Niko Neugebauer here and here. Basically, the aggregation is occurring with the data access. So while there is a Hash Match operator for the aggregation, the active part of the aggregation was actually performed within the columnstore. That’s why the Actual Number of Rows coming out of the columnstore index itself is 0, but the number of rows coming out of the Hash Match Aggregate is 441.

So… why not another aggregate push down when we used the hint? Because the hint says we MUST use a hash join. At that point the optimizer has no choice about where, when, or how it does data processing. It must, first, ensure that a hash join is used, so it does. First thing out of the gate, hash join. Then a hash aggregate. This difference in behavior results in a 24% decrease in performance. The only interesting thing is that the reads remained consistent. This means that it was just the processing of the join that added overhead.

Read the whole thing.
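For reference, here is a minimal sketch of the kind of contrast Grant describes, using hypothetical table names rather than anything from his post. The first query leaves the optimizer free to choose an adaptive join and push the aggregation into the columnstore scan; the second forces the physical join type with a join hint.

```sql
-- Hypothetical tables; dbo.SalesCS is assumed to have a clustered columnstore index.

-- 1) No hint: the optimizer may pick an adaptive join and push the aggregation
--    down into the columnstore scan.
SELECT s.ProductID, COUNT(*) AS SaleCount
FROM dbo.SalesCS AS s
    JOIN dbo.Products AS p
        ON p.ProductID = s.ProductID
GROUP BY s.ProductID;

-- 2) Forcing the physical join type removes that flexibility.
SELECT s.ProductID, COUNT(*) AS SaleCount
FROM dbo.SalesCS AS s
    INNER HASH JOIN dbo.Products AS p
        ON p.ProductID = s.ProductID
GROUP BY s.ProductID;
```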


Running Totals In Power BI With M

Imke Feldmann gives us a reason to use M to calculate running totals:

Today I want to share a scenario where a running total calculation in the query editor saved a model that ran out of memory when done with DAX:

Problem

The model couldn’t be refreshed and returned an out-of-memory error with a calculated column in the fact table of over 20 million rows (from a CSV file). A running total had to be calculated for each “JourneyID”, of which there were over 1 million in the table itself. This raised memory consumption during refresh by over 300% – until it finally errored out.

Click through for the solution.
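I don’t know the exact shape of Imke’s solution, but a bare-bones query-editor pattern for a per-group running total looks something like this (the column names JourneyID and Value are made up for illustration; her post covers a version tuned for large tables):

```
let
    Source = SomeFactTable,    // placeholder for the actual csv source
    Grouped = Table.Group(
        Source,
        {"JourneyID"},
        {{"Rows", each
            let
                vals = [Value],
                // build {v1, v1+v2, v1+v2+v3, ...} for this JourneyID
                runTot = List.Skip(
                    List.Accumulate(vals, {0}, (state, current) => state & {List.Last(state) + current}),
                    1),
                withRT = Table.FromColumns(
                    Table.ToColumns(_) & {runTot},
                    Table.ColumnNames(_) & {"RunningTotal"})
            in
                withRT,
            type table}}
    ),
    Result = Table.Combine(Grouped[Rows])
in
    Result
```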


When Disabled SQL Agent Jobs Still Run

David Fowler troubleshoots a nasty issue with the SQL Server Agent:

In a previous blog post, Duplicate Agent jobs – A good reason not to meddle with Msdb, I explained a situation where someone was updating msdb tables manually rather than using the supplied system stored procedures such as msdb.dbo.sp_update_job. It would seem that this was not the only occasion where I would find myself in the midst of the meddlers’ medley.

This time around the meddler decided to disable the job using an update statement against the msdb.dbo.sysjobs table, setting enabled from 1 to 0 (you have no idea how long it took me to work that out!), but this led me on to discovering more about the SQL Server Agent and its general behaviour.

Short form:  use the built-in procedures to modify SQL Agent jobs rather than going off and updating tables on your own.  Click through for the long form, with plenty of interesting details.
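For example, the supported way to disable (and re-enable) a job is a one-liner; the job name below is obviously hypothetical:

```sql
-- Disable a job through the supported interface rather than updating msdb.dbo.sysjobs directly.
EXEC msdb.dbo.sp_update_job
    @job_name = N'Nightly ETL',
    @enabled  = 0;

-- Re-enable it the same way.
EXEC msdb.dbo.sp_update_job
    @job_name = N'Nightly ETL',
    @enabled  = 1;
```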


Natural Keys?

Steve Jones wonders if we should give up on natural primary key constraints:

One of the things I think is important in modeling your particular entity is including a primary key (PK). In my DevOps talk I stress this, as I’d rather most attendees come away thinking a PK is important as their first takeaway from the session. There are exceptions, but they are rare, and I would prefer that most tables just have some PK included from the beginning.

A PK ought to be stable as well, and there are plenty of written words about how to pick the PK for your particular problem domain. Often I have received the advice that natural keys are preferred over surrogate keys, and it is worth the effort to try and identify a suitable column (or set of columns) that will guarantee uniqueness. I think that’s good advice, and it’s also advice I tend to ignore.

Read on for Steve’s reasoning.  I tend to use surrogate keys out of habit, though I do prefer to put unique key constraints on natural keys to help me reason through data models.
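That pattern (a surrogate primary key plus a unique constraint on the natural key) looks roughly like this; the table and columns are invented for illustration:

```sql
CREATE TABLE dbo.Customer
(
    CustomerID   int IDENTITY(1, 1) NOT NULL,   -- surrogate key
    TaxNumber    varchar(20)        NOT NULL,   -- candidate natural key
    CustomerName nvarchar(200)      NOT NULL,
    CONSTRAINT PK_Customer PRIMARY KEY CLUSTERED (CustomerID),
    CONSTRAINT UQ_Customer_TaxNumber UNIQUE (TaxNumber)   -- keeps the natural key honest
);
```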


Power BI: Datasets, Reports, And Dashboards

Eugene Meidinger teams up with Bert Wagner to teach Power BI using a food metaphor:

A Power BI Dataset is a series of Power Query queries that have been shaped in a DAX model. Each dataset can combine different files, database tables and online services all into one tabular model.  In our cookie analogy, these are all different “ingredients”.

Unlike SSRS, a dataset in Power BI does not represent a single table or query of data. A dataset should be considered more like a “flavor” of data used to accomplish a specific type of reporting: financial, operational, HR, etc. So in our analogy, the dataset is the “raw dough”.

So in Power Query, you are going to have a set of queries which each combine a data source with a usually linear set of transformations.

The pie chart cookie is the first one I would have eaten, if only to eliminate it.


Writing Higher-Order Functions With Scala

Jyoti Sachdeva explains the concept of higher-order functions and shares an example in Scala:

In this blog, I’m going to explain higher-order functions.

A higher-order function takes other functions as parameters or returns a function as a result.

This is possible because functions are first-class values in Scala. What does that mean?

It means that functions can be passed as arguments to other functions, and functions can return other functions.

The map function is a classic example of a higher-order function.

Higher-order functions are one of the key components of functional programming and allow us to reason in small chunks at a time.
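A quick, self-contained illustration of both directions (passing a function in, and returning one):

```scala
object HigherOrderDemo {
  def main(args: Array[String]): Unit = {
    // map is a higher-order function: it takes the function x => x * 2 as a parameter.
    val doubled: List[Int] = List(1, 2, 3).map(x => x * 2)
    println(doubled) // List(2, 4, 6)

    // A function can also return another function.
    def multiplier(factor: Int): Int => Int = (x: Int) => x * factor

    val triple = multiplier(3)
    println(triple(5)) // 15
  }
}
```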


Building Observable Distributed Systems

Kevin Sookocheff has some thoughts on building observable systems:

Given the shortcomings of monitoring and testing, we should shift focus to building observable systems. This means treating observability of system behaviour as a primary feature of the system being built, and integrating this feature into how we design, build, test, and maintain our systems. This also means acknowledging that the ease with which we can debug our production environment will be a key indicator of system reliability, scalability, and ultimately customer experience. Designing a system to be observable requires effort from three disciplines of software development: development, testing, and operations. None of these disciplines is more important than the others, and the sum of them is greater than the value of the individual parts. Let’s take some time to look at each discipline in more detail, with a focus on observability.

My struggle has never been with the concept, but rather with getting the implementation details right.  “Make everything observable” is great until you run out of disk space because you’re logging everything.


Redshift Architecture Performance Tips

John Ryan has a few hints to help us build speedy Redshift clusters:

The Need to Vacuum

As Redshift does not reclaim free space automatically, update and delete operations can frequently lead to table growth. Equally, it’s important, as new entries are added, that the data is maintained in a sorted sequence.

The VACUUM command is used to re-sequence data, and reclaim disk space as a result of DELETE and UPDATE operations. Although it won’t block other processes, it can be a resource-intensive operation, especially for data stored using interleaved sort keys.

It should be run periodically to ensure consistent performance and to reduce disk usage.

Some of this is good Postgres advice; some of it is good MPP advice (and serves well, for example, when dealing with Azure SQL Data Warehouse); the rest is Redshift-specific.
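For reference, a periodic maintenance pass along the lines John describes might look like this (the table name is made up):

```sql
-- Re-sort rows and reclaim the space left behind by UPDATE and DELETE activity.
VACUUM FULL sales_events;

-- Refresh the planner's statistics after significant data changes.
ANALYZE sales_events;
```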


Scaling Anomaly Detection With Kafka And Cassandra

Paul Brebner has started a series on anomaly detection using Kafka and Cassandra, starting with an introduction:

Let’s look at the application domain in more detail. In the previous blog series on Kongo, a Kafka-focussed IoT logistics application, we persisted business “violations” to Cassandra for future use using Kafka Connect. For example, we could have used the data in Cassandra to check and certify that a delivery was free of violations across its complete storage and transportation chain.

An appropriate scenario for a Platform application involving Kafka and Cassandra has the following characteristics:

  1. Large volumes of streaming data are ingested into Kafka (at variable rates)

  2. Data is sent to Cassandra for long term persistence

  3. Streams processing is triggered by the incoming events in real-time

  4. Historic data is requested from Cassandra

  5. Historic data is retrieved from Cassandra

  6. Historic data is processed, and

  7. A result is produced.

It looks like he’s focusing on changepoint detection, which is one of several good techniques for generalized anomaly detection.  I’ll be interested in following this series.
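As a very rough sketch of steps 1 through 3 in the list above (not Paul’s actual code; the topic, keyspace, and table names are all hypothetical, and it assumes the standard Kafka Java client and the DataStax Java driver 4.x):

```scala
import java.time.Duration
import java.util.Properties
import scala.jdk.CollectionConverters._

import org.apache.kafka.clients.consumer.KafkaConsumer
import com.datastax.oss.driver.api.core.CqlSession

object AnomalyPipelineSketch {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put("bootstrap.servers", "localhost:9092")
    props.put("group.id", "anomaly-detector")
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")

    val consumer = new KafkaConsumer[String, String](props)
    consumer.subscribe(java.util.Collections.singletonList("events")) // step 1: events stream in via Kafka

    val session = CqlSession.builder().build()                        // step 2: long-term store in Cassandra

    while (true) {
      val records = consumer.poll(Duration.ofMillis(500))
      for (record <- records.asScala) {
        // Persist the raw event so historic data can be requested later.
        session.execute(
          "INSERT INTO anomaly.events (id, value) VALUES (?, ?)",
          record.key(), record.value())

        // Step 3: trigger stream processing / anomaly checks on the incoming event here.
      }
    }
  }
}
```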


Cloning And Columnstore Statistics

Niko Neugebauer points out a fix in SQL Server 2019:

I have a huge love for the DBCC CLONEDATABASE command – it has been made available (backported) to every SQL Server version starting with SQL Server 2012, since the original release in SQL Server 2014, while being constantly improved in the Service Packs and Cumulative Updates.

This blog post is focusing on the Database Cloning improvement in SQL Server 2019 that is already available in the public CTP 2.0 – the possibility of automated statistics extraction for the Columnstore Indexes.

Why?

Well, there was quite a significant problem with the Columnstore Indexes previously – the statistics for them were not extracted into the cloned database unless you created the statistics in the most recent step before database cloning.

Click through for more details and a comparison between SQL Server versions.
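For reference, the command itself is straightforward (the database names here are placeholders):

```sql
-- Creates a schema-and-statistics-only copy of the source database.
DBCC CLONEDATABASE (SalesDB, SalesDB_Clone);
```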
