Press "Enter" to skip to content

December 27, 2019

How Spark Runs on YARN with HDFS

Sarfaraz Hussain explains how some of the pieces of the Hadoop ecosystem fit together:

Once it verifies that everything is in place, it will assign a Job ID to the job and then place that Job ID in the Job Queue.

So, in the Job Queue there can be multiple jobs waiting to be processed.

As soon as a job is assigned to the Job Queue, its corresponding information, such as the input/output paths and the location of the JAR, is written into a temp location in HDFS.
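To make that flow concrete, here is a minimal sketch of a Spark job whose submission would go through the steps described above. This is my own illustration, not code from the linked post; the application name and HDFS paths are hypothetical.

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class ErrorFilterJob {
        public static void main(String[] args) {
            // Submitted with: spark-submit --master yarn --deploy-mode cluster ...
            // YARN assigns the application an ID and places it in a queue; the JAR
            // location and the I/O paths below are among the job metadata staged
            // in a temp location in HDFS (typically .sparkStaging/<application id>).
            SparkSession spark = SparkSession.builder()
                    .appName("ErrorFilterJob")
                    .getOrCreate();

            Dataset<Row> lines = spark.read().text("hdfs:///data/in/events");  // hypothetical input
            lines.filter(lines.col("value").contains("ERROR"))
                 .write()
                 .text("hdfs:///data/out/errors");                             // hypothetical output

            spark.stop();
        }
    }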

Read the whole thing.


Testing Kafka Streams

Jukka Karvanen takes us through TopologyTestDriver in Apache Kafka Streams:

Similarly to testing a single record, it is possible to pipe value lists into a TestInputTopic. Validating the output can be done record by record as before, or by using the readValuesToList() method to see the big picture when validating the whole collection at the same time. For our example topology, when the test pipes in the values, it needs to validate the keys and use the readKeyValuesToList() method.
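To see those pieces together, here is a minimal, self-contained sketch; it is my own example rather than Jukka's, and the topology and topic names are made up.

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.common.serialization.StringDeserializer;
    import org.apache.kafka.common.serialization.StringSerializer;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.TestInputTopic;
    import org.apache.kafka.streams.TestOutputTopic;
    import org.apache.kafka.streams.TopologyTestDriver;
    import org.apache.kafka.streams.kstream.Consumed;
    import org.apache.kafka.streams.kstream.Produced;

    import java.util.List;
    import java.util.Properties;

    public class UppercaseTopologyTest {
        public static void main(String[] args) {
            // Hypothetical topology under test: uppercase every value.
            StreamsBuilder builder = new StreamsBuilder();
            builder.stream("input-topic", Consumed.with(Serdes.String(), Serdes.String()))
                   .mapValues(v -> v.toUpperCase())
                   .to("output-topic", Produced.with(Serdes.String(), Serdes.String()));

            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "topology-test");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "dummy:1234"); // never contacted

            try (TopologyTestDriver driver = new TopologyTestDriver(builder.build(), props)) {
                TestInputTopic<String, String> input = driver.createInputTopic(
                        "input-topic", new StringSerializer(), new StringSerializer());
                TestOutputTopic<String, String> output = driver.createOutputTopic(
                        "output-topic", new StringDeserializer(), new StringDeserializer());

                // Pipe a whole list of values in, then validate the collection at once.
                input.pipeValueList(List.of("a", "b", "c"));
                System.out.println(output.readValuesToList()); // prints [A, B, C]
            }
        }
    }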

Testing streams of data (regardless of the product) is enough of a mental shift that it’s not an easy problem. For that reason, I welcome any tool which simplifies the process.


Auditing the Power BI Activity Log

Brett Powell shows how to query the Power BI Activity Log:

The new Power BI Activity Log makes it much easier for Power BI administrators to access audit log data to monitor and analyze activities occurring within the tenant they support. This blog post describes one solution for accessing, processing, and loading Power BI activity log data as part of an automated process to support ongoing Power BI administrative analysis and reporting.

Brett has a very nicely defined solution and lays it out step by step for us.
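Brett's solution is built on the Power BI Management PowerShell module; purely to illustrate the underlying REST call it wraps, here is a rough sketch of pulling one day of events. The token acquisition is elided, the environment variable name is my own, and the activity log API takes a single UTC day per request.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ActivityLogPull {
        public static void main(String[] args) throws Exception {
            String token = System.getenv("PBI_ACCESS_TOKEN"); // pre-acquired Azure AD token (assumed)

            // Start and end must fall on the same UTC day, quoted as shown.
            String url = "https://api.powerbi.com/v1.0/myorg/admin/activityevents"
                       + "?startDateTime='2019-12-27T00:00:00Z'"
                       + "&endDateTime='2019-12-27T23:59:59Z'";

            HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                    .header("Authorization", "Bearer " + token)
                    .GET()
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());

            // The JSON body holds activityEventEntities plus a continuation URI;
            // a real pipeline would loop until the continuation is exhausted.
            System.out.println(response.body());
        }
    }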


Columnstore Versus Page Compression

Aaron Bertrand compares columnstore and page compression on a specific table:

Recently someone at work asked for more space to accommodate a rapidly growing table. At the time it had 3.75 billion rows, presented on 143 million pages, and occupying ~1.14TB. Of course we can always throw more disk at a table, but I wanted to see if we could scale this more efficiently than the current linear trend. Sounds like a great job for compression, right? But I also wanted to try out some other solutions, including columnstore – which people are surprisingly reluctant to try. I am no Niko, but I wanted to make an effort to see what it could do for us here.

Note that I am not focusing on reporting workload or other read query performance at this time – I merely want to see what impact I can have on storage (and memory) footprint of this data.

Here is the original table. I’ve changed table and column names to protect the innocent, but everything else is relatively accurate.

Page compression won, and I’ve got a pretty good idea why (though some of the diagnostic info is gone): Aaron has several VARCHAR and NVARCHAR columns, and those blow up the columnstore dictionary pretty fast. Aaron has more to go in this series, so stay tuned.
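If you want to run a scaled-down version of the comparison yourself, the heart of it is just two storage options applied to copies of the same table. Here is a minimal sketch via JDBC, with a hypothetical connection string and table names (and assuming the columnstore copy starts as a heap):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class CompressionComparison {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:sqlserver://localhost;databaseName=Sandbox;integratedSecurity=true");
                 Statement stmt = conn.createStatement()) {

                // Option 1: rebuild one copy with page compression.
                stmt.execute("ALTER TABLE dbo.BigTable_Page REBUILD WITH (DATA_COMPRESSION = PAGE);");

                // Option 2: convert the other copy to a clustered columnstore index.
                stmt.execute("CREATE CLUSTERED COLUMNSTORE INDEX CCI_BigTable ON dbo.BigTable_CS;");

                // Compare the storage footprint of each copy.
                for (String t : new String[] { "dbo.BigTable_Page", "dbo.BigTable_CS" }) {
                    try (ResultSet rs = stmt.executeQuery("EXEC sp_spaceused '" + t + "';")) {
                        while (rs.next()) {
                            System.out.printf("%s: reserved=%s, data=%s%n",
                                    t, rs.getString("reserved"), rs.getString("data"));
                        }
                    }
                }
            }
        }
    }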


Auditing Index Changes

Jason Brimhall gives us a solution to track index changes:

In the previous article, I showed a quick and dirty method to try to capture indexes that may have just been created. We all know that in the hectic life of the DBA, it is easy enough to forget about indexes created more than just a few hours ago.

In that article, I mentioned that the solution provided was far from a complete solution to the entire problem. A bigger part of the picture involves audits to know when and by whom indexes were created, altered or dropped.

Keeping track of this kind of thing is important, particularly in an environment where performance suddenly changes even though the devs totally didn’t touch anything, promise.
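Jason lays out his own auditing solution in the article. As a generic illustration of the concept (not his implementation), a database-scoped DDL trigger can record who created, altered, or dropped an index and when; all object names here are made up.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class IndexAuditSetup {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:sqlserver://localhost;databaseName=Sandbox;integratedSecurity=true");
                 Statement stmt = conn.createStatement()) {

                // Audit table: one row per index DDL event.
                stmt.execute(
                    "CREATE TABLE dbo.IndexChangeAudit ("
                  + "  EventTime datetime2 NOT NULL DEFAULT SYSDATETIME(),"
                  + "  LoginName sysname   NOT NULL DEFAULT ORIGINAL_LOGIN(),"
                  + "  EventXml  xml       NOT NULL);");

                // DDL trigger that fires on index create/alter/drop and logs EVENTDATA().
                stmt.execute(
                    "CREATE TRIGGER trg_AuditIndexChanges ON DATABASE "
                  + "FOR CREATE_INDEX, ALTER_INDEX, DROP_INDEX "
                  + "AS INSERT INTO dbo.IndexChangeAudit (EventXml) VALUES (EVENTDATA());");
            }
        }
    }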


Costs and Benefits of Recompilation

Erik Darling takes us through the pros and cons of slapping that RECOMPILE label on a query:

It’s been a while since SQL Server has had a real RECOMPILE problem. And if you put it up against the performance problems that you can hit with parameter sniffing, I’d have a hard time telling someone strapped for time and knowledge that it’s the worst idea for them.

Obviously, you can run into problems if you (“you” includes Entity Framework, AKA the Database Demolisher) author the kind of queries that take a very long time to compile. But as I list them out, I’m kinda shrugging.

My rule of thumb is that WITH RECOMPILE isn’t the first answer, but I won’t mess around too long before going to it.
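One common form is the statement-level OPTION (RECOMPILE) hint, which is easy to scope to a single troublesome query. A tiny hypothetical example (the table and parameter are mine):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class RecompileExample {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:sqlserver://localhost;databaseName=Sandbox;integratedSecurity=true")) {

                // OPTION (RECOMPILE) builds a fresh plan on every execution using the
                // actual parameter value, so the plan can never be the victim (or the
                // beneficiary) of parameter sniffing, at the cost of compile-time CPU.
                String sql = "SELECT COUNT(*) AS cnt FROM dbo.Orders "
                           + "WHERE CustomerID = ? OPTION (RECOMPILE);";

                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    ps.setInt(1, 42); // hypothetical customer
                    try (ResultSet rs = ps.executeQuery()) {
                        if (rs.next()) System.out.println(rs.getInt("cnt"));
                    }
                }
            }
        }
    }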


Monitoring Oracle on Azure

Kellyn Pot’vin-Gorman covers several tools which are available for working with Oracle databases in Azure:

The Oracle SQL Developer product has come a long way since its inception, and much of that credit needs to go to the incredible team at Oracle, including those who are prevalent in the Oracle community, like Jeff Smith, Kris Rice, and Ashley Chen. Their willingness to listen to the Oracle community and turn its needs into features has been one of the critical reasons for the product's success.

Although this product, unlike the previous three, is focused more toward the developer, I want to point out a few areas that will hopefully convince you there are more similarities than differences.

The shortest version of this is "the same tools as exist on-prem," but if you don't know that answer, Kellyn's got you covered.


Recovering From Parsing NaN Values

Slava Murygin shows how you can recover from a nasty TRY_PARSE error:

In my previous post, I outlined a very dangerous SQL Server problem caused by the usually harmless commands PARSE and TRY_PARSE: having a "NaN" value for REAL or FLOAT produces a severe error.

This post will be about a major issue it causes and how to fight it.

We will start by generating the problem.

Attention: Do not run that in production!!!

Slava’s serious about not running the code in a real environment.
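I won't include the repro here. Purely as a defensive sketch (a generic guard of my own, not necessarily Slava's fix), you can screen out the special floating-point tokens before they ever reach TRY_PARSE:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class SafeFloatParse {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:sqlserver://localhost;databaseName=Sandbox;integratedSecurity=true")) {

                // Map NaN/Infinity tokens to NULL rather than letting TRY_PARSE
                // materialize a value that SQL Server's FLOAT type cannot safely handle.
                String sql = "SELECT CASE WHEN ? IN ('NaN', '-NaN', 'Infinity', '-Infinity') "
                           + "THEN NULL ELSE TRY_PARSE(? AS float) END AS parsed;";

                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    ps.setString(1, "NaN"); // hypothetical incoming value
                    ps.setString(2, "NaN");
                    try (ResultSet rs = ps.executeQuery()) {
                        if (rs.next()) System.out.println(rs.getObject("parsed")); // null
                    }
                }
            }
        }
    }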
