Press "Enter" to skip to content

Day: November 7, 2017

How Per-Second AWS Billing Helps With Data Processing

Prakash Chockalingam explains how AWS per-second billing can make resource allocation easier:

Because of the hourly increments in billing, users spend a lot of time playing a giant game of Tetris with their big data workloads — figuring out how to pack jobs to use every minute of the compute hour. Examples:

  • If a job could run on 10 nodes and finish in 20 minutes, it was better to run it on fewer nodes so that it took around 50 minutes. As a result, you would pay less.
  • Running two 10-node, 20-minute jobs in parallel would cost twice as much as running them sequentially.

The above problem was compounded if there were many such jobs to run. To handle this challenge, many organizations turned to a resource scheduler like YARN, following the traditional on-premises model of setting up one or more big multi-tenant clusters in the cloud and running YARN to bin-pack the different jobs.
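
To make the hourly-billing arithmetic concrete (the numbers here are illustrative, not from the article): with hourly increments, a 20-minute job on 10 nodes is billed as 10 full node-hours, while the same 200 node-minutes of work spread over 4 nodes for 50 minutes is billed as only 4 node-hours, so the slower shape is cheaper. With per-second billing, both shapes cost about 3.3 node-hours (10 nodes × 1,200 seconds = 4 nodes × 3,000 seconds = 12,000 node-seconds), so there is no longer a payoff for stretching jobs to fill the hour.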

Read on to see how this has changed as a result of per-second billing.


Tracking Kafka Consumer Lag

Simarpreet Kaur Monga has a Scala-based example showing how to calculate Kafka offset lag for consumers:

The consumer can subscribe to multiple topics; you need to pass the list of topics you want to consume from. For the sake of simplicity, I have passed just a single topic to consume from.

Now that the consumer has subscribed to the topic, it can consume from that topic.

The consumer maintains an offset to keep track of the next record it needs to read.

Now, let us see how we can find the consumer offsets.

Consumer offsets can be found using the offset method of the ConsumerRecord class. This offset points to the record in a Kafka partition. The consumer consumes records from the topic in the form of ConsumerRecord objects. The ConsumerRecord class also contains the topic name and partition number from which the record was received, as well as a timestamp as marked by the corresponding ProducerRecord (the record sent by the producer).
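
As a rough sketch of how the offset and lag numbers can be pulled out of the consumer API (the broker address, group id, and topic name are placeholders, this is not Simarpreet's exact code, and poll(Duration) assumes kafka-clients 2.0 or later):

    import java.time.Duration
    import java.util.Properties
    import scala.collection.JavaConverters._
    import org.apache.kafka.clients.consumer.KafkaConsumer

    object ConsumerLagSketch extends App {
      val props = new Properties()
      props.put("bootstrap.servers", "localhost:9092")   // placeholder broker
      props.put("group.id", "lag-demo")                   // placeholder consumer group
      props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
      props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")

      val consumer = new KafkaConsumer[String, String](props)
      consumer.subscribe(List("my-topic").asJava)         // a single topic, for simplicity

      // Poll once so partitions are assigned and some records come back.
      val records = consumer.poll(Duration.ofSeconds(1))
      records.asScala.foreach { r =>
        println(s"partition=${r.partition()} offset=${r.offset()}")
      }

      // Lag per partition = log-end offset minus the consumer's current position.
      val endOffsets = consumer.endOffsets(consumer.assignment()).asScala
      endOffsets.foreach { case (tp, end) =>
        println(s"$tp lag=${end.longValue() - consumer.position(tp)}")
      }

      consumer.close()
    }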

Click through for the rest of the story.


Soft-NUMA Doesn’t Limit MAXDOP

Lonny Niederstadt tests whether soft-NUMA forces MAXDOP = 1:

I mentioned that I was planning to set up a soft-NUMA node for each vcpu on a 16 vcpu VM, to evenly distribute incoming connections and thus DOP 1 queries over vcpus.  Thomas Kejser et al. used this strategy to good effect in “The Data Loading Performance Guide”, which used SQL Server 2008 as a base.
https://technet.microsoft.com/en-us/library/dd425070(v=sql.100).aspx

My conversation partner cautioned me that leaving this soft-NUMA configuration in place after the specialized workload would result in DOP 1 queries whether I wanted them or not.  The claim was, effectively, a parallel query plan generated by a connection within a soft-NUMA node would have its MAXDOP restricted by the scheduler count (if lower than other MAXDOP contributing factors).  Though I wasn’t able to test at the time, I was skeptical: I’d always thought that soft-NUMA was consequential to connection placement, but not to MAXDOP nor to where parallel query workers would be assigned.

I’m back home now… time to test!!

Read on for the test.
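
If you want to check the claim on your own instance, a minimal sketch might look something like this (it assumes SQL Server 2016 or later, where sys.dm_exec_requests exposes a dop column, and dbo.SomeBigTable is a placeholder for any table large enough to get a parallel plan):

    -- 1. How many soft-NUMA nodes does the instance see, and how many schedulers are in each?
    SELECT node_id, node_state_desc, online_scheduler_count
    FROM sys.dm_os_nodes
    WHERE node_state_desc NOT LIKE '%DAC%';

    -- 2. In another session, run something that should get a parallel plan, for example:
    --    SELECT COUNT_BIG(*)
    --    FROM dbo.SomeBigTable AS a
    --    JOIN dbo.SomeBigTable AS b ON a.id = b.id
    --    OPTION (MAXDOP 8);

    -- 3. While it runs, check the degree of parallelism the request actually received.
    SELECT session_id, status, command, dop
    FROM sys.dm_exec_requests
    WHERE session_id <> @@SPID
      AND session_id > 50;   -- skip system sessions

If soft-NUMA really capped MAXDOP at the node's scheduler count, the dop column would show 1; if Lonny's skepticism is right, it would show the requested degree.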


Picking Azure VM Sizes

Glenn Berry helps us pick the right-sized Azure VM for a SQL Server installation:

A common issue with Azure VM sizing for SQL Server has been that you were often forced to select a VM size with far more virtual CPU cores than you needed or wanted in order to get enough memory and storage performance to support your workload, which increased your monthly licensing cost.

Luckily, Microsoft has recently made the decision process a little easier for SQL Server with a new series of Azure VMs that use some particular VM sizes (DS, ES, GS, and MS), but reduce the vCPU count to one quarter or one half of the original VM size, while maintaining the same memory, storage, and I/O bandwidth. These new VM sizes have a suffix that specifies the number of active vCPUs, to make them easier to identify.

For example, a Standard_DS14v2 Azure VM would have 16 vCPUs, 112GB of RAM, and support up to 51,200 IOPS or 768MB/sec of sequential throughput (according to Microsoft). A new Standard_DS14-8v2 Azure VM would only have 8 vCPUs, with the same memory capacity and disk performance as the Standard_DS14v2, which would reduce your SQL Server licensing cost per year by 50%. Both of these Azure VM SKUs would have the same ACU score of 160.
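
To put rough numbers on that (the per-core price is purely hypothetical, for illustration only): at, say, $7,000 per core, licensing the 16 vCPUs of a Standard_DS14v2 would run about $112,000, while the constrained Standard_DS14-8v2 would run about $56,000 for the same 112GB of RAM and 51,200 IOPS.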

Glenn is, as always, a font of useful information.  Go read the whole thing.


Unresolved Reference In Database Project

Ed Elliott explains something fairly straightforward and gives us a primer on using Stack Overflow too:

If you build an SSDT project you can get an error which says:

“SQL71502: Function: [XXX].[XXX] has an unresolved reference to object [XXX].[XXX].”

If the code that is failing is trying to use something in the “sys” schema or the “INFORMATION_SCHEMA” schema then you need to add a database reference to the master dacpac:

Click through for the answer and a comment-by-comment walkthrough from Ed.
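
For context, a function along these lines is the kind of object that can trigger SQL71502 during an SSDT build until that master dacpac reference is added, because sys.objects lives outside the project (this is an illustrative example, not one from Ed's post):

    -- References sys.objects, which the project cannot resolve without a reference to master.dacpac.
    CREATE FUNCTION dbo.ObjectExists (@ObjectName SYSNAME)
    RETURNS BIT
    AS
    BEGIN
        RETURN CASE WHEN EXISTS (SELECT 1 FROM sys.objects WHERE name = @ObjectName)
                    THEN 1
                    ELSE 0
               END;
    END;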


Window Function Sort Performance

Lukas Eder explains one potential issue with window functions against large data sets:

Usually, this blog is 100% pro window functions and advocates using them at every opportunity. But like any tool, window functions come at a price, and we must carefully evaluate if that’s a price we’re willing to pay. That price can be a sort operation. And as we all know, sort operations are expensive. They follow O(n log n) complexity, which must be avoided at all costs for large data sets.

In a previous post, I’ve described how to calculate a running total with window functions (among other ways). In this post, we’re going to calculate the cumulative revenue at each payment in our Sakila database.
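
The query shape under discussion is roughly the following (a sketch against the Sakila payment table, not necessarily Lukas's exact statement):

    -- Cumulative revenue at each payment: a running total over the whole payment table.
    -- The ORDER BY inside OVER() is what can force an expensive sort when no supporting index exists.
    SELECT payment_id,
           payment_date,
           amount,
           SUM(amount) OVER (ORDER BY payment_date, payment_id
                             ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS cumulative_revenue
    FROM payment
    ORDER BY payment_date, payment_id;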

This is a good article comparing how different RDBMS products handle a fairly complicated windowed query and what you can do to improve performance.


Multiple Result Sets With ML Services

Dave Mason figures out how to create multiple result sets with SQL Server ML Services:

Of course for this strategy to work, I’d have to know ahead of time how many data frames/HTML tables there are. Hmmm. Can dynamic T-SQL help me here? If I could find out at run time how many data frames there are, and which ones I may or may not want, then why not? Here’s some R code that reads HTML tables into a variable as a list of data frames (line 8), iterates through the list (starting at line 18), decides if the HTML table has any data in it (lines 21, 24), and adds the HTML table number (the element number in the list) to a different data frame (line 27). The output shows us we would want HTML tables 1, 2, and 4. (Yeah, I really didn’t want #4. But that can be fixed by enhancing the R code to be more selective. Let’s just go with it for now.)
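
A rough sketch of the underlying idea (the URL and the R logic are illustrative placeholders, not Dave's actual code): have one external-script call report which HTML tables contain rows, then build dynamic T-SQL that fetches only the ones you want.

    -- First pass: ask R which HTML tables on the page actually contain data.
    EXEC sp_execute_external_script
        @language = N'R',
        @script = N'
            library(XML)
            tables   <- readHTMLTable("http://example.com/page.html")   # list of data frames
            has_rows <- vapply(tables, function(t) !is.null(t) && nrow(t) > 0, logical(1))
            OutputDataSet <- data.frame(table_number = which(has_rows))
        '
    WITH RESULT SETS ((table_number INT));
    -- A second, dynamically built call could then return just the tables you want, one result set per call.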

The method is a bit disappointing (and it’s arguably worse for inputs); I do hope the ML Services team can improve upon this experience.


Power BI Report Server Scheduled Refresh

Wolfgang Strasser covers Power BI Report Server’s scheduled data refresh ability in a two-part series.  First, he shows how to configure scheduled data refresh:

What needs to be added here is that data refresh can be sped up to one-minute refresh intervals (which is much more frequent than the eight scheduled refreshes per day for powerbi.com Pro users).

When you are done with the schedule configuration, save it, and the definition dialog is updated with the plan information. The last data refresh timestamp and the last status are also displayed.

Just like Reporting Services, data refresh happens via a SQL Server Agent job.  Wolfgang then shows us what’s in the Power BI Report Server’s metadata database:

CAUTION: The PBIRS meta-database is a system database needed for successfully running PBIRS. Do NOT modify objects in that database – a non-functioning PBIRS could be the result!

The table dbo.SubscriptionHistory contains the history of data refresh actions. A SELECT on this table provides you with the following information:

  • Subscription metadata (SubscriptionID plus Type)
  • Execution runtime information (HistoryID, StartTime, EndTime, Status plus Message)
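
A hedged sketch of what such a SELECT might look like (the column names come from the post; verify them against your own instance, and keep the query strictly read-only):

    -- Recent data refresh history from the PBIRS metadata database; never modify anything here.
    SELECT TOP (50)
           sh.SubscriptionID,
           sh.HistoryID,
           sh.StartTime,
           sh.EndTime,
           sh.Status,
           sh.Message
    FROM dbo.SubscriptionHistory AS sh
    ORDER BY sh.StartTime DESC;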

I’m pretty jazzed about this feature.
