Press "Enter" to skip to content

Day: August 24, 2017

Regression Trees And Double Seasonal Time Series Trends

Peter Laurinec walks us through an example of using regression trees to solve a problem with double-seasonal time series data in R:

Classification and regression trees (or decision trees) are a broadly used machine learning method for modeling. They are favorites because of these factors:

  • simple to understand (white box)
  • from a tree we can extract interpretable results and make simple decisions
  • they are helpful for exploratory analysis, as the binary structure of a tree is simple to visualize
  • very good prediction accuracy
  • very fast
  • they can easily be tuned with ensemble learning techniques

But! There is always some “but”: they adapt poorly when new, unexpected situations (values) appear. In other words, they cannot detect and adapt to change or concept drift well (if at all). This is because, during learning, a tree creates only simple rules based on the training data. A simple decision tree does not compute regression coefficients the way linear regression does, so trend modeling is not possible. You might ask, then, why we are talking about time series forecasting with regression trees at all. I will explain how to deal with this in more detail later in this post.

This was a very interesting article.  Absolutely worth reading.  H/T R-Bloggers


Managing Hive Slowly Changing Dimensions

Carter Shanklin shows how to manage Type 1, 2, and 3 slowly changing dimensions in Hive:

The most common SCD update strategies are:

  • Type 1: Overwrite old data with new data. The advantage of this approach is that it is extremely simple, and it is used any time you want an easy way to synchronize reporting systems with operational systems. The disadvantage is that you lose history any time you do an update.

  • Type 2: Add new rows with version history. The advantage of this approach is that it allows you to track full history. The disadvantage is that your dimension tables grow without limit and may become very large. When you use Type 2 SCD you will also usually need to create additional reporting views to simplify the process of seeing only the latest dimension values.

  • Type 3: Add new rows and manage limited version history. The advantage of Type 3 is that you get some version history, but the dimension tables remain at the same size as the source system. You also won’t need to create additional reporting views. The disadvantage is you get limited version history, usually only covering the most recent 2 or 3 changes.

The Hive solution is getting closer and closer to a traditional relational warehouse solution.  And on the whole, that’s a good thing.
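As a rough illustration of the Type 1 (overwrite) strategy, here is a minimal sketch using Hive's ACID MERGE statement. The table and column names are hypothetical rather than Carter's, and the target table needs to be a transactional (ACID) table for this to work:

    -- Type 1 SCD: overwrite matching rows, insert new ones; prior history is lost.
    -- dim_customer and customer_updates are placeholder table names.
    MERGE INTO dim_customer AS t
    USING customer_updates AS s
      ON t.customer_id = s.customer_id
    WHEN MATCHED THEN
      UPDATE SET name = s.name, city = s.city
    WHEN NOT MATCHED THEN
      INSERT VALUES (s.customer_id, s.name, s.city);

Type 2 follows the same pattern, except that instead of updating in place, you expire the current row (for example, by setting an end date) and insert a new versioned row.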


Kafka Connect To Elasticsearch

Robin Moffatt shows how to take data from Kafka Connect and feed it into Elasticsearch:

Whilst Kafka Connect is part of Apache Kafka itself, if you want to stream data from Kafka to Elasticsearch you’ll want the Confluent Open Source distribution (or at least, the Elasticsearch connector).

The configuration is pretty simple. As before, see inline comments for details.

It’s worth noting that if you’re using the same converter throughout your pipelines (Avro, in this case) you’d actually put this in the Connect worker config itself rather than repeating it for each connector configuration.

This is a simple example which shows just how easy it can be.
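For reference, a sketch of what such a sink connector configuration can look like in properties form; the connector name, topic, and URLs here are placeholders rather than Robin's actual settings:

    # Hypothetical Elasticsearch sink connector configuration (placeholder values).
    name=elasticsearch-sink
    connector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
    topics=my-topic
    connection.url=http://localhost:9200
    type.name=kafka-connect
    key.ignore=true
    # Avro converter settings; per Robin's note above, if you use the same
    # converter throughout, these belong in the Connect worker config instead.
    value.converter=io.confluent.connect.avro.AvroConverter
    value.converter.schema.registry.url=http://localhost:8081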


Query Store And Backups

Kendra Little walks us through Query Store portability via database backups:

Query Store was designed to be clever, and to minimize its impact on your performance. Query Store only flushes its data from memory to disk periodically. You get to control this by setting the data flush interval in the Query Store settings for a database. (Read more about this in Microsoft Documentation here.)

The default value for Query Store data flush is 15 minutes. That means that in the case of a crash, you might lose up to around 15 minutes of activity.

Even if you’ve lowered this, you might want to make sure that a backup contains the very latest activity, particularly if you’re taking the backup to get Query Store data for someone to look at.

Read the whole thing.
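If you want to be certain the backup includes the latest activity, one option (a quick sketch; run it in the database you are about to back up) is to check the flush interval and then force Query Store to flush to disk:

    -- Current Query Store state and flush interval, in seconds (default is 900)
    SELECT actual_state_desc, flush_interval_seconds
    FROM sys.database_query_store_options;

    -- Push Query Store's in-memory data to disk immediately
    EXEC sys.sp_query_store_flush_db;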


Rowgroup Elimination In Stored Procedures

Erik Darling notes a parameter sniffing problem when trying to use rowgroup elimination in a stored procedure:

So where are we? Well, we found that Rowgroup Elimination is possible in stored procedures with ColumnStore indexes, but that the cached plan doesn’t change based on feedback from that elimination.

  • Good news: elimination can occur with variables passed in.
  • Bad news: that cached plan sticks with you like belly fat at a desk job

Remember our plan? It used a Stream Aggregate to process the MAX. Stream Aggregates are preferred for small and/or ordered sets.

Great post, Erik.
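For context, the pattern under discussion has roughly this shape; the procedure, table, and column names below are hypothetical stand-ins rather than Erik's actual demo code:

    -- MAX over a clustered columnstore table, filtered on a parameter so that
    -- rowgroup elimination can kick in based on segment min/max metadata.
    CREATE OR ALTER PROCEDURE dbo.GetMaxAmount
        @StartDate DATETIME2
    AS
    BEGIN
        SELECT MAX(s.Amount) AS MaxAmount
        FROM dbo.SalesCCI AS s            -- table with a clustered columnstore index
        WHERE s.SaleDate >= @StartDate;   -- rowgroups outside the range can be skipped
    END;

The catch Erik points out is that the plan cached for the first parameter value sticks around, no matter how much elimination later executions could have benefited from.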


Scale-Up In Azure SQL Database

Arun Sirpal walks us through how to bump up the scale of an Azure SQL Database:

We should all be aware that changing the service tier and/or performance level of a database creates a replica of the original database at the new performance level and then switches connections over to the replica. No data is lost during this process, but during the brief moment when we switch over to the replica, connections to the database are disabled, so some transactions in flight may be rolled back. So it is probably best that you do this during a period of low activity.

It’s pretty easy to do, but heed Arun’s warning, as you can cause queries to fail during switch-over.  My experience has been that scaling up is pretty slow, too:  I seem to recall it taking 10-20 minutes for a moderately sized database.
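If you prefer to script the change rather than click through the portal, one way (a sketch; the database name and target tier below are placeholders) is plain T-SQL:

    -- Scale YourDb to Standard S3; the switch-over happens once the replica
    -- at the new performance level is ready.
    ALTER DATABASE [YourDb]
        MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S3');

    -- Confirm the tier after the operation completes
    SELECT DATABASEPROPERTYEX('YourDb', 'Edition')          AS Edition,
           DATABASEPROPERTYEX('YourDb', 'ServiceObjective') AS ServiceObjective;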


SSIS In Visual Studio 2017

Koen Verbeeck shows how to get SQL Server Integration Services support within Visual Studio 2017:

However, if you wanted to use Visual Studio 2017 you had to wait till today (August 23, 2017). There are several reasons why you want to use VS 2017 over VS 2015:

  • You are one of the cool kids and you use only the latest Visual Studio

  • There’s no TFS Explorer plug-in available for Visual Studio 2015. If you want to install SQL Server Data Tools only (thus without the full-blown Visual Studio), and you wanted TFS integration, you couldn’t use VS 2015. Unless you installed VS 2015 Community Edition (which has its own license issues).

  • You have a brand new laptop and you don’t want to install multiple versions of Visual Studio (the situation I’m currently in).

Read the comments for more details and clarification.


Comparing Baselines Using Power BI

Melissa Connors shows how to take baseline data from SentryOne and visualize it within Power BI:

Using Power BI to connect to multiple baselines in SentryOne allows me to make fast comparisons. I previously translated baseline values into charts manually or through some Excel/SQL Server connections. See this post on data compression as an example. I wanted to compare performance between different compression levels (None, Row, and Page), and include the Average, Minimum, Maximum, and Standard Deviation values. Now, I have a standard template that looks better in Power BI. Once I create a baseline in my database, I have access to it in my charts. If you don’t have Power BI, you can use the query from this post in Excel or another reporting method for your comparisons.

Even if you don’t use SentryOne, the principles are generally applicable.


Thinking About Virtual Log Files

Monica Rathbun has a reminder that Virtual Log Files can be troublesome in excess:

What causes High VLFs?

As transactions force growth of the log file, inappropriate log file sizing or auto-growth settings can cause a high number of VLFs to occur.  Each growth event adds VLFs to the log file.  The more often you grow in conjunction with smaller growth segments, the more VLFs your transaction log will have.

Example

If you grow your log by the default 1 MB, you may end up with thousands of VLFs, as opposed to growing in 1 GB increments. MSDN does a great job of explaining how transaction logs work; for a deeper dive, I recommend reading it.

Read on to see how many VLFs your databases have, as well as how to reduce the number should it grow excessive.
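As a quick sketch of the kind of check involved (Monica's exact queries may differ), you can count VLFs like this:

    -- Returns one row per VLF in the current database's transaction log
    DBCC LOGINFO;

    -- Starting with SQL Server 2017, sys.dm_db_log_info returns the same
    -- information without DBCC:
    SELECT COUNT(*) AS vlf_count
    FROM sys.dm_db_log_info(DB_ID());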
