Press "Enter" to skip to content

Day: July 25, 2017

Kafka Streams Basics

Anuj Saxena walks through Kafka Streams and provides a quick example:

The features provided by Kafka Streams:

  • Highly scalable, elastic, distributed, and fault-tolerant applications.

  • Stateful and stateless processing.

  • Event-time processing with windowing, joins, and aggregations.

  • We can use the most common transformation operations, which are already defined in the Kafka Streams DSL, or the lower-level Processor API, which allows us to define and connect custom processors.

  • Low barrier to entry: it does not take much configuration or setup to run a small-scale trial of stream processing; the rest depends on your use case.

  • No separate cluster requirements for processing (integrated with Kafka).

  • Employs one-record-at-a-time processing to achieve millisecond processing latency, and supports event-time-based windowing operations even when records arrive late.

  • Supports Kafka Connect to connect to different applications and databases.

Read on for more details as well as a sample script to get started.

Comments closed

Random Forests In R

Anish Singh Walia explains the basics of random forests and provides sample code in R:

Random Forests are similar to a famous ensemble technique called bagging, but with a different tweak. In random forests, the idea is to decorrelate the several trees which are generated on different bootstrapped samples from the training data, and then we simply reduce the variance in the trees by averaging them.
Averaging the trees helps us reduce the variance and also improves the performance of decision trees on the test set, eventually avoiding overfitting.

The idea is to build lots of trees in such a way as to make the correlation between the trees smaller.
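
To see why the decorrelation matters, a quick back-of-the-envelope calculation (my own, not from the article): if we average B identically distributed trees, each with variance σ² and pairwise correlation ρ, the variance of the average works out to ρσ² + ((1 − ρ)/B)σ². As B grows, the second term vanishes, so the remaining variance is governed by ρσ²; bagging alone only increases B, while a random forest also shrinks ρ by considering a random subset of predictors at each split, which is why the averaged ensemble ends up with lower variance than bagged trees.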

Random forests frequently give a good answer to classification problems, enough so as to make them a nice starting point.

Comments closed

Using seplyr Instead Of dplyr

John Mount explains seplyr and why it can be better for certain use cases than dplyr:

seplyr is a dplyr adapter layer that prefers “slightly clunkier” standard interfaces (or referentially transparent interfaces), which are actually very powerful and can be used to some advantage.

The above description and comparisons can come off as needlessly broad and painfully abstract. Things are much clearer if we move away from theory and return to our practical example.

Click through for a great example, and also read John’s comment on the Pascal-style assignment operator he uses.

Comments closed

Data Lake Analysis With Excel And Power BI

Sachin C Sheth announces support for Azure Data Lake Store within Excel and Power BI:

Until now, if you had to analyze data stored in ADLS with Excel, you would have to copy it into a relational data store like Azure SQL Data Warehouse or download the data onto a machine, and then use Excel to analyze that data. This was rather cumbersome, involving additional cost and time. With this new support, you can now access files stored in ADLS with Excel in-place, without having to copy them to other stores or locations. You can quickly get advanced insights into raw or prepared data. Models and queries you have created using Excel that ran against local data can be run seamlessly against data stored in ADLS.

Security capabilities of ADLS allow administrators to control access to the data stored in ADLS in a discretionary manner. With this, you can limit the access that Excel users have to the data in ADLS. In this manner, data in the ADLS-based data lake continues to be the single source of truth with no redundant copies and can be analyzed by analytics tools of your own choice.

Click through for a demo video.

Comments closed

Diving Into The Data Lake

Jesse Gorter explains the data lake metaphor:

A data lake is a concept that opposes the idea of a data mart. Where a data mart is a silo with structured and cleansed data, a data lake is a huge data collection that is unstructured and raw. You could also say that a data mart is a bottle of clean water whereas the data lake is the lake with (not so clean) water. 🙂

Now why would you want a data lake? Imagine you are generating huge log files, for example in airplanes: machines that track air pressure, temperature, and so on. If something goes wrong, you definitely want to be alerted. That is event-driven: “if A and B happen, alert pilot, or do C” and there are tools for dealing with that kind of streaming data. But what if the plane landed safely? What do you do with all that data? You do not need it anymore, right?

Well, some people would say: “Wrong.” You might need that data later for reasons you do not know today. Google, Microsoft, and Facebook are all hoarding data, even data they are not sure they might need someday. This data could later prove to be valuable for AI, machine learning, or something else.

Read the whole thing.  The data lake concept is powerful, but it requires at least as much data governance as prior models.  Just because you can dump a bunch of files without thinking about it doesn’t mean you’ll get back something useful later.

Comments closed

Mixing VARCHAR And NVARCHAR

Solomon Rutzky walks through some of the nuance of mixing VARCHAR and NVARCHAR data types with respect to your collation:

There are two types of Collations in SQL Server: SQL Server Collations and Windows Collations:

  • SQL Server Collations (those starting with SQL_) are older Collations that were the only ones available prior to SQL Server 2000. These Collations use simplistic sort orders, and do not handle the great variety of linguistic rules defined by Unicode. In fact, they do not have any Unicode rules defined at all, so N[VAR]CHAR data in these Collations will actually use OS-level Collation rules.

  • Windows Collations (those not starting with SQL_) were introduced in SQL Server 2000. These Collations not only have the Unicode rules defined, but they also apply those same linguistic rules to [VAR]CHAR data. While this does come at a slight cost to performance, it also allows for consistency of behavior. And it is this consistency that helps out greatly when there is a mismatch of datatypes (as we will see in a moment).

The simple advice to avoid mixing NVARCHAR and VARCHAR data types is still sound, but do read the whole thing.
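
As a rough illustration of the kind of mismatch Solomon describes (the table and data below are my own hypothetical example, not from his post), comparing a VARCHAR column against an NVARCHAR value forces the column to be implicitly converted to NVARCHAR; under a SQL_ Collation that conversion typically turns an index seek into a scan, while a Windows Collation lets the optimizer keep seeking:

CREATE TABLE #Names
(
    Name VARCHAR(50) COLLATE SQL_Latin1_General_CP1_CI_AS
);
CREATE INDEX IX_Names ON #Names (Name);

INSERT INTO #Names (Name)
VALUES ('Solomon'), ('Anuj');

-- The NVARCHAR literal has higher datatype precedence, so the VARCHAR column
-- is implicitly converted to NVARCHAR before the comparison is evaluated.
SELECT Name
FROM #Names
WHERE Name = N'Solomon';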

Comments closed

Reading Drive Files Recursively From T-SQL

Slava Murygin shows how to use a common table expression to read a recursive drive listing:

In this blog I’ll use the undocumented SQL Server extended stored procedure “xp_dirtree“.
That procedure is well described in Patrick Keisler’s blog.

Briefly, the “xp_dirtree” extended procedure returns the tree of sub-directories under a given folder and has three parameters:
1. Starting or root folder
2. Depth level – determines how deep SQL Server will dig into the sub-directory structure. If it is set to zero or omitted, the whole tree will be returned.
3. File flag – if omitted or set to zero, only folders will be returned. If set to “1”, files will be returned as well.
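
A minimal way to try it (my own sketch; the starting folder C:\Temp is just a placeholder) is to dump the procedure’s output into a temp table and query it from there:

CREATE TABLE #DirTree
(
    SubDirectory NVARCHAR(512),
    Depth INT,
    IsFile BIT
);

-- Depth 0 = return the whole tree; final parameter 1 = include files as well.
INSERT INTO #DirTree (SubDirectory, Depth, IsFile)
EXEC master.sys.xp_dirtree 'C:\Temp', 0, 1;

SELECT SubDirectory, Depth, IsFile
FROM #DirTree
ORDER BY Depth, SubDirectory;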

If you want to be really mean to somebody, create a shortcut to the parent directory within a directory.

Comments closed

Managing Drives On Linux

David Klee walks through some basics of Linux administration with respect to drives and mountpoints:

We see that all four of the drives show up in the list. Because of the nature of how I presented the disks to the VM, the bootable drives (sdd) show up at the end of the chain instead of the beginning, but that’s OK. It doesn’t change how the disks are bootable.

We are going to use the Logical Volume Manager (LVM) to manage the disks for us instead of using regular partitions. On Windows-based servers, Windows has the ability to expand partitions without incurring any downtime, and we want to have the same flexibility from a Linux standpoint. With elevated privileges (for the rest of the commands), let’s scan the drives to look at what it sees.

Read the whole thing.

Comments closed

Environmental Considerations With SQL Server

Jeff Mlakar wraps up his environmental diagnostics series:

Let’s start by defining terms. By environmental I mean factors that affect database performance outside of the code. The remediation of issues can be accomplished without altering SQL, no code change request, no release management necessary. Poor performance can often be addressed by doing a quick review of the overall system and ensuring there are not horrible defaults in use.

This is the low-hanging fruit we can get before deep-diving into the code.

Before you start banging your head against the hood of the car, take a walk around and look at the big picture first. It is a common thing to see junior DBAs go right for the most complex area they can identify and ignore the obvious issues which are plain to the eyes of a veteran DBA.
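
As one concrete example of that walk-around (my own illustration, not Jeff’s checklist), a few instance-level settings are worth eyeballing because their defaults are rarely right for a production box:

SELECT name, value_in_use, description
FROM sys.configurations
WHERE name IN
(
    N'max server memory (MB)',          -- default is effectively unlimited
    N'cost threshold for parallelism',  -- the default of 5 is low for modern workloads
    N'max degree of parallelism'        -- the default of 0 uses all schedulers
)
ORDER BY name;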

Read on for some resources to help you learn more.

Comments closed