
Day: February 9, 2017

Dplyr Tutorial

Deepanshu Bhalla has a nice dplyr tutorial:

What is dplyr?
dplyr is a powerful R-package to manipulate, clean and summarize unstructured data. In short, it makes data exploration and data manipulation easy and fast in R.
 
What’s special about dplyr?

The package “dplyr” comprises many functions that perform commonly used data manipulation operations such as filtering rows, selecting specific columns, sorting data, adding or deleting columns, and aggregating data. Another important advantage of this package is that dplyr functions are easy to learn, use, and recall. For example, filter() is used to filter rows.

dplyr is a core package when it comes to data cleansing in R, and the more of it you can internalize, the faster you’ll move in that language.


Apache Zeppelin 0.7.0

Vinay Shukla announces Apache Zeppelin 0.7.0:

SPARK IMPROVEMENTS

This release also adds support for Spark 2, including Spark 2.1. Zeppelin now links to the Spark History Server UI so users can more easily track Spark jobs. The Livy interpreter now supports specifying packages with the job.

SECURITY IMPROVEMENTS

The major security improvement in Zeppelin 0.7.0 is using Apache Knox’s LDAP Realm to connect to LDAP. The Zeppelin home page now lists only the notes the user is authorized to access. Zeppelin also now supports PAM-based authentication.

The full list of improvements is available here.

This visualization platform is growing up nicely.


Leading Wildcard Seek Triggers

Aaron Bertrand demonstrates the triggers you could use if you wanted to build leading wildcard seek tables:

In my last post, “One way to get an index seek for a leading wildcard,” I mentioned that you would need triggers to deal with maintaining the fragments I recommended. A couple of people have contacted me to ask if I could demonstrate those triggers.

To simplify from the previous post, let’s assume we have the following tables – a set of Companies, and then a CompanyNameFragments table that allows for pseudo-wildcard searching against any substring of the company name.

Read on for the triggers.
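To give a sense of the mechanics, here is a minimal sketch of the kind of insert trigger that could maintain such a fragments table. This is not Aaron’s code, and the column names (CompanyID, Name, Fragment) are assumptions:

-- Hypothetical sketch only; assumes dbo.Companies(CompanyID, Name)
-- and dbo.CompanyNameFragments(CompanyID, Fragment).
CREATE TRIGGER dbo.Companies_InsertFragments
ON dbo.Companies
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    -- Store every suffix of each new company name so that
    -- WHERE Fragment LIKE @search + '%' can use an index seek.
    INSERT dbo.CompanyNameFragments (CompanyID, Fragment)
    SELECT i.CompanyID, SUBSTRING(i.Name, n.n, LEN(i.Name) - n.n + 1)
    FROM inserted AS i
    CROSS APPLY (
        SELECT TOP (LEN(i.Name))
               ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
        FROM sys.all_objects
    ) AS n;
END
GO

Updates and deletes against Companies would need corresponding triggers to rebuild or remove fragments, which is exactly what Aaron’s post walks through.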


IoT Versus Event Hub

James Serra clarifies the differences between Azure’s IoT Hub and its Event Hub:

The majority of the time, if the data is coming from the devices themselves, either directly or via a field-based gateway, IoT Hub will be the more appropriate choice.  Event Hub will generally be the more appropriate choice if the data will not be coming to Azure directly from the devices but rather cloud-to-cloud through another provider or intra-cloud, or if the data is already landing on-premises and needs to be streamed to the cloud from a small number of internal endpoints.  There are exceptions to both conditions, of course.

Both solutions offer very high throughput data ingestion and can handle tremendous streaming data volumes.  In fact, today, IoT Hub is primarily a set of additional services that wrap an underlying Event Hub.

Read on for more scenarios and the limitations of each.  They definitely serve different use cases.


Understanding Deadlock Priority

Kenneth Fisher explains deadlock priority:

Everyone deals with deadlocks from time to time. But sometimes we need to control who’s the deadlock victim and who isn’t. For example, say I’m doing a big delete on a table in a 24×7 environment. I can’t afford downtime, so I’m doing my delete in small chunks to reduce transaction size and blocking time. My delete needs to happen, but I’m in no hurry, and I really can’t afford to deadlock some other transaction. So how do I make sure?

Or on the other hand, I’m running an update that absolutely has to happen right now. It’s going to take a bit and I can’t afford the time for it to be started over. A deadlock would be a disaster. What do I do?

That’s where deadlock priority comes into play.

Click through for the explanation.
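For reference, the knob Kenneth is describing is SET DEADLOCK_PRIORITY, which applies to the current session and accepts LOW, NORMAL, HIGH, or an integer from -10 to 10. A hypothetical version of his chunked-delete scenario (the table and filter are made up) might look like this:

-- This session volunteers to be the deadlock victim.
SET DEADLOCK_PRIORITY LOW;

DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    -- Small batches keep transactions short and blocking brief.
    DELETE TOP (5000)
    FROM dbo.BigArchiveTable
    WHERE CreatedDate < '20160101';

    SET @rows = @@ROWCOUNT;
END

-- The critical update in his second scenario would instead run under
-- SET DEADLOCK_PRIORITY HIGH; so that lower-priority sessions are
-- chosen as victims instead.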


Jepsen: MongoDB 3.4.0-rc3

Kyle Kingsbury takes a new look at MongoDB:

In April 2015, we discussed stale and dirty reads in MongoDB 2.6.7. However, writes appeared to be safe; update-only workloads with majority write concern were linearizable. This conclusion was not entirely correct. In this Jepsen analysis, we develop new tests which show the MongoDB v0 replication protocol is intrinsically unsafe, allowing the loss of majority-committed documents. In addition, we show that the new v1 replication protocol has multiple bugs, allowing data loss in all versions up to MongoDB 3.2.11 and 3.4.0-rc4. While the v0 protocol remains broken, patches for v1 are available in MongoDB 3.2.12 and 3.4.0, and now pass the expanded Jepsen test suite. This work was funded by MongoDB, and conducted in accordance with the Jepsen ethics policy.

Mongo has grown up when it comes to data integrity, though be sure you’re using the v1 replication protocol.


Dual KPI Custom Visual

Adam Saxton has a quick video demonstrating a dual KPI custom visual:

The Dual KPI efficiently visualizes two measures over time. It shows their trend based on a joint timeline, while absolute values may use different scales, for example Profit and Market share or Sales and Profit.

Each KPI can be visualized as line chart or area chart. The visual has dynamic behavior and can show historical value and the change from the latest value when you hover over it. It also has small icons and labels to convey KPI definitions and alerts about data freshness.

It looks cool, but I dunno; my philosophy is that man cannot serve two KPIs.


Gap Analysis Custom Visual

Devin Knight continues his Power BI custom visuals series:

In this module you will learn how to use the Gap Analysis Power BI Custom Visual.  The Gap Analysis visual is used to analyze the difference between two different groups of data you have.  For example, you might use it to analyze the gap between two answers people gave in survey response data.

I like the gap analysis visual; it works well as a cross-category comparison visual, giving you an idea of the relative importance of each category as well as the change from one time period to the next.  It’s a good way of fitting two useful pieces of information into the same visual.


Query Store Space Allocation

Grant Fritchey demonstrates how Query Store allocates disk space:

I love the questions I get while I’m presenting because they force me to think and learn. The question in the title is one I received recently. The answer, now that I’m not standing in front of people, is easy. Of course the space is not pre-allocated. Query Store tables are just system tables. They have a limit on how big they can grow (100mb by default), but that space isn’t going to be pre-allocated in any way. The space will just get used as and when it’s needed, just like any other system table. However, don’t take my word for it, let’s prove that.

Read on for the proof.
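If you want to check the numbers on your own system, the current and maximum Query Store sizes are exposed through sys.database_query_store_options (this is just a quick look, not Grant’s demo):

-- How much of the Query Store quota is actually in use?
SELECT actual_state_desc,
       current_storage_size_mb,
       max_storage_size_mb
FROM sys.database_query_store_options;

-- The 100 MB default cap can be raised if needed, e.g.:
-- ALTER DATABASE CURRENT
--     SET QUERY_STORE (MAX_STORAGE_SIZE_MB = 500);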


BCP Output In JSON Line-Delimited Format

Jovan Popovic shows how to use FOR JSON PATH to output rows in a table to JSON line-delimited format:

Although this is not a valid JSON format, many systems use it to exchange data.

One advantage of the line-delimited JSON format over standard JSON is that you can append new JSON objects to the end of the file without removing the closing array bracket, as you would have to do with standard JSON.

This might be a niche use case, but I’m sure that in this post-XML-all-the-things era, this is more common than you might first expect.
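As a rough illustration of the technique (the table and columns here are invented, and this may not match Jovan’s exact script): a correlated subquery with FOR JSON PATH and WITHOUT_ARRAY_WRAPPER serializes each row as its own JSON object, and bcp in character mode then writes one object per line:

-- One JSON object per row instead of a single JSON array.
SELECT (
    SELECT p.ProductID, p.Name, p.ListPrice
    FOR JSON PATH, WITHOUT_ARRAY_WRAPPER
) AS json_line
FROM dbo.Products AS p;

-- Exported from the command line (server name and paths are placeholders):
-- bcp "SELECT (SELECT p.ProductID, p.Name, p.ListPrice FOR JSON PATH, WITHOUT_ARRAY_WRAPPER) FROM MyDatabase.dbo.Products AS p" queryout products.json -c -S . -T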
