
Month: August 2018

Your R Code Should Be In Source Control Too

Lindsay Carr explains the importance of storing your R code in source control:

But wait, I would need to learn an additional tool?

Yes, but don’t panic! Git is a tool with various commands that you can use to help track your changes. Luckily, you don’t need to know too many commands in Git to use the basic functionality. As an added bonus, using Git with RStudio takes away some of the burden of knowing Git commands by including buttons for common actions.

As with any tool that you pick up to help your scientific workflows, there is some upfront work before you can start seeing the benefits. Don’t let that deter you. Git can be very easy once you get the gist. Think about the benefits of being able to track changes: you can make some changes, have a record of that change and who made it, and you can tie that change to a specific problem that was reported or feature request that was noted.

It’s still code, and you gain a lot by keeping code in source control.
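If you want to script the same actions instead of (or alongside) RStudio's buttons, here is a minimal sketch from the R console, assuming the usethis and gert packages are installed; the file name and commit message are only placeholders:

library(usethis)
library(gert)

use_git()                                     # initialize a Git repository for the current project
git_add("analysis.R")                         # stage a changed script
git_commit("Fix off-by-one in summary calc")  # record the change with a message
git_status()                                  # review what is staged, modified, or untracked

The RStudio Git pane exposes the same stage, commit, and push actions as buttons, which is exactly where the quoted post suggests starting.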


The Power Of Resilient Distributed Datasets

Ramandeep Kaur explains just how powerful Resilient Distributed Datasets are:

A fault-tolerant collection of elements that can be operated on in parallel:  “Resilient Distributed Dataset” a.k.a. RDD

RDD (Resilient Distributed Dataset) is the fundamental data structure of Apache Spark: an immutable collection of objects computed across the different nodes of the cluster. Each dataset in a Spark RDD is logically partitioned across many servers so that it can be computed on different nodes of the cluster.

RDDs provide a restricted form of shared memory, based on coarse-grained transformations rather than fine-grained updates to shared state.

Coarse-grained transformations are those that are applied over an entire dataset. A fine-grained update, on the other hand, is applied to a smaller set, perhaps a single row. With fine-grained updates you have to save each change, which can be costlier, but it is more flexible than the coarse-grained approach.

Read on for more about the fundamental data structure in Spark.
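The RDD API itself is usually exercised from Scala or Python, but to keep with the R code elsewhere on this page, here is a rough sketch of that coarse-grained, lazily evaluated style as seen through sparklyr, which exposes Spark DataFrames built on top of RDDs (this assumes sparklyr and a local Spark installation are available):

library(sparklyr)
library(dplyr)

sc <- spark_connect(master = "local")

# Distribute a dataset across the (local) cluster.
mtcars_tbl <- sdf_copy_to(sc, mtcars, "mtcars_spark", overwrite = TRUE)

# Coarse-grained transformations: each verb describes an operation over the
# whole dataset, and Spark only records the lineage at this point.
heavy <- mtcars_tbl %>%
  filter(wt > 3) %>%
  mutate(hp_per_wt = hp / wt) %>%
  group_by(cyl) %>%
  summarise(avg_hp_per_wt = mean(hp_per_wt))

# Computation (and recomputation of any lost partitions) happens only when the
# results are requested.
collect(heavy)

spark_disconnect(sc)

That lineage-plus-lazy-evaluation combination is what makes RDDs fault tolerant: a lost partition can be rebuilt by replaying the transformations rather than by restoring saved state.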


Quick Tips For Working With Extended Events

Tibor Karaszi argues that it’s never too late to get into Extended Events:

I know, I know. New habits are hard to learn. Many of us have been using SQL trace and the Profiler GUI for a very long time. And we know that we are supposed to move over to Extended Events (XE), but we postpone it for some later time. And then we give XE a try, and something doesn’t work as we want. So we go back to more familiar territories.

But XE has really grown on me over the last few years. I’d like to share the things that I initially didn’t like about XE and how I overcame them, along with some other tips to make it easier to be productive with XE. I will deliberately minimize showing T-SQL and queries against the XE dynamic management views here. As you use XE more and more, you will probably use T-SQL to a higher degree. But this blog post is for those of you who want to “get into” XE, and I find that a GUI is great as a starting point. Assuming the GUI is any good, that is. And I think the SSMS GUI is, for the most part.

There are a lot of tips here, so check out Tibor’s advice.
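If you do eventually want to query the dynamic management views Tibor mentions, a minimal sketch from R (assuming the DBI and odbc packages plus an ODBC driver for SQL Server; the connection details are placeholders) could look like this:

library(DBI)

con <- dbConnect(odbc::odbc(),
                 Driver   = "ODBC Driver 17 for SQL Server",
                 Server   = "localhost",
                 Database = "master",
                 Trusted_Connection = "Yes")

# List the Extended Events sessions currently running on the instance.
dbGetQuery(con, "SELECT name, create_time FROM sys.dm_xe_sessions ORDER BY name;")

dbDisconnect(con)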


Reading Power BI Log Files

Kellyn Pot’vin-Gorman shows us where we can find Power BI logs and what they look like when we load them into Power BI:

The second one is inspecting the Reporting Server Portal log (RSPortal**.log), which resides in C:\Program Files\Microsoft Power BI Report Server\PBIRS\LogFiles.

We again load this log file via Get Data –> Text/CSV and then choose to view all files, as it won’t see the .log extension otherwise.  Choose the file and click on Edit.

The M query displays the changes I performed to format the data into something that can easily be worked with. Because of the staggered output of the data lines, this will format the error and warning messages; the rest of the rows will only have the Information Message populated, with the remaining columns left null.

Read the whole thing.
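If you would rather poke at the same file outside of Power BI, a rough R-side equivalent (a sketch only, not Kellyn's M query; the path and the severity keywords are assumptions about the log layout) might be:

library(readr)
library(dplyr)

log_path <- file.path("C:/Program Files/Microsoft Power BI Report Server",
                      "PBIRS/LogFiles/RSPortal.log")

# Pull the raw lines into a one-column tibble.
log_lines <- tibble(line = read_lines(log_path))

# Keep only the rows that carry an error or warning marker.
log_lines %>%
  filter(grepl("ERROR|WARN", line, ignore.case = TRUE))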


Changing Connection Strings In VertiPaq Analyzer

Shabnam Watson shows us how to change the connection string in VertiPaq Analyzer, a Power Pivot-based Excel workbook for analyzing tabular models:

While trying to set up VertiPaq Analyzer on a new computer, I ran into a problem where Excel was not letting me change the SSAS connection that was built into the workbook. It turns out I had missed one of the steps in the instructions in the workbook. As a result, when I got to Connection Properties, everything was grayed out and this message was at the bottom:

Some properties cannot be changed because this connection was modified using PowerPivot Add-in.

Read on to see how to fix this.  And check out VertiPaq Analyzer if you’re working heavily with Analysis Services Tabular or Power BI.


Power Platform Licensing And Pricing

Wolfgang Strasser explains how you can get started with the Microsoft Power Platform:

This blog post is part of my Power Platform blog series.

Maybe you’ve already heard about the Microsoft Power Platform (which consists of three tools: Power BI, PowerApps, and Microsoft Flow) and now is the time to start testing it?

The first questions that arise are: What do I need? Do I need to pay if I only want to try it out?

Licensing can get tricky, so it’s good to get a clear explanation of pricing and what you can do with the products.


Reserved Capacity With Azure SQL Database

Chris Seferlis explains the concept of Azure SQL Database Reserved Capacity:

Last week I posted about the Azure Reserved VM Instance, where you could save some money in Azure. Another similar way to save is with Azure SQL Database Reserved Capacity. With this, you can save 33% compared to license-included pricing by pre-buying SQL Database vCores for a 1- or 3-year term.

This can be applied to a single subscription or shared across your enrollments, so you can control how many subscriptions can use the benefit, as well as how the reservation is applied to the specific subscriptions you choose.

Scoping the reservation to a single subscription allows you to apply it to the SQL Database resource(s) within the selected subscription. A reservation with a shared scope can be shared across subscriptions in the enrollment, and there’s some flexibility involved, such as with Managed Instances, where you can scale up or down.

Read on for more.  AWS has been offering discounts for reserved capacity for a while, but now we’re seeing Microsoft play the game too.


HDF 3.2 Updates

Dinesh Chandrasekhar walks us through some of the updates to Hortonworks Data Flow version 3.2:

Kerberos keytab isolation
Kerberos keytabs can now be isolated at a per-principal level. This allows users in a multi-tenant environment to safely reference specific keytabs and principals. It ensures that just because a user has access to an HDFS keytab, they will not have access to all of the HDFS principals. This provides more granular control so that users are limited to only the principals they require.

Kafka 1.1.1 Support
In HDF 3.2, Kafka has been upgraded from 1.0.0 to 1.1.1. Key features and improvements have been added with respect to security and governance. In addition to a number of bug fixes, an important new feature was added to capture producer and topic metrics at the partition level without instrumenting or configuring interceptors on the clients. This provides a non-invasive approach to capturing important metrics for producers without refactoring or modifying your existing Kafka clients.

Hive 3 support
Apache NiFi now supports Hive 3 running on HDP 3.0. This support ensures better performance for Hive streaming to HDP and for Hive streaming to S3, and adds the ability to write directly to ORC from NiFi without first converting your datasets to Avro. Writing directly to ORC for better Hive query performance is accomplished by using the NiFi PutORC processor. With HDF 3.2, a few other processors related to HBase and HDFS have also been updated and enhanced.

Looks like there are some good updates to this version.


Using The glue Package In R

Evgeni Chasnovski shows the glue package and also works around some trickiness with NULL:

Recently, fate led me to try using {glue} in a package. I was very pleased with how it makes code more readable, which I believe is very important during package development. However, I stumbled upon this pretty unexpected behavior:

library(glue)

x <- 10
y <- NULL
paste("I have", x, "apples and", y, "oranges.")
## [1] "I have 10 apples and  oranges."
str(glue("I have {x} apples and {y} oranges."))
## Classes 'glue', 'character'  chr(0)

If one of the expressions evaluates to NULL, then the output becomes an empty character vector.

glue reminds me of string formatting in .NET languages.  On the whole, that’s a good thing.
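One possible way around the NULL behavior (a quick sketch, not necessarily the approach from the post) is to coerce NULL to a visible placeholder before interpolation:

library(glue)

x <- 10
y <- NULL

# Hypothetical helper: swap NULL for a marker so glue always has something to print.
show_null <- function(value, replacement = "<none>") {
  if (is.null(value)) replacement else value
}

glue("I have {show_null(x)} apples and {show_null(y)} oranges.")
## I have 10 apples and <none> oranges.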


Window Functions Have Defaults, Too

Steve Jones reminds us that when running a window function, there is a default window in place:

What I want to do is compare the passing yards each year with the most current value for that player, showing the plus or minus. This means that for Aaron Rodgers, who threw for 1675 yards in 2017, I’d want to show this for the first few years of his career:

This shows me an easy view of the years where he was better in his career than he is now. Last year was likely a down year because of injury, but we’ll see this year.

In any case, if I run this query using LAST_VALUE() for the final year of his career, I don’t get the right results.

It’s good to keep in mind the full syntax for a window function for just this reason.
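Steve’s example is T-SQL, but the default frame (RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW whenever you specify an ORDER BY without a frame clause) is SQL-standard behavior, so you can reproduce the effect from R against an in-memory DuckDB database. This is a sketch assuming the DBI and duckdb packages; the yardage figures are made up for illustration:

library(DBI)
library(duckdb)

con <- dbConnect(duckdb())

dbWriteTable(con, "passing", data.frame(
  season = 2014:2017,
  yards  = c(4381, 3821, 4428, 1675)
))

# With only an ORDER BY, the frame ends at the current row, so LAST_VALUE()
# simply echoes each row's own value; spelling out the frame returns the
# latest season's yardage on every row.
dbGetQuery(con, "
  SELECT season, yards,
         LAST_VALUE(yards) OVER (ORDER BY season) AS default_frame,
         LAST_VALUE(yards) OVER (ORDER BY season
             ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS full_frame
  FROM passing
  ORDER BY season
")

dbDisconnect(con, shutdown = TRUE)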
