Press "Enter" to skip to content

Month: October 2019

Installing Python Libraries on EMR Clusters with Notebooks

Parag Chaudhari shows how we can install Python libraries on existing Amazon EMR (Elastic MapReduce) clusters using EMR Notebooks:

The notebook-scoped libraries discussed previously require your EMR cluster to have access to a PyPI repository. If you cannot connect your EMR cluster to a repository, use the Python libraries pre-packaged with EMR Notebooks to analyze and visualize your results locally within the notebook. Unlike the notebook-scoped libraries, these local libraries are only available to the Python kernel and are not available to the Spark environment on the cluster. To use these local libraries, export your results from your Spark driver on the cluster to your notebook and use the notebook magic to plot your results locally. Because you are using the notebook and not the cluster to analyze and render your plots, the dataset that you export to the notebook has to be small (we recommend less than 100 MB).
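As a quick sketch of what that workflow looks like (assuming the sparkmagic-based PySpark kernel that EMR Notebooks provides; the package version, query, and table names are my own illustrations, not from the post):

    # Cell 1: notebook-scoped install; runs on the cluster, needs PyPI access
    sc.install_pypi_package("pandas==0.25.1")
    sc.list_packages()   # confirm what the Spark session can now see

    %%sql -o top_movies
    -- Cell 2: export a small result set (well under 100 MB) to the notebook
    SELECT title, rating FROM movies ORDER BY rating DESC LIMIT 100

    %%local
    # Cell 3: plot locally with the libraries pre-packaged in the notebook
    top_movies.plot.barh(x="title", y="rating")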

Read the whole thing.


From Kafka to Pulsar

Avaro Santos Andres has arguments for migrating from Apache Kafka to Apache Pulsar:

Imagine you have thousands or millions of devices sending data to your data lake. This data must be managed with speed, security, and reliability. In addition, for legal reasons you must partition data by country, device, and city. These requirements seem reasonable, and in 2019, stream-processing platforms must be able to deal with them.

But how well do they? Kafka is not known to work well when there are thousands of topics and partitions, even if the data is not massive. You can see how complicated it can be to try to solve performance challenges in these scenarios.
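Pulsar's tenant/namespace/topic hierarchy does map nicely onto that kind of country-and-device partitioning. A minimal sketch with the pulsar-client Python package (broker URL, tenant, and topic names are hypothetical):

    import pulsar

    client = pulsar.Client("pulsar://localhost:6650")

    # The topic name itself carries tenant and namespace, so partitioning
    # by country and city falls out of the naming scheme.
    producer = client.create_producer("persistent://acme/es-madrid/device-4711")
    producer.send(b"temperature=21.4")

    client.close()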

I like this sort of competition, as I know Kafka will step up their game as a result.


Dynamically Controlling Power Query Columns

Erik Svensen wants to display columns dynamically in Power Query:

This means that even though we might add new columns to the ProductsAttributes table – it will still only be Brand that is expanded and only that column.

The bolded arguments are two lists containing the column names to expand and the new names of the columns – the last argument is optional, so we can skip it if we want the original names – https://docs.microsoft.com/en-us/powerquery-m/table-expandtablecolumn
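For reference, the function shape Erik is describing looks like this (a sketch; the step and column names here are hypothetical):

    // Expand only Brand from the nested ProductsAttributes column,
    // renaming it on the way out; the fourth argument could be omitted
    // to keep the original column name.
    Table.ExpandTableColumn(
        Source,
        "ProductsAttributes",
        {"Brand"},           // columns to expand
        {"ProductBrand"}     // new names (optional)
    )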

Read on to see how to do this.


Ordered Clustered Columnstore Indexes in Azure SQL DW

Niko Neugebauer takes us through a new feature in preview for Azure SQL Data Warehouse:

After creating (or dropping and recreating) a Clustered Columnstore Index, we can specify the reserved word ORDER and then one or even multiple columns. This looks like an extremely promising feature!

On Azure SQL Data Warehouse one can of course define a table as a columnstore, and with that specification it is also possible to define an ORDER option with one or multiple columns.

For syntax and basic functionality testing purposes on Azure SQL Data Warehouse, let us then create a table with a Clustered Columnstore Index, load some data, and see if, by recreating it as an Ordered Clustered Columnstore Index, we can achieve some improvements.
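If you want to follow along, the preview syntax looks roughly like this (a sketch against a hypothetical fact table, not Niko's test setup):

    -- Create the table with an ordered clustered columnstore index
    CREATE TABLE dbo.FactSale
    (
        SaleDate DATE           NOT NULL,
        StoreID  INT            NOT NULL,
        Amount   DECIMAL(18, 2) NOT NULL
    )
    WITH
    (
        DISTRIBUTION = HASH(StoreID),
        CLUSTERED COLUMNSTORE INDEX ORDER (SaleDate)
    );

    -- Recreate the index, this time ordered on multiple columns
    CREATE CLUSTERED COLUMNSTORE INDEX cci_FactSale
        ON dbo.FactSale
        ORDER (SaleDate, StoreID)
        WITH (DROP_EXISTING = ON);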

Niko has a few hard-earned lessons from this post.


When Readers Block Writers

Erik Darling takes us through a scenario where readers can block writers for an extended amount of time:

To hold onto Shared locks, you’d need to use an isolation level escalation hint, like REPEATABLE READ.

I could do that here if I were a lazy cheater.

Instead, I’m going to show you a more common and interesting scenario.
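For the record, the lazy-cheater version looks something like this in two sessions (a sketch; table and values are hypothetical):

    -- Session 1: the hint keeps shared locks until the transaction ends
    BEGIN TRANSACTION;
    SELECT COUNT(*) FROM dbo.Users WITH (REPEATABLEREAD);
    -- ...transaction left open, shared locks still held...

    -- Session 2: this update now waits on those shared locks
    UPDATE dbo.Users SET Reputation = Reputation + 1 WHERE Id = 1;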

This leaves the classes of non-lazy cheater and a lazy non-cheater (because non-lazy non-cheater sounds batty). Regardless of your answer, great post by Erik.


Deploying CUs to Multiple Instances with Powershell

Jeff Iannucci embraces the power of the shell:

This all started because we had some 14 new SQL Server 2017 instances that we were setting up, but we hadn’t yet applied the most recent cumulative update that we are using in our environment. I started using the Update-DbaInstance cmdlet in the script below to apply to one server, but then I looked at the list of outstanding requests and thought about something Buck Woody once told me.

“You don’t have time for that. You’re going to be dead soon.”

He’s a fantastic fellow, but we should all be grateful he didn’t become a physician.

Click through for the five-line script and an explanation of what each line does.
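For the impatient, the core of the idea with dbatools looks something like this (a sketch, not Jeff's actual script; the server list and patch path are hypothetical):

    # Read the list of instances to patch, then let dbatools do the rest
    $servers = Get-Content -Path 'C:\temp\sql2017-instances.txt'
    Update-DbaInstance -ComputerName $servers -Path '\\fileshare\SQLPatches' -Restart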


Power BI and Case Sensitivity

Chris Webb explains that Power BI is case sensitive except where it isn’t:

Case sensitivity is one of the more confusing aspects of Power BI: while the Power Query engine is case sensitive, the main Power BI engine (that means datasets, relationships, DAX etc) is case insensitive. Imke Feldmann shows how to make Power Query case insensitive through custom M here; in this blog post I’m going to concentrate on what case insensitivity means in datasets and DAX and show a way to (kind of) work around it.
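A quick illustration of the split (sketch only; the literals are mine):

    // Power Query (M) is case sensitive by default...
    "apple" = "APPLE"                                                 // false
    // ...unless you opt into a case-insensitive comparer
    Comparer.Equals(Comparer.OrdinalIgnoreCase, "apple", "APPLE")     // true

    // DAX, by contrast: ordinary string equality is case insensitive...
    //   IF("apple" = "APPLE", "equal", "different")        -- "equal"
    // ...while EXACT performs a case-sensitive comparison
    //   IF(EXACT("apple", "APPLE"), "equal", "different")  -- "different"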

Read the whole thing. Chris has a lot of great information here.


Fun with [ as a Function in R

John Mount definitely quotes Dr. Ian Malcolm correctly in this one:

How about defining a new [-based function call notation? The idea is: we could write sin[5] in place of sin(5), thus unifying the notations for function call and array access. Some languages do in fact unify function call and array access (though often using “(” for both); examples include Fortran and Matlab.

Let’s add R to the list of such languages.
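The mechanics amount to shadowing the primitive (a sketch of the idea, not John's exact code):

    # Shadow `[`: call the object when it is a function,
    # otherwise fall back to the real subsetting primitive.
    `[` <- function(x, ...) {
      if (is.function(x)) {
        x(...)
      } else {
        .Primitive("[")(x, ...)
      }
    }

    sin[5]        # now means sin(5)
    c(10, 20)[2]  # ordinary subsetting still works: 20

    rm(`[`)       # restore the default behavior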

I love the flexibility in the language, almost as much as I would enjoy taking away production rights from the person who ships this in my code base…


An Introduction to Apache Livy

Guy Shilo explains why we should care about Apache Livy:

Apache Livy is an open source server that exposes Spark as a service. Its backend connects to a Spark cluster while the frontend exposes a REST API. This enables running it as the organization’s Spark gateway, and even running it in Docker containers.

Not only does it enable running Spark jobs from anywhere, but it also enables a shared Spark context and a shared RDD cache among all its users, which saves both time and memory.

I will demonstrate here how to set up Apache Livy on one of the cluster’s nodes and on a separate server.
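For a sense of scale, that REST surface is small enough to drive with a few requests (a sketch; the host name is hypothetical, 8998 is Livy's default port):

    import json
    import requests

    livy = "http://livy-host:8998"
    headers = {"Content-Type": "application/json"}

    # Start an interactive PySpark session
    session = requests.post(f"{livy}/sessions",
                            data=json.dumps({"kind": "pyspark"}),
                            headers=headers).json()

    # Run a statement against the session's shared Spark context
    statement = {"code": "sc.parallelize(range(100)).sum()"}
    requests.post(f"{livy}/sessions/{session['id']}/statements",
                  data=json.dumps(statement), headers=headers)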

Click through for the demonstration.
