Month: June 2020

Apache Spark Connector for SQL Server

The SQL Server team announces an open-sourced Apache Spark connector for SQL Server:

The Apache Spark Connector for SQL Server and Azure SQL is based on the Spark DataSourceV1 API and SQL Server Bulk API and uses the same interface as the built-in JDBC Spark-SQL connector. This allows you to easily integrate the connector and migrate your existing Spark jobs by simply updating the format parameter! 

This appears to be different from the old Spark connector to Azure SQL Database and SQL Server. Also, for anyone potentially confused between it and PolyBase, this is going in the opposite direction: the Spark connector lets you access a SQL Server from an Apache Spark cluster, reading SQL Server’s data and processing it across a number of executor nodes. By contrast, PolyBase lets you read data stored in Spark SQL tables from SQL Server, virtualizing it so that it looks like a regular SQL Server table.
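
To make the format swap concrete, here is a minimal sketch of a read through the new connector, assuming the com.microsoft.sqlserver.jdbc.spark format name from the announcement; the server, database, and credentials are placeholders:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("SqlServerRead").getOrCreate()

// The call shape matches the built-in JDBC source; only the format name changes.
// Connection details below are placeholders.
val orders = spark.read
  .format("com.microsoft.sqlserver.jdbc.spark")
  .option("url", "jdbc:sqlserver://myserver.example.com;databaseName=SalesDB")
  .option("dbtable", "dbo.Orders")
  .option("user", "spark_reader")
  .option("password", "********")
  .load()

orders.show(10)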

Using PowerShell to Build Excel Files

Mike Bronowski takes us through the ImportExcel PowerShell module:

Using Add-PivotTable is straightforward (and from now on I am closing the whole Excel). Note the Activate switch at the end. When you open the Excel file, the worksheet that was used in the command will show up first.

A long while ago, I had put together PowerShell code to do this kind of work with Excel, but back then we needed to use COM. This looks much simpler.

Power BI On-Premises Data Gateway Error SpooledOperationMissing

Gilbert Quevauvilliers encounters an error:

I got the following error: "DM_GWPipeline_Gateway_SpooledOperationMissing","parameters":{},"details":[],"exceptionCulprit":1

This error was caused because the virtual machine where the On-Premises Data Gateway was running was cloned and then started up.

Read on to learn about the implications and how Gilbert was able to solve this issue.

Breaking Changes in Microsoft.Data.SqlClient 2.0

Erik Ejlskov Jensen goes over the list of breaking changes with the most recent version of Microsoft.Data.SqlClient:

Microsoft.Data.SqlClient version 2 has just been released. This library is the latest and greatest .NET client driver for SQL Server and Azure SQL Database – and will be used by EF Core 5. In addition to a number of new features (which I blogged about earlier), this major version release also includes a number of breaking changes.

Click through for the list.

Against Encrypted Stored Procedures

Denny Cherry explains why encrypting stored procedures is a fool’s errand:

To this I point out that if you’ve encrypted your code so that I won’t look at it by accident, you are actually getting the exact opposite result. Because you are encrypting code, that means that I can’t see it. That means that I want to make sure that you aren’t hiding any stupid practices from me. That means that as soon as I see your encrypted procedures, I’m decrypting them to see what is going on with this code.

Along with this, because you’ve bothered to encrypt the stored procedures, that means that I can’t get an execution plan, and Query Store can’t be used for the queries within the stored procedure. And since I’m guessing that I can performance tune your database better than your developers can, I’m going to be decrypting the procedures so that I can tune the system.

They’re trivial to decrypt, and Denny points out a few reasons why it’s just a bad idea.

Custom Parameters in Azure Data Factory Deployments

Rayis Imayev shows us how to use custom parameters in ARM templates when deploying Azure Data Factory pipelines:

If I needed to visually explain how this custom parameterization works for an Azure Data Factory resource, I would picture it this way. Before, you solely relied on publishing your ADF code from your collaboration Git branch to the adf_publish branch, where the ARMTemplateForFactory.json and ARMTemplateParametersForFactory.json files live and get further deployed to other environments. You had some flexibility to parameterize your deployment or run some custom code to update ARM templates before they get deployed.

With the introduction of the ADF custom parameterization, you have an additional JSON file, arm-template-parameters-definition.json, that you can use to define rules to add supplementary parameters to the main ARMTemplateParametersForFactory.json file. There is a very important statement on the Microsoft documentation site that explains how this new file operates: “A definition can’t be specific to a resource instance. Any definition applies to all resources of that type”. It’s like a garden rake that will collect all the leaves or none, i.e. if your rule defines a JSON property, let’s say the “timeout” of your ForEach loop container, then all timeouts will be scooped into the ARM template parameter file.

Read on for the full explanation as well as an example.
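
As a minimal sketch of the kind of rule the definition file holds, modeled on the Wait-activity example in Microsoft's documentation (treat the property name and syntax below as illustrative):

{
  "Microsoft.DataFactory/factories/pipelines": {
    "properties": {
      "activities": [
        {
          "typeProperties": {
            "waitTimeInSeconds": "-::int"
          }
        }
      ]
    }
  }
}

In line with the all-or-nothing behavior described above, a rule like this surfaces waitTimeInSeconds for every Wait activity in every pipeline as an ARM template parameter, not just for one specific activity.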

SQL Server 2019 CU5

Microsoft has released SQL Server 2019 CU5:

This article describes Cumulative Update package 5 (CU5) for SQL Server 2019. This update contains fixes that were released after the initial release of SQL Server 2019 and updates the SQL Server and Analysis Services components to the following builds.

I see a half-dozen or so PolyBase-related fixes, though I was hoping to read that they’ve fixed the issue around creating and using external objects when you are connected to SQL Server via Windows authentication. I’ll have to test that out to see if it was fixed—that was broken in CU3 and remained so in CU4.

Scaling Kafka Down

Jay Kreps announces an update to Project Metamorphosis:

Rightly or wrongly, Kafka is viewed as a “heavy-duty” component, one where you have to ask, “Do we really really need it?” and wait until you have several use cases to justify adoption. Until then people often abuse other systems or try to get by as best they can without first-class support for events in their infrastructure. This is unfortunate, and we want to change it! A log of events is a fundamentally better abstraction and the ecosystem around Kafka is phenomenal. Limiting this to only large-scale users and big tech companies does a disservice to the small apps and use cases that are often some of the best uses of the technology.

We’re changing that. We offer three types of clusters, optimized for the type of use case. Our Basic cluster type allows you to just pay for the amount of Kafka you use with no cost at all for allocating an unused cluster. This scales down to zero cost and scales up very gradually. You can literally use a few dollars a month of Kafka with no additional operational overhead. This makes Kafka a very lightweight option, at very low cost, for even the smallest of use cases. This offering is also ideal for development and test environments, even for those using Dedicated clusters for their production environments.

This does seem to help out in cases where I’d otherwise suggest Kinesis or Event Hubs (or one of the five dozen other Azure products for brokering messages) due to the relative lack of volume.

Pattern Matching in Scala

Mansi Babbar covers one of the most powerful tools available in functional programming languages like Scala:

The match expression consists of multiple parts:

1. The value we’ll use to match the patterns is called a candidate 
2. The keyword match
3. At least one case clause consisting of the case keyword, the pattern, an arrow symbol, and the code to be executed when the pattern matches
4. A default clause when no other pattern has matched. The default clause is recognizable because it consists of the underscore character (_) and is the last of the case clauses

This is similar to the switch statement in C-like languages but offers up a few more things like partial matching of complex objects. Mansi covers some of the ways in which these two things differ.
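
As a minimal sketch of those four parts, plus the partial matching of complex objects mentioned above (the types and values are invented for illustration):

// A small type hierarchy to match against
sealed trait Shape
case class Circle(radius: Double) extends Shape
case class Rectangle(width: Double, height: Double) extends Shape

def describe(shape: Shape): String =
  shape match {                      // "shape" is the candidate; "match" is the keyword
    case Circle(r)       => s"circle of radius $r"
    case Rectangle(w, _) => s"rectangle $w wide"   // partial match: the height is ignored
    case _               => "unknown shape"        // default clause, marked by the underscore
  }

println(describe(Circle(2.0)))      // circle of radius 2.0
println(describe(Rectangle(3, 4)))  // rectangle 3.0 wide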

Calculating Test Coverage of Azure Data Factory Pipelines

Richard Swinbank wraps up a series on testing in Azure Data Factory:

To determine which activities have been executed by a test suite, I need to collect and aggregate activity run data from every pipeline execution triggered from any test fixture. In the previous post I developed components to retrieve and cache activities for a pipeline run – I’ll use those components here to collect data systematically.

I’m going to create a new helper class to contain functions specific to coverage measurement. It’s a subclass of the database helper because I want to exploit functionality from classes further up the hierarchy:

Read on for the code and process for measurement.
