Press "Enter" to skip to content

Month: December 2019

Querying SQL Server from Python

Hasan Savran builds an Azure Data Studio notebook to query SQL Server from Python:

SQL Kernel is the default language; to query the database with Python, change SQL to Python 3. You will probably see the following message if this is the first time you are trying this: you need to install Python packages to be able to run Python scripts. I have Visual Studio installed on my machine and I already have Python, so I thought I could use it by clicking “Use existing Python installation”. I was wrong; I couldn’t. This option looks for local installation files, and when I point to the Visual Studio Python files, it throws an error in the middle of the installation. So, I will ignore this option for now.

In ADS, I haven’t gotten “Use existing Python installation” to work either, so Hasan’s not alone in that regard.
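
For reference, once the kernel is installed, a Python cell in the notebook can query SQL Server with plain pyodbc. Here is a minimal sketch, assuming a local instance with Windows authentication (the driver, server, and query are placeholders, not from Hasan's post):

import pyodbc

# Hypothetical connection details -- adjust the driver, server, and auth method.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=master;Trusted_Connection=yes"
)
cursor = conn.cursor()
for row in cursor.execute("SELECT TOP 5 name, create_date FROM sys.databases"):
    print(row.name, row.create_date)
conn.close()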


Azure Data Factory Continued

Cathrine Wilhelmsen continues a series on Azure Data Factory. Catching up from the last time around, we first see the Copy Data activity:

You can copy data to and from more than 80 Software-as-a-Service (SaaS) applications (such as Dynamics 365 and Salesforce), on-premises data stores (such as SQL Server and Oracle), and cloud data stores (such as Azure SQL Database and Amazon S3). During copying, you can define and map columns implicitly or explicitly, convert file formats, and even zip and unzip files – all in one task.

Yeah. It’s powerful 🙂 But how does it really work?

Then Cathrine hits datasets:

But… please, please, please don’t use “source” or “destination” or “sink” or “input” or “output” or anything like that in your dataset names. It makes sense when you have one pipeline with one copy data activity, but as soon as you start building out your solution, it can get messy. Because what if you realize you want to use the original destination dataset as a source dataset in another copy data activity? Yeah… 🙂

So! Let’s rename the datasets.

After that, it’s on to linked services:

Azure Key Vault is a service for storing and managing secrets (like connection strings, passwords, and keys) in one central location. By storing secrets in Azure Key Vault, you don’t have to expose any connection details inside Azure Data Factory. You can connect to “the application database” without directly seeing the server, database name, or credentials used.
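
To illustrate that last point, here is a minimal Python sketch of pulling a secret out of Key Vault with the azure-keyvault-secrets library; the vault URL and secret name are made up for the example:

from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Hypothetical vault and secret names; authentication comes from the
# environment (Azure CLI login, managed identity, and so on).
client = SecretClient(
    vault_url="https://my-vault.vault.azure.net",
    credential=DefaultAzureCredential(),
)
secret = client.get_secret("app-db-connection-string")
# Use secret.value to connect; the value never has to appear in your pipeline definition.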

Cathrine is rolling with this series and it’s been great so far.


Build and Deploy SSIS Projects with Azure DevOps

Joost van Rossum has a pair of posts on Azure DevOps updates. First, Azure DevOps supports building SSIS projects:

This new task is much easier to use than the PowerShell code and also easier than most of the third-party tasks. With a little practice you can now easily create a build task in under two minutes, which is probably faster than the build itself.

If your build fails with the following error message, then you are probably using a custom task or component (like the Blob Storage Download Task). These tasks are not installed on the build agents hosted by Microsoft. The solution is to use a self-hosted agent where you can install all custom components.

Second, Azure DevOps supports deploying SSIS projects:

Microsoft just released the SSIS Deploy task (public preview) which makes it much easier to deploy an SSIS project. Below you will find the codeless steps to deploy artifacts created by the SSIS Build task.

Click through for the step-by-step instructions for each.


Querying Pulsar Streams with Apache Flink

Sijie Guo and Markos Sfikas show how we can interact with Apache Pulsar using Apache Flink:

The latest integration between Flink 1.9.0 and Pulsar addresses most of the previously mentioned shortcomings. The contribution of Alibaba’s Blink to the Flink repository adds many enhancements and new features to the processing framework that make the integration with Pulsar significantly more powerful and impactful. Flink 1.9.0 brings Pulsar schema integration into the picture, makes the Table API a first-class citizen and provides an exactly-once streaming source and at-least-once streaming sink with Pulsar. Lastly, with schema integration, Pulsar can now be registered as a Flink catalog, making running Flink queries on top of Pulsar streams a matter of a few commands. In the following sections, we will take a closer look at the new integrations and provide examples of how to query Pulsar streams using Flink SQL.

Read on to see this integration in action.
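
If you want to follow along with their Flink SQL examples, the Python pulsar-client is a quick way to seed a topic with test messages first. A minimal sketch, assuming a standalone Pulsar broker on localhost (the topic name and payloads are made up):

import pulsar

# Hypothetical broker address and topic name.
client = pulsar.Client("pulsar://localhost:6650")
producer = client.create_producer("sensor-readings")
for device, temp in [("a1", 21.5), ("a2", 19.8)]:
    # Send a small JSON payload for Flink SQL to pick apart later.
    producer.send(('{"device": "%s", "temp": %s}' % (device, temp)).encode("utf-8"))
client.close()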


Counting Tidyverse Package Arguments

Theo Roe has fun figuring out which tidyverse packages have the greatest number of available arguments in functions:

Before we start anything, I’d like to mention that most of the hard work came from nsaunders and his great blog post Idle thoughts lead to R internals: how to count function arguments.

Let’s get started.

The aim of this blog is to capture the number of arguments present in each function within the packages of the tidyverse.

Click through to see the code, as well as some methods of visualizing the results (methods which you can use in other situations).
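
Theo's code is all R, but the same idle thought ports to other ecosystems. Here is a rough Python analogue (not from the post) that uses the standard inspect module to count the parameters of each public callable in a module:

import inspect
import json  # any module works; json stands in for a tidyverse package here

def argument_counts(module):
    """Map each public callable's name to its parameter count."""
    counts = {}
    for name, obj in vars(module).items():
        if callable(obj) and not name.startswith("_"):
            try:
                counts[name] = len(inspect.signature(obj).parameters)
            except (TypeError, ValueError):  # some callables aren't introspectable
                pass
    return counts

# The five json callables with the most parameters.
print(sorted(argument_counts(json).items(), key=lambda kv: -kv[1])[:5])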


Handling Forbidden XML Characters with SQL Server

Slava Murygin shows how we can use Unicode look-alike characters to make XML appear to display forbidden special characters:

It is a well-known issue that SQL Server’s XML does not accept the characters “&”, “<”, and “>”.
There are two more forbidden XML characters, the single quote (') and the double quote ("), but SQL Server mostly accepts them.

The common solution is to replace these characters with their codes.
Say we have a silly sentence: “Anne & Robin collect > “berries” than Jane & Kevin, but < than Ivan & Lucy.”

Slava’s post is specifically geared toward wanting to view the characters as-is, not store them for later display. I’m not sure how often that comes up, but it’s a valid use case.
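
Slava's trick swaps in Unicode look-alikes so the text renders as typed; the more common entity-encoding route he mentions looks like this in Python (a standard-library sketch, not code from the post):

from xml.sax.saxutils import escape

sentence = 'Anne & Robin collect > "berries" than Jane & Kevin, but < than Ivan & Lucy.'

# escape() always handles the three hard-forbidden characters: & < >
print(escape(sentence))
# Quotes only need escaping inside attribute values; pass them explicitly when needed.
print(escape(sentence, entities={'"': "&quot;", "'": "&apos;"}))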


Learning to Learn

Buck Woody has a great post on learning how to learn:

In this new world of fast-paced learning, you’ll often find that you have to “throw away” what you’ve learned, meaning that a new language or tool is out now that requires your attention, and you won’t return to the one you know now. That doesn’t mean your hard study was wasted, because you’ll often find that new technology builds on the one you just learned, but I find that Type-A technologists are loath to drop something they just learned. You’ll have to get over that – it’s the way it is.

However, it can be true that once you learn something, it may be in an area that you just had to come up to speed on quickly, or it has “staying power” and will be around for a while. In that case, take this same process, and repeat all the steps, taking time to fill in the gaps and go much deeper in the areas you didn’t spend time on during your speed learning.

I really liked this post. The first thing it reminded me of was Sir Francis Bacon’s Of Studies (pdf, but with bonus content from Samuel Johnson), specifically the part about how we should superficially breeze through some books, but that others we must digest. The same goes with technologies.


Conversion Failed when Converting ‘NULL’ to Int

Kenneth Fisher has a fun error for us:

I love this error. Primarily because it demonstrates two very important things.

1. Errors matter. Make sure when you ask someone for help you give them the exact error and circumstances causing the error.
2. Experience matters. If you’ve been working with database development for a while, you can probably pinpoint exactly what’s causing this error just from the error.

When I was handed this error by one of my co-workers, I started by telling them exactly what was wrong, and then out of curiosity started a quick poll.

The answer is about as straightforward as it gets, and yet a pretty good percentage of people won’t get it on the first try.
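
The quotes in the error message are the giveaway: it is the varchar string 'NULL' that fails to convert, not an actual NULL. A minimal repro sketch via pyodbc (the connection details are hypothetical):

import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=master;Trusted_Connection=yes"
)
cursor = conn.cursor()
cursor.execute("SELECT CAST(NULL AS INT)")  # a real NULL converts quietly
print(cursor.fetchone())                    # (None,)
try:
    cursor.execute("SELECT CAST('NULL' AS INT)")  # the string 'NULL' does not
except pyodbc.Error as e:
    print(e)  # Conversion failed when converting the varchar value 'NULL' to data type int.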
