
May 29, 2020

Securing a Kafka Deployment

Rick Spurgeon and Nikoleta Verbeck give us some tips on securing Apache Kafka:

Confluent Platform provides Role-Based Access Control (RBAC), which addresses the gaps listed above. RBAC and ACLs perform similar functions but approach the problem differently and can be used independently or cooperatively. RBAC is powered by Confluent’s Metadata Service (MDS), which integrates with LDAP and acts as the central authority for authorization and authentication data. RBAC leverages role bindings to determine which users and groups can access specific resources and what actions can be performed within those resources (roles). RBAC is empowered on the Kafka cluster by way of Confluent Server. Confluent Server is a fully compatible Kafka broker which integrates commercial security features like RBAC.

The list of items is pretty straightforward with no major surprises.
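
For a sense of what the ACL half of that comparison looks like in practice, here is a minimal sketch using the admin API of a recent confluent-kafka Python client; the broker address, topic, and principal are all hypothetical:

    from confluent_kafka.admin import (
        AclBinding, AclOperation, AclPermissionType, AdminClient,
        ResourcePatternType, ResourceType,
    )

    # Hypothetical broker address; assumes an authorizer is enabled on the cluster.
    admin = AdminClient({"bootstrap.servers": "broker:9092"})

    # Allow the (hypothetical) principal User:alice to read the "orders" topic.
    acl = AclBinding(
        ResourceType.TOPIC,           # resource type
        "orders",                     # resource name
        ResourcePatternType.LITERAL,
        "User:alice",                 # principal
        "*",                          # any host
        AclOperation.READ,
        AclPermissionType.ALLOW,
    )

    # create_acls returns a dict mapping each binding to a future.
    for binding, future in admin.create_acls([acl]).items():
        future.result()  # raises if the broker rejected the binding

RBAC role bindings, by contrast, are managed through Confluent's own tooling (such as the confluent CLI) rather than the Kafka ACL API.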

Pandas UDFs and Python Type Hints in Spark 3.0

Hyukjin Kwon announces some updates forthcoming in Apache Spark 3.0:

Pandas UDFs work with Pandas APIs inside the function and Apache Arrow for exchanging data. They allow vectorized operations that can increase performance up to 100x compared to row-at-a-time Python UDFs.

The example below shows a Pandas UDF that simply adds one to each value: a function called pandas_plus_one, decorated by pandas_udf with the Pandas UDF type specified as PandasUDFType.SCALAR.
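
Here is what that looks like, with the pre-3.0 definition from the excerpt alongside the Spark 3.0 type-hint version the post introduces (a lightly hedged sketch based on the excerpt):

    import pandas as pd
    from pyspark.sql.functions import pandas_udf, PandasUDFType

    # Spark 2.x style: the UDF type is declared with PandasUDFType.SCALAR.
    @pandas_udf("long", PandasUDFType.SCALAR)
    def pandas_plus_one(v):
        return v + 1

    # Spark 3.0 style: the same UDF, with the type inferred from Python type hints.
    @pandas_udf("long")
    def pandas_plus_one_hinted(s: pd.Series) -> pd.Series:
        return s + 1

    # Either version is applied column-wise, e.g.:
    # spark.range(3).select(pandas_plus_one("id")).show()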

Click through for explanations and demos for each.

Switching Between Solution and Folders Views with SSRS Projects

Elizabeth Noble deals with a head-scratcher:

This past week, I set the goal of automating the deployment of our first SSRS report at work. I created the report, and after adding it to source control, my Object Explorer looked like the image below.

I added my solution to source control and synced the project up to GitHub. However, when my colleague tried to clone the repo and open the Report Project, they saw an image like the one below.

Click through for the images (which really tell the story) and how you can fix this problem.

Passing Around JSON Arrays in Azure Data Factory

Rayis Imayev continues a series on JSON in Azure Data Factory:

It’s nothing new that we can reference nested elements of ADF activities’ output, since it’s represented in JSON format, or pass JSON file content to other tasks/components that can process this format.

But what if you need to pass the complete output of your ADF activity task further down your pipeline? Or pass a JSON array’s elements to another ADF activity or sub-pipeline as a parameter value? Let’s explore what other options are available in Azure Data Factory for this very interesting use case.
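
To make that second scenario concrete, here is a hedged Python sketch (not necessarily Rayis's approach): ADF's createRun REST operation accepts pipeline parameters as the request body, so a JSON array can be handed to a pipeline or sub-pipeline through an Array-typed parameter. The subscription, factory, pipeline, and parameter names are all hypothetical:

    import requests

    token = "<bearer-token>"  # hypothetical; obtain via azure-identity in practice

    # Hypothetical subscription, resource group, factory, and pipeline names.
    url = (
        "https://management.azure.com/subscriptions/<sub-id>"
        "/resourceGroups/<rg>/providers/Microsoft.DataFactory"
        "/factories/<factory>/pipelines/ProcessItems/createRun"
        "?api-version=2018-06-01"
    )

    # createRun takes pipeline parameters as the request body, so a JSON array
    # can be passed as the value of an Array-typed parameter (here, itemList).
    body = {"itemList": [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]}

    resp = requests.post(url, json=body, headers={"Authorization": f"Bearer {token}"})
    resp.raise_for_status()
    print(resp.json()["runId"])

Inside the target pipeline, the array is then available as @pipeline().parameters.itemList.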

Read on for the demo.

How SQL Server Stores the Money Data Type

Randolph West wants to talk to us about how we use our MONEY:

While MONEY and DECIMAL(19,4) are functionally the same, they are stored differently on disk, and this is where it gets interesting. Let’s use a random amount of $ 4,513.19. Since it’s small enough to fit in both MONEY and SMALLMONEY, we can do a simple experiment. When we ask SQL Server to store this value in a MONEY data type, it will store it (byte-reversed) as 0x7CA8B00200000000. The SMALLMONEY version of this amount would look almost identical, stored as 0x7CA8B002 (without the leading zeroes). A quick look at this byte-reversed value (0x02B0A87C) in a hex calculator gives us the amount of 45,131,900. After moving the decimal point four places to the left, we get our starting value of 4513.1900.
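
If you'd like to verify that arithmetic yourself, here is a quick Python sketch of the byte reversal; only the hex value comes from the excerpt, and the rest is plain integer math:

    # The on-disk MONEY bytes from the example above ("byte-reversed" = little-endian).
    raw = bytes.fromhex("7CA8B00200000000")

    # Interpreting them as a signed little-endian integer gives 0x02B0A87C = 45,131,900.
    ticks = int.from_bytes(raw, byteorder="little", signed=True)

    # MONEY keeps four decimal places, so shift the decimal point four places left.
    print(ticks / 10_000)  # 4513.19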

But read on to see how that compares to other data types.

Custom Power BI Maps

Alice Drummond has two articles on improving the map experience in Power BI. First up is creating drill-down maps:

But why would we want to drill down on a map?

See the big picture: Using drill down, we can visualise the dataset at the ‘big picture’ scale, and then explore finer details for areas of interest (just think of the classic spatial hierarchy of Country → State → City → Suburb)

Save on real estate: We can also display multiple datasets at different levels in the same map – which goes a long way to conserving your precious report page real estate by reducing the number of visuals in your report!  

Performance: Following on from above, fewer visuals almost always equate to better report performance! And this is especially true when using MapBox, which is at the slower end of the Power BI performance scale at the best of times – so using drill down to reduce the number of maps, combined with filtering your datasets, goes a long way to speeding up your reports!

The second post is all about styling choropleth maps:

If you’ve played around with MapBox in Power BI – you’ll know that it has loads of great features to create really rich and beautiful maps, including some great ‘out of the box’ map styles (i.e. base maps). However, you might not be aware that it also gives you the ability to design custom styles with your own spatial layers. I discovered this feature on a recent project where my client wanted to include Victoria’s Catchment Management Authority (or CMA) boundaries on their base-map to provide greater context to help interpret their data. Up until this point, the only option I knew of was to purchase an expensive ArcGIS Online licence to create custom map styles. So, you can imagine just how excited I was when I discovered that I could also do this in MapBox – for FREE!!!

Click through for the posts and videos.

Feeding Databricks Output to Azure SQL Database

Arun Sirpal takes us through the process of moving data from Databricks into Azure SQL Database:

Recently I got to a stage where I leveraged Databricks to the best of my ability to join a couple of CSV files together, play around with some aggregations, and then output it all back to a different mount point (based on Azure Storage) as a Parquet file. I decided that I actually wanted to move this data into Azure SQL DB, which you may want to do one day.

This isn’t just dropping files into Blob Storage and picking them up, but rather a direct integration.
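
For a sense of the general shape of such an integration, here is a minimal PySpark sketch; the paths, server, database, table, and secret names are all hypothetical, and Arun may well use the dedicated SQL Spark connector rather than plain JDBC:

    # Read the Parquet output from the mount point, then push it to Azure SQL DB
    # over JDBC. All names below are hypothetical; spark and dbutils are provided
    # by the Databricks notebook environment.
    df = spark.read.parquet("/mnt/output/aggregated")

    (df.write
       .format("jdbc")
       .option("url", "jdbc:sqlserver://myserver.database.windows.net:1433;database=mydb")
       .option("dbtable", "dbo.Aggregated")
       .option("user", dbutils.secrets.get("my-scope", "sql-user"))
       .option("password", dbutils.secrets.get("my-scope", "sql-password"))
       .mode("overwrite")
       .save())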

Stopping an ADF SSIS Integration Runtime

Andy Leonard shows how to shut off the Azure Data Factory SSIS integration runtime when you don’t need to use it:

I have blogged about stopping an Azure-SSIS IR in the past (Start Azure-SSIS, Execute Package, Stop Azure-SSIS). The pattern described in the Start Azure-SSIS, Execute Package, Stop Azure-SSIS post is sound and awesome. My only issue is the pipeline will report failure if the Azure-SSIS IR is not running when the Web Activity executes the Stop command. Mind you, the operation succeeds. But it reports failure.

Reporting failure when the operation succeeds seriously messes with my CDO (that’s OCD with the letters in the proper order).
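
One way around that spurious failure is to check the IR's state first and only issue the stop call when it is actually running. Here is a rough Python sketch against the ADF REST API's getStatus and stop operations (all names hypothetical, and not necessarily Andy's exact approach):

    import requests

    token = "<bearer-token>"  # hypothetical; obtain via azure-identity in practice

    # Hypothetical ARM path to the Azure-SSIS integration runtime.
    base = (
        "https://management.azure.com/subscriptions/<sub-id>"
        "/resourceGroups/<rg>/providers/Microsoft.DataFactory"
        "/factories/<factory>/integrationRuntimes/Azure-SSIS"
    )
    headers = {"Authorization": f"Bearer {token}"}

    # Ask for the IR's current state first...
    status = requests.post(f"{base}/getStatus?api-version=2018-06-01", headers=headers)
    state = status.json()["properties"]["state"]

    # ...and only issue the stop call when the IR is actually running, so that
    # an already-stopped IR doesn't get reported as a failure.
    if state == "Started":
        requests.post(f"{base}/stop?api-version=2018-06-01", headers=headers).raise_for_status()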

Read on to see how to do this the best way.
