Press "Enter" to skip to content

Day: October 16, 2020

Indexing S3 Data with NiFi and CDP Data Hubs

Eva Nahari, et al., walk us through text indexing of S3 data with Solr, NiFi, and Cloudera Data Platform:

Data Discovery and Exploration (DDE) was recently released in tech preview in Cloudera Data Platform in public cloud. In this blog we will go through the process of indexing data from S3 into Solr in DDE with the help of NiFi in Data Flow. The scenario is the same as it was in the previous blog, but the ingest pipeline differs. Spark as the ingest pipeline tool for Search (i.e., Solr) is most commonly used for batch indexing data residing in cloud storage, or when you want to do heavy transformations of the data as a pre-step before sending it to indexing for easy exploration. NiFi (as depicted in this blog) is used for real-time, often voluminous incoming event streams that need to be explorable (e.g., logs, Twitter feeds, file appends, etc.).

Our ambition is to achieve this without a terminal or a single shell command: we have a UI tool for every step we need to take.

Click through to see how well they do at that.

Comments closed

Kafka and Zookeeper: a Breakup in the Making

Gautam Goswami walks us through the situation with Apache Kafka and Apache Zookeeper:

Zookeeper is a completely separate system with its own configuration file syntax, management tools, and deployment patterns. Getting a Kafka cluster up and running therefore means managing and deploying two individual distributed systems, which requires in-depth skill and experience. Whoever manages both systems together needs enough troubleshooting knowledge to track down issues in either of them.

There is always the possibility of a mistake in Zookeeper’s configuration files bringing down the Kafka cluster. So expertise in Kafka administration without Zookeeper won’t help you out of a crisis, especially in production environments where Zookeeper runs in a completely isolated environment (cloud). And even to set up and configure a single-node Kafka cluster for learning and R&D, we can’t proceed without Zookeeper.

Read on for the rest of the answer, as well as how Kafka is dis-integrating Zookeeper.

Comments closed

Records in C# 9

Patrick Smacchia walks us through record types in C# 9:

The second core property of the value-based semantics shared by string and records is immutability. Basically, an object is immutable if its state cannot change once the object has been created. Consequently, a class is immutable if it is declared in such a way that all its instances are immutable.

I remember a discussion with a developer who got nervous about immutability. It looked like an unnatural constraint to him: he wanted his object’s state to change. But he didn’t realize that something he used every day – string operations – relied on immutability. When you modify a string, a new string object actually gets created. Records behave the same way. Moreover, a clean new syntax based on the keyword with has been introduced in C# 9.
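To make that concrete, here is a minimal sketch of a record and the with keyword (the Person type and its values are illustrative, not from Patrick's post):

    using System;

    public record Person(string FirstName, string LastName);

    public static class Demo
    {
        public static void Main()
        {
            var p1 = new Person("Ada", "Lovelace");
            // p1.FirstName = "Grace";  // compile error: positional record properties are init-only

            // 'with' performs non-destructive mutation: p1 is untouched, p2 is a new object.
            var p2 = p1 with { LastName = "King" };

            Console.WriteLine(p1);  // Person { FirstName = Ada, LastName = Lovelace }
            Console.WriteLine(p2);  // Person { FirstName = Ada, LastName = King }

            // Records also get value-based equality: members are compared, not references.
            Console.WriteLine(p1 == new Person("Ada", "Lovelace"));  // True
        }
    }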

They aren’t as fancy as F# record types, but it is fun to watch C# move slowly to being a functional-friendlier language—something which has been the case since Don Syme helped implement generics in C#.

Comments closed

Optimizing Common Table Expressions

Itzik Ben-Gan continues a series on common table expressions:

If you’re wondering why not use a much simpler solution with a grouped query and a HAVING filter, it has to do with the density of the shipperid column. The Orders table has 1,000,000 orders, and the shipments of those orders were handled by five shippers, meaning that, on average, each shipper handled 20% of the orders. The plan for a grouped query computing the maximum order date per shipper would scan all 1,000,000 rows, resulting in thousands of page reads. Indeed, if you highlight just the CTE’s inner query (we’ll call it Query 3) computing the maximum order date per shipper and check its execution plan, you will get the plan shown in Figure 3.
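For a sense of the contrast, here is a rough sketch of the two approaches (table and column names are inferred from the excerpt; see Itzik's article for the actual schema and indexes):

    -- Grouped query: scans all 1,000,000 rows of Orders to aggregate.
    SELECT shipperid, MAX(orderdate) AS maxorderdate
    FROM dbo.Orders
    GROUP BY shipperid;

    -- Per-shipper alternative: with an index on (shipperid, orderdate) and a small
    -- Shippers table, this does one seek per shipper instead of a full scan.
    SELECT S.shipperid,
           (SELECT MAX(O.orderdate)
            FROM dbo.Orders AS O
            WHERE O.shipperid = S.shipperid) AS maxorderdate
    FROM dbo.Shippers AS S;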

Read on for classic Itzik.

Comments closed

Durable Azure Functions and Azure Data Factory

Rayis Imayev wants to use Azure Functions with Azure Data Factory:

Ok, here is my problem: I have an Azure Data Factory (ADF) workflow that includes an Azure Function call to perform external operations and return an output result, which in turn is used further down my ADF pipeline. My ADF workflow (1) depends on the output result of the Azure Function call; and (2) the time efficiency of the Azure Function call is another factor to consider: if its execution time hits 230 seconds or more, the ADF Azure Function activity will fail with a time-out error message and my workflow is screwed.
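The durable pattern works around that time-out by returning immediately and letting ADF poll for the result. A minimal sketch in C# (the function names and activity body are illustrative; Rayis's post shows the real implementation):

    using System.Net.Http;
    using System.Threading.Tasks;
    using Microsoft.Azure.WebJobs;
    using Microsoft.Azure.WebJobs.Extensions.DurableTask;
    using Microsoft.Azure.WebJobs.Extensions.Http;

    public static class LongRunningWork
    {
        // HTTP starter: kicks off the orchestration and returns 202 right away,
        // with status URLs in the response, so the caller never waits on the work.
        [FunctionName("HttpStart")]
        public static async Task<HttpResponseMessage> HttpStart(
            [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestMessage req,
            [DurableClient] IDurableOrchestrationClient starter)
        {
            string instanceId = await starter.StartNewAsync("RunOrchestrator", null);
            return starter.CreateCheckStatusResponse(req, instanceId);
        }

        // Orchestrator: coordinates the long-running activity.
        [FunctionName("RunOrchestrator")]
        public static Task<string> RunOrchestrator(
            [OrchestrationTrigger] IDurableOrchestrationContext context)
            => context.CallActivityAsync<string>("DoExternalWork", null);

        // Activity: the slow external operation lives here and may run well
        // past 230 seconds, since no HTTP connection is blocking on it.
        [FunctionName("DoExternalWork")]
        public static string DoExternalWork([ActivityTrigger] object input)
            => "done";
    }

ADF can then loop on the statusQueryGetUri returned by the starter (for example, a Web activity inside an Until loop) until the orchestration reports completion.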

This gave Rayis the impetus to try out durable functions. Read on to see how that worked out.

Comments closed

Querying Multiple Data Sources in Azure Synapse Analytics

James Serra walks us through querying Data Lake Storage Gen2, Cosmos DB, and a table created in an Azure Synapse serverless Apache Spark pool:

As I was finishing up a demo script for my presentation at the SQL PASS Virtual Summit on 11/13 (details on my session here), I wanted to blog about part of the demo that shows a feature in the public preview of Synapse that is, frankly, very cool. It is the ability to query data as it sits in ADLS Gen2, a Spark table, and Cosmos DB and join the data together with one T-SQL statement using SQL on-demand (also called SQL serverless), hence making it a federated query (also known as data virtualization). The beauty of this is you don’t have to first write ETL to collect all the data into a relational database in order to be able to query it all together, and you don’t have to provision a SQL pool, saving costs. Further, you are using T-SQL to query all of those data sources, so you are able to use a reporting tool like Power BI to see the results.
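In rough terms, such a federated query looks something like the following sketch (every account, container, database, and table name here is a placeholder, not from James's demo):

    SELECT lake.customerid, lake.saleamount, cust.customername, prod.productname
    FROM OPENROWSET(
             BULK 'https://mystorageaccount.dfs.core.windows.net/mycontainer/sales/*.parquet',
             FORMAT = 'PARQUET'
         ) AS lake
    JOIN OPENROWSET(
             'CosmosDB',
             'Account=myCosmosAccount;Database=Retail;Key=...',
             Customers
         ) AS cust ON cust.customerid = lake.customerid
    -- Spark tables are exposed to SQL serverless through the shared Synapse metadata model.
    JOIN mySparkDb.dbo.Products AS prod ON prod.productid = lake.productid;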

Click through to see how.

Comments closed

Custom Formatting in PowerShell

Jeffrey Hicks takes us through formatting in PowerShell and uses Get-Process as an example:

One of the features I truly enjoy about PowerShell is the ability to have it present information that I need in a form that I want. Here’s a good example. Running Get-Process is simple enough and the output is pretty complete. But one thing that would make it better for me is that sometimes I want an easy way to see high-memory-use processes. Yes, I can pipe Get-Process to Sort-Object and Where-Object. However, in this particular situation, what I really want is to see high-memory-usage processes displayed in red. Maybe those that are getting close to my arbitrary limit I’d like to see in yellow. This isn’t that difficult to achieve using ANSI escape sequences.
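Jeffrey's post goes on to build proper custom format files; as a quick taste of the ANSI technique itself, a calculated property can do it (the thresholds and colors below are arbitrary choices, not Jeffrey's):

    # Color the working-set column of Get-Process output with ANSI escape sequences.
    $esc = [char]27
    Get-Process |
        Sort-Object WS -Descending |
        Select-Object -First 15 Id, ProcessName,
            @{Name = 'WS(MB)'; Expression = {
                $mb = [math]::Round($_.WS / 1MB, 1)
                if     ($mb -ge 500) { "$esc[91m$mb$esc[0m" }   # red: high memory use
                elseif ($mb -ge 250) { "$esc[93m$mb$esc[0m" }   # yellow: getting close
                else                 { "$mb" }
            }}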

Click through to see how.

Comments closed