Press "Enter" to skip to content

Category: Cloud

Spark SQL in Delta Lake

Kundan Kumarr walks us through some of the basic SQL operations you can perform with Delta Lake in Apache Spark:

Nowadays, Delta Lake is a buzzword in the Big Data world, especially among Spark developers, because it resolves many of the issues found in the Big Data domain. Delta Lake is an open-source storage layer that brings reliability to data lakes. Delta Lake provides ACID transactions, scalable metadata handling, and unified streaming and batch data processing. It is evolving day by day and adds cool features in every release. On 19th June 2020, Delta Lake version 0.7.0 was released, the first release on Spark 3.x. This release includes important features that can make Spark developers' work easier.

One of the interesting key features in this release is support for metastore-defined tables and SQL DDL. We can now define Delta tables in the Hive metastore and use the table name in all SQL operations. We can use SQL DDL to create tables, insert into tables, explicitly alter the schema of the tables, and so on. So in this blog, we will learn how to perform SQL DDL/DML/DQL operations in Delta Lake 0.7.0.

Click through for the examples.
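
To give a flavor of those examples, here is a minimal PySpark sketch of the new SQL support; the table and column names are made up for illustration:

    # Minimal sketch of Delta Lake 0.7.0 SQL support on Spark 3.x; the
    # "events" table and its columns are hypothetical.
    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             .appName("delta-sql-demo")
             .config("spark.sql.extensions",
                     "io.delta.sql.DeltaSparkSessionExtension")
             .config("spark.sql.catalog.spark_catalog",
                     "org.apache.spark.sql.delta.catalog.DeltaCatalog")
             .getOrCreate())

    # DDL: define a Delta table by name in the metastore.
    spark.sql("CREATE TABLE IF NOT EXISTS events (id INT, status STRING) USING DELTA")

    # DML: insert and update rows with plain SQL.
    spark.sql("INSERT INTO events VALUES (1, 'created'), (2, 'updated')")
    spark.sql("UPDATE events SET status = 'archived' WHERE id = 2")

    # DQL: query the table by name.
    spark.sql("SELECT * FROM events").show()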

Building an Azure Function to Automate CHECKDB

Arun Sirpal shows us how to build an Azure Function:

The title is a mouthful and so is this post. In the past I have linked to blog posts from Microsoft that say consistency checks for Azure SQL Database are the responsibility of Microsoft. (https://azure.microsoft.com/en-gb/blog/data-integrity-in-azure-sql-database/)

However, Paul Randal's thoughts on it (via his insider email) got me thinking, and they form the core of this post. If you want to run DBCC CHECKDB against Azure SQL Database (which I know people do), how can you do it? There are many ways, but for this blog post, enter Azure Functions. There are many moving parts to this, but once set up and coded, it is a very satisfying experience. Let's dig in. I am NOT going to copy and paste every little element of the high-level guide from Microsoft; there is no point in that. Instead, I will show you the links you need to set up the relevant function app project, and then the tailored bits around CHECKDB form the bulk of this post.

This isn’t necessary to do, but if you want to learn how Azure Functions work, it’s a good example of working through the mechanics.

Reading S3 Data into Kafka

Ramu Kakarla takes us through using Apache Camel to load Amazon S3 data into a Kafka topic:

Apache Camel is an open-source framework for message-oriented middleware with a rule-based routing and mediation engine. It provides a Java object-based implementation of the Enterprise Integration Patterns, using an application programming interface to configure routing and mediation rules.

Read on to see how this fits together.
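
Camel routes are typically written in Java, but the shape of the flow is easy to picture. Here is a rough Python equivalent using boto3 and kafka-python rather than Camel itself; the bucket, topic, and broker address are placeholders:

    # Not Camel itself: an equivalent S3-to-Kafka flow in Python with
    # boto3 and kafka-python. Bucket, topic, and broker are placeholders.
    import boto3
    from kafka import KafkaProducer

    s3 = boto3.client("s3")
    producer = KafkaProducer(bootstrap_servers="localhost:9092")

    # Read each object in the source bucket and publish its body to the topic.
    for obj in s3.list_objects_v2(Bucket="my-source-bucket").get("Contents", []):
        body = s3.get_object(Bucket="my-source-bucket", Key=obj["Key"])["Body"].read()
        producer.send("my-topic", key=obj["Key"].encode("utf-8"), value=body)

    producer.flush()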

ACID Transactions with Hive LLAP in ElasticMapReduce

Suthan Phillips and Chao Gao walk us through ACID transactions when using Hive on Amazon’s ElasticMapReduce platform:

ACID (atomicity, consistency, isolation, and durability) properties make sure that the transactions in a database are atomic, consistent, isolated, and durable.

Amazon EMR 6.1.0 adds support for Hive ACID transactions so it complies with the ACID properties of a database. With this feature, you can run INSERT, UPDATE, DELETE, and MERGE operations in Hive managed tables with data in Amazon Simple Storage Service (Amazon S3). This is a key feature for use cases like streaming ingestion, data restatement, bulk updates using MERGE, and slowly changing dimensions.

This post demonstrates how to enable Hive ACID transactions in Amazon EMR, how to create a Hive transactional table, how it can achieve atomic and isolated operations, and the concepts, best practices, and limitations of using Hive ACID in Amazon EMR.

Click through for a demonstration.
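
As a taste of what the demonstration covers, here is a hedged sketch of creating and modifying a transactional table through PyHive, assuming the cluster has Hive ACID enabled; the host, table, and column names are hypothetical:

    # Hedged sketch: HiveQL ACID operations issued through PyHive.
    # Host, table, and column names are hypothetical.
    from pyhive import hive

    conn = hive.connect(host="emr-master-node", port=10000)
    cursor = conn.cursor()

    # A managed ORC table with the transactional property gets ACID semantics.
    cursor.execute("""
        CREATE TABLE IF NOT EXISTS customers (id INT, name STRING)
        STORED AS ORC
        TBLPROPERTIES ('transactional' = 'true')
    """)

    # Row-level DML then works against the managed table.
    cursor.execute("INSERT INTO customers VALUES (1, 'alice'), (2, 'bob')")
    cursor.execute("UPDATE customers SET name = 'robert' WHERE id = 2")
    cursor.execute("DELETE FROM customers WHERE id = 1")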

Automating Azure Resource Deletion

Drew Skwiers-Koballa has put together a runbook to remove Azure resources tagged in a certain way:

Microsoft puts a lot of effort into making it easy for you to log on to portal.azure.com and spin up a VM, SQL Database, Function, or other instance. The Visual Studio Enterprise subscription with $150 credit per month can go quickly if you're not careful to stop or remove big-ticket items as soon as you're done with them. Even if you have a virtually unlimited Azure account connected to a credit card, expense account, or a trust fund, you probably don't want to accidentally leave something running longer than it is needed.

With an Azure Automation account and a PowerShell workflow runbook, I'm able to use resource tags to flag resources for automatic deletion, either on a set date or immediately that evening.

Click through for that runbook.
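
Drew's runbook is a PowerShell workflow, but the tag-driven idea translates directly. Here is a minimal Python sketch under assumed names: the "expireOn" tag, the ISO date format, and the subscription ID are all placeholders, and this version deletes whole resource groups rather than individual resources:

    # Minimal sketch of tag-driven cleanup with the Azure SDK for Python.
    # The "expireOn" tag name and ISO date format are assumptions.
    from datetime import date

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.resource import ResourceManagementClient

    client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")
    today = date.today().isoformat()

    for group in client.resource_groups.list():
        tags = group.tags or {}
        # ISO dates compare correctly as strings: delete anything expired.
        if tags.get("expireOn", "9999-12-31") <= today:
            client.resource_groups.begin_delete(group.name)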

Comparing Cassandra and DynamoDB

Lewis DiFelice compares and contrasts Cassandra with DynamoDB:

In this post, we’ll look at some of the key differences between Apache Cassandra (hereafter just Cassandra) and DynamoDB.

Both are distributed databases with similar architectures, and both offer incredible scalability, reliability, and resilience. However, there are also differences, and understanding those differences and their cost implications can help you determine the right solution for your application.

There’s some good info in this comparison.

Changing Azure SQL DB Service-Level Objectives

Monica Rathbun notes that SSMS lets you change service-level objectives for Azure SQL Databases:

Sometimes as a DBA, I am lazy and want the ability to execute all of my tasks in one place. Lucky for me, I discovered the other day that I can change my Azure SQL Database Service Level Objective options within SQL Server Management Studio (SSMS) without ever having to go to the Azure Portal.

Read on to learn how, as well as what you can change.
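
Under the covers, this kind of change maps to an ALTER DATABASE statement, so it can also be scripted. Here is a hedged pyodbc sketch; the server, credentials, database name, and the 'S2' target tier are all placeholders:

    # The SLO change expressed as T-SQL via pyodbc; connect to the logical
    # server's master database. All names and the 'S2' tier are placeholders.
    import pyodbc

    conn = pyodbc.connect(
        "Driver={ODBC Driver 17 for SQL Server};"
        "Server=myserver.database.windows.net;Database=master;"
        "Uid=admin_user;Pwd=<password>;", autocommit=True)

    # Scale the database to the Standard S2 service-level objective.
    conn.execute("ALTER DATABASE [MyDatabase] MODIFY (SERVICE_OBJECTIVE = 'S2')")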

Working with Read-Only Endpoints in Azure SQL Database

Arun Sirpal takes us through one method for improving performance in Azure SQL Database:

One of the main benefits of configuring active geo-replication for Azure SQL Database is leveraging the read-only endpoint, a good technique for splitting read-only activity away from OLTP-based workloads. This means there is no reason why you cannot point users to these databases via tools such as Power BI, as highlighted below.

But there are some things to keep in mind, as Arun points out.
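
For a concrete picture, pointing a reporting tool at the readable secondary comes down to a connection-string change; this pyodbc sketch uses placeholder server, database, and credentials:

    # Sketch: connect to a readable geo-secondary. With a failover group,
    # the <name>.secondary.database.windows.net listener plus
    # ApplicationIntent=ReadOnly routes reads away from the primary.
    # Server, database, and credentials below are placeholders.
    import pyodbc

    conn = pyodbc.connect(
        "Driver={ODBC Driver 17 for SQL Server};"
        "Server=myserver-secondary.database.windows.net;Database=MyDatabase;"
        "Uid=report_user;Pwd=<password>;ApplicationIntent=ReadOnly;")

    # Reads succeed here; writes against the secondary will fail.
    rows = conn.execute("SELECT TOP 10 * FROM dbo.Sales").fetchall()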

The %tensorboard Magic Command in Databricks Notebooks

Jerry Liang and Hossein Falaki introduce a new magic command in Databricks Runtime 7.2:

In 2017, we released the dbutils.tensorboard.start() API to manage and use TensorBoard inside Databricks Python notebooks. This API only permits one active TensorBoard process on a cluster at any given time, which hinders multi-tenant use cases. Early last year, TensorBoard released its own API for notebooks via the %tensorboard Python magic command. This API not only starts TensorBoard processes but also exposes TensorBoard's command-line arguments in the notebook environment. In addition, it embeds the TensorBoard UI inside notebooks, whereas the dbutils.tensorboard.start API prints a link to open TensorBoard in a new tab.

Read on to see how you can use it.
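
Usage mirrors the open-source TensorBoard notebook API; in a notebook cell it looks like this (the log directory path is a placeholder):

    # In a Python notebook cell; the log directory path is a placeholder.
    %load_ext tensorboard
    %tensorboard --logdir /dbfs/ml/train_logs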

Managing Azure IoT Digital Twins with Python

Paul Hernandez continues a series on digital twins:

As mentioned in my previous post, the Azure Digital Twins (ADT) service is a new way to build next-generation IoT solutions. In the first post, I showed you in a video how to manage ADT instances with the ADT Explorer. In the second post, I showed how to do mostly the same using Postman and the ADT REST API.

ADT has control plane APIs and data plane APIs. The latter is used to manage elements in the ADT instance. To use these APIs, Microsoft published a .NET (C#) SDK. An SDK is a convenient way to manage instances, since you can easily create applications for your digital twins. If for any reason you prefer to use another language like Java, JavaScript, or Python, you need to generate your own SDK.

In this post, I describe how to autogenerate a Python SDK using the AutoRest tool and a Swagger file.

Read on to see how you can manage digital twins by generating an SDK.
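
As a rough sketch of the generation step, AutoRest is driven from the command line (invoked here through Python's subprocess for consistency); the swagger file name, output folder, and namespace are placeholders:

    # Hedged sketch: invoke AutoRest to generate a Python client from a
    # swagger (OpenAPI) file. File, folder, and namespace are placeholders.
    import subprocess

    subprocess.run([
        "autorest",
        "--python",
        "--input-file=digitaltwins.json",
        "--output-folder=./adt_sdk",
        "--namespace=adt_client",
    ], check=True)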
