
Category: Cloud

Looking At Databricks Cluster Pricing

Tristan Robinson takes a look at Azure Databricks pricing:

The use of Databricks for data engineering or data analytics workloads is becoming more prevalent as the platform grows, and has made its way into most of our recent modern data architecture proposals – whether that be PaaS warehouses or data science platforms.

To run any type of workload on the platform, you will need to set up a cluster to do the processing for you. While the Azure-based platform has made this relatively simple for development purposes, i.e. give it a name, select a runtime, select the type of VMs you want and away you go – for production workloads, a bit more thought needs to go into the configuration/cost. In the following blog I’ll start by looking at the pricing in a bit more detail, which will aim to provide a cost element to the cluster configuration process.

There are a few complicating factors in figuring out cluster price, but rest assured that it will be costly.
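The core arithmetic is per-hour VM cost plus per-hour DBU cost, both scaled by the number of nodes. Here is a minimal T-SQL sketch of that calculation with purely made-up rates; real VM and DBU prices vary by VM size, workload type, pricing tier, and region.

```sql
-- Illustrative only: the rates below are placeholders, not real Azure prices.
DECLARE @NodeCount      INT           = 4,      -- workers plus driver
        @VMRatePerHour  DECIMAL(9, 4) = 0.50,   -- hypothetical VM cost per node-hour
        @DBUsPerNode    DECIMAL(9, 4) = 1.5,    -- DBUs emitted per node-hour (depends on VM size)
        @DBURate        DECIMAL(9, 4) = 0.40;   -- hypothetical price per DBU-hour

SELECT @NodeCount * (@VMRatePerHour + @DBUsPerNode * @DBURate) AS ClusterCostPerHour;
```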


Automating Azure SQL Database Scaling

Arun Sirpal shows how to use Azure Logic Apps to auto-scale Azure SQL Database:

When I was presenting my Azure SQL Database session at DataRelay (formerly SQLRelay), I was asked (over coffee) about auto-scaling capabilities. Quite simply, there is nothing out of the box to achieve this. The idea of auto-scaling would be good where you need a burst to fulfill higher demand for a period of time, you know, something like an “end of the day, Friday night sale” for your database.

Classically you would probably go down the PowerShell route via a runbook, but I am different.

In this case, the automation is timer-based rather than load-based.
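For reference, the scale operation itself can be expressed as a single T-SQL statement; whether the Logic App runs it directly or goes through the REST API instead is a design choice. A minimal sketch, with a placeholder database name and tiers:

```sql
-- Scale up ahead of the busy window.
ALTER DATABASE [SalesDb] MODIFY (SERVICE_OBJECTIVE = 'S3');

-- Scale back down afterwards. The change is asynchronous; check
-- sys.dm_operation_status in master to see when it completes.
ALTER DATABASE [SalesDb] MODIFY (SERVICE_OBJECTIVE = 'S0');
```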


Deploying An Azure Container Within A Virtual Network

Andrew Pruski shows us that you can now deploy an Azure container running SQL Server within an Azure virtual network:

Up until now, Azure Container Instances had only one option to allow us to connect: assigning a public IP address that was directly exposed to the internet.

Not really great, as exposing SQL Server on port 1433 to the internet is generally a bad idea.

Now I know there’s a lot of debate about whether or not you should change the port that SQL is listening on to prevent this from happening. My personal opinion is that if someone wants to get into your SQL instance, changing the port isn’t going to slow them down much. However, a port change will stop opportunistic hacks (such as the above).

But now we have another option: the ability to deploy an ACI within a virtual network in Azure! So let’s run through how to deploy.

Click through for those instructions.


Azure Data Factory Or Integration Services?

Teo Lachev contrasts use cases for Integration Services versus Azure Data Factory V2:

So, ADF was incorrectly positioned as “SSIS for the Cloud” and unfortunately once that message made it out there was a messaging problem that Microsoft has been fighting ever since. Like Azure ML, on the glory road to the cloud things that were difficult with SSIS (installation, projects, deployment) became simple, and things that were simple became difficult. Naturally, Microsoft took a lot of criticism from the customers and community, including from your humble correspondent. ADF, of course, has nothing to do with SSIS, thus leaving many data integration practitioners with a difficult choice: should you take the risk and take the road less traveled with ADF, or continue with the tried-and-true SSIS for data integration on Azure?

To Microsoft’s credit, ADF v2 has made significant enhancements in features, usability, and maintainability. There is also a “lift and shift” option to run SSIS inside ADF, but since this architecture requires a VM, I consider it a narrow case scenario, such as when you need to extend ADF with SSIS features that it doesn’t have. Otherwise, why would you start new development with SSIS hosted under ADF, if you could provision and license the VM yourself and have full control over it?

All in all, Teo is not the biggest fan of ADF at this point and leans heavily toward SSIS; read on for the reasoning.


Azure Data Factory V2 Dependencies

Meagan Longoria has important notes on how Azure Data Factory V2 dependencies differ from SQL Server Integration Services precedence constraints:

This sounds similar to SSIS precedence constraints, but there are a couple of big differences.

  1. SSIS allows us to define expressions to be evaluated to determine if the next task should be executed.
  2. SSIS allows us to choose whether we handle multiple constraints as a logical AND or a logical OR. In other words, do we need all constraints to be true or just one?

ADF V2 activity dependencies are always a logical AND. While we can design control flows in ADF similar to how we might design control flows in SSIS, this is one of several differences. Let’s look at an example.

Meagan gives us three methods of replicating SSIS functionality using ADF V2, so check it out.


Azure SQL Managed Instance Inventory Analysis

Jovan Popovic shows us how to use the Azure CLI plus JSON support in SQL Server to manage a list of Azure SQL Managed Instances:

Sometimes you need to know how many Managed Instances you have created in the Azure cloud. Although you can find all information about Azure SQL Managed Instances in the Azure portal or API (ARM, PowerShell, Azure CLI), sometimes it is hard to list all instances and search them using some criteria. In this post you will see how easily you can load the list of your Managed Instances and build an inventory of your resources.

Problem

Imagine that you have a large number of Managed Instances and you need to know how many instances you have, in what regions, subnets, and virtual networks they are placed, how much compute and storage is allocated to each of them, and so on. Analyzing the inventory of Managed Instances might be hard if you just use PowerShell.

Click through for the solution.
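The JSON angle presumably comes down to feeding the output of az sql mi list into OPENJSON. A minimal sketch of that pattern follows, using a trimmed-down sample document; the property names are assumptions based on the ARM/CLI output and may differ by API version.

```sql
-- @json would hold the raw output of: az sql mi list
DECLARE @json NVARCHAR(MAX) = N'[
  { "name": "mi-prod-01", "location": "westeurope",
    "sku": { "name": "GP_Gen4" }, "vCores": 8, "storageSizeInGb": 256 }
]';

SELECT mi.[name], mi.[location], mi.skuName, mi.vCores, mi.storageSizeInGb
FROM OPENJSON(@json)
WITH (
    [name]          NVARCHAR(128) '$.name',
    [location]      NVARCHAR(64)  '$.location',
    skuName         NVARCHAR(64)  '$.sku.name',
    vCores          INT           '$.vCores',
    storageSizeInGb INT           '$.storageSizeInGb'
) AS mi;
```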


Testing Azure SQL Database Failover

Arun Sirpal shows us how easy it is to fail over Azure SQL Database to another region:

So what happens now if I connect to the read/write endpoint? (I test this via SSMS)

The dreaded IP address / create a new firewall rule message. Why? Well, this setup utilized a “server” level firewall rule, and the server in the US did NOT have the IP address mapped in; you can see from the below screenshot that there are no firewall rules configured.

Fixing this is easy: you could just add the IP address on the secondary server as another server-level rule, but you should seriously consider using a database-level firewall rule, as that setup will get replicated to the secondary server, making the failover experience smoother.

Read on to see how to set this up.
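For reference, a database-level rule is created with sp_set_database_firewall_rule from inside the user database itself, which is why it travels with the database during geo-replication. A minimal sketch, with placeholder IP addresses:

```sql
-- Run inside the user database (not master); the rule is stored in the database
-- and is therefore replicated to the failover secondary along with the data.
EXECUTE sp_set_database_firewall_rule
    @name             = N'AppClients',
    @start_ip_address = '203.0.113.10',
    @end_ip_address   = '203.0.113.10';

-- Review existing database-level rules.
SELECT * FROM sys.database_firewall_rules;
```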


Redshift Architecture Performance Tips

John Ryan has a few hints to help us build speedy Redshift clusters:

The Need to Vacuum

As Redshift does not reclaim free space automatically, update and delete operations can frequently lead to table growth. Equally, it’s important, as new entries are added, that the data is maintained in a sorted sequence.

The VACUUM command is used to re-sequence data, and reclaim disk space as a result of DELETE and UPDATE operations. Although it won’t block other processes, it can be a resource-intensive operation, especially for data stored using interleaved sort keys.

It should be run periodically to ensure consistent performance and to reduce disk usage.

Some of this is good Postgres advice; some of it is good MPP advice (and serves well, for example, when dealing with Azure SQL Data Warehouse); the rest is Redshift-specific.
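For a flavor of what that maintenance looks like, here is a minimal sketch; the table name is a placeholder.

```sql
-- Reclaim space from deleted rows and re-sort the table (the default VACUUM mode).
VACUUM FULL sales.orders;

-- For tables with interleaved sort keys, REINDEX also re-analyzes the distribution
-- of the sort key columns; it is the more expensive option.
VACUUM REINDEX sales.orders;

-- Keep the planner's statistics current after large loads.
ANALYZE sales.orders;
```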


Databricks Delta Now Available On Azure

Cihan Biyikoglu and Singh Garewal announce the availability of Databricks Delta on Azure Databricks:

Using an innovative new table design, Delta supports both batch and streaming use cases with high query performance and strong data reliability while requiring a simpler data pipeline architecture:

Increased query performance – Able to deliver 10 to 100 times faster performance than Apache Spark™ on Parquet through the use of key enablers such as compaction, flexible indexing, multi-dimensional clustering and data caching.

Improved data reliability – By employing ACID (“all or nothing”) transactions, schema validation / enforcement, exactly once semantics, snapshot isolation and support for UPSERTS and DELETES.

Reduced system complexity – Through the unification of batch and streaming in a common pipeline architecture – being able to operate on the same table also means a shorter time from data ingest to query result. Schema evolution provides the ability to infer schema from input data making it easier to deal with changing business needs.

The Azure version of Databricks is quickly reaching parity with the classic AWS-hosted version.
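For a taste of the UPSERT support mentioned above, here is a minimal Spark SQL sketch against a hypothetical Delta table; the table and column names are made up, and events_updates stands in for whatever staging source you merge from.

```sql
-- Create a managed Delta table (Spark SQL on Databricks).
CREATE TABLE events (
  event_id   BIGINT,
  event_type STRING,
  event_ts   TIMESTAMP
) USING DELTA;

-- Upsert a batch of changes: update rows that already exist, insert the rest.
MERGE INTO events AS t
USING events_updates AS s
  ON t.event_id = s.event_id
WHEN MATCHED THEN
  UPDATE SET t.event_type = s.event_type, t.event_ts = s.event_ts
WHEN NOT MATCHED THEN
  INSERT (event_id, event_type, event_ts) VALUES (s.event_id, s.event_type, s.event_ts);
```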


Other Ignite Announcements

Denny Cherry gives us a quick roundup of Ignite announcements:

On the Azure Data Platform side of the world, we have the announcement that Azure SQL DB now supports databases up to 100 TB in size using the Hyperscale feature, which you’ll see coming on October 1st, 2018. Hyperscale is an excellent move for customers, as many were blocked by the fact that they couldn’t move their database to Azure SQL DB simply because of size, and this limit is going away in just a few short days.

Along with the legacy database platform, we have Managed Instance, which was in Public Preview. That preview status is no more: Managed Instance is being released to General Availability starting on October 1st, 2018. Managed Instance will make migrations to Azure much more accessible for many clients that need support for a SQL Server instance because of features that aren’t available in Azure SQL DB. Managed Instance will bridge this gap, giving customers basically full SQL Server functionality within a PaaS service.

In the Azure SQL DB space, we see new features for query performance optimization getting released to General Availability. These include three new features: row mode memory grant feedback, approximate query processing, and table variable deferred compilation. With minimal effort, these features can collectively optimize your memory usage and improve overall query performance.

They’re throwing a lot of stuff our way, including a less expensive version of Azure SQL Data Warehouse.
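As a quick taste of the approximate query processing feature, here is a minimal sketch; the table and column names are placeholders.

```sql
-- Approximate distinct counts trade a small amount of accuracy for far lower
-- memory usage on very large data sets.
SELECT APPROX_COUNT_DISTINCT(CustomerId) AS ApproxCustomers
FROM dbo.Orders;

-- The exact (and more expensive) equivalent, for comparison.
SELECT COUNT(DISTINCT CustomerId) AS ExactCustomers
FROM dbo.Orders;
```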
