Press "Enter" to skip to content


Redshift Architecture Performance Tips

John Ryan has a few hints to help us build speedy Redshift clusters:

The Need to Vacuum

As Redshift does not reclaim free space automatically, update and delete operations frequently lead to table growth. Equally, it’s important that, as new entries are added, the data is maintained in a sorted sequence.

The VACUUM command re-sequences data and reclaims the disk space left behind by DELETE and UPDATE operations. Although it won’t block other processes, it can be a resource-intensive operation, especially for data stored using interleaved sort keys.

It should be run periodically to ensure consistent performance and to reduce disk usage.

Some of this is good Postgres advice; some of it is good MPP advice (and serves well, for example, when dealing with Azure SQL Data Warehouse); the rest is Redshift-specific.
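To make the advice concrete, here is a minimal sketch of a scheduled maintenance job in Python, assuming the psycopg2 driver; the cluster endpoint, credentials, schema, and table name are all placeholders:

```python
# A minimal sketch of a periodic Redshift maintenance job, assuming the
# psycopg2 driver; the endpoint, credentials, and table name are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="my-cluster.example.redshift.amazonaws.com",  # hypothetical endpoint
    port=5439,
    dbname="analytics",
    user="maintenance_user",
    password="...",
)
conn.autocommit = True  # Redshift's VACUUM cannot run inside a transaction block

with conn.cursor() as cur:
    # Re-sort rows and reclaim space left behind by UPDATE/DELETE activity.
    cur.execute("VACUUM FULL my_schema.sales;")
    # Refresh planner statistics so queries benefit right away.
    cur.execute("ANALYZE my_schema.sales;")

conn.close()
```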


Databricks Delta Now Available On Azure

Cihan Biyikoglu and Singh Garewal announce the availability of Databricks Delta on Azure Databricks:

Using an innovative new table design, Delta supports both batch and streaming use cases with high query performance and strong data reliability while requiring a simpler data pipeline architecture:

Increased query performance – Able to deliver 10 to 100 times faster performance than Apache Spark™ on Parquet through the use of key enablers such as compaction, flexible indexing, multi-dimensional clustering, and data caching.

Improved data reliability – By employing ACID (“all or nothing”) transactions, schema validation / enforcement, exactly once semantics, snapshot isolation and support for UPSERTS and DELETES.

Reduced system complexity – Through the unification of batch and streaming in a common pipeline architecture – being able to operate on the same table also means a shorter time from data ingest to query result. Schema evolution provides the ability to infer schema from input data making it easier to deal with changing business needs.

The Azure version of Databricks is quickly reaching parity with the classic AWS-hosted version.
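As a rough illustration of the batch-plus-streaming pattern described above, here is a hedged PySpark sketch for a Databricks notebook (where `spark` already exists); the paths and column names are invented for the example:

```python
# A hedged sketch of batch and streaming workloads sharing one Delta table,
# written for a Databricks notebook; paths and columns are placeholders.
from pyspark.sql.functions import col

# Batch side: append cleaned records to a Delta table.
(spark.read.parquet("/mnt/raw/events")            # hypothetical source path
      .filter(col("event_type").isNotNull())
      .write.format("delta")
      .mode("append")
      .save("/mnt/delta/events"))

# Streaming side: the very same table feeds a continuous aggregation.
counts = (spark.readStream.format("delta")
               .load("/mnt/delta/events")
               .groupBy("event_type")
               .count())

(counts.writeStream.format("delta")
       .outputMode("complete")
       .option("checkpointLocation", "/mnt/delta/_chk/event_counts")
       .start("/mnt/delta/event_counts"))
```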


Other Ignite Announcements

Denny Cherry gives us a quick roundup of Ignite announcements:

On the Azure Data Platform side of the world, we have the announcement that Azure SQL DB now supports databases up to 100 TB in size using the Hyperscale feature of Azure SQL DB, which you’ll see coming on October 1st, 2018. Hyperscale is an excellent move for customers, as many were blocked from moving their databases to Azure SQL DB simply because of size; that limit is going away in just a few short days.

Along with the legacy database platform, we have Managed Instance, which was in public preview. That preview status is no more: Managed Instance reaches General Availability starting on October 1st, 2018. Managed Instance will make migrations to Azure much more accessible for clients that need a full SQL Server instance because of features that aren’t available in Azure SQL DB; it bridges this gap by giving customers essentially complete SQL Server functionality within a PaaS service.

In the Azure SQL DB space, we see new query performance optimization features reaching General Availability: row mode memory grant feedback, approximate query processing, and table variable deferred compilation. With minimal effort, these features can collectively optimize your memory usage and improve overall query performance.

They’re throwing a lot of stuff our way, including a less expensive version of Azure SQL Data Warehouse.
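Of the three query processing features above, approximate query processing is the easiest to try by hand (the other two are largely automatic engine behaviors). A minimal sketch, assuming pyodbc and a hypothetical dbo.events table:

```python
# A minimal sketch of approximate query processing via pyodbc; the
# connection string and dbo.events table are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=mydb;UID=app_user;PWD=..."
)
cur = conn.cursor()

# APPROX_COUNT_DISTINCT trades a small error margin for far less memory
# than an exact COUNT(DISTINCT ...) on large tables.
cur.execute("SELECT APPROX_COUNT_DISTINCT(user_id) FROM dbo.events;")
print(cur.fetchone()[0])

conn.close()
```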


Writing Audit Logs To Azure Event Hubs

Ronit Reger announces that Azure SQL Database auditing logs can now go to Azure Log Analytics or Azure Event Hubs:

Azure Log Analytics plays a central role in monitoring and management of your Azure environment. It enables collecting telemetry and other data from a variety of sources across Azure, and provides a query language and analytics engine for deep analysis and insights on the operation of applications and resources. For more information on the Log Analytics platform, see What is Azure Log Analytics.

With native support for saving SQL audit logs directly to Log Analytics, log data from all of your database resources can be gathered and stored in a single central location. The logs can now be analyzed using the rich analysis tools provided by the platform, which can provide deeper visibility and advanced cross-resource analytics.

In addition, SQL Server audit logs (from on-premises SQL Servers or SQL Servers on a VM) can also be collected in Log Analytics via OMS agent integration, as described in this article. Thus, you can manage and analyze all of your database audit logs, whether from the cloud or on-premises, in a single central location using the power of Azure Log Analytics.

This looks useful.
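For a sense of what the Log Analytics wiring might look like programmatically, here is a rough sketch using the azure-mgmt-monitor SDK; it is not the announcement’s exact walkthrough, every resource ID is a placeholder, and the auditing policy itself still has to be enabled on the server or database:

```python
# A rough sketch: route a database's SQL audit log category to a Log
# Analytics workspace via a diagnostic setting. Assumes the azure-identity
# and azure-mgmt-monitor packages; all resource IDs are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient
from azure.mgmt.monitor.models import DiagnosticSettingsResource, LogSettings

client = MonitorManagementClient(DefaultAzureCredential(), "<subscription-id>")

database_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<rg>"
    "/providers/Microsoft.Sql/servers/<server>/databases/<db>"
)
workspace_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<rg>"
    "/providers/Microsoft.OperationalInsights/workspaces/<workspace>"
)

client.diagnostic_settings.create_or_update(
    resource_uri=database_id,
    name="audit-to-log-analytics",
    parameters=DiagnosticSettingsResource(
        workspace_id=workspace_id,
        # SQLSecurityAuditEvents is the category SQL audit records land in.
        logs=[LogSettings(category="SQLSecurityAuditEvents", enabled=True)],
    ),
)
```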


Switching To Managed Disks In Azure

Chris Seferlis walks us through an easy method to convert unmanaged disks to managed disks in Azure:

First off, why would you want a managed disk over an unmanaged one?

  • Greater scalability due to much higher IOPS and storage limits. There’s no longer a need to add storage accounts as you add disk space, which had been a challenge for users running large virtual machines that required large amounts of storage.

  • Better availability and reliability, because disks are isolated from each other in different storage scale units.

  • Managed disks offer over 99.99% uptime, and data is always stored with three replicas.

  • More granular access control by employing role-based access control (RBAC) security. You have granular capability to assign access to various people in your organization.

Keep reading to learn how to switch.
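As a rough idea of what the switch involves (this is not necessarily Chris’s method), here is a hedged sketch using the azure-mgmt-compute SDK, with placeholder subscription, resource group, and VM names:

```python
# A hedged sketch of converting a VM's unmanaged disks to managed disks
# using azure-mgmt-compute (method names follow recent SDK versions);
# all resource names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")
rg, vm = "<resource-group>", "<vm-name>"

# Stop and release the VM, convert its disks in place, then start it again.
client.virtual_machines.begin_deallocate(rg, vm).result()
client.virtual_machines.begin_convert_to_managed_disks(rg, vm).result()
client.virtual_machines.begin_start(rg, vm).result()
```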


Against Multi-Cloud Models

Tyler Treat argues against companies looking at multi-cloud models:

A multi-cloud strategy looks great on paper, but it creates unneeded constraints and results in a wild-goose chase. For most, it ends up being a distraction, creating more problems than it solves and costing more money than it’s worth. I’m going to caveat that claim in just a bit because it’s a bold blanket statement, but bear with me. For now, just know that when I say “multi-cloud,” I’m referring to the idea of running the same services across vendors or designing applications in a way that allows them to move between providers effortlessly. I’m not speaking to the notion of leveraging the best parts of each cloud provider or using higher-level, value-added services across vendors.

Multi-cloud rears its head for a number of reasons, but they can largely be grouped into the following points: disaster recovery (DR), vendor lock-in, and pricing. I’m going to speak to each of these and then discuss where multi-cloud actually does come into play.

It’s an interesting article.  I think that Tyler is right, but that companies should be capable of switching between cloud providers or even creating hybrid approaches should the need arise.


Limiting Azure Administrator Data Access

Melissa Coates gives us a look at one aspect of Azure security:

Recently a customer expressed concern that an owner of an Azure resource group automatically gains access to the data within the services contained in the resource group. In this case, the customer was specifically referring to data in Azure Data Lake Storage Gen 1 but this concept applies to Azure Storage and other data-oriented services in Azure as well. The customer’s comment prompted me to look into available alternatives. This is by no means a detailed security post…rather, I’m trying to share a few nuggets of what I learned.

Worth the read. Much of the latest regulatory push seems to be aimed at preventing high-access insiders (like DBAs) from viewing sensitive information, and this post aligns with that.


Forcing MAXDOP In Azure SQL DB

Arun Sirpal shows us that you can change MAXDOP in Azure SQL Database:

In this quick post I will show you my parallel plan and how I use MAXDOP = 1 to suppress parallel plan generation so the operation will be executed serially. (Disclaimer: I am not saying this is the right thing to do, merely using it as an example of tweaking this setting; to be honest, in 10 years I have set MAXDOP = 1 twice.) I executed a query in Azure. You can see the classic operators such as gather streams and repartition streams.

This change will affect all queries hitting that database, so it’s a coarser tool than changing cost threshold for parallelism (not allowed) or setting MAXDOP per-query (allowed).
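Both knobs are easy to demonstrate. A minimal sketch via pyodbc, with a placeholder connection string and a hypothetical dbo.big_table:

```python
# A minimal sketch of the database-scoped setting and the per-query hint;
# the connection string and dbo.big_table are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=mydb;UID=admin_user;PWD=..."
)
conn.autocommit = True
cur = conn.cursor()

# Database-wide: suppress parallel plans for every query on this database.
cur.execute("ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 1;")

# Per-query: a MAXDOP hint overrides the database-scoped setting.
cur.execute("SELECT COUNT(*) FROM dbo.big_table OPTION (MAXDOP 1);")
print(cur.fetchone()[0])

conn.close()
```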


ElasticMapReduce And RStudio

Tanzir Musabbir demonstrates how to set up Amazon ElasticMapReduce to include an RStudio edge node:

RStudio Server provides a browser-based interface for R and is a popular tool among data scientists, who use Apache Spark clusters running on Amazon EMR to perform distributed training. In a previous blog post, the author showed how you can install RStudio Server on an Amazon EMR cluster. However, in certain scenarios you might want to install it on a standalone Amazon EC2 instance and connect to a remote Amazon EMR cluster. Benefits of running RStudio on EC2 include the following:

  • By running RStudio Server on an EC2 instance, you keep your scientific models and model artifacts on that instance. You might have to relaunch your EMR cluster to meet changing application requirements; running RStudio Server separately gives you more flexibility, so you don’t depend entirely on an Amazon EMR cluster.
  • Installing RStudio on the master node of Amazon EMR means sharing resources with the applications running on that node. By running RStudio on a standalone Amazon EC2 instance, you can use resources as you need them without sharing them with other applications.
  • You might have multiple Amazon EMR clusters in your environment. With RStudio on an edge node, you have the flexibility to connect to any EMR cluster in your environment.

There is one major difference between running RStudio Server on an Amazon EMR cluster and running it on a standalone Amazon EC2 instance. In the latter case, the instance needs to be configured as an Amazon EMR client (or edge node). By doing so, you can submit Apache Spark jobs and other Hadoop-based jobs from an instance other than the EMR master node.

Click through for detailed, step-by-step instructions on how to do this.
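The article’s walkthrough is much more involved, but as a hedged sketch of one early step, here is how an edge node might discover the master node to point its client configurations at, using boto3 (the region is an assumption, and at least one running cluster is presumed):

```python
# Not the article's full walkthrough: just a hedged sketch of locating a
# running EMR cluster's master node, which the edge instance's Spark and
# Hadoop client configs must reference.
import boto3

emr = boto3.client("emr", region_name="us-east-1")  # region is an assumption

# Find running/waiting clusters and grab the first one's master DNS name.
clusters = emr.list_clusters(ClusterStates=["RUNNING", "WAITING"])["Clusters"]
cluster_id = clusters[0]["Id"]
master_dns = emr.describe_cluster(ClusterId=cluster_id)["Cluster"]["MasterPublicDnsName"]

print(f"Point the edge node's yarn-site/core-site configs at: {master_dns}")
```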


Azure Data Factory V2 Or SSIS?

Merrill Aldrich explains the differences between Azure Data Factory V2 and SQL Server Integration Services:

The next consideration is a bit more involved if you are new to data integration. Both of these tools excel at transporting data from place to place, but they have important differences in terms of what you can do to modify the data in transit. As a matter of emphasis, ADF has more features geared toward moving the data than performing any complex transformation along the way. SSIS, on the other hand, was built with a large library of transformations that you can chain together to make elaborate data flows including lookups, matching, splitting data, and more.

The tools also overlap quite a lot. In projects this seems to lead to the question of whether you’ll transform the data “in flight” using Extract Transform Load (ETL), or instead move the data to a destination where it’ll be transformed using Extract Load Transform (ELT).

These are not “pretty much the same thing” and Merrill does a good job of explaining what those differences in design mean for the products.
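To see the ETL/ELT distinction in miniature, here is a toy Python sketch; the DSN, tables, and deliberately trivial transform are all invented for illustration:

```python
# A toy sketch contrasting ETL and ELT via pyodbc; everything here is a
# placeholder invented for the example.
import pyodbc

conn = pyodbc.connect("DSN=warehouse")  # placeholder connection
cur = conn.cursor()

# ETL: transform in flight, load only the finished shape (SSIS's strength).
for raw_name, raw_amount in [("  Alice ", "12.50"), ("bob", "8")]:
    cur.execute(
        "INSERT INTO dbo.sales_clean (customer, amount) VALUES (?, ?)",
        raw_name.strip().title(),      # the transform happens here, mid-flight
        float(raw_amount),
    )

# ELT: land the data untouched, then transform set-based at the destination
# (the pattern ADF's copy-then-transform pipelines lean toward).
cur.execute("INSERT INTO dbo.sales_raw (customer, amount) VALUES ('  bob ', '8')")
cur.execute("""
    INSERT INTO dbo.sales_clean (customer, amount)
    SELECT LTRIM(RTRIM(customer)), CAST(amount AS decimal(10, 2))
    FROM dbo.sales_raw
""")
conn.commit()
```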
