Press "Enter" to skip to content

Day: October 9, 2020

Tips for Optimizing Dedicated SQL Pools in Synapse Analytics

Tsuyoshi Matsuzaki shares some tips for improving query performance when using Dedicated SQL Pools in Azure Synapse Analytics:

By the BROADCAST_MOVE operation above, the rows in the dimension_City table are all copied into a temporary table (called TEMP_ID_3) on every distributed database.
Since dimension_City is small, all of its rows are duplicated across every database before joining. This time we are joining only two tables; however, if many tables need to be joined, this data movement becomes a large overhead for query execution.

The short version is: replicate smaller dimensions, and align distribution keys for large tables which get joined together. Both of these minimize the chances of the engine needing to shuffle data between nodes. These sorts of things can make a huge difference when working with Dedicated SQL Pools, cutting query time down by an order of magnitude in some extreme cases.
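
To make those two patterns concrete, here is a minimal sketch of the corresponding DDL, with table names from the quoted example's sample schema and hypothetical columns: the small dimension is replicated to every compute node, and the large table is hash-distributed on the join key so the join needs no data movement.

    -- Small dimension: a full copy on every compute node, so joins
    -- against it never trigger a BROADCAST_MOVE.
    CREATE TABLE dbo.dimension_City
    (
        CityKey  INT          NOT NULL,
        CityName NVARCHAR(50) NOT NULL
    )
    WITH (DISTRIBUTION = REPLICATE, CLUSTERED COLUMNSTORE INDEX);

    -- Large fact table: hash-distributed on the join key, so rows that
    -- join to each other land on the same distribution (no shuffle).
    CREATE TABLE dbo.fact_Sale
    (
        CityKey INT   NOT NULL,
        Amount  MONEY NOT NULL
    )
    WITH (DISTRIBUTION = HASH(CityKey), CLUSTERED COLUMNSTORE INDEX);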


Cosmos DB Serverless

Hasan Savran takes us through a new approach to Cosmos DB pricing:

There used to be two ways for Azure Cosmos DB to bill you for the services it provides: manual provisioned throughput and autoscale provisioned throughput. Provisioned throughput is the number of request units available for your applications to use per second. For example, let's say you picked 400 Request Units. That means your application's budget is 400 request units per second. Depending on your needs, you can scale up or down. You can set this manually and scale up and down manually if you like. That will be the cheapest option, at $0.008 per hour for each 100 Request Units. Your other option is autoscale, where Cosmos DB scales up and down automatically; all you need to tell Cosmos DB is the highest number it can scale up to. This option costs 50% more than the first one.

These are great options, but they can still be expensive for what you need.
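
To put rough numbers on those rates: at $0.008 per hour per 100 RU/s, manually provisioning 400 RU/s costs about 4 × $0.008 = $0.032 per hour, or roughly $23 per month, whether or not you use the throughput. Autoscale with the same 400 RU/s ceiling bills at 50% more per RU/s, so it only comes out cheaper if your workload idles well below the ceiling most of the time.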

Read on for what we know about Serverless pricing and which APIs currently support the Serverless model.


Tips for Monitoring Kubernetes

Michael Sorens walks us through some tips for monitoring Kubernetes:

The world begins, of course, with kubectl, the command-line interface to Kubernetes. The commands you start using early on help you examine your Kubernetes resources.

kubectl get ...

With that command, you can examine your deployments, which roll out your replica sets, which create pods. Then you need services, which are logical sets of pods that provide an interface for external access. What can you examine with kubectl get?

Use kubectl api-resources to see the list. At the time of this writing, there are 66 different resource types! That number will likely only grow over time. 
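
For instance, a few of the variations you will reach for most often (namespace and pod names here are only placeholders):

    kubectl get pods -n my-namespace    # pods in one namespace
    kubectl get deployments,services    # several resource types at once
    kubectl get pod my-pod -o yaml      # the full manifest of one object
    kubectl api-resources               # every resource type the cluster supports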

Read on for more, including the setup of the Kubernetes UI and third-party tooling.


Oracle Error ORA-28353: Failed to Open Wallet

Rene Antunez diagnoses an Oracle error:

I noticed the original error after applying the October 2018 bundle patch (BP) for 11.2.0.4. While I realize most clients are no longer on 11.2.0.4, this information remains valid for anyone upgrading from 11.2 to 12, 18, or 19c.

I had been doing several tests on my Spanish RAC (Real Application Cluster) Attack for 12.2. The goal was to patch my client to the October 2018 PSU, obtaining enough security leverage to avoid further patching of their database before their DB (database) upgrade to 18c. I created RAC VMs to enable testing. I also set up my environment to match the client's, which had TDE with FIPS 140 enabled (I will provide more details on this later in the post).

While the patching was successful, the problem arose after applying the patch. I was unable to open the database despite having the correct password for the encryption key.
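
For context, checking and opening the wallet in 12c and later looks something like the following (the keystore password is a placeholder):

    SQL> SELECT wrl_type, status FROM v$encryption_wallet;

    SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN
         IDENTIFIED BY "keystore_password";

In the scenario Rene describes, that open fails with ORA-28353 even though the password is correct.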

When I first read the title, I thought it was a joke making fun of Oracle’s licensing practices.


Tips for Moving to a Multi-Tenant Setup

Adrian Hills continues a series on multi-tenant SQL Server:

What you knew a few years ago might differ significantly from the reality today, whether you started off with a single-tenant system that you pivoted quickly to support multiple tenants, or you envisioned tens of tenants and ended up with thousands. Whatever that reality is, experiencing pain points around the three considerations I covered in part 1 of this series (security, maintainability, and scalability) can lead to a need to change the multi-tenancy approach you're using. Often, the biggest driver for change is performance and scalability, and it typically involves moving from a less-isolated multi-tenancy approach (single database) to a more-isolated approach that supports the scaling out of workloads (multiple databases).

In the steps below, I’ll cover the general path you can follow to successfully make an architectural change like this to the database layer. As a provider of database performance monitoring and DataOps tools, SentryOne offers some tools that can come in handy along the way, so I’ll call those out as we go.
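
As a simplified sketch of the two ends of that spectrum (schema and names are hypothetical): in the less-isolated single-database model, every table carries a tenant key that every query must filter on, while in the database-per-tenant model, isolation comes from the connection itself.

    -- Single shared database: one Orders table holds all tenants,
    -- so every query must filter by tenant
    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE TenantId = @TenantId;

    -- Database per tenant: connect to that tenant's own database
    -- (e.g. TenantDb_Contoso) and the tenant filter disappears
    SELECT OrderId, OrderDate
    FROM dbo.Orders;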

Click through for guidance.


SSIS Code Promotion

Andy Leonard takes us through the process of migrating code from development through to production:

Developers need to be able to develop software that will execute enterprise operations.
Production is solely managed by operations personnel. Allow very little, if any, developer access to Production.
Before deploying to Production, operations personnel need a Pre-Production environment they can use to test the deployment and performance after the deployment. No one wants operations personnel – or anyone, really – deploying a process to Production without a practice run.
Similarly, developers need to move their code from the Development tier (aka the “works on my machine” tier) to another tier – such as Test – so they can identify hard-coded defaults that should be parameters.
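
One catalog-level mechanism that supports this kind of tier separation is SSIS Catalog environments, which let the same deployed project pick up different values per tier. A minimal sketch, with made-up folder, environment, and variable names:

    -- Create an environment for the Test tier and add a variable to it
    EXEC SSISDB.catalog.create_environment
         @folder_name = N'ETL',
         @environment_name = N'Test';

    EXEC SSISDB.catalog.create_environment_variable
         @folder_name = N'ETL',
         @environment_name = N'Test',
         @variable_name = N'SourceServer',
         @data_type = N'String',
         @sensitive = 0,
         @value = N'TestSqlServer01',
         @description = N'Source server for the Test tier';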

Read on for some tips from Andy, including where the SSIS Catalog Compare product can fit into this.


With Query Store, Alter instead of Drop and Create

Erin Stellato has a recommendation when running Query Store:

When I talk about Plan Forcing I always discuss how users should ALTER procedures when using Query Store, and not use DROP and CREATE. This is valid beyond Plan Forcing cases; it’s a best practice I recommend however you are using Query Store. Every query stored in Query Store has an object_id associated with it, which ties it back to its object (stored procedure, function, etc.). This is critical not just for plan forcing, but also when you want to look at historical performance for a query after a change to the object.
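
A minimal illustration of the difference, using a hypothetical procedure name: ALTER preserves the procedure's object_id, while DROP and CREATE assigns a new one, detaching the history Query Store captured under the old id.

    -- Keeps the existing object_id, so Query Store history stays attached
    ALTER PROCEDURE dbo.GetOrders
    AS
    SELECT OrderId, OrderDate FROM dbo.Orders;
    GO

    -- Query Store links each captured query back to its containing object
    SELECT q.query_id, q.object_id, OBJECT_NAME(q.object_id) AS object_name
    FROM sys.query_store_query AS q
    WHERE q.object_id = OBJECT_ID(N'dbo.GetOrders');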

Read on for a demonstration of why this is important.
