Paul White unravels the mysteries of sql_handle:

This article describes the structure of a sql_handle and shows how the batch text hash component is calculated.

Read on to learn more.
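As a quick illustration of where a sql_handle shows up day to day (this sketch is mine, not from Paul's article), sys.dm_exec_sql_text accepts a sql_handle and returns the batch text it identifies:

    -- Resolve the sql_handle of each running request back to its batch text.
    SELECT
        r.session_id,
        r.sql_handle,
        t.text AS batch_text
    FROM sys.dm_exec_requests AS r
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
    WHERE r.session_id <> @@SPID;  -- exclude this query itself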
Comments closed
I used to think the plan cache was so cool.

– You can find queries that aren’t good there
– Plans are full of details (and XML)
– Supporting DMVs give you extra insights about resource usage

But most of the time now, I’m totally frustrated with it.
It clears out a lot, plans aren’t there for some queries, and the plans that are there can be very misleading.
Can you really tell someone what their worst performing queries are when everything in there is from the last 15 minutes?
No.
Read on for what’s nice about Query Store, as well as a few fixes that need to be in place before it’s really useful. I’ve used Query Store in big environments to good effect (though our DBAs had to rewrite the cleanup processes because the built-in ones are bad), and I’ve had to turn it off in medium-sized environments running 2016 because it was harming performance. It’s a great concept and a reasonable implementation with a few too many sharp edges.
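If you haven’t tried it yet, here is a minimal sketch of enabling Query Store and pulling the top CPU consumers from its catalog views (the database name is a placeholder):

    -- Enable Query Store on a database (name is a placeholder).
    ALTER DATABASE [YourDatabase]
    SET QUERY_STORE = ON (OPERATION_MODE = READ_WRITE);

    -- Top 10 queries by total CPU across the captured intervals.
    SELECT TOP (10)
        qt.query_sql_text,
        SUM(rs.count_executions) AS executions,
        SUM(rs.avg_cpu_time * rs.count_executions) AS total_cpu_time_us
    FROM sys.query_store_runtime_stats AS rs
    JOIN sys.query_store_plan AS p ON p.plan_id = rs.plan_id
    JOIN sys.query_store_query AS q ON q.query_id = p.query_id
    JOIN sys.query_store_query_text AS qt ON qt.query_text_id = q.query_text_id
    GROUP BY qt.query_sql_text
    ORDER BY total_cpu_time_us DESC;

Unlike the plan cache, this history survives restarts and plan evictions, which is exactly the frustration described above.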
Comments closed

Matthew Roche clues us in on what’s coming for Power BI:
The Power BI team at Microsoft publishes a “release plan,” which is essentially the public product roadmap. Anyone can use it to understand what new capabilities and improvements are planned, and when they’re expected to be released.
One challenge with the official release plan comes from the fact that it is a set of online documents, and that for each “release wave” there is a new set of docs – it’s not always clear where to look for the latest information on a given feature.
But thanks to Alex Powers, this is a lot clearer now. Click through to learn how.
Comments closed

With the BROADCAST_MOVE operation above, the rows in the dimension_City table are all copied into a temporary table (called TEMP_ID_3) on every distributed database. (See below.)

Since dimension_City is small, all of its rows are duplicated in every database before joining. This time we join only two tables; however, if many tables need to be joined, this data movement becomes a large overhead for query execution.
The short version is, replicate smaller dimensions and align distribution keys for large tables that get joined together. Both of these minimize the chances of the engine needing to shuffle data between nodes. These sorts of things can make a huge difference when working with Dedicated SQL Pools, cutting query time down by an order of magnitude in some extreme cases.
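In a dedicated SQL pool, that advice translates directly into the table DDL. A rough sketch (table and column names are illustrative):

    -- Small dimension: every compute node gets a full copy,
    -- so joins against it need no data movement.
    CREATE TABLE dbo.dimension_City
    (
        CityKey  int          NOT NULL,
        CityName nvarchar(50) NOT NULL
    )
    WITH (DISTRIBUTION = REPLICATE);

    -- Large fact table: hash-distribute on the join key,
    -- so matching rows land on the same distribution.
    CREATE TABLE dbo.fact_Sale
    (
        CityKey  int   NOT NULL,
        SaleDate date  NOT NULL,
        Amount   money NOT NULL
    )
    WITH (DISTRIBUTION = HASH(CityKey), CLUSTERED COLUMNSTORE INDEX);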
Comments closed

Hasan Savran takes us through a new approach to Cosmos DB pricing:
There used to be two ways for Azure Cosmos DB to bill you for the services it provides: manual provisioned throughput and autoscale provisioned throughput. Provisioned throughput is the number of request units available for your applications to use per second. For example, let’s say you picked 400 Request Units; that means your application’s budget is 400 request units per second. Depending on your needs, you can scale up or down, and you can do so manually if you like. That will be the cheapest option, at $0.008 per hour for each 100 Request Units. Your other option is autoscale, where Cosmos DB scales up and down automatically; all you need to tell Cosmos DB is the highest number it can scale up to. This option costs 50% more than the first.
These are great options, but they can still be expensive for what you need.
Read on for what we know about Serverless pricing and which APIs currently support the Serverless model.
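For reference, serverless is a property of the account, chosen at creation time. A minimal sketch with the Azure CLI (account and resource group names are placeholders), assuming the EnableServerless capability is available for your chosen API:

    # Create a Cosmos DB account in serverless mode (names are placeholders).
    # An existing provisioned-throughput account cannot be converted.
    az cosmosdb create \
        --resource-group my-resource-group \
        --name my-serverless-account \
        --capabilities EnableServerless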
Comments closed

Michael Sorens walks us through some tips for monitoring Kubernetes:
The world begins, of course, with kubectl, the command-line interface to Kubernetes. The commands you start using early on help you examine your Kubernetes resources.
kubectl get . . .
With that command, you can examine your deployments, which roll out your replica sets, which create pods. Then you need services, which are logical sets of pods that provide an interface for external access. What can you examine with kubectl get? Use kubectl api-resources to see the list. At the time of this writing, there are 66 different resource types! That number will likely only grow over time.
Read on for more, including the setup of the Kubernetes UI and third-party tooling.
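A few of the basics the quote refers to, as a quick sketch (the namespaces and resources listed will be whatever your cluster actually runs):

    # List every resource type this cluster's API server knows about
    kubectl api-resources

    # Walk the workload hierarchy: deployments -> replica sets -> pods
    kubectl get deployments --all-namespaces
    kubectl get replicasets --all-namespaces
    kubectl get pods --all-namespaces

    # Services expose logical sets of pods for external access
    kubectl get services --all-namespaces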
Comments closed

Rene Antunez diagnoses an Oracle error:
I noticed the original error after applying the October 2018 bundle patch (BP) for 11.2.0.4. While I realize most clients are no longer in 11.2.0.4, this information remains valid for anyone upgrading from 11.2 to 12, 18 or 19c.
I had been doing several tests on my Spanish RAC (Real Application Cluster) Attack for 12.2. The goal was to patch my client to the October 2018 PSU, obtaining enough security leverage to avoid patching their database again and instead do their DB (database) upgrade to 18c. I created RAC VMs to enable testing. I also set up my environment to match the client’s, which had TDE with FIPS 140 enabled (I will provide more details on this later in the post).
While the patching was successful, the problem arose after applying the patch. I was unable to open the database despite having the correct password for the encryption key.
When I first read the title, I thought it was a joke making fun of Oracle’s licensing practices.
Comments closed

Adrian Hills continues a series on multi-tenant SQL Server:
What you knew a few years ago might differ significantly from the reality today, whether you started off with a single tenant system that you pivoted quickly to support multiple tenants, or you envisioned 10s of tenants and ended up with 1000s. Whatever that reality is, when you experience pain points around the 3 considerations I covered in part 1 of this series (security, maintainability, and scalability), it can lead to a need to change the multi-tenancy approach you’re using. Often, the biggest driver for change is around performance and scalability and typically tends to be related to a need to move from a less-isolated multi-tenancy approach (single database) to a more-isolated approach that supports the scaling out of workloads (multiple databases).
In the steps below, I’ll cover the general path you can follow to successfully make an architectural change like this to the database layer. As a provider of database performance monitoring and DataOps tools, SentryOne offers some tools that can come in handy along the way, so I’ll call those out as we go.
Click through for guidance.
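To make the two ends of that spectrum concrete, here is a minimal sketch (schema and names are mine, not from Adrian’s series):

    -- Less-isolated: one shared database, every tenant's rows keyed by TenantID.
    -- Cheap to operate, but tenants share resources and scaling out is hard.
    CREATE TABLE dbo.Orders
    (
        TenantID  int  NOT NULL,
        OrderID   int  NOT NULL,
        OrderDate date NOT NULL,
        CONSTRAINT PK_Orders PRIMARY KEY (TenantID, OrderID)
    );

    -- More-isolated: one database per tenant with an identical schema in each,
    -- so individual tenant workloads can move to other servers independently.
    CREATE DATABASE Tenant0001;
    CREATE DATABASE Tenant0002;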
Comments closed

Andy Leonard takes us through the process of migrating code from development through to production:
Developers need to be able to develop software that will execute enterprise operations.
Production is solely managed by operations personnel. Allow very little, if any, developer access to Production.
Before deploying to Production, operations personnel need a Pre-Production environment they can use to test the deployment and performance after the deployment. No one wants operations personnel – or anyone, really – deploying a process to Production without a practice run.
Similarly, developers need to move their code from the Development tier (aka the “works on my machine” tier) to another tier – such as Test – so they can identify hard-coded defaults that should be parameters.
Read on for some tips from Andy, including where the SSIS Catalog Compare product can fit into this.
Comments closed

Erin Stellato has a recommendation when running Query Store:
When I talk about Plan Forcing I always discuss how users should ALTER procedures when using Query Store, and not use DROP and CREATE. This is valid beyond Plan Forcing cases; it’s a best practice I recommend however you are using Query Store. Every query stored in Query Store has an object_id associated with it, which ties it back to its object (stored procedure, function, etc.). This is critical not just for plan forcing, but also when you want to look at historical performance for a query after a change to the object.
Read on for a demonstration of why this is important.
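A minimal sketch of the difference (procedure and table names are placeholders): ALTER modifies the module in place and preserves its object_id, while DROP and CREATE produces a new object_id and orphans the history Query Store tied to the old one.

    -- Preferred: ALTER keeps the same object_id, so Query Store history,
    -- forced plans, and reports stay linked to the procedure.
    ALTER PROCEDURE dbo.usp_GetOrders
        @CustomerID int
    AS
        SELECT OrderID, OrderDate
        FROM dbo.Orders
        WHERE CustomerID = @CustomerID;
    GO

    -- Avoid: dropping and re-creating assigns a brand-new object_id,
    -- breaking the link to the queries already captured in Query Store.
    DROP PROCEDURE dbo.usp_GetOrders;
    GO
    CREATE PROCEDURE dbo.usp_GetOrders
        @CustomerID int
    AS
        SELECT OrderID, OrderDate
        FROM dbo.Orders
        WHERE CustomerID = @CustomerID;
    GO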
Comments closed