As mentioned in the previous article, SQL Server 2019 cardinality estimations for a table variable are based on actual table variable row counts. Therefore, in SQL Server 2019, we should expect better estimations and better plans for queries that use table variables.
Which queries will benefit from this improvement? Generally, queries that use table variables holding a lot of rows and that have not yet been tuned. For table variables with only a few rows, there will not be significant changes; you should expect the same execution plan and nearly the same execution characteristics.
Queries whose execution was slow due to underestimation of table variables usually implement logical joins using the Nested Loops physical operator where Hash or Merge Join operators would be more appropriate. In addition, underestimation of table variables participating in multiple joins can lead to insufficient memory grants, and thus data spilling to tempdb.
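As a minimal sketch of the scenario (table and column names are hypothetical), consider a table variable loaded with thousands of rows and then joined:

```sql
-- Hypothetical illustration: a table variable loaded with many rows.
-- Before SQL Server 2019, the optimizer assumed 1 row for @Orders, which
-- tends to produce Nested Loops joins and undersized memory grants.
-- With SQL Server 2019 deferred compilation, the plan is compiled using
-- the actual row count, so Hash or Merge joins become more likely.
DECLARE @Orders TABLE (OrderID int PRIMARY KEY, SchemaID int);

INSERT INTO @Orders (OrderID, SchemaID)
SELECT object_id, schema_id
FROM sys.all_objects;   -- typically thousands of rows

SELECT o.OrderID, s.name
FROM @Orders AS o
JOIN sys.schemas AS s
    ON s.schema_id = o.SchemaID;
```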
Click through for the example. The next post in the series will be a case where it doesn’t work very well.
What does any self-respecting application need? Metrics! We need to know how many messages have been processed, when the last message was processed and so on.
The simplest option for gathering these metrics comes from within KSQL itself, using the same `DESCRIBE EXTENDED` command that we saw before:

```
ksql> DESCRIBE EXTENDED GOOD_RATINGS;
[...]
Local runtime statistics
------------------------
messages-per-sec:         1.10
total-messages:           2898
last-message:             9/17/18 1:48:47 PM UTC
failed-messages:          0
failed-messages-per-sec:  0
last-failed:              n/a
(Statistics of the local KSQL server interaction with the Kafka topic GOOD_RATINGS)
ksql>
```
You can get more details, including explain plans, from this. There are also external tools, which Robin demonstrates, that let you track the streams over time.
You can see the estimated and actual number of rows right there on the query plan, just like in live query plans! You no longer have to waste hours of your life hovering over different parts of the query plan in order to see where the estimated row counts veer off from the actual row counts.
This doesn’t require SQL Server 2019, either.
Read on for Brent’s thoughts on the matter.
- SSMS is based on the new VS 2017 Isolated Shell. This means a modern shell that unlocks all the accessibility features of both SSMS and VS 2017.
- Smaller download size (~400 MB), less than half that of SSMS 17.x.
- SSMS can be installed in a custom folder. Currently this is only available from the command-line setup; pass the extra argument SSMSInstallRoot=C:\MyFolder to SSMS-Setup-ENU.exe.
- High DPI enabled by default.
- Better support for multiple monitors, ensuring dialogs and windows pop up on the expected monitor.
- Isolation from the SQL engine. SSMS no longer shares components with the SQL engine; more isolation allows for more frequent updates.
- Package IDs are no longer needed to develop SSMS extensions.
- 18.0 will install alongside 17.x, so you can have both at the same time.
The main advantages of Batch Mode are:
- Algorithms optimized for the multi-core modern CPUs;
- Better CPU cache utilization and increased memory throughput;
- Reduced number of CPU instructions per processed row.
All these features make Batch Mode much faster than Row Mode (typically an order of magnitude, 10x-100x) for analytical queries against CS indexes. One major condition for Batch Mode is the presence of a CS index: if you don't have a CS index on a table involved in a query, you won't get Batch Mode.
However, some analytical queries may benefit from Batch Mode where there is no CS index, or where a CS index cannot be created due to certain limitations.
There are a few tricks that allow you to enable Batch Mode on a rowstore table, for example with a dummy filtered CS index (see this post from Itzik Ben-Gan), but SQL Server 2019 may use Batch Mode on rowstore without any extra effort on your side.
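The dummy filtered CS index trick can be sketched roughly like this (the table and column names are hypothetical; the idea follows Itzik Ben-Gan's approach):

```sql
-- A dummy filtered columnstore index on a rowstore table (SQL Server 2016+).
-- The contradictory filter guarantees the index never contains any rows,
-- so it costs almost nothing to maintain, but its mere presence makes the
-- optimizer consider Batch Mode for queries against dbo.Sales.
CREATE NONCLUSTERED COLUMNSTORE INDEX ix_cs_dummy
ON dbo.Sales (SaleID)
WHERE SaleID = -1 AND SaleID = -2;   -- can never both be true
```

In SQL Server 2019, with Batch Mode on Rowstore, this workaround may no longer be necessary for eligible queries.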
Dmitry dives into the debugger and teases out the specific circumstances which can help get a query considered for rowstore. If you want a deep dive into what’s currently available, this is your post.
Up until now, Azure Container Instances had only one option for allowing us to connect: assigning a public IP address that was directly exposed to the internet.
That's not really great, as exposing SQL Server on port 1433 to the internet is generally a bad idea.
Now I know there's a lot of debate about whether or not you should change the port that SQL Server is listening on to prevent this from happening. My personal opinion is that if someone wants to get into your SQL instance, changing the port isn't going to slow them down much. However, a port change will stop opportunistic hacks (such as the above).
But now we have another option: the ability to deploy an ACI within a virtual network in Azure! So let's run through how to deploy.
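As a rough sketch of what such a deployment looks like with the Azure CLI (all resource names here are hypothetical, and the VNet and subnet are assumed to exist already):

```
# Deploy a SQL Server container group into an existing VNet/subnet,
# so it gets a private IP instead of a public one.
az container create \
    --resource-group rg-sql \
    --name sql-aci \
    --image mcr.microsoft.com/mssql/server:2019-latest \
    --vnet sql-vnet \
    --subnet sql-subnet \
    --environment-variables ACCEPT_EULA=Y \
    --cpu 2 --memory 4
```

With this in place, the instance is reachable on port 1433 only from within the virtual network (or anything peered to it), not from the public internet.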
Click through for those instructions.