The execution plan erroneously computes separate ANY aggregates for the c3 columns, ignoring nulls. Each aggregate independently returns the first non-null value it encounters, giving a result where the values for c3 come from different source rows. This is not what the original SQL query specification requested.
The same wrong result can be produced with or without the clustered index by adding an OPTION (HASH GROUP) hint to produce a plan with an Eager Hash Aggregate instead of a Stream Aggregate.
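To picture the failure mode, here is a hypothetical illustration (the table, values, and grouping are invented for this sketch; it is not Paul's actual repro):

CREATE TABLE #t (grp int NOT NULL, c2 varchar(10) NULL, c3 varchar(10) NULL);
INSERT INTO #t (grp, c2, c3) VALUES (1, NULL, 'a'), (1, 'x', NULL);
-- If a plan computes an independent, null-skipping ANY aggregate per column
-- while grouping by grp, it can return (grp = 1, c2 = 'x', c3 = 'a'):
-- a combination of values that exists in no single source row.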
Click through for the scenarios. Paul has also reported the second scenario as a bug.
The order counts are now correct, but the total freight values are not. Can you spot the new bug?
The new bug is more elusive because it manifests itself only when the same customer has at least one case where multiple orders happen to have the exact same freight values. In such a case, you are now taking the freight into account only once per customer, and not once per order as you should.
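The bug pattern is a DISTINCT inside an aggregate. A minimal sketch, assuming Northwind-style Sales.Orders and Sales.OrderDetails tables rather than the article's exact query:

SELECT O.custid,
       COUNT(DISTINCT O.orderid) AS numorders,   -- correct: orderid is unique
       SUM(DISTINCT O.freight)   AS totalfreight -- bug: two orders with equal
FROM Sales.Orders AS O                           -- freight are added only once
  INNER JOIN Sales.OrderDetails AS OD
    ON OD.orderid = O.orderid
GROUP BY O.custid;

The DISTINCT inside the COUNT correctly counters the row multiplication from the join, but the DISTINCT inside the SUM also collapses distinct orders that happen to share a freight value.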
Click through to avoid accidentally introducing bugs in your T-SQL code.
I haven’t written much about them yet (key emphasis there …) but AGs now being supported for containers in SQL Server 2019 is a big deal. Recently, SQL Server 2019 CTP 3.0 was released, but there’s a slight problem: if you try to deploy an AG with Kubernetes, you may see the following errors when trying to deploy the pods with the YAML that contains their definition. The services (i.e. instances of SQL Server) get created, but the pods do not.
Read on for the root cause and the solution.
After Spectre and Meltdown a few months back (which I cover in this blog post from January 4), another round of processor issues has hit the chipmaker. This one is for MDS (also known as ZombieLoad) and comprises the following security issues: CVE-2019-11091, CVE-2018-12126, CVE-2018-12127, and CVE-2018-12130. Whew! Fun fact: CVE stands for “Common Vulnerabilities and Exposures”.
As of now, this is only known to be an Intel issue, not an AMD one; that is an important distinction here. The official Intel page on this issue can be found at this link. The issue does not exist in select 8th and 9th generation Intel Core processors, nor in the 2nd generation Xeon Scalable processor family (read: the latest stuff).
Be sure to read through all of this. Most of the notes cover non-SQL Server items which have an impact, rather than bugs in SQL Server itself, but that doesn’t make patching any less important.
Don’t leave this trace flag enabled.
There’s at least one bug with it as of today on SQL Server 2017 CU13: table variables will throw errors saying their contents are being truncated even when no data is going into them, as Andrew Pruski reports.
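A sketch of the reported pattern, assuming the flag in question is trace flag 460 (the improved truncation-message flag); on an affected build the insert reportedly fails even though the value fits:

DBCC TRACEON (460, -1);  -- assumption: the flag in question is TF 460

DECLARE @t TABLE (c1 varchar(10));
INSERT INTO @t (c1) VALUES ('a');  -- reportedly raises the truncation error on
                                   -- 2017 CU13 even though 'a' fits the column

DBCC TRACEOFF (460, -1);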
Special shout out to three of my co-workers on finding that issue. I had nothing to do with it but will take credit nonetheless.
Azure Data Studio – open an issue in the GitHub repo. As you open an issue, GitHub helps by searching existing issues while you type, so you’ll find out if a similar issue already exists.
Click through for all of the links. I personally just yell skyward in the hopes that they hear me and fix my problems. It doesn’t work very often so I don’t recommend it as a strategy.
When I started investigating, the error was known only as an access violation, preventing some operations related to data cleansing or fact table versioning.
It occurred deep within a series of stored procedures. The execution environment included cross-database DELETE statements, cross-database synonyms, lots of SELECT statements against system views, scalar UDFs, and lots and lots of dynamic SQL.
And… I don’t have access to the four systems where the access violation occurred.
I was able to have some investigation performed on those systems – we learned that by disabling ‘auto stats update’ for the duration of the sensitive operations, the error was avoided. We also learned that reverting a system from SQL Server 2016 SP2 CU4 to SP2 CU2 avoided the errors. On those systems, reverting to SP2 CU2 or temporarily disabling ‘auto stats update’ were sufficient temporary mitigations.
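For reference, the statistics mitigation would look something like this at the database level (the database name is hypothetical, and Lonny's team may have toggled the setting differently):

ALTER DATABASE [YourDb] SET AUTO_UPDATE_STATISTICS OFF;
-- ... run the sensitive data cleansing / fact table versioning operations ...
ALTER DATABASE [YourDb] SET AUTO_UPDATE_STATISTICS ON;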
Very interesting sleuthing work. It also appears the issue might have been limited to SP2 CU4, as SP2 CU3 and SP2 CU5 return different results in Lonny’s repro.
The latest CUs for SQL Server 2016 and 2017 contain some important Query Store fixes that I thought worth mentioning for those of you on either version, or those of you looking to upgrade. As of this writing, the current CU for SQL Server 2016 SP2 is CU5, and for SQL Server 2017 it is CU13. Many times we see fixes that make it into a SQL Server 2017 CU ported back to a SQL Server 2016 build. Interestingly enough, there are some Query Store fixes in 2016 CUs that are not in 2017 CUs. I don’t know if that’s because the issues do not exist in 2017, or if they just haven’t been fixed yet in 2017. I’m planning to update this post if the fixes are added down the road. So here we go, in descending CU order…
This post is a great reason to keep those SQL Server instances up to date.
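If you aren't sure which CU an instance is on before comparing against the fix lists, a quick check:

SELECT SERVERPROPERTY('ProductVersion')     AS build_number,  -- full build string
       SERVERPROPERTY('ProductUpdateLevel') AS update_level;  -- e.g. 'CU13'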
Two weeks ago, I was working on a very interesting case in SQL Server 2016. I received an email from one of my customers saying that they were having intermittent issues within their app, which was executing some sp_execute_external_script stored procedure calls against the database.
We also restarted the Launchpad service, but with no luck. The biggest challenge was that sometimes the service responded fine and sometimes it showed the issue I pasted above (and this was absolutely new to me). From the SQL Server side, we ran an Extended Events session with all of the R services counters, but nothing appeared. From sys.dm_exec_session_wait_stats, we observed that the session was waiting on the SATELLITE_SERVICE_SETUP wait, which indicates that SQL Server was waiting for an answer from the R service itself.
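The per-session wait check mentioned above looks roughly like this (the session id is hypothetical):

SELECT session_id, wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_exec_session_wait_stats
WHERE session_id = 53   -- the session running sp_execute_external_script
ORDER BY wait_time_ms DESC;
-- Here the dominant wait was SATELLITE_SERVICE_SETUP: SQL Server waiting
-- on the external R runtime to respond.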
Click through for the solution.
We were enjoying a nice, peaceful afternoon when we heard panicked shouting that a SQL Server had become unresponsive and the customers were unable to do anything.
We moseyed on down to the server in question to take a look at it. One thing stood out immediately: CPU was pegged at 100%, but SQL Server itself didn’t actually seem to be doing anything; transactions/second was on the floor. Unfortunately, this happened a while back and I didn’t think to capture any graphs or metrics at the time, so you’re just going to have to take my word for it.
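As a sketch, the transactions-per-second observation could be confirmed with the built-in performance counter DMV:

SELECT [object_name], counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Transactions/sec';
-- cntr_value is cumulative since startup; sample it twice and take the
-- difference over the interval to get the actual per-second rate.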
The issue David ran into was subsequently fixed, making this a cautionary tale to keep those SQL Server instances patched.