No matter how bad contention gets for normal spinlocks, at least we account for cycles spent spinning: this stuff gets exposed in sys.dm_os_spinlock_stats and can allow us to diagnose and quantify contention. However, spinning done on a latch’s spinlock gets no obviously visible accounting. As such, if we did somehow manage to accrue a large number of spins on a very hot latch, it wouldn’t be obvious that the time went into spinning. This is not to say of course that it is necessarily a common problem, just that it would be hard to prove one way or the other.
If I appear to be painting a bleak picture, I apologise. Given the importance of latches, especially buffer latches, one would assume that the SQL Server development teams would be on the constant lookout for opportunities to identify issues and mitigate them. And this is exactly the kind of scenario where some bright spark comes up with the idea of superlatches.
Read the whole thing.
Anyway, one of the particular updates that caught my attention was Adds the adjusted buffer size to the BufferSizeTuning event when AutoAdjustBufferSize is enabled in SSIS 2016. In simple terms, it allows you to log the size of the data flow buffer set automatically by the AutoAdjustBufferSize property. This property automatically calculates the buffer size needed to reach the number of rows in the buffer specified by DefaultBufferMaxRows, and therefore ignores the DefaultBufferSize property. Unfortunately, this property is set to false by default.
Logging is at the Verbose level, so you won’t use it very often, but for test purposes, it’s interesting to see those changes.
Again the wall. There is no way you can choose a temporal table and apply encryption to a column or columns using the wizard.
I then tried using PowerShell (after manually creating the keys), as that is truly the only way to encrypt existing columns, just in case.
Raul doesn’t stop there, though, and he does figure out a workaround.
The problem was that the automatic X-axis range options were wrong, and he recommended setting the range manually.
The fix is simple: click on the visual with the faulty X-axis, then in the left-hand menu click the paint brush, expand the X-Axis section, and manually enter the start and end values.
What’s interesting is that the answer came from submitting feedback from within the application.
Developers have the option to use the Power BI REST APIs to embed tiles or reports into their website or application. This option does require that the end user is signed into Power BI, which means they will need to have signed up for Power BI. They will then see the items that they have access to.
I only remembered the first option, but the REST API is very interesting.
Until now, the single biggest problem has been that both Azure SQL DB and Amazon RDS SQL Server don’t give you access to backup files. If you wanted to get your data out, you were hassling with things like import/export wizards, BCP, or sync apps.
This is a really, really, really big deal, something Azure SQL DB doesn’t support (and I dearly wish it did). I get even more excited reading this because now Microsoft has to do it in order to remain competitive, and that’ll make Azure SQL DB a much more attractive product for traditional DBAs.
This makes the migration strategy to and from RDS significantly easier. Brent gives a few examples of how this will help customers.
It’s possible to use a workflow to execute your backups, but keep in mind that there is a downside: if you execute all the backups at once, you’ll probably run into throughput issues, for example if you’re dependent on a slow network.
You could always add another parameter to filter out specific databases, to make sure you execute it against a specific set.
Sander does include his script, so check that out.
I frequently need to know where backups went and I restore those backups for operational recovery on a regular basis. Would you believe in 20+ years as a DBA I can count the number of database restores for a disaster on my fingers? (Which is good because taking off your shoes at the office is considered bad form.)
I think the important take-away from this post is that you should leave your shoes on at work. You don’t know what kind of disgusting things are in that carpet. Also, read on to learn where to find database restoration history.
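For reference, restore history lives in msdb. A minimal sketch of the kind of query involved (the exact columns you pull will vary; the join to backupmediafamily is only needed if you also want to see where the backup file lived):

```sql
-- Sketch: recent restores recorded in msdb, newest first.
SELECT rh.destination_database_name,
       rh.restore_date,
       rh.restore_type,           -- D = database, F = file, L = log
       bs.backup_start_date,      -- when the restored backup was taken
       bmf.physical_device_name   -- where the backup file was located
FROM msdb.dbo.restorehistory AS rh
JOIN msdb.dbo.backupset AS bs
    ON rh.backup_set_id = bs.backup_set_id
JOIN msdb.dbo.backupmediafamily AS bmf
    ON bs.media_set_id = bmf.media_set_id
ORDER BY rh.restore_date DESC;
```

Note that msdb history is subject to cleanup jobs like sp_delete_backuphistory, so it only goes back as far as your retention allows.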
For row store targets, it is important to note that the presence of a clustered index or any additional non-clustered indexes on the target table will disable the parallel INSERT behavior. For example, here is the query plan on the same table with an additional non-clustered index present. The same query takes 287 seconds without a TABLOCK hint, and the execution plan is as follows.
This post goes into detail on when you can expect parallelism in rowstore and columnstore insertions. I highly recommend reading it.
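The shape of the statement in question looks like this; a minimal sketch, where dbo.TargetHeap and dbo.SourceTable are hypothetical names:

```sql
-- On SQL Server 2016+, inserting into a heap with a TABLOCK hint
-- is what makes the INSERT eligible for a parallel plan.
INSERT INTO dbo.TargetHeap WITH (TABLOCK)
SELECT col1, col2
FROM dbo.SourceTable;

-- Per the post: adding a clustered or non-clustered index to the
-- rowstore target disables the parallel INSERT behavior, even
-- with the hint in place.
```

Checking the actual execution plan for a parallel Insert operator is the way to confirm whether you actually got the benefit.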
CREATE ASSEMBLY supports specifying a CLR assembly using bits, a bit stream that can be specified using regular T-SQL. The full method is described in Deploying CLR Database Objects. In practice, the CREATE ASSEMBLY statement looks something like:
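(The original post's example wasn't captured here; the general shape, with a hypothetical assembly name and a truncated hex literal standing in for the real byte stream, is:)

```sql
-- The FROM clause takes the assembly's bytes as a varbinary literal,
-- so no file system access is needed on the server.
CREATE ASSEMBLY MyClrUtilities
FROM 0x4D5A90000300000004000000FFFF0000...  -- full assembly bytes go here
WITH PERMISSION_SET = SAFE;
```

The hex blob is typically generated by a deployment script that reads the compiled DLL and emits it as a literal.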
After learning about assembly deployment, check out Michael’s one-question survey.