First off, the word itself. The Cloud. What is The Cloud? It’s a server that you don’t own. You can’t touch it; it’s in someone else’s data center. It may or may not be virtual. Amazon’s Cloud, or Microsoft’s, or Google’s, is really a set of data centers with racks and racks of servers. They are physical, just not at your location. And they’re accessed across the Internet. This is something we’ve been doing for 30 years; it’s called a Wide-Area Network, just scaled up bigger. We had bi-coastal WANs before the World Wide Web came along.
Four or five years ago, I was absolutely in agreement. Today, I’m 50/50, being near 100% for many types of servers (web servers, etc.) and closer to 25-30% for databases. My expectation is that those numbers will continue to shift upward as time goes on, but there will always be reasons not to migrate certain servers to someone else’s data center.
To understand why we get this performance degradation with SQL Server 2016 RC1, three key parts of a transaction’s life cycle need to be understood, along with the serialisation mechanisms that protect them.
Chris digs into call stacks as part of his post. We’ll see if there are some performance improvements between now and RTM on this front.
SAN snapshots, and I don’t care who your vendor is, by definition depend on the production LUN. Well, that’s the production data.
That’s it. That’s all I’ve got. If that production LUN fails for some reason, or becomes corrupt (which sort of happens a lot) then the snapshot is also corrupt. And if the snapshot is corrupt, then your backup is corrupt. Then it’s game over.
SAN snapshots are a good part of an infrastructure-side recovery plan, but databases have their own recovery time and recovery point objectives. Conflating these two can lead to sadness.
To test or not to test. That is the question. Or is it? Shouldn’t we always test? That was the question posted here. It’s a big question so let’s break it down a bit.
My short answer is: test as much as necessary to keep two conditions from occurring: first, that you get fired (or worse); second, that the inevitable bug somebody else discovers in your code is too embarrassing. Kenneth’s answer is better.
So there I was, creating some indexes. Since I was in demo mode, I had execution plans turned on. I’m actually terrible about remembering to turn them off. There have been times when I’ve run loops to create some REALLY big tables, and left them turned on. SSMS crashed pretty quickly. Apparently generating execution plans isn’t free! Keep that in mind when you’re timing/tuning queries.
Since they were there, I took a look at them. They were basically what I expected. The only way to index data is to read it and sort it and blah blah blah. But there was a little something extra!
Yo dawg, I heard you like indexes…
Last week, I received an alert that the percentage of transaction log in use on one of our production databases was increasing more than it should have been. I’ll refer to this database as DB1 from here on. It had reached 18%, which is above the normal 5-10% we like to see. DB1 is in an Always On Availability Group, so if the log usage jumps, we jump.
The first thing I checked was that the log backups were running. We use Ola Hallengren’s maintenance solution to manage our backups. The backup jobs were running without error.
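For reference, one way to confirm that log backups really did run is to check msdb’s backup history; a minimal sketch, using the post’s DB1 placeholder as the database name:

```sql
-- Most recent transaction log backups for DB1, per msdb history
SELECT TOP (10)
    bs.database_name,
    bs.backup_start_date,
    bs.backup_finish_date,
    bs.type  -- 'L' = transaction log backup
FROM msdb.dbo.backupset AS bs
WHERE bs.database_name = N'DB1'
  AND bs.type = 'L'
ORDER BY bs.backup_finish_date DESC;
```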
The next thing to check was the log_reuse_wait_desc column in sys.databases.
The highlighted row is the row for DB1. A log_reuse_wait of 2 means LOG_BACKUP: the log cannot be reused until a log backup runs. But our log backups were running without error, so why was my transaction log slowly filling like an unattended bath?
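The check described above can be sketched as a simple query against sys.databases (DB1 again being the post’s placeholder name):

```sql
-- Why can't the transaction log space be reused?
SELECT name, log_reuse_wait, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'DB1';
```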
Read on to learn more. James also has a link to a (closed/won’t-fix) Connect item.
Back in December, I published a post entitled, “How I spot not-yet-documented features in SQL Server CTPs.” In that post, I revealed a few of the ways that I get an early jump on what’s changed between CTP and/or RC builds of SQL Server. You can do those same things if you want to see what new objects or columns have been created, or which system modules have changed, between – say – SQL Server 2014 and SQL Server 2016. This will likely be a bigger list than any of the individual sets of items I’ve posted about in our SQL Server 2016 builds post, but the same concepts apply – create a linked server to the older instance, optionally create some synonyms for easy adaptation, and go to town.
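As a rough sketch of that linked-server comparison (the linked server name SQL2014 is hypothetical, not from the original post), finding system objects that are new in the 2016 instance might look like:

```sql
-- Run on the SQL Server 2016 instance; SQL2014 is a linked server
-- pointing at the older instance (hypothetical name).
SELECT o.name, o.type_desc
FROM sys.all_objects AS o
WHERE o.is_ms_shipped = 1
  AND NOT EXISTS
(
    SELECT 1
    FROM SQL2014.master.sys.all_objects AS o14
    WHERE o14.name = o.name
);
```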
There are a few other things I check as we get closer to the final release, and they can be quite revealing about what features have made it into the product. We can also get some insight into things they tried to get in but couldn’t (for example, there are error messages and Intellisense verbiage for
STRING_AGG(), which does not seem to be in the cards for RTM). I’m going to point out a few, but I’m not going to iterate through all of the things I’ve learned – this is more to serve as a starting point so you can experiment on your own.
If you’re interested in this kind of spelunking, you can learn a lot without having to reverse engineer binaries.
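For example, one bit of spelunking the excerpt mentions – error messages referencing features that didn’t ship, such as STRING_AGG() – can be done with a query like this against a CTP or RC build:

```sql
-- Search English error messages for traces of a not-yet-shipped feature
SELECT message_id, text
FROM sys.messages
WHERE language_id = 1033  -- English
  AND text LIKE N'%STRING_AGG%';
```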
In an enterprise solution, you might have 10+ perspectives – some of which might have similar names – and without a clear description it will be confusing for users (especially new ones) to know which perspective is the correct one.
A better idea is to add a description/annotation property for each perspective where a more helpful text description can be provided indicating the business process, common types of analysis, etc. This would provide a metadata hook for self-service reporting tools (e.g. Excel, Power BI) as well as enterprise data cataloging solutions such as Azure Data Catalog.
Another helpful feature would be the ability to set the visibility of a perspective – or if you’re more familiar with the Tabular vernacular: “hide it from client tools”!
While you’re reading about perspectives, fill out Bill’s SSAS survey.
What’s clear from these examples is that trying to relate what’s going on in the query to what you see in Profiler is quite tricky even for seemingly simple queries; for most real-world queries it would be almost impossible to do so with total confidence. That said, when I’m tuning queries I usually comment out large parts of the code to try to isolate problems, thus creating much simpler queries, and I hope the value of this post will lie in you being able to spot similar patterns in Profiler to the ones I show here when you do the same thing. In part 3 of this series I’ll show you some practical examples of how all this information can help you tune your own queries.
Whenever I read Profiler, my next question is “Is there an extended event which covers this?”
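One way to answer that question is to ask SQL Server itself: the sys.trace_xe_event_map catalog view maps classic trace events to their Extended Events equivalents. A quick sketch:

```sql
-- Which Extended Event corresponds to a given Profiler trace event?
SELECT
    te.name AS trace_event,
    xe.package_name,
    xe.xe_event_name
FROM sys.trace_xe_event_map AS xe
JOIN sys.trace_events AS te
    ON te.trace_event_id = xe.trace_event_id
WHERE te.name LIKE N'%Statement%';
```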
Once the data has been encrypted, we can move forward with creating a new view that will be used to “head fake” the application. The view is named the same as the original table, so the change is seamless to the application. The application doesn’t know whether it’s calling a table or a view, which is why this works.
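A minimal sketch of that table-to-view swap, with hypothetical object, key, and certificate names (the original post’s actual schema may differ):

```sql
-- Rename the real table out of the way, then give the view its old name.
EXEC sp_rename 'dbo.Customers', 'Customers_Encrypted';
GO
-- The view decrypts on the fly; the application still queries dbo.Customers.
CREATE VIEW dbo.Customers
AS
SELECT
    CustomerID,
    CONVERT(nvarchar(25),
        DECRYPTBYKEYAUTOCERT(CERT_ID('CustomerCert'), NULL, CardNumberEnc)
    ) AS CardNumber
FROM dbo.Customers_Encrypted;
GO
```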
You should never store passwords in plaintext. You should almost never store passwords in a reversible format (i.e., encrypted). The primary case in which I can see storing passwords encrypted rather than hashed is if you have automated systems (or non-automated technicians) which need passwords to authenticate somewhere. Even then, there’s a lot of value in using OAuth tokens. But if you can’t get around any of this, John’s solution does remove the really bad decision of leaving passwords in plaintext.
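As an illustration of the “hash, don’t encrypt” point – table and column names here are made up, and a dedicated key-derivation function at the application layer (bcrypt, PBKDF2, and the like) beats raw HASHBYTES for real systems:

```sql
-- Store a salted hash instead of the password itself (hypothetical table).
DECLARE @password nvarchar(128) = N'correct horse battery staple';
DECLARE @salt varbinary(16) = CRYPT_GEN_RANDOM(16);  -- per-user random salt

INSERT INTO dbo.UserLogin (UserName, PasswordSalt, PasswordHash)
VALUES
(
    N'jdoe',
    @salt,
    HASHBYTES('SHA2_512', @salt + CAST(@password AS varbinary(256)))
);
```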