I recently saw a thread on Twitter where the OP talked about setting the max size for an In-Memory OLTP container. I responded as I always do: it’s not possible to set a limit on anything having to do with storage for In-Memory OLTP.
Unfortunately, that’s not correct: through SSMS or T-SQL, you can in fact set a max size for a container.
But you should never do that…
Because if you do, and your checkpoint files exceed the max size of the container, your database can go into the In Recovery, Suspect, or OFFLINE state.
Read on for a repro that you should not try in production. Or anywhere, really.
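For context, the dangerous setting itself is just a MAXSIZE on the container’s file entry. A hedged sketch with hypothetical database and container names (not the OP’s repro):

```sql
-- Hypothetical names. A memory-optimized filegroup container shows up as a
-- file in sys.database_files, so MODIFY FILE will happily accept a MAXSIZE.
-- Do NOT do this: if checkpoint files exceed the limit, the database can
-- end up In Recovery, Suspect, or OFFLINE.
ALTER DATABASE imoltp
MODIFY FILE (NAME = N'imoltp_mod', MAXSIZE = 5GB);
```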
Central Management Server is a super useful feature that not all DBAs know about. Since CMS data is stored in msdb and accessible via SMO, you can access it from SQL Server Management Studio or PowerShell modules like dbatools.
Central Management Server’s essential functionality includes:
Actions that are taken by using a central management server group act on all servers in the server group. This includes connecting to servers by using Object Explorer and executing Transact-SQL statements and Policy-Based Management policies on multiple servers at the same time.
I mostly use it as a visual repository of my SQL Servers. Prior to using dbatools and Invoke-DbaSqlQuery, however, I did use CMS to easily execute code against a number of different servers.
CMS is a great feature, and is a critical tool for scaling out a SQL Server infrastructure.
This is every DBA’s favorite game: figuring out which DMV contains the data you want. It turns out there are two places where database file size info is maintained. Each database has sys.database_files, which has information for that database. The master database also has sys.master_files, which contains information for every database.
sys.master_files seems like it would be the obvious choice: everything in one view in master is going to be easier to query than hitting a view in a bunch of different databases. Alas, there’s a minor snag. For tempdb, sys.master_files has the initial file size, but not the current file size. This is valuable information, but doesn’t answer the use cases we set out above. I want to know the current file sizes. Thankfully, sys.database_files has the correct current file sizes for tempdb, so we can use that.
sys.database_files seems like it’s going to be the right move for us, then. Alas, this isn’t quite perfect either. With log shipping and Availability Group secondaries, if you’ve moved data files to a different location, sys.database_files will contain the location of the files on the primary database. Thankfully, sys.master_files has the correct local file locations for user databases, so we can use that.
Ugh, so it looks like the answer is “both”: we’ll need to use sys.database_files for tempdb, and sys.master_files for everything else.
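A minimal sketch of that combined approach (my own query, not Andy’s script) could look like:

```sql
-- sys.master_files for every database except tempdb, plus
-- tempdb.sys.database_files for tempdb's current sizes.
SELECT DB_NAME(database_id) AS database_name,
       name                 AS logical_name,
       physical_name,
       size * 8 / 1024      AS size_mb     -- size is in 8KB pages
FROM sys.master_files
WHERE database_id <> DB_ID(N'tempdb')
UNION ALL
SELECT N'tempdb',
       name,
       physical_name,
       size * 8 / 1024
FROM tempdb.sys.database_files;
```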
Click through for the script, including Andy’s critical reflection on how Past Andy has failed him.
For the data file, the impact can be illustrated in the following chain of events:
- A new 1MB data file is created that contains no information. (ie. a 1MB data file containing 0MB of data)
- Data is written to the data file until it reaches the file size. (ie. the 1MB data file now contains 1MB of data)
- SQL Server suspends normal operations to the database while the data file is grown by 1MB. (ie. the data file is now 2MB and contains 1MB of data) If Instant File Initialization (IFI) is enabled, the file is expanded and database operations resume. If IFI is not enabled, the expanded part of the data file must be zeroed before db operations resume, resulting in an additional delay.
- Once the data file has been grown successfully, the server resumes normal database processing. At this point the server loops back to Step 2.
The server will continue this run-pause-run-pause processing until the data file reaches its Maxsize, or the disk becomes full. If the disk that the data file resides on has other files on it (ie. the C drive, or a disk that is shared by several databases), there will be other disk write events happening between the data file growth events. This may cause the data file expansion segments to be non-contiguous, increasing the file fragmentation and further decreasing the database performance.
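One common mitigation for the run-pause-run-pause cycle is to presize the data file and use a fixed-MB growth increment rather than a tiny or percentage-based one. A sketch with hypothetical database and file names:

```sql
-- Hypothetical names. A larger initial size means fewer growth pauses;
-- a fixed 512MB increment keeps each pause predictable (and near-instant
-- if Instant File Initialization is enabled).
ALTER DATABASE SalesDB
MODIFY FILE (NAME = N'SalesDB_Data', SIZE = 10GB, FILEGROWTH = 512MB);
```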
This is all to answer the question, “What’s the problem with missing a few log backups?”
Useful information it provides at table level:
- tableType, to identify HEAP tables
- row_count, to identify tables with plenty of rows or no rows at all
- TotalSpaceMB, to identify big tables in size
- LastUserAccess, to identify tables that are not used
- TotalUserAccess, to identify tables that are heavily used
- TableTriggers, to identify tables that have triggers
Useful information it provides at column level:
- DataType-Size, to identify supersized, incorrect, or deprecated data types
- Identity, to identify identity columns
- Mandatory-DefaultValue, to identify NULL/NOT NULL columns or columns with default constraints
- PrimaryKey, to identify primary key columns
- Collation, to identify columns that might have a different collation from the database
- ForeignKey-ReferencedColumn, to identify foreign keys and the table.column they reference
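As one example of the kind of metadata such a script pulls together (my own sketch, not the author’s code), finding HEAP tables and their row counts might look like:

```sql
-- Heaps are index_id = 0 in sys.partitions; summing rows across
-- partitions gives the per-table row count.
SELECT s.name      AS schema_name,
       t.name      AS table_name,
       SUM(p.rows) AS row_count
FROM sys.tables t
JOIN sys.schemas s
     ON s.schema_id = t.schema_id
JOIN sys.partitions p
     ON p.object_id = t.object_id
WHERE p.index_id = 0            -- heap
GROUP BY s.name, t.name;
```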
Click through for the script.
I have been using Azure Logic Apps recently to build some workflows to gather data from external sources, ultimately inserting it into a database for reporting. That got me thinking: how can this be useful for “DBA”-based tasks? Let’s take a step back for a minute: what are Logic Apps? It is a technology that helps integrate apps, data, systems, and services across enterprises. The key parts of a Logic App solution are connectors, triggers, and actions.
I decided that I wanted to execute a stored procedure every 6 hours to capture wait statistics for my Azure SQL Database and log the information in a table.
This is what my workflow looks like.
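The stored procedure at the center of such a workflow might be sketched like this (the table and procedure names are my own assumptions; sys.dm_db_wait_stats is the database-scoped wait view in Azure SQL Database):

```sql
-- Hypothetical logging table and procedure names.
CREATE TABLE dbo.WaitStatsLog (
    capture_time        datetime2    NOT NULL DEFAULT SYSUTCDATETIME(),
    wait_type           nvarchar(60) NOT NULL,
    waiting_tasks_count bigint       NOT NULL,
    wait_time_ms        bigint       NOT NULL
);
GO

CREATE PROCEDURE dbo.CaptureWaitStats
AS
BEGIN
    -- Snapshot non-zero waits; the Logic App calls this every 6 hours.
    INSERT dbo.WaitStatsLog (wait_type, waiting_tasks_count, wait_time_ms)
    SELECT wait_type, waiting_tasks_count, wait_time_ms
    FROM sys.dm_db_wait_stats
    WHERE wait_time_ms > 0;
END;
```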
There are a few alternatives available, so it’s nice to see an example of one of them.
- Designate a SQL Server instance as the “Central Management Server”.
- Create server groups named Production servers and Development servers under the CMS server.
- Register “QA Testing server” and “Staging server” under the development server group, and “HR DB” and “Sales DB” under the production server group.
- Connect and access the CMS server from another server inside the same domain using SSMS.
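Once registered, the groups and servers can be verified directly, since CMS metadata lives in msdb. A sketch, assuming the sysmanagement_shared_* views that back the CMS feature:

```sql
-- Server groups and their registered servers, as stored on the CMS instance.
SELECT g.name AS group_name,
       s.name AS display_name,
       s.server_name
FROM msdb.dbo.sysmanagement_shared_server_groups AS g
JOIN msdb.dbo.sysmanagement_shared_registered_servers AS s
     ON s.server_group_id = g.server_group_id
ORDER BY g.name, s.name;
```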
Once you get past a few SQL Server instances, having a CMS in place is a fantastic idea.
MSDTC configuration is not as straightforward as you might think. It’s a different choice if you are using a local MSDTC, clustered MSDTC, on-premises, Azure, a Failover Cluster, or an Availability Group. Every one of those variables leads you down a different path with different choices. The goal of this article is to clarify those choices.
If you think you do not use the MSDTC, you better check again. I frequently see folks using it that do not realize it. The most common is Linked Servers. You got it. If you are using Linked Servers, you’re using the MSDTC.
Starting with SQL Server 2008, you are no longer required to install an MSDTC with a SQL Server Failover Cluster. That does not mean you do not need an MSDTC configuration. Previously, if you wanted to make your application database highly available but your application did not use the MSDTC, you were required to install one anyway. That requirement is what was removed; it does not mean you never need a clustered MSDTC.
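If you’re not sure whether linked servers (and therefore the MSDTC) are in play, a quick check against sys.servers:

```sql
-- server_id 0 is the local instance; is_linked flags linked servers.
SELECT name, product, provider, data_source
FROM sys.servers
WHERE is_linked = 1;
```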
Read the whole thing.
Usually I use Wireshark to debug networking problems in production. My Wireshark workflow is:
- Capture packets with tcpdump (typically something like sudo tcpdump port 443 -w output.pcap)
- scp the pcap file to my laptop (scp host:~/output.pcap .)
- Open the pcap file in Wireshark
That’s pretty simple! But once you have a pcap file with a bunch of packets on your laptop, what do you do with it?
Wireshark is my go-to tool for diagnosing networking issues.
SQL Server 2008 and 2008 R2 have had a tremendous run. But all good things come to an end, right? On July 9, 2019, Microsoft will end Extended Support, which means no more updates or support of any kind, potentially leaving you vulnerable to security and compliance issues.
The good news is, you still have plenty of time and options to avoid any heartburn caused by the technology “circle of life.” And we’ll lay out all of those options for you in a webinar on July 12.
Non-contrarian opinion: get those old SQL Server instances updated. Life is so much better on the 2012 and later branch.
Contrarian opinion: core-based licensing had such a major impact on businesses that five years from now, we’ll still see more SQL Server 2008 and 2008 R2 in the wild than we see 2005 today.