Press "Enter" to skip to content


The Impact of Auto-Close on Performance

Steve Stedman explains why Auto-Close should almost never be on for your database:

When the AutoClose setting is enabled, SQL Server will shut down the database after the last user disconnects. This means that every time a new connection is made, SQL Server must go through the entire process of starting the database again. This includes reading the database file, allocating memory, and performing any necessary recovery processes. This overhead can cause a noticeable delay for users as they connect, especially if the database is large or complex.

Read on for several other factors affecting performance. I will say that the best use case for Auto-Close is when you have a dev instance—especially on a local machine—with a large number of databases and a very limited amount of RAM available. Otherwise, if this is a server, I’m turning Auto-Close off. Even today, I’d rather just buy enough RAM for my developers than flip this switch.
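For reference, a quick way to see which databases have the setting enabled and to flip it off (YourDatabase is a placeholder for the actual database name):

```sql
-- Find databases with Auto-Close enabled
SELECT name, is_auto_close_on
FROM sys.databases
WHERE is_auto_close_on = 1;

-- Turn it off for a specific database
ALTER DATABASE YourDatabase SET AUTO_CLOSE OFF;
```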


Firewalls and TLS in SQL Server on Linux

I have a new video out:

In this video, we harden our SQL Server instance in two ways: by using a firewall to limit inbound traffic, and by using a certificate to force encrypted connections to SQL Server.

This was a video I enjoyed creating. It also shows the progress of SQL Server security: go back to 2005 (pre-SP1) and even SQL authentication over TDS was unencrypted by default. That was later fixed so that authentication used a self-signed certificate, but the data coming back from query results was still unencrypted. Nowadays, encryption is easy (if you’re okay with a self-signed cert), and some future version of SQL Server will make it mandatory.
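This isn’t from the video, but once you’ve forced encryption, one quick sanity check is sys.dm_exec_connections, which tells you whether each current connection is actually encrypted:

```sql
-- Which current connections are actually encrypted?
SELECT c.session_id,
       c.encrypt_option,   -- TRUE when the connection is using TLS
       c.auth_scheme,
       s.login_name
FROM sys.dm_exec_connections AS c
JOIN sys.dm_exec_sessions AS s
    ON s.session_id = c.session_id;
```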


Redgate State of the Database Landscape Results

Louis Davidson reviews the results:

Every year, Redgate surveys technologists to ask a big question (through lots of little questions, naturally.) This year’s question was about their current data platform configuration and usage. Just before it was released, I read the results, and I have to say, some of the things I learned amazed me…until I thought a bit more about it.

Read on for what amazed Louis and then check out the survey results yourself.


Databases with Transaction Logs Larger than Data

Jess Pomfret checks database sizes:

This week I needed a query to find any databases where the transaction log is bigger than the total size of the data files. This is a red flag, and can happen for a few reasons that would need further investigation. However, this post is just to share the query, partly for you, and partly for future Jess.

If you do want to read more about why this could happen and how to fix it, Brent has a good post and some queries here: Brent Ozar – Transaction Log Larger than Data File.

Click through for the script and a quick example.
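To be clear, the query below is my rough approximation of the idea rather than Jess’s actual script, but it shows the shape of the check: total up data and log file sizes per database from sys.master_files and flag the databases where the log wins.

```sql
-- Rough approximation (not Jess's script): databases where the
-- total log size exceeds the total data size, in MB
SELECT DB_NAME(database_id) AS database_name,
       SUM(CASE WHEN type_desc = 'ROWS' THEN CAST(size AS bigint) END) * 8 / 1024 AS data_mb,
       SUM(CASE WHEN type_desc = 'LOG'  THEN CAST(size AS bigint) END) * 8 / 1024 AS log_mb
FROM sys.master_files
GROUP BY database_id
HAVING SUM(CASE WHEN type_desc = 'LOG'  THEN CAST(size AS bigint) END)
     > SUM(CASE WHEN type_desc = 'ROWS' THEN CAST(size AS bigint) END);
```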


Handling Orphaned Database Files with dbatools

Rod Edwards rounds up the orphans:

This may be an edge-case issue, or it may not. Or some may not know this is potentially a thing. For any of the above questions, I’m not sure of the answer. I do know, however, that it doesn’t involve morally suspicious fairy tales of any kind, flutes, or pastry products for that matter.

I also know that it’s something that could potentially be robbing disk space across SQL estates, so I’ll talk about it anyway and supply a simple way to fix this in one sweep using the magnificent dbatools.

Rod’s claim is no pastry products, but my counter-argument is that the command probably runs better if someone brings donuts in.
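dbatools does the heavy lifting here (Find-DbaOrphanedFile and Remove-DbaOrphanedFile are the commands to look at), but the raw ingredient on the SQL side is simply the list of files the instance knows about; anything sitting in your data and log folders that doesn’t appear in this list is a candidate orphan:

```sql
-- Every data and log file the instance currently knows about
SELECT DB_NAME(mf.database_id) AS database_name,
       mf.type_desc,
       mf.physical_name
FROM sys.master_files AS mf
ORDER BY mf.physical_name;
```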


Adding a Service Principal to a Power BI Workspace

Marc Lelijveld reads the manual, disagrees with it, and stubbornly sets out to prove it right:

Today, I ran into an issue while adding a service principal to a workspace. In my experience so far, this should be straightforward and a matter of adding it with the right permissions in the workspace settings. However, for some reason I couldn’t get it to work. In this blog I will elaborate on what caused this issue and which tenant setting influences this.

Click through for the story behind this and the resolution to Marc’s issue.


Simple Change Auditing in SQL Server

Vitaly Bruk keeps it simple:

Some time ago, one of my customers asked me about a simple audit system. The system had to collect all “user” changes in the database. The need arose because a lot of users (developers, DevOps, technical support, analysts, etc.) have access to the DB. Due to an internal issue, he could not change or take away permissions, but he wanted to be able to track the changes.

In this article, I want to talk about the simplest way to track changes in SQL Server. The idea for this came due to an incident that happened to one of my customers – one of the tables disappeared. Yep, like that, the entire table was missing, and no one knew where it had gone.

Table disappeared? I blame the network.
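Click through for Vitaly’s approach. For comparison’s sake, here’s a minimal sketch of one simple option: a database-scoped DDL trigger that logs EVENTDATA() to a table. The table and trigger names are made up, it isn’t necessarily what Vitaly describes, and it only catches schema changes rather than data changes.

```sql
-- Hypothetical names throughout; logs every DDL statement run in this database
CREATE TABLE dbo.DdlAuditLog
(
    AuditId   int IDENTITY(1,1) PRIMARY KEY,
    EventTime datetime2 NOT NULL DEFAULT SYSUTCDATETIME(),
    LoginName sysname   NOT NULL DEFAULT SUSER_SNAME(),
    EventData xml       NOT NULL
);
GO

CREATE TRIGGER trg_AuditDdl
ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS
AS
BEGIN
    SET NOCOUNT ON;
    -- EVENTDATA() captures who ran what, including the full T-SQL text
    INSERT dbo.DdlAuditLog (EventData) VALUES (EVENTDATA());
END;
GO
```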


The Power of Schema Binding

Guy Glantser explains what schema binding is and why it’s important:

In SQL Server, when we use the “WITH SCHEMABINDING” clause in the definition of an object (view or function), we bind the object to the schema of all the underlying tables and views. This means that the underlying tables and views cannot be modified in a way that would affect the definition of the schema-bound object. It also means that the underlying objects cannot be dropped. We can still modify those tables or views, as long as we don’t affect the definition of the schema-bound object (view or function).

At one point, I was a lot more gung-ho about schema binding and would do it even for views that didn’t require it, as I liked the idea of protecting the schema from changing. This turned out to be more of an annoyance than a benefit, though, so I quietly modified that belief a while ago. Nowadays, I’ll typically use schema binding when it’s required and on the rare occasion in which I create a function (assuming the function complies).
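As a quick illustration, here’s what a schema-bound view looks like over a hypothetical dbo.Orders table. Once it exists, dropping the table or altering the referenced columns fails with an error instead of silently breaking the view.

```sql
-- Illustration only: dbo.Orders and its columns are hypothetical
CREATE VIEW dbo.v_OrderTotals
WITH SCHEMABINDING
AS
SELECT o.CustomerId,
       COUNT_BIG(*)      AS OrderCount,
       SUM(o.OrderTotal) AS TotalSpent
FROM dbo.Orders AS o   -- schema binding requires two-part object names
GROUP BY o.CustomerId;
GO

-- This now fails with an error rather than breaking the view:
-- ALTER TABLE dbo.Orders DROP COLUMN OrderTotal;
```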


Incremental Backup in Postgres

Robert Haas talks about a new feature:

Five days before Christmas I committed my patch to add incremental backup to PostgreSQL. Actually, I’ve been committing preparatory patches for some months now, but December 20 saw the two main patches land. Since then, there’s been a bunch of bug-fix commits, and there are still a few pending items that need to be addressed, but the core of the feature is now committed. If you want a quick overview of the feature, Lukas Fittl has a great video about that. Here, I’d like to talk about the architecture of the feature itself in a little more detail, and specifically with how we decide which data to copy.

Most people who are likely to read this blog are probably already familiar with the core idea of an incremental backup: instead of copying the whole database instance, just copy the data that has changed. That’s faster, and takes up less space on disk. But, to work properly, you have to be able to quickly and reliably identify which data has, in fact, changed. There’s more than one way to do that.

Read on for some of the complexity around this. It’s interesting to see what goes on behind the scenes in a relational database.
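If you want to kick the tires, the workflow in PostgreSQL 17, as I understand it, is: turn on the WAL summarizer, take a full backup with pg_basebackup, take incrementals with its --incremental option pointed at the previous backup’s manifest, and use pg_combinebackup to reconstruct a complete data directory when restoring. The one bit of SQL-side setup looks roughly like this:

```sql
-- Run as a superuser: incremental backups depend on the WAL summarizer,
-- which is off by default
ALTER SYSTEM SET summarize_wal = on;
SELECT pg_reload_conf();  -- pick up the change (or restart the server)
SHOW summarize_wal;       -- confirm it is now on
```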
