Category: Administration

Enabling Large Memory Pages in SQL Server

David Klee talks us through large memory pages:

SQL Server Enterprise Edition can leverage large memory pages to reduce the number of memory pointers required for larger SQL Server deployments. Reducing the number of pointers makes the database engine more efficient, especially for SQL Servers with greater than 32GB of RAM. A normal memory block is 4KB, and many thousands of pointers are required to manage the memory underneath a larger SQL Server. Large memory pages can change the block size to 2MB, greatly reducing the number of pointers required for memory management.

Read on to see what effect this has, as well as when to use them and—more importantly—when not to use them.
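
If you want to check whether an instance is actually getting large pages (trace flag 834, the Lock Pages in Memory privilege, and Enterprise Edition are the usual prerequisites), a minimal T-SQL sketch:

-- large_page_allocations_kb > 0 means large memory pages are in use;
-- locked_page_allocations_kb > 0 means Lock Pages in Memory is in effect.
SELECT large_page_allocations_kb,
       locked_page_allocations_kb
FROM sys.dm_os_process_memory;

-- On SQL Server 2016 SP1 and later, the memory model is also exposed directly
-- (CONVENTIONAL, LOCK_PAGES, or LARGE_PAGES).
SELECT sql_memory_model_desc
FROM sys.dm_os_sys_info;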

A Primer on RoboCopy

John Morehouse takes us through a venerable file copying tool for Windows:

Robocopy has been around for years within the Microsoft ecosystem and it is highly versatile. However, until recently it wasn't a tool that I was well versed in and, frankly, I hardly used it. Over the past year or so, however, I have found myself using it more and more as a solution for file movement when needed.

Essentially, robocopy will move files or even directories from one location to another.  It will copy permissions without any additional coding, and it will only copy the files that do not exist within the destination.  This is useful in that you do not have to explicitly account for files that might already exist at the destination.  It handles it for you.

Read on to see more, as well as a demo of RoboCopy in action.
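
For flavor, a hedged sketch of the kind of call John describes (paths and log location are placeholders): robocopy's defaults already skip files that are unchanged at the destination, /E brings subdirectories along, and /COPYALL carries the security descriptors (it needs an elevated prompt):

rem Copy a folder tree, permissions included, skipping files that already
rem exist unchanged at the destination. Paths are placeholders.
robocopy "D:\SqlBackups" "\\FileServer01\SqlBackups" /E /COPYALL /R:2 /W:5 /LOG:"C:\Temp\robocopy.log"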

Considerations Before Failing Over an AG

David Fowler points out a few things to look at before failing over an Availability Group:

What about your SQL Agent jobs? Have you got jobs that perform actions on your data? If you have, do those jobs exist on the new primary? If they don't, then I'm happy to bet that whatever function they were performing probably isn't happening anymore.

One thing that I always want to make sure of before I fail over is: do I have all the relevant jobs ready to roll on the secondary server?

But what about the jobs that you’ve got on the old primary? There’s a fair chance that, if they’re doing any sort of data manipulation, they’re going to be failing.

Click through for additional considerations.
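
One quick pre-failover sanity check along these lines is to compare Agent job names across replicas; a minimal sketch, assuming a linked server pointing at the secondary (the [SECONDARY01] name is a placeholder):

-- Jobs that exist on the current primary but are missing on the secondary.
SELECT j.name AS missing_on_secondary
FROM msdb.dbo.sysjobs AS j
WHERE NOT EXISTS
(
    SELECT 1
    FROM [SECONDARY01].msdb.dbo.sysjobs AS s
    WHERE s.name = j.name
);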

Why Optimize for Ad Hoc Workloads

Randolph West explains why optimize for ad hoc workloads should be enabled by default:

Enabling the optimize for ad hoc workloads configuration setting will reduce the amount of memory used by all query plans the first time they are executed. Instead of storing the full plan, a stub is stored in the plan cache. Only once that plan executes again is the full plan stored in memory. What this means is that there is a small overhead, on the second execution, for all plans that are run more than once.

Read the whole argument. I don’t know that I’ve seen an instance yet where this setting was a really bad choice.
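
Turning it on is a quick sp_configure change, and you can watch the effect in the plan cache, where single-use plans show up as Compiled Plan Stub entries; a minimal sketch:

-- Enable the instance-level setting (it is an advanced option).
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure N'optimize for ad hoc workloads', 1;
RECONFIGURE;

-- Compiled Plan Stub rows are the single-use plans now being cached cheaply.
SELECT cacheobjtype,
       COUNT(*) AS cached_objects,
       SUM(size_in_bytes) / 1024 AS total_kb
FROM sys.dm_exec_cached_plans
GROUP BY cacheobjtype
ORDER BY total_kb DESC;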

Orphaned Users in SQL Server

Dave Bland walks us through one way to fix an orphaned user:

In my many years of working as a DBA, I have encountered many disabled logins. However, I have never really encountered what looks to be a disabled database user account. I didn't even think it was possible to disable a user account in a SQL Server database. I checked the user account properties just to make sure I was correct. Sure enough, no option to disable a user account. This finally turned out to be a simple case of "looks can be deceiving."

You can also use the sp_change_users_login procedure to fix orphaned users.
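
As a sketch of both approaches (the AppUser name is a placeholder, and ALTER USER ... WITH LOGIN is the non-deprecated replacement for sp_change_users_login):

-- Database users whose SID has no matching server login: the classic orphans.
SELECT dp.name AS orphaned_user
FROM sys.database_principals AS dp
LEFT JOIN sys.server_principals AS sp
    ON dp.sid = sp.sid
WHERE sp.sid IS NULL
  AND dp.type = 'S'                  -- SQL users
  AND dp.authentication_type = 1;    -- excludes contained database users

-- Re-map an orphaned user to the login of the same name.
ALTER USER [AppUser] WITH LOGIN = [AppUser];

-- The older approach mentioned above:
EXEC sp_change_users_login 'Auto_Fix', 'AppUser';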

Defining Downtime Down

Andy Mallon takes us through the notion of downtime:

There’s a lot of discussion about preventing downtime. As a DBA and IT professional, it’s my sworn duty to prevent downtime. I usually describe my job as DBA as something along the lines of, “to make sure data is always available to the people and applications that need it, and never available to the people and applications that shouldn’t have it.” Preventing downtime is certainly important for that first part–but how the heck do you define downtime?

Andy asks more questions than he answers, but these are the types of questions the technical side and the business side can get together on to define what constitutes downtime.

Trying Out the Data Migration Assistant

Dave Mason shares some thoughts on the Data Migration Assistant:

I recently took advantage of an opportunity to try Microsoft’s Data Migration Assistant. It was a good experience and I found the tool quite useful. As the documentation tells us, the DMA “helps you upgrade to a modern data platform by detecting compatibility issues that can impact database functionality in your new version of SQL Server or Azure SQL Database. DMA recommends performance and reliability improvements for your target environment and allows you to move your schema, data, and uncontained objects from your source server to your target server.” For my use case, I wanted to assess a SQL 2008 R2 environment with more than a hundred user databases for an on-premises upgrade to SQL 2017.

Dave takes us through an upgrade of three sample databases and then shares some messages from actual production databases.

Intelligent Query Processing FAQ

Joe Sack answers a number of questions about intelligent query processing in SQL Server:

You have batch mode adaptive joins, but no row mode adaptive joins. Why?
Adaptive joins are more appropriate for scenarios where the join-input row count fluctuates significantly. Batch mode assumes a higher row flow versus a typical low-row OLTP pattern. Row mode adaptive joins would likely be too prone to regressions. Batch mode on rowstore opens up adaptive joins for scenarios where we estimate higher row counts for join inputs.

There are some good questions and answers in this set.
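
Batch mode on rowstore, which is what unlocks adaptive joins outside columnstore, lights up under compatibility level 150; a hedged sketch of switching it on and selectively back off (database and table names are placeholders):

-- Batch mode on rowstore requires compatibility level 150 (SQL Server 2019).
ALTER DATABASE [YourDatabase] SET COMPATIBILITY_LEVEL = 150;

-- Disable it database-wide without dropping the compatibility level...
ALTER DATABASE SCOPED CONFIGURATION SET BATCH_MODE_ON_ROWSTORE = OFF;

-- ...or per query when chasing a specific regression.
SELECT COUNT(*)
FROM dbo.SomeLargeTable
OPTION (USE HINT('DISALLOW_BATCH_MODE'));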

Azure SQL Managed Instance Public Endpoints

Danimir Ljepava announces public endpoints for Azure SQL Managed Instances:

Public endpoint, the ability to connect to Azure SQL Database Managed Instance from the Internet without a VPN, has reached global availability today. The release of this feature will help support many new integration scenarios.

The public endpoint for Managed Instance can today be enabled or disabled via a PowerShell script. Support for the Azure portal will be coming within the next week or so, as soon as all updates are rolled out.

Click through to learn how to enable it with PowerShell.
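
For reference, the shape of the call is a single Az.Sql cmdlet, plus a network security group rule to allow TCP port 3342 (the port the public endpoint listens on); a minimal sketch with placeholder names, assuming the Set-AzSqlInstance cmdlet's PublicDataEndpointEnabled parameter described in the announcement:

# Enable the public data endpoint on an existing managed instance
# (resource group and instance names are placeholders).
Set-AzSqlInstance -ResourceGroupName "rg-sql-mi" -Name "mi-prod-01" -PublicDataEndpointEnabled $true

# Traffic still has to be allowed through the managed instance subnet's NSG
# on TCP port 3342 before clients can connect.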
