
Category: Administration

Redshift Architecture Performance Tips

John Ryan has a few hints to help us build speedy Redshift clusters:

The Need to Vacuum

As Redshift does not reclaim free space automatically, update and delete operations can frequently lead to table growth. Equally, it’s important that, as new entries are added, the data is maintained in sorted sequence.

The VACUUM command re-sequences data and reclaims the disk space left behind by DELETE and UPDATE operations. Although it won’t block other processes, it can be a resource-intensive operation, especially for data stored using interleaved sort keys.

It should be run periodically to ensure consistent performance and to reduce disk usage.

Some of this is good Postgres advice; some of it is good MPP advice (and serves well, for example, when dealing with Azure SQL Data Warehouse); the rest is Redshift-specific.
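
As a minimal sketch of the Redshift side (the table name is illustrative), you can check how unsorted a table has become and then vacuum it:

-- Check how much of the table is unsorted (percentage, from svv_table_info)
SELECT "table", unsorted, tbl_rows
FROM svv_table_info
WHERE "table" = 'sales';

-- Re-sort rows and reclaim space in one pass; TO 99 PERCENT tightens
-- the default 95 percent sort threshold
VACUUM FULL sales TO 99 PERCENT;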


Reminder: Cycle Those SQL Server Error Logs

Monica Rathbun has a public service announcement:

I saw this again recently and see it too often in environments, so I wanted to take a second to remind everyone to cycle their error logs on a regular basis. SQL Server keeps error logs, and when you reboot or restart SQL Server services, the logs are cycled and a new one is created. Depending on how many logs you have configured SQL Server to keep, this may include removal of the oldest log as well. Since many of us pride ourselves on keeping our SQL Servers up and running, reboots may be few and far between, so our logs can get quite large.

When they grow out of control, it can take a long time just to open a log to view it. An easy way to keep this from happening is to cycle them routinely. You can automate this by creating a SQL Agent job that cycles the log to a new one on a regular basis, whether monthly, weekly, or even daily.

My preference is to cycle daily with 45 or so logs maintained; that way, if there are service restarts, I still have more than a month of logs.
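
As a sketch, the Agent job step itself is a one-liner, and the retained-log count can be raised through the registry (the value below matches the 45-log preference above):

-- Close the current error log and start a new one
EXEC sp_cycle_errorlog;

-- Optionally cycle the SQL Server Agent log as well
EXEC msdb.dbo.sp_cycle_agent_errorlog;

-- Keep 45 error logs instead of the default 6
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE',
    N'Software\Microsoft\MSSQLServer\MSSQLServer',
    N'NumErrorLogs', REG_DWORD, 45;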


PFS Corruption When Moving From SQL Server 2014

Paul Randal notes a bug in SQL Server 2014:

I’m seeing reports from a few people of DBCC CHECKDB reporting PFS corruption after an upgrade from SQL Server 2014 to SQL Server 2016 or later. The symptoms are that you run DBCC CHECKDB after the upgrade and get output similar to this:

Msg 8948, Level 16, State 6, Line 5
Database error: Page (3:3863) is marked with the wrong type in PFS page (1:1). PFS status 0x40 expected 0x60.
Msg 8948, Level 16, State 6, Line 5
Database error: Page (3:3864) is marked with the wrong type in PFS page (1:1). PFS status 0x40 expected 0x60.
CHECKDB found 2 allocation errors and 0 consistency errors not associated with any single object.
CHECKDB found 2 allocation errors and 0 consistency errors in database 'MyProdDB'.
repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB (MyProdDB).

I’ve discussed with the SQL Server team and this is a known bug in SQL Server 2014.

Read on for the fix and additional good advice.
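
For reference, the kind of post-upgrade check that surfaces these errors looks like this (MyProdDB comes from the output above):

-- Report every error, without the informational chatter
DBCC CHECKDB (N'MyProdDB') WITH NO_INFOMSGS, ALL_ERRORMSGS;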


Writing Audit Logs To Azure Event Hubs

Ronit Reger announces that Azure SQL Database auditing logs can now go to Azure Log Analytics or Azure Event Hubs:

Azure Log Analytics plays a central role in monitoring and management of your Azure environment. It enables collecting telemetry and other data from a variety of sources across Azure, and provides a query language and analytics engine for deep analysis and insights on the operation of applications and resources. For more information on the Log Analytics platform, see What is Azure Log Analytics.

With native support for saving SQL audit logs directly to Log Analytics, log data from all of your database resources can be gathered and stored in a single central location. The logs can now be analyzed using the rich analysis tools provided by the platform, which can provide deeper visibility and advanced cross-resource analytics.

In addition, SQL Server audit logs (from on-premises SQL Servers or SQL Servers on a VM) can also be collected in Log Analytics via OMS agent integration, as described in this article. Thus, you can manage and analyze all of your database audit logs, whether from the cloud or on-premises, in a single central location using the power of Azure Log Analytics.

This looks useful.
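
For the on-premises side mentioned above, here is a minimal sketch of a server audit whose output an agent could then collect (the audit names and file path are illustrative):

-- Write audit events to a local folder
CREATE SERVER AUDIT AuditToFile
    TO FILE (FILEPATH = N'C:\SQLAudit\');

-- Capture failed logins as a starting point
CREATE SERVER AUDIT SPECIFICATION FailedLoginAudit
    FOR SERVER AUDIT AuditToFile
    ADD (FAILED_LOGIN_GROUP);

ALTER SERVER AUDIT AuditToFile WITH (STATE = ON);
ALTER SERVER AUDIT SPECIFICATION FailedLoginAudit WITH (STATE = ON);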


Switching Object Schemas

Steve Jones shows us a quick way of switching a database object’s schema:

I haven’t had the need to move an object from one schema to another in years. Really since SQL Server 2000. I wrote about deleting a user that owns a schema recently, but that’s often a first step. The next thing I might need to do is actually move objects from that schema to a new one.

I actually ran across this command when I was looking at how to move a schema to a new user. There’s a parameter for ALTER SCHEMA that will move objects.

This doesn’t pop up too often for me at least, but it’s good to remember if you’re using schemas as a method of categorizing data.
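
The command itself, with illustrative names:

-- Move dbo.Orders into the Sales schema
ALTER SCHEMA Sales TRANSFER dbo.Orders;

One thing worth knowing: permissions granted explicitly on the object are dropped when it moves, so plan to re-grant them.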


Thoughts On Max Memory Settings

Monica Rathbun shares some thoughts on SQL Server’s max memory settings:

Quite often I see database administrators set SQL Server max server memory thinking everything related to SQL Server uses this shared memory pool. This is a mistake. There are many things that rely on memory that are not part of SQL Server. Best practices state that you should leave memory allotted for the operating system. However, did you know that if you are running services like SSIS, SSAS, or SSRS on the same server as the database engine, they do not use the memory you have allocated for SQL Server? If the max memory setting is not configured correctly, these other services could incur memory pressure. While the memory consumed by SSAS and SSRS can be configured, SSIS can be a bit more challenging. Beyond this, there are even scenarios where the memory SQL Server consumes can exceed the setting, like with CLR in versions earlier than 2012 and some other bugs in SQL Server.

As a consultant, I have seen memory pressure and memory exhausted too many times to count because the DBA was unaware of this. I applaud those that take the time to properly configure this setting according to what the database engine requires. Let’s take it a step further and take the time to look at what additional services you are using and allot memory accordingly.

Read on for more.
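
Setting the value is the easy part; choosing it is where the thinking happens. A sketch with an illustrative cap:

-- Cap the database engine at 28 GB, leaving headroom for the OS
-- and any co-hosted services such as SSIS, SSAS, or SSRS
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 28672;
RECONFIGURE;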


Undeleting A Deleted Azure SQL Database

Arun Sirpal shows us the “undoing a big mistake” button:

Okay, honestly, I have done this once. I have deleted an Azure SQL Database and then had to find the quickest way to recover it. The Azure portal is actually pretty good when it comes to deleting resources; for example, it will usually ask you to re-type the name of the resource to confirm deletion, so you can tell what a bad mistake I made.

Let’s look at how to delete a database then recover it.

I’m curious how long it stays there before dropping off into the abyss.


Sending SQL Server Notifications To Slack

Alessandro Alpi shows how to integrate SQL Server notifications with Slack:

Now, how can we send notifications from SQL Server in an easier way than using custom code or a Slack incoming webhook? Is there any integration or a Slack app?  Yes. And guess what? I think you’ll like it because you don’t need to write a single line of code, and you don’t need to choose between CLR, PowerShell or any other language. It’s ironic, but the integration is called “Email”.

Speaking of CLR, I’ve had success with the SqlServerSlackAPI in the past.
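
If you go the email route instead, it amounts to pointing Database Mail (or a SQL Agent operator) at the address Slack's Email integration provides. A sketch, where the profile name and recipient address are placeholders:

-- The Slack Email integration issues a channel-specific address;
-- the address and profile below are placeholders
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = N'DBA Mail',
    @recipients   = N'channel-token@myworkspace.slack.com',
    @subject      = N'Job failure on PROD01',
    @body         = N'Nightly ETL job failed; see Agent job history.';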


Running SQL Server On Cluster Shared Volumes

Sreekanth Bandarla continues a series on cluster shared volumes:

In the previous part of this series, we saw what a cluster shared volume is, along with the advantages and other considerations to keep in mind when deploying CSVs for SQL Server workloads. In this article, I will walk through the actual installation of a failover cluster instance leveraging CSVs.

To begin with, I will walk you through my cluster setup from a 20,000-foot view. I created two brand-new VMs running Windows Server 2012 R2 and renamed them accordingly. Nothing special with respect to disk drives at this point, just basic VMs with a system drive (C$).

The rest of the story is over at SQLShack.


New(ish) VLF Status: 4

Paul Randal points out a new VLF status which can appear if you’re using an Availability Group:

At least since I started working on the SQL Server team (just after 7.0 shipped), there have only been two VLF status codes:

  • 0 = the VLF is not active (i.e. it can be (re)activated and overwritten)
  • (1 = not used and no-one seems to remember what it used to mean)
  • 2 = the VLF is active because at least one log record in it is ‘required’ by SQL Server for some reason (e.g. hasn’t been backed up by a log backup or scanned by replication)

A few weeks ago I learned about a new VLF status code that was added back in SQL Server 2012 but hasn’t come to light until recently (at least I’ve never encountered it in the wild). I went back-and-forth with a friend from Microsoft (Sean Gallardy, a PFE and MCM down in Tampa) who was able to dig around in the code to figure out when it’s used.

Read on to uncover the mysteries of the VLF status of 4.
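
If you want to check your own VLF statuses, the column shows up in both the classic command and its DMV replacement (the DMV arrived in SQL Server 2016 SP2):

-- Classic, for the current database; watch the Status column
DBCC LOGINFO;

-- DMV equivalent; vlf_status is the column of interest
SELECT vlf_sequence_number, vlf_size_mb, vlf_active, vlf_status
FROM sys.dm_db_log_info(DB_ID());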
