Press "Enter" to skip to content


Automating Azure SQL Database Scaling

Arun Sirpal shows how to use Azure Logic Apps to auto-scale Azure SQL Database:

When I was presenting my Azure SQL Database session at DataRelay (formerly SQLRelay), I was asked (over coffee) about auto-scaling capabilities. Quite simply, there is nothing out of the box to achieve this. Auto-scaling would be good where you need a burst to meet higher workload demand for a limited duration, you know, something like an “end of the day, Friday night sale” for your database.

Classically you would probably go down the PowerShell route via a runbook, but I am different.

In this case, the automation is timer-based rather than load-based.
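Whatever fires the schedule (a Logic App here, a runbook elsewhere), the scaling action itself reduces to one T-SQL statement. A minimal sketch, with the database name and service objectives as illustrative placeholders; the statement returns immediately and the scale completes asynchronously (you can watch sys.dm_operation_status in master):

-- Scale up ahead of the Friday-night sale (names and tiers are illustrative)
ALTER DATABASE [SalesDb] MODIFY (SERVICE_OBJECTIVE = 'P2');

-- Scale back down afterwards
ALTER DATABASE [SalesDb] MODIFY (SERVICE_OBJECTIVE = 'S2');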


SQL Undercover Inspector 1.2 Released

Adrian Buckman announces a new version of SQL Undercover Inspector:

#21 Added AG Databases check to warn on databases not joined to an AG 

If you are using Availability Groups and you have this new module enabled, the Inspector will assume that all databases should be joined to an AG. Every database name for the instance is inserted into a new table called [Inspector].[AGDatabases] with the Is_AG flag set to 1; if a database is joined to an AG, the Is_AGJoined column is set to 1 and no advisory will be shown on the report. If a database is marked as Is_AG, it will continue to warn when not joined to an AG on the Inspector reports; if you wish to exclude a given database from the advisory condition, simply update Is_AG to 0.

Instances that are not HADR-enabled with at least one AG will automatically have Is_AG set to 0 and will be excluded from the checking even if the module is enabled.
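If you do need to exclude a database, it's a one-row update against the table Adrian describes. A quick sketch, assuming the Inspector schema names its database-name column Databasename (check your installation):

UPDATE [Inspector].[AGDatabases]
SET Is_AG = 0
WHERE Databasename = N'ReportingDb';  -- database name is illustrative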

Click through for a big list of changes.


Uninstalling A SQL Server Feature

Marek Masko shows how to uninstall a particular SQL Server feature:

If you wonder whether it is possible to uninstall a particular SQL Server feature instead of uninstalling the complete instance, then I have good news. It is possible, and it is very easy.

Let’s assume you want to uninstall PolyBase because you want to install it with a newer version of SQL Server (PolyBase can be installed on only one SQL Server instance per computer).
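For reference, removing a single feature runs back through SQL Server setup from an elevated command prompt, along these lines (instance name illustrative; the same dialog is also reachable via Programs and Features):

Setup.exe /Action=Uninstall /FEATURES=PolyBase /INSTANCENAME=MSSQLSERVER /QUIET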

It brings me mild sadness that the feature Marek chose for this demo was PolyBase, but slightly greater joy for the reason.


Missing Backup Directory When Trying To Upgrade SQL Server

Lori Brown walks us through the solution to an error she experienced:

I was recently performing an in-place upgrade of SQL 2008 R2 to SQL 2014 on one of my client’s servers. I have done a ton of successful and uneventful in-place upgrades and was surprised when the upgrade failed with the error message: “Failed to create a new folder ‘X:\SQLBackups’. The specified path is invalid (for example, it is on an unmapped drive).” Over the years, this client had changed from using a local drive for all backups to having backups sent to a network share, so the X drive really no longer existed.

Read on for the solution.
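Without giving away Lori's exact steps: the usual suspect in this scenario is the instance's default backup directory still pointing at the retired drive. One way to inspect and repoint it from T-SQL is via the undocumented xp_instance_reg procedures (the replacement path is illustrative):

-- See where the default backup directory currently points
DECLARE @BackupDir nvarchar(512);
EXEC master.dbo.xp_instance_regread
    N'HKEY_LOCAL_MACHINE',
    N'Software\Microsoft\MSSQLServer\MSSQLServer',
    N'BackupDirectory',
    @BackupDir OUTPUT;
SELECT @BackupDir AS CurrentBackupDirectory;

-- Repoint it at a path that actually exists
EXEC master.dbo.xp_instance_regwrite
    N'HKEY_LOCAL_MACHINE',
    N'Software\Microsoft\MSSQLServer\MSSQLServer',
    N'BackupDirectory',
    REG_SZ,
    N'D:\SQLBackups';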


Unused Indexes Might Not Be

Tara Kizer has a warning for people eager to drop “unused” indexes:

About 10 years ago, I decided to drop an unused index on a table that had 2 billion rows. The database was around 7TB in size. We were having storage and performance issues. I thought I could help the system out if I were to drop the index.

Four days after I dropped the index, I got a call from our NOC that CPU utilization had been pegged at 100% for an hour, so they were reaching out to the on-call DBA to check it out. I logged in and saw a query using A LOT of CPU. Its WHERE clause matched the index I had dropped. The execution plan showed it was scanning the table.

It turned out that I only had 2 weeks of uptime, which didn’t include the 1st of the month. The query that was causing the high CPU utilization was a report that ran on the 1st of each month.

Tara has also provided us with a script to track these details over time, so check that out.
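The moral, in DMV terms: sys.dm_db_index_usage_stats only reflects activity since the last instance restart, so check uptime before you trust an index's "unused" status. This isn't Tara's script, just a minimal sketch of that check:

SELECT i.name AS index_name,
       us.user_seeks, us.user_scans, us.user_lookups,
       si.sqlserver_start_time  -- the stats cover nothing before this point
FROM sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS us
       ON us.database_id = DB_ID()
      AND us.object_id  = i.object_id
      AND us.index_id   = i.index_id
CROSS JOIN sys.dm_os_sys_info AS si
WHERE i.object_id = OBJECT_ID(N'dbo.BigTable');  -- table name is illustrative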


Redshift Architecture Performance Tips

John Ryan has a few hints to help us build speedy Redshift clusters:

The Need to Vacuum

As Redshift does not reclaim free space automatically, update and delete operations can frequently lead to table growth. Equally, it's important that, as new entries are added, the data is maintained in sorted sequence.

The VACUUM command is used to re-sequence data, and reclaim disk space as a result of DELETE and UPDATE operations. Although it won’t block other processes, it can be a resource-intensive operation, especially for data stored using interleaved sort keys.

It should be run periodically to ensure consistent performance and to reduce disk usage.
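In practice, that periodic run usually means checking svv_table_info for tables that have drifted and vacuuming those (table name illustrative):

-- Tables with the highest share of unsorted rows
SELECT "table", unsorted, tbl_rows
FROM svv_table_info
ORDER BY unsorted DESC;

-- Re-sort rows and reclaim deleted space in one pass
VACUUM FULL sales;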

Some of this is good Postgres advice; some of it is good MPP advice (and serves well, for example, when dealing with Azure SQL Data Warehouse); the rest is Redshift-specific.


Reminder: Cycle Those SQL Server Error Logs

Monica Rathbun has a public service announcement:

I saw this again recently, and I see it too often in environments, so I wanted to take a second to remind everyone to cycle their error logs on a regular basis. SQL Server keeps error logs, and when you reboot or restart SQL Server services, the logs are cycled and a new one is created. Depending on how many logs you have configured SQL Server to keep, this may include removal of the oldest log as well. Since many of us pride ourselves on keeping our SQL Servers up and running, reboots may be few and far between, and thus our logs grow large.

When they grow out of control, it can take a long time just to open and view them. An easy way to keep this from happening is to cycle them routinely. You can easily automate this by creating a SQL Agent job to cycle the log to a new one on a regular basis, whether monthly, weekly, or even daily.

My preference is to cycle daily with 45 or so logs maintained; that way, if there are service restarts, I still have more than a month of logs.
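Both halves of that setup are scriptable: the cycle is a one-liner for the Agent job step, and the retained-log count is a registry value (set here with the undocumented xp_instance_regwrite; 45 to match the preference above):

-- Close the current error log and start a fresh one
EXEC sp_cycle_errorlog;

-- Keep 45 archived logs before the oldest is deleted (valid range 6-99)
EXEC master.dbo.xp_instance_regwrite
    N'HKEY_LOCAL_MACHINE',
    N'Software\Microsoft\MSSQLServer\MSSQLServer',
    N'NumErrorLogs',
    REG_DWORD,
    45;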


PFS Corruption When Moving From SQL Server 2014

Paul Randal notes a bug in SQL Server 2014:

I’m seeing reports from a few people of DBCC CHECKDB reporting PFS corruption after an upgrade from SQL Server 2014 to SQL Server 2016 or later. The symptoms are that you run DBCC CHECKDB after the upgrade and get output similar to this:

Msg 8948, Level 16, State 6, Line 5
Database error: Page (3:3863) is marked with the wrong type in PFS page (1:1). PFS status 0x40 expected 0x60.
Msg 8948, Level 16, State 6, Line 5
Database error: Page (3:3864) is marked with the wrong type in PFS page (1:1). PFS status 0x40 expected 0x60.
CHECKDB found 2 allocation errors and 0 consistency errors not associated with any single object.
CHECKDB found 2 allocation errors and 0 consistency errors in database 'MyProdDB'.
repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB (MyProdDB).

I’ve discussed it with the SQL Server team, and this is a known bug in SQL Server 2014.

Read on for the fix and additional good advice.
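If you have recently upgraded from 2014 (or are about to), it costs little to find out whether you are affected; a standard full check, using the database name from Paul's sample output:

DBCC CHECKDB (N'MyProdDB') WITH NO_INFOMSGS, ALL_ERRORMSGS;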


Writing Audit Logs To Azure Event Hubs

Ronit Reger announces that Azure SQL Database auditing logs can now go to Azure Log Analytics or Azure Event Hubs:

Azure Log Analytics plays a central role in monitoring and management of your Azure environment. It enables collecting telemetry and other data from a variety of sources across Azure, and provides a query language and analytics engine for deep analysis and insights on the operation of applications and resources. For more information on the Log Analytics platform, see What is Azure Log Analytics.

With native support for saving SQL audit logs directly to Log Analytics, log data from all of your database resources can be gathered and stored in a single central location. The logs can now be analyzed using the rich analysis tools provided by the platform, which can provide deeper visibility and advanced cross-resource analytics.

In addition, SQL Server audit logs (from on-premises SQL Servers or SQL Servers on a VM) can also be collected in Log Analytics via OMS agent integration, as described in this article. Thus, you can manage and analyze all of your database audit logs, whether from the cloud or on-premises, in a single central location using the power of Azure Log Analytics.

This looks useful.


Switching Object Schemas

Steve Jones shows you a quick way of switching a database object’s schema:

I haven’t had the need to move an object from one schema to another in years. Really since SQL Server 2000. I wrote about deleting a user that owns a schema recently, but that’s often a first step. The next thing I might need to do is actually move objects from that schema to a new one.

I actually ran across this command when I was looking at how to move the schema to a new user. There’s actually a parameter for ALTER SCHEMA that will move objects.
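The clause Steve is referring to is TRANSFER. A minimal sketch, with schema and object names as illustrative placeholders:

-- Move a table into the Sales schema
ALTER SCHEMA Sales TRANSFER dbo.Orders;

-- Other securable types work too
ALTER SCHEMA Sales TRANSFER OBJECT::dbo.GetOrderTotals;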

This doesn’t pop up too often for me at least, but it’s good to remember if you’re using schemas as a method of categorizing data.
