
Category: Administration

Startup Stored Procedures

Dave Mason looks at using sp_procoption to execute stored procedures at startup:

If you have more than one task you want to run at startup, you could include code for each task in a single stored procedure. But that’s generally bad coding practice. Go with the modular approach and create a separate stored procedure for each distinct task. With sp_procoption, you can set more than one stored procedure for automatic execution. The MSDN documentation was not clear regarding the order of execution, though. I thought there might be something similar to sp_settriggerorder, but I wasn’t able to find anything like that. I ran a test on SQL 2014 with 3 stored procedures set for automatic execution.

Dave has interesting notes on procedure run order, where these procedures need to live, and even some ideas on what you might put into startup stored procedures.
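For reference, here's a minimal sketch of flagging a procedure for automatic execution with sp_procoption. The StartupLog table and LogStartupTime procedure are made-up names for illustration; as Dave points out, startup procedures have to live in the master database.

USE master;
GO

-- Hypothetical objects for illustration: record when the service started.
CREATE TABLE dbo.StartupLog (StartTime datetime2 NOT NULL);
GO

CREATE PROCEDURE dbo.LogStartupTime
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.StartupLog (StartTime) VALUES (SYSDATETIME());
END;
GO

-- Flag the procedure to run automatically when the database engine starts.
EXEC sp_procoption @ProcName = N'dbo.LogStartupTime',
                   @OptionName = 'startup',
                   @OptionValue = 'on';

-- List all procedures currently flagged for automatic execution.
SELECT name
FROM sys.procedures
WHERE OBJECTPROPERTY(object_id, 'ExecIsStartup') = 1;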


Sensible Auto-Growth Settings

Ajay Jagannathan notes that SQL Server 2016’s database auto-growth has changed to better default values:

model database: New default data and log file size is 8MB and default auto-growth is 64MB. This ensures that any new database created without explicitly specifying the SIZE/FILEGROWTH parameter will have 8MB initial size for all data and log files and 64MB for auto-growth for both data and log files.

For data files, a 64MB autogrow aligns with one PFS interval (which covers a range of 8088 pages, roughly 64MB). For log files, a 64MB autogrow helps size the initial VLFs correctly so that they can be reclaimed (wrapped around); without that, the log can keep growing.

This is much better than the prior default of 1 MB size and 10% auto-growth.  Percentage growth leads to eventual pain.
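If you're on an older version, or have inherited databases created under the old defaults, a minimal sketch of finding and fixing growth settings by hand might look like this; the SalesDB database and its logical file names are placeholders.

-- Inspect current size and growth settings.
-- growth is in 8KB pages unless is_percent_growth = 1, in which case it is a percentage.
SELECT DB_NAME(database_id) AS database_name,
       name                 AS logical_file_name,
       size * 8 / 1024      AS size_mb,
       growth,
       is_percent_growth
FROM sys.master_files;

-- Switch the data and log files to a fixed 64MB growth increment.
-- "SalesDB", "SalesDB_Data", and "SalesDB_Log" are placeholder names.
ALTER DATABASE SalesDB MODIFY FILE (NAME = N'SalesDB_Data', FILEGROWTH = 64MB);
ALTER DATABASE SalesDB MODIFY FILE (NAME = N'SalesDB_Log',  FILEGROWTH = 64MB);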


SQL Server Event Handling

Dave Mason looks at different levels of event handling within SQL Server:

While event handling for .Net developers is implemented in a unified way, this is not the case for SQL Server. Event handling for SQL Server lacks the “one stop shopping” afforded to .Net developers. *If* we had access to the code base for SQL Server and wanted to handle a specific event, we could add our own code, recompile sqlservr.exe, and be on our way. But since we don’t have this ability, we use SQL Server’s run-time hooks. Consider the following:

  • DDL Triggers: handles Data Definition Language events (synchronously).

  • Event Notifications: handles a wide swath of SQL Server events via Service Broker (asynchronously).

  • SQL Alerts: handles the following events:

    1. Events with a specific error number or severity level that are written to the Windows Event Log.

    2. Events for a specific performance condition.

    3. WMI events.

  • sp_procoption: handles the startup event by specifying a stored procedure to run when the database engine service starts.

  • SQL Agent jobs: handles time-based events defined by user-specified job schedules (e.g., daily or hourly).

This sounds like the beginning of a new series.
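As a small taste of the first mechanism on that list, here's a minimal DDL trigger sketch that synchronously logs DROP TABLE events in the current database; the trigger and table names are illustrative.

-- Hypothetical audit table for the captured event details.
CREATE TABLE dbo.DdlEventLog
(
    EventTime datetime2 NOT NULL DEFAULT SYSDATETIME(),
    EventData xml       NOT NULL
);
GO

-- Database-scoped DDL trigger: fires synchronously whenever a table is dropped.
CREATE TRIGGER trg_LogDropTable
ON DATABASE
FOR DROP_TABLE
AS
BEGIN
    SET NOCOUNT ON;
    -- EVENTDATA() returns the details of the DDL event as XML.
    INSERT INTO dbo.DdlEventLog (EventData) VALUES (EVENTDATA());
END;
GO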


Spark Metrics

Swaroop Ramachandra looks at some key metrics for Spark administration:

Once you have identified and broken down the Spark and associated infrastructure and application components you want to monitor, you need to understand the metrics that really matter: the ones that affect the performance of your application as well as your infrastructure. Let’s dig deeper into some of the things you should care about monitoring.

  1. In Spark, it is well known that memory-related issues are typical if you haven’t paid attention to memory usage when building your application. Make sure you track garbage collection and memory across the cluster on each component, specifically the executors and the driver. Garbage collection stalls or abnormal patterns can increase back pressure.

There are a few metrics of note here.  Check it out.


Preparing For Disaster Recovery

Kendra Little has a 30-minute video and explanation for how to prepare for a failover event:

The fact that you’re thinking about this is great!

You’re right, there are two major types of failovers that you have to think about:

  • Planned failover, when you can get to the original production system (at least for a short time)
  • Unplanned failover, when you cannot get to it

Even when you’re doing a planned failover, you don’t have time to go in and script out settings and jobs and logins and all that stuff.

Timing is of the essence, so you need minimal manual actions.

And you really should have documentation so that whoever is on call can perform the failover, even if they aren’t you.

The short answer is, test, test, test.  Test where it can’t hurt, and then test where it can.  But do read/watch the whole thing.


Monitoring Apache Spark

Swaroop Ramachandra has started a series on monitoring Apache Spark:

Spark provides metrics for each of the above components through different endpoints. For example, if you want to look at the Spark driver details, you need to know the exact URL, which keeps changing over time–Spark keeps you guessing on the URL. The typical problem is when you start your driver in cluster mode. How do you detect on which worker node the driver was started? Once there, how do you identify the port on which the Spark driver exposes its UI? This is a common, annoying issue for most developers and DevOps professionals who manage Spark clusters. In fact, most end up running their driver in client mode as a workaround, so they have a fixed URL endpoint to look at. However, that comes at the cost of losing failover protection for the driver. Your monitoring solution should automatically figure out where the driver for your application is running, find the port for the application, and configure itself to start collecting metrics.

For a dynamic infrastructure like Spark, your cluster can get resized on the fly. You must ensure your newly spawned components (workers, executors) are automatically configured for monitoring. There is no room for manual intervention here. You shouldn’t miss out on monitoring new processes that show up on the cluster. On the other hand, you shouldn’t be generating false alerts when executors get moved around. A general monitoring solution will typically start alerting you if an executor gets killed and starts up on a new worker–this is because generic monitoring solutions just monitor your port to check if it’s up or down. With a real-time streaming system like Spark, the core idea is that things can move around all the time.

Spark does add a bit of complexity to monitoring, but there are solutions in place.  Read the whole thing.


Data Protection Manager

Tom Roush discusses gotchas around Microsoft’s Data Protection Manager:

You’ve got DPM installed, and for the most part, configured.  It’s working, but you have transaction log drives filling up on some of your servers, and it’s not really clear why.

Wanna know why?

Here’s the answer:

It’s because the UI is very unclear, because the documentation is unclear (there was a hint of it on page 83), and because the things that would be obvious to a DBA simply aren’t mentioned.

Tom has a very detailed post on the topic, making it a must-read if you use this tool.


Database Snapshots

Kenneth Fisher discusses database snapshots:

Here is where it starts getting interesting. A snapshot initially takes up little to no space. As changes are made to the source database, the snapshot grows in size. In fact, the snapshot is the size of all of the pages changed in the source database since the creation of the snapshot. Basically, as a page is changed in the source database, a copy of the original page is made and stored in the snapshot, but only the first time. (Note: The files used to store these pages are called sparse files.) This means that if you change the same page over and over again, it will only be written to the snapshot once. It then logically follows that the largest a snapshot can get is the size of the source database at the time the snapshot was taken. Since most of the time we change a very small portion of the database at any given point, snapshots tend to be much smaller than the source database. In fact, you could load millions of rows into the source database (assuming they are mostly/all in new pages) and it will have little to no effect on the size of the snapshot.

My favorite use of database snapshots was letting developers test their changes in QA and then revert to the pre-snapshot state.  That way, they could preserve data for future runs.
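That workflow looks roughly like the following; the QADB database name, logical file name, and sparse file path are placeholders.

-- Create a snapshot of the QA database before the test run.
-- The logical NAME must match the source database's data file.
CREATE DATABASE QADB_Snapshot
ON (NAME = N'QADB_Data', FILENAME = N'D:\Snapshots\QADB_Data.ss')
AS SNAPSHOT OF QADB;
GO

-- ... run the tests, which modify QADB ...

-- Revert the source database to its state as of snapshot creation.
-- (This requires that QADB_Snapshot is the only snapshot on QADB.)
RESTORE DATABASE QADB FROM DATABASE_SNAPSHOT = 'QADB_Snapshot';
GO

-- Drop the snapshot once it is no longer needed.
DROP DATABASE QADB_Snapshot;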


Indirect Checkpoint

Kendra Little notes that indirect checkpointing is now the default in SQL Server 2016:

SQL Server 2016 introduces big new features, but it also includes small improvements. Many of these features are described in the “It Just Runs Faster” series of blog posts by Bob Ward and Bob Dorr.

One article in this series explained that new databases created in SQL Server 2016 will use “Indirect Checkpoint” by default. Indirect checkpoint was added in SQL Server 2012, but has not previously been enabled by default for new databases. The article emphasizes this point.

Check out the comments as well.
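For databases created on (or restored from) earlier versions, here is a quick sketch of checking for and enabling indirect checkpoint yourself; the SalesDB name is a placeholder, and 60 seconds matches the target recovery time SQL Server 2016 assigns to new databases.

-- target_recovery_time_in_seconds = 0 means the database still uses legacy automatic checkpoints.
SELECT name, target_recovery_time_in_seconds
FROM sys.databases;

-- Enable indirect checkpoint with a 60-second target. "SalesDB" is a placeholder name.
ALTER DATABASE SalesDB SET TARGET_RECOVERY_TIME = 60 SECONDS;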
