Author: Kevin Feasel

Back Up Those System Databases

Arun Sirpal had gremlins infest his server, causing a service pack installation to go sideways:

I try to start the service but it fails. (I was desperate ok!)

Script level upgrade for database ‘master’ failed because upgrade step ‘msdb110_upgrade.sql’ encountered error 5846, state 1, severity 16. This is a serious error condition which might interfere with regular operation and the database will be taken offline. If the error happened during upgrade of the ‘master’ database, it will prevent the entire SQL Server instance from starting. Examine the previous errorlog entries for errors, take the appropriate corrective actions and re-start the database so that the script upgrade steps run to completion.

Cannot recover the master database. SQL Server is unable to run. Restore master from a full backup, repair it, or rebuild it. For more information about how to rebuild the master database, see SQL Server Books Online.

To be honest, rebuilding master would be my last option.

Read the whole thing, and then double-check that you have good copies of master & msdb somewhere.
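
For reference, a minimal sketch of what “good copies” might look like (the backup path is illustrative; point it somewhere that survives the server itself):

    -- Back up the system databases; master and msdb are the ones you will
    -- miss most in a disaster, and model is cheap insurance.
    BACKUP DATABASE master TO DISK = N'X:\Backups\master.bak' WITH INIT, CHECKSUM;
    BACKUP DATABASE msdb   TO DISK = N'X:\Backups\msdb.bak'   WITH INIT, CHECKSUM;
    BACKUP DATABASE model  TO DISK = N'X:\Backups\model.bak'  WITH INIT, CHECKSUM;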

Instantiating Cortana Intelligence Gallery Solutions

Melissa Coates has a step-by-step guide showing how to install a solution from the Cortana Intelligence Gallery:

We had no options along the way for selecting names for resources, so we have a lot of auto-generated suffixes for our resource names. This is ok for purely learning scenarios, but not my preference if we’re starting a true project with a pre-configured solution. Following an existing naming convention is impossible with solutions (at this point anyway). A wish list item I have is for the solution deployment UI to display the proposed names for each resource and let us alter if desired before the provisioning begins.

The deployment also doesn’t prompt for which subscription to deploy to (if you have multiple subscriptions like I do). The deployment did go to the subscription I wanted; however, it would be really nice to have that as a selection, to make sure it’s not just luck.

It sounds like there are some undesirable defaults, but at least it does appear to be very easy to do.

R 3.4.0 Now Available

A new version of R is now available:

  • Accumulating vectors in a loop is faster – Assigning to an element of a vector beyond the current length now over-allocates by a small fraction. The new vector is marked internally as growable, and the true length of the new vector is stored in the truelength field. This makes building up a vector result by assigning to the next element beyond the current length more efficient, though pre-allocating is still preferred. The implementation is subject to change and not intended to be used in packages at this time.

There’s a big list of changes, so check it out and think about upgrading.

Building An Online R Training Environment

Steph Locke has shared how she put together a training lab for her R workshop:

This starts with the tidyverse & RStudio, then:

  • adds the requisite programs for my package’s dependencies, plus whois so that mkpasswd works

  • installs packages from GitHub, notably the one designed to facilitate the day of text analysis

  • gets the shell script and the csv from the gist

  • makes the shell script executable and then runs it

I loved the business card touch. It’s easy enough to print out little strips of paper with the username and password, but this has a bit more staying power.

Getting Off Of Profiler, A Twelve-Step Program

Wayne Sheffield has a blast from the past, repeating an old T-SQL Tuesday to show how to use Extended Events:

Now that you have this XE session scripted out, it can be easily installed on multiple servers. If you encounter a deadlock problem, you can easily start the XE session and let it run to trap your deadlocks. They will be persisted to a file dedicated for the deadlocks. You can use my Deadlock Shredder script at http://bit.ly/ShredDL to read the deadlocks from the file and shred the deadlock XML into a tabular output.

Note that the default system_health XE session also captures deadlocks. I like to have a dedicated session for just deadlocks. As lightweight as XE is, sometimes it may benefit a server to turn off the system_health session. Additionally, Jonathan Kehayias has a script that will take a running trace and completely script out an XE session for it. This script can be found at https://www.sqlskills.com/blogs/jonathan/converting-sql-trace-to-extended-events-in-sql-server-2012/. Even though this script is available, I like to figure things out for myself so that I can learn what is actually going on.

Extended Events are extremely useful for administrators, typically with a fraction of the overhead cost of server-side traces, to say nothing of Profiler.
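
A dedicated deadlock session along the lines Wayne describes might look something like this sketch (the session name and target path are illustrative; his post has the full script):

    -- Capture the deadlock graph event to a dedicated file target.
    CREATE EVENT SESSION CaptureDeadlocks ON SERVER
    ADD EVENT sqlserver.xml_deadlock_report
    ADD TARGET package0.event_file
        (SET filename = N'X:\XE\Deadlocks.xel', max_file_size = 10)
    WITH (STARTUP_STATE = ON);  -- start automatically after instance restarts

    ALTER EVENT SESSION CaptureDeadlocks ON SERVER STATE = START;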

Secret Mysteries Of SQL Server 2017

Erik Darling goes spelunking for new and modified internal objects in SQL Server 2017:

[Screenshot: the new columns in sys.dm_db_missing_index_group_stats, with caption “What have we here?”]

Of particular interest are last_sql_handle, query_hash, and query_plan_hash. It appears that we’ll finally be able to easily tie missing index requests to their queries, without doing a lot of painful XML processing. I had planned on adding something like this, but couldn’t find a good fit between 1) adding XML processing to sp_BlitzIndex, or 2) adding more DMV queries and rather unpleasant XML processing to sp_BlitzCache. This will make implementing it far easier, assuming it works the way it looks like it will work.

Erik has three examples of interesting additions in CTP 2.0.
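
If those columns behave the way they appear to, tying a missing index request back to its source query could be as simple as the following speculative sketch (written against CTP 2.0 behavior as Erik describes it, not tested code from his post):

    -- Speculative: join the new query_plan_hash column on the missing-index
    -- stats DMV to the plan cache to recover the query text behind the request.
    SELECT mid.statement,
           migs.avg_user_impact,
           st.text AS query_text
    FROM sys.dm_db_missing_index_group_stats AS migs
    JOIN sys.dm_db_missing_index_groups AS mig
        ON mig.index_group_handle = migs.group_handle
    JOIN sys.dm_db_missing_index_details AS mid
        ON mid.index_handle = mig.index_handle
    JOIN sys.dm_exec_query_stats AS qs
        ON qs.query_plan_hash = migs.query_plan_hash
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st;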

Diagnosing Database Restore Wait Times

Bob Ward notes that the “100 percent processed” message doesn’t mean everything is quite finished yet in a database restoration:

Notice the “100 percent…” message has detail about “bytes processed”. Since my data is around 13MB, this tells me that the progress indicators are all about the data transfer step of RESTORE. Notice the time gap in the messages between “Waiting for Log zeroing…” and “Log Zeroing is complete”. That gap in time is around 2 minutes, exactly the time it took between the 100% Complete message in the SSMS window and the final restore message.

From this evidence, I can conclude that a transaction log file that takes a long time to initialize is one possible cause of the behavior where the 100% Complete message appears quickly but the overall RESTORE takes much longer to finish.

There’s a lot worth reading packed into this post, as you’d expect from Bob. Read the whole thing.
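
If you want to reproduce the observation on a test instance, the zeroing messages can be surfaced with trace flags 3004 and 3605, which echo backup/restore internals to the error log (the database name and paths below are made up):

    -- Test instances only: TF 3004 emits backup/restore internal messages,
    -- including log zeroing; TF 3605 routes that output to the error log.
    DBCC TRACEON (3004, 3605, -1);

    RESTORE DATABASE Demo FROM DISK = N'X:\Backups\Demo.bak' WITH REPLACE;

    EXEC sys.xp_readerrorlog 0, 1, N'zeroing';  -- locate the zeroing entries

    DBCC TRACEOFF (3004, 3605, -1);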

Fitness In Modeling

Leila Etaati notes the Scylla and Charybdis of models:

However, in most machine learning experiences, we will face two risks: overfitting and underfitting. I will explain these two concepts via an example below. Imagine that we have collected information about the number of coffees purchased in a café from 8 AM to 5 PM.

Overfitting tends to be a bigger problem in my experience, but they’re both dangerous.

Dealing With NULL

Jeff Mlakar has a pair of comparisons for NULL handling, with ISNULL vs COALESCE and CONCAT vs + for concatenation:

We expect this much from IsNull. However, coalesce is a little different: it will take the data type from the first non-null value passed and use that for the table definition. This might not always be what you want, because if you pass bits you might get integers. If you pass an array of integers and floats, you will get numeric. Be aware in case this isn’t what you wanted.

Read the whole thing.
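
To see the typing difference concretely, here is a minimal sketch (the table name is made up) that materializes both results into a table and inspects the column types:

    -- ISNULL takes its type from the first argument; COALESCE derives its
    -- type from all of its arguments, and int outranks bit.
    DECLARE @flag bit = NULL;

    SELECT ISNULL(@flag, 1)   AS isnull_col,    -- stays bit
           COALESCE(@flag, 1) AS coalesce_col   -- becomes int
    INTO dbo.NullTypeDemo;

    EXEC sys.sp_help N'dbo.NullTypeDemo';  -- compare the two column types
    DROP TABLE dbo.NullTypeDemo;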

Resumable Online Index Rebuilds

Niko Neugebauer introduces a new feature in SQL Server 2017:

After multiple executions, the first process (Resumable Online Index Rebuild) took 65.8 seconds on average, while the second one (a simple online rebuild) took only 60.8 seconds, an overhead of about 8% for the resumable version. I can’t say whether that looks acceptable to you or not, but for me this is something I will definitely consider worthwhile for the cases where the resumable process is needed.

I decided to run a test on a much bigger table, lineitem, which for a 10GB TPCH database contains 60 million rows. My expectation here was to see whether the percentage would stay the same or jump to a whole new level (please make sure that you execute the following script at least a couple of times, to get the real results and not the results of your disk drive prefetching):

The big table example result was somewhat surprising. Niko is his normal, informative self, so definitely read the whole thing.
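
For reference, the syntax being tested looks roughly like this (index and table names are placeholders rather than Niko’s exact script):

    -- A resumable online rebuild can be paused and resumed rather than
    -- rolling back if it is interrupted (new in SQL Server 2017).
    ALTER INDEX PK_lineitem ON dbo.lineitem
        REBUILD WITH (ONLINE = ON, RESUMABLE = ON, MAX_DURATION = 60 MINUTES);

    -- Pause mid-rebuild, then pick up where it left off:
    ALTER INDEX PK_lineitem ON dbo.lineitem PAUSE;
    ALTER INDEX PK_lineitem ON dbo.lineitem RESUME;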
