Press "Enter" to skip to content

A Wish For Shutdown Procedures

Dave Mason wants shutdown procedures analogous to the startup procedures we can already have:

If ‘shutdown’ was valid for the @OptionName parameter, I know exactly what I’d do: configure sp_procoption to execute a stored proc that sends an email alert. It would be great to know when a SQL instance goes down, wouldn’t it? Looking at the log file, we can see entries indicating the instance was shut down:

Dave does mention potential failure scenarios, but I agree:  it’d be nice to have the ability to run procedures at shutdown to perform certain actions.
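
Just to make the wish concrete, here's a rough sketch of what that could look like if a 'shutdown' option existed. The procedure name and Database Mail profile below are made up, and the final EXEC uses an @OptionName value that sp_procoption does not actually accept today:

USE master;
GO
-- Hypothetical alert procedure; assumes Database Mail is configured
-- and that a profile named 'DBA Alerts' exists.
CREATE PROCEDURE dbo.AlertOnShutdown
AS
BEGIN
    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = 'DBA Alerts',
        @recipients = 'dba-team@example.com',
        @subject = 'SQL Server instance shutting down',
        @body = 'Shutdown detected on this instance.';
END;
GO
-- The wish: 'shutdown' is not a valid @OptionName value today; only 'startup' is.
EXEC sp_procoption
    @ProcName = N'dbo.AlertOnShutdown',
    @OptionName = 'shutdown',   -- hypothetical
    @OptionValue = 'on';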

DBCC CLONEDATABASE

Kennie Pontoppidan talks about a new DBCC command:

This command enables you to take a copy of the schema and the statistics for an existing database into a new database. According to the documentation, DBCC CLONEDATABASE:

  • Creates a new destination database that uses the same file layout as the source but with default file sizes as the model database.

  • Creates an internal snapshot of the source database.

  • Copies the system metadata from the source to the destination database.

  • Copies all schema for all objects from the source to the destination database.

  • Copies statistics for all indexes from the source to the destination database.

I knew there were ways of scripting out statistics, but DBCC CLONEDATABASE looks like a new and interesting beast.
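
For reference, the syntax is minimal; the database names here are just placeholders:

-- Copy schema and statistics (but no data) into a new, read-only clone.
DBCC CLONEDATABASE (AdventureWorks2014, AdventureWorks2014_Clone);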

Infrastructure Error

Suresh Kandoth explains the various causes of “Login-based server access validation failed with an infrastructure error” error messages:

The important piece of information from that message is the part about server access validation failed. So let’s examine that carefully. During the login process, the database engine has to perform several checks regarding the login and its various attributes before letting the application connect to the SQL Server instance. Out of that big list, related to this error condition, there are two checks to find out if this login is authorized to access this server instance. There are two permissions that come into play while performing these authorization checks:

– Does the login have the SERVER class permission named CONNECT SQL for this server instance?

– Does the login have the ENDPOINT class permission named CONNECT for the specific endpoint on which the client application established the connection?

Suresh lays out a number of scenarios and explains why they could cause this error to occur.
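
If you want to check those two permissions for a particular login yourself, a quick sketch (the login name is a placeholder) might look like this:

-- Server-level and endpoint-level permissions held by a specific login.
SELECT pr.name, pe.class_desc, pe.permission_name, pe.state_desc
FROM sys.server_principals AS pr
JOIN sys.server_permissions AS pe
    ON pe.grantee_principal_id = pr.principal_id
WHERE pr.name = N'MyDomain\AppServiceAccount';

-- The two grants the error message cares about:
GRANT CONNECT SQL TO [MyDomain\AppServiceAccount];
GRANT CONNECT ON ENDPOINT::[TSQL Default TCP] TO [MyDomain\AppServiceAccount];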

TempDB Configuration

Sander Stad looks at the new TempDB defaults in 2016:

The initial size is set to 8 MB. I would never create TempDB data files with a size of 8 MB, which in my opinion is really small for a data file, even for small systems.

In most cases I would look at the size of the TempDB disk (because you create a separate disk for TempDB, right…) and fill up the entire disk with data files. In this case I would create 4 x 5 GB files to leave some space for the log file.

To properly set the initial size of TempDB, you still need to go through some checks to see how much your system might need.

Read the whole thing.
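
As a sketch of that 4 x 5 GB layout on an instance that already has the 2016 default of four data files (logical file names assumed to be the defaults):

-- Resize the four default tempdb data files to 5 GB each,
-- leaving the rest of the dedicated drive for the log file.
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 5GB);
ALTER DATABASE tempdb MODIFY FILE (NAME = temp2, SIZE = 5GB);
ALTER DATABASE tempdb MODIFY FILE (NAME = temp3, SIZE = 5GB);
ALTER DATABASE tempdb MODIFY FILE (NAME = temp4, SIZE = 5GB);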

Buffer Pools And Availability Groups

Joey D’Antoni answers the question: what happens to pages in the buffer pool when you have an AG failover?

Recently at SQL Saturday Philadelphia, we started discussing failover as it relates to mirroring and Always On Availability Groups. Specifically, we were wondering what would happen if you had a relatively busy readable secondary replica (which would have a lot of pages in the buffer pool on the secondary instance) and if those pages would be flushed from cache or anything like that. So I reached out to the product group and Kevin Farlee from Microsoft was extremely helpful.

The answer is not what I was expecting.
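
If you want to see for yourself what a failover does to a replica's buffer pool, a before-and-after look at sys.dm_os_buffer_descriptors (which can take a while on instances with a lot of memory) does the trick:

-- Buffer pool contents by database; run on the secondary before and after failover.
SELECT
    DB_NAME(database_id) AS DatabaseName,
    COUNT(*) * 8 / 1024 AS BufferPoolMB
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY BufferPoolMB DESC;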

Too Much NUMA

Denny Cherry discusses the performance downsides of having too many NUMA nodes on a server:

At one client I was working with in 2015 they had a server which was configured very similarly to what I’ve described above. They had 32 cores, with 11 NUMA nodes. One NUMA node had 4 cores, the next 9 had three cores each, while the 11th NUMA node had one core in it.

When monitoring the performance on the server what we saw was that three cores were running at 100% (or close to it) while the other cores were running at about 4%.

The reason for this is that SQL Server is NUMA-aware, and it was trying to get the best performance possible out of the configuration that it had access to.

Moderation in everything.  Also, it’s important to plan growth and check every once in a while for oddities like this.
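
To check for a lopsided layout like the one Denny describes, a quick look at scheduler counts per NUMA node works; this is just a sketch:

-- Online schedulers per NUMA node; wildly uneven counts deserve a closer look.
SELECT
    parent_node_id AS NumaNode,
    COUNT(*) AS OnlineSchedulers
FROM sys.dm_os_schedulers
WHERE [status] = N'VISIBLE ONLINE'
GROUP BY parent_node_id
ORDER BY parent_node_id;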

Moving SSRS Reports

Andy Mallon fills a need in the community:

We’re just interacting with the SSRS Web Service. I’m using PowerShell, but you could also write a little .NET app to do something similar.

I’m not using any magical SSRS-specific cmdlets. I’m using PowerShell to interact with a web service… that web service just happens to be SSRS.

If you’re still trying to get along with RSScripter (if you can even find a copy any longer), Andy’s code might help you out.

Polybase DMVs

I look at the DMVs associated with Polybase and external table creation:

Let’s walk through this one step at a time and understand what the DMV is telling us.  Unfortunately, the DMV documentation is a little sparse, so some of this is guesswork on my part.

  1. A RandomIDOperation appears to create a temporary table.  In this case, the table (whose name is randomly generated) is named TEMP_ID_53.  I’m not sure where that name comes from; the session I ran this from was 54, so it wasn’t a session ID.

  2. After the table gets created, each Compute node gets told to create a table called TMP_ID_53 in tempdb whose structure matches our external table’s structure.  One thing you can’t see from the screenshot is that this table is created with DATA_COMPRESSION = PAGE.  I have to wonder if that’d be the same if my Compute node were on Standard edition.

  3. We add an extended property on the table, flagging it as IS_EXTERNAL_STREAMING_TABLE.

  4. We then update the statistics on that temp table based on expected values.  629 rows are expected here.

  5. Then, we create the dest stat, meaning that the temp table now has exactly the same statistics as our external table.

  6. The next step is that the Head node begins a MultiStreamOperation, which tells the Compute nodes to begin working.  This operator does not show up in the documentation, but we can see that the elapsed time is 58.8 seconds, which is just about as long as my query took.  My guess is that this is where the Head node passes code to the Compute nodes and tells them what to do.

  7. We have a HadoopRoundRobinOperation on DMS, which stands for “Data Movement Step” according to the location_type documentation.  What’s interesting is that according to the DMV, that operation is still going.  Even after I checked it 40 minutes later, it still claimed to be running.  If you check the full query, it’s basically a SELECT * from our external table.

  8. Next is a StreamingReturnOperation, which includes our predicate WHERE dest = ‘ORD’ in it.  This is a Data Movement Step and includes all of the Compute nodes (all one of them, that is) sending data back to the Head node so that I can see the results.

  9. Finally, we drop TEMP_ID_53 because we’re done with the table.

This post was about 70% legwork and 30% guesswork.  That’s a bit higher a percentage than I’d ideally like, but there isn’t that much information readily available yet, so I’m trying (in my own small way) to fix that.
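
For anyone who wants to trace the same steps on their own instance, here is a minimal sketch of the kind of query that surfaces them from the distributed request DMVs; the execution_id value is a placeholder you would take from the first result set:

-- Most recent Polybase request, then its step-by-step breakdown
-- (RandomIDOperation, MultiStreamOperation, StreamingReturnOperation, and so on).
SELECT TOP (1) execution_id, [status], start_time, end_time, total_elapsed_time
FROM sys.dm_exec_distributed_requests
ORDER BY start_time DESC;

SELECT execution_id, step_index, operation_type, location_type,
    [status], total_elapsed_time, row_count, command
FROM sys.dm_exec_distributed_request_steps
WHERE execution_id = 'QID1234'   -- placeholder from the first query
ORDER BY step_index;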

Stored Procedure Last Run Times

Richie Lee has a script to see when stored procedures were last executed:

Quick script to get the last time a stored procedure was executed in the database. The reason for the seemingly over-engineered script is that different query plans can be generated, meaning that stored procedures can appear more than once in the list.

The query doesn’t quite work as-is, but making qs.execution_count into an aggregation and removing it from the GROUP BY would work.  I’d probably rewrite it to look a bit more like:

WITH querystats AS
(
	SELECT
		OBJECT_NAME(qt.objectid) AS ProcedureName,
		-- Sum execution counts across all cached plans for the same procedure.
		SUM(qs.execution_count) OVER (PARTITION BY OBJECT_NAME(qt.objectid)) AS ExecutionCount,
		qs.creation_time AS CreationTime,
		-- Keep only the most recently created plan per procedure.
		ROW_NUMBER() OVER (PARTITION BY OBJECT_NAME(qt.objectid) ORDER BY creation_time DESC) AS rownum
	FROM sys.dm_exec_query_stats AS qs
	CROSS APPLY sys.dm_exec_sql_text(qs.[sql_handle]) AS qt
	WHERE
		qt.[dbid] = DB_ID()
		AND OBJECT_NAME(qt.objectid) LIKE '%%'	-- narrow to a specific procedure name here if desired
)
SELECT
	qs.ProcedureName,
	qs.ExecutionCount,
	qs.CreationTime
FROM querystats qs
WHERE
	qs.rownum = 1;

Startup Stored Procedures

Dave Mason looks at using sp_procoption to execute stored procedures at startup:

If you have more than one task you want to run at startup, you could include code for each task in a single stored procedure. But that’s generally bad coding practice. Go with the modular approach and create a separate stored procedure for each distinct task. With sp_procoption, you can set more than one stored procedure for automatic execution. The MSDN documentation was not clear regarding the order of execution, though. I thought there might be something similar to sp_settriggerorder, but I wasn’t able to find anything like that. I ran a test on SQL 2014 with 3 stored procedures set for automatic execution.

Dave has interesting notes on procedure run order, where these procedures need to live, and even some ideas on what you might put into startup stored procedures.
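
For quick reference, registering a procedure for automatic execution (and checking which procedures are flagged) looks like this; the procedure name is a placeholder, and the procedure has to live in master:

USE master;
GO
-- Flag an existing procedure in master for execution at instance startup.
EXEC sp_procoption
    @ProcName = N'dbo.MyStartupTask',   -- placeholder
    @OptionName = 'startup',
    @OptionValue = 'on';

-- Procedures currently set for automatic execution.
SELECT name
FROM sys.procedures
WHERE is_auto_executed = 1;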