Press "Enter" to skip to content

Author: Kevin Feasel

What Does Activity Monitor Do?

Tibor Karaszi explains each window in the Management Studio Activity Monitor:

The idea here is to show where SQL Server is waiting, “wait stats”.

It uses the same procedure as the “Waiting Tasks” diagram, #am_generate_waitstats, to get the information. See the above section for “Waiting Tasks” to understand the time dimension for this. For simplicity, we can say that it shows only wait stats for the past 30-60 seconds. This is important. Imagine that you had loads of a certain wait type, but none in the last minute. This pane can now fool you into thinking that you didn’t have any waits of that kind, just because you didn’t have any for the past minute. Note, though, that the “Cumulative Wait Time” column is the sum of waits in the group since SQL Server was restarted or since we last cleared the wait stats (DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR)).

In an attempt to be friendly, it groups and summarizes wait stats into various categories. That would be fine if there were some documentation about which individual wait types are in each group. Also, some wait types are ignored. One of the ignored wait types is CXPACKET; another is THREADPOOL.

Activity Monitor isn’t very good, but sometimes you can’t get the good tools installed on a server and need to check something quickly.  In those cases, it’s a handy thing to know.
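
Even without extra tools, the DMV behind all of this is a query away. A minimal sketch (the TOP count is arbitrary):

-- Top cumulative waits since the last restart (or the last manual clear).
SELECT TOP (10)
    wait_type,
    waiting_tasks_count,
    wait_time_ms,
    signal_wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;

-- Reset the cumulative counters (this affects anything else reading the DMV).
DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR);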


SSRS Category Charts & Ints

Kathi Kellenberger notices an oddity with SSRS Mobile Report category charts:

Notice that OrderYear displays decimal points. I switched the dataset in the Series field name property and found that neither of the columns in the dataset could be used.

Numeric columns cannot be set as a Series name field. To work around this, I modified the dataset, casting OrderYear as a CHAR(4).

That’s not a great situation, but at least there’s a workaround.
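
In dataset terms, the workaround is just a cast in the query. A quick sketch (the table and column names here are made up for illustration):

-- Cast the numeric year to a string so the Mobile Report designer
-- will accept it as a Series name field.
SELECT
    CAST(OrderYear AS CHAR(4)) AS OrderYear,
    SUM(TotalDue) AS TotalSales
FROM dbo.SalesSummary
GROUP BY OrderYear;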


Serverless Azure

Christos Matskas has an article on Azure Functions, Service Fabric, and Batch:

This service is the hidden gem of HPC (high performance computing) within the Azure Compute service family. As the name implies, Azure Batch is designed to run large-scale and high-performance computing applications efficiently in the cloud. When you’re faced with large workloads, all you have to do is to use Azure Batch to define compute resources to execute your applications in parallel and at the desired scale. A good use-case for Azure Batch would be to perform financial risk modelling, climate data analysis or stress testing. What makes Batch so useful is the fact that you don’t need to manually manage the node cluster, virtual networks or scheduling because all this is handled by the service. You need to define a job, any associated data and the number of nodes you want to utilise. It makes no difference if you need to run on one, a hundred or even thousands of nodes. The service is designed to scale according to the workload needs.

The cheapest server may very well be no server, and we’re at the point where relatively simple services could just run as Azure Functions or AWS Lambda functions.


Large Data Sets In Memory-Optimized Table Types

Jack Li explains a difference between memory-optimized tables and memory-optimized table types:

This customer was puzzled because he deleted existing rows. At any given time, there should not have been more than 1 million rows.  SQL Server should not have run out of memory.

This is actually by-design behavior, documented in “Memory-Optimized Table Variables”.  Here is what it states: “Unlike memory-optimized tables, the memory consumed (including deleted rows) by table variables is freed when the table variable goes out of scope.”  With a loop like the one above, all deleted rows will be kept and consume memory until the end of the loop.

Click through for the complete story.
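
The shape of the problem looks something like this sketch (the type definition and loop bounds are illustrative, and the database needs a MEMORY_OPTIMIZED_DATA filegroup):

-- A memory-optimized table type.
CREATE TYPE dbo.IdList AS TABLE
(
    Id INT NOT NULL PRIMARY KEY NONCLUSTERED
) WITH (MEMORY_OPTIMIZED = ON);
GO

DECLARE @t dbo.IdList;
DECLARE @i INT = 1;

WHILE @i <= 1000
BEGIN
    INSERT INTO @t (Id) VALUES (@i);
    DELETE FROM @t WHERE Id = @i;  -- rows are deleted each iteration...
    SET @i += 1;
END;
-- ...but their memory is only freed once @t goes out of scope here.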


Four-And-Twenty R Users

Ginger Grant explains why SQL Server R Services creates a group with twenty separate external users:

There are many reasons why a DBA might not want to allow clients to access server memory, as that will tax the server. Turning it off is relatively simple. Go to SQL Server Configuration Manager and select SQL Server Launchpad for the instance of SQL Server running R Server.

In the picture of the screen, the instance of SQL Server I have running R Services is SS2016. Right-click on the service and select Properties, then click on the Advanced tab. When looking at the number of external users allowed by default, the number might look familiar. The reason there are twenty user IDs created for R Server is that, by default, Launchpad allocates twenty external users to connect from SQL Server and run R. If you don’t want to allow external users to run R on a server, you will need to prevent them from connecting by not enabling them to run R. To run R, users need to have db_rrerole permissions; if they do not have that, they cannot run R. On a production server, it is probably best that this permission not be granted to non-system users.

Read on for more details.
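
Ginger mentions the db_rrerole side; the related database-level gate in SQL Server 2016 is the external script permission. A hedged sketch (the user name is made up, and the instance needs “external scripts enabled” turned on):

-- Without this permission, a user cannot invoke R via sp_execute_external_script.
GRANT EXECUTE ANY EXTERNAL SCRIPT TO [AppUser];

-- Quick smoke test that R execution works at all.
EXEC sp_execute_external_script
    @language = N'R',
    @script = N'OutputDataSet <- InputDataSet;',
    @input_data_1 = N'SELECT 1 AS Val;';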


SSRS With Natural Earth Geospatial Data

Jeff Pries shows how to use the Natural Earth data set in a SQL Server Reporting Services report:

After proceeding through the New Layer Wizard three times to add three layers to the map, we have all of our data present.  We now just need to do a little housekeeping to make the map more presentable.  We’ll go through each layer and make slight tweaks to each.

Before adjusting the layers, first notice that we essentially have two legends: the Legend box and the Map Scale box.  They both give us the same information.  Since the Legend is using more real estate, delete it.

There are a lot of steps involved, but the end result is a nice report.


Clustered Index And Physical Storage

Wayne Sheffield busts a myth:

In several of my last few blog posts, I’ve shared methods of getting internal information from a database by using the DBCC PAGE command and utilizing the “WITH TABLERESULTS” option, which allows the process to be automated for further processing. This post will also do this, but in this case, we’ll be using it to bust a common myth: that data in a clustered index is physically stored on disk in the order of the clustered index.

Busting this myth

To bust this myth, we’ll create a database, put a table with a clustered index into this database, and then we’ll add some rows in random order. Next, we will show that the rows are stored on the pages in logical order, and then we’ll take a deeper look at the page internals to see that the rows are not stored in physical order.

Read on for the proof.
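
If you want to poke at the pages yourself, the classic pairing is DBCC IND to find a table’s pages and DBCC PAGE to dump one. A sketch (the database, table, and page number are illustrative; both commands are undocumented):

-- List the pages allocated to the table (index_id 1 = clustered index).
DBCC IND ('TestDB', 'dbo.MythTable', 1);

-- Dump a single page, slot array included, as a rowset.
DBCC PAGE ('TestDB', 1, 312, 3) WITH TABLERESULTS;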


Understanding Biml Table Definitions

Bill Fellows explains the Biml output for a single table:

Wow, that’s a lot! Let’s break it down.

Connections

Our Connections collection has a single entity in it, an OLE DB Connection named Adventureworks (remember, all of this is case sensitive, so this Adventureworks is a different beast from AdventureWorks, ADVENTUREWORKS, etc.). This provides enough information to make a database connection. Of note, we have the server and catalog/database name defined in there. The type of connection used determines the specific names used, i.e. Initial Catalog & Data Source; Server & Database; etc. Look at ConnectionStrings.com if you really want to see how rich (horrible) this becomes.

There’s a lot of XML to describe a single table, but a key benefit to Biml is that you write templates and scripts to generate this stuff rather than typing it out.


Merging With Columnstore

Niko Neugebauer does not like the MERGE statement when applied to columnstore indexes:

This blog post is focused on the MERGE statement for Columnstore Indexes, or as I call it – the worst enemy of the Columnstore Indexes. It is extremely difficult to imagine some statement or way of making the worst out of the Columnstore Indexes, if not the infamous MERGE statement. Why? Because it is not only making Columnstore Indexes perform slowly, it will make them perform MUCH SLOWER than any Rowstore Indexes. Yes, you have read right – slower than ANY_ROWSTORE_INDEXES. In fact, this should be a hint that one should apply to the MERGE statement when it is executed against Columnstore Indexes! 🙂
I decided to dedicate a whole blog post to this matter, mainly to warn people about this pretty problematic statement – I hope not to see it being used for Columnstore Indexes in the future!

There is very little room for misunderstanding in Niko’s post.
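
The standard escape hatch is to break the upsert into separate set-based statements instead of one MERGE. A hedged sketch (the fact and staging table names are invented for illustration):

-- Instead of MERGE into dbo.FactSales (clustered columnstore),
-- run the update and the insert as separate statements.
UPDATE f
SET    f.Amount = s.Amount
FROM   dbo.FactSales AS f
JOIN   dbo.StagingSales AS s ON s.SaleKey = f.SaleKey;

INSERT INTO dbo.FactSales (SaleKey, Amount)
SELECT s.SaleKey, s.Amount
FROM   dbo.StagingSales AS s
WHERE  NOT EXISTS (SELECT 1 FROM dbo.FactSales AS f WHERE f.SaleKey = s.SaleKey);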


Backup Strategy

Andy Galbraith has some advice for backups:

The workaround to this is to back the databases up with one tool and then to *copy* those backup files to the other tool.  The best recommendation I have in these situations is always to run “regular” SQL Server Agent job backups to a file location (either a local or network drive) and then to have your third-party tool use file backup for the actual BAK/TRN files from that location *rather* than running the “database agent” on the third-party tool to back up the database directly.

In this model the third-party tool never directly touches SQL Server – in this client’s environment you would run SQL Server Agent jobs similar to the current maintenance plan (although something better than maintenance plans, such as Ola Hallengren’s scripts or maybe MinionBackup) to back up to actual disk (in this case the B: drive) and then use Arcserve to back up the files that have been written to the B: drive.

There’s good advice around not using multiple tools to take backups.  This eliminates the possibility of needing to track down backups from two separate devices in order to restore to a point in time (e.g., if some of the log backups are on one device and some on the other).
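
The Agent-job half of that model is just native backups to disk. A minimal sketch (the database name and paths are illustrative):

-- Full and log backups land on the B: drive as ordinary files,
-- which the third-party tool then backs up like any other file.
BACKUP DATABASE [SalesDB]
TO DISK = N'B:\Backup\SalesDB_Full.bak'
WITH COMPRESSION, CHECKSUM, INIT;

BACKUP LOG [SalesDB]
TO DISK = N'B:\Backup\SalesDB_Log.trn'
WITH COMPRESSION, CHECKSUM;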
