Press "Enter" to skip to content

Day: February 27, 2017

Large Data Sets In Memory-Optimized Table Types

Jack Li explains a difference between memory-optimized tables and memory-optimized table types:

This customer was puzzled because he deleted existing rows. At any given time, there should not have been more than 1 million rows, so SQL Server should not have run out of memory.

This is actually by-design behavior, documented in “Memory-Optimized Table Variables.” Here is what it states: “Unlike memory-optimized tables, the memory consumed (including deleted rows) by table variables is freed when the table variable goes out of scope.” With a loop like the one above, all deleted rows will be kept and consume memory until the end of the loop.
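
To see the shape of the problem, here is a minimal sketch of the pattern (my own simplified names and row counts, not Jack’s code): a memory-optimized table type used in a loop, where deletes do not release memory until the variable goes out of scope.

-- Assumes a database that already has a MEMORY_OPTIMIZED_DATA filegroup.
CREATE TYPE dbo.StagingRows AS TABLE
(
    Id      INT       NOT NULL PRIMARY KEY NONCLUSTERED,
    Payload CHAR(200) NOT NULL
) WITH (MEMORY_OPTIMIZED = ON);
GO

DECLARE @rows dbo.StagingRows;
DECLARE @i INT = 1;

WHILE @i <= 10
BEGIN
    -- load a batch (the real case loaded around a million rows per pass)
    INSERT INTO @rows (Id, Payload)
    SELECT @i * 100 + v1.n * 10 + v2.n, 'x'
    FROM (VALUES (0),(1),(2),(3),(4),(5),(6),(7),(8),(9)) AS v1(n)
         CROSS JOIN (VALUES (0),(1),(2),(3),(4),(5),(6),(7),(8),(9)) AS v2(n);

    -- the rows are logically gone, but their memory is not released here
    DELETE FROM @rows;

    SET @i += 1;
END;
-- only when @rows goes out of scope (end of the batch) is its memory freed,
-- including everything consumed by the deleted rows
GO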

Click through for the complete story.


Four-And-Twenty R Users

Ginger Grant explains why SQL Server R Services creates a group with twenty separate external users:

There are many reasons why a DBA might not want to allow clients to access server memory, as that will tax the server. Turning it off is relatively simple. Go to the SQL Server Management Console and select SQL Server Launchpad for the instance of SQL Server running R Server.

In the picture of the screen, the instance of SQL Server I have running R Services is SS2016. Right click on the server and select Properties, then click on the Advanced tab. When looking at the number of external users allowed by default, the number might look familiar. The reason there are twenty User IDs created for R Server is that Launchpad by default allocates twenty external users to connect from SQL Server to run R. If you don’t want to allow external users to run R on a server, you will need to prevent them from connecting by not enabling them to run R. To run R, users need to have db_rrerole permissions; if they do not have that, they cannot run R. On the production server, it is probably best that this permission not be granted to non-system users.
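
For reference, the documented permission that gates running R through sp_execute_external_script is EXECUTE ANY EXTERNAL SCRIPT; a quick sketch of granting or denying it at the database level (the user name here is hypothetical):

-- Allow a specific, non-sysadmin database user to run external (R) scripts
GRANT EXECUTE ANY EXTERNAL SCRIPT TO [ReportingUser];

-- On a production server, the safer default is to deny it
DENY EXECUTE ANY EXTERNAL SCRIPT TO [ReportingUser];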

Read on for more details.


SSRS With Natural Earth Geospatial Data

Jeff Pries shows how to use the Natural Earth data set in a SQL Server Reporting Services report:

After proceeding through the New Layer Wizard three times to add three layers to the map, we have all of our data present.  We now just need to do a little housekeeping to make the map more presentable.  We’ll go through each layer and make slight tweaks to each.

Before adjusting the layers, first notice that we essentially have two legends: the Legend box and the Map Scale box. They both give us the same information. Since the Legend is using more real estate, delete it.
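
As a rough illustration (the table and column names are my assumptions, not from Jeff’s post), the spatial dataset behind a map layer is typically just a query returning a geography column alongside the attributes you want to color or label by:

-- Hypothetical table holding imported Natural Earth country polygons
SELECT NAME    AS CountryName,
       POP_EST AS PopulationEstimate,
       geog    AS CountryShape   -- the spatial column the map layer consumes
FROM dbo.NaturalEarthCountries;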

There are a lot of steps involved, but the end result is a nice report.


Clustered Index And Physical Storage

Wayne Sheffield busts a myth:

In several of my last few blog posts, I’ve shared methods of getting internal information from a database by using the DBCC PAGE command, utilizing the “WITH TABLERESULTS” option so that the process can be automated for further processing. This post will do this as well, but in this case we’ll be using it to bust a common myth: data in a clustered index is physically stored on disk in the order of the clustered index.

Busting this myth

To bust this myth, we’ll create a database, put a table with a clustered index into this database, and then we’ll add some rows in random order. Next, we will show that the rows are stored on the pages in logical order, and then we’ll take a deeper look at the page internals to see that the rows are not stored in physical order.
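
A minimal sketch of that kind of repro (object names and page numbers here are illustrative, not Wayne’s):

CREATE DATABASE MythDemo;
GO
USE MythDemo;
GO
CREATE TABLE dbo.MythTest
(
    Id  INT        NOT NULL CONSTRAINT PK_MythTest PRIMARY KEY CLUSTERED,
    Pad CHAR(2000) NOT NULL DEFAULT ('x')
);

-- insert the clustering keys in deliberately random order
INSERT INTO dbo.MythTest (Id) VALUES (5), (1), (9), (3), (7);

-- list the table's data pages (undocumented DMF, SQL Server 2012+)
SELECT allocated_page_file_id, allocated_page_page_id
FROM sys.dm_db_database_page_allocations(DB_ID(), OBJECT_ID('dbo.MythTest'), 1, NULL, 'DETAILED')
WHERE is_allocated = 1
  AND page_type_desc = 'DATA_PAGE';

-- dump one page (substitute the file/page ids found above); slot order follows
-- the clustering key, but the row offset array shows physical placement differs
DBCC PAGE ('MythDemo', 1, 312, 3) WITH TABLERESULTS;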

Read on for the proof.


Understanding Biml Table Definitions

Bill Fellows explains the Biml output for a single table:

Wow, that’s a lot! Let’s break it down.

Connections

Our Connections collection has a single entity in it, an OLE DB Connection named Adventureworks (remember, all of this is case sensitive, so this Adventureworks is a different beast from AdventureWorks, ADVENTUREWORKS, etc.). This provides enough information to make a database connection. Of note, we have the server and catalog/database name defined in there. The type of connection used determines the specific property names used, i.e. Initial Catalog & Data Source versus Server & Database, etc. Look at ConnectionStrings.com if you really want to see how rich (horrible) this becomes.

There’s a lot of XML to describe a single table, but a key benefit to Biml is that you write templates and scripts to generate this stuff rather than typing it out.


Merging With Columnstore

Niko Neugebauer does not like the MERGE statement when applied to columnstore indexes:

This blog post is focused on the MERGE statement for Columnstore Indexes, or as I call it – the worst enemy of the Columnstore Indexes. It is extremely difficult to imagine a statement or a way of making the worst out of Columnstore Indexes other than the infamous MERGE statement. Why? Because it not only makes Columnstore Indexes perform slowly, it will make them perform MUCH SLOWER than any Rowstore Indexes. Yes, you have read right – slower than ANY_ROWSTORE_INDEXES. In fact, this should be a hint that one should apply to the MERGE statement when it is executed against Columnstore Indexes! 🙂
I decided to dedicate a whole blog post to this matter, mainly to warn people about this pretty problematic statement – I hope not to see it being used for Columnstore Indexes in the future!
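
By way of illustration (these tables and names are mine, not Niko’s), here is the anti-pattern next to the commonly recommended alternative of splitting the upsert into separate UPDATE and INSERT statements:

CREATE TABLE dbo.FactSales  (SaleId BIGINT NOT NULL, Amount MONEY NOT NULL,
                             INDEX cci CLUSTERED COLUMNSTORE);
CREATE TABLE dbo.StageSales (SaleId BIGINT NOT NULL, Amount MONEY NOT NULL);
GO

-- the anti-pattern: MERGE against a clustered columnstore table
MERGE dbo.FactSales AS tgt
USING dbo.StageSales AS src
    ON tgt.SaleId = src.SaleId
WHEN MATCHED THEN
    UPDATE SET tgt.Amount = src.Amount
WHEN NOT MATCHED BY TARGET THEN
    INSERT (SaleId, Amount) VALUES (src.SaleId, src.Amount);

-- a common alternative: separate UPDATE and INSERT, which lets large inserts
-- take the bulk-load path into compressed rowgroups
UPDATE tgt
SET    tgt.Amount = src.Amount
FROM   dbo.FactSales AS tgt
JOIN   dbo.StageSales AS src ON src.SaleId = tgt.SaleId;

INSERT INTO dbo.FactSales (SaleId, Amount)
SELECT src.SaleId, src.Amount
FROM   dbo.StageSales AS src
WHERE  NOT EXISTS (SELECT 1 FROM dbo.FactSales AS f WHERE f.SaleId = src.SaleId);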

There is very little room for misunderstanding in Niko’s post.


Backup Strategy

Andy Galbraith has some advice for backups:

The workaround for this is to back the databases up with one tool and then to *copy* those backup files to the other tool. The best recommendation I have in these situations is always to run “regular” SQL Server Agent job backups to a file location (either a local or network drive) and then to have your third-party tool do a file backup of the actual BAK/TRN files from that location, *rather* than running the third-party tool’s “database agent” to back up the database directly.

In this model the third-party tool never directly touches SQL Server – in this client’s environment you would run SQL Server Agent jobs similar to the current maintenance plan (although something better than maintenance plans, such as Ola Hallengren’s scripts or maybe Minion Backup) to back up to actual disk (in this case the B: drive) and then use Arcserve to back up the files that have been written to the B: drive.
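
The native-backup side of that model is nothing exotic; a simplified sketch (the database name and paths are illustrative, and Ola Hallengren’s scripts or Minion Backup would normally generate these commands for you):

-- SQL Server Agent job step: full backup straight to disk
BACKUP DATABASE [YourDatabase]
TO DISK = N'B:\Backup\YourDatabase_Full.bak'
WITH COMPRESSION, CHECKSUM, INIT;

-- and the log backups on their own schedule
BACKUP LOG [YourDatabase]
TO DISK = N'B:\Backup\YourDatabase_Log.trn'
WITH COMPRESSION, CHECKSUM, INIT;

-- Arcserve (or any file-level tool) then picks up the BAK/TRN files from B:\
-- and never touches SQL Server directly.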

There’s good advice around not using multiple tools to take backups.  This eliminates the possibility of needing to track down backups from two separate devices in order to restore to a point in time (e.g., if some of the log backups are on one device and some on the other).


Max Worker Threads

Erik Darling warns against messing with the Max Worker Threads setting:

The thing is, all changing that setting does is help you not run out of worker threads. It doesn’t make queries run any better or faster. In fact, under load, performance will most likely be downgraded to Absolute Rubbish© either way.

What’s worse? Running out of worker threads and queries having to wait for them to free up, or having way more queries trying to run on the same CPUs? Six of one, half dozen of punching yourself squarely in the nose.
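
For reference (Erik’s point stands: leaving it at the default is almost always right), the setting and current thread usage can be checked like this:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- 0 means SQL Server sizes the worker thread pool itself, based on CPU count
EXEC sp_configure 'max worker threads';

-- how many workers exist right now versus the calculated maximum
SELECT SUM(s.current_workers_count)                       AS current_workers,
       (SELECT max_workers_count FROM sys.dm_os_sys_info) AS max_workers
FROM sys.dm_os_schedulers AS s
WHERE s.status = 'VISIBLE ONLINE';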

I think there are a couple good counter-cases brought up in the comments (around mirroring and Service Broker), but it is solid general advice.


Column Specification On Insert

Michael Swart has a small console app which searches for INSERT statements missing column specifications:

I’ve got a program here that finds procedures with missing column specifications.

  • If, for some reason, you don’t care about enforcing this rule for temp tables and table variables, then uncomment the line // visitor.TolerateTempTables = true;

  • It uses ScriptDom, which you can get from Microsoft as a NuGet package.

  • The performance is terrible in Visual Studio because ScriptDom uses Antlr, which uses exceptions for flow control; this leads to lots of “first chance exceptions,” which slows down debugging. Outside of Visual Studio, it’s just fine.
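
For context, this is the sort of before-and-after the tool is looking for (a made-up example, not from Michael’s post):

CREATE TABLE dbo.Widgets
(
    WidgetId    INT          NOT NULL,
    WidgetName  NVARCHAR(50) NOT NULL,
    CreatedDate DATETIME2    NOT NULL
);
GO

-- flagged: relies on column order and breaks as soon as the table changes
INSERT INTO dbo.Widgets
VALUES (42, N'Sprocket', SYSDATETIME());

-- preferred: explicit column specification
INSERT INTO dbo.Widgets (WidgetId, WidgetName, CreatedDate)
VALUES (42, N'Sprocket', SYSDATETIME());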

Click through for the code.
