Press "Enter" to skip to content

Month: February 2017

Four-And-Twenty R Users

Ginger Grant explains why SQL Server R Services creates a group with twenty separate external users:

There are many reasons why a DBA might not want to allow clients to access server memory, as that will tax the server. Turning it off is relatively simple. Go to the SQL Server Management Console and select SQL Server Launchpad for the instance of SQL Server running R Server.

In the picture of the screen, the instance of SQL Server I have running R Services is SS2016. Right-click on the server and select Properties, then click on the Advanced tab. When looking at the number of external users allowed by default, the number might look familiar. The reason there are twenty User IDs created for R Server is that Launchpad, by default, allocates twenty external users to connect from SQL Server to run R. If you don’t want to allow external users to run R on a server, you will need to prevent them from connecting by not granting them the ability to run R. To run R, users need db_rrerole permissions; if they do not have that role, they cannot run R. On a production server, it is probably best that this permission not be granted to non-system users.
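As a sketch of that last point (the db_rrerole role name comes from the quoted post; the database and user names here are hypothetical), membership is managed like any other database role:

    USE TestDB;  -- hypothetical database
    -- Allow a user to run R through SQL Server R Services:
    ALTER ROLE db_rrerole ADD MEMBER [DOMAIN\AnalystUser];
    -- On a production server, remove it from non-system users instead:
    ALTER ROLE db_rrerole DROP MEMBER [DOMAIN\AnalystUser];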

Read on for more details.

SSRS With Natural Earth Geospatial Data

Jeff Pries shows how to use the Natural Earth data set in a SQL Server Reporting Services report:

After proceeding through the New Layer Wizard three times to add three layers to the map, we have all of our data present.  We now just need to do a little housekeeping to make the map more presentable.  We’ll go through the layers and make slight tweaks to each.

Before adjusting the layers, first notice that we essentially have two legends: the Legend box and the Map Scale box.  Both give us the same information.  Since the Legend uses more real estate, delete it.

There are a lot of steps involved, but the end result is a nice report.

Clustered Index And Physical Storage

Wayne Sheffield busts a myth:

In my last few blog posts, I’ve shared several methods of getting internal information from a database by using the DBCC PAGE command, utilizing the “WITH TABLERESULTS” option so that the process can be automated for further processing. This post will also do this, but in this case, we’ll be using it to bust a common myth: data in a clustered index is physically stored on disk in the order of the clustered index.

Busting this myth

To bust this myth, we’ll create a database, put a table with a clustered index into this database, and then we’ll add some rows in random order. Next, we will show that the rows are stored on the pages in logical order, and then we’ll take a deeper look at the page internals to see that the rows are not stored in physical order.
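For reference, the pattern behind that inspection looks roughly like this; the database name and page number are hypothetical placeholders:

    -- Route DBCC output to the client session instead of the error log:
    DBCC TRACEON(3604);
    -- Arguments: database, file id, page id, print option (3 = row detail):
    DBCC PAGE ('MythDB', 1, 100, 3) WITH TABLERESULTS;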

Read on for the proof.

Understanding Biml Table Definitions

Bill Fellows explains the Biml output for a single table:

Wow, that’s a lot! Let’s break it down.

Connections

Our Connections collection has a single entity in it, an OLE DB Connection named Adventureworks (remember, all of this is case sensitive, so this Adventureworks is a different beast from AdventureWorks, ADVENTUREWORKS, etc.). This provides enough information to make a database connection. Of note, we have the server and catalog/database name defined in there. The type of connection used determines the specific property names, i.e. Initial Catalog & Data Source versus Server & Database, etc. Look at ConnectionStrings.com if you really want to see how rich (horrible) this becomes.
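To illustrate that point, here is the same connection written in the two styles, with hypothetical server and database names:

    Provider=SQLNCLI11;Data Source=localhost;Initial Catalog=AdventureWorks2014;Integrated Security=SSPI;
    Server=localhost;Database=AdventureWorks2014;Trusted_Connection=True;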

There’s a lot of XML to describe a single table, but a key benefit to Biml is that you write templates and scripts to generate this stuff rather than typing it out.

Merging With Columnstore

Niko Neugebauer does not like the MERGE statement when applied to columnstore indexes:

This blog post is focused on the MERGE statement for Columnstore Indexes, or as I call it – the worst enemy of the Columnstore Indexes. It is extremely difficult to imagine a statement or way of making the worst out of Columnstore Indexes other than the infamous MERGE statement. Why? Because it is not only making Columnstore Indexes perform slowly, it will make them perform MUCH SLOWER than any Rowstore Indexes. Yes, you have read that right – slower than ANY_ROWSTORE_INDEXES. In fact, this should be a hint that one should apply to the MERGE statement when it is executed against Columnstore Indexes! 🙂
I decided to dedicate a whole blog post to this matter, mainly to warn people about this pretty problematic statement – I hope not to see it being used for Columnstore Indexes in the future!
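One common workaround (not spelled out in the excerpt above) is to split the MERGE into its component statements; a minimal sketch with hypothetical fact and staging tables:

    -- Update rows that already exist in the columnstore table:
    UPDATE f
    SET f.Amount = s.Amount
    FROM dbo.FactSales AS f
    INNER JOIN dbo.StagingSales AS s ON s.SaleID = f.SaleID;

    -- Insert the rows that do not:
    INSERT INTO dbo.FactSales (SaleID, Amount)
    SELECT s.SaleID, s.Amount
    FROM dbo.StagingSales AS s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.FactSales AS f WHERE f.SaleID = s.SaleID);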

There is very little room for misunderstanding in Niko’s post.

Backup Strategy

Andy Galbraith has some advice for backups:

The workaround to this is to back the databases up with one tool and then to *copy* those backup files to the other tool.  The best recommendation I have in these situations is always to run “regular” SQL Server Agent job backups to a file location (either a local or network drive) and then to have your third-party tool use file backup for the actual BAK/TRN files from that location *rather* than running the “database agent” on the third-party tool to back up the database directly.

In this model the third-party tool never directly touches SQL Server – in this client’s environment you would run SQL Server Agent jobs similar to the current maintenance plan (although something better than maintenance plans, such as Ola Hallengren’s scripts or maybe MinionBackup) to back up to actual disk (in this case the B: drive) and then use Arcserve to back up the files that have been written to the B: drive.
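The Agent side of that model is just ordinary native backups to disk; a minimal sketch with hypothetical database names and paths (in practice, Ola Hallengren’s scripts would generate commands along these lines):

    BACKUP DATABASE SalesDB
      TO DISK = 'B:\Backup\SalesDB_Full.bak'
      WITH COMPRESSION, CHECKSUM;

    BACKUP LOG SalesDB
      TO DISK = 'B:\Backup\SalesDB_Log.trn'
      WITH COMPRESSION, CHECKSUM;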

There’s good advice around not using multiple tools to take backups.  This eliminates the possibility of needing to track down backups from two separate devices in order to restore to a point in time (e.g., if some of the log backups are on one device and some on the other).

Max Worker Threads

Erik Darling warns against messing with the Max Worker Threads setting:

The thing is, all changing that setting does is help you not run out of worker threads. It doesn’t make queries run any better or faster. In fact, under load, performance will most likely be downgraded to Absolute Rubbish© either way.

What’s worse? Running out of worker threads and queries having to wait for them to free up, or having way more queries trying to run on the same CPUs? Six of one, half dozen of punching yourself squarely in the nose.
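Before touching the setting, it’s worth comparing what SQL Server calculated on its own against how many workers are actually allocated; for example:

    -- The calculated maximum when "max worker threads" is left at 0:
    SELECT max_workers_count FROM sys.dm_os_sys_info;
    -- Workers currently allocated across all schedulers:
    SELECT COUNT(*) AS current_workers FROM sys.dm_os_workers;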

I think there are a couple of good counter-cases brought up in the comments (around mirroring and Service Broker), but it is solid general advice.

Column Specification On Insert

Michael Swart has a small console app which searches for INSERT statements missing column specifications:

I’ve got a program here that finds procedures with missing column specifications.

  • If, for some reason, you don’t care about enforcing this rule for temp tables and table variables, then uncomment the line // visitor.TolerateTempTables = true;

  • It uses ScriptDom, which you can get from Microsoft as a NuGet package.

  • The performance is terrible in Visual Studio because ScriptDom uses Antlr, which uses exceptions for flow control; this leads to lots of “first chance exceptions”, which slow down debugging. Outside of Visual Studio, it’s just fine.
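For context, the rule the program enforces is the difference between these two statements, sketched against a hypothetical table:

    -- Flagged: relies on column order and breaks silently if the table changes:
    INSERT INTO dbo.People VALUES (1, 'Swart');
    -- Preferred: explicit column specification:
    INSERT INTO dbo.People (PersonId, LastName) VALUES (1, 'Swart');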

Click through for the code.

Improving Agent Alerts

Chris Bell has a way of making SQL Agent error messages a lot better:

Not very helpful. Sure, I know the job failed, and what step it failed on, but now I have to connect to the agent and look up the history to determine if this is something I have to worry about.

It would be nice for the details seen in the job’s history to show up in the email alert we receive.

Recently I have been working with systems that had all of the alert-on-failure notifications configured, so we knew when things failed and could jump on re-running them if needed. We even had them showing up in a data team Slack channel, so we had a history as well as notification to everyone on the team at the same time. The problem is that there were not any details in the alerts we received, so we had to connect and figure out what to do next, or hope that our paid monitoring service would act on something after reading the details of the failure.
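The detail Chris wants in the email lives in msdb; a rough sketch of the sort of query that surfaces it (the job name is hypothetical):

    SELECT j.name, h.step_id, h.step_name, h.run_date, h.run_time, h.message
    FROM msdb.dbo.sysjobhistory AS h
    INNER JOIN msdb.dbo.sysjobs AS j ON j.job_id = h.job_id
    WHERE j.name = 'Nightly ETL'
      AND h.run_status = 0  -- 0 = failed
    ORDER BY h.instance_id DESC;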

Chris has provided a script and gives some recommendations on job configuration which might reduce the number of alerts you get.

Handwriting Character Recognition

Tomaz Kastrun compares a few different libraries in terms of handwritten numeric character recognition:

Recently, I did a session at a local user group in Ljubljana, Slovenia, where I introduced the new algorithms that are available with the MicrosoftML package for Microsoft R Server 9.0.3.

For datasets, I used two from (still currently running) Kaggle competitions. In the last part, I did image detection and prediction on the MNIST dataset and compared the performance and accuracy between them.

The MNIST handwritten digit database is available here.

Tomaz has all of the code available as well.
