Press "Enter" to skip to content

Author: Kevin Feasel

Retrieving WMI Information

Dave Mason shows how to retrieve WMI information (perhaps the “wrong” way):

In another post I explored a way to run WMI queries from tsql (sort of). Let’s do it again. The script below enables xp_cmdshell, runs WMIC.exe (capturing the output to a temp table), shreds the XML, returns a result set of columns and rows, and disables xp_cmdshell. If you’re adventurous and daring, run the script in a test environment.

As Dave mentions, this really isn’t the best way to do the job if you can use PowerShell, but there are times when PowerShell isn’t available.
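
To make the general shape concrete, here is a minimal sketch along the lines Dave describes.  This is not his actual script: the WMIC class, the CSV output format, and the simple line-by-line capture are illustrative assumptions (his version shreds the XML output into proper columns), and it belongs in a test environment only.

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;

-- Capture WMIC's output, one line per row, into a temp table.
CREATE TABLE #WmicOutput (OutputLine NVARCHAR(4000) NULL);
INSERT INTO #WmicOutput (OutputLine)
EXEC xp_cmdshell 'wmic os get Caption,Version /format:csv';

SELECT OutputLine
FROM #WmicOutput
WHERE OutputLine IS NOT NULL;

-- Turn xp_cmdshell back off when finished.
EXEC sp_configure 'xp_cmdshell', 0;
RECONFIGURE;
EXEC sp_configure 'show advanced options', 0;
RECONFIGURE;

DROP TABLE #WmicOutput;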


Object Naming

Andy Galbraith warns you against…odd…database names:

I went and looked on the server, and sure enough in Management Studio I saw one database named “FinanceDB” and a database named “[FinanceDB]”.

This was on a SQL 2008R2 instance, but as a test I created a database named [test] on my local SQL 2014 instance and sure enough it worked!

The source of the problem at the client was the LiteSpeed maintenance plan.  Even though the backup task was set to backup all user databases, it wasn’t picking up the square-bracketed database.

I’d go a bit further and say that you should avoid everything but alphanumeric characters (and maybe underscores) in the names of databases, tables, views, and all other database objects.
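
To see the escaping mechanics behind this for yourself (on a test instance, naturally): a closing square bracket inside a delimited identifier is escaped by doubling it, so [[test]]] names a database that is literally called “[test]”, brackets included.

CREATE DATABASE [[test]]];  -- creates a database literally named [test]

SELECT name
FROM sys.databases
WHERE name = N'[test]';  -- returns the bracketed name

DROP DATABASE [[test]]];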


M Query Timing

Chris Webb gives a couple of options for timing M queries:

In M, the DateTime.LocalNow() function returns the current system date and time at the time the function is called – which means that you could call it multiple times in the same query and get different results. There’s also the DateTime.FixedLocalNow() function which also returns the system date and time; the difference between the two is, as the documentation says:

This value is fixed and will not change with successive calls, unlike DateTime.LocalNow, which may return different values over the course of execution of an expression.

This is good to know for those query-tuning scenarios.


B-Tree Indexes

Derik Hammer is starting a new series on index types, leading off with the venerable B-tree:

On more than one occasion I have been told that the B-Tree index is the index structure which was designed after the A-Tree index. Another common misconception is that it stands for Binary-Tree. As logical as those may seem, they are false. The ‘B’ in B-Tree does not actually have any specific meaning. Check out Ed McCreight’s explanation here (16:08) where he admits that the name discussion was never settled.

In its most basic form, the B-Tree index is a hierarchy of data pages (page structures lightly touched on in the next post of this series). The lowest level is called the leaf level, the highest level is the index root, and all levels in between are the intermediate levels. This structure is an improvement over the Binary Tree index because its balanced nature greatly improves the performance of maintenance operations such as INSERT, DELETE, and UPDATE.

On the terminological point, I’d always heard that the “B” stood for “Balanced” because of the level flatness: in contrast to a “normal” tree, you wouldn’t have more than a pre-defined number of levels of separation (usually one) between leaf nodes.
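
If you want to look at those levels on a real index, sys.dm_db_index_physical_stats in DETAILED mode exposes them.  The table name below is a placeholder, and index_id 1 assumes you are inspecting the clustered index.

SELECT index_depth, index_level, page_count
FROM sys.dm_db_index_physical_stats(
    DB_ID(), OBJECT_ID(N'dbo.YourTable'), 1, NULL, 'DETAILED')
ORDER BY index_level;
-- index_level 0 is the leaf level; the highest index_level is the root.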


A Tribute To SQL 2005

Brent Ozar gives his tribute to SQL Server 2005:

I remember the first time I used SQL Server 2005.

I was a database administrator working on a new-build data warehouse project in Miami. Both our data warehouse and SQL Server 2005 were looking like they were going to come out at the same time, and I kept hoping I could make the timing work.

SQL Server 2005 looked so seductive. Let’s take just a moment to think back about all the cool new technologies it introduced:

SQL Server 2005 was, in my opinion, the first enterprise-quality version of SQL Server.  2000 was a great start but wasn’t quite there yet.  This version, though, was quite nice.


Circular References

Daniel Hutmacher looks at circular references in recursive CTEs:

Suppose you have a tree hierarchy where (among other members) “3” is the parent of “8”, “8” is the parent of “B” and “B” is the parent of “E”. You could easily draw this as a branch structure where the members could be profit centers of a company, divisions of government, managers and employees, product lines, cell references in an Excel sheet or pretty much anything that can be described as a hierarchy.

3
--8
  --B
    --E

Now, if we say that “E” is the parent of “3”, we’ve created a circular reference, and we end up with an infinite recursion. What that means is that if you follow the tree from the root to the leaf level, you’ll end up going round and round in circles. In terms of a database query, that means that the query will go on forever until it either fills up your log file or tempdb, or until the maximum number of recursions (OPTION MAXRECURSION) is reached, whichever happens first.

I did something similar a while back with foreign key dependencies.  Because foreign key relationships are potentially cyclical graphs, they can easily fall into the same scenario.
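
A common defensive pattern, sketched here against a hypothetical temp table rather than Daniel’s code, is to carry the visited path along in the recursive CTE and stop expanding whenever a member shows up a second time:

CREATE TABLE #Hierarchy (member CHAR(1) NOT NULL, parent CHAR(1) NULL);
INSERT INTO #Hierarchy (member, parent)
VALUES ('8', '3'), ('B', '8'), ('E', 'B'), ('3', 'E');  -- E -> 3 closes the loop

WITH tree AS
(
    SELECT h.member,
           h.parent,
           CAST('/' + h.member + '/' AS VARCHAR(8000)) AS visited
    FROM #Hierarchy AS h
    WHERE h.member = '3'

    UNION ALL

    SELECT h.member,
           h.parent,
           CAST(t.visited + h.member + '/' AS VARCHAR(8000))
    FROM #Hierarchy AS h
    INNER JOIN tree AS t
        ON h.parent = t.member
    WHERE t.visited NOT LIKE '%/' + h.member + '/%'  -- skip members already on the path
)
SELECT member, parent, visited
FROM tree;

DROP TABLE #Hierarchy;

The path check costs a string comparison per row, but the query terminates cleanly instead of spinning until it hits the MAXRECURSION limit.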


SQL Server Marketplace Images

Melissa Coates debates whether to use a SQL Server marketplace image for an Azure VM:

As more and more customers are interested in moving some portion of their BI/analytics workloads to cloud services, one question that comes up occasionally is whether or not you should start with a marketplace image that has SQL Server already installed. So far I’ve noted a few key considerations for this decision:

  1. Do you want to pay for the SQL Server license as part of the VM pricing?

  2. Do you want to configure SQL Server in a specific way (i.e., following best practices)?

  3. Do you want Azure to handle things like automated patching by default?

My rule of thumb is that if it’s Express Edition or just something for me to mess around with, I’m typically happy with an image.  But for a production setup, I’d want the fine-grained control at installation time that a marketplace image just won’t give you.


File Processing Pattern

Bill Fellows describes a pattern for processing files in an ETL scenario:

All ETL processing will use a common root node/directory. I call it SSISData to make it fairly obvious what is in there but call it as you will. On my dev machine, this is usually sitting right off C:. On servers though, oh hell no! The C drive is reserved for the OS. Instead, I work with the DBAs to determine where this will actually be located. Heck, they could make it a UNC path and my processing won’t care because I ensure that root location is an externally configurable “thing.” Whatever you’re using for ETL, I’m certain it will support the concept of configuration. Make sure the base node is configurable.

A nice thing about anchoring all your file processing to a head location is that if you are at an organization that is judicious with handing out file permissions, you don’t have to create a permission request for each source of data. Get your security team to sign off on the ETL process having full control to a directory structure starting here. The number of SSIS related permissions issues I answer on StackOverflow is silly.

It comes down to consistency and cleanliness.  Plan ahead and you’ll have a much nicer go of things.  Bill also provides a Biml POC, so check that out as well.


Drop Column In SQL Server 2012/2014

Aaron Bertrand explains a bug in SQL Server 2012 and 2014 that can crop up after dropping a column:

A fix has been published for SQL Server 2012 (SP2 and SP3), and one is coming for SQL Server 2014 (RTM and SP1) – though in this most recent update, SQL Server 2014 was removed from the list of affected versions (not sure it was accurate to remove those). I will update this space once the next 2014 CUs are released, but for now you should plan to rebuild tables after dropping columns as a matter of course and, more importantly, as part of the same maintenance operation.

This is a good reason to stay up to date on CUs.
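
In practice, Aaron’s guidance boils down to pairing the two statements in the same maintenance window; the table and column names here are placeholders:

ALTER TABLE dbo.YourTable DROP COLUMN YourColumn;

-- Rebuild right away, as part of the same maintenance operation,
-- to clear out the physical remnants of the dropped column.
ALTER TABLE dbo.YourTable REBUILD;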


Jupyter Notebooks With R

Andrie de Vries notes that Azure Machine Learning now supports Jupyter Notebooks with R:

I wrote about Jupyter Notebooks in September 2015 (Using R with Jupyter Notebooks), where I noted some of the great benefits of using notebooks:

  • Jupyter is an easy to use and convenient way of mixing code and text in the same document.

  • Unlike other reporting systems like RMarkdown and LaTeX, Jupyter notebooks are interactive – you can run the code snippets directly in the document.

  • This makes it easy to share and publish code samples as well as reports.

Jupyter is a fine notebook application, but up until now, you could only integrate it with Azure Machine Learning if you were writing Python code.  This move is a big step forward for Azure ML.
