Press "Enter" to skip to content

Author: Kevin Feasel

OPENROWSET And BULK Rowsets

Dave Mason looks at the OPENROWSET function:

The built-in SQL Server function OPENROWSET() provides a way to access remote data from an OLE DB data source. It can be used with the BULK rowset provider to read data from a file without loading the data into a target table. This post will show the basics to get started with OPENROWSET(), the BULK rowset provider, and text files of fixed-width data fields.

For permanent connections, look into linked servers.  But for one-off things, OPENROWSET works fine.
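
As a minimal sketch of the technique, assuming a fixed-width data file and a format file that maps its field offsets to columns (both paths here are hypothetical):

SELECT t.*
FROM OPENROWSET(
    BULK 'C:\Data\People.txt',          -- hypothetical fixed-width data file
    FORMATFILE = 'C:\Data\People.fmt'   -- format file describing the field offsets
) AS t;

The format file does the heavy lifting of carving each line into columns; without one, the BULK provider only reads entire files via SINGLE_BLOB, SINGLE_CLOB, or SINGLE_NCLOB.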


What’s New In Power BI 2.0?

Meagan Longoria tells us what’s in Power BI version 2.0:

The Microsoft Power BI team was fast and furious in 2015, and there are no indications they are slowing down in 2016. If you haven’t checked out Power BI V2 since it was first released last summer, you might want to take another look. Many features have been added and updated since then. Based upon the release schedules since July, it seems there are 3 separate release cycles for Power BI:

  • The Power BI Service (PowerBI.com) gets weekly updates.

  • The Power BI Desktop tool gets monthly updates.

  • The Power BI mobile apps get monthly updates.

I expect no fewer than 6 updates per month from the Power BI team.


Multiple Instances On A VM

David Klee answers the question of when you should have multiple named instances on a single VM:

I am personally partial to having just one instance per VM, as long as the situation allows for it. The resource management area between SQL Server and Windows allows me to manage the overall resource consumption at the VM level, and en masse, managing at this layer rather than multiple layers is usually preferable. I claim that the extra overhead of managing more VMs is worth the resource management flexibility.

I agree with this.  The biggest advantage I see is in licensing, but if your environment is of a non-trivial size, you’re probably going to license the host instead of individual VMs.  Nevertheless, check out David’s pro-and-con list and see where your situation lies.


Automating SSAS Deployments

Matt Smith has introduced SQL Server Analysis Services deployments to Octopus Deploy:

The only thing missing was SSAS. After watching Chris Webb’s video tutorial – Cube Deployment, Processing and Admin – on Project Botticelli, I decided I had to use Microsoft.AnalysisServices.Deployment.exe. After a bit of scripting and testing, I managed to write a PowerShell script that updates the XML config files for the deployment – it sets the ProcessingOption to ‘DoNotProcess’. It updates the data source – where the cube will refresh the data from. The script isn’t perfect. For starters, what if you have more than one data source? Also, what if you’re not using SQL Server 2014? Still, the great thing about open source is that others can update it. Anyone can improve it; it’s not reliant on me having free time. So hopefully by the time we move to SQL 2016, someone will have already updated it to work with SQL 2016.

A big part of product maturation is automated deployment.  Good on Matt for introducing that to the community.
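
For reference, the deployment utility takes the .asdatabase file that an SSAS project build produces.  A typical unattended invocation looks something like this (file names are placeholders; /d works from the answer files without connecting, /o writes out an XMLA deployment script, and /s logs progress):

Microsoft.AnalysisServices.Deployment.exe "MyCube.asdatabase" /s:"deploy.log" /o:"MyCube.xmla" /d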


Specify Valid Network Protocols

Steve Jones shows how to specify the set of network protocols people can use to connect to a SQL Server instance:

I ran across a question on network protocols recently, which is something I rarely deal with. Often the default setup for SQL Server is fine, but there are certainly times you should add or remove network connectivity according to your environment.

Microsoft’s guidance on protocols pushes you toward TCP/IP and that’s a good default.
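
Once the protocols are set, a quick way to confirm what existing connections are actually using (my query, not one from Steve’s post) is to check sys.dm_exec_connections:

SELECT session_id, net_transport, auth_scheme
FROM sys.dm_exec_connections;

The net_transport column reports Shared memory, Named pipe, or TCP for each connection.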


Delayed Durability Deletions

Melissa Connors looks at using Delayed Durability while deleting a large batch of records:

Recently, while considering possible use cases for Delayed Durability, it occurred to me that data loss might be entirely acceptable in cases where the data would not truly be lost. I have worked with a number of applications that have processes that purge old information from the database. If a purge process failed in these applications, data would simply live a little bit longer, and be purged the next time the process was successful – they have a recovery mechanism built in as it is. I decided to test Delayed Durability in a database with a long-running purge to observe the potential performance impact. I chose a process that was clearly contributing to transaction log waits, because that is where the real performance impact comes from when delaying durability. If you do not have notable waits or some level of a bottleneck there, you are not likely to improve anything simply by turning on this feature.

I was not aware that you could set durability at the transaction level; I was under the mistaken impression that once you flipped the switch, all transactions were subject to Delayed Durability.  Disk-heavy operations (like large batches of deletions) do seem like a good use case for this.
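
As a rough sketch of that transaction-level control (the database, table, and column names here are made up), you allow it at the database level and then opt in per commit:

ALTER DATABASE PurgeDemo SET DELAYED_DURABILITY = ALLOWED;

BEGIN TRANSACTION;
    DELETE TOP (10000)
    FROM dbo.AuditHistory
    WHERE CreatedDate < DATEADD(MONTH, -6, GETDATE());
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);

Setting the database option to FORCED is the flip-the-switch behavior; ALLOWED leaves the decision to each transaction.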


Result Sets

Kenneth Fisher learns and teaches us about RESULT SETS:

Quick definition. A result set is the output of a query. It could result in a one-row, one-column output or a 100+ column, million+ row output. Either way, that’s a result set. Note: you can have multiple result sets from a single object (stored procedure, function, etc.) call.

This was introduced in SQL Server 2012 and there are a couple of security-related scenarios in which RESULT SETS is helpful.  It also lets you rename columns in stored procedure calls, if you’re into that sort of thing.
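
A minimal sketch of the column-renaming scenario (the procedure and its columns are hypothetical):

EXEC dbo.GetCustomer @CustomerID = 42
WITH RESULT SETS
(
    (
        CustomerID   int,           -- must match the procedure's column count and order
        CustomerName varchar(100)   -- renamed from whatever the procedure calls it
    )
);

If the procedure’s actual output doesn’t match the declared shape, the call fails with an error, which is where the contract-enforcement value comes from.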


Clear The Query Store

Grant Fritchey shows how to clear the Query Store in SQL Server 2016:

While setting up example code for my presentation at SQL Cruise (which is going to be a fantastic event), I realized I wanted to purge all the data from my Query Store, just for testing. I did a series of searches to try to track down the information and it just wasn’t there. So, I did what anyone who can phrase a question in less than 140 characters should do, I posted a question to Twitter using the #sqlhelp hash tag.

You can also call EXEC sp_query_store_remove_query to remove a specific query from the Query Store.
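
For reference, both commands look like this (the database name is a placeholder):

ALTER DATABASE QueryStoreDemo SET QUERY_STORE CLEAR;   -- purge all Query Store data
EXEC sp_query_store_remove_query @query_id = 42;       -- remove a single query by its query_id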


Mirrored Backups

Sean McCown talks about mirrored backups:

By mirroring backups, you’re saying that you want to back up to 2 locations simultaneously.  So let’s say you need to back up your DBs to a local SAN drive, but you also need to send them to another data center in case something happens to your local SAN.  The way to do that in SQL is with mirrored backups, and the syntax looks like this:

BACKUP DATABASE MyDB TO DISK = 'G:\MyDB.trn' MIRROR TO DISK = '\\DC1\MyDB.trn'

So above you can see that SQL will write both of these files at once, and give you a good amount of redundancy for your DB backups.  However, this can go wrong when your network isn’t stable or when the link to the other data center is slow.  So you should only mirror backups when you can pretty much guarantee that it won’t fail or lag.  And as you can guess that’s a heavy burden to put on most networks.  In the situation last week that spawned this blog, the network went down for something like 9 hrs and caused the DB’s log to not be backed up that entire time, and hence the log grew and grew.  Now you’re in danger of bringing prod down and that’s clearly not what your backup strategy should do.

Sean talks about alternatives and then talks about how they’ve gotten around the problem with Minion Backup.  If you haven’t tried Minion Backup, it is well worth your time; it’s already a great product and I use it in a production environment I support.
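
Two notes if you try the syntax above: backup mirroring is an Enterprise Edition feature, and creating a new mirrored media set requires WITH FORMAT.  A fuller statement looks something like this (paths are placeholders):

BACKUP DATABASE MyDB
TO DISK = 'G:\Backups\MyDB.bak'
MIRROR TO DISK = '\\DC1\Backups\MyDB.bak'
WITH FORMAT;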


Finding Object Counts

SQLWayne shows how to break down counts of objects by type:

And while it did the trick, I was wanting, for no particular reason, to also have the total number of objects and the percentage.  Again, no particular reason.  It might be able to be done with a window function, but that is also something that I have limited familiarity with, so I decided to approach it as a CTE.  And it works nicely.  The objs CTE gives me a count of each object type while the tots CTE gives me the count of all objects.  By giving each CTE a column with the value of 1, it’s easy to join them together then calculate a percentage.

That’s one of the nicest things about SQL as a language:  you access metadata the same way you access regular data, so that technique can be used to query other data sets as well.
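
A sketch of the CTE approach described above (the column names and aliases are mine, not SQLWayne’s):

WITH objs AS
(
    SELECT o.type_desc, COUNT(*) AS ObjectCount, 1 AS JoinCol
    FROM sys.objects AS o
    GROUP BY o.type_desc
),
tots AS
(
    SELECT COUNT(*) AS TotalObjects, 1 AS JoinCol
    FROM sys.objects
)
SELECT
    objs.type_desc,
    objs.ObjectCount,
    tots.TotalObjects,
    CAST(100.0 * objs.ObjectCount / tots.TotalObjects AS decimal(5, 2)) AS PercentOfTotal
FROM objs
    INNER JOIN tots
        ON objs.JoinCol = tots.JoinCol
ORDER BY objs.ObjectCount DESC;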
