Press "Enter" to skip to content

Day: March 16, 2016

AT TIME ZONE

Tom LaRock discusses the AT TIME ZONE function in SQL Server 2016:

Of course you will need to know what you are allowed to use for the time zone name. Fortunately for us, this list is stored in the registry of the server. In other words, you can use whatever time zones are installed on the server. For a complete list you can query the sys.time_zone_info DMV.

If you work at a company with international dealings, you probably already have a time zone table somewhere, but this is a nice way of encapsulating possibly-slow time zone conversion and calculation operations.
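As a quick illustration, a minimal sketch looks something like this; the time zone name in the conversion is only an example, so pick one that sys.time_zone_info returns on your own server:

-- List the time zone names installed on this server (SQL Server 2016+).
SELECT name, current_utc_offset, is_currently_dst
FROM sys.time_zone_info;

-- Convert the current UTC time to a named time zone with AT TIME ZONE.
-- 'Central European Standard Time' is just an example name; use any
-- value returned by the query above.
SELECT SYSUTCDATETIME()
       AT TIME ZONE 'UTC'
       AT TIME ZONE 'Central European Standard Time' AS LocalTime;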


R And SSH Tunnels

Steph Locke shows how to set up an SSH tunnel from within R to connect to an external server:

Whilst down the rabbit hole, I discovered just in passing via a beanstalk article that there’s actually been a command line interface for PuTTY called plink. D’oh! This changed the whole direction of the solution to what I present throughout.

Using plink.exe as the command line interface for PuTTY, we can then connect to our remote network using the key pre-authenticated via Pageant. As a consequence, we can now use the shell() command in R to use plink. We can then connect to our database using the standard Postgres driver.

PuTTY is a must-have for any Windows box.


Get Directory Information For SSAS

Jens Vestergaard shows us how to get the Data, Log, Temp, and Backup directories for Analysis Services using PowerShell:

Just recently a reply was made to the Connect item, highlighting the fact that the current values of the Data/Log/Temp and Backup directories – meaning the currently configured values – are exposed through the Server.ServerProperties collection. According to the answer, only public property values are exposed.

Using PowerShell, we can now retrieve the desired information from any given instance of Analysis Services. Doing so would look something like this:

It’s good to know that this information is available via PowerShell.


Compression Performance

Rolf Tesmer digs into compression performance when building an index whose leading column has low cardinality:

That first one is a cracker – it hit me once when compressing a SQL Server table (600M+ rows) on a 64-core Enterprise SQL Server. After benchmarking several other data compression activities I thought I had a basic “rule of thumb” (based on GB data size and number of rows)… which just happened to be coincidence!

This also begs the question: why would you use low-selectivity indexes? Well, I can think of a few cases – but the one which stands out the most is the identification of a small number of rows within a greater collection – such as an index on TYPE columns (i.e., [ProcessingStatusFlag] CHAR(1) = [P]rocessed, [U]nprocessed, [W]orking, [F]ailed, etc.)

… AND SO – let’s do some testing to validate this puppy!

There’s a significant difference here, so check out Rolf’s post for the details.
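As a rough sketch of the kind of setup Rolf is testing (the table and column names here are hypothetical, not his actual benchmark schema), a low-selectivity leading column plus page compression looks something like this, with sp_estimate_data_compression_savings as a quick sanity check before committing:

-- Hypothetical table: a processing status flag with only a handful of
-- distinct values, i.e. a very low-cardinality column.
CREATE TABLE dbo.WorkQueue
(
    WorkQueueID          INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
    ProcessingStatusFlag CHAR(1) NOT NULL,   -- 'P', 'U', 'W', 'F'
    Payload              VARCHAR(200) NOT NULL
);

-- Low-selectivity nonclustered index, leading on the status flag,
-- built with page compression.
CREATE NONCLUSTERED INDEX IX_WorkQueue_ProcessingStatusFlag
    ON dbo.WorkQueue (ProcessingStatusFlag)
    WITH (DATA_COMPRESSION = PAGE);

-- Estimate the savings before committing to a compression setting.
EXEC sys.sp_estimate_data_compression_savings
     @schema_name      = 'dbo',
     @object_name      = 'WorkQueue',
     @index_id         = NULL,
     @partition_number = NULL,
     @data_compression = 'PAGE';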


Using The Default Trace

Jon Morisi shows how to use the default trace:

Oftentimes while troubleshooting an issue, you’ll want more details than you can find in the application log or SQL Log. In the background, SQL Server runs a default trace which includes a lot of items to help with troubleshooting, including (but not limited to) errors, warnings, and audit data. I often run the following script as a quick way to find additional details for “ERROR” items from the default trace.

Jon notes that the default trace has been put on the deprecation list, so keep that in mind if you do use it.
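The general shape of such a query, locating the default trace in sys.traces and reading it with fn_trace_gettable, looks roughly like this; the error filter and column list are illustrative rather than Jon's exact script:

-- Find the default trace file (is_default = 1 in sys.traces).
DECLARE @TracePath NVARCHAR(260);

SELECT @TracePath = [path]
FROM sys.traces
WHERE is_default = 1;

-- Read the trace file and pull back error-related events.
SELECT t.StartTime,
       te.name AS EventName,
       t.DatabaseName,
       t.ApplicationName,
       t.LoginName,
       t.TextData
FROM sys.fn_trace_gettable(@TracePath, DEFAULT) AS t
JOIN sys.trace_events AS te
  ON t.EventClass = te.trace_event_id
WHERE te.name LIKE '%Error%'   -- illustrative filter; adjust to the events you care about
ORDER BY t.StartTime DESC;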
