Press "Enter" to skip to content

Day: June 12, 2018

Exception Handling In Scala

Shivangi Gupta shows off the Either type in Scala:

How to get values from Either?

There are many ways, and we will talk about them one by one. One way to get values is by doing a left or right projection. We cannot perform operations such as map, filter, etc. directly on Either; Either provides left and right methods to get the left and right projections, and a projection allows us to apply functions like map and filter.

For example,

scala> val div = divide(14, 7)
div: scala.util.Either[String,Int] = Right(2)

scala> div.right
res1: scala.util.Either.RightProjection[String,Int] = RightProjection(Right(2))

When we applied right on the Either, it returned a RightProjection. Now we can extract the value from the right projection using get, but if there is no value, get will blow up at runtime with an exception.
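The excerpt doesn’t show how divide is defined. A minimal sketch consistent with the Right(2) result above (the Left branch and its error message are assumptions, not taken from the post), plus getOrElse as the safer alternative to a bare get:

def divide(a: Int, b: Int): Either[String, Int] =
  if (b == 0) Left("Cannot divide by zero")   // assumed error case
  else Right(a / b)

val div = divide(14, 7)             // Right(2)
div.right.get                       // 2
// div.left.get                     // would throw NoSuchElementException at runtime
div.right.getOrElse(0)              // 2
divide(1, 0).right.getOrElse(0)     // 0, no exception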

There’s more to Scala exception handling than just try-catch.
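As one quick illustration of what that looks like (a standard-library sketch, not taken from the linked post), scala.util.Try wraps a throwing expression in a value that can be inspected, given a default, or converted to an Either:

import scala.util.Try

val parsed = Try("42".toInt)      // Success(42)
val failed = Try("oops".toInt)    // Failure(java.lang.NumberFormatException)

parsed.getOrElse(0)               // 42
failed.getOrElse(0)               // 0
failed.toEither                   // Left(java.lang.NumberFormatException), Scala 2.12+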


Azure Data Lake Analytics Updates

Michael Rys has a boatload of new updates for Azure Data Lake:

The top items include expanding our built-in support for standard file formats with native Parquet support for extractors and outputters (in public preview) and ORC (in private preview)!

In addition, since the fast file set feature has now been generally released, we can consume hundreds of thousands of such files in bulk in a single EXTRACT statement. We will publish a blog post at a later date with much more detailed information on how this capability helps you process so many files efficiently and in a scalable way.

Important aspects of processing files at scale include:

  1. the ability to generate many files from a rowset in a single statement, providing a way to dynamically partition the data for future use with Hadoop or Spark, or to provide individual files for customers. This has been our top customer ask on the ADL Feedback forum, and now it is in private preview!

  2. the ability to handle many small files. We recommend that you make your files large enough for the processing to be efficient (300 MB to 4 GB is a good range), but often your file formats (e.g., images) or data ingestion pipelines (e.g., EventHub archives) cannot reach that size. Thus, we are adding the ability to group several files into a vertex to increase efficiency and lower the cost of your job (we have seen 10 to 30 times improvement in some customer jobs!).

Read on for the full changelog.


Listing SQL Server Instances On A Server

Lori Brown has a script which lists installed SQL Server instances:

It is pretty common for me to be asked to take over a server with SQL on it, sight unseen. Many times, shops may not be fully aware of whether additional instances are or have been installed on their server. Since I am working on a set of scripts that we can use for discovery on new systems, I thought it would be handy to have a query that would give me a list of the SQL instances that are installed, along with their versions.

I found that this kind of information lives in the registry, which is accessible via a couple of undocumented extended procedures. Let me give the disclaimer right here: undocumented procedures from Microsoft are not supported and should not be used for regular production work, because they can change or be removed at any time. That said, undocumented procedures can be pretty handy for some things; just use them with caution.

Click through for the script.


Right-Aligning Numbers In Management Studio

Daniel Hutmacher has a trick to right-align results in SQL Server Management Studio:

Here’s something I’ve found: the space character is roughly half the width of a typical number character. So replace every leading space with two spaces, and it will look really neat in the grid:

Click through for the script.  This one goes near the back of the toolbelt, but it can come in handy when trying to troubleshoot values by eyeball and without Excel.
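Daniel’s actual script is T-SQL (click through for it); sketched here in Scala only to match the code elsewhere on this page, with a hypothetical helper name, the idea is: pad the number to a fixed width, then double every leading space:

def rightAlign(n: Long, width: Int): String = {
  val padded  = ("%" + width + "d").format(n)    // e.g. "   123" for width 6
  val leading = padded.takeWhile(_ == ' ').length
  ("  " * leading) + padded.drop(leading)        // two spaces per leading space
}

rightAlign(123, 6)       // "      123", lines up under wider numbers in the grid
rightAlign(123456, 6)    // "123456"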


Automatic Tuning In SQL Server

Bob Ward has a post on automatic tuning in SQL Server 2017:

One of the key points I’ve been making to our customers about SQL Server on Linux is that the core database engine is the same as on Windows. Since Automatic Tuning is built into the core database engine, this feature works exactly the same on SQL Server on Linux.

In my demo of Automatic Tuning for SQL Server on Windows, I used the Windows Performance Monitor to show the workload performance when automatic tuning corrects a query plan regression problem. But what about Linux? Windows Performance Monitor will not work on that platform.

Turns out I have some smart people working with me at Microsoft. I got help from Pedro Lopes from our Tiger Team and the engineering team who built SQL Operations Studio to show automatic tuning using that tool.

  • Pedro helped me build scripts that would query the DMV, sys.dm_os_performance_counters, which runs on SQL Server on Linux as it does on Windows (because it is built into the core engine). These scripts store results from this DMV in a temp table, and then query the temp table.

  • The folks at SQL Operations Studio showed me how to collect the result set from the temp table in a manner that would allow me to show the data in a Time Series chart in the tool. Kind of like a static perfmon chart of SQL Server counters.

With a bonus shout out to Tracy Boggiano.


Playing Blackjack With Power BI

Phillip Seamark goes and creates a blackjack game in Power BI:

The last of the three data tables in the model used to control Player 1 is the ‘P1 Turn’ data table. This is simply a single-column table with ten rows, numbered 1 through 10. It is used in conjunction with a slicer that keeps track of which turn Player 1 is up to. A series of five bookmarks will snapshot this slicer in five different states, each with a different value selected, which is how the game keeps track of progress.

A [P1 Card Filter] calculated measure keeps track of the value selected in that slicer; this measure is used as a filter on the table visual that reveals cards for Player 1.

It’s more a toy solution than an actual game, but it’s interesting to see.


Reporting Services Templates In Visual Studio 2017

Greg Low points out an oddity in where Reporting Services templates are located with the Visual Studio 2017 version of the tools:

Templates are just report files (ie: RDL files) but placed in a special folder on your system.

If you’re starting with a blank page to create a report, you’re doing it wrong.

But where do these templates come from? In the Visual Studio 2015 version of the tools, they were here:

C:\Program Files (x86)\Microsoft Visual Studio 14.0\
Common7\IDE\PrivateAssemblies\ProjectItems\ReportProject

I found after installing the VS2017 version of the tools though, that they weren’t there any more.

Greg has found them in two separate locations, so read on to learn more.


Collecting Plan Metrics With Trace Flag 7412

Grant Fritchey shows a lightweight way of capturing plan metrics:

I place a lot of emphasis on capturing actual execution plans because of the runtime metrics, but with Trace Flag 7412, we don’t need the plan. This is great news, because capturing execution plans, even using extended events, is an expensive proposition. However, using either the query_thread_profile event or Trace Flag 7412, we can get the runtime metrics without the plan.

Very interesting.  But as Grant points out, this is not a cost-free operation, so use as needed but not willy-nilly.
