Press "Enter" to skip to content

Author: Kevin Feasel

Joining Lists of Values in T-SQL

Jason Brimhall shows how you can build a list of values using the table value constructor and join to it:

The table value constructor is basically like a virtual table, not too different from a CTE or a subquery (in that they are all virtual tables of sorts). The table value constructor, however, can be combined with either of those other types, and is a set of row expressions that get put into this virtual table in a single DML statement.

It’s one of the nicer things SQL Server 2008 gave us.
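As a quick illustration (the table and column names here are made up), a join against a table value constructor looks something like this:

-- Map status codes to friendly names by joining to a
-- table value constructor instead of creating a lookup table.
SELECT
    o.OrderID,
    o.StatusCode,
    s.StatusName
FROM dbo.Orders AS o
    INNER JOIN (VALUES
        (0, N'Pending'),
        (1, N'Shipped'),
        (2, N'Delivered'),
        (3, N'Cancelled')
    ) AS s (StatusCode, StatusName)
        ON s.StatusCode = o.StatusCode;

The same VALUES clause also works as the source of a multi-row INSERT or inside a derived table, which is what makes it handy for small, ad hoc lookup lists.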

Date Columns and Query Folding in Power BI

Marc Lelijveld discovered a case where transforming a datetime to a date can break query folding:

While working on projects, it is always my goal to keep query folding active as long as possible, simply because it improves my dataset refresh timings and reduces the load executed on the Power BI side.

While working on a project last week, I found something interesting in default Power BI functionality which breaks query folding, but which can be avoided by slightly changing the Power Query code yourself or by picking another option which produces exactly the same result. So, this week, a short blog post to make you aware of things which might break query folding.

Click through for two scenarios, one where a transformation can break query folding and one where query folding is still fine.

Installing SQL Server on Windows Containers

Mohammad Darab shows how you can install SQL Server on a Windows Container:

Today, Microsoft released Windows Containers, currently in the Early Adoption Program. You can use the same Docker Desktop for Windows, but now you can use Windows Containers to deploy SQL Server (as you will see below).

This is primarily for dev/test environments; it is in no way production-ready. Think about a developer asking for a SQL instance that they can run their code against: you can now easily and quickly spin up a container running SQL Server. Once they are done, you can stop it or remove it.

The downside is that your Docker Desktop for Windows is now stuck running Windows containers…

Collecting the Last Actual Plan for a Query

Gail Shaw explains a new Dynamic Management Function which retrieves the latest execution plan for a particular query:

Getting the actual execution plan (that is, the plan with run-time statistics) for a query from an application has always been a little difficult. It’s fine if you can get the query running in Management Studio and reproduce the behaviour from the app, but that can be difficult.

There’s the query_post_execution_showplan event in Extended Events, but that’s a pretty heavy event and not something that I’d like to run on a busy server.

No more! SQL 2019 adds a new plan-related function to get the last actual plan for a query: sys.dm_exec_query_plan_stats.

Read on to see how to configure this, as well as a demo.
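As a rough sketch of what that looks like (assuming SQL Server 2019 with the LAST_QUERY_PLAN_STATS database-scoped configuration turned on, and using a made-up search string), you can cross apply the new function against the plan cache:

-- Turn on collection of last actual plans for the current database (SQL Server 2019).
ALTER DATABASE SCOPED CONFIGURATION SET LAST_QUERY_PLAN_STATS = ON;

-- Pull the last actual execution plan for matching cached queries.
SELECT
    st.text AS query_text,
    qps.query_plan AS last_actual_plan
FROM sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
    CROSS APPLY sys.dm_exec_query_plan_stats(cp.plan_handle) AS qps
WHERE st.text LIKE N'%MyInterestingQuery%';  -- hypothetical filter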

Querying Essbase from Power BI

Kellyn Pot’vin-Gorman shows how to query data from an Oracle Essbase cube in Oracle Analytics Cloud from Power BI:

The OAC environment that Opal gave me access to possessed an example schema/data based on an Audio-Video store’s revenue for multiple years.

I’d never worked with the OAC before, but I was quickly able to find five methods to connect Power BI to it, either to import or to direct connect to the data:

1. Data Direct offers a REST API to connect to the Oracle Cloud.
2. Use the web interface with Power BI and “scrape the website”, allowing PBI to build the table from examples.
3. Connect with an ODBC driver.
4. Via the OAC interface, export to Excel and then import into Power BI.
5. Via the OAC interface, export to a table and then import into Power BI as a CSV file.

So, uh, yeah, you can do it. At least five ways.

Scala 2.13 Changes

Anmol Sarna takes us through what’s new in Scala 2.13:

Last but not least, the team has invested heavily in compiler speedups during the 2.13 cycle, which resulted in some major changes with respect to the compiler.

Compiler performance in 2.13 is 5-10% better compared to 2.12, thanks mainly to the new collections.

There are a lot of changes in this version. I wonder how long before Spark supports it fully.

Thoughts on Hadoop’s Future

Mark Litwintschik ties together a set of thoughts on the present and future of Hadoop:

At no point in Hadoop’s history has there been such a rich variety of features being offered as today and never before has it been so stable and battle-tested.

Hadoop projects are made up of millions of lines of code which have been written by thousands of contributors. In any given week there are hundreds of developers working on the various projects. Most commercial database offerings are lucky to have a handful of engineers making any significant improvements to their code bases every week.

Mark takes a broad ecosystem approach (which I fully endorse) and so he sees the glass as more than half-full.

dbatools and Error Handling

Shane O’Neill takes us through some of the error-handling dynamics available in dbatools:

PowerShell errors normally contain useful information on what went wrong. With this information, the “what went wrong” can be fixed.

That being said, if you are writing PowerShell scripts and not raising valid error messages then I highly advise you to go back and fix that.

dbatools raises these error messages as friendly warning messages since we’ve found people will read a warning message quicker than they will read an error message.

There are several options available for handling errors, including viewing them as warnings, viewing them as errors, and capturing the error text in variables.

Handling Spatial Data in Cosmos DB

Hasan Savran gives us some tips on storing spatial data in Cosmos DB:

Importing spatial data into CosmosDB can be a challenge. CosmosDB is not a relational database and you may need to change your data model structure to add spatial data. You cannot create a new container for spatial data and plan to join this container to your other containers. There are free tools which might help with GeoJSON conversion, but you may still need to add the converted GeoJSON data into your data models. Spatial data becomes very powerful when you find a way to join it with your application’s data.

In the following example, I am going to download the hurricanes from the NOAA website. The data is in CSV format, so we may need to transform it to create a good data model for CosmosDB. I downloaded all of the hurricane data for 2005, which was the year of Hurricane Katrina. The first thing I did was to change the names of the columns to make them more user-friendly. I have used the following names for the columns. Here is a sample row from the CSV file.

Click through for the example.
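For a sense of what the end result might look like, here is a purely hypothetical document shape with a GeoJSON Point embedded alongside the rest of the row’s attributes (all property names are invented for illustration, and GeoJSON lists longitude before latitude):

{
  "id": "katrina-2005-obs-001",
  "stormName": "KATRINA",
  "observationTime": "2005-08-28T12:00:00Z",
  "maxWindKnots": 145,
  "location": {
    "type": "Point",
    "coordinates": [-89.6, 26.3]
  }
}

Once the GeoJSON lives inside the same document as the business attributes, Cosmos DB’s spatial functions can filter on it without any cross-container joins.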

DAX Median & 2 Billion+ Rows

Chris Webb has bad news for people with tables holding 2 billion-plus rows in a Tabular model:

What’s more, the error will always occur even if you apply a filter to the table that returns less than two billion rows. The same problem occurs with some other functions, such as Percentile(), but it’s worth pointing out that the vast majority of DAX functions work as normal with tables with more than two billion rows – for example, in the pbix file used here the Sum() and CountRows() functions not only work fine but return instantly.

I haven’t seen that many Power BI examples with 2 billion or more rows in a table, but it can be an impediment when trying to use Analysis Services Tabular in cases with enormous amounts of data.
