Press "Enter" to skip to content

Month: November 2019

Installing Power BI Gateway

Paulina Nowinska shows how to install the Power BI Gateway in its two separate modes:

The on-premises (standard) mode was created for a multi-developer environment: multiple people can work on the same data gateway, as long as the administrator has authorized them beforehand. With this data gateway, you will have much more fun than with personal mode. Why? Because standard mode supports more than just Power BI, unlike its sibling, personal mode. Here you can provide quick and secure data transfer between data which is not in the cloud and Microsoft cloud services: PowerApps, Microsoft Flow, Azure Analysis Services, Logic Apps, and of course Power BI. Depending on your needs, you can choose one of them.

Click through for step-by-step instructions on both techniques.

Comments closed

View Filters and Short-Circuiting

Reitse Eskens takes us through a fun oddity with short-circuiting and views:

The question from my coworker was simple: why is this happening? Because he’s selecting from the view, his instinct is that the returned result set should be filtered within the view first, and that this result set can then be narrowed down further by the outer query.

It’s interesting that there’s deterministic behavior both ways. My recollection is that ANSI SQL does not honor short-circuiting, as all filters are considered to happen at the same time, and thus any ordering is valid. But in practice, there are places where different code bases end up with stable short-circuiting as an implementation detail.
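
To make the scenario concrete, here is a minimal sketch (my own example, not the one from Reitse's post): a view whose WHERE clause looks like it should run first, plus an outer predicate that only works if the view's filter has already removed the bad rows.

-- A view that is "supposed to" strip out non-numeric rows.
CREATE TABLE dbo.MixedValues (Val varchar(10) NOT NULL);
INSERT INTO dbo.MixedValues (Val) VALUES ('1'), ('17'), ('oops'), ('42');
GO
CREATE VIEW dbo.NumericValues
AS
SELECT Val
FROM dbo.MixedValues
WHERE ISNUMERIC(Val) = 1;
GO
-- The optimizer may evaluate CAST(Val AS int) before (or together with) the
-- view's ISNUMERIC filter, so this can fail with a conversion error even
-- though the view "already filtered" the data: evaluation order is not guaranteed.
SELECT Val
FROM dbo.NumericValues
WHERE CAST(Val AS int) > 10;
GO
-- TRY_CAST sidesteps the problem because it returns NULL instead of erroring.
SELECT Val
FROM dbo.NumericValues
WHERE TRY_CAST(Val AS int) > 10;

The safe approach is not to rely on evaluation order at all, for example by using TRY_CAST as in the last query.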

Comments closed

Fun with CHAR(0)

Kenneth Fisher learns a bit about the 0 byte:

Ok, now things are getting interesting. An ASCII value of 0? I’ve never heard of that. I honestly didn’t know it was possible. As it happens, yes, it’s a real value and in SSMS it does a few strange things.

In the comments Denis Gobo is right: the 0 byte is the null terminator, which should appear at the end of a variable-length string to indicate that there’s nothing more to read there.
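
If you want to poke at it yourself, here is a quick sketch; how the value renders will depend on your client and output mode, which is part of the oddity Kenneth describes.

-- Embed a 0 byte in a string and compare what the string functions report.
DECLARE @s varchar(20) = 'abc' + CHAR(0) + 'def';

SELECT
    @s AS StringWithNullByte,                      -- rendering varies by client/output mode
    LEN(@s) AS CharCount,                          -- CHAR(0) still counts as a character
    DATALENGTH(@s) AS ByteCount,
    ASCII(SUBSTRING(@s, 4, 1)) AS FourthCharCode;  -- returns 0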

Comments closed

Your Power BI Administrator’s Privileges

Melissa Coates goes into exactly what it is that a Power BI admin can see and do:

I wrote about (and updated) this topic previously, but this is so important that it warrants revisiting. So let’s have a quick chat about what privileges a Power BI administrator has with respect to accessing data throughout the Power BI tenant.

All metadata throughout the tenant is available to the Power BI administrator (e.g., if they want to enumerate a list of workspaces, reports, dashboards, etc. using the APIs). So, metadata is easily discoverable but — technically speaking — a Power BI administrator cannot access datasets in Power BI unless they have permission to that workspace. However…

Read the whole thing.

Comments closed

Limitations with Memory-Optimized TempDB Metadata

Milos Radivojevic takes us through a few limitations in memory-optimized TempDB metadata tables in SQL Server 2019:

When we are about to enable a new feature, one of the first things we have to check is whether enabling this feature will break the existing code. Enabling this feature could bring two breaking changes: one is related to columnstore indexes, the other to transactions with memory-optimized tables.

I don’t think these limitations are that game-breaking, but if you’re regularly loading large tables in tempdb and using columnstore indexes on them, you might be in for a nasty surprise.
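
For reference, enabling the feature and checking its status look like this (the setting takes effect after an instance restart), and the last batch shows the sort of pattern the columnstore limitation bites.

-- Enable memory-optimized tempdb metadata; requires an instance restart.
ALTER SERVER CONFIGURATION
SET MEMORY_OPTIMIZED TEMPDB_METADATA = ON;
GO

-- Check whether it is currently in effect.
SELECT SERVERPROPERTY('IsTempdbMetadataMemoryOptimized') AS IsTempdbMetadataMemoryOptimized;
GO

-- With the feature enabled, creating a columnstore index on a temp table is
-- one of the documented limitations and is expected to fail.
CREATE TABLE #BigStage (Id int NOT NULL, Amount decimal(18, 2) NOT NULL);
CREATE CLUSTERED COLUMNSTORE INDEX CCI_BigStage ON #BigStage;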

Comments closed

Evolution of the Data Lake

Jim Wankowski takes us through the history of data lakes:

It is important to understand the difference between data lakes and data warehouses. A data warehouse is highly structured: much effort goes into developing schemas and hierarchies upfront, before the data is loaded into the warehouse. There is no hierarchy or structure to the way data is stored in a data lake; the structure is applied afterward. There can be multiple schemas applied to the same data in a data lake.

Read on to learn how the data lake concept has evolved over the past few years.

Comments closed

Problems with SQL Server Index Recommendations

Brent Ozar has some grievances to air:

And if you don’t have time to review one query at a time, SQL Server makes wide-ranging analysis easy too, letting you query dynamic management views like sys.dm_db_missing_index_details to get index recommendations for the entire database. You can even use tools like sp_BlitzIndex to analyze and report on ’em.

Except…

Both of these – the index recommendations in the query plan and the ones in the DMVs – suffer from some pretty serious drawbacks.

Click through for the list. There are some doozies in there.
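
For context, this is roughly what querying the missing-index DMVs by hand looks like. Treat the output as a starting point rather than a to-do list, since the drawbacks Brent describes apply to anything built on these views.

SELECT
    mid.statement AS TableName,
    mid.equality_columns,
    mid.inequality_columns,
    mid.included_columns,
    migs.user_seeks,
    migs.user_scans,
    migs.avg_total_user_cost,
    migs.avg_user_impact
FROM sys.dm_db_missing_index_details AS mid
    INNER JOIN sys.dm_db_missing_index_groups AS mig
        ON mig.index_handle = mid.index_handle
    INNER JOIN sys.dm_db_missing_index_group_stats AS migs
        ON migs.group_handle = mig.index_group_handle
WHERE mid.database_id = DB_ID()
ORDER BY migs.avg_total_user_cost * migs.avg_user_impact * (migs.user_seeks + migs.user_scans) DESC;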

Comments closed

Use SQL for XML and JSON Creation

Lukas Eder argues that if your data already lives in a relational database and you need to get it into JSON or XML format, just use SQL for that:

In English: We need a list of actors, and the film categories they played in, and grouped in each category, the individual films they played in.

Let me show you how easy this is with SQL Server (all other database dialects can do it these days; I just happen to have a SQL Server example ready):

Lukas makes a great point and has a FAQ to follow up on it. If there’s a reason for mapping at a higher layer—if you’re actually adding value rather than building out a set of converters—that’s one thing, but if you’re just accepting a data set and returning a JSON blob…well, your database product can do that too.
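
To give a flavor of the technique in T-SQL, here is my own sketch against a hypothetical film-rental schema along the lines of the one in the quote (not Lukas's exact query): nested FOR JSON PATH subqueries produce the actor / category / film hierarchy directly.

-- Table and column names here are illustrative assumptions, not a real schema.
SELECT
    a.first_name,
    a.last_name,
    (
        SELECT
            c.name AS category,
            (
                SELECT f.title
                FROM dbo.film AS f
                    INNER JOIN dbo.film_actor AS fa ON fa.film_id = f.film_id
                    INNER JOIN dbo.film_category AS fc ON fc.film_id = f.film_id
                WHERE fa.actor_id = a.actor_id
                  AND fc.category_id = c.category_id
                FOR JSON PATH
            ) AS films
        FROM dbo.category AS c
        WHERE EXISTS
        (
            SELECT 1
            FROM dbo.film_category AS fc
                INNER JOIN dbo.film_actor AS fa ON fa.film_id = fc.film_id
            WHERE fc.category_id = c.category_id
              AND fa.actor_id = a.actor_id
        )
        FOR JSON PATH
    ) AS categories
FROM dbo.actor AS a
FOR JSON PATH;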

Comments closed

Optimizing for Sequential Keys

Milos Radivojevic is excited about OPTIMIZE_FOR_SEQUENTIAL_KEY:

The results show that the feature should be used for tables where latch convoys already happen or are about to happen. In not-so-heavy workloads, it brings a small overhead. You can see that inserts with 100 parallel sessions are 16% slower for a table with the optimized sequential key. However, when you hit the convoy, the feature definitely helps.

This graf is critical: if you don’t have high enough concurrency on the table, insertion can be a little slower than otherwise, so don’t go setting this for every table.
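
For reference, here is what turning the option on looks like, both at create time and on an existing index once you have actually observed last-page insert contention (a sketch; the table and index names are made up).

-- New table with the option set on the clustered primary key (SQL Server 2019+).
CREATE TABLE dbo.Orders
(
    OrderId bigint IDENTITY(1, 1) NOT NULL,
    OrderDate datetime2(3) NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderId)
        WITH (OPTIMIZE_FOR_SEQUENTIAL_KEY = ON)
);
GO

-- Or flip it on an existing index after you see PAGELATCH_EX convoys.
ALTER INDEX PK_Orders ON dbo.Orders
SET (OPTIMIZE_FOR_SEQUENTIAL_KEY = ON);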

Comments closed