Author: Kevin Feasel

Parameterized Visibility In SSRS Reports

Monica Rathbun shows how to show or hide clusters of columns in Reporting Services reports:

Ever had users come to you and request another version of a report just to add another field and group data differently? Today was such a day for me. I really don’t like having multiple versions of the same report out there. So, I got a little fancy with the current version of the report and added a parameter, then used expressions to group the data differently and hide columns. For those new to SSRS, I’ve embedded some links to MSDN to help you along the way.

This is an easy-to-follow, step-by-step guide.
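
Monica’s trick rests on the fact that a column’s Hidden property accepts an expression over a report parameter. A minimal sketch, assuming a hypothetical Boolean parameter named ShowDetail:

    =Not Parameters!ShowDetail.Value

Set that expression on each column in a cluster and a single parameter toggles the whole group.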

Synchronous Or Asynchronous Stats Updates

SQL Scotsman explains synchronous versus asynchronous stats updates:

With that said, it would seem that asynchronous statistics are better suited to OLTP environments and synchronous statistics are better suited to OLAP environments.  As synchronous statistics are the default, though, people are reluctant to change this setting without good reason.  I’ve worked exclusively in OLTP environments over the last few years and have never seen asynchronous statistics rolled out as the default.  I have personally been bitten by synchronous statistic updates on large tables causing query timeouts, which I resolved by switching to asynchronous statistic updates.

This is an interesting, nuanced take on the issue.  My bias is toward asynchronous stats updates because I have been burned, but it’s interesting to read someone thinking through the implications of this seemingly simple choice.
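
If you do decide asynchronous updates fit your workload, it’s a single database-level setting. A minimal sketch (the database name is hypothetical; note that AUTO_UPDATE_STATISTICS must stay on for the async option to have any effect):

    -- Check the current auto-stats settings
    SELECT name, is_auto_update_stats_on, is_auto_update_stats_async_on
    FROM sys.databases
    WHERE name = N'YourDatabase';

    -- Let statistics updates run asynchronously instead of blocking queries
    ALTER DATABASE YourDatabase SET AUTO_UPDATE_STATISTICS_ASYNC ON;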

Passing Parameters To SQL Queries Via Power BI

Chris Webb shows how to use the Value.NativeQuery() function to pass parameters to SQL Server queries:

It looks like, eventually, this will be the way that any type of ‘native’ query (ie a query that you write and give to Power Query, rather than a query that is generated for you) is run against any kind of data source – instead of the situation we have today where different M functions are needed to run queries against different types of data source. I guess at some point the UI will be updated to use this function. I don’t think it’s ‘finished’ yet either, because it doesn’t work on Analysis Services data sources, although it may work with other relational data sources – I haven’t tested it on anything other than SQL Server and SSAS. There’s also a fourth parameter for Value.NativeQuery() that can be used to pass data source specific options, but I have no idea what these could be and I don’t think there are any supported for SQL Server. It will be interesting to see how it develops over the next few releases.

It’s good to know that you can parameterize queries now.
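
Chris’s examples are in M, but it’s worth knowing what SQL Server sees on the other end: a parameterized Value.NativeQuery() call shows up as a parameterized batch. A rough sketch of the server-side shape (table and parameter names are hypothetical):

    EXEC sp_executesql
        N'SELECT * FROM dbo.DimDate WHERE DateKey BETWEEN @MinDate AND @MaxDate',
        N'@MinDate int, @MaxDate int',
        @MinDate = 20030101,
        @MaxDate = 20030131;

That shape matters: parameterized batches get plan reuse, which concatenated ad hoc SQL does not.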

The Limits Of SP1

Parikshit Savjani explains limitations in SQL Server 2016 SP1:

With the recent announcement of SQL Server 2016 SP1, we announced the consistent programmability experience for developers and ISVs, who can now maintain a single code base and build intelligent database applications which scale across all the editions of SQL Server. The processor, memory, and database size limits do not change and remain as-is in all editions, as documented in the SQL Server editions page. We have made the following changes in our documentation to accurately reflect the memory limits on lower editions of SQL Server. This blog post is intended to clarify and provide more information on the memory limits starting with SQL Server 2016 SP1 on Standard, Web, and Express Editions of SQL Server.

The development space has been expanded, but there’s still good reason for enterprises to use Enterprise Edition.
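
As a concrete example of that expanded development space, previously Enterprise-only features such as columnstore indexes now work on Standard and Express with 2016 SP1, subject to each edition’s memory caps. A sketch with a hypothetical table:

    -- Prior to 2016 SP1, this required Enterprise Edition
    CREATE TABLE dbo.SalesFact
    (
        SaleDate  date  NOT NULL,
        ProductID int   NOT NULL,
        Amount    money NOT NULL
    );

    CREATE CLUSTERED COLUMNSTORE INDEX CCI_SalesFact ON dbo.SalesFact;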

ETL With Spark

Eric Maynard demonstrates that moving data across Hadoop clusters can be sped up by using Spark:

By leveraging Spark for distribution, we can achieve the same results much more quickly and with the same amount of code. By keeping data in HDFS throughout the process, we were able to ingest the same data as before in about 36 seconds. Let’s take a look at the Spark code which produced results equivalent to the bash script shown above — note that a more parameterized version of this code, and of all code referenced in this article, can be found below in the Resources section.

Read the whole thing.

SSAS Tabular RAM Requirements

Bill Anton looks at ensuring your Tabular server has enough RAM:

In addition to being an “in-memory” technology, Analysis Services Tabular is also a “column-store” technology, which means all the values in a table for a single column are stored together. As a result – and this is especially true for dimensional models – we are able to achieve very high compression ratios. On average, you can expect to see compression ratios anywhere from 10x-100x depending on the model, data types, and value distribution.

What this ultimately means is that your 2 TB data mart will likely only require between 20 GB (low-end) and 200 GB (high-end) of memory. That’s pretty amazing – but still leaves us with a fairly wide margin of uncertainty. In order to further reduce the level of uncertainty, you will want to take a representative sample from your source database, load it into a model on a server in your DEV environment, and calculate the compression factor.

Read the whole thing; Bill has several factors he considers when sizing a machine.
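
The arithmetic behind Bill’s sampling approach is straightforward; here’s a sketch with hypothetical numbers:

    -- Suppose a 20 GB representative sample compresses to 0.5 GB in Tabular
    DECLARE @SourceSizeGB   decimal(10,2) = 2048;  -- the full 2 TB data mart
    DECLARE @SampleSourceGB decimal(10,2) = 20;    -- sample size in the source database
    DECLARE @SampleModelGB  decimal(10,2) = 0.5;   -- the same sample loaded into Tabular

    -- 40x compression in this example
    DECLARE @CompressionFactor decimal(10,2) = @SampleSourceGB / @SampleModelGB;

    SELECT @SourceSizeGB / @CompressionFactor AS EstimatedModelSizeGB;  -- about 51 GB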

SQL Server Port Changes

Steve Jones shows how to change the port of your SQL Server instance:

Notice that I have multiple instances here, so I need to choose one. Once I do, I see the protocols on the right. In this case, I want to look at the properties of TCP/IP, which is where I’ll get the port.

If I look at properties, I’ll start with the Protocol tab, but I want to switch to the IP Addresses tab. In here, I’ll see an entry for each of the IPs my instance is listening on. I can see which ones are Active as well as the port. In my case, I have these set to dynamic ports.

My rules of thumb, which might differ from your rules of thumb:  disable the Browser, don’t change off of 1433 for a single instance, and hard-code ports if you happen to be using named instances.  There’s a small argument in favor of “hiding” your instance by putting it onto a higher port (i.e., 50000+), but that’s not a great way of protecting a system, as an attacker can run nmap (or any other port scanner) and find your instance.  The major exception to this is if you also have something like honeyports set up.  In that case, changing the port number can increase security, and will almost definitely increase the number of developers who accidentally get blackholed from the server.
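
Whichever way you go, it’s worth confirming which port the instance actually landed on; one quick check from a TCP connection:

    -- Returns the TCP port of the current connection
    -- (NULL if you connected over shared memory or named pipes)
    SELECT local_tcp_port
    FROM sys.dm_exec_connections
    WHERE session_id = @@SPID;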

Data Migration Assistant Timeouts

Kenneth Fisher shows that the Data Migration Assistant developers thought ahead:

I’ve been really excited about the new Data Migration Assistant (DMA) since I first heard about it. One of the things I like best about it is that unlike the old Upgrade Advisor it doesn’t have to be run on the server being upgraded. You can run it against any number of instances from a single workstation. The other day I was working from home and tried running the DMA against a couple of moderate-sized databases (about 1.25 TB total), and I consistently got timeout errors.

Click through for the solution.  I’d prefer “have the queries be quick enough not to require this change” be the solution, but I don’t know exactly which queries they’re using, and some DMVs/DMFs can be quite slow.

Replicating To Azure SQL DB

Jeffrey Verheul shows how to enable replication from your on-prem SQL Server up to Azure SQL DB:

Replication to another on-premise instance is easy. You just follow the steps in the wizard, it works out-of-the-box, and the chances of this process failing are small. With replicating data to an Azure SQL database it’s a bit more of a struggle. Just one single word took me a few HOURS of investigation and a lot of swearing…

The magic word is “secure.”  Read the whole thing if you’re thinking of migrating an app to use Azure SQL DB and want to minimize downtime, or if you just want that extra level of protection that having a copy of your database out of the data center can give you.
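
For context, an Azure SQL DB subscriber has to be a push subscription addressed by its full DNS name, so the setup call at the publisher is shaped roughly like this sketch (all names hypothetical):

    -- Run at the publisher; Azure SQL DB can only be a push subscriber
    EXEC sp_addsubscription
        @publication       = N'MyPublication',
        @subscriber        = N'myserver.database.windows.net',
        @destination_db    = N'MyAzureDB',
        @subscription_type = N'Push';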
