Press "Enter" to skip to content

December 2016

The Limits Of SP1

Parikshit Savjani explains limitations in SQL Server 2016 SP1:

With the recent announcement of SQL Server 2016 SP1, we announced the consistent programmability experience for developers and ISVs, who can now maintain a single code base and build intelligent database applications which scale across all the editions of SQL Server. The processor, memory, and database size limits do not change and remain as-is in all editions, as documented in the SQL Server editions page. We have made the following changes in our documentation to accurately reflect the memory limits on lower editions of SQL Server. This blog post is intended to clarify and provide more information on the memory limits starting with SQL Server 2016 SP1 on Standard, Web, and Express Editions of SQL Server.

The development space has been expanded, but there’s still good reason for enterprises to use Enterprise Edition.
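As a quick illustration of what that single code base means, here is a hedged T-SQL sketch (object names are hypothetical) combining partitioning and page compression, two features which opened up to Standard and Express with 2016 SP1:

```sql
-- Hypothetical objects; with SQL Server 2016 SP1, partitioning and data
-- compression run on Standard and Express, not just Enterprise.
CREATE PARTITION FUNCTION pfOrderDate (date)
    AS RANGE RIGHT FOR VALUES ('2016-01-01', '2017-01-01');

CREATE PARTITION SCHEME psOrderDate
    AS PARTITION pfOrderDate ALL TO ([PRIMARY]);

CREATE TABLE dbo.Orders
(
    OrderID   int   NOT NULL,
    OrderDate date  NOT NULL,  -- partitioning column, part of the PK
    Amount    money NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY (OrderDate, OrderID)
) ON psOrderDate (OrderDate)
  WITH (DATA_COMPRESSION = PAGE);  -- formerly Enterprise-only
```

The same script now runs unchanged from Express up through Enterprise; only the scale limits differ.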


ETL With Spark

Eric Maynard demonstrates that moving data across Hadoop clusters can be sped up by using Spark:

By leveraging Spark for distribution, we can achieve the same results much more quickly and with the same amount of code. By keeping data in HDFS throughout the process, we were able to ingest the same data as before in about 36 seconds. Let’s take a look at the Spark code which produced equivalent results as the bash script shown above — note that a more parameterized version of this code, and of all code referenced in this article, can be found below in the Resources section.

Read the whole thing.


SSAS Tabular RAM Requirements

Bill Anton looks at ensuring your Tabular server has enough RAM:

In addition to being an “in-memory” technology, Analysis Services Tabular is also a “column-store” technology which means all the values in a table for a single column are stored together. As a result – and this is especially true for dimensional models – we are able to achieve very high compression ratios. On average, you can expect to see compression ratios anywhere from 10x-100x depending on the model, data types, and value distribution.

What this ultimately means is that your 2 TB data mart will likely require between 20 GB of memory (low-end) and 200 GB of memory (high-end). That’s pretty amazing – but still leaves us with a fairly wide margin of uncertainty. In order to further reduce the level of uncertainty, you will want to take a representative sample from your source database, load it into a model on a server in your DEV environment, and calculate the compression factor.

Read the whole thing; Bill has several factors he considers when sizing a machine.
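As a back-of-the-napkin version of that sizing exercise, here's a small T-SQL sketch. The numbers are purely illustrative; measure your own compression factor from a sample load as Bill describes:

```sql
-- Illustrative numbers only.  A common rule of thumb is to leave roughly
-- 2x the model size available, since processing can hold a second copy
-- of the model in memory.
DECLARE @SourceSizeGB      decimal(10, 2) = 2048;  -- the 2 TB data mart
DECLARE @CompressionFactor decimal(10, 2) = 25;    -- measured from a sample load

SELECT @SourceSizeGB / @CompressionFactor     AS EstimatedModelSizeGB,
       2 * @SourceSizeGB / @CompressionFactor AS SuggestedRamForProcessingGB;
```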


SQL Server Port Changes

Steve Jones shows how to change the port of your SQL Server instance:

Notice that I have multiple instances here, so I need to choose one. Once I do, I see the protocols on the right. In this case, I want to look at the properties of TCP/IP, which is where I’ll get the port.

If I look at properties, I’ll start with the Protocol tab, but I want to switch to the IP Addresses tab. In here, I’ll see an entry for each of the IPs my instance is listening on. I can see which ones are Active as well as the port. In my case, I have these set to dynamic ports.

My rules of thumb, which might differ from your rules of thumb:  disable the Browser, don’t change off of 1433 for a single instance, and hard-code ports if you happen to be using named instances.  There’s a small argument in favor of “hiding” your instance by putting it onto a higher port (i.e., 50000+), but that’s not a great way of protecting a system, as an attacker can run nmap (or any other port scanner) and find your instance.  The major exception to this is if you also have something like honeyports set up.  In that case, changing the port number can increase security, and will almost definitely increase the number of developers who accidentally get blackholed from the server.
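If you just want to confirm which port an instance ended up on without opening Configuration Manager, a quick check from a query window (assuming you are connected over TCP) is:

```sql
-- Shows the address and port of the current connection; local_tcp_port
-- comes back NULL for shared memory or named pipes connections.
SELECT local_net_address, local_tcp_port
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;
```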


Data Migration Assistant Timeouts

Kenneth Fisher shows that the Data Migration Assistant developers thought ahead:

I’ve been really excited about the new Data Migration Assistant (DMA) since I first heard about it. One of the things I like best about it is that, unlike the old Upgrade Advisor, it doesn’t have to be run on the server being upgraded. You can run it against any number of instances from a single workstation. The other day I was working from home and tried running the DMA against a couple of moderately sized databases (about 1.25 TB total), and I consistently got timeout errors.

Click through for the solution.  I’d prefer “have the queries be quick enough not to require this change” be the solution, but I don’t know exactly which queries they’re using, and some DMVs/DMFs can be quite slow.


Replicating To Azure SQL DB

Jeffrey Verheul shows how to enable replication from your on-prem SQL Server up to Azure SQL DB:

Replication to another on-premises instance is easy. You just follow the steps in the wizard, it works out of the box, and the chances of this process failing are small. Replicating data to an Azure SQL database is a bit more of a struggle. Just one single word took me a few HOURS of investigation and a lot of swearing…

The magic word is “secure.”  Read the whole thing if you’re thinking of migrating an app to use Azure SQL DB and want to minimize downtime, or if you just want that extra level of protection that having a copy of your database out of the data center can give you.
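For reference, the subscription side of that setup boils down to a couple of system procedure calls. This is a rough sketch with hypothetical names, not Jeffrey's exact script; note that Azure SQL DB can only be a push subscriber and requires SQL authentication:

```sql
-- Hypothetical names throughout; run at the publisher, in the publication DB.
-- The Azure subscriber is addressed by its full DNS name.
EXEC sp_addsubscription
    @publication       = N'MyPublication',
    @subscriber        = N'myserver.database.windows.net',
    @destination_db    = N'MyAzureDb',
    @subscription_type = N'Push';

EXEC sp_addpushsubscription_agent
    @publication              = N'MyPublication',
    @subscriber               = N'myserver.database.windows.net',
    @subscriber_db            = N'MyAzureDb',
    @subscriber_security_mode = 0,   -- SQL authentication, required for Azure SQL DB
    @subscriber_login         = N'repl_login',
    @subscriber_password      = N'<strong password here>';
```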


Truncate Table And Stats

Kendra Little shows that TRUNCATE TABLE does not always reset stats:

You might expect to see that the statistic on Quantity had updated. I expected it, before I ran through this demo.

But SQL Server never actually had to load up the statistic on Quantity for the query above. So it didn’t bother to update the statistic. It didn’t need to, because it knows that the table is empty, and this doesn’t show up in our column- or index-specific statistics.

Check it out.
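A minimal repro along the lines of Kendra's demo (table and statistic names are made up here) looks like this:

```sql
-- Create a table with a column statistic, then truncate it.
CREATE TABLE dbo.StatsDemo (Quantity int NOT NULL);
INSERT dbo.StatsDemo (Quantity) VALUES (1), (5), (10);
CREATE STATISTICS st_Quantity ON dbo.StatsDemo (Quantity) WITH FULLSCAN;

TRUNCATE TABLE dbo.StatsDemo;

-- This query doesn't force a stats update: SQL Server already knows the
-- table is empty, so it never loads the Quantity statistic.
SELECT COUNT(*) FROM dbo.StatsDemo WHERE Quantity > 3;

-- last_updated and rows still reflect the pre-truncate load.
SELECT s.name, p.last_updated, p.rows
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS p
WHERE s.object_id = OBJECT_ID('dbo.StatsDemo');
```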


Wiring A Raspberry Pi 3

Drew Furgiuele begins his project to build an easy button for backups:

I should also pause for a second and talk about wiring hobby boards like this. Good news first: you won’t electrocute yourself on it. I mean, if you do something really dumb like try to wire it underwater or eat it, then maybe you could, but you shouldn’t ever receive a shock while working with a board like this, even plugged in. The bad news is that even though you won’t damage yourself, you could very well damage the board if you just randomly plug things in. Here’s a hard and fast rule: if you’re not an electronics expert or an electrical engineer, leave it to the experts to tell you where and how to wire. I’m not calling myself an expert here, but I have sort of a basic understanding of how to wire these things up. The point I’m attempting to make is: if you want to really learn and understand circuit design, there are lots of great resources for getting started. It’s quite a rabbit hole to go down, but it’s well worth your time if you want to learn more.

Read the whole thing.  Over a weekend, with your Pi 3.


Recurring Server-Side Traces

Kevin Hill shows how to set up a server-side trace which runs periodically:

How to set up a recurring Server-side SQL trace that runs every hour for 10 minutes.

Issues:

  • 6 people in the room are staring at me, waiting for the last-second request to be done at the end of an 11-hour day (3 of them from the VBV – Very Big Vendor)

  • Trace file names must be different, or you get errors

  • Trace files cannot end with a number

  • I can’t tell time when I am hungry and tired

Extended Events are still the preferred method over server-side traces for getting information, but when a vendor demands traces, the scope for saying “There’s a better way” diminishes quickly, and it’s good to know how to create a server-side trace so you aren’t opening Profiler regularly.
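For the curious, the skeleton of such a trace is a handful of sp_trace_* calls. This sketch (file path and event choices are assumptions) creates a trace that stops itself after ten minutes, with a file name that is unique per run and doesn't end in a digit:

```sql
DECLARE @traceid int,
        @maxsize bigint   = 50,                            -- rollover size in MB
        @stop    datetime = DATEADD(MINUTE, 10, GETDATE()), -- self-stop time
        @file    nvarchar(245);

-- Time-stamped so each run is unique; the trailing '_x' keeps the name from
-- ending in a number, and the .trc extension is appended automatically.
SET @file = N'C:\Traces\VendorTrace_'
          + FORMAT(SYSDATETIME(), 'yyyyMMdd_HHmm') + N'_x';

EXEC sp_trace_create @traceid OUTPUT, 2, @file, @maxsize, @stop;  -- 2 = file rollover

-- Capture SQL:BatchCompleted (event 12): TextData, Duration, StartTime.
EXEC sp_trace_setevent @traceid, 12, 1, 1;
EXEC sp_trace_setevent @traceid, 12, 13, 1;
EXEC sp_trace_setevent @traceid, 12, 14, 1;

EXEC sp_trace_setstatus @traceid, 1;  -- start the trace
```

Wrap that in a SQL Agent job scheduled hourly and you have the recurring ten-minute trace.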
