Press "Enter" to skip to content

April 13, 2016

Database Mirroring

Derik Hammer has chosen Database Mirroring as his favorite feature:

With the end of SQL Server 2005, we also will soon see the end of database mirroring. There is a new feature releasing with SQL Server 2016 called Basic Availability Groups. This is the replacement for database mirroring. The use cases and limitations will appear very similar to database mirroring, but it will use the Availability Group technology. In theory, this will be like a stim-pack for the database mirroring feature while leaving it available in Standard Edition. Let’s cross our fingers that the Windows Failover Cluster components don’t slow down the failovers like they did with AGs.

A bold choice, but that “available in Standard Edition” aspect is huge for smaller organizations that can’t afford Enterprise (especially with The Licensing Changes of 2012).
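
For reference, creating a Basic Availability Group looks like a cut-down version of the full AG syntax. Here’s a minimal sketch, assuming the WSFC cluster and database mirroring endpoints already exist; the server, domain, and database names are hypothetical:

-- Basic AGs: one database, exactly two replicas, no readable secondaries.
CREATE AVAILABILITY GROUP [BasicAG]
WITH (BASIC)
FOR DATABASE [SalesDB]
REPLICA ON
    N'SQLNODE1' WITH (
        ENDPOINT_URL = N'TCP://SQLNODE1.example.com:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = AUTOMATIC),
    N'SQLNODE2' WITH (
        ENDPOINT_URL = N'TCP://SQLNODE2.example.com:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = AUTOMATIC);

-- Then, on the secondary (after restoring the database WITH NORECOVERY):
ALTER AVAILABILITY GROUP [BasicAG] JOIN;
ALTER DATABASE [SalesDB] SET HADR AVAILABILITY GROUP = [BasicAG];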

Delayed Durability

Kenneth Fisher talks about Delayed Durability as his favorite SQL Server feature:

In the end I decided it would be fun to post about one of the newer features: Delayed Durability. To understand DD (delayed durability) you have to understand how SQL Server implements durability. To start with, there are two pieces of any data change: the actual data page and any log records that are created. In order to keep SQL Server as fast as possible, the data pages are changed in memory and then written to disk as part of a background process. Unfortunately, this means that you could lose those changes if the server crashes or shuts down before they are written to disk. In order to avoid this, and keep the changes durable, the transaction cannot complete until the log information is actually written to disk. This means that one of the big bottlenecks in any given transaction is writing that log record to disk.

The DD option basically changes that write from a synchronous to an asynchronous operation. The log writes are bundled together in 60KB chunks before being written to disk. This can (depending on your workload) provide a huge increase in speed. I’m not going to bother discussing that part of it, since Aaron Bertrand (b/t) and Melissa Connors (b/t) wrote great posts on just that (Aaron’s and Melissa’s).

What I want to discuss is the actual risk of data loss.

My philosophy on this is simple:  if you need delayed durability and you can afford the data loss, then maybe there’s a better data platform.  I want my SQL Server like I want my Grateful Dead concert:  ACID compliant.
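
If you want to experiment with the trade-off yourself, delayed durability is easy to toggle. A quick sketch, with a hypothetical database and table:

-- Option 1: make every transaction in the database delayed durable.
ALTER DATABASE [SalesDB] SET DELAYED_DURABILITY = FORCED;

-- Option 2: allow it, then opt in per transaction.
ALTER DATABASE [SalesDB] SET DELAYED_DURABILITY = ALLOWED;

BEGIN TRANSACTION;
    INSERT INTO dbo.AuditLog (EventTime, Message)
    VALUES (SYSUTCDATETIME(), N'An event we can afford to lose');
-- The commit returns before the log block is hardened to disk.
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);

-- Force any pending delayed-durable log records to disk.
EXEC sys.sp_flush_log;

ALLOWED at least limits the blast radius of a crash to the transactions you explicitly marked, which is the safer way to experiment.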

Replication

Jon Morisi loves him some replication:

Seriously though, replication has been around since the beginning and it’s not going anywhere.  I can’t think of any other feature more prolific than replication.  Name another SQL Server HA/DR technology that is as extensible as replication.  Replication has gotten a bad rap over the years, mostly based on anecdotal comments that it “breaks all the time” or “it takes too much time to manage.”  I’ve worked in many environments and have set up dozens and dozens of instances of log shipping, mirroring, clusters, availability groups, and replication.  From my anecdotal experience, I can tell you I’ve had more trouble with availability groups than I have with replication.  If you have a good DBA who understands replication, uses it correctly, and is provided the correct tools (read: $ for hardware/infrastructure), replication works just fine.  I have set up replication in a global environment in which multiple databases, publications, subscriptions, and agents ran around the clock and without issue.

Replication is very powerful, I agree…but it still breaks.  A lot.  I’m grateful for its existence and also for the fact that I’m not the one maintaining it…
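
For anyone who hasn’t touched it before, the moving parts map to a handful of system stored procedures. A bare-bones transactional replication sketch — the database, publication, table, and server names are hypothetical, and it assumes the distributor is already configured:

-- Enable the database for transactional publishing (run at the publisher).
EXEC sp_replicationdboption
    @dbname  = N'SalesDB',
    @optname = N'publish',
    @value   = N'true';

USE SalesDB;

-- Create a continuously-replicating transactional publication.
EXEC sp_addpublication
    @publication = N'SalesPub',
    @repl_freq   = N'continuous',
    @status      = N'active';

-- Create the snapshot agent job for the initial synchronization.
EXEC sp_addpublication_snapshot @publication = N'SalesPub';

-- Publish one table as an article.
EXEC sp_addarticle
    @publication   = N'SalesPub',
    @article       = N'Orders',
    @source_owner  = N'dbo',
    @source_object = N'Orders';

-- Push the publication to a subscriber (a distribution agent job
-- would also be created via sp_addpushsubscription_agent).
EXEC sp_addsubscription
    @publication       = N'SalesPub',
    @subscriber        = N'SUBSCRIBER1',
    @destination_db    = N'SalesDB_Replica',
    @subscription_type = N'Push';

Each of those agents is a scheduled job that can fail independently, which is both the extensibility Jon praises and the “breaks all the time” reputation in a nutshell.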

Joining Ubuntu To AD

Chrissy LeMaire shows us how to connect to AD from Ubuntu:

Since 2009, it seems that a couple things have changed in the client realm. In particular, winbind fell out of favor to Likewise Open (which I used to <3), which was bought by BeyondTrust and turned into PowerBroker Open. But that’s since fallen out of favor to SSSD, the “System Security Services Daemon.” SSSD seems pretty cool, but everyone hates its name and assumes that its name is keeping it from greater adoption.

Sometimes when researching SSSD, you’ll come across a few mentions of FreeIPA, which is similar to Active Directory, OpenLDAP, and ApacheDS. Oh, and I recently found out that Samba 4 allows Linux servers to join Active Directory as Domain Controllers (!!) but I can’t tell if it can be a forest of its own (reddit review here).

There are other players I’m leaving out, but after a bit of casual research, no others seem to stand out. Ultimately, while there are a number of ways to set up AD/Linux authentication with Ubuntu, it appears that SSSD is the current way to go. Let’s go ahead and set that up.

Cf. Ryan Adams’s and LeMaire’s separate posts on the topic back in March.  As Microsoft gets serious about Linux integration, I would love to see them simplify this process significantly, either by updating an existing open-source project (my preference) or creating their own open-source project.
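
For the impatient, the SSSD route on a recent Ubuntu boils down to letting realmd do the discovery and join. A rough sketch of the happy path — EXAMPLE.COM and the admin account are placeholders, and package names can vary by release:

# Install SSSD plus the domain-join tooling.
sudo apt-get install -y realmd sssd sssd-tools adcli samba-common-bin

# Discover the domain, then join it (prompts for the admin password).
sudo realm discover EXAMPLE.COM
sudo realm join EXAMPLE.COM -U Administrator

# Sanity check: a domain account should now resolve locally.
id administrator@example.com

LeMaire’s post walks through the full details.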

Data Lakes

Jen Stirrup has a great primer on data lakes and factors to consider before you jump into the idea:

The organization will need to take a step back to better understand their existing status. Are they just starting out? Are there other departments doing the same thing, perhaps in the local organization or somewhere else in the world? Once the organization understands their state better, they can start to broadly work out the strategy that the Data Lake is intended to provide.

As part of this understanding, the objective of the Data Lake will need to be identified. Is it for data science? Or, for example, is the Data Lake simply to store data in a holding pattern for data discovery? Identifying the objective will help align the vision and the goals, and set the scene for communication to move forward.

I would like to popularize the term Data Swamp for “that place you store a whole bunch of data of dubious origin and value.”  It’s the place where you promise management that of course you can get the data back…as long as they never actually ask for it or are okay with reading terabytes of flat files from backup tapes.  The Data Swamp is the Aristotelian counterpart to the Data Lake, Goofus to its Gallant.  It will also, by my estimate, be the more common version.

Single-Socket OLTP Systems?

Joe Chang tosses a hardware-related bomb:

Today, it is time to consider the astonishing next step: that a single-socket system is the best choice for a transaction processing system. First, with proper database architecture and tuning, 12 or so physical cores should be more than sufficient for a very large majority of requirements.

We should also factor in that the second-generation hyper-threading (two logical processors per physical core) from Nehalem on has almost linear scaling in transactional workloads (heavy on index seeks involving few rows). This is very different from the first-generation HT in Willamette and Northwood, which was problematic, and the improved first generation in Prescott, which was somewhat better in the positive aspects and had fewer negatives.

Joe knows a lot more about this than I do, but I’m very hesitant about this for two reasons.  First, scale.  When we start looking at hundreds of concurrent requests, would a single-socket machine really work?  I don’t know the answer to that, but by my simplistic “more is better than fewer” rule of thumb, I’d err on the side of caution, especially if it isn’t my money paying for this.

Second, there are batch processes and large background activities which occur even on extremely transactional OLTP systems.  Think about running CHECKDB or ETL processing or troubleshooting/monitoring processes.  These are going to be processes which benefit from parallelism, and if you’re seriously limiting core counts (which a single socket would necessarily do), you might end up in a bad way when they run even if your “normal” workload performs a little better.
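
If you want to see how your current hardware stacks up against Joe’s 12-core line, SQL Server will tell you its view of the socket math directly. A quick sketch against a standard DMV:

-- How many sockets and logical CPUs does this instance see?
SELECT
    cpu_count,                                -- logical processors
    hyperthread_ratio,                        -- logical processors per socket
    cpu_count / hyperthread_ratio AS sockets, -- physical socket count
    scheduler_count                           -- schedulers SQL Server created
FROM sys.dm_os_sys_info;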

SSIS RC2 Issues

Andy Leonard walks through his experience trying SSIS 2016 RC2:

The SSMS 2016 RC2 link works just fine. The SSDT 2016 RC2 link does not work.

Fear not! The SSIS team has provided a set of updated links for SSIS 2016 SSDT for RC2. There’s other good information in that post. If you want to tinker with SSIS 2016 RC2, I encourage you to read it.

But Wait, There’s More

Once I’d done all this, I could create an SSIS project and add a Script Task to a package. But I could not open the Visual Studio Tools for Applications (VSTA) code editor. When I clicked the “Edit Script…” button in the Script Task Editor, nothing happened.

I contacted the SSIS Development Team (we hang out) and let them know what I was seeing. They are aware of the issue and sent the following screenshot.

Sounds like there are still some kinks to work out before release.

Is Power BI SSAS In The Cloud?

Koos van Strien hits us with an interesting thought about SSAS versus Power BI:

As I’m currently planning to migrate the entire BI architecture of one of my customers to the cloud, this made me think: can we already ditch SSAS as we know it in favor of Power BI? What are the alternatives?

To study that, I’ve put some diagrams together to show the possibilities of moving BI to the cloud. First, I’ll discuss the possible architectures, then the impossible architecture (but maybe the situation I was looking for).

One man’s opinion:  there will be SSAS for Azure.  I have no proof of this, and the nice part about having no proof is that I can throw out wild speculation without fear of violating NDA….  But to me, Power BI solves a different problem and acts in conjunction with SSAS rather than as its replacement.  I also don’t see any technical reasons why SSAS couldn’t live in the cloud, and so that leads me to believe that it will be there eventually.  But hey, it turns out wild speculation is occasionally wrong…
