Press "Enter" to skip to content

Author: Kevin Feasel

PowerShell Workflows

Cody Konior has a beef with PowerShell workflows:

That’s inexplicable.

One thing which does make it all work is setting $PSRunInProcessPreference which, “If this variable is specified, all activities in the enclosing scope are run in the workflow process.” Unfortunately that doesn’t explain what’s really going on and what the impacts are, so I won’t use it. But here it is turning the original failing script into a working one.

I’ve never used PowerShell workflows.  It sounds like a potentially exasperating experience.

Comments closed

Renaming SQL Servers

Wayne Sheffield shows what to do when you need to rename your SQL Server instance:

Sometimes you make a mistake, and forget to rename a sysprep’d server before installing SQL Server. Or perhaps your corporate naming standard has changed, and you need to rename a server. Maybe you like to waste the time involved in troubleshooting connection issues after a server rename. In any case, you now find yourself where the name of the SQL Server is different than the physical name of the server itself, and you need to rename SQL Server to match the server’s physical name.

You could always rerun the setup program to rename the server. Fortunately, SQL Server provides an easier way to do this. You just need to run two stored procedures: sp_dropserver and sp_addserver.
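As a minimal sketch of that two-procedure approach (the instance names here are placeholders, and the change only takes effect after a service restart):

-- Drop the old name, then add the new one; 'local' marks it as the local instance.
EXEC sp_dropserver 'OldServerName';
GO
EXEC sp_addserver 'NewServerName', 'local';
GO
-- After restarting the SQL Server service, verify the rename took:
SELECT @@SERVERNAME;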

Click through for details, including important considerations.

Comments closed

CHECKDB And Indexes On Persisted Computed Columns

Arun Sirpal diagnoses a slower-than-usual DBCC CHECKDB run:

All the signs of CHECKDB Latch contention.

DBCC_OBJECT_METADATA – this latch can be a major bottleneck for DBCC consistency checks when indexes on computed columns exist.  As a side note, the DBCC_MULTIOBJECT_SCANNER latch is used to get the next set of pages to process during a consistency check.
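To check whether your database is susceptible, a quick sketch against the catalog views will surface indexes built over computed columns, and sys.dm_os_latch_stats will show the DBCC latch waits themselves:

-- Indexes that include computed columns: the structures implicated in
-- DBCC_OBJECT_METADATA latch contention during CHECKDB.
SELECT
    OBJECT_NAME(ic.object_id) AS table_name,
    i.name AS index_name,
    c.name AS column_name
FROM sys.index_columns AS ic
JOIN sys.indexes AS i
    ON i.object_id = ic.object_id
   AND i.index_id = ic.index_id
JOIN sys.columns AS c
    ON c.object_id = ic.object_id
   AND c.column_id = ic.column_id
WHERE c.is_computed = 1;

-- And the accumulated DBCC latch waits on the instance:
SELECT latch_class, waiting_requests_count, wait_time_ms
FROM sys.dm_os_latch_stats
WHERE latch_class LIKE 'DBCC%'
ORDER BY wait_time_ms DESC;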

Read on for the details and Arun’s solution.

Comments closed

Hadoop For .NET Developers

Elton Stoneman has a new Pluralsight course out:

My latest Pluralsight course is out now:

Hadoop for .NET Developers

It takes you through running Hadoop on Windows and using .NET to write MapReduce queries – proving that you can do Big Data on the Microsoft stack.

The course has five modules, starting with the architecture of Hadoop and working through a proof-of-concept approach, evaluating different options for running Hadoop and integrating it with .NET.

I’ve liked Elton’s courses, as he’s one of the few trainers who really takes the time to show how you can integrate .NET languages into a Hadoop ecosystem; the general philosophy elsewhere is “go learn Java and Scala and Python and …”

Comments closed

Self-Paced HDInsight Training

Ashish Thapliyal introduces three EdX courses on HDInsight:

Implementing Real-Time Analysis with Hadoop in Azure HDInsight

In this four week course, you’ll learn how to implement low-latency and streaming Big Data solutions using Hadoop technologies like HBase, Storm, and Spark on Microsoft Azure HDInsight.

Course Syllabus

Use HBase to implement low-latency NoSQL data stores.
Use Storm to implement real-time streaming analytics solutions.
Use Spark for high-performance interactive data analysis.

These are free courses on EdX.  I personally wouldn’t bother getting the certificate, but hey, it’s your money.

Comments closed

Hortonworks HDP 2.5 Available

Hortonworks has a new version of their data platform, 2.5:

We are very pleased to announce that the Hortonworks Data Platform (HDP) Version 2.5 is now generally available for download. As part of an Open and Connected Data Platforms offering from Hortonworks, HDP 2.5 brings a variety of enhancements across all elements of the platform, spanning data science and data access to security and governance.

At Hadoop Summit 2016 San Jose on 06/28/2016, we unveiled the latest innovation package within Hortonworks Data Platform 2.5.

The top points of interest:  Spark 2, Kafka 0.10.0, Ambari 2.4, and Storm 1.0.1.  These are four big projects with major improvements.  Looks like I’ve got something to do this weekend…

Comments closed

Turbo LogShip

Richie Lee announces a tool to make log shipping more powerful:

To resolve this, you can restore files under NORECOVERY, then switch to STANDBY: when restoring a log backup, you have two restore choices: NORECOVERY and STANDBY. Both these choices will allow further log restores, but STANDBY is the option to choose if you want the database to be read-only. NORECOVERY leaves the database in a transactionally inconsistent state: it does not roll back uncommitted transactions into a TUF file. So it is possible to restore the log files in NORECOVERY mode, and then restore a final log with the STANDBY option to enable the database to be read-only (it is pretty neat that you can switch between STANDBY and NORECOVERY in this way). We can do this because we honestly don’t care about all those in-between restores being transactionally consistent.

Sadly, this option is not an out-of-the-box operation, and so requires writing a custom job to restore the log files. I’ve read online a few methods to achieve this, and I have written my own custom restore process.
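In T-SQL terms, the pattern Richie describes looks roughly like the following sketch (the database name and file paths are hypothetical):

-- Intermediate log restores are fast because NORECOVERY skips the undo phase...
RESTORE LOG MyDatabase
FROM DISK = N'C:\LogShipping\MyDatabase_log1.trn'
WITH NORECOVERY;

RESTORE LOG MyDatabase
FROM DISK = N'C:\LogShipping\MyDatabase_log2.trn'
WITH NORECOVERY;

-- ...and only the final restore pays the cost of rolling back uncommitted
-- transactions into the undo (TUF) file, leaving the database read-only.
RESTORE LOG MyDatabase
FROM DISK = N'C:\LogShipping\MyDatabase_log3.trn'
WITH STANDBY = N'C:\LogShipping\MyDatabase.tuf';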

Check out Richie’s project on GitHub.

Comments closed

Benchmarking Azure SQL Database Wait Stats

John Sterrett explains wait stats and which stats are most important for Azure SQL Database:

With an instance of SQL Server, regardless of whether it runs as IaaS or on-premises, you would want to focus on all the waits that are occurring in your instance because the resources are dedicated to you.

In database as a service (DaaS), Microsoft gives you a special DMV that makes troubleshooting performance in Azure easier than with any other competitor.  This feature is the sys.dm_db_wait_stats DMV.  This DMV allows us specifically to get the details behind why our queries are waiting within our database, and not the shared environment.  Once again, it is worth repeating: these are wait statistics for our database alone, even in a shared environment.
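As a quick sketch, eyeballing the top waits for your database is a single query against that DMV:

-- Top waits for this database alone, via the database-scoped DMV John mentions.
SELECT TOP (10)
    wait_type,
    waiting_tasks_count,
    wait_time_ms,
    signal_wait_time_ms
FROM sys.dm_db_wait_stats
ORDER BY wait_time_ms DESC;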

Click through for a stored procedure John has provided to collect wait stats in a Waits schema.

Comments closed

Visualizing NFL Data

Allison Tharp looks at NFL play-by-play data using R:

Let’s look at how teams played on offense depending on where they were on the field (their yardline) and the down they were on.  The fields in our dataframe that we will care about here are yfog (yards from own goal), type (rush or pass), and dwn (current down number: 1, 2, 3, or 4).  We will want a table with each of these columns as well as a sum column.  That way, we can see how many times a pass attempt was done on the 4th down when a team was X yards from their own goal.

To do this, we will use a package called plyr.  The Internet says that this package makes it easy for us to split data, mess with it, and then put it back together.  I am not convinced the tool is easy, but I haven’t spent too much time with it.
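For comparison’s sake, if this play-by-play data lived in a SQL Server table (dbo.Plays is a hypothetical name), the same split-apply-combine operation would be a simple GROUP BY:

-- Count plays by down, field position, and play type; the table and column
-- names mirror Allison's dataframe and are assumptions for illustration.
SELECT
    dwn,        -- down (1-4)
    yfog,       -- yards from own goal
    [type],     -- 'rush' or 'pass'
    COUNT(*) AS attempts
FROM dbo.Plays
GROUP BY dwn, yfog, [type]
ORDER BY dwn, yfog, [type];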

Check it out for some ideas on what you can do with R.

Comments closed