Press "Enter" to skip to content

Author: Kevin Feasel

Network Monitoring in PowerShell

Jeffrey Hicks builds out a quick PowerShell script to test network performance:

I hope you’ve been trying your hand at the scripting challenges being posted on the Iron Scripter web site. The challenges are designed for individuals to do on their own to build up their PowerShell scripting skills. A few weeks ago, a challenge was posted to create a network monitoring tool using PowerShell and the Write-Progress cmdlet. I thought I’d share my notes on the challenge and some of the code I came up with.

It’s an interesting challenge and solution.
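
For a sense of what such a tool can look like, here is a minimal sketch of my own (not Jeff’s solution): ping a target in a loop and use Write-Progress to display the latest round-trip time. The host name and sample count are placeholders.

# Minimal sketch: ping a host repeatedly and surface the result via Write-Progress.
# The target and sample count are placeholder values, not from the linked post.
$Target  = 'dc01.contoso.com'
$Samples = 60

for ($i = 1; $i -le $Samples; $i++) {
    # ResponseTime is the Windows PowerShell property; PowerShell 7 exposes Latency instead.
    $ping    = Test-Connection -ComputerName $Target -Count 1 -ErrorAction SilentlyContinue
    $latency = if ($ping) { "$($ping.ResponseTime) ms" } else { 'timeout' }

    Write-Progress -Activity "Monitoring $Target" `
        -Status "Sample $i of $Samples (latest: $latency)" `
        -PercentComplete (($i / $Samples) * 100)

    Start-Sleep -Seconds 1
}
Write-Progress -Activity "Monitoring $Target" -Completed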


Preventing Unexpected Failover with AG Patching

Josh Darnell dealt with an issue with Availability Groups:

I had a 2-node availability group (AG) + fileshare witness system experience an unexpected failover recently.

The synchronous secondary was being patched, and when it came back up from a reboot, the current primary unexpectedly failed over. We weren’t done with all the patching on the secondary, so this caused a short outage, and we had to fail back to the original primary to finish the patching (which is of course another short interruption in availability).

The root cause was interesting enough that I decided to share the story here, and provide some general advice and debugging tips along the way.

Click through to understand why this happened and how you might be able to avoid the pain Josh experienced.
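
As general advice along these lines (a sketch of a common precaution, not necessarily the remediation from Josh’s post): take automatic failover out of the picture while a node is being patched, then restore it afterward. The instance, AG, and replica names below are placeholders, and Invoke-Sqlcmd comes from the SqlServer module.

# Not necessarily the fix from the post - just a common precaution while patching.
# Run against the current primary; SQLNODE2 is the secondary being patched.
$primary = 'SQLNODE1'

Invoke-Sqlcmd -ServerInstance $primary -Query @"
ALTER AVAILABILITY GROUP [MyAg]
MODIFY REPLICA ON N'SQLNODE2' WITH (FAILOVER_MODE = MANUAL);
"@

# ... patch and reboot SQLNODE2, wait until it is synchronized again ...

Invoke-Sqlcmd -ServerInstance $primary -Query @"
ALTER AVAILABILITY GROUP [MyAg]
MODIFY REPLICA ON N'SQLNODE2' WITH (FAILOVER_MODE = AUTOMATIC);
"@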


Setting Up Your Own R Package Repository

Steve Belcher explains how to configure a custom package repository in your environment:

One of the strengths of the R language is the thousands of third-party packages that have been made publicly available via CRAN, the Comprehensive R Archive Network. R includes several functions that make it easy to download and install these packages. However, in many enterprise environments, access to the Internet is limited or non-existent. In such environments, it is useful to create a local package repository that users can access from within the corporate firewall.

Your local repository may contain source packages, binary packages, or both. If at least some of your users will be working on Windows systems, you should include Windows binaries in your repository. Windows binaries are R-version-specific; if you are running R 3.3.3, you need Windows binaries built under R 3.3. These versioned binaries are available from CRAN and other public repositories. If at least some of your users will be working on Linux systems, you must include source packages in your repository.

There are some tools which help out with this, so read the whole thing.
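
To make the layout concrete, here is a rough sketch of my own (placeholder paths, not code from the article). A CRAN-style repository keeps source packages under src/contrib and R-version-specific Windows binaries under bin/windows/contrib/<version>, and each of those folders needs a PACKAGES index generated from R (for example with tools::write_PACKAGES or the miniCRAN package).

# Sketch of a CRAN-style folder layout on a file share; all paths are placeholders.
$repoRoot = '\\fileserver\r-repo'

# Source packages (for Linux users) and Windows binaries built for R 3.3.
New-Item -ItemType Directory -Force -Path "$repoRoot\src\contrib" | Out-Null
New-Item -ItemType Directory -Force -Path "$repoRoot\bin\windows\contrib\3.3" | Out-Null

# Drop downloaded packages into the matching folders.
Copy-Item 'C:\downloads\*.tar.gz' "$repoRoot\src\contrib"
Copy-Item 'C:\downloads\*.zip'    "$repoRoot\bin\windows\contrib\3.3"

# Each contrib folder still needs a PACKAGES index, generated from R, e.g.
#   tools::write_PACKAGES("//fileserver/r-repo/src/contrib", type = "source")
# Users then point options(repos = ...) at this share from within R.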


C# Notebooks with Cosmos DB

Hasan Savran takes us through Jupyter notebooks in Cosmos DB:

Jupyter Notebooks are everywhere these days. Being able to write a chunk of code and run it in a web application without worrying about a compiler is a great feeling. C# has been a little late to the party, but we have started to see C# notebooks lately too. Azure Cosmos DB announced their version of C# notebooks this week.
You can reach all notebook functionality under the Data Explorer link, and there are a bunch of sample notebooks under the Notebook link.

There are some limitations here, like needing to use the SQL API, but it’s an interesting approach to data access in Cosmos DB.


The Pain of Nested PowerShell Modules

Fred Weinmann explains why you probably don’t want to build a nested module in PowerShell:

Yay, but … that is not enough for some people:

– What if somebody copy&pastes it to another machine and forgets the dependencies?
– What if another module uses the same dependency, but at a different version?
– What if I don’t want to confront the user with those dependencies?
– What if a later version of my dependency breaks things? I just tested this version!

And the answer to all four of those is the same: Ship dependencies within your own module, hidden from prying eyes. With the NestedModule feature of PowerShell modules we natively support that as well. Great! Feels good, feels stable, feels reliable, feels … solid.
It’s an illusion.

Click through to understand why this benefit is illusory.
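
For context, the NestedModules setting Fred refers to lives in the module manifest. A bare-bones, hypothetical example looks something like this, with the dependency copied into the module’s own folder:

# MyModule.psd1 - a hypothetical manifest; the dependency ships inside the module
# folder and is loaded as a nested module rather than being installed separately.
@{
    RootModule        = 'MyModule.psm1'
    ModuleVersion     = '1.0.0'
    NestedModules     = @('internal\SomeDependency.psm1')
    FunctionsToExport = @('Get-Something')
}

Anything loaded this way stays private to the module, which is exactly the isolation described above, and also where the trouble starts.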


Reviewing SSMS Client Statistics

Reitse Eskens learns about SQL Server Management Studio’s client statistics:

In my case, I was looking for the number of bytes received from the server to determine the network speed. The number of rows is one thing, but I can’t easily tell if a row is 1 or 1000 kilobytes. By checking out the bytes received, I can get some feel for the data size. If there’s a huge amount of data coming towards me, that explains why I’ve got to wait for minutes. If there’s only a few kilobytes in the end, maybe something else is going wrong.

Reitse also takes some time to figure out how the client statistics tool works.


When to Have Multiple Azure Data Factories

Paul Andrew explains how to become a factory mogul:

The obvious and easy reason for having multiple Data Factories could be that you simply want to separate your business processes. Maybe they all have separate data delivery requirements and it just makes management of data flows easier. For example:

– Sales
– Finance
– HR

They could have different data delivery deadlines, process on different schedules, and not share any underlying connections.

You may also have multiple projects underway that mean you want to keep teams isolated.

But that’s not the only reason, so click through to learn several other reasons why you might have multiple Azure Data Factory instances running.


Blocking Inbound Connections to SQL Server

John Morehouse shows one quick way of preventing anybody else from connecting to your SQL Server instance:

We even tried to restart the instance into single-user mode; however, every time that happened, something else would take the connection before we could get into the instance. We eventually restarted the SQL Server instance to normal operation so that we could investigate why we could not get a connection while in single-user mode.

Turns out that, given the production nature of the instance, the client’s large farm of application servers was connecting to it faster than we could. This was discovered by using sp_who2; however, you could use the DMV sys.dm_exec_connections to see what is connecting to the instance if you desired. So, we needed a way to block incoming connections while not being invasive, like shutting down the application servers or making a large network change.

This is where the brilliance comes in.

Click through for the idea. This is the type of thing you keep in your back pocket for a real pinch but hope never to need to use.
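
To be clear, the sketch below is not necessarily John’s trick, just one generic way to block inbound connections in a hurry: a temporary Windows Firewall rule on the default SQL Server port (the rule name and port are placeholders), with your own work done from a local connection on the server.

# Not necessarily the approach from the linked post - just one generic option.
# Temporarily block remote TCP connections to the default SQL Server port;
# local connections on the server itself (e.g. shared memory) are unaffected.
New-NetFirewallRule -DisplayName 'Temp - block inbound SQL' `
    -Direction Inbound -Protocol TCP -LocalPort 1433 -Action Block

# ... do the single-user-mode work from a local connection ...

Remove-NetFirewallRule -DisplayName 'Temp - block inbound SQL'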
