
Month: September 2017

The Decline(?) Of Google Search

Vincent Granville argues that Google search is on a slow decline:

What has happened over the last few years is that many websites are now getting most of their traffic from sources other than Google. Google is no longer the main source of traffic for most websites, because webmasters pursue other avenues to generate relevant traffic, in particular social networks and newsletters, as it is easier to attract the right people and promote the right content through these channels. Think about this: How did you discover Data Science Central? For most recent members, the answer is not Google anymore. In that sense, Google has lost its monopoly when it comes to finding interesting information on the Internet. The reason is that Google pushes more and more search results from partners, their own products, possibly content that fits with its political agenda, big advertisers, old websites, big websites, and web spammers who find a way to get listed at the top. In the meantime, websites such as ours promote more and more articles from small, high-quality publishers and great bloggers who have a hard time getting decent traffic from Google. For them, we are a much bigger and better source of traffic than Google.

I think this is a fairly optimistic view of the situation, as there’s a difference between “I want to learn about a topic” and “I want to learn this specific thing.”  Vincent’s argument is much stronger on the former, but when it comes to the latter, the first thing I hear people say is that they’re googling it.


Synchronizing Availability Group Objects

Derik Hammer has a process to maintain logins, backup devices, linked servers, SQL Agent details, and more between Availability Group nodes:

A notable limitation of this process is that it does not update existing objects. Jobs which already exist but were updated will not be altered. I chose to omit that functionality because it presents merge complications and problems. For example, the cleanest way to handle the process would be to drop and create the object each time the synchronization runs. If that happened, however, there would be gaps when logins didn’t exist and applications would fail to connect, SQL Agent jobs would lose history, and/or the processing of a job would fail because it was dropped partway through executing.

With that limitation aside, this is a very interesting process and I recommend giving it a careful read.  Derik also includes the Powershell script at the end.
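
To give a concrete flavor of what this sort of synchronization involves, here is a minimal sketch using the dbatools module’s Copy-DbaLogin and Copy-DbaAgentJob cmdlets. This is only an illustration of the concept, not Derik’s script, and the instance and object names are made up.

# Illustration only: copy a login and an Agent job from the primary replica to a secondary.
# Assumes the dbatools module is installed; instance and object names are hypothetical.
Import-Module dbatools

Copy-DbaLogin    -Source 'SQLAG01' -Destination 'SQLAG02' -Login 'AppServiceAccount'
Copy-DbaAgentJob -Source 'SQLAG01' -Destination 'SQLAG02' -Job 'Nightly ETL Load'

By default these Copy-Dba* cmdlets skip objects which already exist on the destination, which lines up with the limitation Derik describes.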


Getting Fancier With VM Creation

Raul Gonzalez shows how to spin up a Hyper-V VM using Powershell:

There are lots of commands to create or manipulate VMs, and I’m still only scratching the surface. Although I’m not a PS person, I have to admit that every time I want to do something, I find it relatively easy to find a PowerShell command or a script for it, so I like it.

For instance, creating a new virtual machine is as simple as one command:

New-VM

And that’s only the beginning: we can add the different virtual hardware just like in the UI (drives, network adapters and so on) and then configure memory, CPU, NUMA, etc.

This is more or less the script I’m running to create my VMs. This one in particular will be a Hyper-V host itself, so there are a couple of interesting settings I’ll tell you about later.

Click through for Raul’s script.
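
To give a rough idea of the shape of such a script, here is a minimal sketch of creating and configuring a VM with the Hyper-V cmdlets. The names, paths, and sizes are made up, and the nested-virtualization setting is there only because Raul’s VM will itself be a Hyper-V host; this is not Raul’s script.

# Minimal sketch: create a Generation 2 VM with a new VHDX and a network adapter, then size the CPU.
# VM name, switch name, paths, and sizes below are hypothetical.
New-VM -Name 'HV-LAB-01' -Generation 2 -MemoryStartupBytes 8GB `
       -NewVHDPath 'D:\VMs\HV-LAB-01\HV-LAB-01.vhdx' -NewVHDSizeBytes 80GB `
       -SwitchName 'External'

# Because this VM will itself be a Hyper-V host, expose virtualization extensions (nested virtualization).
Set-VMProcessor -VMName 'HV-LAB-01' -Count 4 -ExposeVirtualizationExtensions $true
Set-VMMemory    -VMName 'HV-LAB-01' -DynamicMemoryEnabled $false   # nested Hyper-V requires static memory

Start-VM -Name 'HV-LAB-01'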


Checking Availability Group Status With Powershell

Tracy Boggiano shows off a script which checks Availability Group status of selected servers:

My favorite thing to automate using PowerShell is checking on the status of things on multiple servers.  For example, after patching your environment, running a quick query to make sure the version number is the same.  In this example, we will use a cmdlet my coworker wrote in combination with my cmdlet to check the health of all the Availability Groups across our landscape, or you could use it to check just one.  After all, I do consider myself to be an HA/DR nut.

I’ve blogged about my coworker’s Get-CmsHost cmdlet before, but now he has shared it on GitHub so you can read more about it here.

In my cmdlet I use the same code that is used in the SSMS AG dashboard to check the status of my Availability Groups.

Tracy includes her cmdlet as well as several example calls.
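
Tracy’s cmdlet isn’t reproduced here, but as a rough sketch of the general idea, you can pipe a list of instances into Invoke-Sqlcmd and query the AG health DMVs yourself. This is not Tracy’s code, and the instance names below are hypothetical.

# Rough sketch: query Availability Group health on a list of instances.
# Assumes the SqlServer (or SQLPS) module is installed; instance names are made up.
$instances = 'SQLAG01', 'SQLAG02', 'SQLAG03'

$query = @"
SELECT @@SERVERNAME AS ServerName,
       ag.name AS AGName,
       ags.primary_replica,
       ags.synchronization_health_desc
FROM sys.dm_hadr_availability_group_states AS ags
JOIN sys.availability_groups AS ag ON ag.group_id = ags.group_id;
"@

$instances |
    ForEach-Object { Invoke-Sqlcmd -ServerInstance $_ -Query $query } |
    Format-Table -AutoSize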


Considerations With Third-Party Backup Tools

Gianluca Sartori gives us a laundry list of potential problems with third-party database backup solutions:

2. Potentially dangerous separation of duties
Backup tools are often run and controlled by Windows admins, who may or may not be the same people responsible for taking care of databases. Well, surprise: if you’re taking backups you’re responsible for them, and backups are the main task of the DBA, so… congrats: you’re the DBA now, like it or not.
If your Windows admins are not OK with being the DBA, but at the same time are OK with taking backups, make sure that you discuss who is held accountable for data loss when things go south. Don’t get fooled: you must not be responsible for restores (which, ultimately, are the reason why you take backups) if you don’t have control over the backup process. Period.

There is plenty of sound advice in this post.  These points apply to roll-your-own solutions as well, but the main focus is on enterprise backup tools, which are in many cases surprisingly shoddy.


Powershell And CMS

Mark Wilkinson loves Powershell and he loves Central Management Servers and he loves combining the two:

Get-CmsHosts is a function I wrote as part of a custom PowerShell module we maintain internally at my employer. It is simple to use, but is the base of most automation projects I work on.

Simple Example

PS> Get-CmsHosts -SqlInstance 'srv-' -CmsInstance srv-mycms-01

This example will connect to srv-mycms-01 and return a distinct list of instance host names registered with that CMS server that start with the string srv-. This output can then be piped to other commands:

Read on for more examples and details, and then grab the script at the end of Mark’s post.
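
For context, a Central Management Server is ultimately just a couple of tables in msdb on the CMS instance, so a bare-bones version of this kind of lookup might look like the sketch below. It is not Mark’s implementation (his Get-CmsHosts does considerably more); the CMS name and filter simply mirror the example above.

# Bare-bones sketch: pull registered server names straight from the CMS tables in msdb.
$query = @"
SELECT DISTINCT s.server_name
FROM msdb.dbo.sysmanagement_shared_registered_servers_internal AS s
WHERE s.server_name LIKE 'srv-%';
"@

Invoke-Sqlcmd -ServerInstance 'srv-mycms-01' -Query $query |
    Select-Object -ExpandProperty server_name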


Disabling Named Pipes Using Powershell

Brian Carrig shows how to disable the Named Pipes protocol using Powershell:

Windows and POSIX systems both support something referred to as “named pipes”, although they are different concepts. For the purposes of this post I am referring only to the Windows version. By default on most editions of SQL Server (every edition except Express Edition), there are three supported and enabled protocols for SQL server to listen on – Shared Memory, TCP/IP and Named Pipes. The inclusion of named pipes has always confused me somewhat. In theory, named pipes allow communication between applications without the overhead of going through the network layer. This advantage disappears when you want to communicate over the network using named pipes. In all modern versions of SQL Server, named pipes does not support Kerberos, so for most shops you likely will not be using or should not be using named pipes to communicate with SQL Server.

Security best practices dictate that if you are not using a particular protocol, you should disable it. There is an option to disable this in the GUI in Configuration Manager, but since this T-SQL Tuesday blog post is about using Powershell it does not make sense to cover it here. Nor is it particularly easy to use the GUI to make a configuration change across hundreds of SQL instances. Unfortunately, I have not found a good way to make this change that does not involve using WMI; if anybody is aware of a better method, I welcome your feedback.

Read the whole thing.  You should have Named Pipes enabled if you’re running a NetBIOS network.  But because it’s not 1997 anymore, you probably shouldn’t be running a NetBIOS network.
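
For reference, the WMI route Brian mentions typically goes through the SMO ManagedComputer object. Here is a minimal sketch for the default instance on the local machine; it assumes the SMO WMI assemblies are available (for example via the SqlServer module) and is not Brian’s script.

# Minimal sketch: disable the Named Pipes protocol for the default instance via SMO WMI.
Import-Module SqlServer

$wmi = New-Object Microsoft.SqlServer.Management.Smo.Wmi.ManagedComputer 'localhost'
$np  = $wmi.ServerInstances['MSSQLSERVER'].ServerProtocols['Np']

if ($np.IsEnabled) {
    $np.IsEnabled = $false
    $np.Alter()    # the change takes effect after the SQL Server service restarts
}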


Spinning Up Hyper-V VMs With Powershell

Andrew Pruski shows us how to enable Hyper-V and create a Windows VM using Powershell:

I’m constantly spinning up VMs and then blowing them away. Ok, using the Hyper-V GUI isn’t too bad but when I’m creating multiple machines it can be a bit of a pain.

So here are the details of the script I’ve written; hopefully it will be of some use to you too.

Click through for the script; it’s ultimately just a few lines of code.
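
If you just need the Hyper-V feature itself enabled before running a script like Andrew’s, that part is a one-liner. This is a generic sketch, not Andrew’s code; the cmdlet differs between client and server editions of Windows.

# Windows client (e.g. Windows 10):
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All

# Windows Server:
# Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart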


Naive PCA With R

Pablo Bernabeu gives us a naive method for performing a Principal Component Analysis:

STAGE 1.  Determine whether PCA is appropriate at all, considering the variables

  • Variables should be inter-correlated enough but not too much. Field et al. (2012) provide some thresholds, suggesting that no variable should have many correlations below .30, or any correlation at all above .90. Thus, in the example here, variable Q06 should probably be excluded from the PCA.

  • Bartlett’s test, on the nature of the intercorrelations, should be significant. A significant result suggests that the correlation matrix is not an ‘identity matrix’ in which the correlations would be due only to sampling error.

  • KMO (Kaiser-Meyer-Olkin), a measure of sampling adequacy based on common variance (so it serves a similar purpose to Bartlett’s). As Field et al. review, ‘values between .5 and .7 are mediocre, values between .7 and .8 are good, values between .8 and .9 are great and values above .9 are superb’ (p. 761). There’s a general score as well as one per variable. The general one will often be good, whereas the individual scores are more likely to fail. Any variable with a score below .5 should probably be removed, and the test should be run again.

  • Determinant: a check on multicollinearity. The result should preferably be greater than .00001; a smaller determinant suggests problematic multicollinearity.

PCA is a powerful tool in several fields, including clinical testing.


Serverless Lambda Architecture

Laith Al-Saadoon shows off a new Amazon Web Services product, AWS Glue, which allows you to build a data processing system on the Lambda architecture without directly provisioning any EC2 instances:

With the launch of AWS Glue, AWS provides a portfolio of services to architect a Big Data platform without managing any servers or clusters. AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics. You can create and run an ETL job with a few clicks in the AWS Management Console. You simply point AWS Glue to your data stored on AWS, and AWS Glue discovers your data and stores the associated metadata (for example, the table definition and schema) in the AWS Glue Data Catalog. After it’s cataloged, your data is immediately searchable, queryable, and available for ETL.

AWS Glue generates the code to execute your data transformations and data loading processes. Furthermore, AWS Glue provides a managed Spark execution environment to run ETL jobs against a data lake in Amazon S3. In short, you can now run a Lambda Architecture in AWS in a completely 100% serverless fashion!

“Serverless” applications allow you to build and run applications without thinking about servers. What this means is that you can now stream data in real-time, process huge volumes of data in S3, and run SQL queries and visualizations against that data without managing server provisioning, installation, patching, or capacity scaling. This frees up your users to spend more time interpreting the data and deriving business value for your organization.

Laith has a working demo of the process available as well.
