Next, I wanted to make the alerts a little more meaningful. The alert for a scoring play was already pretty good; it sends something like: "BUF – Q4 – TD – J.Boykin 4 yd. pass from C.Jones (pass failed) Drive: 8 plays, 83 yards in 1:08 IND (19) at BUF (18)". This is good, and in fact it is what I want the rest of the alerts to look like. However, I'd like the subject of the email to include the name of the team that scored (before, it was just 'Scoring Play').
To do that, I needed to find out how to get the name of the scoring team. This was a little tricky because the documentation for the nflgame library, though pretty good, doesn't give a good indication of how to find it.
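One simple way to get a team name into the subject line, assuming the alert text always starts with the team abbreviation as in the sample above, is to parse it back out of the alert itself. The helper below is hypothetical (it is not part of nflgame), and it assumes a plain "TEAM - ..." prefix with a hyphen separator:

```python
# Hypothetical helper, not part of nflgame: pull the scoring team's
# abbreviation out of an alert line, assuming the alert always begins
# with "TEAM - Qn - ..." (separator style is an assumption).
def scoring_team(alert_text):
    # Everything before the first " - " separator is the team abbreviation.
    return alert_text.split(" - ", 1)[0].strip()

subject = "Scoring Play: " + scoring_team(
    "BUF - Q4 - TD - J.Boykin 4 yd. pass from C.Jones (pass failed)")
print(subject)  # Scoring Play: BUF
```

If the library exposes the team directly on the play object, that would of course be cleaner than string parsing; the post goes into how the author actually found it.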
Read on for more details, including specifics on turnovers and penalties.
When the time comes for a bunny to pass the battery, it may be out of free choice, or it might be because its script went down a path where passing it along is The Thing To Do. At this juncture, the team's collective memory and playbook come to the fore, and agreed rules dictate who the battery goes to. It doesn't really matter for the moment what those rules are. The important point is that control is transferred by the players themselves, using shared rules and a team whiteboard tracking who is ready to go, which team member might be most deserving, who has been waiting the longest, and so on. This code of conduct and state, this bushido of bunny bonhomie, is what we call a scheduler.
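The rules above can be sketched as a toy scheduler. This is purely illustrative (the names and the "longest-waiting bunny goes next" rule are my own choices, not anything from the original): the "battery" (control) is handed to whichever ready bunny has been waiting the longest, and the bunny giving it up rejoins the back of the line.

```python
from collections import deque

# Toy cooperative scheduler: control goes to whoever has waited longest.
class Scheduler:
    def __init__(self):
        self.ready = deque()   # FIFO queue: front = waiting the longest

    def make_ready(self, bunny):
        self.ready.append(bunny)

    def pass_battery(self, current):
        # The current bunny gives up control and rejoins the back of the
        # line; the bunny at the front of the line runs next.
        self.ready.append(current)
        return self.ready.popleft()

sched = Scheduler()
for b in ["flopsy", "mopsy", "cottontail"]:
    sched.make_ready(b)
nxt = sched.pass_battery("peter")
print(nxt)  # flopsy
```

Swap out the queue discipline (priorities, fairness, deadlines) and you have the essential shape of any scheduler.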
This is building up to something big…
Not very readable. At least, I can't read it easily, so I need to split that into something a bit friendlier, like an actual list, one line per item. Fortunately, PowerShell has a Split() method that should do the job, as per the Scripting Guy post on the Split method in PowerShell.
Split() is just a .NET string method; you can use any of its overloads in PowerShell just as you would in C# or F#.
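In PowerShell that is just `$raw.Split(';')`. For comparison, here is the same one-item-per-line split sketched in Python (the delimiter and the sample string are made up for illustration):

```python
# Turn a delimited string into a readable one-item-per-line list,
# the same idea as .NET's String.Split() called from PowerShell.
raw = "server1;server2;server3"
items = raw.split(";")
print("\n".join(items))
```

Either way, the win is the same: a wall of delimited text becomes a list you can actually read.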
What do you do when you run into that missing database situation and the inevitable denial that will ensue? This is when an audit can save the day. Through an audit, you can discover who dropped the database and when it happened. Then you have hard data to take back to the team to again ask what happened. Taking the info from a previous article of mine, we can alter the script I published there and re-use it for our needs here.
This is available in the default trace or, as Jason points out, you can create an Extended Events session (whose data can live much longer than the data in the default trace).
The input for this stream is set to an Event Hub on a Standard subscription. The Basic subscription, which is of course cheaper, has only the one default consumer group. With a Standard subscription, multiple consumer groups can be created and, more importantly, named. When setting up the inputs, there is a blank for the name of the consumer group. If you have a Basic subscription, this will be empty, and if it is empty, the Event Hub won't pass data to the Stream Analytics job. Perhaps there is a way to get a Basic Event Hub to work with a Stream Analytics job, but I couldn't make it happen. When I created an Event Hub on a Standard subscription, created a consumer group, and added that name to the input of the Stream Analytics job, it worked.
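For reference, creating a Standard-tier Event Hub with a named consumer group can be sketched with the Azure CLI. This is an assumption-laden provisioning sketch, not the author's steps: the resource names are placeholders, and the commands reflect the `az eventhubs` command group as I understand it.

```shell
# Placeholder names throughout; resource group "myrg" is assumed to exist.
az eventhubs namespace create --resource-group myrg \
    --name mynamespace --sku Standard

az eventhubs eventhub create --resource-group myrg \
    --namespace-name mynamespace --name myhub

# The named consumer group is the piece a Stream Analytics input needs.
az eventhubs eventhub consumer-group create --resource-group myrg \
    --namespace-name mynamespace --eventhub-name myhub --name asa-input
```

The consumer group name ("asa-input" here) is what goes into the blank on the Stream Analytics input.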
Read on for details.
Copied from somewhere else on the internet, this PowerShell script will return the product key used for a SQL Server instance install. Super useful when switching licenses over to SQL Server Developer Edition on the temporary VMs I spin up and play around with, once their instances have passed the Enterprise evaluation use-by date. Putting this here for my own benefit. I claim no kudos!
Click through for the code.
Python is often used in conjunction with the scikit-learn collection of libraries. The most important libraries used for ML in Python are grouped inside a distribution called Anaconda. This is the distribution that's also used inside Azure ML. Besides Python and scikit-learn, Anaconda contains all kinds of Data Science-oriented packages. It's a good idea to install Anaconda as a distribution and use Jupyter (formerly IPython) as your development environment: Anaconda gives you almost the same environment on your local machine as the one your code will run in once it's in Azure ML. Jupyter gives you a nice way to keep code (in Python) and writing / documentation (in Markdown) together.
Anaconda can be downloaded from https://www.continuum.io/downloads.
If you’re going down this path, Anaconda is absolutely a great choice.
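To give a feel for the kind of thing you'd run in a Jupyter notebook on that stack, here is a minimal scikit-learn sketch (dataset and model choice are my own, purely for illustration):

```python
# Minimal scikit-learn workflow of the sort you'd keep in a Jupyter
# notebook: load data, split, fit, evaluate.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=200).fit(X_train, y_train)
print(f"test accuracy: {model.score(X_test, y_test):.2f}")
```

In a notebook, each of these steps would typically sit in its own cell, with Markdown cells in between explaining the reasoning.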
However, when run via SQL Agent, it succeeds. GAH!
I tried 50 different variations: modifying the script, using various try/catch blocks found on the internet. Nothing. Every single one of them succeeded.
Then I remembered that, even though the script had an error, by default PowerShell continues past errors ($ErrorActionPreference = "Continue"). So I added this line at the top:
Read on for the answer.
We're happy to announce that, starting today, we've made it much faster to get started with the Data Lake Store and Analytics services. Previously, when you tried to sign up for these services, you had to go through an approval process that introduced a delay of at least an hour.
Now, you no longer have to wait for approval, and you can simply create an account immediately.
Yan also has some “getting started” links to help you out, now that you don’t have to wait for an account.