Let me be clear about something: if you don’t have your databases in source control, there’s no point in thinking about anything else; everything else follows on from this point. Getting your code into source control is the absolute starting point of any deployment pipeline. Some people have very strong views about whether to use Git or TFS, but frankly I’m less concerned about the SVN of choice and more concerned about whether all code that gets deployed is in source control. There’s no point in fretting about how to use Octopus Deploy if you haven’t got your code in source control.
The morals of this story are to crawl before you walk, and when you do learn to walk, don’t walk on lava. I like the extended Minecraft metaphor, which sets this post off from many others of its ilk.
Environment variables are exposed through a PowerShell drive known as $env:. You can browse all of the environment variables by typing $env: at the console and hitting the Tab key, which cycles through the variable names in alphabetical order.
The $env: drive is the recommended way to refer to environment variables in PowerShell. However, it’s also possible to read them via .NET by calling the GetEnvironmentVariable static method on the Environment class; this accomplishes essentially the same task.
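A minimal sketch of both approaches, using only built-in PowerShell commands:

```powershell
# Browse all environment variables through the $env: drive
Get-ChildItem env: | Sort-Object Name

# Read a single variable via the drive...
$env:PATH

# ...or via the equivalent .NET static method
[Environment]::GetEnvironmentVariable('PATH')
```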
Read on to see how you can use these in your scripts.
A cumulative distribution function (CDF) graph is a commonly used chart type for expressing performance metrics in percentiles; it plots the percentage of users who experienced a metric value above or below a given threshold for the website.
The graph below shows the CDF of web page response time:
From the CDF graph above, we see that at the 90th percentile, the web page response time of the website is 10.3 seconds. This means that 10% of the users in the time frame during which the data was collected had an overall page load time of more than 10.3 seconds.
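As a quick illustration of reading a percentile off response-time data, here is a Python sketch (the sample values are made up, not from the linked post):

```python
import numpy as np

# Hypothetical page response times in seconds (made-up sample data)
response_times = np.array([1.2, 2.5, 3.1, 4.8, 5.0, 6.7, 7.9, 9.4, 10.3, 12.8])

# 90th percentile: 90% of users saw a load time at or below this value
p90 = np.percentile(response_times, 90)

# Empirical CDF at a threshold: fraction of observations at or below it
def ecdf_at(data, threshold):
    return float(np.mean(data <= threshold))
```

Plotting `ecdf_at` over a range of thresholds yields exactly the kind of CDF curve the excerpt describes.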
These are metrics as they relate to systems operations, but the general rules apply elsewhere as well. Also, 10.3 seconds to load a webpage seems…slow.
# INNER JOIN
merge(sum1, sum2, by.x = "month1"
    , by.y = "month2"
    , all = FALSE)
# i.e., keep only rows where sum1$month1 == sum2$month2
#| 3| -25| 3| 911|
#| 2| -33| 2| 853|
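For comparison, here is a minimal Python/pandas sketch of the same inner join; the frames and values below are stand-ins mirroring the R example’s output rows:

```python
import pandas as pd

# Hypothetical frames standing in for the R example's sum1 and sum2
sum1 = pd.DataFrame({"month1": [2, 3, 5], "total1": [-33, -25, 40]})
sum2 = pd.DataFrame({"month2": [2, 3, 7], "total2": [853, 911, 99]})

# Inner join: keep only rows whose key appears in both frames,
# mirroring R's merge(sum1, sum2, by.x = "month1", by.y = "month2")
inner = sum1.merge(sum2, left_on="month1", right_on="month2", how="inner")
```

The unmatched keys (5 and 7) drop out, just as an inner join discards non-matching rows in R.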
There’s no commentary, so it’s all script all the time. H/T R-bloggers
So let’s say you ran this script (or maybe someone checked it in as a database change to production). For a while, things are great: you’re making changes to data on your publisher and things are flowing nicely to your subscribers. Sooner or later, though, someone’s going to ask you to set up a new subscription (or maybe you need to reinitialize one). Let’s simulate that in my lab: we’re going to remove Person.Address from replication, put it back, and then create a snapshot. The key difference here is that Person.Address now has system versioning turned on. When we try to add the table back to the publication, we’re in for a shock:
This could come back to bite you, so if you use replication and are interested in temporal tables, read this closely.
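The sequence described in the excerpt looks roughly like this in T-SQL; the publication name here is hypothetical, but the replication stored procedures are the standard ones:

```sql
-- Drop the article from the (hypothetical) publication
EXEC sp_dropsubscription @publication = N'AWPub', @article = N'Address', @subscriber = N'all';
EXEC sp_droparticle @publication = N'AWPub', @article = N'Address';

-- Add it back; with system versioning now enabled on Person.Address,
-- this is the step that fails
EXEC sp_addarticle @publication = N'AWPub', @article = N'Address',
     @source_owner = N'Person', @source_object = N'Address';

-- Generate a new snapshot
EXEC sp_startpublication_snapshot @publication = N'AWPub';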
I had only been using it for a few days when I found an interesting case. My query had a native query:
[Query = “select * from myTable where field=” & value]
When I tried to execute it, I received a message from the Power Query SDK that
The evaluation requires a permission that has not been provided. Data source kind: ‘SQL’. Permission kind: ‘NativeQuery’.
Read on for the solution.
Traditionally, developers would develop code without thinking much about operations. They’d get some new code ready, deploy it somehow, and hope it didn’t break much. And the Operations team would brace themselves for a ton of pain, and start pushing back on change, and be seen as a “BOFH”, and everyone would be happy. I still see these kinds of places, although for the most part, people try to get along.
With DevOps, the idea is that developers work in a way that means that things don’t break.
I know, right.
My tongue-in-cheek-or-maybe-not version of this is, DevOps is when you put developers in the on-call rotation. This provides motivation to build tools that actually explain what’s going on and write code that plays nicer with others.
In the world of DevOps, an Operations team might utilize a monitoring tool that feeds useful information directly back to Developers and Testers. Developers and Testers may cross-train so that both learn how to write effective automated unit tests. Developers and Testers could also cross-train with Operations to improve application deployment automation processes.
These examples all share one common theme – teams reaching outside of their traditional skill boundaries, to actively engage, learn, and integrate. This active engagement is what has often been missing from traditional operations.
Andy’s post is a good example of the positive take on DevOps (and the one to which I subscribe).
Recently I got a request from a user who wanted to copy a specific set of tables and their indexes into a new database to ship to a vendor for analysis. The problem was that the database had thousands of tables (8,748, to be precise). Hunting and pecking for specific tables among those is possible but tedious. Even if I managed to do that, I would still have to manually script out the indexes and run them on the target, as the native Import/Export Wizard does not handle indexes; it only copies the table structure and data! I am not a big fan of point-and-click anyway.
My first thought was to see if dbatools had something similar, though a quick glance at the command list says perhaps not yet.
A quick blog post on finding where the trend line is hiding in Power BI Desktop. The docs state that it is in the Analytics pane for certain types of visualizations. However, it doesn’t always show up:
Click through to see the necessary preconditions for a trend line to appear.