Category: Documentation

Creating PowerShell Documentation In VS Code

Rob Sewell has a post covering a nice addition to Visual Studio Code when you’re building Get-Help documentation for a cmdlet:

Now you can simply type <# and your help will be dynamically created. You will still have to fill in some of the blanks, but it is a lot easier.

Here it is in action in its simplest form.

But it gets better than that. When you add parameters to your function code, they are added to the help as well. Also, you can simply tab between the different entries in the help to move from one to the next.
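
If you haven't seen comment-based help before, here is a minimal sketch of the kind of scaffold that gets generated (the exact sections and their order may vary by extension version, and the function itself is made up):

    function Get-Example {
        <#
        .SYNOPSIS
        Short description of what the function does.

        .DESCRIPTION
        Longer description, including any remarks.

        .PARAMETER Name
        Description of what the Name parameter controls.

        .EXAMPLE
        Get-Example -Name 'World'
        #>
        param (
            [string]$Name
        )

        "Hello, $Name"
    }

Once the scaffold is filled in, Get-Help Get-Example returns the documentation just like it does for built-in cmdlets.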

Looks like a nice time-saver.

Log Info DMF

Andrew Pruski looks at the new sys.dm_db_log_info dynamic management function in SQL Server 2017:

This new DMF is intended to replace the (not so) undocumented DBCC LOGINFO command. I say undocumented as I’ve seen tonnes of blog posts about it but never an official Microsoft page.

This is great imho; we should all be analyzing our database's transaction log, and this will help us do just that. Now there are other undocumented functions that allow us to review the log (fn_dblog and fn_dump_dblog; be careful with the last one).

I’m glad to see this make the cut, as replacing informative / idempotent DBCC commands with DMVs makes it easier to query this data, particularly in conjunction with other DMVs or system tables.
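
To see why a DMF beats a DBCC command here, note that you can filter, join, or aggregate it like any other table-valued function. A quick sketch (assuming SQL Server 2017) that counts virtual log files per log file in the current database:

    -- Inspect VLF layout without DBCC LOGINFO (SQL Server 2017+)
    SELECT
        [file_id],
        COUNT(*) AS vlf_count,
        SUM(vlf_size_mb) AS log_size_mb,
        SUM(CASE WHEN vlf_active = 1 THEN 1 ELSE 0 END) AS active_vlf_count
    FROM sys.dm_db_log_info(DB_ID())
    GROUP BY [file_id];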

Seeing Statistics In Execution Plans

Pedro Lopes announces that the statistics used to compile a plan are now available as part of the execution plan details:

OptimizerStatsUsage is available in cached plans, so both the “estimated execution plan” and the “actual execution plan” will have this information.

In the above example, I see the ModificationCount is very high (almost as much as the table cardinality itself); on closer observation, I found the statistic had been updated with NORECOMPUTE.

And looking at the Seek itself, there is a large skew between estimated and actual rows. In this case, I now know a good course of action is to update statistics. Doing so produces this new result: ModificationCount is back to zero and estimations are now correct.
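
If you’d rather dig this out of the plan cache than click through plans in SSMS, something along these lines should work (a sketch, not Pedro’s code; it assumes SQL Server 2017’s showplan schema):

    -- Pull OptimizerStatsUsage details out of cached plans (SQL Server 2017+)
    WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
    SELECT
        si.value('@Statistics', 'nvarchar(128)') AS stats_name,
        si.value('@ModificationCount', 'bigint') AS modification_count,
        si.value('@SamplingPercent', 'float') AS sampling_percent,
        si.value('@LastUpdate', 'datetime2') AS last_update
    FROM sys.dm_exec_cached_plans AS cp
        CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
        CROSS APPLY qp.query_plan.nodes('//OptimizerStatsUsage/StatisticsInfo') AS x(si)
    WHERE qp.query_plan.exist('//OptimizerStatsUsage') = 1;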

This will be a good addition to SQL Server 2017.

Code Beautifier

Dave Mason has a small application for auto-formatting code:

One of the challenges I’ve faced as a blogger is quickly reproducing code that looks good in HTML. I’ve tried a few different online code conversion sites, and even a C# library. But I never quite got the results I wanted. I found myself revising the HTML to fix up keywords, comments, and operators that sometimes got missed, were the wrong color, or the wrong font. All that HTML editing was tedious and time-consuming.

What I really wanted to do was copy my code from SSMS, Visual Studio, or the PowerShell ISE and paste it into my blog so I could spend less time as a web dev and more time writing. Since none of the other tools I found gave me the results I wanted, I wrote my own. The code for my little app is available on GitHub. (It’s my first project, so if something is missing or wrong, let me know.) It was created with Visual Studio 2015; there’s a compiled exe if you prefer that.

Check it out.

Code Formatting

Bert Wagner has a few tips on code formatting to make it more readable:

The second example above consistently indents lines, adds new lines, and follows consistent coding patterns. This makes it easy to skim the code quickly.

Books have chapters, headings, and paragraphs defined by formatting that make it easy to find what is needed at a glance — formatting code makes it possible to find things easily too.

The examples Bert uses are all C#, but they apply to most languages.  I think consistency is key, even more so than your ideal format.  This reduces friction between developers, at least outside of the “what should our coding standards be?” meetings…
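
As a contrived illustration of the same idea in T-SQL (the table and column names are invented), both statements below are identical to the engine, but only one of them is skimmable:

    -- Before: everything on one line
    select c.CustomerName,sum(o.OrderTotal) total from dbo.Customers c join dbo.Orders o on o.CustomerID=c.CustomerID where o.OrderDate>='20170101' group by c.CustomerName

    -- After: consistent casing, indentation, and line breaks
    SELECT
        c.CustomerName,
        SUM(o.OrderTotal) AS total
    FROM dbo.Customers AS c
        INNER JOIN dbo.Orders AS o
            ON o.CustomerID = c.CustomerID
    WHERE o.OrderDate >= '20170101'
    GROUP BY c.CustomerName;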

Stop Being Your Own Worst Enemy With Code

Bert Wagner has advice on making code understandable for future-you:

At the time I wrote it, I probably thought my code was beautiful. An elegant masterpiece. It should have been printed, framed, and hung on a wall of The Programming Hall of Fame. As clever as I thought I may have been a few years ago, I am rarely able to read my old code without some serious time wasted on debugging.

This problem plagued me regularly. I tried different techniques to make my code easier to understand.

Bert has some good thoughts here, and I’ll add two small bits.  First, there’s a saying that it takes more mental effort to debug code than it takes to write it, so if you’re writing code at the edge of your understanding, effective debugging becomes difficult to impossible.  Second, unless you see a business rule frequently enough to internalize it, your greatest familiarity with the “whys” of the system is right when you are developing.  There is huge value in taking the time to document the rules in an accessible manner; even if you wrote the code, you probably won’t remember that weird edge case at 4 AM six months from now, when you need to remember it the most.

Accepting Risk

Daniel Hutmacher argues that modern companies have reached an inefficient risk equilibrium:

Which brings us to the matter of getting stuff done. Imagine if everything you do has to be approved by a stakeholder and a manager, every line of code you write is peer-reviewed, then tested in a dev test environment, then in an acceptance test environment (which should both contain reasonably fresh, yet scrambled copies of the production data), then approved for deployment by the stakeholder (who ideally should also take time to verify the results), and finally deployed to production by two other people, under a four-eyes principle where no single person can perform any change in production alone. Sprinkle this with a bunch of project meetings, all while leaving a long and winding trail of tickets and documentation.

This is how most development cycles look. Except, you know, the test environments are rarely fresh, the tests aren’t really that thorough, and the peer-review could probably be called a peer-glance at best.

A lot of this depends upon the industry and the likelihood that an outage will cause direct physical harm to people.  I’m personally ambivalent about where the right risk acceptance point is, but Daniel makes a good argument on the “accept more risk” side.

Thoughts On Deprecated Technologies

Dan Guzman discusses a couple of deprecated components that are still hanging around:

The message is loud and clear that ODBC is the supported and preferred path for native applications going forward. The Data Access Technologies Road Map provides an overview and history of Microsoft data access technologies, which I recommend you peruse to ensure you are not inadvertently using deprecated or unsupported technologies for new development and, for existing applications, consider moving from legacy data access technologies to current ones when practical.

The current Microsoft ODBC Driver for SQL Server as of this writing is ODBC Driver 13 for SQL Server. Note that both the 13.0 and 13.1 versions of this driver have the same “ODBC Driver 13 for SQL Server” display name listed under installed programs and ODBC Data Source Administrator. If installed, the driver will be listed under installed programs along with the corresponding driver version (when viewed in detail mode). The 13.1 version adds support for the Always Encrypted feature. These ODBC Drivers are available from the link I mentioned earlier.
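
For reference, an ODBC connection string targeting this driver looks like the following (the server and database names are placeholders, and the ColumnEncryption keyword requires the 13.1 version of the driver):

    Driver={ODBC Driver 13 for SQL Server};Server=MyServer;Database=MyDatabase;Trusted_Connection=Yes;ColumnEncryption=Enabled;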

They will pry OLE DB from my cold, dead hands.  For my money, it’s still the best SQL Server data access technology for Integration Services, beating out ODBC both in terms of ease of use and performance.

Documenting A Data Warehouse

Jesse Seymour discusses a few forms of documentation for a data warehouse:

Extended properties are a great way to internally document the data warehouse.  The key advantage here is that the values of these extended properties can be retrieved with a T-SQL query.  This allows us to access this information with a view as needed.  My favorite method of using this is to create an SSRS report that end users can run to look up the attributes and comments I store in the extended property.  Data warehouse tools take some of the pain out of the process.  Unfortunately, not all tools support use of extended properties.  Make sure your tool does or consider changing tools.  Be sure to document the names and use cases for each property you create.  Consistency is the key to the value here.
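
As a rough sketch of that workflow (the object and property names here are invented, not Jesse’s), you attach a description with sp_addextendedproperty and read it back with a catalog query:

    -- Document a column with an extended property
    EXEC sys.sp_addextendedproperty
        @name = N'Description',
        @value = N'Customer surrogate key; -1 = unknown member',
        @level0type = N'SCHEMA', @level0name = N'dbo',
        @level1type = N'TABLE',  @level1name = N'DimCustomer',
        @level2type = N'COLUMN', @level2name = N'CustomerKey';

    -- Retrieve column-level properties alongside the column metadata
    SELECT
        s.name AS schema_name,
        t.name AS table_name,
        c.name AS column_name,
        ep.name AS property_name,
        ep.value AS property_value
    FROM sys.extended_properties AS ep
        INNER JOIN sys.tables AS t
            ON t.object_id = ep.major_id
        INNER JOIN sys.schemas AS s
            ON s.schema_id = t.schema_id
        INNER JOIN sys.columns AS c
            ON c.object_id = ep.major_id
            AND c.column_id = ep.minor_id
    WHERE ep.class = 1;  -- class 1 = object or column properties

A view or SSRS report over a query like the second one is what exposes the documentation to end users.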

I’ve never been a big fan of extended properties, mostly because I typically don’t work with tools which expose that information easily.  Regardless, there are other important forms of documentation, so read on.

Rules And Exceptions

Shane O’Neill had to change a mental process regarding granting permissions:

With CREATE permissions this isn’t the case; there is a piece of the above template that isn’t needed, and it was quite easy to see why once I sat down and thought about it.

Specifically, it’s this bit:

<On What>

I’m granting CREATE permissions; since I haven’t created anything, I can’t grant the permission on anything.
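
In T-SQL terms, the difference looks like this (the user and table names are illustrative):

    -- Object-level permissions are granted ON a specific securable
    GRANT SELECT ON OBJECT::dbo.SomeTable TO SomeUser;

    -- CREATE permissions are database-scoped, so there is no ON clause
    GRANT CREATE TABLE TO SomeUser;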

I like this post for the direct reason (granting certain permissions doesn’t require specifying an object), but also for the implicit point: we build up internal systems of rules and processes as we act on things.  This inductive reasoning tends to work well for us in most scenarios, but at some point, our systems break down and we find out either that we need to incorporate edge cases into our system, or that we were actually focusing on an edge case the entire time.
