Press "Enter" to skip to content

Category: Documentation

Leaving Good Comments in a Stored Procedure

Erik Darling comments on your comments:

Possibly the least helpful, but most humorous, way of leaving comments, is a large block of green text up at the top of a module.

There are all sorts of helpful insights buried in those comments to help me as a consultant understand my audience.

But… 

I agree with a lot of where Erik is going with his thoughts. The area where we probably have some daylight is that I’d rather limit comments to statements of why rather than what. Sure, when I’m pseudo-coding out a procedure, I’ll have a bunch of little “do this thing here” types of comments, but I remove those as I build out the code. Instead, explain why you’re doing something if it isn’t patently obvious: you rewrote a query in a harder-to-read fashion because it performs much better, that kind of thing.

But in fairness, as long as your comments actually reflect the code, it’s really hard to say any code base is ever over-commented. It’s way easier to go the opposite direction.
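Erik’s post is about T-SQL stored procedures, but the why-over-what idea carries into any language. As a rough illustration (everything here is hypothetical, including the ticket number), compare a comment that restates the code with one that records the reason the code looks the way it does:

# "What" comment -- adds nothing the code doesn't already say:
# sum up the order totals
$grandTotal = ($orders | Measure-Object -Property Total -Sum).Sum

# "Why" comment -- captures reasoning a future reader can't get from the code:
# Pre-aggregate here rather than in the report query, because the source view
# times out on month-end volumes (see ticket ABC-123).
$grandTotal = ($orders | Measure-Object -Property Total -Sum).Sum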

Comments closed

Documenting a Tabular Model

Olivier Van Steenlandt builds the docs:

A few months ago, I chatted with colleagues about our Tabular Model. More specifically the lack of Tabular Model documentation. Since we were thinking about replacing our current model, I started to think about how to integrate documentation easily.

Having documentation is 1 thing, making sure it’s used is something completely different. And then we’re not even talking about keeping it up to date. My initial idea was to include the documentation task during the development phase. That said, time to get the thoughts into practice.

Read on to see what Olivier did.

Comments closed

Using SQL Doc to Find Object Dependencies

Steve Jones looks for links:

In the SQL Doc application, you can dive down into the various objects in your database. As I’ve shown below, I navigated on the left side down to a specific object.

This gives me the basics of this object, but I can scroll down and see more. The lower part below the script shows what this object depends on (Uses) and what other objects depend on this one (Used By). In this case, this object depends on dbo.ErrorLog and dbo.uspPrintError.

Read on to learn more about how it works and some tips from Steve.
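SQL Doc surfaces these links for you, but the same information lives in SQL Server’s sys.sql_expression_dependencies catalog view. Here is a rough, hand-rolled approximation of the Uses / Used By lists (a sketch, not SQL Doc’s implementation; the server, database, and object names are placeholders), run through the SqlServer module’s Invoke-Sqlcmd:

# Assumes the SqlServer PowerShell module; all names below are placeholders.
$query = @"
SELECT
    referencing_object = OBJECT_SCHEMA_NAME(d.referencing_id) + N'.' + OBJECT_NAME(d.referencing_id),
    referenced_object  = ISNULL(d.referenced_schema_name + N'.', N'') + d.referenced_entity_name
FROM sys.sql_expression_dependencies AS d
WHERE d.referencing_id = OBJECT_ID(N'dbo.SomeProcedure')  -- what the object uses
   OR d.referenced_id  = OBJECT_ID(N'dbo.SomeProcedure'); -- what uses the object
"@

Invoke-Sqlcmd -ServerInstance 'localhost' -Database 'AdventureWorks' -Query $query |
    Format-Table -AutoSize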

Comments closed

The Benefits of Checklists

Aaron Bertrand checks a box:

If there has been one constant throughout my career, it’s change. As applications become more complex and we continue improving reliability, there will always be the next patch, upgrade, new replica, new cluster, and even new cloud region – or moving to the cloud in general. For complex architectures, multiple teams are often actively involved, and even more who want to be “in the know” during any changes.

We use tickets (JIRA) to track and document the work, and incidents (FireHydrant) to expose the status to internal and external customers. But these are complex systems to keep current in real-time. And while nearly everything we do is scripted, broad audiences can’t consume code – even when saturated with comments. Since multiple teams are involved, the code is scattered across disparate things like runbooks, which are not easy or desirable to combine. How can a wide range of people stay coordinated during a major change?

For more complicated tasks, I’m all-in on creating either checklists or dedicated runbooks. I have a client that uses merge replication, and every once in a while, we need to rebuild replication. In that case, we have a more detailed runbook with step-by-step instructions, but either approach is great for keeping track of complex processes, whether or not they cross team boundaries.

Also, callout to the greatest Site Reliability Engineer ever to play the game, Mario Lemieux.

Comments closed

Thoughts on Community-Driven Documentation in Postgres

Robert Haas shares some thoughts:

In my opinion, the PostgreSQL documentation is simultaneously excellent and fairly poor, and both its excellence and its shortcomings are direct results of the process by which the documentation is produced. The PostgreSQL documentation is stored in the same git repository as the source code, and anyone who patches the source code so as to change documented behavior must also patch the documentation to match.

This means that nearly all documentation updates are made by the developer who is most familiar with what is changing in the code, or sometimes by another developer who has studied those changes closely. Therefore, the documentation is usually extremely accurate. Sure, there are oversights, but it would be incredible to discover that some PostgreSQL command has a documented option which doesn’t actually exist, or that a parameter which is documented to take a string argument actually takes an integer or a Boolean. Typically, the descriptions of what SQL statements do and how that behavior is changed by parameter settings or options passed to the command itself are crisp and precise.

But read the whole thing, as there are downsides to this approach.

Comments closed

Adding Help to Your PowerShell Code

Robert Cain helps those who help themselves:

Having good help is vital to the construction of a module. It explains not only how to use a function, but the purpose of the module and even more.

Naturally I’ve included good help text in the ArcaneBooks module, but as I was going over the construction of the ArcaneBooks module I realized I’d not written about how to write help in PowerShell. So in this post and the next I’ll address this very topic.

Read on for Robert’s thoughts on the topic, including the standard way to add comment-based help so PowerShell’s built-in Get-Help cmdlet works for you.
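Comment-based help is a specially formatted comment block with keywords such as .SYNOPSIS, .DESCRIPTION, .PARAMETER, and .EXAMPLE, placed inside the function so Get-Help can read it. A minimal sketch with a made-up function (not from the ArcaneBooks module):

function Get-BookTitle {
    <#
    .SYNOPSIS
        Returns the title of a book for a given ISBN.
    .DESCRIPTION
        Illustrative only: looks up a hard-coded table so the example stays
        self-contained; a real implementation would call a web API.
    .PARAMETER ISBN
        The ISBN of the book to look up.
    .EXAMPLE
        Get-BookTitle -ISBN '0-000-00000-0'
    #>
    param (
        [Parameter(Mandatory)]
        [string] $ISBN
    )

    $titles = @{ '0-000-00000-0' = 'Example Title' }
    return $titles[$ISBN]
}

# Once the function is loaded, Get-Help reads the comment block:
Get-Help Get-BookTitle -Full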

Comments closed

The Library of Congress Control Number (LCCN)

Robert Cain continues a series on book archival:

This is part of my ongoing series on my ArcaneBooks project. The goal is to provide a module to retrieve book data via provided web APIs. In the SEE ALSO section later in this post I’ll provide links to previous posts which cover the background of the project, as well as how to use the OpenLibrary APIs to get data based on the ISBN.

In this post I will provide an overview of using the Library of Congress API to get data based on the LCCN, short for Library of Congress Control Number.

This has been an interesting series to watch, as it’s a practical, non-work application of a whole set of development skills.
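For a feel of the general shape of such a lookup (a hypothetical sketch, not the ArcaneBooks code; the endpoint format and response structure here are assumptions, so see Robert’s post for the real details):

# Hypothetical sketch -- the URL format is an assumption, not the ArcaneBooks implementation.
$lccn = '54009698'                              # placeholder LCCN
$uri  = "https://lccn.loc.gov/$lccn/marcxml"    # LCCN Permalink, MARCXML flavor (assumed)

try {
    # Invoke-RestMethod hands back the parsed response (XML in this case);
    # mapping raw MARC fields to friendly book properties is the work the module does for you.
    $record = Invoke-RestMethod -Uri $uri -Method Get
    $record
}
catch {
    Write-Warning "Lookup failed for LCCN $lccn : $_"
}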

Comments closed

Fixing the Parallelism Documentation

Erik Darling shreds the docs:

The section with the weirdest errors and omissions is right up at the top. I’m going to post a screenshot of it, because I don’t want the text to appear here in a searchable format.

That might lead people not reading thoroughly to think that I condone any of it, when I don’t.

Erik pulls no punches in this post. Hopefully the end result is that this part of the documentation improves.

Comments closed

Tips on Navigating Postgres Documentation

Laetitia Avrot dishes dirt on Postgres documentation:

I could have created a very easy post with quick tips on psql, like how to disable this horrible pager the “ancient” Postgres contributors insist on keeping on by default (BTW, it’s \pset pager off, you’re welcome, you’ll thank me later), but as I wrote an entire website on that exact topic, I thought I needed to find something else.

So here is my topic: how to use the Postgres documentation! Yes, that documentation content is great, but no, that documentation is not easy to navigate at first.

Click through for tips on the best ways to navigate through this documentation, as well as important pages and topics based on your use case and role.

Comments closed

Reviewing an Existing Data Model with Power BI Model Documenter

Marc Lelijveld wants to see what’s out there in the wild:

In some scenarios, it can happen that you do not even have a Power BI Desktop data model. For example, when you migrated from Analysis Services to Power BI Premium, or in case you have to deal with large datasets and it is directly developed using Visual Studio, Tabular Editor or any other tool of your preference and deployed over the XMLA endpoint. A similar setup could be that you once enriched your data model using Tabular Editor or ALM Toolkit, which resulted in the fact that your Power BI Desktop file is no longer your golden version of your data model.

Another scenario could be gaining an overview of partitioning when using incremental refresh. The partitions of Incremental Refresh are only generated in the Power BI Service. So, including this information in your generated documentation is only possible when you connect directly to the Power BI Service.

But what if you still want to show a complete view of your Power BI data model, and extract insights using the Power BI Model Documenter? I can tell you; it is possible!

Read on to see what you can do in that case.

Comments closed