There are actually about 40 things it checks for.
Current limitations are that queries with a cursor or a temp table are not analyzed. There’s also a bug where the missing-index suggestions and warnings appear on the wrong node/operator. And because the tool currently uses estimated plans, it may not be as accurate as it would be with actual plans.
I’m planning a few new features in the next month: feeding the utility a saved query plan and displaying the original query. I’m also planning to add history and the ability to execute a query from the tool. Before we get to those, though, we need to fix some known bugs. I’m hoping that you (yes, you!) can help me identify other bugs and make this a great tool for the SQL community.
The product is in beta, so check it out and send Daniel some feedback.
For those that may be unfamiliar, Cruise Control features include:
1. Kafka broker resource utilization tracking
2. The ability to query the latest replica state (offline, under-replicated, out of sync) from brokers
3. Goal-based resource distribution
4. Anomaly detection with self-healing
5. Admin operations on Kafka (add/remove/demote brokers, rebalance cluster, run PLE)
In this post, we will take a look at the frontend for Cruise Control, which provides a bird’s-eye view of all the Kafka installations and a single place to manage all of them.
That’s a lot of functionality in one tool.
From experience, I know it’s important to know which applications you can use locally with Azure to manage SQL Server solutions, so that you have the right tools for the job.
For instance, I was talking with some people at a client’s site the other day about deciding which application to use to future-proof themselves.
In this post I will cover applications for use with Windows, macOS, and Linux distributions.
I don’t think I’m spoiling too much in saying that about 80% of these are the same tools you would use for on-prem work.
We know some of you really hate linked servers, so we have been working on a PowerShell collection that will let you install the Inspector without using linked servers to centrally log the information; instead, the PowerShell function Invoke-SQLUndercoverInspector will do the rest for you. (We will be writing a blog post about how you can use this soon.) This is currently a pre-release version, so it’s a work in progress. I must say a massive thank you to Shane O’Neill (b | t); without his PowerShell skills this wouldn’t have turned out as well as it has. Thanks, Shane!
If you’ve already downloaded this version, be aware that there is a hotfix.
In 2018, I found myself frequently running a series of static SQL Server queries and copying/pasting the output of each query into a separate sheet in a Microsoft Excel workbook. They say that if you perform a manual task X number of times or more, you should automate it. Great advice, right? There are a number of ways to export SQL Server query results to Excel, and an internet search readily turns up many options. I didn’t find any that suited my needs, so I wrote my own utility, uncleverly named “SQL to Excel”.
SQL to Excel is a modest command-line utility. It iterates through a folder of *.sql script files, executing each script, capturing the query output, and writing the results to a Microsoft Excel workbook (one worksheet per script file). And yes, it’s an actual Excel file, not a .csv file. The C# code for SQL to Excel is available as a GitHub repository.
Click through for more details. This sounds a bit more robust than building Integration Services packages to do the same.
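The core loop is simple enough to sketch. Here is a minimal, hypothetical Python version of the same pattern, with SQLite standing in for SQL Server and a plain dictionary standing in for the workbook (one entry per script, mirroring the tool’s one-worksheet-per-script-file design; the real utility is C# and writes a true .xlsx file):

```python
# Sketch of the SQL-to-Excel iteration pattern: run every *.sql file in a
# folder and collect each result set under the script's name. SQLite is a
# stand-in for SQL Server; this is illustrative, not the actual C# utility.
import sqlite3
from pathlib import Path

def run_scripts(folder, connection):
    """Execute every *.sql file in `folder` (single-statement scripts
    assumed) and return {sheet_name: (headers, rows)} keyed by file name."""
    sheets = {}
    for script in sorted(Path(folder).glob("*.sql")):
        cursor = connection.execute(script.read_text())
        headers = [column[0] for column in cursor.description]
        sheets[script.stem] = (headers, cursor.fetchall())
    return sheets
```

Writing each (headers, rows) pair out as a worksheet is then a straightforward job for any Excel library.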
If you’ve been in hibernation, today you woke up to a world where Microsoft has embraced open source and Linux. What was once unthinkable is now happening. What is going on? Why am I even talking about this?
Since the introduction of SQL Server 2017 and the support for Linux-based deployments, I’ve had a steady stream of questions from C-levels on down to DBAs asking, in essence, this: “Do I need to abandon SQL Server on Windows Server and learn Linux?” I would use something stronger if this were a casual conversation, but the answer is an emphatic “NO!” SQL Server still runs just fine and is supported on Windows Server (including Windows Server 2019, which was just released). Support is not ending any time soon. Linux is just another option, and there may be enhancements specific to each platform because of their differences. It’s not an “either/or” thing. So breathe, OK? If you have a use case for Linux, by all means deploy SQL Server on it.
I am on the SQL on Linux bandwagon and enjoy the path that Microsoft is forging, but Allan provides us a critical tonic in this regard.
What tool do you use for Analysis Services Tabular development? SSDT, right? What else is there? Here is a little secret: I almost don’t use SSDT anymore, except for limited tasks such as importing new tables and visualizing relationships. I switched to a great community tool, Tabular Editor, and you should too if you’re frustrated with the SSDT Tabular Designer. Back in 2012, Microsoft ported the Power Pivot designer to SSDT to let BI practitioners implement Tabular models. This is why you still get weird errors saying that Excel has encountered some error. Microsoft hasn’t made any “professional” optimizations despite all the attention that Tabular gets. As a result, developers face:
Performance issues – As your model grows in complexity, even simple changes, such as renaming columns, get progressively slower. The problem, of course, is that any change results in a commit operation to the workspace database. SSDT requires a workspace database for the Data View, and it slows down all tasks even when it holds no data. While the Data View is useful for data analysts, I’d personally rather sacrifice it to gain development speed.
The horrible measure grid – Enough said. To Microsoft’s credit, the Tabular Explorer helps somewhat, but it still doesn’t support the equivalent of the SSAS MD script editor.
No automation for repetitive tasks – It’s not unusual to create many measure variants, such as YTD and QTD, and SSDT doesn’t help much with automating them.
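That last pain point is exactly where scriptable tools shine: Tabular Editor lets you generate measure variants programmatically via its scripting support. As a language-neutral illustration of the idea only (this is not Tabular Editor’s actual API, and the measure names and 'Date'[Date] column are assumptions), generating YTD/QTD variants is little more than string templating:

```python
# Hypothetical sketch: generate (measure name, DAX expression) pairs for
# time-intelligence variants of base measures. TOTALYTD/TOTALQTD are real
# DAX functions; the 'Date'[Date] column name is an assumption.
VARIANTS = {
    "YTD": "TOTALYTD({base}, 'Date'[Date])",
    "QTD": "TOTALQTD({base}, 'Date'[Date])",
}

def measure_variants(base_measures):
    """Return one (name, expression) pair per base measure per variant."""
    return [
        (f"{name} {suffix}", template.format(base=f"[{name}]"))
        for name in base_measures
        for suffix, template in VARIANTS.items()
    ]
```

Feeding a list of base measures through such a template removes the copy-and-paste grind the measure grid otherwise forces on you.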
It does look interesting.
Simon Su has an interesting tool available:
Now I have developed a tool to analyze AG log block movement latency between replicas and create a report accordingly.
Click through for more info and check it out on GitHub.
Marching onward to dbatools 1.0, a ton of commands have been renamed to align with our now mature naming scheme. These changes were made in today’s release, version 0.9.410 aka regularlegs, now available on GitHub and the PowerShell Gallery.
Here’s the general idea:
- DbaDatabase has mostly been renamed to DbaDb, with a couple of exceptions.
- DbaSql has been changed to just Dba, no exceptions. Think of Dba as implying “Sql”.
- DbaConfig has been renamed to DbatoolsConfig.
- TempDbConfiguration has been renamed to TempdbConfig.
- All Configuration commands are now Config, except SpConfigure.
- DbaDacpac has been renamed to DbaDacPackage. Dac is the prefix for our data-tier application commands.
- DbaDbQueryStoreOptions has been renamed to DbaDbQueryStoreOption.
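Because these renames are mostly mechanical prefix swaps, updating old scripts can be largely automated. Here is a rough sketch of the rules above as ordered string replacements; it is illustrative only (the sample command names are hypothetical, and the “mostly” and “couple of exceptions” caveats above mean a real migration needs a proper mapping table):

```python
# Ordered substring replacements derived from the rename list above; the
# most specific patterns come first so e.g. DbaDbQueryStoreOptions is
# handled before any shorter match. Not an official migration tool.
RENAMES = [
    ("DbaDbQueryStoreOptions", "DbaDbQueryStoreOption"),
    ("DbaDacpac", "DbaDacPackage"),
    ("TempDbConfiguration", "TempdbConfig"),
    ("DbaConfig", "DbatoolsConfig"),
    ("DbaDatabase", "DbaDb"),
    ("DbaSql", "Dba"),
]

def modernize(command_name):
    """Apply the first matching rename rule to a dbatools command name."""
    for old, new in RENAMES:
        if old in command_name:
            return command_name.replace(old, new)
    return command_name
```

Running every command reference in a script through a function like this catches the bulk of the renames in one pass.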
Some of this is in preparation for breaking changes in dbatools 1.0. There’s a lot of stuff in this release, so check it out.
Andy Leonard shows off parts of SSIS Catalog Compare. First up is the catalog reference script:
As you may glean from this analysis of one script generated for SSIS Catalog References management, the Transact-SQL for scripting SSIS Catalog artifacts in SSIS Catalog Compare is rigorous, with several existence and error-condition checks performed before any updates. The script is also designed to be idempotent, meaning it will succeed, with repeatable and predictable results, each time it is executed, and the script itself is re-executable.
Then an environment script:
The final portion of the script checks for the existence of the Environment Variables and responds accordingly. This is a three-step process:
- Drop the Environment Variable if it exists.
- Create the Environment Variable.
- Set the Environment Variable value.
If the Environment Variable exists, the script drops it. Why? SSIS Catalog Compare wants to be sure the environment variable is created with the proper data type and initial values.
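Drop-and-recreate is the simplest way to make “this variable must exist with exactly this type and value” idempotent. A generic sketch of the three steps in plain Python (a dict stands in for a catalog environment; this is not the actual SSIS catalog stored procedures):

```python
def ensure_variable(environment, name, data_type, value):
    """Mirror the three steps above: drop if present, create, set value.
    `environment` is a plain dict standing in for a Catalog Environment."""
    if name in environment:                    # 1. drop the variable if it exists
        del environment[name]
    environment[name] = {"type": data_type}    # 2. create it with the right type
    environment[name]["value"] = value         # 3. set its value
    return environment
```

Rerunning this any number of times leaves the same end state regardless of what was there before, which is exactly the guarantee the drop-first approach buys.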
And connection literals:
These messages are intended to be copied and stored in the Notes field of a ticketing system in a DevOps enterprise. Note the detail contained herein:
Script Name – the path to the file used to perform the operation.
Generated From – the SQL Server instance of the SSIS Catalog host from which the script was generated.
Catalog Name – redundant at present because all SSIS Catalogs are named “SSISDB.”
Folder Name – the name of the SSIS Catalog Folder that contains the scripted artifact.
Project Name – the name of the SSIS Project that contains the scripted artifact.
Project Connection Name – the name of the SSIS Project Connection.
Generated By – the name of the enterprise account used to generate the artifact’s script.
- Note: SSIS Catalog Compare respects the security model of the SSIS Catalog. Windows Authentication is required to perform many SSIS Catalog operations.
Generated Date – the date and time the script was generated.
Generated From – the version of CatalogBase used in the generation of the artifact script.
- Executing On – the name of the machine on which CatalogBase was running.
Deployed to Instance – the SQL Server instance hosting the target SSIS Catalog.
Deploy Date – the date and time the deployment script was executed.
Deploy By – the enterprise account used to deploy the artifact script.
Andy has put a lot of thought into SSIS Catalog Compare, so go check it out.