In this quick post I will show you my parallel plan and how I use MAXDOP = 1 to suppress parallel plan generation so the operation executes serially. (Disclaimer: I am not saying this is the right thing to do, merely using it as an example of tweaking this setting; to be honest, in ten years I have set MAXDOP = 1 twice.) I executed a query in Azure, and you can see the classic operators such as Gather Streams and Repartition Streams.
This change will affect all queries hitting that database, so it’s a coarser tool than changing the cost threshold for parallelism (not allowed in Azure SQL Database) or setting MAXDOP per query (allowed).
1. Create a Linked Server to 2nd instance on 1st Instance
2. Create an empty table to hold the comparison data
3. Insert the 1st instance sys.configurations data into the table
4. Insert the 2nd instance sys.configurations data into the table using Linked Server
5. Compare to get the differences
6. Cleanup – Drop Linked Server and the temporary comparison table
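The steps above might look something like this sketch. The linked server name (`SQL02`) and the comparison table name are assumptions for illustration:

```sql
-- 1. Create a linked server pointing at the second instance (name is an assumption)
EXEC sp_addlinkedserver @server = N'SQL02', @srvproduct = N'SQL Server';

-- 2. Create an empty table to hold the comparison data
CREATE TABLE dbo.ConfigCompare
(
    InstanceName sysname,
    name         nvarchar(70),
    value_in_use sql_variant
);

-- 3. Insert the first instance's sys.configurations data
INSERT INTO dbo.ConfigCompare (InstanceName, name, value_in_use)
SELECT @@SERVERNAME, name, value_in_use
FROM sys.configurations;

-- 4. Insert the second instance's data via the linked server
INSERT INTO dbo.ConfigCompare (InstanceName, name, value_in_use)
SELECT 'SQL02', name, value_in_use
FROM SQL02.master.sys.configurations;

-- 5. Compare to get the differences
SELECT a.name,
       a.value_in_use AS FirstInstanceValue,
       b.value_in_use AS SecondInstanceValue
FROM dbo.ConfigCompare AS a
JOIN dbo.ConfigCompare AS b
    ON a.name = b.name
   AND a.InstanceName = @@SERVERNAME
   AND b.InstanceName = 'SQL02'
WHERE CAST(a.value_in_use AS nvarchar(100)) <> CAST(b.value_in_use AS nvarchar(100));

-- 6. Cleanup: drop the linked server and the comparison table
DROP TABLE dbo.ConfigCompare;
EXEC sp_dropserver @server = N'SQL02';
```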
Click through for the script.
I can only recall one time in the past several years (at least a decade) that I’ve found triggers to be useful. It involves data migration.
The problem: You have a massive, high-activity table. Let’s call this Table_A. You need to make significant changes to it. For example, the clustered index needs to change. How do you accomplish this?
I’ve used a similar process with good success in the past.
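A rough sketch of that trigger-based migration pattern follows; every object and column name here is hypothetical, and the batched backfill and final swap are only noted in comments:

```sql
-- Build the replacement table with the new clustered index (names are assumptions)
CREATE TABLE dbo.Table_A_New
(
    ID        int       NOT NULL,
    OrderDate datetime2 NOT NULL,
    Amount    money     NOT NULL,
    CONSTRAINT PK_Table_A_New PRIMARY KEY CLUSTERED (OrderDate, ID)
);
GO
-- Trigger keeps the new table in sync while the backfill runs.
-- Updates are handled as a delete followed by an insert.
CREATE TRIGGER trg_TableA_Sync ON dbo.Table_A
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    DELETE t
    FROM dbo.Table_A_New AS t
    JOIN deleted AS d ON t.ID = d.ID;

    INSERT INTO dbo.Table_A_New (ID, OrderDate, Amount)
    SELECT ID, OrderDate, Amount FROM inserted;
END;
GO
-- Then: backfill existing rows in batches, verify counts, and swap the
-- tables (e.g. with sp_rename) during a brief maintenance window.
```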
SQL Server 2017 Automatic Tuning looks for queries where execution plans change and performance regresses. This feature depends on Query Store being enabled. Note that even if you don’t turn on Automatic Tuning, you still get the benefit of having access to the data. That’s right: Automatic Tuning will tell you what it would do if it were enabled. Think of this as free performance tuning training. Go look at the DMVs and try to understand why the optimizer would want to lock in an execution plan. We will actually go through a real-world example:
Click through for the example. I wouldn’t automatically trust these automatic results, but my experience has been generally positive.
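Even with Automatic Tuning turned off, you can review what it would have done by querying the tuning recommendations DMV; a minimal sketch:

```sql
-- Inspect plan-regression recommendations captured by SQL Server 2017+
-- (requires Query Store to be enabled on the database)
SELECT
    reason,
    score,  -- estimated impact, 0-100
    JSON_VALUE(details, '$.implementationDetails.script') AS force_plan_script,
    JSON_VALUE(state,   '$.currentValue')                 AS current_state
FROM sys.dm_db_tuning_recommendations;
```

The `force_plan_script` column shows the exact `sp_query_store_force_plan` call Automatic Tuning would run, which you can review and execute manually instead.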
In my mind there are a couple of ways to move a database across resource groups. They vary from scripting to just using the Azure portal. I am going to use the Azure portal and do the following.
- Export a database in resource group X to a storage account Z.
- Import the file from the storage account Z into a database that is in resource group Y.
It’s just like a “backup and restore” strategy, all with the assumption that you are working within the same subscription ID.
Read on for a step-by-step demonstration on how to do this.
The SET LANGUAGE command allows us to choose a language for a session. By session here I mean by SPID: each query tab you open in SSMS is another connection to the database and receives its own SPID. Almost anyone with access to the database can run this command, because it requires only membership in the public role to execute.
Now let us change the session language to Russian.
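For example, a quick sketch of switching the session language and seeing the effect:

```sql
-- Change this session's language (affects date formats and system messages)
SET LANGUAGE Russian;

SELECT @@LANGUAGE                  AS CurrentLanguage,
       DATENAME(month, GETDATE()) AS MonthName;  -- month name comes back in Russian

-- Revert for this session
SET LANGUAGE us_english;
```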
You can change the default language for all sessions, as well as switching language for a specific session.
Here are some post-installation best practices for SQL Server on Linux that can help you maximize database performance:
1. To maintain efficient Linux and SQL Server scheduling behavior, it’s recommended to use the ALTER SERVER CONFIGURATION command to set PROCESS AFFINITY for all the NUMA nodes and/or CPUs. [Setting Process Affinity]
2. To reduce the risk of tempdb concurrency slowdowns in high-performance environments, configure multiple tempdb data files by using the ADD FILE command. [tempdb Contention]
3. Use mssql-conf to configure the memory limit and ensure there’s enough free physical memory for the Linux operating system.
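The two T-SQL items above might look like this sketch; the NUMA node range and file path are assumptions for illustration:

```sql
-- 1. Bind SQL Server schedulers to all NUMA nodes (example assumes two nodes)
ALTER SERVER CONFIGURATION SET PROCESS AFFINITY NUMANODE = 0 TO 1;

-- 2. Add a tempdb data file to reduce allocation contention
--    (path and sizes are assumptions; repeat for the desired file count)
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2,
          FILENAME = '/var/opt/mssql/data/tempdev2.ndf',
          SIZE = 8MB,
          FILEGROWTH = 64MB);

-- 3. The memory limit is set from the shell, not T-SQL, e.g.:
--    sudo /opt/mssql/bin/mssql-conf set memory.memorylimitmb 8192
```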
Some of these are common for Windows and Linux (like multiple tempdb files) but there are several Linux-specific items here.
Where Can I Get the Catalogue From?
The Undercover Catalogue is available from our GitHub site.
What Does the Catalogue Store?
The Undercover Catalogue stores all manner of useful information on your SQL Servers, including:
- Instances
- Users and permissions
- Agent Jobs
with many more modules planned in future releases.
Check it out and I’m sure they’d love feedback. Also, read on for where this toolkit is going.
In both cases you can see that sys.sql_expression_dependencies provides the same basic info. But when it comes to tables that are referenced by a stored procedure, sys.sql_expression_dependencies gives you less detail than you can get from sys.dm_sql_referenced_entities. The biggest difference in both examples is that sys.sql_expression_dependencies gives you info on cross-database and cross-server dependencies, which can be super helpful.
However, once again I have to sound a note of caution, because even sys.sql_expression_dependencies does not catch things referenced in a dynamic SQL string. Sales.Orders appears in dynamic SQL in the [Application].[Configuration_EnableInMemory] and [DataLoadSimulation].[Configuration_ApplyDataLoadSimulationProcedures] stored procedures, but the view does not catch this. So far, the best way I know of to find objects referenced in dynamic SQL strings is to search for the object name in sys.sql_modules.
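That sys.sql_modules search might look like this sketch, using Sales.Orders from the example above:

```sql
-- Find modules (procedures, views, functions, triggers) whose definition
-- mentions an object name, even inside a dynamic SQL string
SELECT OBJECT_SCHEMA_NAME(m.object_id) AS SchemaName,
       OBJECT_NAME(m.object_id)        AS ModuleName
FROM sys.sql_modules AS m
WHERE m.definition LIKE N'%Sales.Orders%';
```

Note this is a plain text match, so it will also return modules that merely mention the name in a comment.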
Read the whole thing.
Question: How can I find the owner of a SQL Server schema? I want to find the owner through a T-SQL solution, rather than looking through the GUI.
Answer: To find a schema owner you can use either the sys.schemas catalog view or INFORMATION_SCHEMA.SCHEMATA. Since SQL Server 2005, INFORMATION_SCHEMA.SCHEMATA has come into line with sys.schemas: it returns schemas only from the current database.
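A minimal sketch of both approaches:

```sql
-- Catalog view: join to database principals to resolve the owner's name
SELECT s.name  AS SchemaName,
       dp.name AS SchemaOwner
FROM sys.schemas AS s
JOIN sys.database_principals AS dp
    ON s.principal_id = dp.principal_id;

-- ANSI view: the owner is exposed directly as a column
SELECT schema_name, schema_owner
FROM INFORMATION_SCHEMA.SCHEMATA;
```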
Click through for simple examples of both methods.