The config file has a lot of options; in short, this is where you configure a database connection string and reference your JDBC jar file. Full details are here. By default, the examples that come with the Solr distribution use a plain-text username and password, which can be viewed from the front end: browse to http://hostname:8983/solr/, select the collection from the drop-down, click Dataimport, and expand Configuration. Obviously we do not want to store our username and password in plain text. The config file includes an option to encrypt the password and store the encryption key in a separate file.
Storing passwords in plaintext is a classic mistake that I see far too often. And then when someone checks that config file into a public GitHub repo…
When you apply a new SQL Server cumulative update, hotfix, or upgrade, SQL Server doesn’t always enable all of the fixes in the patch. When you upgrade the database engine in-place, the databases you already had stay at their pre-upgrade compatibility level, which means they run under the older set of optimizer rules. Additionally, many optimizer fixes are not turned on. The reason for this is that while they may improve overall query performance, they may have a negative impact on some queries, and Microsoft actively avoids making breaking changes to its software.
To avoid any negative performance impacts, Microsoft has hidden optimizer fixes behind a trace flag, giving admins the option to enable or disable the updated fixes. To take advantage of optimizer fixes or improvements, you would have to enable trace flag 4199 after applying each hotfix or update, or set it up as a startup parameter. Did you know this? This was something I learned while working with an existing system, years into my career. I honestly assumed the patch would just apply any applicable changes to my system. Trace flag 4199 was introduced in the SQL Server 2005 era. In SQL Server 2014, when Microsoft made changes to the cardinality estimator, they protected those changes with trace flags as well, giving you the option to run under compatibility level 120 and not have the cardinality estimator changes in effect.
Things changed starting with SQL Server 2016.
Click through to see how SQL Server 2016 made it a bit easier.
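As a sketch of the two approaches, assuming a hypothetical database name: before 2016 you enable the flag instance-wide, while SQL Server 2016 folds the pre-2016 fixes into compatibility level 130 and exposes later hotfixes as a database-scoped configuration.

```sql
-- Pre-2016 approach: enable optimizer hotfixes instance-wide.
-- (-1 applies the flag globally; also add -T4199 as a startup
-- parameter so it survives a restart.)
DBCC TRACEON (4199, -1);

-- SQL Server 2016+: compatibility level 130 includes the pre-2016
-- hotfixes by default, and later hotfixes can be turned on per
-- database (the database name here is hypothetical).
ALTER DATABASE [MyDatabase] SET COMPATIBILITY_LEVEL = 130;
ALTER DATABASE SCOPED CONFIGURATION SET QUERY_OPTIMIZER_HOTFIXES = ON;
```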
I started with Get-DbaSpConfigure to retrieve the settings available from sp_configure, as these were the most important to my comparison. I ran this against production as well as each of my test instances and saved the results of each to a variable. Because accessing my production instance requires either jumping through hoops or using SQL Authentication, I passed -SqlCredential (Get-Credential -Message "Prod" -UserName MySQLLogin) so I’d be prompted for that password instead of using Windows Authentication.
It’s good to have an automated process in place to script comparisons, either against a known good build or across servers which ought to be the same. Things that ought to be the same often aren’t.
Your servers have many-core CPUs, right? And you want SQL to utilize those cores to the best of its ability, distributing the many users’ workloads fairly amongst them, yes? Damn right, you paid $3k or more per core in freaking licensing costs! “OK”, says SQL Server, “I’ll use all available CPUs for any query with a ‘cost’ over ‘5’“. (To give context here, in case you’re not aware, ‘5’ is a LOW number; most OLTP workload queries are in the double to triple digits). “But wait!”, you protest, “I have more than 1 user, obviously, and I don’t want their horrible queries bringing all CPUs to their knees and forcing the 50 other user queries to wait their turn!”
Nate has a few recommendations here, as well as a picture of kittens.
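To see or change the threshold the quoted dialogue is about, a minimal sketch (the value 50 is purely illustrative, not a recommendation from the post):

```sql
-- 'cost threshold for parallelism' is an advanced option, default 5
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Raise the threshold so only genuinely expensive queries go parallel
EXEC sp_configure 'cost threshold for parallelism', 50;
RECONFIGURE;
```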
Performance problems for a SQL Server-based application are likely to be caused by environmental factors rather than buggy code.
Whether it is a configuration you can change in SQL Server, Windows Server, VMware, or the network, the first course of action is likely a quick assessment of the environment. This is where understanding the various configurations and best practices is key. Knowing what to look for can save tons of time.
A mistake I often see is that a performance issue gets passed off to someone else (more senior), and that engineer assumes a lot of things without checking. People are going to relay the problem as they see it – not as it actually is. This leads to skipping some elementary checks that could save time and frustration spent tracking down imaginary bugs.
Start troubleshooting with a quick environmental check.
There are quite a few checks here.
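As one possible starting point for that quick environmental pass (this particular query is my sketch, not a check from the post), you can eyeball a few instance-level settings against your baseline:

```sql
-- Compare configured vs. running values for a few settings that
-- commonly drift from the baseline
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name IN ('max degree of parallelism',
               'cost threshold for parallelism',
               'max server memory (MB)');
```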
Now we can check the data in the table:

SELECT * FROM [dbo].[TestTable]
GO
As expected, there’s a gap in the IDs. Why it jumps to 1002 is discussed in the Connect item.
OK, now let’s try running the same code again but this time we will disable the identity cache.
This doesn’t eliminate gaps altogether; those can still happen upon rollback of a transaction. This is reason #1 why you should not use identity columns as proper sequences.
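For reference, disabling the identity cache is a database-scoped configuration in SQL Server 2017+, and a sequence with NO CACHE is closer to the right tool when gaps matter (the sequence name below is hypothetical; note that rollbacks can still consume sequence values too):

```sql
-- SQL Server 2017+: turn off identity caching for the current database
ALTER DATABASE SCOPED CONFIGURATION SET IDENTITY_CACHE = OFF;

-- A sequence with NO CACHE avoids the restart-related jumps,
-- though rolled-back transactions can still leave gaps
CREATE SEQUENCE dbo.OrderNumber AS int
    START WITH 1
    INCREMENT BY 1
    NO CACHE;
```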
As of CTP 2.1 for SQL Server 2017, a set of new environment variables is available. These variables allow us to configure each SQL Server container as we spin it up. The first version of SQL Server on Linux came with:
These had to be set for the container to start. SA_PASSWORD has to be a complex password or the container will not start. CTP 2.1 introduced:
Read on for the new variables and an example on how to use them.
So we see that Machine Learning Services now has Python support as well, and that SSIS has new scale-out capabilities. In addition, the Reporting Services native installation has been removed (and no more SharePoint integration!).
Dan covers several tools, including Power BI Report Server and client.
When you run a query, SQL Server estimates how “expensive” it is in a fake costing unit, let’s call it Estimated QueryBucks.
If a query’s Estimated QueryBucks is over the “Cost Threshold for Parallelism” setting in SQL Server, it qualifies to potentially use multiple processors to run the query.
The number of processors it can use is defined by the instance level “Max Degree of Parallelism” setting.
When writing T-SQL, you can specify MAXDOP for individual statements as a query hint, to say that if that query qualifies to go parallel, it should use the number of processors specified in the hint and ignore the server-level setting. (You could use this to make it use more processors, or to never go parallel.)
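A minimal sketch of such a hint (the table and column names are hypothetical):

```sql
-- Keep this one query serial regardless of the instance-level
-- 'max degree of parallelism' setting
SELECT CustomerID, SUM(TotalDue) AS TotalSales
FROM dbo.SalesOrders
GROUP BY CustomerID
OPTION (MAXDOP 1);
```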
Read the whole thing, or watch/listen to the video.
At one client I was working with in 2015, they had a server configured very similarly to what I’ve described above. They had 32 cores spread across 11 NUMA nodes: one NUMA node had four cores, the next nine had three cores each, and the 11th NUMA node had a single core.
When monitoring performance on the server, what we saw was that three cores were running at 100% (or close to it) while the other cores were running at about 4%.
The reason for this is that SQL Server is NUMA-aware, and it was trying to get the best performance possible out of the configuration it had access to.
Moderation in everything. Also, it’s important to plan growth and check every once in a while for oddities like this.
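One way to spot a lopsided layout like the one described above (this query is my sketch of such a check, not one from the post):

```sql
-- Count the online schedulers (cores) in each NUMA node;
-- wildly uneven counts are worth investigating
SELECT parent_node_id AS numa_node,
       COUNT(*)       AS online_schedulers
FROM sys.dm_os_schedulers
WHERE status = 'VISIBLE ONLINE'
GROUP BY parent_node_id
ORDER BY parent_node_id;
```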