Press "Enter" to skip to content

Category: Administration

Installing Linux And Then SQL Server On Linux

David Alcock has a couple of posts covering the installation of SQL Server on a brand-new Ubuntu VM.  First, David installs Ubuntu:

The system requirements for running SQL Server on Ubuntu 16.04.2 contain the following:

Note

You need at least 3.25GB of memory to run SQL Server on Linux. For other system requirements, see System requirements for SQL Server on Linux.
On the create VM window, the memory is currently set to 1024 MB, so by clicking the Customize Hardware button I can change the allocated memory to 4 GB (4096 MB), as in the screenshot below.

Then, he explains the process of installing SQL Server:

Let’s break it down a little bit. First is sudo, which gives root permissions to a particular command. This is as opposed to sudo su, which I had to do later on in the install to switch to superuser mode for the session.

Next is apt. Apt is a command-line tool which works with the Advanced Packaging Tool and enables you to perform installs, updates, and removals of software packages. In this case we’re installing curl, so we use the install command.
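For context, the overall flow David walks through looks roughly like this (URLs and paths follow Microsoft’s documented steps for Ubuntu 16.04; see the post for the exact commands):

# Import the repository key (this is why curl gets installed first)
curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -

# Register the SQL Server repository, then install the package
sudo add-apt-repository "$(curl https://packages.microsoft.com/config/ubuntu/16.04/mssql-server.list)"
sudo apt-get update
sudo apt-get install -y mssql-server

# Post-installation configuration: accept the EULA and set the sa password
sudo /opt/mssql/bin/mssql-conf setup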

I think Microsoft did a good job of simplifying the installation process on Linux and making it “Linux-y,” with an easy installation and then post-installation configuration.

Comments closed

Rebuilding Full-Text Catalogs

Thomas Rushton ran into an issue with full-text indexing component versions:

Restoring 27 databases; they all restored properly, but 15 of them gave a warning along these lines:

Warning: Wordbreaker, filter, or protocol handler used by catalog ‘FOOBARBAZ’ does not exist on this instance. Use sp_help_fulltext_catalog_components and sp_help_fulltext_system_components to check for mismatching components. Rebuild catalog is recommended.

Read on for the solution.
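In the meantime, the checks and the rebuild that the warning recommends look something like this (using the catalog name from the warning above):

-- Compare components installed on the instance with components each catalog uses
EXEC sys.sp_help_fulltext_system_components 'all';
EXEC sys.sp_help_fulltext_catalog_components;

-- Rebuild the affected catalog
ALTER FULLTEXT CATALOG FOOBARBAZ REBUILD;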

Comments closed

Permissions On XML Schema Collections

Shane O’Neill diagnoses a permissions issue with XML Schema Collections…or is it?

In my head I’m thinking of all the things that I can do to try and troubleshoot this problem.

  1. Extended Events my session,
  2. Ask my Senior DBA,
  3. Cry

Then I realize that I’m jumping the gun again, and I slow down and check the first error message again. This time without the developers shouting in my ear about permissions.

This is a great example of why it’s important to troubleshoot using a methodical, logical process.  If you get it stuck in your head that the answer is quite obviously something, you lose a bunch of time if it turns out that it isn’t quite as obvious.
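If you do need to rule permissions in or out, checking what has been granted on XML schema collections is straightforward; a sketch (the GRANT uses hypothetical names):

-- What permissions exist on XML schema collections in this database?
SELECT pr.name AS principal_name,
       pe.permission_name,
       pe.state_desc
FROM sys.database_permissions AS pe
JOIN sys.database_principals AS pr
  ON pe.grantee_principal_id = pr.principal_id
WHERE pe.class_desc = 'XML_SCHEMA_COLLECTION';

-- Granting use of a specific collection (hypothetical names)
GRANT EXECUTE ON XML SCHEMA COLLECTION::dbo.MySchemaCollection TO SomeUser;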

Comments closed

Incorrect PFS Free Space

Dave Mason walks through troubleshooting one database corruption scenario:

I’ve been lucky with database corruption during my career. I could probably count on one hand the number of times I’ve had to deal with it. A couple times, it was in a customer’s environment–they managed it themselves, but called me in to help. The other incidents were ones I inherited from a backup I had to restore into a production environment. The first time it happened to me, I didn’t realize it until days later when DBCC CHECKDB ran during a weekend maintenance window. After that, I added a new “rule” to my list: always run DBCC CHECKDB after restoring a database from someone else. That rule paid dividends today.

Here’s the output I saw:

Msg 8914, Level 16, State 1, Line 50
Incorrect PFS free space information for page (1:2564368) in object ID 457768688, index ID 1, partition ID 72057619124060160, alloc unit ID 72057594116767744 (type LOB data). Expected value 0_PCT_FULL, actual value 100_PCT_FULL.
CHECKDB found 0 allocation errors and 1 consistency errors in table 'tbl_Redacted' (object ID 457768688).
CHECKDB found 0 allocation errors and 1 consistency errors in database 'db_redacted'.
repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB (db_redacted).

Read on to see how Dave solved this issue.
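Dave’s rule is cheap insurance, and the check itself is a one-liner (using the database name from the output above):

-- Verify a freshly-restored database before trusting it
DBCC CHECKDB (N'db_redacted') WITH NO_INFOMSGS, ALL_ERRORMSGS;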

Comments closed

Synchronizing Logins And Jobs

Ryan Adams enumerates several methods for synchronizing logins and SQL Agent jobs across mirrored instances or Availability Group replicas:

There is an awesome set of PowerShell cmdlets out there written by MVP Chrissy LeMaire.  This method is my personal choice.  It works great and is easy to automate.  You can run it with SQL Agent or you can just use Scheduled Tasks in the OS.  The Scheduled Tasks method is a little cleaner, but you don’t get to see it in SQL Server.  Also, if you are on a cluster and running Windows 2012, you can cluster the Task Scheduler as an added benefit.

Chrissy wrote this with the intent of making migrations easier, and she succeeded.  In fact, I made it a point to thank her at MVP Summit last year because it made my life insanely easier.  The advantage here is that you can automate a lot more than just logins.  In fact, you can migrate and automate pretty much anything at the server level.  Here is the link that I guarantee you are going to bookmark, followed by a video demo where I show how to install and automate the syncing of logins using both the SQL Agent method and the Scheduled Tasks method.

https://dbatools.io/

dbatools would be my preference in this situation as well, but click through to see four other methods, along with code.

Comments closed

Tuning Apache Solr

Michael Sun explains how to optimize Apache Solr’s memory usage:

For Oracle JDK 8, both CMS and G1 GC are supported. As a rule of thumb, if the heap size is less than 28G, CMS works well. Otherwise, G1 is a better choice. If you choose G1, there are more details about G1 configuration in part 2 of this blog. You can also find helpful guidance in Oracle’s G1 tuning guide.

Meanwhile, it’s always a good idea to enable GC logging. The overhead of GC logging is trivial, but it gives us a better understanding of how the JVM uses memory under the hood. This information is essential in GC troubleshooting. Here is an example of GC logging settings.
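For JDK 8, a typical set of GC logging flags looks something like this (an illustration, not necessarily the exact settings from Michael’s post; the log path is an assumption):

-XX:+PrintGCDetails
-XX:+PrintGCDateStamps
-Xloggc:/var/log/solr/gc.log
-XX:+UseGCLogFileRotation
-XX:NumberOfGCLogFiles=9
-XX:GCLogFileSize=20M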

There’s some good administrative assistance, but also tips on more efficient querying.

Comments closed

Out Of User Memory Quota

Jack Li troubleshoots an In-Memory OLTP error:

[INFO] HkDatabaseTryAcquireUserMemory(): Database ID: [7]. Out of user memory quota: requested = 131200; available = 74641; quota = 34359738368; operation = 1.

This is my first time seeing this error.  As usual, I relied on source code to find answers.  The message is a result of enforcing the memory quota for In-Memory OLTP usage.  As documented in “In-Memory OLTP in Standard and Express editions, with SQL Server 2016 SP1”, SQL Server 2016 SP1 started to allow In-Memory OLTP to be used in all editions but enforces memory quotas for editions other than Enterprise edition.  The above message is simply telling you that you have reached the quota and whatever operation you did was denied.

Jack provides more context around the error as well.
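If you hit this quota yourself, a quick way to see which memory-optimized tables are consuming the budget is to query sys.dm_db_xtp_table_memory_stats in the affected database; a sketch:

-- Memory consumed by memory-optimized tables in the current database
SELECT OBJECT_NAME(object_id) AS table_name,
       memory_allocated_for_table_kb,
       memory_used_by_table_kb,
       memory_allocated_for_indexes_kb,
       memory_used_by_indexes_kb
FROM sys.dm_db_xtp_table_memory_stats
ORDER BY memory_allocated_for_table_kb DESC;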

Comments closed

Finding Failed Queries

Andrew Pruski shows how to use extended events to find queries with errors:

What this is going to do is create an extended event session that will automatically start up when the SQL instance starts and capture all recorded errors that have a severity level greater than 10.

Full documentation on severity levels can be found here, but levels 1 through 10 are really just informational and you don’t need to worry about them.

I’ve also added in some extra information in the ACTION section (for bits and bobs that aren’t automatically included) and have set the maximum number of files that can be generated to 10, each with a max size of 5MB.
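A minimal sketch of such a session (the session name, actions, and file name here are assumptions; Andrew’s post has the full definition):

CREATE EVENT SESSION [QueryErrors] ON SERVER
ADD EVENT sqlserver.error_reported (
    ACTION (sqlserver.client_hostname, sqlserver.sql_text, sqlserver.username)
    WHERE severity > 10   -- skip the purely informational levels 1 through 10
)
ADD TARGET package0.event_file (
    SET filename = N'QueryErrors',
        max_file_size = (5),        -- MB per file
        max_rollover_files = (10)
)
WITH (STARTUP_STATE = ON);          -- start automatically with the instance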

Check it out.  At one point, I had created a small WPF application to show me errors that extended events caught.  It completely freaked out a developer when I IM’d him and told him how to fix the query he’d just run from the privacy of his cube, with me nowhere to be seen.

Comments closed

Abusing The Uniquifier

Denis Gobo shows what happens when you run out of unique values available to the uniquifier:

You would get the following error…straight from the beast himself, apparently:

Msg 666, Level 16, State 2, Line 1
The maximum system-generated unique value for a duplicate group was exceeded for index with partition ID 72057594039173120. Dropping and re-creating the index may resolve this; otherwise, use another clustering key.

I will be using DBCC PAGE and DBCC IND in this blog post; if you want to learn how to use these yourself, take a look at How to use DBCC PAGE.
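To see the uniquifier for yourself on a small scale, a repro sketch along these lines (table and index names are made up) works:

-- Duplicate clustered key values each get a hidden uniquifier value
CREATE TABLE dbo.UniquifierDemo (id INT NOT NULL, filler CHAR(10) NOT NULL);
CREATE CLUSTERED INDEX cix_UniquifierDemo ON dbo.UniquifierDemo (id);

INSERT INTO dbo.UniquifierDemo VALUES (0, 'a'), (0, 'b'), (0, 'c');

DBCC IND (0, 'dbo.UniquifierDemo', 1);      -- list the table's pages (0 = current database)
DBCC TRACEON (3604);                        -- route DBCC PAGE output to the client
-- DBCC PAGE (0, <file_id>, <page_id>, 3);  -- plug in a page from the DBCC IND output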

One horror story along these lines I’ve heard of involved a system where the developers would insert every new row with a clustered index value of 0 and then subsequently update the row to set the column to its correct value.  This does not decrement the uniquifier, though, so eventually you hit the limit even if there are only a relatively small number of 0-valued rows at any given time.

Comments closed

Make Those Clustered Indexes Unique

Thomas Rushton shows what happens when your clustered index is not unique and you have a lot of time to kill:

The theory behind clustered indexes is that they are (usually) unique – after all, they define the logical layout of your table on disk. And if you have multiple records with the same clustering index key, then which order would they be in? If you don’t define the CI as unique, then SQL Server will add (behind the scenes) a so-called “Uniqueifier” (or maybe “uniquifier”) to fix that. Grant’s first post in the thread referenced above gives some information about how to see this Uniqu[e]ifier in the table structure itself.
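When the key genuinely is unique, declaring it as such avoids the hidden column entirely; a sketch with hypothetical names:

-- A unique clustered index needs no uniquifier
CREATE UNIQUE CLUSTERED INDEX cix_Orders ON dbo.Orders (OrderID);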

Read the whole thing.

Comments closed