Press "Enter" to skip to content

Category: Administration

Finding Broken Views

Bill Fellows has a script to test each view to see if it is broken:

Shh, shhhhhh, we’re being very very quiet, we’re hunting broken views. Recently, we were asked to migrate some code changes and after doing so, the requesting team told us we had broken all of their views, but they couldn’t tell us what was broken, just that everything was. After a quick rollback to snapshot, thank you Red Gate SQL Compare, I thought it’d be enlightening to see whether anything was broken before our code had been deployed.

You’ll never guess what we discovered.

Read on to see what they discovered (spoilers: broken views) and how Bill fixed the problem.
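Bill's script is worth the click, but as a rough sketch of the technique (our approximation, not Bill's code), you can walk sys.views and ask SQL Server to recompile each view, catching the ones which fail:

-- A rough sketch, not Bill's actual script: try to refresh each view and
-- report any which no longer compile.  Caveat: sp_refreshview raises an
-- error for schema-bound views, so handle those separately if you have any.
DECLARE @ViewName nvarchar(776);

DECLARE ViewCursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT QUOTENAME(SCHEMA_NAME(v.schema_id)) + N'.' + QUOTENAME(v.name)
    FROM sys.views AS v;

OPEN ViewCursor;
FETCH NEXT FROM ViewCursor INTO @ViewName;
WHILE @@FETCH_STATUS = 0
BEGIN
    BEGIN TRY
        EXEC sys.sp_refreshview @ViewName;
    END TRY
    BEGIN CATCH
        PRINT @ViewName + N' is broken: ' + ERROR_MESSAGE();
    END CATCH;
    FETCH NEXT FROM ViewCursor INTO @ViewName;
END;
CLOSE ViewCursor;
DEALLOCATE ViewCursor;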


More On The New Service Model

Randolph West summarizes the new SQL Server patching model:

  • Every twelve months after GA, the installation files will be updated to contain all the Cumulative Updates in what is effectively now a service pack, but won’t be called that. This will also become the slipstream update. In other words, you’re more likely to be up to date when installing from scratch, later in the release cycle.

  • Customers on the GDR (General Distribution Release) release cycle will only get important security and corruption fixes, as before. You can switch to the standard CU release cadence any time, but once you do, you can’t switch back to GDR.

Brent Ozar thinks CU12 might become the new SP1 in the minds of managers:

So now fast forward to late 2018, early 2019. You’re about to build a new SQL Server for a project, and you have two choices:

  • SQL Server 2018 – which is basically the new dev branch, getting monthly updates, or
  • SQL Server 2017 (or 2016, or 2014) – which is the stable branch, getting quarterly updates

Once a version has hit CU12, and it only gets updates once a quarter, it might be considered Good Enough For Our Apps. Managers might see 2017/2016/2014 interchangeably at that point – which might be great for the second most recent version’s adoption.

It will be interesting to see how companies adopt this new model.


Unattended Installation Of SQL Server 2017 On Linux

Denzil Ribeiro walks us through an unattended installation and configuration of SQL Server 2017 on Linux:

SQL 2017 bits are generally available to customers today. One of the most notable milestones in the 2017 release is SQL Server on Linux. Setup has been relatively simple for SQL Server on Linux, but often there are questions around unattended install. For SQL Server on Linux, there are several capabilities that are useful in unattended install scenarios:

  • You can specify environment variables prior to the install that are picked up by the install process, to enable customization of SQL Server settings such as TCP port, Data/Log directories, etc.

  • You can pass command line options to Setup.

  • You can create a script that installs SQL Server and then customizes parameters post-install with the mssql-conf utility.

The sample script link seems to be broken, but you can see it all in Denzil’s GitHub repo.
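As a small aside of our own: once an unattended install completes, it is worth confirming from T-SQL that the customizations actually took effect, for example the default data and log directories:

-- Verify post-install customizations, e.g. default data/log directories.
SELECT SERVERPROPERTY('InstanceDefaultDataPath') AS DefaultDataPath,
       SERVERPROPERTY('InstanceDefaultLogPath') AS DefaultLogPath;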


Limiting Docker Container Resources

Andrew Pruski shows how to cap the resources available to a container:

What I’ve done here is use the --cpus and --memory switches to limit that container to a maximum of 2 CPUs and 2GB of RAM. There are other options available; more info is available here.

Simple, eh? But it does show something interesting.

I’m running Docker on my Windows 10 machine, using Linux containers. The way this works is by spinning up a Hyper-V Linux VM to run the containers (you can read more about this here).

Read on to learn more.
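If you try this yourself, it can be interesting to ask SQL Server, from inside the constrained container, what hardware it thinks it has (our addition, not from Andrew's post). Just be aware that whether a given limit shows up here depends on how Docker applies it; CPU quotas, for instance, throttle scheduling rather than hiding host CPUs:

-- Run inside the container: what CPU count and memory does SQL Server detect?
SELECT cpu_count,
       physical_memory_kb / 1024 AS physical_memory_mb
FROM sys.dm_os_sys_info;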


Running Out Of Ints

Paul Randal explains the unlikelihood that you’d run out of bigints in a table:

So with 1 million rows per second, you’ll be generating 1 million x 3,600 (seconds in an hour) x 24 (hours in a day) = 86.4 billion rows per day, so you’ll need about 1.4 terabytes of new storage per day. If you’re using the bigint identity as a cluster key, each row needs new space, so you’ll need almost exactly 0.5 petabytes of new storage every year.

At that rate, actually running out of bigint values AND storing them would take roughly 150 thousand petabytes. This is clearly impractical – especially when you consider that storing *just* a bigint is pretty pointless – you’d be storing a bigint and some other data too – probably doubling the storage necessary, at least.

By contrast, if you have a staging table that flows 10 million rows a day (meaning 10 million leave and 10 million new ones enter), you’ll overflow an int column in less than a year.  It’s worth thinking about data sizes before deciding on the type of a surrogate key.  Bigint is the safest, and if you think you’ll need it, go with it.  But there is that storage overhead.
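For the skeptical, here is that arithmetic restated as a quick T-SQL check (just the numbers from above, nothing new):

-- Bigint: years to run through 2^63 values at 1 million rows per second.
SELECT POWER(CAST(2 AS float), 63)
     / (1000000.0 * 3600 * 24 * 365.25) AS YearsToExhaustBigint;  -- ~292,000 years

-- Int: days to overflow at the 10-million-row-per-day staging table.
SELECT 2147483647 / 10000000.0 AS DaysToOverflowInt;  -- ~215 days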


Connecting To Named Instances Sans Name

Derek Colley shows what to do when trying to connect to a named instance without specifying its name:

This is a good question.  There’s a few ways to address it:

  • We could configure two different connection strings and on failover, the alternative is used (by any arbitrary application mechanism)
  • For mirroring, we can use the Failover Partner= element in the connection string – but this is transactional replication being used for DR, and failover should be manual
  • We could set the instance names to be the same – however in this case it was too late since changing the instance name cleanly in SQL Server is practically impossible and breaks all sorts of things

None of these solutions was suitable.  However, on each server, there was only one named instance and no default instance.  What can we do?

Read on for a fourth solution.
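Not to give away Derek's fourth solution, but a related building block is worth knowing: every instance, named or not, listens on some TCP port, and if that port is fixed and known, clients can connect as ServerName,Port with no instance name involved. You can ask the instance which port it is using:

-- Which TCP port is this instance listening on?  With a fixed, known port,
-- clients can connect as "ServerName,Port" and skip the instance name.
SELECT DISTINCT local_tcp_port
FROM sys.dm_exec_connections
WHERE net_transport = 'TCP';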


Pausing Versus Stopping SQL Servers

Arun Sirpal demonstrates what the Pause option does on a SQL Server:

As stated via official Microsoft documentation “Pausing the Database Engine service prevents new users from connecting to the Database Engine, but users who are already connected can continue to work until their connections are broken. Use pause when you want to wait for users to complete work before you stop the service. This enables them to complete transactions that are in progress”.

Very handy! Let’s see it in action and compare it to a STOP.

Read on for more.
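Since existing sessions keep working while the instance is paused, an already-connected session can even confirm the paused state from the inside. A quick check (sys.dm_server_services requires SQL Server 2008 R2 SP1 or later):

-- From an existing session, check the engine service state; status_desc
-- follows the Windows service states (Running, Paused, Stopped, ...).
SELECT servicename, status_desc
FROM sys.dm_server_services;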


SQL Server R Services Troubleshooting

Ginger Grant walks us through a few troubleshooting tactics with SQL Server 2016 R Services:

Much to my surprise after this I received an error

Msg 39019, Level 16, State 1, Line 1
An external script error occurred:
Unable to launch the runtime. ErrorCode 0x80070490: 1168(Element not found.).
Msg 11536, Level 16, State 1, Line 1
EXECUTE statement failed because its WITH RESULT SETS clause specified 1 result set(s), but the statement only sent 0 result set(s) at run time.

I looked in the log files and didn’t find any errors.  I checked the configuration manager to ensure that I had some user ids configured in the configuration manager.  Nothing seemed to make any difference.  Looking online, the only error that I saw which might possibly be close was a different error message about 8.3 naming and the working directory.

This service is somewhat finicky to set up in my experience, though once you have it configured, it tends to be pretty stable.
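When this feature misbehaves, a minimal smoke test helps separate service configuration problems from problems in your own R code. Something like the following (assuming external scripts are already enabled via sp_configure) exercises the Launchpad service and nothing else:

-- Minimal R Services smoke test: if even this fails, suspect the service
-- setup (Launchpad, worker accounts) rather than your R script.
EXEC sp_execute_external_script
    @language = N'R',
    @script = N'OutputDataSet <- InputDataSet;',
    @input_data_1 = N'SELECT 1 AS SmokeTest;'
WITH RESULT SETS ((SmokeTest int));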


Docker Stop Versus Docker Kill

Andrew Pruski explains why docker kill is so much faster than docker stop:

When running demos and experimenting with containers I always clear down my environment. It’s good practice to leave a clean environment once you’ve finished working.

To do this I blow all my containers away, usually by running the docker stop command.

But there’s a quicker way to stop containers, the docker kill command.

Sending SIGKILL isn’t particularly polite and doesn’t let processes clean up, which could leave your process in an undesirable state during future runs.  But if you’re just re-deploying a container, you don’t really care about the prior state of the now-disposed container.


Minimizing Problem Reproductions

Lonny Niederstadt explains the value of a minimalistic repro:

Often among the hardest of my decisions is whether I should spend more time trying to simplify a given problem by eliminating additional factors, shrinking the data needed for a repro, etc… or just put all that effort into investigation purely aimed at understanding the behavior. I expect that to be a long-term challenge 🙂

I was quite happy with the way this particular one worked out, though. It started as a maze… access violations generated on a SQL Server 2016 instance with dozens of databases. The access violation came from an insert query using a synonym – with the underlying table in another database! (I didn’t know that was possible – or *ever* desirable – until I saw it in this context.) The AV was occurring during a multi-step data transfer process and it was hard to isolate the data flowing into and out of the two databases in question. But after some finagling, I got the problem repro pretty doggone small. Reproduced the AVs on several CUs of SQL Server 2016 and on SQL Server 2017 RC2.
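As an aside, the cross-database synonym Lonny mentions really is legal, and it is a one-liner (hypothetical names here):

-- A synonym in the current database pointing at a table in another database;
-- DML against dbo.StagingData actually lands in OtherDb.
CREATE SYNONYM dbo.StagingData FOR OtherDb.dbo.StagingData;
INSERT INTO dbo.StagingData (Id) VALUES (1);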

If you think you’re going to enlist the help of someone outside your organization, then you definitely want a minimalistic repro.  That will reduce the risk of red herrings, reduce the burden of assistance, and make it much more likely that some poor sap in support can actually fix your problem if it turns out to be a bug.
