I was the host this month for T-SQL Tuesday #147 and here is a look at the people who blogged (that I know about). If I’ve missed anyone, let me know.
Click through for a summary.
Kenneth Fisher pulls out the company directory:
This month the topic we are blogging about is Upgrade Strategies. Or, how do we look at SQL Server upgrades. In my case I want to talk about the absolute hardest part of any upgrade at my company.
I should point out that I work for a large company with a lot of moving parts. Over the course of my tenure here I’ve helped to support hundreds to thousands of SQL Server instances. And at least for us, the technical part of an upgrade isn’t too bad. Where we almost always run into problems is Who do we contact?
Read on for Kenneth’s thoughts on the topic.
T-SQL Tuesday this month is all about upgrades, so here are a few more thoughts on the topic. Let’s start with Reitse Eskens:
Things change when you’re working with a lot of data or when you’re trying to use SQL Server to the max. At some point you’ll see new features that will support your workload or add security for your data. Here are the steps I’d take to evaluate new versions.
Todd Kleinhans tells a horror story with a happy ending:
I had just been laid off for the first time in my life from a dot com. I was a classic ASP web developer, a junior development DBA, and I knew Access and FileMaker Pro. Interviewed and got hired on as a contractor to help with Access and ASP.
Before me, a local consulting company was retained to help them with the migration from Access to SQL Server. It was a disaster.
Rob Farley thinks about ways to make the upgrade process smoother:
I don’t envy application vendors who don’t have a strong DevOps story. They need to get it in place so that they can scale, but it doesn’t happen overnight. The road to good deployment practice is long and is threatened by all kinds of things. Code coverage is rarely complete, and problems seem to find those places that don’t have good testing in place yet (typically because problems are avoided in the areas that do have good testing). All this is so much easier when a project is starting from scratch, and not the culmination of a decade or more of development needing to be compatible with the last four versions of SQL Server and Windows.
Alex Stuart tends to end up on the late adopter side of things:
My experience of SQL upgrades is that they tend to be largely dictated by necessity, either of the ‘the Security team is getting really twitchy about these old servers’ or ‘crap, it’s license renewal time and the vendor doesn’t support x’ variety. I’ve never performed one that wasn’t under some sort of pressure. How do we get here?
Click through for the downsides of really slow adoption. The biggest problem is that you’re trading current convenience for future pain when slow-playing adoption, as Alex mentions.
Deepthi Goguri discusses upgrading:
When I started my first job as a DBA seven years ago, my project was to migrate several SQL Servers, and all of the servers were on SQL Server 2000. In my first SQL class at school, I started my learning with SQL Server 2012. It was a shock to work on SQL 2000 databases at the time (as I was not familiar with SQL Server 2000 yet), especially as it was my first job as a DBA.
My first project was to migrate approximately two hundred and fifty SQL 2000 SQL Servers to SQL Server 2012/2016. It took us a couple of years to successfully migrate all these Servers.
Deepthi mentions fear as a demotivating factor. In fairness, fear is a valid response to upgrades for two separate reasons: first, because the changes they release might break your existing code (something very common in the data science world); and second, because new code has new bugs that you haven’t discovered or worked around yet.
First of all, I want to thank everyone who took some time to participate in this month’s T-SQL Tuesday: Upending Preconceived Notions. There’s a fantastic line-up of contributions! I was tickled that many of the topics chosen were things I once held one viewpoint on, later learned more, and evolved that viewpoint.
Click through for the list and Andy’s thoughts on each.
Aaron Bertrand has learned a thing or two about filtered indexes:
Confession time. For filtered indexes, I have long held the following impressions:
1. That the definition of the filter had to exactly match the predicate in the query.
2. That col IN (1, 2) was not supported in a filter definition, since it’s the same as col = 1 OR col = 2, which is definitely not supported.
If I were to take a wild guess, I’d think impression 1 was probably influenced by the extreme limitations filtered indexes have with parameterized queries. Anyhow, read the whole thing and learn why both of these are wrong.
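A minimal sketch of the second point, with a hypothetical table and index names for illustration: SQL Server accepts an IN list in a filtered index predicate, while the logically equivalent OR form fails to parse.

```sql
-- Hypothetical table for illustration.
CREATE TABLE dbo.Orders
(
    OrderID int IDENTITY(1,1) PRIMARY KEY,
    StatusID int NOT NULL,
    OrderDate datetime2(0) NOT NULL
);

-- An IN list is accepted in a filter definition:
CREATE INDEX IX_Orders_ActiveStatuses
ON dbo.Orders (OrderDate)
WHERE StatusID IN (1, 2);

-- ...but the equivalent OR predicate is rejected with a syntax error:
-- CREATE INDEX IX_Orders_ActiveStatuses_OR
-- ON dbo.Orders (OrderDate)
-- WHERE StatusID = 1 OR StatusID = 2;
```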
Richard Swinbank is a rule-breaker:
When I started out writing T-SQL, I learned rules – things I should do, things to avoid – from other, more experienced developers. For example, when writing stored procedures:
– don’t use dynamic SQL
– don’t set variables with SELECT
– never use WITH RECOMPILE
There are often great reasons for these. Dynamic SQL can leave you prone to SQL injection, SELECTing into variables can have unexpected effects, and RECOMPILE can bring an OLTP system to its knees.
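The injection risk behind the dynamic SQL rule is worth seeing concretely. A sketch, with a hypothetical dbo.Users table: concatenating input into the string executes it as code, while parameterizing through sp_executesql keeps it as data.

```sql
-- Unsafe pattern: user input becomes part of the executable string.
-- DECLARE @sql nvarchar(max) =
--     N'SELECT * FROM dbo.Users WHERE Name = N''' + @name + N''';';
-- EXEC (@sql);
-- A value like N'x''; DROP TABLE dbo.Users; --' would run as a command.

-- Parameterized dynamic SQL: the same value stays a literal string.
DECLARE @name nvarchar(100) = N'x''; DROP TABLE dbo.Users; --';
DECLARE @sql nvarchar(max) =
    N'SELECT * FROM dbo.Users WHERE Name = @name;';

EXEC sys.sp_executesql
    @sql,
    N'@name nvarchar(100)',
    @name = @name;
```

This is also why the rule is a cautionary one rather than an absolute: parameterized dynamic SQL through sp_executesql avoids the injection problem that EXEC on a concatenated string invites.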
I think most rules of this sort tend to fit into something like four categories: convenience, take caution, observe in the breach, or shibboleths. Some rules, like formatting rules, are for convenience, whether that be for yourself or others. Not using dynamic SQL is a cautionary rule, for the reasons Richard mentions. The final two categories are variants on rules which exist to show that you are one of us: observation in the breach covers rules that we say we follow but only do when others are looking, while shibboleths are rules we follow regardless of whether people are looking. Formatting might fit into this as well, as might rules prohibiting (or enforcing) the use of subqueries in the SELECT clause.
Very rarely are prohibitional rules 100% applicable, though as Richard notes, rules are a very useful shorthand.
Mala Mahadevan discusses business logic:
There are many things on which I’ve changed my opinion in my journey as a data professional. For this blog post, I picked a common one – where business logic should reside. During my years as a production DBA – there was (and probably is) a hard-held belief that databases are meant for CRUD operations only and business logic belongs in the application/middle tier. This belief has its place – DBAs don’t like debugging code that is application-specific, or be tasked with why data looks a certain way or what code caused it. Business logic can also be very complex and get deeply embedded in one place (the database). I have believed in this and fought for this at most places I’ve worked as a DBA.
Over time though, my opinion has changed.
Building on top of Mala’s point, people often conflate “business logic” with “data logic.” Rules governing the shape of data are as much data logic rules as they are business logic. The simplest cases include things like nullability, data sizes and lengths, and how data elements interact. It can also include ensuring the non-existence of invalid states through constraints. There can be practical reasons why we wouldn’t implement some piece of data logic in the database itself—for example, perhaps the particular data logic rule would be prohibitively expensive or becomes too complex to maintain as triggers—but in general, I’d estimate that people and firms under-specify their data logic in the database rather than over-specifying business logic.
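A sketch of what specifying data logic in the database can look like, using hypothetical table and constraint names (and assuming a dbo.Customer table exists): nullability, lengths, referential integrity, and CHECK constraints that make invalid states unrepresentable.

```sql
CREATE TABLE dbo.Invoice
(
    InvoiceID int IDENTITY(1,1) PRIMARY KEY,
    -- Nullability and data types are the simplest data logic rules.
    CustomerID int NOT NULL
        REFERENCES dbo.Customer (CustomerID),
    InvoiceDate date NOT NULL,
    PaidDate date NULL,
    Amount decimal(19,4) NOT NULL,
    -- Constraints rule out invalid states before they ever exist:
    CONSTRAINT CK_Invoice_Amount
        CHECK (Amount >= 0),
    CONSTRAINT CK_Invoice_PaidAfterInvoiced
        CHECK (PaidDate IS NULL OR PaidDate >= InvoiceDate)
);
```

None of this cares how the application defines its business rules; it simply guarantees that no writer, from any tier, can persist a negative amount or a payment dated before its invoice.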
One last bit that I’d hit on is that it’s okay to have overlapping logic in both, especially when it comes to data validation. We shouldn’t blithely accept inputs from client-side scripts—we evaluate it server-side and ensure that nobody has fiddled with the form and tried to create an invalid state. Then, when writing to the database, it can still make sense to check again for the exact same reason.
Two overlapping posts on temp tables? Order up! First, Deb Melkin realizes tempdb is like a hotel room:
I was always taught to clean up after myself so I thought dropping my local temp tables after creating them in a proc did that. I knew that once the session was dropped, the table no longer was there so SQL Server would eventually take care of it. But then you had the cases where you ran the same stored proc multiple times in the same session so you wanted to make sure the table was gone at the end of your procedure to prevent issues.
Second, Deepthi Goguri learns the importance of getting temp tables right the first time:
I always had the opinion that changing the schema of a temp table, or adding indexes after the temp table’s creation, doesn’t affect anything, but it’s actually not a good idea. Yes, you heard that right. Temp tables altered (or indexed) after creation within a stored procedure will be marked for deletion, and so the cached temp table metadata cannot be reused. It is advisable to create the index as you create the temp table, using the inline index syntax.
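The two patterns Deepthi contrasts can be sketched as follows, with hypothetical table and index names; the inline index syntax requires SQL Server 2014 or later.

```sql
-- Post-creation changes prevent temp table metadata caching:
-- CREATE TABLE #Work (ID int NOT NULL PRIMARY KEY, Val int NOT NULL);
-- CREATE INDEX IX_Work_Val ON #Work (Val);  -- alters the table after creation

-- Declaring the index inline keeps the cached metadata reusable
-- across executions of the stored procedure:
CREATE TABLE #Work
(
    ID int NOT NULL PRIMARY KEY,
    Val int NOT NULL,
    INDEX IX_Work_Val NONCLUSTERED (Val)
);
```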
Both reference Pam Lahoud’s tempdb talk at EightKB (video), so check that out too.