There may come a time when you need to generate a list of SQL logins and the last time their password was changed. By calling LOGINPROPERTY on each name in sys.server_principals, we can generate such a list.
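As a quick sketch, the core query looks something like this; LOGINPROPERTY with the 'PasswordLastSetTime' property returns when the password was last set, and type = 'S' restricts sys.server_principals to SQL logins:

```sql
-- List SQL logins with the last time each password was set:
SELECT
    name,
    LOGINPROPERTY(name, 'PasswordLastSetTime') AS PasswordLastSetTime
FROM sys.server_principals
WHERE type = 'S'  -- 'S' = SQL login
ORDER BY name;
```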
Note: This will only work with SQL logins.
If you want (or need) to know that passwords are being updated but can’t turn on password policies, this at least answers the initial question.
A stored procedure with a single @ProductID parameter would allow us to add WHERE ProductID = @ProductID to both derived tables, which would make the query really fast. In my testing, it ran in under 100ms.

The problem is that this would require numerous code changes to the existing system. Many of the queries that rely on the view also use additional WHERE conditions, meaning that a stored procedure is impractical.
Enter the table-valued user-defined function, or TVF.
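Here is a minimal sketch of the shape, with hypothetical object names; the real function would wrap the view’s query, with the @ProductID filter pushed into both derived tables:

```sql
-- An inline TVF acts like a parameterized view (names here are made up):
CREATE FUNCTION dbo.GetProductSales (@ProductID INT)
RETURNS TABLE
AS
RETURN
(
    SELECT s.ProductID, s.SaleDate, s.Quantity
    FROM dbo.Sales AS s
    WHERE s.ProductID = @ProductID
);
GO

-- Callers can still layer on their own WHERE conditions:
SELECT *
FROM dbo.GetProductSales(720)
WHERE SaleDate >= '20150101';
```

Because an inline TVF is expanded into the calling query, the optimizer sees the function body and the caller’s extra WHERE conditions as one query.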
I’m glad that the TVF worked out for him, but personally, I’m not a big fan of functions. In this case, though, it does the trick and leaves everyone happy.
The initial email didn’t have a lot of details, so I started asking questions to understand what version was being installed, the environment configuration, etc. It turns out this was a two-node Windows Server Failover Cluster (WSFC) with multiple SQL Server 2012 instances installed, and one of the instances was still running on the node this person was trying to patch. To be clear, the two nodes were SRV1 and SRV2, and the instances were PROD-A and PROD-B, running on SRV1, and PROD-C, running on SRV2. This person was trying to install the cumulative update on SRV2.
The behavior Erin describes is a little bit crazy, but at least there’s a good explanation and way to solve the issue.
MIND YOUR COMPATIBILITY LEVEL
When going to 2014 (as of today, 2016’s RTM hasn’t been announced yet), you’ll have to decide whether or not the new cardinality estimator suits you. There’s no cut-and-dried answer; you’ll have to test it on your workload. If you’d like some of the more modern SQL features added to your arsenal, you can bump yourself up to the 2012 compatibility level (110) to get the majority of them.
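For reference, checking and setting the level is a one-liner each (the database name is a placeholder):

```sql
-- Check the current compatibility level:
SELECT name, compatibility_level
FROM sys.databases
WHERE name = N'YourDatabase';

-- On a 2014 instance, level 110 (the 2012 level) keeps the legacy
-- cardinality estimator; level 120 opts in to the new one.
ALTER DATABASE YourDatabase SET COMPATIBILITY_LEVEL = 110;
```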
The interesting survey would be, among people who still have SQL 2005 installations, how many will move as a result of Microsoft declaring end-of-life for 2005. My expectation is a fairly low percentage—by this point, I figure at least a strong minority of 2005 instances are around for regulatory or compliance reasons (e.g., some piece of regulated software was certified only for 2005).
You can think of page compression as doing data deduplication within a page. If there is some value repeated in multiple spots on a page, then page compression can store the repetitive value only once, and save some space.
Page compression is actually a process that combines three different compression algorithms into one bigger algorithm, applying them in this order:
1) Row compression
2) Prefix compression
3) Dictionary compression
Page compression is my go-to compression option, typically. There are some cases in which it doesn’t work well, so check beforehand (start with sp_estimate_data_compression_savings), but I’ve had good luck with page compression.
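For reference, the estimation procedure is real; the table name below is a placeholder:

```sql
-- Estimate how much space PAGE compression would save on a given table:
EXEC sp_estimate_data_compression_savings
    @schema_name = N'dbo',
    @object_name = N'BigTable',
    @index_id = NULL,
    @partition_number = NULL,
    @data_compression = N'PAGE';

-- If the numbers look good, rebuild the table with page compression:
ALTER TABLE dbo.BigTable REBUILD WITH (DATA_COMPRESSION = PAGE);
```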
I think one of the biggest challenges I face as I attempt to warehouse data that originates as a SharePoint list is the handful of miscellaneous descriptive fields, such as approval status, request status, or something similar. Typically, these fields are set up as Choice fields in the SharePoint list, so they have a known range of values, but it’s still a pain to have to build a dimension for each one.
Enter the junk dimension. Ever since I learned about this concept, it has made my life so much easier. What the junk dimension does is cross join the different fields, creating a row for every possible combination of values.
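A minimal sketch of the idea, with hypothetical attribute names and values:

```sql
-- Cross join every value of each low-cardinality attribute to build
-- one row per combination (names and values here are made up):
SELECT
    ROW_NUMBER() OVER (ORDER BY a.ApprovalStatus, r.RequestStatus) AS JunkKey,
    a.ApprovalStatus,
    r.RequestStatus
INTO dbo.DimRequestJunk
FROM (VALUES (N'Pending'), (N'Approved'), (N'Rejected')) AS a(ApprovalStatus)
CROSS JOIN (VALUES (N'Open'), (N'In Progress'), (N'Closed')) AS r(RequestStatus);
```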
Junk dimensions are nice for those low-cardinality attributes which are important but don’t really fit anywhere else. The important thing to remember about a junk dimension is that you don’t want it to be too large: if you have 5 attributes, each of which has 8 possible values, you have 8^5 (32,768) rows. That’s not so bad, but make it 10 attributes and now your table has 8^10 (1,073,741,824) rows, and that’s a lot of rows for a single dimension. If you find yourself in that scenario, you might want to split it into two junk dimensions (bringing you back to two dimensions of 32K rows each), or review your design to see whether all those attributes are necessary, or whether your “junk” dimension is hiding a real dimension.
Now, you have a meaningful list of wait statistics that will tell you exactly why, if not where, your server is running slow. Unfortunately, these waits still need to be interpreted. If you read further on Paul’s blog, you’ll see he has a number of waits and their causes documented. That’s your best bet to start understanding what’s happening on your system (although, I hear, Paul might be creating a more complete database of wait stats. I’ll update this blog post should that become available).
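For reference, the shape of such a query is roughly this; sys.dm_os_wait_stats is the real DMV, but the exclusion list below is heavily abbreviated (Paul’s script filters out far more benign waits):

```sql
-- Top waits by total wait time, with a (very short) benign-wait filter:
SELECT TOP (10)
    wait_type,
    wait_time_ms / 1000.0 AS wait_time_s,
    waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'LAZYWRITER_SLEEP', N'SLEEP_TASK',
    N'XE_TIMER_EVENT', N'REQUEST_FOR_DEADLOCK_SEARCH', N'BROKER_TASK_STOP')
ORDER BY wait_time_ms DESC;
```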
Wait stats are fantastic tools for figuring out your server resource limitations given the server’s historic query workload. They’re the first place to look when experiencing server pains.
IT professionals (and amateurs), it’s time we had a chat. It’s time to stop dragging and dropping (or copying and pasting) files between servers and/or workstations.
It’s clumsy. It’s childish. It uses memory on the server.
Oh, and there’s a really easy file-copying tool built into Windows: Robocopy.
The syntax is pretty easy and robocopy handles small files well. Check out Nic Cain’s comment, though, if you’re going to copy large files in production.
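A typical invocation looks something like this (the paths are placeholders; the switches are standard Robocopy options for recursing into subdirectories, restartable copies, bounded retries, and logging):

```
robocopy \\SRV1\SourceShare D:\Destination /E /Z /R:2 /W:5 /LOG:C:\Logs\copy.log
```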
This month I challenged the blogging community to share their own creations in Power BI. We got a ton of great entries; thank you to everyone who participated! My overarching goal for this month’s topic was to get folks who may not normally play in the BI space to use this fantastic solution, and maybe get some ideas flowing on how they might be able to apply it in their everyday work.
The part I like most about T-SQL Tuesday is that it introduces you to a whole new set of bloggers and a whole new set of perspectives on any particular topic.
You only want to use these in specific scenarios. Basically, from various sources, I’ve always heard that these explicit query hints should typically be reserved for edge cases or specific scenarios that are tested, documented, and known to scale appropriately. Any time you introduce hints, you are taking control from the query optimizer and indicating that you know best…. This might be the case, but test, test, test!
OPTION(RECOMPILE) is like dynamite: use it to blow up big problems, but understand beforehand what’s going to happen.
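As a minimal illustration (table and column names are hypothetical), the hint goes at the end of the statement; in exchange for compiling a fresh plan on every execution, the optimizer can build that plan around the actual runtime value:

```sql
DECLARE @CustomerID INT = 42;

-- With OPTION (RECOMPILE), the plan is compiled per execution and the
-- optimizer can sniff the runtime value of @CustomerID:
SELECT OrderID, OrderDate, TotalDue
FROM dbo.Orders
WHERE CustomerID = @CustomerID
OPTION (RECOMPILE);
```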