And here is a very lengthy (~900 lines) T-SQL script that I generated from SSMS & SQL Profiler to check the dependencies of a table in SQL Server 2014. You could also wrap it in a stored procedure that takes the table and schema as parameters.
Just replace the table and schema names in the first two lines and execute the code to check the table's dependencies.
You might be able to optimize this script, but it’s nice to have a starting point.
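If ~900 lines is more than you need, the dependency DMVs that shipped in SQL Server 2008 cover many of the same cases in a few lines. This is a minimal sketch, not a replacement for the full script; the schema and table names are hypothetical placeholders:

```sql
-- Hypothetical target table; swap in your own schema and table names.
DECLARE @SchemaName sysname = N'dbo',
        @TableName  sysname = N'MyTable';

-- Objects (procedures, views, functions, ...) that reference the table.
SELECT referencing_schema_name, referencing_entity_name
FROM sys.dm_sql_referencing_entities(@SchemaName + N'.' + @TableName, 'OBJECT');

-- Entities the table itself references (e.g. in computed columns or constraints).
SELECT referenced_schema_name, referenced_entity_name
FROM sys.dm_sql_referenced_entities(@SchemaName + N'.' + @TableName, 'OBJECT');
```

Note that these DMVs only see dependencies SQL Server can parse; references built from dynamic SQL will still slip through, which is one reason a longer script can be worth keeping around.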
A quick one today, just looking for strings. I wrote an article on this, so there’s more detail there, but here’s a bit of code you can look through and see what it does.
He didn’t tag his post T-SQL Tuesday, but it certainly is apropos.
What makes this interesting is when I am using UNION to join the results. How do you place a final resultset from a UNION, EXCEPT, or INTERSECT into a temporary table using SELECT INTO? Where does the INTO portion of the query go?
This is actually a pretty simple thing to do. The INTO for the SELECT INTO goes in the first query of the set. An example UNIONing two result sets from sys.dm_exec_query_stats into a temporary table is provided in listing 1.
No subqueries are necessary here.
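For reference, here is a minimal sketch of the pattern (column choices are arbitrary, just to keep the example short):

```sql
-- The INTO clause attaches to the FIRST query of the set;
-- the UNION'd queries that follow contribute rows to the same table.
SELECT qs.sql_handle, qs.execution_count
INTO #QueryStats              -- INTO goes here, and only here
FROM sys.dm_exec_query_stats AS qs
UNION
SELECT qs.sql_handle, qs.execution_count
FROM sys.dm_exec_query_stats AS qs;

SELECT * FROM #QueryStats;
DROP TABLE #QueryStats;
```

The same placement works for EXCEPT and INTERSECT: the first SELECT owns the INTO, and its column names and types define the temporary table.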
As it turns out, when you have a character string in SQL Server that contains character 0x0000 (the null character), SQL Server really doesn't know what to do with it most of the time, especially when you're dealing with Unicode strings.
I did track down http://sqlsolace.blogspot.com/2014/07/function-dbostripunwantedcharacters.html, but I generally try to avoid calling UDFs in my queries.
Jay’s got an answer which works, so check it out. Also, I second the use of the #sqlhelp hashtag. There’s a great community watching that hashtag.
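As an alternative to a scalar UDF, one hedged sketch is an inline REPLACE under a binary collation. The collation is the important part, since under many default collations comparisons involving the null character don't behave the way you'd expect:

```sql
-- Build a Unicode string containing an embedded null character.
DECLARE @s nvarchar(100) = N'abc' + NCHAR(0) + N'def';

-- Strip it inline; the binary collation makes REPLACE actually find NCHAR(0).
SELECT REPLACE(@s COLLATE Latin1_General_BIN2, NCHAR(0), N'') AS cleaned;
```

This keeps the work in the query itself rather than paying the per-row cost of a scalar UDF call.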
Let's say you have a table called dbo.BugReports, and you need to change it to dbo.SupportIncidents. This can be quite disruptive if you have references to the original name scattered throughout stored procedures, views, functions, and application code. Modern tools like SSDT can make a refactor relatively straightforward within the database (as long as queries aren't constructed from user input and/or dynamic SQL), but for distributed applications, it can be a lot more complex.
A synonym can allow you to change the database now and worry about the application later, even in phases. You just rename the table from the old name to the new name (or use ALTER TABLE ... SWITCH and then drop the original), and then create a synonym with the old name that "points to" the new name.
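In code, the rename-plus-synonym approach is only two statements (using the article's example names):

```sql
-- Rename the table to its new name.
EXEC sp_rename N'dbo.BugReports', N'SupportIncidents';

-- Old code keeps working: the synonym resolves the old name to the new table.
CREATE SYNONYM dbo.BugReports FOR dbo.SupportIncidents;

SELECT TOP (10) * FROM dbo.BugReports;  -- actually reads dbo.SupportIncidents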
I’ve used synonyms once or twice, but they’re pretty low on my list, in part because of network effects: if I create this great set of synonyms but the next guy doesn’t know about them, it makes maintenance that much harder.
The SWITCH option of ALTER TABLE can instantly 'move' data from one table to another table. It does this by updating some metadata to say that the new table is now the owner of the data instead of the old table. This is very useful, as there is no physical data movement to cause the stresses mentioned earlier. There are a lot of rules enforced by SQL Server before it will allow this to work. Essentially, each table must have the same columns with the same data types and NULL settings, they need to be in the same filegroup, and the new table must be empty. See here for a more detailed look at these rules.
If you can take a downtime, this is pretty easy. Otherwise, making sure that the two tables are in sync until the switchover occurs is a key problem to keep in mind.
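A minimal sketch of the metadata-only switch, with hypothetical table names, looks like this:

```sql
-- The target must match the source: same columns, types, NULLability,
-- and filegroup, and it must be empty.
CREATE TABLE dbo.BugReports_New (
    BugId int NOT NULL,
    Title nvarchar(200) NOT NULL
);

-- Instant: only metadata changes hands, no rows are copied.
ALTER TABLE dbo.BugReports SWITCH TO dbo.BugReports_New;

DROP TABLE dbo.BugReports;  -- the original table is now empty
```

The switch itself is near-instant, but it does take a brief schema modification lock on both tables, which is worth remembering on a busy system.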
Medians as a concept are simple enough. If you have a large number of values, like a range of statistical values, you want to pick the middle one. The median, as opposed to the average, is useful for a number of reasons, one of them being that it reduces the effect of so-called outlier values.
The fact that SQL Server doesn’t have a fast, built-in median function surprises me, to be honest. The best alternative I’ve found was a CLR function in SQL#.
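To be fair, SQL Server 2012 and later do have a built-in way to compute a median, just not a fast one: PERCENTILE_CONT at the 50th percentile. A minimal sketch, with a hypothetical table and column:

```sql
-- Median via PERCENTILE_CONT (built in since SQL Server 2012, but it is a
-- window function, so DISTINCT is needed to collapse the per-row results).
SELECT DISTINCT
       PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY val) OVER () AS median_val
FROM dbo.Measurements;  -- hypothetical table/column
```

On large tables this tends to perform poorly compared to a purpose-built approach, which is exactly the gap the CLR function in SQL# fills.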
So here is the thing: when you alter a column, you respecify all of its attributes, not just the one you meant to change. That means if you don't specify a precision when you could, you get the default. That's not a common problem, though; usually what you're changing is the precision (or possibly the data type). What is a common mistake is not specifying the nullability.
When modifying DDL, make sure that you keep it consistent and complete.
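Here is a minimal sketch of the trap, with a hypothetical table. The ALTER COLUMN that omits NOT NULL doesn't fail; it silently makes the column nullable:

```sql
CREATE TABLE dbo.Demo (Amount decimal(10, 2) NOT NULL);

-- Intent: widen the precision. Bug: NOT NULL was omitted, so under the
-- default ANSI_NULL_DFLT settings the column quietly becomes nullable.
ALTER TABLE dbo.Demo ALTER COLUMN Amount decimal(18, 2);

-- Correct: restate the complete column definition.
ALTER TABLE dbo.Demo ALTER COLUMN Amount decimal(18, 2) NOT NULL;
```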
There are many ways of doing it in SQL. (and Rosetta Code is a good place to view solutions in various other languages). I believe that Peter Larsson holds the record for the fastest calculation of the Luhn test for a sixteen-digit credit card, with this code. As it stands, it isn’t a general solution, but it can be modified for different lengths of bank card.
Phil has two interesting T-SQL functions in the code and wants to find more.
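For readers unfamiliar with the algorithm itself: the Luhn test doubles every second digit from the right, sums the digits of the results, and checks the total modulo 10. A straightforward (decidedly not record-setting) set-based sketch:

```sql
DECLARE @card varchar(19) = '4111111111111111';  -- well-known test number

SELECT CASE WHEN SUM(doubled / 10 + doubled % 10) % 10 = 0
            THEN 'valid' ELSE 'invalid' END AS luhn_result
FROM (
    -- Double every second digit counting from the right; doubled/10 + doubled%10
    -- sums the digits of each (at most two-digit) result.
    SELECT doubled =
        CAST(SUBSTRING(@card, n, 1) AS int)
        * CASE WHEN (LEN(@card) - n) % 2 = 1 THEN 2 ELSE 1 END
    FROM (VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10),
                 (11),(12),(13),(14),(15),(16),(17),(18),(19)) AS t(n)
    WHERE n <= LEN(@card)
) AS d;
```

Unlike Peter Larsson's fixed sixteen-digit version, this handles any card length up to 19 digits, at the cost of speed.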
However, I wanted to add some tests. Does this really work? What if I don't have a backslash? I thought the best way to do this was with a few tSQLt tests, which I quickly built. The entire process took 5-10 minutes, which isn't a lot longer than if I had been running ad hoc tests myself with a variety of strings.
The advantage of tests is that if I come up with a new case, or another potential bug, I copy the test over, change the string and I have a new test, plus all the regressions. I’m not depending on my memory to run the test cases.
I first put the code in a function, which makes it easier to test.
tSQLt is a great tool for database unit testing.
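To give a flavor of what one of those tests looks like, here is a minimal tSQLt sketch. The function under test, dbo.GetFileName (returning everything after the last backslash), is a hypothetical stand-in for the function from the post:

```sql
EXEC tSQLt.NewTestClass 'StringTests';
GO
-- The "no backslash" edge case mentioned above: input should come back unchanged.
CREATE PROCEDURE StringTests.[test returns input when no backslash]
AS
BEGIN
    DECLARE @actual nvarchar(260) = dbo.GetFileName(N'report.txt');
    EXEC tSQLt.AssertEquals @Expected = N'report.txt', @Actual = @actual;
END;
GO
EXEC tSQLt.Run 'StringTests';
```

Each new edge case is another small procedure in the test class, and tSQLt.Run re-executes all of them, which is exactly the regression safety net described above.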