I mean, I’m not that old, but spinning thru a few decades is still slower than just typing 4 digits on my keyboard — especially if your input-box is smart enough to flip my keyboard into “numeric only” mode.
Another seemingly popular date-picker UX is the “calendar control”. Oh gawd. It’s horrible! Clicking thru pages and pages of months to find and click (tap?) on an itty bitty day box, only to realize “Oh crap, that was the wrong year… ok let me go back.. click, click, tap..” ad nauseam.
Food for thought.
In SQL Server, datetime2’s format is defined as follows:

YYYY-MM-DD hh:mm:ss[.fractional seconds]

The JSON (ISO 8601) format looks like this:

YYYY-MM-DDTHH:mm:ss.sssZ
Honestly, they look pretty similar. However, there are a few key differences:
JSON separates the date and time portion of the string with the letter T.
The Z is optional and indicates that the datetime is in UTC (if the Z is omitted, a time zone offset can be specified instead, e.g. -05:00 for Eastern Standard Time).
Read on for a few scripts handling datetime conversions between these types.
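As a quick sketch of what such a conversion can look like (not the linked scripts; the sample values are made up):

-- datetime2 -> JSON-style ISO 8601: style 126 emits the T separator;
-- append Z because SYSUTCDATETIME() already returns UTC.
SELECT CONVERT(varchar(30), SYSUTCDATETIME(), 126) + 'Z' AS JsonDateTime;

-- JSON-style ISO 8601 -> datetime2: datetimeoffset understands the trailing Z,
-- and the cast to datetime2 keeps the UTC clock value.
SELECT CAST(CONVERT(datetimeoffset, '2016-09-12T17:10:00.000Z') AS datetime2(3)) AS SqlDateTime;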
There are a lot of great examples out there on how to build your own custom Time Intelligence into Analysis Services (MD). Just have a look at this, this, this, this and this. All good sources for solid Time Intelligence in SSAS.
One thing they have in common, though, is that they all assume there are, and always will be, 52 weeks in a year. The data set I am currently working with is built on the ISO 8601 standard. In short, this means that a 53rd full week recurs periodically, as opposed to only 52 in the Gregorian version, which is defined by: 1 Gregorian calendar year = 52 weeks + 1 day (2 days in a leap year).
The 53rd week occurs approximately every five to six years, though not on a fixed cadence. The last few times we saw 53 weeks in a year were in 1995, 2000, 2006, 2012 and 2015. The next time will be in 2020. This gives you enough time to either forget about the hacks and hard-coded fixes in place to mitigate the issue OR bring your code into a good state, ready for the next time.
Dates and currency are hard problems.
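If you want a quick check of which years pick up a 53rd week under SQL Server’s own ISO_WEEK rules (your fiscal calendar’s numbering may differ), here is a minimal sketch: December 28th always falls in the last ISO week of its year, so testing that date is enough.

-- List years whose last ISO week is week 53.
;WITH Years AS (
    SELECT 1995 AS Yr
    UNION ALL
    SELECT Yr + 1 FROM Years WHERE Yr < 2030
)
SELECT Yr
FROM Years
WHERE DATEPART(ISO_WEEK, DATEFROMPARTS(Yr, 12, 28)) = 53;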
Matan Yungman and Guy Glantser take a hack at DATEDIFF versus DATEADD for date calculations. First up is Matan:
Pretty simple, right?
Well, it is, and since this problem is pretty common, I used this solution in many performance tuning sessions I performed over the years.
There’s a slight problem though: This solution isn’t 100% accurate.
When looking carefully at the results, I found that the first query returns 5859 rows and the second query returns 5988 rows. Where does this difference come from?
I tested both queries on a sample table, which has millions of rows, and only around 500 rows in the last 90 days. The first query produced a table scan, while the second query produced an index seek. Of course, the execution time of the second query was much lower than the first query.
Both queries were supposed to return the orders in the last 90 days, but the first query returned 523 rows, and the second query returned 497 rows. So what’s going on?
The answer has to do with the way DATEDIFF works. This function returns the number of date parts (days, years, seconds, etc.) between two date & time values. It does that by first rounding down each one of the date & time values to the nearest date part value, and then counting the number of date parts between them.
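To make the comparison concrete, the two shapes being contrasted look roughly like this (a sketch against a hypothetical dbo.Orders table, not the exact queries from the post), along with a small illustration of the boundary-counting behavior described above:

-- Pattern 1: DATEDIFF applied to the column. Not sargable, so the optimizer
-- tends to scan, and because DATEDIFF counts day boundaries rather than
-- elapsed time, the qualifying rows differ slightly from pattern 2.
SELECT COUNT(*)
FROM dbo.Orders
WHERE DATEDIFF(DAY, OrderDate, GETDATE()) <= 90;

-- Pattern 2: DATEADD applied to the constant. Sargable, so an index on
-- OrderDate can be used for a seek.
SELECT COUNT(*)
FROM dbo.Orders
WHERE OrderDate >= DATEADD(DAY, -90, GETDATE());

-- DATEDIFF counts boundary crossings, not elapsed time:
SELECT DATEDIFF(DAY, '2016-09-12 23:59', '2016-09-13 00:01') AS TwoMinutesApart,   -- 1
       DATEDIFF(DAY, '2016-09-12 00:01', '2016-09-12 23:59') AS AlmostADayApart;   -- 0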
They both start from the same base problem, but end up with slightly different formulations of a solution.
Correlated Datetime Columns works. Clearly it’s not something you’re going to enable on all your databases. Probably most of your databases don’t have clustered indexes on datetime columns, let alone enough tables with correlation between the data stored in them. However, when you do have that type of data correlation, enabling Correlated Datetime Columns and ensuring you have a clustered index on the datetime column is a viable tuning mechanism. Further, this is a mechanism that has been around since 2005. Just so you know, I did all my testing in SQL Server 2016, so this is something that anyone in the right situation can take advantage of.

Just remember that TANSTAAFL always applies. Maintaining the statistics needed for Correlated Datetime Columns is done through materialized views that are automatically created through the optimization process. You can see the views in SSMS and run queries against them. You’ll need to take this into account during your statistics maintenance. However, if Correlated Datetime Columns is something you need, this is really going to help with this fairly narrow aspect of query tuning.
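For reference, turning the feature on is a single database-level option, and sys.databases shows whether it is set (the database name here is hypothetical):

-- Enable Correlated Datetime Columns (available since SQL Server 2005).
ALTER DATABASE SalesDB SET DATE_CORRELATION_OPTIMIZATION ON;

-- Check the current setting.
SELECT name, is_date_correlation_on
FROM sys.databases
WHERE name = N'SalesDB';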
I don’t know that I’ll ever do this, but it’s worth filing away just in case.
SQL Server 2016 and Azure SQL Database V12 use the raw datetime internal value without rounding during conversion to another temporal type. The value is rounded only once during conversion, to the target type precision. The end result will be the same as before SQL Server 2016 when the target type precision is 3 or less. However, the converted value will be different when the target type precision is greater than 3 and the internal time unit interval is not evenly divisible by 3 (i.e. the rounded source datetime millisecond value ends in 3 or 7). Note the non-zero microseconds and nanoseconds in the script results below, and that rounding is based on the target type precision rather than the source.
This is a good thing on net, but be aware of this if you try to compare datetime versus datetime2 values.
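A quick way to see the change is to convert a datetime value whose milliseconds end in 3 or 7 (a sketch; the exact output depends on which version you run it against):

-- datetime stores time in 1/300-second ticks, so .003 and .007 are approximations.
DECLARE @dt datetime = '2016-01-01 00:00:00.007';

-- Pre-2016 behavior: rounded to milliseconds first -> 0.0070000
-- SQL Server 2016+:  converted from the raw tick value -> 0.0066667
SELECT CAST(@dt AS datetime2(7)) AS ConvertedValue;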
Date time rounding (with ceiling_date()) now supports unit multipliers, like “3 days” or “2 months”:
ceiling_date(ymd_hms("2016-09-12 17:10:00"), unit = "5 minutes")
#> "2016-09-12 17:10:00 UTC"
If you handle date and time data in R, Lubridate is a tremendous asset.
Most data warehouses and data marts require a date dimension or calendar table. Those of us that have been building data warehouses in SQL Server for a while have collected our favorite scripts to build out a date dimension. For a standard date dimension, I am a fan of Aaron Bertrand’s script posted on MSSQLTips.com. But the current version (as of Aug 8, 2016) of Azure SQL Data Warehouse doesn’t support computed columns, which are used in Aaron’s script.
Click through for the script.
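As a trivial illustration of the constraint (not the linked script; table and column names are made up), a computed column such as MonthName AS (DATENAME(MONTH, [Date])) isn’t available in Azure SQL Data Warehouse, so the value has to be materialized when the rows are loaded:

-- Store explicitly what a computed column would have derived.
CREATE TABLE dbo.DimDate
(
    [Date]      date       NOT NULL,
    [Year]      smallint   NOT NULL,
    [MonthName] varchar(9) NOT NULL
);

INSERT INTO dbo.DimDate ([Date], [Year], [MonthName])
SELECT CAST('2016-08-08' AS date),
       DATEPART(YEAR, '2016-08-08'),
       DATENAME(MONTH, '2016-08-08');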
But despite the long-windedness of it, I love it, because at no point did I need to figure out that Adelaide was in +10:30, or that Eastern was -5:00 – I simply needed to know the time zone by name. Figuring out whether daylight saving should apply or not was handled for me.
It works by using the Windows registry, which has all that information in it, but sadly, it’s not perfect when looking back in time. Australia changed the dates in 2008, and the US changed its dates in 2005 – both countries saving daylight for more of the year. AT TIME ZONE understands this. But it doesn’t seem to appreciate that in Australia in the year 2000, thanks to the Sydney Olympics, Australia started daylight saving about two months earlier. This is a little frustrating, but it’s not SQL’s fault – we need to blame Windows for that. I guess the Windows registry doesn’t remember the hotfix that went around that year. (Note to self: I might need to ask someone in the Windows team to fix that…)
Like most companies dealing with multiple time zones, we ended up building a table and function to translate. This is a nice, built-in way of doing something very similar.
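The built-in version looks something like this (a small sketch; the names are the Windows time zone identifiers AT TIME ZONE expects):

-- Treat a naive datetime2 as UTC, then convert it to a named zone.
-- Whether daylight saving applies on that date is worked out for you.
SELECT CONVERT(datetime2(0), '2016-09-12 17:10:00')
           AT TIME ZONE 'UTC'
           AT TIME ZONE 'Cen. Australia Standard Time' AS AdelaideTime,
       CONVERT(datetime2(0), '2016-09-12 17:10:00')
           AT TIME ZONE 'UTC'
           AT TIME ZONE 'Eastern Standard Time'        AS USEasternTime;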
The point of these functions is to make it easier to construct a date, or datetime, or datetime2 variable, when you know the individual parts.
DATEFROMPARTS() takes three arguments: year, month, and day, and returns a date value. So, for example, SELECT DATEFROMPARTS(2016,7,6); would yield the date 2016-07-06.
Read on for a comparison of this function against about a dozen other methods of building dates from components.
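For reference, the related *FROMPARTS functions follow the same idea; a quick sketch of a few of them:

-- datetime2: year, month, day, hour, minute, seconds, fractions, precision
SELECT DATETIME2FROMPARTS(2016, 7, 6, 17, 10, 0, 0, 0) AS dt2;

-- time: hour, minute, seconds, fractions, precision
SELECT TIMEFROMPARTS(17, 10, 0, 0, 0) AS t;

-- datetimeoffset adds hour and minute offsets before the precision argument
SELECT DATETIMEOFFSETFROMPARTS(2016, 7, 6, 17, 10, 0, 0, -5, 0, 0) AS dto;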