Randolph West explains why you might start at MIN(INT) for an identity integer column:
A quick(er) post this week, in response to Greg Low’s blog post from a few weeks ago titled “Don’t start identity columns or sequences with large negative values.”
Greg writes that you shouldn’t use large negative values in a table, because… they’re hard to read, I guess? And also they don’t compress well.
I disagree … to a degree. Dang, words are hard. Anyway, when I design a table I create what’s called a surrogate key as my primary key: a value whose sole purpose is to uniquely identify a row so that it can participate efficiently in relational activities like joins and foreign keys. In other words, the identity column is not for me, it’s for the database engine. I don’t need to worry about what value a row has. I choose the data type for that identity column based on the estimated number of rows, not on whether I can memorize that a [StatusID] of 5 means something. Magic numbers are bad, mmmkay?
I don’t mind using negative values, especially for things like queue tables where the rows are ephemeral. The identity values may be harder to read, but as Randolph points out, in those types of cases, you aren’t really reading the values anyhow.
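To make the idea concrete, here’s a minimal T-SQL sketch of the pattern: an identity column seeded at MIN(INT), which gives the table the full four-billion-plus range of a 4-byte int before anyone has to reach for bigint. The table and column names here are hypothetical, not from either post.

```sql
-- Hypothetical queue table whose surrogate key starts at MIN(INT).
-- Seeding at -2,147,483,648 roughly doubles the usable range of a
-- 4-byte int compared with the default seed of 1.
CREATE TABLE dbo.MessageQueue
(
    MessageID  int IDENTITY(-2147483648, 1) NOT NULL
        CONSTRAINT PK_MessageQueue PRIMARY KEY CLUSTERED,
    Payload    nvarchar(max) NOT NULL,
    EnqueuedAt datetime2(7)  NOT NULL
        CONSTRAINT DF_MessageQueue_EnqueuedAt DEFAULT (SYSUTCDATETIME())
);

-- The equivalent with a sequence, since Greg's post covers both:
CREATE SEQUENCE dbo.MessageQueueSeq
    AS int
    START WITH -2147483648
    INCREMENT BY 1;
```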