Chris Hyde shows a few techniques for splitting out data into training, testing, and validation sets:
We see right away that this method failed horribly, as all of the data was placed into the same dataset. This holds true no matter how many times we execute the code, and it happens because the RAND() function is only evaluated once for the whole query, not individually for each row. To correct this we'll instead use a method that Jeff Moden taught me at a SQL Saturday in Detroit several years ago: generating a NEWID() for each row, using the CHECKSUM() function to turn it into a random number, and then the % (modulo) operator to turn that into a number between 0 and 99 inclusive.
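In code, the bucketing looks something like the following. This is a minimal sketch rather than Chris's exact query; the table dbo.SourceData, its RowID key, and the 70/20/10 split are all placeholders:

```sql
-- Sketch only: dbo.SourceData and RowID are hypothetical names.
-- ABS(CHECKSUM(NEWID())) is evaluated once per row, unlike RAND(),
-- so each row gets its own bucket between 0 and 99.
SELECT
    x.RowID,
    CASE
        WHEN x.Bucket < 70 THEN 'Training'    -- buckets 0-69, roughly 70%
        WHEN x.Bucket < 90 THEN 'Testing'     -- buckets 70-89, roughly 20%
        ELSE 'Validation'                     -- buckets 90-99, roughly 10%
    END AS DataSetName
FROM
(
    SELECT RowID, ABS(CHECKSUM(NEWID())) % 100 AS Bucket
    FROM dbo.SourceData
) AS x;
```

Because the buckets are random, the resulting sets come out close to 70/20/10 rather than exactly matching those proportions.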
I’d have to test it out, but I’d think you could modify method 3 to include a CROSS APPLY to perform one ABS(CHECKSUM(NEWID())) per row and get exact counts that way without a temp table.
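For illustration, one way that idea could work is sketched below, using the same placeholder table and split, and adding ROW_NUMBER() as my own assumption for turning the per-row random values into exactly sized sets:

```sql
-- Sketch only: dbo.SourceData, RowID, and the 70/20/10 split are hypothetical.
-- CROSS APPLY computes a single ABS(CHECKSUM(NEWID())) per row; ranking rows
-- by that value and cutting at fixed positions yields exact set sizes.
WITH Randomized AS
(
    SELECT
        s.RowID,
        ROW_NUMBER() OVER (ORDER BY r.RandomValue) AS rn,
        COUNT(*) OVER () AS total_rows
    FROM dbo.SourceData AS s
    CROSS APPLY (SELECT ABS(CHECKSUM(NEWID())) AS RandomValue) AS r
)
SELECT
    RowID,
    CASE
        WHEN rn <= total_rows * 0.70 THEN 'Training'
        WHEN rn <= total_rows * 0.90 THEN 'Testing'
        ELSE 'Validation'
    END AS DataSetName
FROM Randomized;
```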