Updating Large Tables In SQL Server And Oracle

Kevin Feasel



Jana Sattainathan has a post on how he was able to move and update billions of rows, using both Oracle and SQL Server as examples:

The key thing to remember with SQL Server is to convert to a non-integer value by using a “decimal”, as shown in the above example with “10.”. This is the same as saying “10.0”. Without the “.”, integer rounding will produce uneven splits. That is not the result you intend if you want accurate results.

To show you the difference, I have included the SQL and results of two queries, one that uses “.” and one that does not, with “.” being the only difference:

It’s a good article, and definitely an important thing to think about when you have large tables.
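The integer-versus-decimal division pitfall the quote describes is easy to see outside SQL as well. Below is a minimal sketch (the row counts and chunk count are hypothetical, not taken from the original post) showing how bucketing rows with an integer chunk size produces a stray undersized chunk, while a decimal chunk size (the “10.” trick) keeps the splits even:

```python
import math
from collections import Counter

# Hypothetical numbers for illustration: split 105 rows into 10 chunks.
total_rows, chunks = 105, 10

int_size = total_rows // chunks  # 10   -- integer division truncates 10.5
dec_size = total_rows / chunks   # 10.5 -- the "10." style decimal divisor

# Assign each row a bucket the way CEILING(row_num / size) would in SQL.
int_buckets = Counter(math.ceil(r / int_size) for r in range(1, total_rows + 1))
dec_buckets = Counter(math.ceil(r / dec_size) for r in range(1, total_rows + 1))

print(len(int_buckets), sorted(int_buckets.values()))
# 11 buckets: ten chunks of 10 rows plus a stray chunk of 5
print(len(dec_buckets), sorted(dec_buckets.values()))
# 10 buckets: evenly balanced chunks of 10 and 11 rows
```

The same effect holds in T-SQL: dividing by an integer literal performs integer arithmetic, so the truncated chunk size accumulates leftover rows into an extra, uneven split.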

