Randolph West explains why the default value of 5 for cost threshold for parallelism is too low:
Unfortunately, the default setting of 5 means that queries are likely to hit that threshold more often than not on modern hardware, and the optimizer is forced to look at parallel plans unnecessarily. A side effect of this is that queries running in parallel will block queries running on a single thread, so short-running queries will be delayed by long-running queries needing the same resources.
Therefore, if I am setting up a SQL Server instance from scratch, I will set this value to 50 by default, and monitor the performance counters. Once a query cost exceeds 50, then it can use all the cores up to the limit defined by the max degree of parallelism.
50 is a reasonable first stab at a default. I’ll be honest, though: I’m not sure I know exactly where to set the cost threshold for parallelism. I agree that 5 is too low and 50 is better, but I don’t have a great feel for when it should move up or down short of “everything’s going parallel in your database, so increase the value” or “nothing’s ever going parallel, so maybe decrease it a skosh.”
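If you do take 50 as a starting point, the change itself is quick. Here's a minimal sketch using sp_configure; the 50 is just the initial value discussed above, and the right number for your workload may well differ:

```sql
-- Cost threshold for parallelism is an advanced option, so expose
-- advanced options to sp_configure first.
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;

-- Set the instance-wide cost threshold for parallelism to 50
-- (the starting value discussed above; adjust after monitoring).
EXEC sys.sp_configure N'cost threshold for parallelism', 50;
RECONFIGURE;

-- Confirm the value currently in use.
SELECT name, value_in_use
FROM sys.configurations
WHERE name = N'cost threshold for parallelism';
```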