For the data file, the impact can be illustrated in the following chain of events:
1. A new 1MB data file is created that contains no information (i.e. a 1MB data file containing 0MB of data).
2. Data is written to the data file until it reaches the file size (i.e. the 1MB data file now contains 1MB of data).
3. SQL Server suspends normal operations to the database while the data file is grown by 1MB (i.e. the data file is now 2MB and contains 1MB of data). If Instant File Initialization (IFI) is enabled, the file is simply expanded and database operations resume. If IFI is not enabled, the expanded portion of the data file must be zeroed out before database operations resume, resulting in an additional delay.
4. Once the data file has been grown successfully, the server resumes normal database processing. At this point the server loops back to Step 2.
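Because the zeroing delay in Step 3 only occurs when IFI is disabled, it is worth checking the server's current setting. As a sketch, on SQL Server 2016 SP1 and later the status is exposed through the `sys.dm_server_services` DMV:

```sql
-- Check whether Instant File Initialization is enabled for the
-- database engine service ('Y' = enabled, 'N' = disabled).
-- The instant_file_initialization_enabled column is available in
-- SQL Server 2016 SP1 and later.
SELECT servicename,
       instant_file_initialization_enabled
FROM sys.dm_server_services;
```

On older versions, IFI is in effect when the SQL Server service account holds the "Perform volume maintenance tasks" Windows privilege.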
The server will continue this run-pause-run-pause cycle until the data file reaches its MAXSIZE or the disk becomes full. If the disk the data file resides on also holds other files (e.g. the C: drive, or a disk shared by several databases), other disk writes will occur between the data file growth events. This can cause the expansion segments to be allocated non-contiguously, increasing file fragmentation and further degrading database performance.
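One way to reduce the number of growth pauses is to pre-size the data file and use a larger fixed growth increment, so the server grows the file rarely and in bigger, more contiguous chunks. A minimal sketch, assuming a hypothetical database `SalesDB` with a logical data file named `SalesDB_Data` (adjust names and sizes to your environment):

```sql
-- Pre-size the data file to the expected working size and set a
-- fixed 512MB growth increment instead of tiny 1MB autogrow steps.
-- SalesDB and SalesDB_Data are placeholder names for illustration.
ALTER DATABASE SalesDB
MODIFY FILE (NAME = SalesDB_Data, SIZE = 10240MB, FILEGROWTH = 512MB);
```

A fixed-size increment is generally preferable to percentage growth, since percentage growth produces ever-larger (and ever-slower) expansion events as the file grows.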
This is all to answer the question, “What’s the problem with missing a few log backups?”