Disk Fragmentation and Server Performance Loss

Disks were designed as efficient long-term data storage devices, and they have lived up to that design goal well. The first disks were large, clunky, fragile, and had very limited storage capacity. Over time, disks have evolved significantly. Performance has also come a long way, with today's disk throughput orders of magnitude higher than even a decade ago. Overall, the disk storage market has boomed and products are keeping up with demand.

But does this mean that disks are perfect? Unfortunately, no, it doesn’t…

There are many issues to consider when operating disks, such as disk lifetime, disk throughput, disk redundancy, and the infamous problem of disk fragmentation.

In short, fragmentation occurs when data or free space on a disk drive is non-contiguous. Disk fragmentation has a variety of causes and, although most modern systems attempt to prevent it, it is inevitable.
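To make the idea concrete, here is a minimal, purely illustrative Python sketch of a toy block allocator. The 16-block "disk", the file names, and the first-fit policy are all invented for the example; real file systems are far more sophisticated, but the mechanism is the same: deleting a file leaves a hole, and the next file that does not fit in any single hole gets split across several fragments.

```python
# Toy simulation of how deletes and new writes fragment a volume.
# The "disk" is a list of 16 block labels; None marks free space.

DISK_BLOCKS = 16
disk = [None] * DISK_BLOCKS


def allocate(name, size):
    """First-fit allocation: fill the first free blocks found, even if
    they are scattered -- which is exactly how a file gets fragmented."""
    free = [i for i, blk in enumerate(disk) if blk is None]
    if len(free) < size:
        raise RuntimeError("disk full")
    used = free[:size]
    for i in used:
        disk[i] = name
    # A new fragment starts wherever the block numbers stop being consecutive.
    fragments = 1 + sum(1 for a, b in zip(used, used[1:]) if b != a + 1)
    print(f"wrote {name}: {size} blocks in {fragments} fragment(s)")


def delete(name):
    """Free every block belonging to `name`, leaving a hole behind."""
    for i, blk in enumerate(disk):
        if blk == name:
            disk[i] = None


allocate("A", 5)   # blocks 0-4
allocate("B", 3)   # blocks 5-7
allocate("C", 5)   # blocks 8-12
delete("A")        # a 5-block hole opens at the start of the disk
allocate("D", 7)   # D no longer fits in one hole: it lands in 2 fragments
print(disk)
```

Running the sketch shows file D written in two fragments even though the disk has plenty of total free space, which is the everyday pattern on a busy file server.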

Several problems result from disk fragmentation. The most commonly understood, performance degradation, is also the most likely to occur.

Let’s take a look at file servers.

During normal use, a file server will have a number of files stored on its local disk. Users delete, rewrite, and add files to the file share, operations that can happen hundreds or thousands of times per minute on a busy file server. Each of these operations creates some fragmentation, and little by little fragmentation builds up to unacceptable levels.

As you might imagine, the problem gets even worse as free disk space shrinks. Less free space means the system must use whatever free-space fragments remain, no matter where they are. This suboptimal condition can result in a thoroughly fragmented system.

The core function of a file server is to read and write files and serve their data over the network. Therefore, any slowdown in reading or writing files has a negative impact on the system's performance. Fragmentation can also shorten the life of disk drives, because the disk heads must work harder to read and write fragmented files.

Unfortunately, with the constant I/O on busy servers, fragmentation of files and free space substantially lengthens backup times and anti-virus scans and degrades system performance. The effects of this performance loss include fewer customers served per hour, less productive employees, and more heavily burdened systems. This loss of efficiency can severely impact the business if it continues for long.

So what is the solution? Very simple: systems need to be defragmented regularly enough that fragmentation never builds up.

Most servers run 24/7, so scheduling even essential routine maintenance tasks such as backups can be problematic. As a result, some IT administrators have no choice but to put off defragmentation until performance becomes intolerable, at which point the system must be brought down for a manual or scheduled defragmentation to be run.
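For administrators who want to gauge the problem before committing to a maintenance window, a quick analysis pass is usually enough. Below is a hedged Python sketch that shells out to Windows' built-in defrag.exe in analysis-only mode (/A); the drive letter, the 10% threshold, and the report parsing are assumptions made for illustration, and the script needs to run with administrator rights.

```python
# Sketch: analyse fragmentation on a Windows volume before deciding whether
# a defragmentation pass (and a maintenance window) is warranted.
# Assumes the built-in defrag.exe utility and an elevated prompt.

import re
import subprocess


def fragmentation_report(drive="C:"):
    """Run an analysis-only pass (no defragmentation) and return the output."""
    result = subprocess.run(
        ["defrag", drive, "/A"],   # /A = analyse only, do not defragment
        capture_output=True,
        text=True,
    )
    return result.stdout


report = fragmentation_report("C:")
print(report)

# Crude check: flag the volume if the report mentions a fragmentation
# percentage above a chosen threshold. The exact wording of the report
# varies by Windows version, so treat this parsing as an assumption.
match = re.search(r"fragmented\D*(\d+)\s*%", report, re.IGNORECASE)
if match and int(match.group(1)) > 10:
    print("Fragmentation above 10% -- consider scheduling a defrag pass.")
```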

An automatic defragmenter is very handy because it operates in the background and keeps systems defragmented in real time. Unlike most defrag solutions, a fully automatic tool such as Diskeeper allows access to the data during the defragmentation process, so servers are constantly kept at peak performance.

Disk fragmentation is a serious problem that affects every business that relies on Windows systems. To keep servers at peak performance, check fragmentation levels with a third-party tool and act quickly before fragmentation gets out of control. With most companies now relying on technology to function, keeping disks healthy and performing at their best should always be a priority.

Celine Bouquillon
With Mike Danseglio
