using ionice to stop mdresync from killing the system

The system was sitting at a load of 7-8. The scan was reading from both local disks at a combined rate of 300-360 MB/s.

I found the system was running a scan of the Linux RAID LUNs.
I know similar scans from real disk arrays and RAID controllers, i.e. the Sniffer[tm] on EMC CLARiiON arrays, which runs as a background verify.
The problem is that the scan in the md array driver on my Debian box was written without respect for the fact that servers are usually meant to serve.

waxh0002:/usr/share/mdadm# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md3 : active raid1 sda4[0] sdb4[1]
      560331008 blocks [2/2] [UU]
      [===============>.....]  check = 77.8% (436326464/560331008) finish=30.2min speed=68253K/sec

ionice -c 3 -p 1721

I was able to use ionice to drop the scan into the lowest priority class, “idle”, and that was all it needed: the system is still doing the verify, but now with much less performance impact.
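For reference, here is roughly what I did as a sketch: the thread name md3_resync is an assumption matching the md3 array above (check ps or top for the real name on your box), and 1721 was simply its PID in my case.

```shell
#!/bin/sh
# Drop the md check/resync kernel thread into the "idle" IO class.
# "md3_resync" is an assumption based on the md3 array above;
# find the real thread name with ps or top.
PID=$(pgrep -x md3_resync)
if [ -n "$PID" ]; then
    ionice -c 3 -p "$PID"   # class 3 = idle
    ionice -p "$PID"        # should now report the idle class
else
    echo "no resync thread running"
fi
```

The idle class means the thread only gets disk time when no other process is doing IO, which is exactly what you want for a background verify.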

I guess the impact of the verify is obvious enough. The impact of ionice is also visible, but keep in mind that disk throughput also degrades as the scan heads towards the end of the disk…
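If you want to throttle the check itself rather than just deprioritise it, the md driver also has its own rate limits; a hedged sketch (the 10000 is just an example value, and the sysctls only exist where the md driver is loaded):

```shell
#!/bin/sh
# md resync/check rate limits, in KB/s per device
if [ -r /proc/sys/dev/raid/speed_limit_max ]; then
    echo "min: $(cat /proc/sys/dev/raid/speed_limit_min) KB/s"
    echo "max: $(cat /proc/sys/dev/raid/speed_limit_max) KB/s"
    # Lowering the max caps the check rate even on an otherwise
    # idle system (needs root); 10000 KB/s is just an example:
    # echo 10000 > /proc/sys/dev/raid/speed_limit_max
else
    echo "md driver not loaded"
fi
```

Unlike ionice, this caps the rate even when the disks are otherwise idle, so the check takes correspondingly longer.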


2 thoughts on “using ionice to stop mdresync from killing the system”

  1. DarkFader – what is the process name that you altered? The only persistent process I can find is md_raid0, and I have given it real-time priority using ionice -c 1, while also giving it a nice of -1 … both to try to speed it up.


  2. The process (thread) was called md_resync for md1, I think; I can’t really remember (the mdadm maintainer fixed my original issue).

    Real-time priority will not really speed up a slow process; it will just ensure the process is never interrupted for more than a few msecs.

    So… er… the resync is not a persistent process, but a kernel thread that is launched when the cron job runs (the job should be called something like mdadm).

    The IO speed of md is not really related to that; it depends more on your disks/SATA controllers and CPU.

    I.e. the server in the above example did up to 330 MB/s (reading from both disks at the same time to compare), whereas my box at home can hardly exceed 100 MB/s. So 50 MB/s from 1.5 TB WD drives is not exactly fast, but I didn’t look into tuning it. If it’s possible at all, then probably by tuning the readahead settings for the array member disks. I doubt it’s an md issue; raid1 is really not hard to get right.
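To see what actually kicks off the check on Debian, look at the mdadm package's cron file; a sketch (paths are from the Debian packaging as I remember it, so verify locally), including how to cancel a running check via sysfs:

```shell
#!/bin/sh
# The periodic check is started from the mdadm package's cron file
# (path and schedule may differ per Debian release):
cat /etc/cron.d/mdadm 2>/dev/null || echo "no mdadm cron file"
# A running check can be cancelled through sysfs; "md3" is the
# array from the example above (needs root):
# echo idle > /sys/block/md3/md/sync_action
```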
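On the readahead point, a hedged sketch using blockdev; the device name and the 4096-sector value are assumptions for illustration only:

```shell
#!/bin/sh
# Readahead is set per block device, in 512-byte sectors.
DISK=/dev/sda   # assumption: substitute your array member disks
if [ -r "$DISK" ]; then
    blockdev --getra "$DISK"          # show current readahead
    # blockdev --setra 4096 "$DISK"   # 4096 sectors = 2 MiB (needs root)
else
    echo "cannot read $DISK (not present, or not root)"
fi
```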
