The system was at a load of 7-8.
Something was reading from both local disks at a combined rate of 300-360 MB/s.
It turned out the system was running a verify scan of its Linux md RAID LUNs.
I know similar scans from real disk arrays and RAID controllers, e.g. the Sniffer[tm] on EMC CLARiiON arrays, which runs as a background verify.
The problem is that the scan started by the md array driver on my Debian box ran without any regard for the fact that servers are usually meant to serve.
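On Debian this verify is kicked off by the mdadm package's cron job, which calls the checkarray script; under the hood it just writes to the array's sysfs sync_action file. A minimal sketch of what happens (paths assumed from a standard Debian install, must be run as root, and `md3` is the array shown below):

```shell
# What Debian's /usr/share/mdadm/checkarray effectively does (assumed):
echo check > /sys/block/md3/md/sync_action   # start a background verify
# ...and how you could cancel a running check by hand:
echo idle  > /sys/block/md3/md/sync_action
```

This is a sysfs fragment for illustration, not something to run blindly on a production box.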
waxh0002:/usr/share/mdadm# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md3 : active raid1 sda4 sdb4
560331008 blocks [2/2] [UU]
[===============>.....]  check = 77.8% (436326464/560331008) finish=30.2min speed=68253K/sec
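The finish estimate in that line is easy to sanity-check: mdstat counts in 1K blocks and reports speed in K/sec, so the remaining time is simply (total - done) / speed:

```shell
# Recompute the "finish=30.2min" estimate from the mdstat numbers above.
remaining_kb=$(( 560331008 - 436326464 ))   # blocks are 1K units in mdstat
minutes=$(( remaining_kb / 68253 / 60 ))    # divide by reported K/sec, then 60
echo "${minutes} min remaining"             # prints "30 min remaining"
```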
waxh0002:/usr/share/mdadm# ionice -c 3 -p 1721
I was able to use ionice to put the scan into the lowest priority class, “idle”, and that was all it needed: the system is still doing the verify, but now with much less performance impact.
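For the record, here is the ionice pattern in a self-contained form. The demo targets a throwaway `sleep` process so it can run anywhere; on the real system you would pass the PID of the `mdX_resync` kernel thread instead (1721 in my case):

```shell
# Sketch: move a process to the "idle" I/O scheduling class and verify it.
sleep 60 &
pid=$!
ionice -c 3 -p "$pid"   # class 3 = idle: only gets disk time when disks are otherwise unused
ionice -p "$pid"        # reads the class back; reports "idle"
kill "$pid"
```

Since kernel 2.6.25 the idle class no longer requires root, so this works for unprivileged processes too.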
I guess the impact of the verify is pretty obvious. The impact of ionice is also visible, but keep in mind that disk performance also degrades as we head towards the end of the disk…