Comparison of the EXT4, XFS and ZFS file systems
Many Linux servers run with two hard disks in a RAID-1 array: the data is mirrored across both disks so that the failure of a single disk does not cause data loss. We looked at how different file systems perform in this configuration.
The test was carried out with the following components:
- CentOS 8.1 / kernel 4.18.0-147.5.1.el8_1.x86_64
- Two hard disks per file system in a RAID-1 array:
- Model Family: HGST Travelstar 5K1000
- Device Model: HGST HTE541010A9E680
- User Capacity: 1,000,204,886,016 bytes [1.00 TB]
- Sector Sizes: 512 bytes logical, 4096 bytes physical
- Rotation Rate: 5400 rpm
- Form Factor: 2.5 inches
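The EXT4 and XFS tests run on Linux software RAID (mdraid) mirrors. The article does not show how the arrays were created; a minimal sketch with mdadm, using the member disks that appear in the fio disk statistics further below (sdc/sde for /dev/md0, sdf/sdg for /dev/md1):

# Assumed array creation; the device names are taken from the fio disk stats below
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdc /dev/sde   # later formatted with EXT4
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdf /dev/sdg   # later formatted with XFS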
Creating the ZFS file system
ZFS mirrors the disks itself, so no mdraid array is needed here; ashift=12 aligns the pool to the disks' 4096-byte physical sectors.
[root@fstest ~]# zpool create -f -o ashift=12 -m /zfspool zfspool \
    mirror ata-HGST_HTE541010A9E680_J540001MJGTEXC ata-HGST_HTE541010A9E680_J5400013JZ6NAC
[root@fstest ~]# zpool status -v
  pool: zfspool
 state: ONLINE
  scan: none requested
config:

        NAME                                          STATE     READ WRITE CKSUM
        zfspool                                       ONLINE       0     0     0
          mirror-0                                    ONLINE       0     0     0
            ata-HGST_HTE541010A9E680_J540001MJGTEXC   ONLINE       0     0     0
            ata-HGST_HTE541010A9E680_J5400013JZ6NAC   ONLINE       0     0     0

NAME     PROPERTY       VALUE       SOURCE
zfspool  type           filesystem  -
zfspool  available      899G        -
zfspool  compressratio  1.00x       -
zfspool  quota          none        default
zfspool  reservation    none        default
zfspool  recordsize     128K        default
Creating the EXT4 file system
[root@fstest ~]# mkfs.ext4 /dev/md0
mke2fs 1.44.3 (10-July-2018)
Creating filesystem with 244157360 4k blocks and 61046784 inodes
Filesystem UUID: d592518f-30d6-43f5-8b8a-3852c0c4fbb4
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
        102400000, 214990848

Allocating group tables: done
Writing inode tables: done
Creating journal (262144 blocks): done
Writing superblocks and filesystem accounting information: done
Creating the XFS file system
[root@fstest ~]# mkfs.xfs /dev/md1
meta-data=/dev/md1               isize=512    agcount=4, agsize=61039340 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1
data     =                       bsize=4096   blocks=244157360, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=119217, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
Tests with bonnie++
You can find more about the bonnie++ program on Wikipedia (English): Bonnie++
In this test, Bonnie writes roughly 128 GB of data to the disks, twice the server's 64 GB of RAM, in order to minimize caching effects.
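The exact bonnie++ invocation is not given in the article; a call matching the parameters visible in the tables (126 GiB test size, 16 x 1024 files) could look like this, where the target directory is an assumption:

# Hypothetical invocation: -s 126g matches the "Size" column,
# -n 16 creates 16*1024 files, -d (target directory) is an assumption
bonnie++ -d /mnt/test -s 126g -n 16 -u root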
ZFS (version 0.8.3)
bonnie++ version 1.98:

| Size | Seq. Output Per Char | Seq. Output Block | Rewrite | Seq. Input Per Char | Seq. Input Block | Random Seeks |
|------|------|------|------|------|------|------|
| 126G | 123 M/sec (99% CPU) | 73 M/sec (16% CPU) | 36 M/sec (9% CPU) | 255 M/sec (99% CPU) | 122 M/sec (13% CPU) | 113.9/sec (5% CPU) |
| Latency | 64300us | 13486ms | 21657ms | 65773us | 3948ms | 408ms |

| Num Files | Seq. Create | Seq. Read | Seq. Delete | Random Create | Random Read | Random Delete |
|------|------|------|------|------|------|------|
| 16 | 4966/sec (92% CPU) | +++++ | 1524/sec (15% CPU) | 5059/sec (92% CPU) | +++++ | 1237/sec (13% CPU) |
| Latency | 3486us | 1152us | 3105ms | 3541us | 23us | 3762ms |

Entries shown as +++++ mean the operation completed too quickly for bonnie++ to report a meaningful value.
EXT4 (mdraid / RAID 1)
| Size | Seq. Output Per Char | Seq. Output Block | Rewrite | Seq. Input Per Char | Seq. Input Block | Random Seeks |
|------|------|------|------|------|------|------|
| 126G | 424 M/sec (97% CPU) | 90 M/sec (12% CPU) | 39 M/sec (4% CPU) | 958 M/sec (93% CPU) | 100 M/sec (5% CPU) | 346.1/sec (5% CPU) |
| Latency | 18630us | 8545ms | 1140ms | 42759us | 289ms | 836ms |

| Num Files | Seq. Create | Seq. Read | Seq. Delete | Random Create | Random Read | Random Delete |
|------|------|------|------|------|------|------|
| 16 | 12255/sec (36% CPU) | +++++ | +++++ | 23530/sec (68% CPU) | +++++ | +++++ |
| Latency | 326us | 496us | 1211us | 266us | 12us | 2516us |
XFS (mdraid / RAID 1)
| Size | Seq. Output Per Char | Seq. Output Block | Rewrite | Seq. Input Per Char | Seq. Input Block | Random Seeks |
|------|------|------|------|------|------|------|
| 126G | 651 M/sec (85% CPU) | 79 M/sec (6% CPU) | 38 M/sec (4% CPU) | 896 M/sec (94% CPU) | 105 M/sec (5% CPU) | 232.6/sec (27% CPU) |
| Latency | 10684us | 18498us | 4093ms | 23959us | 176ms | 112ms |

| Num Files | Seq. Create | Seq. Read | Seq. Delete | Random Create | Random Read | Random Delete |
|------|------|------|------|------|------|------|
| 16 | 7873/sec (38% CPU) | +++++ | 12755/sec (31% CPU) | 8304/sec (40% CPU) | +++++ | 12830/sec (33% CPU) |
| Latency | 464us | 158us | 212us | 290us | 15us | 184us |
Tests with fio - 16 KB block size
The documentation for fio (flexible I/O tester) can be found here.
Random read/write requests, 70% read / 30% write, 16 KB block size, direct I/O (eight jobs, 1 GB per job):
fio --name=randrw --rw=randrw --direct=1 --ioengine=libaio --bs=16k --numjobs=8 --rwmixread=70 --size=1G --runtime=600 --group_reporting
ZFS 〉 70% read 〉 30% write 〉 16 KB block size
read:  IOPS=706, BW=11.0MiB/s (11.6MB/s)(5731MiB/519310msec)
write: IOPS=303, BW=4852KiB/s (4969kB/s)(2461MiB/519310msec)

Run status group 0 (all jobs):
   READ: bw=11.0MiB/s (11.6MB/s), 11.0MiB/s-11.0MiB/s (11.6MB/s-11.6MB/s), io=5731MiB (6010MB), run=519310-519310msec
  WRITE: bw=4852KiB/s (4969kB/s), 4852KiB/s-4852KiB/s (4969kB/s-4969kB/s), io=2461MiB (2580MB), run=519310-519310msec
EXT4 〉 70% read 〉 30% write 〉 16 KB block size
read:  IOPS=132, BW=2127KiB/s (2178kB/s)(1247MiB/600427msec)
write: IOPS=57, BW=913KiB/s (935kB/s)(535MiB/600427msec)

Run status group 0 (all jobs):
   READ: bw=2127KiB/s (2178kB/s), 2127KiB/s-2127KiB/s (2178kB/s-2178kB/s), io=1247MiB (1307MB), run=600427-600427msec
  WRITE: bw=913KiB/s (935kB/s), 913KiB/s-913KiB/s (935kB/s-935kB/s), io=535MiB (561MB), run=600427-600427msec

Disk stats (read/write):
    md0: ios=79802/34639, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=39901/35743, aggrmerge=0/134, aggrticks=1519734/631509, aggrin_queue=2119860, aggrutil=15.01%
  sde: ios=16184/35743, merge=0/134, ticks=205446/139020, in_queue=327748, util=8.74%
  sdc: ios=63618/35743, merge=0/134, ticks=2834022/1123998, in_queue=3911973, util=15.01%
XFS 〉 70% read 〉 30% write 〉 16 KB block size
read:  IOPS=104, BW=1680KiB/s (1720kB/s)(984MiB/600108msec)
write: IOPS=45, BW=727KiB/s (745kB/s)(426MiB/600108msec)

Run status group 0 (all jobs):
   READ: bw=1680KiB/s (1720kB/s), 1680KiB/s-1680KiB/s (1720kB/s-1720kB/s), io=984MiB (1032MB), run=600108-600108msec
  WRITE: bw=727KiB/s (745kB/s), 727KiB/s-727KiB/s (745kB/s-745kB/s), io=426MiB (447MB), run=600108-600108msec

Disk stats (read/write):
    md1: ios=63000/27327, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=31500/28345, aggrmerge=0/3, aggrticks=813483/586447, aggrin_queue=1375129, aggrutil=12.18%
  sdg: ios=13054/28345, merge=0/4, ticks=146353/101802, in_queue=235378, util=7.11%
  sdf: ios=49946/28346, merge=0/3, ticks=1480614/1071093, in_queue=2514880, util=12.18%
Tests with fio - 128 KB block size
Random read/write requests, 70% read / 30% write, 128 KB block size, buffered I/O, 16 GB per job (128 GB in total, twice the server's RAM):

fio --name=randrw --rw=randrw --direct=0 --ioengine=libaio --bs=128k --numjobs=8 --rwmixread=70 --size=16G --runtime=600 --group_reporting
ZFS 〉 70% read 〉 30% write 〉 128 KB block size 〉 128 GB data
[root@fstest fio]# fio --name=randrw --rw=randrw --direct=0 --ioengine=libaio --bs=128k --numjobs=8 \
    --rwmixread=70 --size=16G --runtime=600 --group_reporting
randrw: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=1
...
fio-3.7
Starting 8 processes
randrw: Laying out IO file (1 file / 16384MiB)
randrw: Laying out IO file (1 file / 16384MiB)
randrw: Laying out IO file (1 file / 16384MiB)
randrw: Laying out IO file (1 file / 16384MiB)
randrw: Laying out IO file (1 file / 16384MiB)
randrw: Laying out IO file (1 file / 16384MiB)
randrw: Laying out IO file (1 file / 16384MiB)
randrw: Laying out IO file (1 file / 16384MiB)
Jobs: 8 (f=8): [m(8)][100.0%][r=15.1MiB/s,w=4736KiB/s][r=121,w=37 IOPS][eta 00m:00s]
randrw: (groupid=0, jobs=8): err= 0: pid=25652: Mon Mar 23 00:31:15 2020
  read: IOPS=101, BW=12.6MiB/s (13.2MB/s)(7581MiB/600088msec)
    slat (usec): min=35, max=981460, avg=78730.98, stdev=66695.87
    clat (nsec): min=1428, max=37411, avg=3757.49, stdev=1925.39
     lat (usec): min=37, max=981466, avg=78736.05, stdev=66696.27
    clat percentiles (nsec):
     |  1.00th=[ 1768],  5.00th=[ 3152], 10.00th=[ 3216], 20.00th=[ 3248],
     | 30.00th=[ 3312], 40.00th=[ 3344], 50.00th=[ 3376], 60.00th=[ 3408],
     | 70.00th=[ 3472], 80.00th=[ 3728], 90.00th=[ 4256], 95.00th=[ 4896],
     | 99.00th=[16512], 99.50th=[17024], 99.90th=[18048], 99.95th=[20608],
     | 99.99th=[27520]
   bw (  KiB/s): min=  255, max= 7424, per=12.52%, avg=1618.84, stdev=922.75, samples=9588
   iops        : min=    1, max=   58, avg=12.61, stdev= 7.21, samples=9588
  write: IOPS=43, BW=5579KiB/s (5713kB/s)(3269MiB/600088msec)
    slat (usec): min=51, max=336266, avg=958.61, stdev=10476.92
    clat (nsec): min=1572, max=23886, avg=1981.22, stdev=629.80
     lat (usec): min=53, max=336271, avg=961.20, stdev=10477.13
    clat percentiles (nsec):
     |  1.00th=[ 1672],  5.00th=[ 1736], 10.00th=[ 1784], 20.00th=[ 1832],
     | 30.00th=[ 1864], 40.00th=[ 1896], 50.00th=[ 1928], 60.00th=[ 1960],
     | 70.00th=[ 1992], 80.00th=[ 2024], 90.00th=[ 2096], 95.00th=[ 2160],
     | 99.00th=[ 3376], 99.50th=[ 3664], 99.90th=[12736], 99.95th=[14144],
     | 99.99th=[18304]
   bw (  KiB/s): min=  255, max= 5120, per=15.22%, avg=848.86, stdev=612.50, samples=7887
   iops        : min=    1, max=   40, avg= 6.60, stdev= 4.79, samples=7887
  lat (usec)   : 2=24.28%, 4=65.70%, 10=8.45%, 20=1.53%, 50=0.04%
  cpu          : usr=0.02%, sys=0.15%, ctx=59928, majf=0, minf=142
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=60644,26155,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=12.6MiB/s (13.2MB/s), 12.6MiB/s-12.6MiB/s (13.2MB/s-13.2MB/s), io=7581MiB (7949MB), run=600088-600088msec
  WRITE: bw=5579KiB/s (5713kB/s), 5579KiB/s-5579KiB/s (5713kB/s-5713kB/s), io=3269MiB (3428MB), run=600088-600088msec
EXT4 〉 70% read 〉 30% write 〉 128 KB block size 〉 128 GB data
[root@fstest fio]# fio --name=randrw --rw=randrw --direct=0 --ioengine=libaio --bs=128k --numjobs=8 \
    --rwmixread=70 --size=16G --runtime=600 --group_reporting
randrw: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=1
...
fio-3.7
Starting 8 processes
randrw: Laying out IO file (1 file / 16384MiB)
randrw: Laying out IO file (1 file / 16384MiB)
randrw: Laying out IO file (1 file / 16384MiB)
randrw: Laying out IO file (1 file / 16384MiB)
randrw: Laying out IO file (1 file / 16384MiB)
randrw: Laying out IO file (1 file / 16384MiB)
randrw: Laying out IO file (1 file / 16384MiB)
randrw: Laying out IO file (1 file / 16384MiB)
Jobs: 8 (f=8): [m(8)][100.0%][r=12.0MiB/s,w=5381KiB/s][r=96,w=42 IOPS][eta 00m:00s]
randrw: (groupid=0, jobs=8): err= 0: pid=11169: Mon Mar 23 00:29:37 2020
  read: IOPS=115, BW=14.5MiB/s (15.2MB/s)(8671MiB/600062msec)
    slat (usec): min=554, max=610323, avg=68985.35, stdev=55557.33
    clat (nsec): min=1964, max=131469, avg=4047.76, stdev=1586.81
     lat (usec): min=558, max=610332, avg=68990.75, stdev=55557.73
    clat percentiles (nsec):
     |  1.00th=[ 2448],  5.00th=[ 3312], 10.00th=[ 3504], 20.00th=[ 3568],
     | 30.00th=[ 3600], 40.00th=[ 3632], 50.00th=[ 3696], 60.00th=[ 3728],
     | 70.00th=[ 3824], 80.00th=[ 4128], 90.00th=[ 4704], 95.00th=[ 5728],
     | 99.00th=[12224], 99.50th=[13632], 99.90th=[17792], 99.95th=[20096],
     | 99.99th=[28032]
   bw (  KiB/s): min=  255, max=12032, per=12.50%, avg=1849.42, stdev=1184.94, samples=9600
   iops        : min=    1, max=   94, avg=14.39, stdev= 9.26, samples=9600
  write: IOPS=49, BW=6391KiB/s (6544kB/s)(3745MiB/600062msec)
    slat (usec): min=77, max=160148, avg=408.05, stdev=4833.27
    clat (nsec): min=1317, max=30439, avg=1817.31, stdev=631.19
     lat (usec): min=78, max=160153, avg=410.31, stdev=4833.45
    clat percentiles (nsec):
     |  1.00th=[ 1448],  5.00th=[ 1512], 10.00th=[ 1560], 20.00th=[ 1640],
     | 30.00th=[ 1704], 40.00th=[ 1736], 50.00th=[ 1768], 60.00th=[ 1816],
     | 70.00th=[ 1848], 80.00th=[ 1880], 90.00th=[ 1960], 95.00th=[ 2064],
     | 99.00th=[ 2960], 99.50th=[ 3920], 99.90th=[12480], 99.95th=[13248],
     | 99.99th=[16320]
   bw (  KiB/s): min=  255, max= 6400, per=14.52%, avg=928.03, stdev=720.88, samples=8263
   iops        : min=    1, max=   50, avg= 7.19, stdev= 5.64, samples=8263
  lat (usec)   : 2=28.10%, 4=56.05%, 10=14.52%, 20=1.30%, 50=0.04%
  lat (usec)   : 250=0.01%
  cpu          : usr=0.02%, sys=0.22%, ctx=69859, majf=0, minf=140
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=69371,29959,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=14.5MiB/s (15.2MB/s), 14.5MiB/s-14.5MiB/s (15.2MB/s-15.2MB/s), io=8671MiB (9093MB), run=600062-600062msec
  WRITE: bw=6391KiB/s (6544kB/s), 6391KiB/s-6391KiB/s (6544kB/s-6544kB/s), io=3745MiB (3927MB), run=600062-600062msec

Disk stats (read/write):
    md0: ios=69368/30275, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=34685/30086, aggrmerge=0/231, aggrticks=2384338/888440, aggrin_queue=3240162, aggrutil=12.18%
  sde: ios=33255/30064, merge=0/250, ticks=2043693/809189, in_queue=2821071, util=11.65%
  sdc: ios=36116/30109, merge=0/212, ticks=2724983/967692, in_queue=3659254, util=12.18%
XFS 〉 70% read 〉 30% write 〉 128 KB block size 〉 128 GB data
[root@fstest fio]# fio --name=randrw --rw=randrw --direct=0 --ioengine=libaio --bs=128k --numjobs=8 \
    --rwmixread=70 --size=16G --runtime=600 --group_reporting
randrw: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=1
...
fio-3.7
Starting 8 processes
randrw: Laying out IO file (1 file / 16384MiB)
randrw: Laying out IO file (1 file / 16384MiB)
randrw: Laying out IO file (1 file / 16384MiB)
randrw: Laying out IO file (1 file / 16384MiB)
randrw: Laying out IO file (1 file / 16384MiB)
randrw: Laying out IO file (1 file / 16384MiB)
randrw: Laying out IO file (1 file / 16384MiB)
randrw: Laying out IO file (1 file / 16384MiB)
Jobs: 8 (f=8): [m(8)][100.0%][r=23.4MiB/s,w=8840KiB/s][r=187,w=69 IOPS][eta 00m:00s]
randrw: (groupid=0, jobs=8): err= 0: pid=10376: Mon Mar 23 00:42:49 2020
  read: IOPS=113, BW=14.2MiB/s (14.9MB/s)(8535MiB/600059msec)
    slat (usec): min=624, max=673486, avg=70227.61, stdev=61519.31
    clat (nsec): min=2109, max=24242, avg=3674.62, stdev=865.32
     lat (usec): min=628, max=673491, avg=70232.37, stdev=61519.33
    clat percentiles (nsec):
     |  1.00th=[ 3408],  5.00th=[ 3440], 10.00th=[ 3472], 20.00th=[ 3504],
     | 30.00th=[ 3536], 40.00th=[ 3568], 50.00th=[ 3600], 60.00th=[ 3600],
     | 70.00th=[ 3632], 80.00th=[ 3664], 90.00th=[ 3728], 95.00th=[ 3760],
     | 99.00th=[ 6112], 99.50th=[10816], 99.90th=[16768], 99.95th=[17792],
     | 99.99th=[21632]
   bw (  KiB/s): min=  256, max=12288, per=12.50%, avg=1820.82, stdev=1244.41, samples=9598
   iops        : min=    2, max=   96, avg=14.18, stdev= 9.72, samples=9598
  write: IOPS=49, BW=6280KiB/s (6431kB/s)(3680MiB/600059msec)
    slat (usec): min=63, max=409, avg=82.03, stdev=10.00
    clat (nsec): min=1297, max=26489, avg=1801.48, stdev=555.07
     lat (usec): min=65, max=443, avg=84.24, stdev=10.29
    clat percentiles (nsec):
     |  1.00th=[ 1432],  5.00th=[ 1480], 10.00th=[ 1512], 20.00th=[ 1576],
     | 30.00th=[ 1704], 40.00th=[ 1768], 50.00th=[ 1816], 60.00th=[ 1848],
     | 70.00th=[ 1880], 80.00th=[ 1928], 90.00th=[ 1976], 95.00th=[ 2024],
     | 99.00th=[ 2160], 99.50th=[ 2256], 99.90th=[12992], 99.95th=[14400],
     | 99.99th=[17536]
   bw (  KiB/s): min=  255, max= 7680, per=14.84%, avg=931.85, stdev=738.46, samples=8087
   iops        : min=    1, max=   60, avg= 7.23, stdev= 5.77, samples=8087
  lat (usec)   : 2=27.83%, 4=71.05%, 10=0.64%, 20=0.46%, 50=0.02%
  cpu          : usr=0.02%, sys=0.19%, ctx=68315, majf=0, minf=187
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=68283,29441,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=14.2MiB/s (14.9MB/s), 14.2MiB/s-14.2MiB/s (14.9MB/s-14.9MB/s), io=8535MiB (8950MB), run=600059-600059msec
  WRITE: bw=6280KiB/s (6431kB/s), 6280KiB/s-6280KiB/s (6431kB/s-6431kB/s), io=3680MiB (3859MB), run=600059-600059msec

Disk stats (read/write):
    md1: ios=68410/23675, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=34141/23760, aggrmerge=63/27, aggrticks=2395357/744590, aggrin_queue=3110750, aggrutil=11.10%
  sdg: ios=34611/23754, merge=64/34, ticks=2398039/710562, in_queue=3079169, util=11.10%
  sdf: ios=33672/23767, merge=63/20, ticks=2392676/778618, in_queue=3142332, util=10.98%
Tests with Perl
A Perl script creates 100,000 files of 4 KB and 128 KB in size; the measured results are shown below, followed by a sketch of what such a script might look like.
# datasize: 4k -------------------------
# ext4 file_create 100000
write:   16592.003/sec, write/6.027 secs, write_sync/7.022 secs
rewrite:  9100.009/sec, rewrite/10.989 secs, rewrite_sync/0.524 secs
read:    38387.716/sec, read/2.605 secs, read_sync/0.817 secs
delete:  28161.081/sec, delete/3.551 secs, delete_sync/0.951 secs

# xfs file_create 100000
write:   10093.873/sec, write/9.907 secs, write_sync/14.865 secs
rewrite:  8312.552/sec, rewrite/12.030 secs, rewrite_sync/8.461 secs
read:    35880.875/sec, read/2.787 secs, read_sync/6.708 secs
delete:   7859.781/sec, delete/12.723 secs, delete_sync/6.815 secs

# zfs file_create 100000
write:    4301.075/sec, write/23.250 secs, write_sync/7.250 secs
rewrite:  1630.683/sec, rewrite/61.324 secs, rewrite_sync/1.212 secs
read:    29770.765/sec, read/3.359 secs, read_sync/0.018 secs
delete:   2045.073/sec, delete/48.898 secs, delete_sync/0.143 secs
# datasize: 128k -------------------------
# ext4 file_create 100000
write:    1185.579/sec, write/84.347 secs, write_sync/181.638 secs
rewrite:   625.403/sec, rewrite/159.897 secs, rewrite_sync/0.263 secs
read:    10447.137/sec, read/9.572 secs, read_sync/0.431 secs
delete:  17721.070/sec, delete/5.643 secs, delete_sync/0.829 secs

# xfs file_create 100000
write:    4438.132/sec, write/22.532 secs, write_sync/179.358 secs
rewrite:   433.937/sec, rewrite/230.448 secs, rewrite_sync/6.740 secs
read:     9834.776/sec, read/10.168 secs, read_sync/5.795 secs
delete:   4660.918/sec, delete/21.455 secs, delete_sync/0.591 secs

# zfs file_create 100000
write:     356.019/sec, write/280.884 secs, write_sync/103.739 secs
rewrite:   433.661/sec, rewrite/230.595 secs, rewrite_sync/103.261 secs
read:      181.663/sec, read/550.469 secs, read_sync/0.005 secs
delete:   1975.075/sec, delete/50.631 secs, delete_sync/5.093 secs
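The script itself is not included in the article. As an illustration, a minimal sketch of the write phase of such a benchmark, assuming the rewrite, read and delete phases follow the same pattern; file names, directory layout and the exact way the *_sync times are taken are assumptions:

#!/usr/bin/perl
# Hypothetical reconstruction of the benchmark's write phase.
use strict;
use warnings;
use Time::HiRes qw(time);   # high-resolution timing

my $count = 100_000;
my $data  = 'x' x (4 * 1024);   # 4k test; use 128 * 1024 for the 128k test

# Timed write phase: create $count files of fixed size
my $t0 = time;
for my $i (1 .. $count) {
    open my $fh, '>', "file_$i" or die "open: $!";
    print {$fh} $data;
    close $fh;
}
my $t_write = time - $t0;

# The *_sync values presumably measure how long flushing everything
# to disk takes afterwards (assumption).
$t0 = time;
system('sync');
my $t_sync = time - $t0;

printf "write: %.3f/sec, write/%.3f secs, write_sync/%.3f secs\n",
    $count / $t_write, $t_write, $t_sync;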