iSCSI + mdadm RAID1 vs DRBD. Part 1

Today I decided to run a test without any tuning at all; DRBD tuning and network throughput with two NICs will be covered in part two.

Hardware

There are two computers with identical configurations:
OS: Ubuntu 12.04 Server
The OS is installed on a separate 80 GB drive
Drives for network synchronization: 500 GB
CPU: Intel(R) Core(TM) i5-2400 CPU @ 3.10GHz
RAM: 4 GB

Hard drive tests

Write:
On one of the computers the 500 GB drive is attached as sda, on the other as sdb.
The sda drive turned out to be slightly slower, so here are its results. After formatting, all tests will use 512K blocks:

dd if=/dev/zero of=/dev/sda bs=4K count=10000
40960000 bytes (41 MB) copied, 0,375801 s, 109 MB/s
dd if=/dev/zero of=/dev/sda bs=512K count=512
268435456 bytes (268 MB) copied, 2,3445 s, 114 MB/s
dd if=/dev/zero of=/dev/sda bs=1024K count=512
536870912 bytes (537 MB) copied, 5,006 s, 107 MB/s

Read:

dd of=/dev/zero if=/dev/sda bs=4K count=10000
40960000 bytes (41 MB) copied, 0,388215 s, 106 MB/s
dd of=/dev/zero if=/dev/sda bs=512K count=512
268435456 bytes (268 MB) copied, 2,28756 s, 117 MB/s
dd of=/dev/zero if=/dev/sda bs=1024K count=512
536870912 bytes (537 MB) copied, 4,47793 s, 120 MB/s
hdparm -tT /dev/sda
/dev/sda:
 Timing cached reads:   22084 MB in  2.00 seconds = 11053.06 MB/sec
 Timing buffered disk reads: 350 MB in  3.01 seconds = 116.09 MB/sec
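
One caveat worth noting (the post doesn't mention it): a sequential read taken right after writing the same blocks can be served partly from the page cache rather than the disk. Dropping the cache before a read test keeps the numbers honest; a minimal sketch (requires root):

```shell
# Flush dirty pages and drop the page cache so the next
# read test actually hits the disk.
sync
echo 3 > /proc/sys/vm/drop_caches
```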

Let's try formatting with different file systems and compare the results:

mkfs.ext2 /dev/sda
mount /dev/sda /var/data/
dd if=/dev/zero of=/var/data/test bs=512K count=20000
10485760000 bytes (10 GB) copied, 81,1496 s, 129 MB/s

dd of=/dev/zero if=/var/data/test bs=512K count=20000
10485760000 bytes (10 GB) copied, 81,5468 s, 129 MB/s

mkfs.ext2 /dev/sdb
mount /dev/sdb /var/data/
dd if=/dev/zero of=/var/data/test bs=512K count=20000
10485760000 bytes (10 GB) copied, 78,8655 s, 133 MB/s

dd of=/dev/zero if=/var/data/test bs=512K count=20000
10485760000 bytes (10 GB) copied, 61,3564 s, 171 MB/s

mkfs.ext3 /dev/sda
mount /dev/sda /var/data/
dd if=/dev/zero of=/var/data/test bs=512K count=20000
10485760000 bytes (10 GB) copied, 110,137 s, 95,2 MB/s

dd of=/dev/zero if=/var/data/test bs=512K count=1000
524288000 bytes (524 MB) copied, 4,41567 s, 119 MB/s

mkfs.ext3 /dev/sdb
mount /dev/sdb /var/data/
dd if=/dev/zero of=/var/data/test bs=512K count=20000
10485760000 bytes (10 GB) copied, 109,412 s, 95,8 MB/s

dd of=/dev/zero if=/var/data/test bs=512K count=1000
524288000 bytes (524 MB) copied, 4,17264 s, 126 MB/s
mkfs.ext4 /dev/sda
mount /dev/sda /var/data/
dd if=/dev/zero of=/var/data/test bs=512K count=20000
10485760000 bytes (10 GB) copied, 80,5518 s, 130 MB/s

dd of=/dev/zero if=/var/data/test bs=512K count=20000
10485760000 bytes (10 GB) copied, 80,6576 s, 130 MB/s

mkfs.ext4 /dev/sdb
mount /dev/sdb /var/data/
dd if=/dev/zero of=/var/data/test bs=512K count=20000
10485760000 bytes (10 GB) copied, 73,9922 s, 142 MB/s

dd of=/dev/zero if=/var/data/test bs=512K count=20000
10485760000 bytes (10 GB) copied, 63,2027 s, 166 MB/s
mkfs.reiserfs /dev/sda
mount /dev/sda /var/data/
dd if=/dev/zero of=/var/data/test bs=512K count=20000
10485760000 bytes (10 GB) copied, 103,225 s, 102 MB/s

dd of=/dev/zero if=/var/data/test bs=512K count=20000
10485760000 bytes (10 GB) copied, 81,2434 s, 129 MB/s

mkfs.reiserfs /dev/sdb
mount /dev/sdb /var/data/
dd if=/dev/zero of=/var/data/test bs=512K count=20000
10485760000 bytes (10 GB) copied, 82,6313 s, 127 MB/s

dd of=/dev/zero if=/var/data/test bs=512K count=20000
10485760000 bytes (10 GB) copied, 79,8497 s, 131 MB/s
mkfs.xfs /dev/sda
mount /dev/sda /var/data/
dd if=/dev/zero of=/var/data/test bs=512K count=20000
10485760000 bytes (10 GB) copied, 77,269 s, 136 MB/s

dd of=/dev/zero if=/var/data/test bs=512K count=20000
10485760000 bytes (10 GB) copied, 81,2329 s, 129 MB/s

mkfs.xfs /dev/sdb
mount /dev/sdb /var/data/
dd if=/dev/zero of=/var/data/test bs=512K count=20000
10485760000 bytes (10 GB) copied, 73,3584 s, 143 MB/s

dd of=/dev/zero if=/var/data/test bs=512K count=20000
10485760000 bytes (10 GB) copied, 65,0281 s, 161 MB/s
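
The mkfs/mount/dd cycle repeated above can be automated. A minimal sketch of the same procedure, using the device and mount point from the post (note that mkfs.xfs and mkreiserfs may prompt or require -f/-q to overwrite an existing filesystem):

```shell
#!/bin/sh
# Sequential write/read benchmark across several filesystems.
# WARNING: destroys all data on $DEV.
DEV=/dev/sdb
MNT=/var/data

for FS in ext2 ext3 ext4 reiserfs xfs; do
    echo "=== $FS ==="
    mkfs -t "$FS" "$DEV"                               # xfs may need: mkfs.xfs -f
    mount "$DEV" "$MNT"
    dd if=/dev/zero of="$MNT/test" bs=512K count=20000 # write test
    sync; echo 3 > /proc/sys/vm/drop_caches            # avoid cached reads
    dd if="$MNT/test" of=/dev/null bs=512K             # read test
    umount "$MNT"
done
```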

Network speed
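
The server side of the test isn't shown in the post; to match the client commands below, iperf on the "slave" node would presumably be started as:

```shell
# Listen on TCP port 80 (the port the clients connect to),
# reporting in MBytes to match the client output.
iperf -s -p 80 -f Mbytes
```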

iperf -c slave -p 80 -t 180 -f Mbytes -i 20
------------------------------------------------------------
Client connecting to slave, TCP port 80
TCP window size: 0.02 MByte (default)
------------------------------------------------------------
[  3] local 10.10.10.6 port 59974 connected with 10.10.10.85 port 80
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-20.0 sec  1098 MBytes  54.9 MBytes/sec
[  3] 20.0-40.0 sec  1097 MBytes  54.9 MBytes/sec
[  3] 40.0-60.0 sec  1097 MBytes  54.8 MBytes/sec

iperf -c slave -p 80 -t 180 -f Mbytes -i 20 -F /var/data/test
------------------------------------------------------------
Client connecting to slave, TCP port 80
TCP window size: 0.02 MByte (default)
------------------------------------------------------------
[  4] local 10.10.10.6 port 59973 connected with 10.10.10.85 port 80
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-20.0 sec  1166 MBytes  58.3 MBytes/sec
[  4] 20.0-40.0 sec  1165 MBytes  58.2 MBytes/sec

iperf -c slave -p 80 -t 180 -f Mbytes -i 20 -w 1M

Preparing iSCSI + mdadm RAID1

First, configure an iSCSI target on one of the nodes.
iSCSI is set up using LIO.
On the second node, attach the exported disk and assemble a RAID1 array:
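
The target-side configuration itself isn't shown in the post. With LIO it might look roughly like the following; the backstore name and IQN are placeholders, and exact targetcli paths vary between versions (older releases use /backstores/iblock instead of /backstores/block):

```shell
# --- On the target node: export the 500 GB disk via LIO ---
targetcli /backstores/block create name=disk0 dev=/dev/sdb
targetcli /iscsi create iqn.2012-01.local.test:disk0
targetcli /iscsi/iqn.2012-01.local.test:disk0/tpg1/luns create /backstores/block/disk0
targetcli /iscsi/iqn.2012-01.local.test:disk0/tpg1/portals create 10.10.10.85

# --- On the initiator node: discover and log in with open-iscsi ---
# The LUN then appears as a local disk (here /dev/sdc, as used below).
iscsiadm -m discovery -t sendtargets -p 10.10.10.85
iscsiadm -m node --login
```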

mdadm --create /dev/md0 --chunk=512 --spare-devices=0 --force --level=1 --raid-devices=2 /dev/sda /dev/sdc

Watch the synchronization speed:

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdc[1] sda[0]
      488255360 blocks super 1.2 [2/2] [UU]
      [=======>.............]  resync = 39.7% (193960000/488255360) finish=95.5min speed=51336K/sec
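
The resync rate here is bounded by the network (~55 MB/s, per the iperf test above), not by md itself, but for reference the kernel's resync throttles live in /proc and can be adjusted:

```shell
# Per-device resync speed limits, in KB/s (kernel defaults
# are typically 1000 min / 200000 max).
cat /proc/sys/dev/raid/speed_limit_min
cat /proc/sys/dev/raid/speed_limit_max

# Raise the floor so resync isn't throttled under load:
echo 50000 > /proc/sys/dev/raid/speed_limit_min
```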

Overall, the resync speed ranged from 48 to 55 MB/s, which matches the ~55 MB/s measured over the network with iperf.

Tests with dd:

dd if=/dev/zero of=/dev/md0 bs=512K count=512
512+0 records in
512+0 records out
268435456 bytes (268 MB) copied, 4,92976 s, 54,5 MB/s
dd if=/dev/zero of=/dev/md0 bs=1024K count=512
512+0 records in
512+0 records out
536870912 bytes (537 MB) copied, 14,0946 s, 38,1 MB/s
dd if=/dev/zero of=/dev/md0 bs=4096K count=512
512+0 records in
512+0 records out
2147483648 bytes (2,1 GB) copied, 43,3706 s, 49,5 MB/s

Read:

dd of=/dev/zero if=/dev/md0 bs=1024K count=512
512+0 records in
512+0 records out
536870912 bytes (537 MB) copied, 4,52246 s, 119 MB/s
hdparm -tT /dev/md0

/dev/md0:
 Timing cached reads:   22008 MB in  2.00 seconds = 11015.51 MB/sec
 Timing buffered disk reads: 340 MB in  3.01 seconds = 113.11 MB/sec

DRBD
/etc/drbd.d/global_common.conf

global {
        usage-count yes;
}

common {
        protocol C;

        handlers {
                pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
                pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
                local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
        }

        startup {
        }

        disk {
        }

        net {

                sndbuf-size 0;
                max-buffers 8000;
                max-epoch-size 8000;
        }

        syncer {
                rate 120M;

        }
}

/etc/drbd.d/r0.res

resource r0
{
        on node1
        {
           device /dev/drbd0;
           disk /dev/sda;
           address 10.10.10.85:7788;
           meta-disk internal;
        }
        on node2
        {
           device /dev/drbd0;
           disk /dev/sdb;
           address 10.10.10.6:7788;
           meta-disk internal;
        }
}
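
The bring-up steps aren't shown in the post; with this config they would typically follow the standard DRBD 8.3 procedure (the resource name r0 comes from the config above):

```shell
# On both nodes: create metadata and bring the resource up.
drbdadm create-md r0
drbdadm up r0

# On the node that should become primary: force the initial sync
# (DRBD 8.3 syntax; this overwrites the peer's data).
drbdadm -- --overwrite-data-of-peer primary r0

# Watch sync progress:
cat /proc/drbd
```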

Synchronization proceeds like this:

version: 8.3.11 (api:88/proto:86-96)
srcversion: 71955441799F513ACA6DA60
 0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r-----
    ns:1996508 nr:0 dw:0 dr:2029080 al:0 bm:121 lo:1 pe:8 ua:250 ap:0 ep:1 wo:f oos:486376120
        [>....................] sync'ed:  0.5% (474976/476924)M finish: 2:12:27 speed: 61,184 (60,468) K/sec

Let's check how it performs:

dd if=/dev/zero of=/dev/drbd0 bs=512K count=20000
10485760000 bytes (10 GB) copied, 271,447 s, 38,6 MB/s

Format and test:
mkfs.ext4 /dev/drbd0

mount /dev/drbd0 /var/data/

Write and read:

dd if=/dev/zero of=/var/data/test bs=512K count=20000
10485760000 bytes (10 GB) copied, 345,559 s, 30,3 MB/s

dd of=/dev/zero if=/var/data/test bs=512K count=20000
10485760000 bytes (10 GB) copied, 81,8498 s, 128 MB/s

mkfs.xfs /dev/drbd0

mount /dev/drbd0 /var/data/

Write and read:

dd if=/dev/zero of=/var/data/test bs=512K count=20000
10485760000 bytes (10 GB) copied, 345,954 s, 30,3 MB/s

dd of=/dev/zero if=/var/data/test bs=512K count=20000
10485760000 bytes (10 GB) copied, 81,2412 s, 129 MB/s

With the fio utility
The write job config:

[write]
rw=write
size=10g
directory=/var/data
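
Assuming the job file above is saved as write.fio, it is launched with:

```shell
# fio fills in the defaults visible in the output below:
# bs=4K, ioengine=sync, iodepth=1.
fio write.fio
```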

Result:

write: (g=0): rw=write, bs=4K-4K/4K-4K, ioengine=sync, iodepth=1
fio 1.59
Starting 1 process
write: Laying out IO file(s) (1 file(s) / 10240MB)
Jobs: 1 (f=1): [W] [100.0% done] [0K/30515K /s] [0 /7450  iops] [eta 00m:00s]
write: (groupid=0, jobs=1): err= 0: pid=5916
  write: io=10240MB, bw=29835KB/s, iops=7458 , runt=351453msec
    clat (usec): min=1 , max=23754 , avg=133.07, stdev=1141.02
     lat (usec): min=1 , max=23754 , avg=133.22, stdev=1141.02
    bw (KB/s) : min=16182, max=1137015, per=100.07%, avg=29856.62, stdev=42517.39
  cpu          : usr=0.91%, sys=2.84%, ctx=37097, majf=0, minf=25
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w/d: total=0/2621440/0, short=0/0/0
     lat (usec): 2=5.48%, 4=61.88%, 10=26.78%, 20=3.70%, 50=0.74%
     lat (usec): 100=0.01%, 250=0.01%, 500=0.01%
     lat (msec): 4=0.01%, 10=1.06%, 20=0.33%, 50=0.01%

Run status group 0 (all jobs):
  WRITE: io=10240MB, aggrb=29835KB/s, minb=30551KB/s, maxb=30551KB/s, mint=351453msec, maxt=351453msec

Disk stats (read/write):
  drbd0: ios=0/79980, merge=0/0, ticks=0/908339108, in_queue=923996212, util=100.00%

The read config:

[read]
rw=read
size=10g
directory=/var/data

Result:

read: (g=0): rw=read, bs=4K-4K/4K-4K, ioengine=sync, iodepth=1
fio 1.59
Starting 1 process
read: Laying out IO file(s) (1 file(s) / 10240MB)
Jobs: 1 (f=1): [R] [100.0% done] [124.5M/0K /s] [31.2K/0  iops] [eta 00m:00s]
read: (groupid=0, jobs=1): err= 0: pid=5947
  read : io=10240MB, bw=128147KB/s, iops=32036 , runt= 81826msec
    clat (usec): min=0 , max=43380 , avg=30.91, stdev=257.71
     lat (usec): min=0 , max=43380 , avg=30.95, stdev=257.71
    bw (KB/s) : min=107305, max=137216, per=100.08%, avg=128247.10, stdev=7121.54
  cpu          : usr=2.42%, sys=7.00%, ctx=42004, majf=0, minf=26
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w/d: total=2621440/0/0, short=0/0/0
     lat (usec): 2=97.15%, 4=1.27%, 10=0.01%, 20=0.01%, 50=0.01%
     lat (usec): 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%
     lat (msec): 2=1.24%, 4=0.31%, 10=0.01%, 20=0.01%, 50=0.01%

Run status group 0 (all jobs):
   READ: io=10240MB, aggrb=128147KB/s, minb=131222KB/s, maxb=131222KB/s, mint=81826msec, maxt=81826msec

Disk stats (read/write):
  drbd0: ios=122658/13, merge=0/0, ticks=465096/33228, in_queue=523312, util=100.00%

The random-read config:

[random-read]
rw=randread
size=128m
directory=/var/data

Result:

random-read: (g=0): rw=randread, bs=4K-4K/4K-4K, ioengine=sync, iodepth=1
fio 1.59
Starting 1 process
Jobs: 1 (f=1): [r] [98.2% done] [2088K/0K /s] [510 /0  iops] [eta 00m:03s]
random-read: (groupid=0, jobs=1): err= 0: pid=5895
  read : io=131072KB, bw=819620 B/s, iops=200 , runt=163756msec
    clat (usec): min=117 , max=24780 , avg=4992.64, stdev=3128.51
     lat (usec): min=118 , max=24780 , avg=4992.91, stdev=3128.51
    bw (KB/s) : min=  620, max= 3424, per=99.94%, avg=799.49, stdev=231.33
  cpu          : usr=0.13%, sys=1.72%, ctx=33290, majf=0, minf=26
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w/d: total=32768/0/0, short=0/0/0
     lat (usec): 250=2.02%, 500=14.71%, 750=0.46%, 1000=0.09%
     lat (msec): 2=2.76%, 4=19.28%, 10=58.94%, 20=1.68%, 50=0.07%

Run status group 0 (all jobs):
   READ: io=131072KB, aggrb=800KB/s, minb=819KB/s, maxb=819KB/s, mint=163756msec, maxt=163756msec

Disk stats (read/write):
  drbd0: ios=32754/0, merge=0/0, ticks=161896/0, in_queue=163844, util=100.00%

This entry was posted in the categories ISCSI, Clusters, and Software RAID.