iSCSI LIO is a good fit for a Hyper-V / Windows Server cluster. Part 1

This is a very relevant topic, and there are practically no working examples of it on the Russian-language web, so this will come in several parts.
First, the configuration without comments:

apt-get install lio-utils
/etc/init.d/target restart
/etc/init.d/target status
tcm_node --block iblock_0/hdd_sdb /dev/sdb
lio_node --addtpg iqn.2012-09.local.dim:ubuntulio 1
lio_node --addlun iqn.2012-09.local.dim:ubuntulio 1 0 lun_my_sdb iblock_0/hdd_sdb
lio_node --addnp iqn.2012-09.local.dim:ubuntulio 1 192.168.153.130:3260
lio_node --demomode iqn.2012-09.local.dim:ubuntulio 1
lio_node --disableauth iqn.2012-09.local.dim:ubuntulio 1
echo 0 > /sys/kernel/config/target/iscsi/iqn.2012-09.local.dim:ubuntulio/tpgt_1/attrib/demo_mode_write_protect
lio_node --enabletpg iqn.2012-09.local.dim:ubuntulio 1
echo yes |tcm_dump --o
/etc/init.d/target restart


And here is a ready-made script:

#!/bin/sh

# Define variables

### Portal address
IP="10.0.0.136:3260"

#### Physical device
HDD="/dev/sdb"
# Virtual backstore device (all work goes through it)
HDD_DEV="iblock_0/hdd_sdb"

#### Target name
TGT_NAME="iqn.2013-05.local.dim:ubuntulio"
# Target (TPG) index
TGT_IND="1"
# For brevity, since <target name> <index> is used everywhere below
TGT=$TGT_NAME" "$TGT_IND

#### LUN name
LUN_NAME="lun_my_sdb"
# LUN index
LUN_IND="0"
# For brevity, since <index> <LUN name> is used below
LUN=$LUN_IND" "$LUN_NAME


# Attach the real device to the virtual backstore
tcm_node --block       $HDD_DEV $HDD

# Add the target (TPG)
lio_node --addtpg      $TGT
# Add the LUN
lio_node --addlun      $TGT $LUN $HDD_DEV
# Set the portal address
lio_node --addnp       $TGT $IP
# Demo mode: any initiator may connect
lio_node --demomode    $TGT
# Disable login/password (CHAP) authentication
lio_node --disableauth $TGT

# Enable writes (demo mode is write-protected by default)
echo 0    > /sys/kernel/config/target/iscsi/$TGT_NAME/tpgt_$TGT_IND/attrib/demo_mode_write_protect
# Speed tuning
echo 64   > /sys/kernel/config/target/core/$HDD_DEV/attrib/queue_depth
echo 1024 > /sys/kernel/config/target/core/$HDD_DEV/attrib/optimal_sectors
echo 1    > /sys/kernel/config/target/core/$HDD_DEV/attrib/emulate_write_cache

# Enable the target portal group
lio_node --enabletpg $TGT

# Save the configuration
#echo yes |tcm_dump --o
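To use the script, save it under any name, for example lio-setup.sh (the name here is arbitrary), run it as root, and uncomment the last line if the configuration should be saved straight away:

chmod +x lio-setup.sh
./lio-setup.sh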

Install the package:

apt-get install lio-utils

Restart the service and check its status:

/etc/init.d/target restart
/etc/init.d/target status
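If the service refuses to start, it is worth checking that the LIO kernel modules are loaded and that configfs is mounted; a quick sanity check (module names assumed to be the standard target_core_mod / iscsi_target_mod):

lsmod | grep -E 'target_core_mod|iscsi_target_mod'
mount | grep configfs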

Create a target named "iqn.2012-09.local.dim:ubuntulio" with TPG number 1:

lio_node --addtpg iqn.2012-09.local.dim:ubuntulio 1
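The new target shows up in configfs right away; for example, the TPG directory should appear under the target IQN:

ls /sys/kernel/config/target/iscsi/iqn.2012-09.local.dim:ubuntulio/
# a tpgt_1 directory is expected here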

Attach the block devices to the virtual HBA iblock_0:

##### tcm_node --block <HBA(host bus adapter)>/<StorageObjectName> <PathName>/<BlockDeviceName> 
tcm_node --block iblock_0/hdd_sdb /dev/sdb
tcm_node --block iblock_0/hdd_sdc /dev/sdc
##### to detach, use: # tcm_node --freedev iblock_0/hdd_sdb

Add the disks (LUNs) to our target:

##### lio_node --addlun <TargetIQN(iSCSI Qualified Name)> <TPG(Target Portal Group )#> <LUN#> <LUNName> <HBA>/<StorageObjectName>
lio_node --addlun iqn.2012-09.local.dim:ubuntulio 1 0 lun_my_sdb iblock_0/hdd_sdb
lio_node --addlun iqn.2012-09.local.dim:ubuntulio 1 1 lun_my_sdc iblock_0/hdd_sdc
##### to remove a LUN, use: lio_node --dellun iqn.2012-09.local.dim:ubuntulio 1 0

Assign an IP address (network portal) to the target:

lio_node --addnp iqn.2012-09.local.dim:ubuntulio 1 192.168.153.130:3260
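After adding the portal, the target should be listening on TCP port 3260, which is easy to confirm:

netstat -ltn | grep 3260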

To keep the setup simple, allow all initiators to connect (demo mode):

lio_node --demomode iqn.2012-09.local.dim:ubuntulio 1

Disable login/password (CHAP) authentication:

lio_node --disableauth iqn.2012-09.local.dim:ubuntulio 1

Allow writes, because with demo mode enabled writes are blocked by default:

echo 0 > /sys/kernel/config/target/iscsi/iqn.2012-09.local.dim:ubuntulio/tpgt_1/attrib/demo_mode_write_protect
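The attribute can be read back to make sure write protection is really off (0 means writes are allowed):

cat /sys/kernel/config/target/iscsi/iqn.2012-09.local.dim:ubuntulio/tpgt_1/attrib/demo_mode_write_protect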

Activate the target portal group:

lio_node --enabletpg iqn.2012-09.local.dim:ubuntulio 1

Save all changes and restart:

echo yes |tcm_dump --o
/etc/init.d/target restart

Check what we ended up with:

lio_node --listendpoints
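The same structure can be inspected directly in configfs; for instance, each LUN directory holds a symlink to its backstore object (the lun_0 path below is an assumption based on LUN index 0):

ls -l /sys/kernel/config/target/iscsi/iqn.2012-09.local.dim:ubuntulio/tpgt_1/lun/lun_0/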

Tuning is mandatory, because otherwise the write speed can be seriously disappointing. It is done in /etc/target/tcm_start.sh; note that the backstore path must match the one created above (hdd_sdb in this example, repeat for hdd_sdc):

echo 64 > /sys/kernel/config/target/core/iblock_0/hdd_sdb/attrib/queue_depth
echo 1024 > /sys/kernel/config/target/core/iblock_0/hdd_sdb/attrib/optimal_sectors
echo 1 > /sys/kernel/config/target/core/iblock_0/hdd_sdb/attrib/emulate_write_cache
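After a restart the values can be read back to confirm they took effect; a small sketch (adjust the backstore path to yours):

for attr in queue_depth optimal_sectors emulate_write_cache; do
    printf '%s: ' "$attr"
    cat /sys/kernel/config/target/core/iblock_0/hdd_sdb/attrib/$attr
done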

Connecting the disk on the second node

1) Install the software

aptitude install open-iscsi

2) Edit the file:
/etc/iscsi/iscsid.conf

node.startup = automatic

3) Connect the disk:

iscsiadm -m discovery -t st -p 192.168.153.130
/etc/init.d/open-iscsi restart
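Once open-iscsi is restarted, it is worth confirming that the session is actually established; the usual checks look like this (the explicit login line uses the target IQN and portal from this example):

iscsiadm -m session
# or log in to a specific target manually:
iscsiadm -m node -T iqn.2012-09.local.dim:ubuntulio -p 192.168.153.130 --login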

Format as XFS and mount:

mkfs.xfs /dev/sdc -f
mount /dev/sdc /var/data/
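To make the mount survive a reboot, a hedged sketch of an /etc/fstab entry (better to reference the filesystem by UUID from blkid, since /dev/sdX names of iSCSI disks are not stable; _netdev delays the mount until the network is up):

# /etc/fstab
UUID=<uuid-from-blkid>  /var/data  xfs  _netdev,noatime  0  0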

Tests


Throughput on the receiving (initiator) side

Filesystem: XFS
Write:

dd if=/dev/zero of=/var/data/test bs=512K count=20000
10485760000 bytes (10 GB) copied, 122.334 s, 85.7 MB/s
write: (g=0): rw=write, bs=4K-4K/4K-4K, ioengine=sync, iodepth=1
fio 1.59
Starting 1 process
write: Laying out IO file(s) (1 file(s) / 10240MB)
Jobs: 1 (f=1): [W] [100.0% done] [0K/82165K /s] [0 /20.6K iops] [eta 00m:00s]
write: (groupid=0, jobs=1): err= 0: pid=3892
  write: io=10240MB, bw=80398KB/s, iops=20099 , runt=130423msec
    clat (usec): min=1 , max=72052 , avg=49.16, stdev=512.55
     lat (usec): min=1 , max=72052 , avg=49.25, stdev=512.56
    bw (KB/s) : min= 7992, max=1916544, per=100.07%, avg=80457.33, stdev=134495.87
  cpu          : usr=1.53%, sys=4.70%, ctx=29654, majf=0, minf=25
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w/d: total=0/2621440/0, short=0/0/0
     lat (usec): 2=22.17%, 4=65.40%, 10=10.42%, 20=0.83%, 50=0.07%
     lat (usec): 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
     lat (msec): 2=0.01%, 4=1.08%, 10=0.01%, 20=0.02%, 50=0.01%
     lat (msec): 100=0.01%

Run status group 0 (all jobs):
  WRITE: io=10240MB, aggrb=80398KB/s, minb=82327KB/s, maxb=82327KB/s, mint=130423msec, maxt=130423msec

Disk stats (read/write):
  sdc: ios=0/18360, merge=0/4, ticks=0/18967364, in_queue=19045500, util=99.66%

Read:

dd of=/dev/zero if=/var/data/test bs=512K count=20000
10485760000 bytes (10 GB) copied, 82.7898 s, 127 MB/s
read: (g=0): rw=read, bs=4K-4K/4K-4K, ioengine=sync, iodepth=1
fio 1.59
Starting 1 process
read: Laying out IO file(s) (1 file(s) / 5120MB)
Jobs: 1 (f=1): [R] [100.0% done] [95158K/0K /s] [23.3K/0  iops] [eta 00m:00s]
read: (groupid=0, jobs=1): err= 0: pid=3917
  read : io=5120.0MB, bw=92307KB/s, iops=23076 , runt= 56798msec
    clat (usec): min=0 , max=38927 , avg=42.82, stdev=337.46
     lat (usec): min=0 , max=38927 , avg=42.89, stdev=337.46
    bw (KB/s) : min=82944, max=93168, per=100.05%, avg=92350.48, stdev=1455.37
  cpu          : usr=1.46%, sys=5.05%, ctx=21304, majf=0, minf=26
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w/d: total=1310720/0/0, short=0/0/0
     lat (usec): 2=95.78%, 4=1.36%, 10=0.01%, 20=1.28%, 50=0.01%
     lat (usec): 100=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
     lat (msec): 4=1.56%, 10=0.01%, 20=0.01%, 50=0.01%

Run status group 0 (all jobs):
   READ: io=5120.0MB, aggrb=92307KB/s, minb=94522KB/s, maxb=94522KB/s, mint=56798msec, maxt=56798msec

Disk stats (read/write):
  sdc: ios=20458/5, merge=4/0, ticks=109284/2596, in_queue=111868, util=99.89%

multipath-tools

Filesystem: XFS
The multipath.conf is as follows:

defaults
{
        path_selector        "round-robin 0"
        path_grouping_policy multibus
        rr_min_io            200
        no_path_retry        5
        failback             immediate
        user_friendly_names  yes
}

blacklist
{
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^hd[a-z][[0-9]*]"
        devnode "^sda$"
        devnode "^sdb$"
}

multipaths
{
        multipath
        {
                wwid 1IET_00010001
        }
}
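The wwid for the multipaths section can be obtained from the device itself, and multipathd can be told to rebuild its maps after the config is edited (the scsi_id path and flags below are the usual ones on Ubuntu of that period, so treat them as an assumption). After that, the map should look like the multipath -ll output below:

/lib/udev/scsi_id -g -u -d /dev/sdc   # print the wwid of the iSCSI disk
multipath -r                          # reload the multipath maps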
multipath -ll
mpath6 (360014052f56151466c442dba812b3daf) dm-0 LIO-ORG,IBLOCK
size=466G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 10:0:0:0 sdc 8:32 active ready running
  `- 11:0:0:0 sdd 8:48 active ready running

Write:

Read:

dd of=/dev/zero if=/var/data/test bs=512K count=20000
10485760000 bytes (10 GB) copied, 77.0579 s, 136 MB/s
read: (g=0): rw=read, bs=4K-4K/4K-4K, ioengine=sync, iodepth=1
fio 1.59
Starting 1 process
read: Laying out IO file(s) (1 file(s) / 5120MB)
Jobs: 1 (f=1): [R] [100.0% done] [104.5M/0K /s] [26.2K/0  iops] [eta 00m:00s]
read: (groupid=0, jobs=1): err= 0: pid=3759
  read : io=5120.0MB, bw=103298KB/s, iops=25824 , runt= 50755msec
    clat (usec): min=0 , max=33254 , avg=38.25, stdev=304.52
     lat (usec): min=0 , max=33254 , avg=38.32, stdev=304.52
    bw (KB/s) : min=75752, max=116270, per=100.12%, avg=103419.59, stdev=8151.59
  cpu          : usr=1.61%, sys=5.22%, ctx=21158, majf=0, minf=26
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w/d: total=1310720/0/0, short=0/0/0
     lat (usec): 2=96.58%, 4=1.15%, 10=0.03%, 20=0.67%, 50=0.01%
     lat (usec): 100=0.01%, 500=0.01%, 1000=0.01%
     lat (msec): 2=0.35%, 4=1.21%, 10=0.01%, 20=0.01%, 50=0.01%

Run status group 0 (all jobs):
   READ: io=5120.0MB, aggrb=103297KB/s, minb=105776KB/s, maxb=105776KB/s, mint=50755msec, maxt=50755msec

Disk stats (read/write):
  dm-0: ios=20457/4, merge=3/0, ticks=97528/88940, in_queue=192472, util=99.89%, aggrios=10243/2, aggrmerge=0/0, aggrticks=48782/32, aggrin_queue=48812, aggrutil=56.21%
    sdc: ios=10268/3, merge=0/0, ticks=54804/48, in_queue=54848, util=56.21%
    sdd: ios=10219/2, merge=0/0, ticks=42760/16, in_queue=42776, util=44.14%

Bonding

auto eth0
iface eth0 inet manual
        bond-master bond0

auto eth1
iface eth1 inet manual
        bond-master bond0

auto bond0
iface bond0 inet static
        address 10.10.10.6
        netmask 255.255.255.0
        bond-mode 0
        bond-miimon 100
        bond-slaves none
        mtu 7000
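This is an /etc/network/interfaces stanza; bond-mode 0 is balance-rr, i.e. packets are distributed round-robin over both slaves. The state of the bond can be checked via procfs:

cat /proc/net/bonding/bond0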

Network throughput:

iperf -c slave  -t 180 -f Mbytes -i 20
------------------------------------------------------------
Client connecting to slave, TCP port 5001
TCP window size: 0.08 MByte (default)
------------------------------------------------------------
[  3] local 10.10.10.6 port 48805 connected with 10.10.10.123 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-20.0 sec  3541 MBytes   177 MBytes/sec
[  3] 20.0-40.0 sec  3541 MBytes   177 MBytes/sec
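For reference, the other node (slave) simply runs iperf in server mode:

iperf -s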

Write:

dd if=/dev/zero of=/var/data/test bs=512K count=20000
10485760000 bytes (10 GB) copied, 70.0693 s, 150 MB/s
write: (g=0): rw=write, bs=4K-4K/4K-4K, ioengine=sync, iodepth=1
fio 2.0.8
Starting 1 process
write: Laying out IO file(s) (1 file(s) / 10240MB)
Jobs: 1 (f=1): [W] [100.0% done] [0K/126.6M /s] [0 /32.4K iops] [eta 00m:00s]
write: (groupid=0, jobs=1): err= 0: pid=3686
  write: io=10240MB, bw=142685KB/s, iops=35671 , runt= 73489msec
    clat (usec): min=1 , max=11141 , avg=27.12, stdev=407.53
     lat (usec): min=1 , max=11141 , avg=27.27, stdev=407.53
    clat percentiles (usec):
     |  1.00th=[    1],  5.00th=[    1], 10.00th=[    2], 20.00th=[    2],
     | 30.00th=[    2], 40.00th=[    3], 50.00th=[    3], 60.00th=[    3],
     | 70.00th=[    3], 80.00th=[    3], 90.00th=[    4], 95.00th=[    5],
     | 99.00th=[   20], 99.50th=[   35], 99.90th=[ 7200], 99.95th=[ 7200],
     | 99.99th=[ 7520]
    bw (KB/s)  : min=111332, max=1833352, per=100.00%, avg=142869.16, stdev=147359.98
    lat (usec) : 2=8.42%, 4=75.63%, 10=12.43%, 20=2.49%, 50=0.64%
    lat (usec) : 100=0.04%, 250=0.01%
    lat (msec) : 4=0.01%, 10=0.34%, 20=0.01%
  cpu          : usr=3.96%, sys=10.79%, ctx=9802, majf=0, minf=23
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=2621440/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
  WRITE: io=10240MB, aggrb=142684KB/s, minb=142684KB/s, maxb=142684KB/s, mint=73489msec, maxt=73489msec

Disk stats (read/write):
  sdc: ios=0/18037, merge=0/0, ticks=0/10412420, in_queue=10455912, util=99.38%

Read:

dd of=/dev/zero if=/var/data/test bs=512K count=20000
10485760000 bytes (10 GB) copied, 62.4676 s, 168 MB/s
read: (g=0): rw=read, bs=4K-4K/4K-4K, ioengine=sync, iodepth=1
fio 1.59
Starting 1 process
read: Laying out IO file(s) (1 file(s) / 5120MB)
Jobs: 1 (f=1): [R] [100.0% done] [126.3M/0K /s] [31.6K/0  iops] [eta 00m:00s]
read: (groupid=0, jobs=1): err= 0: pid=1788
  read : io=5120.0MB, bw=128326KB/s, iops=32081 , runt= 40856msec
    clat (usec): min=0 , max=29552 , avg=30.64, stdev=250.39
     lat (usec): min=0 , max=29552 , avg=30.72, stdev=250.39
    bw (KB/s) : min=111393, max=137453, per=100.09%, avg=128436.58, stdev=6660.82
  cpu          : usr=2.02%, sys=6.70%, ctx=21229, majf=0, minf=26
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued r/w/d: total=1310720/0/0, short=0/0/0
     lat (usec): 2=95.85%, 4=1.34%, 10=0.22%, 20=0.88%, 50=0.13%
     lat (usec): 100=0.02%, 250=0.01%, 500=0.01%, 1000=0.01%
     lat (msec): 2=1.19%, 4=0.37%, 10=0.01%, 20=0.01%, 50=0.01%

Run status group 0 (all jobs):
   READ: io=5120.0MB, aggrb=128325KB/s, minb=131405KB/s, maxb=131405KB/s, mint=40856msec, maxt=40856msec

Disk stats (read/write):
  sdc: ios=20460/3, merge=2/0, ticks=77568/276, in_queue=77816, util=99.82%
