How to configure multipath for NAS storage on Linux

On Linux, when you deal with storage devices, multipathing is a topic you cannot avoid.
Normally, to configure multipathing for ordinary storage on Linux, you only need to install the kernel driver package supplied by the storage vendor and multipathing is taken care of. Recently I came into contact with NAS storage for the first time, an IBM N6240. Working from some reference documents provided by the storage engineer, plus methods found online, it turned out that configuring multipathing for the NAS is not as complicated as I had expected: the multipath software that ships with Linux is enough to do the job.
First, make sure the multipath-related packages that ship with Linux are installed:
device-mapper
device-mapper-event
device-mapper-multipath
This of course requires a yum repository to have been configured beforehand; a quick way to verify the packages is sketched below.
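For example, you can query rpm directly and install anything that is missing from the repository; a minimal sketch using the package names listed above:
# rpm -qa | grep device-mapper
# yum install -y device-mapper device-mapper-event device-mapper-multipath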
We also need to check that the corresponding kernel modules have been loaded. First, check whether the dm_round_robin and dm_multipath modules are present:
[root@localhost etc]# lsmod |grep dm_
dm_round_robin          2717  1
dm_multipath           17649  2 dm_round_robin
dm_mirror              14101  0
dm_region_hash         12170  1 dm_mirror
dm_log                 10122  2 dm_mirror,dm_region_hash
dm_mod                 81692  5 dm_multipath,dm_mirror,dm_log
If they are not loaded, we can also check whether the module file exists on the system:
#locate dm_multipath.ko
Once we have confirmed that the modules are present, load them manually:
#modprobe dm_multipath
#modprobe dm_round_robin
Then we need to start the multipathd service:
#/etc/init.d/multipathd start
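To make sure multipathd also comes back after a reboot, it can be enabled for the default runlevels; a small sketch using the standard RHEL 6 service tools:
# chkconfig multipathd on
# chkconfig --list multipathd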
That completes the preparation for multipath. Let me first describe my environment. One LUN has already been carved out of the NAS storage and mapped to the server, and the Linux server sees 4 block devices for it. With this setup the number of block devices visible in the system is 4 times the number of LUNs presented: 1 LUN shows up as 4 block devices, and 4 LUNs would show up as 16. The goal of the multipath configuration is to end up with a single device-mapper device for each LUN.
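In my environment the four paths show up as sdb through sde; a quick, rough way to confirm which block devices the kernel currently sees is to look at /proc/partitions:
# cat /proc/partitions | grep sd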
Next we can use the sanlun command to check whether the system correctly recognizes the LUN and its storage information. This requires installing the host utilities package:
ibm_linux_host_utilities-6-1.x86_64.rpm
[root@localhost etc]# sanlun lun show
controller(7mode)/                          device      host     lun
vserver(Cmode)   lun-pathname               filename    adapter  protocol  size     mode
--------------------------------------------------------------------------------------------------
N6240A           /vol/vol_A_SVR/lun_xxx     /dev/sde    host8    FCP       204.8g   7
N6240A           /vol/vol_A_SVR/lun_xxx     /dev/sdd    host8    FCP       204.8g   7
N6240A           /vol/vol_A_SVR/lun_xxx     /dev/sdc    host7    FCP       204.8g   7
N6240A           /vol/vol_A_SVR/lun_xxx     /dev/sdb    host7    FCP       204.8g   7
Next we need to look up the WWID of the existing storage LUN. In my case this is an HA cluster and both servers are mapped to the same LUN, so the WWID found on the two servers is identical. When looking up the WWID, take care to tell the local disk apart, because it also reports a WWID.
Run the following command to find the uid of each block device. Since my sda is the local disk and sdb through sde are all paths to the same single LUN, the WWID I need here is clearly "360a98000443139694f2b33616a416a4e".
[root@localhost etc]# multipath -v3
=======================================================
(some output omitted ...)
sda: uid = 36005076043c0813818bf8290071abf46 (callout)
sdb: uid = 360a98000443139694f2b33616a416a4e (callout)
sdc: uid = 360a98000443139694f2b33616a416a4e (callout)
sdd: uid = 360a98000443139694f2b33616a416a4e (callout)
sde: uid = 360a98000443139694f2b33616a416a4e (callout)
(some output omitted ...)
===== paths list =====
uuid                              hcil    dev dev_t pri dm_st chk_st vend/prod
36005076043c0813818bf8290071abf46 0:2:0:0 sda 8:0   1   undef ready  IBM,Serve
360a98000443139694f2b33616a416a4e 7:0:0:0 sdb 8:16  4   undef ready  NETAPP,LU
360a98000443139694f2b33616a416a4e 7:0:1:0 sdc 8:32  4   undef ready  NETAPP,LU
360a98000443139694f2b33616a416a4e 8:0:0:0 sdd 8:48  1   undef ready  NETAPP,LU
360a98000443139694f2b33616a416a4e 8:0:1:0 sde 8:64  1   undef ready  NETAPP,LU
(some output omitted ...)
====================================================
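As an alternative cross-check, the WWID of a single path can also be read with the same scsi_id callout that multipath uses (a sketch, assuming the RHEL 6 udev path referenced later in the configuration file); it should print the same 360a98000443139694f2b33616a416a4e value:
# /lib/udev/scsi_id --whitelisted --device=/dev/sdb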
Now we can check the wwids file to confirm the WWID matches:
[root@localhost etc]# cat multipath/wwids
# Multipath wwids, Version : 1.0
# NOTE: This file is automatically maintained by multipath and multipathd.
# You should not need to edit this file in normal circumstances.
#
# Valid WWIDs:
/360a98000443139694f2b33616a416a4e/
For the multipath software to work properly, we now set things up in the multipath configuration file.
The configuration file lives at /etc/multipath.conf.
In my case this file did not exist yet; a template can be taken from the sample configuration shipped with the package, /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf.defaults, and then adjusted to the actual environment, including the WWID.
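In my case I simply copied the shipped sample into place and then edited it; a sketch, assuming the device-mapper-multipath-0.4.9 documentation path shown above:
# cp /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf.defaults /etc/multipath.conf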
Below is my actual configuration file:
# cat /etc/multipath.conf
# This is a basic configuration file with some examples, for device mapper
# multipath.
# For a complete list of the default configuration values, see
# /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf.defaults
# For a list of configuration options with descriptions, see
# /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf.annotated
#
# REMEMBER: After updating multipath.conf, you must run
#
# service multipathd reload
#
# for the changes to take effect in multipathd
## By default, devices with vendor = "IBM" and product = "S/390.*" are
## blacklisted. To enable multipathing on these devices, uncomment the
## following lines.
#blacklist_exceptions {
#       device {
#               vendor  "IBM"
#               product "S/390.*"
#       }
#}
## Use user friendly names, instead of using WWIDs as names.
defaults {
        user_friendly_names     yes
        path_grouping_policy    multibus        # default path grouping policy
        failback                immediate       # failback behaviour, here: fail back immediately
        no_path_retry           fail
}
##
## Here is an example of how to configure some standard options.
##
#
#defaults {
#       udev_dir                /dev
#       polling_interval        10
#       selector                "round-robin 0"
#       path_grouping_policy    multibus
#       getuid_callout          "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
#       prio                    alua
#       path_checker            readsector0
#       rr_min_io               100
#       max_fds                 8192
#       rr_weight               priorities
#       failback                immediate
#       no_path_retry           fail
#       user_friendly_names     yes
#}
##
## The wwid line in the following blacklist section is shown as an example
## of how to blacklist devices by wwid. The 2 devnode lines are the
## compiled in default blacklist. If you want to blacklist entire types
## of devices, such as all scsi devices, you should use a devnode line.
## However, if you want to blacklist specific devices, you should use
## a wwid line. Since there is no guarantee that a specific device will
## not change names on reboot (from /dev/sda to /dev/sdb for example)
## devnode lines are not recommended for blacklisting specific devices.
##
blacklist {
#       wwid 360a98000443139694f2b33616a416a4e
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^hd[a-z]"
}
multipaths {
        multipath {
                wwid                    360a98000443139694f2b33616a416a4e
                alias                   fc-dm0          # name of the multipath mapped device, user defined
                path_grouping_policy    multibus
                path_checker            tur             # method used to determine the path state
                path_selector           "round-robin 0" # algorithm used to select the path for the next I/O
#               failback                manual
#               rr_weight               priorities
#               no_path_retry           5
        }
#       multipath {
#               wwid                    1DEC_____321816758474
#               alias                   red
#       }
}
devices {
        device {
                vendor                  "IBM"
                product                 "N6240"
                path_grouping_policy    multibus
                getuid_callout          "/lib/udev/scsi_id --whitelisted --device=/dev/%n"     # callout used to obtain the unique device id
                path_checker            readsector0
                path_selector           "round-robin 0"
#               hardware_handler        "0"
                failback                immediate
#               rr_weight               priorities
#               no_path_retry           queue
        }
#       device {
#               vendor                  "COMPAQ  "
#               product                 "MSA1000         "
#               path_grouping_policy    multibus
#       }
}
OK, that is my configuration file. It is a fairly simple configuration, but it can be used as-is in practice.
Next, restart the multipathd service (see the command sketched below) and then check the multipath configuration with the following command.
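On this system the restart is just the same init script used earlier; a minimal sketch:
# /etc/init.d/multipathd restart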
[root@localhost etc]# multipath -ll
fc-dm0 (360a98000443139694f2b33616a416a4e) dm-0 NETAPP,LUN
size=205G features='2 pg_init_retries 50' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=2 status=active
  |- 7:0:0:0 sdb 8:16 active ready running
  |- 7:0:1:0 sdc 8:32 active ready running
  |- 8:0:0:0 sdd 8:48 active ready running
  `- 8:0:1:0 sde 8:64 active ready running
Here we can see that the 4 corresponding block devices are all in the active/running state, and that the policy is round-robin load balancing.
We can also use fdisk -l at this point to check the state of the multipath configuration.
[root@localhost etc]# fdisk -l
WARNING: GPT (GUID Partition Table) detected on '/dev/sda'!
The util fdisk doesn't support GPT. Use GNU Parted.
Disk /dev/sda: 897.0 GB, 896998047744 bytes
255 heads, 63 sectors/track, 109053 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1      109054   875974655+  ee  GPT
Disk /dev/sdb: 219.9 GB, 219936718848 bytes
255 heads, 63 sectors/track, 26739 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 65536 bytes
Disk identifier: 0x15af0270
Disk /dev/sdc: 219.9 GB, 219936718848 bytes
255 heads, 63 sectors/track, 26739 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 65536 bytes
Disk identifier: 0x15af0270
Disk /dev/sdd: 219.9 GB, 219936718848 bytes
255 heads, 63 sectors/track, 26739 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 65536 bytes
Disk identifier: 0x15af0270
Disk /dev/sde: 219.9 GB, 219936718848 bytes
255 heads, 63 sectors/track, 26739 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 65536 bytes
Disk identifier: 0x15af0270
Disk /dev/mapper/fc-dm0: 219.9 GB, 219936718848 bytes
255 heads, 63 sectors/track, 26739 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 65536 bytes
Disk identifier: 0x15af0270
Here we can see the multipath device name fc-dm0 that we configured, which means the setup succeeded. The mapped device shows up in the device-mapper directory /dev/mapper/.
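You can list the mapped devices there directly to confirm; a quick sketch:
# ls -l /dev/mapper/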
Of course, we can also run a small test to verify that the multipath mapping we configured really works.
First we need to partition and format the multipath device; before partitioning it is a good idea to run pvcreate:
# pvcreate /dev/mapper/fc-dm0
# fdisk /dev/mapper/fc-dm0
After creating a primary partition, the /dev/mapper/fc-dm0p1 device appears.
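If the fc-dm0p1 mapping does not appear right away, re-reading the partition table on the multipath device usually makes it show up; a sketch using kpartx (assumption: kpartx is installed, as it normally ships alongside the multipath packages):
# kpartx -a /dev/mapper/fc-dm0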
Then format it and mount it:
#mkfs.ext4 /dev/mapper/fc-dm0p1
#mount /dev/mapper/fc-dm0p1 /mnt
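A quick check that the filesystem is mounted where we expect; a sketch:
# df -h /mnt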
Next use dd to create a test file:
# dd if=/dev/zero of=/mnt/testfile bs=1G count=20
Then we can monitor the disk I/O with the sar command:
#sar -d 1 100
Finally, you can also pull one of the paths to test how multipath behaves when a path fails.
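While the path is pulled, the path states can be watched from another terminal; a rough sketch (multipathd normally reports path events to syslog as well):
# watch -n 1 'multipath -ll'
# tail -f /var/log/messages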