
RAID and LVM: Powerful Features and How to Use Them


I. Advanced Filesystem Management



1) Setting filesystem quotas

2) Setting up and managing software RAID devices

3) Configuring logical volumes

4) Creating LVM snapshots

5) The btrfs filesystem


II. Configuring Disk Quotas


The demonstration walks through the following steps:

1. Partition the disk and migrate /home: move everything under /home onto /dev/sdc1 (a fuller sketch, including the mkfs step, follows these commands)

fdisk /dev/sdc

mount /dev/sdc1 /mnt/test

mv /home/* /mnt/test

mount /dev/sdc1 /home
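
The commands above assume /dev/sdc1 already exists and already carries a filesystem. A minimal sketch of the whole migration with those steps spelled out (same device names as the demo; the partprobe and mkfs.ext4 lines are additions not shown in the original list):

fdisk /dev/sdc                       # create the /dev/sdc1 partition interactively (n, p, 1, ..., w)
partprobe /dev/sdc                   # make the kernel re-read the partition table
mkfs.ext4 /dev/sdc1                  # the filesystem that the list above takes for granted
mkdir -p /mnt/test
mount /dev/sdc1 /mnt/test
mv /home/* /mnt/test                 # or cp -a /home/. /mnt/test/ to keep the originals until verified
mount /dev/sdc1 /home                # from here on /home lives on /dev/sdc1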


2. Enable the quota mount options in /etc/fstab

/dev/sdc1 /mnt/test ext4 defaults 0 0

/dev/sdc1 /home ext4 usrquota,grpquota 0 0


3. Create the quota database. If quotacheck reports an error, temporarily disable SELinux and remount with the quota options (mount -o remount,usrquota,grpquota /home)

setenforce 0 # temporarily put SELinux in permissive mode

getenforce # check the SELinux status

quotacheck -cug /home # create the user and group quota database files


4. Turn quotas on

quotaon -p /home # check whether quotas are currently enabled

quotaon /home # turn quotas on

repquota /home # report quota usage and limits for the users under /home


5. Configure the quota limits

edquota alren # interactively edit the quota limits for user alren

setquota alren 100000 150000 0 0 /home # or set them non-interactively
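
For reference, setquota takes the limits positionally: block soft limit, block hard limit, inode soft limit, inode hard limit, then the filesystem. A hedged sketch (the 200/220 inode numbers are made-up values purely for illustration):

setquota alren 100000 150000 0 0 /home      # 100000/150000 1K-blocks soft/hard, no inode limits (as above)
setquota alren 100000 150000 200 220 /home  # hypothetical variant that also caps the number of files
edquota -t                                  # optionally adjust the default 7-day block/inode grace periods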


6. Test

dd if=/dev/zero of=/home/alren/testfile bs=1M count=100

dd if=/dev/zero of=/home/alren/testfile bs=1M count=160
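
Why these two sizes: the limits above are expressed in 1K blocks, so the soft limit is 100000 KiB (about 98 MiB) and the hard limit is 150000 KiB (about 146 MiB). The 100 MB write therefore only trips the soft-limit warning, while the 160 MB write is cut off once the hard limit is reached. A quick way to confirm the state afterwards:

quota -s alren      # show alren's usage against the soft/hard limits in human-readable units
repquota -s /home   # the same information for every user on the filesystem (run as root)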


Demonstration:



[root@centos6 ~]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Thu Aug 11 03:07:57 2016
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
/dev/sdb1               swap                    swap    pri=10          0 0
/swapfile               swap                    swap    defaults,pri=100  0 0
/dev/sdc1               /mnt/test               ext4    defaults        0 0
/dev/sdc1               /home                   ext4    usrquota,grpquota 0 0
UUID="39208cf4-4d84-430b-ab53-7a26ad9d786d" /mnt/lv0  ext4  defaults  0 0
UUID=240533cf-b37f-4460-974f-702bab867da5 /                       ext4    defaults        1 1
UUID=4e245c68-a392-4ce9-9a99-5d32d8d43872 /boot                   ext4    defaults        1 2
UUID=86aa7b74-24df-4043-ba83-f3b41a99ce0e /testdir                ext4    defaults        1 2
[root@centos6 home]# mount -o remount,usrquota,grpquota /home
[root@centos6 home]# quotacheck -cug /home
[root@centos6 home]# ls
alren  aquota.group  aquota.user  chen  cheng  chenggg  lost+found
[root@centos6 home]# quotaon -p /home
group quota on /home (/dev/sdc1) is off
user quota on /home (/dev/sdc1) is off
[root@centos6 home]# quotaon /home
[root@centos6 home]# quotaon -p /home
group quota on /home (/dev/sdc1) is on
user quota on /home (/dev/sdc1) is on
[root@centos6 home]# setquota alren 100000 150000 0 0 /home
[root@centos6 ~]# repquota /home
*** Report for user quotas on device /dev/sdc1
Block grace time: 7days; Inode grace time: 7days
                        Block limits                File limits
User            used    soft    hard  grace    used  soft  hard  grace
----------------------------------------------------------------------
root      --   37952       0       0            967     0     0
chen      --   43644       0       0           3198     0     0
chenggg   --      32       0       0              8     0     0
cheng     --      32       0       0              8     0     0
alren     --      32  100000  150000              9     0     0
[root@centos6 ~]#
[root@centos6 home]# edquota alren
[root@centos6 home]# su - alren
[alren@centos6 ~]$ quota alren
Disk quotas for user alren (uid 524):
     Filesystem  blocks   quota   limit   grace   files   quota   limit   grace
      /dev/sdc1      32  100000  150000               8       0       0
[alren@centos6 ~]$ repquota /home
repquota: Cannot open quotafile /home/aquota.user: Permission denied
repquota: Quota file not found or has wrong format.
repquota: Not all specified mountpoints are using quota.
[alren@centos6 ~]$ dd if=/dev/zero of=/home/alren bs=1M count=100
dd: opening `/home/alren': Is a directory
[alren@centos6 ~]$ dd if=/dev/zero of=/home/alren/testfile  bs=1M count=100
sdc1: warning, user block quota exceeded.
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 0.613277 s, 171 MB/s
[alren@centos6 ~]$ dd if=/dev/zero of=/home/alren/testfile  bs=1M count=160
sdc1: warning, user block quota exceeded.
sdc1: write failed, user block limit reached.
dd: writing `/home/alren/testfile': Disk quota exceeded
147+0 records in
146+0 records out
153567232 bytes (154 MB) copied, 0.876813 s, 175 MB/s
[alren@centos6 ~]$





III. Redundant Array of Independent Disks (RAID; this article covers software RAID)


1. Overview

RAID: Redundant Arrays of Inexpensive (Independent) Disks, proposed in 1988 at the University of California, Berkeley in the paper "A Case for Redundant Arrays of Inexpensive Disks". Combining multiple disks into a single "array" to provide better performance, redundancy, or both is what is meant by a redundant array of independent disks.

2. Characteristics

1) Improved I/O performance: faster disk reads and writes

2) Improved durability, achieved through disk redundancy

3) The levels differ in how the member disks are organized to work together


3. RAID levels

RAID-0: data is split into equal-sized chunks and written across the disks in turn (striping)

RAID-1: the same data is written in full to every disk (mirroring)

RAID-5: data is striped across the disks, and parity information rotates across all of the disks

......

RAID-6: data is striped across the disks, and two sets of rotating parity are stored

RAID-10: build RAID-1 mirrors first, then stripe (RAID-0) across them

RAID-01: build RAID-0 stripes first, then mirror (RAID-1) them


4. RAID levels and their characteristics

RAID-0: improved read and write performance

Usable space: N*min(S1,S2,...)

No fault tolerance

Minimum number of disks: 2

RAID-1: improved read performance; write performance somewhat lower

Usable space: 1*min(S1,S2,...)

Fault tolerant

Minimum number of disks: 2, 2N

RAID-5: improved read and write performance

Usable space: (N-1)*min(S1,S2,...)

Fault tolerant: can survive the loss of one disk

Minimum number of disks: 3, 3+

RAID-6: improved read and write performance

Usable space: (N-2)*min(S1,S2,...)

Fault tolerant: can survive the loss of two disks

Minimum number of disks: 4, 4+

RAID-10: improved read and write performance

Usable space: N*min(S1,S2,...)/2

Fault tolerant: at most one disk may fail in each mirror pair

Minimum number of disks: 4, 4+

RAID-01: improved read and write performance

Usable space: N*min(S1,S2,...)/2

Fault tolerant: at most one disk may fail in each mirror set

Minimum number of disks: 4, 4+

Commonly used levels: RAID-0, RAID-1, RAID-5, RAID-10, RAID-50
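
To make the usable-space formulas concrete, a quick back-of-the-envelope check against the RAID-5 demo below, which uses four active members of roughly 10 GB each (N=4, min size 10 GB):

# RAID-5 usable space = (N-1) * min(S1,S2,...)
echo $(( (4 - 1) * 10 ))GB      # prints 30GB, matching the ~29.98 GiB "Array Size" reported by mdadm -D below
# For the same four disks, RAID-10 would give N*min/2 = 4*10/2 = 20GB and RAID-6 would give (4-2)*10 = 20GB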




Basic options for software RAID (mdadm):

mdadm: provides the management interface for software RAID; spare disks can be added for redundancy. RAID devices are named /dev/md0, /dev/md1, /dev/md2, /dev/md3, and so on.

The mdadm command:

Syntax: mdadm [mode] <raiddevice> [options] <component-devices>

Modes:

Create: -C

Assemble: -A

Manage: -f, -r, -a

-C: create mode

-n #: use # block devices to build this RAID array

-l #: specify the RAID level

-a {yes|no}: automatically create the device file for the target RAID device

-c chunk_size: specify the chunk size

-x #: specify the number of spare disks

-D: display detailed information about the array

mdadm -D /dev/md#

Management mode:

-f: mark the specified disk as faulty

-a: add a disk

-r: remove a disk

Watch the md status:

cat /proc/mdstat

Stop an md device:

mdadm -S /dev/md#



Steps to build a software RAID-5 array:


1. Create the disk partitions. This exercise uses 5 disks: 4 as active members and 1 as a hot spare (a scripted variant is sketched below)

fdisk /dev/sdb # on each of /dev/sd{b,c,d,e,f} in turn, create a 10G partition and change its type to fd (Linux raid autodetect)
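
Partitioning five disks by hand in fdisk is tedious; a hedged sketch of scripting the same layout with parted (assuming the disks are blank and an MBR label is acceptable):

for d in /dev/sd{b,c,d,e,f}; do
    parted -s "$d" mklabel msdos              # new MBR partition table
    parted -s "$d" mkpart primary 1MiB 10GiB  # one ~10G primary partition
    parted -s "$d" set 1 raid on              # equivalent of fdisk type fd (Linux raid autodetect)
done
lsblk /dev/sd{b,c,d,e,f}                      # verify the new sd?1 partitions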


2. Create the RAID device

mdadm -C /dev/md0 -a yes -l 5 -n 4 -x1 /dev/sd{b,c,d,e,f}1

mdadm -D /dev/md0

cat /proc/mdstat


3. Create a filesystem on the new md0 device

mkfs.ext4 /dev/md0


4. Configure automatic mounting at boot

vim /etc/fstab

UUID="b92ddd51-c555-4948-b1d5-8563b697a2f1" /mnt/raid ext4 defaults 0 0

5. Generate the configuration file /etc/mdadm.conf

mdadm -Ds /dev/md0 > /etc/mdadm.conf

mdadm -S /dev/md0 # stop the array

mdadm -A /dev/md0 # assemble (start) the array again


6. Test

mdadm /dev/md0 -f /dev/sdf1 # simulate a failed member

mdadm /dev/md0 -r /dev/sdf1 # remove the failed member

mdadm /dev/md0 -a /dev/sdf1 # add it back

mdadm -G /dev/md0 -n 6 -a /dev/sdd4 # grow the array to 6 members, adding /dev/sdd4

mkfs.ext4 /dev/md0 # re-create the filesystem
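
After failing, removing, or adding a member the array rebuilds in the background, so before re-creating the filesystem it is worth waiting for the resync to finish. Two common ways to do that, shown as a hedged sketch:

watch -n 1 cat /proc/mdstat                    # interactively watch the rebuild progress bar
mdadm --wait /dev/md0                          # or block until any resync/recovery on md0 completes
mdadm -D /dev/md0 | grep -E 'State|Rebuild'    # confirm the array is back to "clean"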


7. Tear down the RAID array

umount /mnt/raid

mdadm -S /dev/md0 # stop the array

rm -f /etc/mdadm.conf

vi /etc/fstab # remove the /mnt/raid entry

fdisk /dev/sda

mdadm --zero-superblock /dev/sdd1 # wipe the md metadata from each former member
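
mdadm --zero-superblock has to be run against every former member, not just the one shown; a hedged teardown sketch for the member partitions used in this walkthrough:

umount /mnt/raid
mdadm -S /dev/md0                    # stop the array first
for p in /dev/sd{b,c,d,e,f}1; do
    mdadm --zero-superblock "$p"     # wipe the md metadata so the partition can be reused
done
rm -f /etc/mdadm.conf                # and drop the assemble-time configuration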


Demonstration:


[root@centos7 ~]# mdadm -C /dev/md0 -a yes -l 5 -n 4 -x1 /dev/sd{b,c,d,e,f}1
mdadm: /dev/sdb1 appears to contain an ext2fs file system
       size=5242880K  mtime=Thu Jan  1 08:00:00 1970
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
[root@centos7 ~]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Aug 30 11:28:31 2016
     Raid Level : raid5
     Array Size : 31432704 (29.98 GiB 32.19 GB)
  Used Dev Size : 10477568 (9.99 GiB 10.73 GB)
   Raid Devices : 4
  Total Devices : 5
    Persistence : Superblock is persistent

    Update Time : Tue Aug 30 11:28:42 2016
          State : clean, degraded, recovering
 Active Devices : 3
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 2

         Layout : left-symmetric
     Chunk Size : 512K

 Rebuild Status : 18% complete

           Name : centos7.localdomain:0  (local to host centos7.localdomain)
           UUID : 40fbcb9e:3de8f63f:0ec52e1d:98020537
         Events : 3

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       5       8       65        3      spare rebuilding   /dev/sde1

       4       8       81        -      spare   /dev/sdf1
[root@centos7 ~]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Aug 30 11:28:31 2016
     Raid Level : raid5
     Array Size : 31432704 (29.98 GiB 32.19 GB)
  Used Dev Size : 10477568 (9.99 GiB 10.73 GB)
   Raid Devices : 4
  Total Devices : 5
    Persistence : Superblock is persistent

    Update Time : Tue Aug 30 11:29:42 2016
          State : clean, degraded, recovering
 Active Devices : 3
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 2

         Layout : left-symmetric
     Chunk Size : 512K

 Rebuild Status : 88% complete

           Name : centos7.localdomain:0  (local to host centos7.localdomain)
           UUID : 40fbcb9e:3de8f63f:0ec52e1d:98020537
         Events : 15

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       5       8       65        3      spare rebuilding   /dev/sde1

       4       8       81        -      spare   /dev/sdf1
[root@centos7 ~]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Aug 30 11:28:31 2016
     Raid Level : raid5
     Array Size : 31432704 (29.98 GiB 32.19 GB)
  Used Dev Size : 10477568 (9.99 GiB 10.73 GB)
   Raid Devices : 4
  Total Devices : 5
    Persistence : Superblock is persistent

    Update Time : Tue Aug 30 11:29:52 2016
          State : clean
 Active Devices : 4
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

           Name : centos7.localdomain:0  (local to host centos7.localdomain)
           UUID : 40fbcb9e:3de8f63f:0ec52e1d:98020537
         Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       5       8       65        3      active sync   /dev/sde1

       4       8       81        -      spare   /dev/sdf1
[root@centos7 ~]# mkfs.ext4 /dev/md0
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=384 blocks
1966080 inodes, 7858176 blocks
392908 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2155872256
240 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

[root@centos7 ~]# mdadm -Ds /dev/md0 >/etc/mdadm.conf
[root@centos7 ~]# mdadm -S /dev/md0
mdadm: stopped /dev/md0
[root@centos7 ~]# mdadm -A /dev/md0
mdadm: /dev/md0 has been started with 4 drives and 1 spare.
[root@centos7 ~]# mdadm  -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Aug 30 11:28:31 2016
     Raid Level : raid5
     Array Size : 31432704 (29.98 GiB 32.19 GB)
  Used Dev Size : 10477568 (9.99 GiB 10.73 GB)
   Raid Devices : 4
  Total Devices : 5
    Persistence : Superblock is persistent

    Update Time : Tue Aug 30 11:30:29 2016
          State : clean
 Active Devices : 4
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

           Name : centos7.localdomain:0  (local to host centos7.localdomain)
           UUID : 40fbcb9e:3de8f63f:0ec52e1d:98020537
         Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       5       8       65        3      active sync   /dev/sde1

       4       8       81        -      spare   /dev/sdf1
[root@centos7 ~]# mdadm  /dev/md0 -f /dev/sdb1
mdadm: set /dev/sdb1 faulty in /dev/md0
[root@centos7 ~]# mdadm  -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Aug 30 11:28:31 2016
     Raid Level : raid5
     Array Size : 31432704 (29.98 GiB 32.19 GB)
  Used Dev Size : 10477568 (9.99 GiB 10.73 GB)
   Raid Devices : 4
  Total Devices : 5
    Persistence : Superblock is persistent

    Update Time : Tue Aug 30 11:32:19 2016
          State : clean, degraded, recovering
 Active Devices : 3
Working Devices : 4
 Failed Devices : 1
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

 Rebuild Status : 5% complete

           Name : centos7.localdomain:0  (local to host centos7.localdomain)
           UUID : 40fbcb9e:3de8f63f:0ec52e1d:98020537
         Events : 20

    Number   Major   Minor   RaidDevice State
       4       8       81        0      spare rebuilding   /dev/sdf1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       5       8       65        3      active sync   /dev/sde1

       0       8       17        -      faulty   /dev/sdb1
[root@centos7 ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdb1[0](F) sdf1[4] sde1[5] sdd1[2] sdc1[1]
      31432704 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [_UUU]
      [========>............]  recovery = 43.8% (4592156/10477568) finish=0.7min speed=129724K/sec

unused devices: <none>
[root@centos7 ~]# mdadm  -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Aug 30 11:28:31 2016
     Raid Level : raid5
     Array Size : 31432704 (29.98 GiB 32.19 GB)
  Used Dev Size : 10477568 (9.99 GiB 10.73 GB)
   Raid Devices : 4
  Total Devices : 5
    Persistence : Superblock is persistent

    Update Time : Tue Aug 30 11:33:40 2016
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 1
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : centos7.localdomain:0  (local to host centos7.localdomain)
           UUID : 40fbcb9e:3de8f63f:0ec52e1d:98020537
         Events : 37

    Number   Major   Minor   RaidDevice State
       4       8       81        0      active sync   /dev/sdf1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       5       8       65        3      active sync   /dev/sde1

       0       8       17        -      faulty   /dev/sdb1
[root@centos7 ~]# mdadm  -G /dev/md0 -n 6 -a /dev/sdb2
mdadm: Need 2 spares to avoid degraded array, and only have 1.
       Use --force to over-ride this check.
[root@centos7 ~]# mdadm  -G /dev/md0 -n 6 -a /dev/sdb2 --force
mdadm: added /dev/sdb2
mdadm: Failed to initiate reshape!
unfreeze
[root@centos7 ~]# mkfs.ext4 /dev/md0
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=384 blocks
1966080 inodes, 7858176 blocks
392908 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2155872256
240 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

[root@centos7 ~]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Aug 30 11:28:31 2016
     Raid Level : raid5
     Array Size : 31432704 (29.98 GiB 32.19 GB)
  Used Dev Size : 10477568 (9.99 GiB 10.73 GB)
   Raid Devices : 4
  Total Devices : 6
    Persistence : Superblock is persistent

    Update Time : Tue Aug 30 11:34:55 2016
          State : clean
 Active Devices : 4
Working Devices : 5
 Failed Devices : 1
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

           Name : centos7.localdomain:0  (local to host centos7.localdomain)
           UUID : 40fbcb9e:3de8f63f:0ec52e1d:98020537
         Events : 41

    Number   Major   Minor   RaidDevice State
       4       8       81        0      active sync   /dev/sdf1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       5       8       65        3      active sync   /dev/sde1

       0       8       17        -      faulty   /dev/sdb1
       6       8       18        -      spare   /dev/sdb2
[root@centos7 ~]# mdadm /dev/md0 -r /dev/sdb1
mdadm: hot removed /dev/sdb1 from /dev/md0
[root@centos7 ~]# mdadm /dev/md0 -a /dev/sdb1
mdadm: added /dev/sdb1
[root@centos7 ~]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Aug 30 11:28:31 2016
     Raid Level : raid5
     Array Size : 31432704 (29.98 GiB 32.19 GB)
  Used Dev Size : 10477568 (9.99 GiB 10.73 GB)
   Raid Devices : 4
  Total Devices : 6
    Persistence : Superblock is persistent

    Update Time : Tue Aug 30 11:35:46 2016
          State : clean
 Active Devices : 4
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 2

         Layout : left-symmetric
     Chunk Size : 512K

           Name : centos7.localdomain:0  (local to host centos7.localdomain)
           UUID : 40fbcb9e:3de8f63f:0ec52e1d:98020537
         Events : 43

    Number   Major   Minor   RaidDevice State
       4       8       81        0      active sync   /dev/sdf1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       5       8       65        3      active sync   /dev/sde1

       6       8       18        -      spare   /dev/sdb2
       7       8       17        -      spare   /dev/sdb1
[root@centos7 ~]#



IV. LVM (Logical Volume Manager)

1. Overview

LVM (Logical Volume Manager) provides an abstraction layer that makes volumes easy to manipulate, including resizing filesystems and reorganizing filesystems across multiple physical devices. Block devices are designated as physical volumes (PVs); one or more PVs are combined into a volume group (VG). Each PV is divided into fixed-size physical extents (PEs); logical volumes (LVs) are then carved out of the volume group, and filesystems are created on top of the logical volumes.
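
The whole PV -> VG -> LV -> filesystem chain, condensed into one hedged sketch (the device names, the vg0/lv0 names, and the 15G size match the demonstration further down):

pvcreate /dev/sdc2 /dev/sdd       # 1. label the block devices as physical volumes
vgcreate vg0 /dev/sdc2 /dev/sdd   # 2. pool them into a volume group
lvcreate -n lv0 -L 15G vg0        # 3. carve a 15G logical volume out of the pool
mkfs.ext4 /dev/vg0/lv0            # 4. put a filesystem on the logical volume
mkdir -p /mnt/lv0
mount /dev/vg0/lv0 /mnt/lv0       # 5. mount it (add an /etc/fstab entry to make it permanent)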


2. PV management tools

1) Display PV information

pvs: brief PV information

pvdisplay: detailed PV information

2) Create a PV

pvcreate /dev/DEVICE


3. VG management tools

1) Display volume groups

vgs: brief VG information

vgdisplay: detailed VG information

2) Create a volume group (see the sketch at the end of this list)

vgcreate [-s #[kKmMgGtTpPeE]] VolumeGroupName PhysicalDevicePath [PhysicalDevicePath...]

3) Extend or reduce a volume group

vgextend VolumeGroupName PhysicalDevicePath [PhysicalDevicePath...]

vgreduce VolumeGroupName PhysicalDevicePath [PhysicalDevicePath...]

4) Delete a volume group

Run vgremove first, then pvremove
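
A hedged example of the vgcreate/vgremove round trip referenced above, including the -s option for a non-default physical extent size (the 16M value is purely illustrative; the demo below keeps the 4 MiB default):

vgcreate -s 16M vg0 /dev/sdc2 /dev/sdd    # create vg0 with 16 MiB physical extents
vgdisplay vg0 | grep "PE Size"            # confirm the PE size that was chosen
vgextend vg0 /dev/sde1                    # later, grow the pool with another PV
vgremove vg0                              # teardown order: remove the VG first...
pvremove /dev/sdc2 /dev/sdd /dev/sde1     # ...then wipe the PV labels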


4. LV management tools

1) Display logical volumes

lvs: brief LV information

lvdisplay: detailed LV information

2) Delete a logical volume

lvremove /dev/VG_NAME/LV_NAME

3) Resize the filesystem

fsadm [options] resize device [new_size[BKMGTEP]]

resize2fs [-f] [-F] [-M] [-P] [-p] device [new_size]


4) Extend a logical volume

lvextend -L [+]#[mMgGtT] /dev/VG_NAME/LV_NAME

resize2fs /dev/VG_NAME/LV_NAME

5) Reduce a logical volume (a concrete worked version of both procedures is sketched after this list)

umount /dev/VG_NAME/LV_NAME

e2fsck -f /dev/VG_NAME/LV_NAME

resize2fs /dev/VG_NAME/LV_NAME #[mMgGtT] # shrink the filesystem to the new, smaller size first

lvreduce -L [-]#[mMgGtT] /dev/VG_NAME/LV_NAME # then shrink the LV to the same size
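
A hedged, concrete version of both resize procedures (the vg0/lv0 names and mount point mirror the demo below; the 10G shrink target is an illustrative value). Note that lvextend -r / lvreduce -r can drive the filesystem resize for you, so the LV and the filesystem change in one step:

# Grow: safe to do online for ext4
lvextend -L +6G /dev/vg0/lv0       # enlarge the LV first...
resize2fs /dev/vg0/lv0             # ...then grow the filesystem to fill it
# or, in a single step:
lvextend -r -L +6G /dev/vg0/lv0

# Shrink: must be done with the filesystem unmounted, filesystem first, then the LV
umount /mnt/lv0
e2fsck -f /dev/vg0/lv0             # force a consistency check
resize2fs /dev/vg0/lv0 10G         # shrink the filesystem to the target size first
lvreduce -L 10G /dev/vg0/lv0       # then shrink the LV to the same size
mount /dev/vg0/lv0 /mnt/lv0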


Demonstration:




[root@centos6 ~]# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0     11:0    1  3.7G  0 rom
sda      8:0    0  120G  0 disk
├─sda1   8:1    0  200M  0 part /boot
├─sda2   8:2    0   80G  0 part /
├─sda3   8:3    0   20G  0 part /testdir
├─sda4   8:4    0    1K  0 part
├─sda5   8:5    0    4G  0 part [SWAP]
└─sda6   8:6    0    2G  0 part
sdb      8:16   0  120G  0 disk
└─sdb1   8:17   0    2G  0 part [SWAP]
sdc      8:32   0   20G  0 disk
├─sdc2   8:34   0   10G  0 part
└─sdc1   8:33   0    2G  0 part /mnt/test
sdd      8:48   0   20G  0 disk
sde      8:64   0   20G  0 disk
sdf      8:80   0   20G  0 disk
[root@centos6 ~]#
[root@centos6 ~]# pvcreate /dev/sd{c2,d}
  Physical volume "/dev/sdc2" successfully created
  Physical volume "/dev/sdd" successfully created
[root@centos6 ~]# pvs
  PV         VG   Fmt  Attr PSize  PFree
  /dev/sdc2       lvm2 ---- 10.00g 10.00g
  /dev/sdd        lvm2 ---- 20.00g 20.00g
[root@centos6 ~]# pvdisplay
  "/dev/sdc2" is a new physical volume of "10.00 GiB"
  --- NEW Physical volume ---
  PV Name               /dev/sdc2
  VG Name
  PV Size               10.00 GiB
  Allocatable           NO
  PE Size               0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               PZRtfc-8dci-dW2V-ayy6-RVHQ-6oMh-q8LhwC

  "/dev/sdd" is a new physical volume of "20.00 GiB"
  --- NEW Physical volume ---
  PV Name               /dev/sdd
  VG Name
  PV Size               20.00 GiB
  Allocatable           NO
  PE Size               0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               X7gN2P-RysJ-Woci-IiIu-IphR-elkT-sAtSID

################ Create the volume group ##################
[root@centos6 ~]# vgcreate vg0 /dev/sd{c2,d}
  Volume group "vg0" successfully created
[root@centos6 ~]# vgs
  VG   #PV #LV #SN Attr   VSize  VFree
  vg0    2   0   0 wz--n- 30.00g 30.00g
[root@centos6 ~]# vgdisplay
  --- Volume group ---
  VG Name               vg0
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               30.00 GiB
  PE Size               4.00 MiB
  Total PE              7679
  Alloc PE / Size       0 / 0
  Free  PE / Size       7679 / 30.00 GiB
  VG UUID               gbfTZO-aqo8-kdfg-cLkM-xXug-VWRK-hl1qSA

################ Create the logical volume ###################
[root@centos6 ~]# lvcreate -n lv0 -L 15G vg0
  Logical volume "lv0" created.
[root@centos6 ~]# lvs
  LV   VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv0  vg0  -wi-a----- 15.00g
[root@centos6 ~]# lvdisplay
  --- Logical volume ---
  LV Path                /dev/vg0/lv0
  LV Name                lv0
  VG Name                vg0
  LV UUID                XJ1Nco-ZP4s-h93D-YkIy-DcbN-6TEq-4XXJDI
  LV Write Access        read/write
  LV Creation host, time centos6.localdomain, 2016-08-24 21:26:41 +0800
  LV Status              available
  # open                 0
  LV Size                15.00 GiB
  Current LE             3840
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

[root@centos6 ~]# mkfs.ext4 /dev/vg
vg0/         vga_arbiter
[root@centos6 ~]# mkfs.ext4 /dev/vg0/lv0
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
983040 inodes, 3932160 blocks
196608 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4026531840
120 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 36 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@centos6 ~]# vi /etc/fstab
[root@centos6 ~]# blkid /dev/vg
vg0/         vga_arbiter
[root@centos6 ~]# blkid
/dev/sda2: UUID="240533cf-b37f-4460-974f-702bab867da5" TYPE="ext4"
/dev/sda1: UUID="4e245c68-a392-4ce9-9a99-5d32d8d43872" TYPE="ext4"
/dev/sda3: UUID="86aa7b74-24df-4043-ba83-f3b41a99ce0e" TYPE="ext4"
/dev/sda5: UUID="f8ef48ef-b141-48e5-9735-ff9089bd54ba" TYPE="swap"
/dev/sda6: UUID="ca0c47c7-edb0-4685-8b29-44c6a5bf7a11" TYPE="ext4" LABEL="MYHOME"
/dev/sdb1: UUID="443bb126-8dc0-45a3-acfe-9a37629bb511" TYPE="swap"
/dev/sdc2: UUID="PZRtfc-8dci-dW2V-ayy6-RVHQ-6oMh-q8LhwC" TYPE="LVM2_member"
/dev/sdd: UUID="X7gN2P-RysJ-Woci-IiIu-IphR-elkT-sAtSID" TYPE="LVM2_member"
/dev/mapper/vg0-lv0: UUID="bac8210f-143d-4f89-a3fe-b75be6060274" TYPE="ext4"
/dev/sdc1: UUID="7f140c30-7c34-4387-abac-b4687870463c" TYPE="ext4"
[root@centos6 ~]# vi /etc/fstab
[root@centos6 ~]# mount -a
[root@centos6 ~]# df
Filesystem          1K-blocks    Used Available Use% Mounted on
/dev/sda2            82438832 5772100  72472428   8% /
tmpfs                  502068       0    502068   0% /dev/shm
/dev/sda1              194241   39067    144934  22% /boot
/dev/sda3            20511356   45044  19417736   1% /testdir
/dev/sdc1             2005848   84784   1815840   5% /mnt/test
/dev/sdc1             2005848   84784   1815840   5% /home
/dev/mapper/vg0-lv0  15350768   38384  14525952   1% /mnt/lv0
[root@centos6 ~]# cd /mnt/lv0/
[root@centos6 lv0]# ls
lost+found
[root@centos6 lv0]# df -h
Filesystem           Size  Used Avail Use% Mounted on
/dev/sda2             79G  5.6G   70G   8% /
tmpfs                491M     0  491M   0% /dev/shm
/dev/sda1            190M   39M  142M  22% /boot
/dev/sda3             20G   44M   19G   1% /testdir
/dev/sdc1            2.0G   83M  1.8G   5% /mnt/test
/dev/sdc1            2.0G   83M  1.8G   5% /home
/dev/mapper/vg0-lv0   15G   38M   14G   1% /mnt/lv0
################ Extend the logical volume #################
[root@centos6 lv0]# lvextend -L +6G /dev/vg0/lv0
  Size of logical volume vg0/lv0 changed from 15.00 GiB (3840 extents) to 21.00 GiB (5376 extents).
  Logical volume lv0 successfully resized.
[root@centos6 lv0]# df -h
Filesystem           Size  Used Avail Use% Mounted on
/dev/sda2             79G  5.6G   70G   8% /
tmpfs                491M     0  491M   0% /dev/shm
/dev/sda1            190M   39M  142M  22% /boot
/dev/sda3             20G   44M   19G   1% /testdir
/dev/sdc1            2.0G   83M  1.8G   5% /mnt/test
/dev/sdc1            2.0G   83M  1.8G   5% /home
/dev/mapper/vg0-lv0   15G   38M   14G   1% /mnt/lv0
[root@centos6 lv0]# resize2fs /dev/vg0/lv0
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/vg0/lv0 is mounted on /mnt/lv0; on-line resizing required
old desc_blocks = 1, new_desc_blocks = 2
Performing an on-line resize of /dev/vg0/lv0 to 5505024 (4k) blocks.
The filesystem on /dev/vg0/lv0 is now 5505024 blocks long.
[root@centos6 lv0]# df -h
Filesystem           Size  Used Avail Use% Mounted on
/dev/sda2             79G  5.6G   70G   8% /
tmpfs                491M     0  491M   0% /dev/shm
/dev/sda1            190M   39M  142M  22% /boot
/dev/sda3             20G   44M   19G   1% /testdir
/dev/sdc1            2.0G   83M  1.8G   5% /mnt/test
/dev/sdc1            2.0G   83M  1.8G   5% /home
/dev/mapper/vg0-lv0   21G   42M   20G   1% /mnt/lv0
############## Add a new PV to the volume group and grow the LV ################
[root@centos6 ~]# pvcreate /dev/sde1
  Physical volume "/dev/sde1" successfully created
[root@centos6 ~]# vgextend /dev/vg0/ /dev/sde
sde   sde1
[root@centos6 ~]# vgextend /dev/vg0/ /dev/sde1
  Volume group name "vg0/" has invalid characters.
  Cannot process volume group vg0/
[root@centos6 ~]# vgextend vg0  /dev/sde1
  Volume group "vg0" successfully extended
[root@centos6 ~]# vgs
  VG   #PV #LV #SN Attr   VSize  VFree
  vg0    3   1   0 wz--n- 43.00g 22.00g
[root@centos6 ~]# lvs \
>
  LV   VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv0  vg0  -wi-ao---- 21.00g
[root@centos6 ~]# pvs
  PV         VG   Fmt  Attr PSize  PFree
  /dev/sdc2  vg0  lvm2 a--u 10.00g  9.00g
  /dev/sdd   vg0  lvm2 a--u 20.00g     0
  /dev/sde1  vg0  lvm2 a--u 13.00g 13.00g
[root@centos6 ~]# lvcreate -L +13G /dev/vg0/lv0
  Volume group name expected (no slash)
  Run `lvcreate --help' for more information.
[root@centos6 ~]# lvextend -L +13G /dev/vg0/lv0
  Size of logical volume vg0/lv0 changed from 21.00 GiB (5376 extents) to 34.00 GiB (8704 extents).
  Logical volume lv0 successfully resized.
[root@centos6 ~]# lvs
  LV   VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv0  vg0  -wi-ao---- 34.00g
[root@centos6 ~]# resize2fs /dev/vg0/lv0
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/vg0/lv0 is mounted on /mnt/lv0; on-line resizing required
old desc_blocks = 2, new_desc_blocks = 3
Performing an on-line resize of /dev/vg0/lv0 to 8912896 (4k) blocks.
The filesystem on /dev/vg0/lv0 is now 8912896 blocks long.
[root@centos6 ~]# df -h |grep "vg0-lv0"
/dev/mapper/vg0-lv0   34G   45M   32G   1% /mnt/lv0
################ Remove the LV / VG / PV ##################
[root@centos6 ~]# lsblk
NAME               MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0                 11:0    1  3.7G  0 rom
sda                  8:0    0  120G  0 disk
├─sda1               8:1    0  200M  0 part /boot
├─sda2               8:2    0   80G  0 part /
├─sda3               8:3    0   20G  0 part /testdir
├─sda4               8:4    0    1K  0 part
├─sda5               8:5    0    4G  0 part [SWAP]
└─sda6               8:6    0    2G  0 part
sdb                  8:16   0  120G  0 disk
└─sdb1               8:17   0    2G  0 part [SWAP]
sdc                  8:32   0   20G  0 disk
├─sdc2               8:34   0   10G  0 part
│ └─vg0-lv0 (dm-0) 253:0    0   34G  0 lvm  /mnt/lv0
└─sdc1               8:33   0    2G  0 part /mnt/test
sdd                  8:48   0   20G  0 disk
└─vg0-lv0 (dm-0)   253:0    0   34G  0 lvm  /mnt/lv0
sde                  8:64   0   20G  0 disk
└─sde1               8:65   0   13G  0 part
  └─vg0-lv0 (dm-0) 253:0    0   34G  0 lvm  /mnt/lv0
sdf                  8:80   0   20G  0 disk
[root@centos6 ~]# lvs
  LV   VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv0  vg0  -wi-ao---- 34.00g
[root@centos6 ~]# lvremove /dev/vg0/lv0
  Logical volume vg0/lv0 contains a filesystem in use.
[root@centos6 ~]# umount /mnt/lv0/
[root@centos6 ~]# lvremove /dev/vg0/lv0
Do you really want to remove active logical volume lv0? [y/n]: y
  Logical volume "lv0" successfully removed
[root@centos6 ~]# pvs
  PV         VG   Fmt  Attr PSize  PFree
  /dev/sdc2  vg0  lvm2 a--u 10.00g 10.00g
  /dev/sdd   vg0  lvm2 a--u 20.00g 20.00g
  /dev/sde1  vg0  lvm2 a--u 13.00g 13.00g
[root@centos6 ~]# pvremove /dev/sdc2 --force
  WARNING: PV /dev/sdc2 belongs to Volume Group vg0 (consider using vgreduce).
  /dev/sdc2: physical volume label not removed.
  (If you are certain you need pvremove, then confirm by using --force twice.)
[root@centos6 ~]# pvs
  PV         VG   Fmt  Attr PSize  PFree
  /dev/sdc2  vg0  lvm2 a--u 10.00g 10.00g
  /dev/sdd   vg0  lvm2 a--u 20.00g 20.00g
  /dev/sde1  vg0  lvm2 a--u 13.00g 13.00g
[root@centos6 ~]# pvremove /dev/sdc2 --force
  WARNING: PV /dev/sdc2 belongs to Volume Group vg0 (consider using vgreduce).
  /dev/sdc2: physical volume label not removed.
  (If you are certain you need pvremove, then confirm by using --force twice.)
[root@centos6 ~]# lvs
[root@centos6 ~]# pvs
  PV         VG   Fmt  Attr PSize  PFree
  /dev/sdc2  vg0  lvm2 a--u 10.00g 10.00g
  /dev/sdd   vg0  lvm2 a--u 20.00g 20.00g
  /dev/sde1  vg0  lvm2 a--u 13.00g 13.00g
[root@centos6 ~]# vgremove vg0
  Volume group "vg0" successfully removed
[root@centos6 ~]# vgs
[root@centos6 ~]# pvs
  PV         VG   Fmt  Attr PSize  PFree
  /dev/sdc2       lvm2 ---- 10.00g 10.00g
  /dev/sdd        lvm2 ---- 20.00g 20.00g
  /dev/sde1       lvm2 ---- 13.01g 13.01g
[root@centos6 ~]# pvremove /dev/sdc2
  Labels on physical volume "/dev/sdc2" successfully wiped
[root@centos6 ~]# pvs
  PV         VG   Fmt  Attr PSize  PFree
  /dev/sdd        lvm2 ---- 20.00g 20.00g
  /dev/sde1       lvm2 ---- 13.01g 13.01g
[root@centos6 ~]# pvremove /dev/sdd
  Labels on physical volume "/dev/sdd" successfully wiped
[root@centos6 ~]# pvremove /dev/sde1
  Labels on physical volume "/dev/sde1" successfully wiped
[root@centos6 ~]# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0     11:0    1  3.7G  0 rom
sda      8:0    0  120G  0 disk
├─sda1   8:1    0  200M  0 part /boot
├─sda2   8:2    0   80G  0 part /
├─sda3   8:3    0   20G  0 part /testdir
├─sda4   8:4    0    1K  0 part
├─sda5   8:5    0    4G  0 part [SWAP]
└─sda6   8:6    0    2G  0 part
sdb      8:16   0  120G  0 disk
└─sdb1   8:17   0    2G  0 part [SWAP]
sdc      8:32   0   20G  0 disk
├─sdc2   8:34   0   10G  0 part
└─sdc1   8:33   0    2G  0 part /mnt/test
sdd      8:48   0   20G  0 disk
sde      8:64   0   20G  0 disk
└─sde1   8:65   0   13G  0 part
sdf      8:80   0   20G  0 disk
[root@centos6 ~]#
################ Demonstration complete #################





This article is an original piece by 小耳朵; if you spot any flaws, please leave a message for the admin :)




