This article walks through an example of resizing a Ceph RBD image online, covering three cases: an image that is not yet formatted, an image that is formatted and mounted, and an image attached to a running VM.
Before resizing
[root@mon0 ceph]# rbd create myrbd/rbd1 -s 1024 --image-format=2
[root@mon0 ceph]# rbd ls myrbd
rbd1
[root@mon0 ceph]# rbd info myrbd/rbd1
rbd image 'rbd1':
        size 1024 MB in 256 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.12ce6b8b4567
        format: 2
        features: layering
Resizing
[root@mon0 ceph]# rbd resize myrbd/rbd1 -s 2048
Resizing image: 100% complete...done.
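Note that `rbd resize` interprets `-s` in megabytes here, and it only grows images by default. Shrinking is destructive and, as far as I know, must be confirmed explicitly with `--allow-shrink`; a hedged sketch:

```shell
# Growing is allowed as-is (size in MB with this CLI version):
rbd resize myrbd/rbd1 -s 4096

# Shrinking below the current size is refused unless explicitly allowed.
# Only do this if the filesystem inside has already been shrunk first,
# otherwise data past the new end of the image is lost.
rbd resize myrbd/rbd1 -s 2048 --allow-shrink
```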
As long as rbd1 has not been formatted and mounted, a plain resize is all that is needed. If rbd1 is already formatted and mounted, a few extra steps are required:
[root@mon0 ceph]# rbd map myrbd/rbd1
[root@mon0 ceph]# rbd showmapped
id pool  image    snap device
0  test  test.img -    /dev/rbd0
1  myrbd rbd1     -    /dev/rbd1
[root@mon0 ceph]# mkfs.xfs /dev/rbd1
log stripe unit (4194304 bytes) is too large (maximum is 256KiB)
log stripe unit adjusted to 32KiB
meta-data=/dev/rbd1              isize=256    agcount=9, agsize=64512 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=524288, imaxpct=25
         =                       sunit=1024   swidth=1024 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@mon0 ceph]# mount /dev/rbd1 /mnt
[root@mon0 ceph]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       529G   20G  482G   4% /
tmpfs            16G  408K   16G   1% /dev/shm
/dev/sdb        559G   33G  527G   6% /openstack
/dev/sdc        1.9T   75M  1.9T   1% /cephmp1
/dev/sdd        1.9T   61M  1.9T   1% /cephmp2
/dev/rbd1       2.0G   33M  2.0G   2% /mnt
[root@mon0 ceph]# rbd resize myrbd/rbd1 -s 4096
Resizing image: 100% complete...done.
[root@mon0 ceph]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       529G   20G  482G   4% /
tmpfs            16G  408K   16G   1% /dev/shm
/dev/sdb        559G   33G  527G   6% /openstack
/dev/sdc        1.9T   75M  1.9T   1% /cephmp1
/dev/sdd        1.9T   61M  1.9T   1% /cephmp2
/dev/rbd1       2.0G   33M  2.0G   2% /mnt
[root@mon0 ceph]# xfs_growfs /mnt
meta-data=/dev/rbd1              isize=256    agcount=9, agsize=64512 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=524288, imaxpct=25
         =                       sunit=1024   swidth=1024 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 524288 to 1048576
[root@mon0 ceph]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       529G   20G  482G   4% /
tmpfs            16G  408K   16G   1% /dev/shm
/dev/sdb        559G   33G  527G   6% /openstack
/dev/sdc        1.9T   75M  1.9T   1% /cephmp1
/dev/sdd        1.9T   61M  1.9T   1% /cephmp2
/dev/rbd1       4.0G   33M  4.0G   1% /mnt
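The session above shows the key point: after `rbd resize`, `df` still reports the old size until the filesystem itself is grown with `xfs_growfs`. If the image had been formatted with ext4 instead of XFS, the equivalent final step would be `resize2fs`, which can also grow a mounted filesystem online. A minimal sketch, assuming the same `/dev/rbd1` device and `/mnt` mount point:

```shell
# Grow the RBD image as before (size in MB with this CLI version).
rbd resize myrbd/rbd1 -s 4096

# For ext4, grow the filesystem to fill the enlarged device.
# resize2fs takes the device, not the mount point (unlike xfs_growfs),
# and with no explicit size it expands to the full device.
resize2fs /dev/rbd1

# Verify the new size is visible.
df -h /mnt
```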
Another case is when rbd1 is already attached to a VM:
virsh domblklist myvm
rbd resize myrbd/rbd1   # the new size must then be propagated to the guest via virsh blockresize
virsh blockresize --domain myvm --path vdb --size 100G
rbd info myrbd/rbd1
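The sequence above resizes the backing image, then tells QEMU to re-read the block device size so the guest sees it without a reboot. What remains is the same filesystem step as in the mounted-host case, only performed inside the guest. A hedged sketch of the verification, assuming a VM named `myvm` whose disk target is `vdb` (names taken from the example above):

```shell
# On the host: confirm which target the RBD disk is attached as.
virsh domblklist myvm

# After rbd resize + virsh blockresize, inside the guest the new
# capacity should be visible on the corresponding block device:
lsblk /dev/vdb

# If /dev/vdb carries a mounted filesystem, grow it in the guest too,
# matching the filesystem type (xfs_growfs for XFS, resize2fs for ext4):
xfs_growfs /mountpoint-of-vdb
```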