EC2 doesn't let you do a live resize on an attached Elastic Block Store volume, and the procedure for resizing offline is a bit awkward: make a snapshot, then restore that snapshot into a bigger EBS volume (here's a stack overflow article about that).
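For concreteness, that snapshot-and-restore dance looks roughly like this with today's aws CLI (which isn't what I used - I was clicking around the web console - and the IDs below are made-up placeholders):

$ aws ec2 create-snapshot --volume-id vol-0123456789abcdef0
$ aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 \
      --size 200 --availability-zone us-east-1a

...followed by detaching the old volume and attaching the new one in its place.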
LVM lets you add space to a volume dynamically, and ext2 can cope with live resizing of a filesystem now. So if I were using LVM, I think I'd be able to do this live.
So what I'm going to do is:
- firstly, move this volume to LVM without resizing. This will involve downtime, as it will be roughly a variant of the above-mentioned "go offline and restore to a different volume".
- secondly, use LVM to add more space: by adding another EBS volume to use in addition to (rather than as a replacement for) my existing space; adding that to LVM; and live-resizing the ext2 filesystem.
First, move this volume to LVM without resizing.
The configuration at the start is that I have a large data volume mounted at /backup, directly on an attached EBS device, /dev/xvdf.

$ df -h /backup
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvdf        99G   48G   52G  49% /backup
In the AWS web console, create a volume that is a little bit bigger than the volume I already have - so 105 GB, from no snapshot. Make sure it's in the same availability zone as the instance and the existing volume.
Attach the volume to the instance, also in the AWS console.
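(Both of those console steps can be scripted too; with the modern aws CLI - again, not what I used, and with placeholder IDs - it would be roughly:

$ aws ec2 create-volume --size 105 --availability-zone us-east-1a
$ aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
      --instance-id i-0123456789abcdef0 --device /dev/sdg

A volume attached as /dev/sdg shows up as /dev/xvdg on these instances.)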
On the Linux instance, it should now appear:
$ dmesg | tail
[15755792.707506] blkfront: regular deviceid=0x860 major,minor=8,96, assuming parts/disk=16
[15755792.708148]  xvdg: unknown partition table
$ cat /proc/partitions
major minor  #blocks  name
 202        1    8388608 xvda1
 202       80  104857600 xvdf
 202       96  110100480 xvdg

xvdg is the new EBS device.
Despite that dmesg warning, screw having a partition table - I'm using this as a raw device. It might suit your tastes to create partitions at this point, but it really doesn't matter.
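(If partitions are more to your taste, a single whole-disk partition is something like this parted sketch - and then you'd pvcreate /dev/xvdg1 below instead of the bare device:

# parted /dev/xvdg mklabel gpt
# parted -a optimal /dev/xvdg mkpart primary 0% 100%

)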
Now I'm going to make that 105 GB on xvdg into some LVM space (there's a nice LVM tutorial here if you want someone else's more detailed take):
# pvcreate /dev/xvdg
  Physical volume "/dev/xvdg" successfully created
# vgcreate backups /dev/xvdg
  Volume group "backups" successfully created
Now we've created a volume group, backups, which contains one physical volume - /dev/xvdg. Later on we'll add more space into this backups volume group, but for now we'll make it into some space that we can put a file system onto:
# vgdisplay | grep 'VG Size'
  VG Size               105.00 GiB

So we have 105.00 GiB available - the size of the whole new EBS volume created earlier. It turns out not quite all of that is allocatable, so I'll create a logical volume with only 104 GB of space. What's a wasted partial-gigabyte in the 21st century?
# lvcreate --name backup backups --size 105g
  Volume group "backups" has insufficient free space (26879 extents): 26880 required.
# lvcreate --name backup backups --size 104g
  Logical volume "backup" created
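(Those extents are 4 MiB each, which is where the shortfall comes from: 105 GiB is 26880 extents, and the VG comes up one short once LVM's metadata has taken its bite. On a reasonably recent LVM2 you can dodge the arithmetic entirely by asking for everything that's free:

# lvcreate --name backup backups -l 100%FREE

I didn't do that here.)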
Now that new logical volume has appeared and can be used for a file system:
$ cat /proc/partitions
major minor  #blocks  name
 202        1    8388608 xvda1
 202       80  104857600 xvdf
 202       96  110100480 xvdg
 253        0  109051904 dm-0
# ls -l /dev/backups/backup
lrwxrwxrwx 1 root root 7 Jul 25 20:35 /dev/backups/backup -> ../dm-0

It appears both as /dev/dm-0 and as /dev/backups/backup - this second name based on the parameters we supplied to vgcreate and lvcreate.
Now we'll do the bit that involves offline-ness: I'm going to take the /backup volume (which is /dev/xvdf at the moment) offline and copy it into this new space, /dev/dm-0.
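(As an aside, an alternative route I didn't take: make a fresh filesystem on the LV and rsync the files across, which copies only live data rather than every block - roughly:

# mkfs.ext2 /dev/backups/backup
# mkdir /mnt/newbackup
# mount /dev/backups/backup /mnt/newbackup
# rsync -aH /backup/ /mnt/newbackup/
# umount /mnt/newbackup

A straight block copy with dd is simpler though, so:)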
# umount /backup
# dd if=/dev/xvdf of=/dev/dm-0

This dd takes quite a while (hours) - it's copying 100 GB of data. While I was waiting, I discovered that you can send SIGUSR1 to a dd process on Linux to get I/O stats (thanks mdm):
$ sudo killall -USR1 dd
$ 41304+0 records in
41303+0 records out
43309334528 bytes (43 GB) copied, 4303.97 s, 10.1 MB/s
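(Two dd asides, neither of which I used: a block size larger than the 512-byte default speeds up this sort of raw copy considerably, and newer GNU coreutils can report progress without the signal trick - something like:

# dd if=/dev/xvdf of=/dev/dm-0 bs=64K status=progress

status=progress needs a more recent dd than this host had.)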
Once that is finished, we can mount the copied volume:
# mount /dev/backups/backup /backup
# df -h /backup
Filesystem                  Size  Used Avail Use% Mounted on
/dev/mapper/backups-backup   99G   68G   32G  69% /backup

Now we have the same sized volume, with the same data on it, but now inside LVM.
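(If /backup gets mounted from /etc/fstab, remember to point the entry at the new device name; assuming ext2 and default options as here, the line would look something like:

/dev/mapper/backups-backup  /backup  ext2  defaults  0  2

Otherwise the next reboot will mount the old /dev/xvdf - or, once we've cannibalised that below, fail entirely.)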
Second, add more space
Now that we've got our filesystem inside LVM, we can start doing interesting things.
The first thing I'm going to do is reuse the old space on /dev/xvdf as additional space.
To do that, add it as a physical volume; add that physical volume to the volume group; allocate that new space to the logical volume; and then resize the ext2 filesystem.
These commands add the old space into the volume group:
# pvcreate /dev/xvdf
  Physical volume "/dev/xvdf" successfully created
# vgextend backups /dev/xvdf
  Volume group "backups" successfully extended
... and these commands show you how much space is available (by deliberately trying to allocate too much) and then add what's there:
# lvresize /dev/backups/backup -L+500G
  Extending logical volume backup to 604.00 GiB
  Insufficient free space: 128000 extents needed, but only 25854 available
# lvresize /dev/backups/backup -l+25854
  Rounding up size to full physical extent 25.25 GiB
  Extending logical volume backup to 129.25 GiB
  Logical volume backup successfully resized
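(On LVM2 versions that understand percentage sizes there's a tidier way, which would also have dodged the under-allocation I run into below:

# lvresize -l +100%FREE /dev/backups/backup

That's the road not taken; probing with a deliberately huge request works too.)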
Even though we've now made the dm-0 / /dev/backups/backup device much bigger, the filesystem on it is still the same size:

$ df -h /backup
Filesystem                  Size  Used Avail Use% Mounted on
/dev/mapper/backups-backup   99G   68G   32G  69% /backup
But not for long...
Unfortunately:
# resize2fs /dev/backups/backup
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/backups/backup is mounted on /backup; on-line resizing required
old desc_blocks = 7, new_desc_blocks = 9
resize2fs: Kernel does not support online resizing

The version of the kernel on this host doesn't allow online resizing (some do). So I'll have to unmount it briefly to resize:
# umount /backup
# resize2fs /dev/backups/backup
resize2fs 1.41.12 (17-May-2010)
Resizing the filesystem on /dev/backups/backup to 33882112 (4k) blocks.
The filesystem on /dev/backups/backup is now 33882112 blocks long.
# mount /dev/backups/backup /backup
# df -h
Filesystem                  Size  Used Avail Use% Mounted on
/dev/mapper/backups-backup  128G   68G   60G  53% /backup

So there's the bigger fs - though not as big as I had expected: I only seem to have got about 30 GB of extra storage, not 100 as I was expecting.
Well, it turns out that not all the space was allocated to this LV, even though I thought I'd done that:
# vgdisplay
...
  Alloc PE / Size       33088 / 129.25 GiB
  Free  PE / Size       19390 / 75.74 GiB
...

But no matter: I can repeat this procedure a second time without too much trouble (indeed, being able to do this easily is the whole reason I want LVM installed).
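(The repeat, sketched rather than captured from my session - this time grabbing all the remaining free extents in one go:

# lvresize -l +100%FREE /dev/backups/backup
# umount /backup
# resize2fs /dev/backups/backup
# mount /dev/backups/backup /backup

)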
Having done that, I end up with the expected bigger filesystem:
# df -h /backup
Filesystem                  Size  Used Avail Use% Mounted on
/dev/mapper/backups-backup  202G   68G  135G  34% /backup
Now whenever I want to add more space, I can repeat step 2 with just a tiny bit of downtime for that particular filesystem; and if I get round to putting on a kernel with online resizing (my Raspberry Pi has it - why doesn't this?) then I won't need downtime at all...
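(When that day comes, the whole of step 2 should collapse - untested on this box, obviously - to attaching and pvcreate-ing a new volume, then:

# vgextend backups /dev/xvdh
# lvresize -l +100%FREE /dev/backups/backup
# resize2fs /dev/backups/backup

with the filesystem staying mounted throughout. /dev/xvdh here is a hypothetical next device name.)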