Increasing a CentOS Linux LVM partition size, the dangerous way
by Eric Stewart on Feb.23, 2012, under Computers, Technology
First, the important legal disclaimer: I tend to be really good at Googling stuff. I can also put information from several different sources together successfully. I also consider myself unusually lucky. If you are not so lucky, it’s not my fault. Always have a disaster recovery plan, because if you follow these instructions blindly and things don’t go so well for you, then you failed to read the bits about “*boom*” and “effective destruction” and heed them appropriately.
If you have any experience with VMware vCenter, you know that granting additional RAM and disk space is quite easy (and you should be able to present the additional disk space to your system as if the original hard drive became magically larger). It might require powering down a VMware virtual machine once, but for the most part, it should be easy to figure out. In my current work environment, we have a whole department that manages the VMware infrastructure and takes care of that part for us.
The not so easy part is having your Linux system actually see the disk. There are easy ways to do this, and for non-root partitions, it’s fairly safe.
For root partitions, however, one of the methods could result in a spectacular loss of an operating virtual machine.
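(An aside before we get to the methods: your system has to notice the bigger disk at all. If the disk was grown while the VM was running, you can sometimes get the kernel to see the new size without a power cycle by poking the SCSI layer. This is an assumption on my part rather than something from the run below, and the device address is a guess that varies from system to system:)

# Ask the kernel to re-read the capacity of the first SCSI disk;
# the 0:0:0:0 address is a guess and differs per system
echo 1 > /sys/class/scsi_device/0:0:0:0/device/rescan
fdisk -l /dev/sda    # check whether the larger size shows up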
There are two basic ways to increase the available space for a partition:
- Add a new partition using fdisk, add the partition to the LVM, and then resize your file system (sketched below). This may not even require a reboot (aside from the initial power down to add the space).
- Destroy the existing partition (!), rebuild it with the larger size, tell LVM the partition has grown, and then resize your file system. The first time I did it, I was greatly surprised by the success, especially since I fumbled around quite a bit, and the idea of changing a partition entry just seems to scream “don’t do this!”.
I don’t like #1. It uses up a partition slot and just makes the system feel … messy. #2, however, takes bravery. And in both cases, you’d be an idiot not to ensure you have some kind of quick recovery method, like a snapshot or some kind of VMware backup. Some of the steps in #2, if you know even as little as I do about how Linux file systems work, could result in the effective destruction of said virtual machine … especially if you don’t pay attention.
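For completeness, method #1 would go roughly like this. This is a sketch only; /dev/sda3 and the extent count are assumptions based on my layout, not commands from this session:

# Method #1 sketch (hypothetical; partition and VG names assumed):
fdisk /dev/sda
# ... n (new primary partition 3 in the freed space), t (type 8e), w ...
pvcreate /dev/sda3                             # make the new partition a PV
vgextend VolGroup00 /dev/sda3                  # add the PV to the volume group
lvm lvextend -l +960 /dev/VolGroup00/LogVol00  # grow the LV by the free PEs
resize2fs /dev/VolGroup00/LogVol00             # grow the file system to match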
So, this is what we’re dealing with:
# df -k
Filesystem                      1K-blocks    Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00  16220072 8346836   7036016  55% /
/dev/sda1                          101086   40639     55228  43% /boot
tmpfs                             3053044       0   3053044   0% /dev/shm
In my case this was a future member of a cluster, and “/” on some of the existing cluster members would complain about getting full. So we asked for the 20GB we had to be increased to 60GB:
# fdisk /dev/sda

The number of cylinders for this disk is set to 6527.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): p

Disk /dev/sda: 53.6 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14        2610    20860402+  8e  Linux LVM
Note the difference between the number of cylinders for the disk (6527) and the “End” cylinder for /dev/sda2 (2610). That gap should be some indication of the change in the disk from its original size (or a massive brain fart by the administrator that originally laid out the partition table). Also make note of the Start cylinder (14), because you’re going to need that information later. Oh, and the “Id” (8e) is important. Get it wrong at the right point, and *boom*, there goes your data.
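Before deleting anything, one cheap bit of insurance (my addition; it wasn’t part of the original run) is to dump the partition table somewhere off the VM, so you can restore it from a rescue environment if things do go *boom*:

# Save the partition table; restore later (from rescue media) with:
#   sfdisk /dev/sda < sda-table.dump
sfdisk -d /dev/sda > sda-table.dump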
Okay, so, at this point, you’re going to do the most conceptually dangerous part (still in the fdisk from above):
Command (m for help): d
Partition number (1-4): 2
Here’s the good news: You haven’t screwed up anything yet (as far as I understand it). You can quit fdisk at this point without writing the table out and it won’t kill anything. Write out the partition table now, and you might as well delete the VM completely and start all over.
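For reference, backing out safely at this point is just:

Command (m for help): q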
The next step is to recreate partition 2, making it bigger than it was and changing its type/id:
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (14-6527, default 14):
Using default value 14
Last cylinder or +size or +sizeM or +sizeK (14-6527, default 6527):
Using default value 6527

Command (m for help): t
Partition number (1-4): 2
Hex code (type L to list codes): 8e
Changed system type of partition 2 to 8e (Linux LVM)
So far so good. While in my case the default start cylinder was the one I wanted, make sure it’s the one you want when you go through these steps. Put in the wrong one and *boom*, there goes your data. Let’s take a look at it (note: you can still back out now, as you haven’t written it out):
Command (m for help): p

Disk /dev/sda: 53.6 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14        6527    52323705   8e  Linux LVM
Yay, a lot bigger … or at least it will be. Now, here’s where I start to get nervous and hope I’ve remembered all of my steps or have bookmarked/Googled all of the right pages. Write that bad boy out:
Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
Oh, that’s a scary bit of text. Um. Maybe we should reboot and see what happens? If you haven’t figured it out yet, I’m working on my second system while writing this, so I’m a bit nervous now while the system reboots. I could be watching it via the vSphere client, but that’s too easy …
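(An aside, and an assumption on my part rather than something I tried here: partprobe, from the parted package, can sometimes get the kernel to re-read the partition table without a reboot, though with a busy root disk it will often refuse in exactly the same way:)

# May update the kernel's view of /dev/sda without a reboot; no promises
partprobe /dev/sda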
Okay! So the system is back up (at least enough for me to ssh into it). A quick “df -k” shows nothing different there. “fdisk -l” does show what we’d expect – the larger partition. Next, we gotta start working with LVM.
# vgdisplay VolGroup00
  /dev/hdc: open failed: No medium found
  --- Volume group ---
  VG Name               VolGroup00
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               19.88 GB
  PE Size               32.00 MB
  Total PE              636
  Alloc PE / Size       636 / 19.88 GB
  Free  PE / Size       0 / 0
  VG UUID               4crWPr-2a1F-fFSs-2pKt-Banc-QZZ7-2ONZHW
Wait, what? Oh yeah, we have to tell VolGroup00 to use the space.
# lvm pvresize /dev/sda2
  Physical volume "/dev/sda2" changed
  1 physical volume(s) resized / 0 physical volume(s) not resized
# lvm pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               VolGroup00
  PV Size               49.90 GB / not usable 25.18 MB
  Allocatable           yes
  PE Size (KByte)       32768
  Total PE              1596
  Free PE               960
  Allocated PE          636
  PV UUID               zG4mdG-737s-bar8-B4ax-2rZB-6IQy-FFF3qY
Okay. The “Free PE” number is important, so keep track of it: 960 free extents at a 32 MB PE size works out to the 30 GB we just added. “vgdisplay” now shows us what we wanted to see originally:
# vgdisplay
  --- Volume group ---
  VG Name               VolGroup00
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               49.88 GB
  PE Size               32.00 MB
  Total PE              1596
  Alloc PE / Size       636 / 19.88 GB
  Free  PE / Size       960 / 30.00 GB
  VG UUID               4crWPr-2a1F-fFSs-2pKt-Banc-QZZ7-2ONZHW
Okay, we’re doing a little better now. At this point in my research, someone suggested a couple of things: “lvm vgchange -a y” to activate the logical volumes, and “e2fsck -f /dev/VolGroup00/LogVol00” to check the file system first. Thing is, we’re dealing with “/” here. It’s already active, and you can’t safely fsck it while it’s mounted and running.
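That advice does make sense for a non-root volume you can unmount first; the cautious offline version would look something like this (a sketch only; LogVol02 and /data are hypothetical names):

# Offline resize of a hypothetical non-root volume (names made up):
umount /data
e2fsck -f /dev/VolGroup00/LogVol02       # check the fs before growing it
lvm lvextend -l +960 /dev/VolGroup00/LogVol02
resize2fs /dev/VolGroup00/LogVol02       # grow the fs to fill the LV
mount /data

For our root file system, though, we do it live. So: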
# lvm lvextend -l +960 /dev/VolGroup00/LogVol00
  Extending logical volume LogVol00 to 45.97 GB
  Logical volume LogVol00 successfully resized
# vgdisplay
  --- Volume group ---
  VG Name               VolGroup00
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  5
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               49.88 GB
  PE Size               32.00 MB
  Total PE              1596
  Alloc PE / Size       1596 / 49.88 GB
  Free  PE / Size       0 / 0
  VG UUID               4crWPr-2a1F-fFSs-2pKt-Banc-QZZ7-2ONZHW
Yay!
# df -k
Filesystem                      1K-blocks    Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00  16220072 8346616   7036236  55% /
/dev/sda1                          101086   40639     55228  43% /boot
tmpfs                             3053044       0   3053044   0% /dev/shm
Ooo wait, not done yet: the logical volume is bigger, but the file system itself hasn’t grown.
# resize2fs /dev/VolGroup00/LogVol00
resize2fs 1.39 (29-May-2006)
Filesystem at /dev/VolGroup00/LogVol00 is mounted on /; on-line resizing required
Performing an on-line resize of /dev/VolGroup00/LogVol00 to 12050432 (4k) blocks.
The filesystem on /dev/VolGroup00/LogVol00 is now 12050432 blocks long.

# df -k
Filesystem                      1K-blocks    Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00  46694192 8354732  35931400  19% /
/dev/sda1                          101086   40639     55228  43% /boot
tmpfs                             3053044       0   3053044   0% /dev/shm
Booyah! Now we’re talking. (Sanity check: 12050432 4k blocks works out to about 45.97 GB, which matches what lvextend reported.) Please note that the resize may take a bit of time. At this point I would strongly suggest running:
# shutdown -rF now
to reboot the box and force an fsck. If the box comes back up with a minimum of errors on fsck, you should be good to go!
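(If your shutdown doesn’t support -F, my understanding is that the SysV init scripts on CentOS check for a flag file at boot; treat this as an assumption:)

# Alternative: force an fsck on the next boot via the /forcefsck flag file
# (assumption: CentOS's SysV init honors this)
touch /forcefsck && shutdown -r now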
Again, I want to stress that I do not advocate this as the best or smartest way to accomplish this procedure. I have a touch of OCD and a touch of ADHD, so the end result of there still only being two used partitions gives me warm fuzzies.
Good luck!