Posted 2020-01-30 with tags EC2, AWS

Stop the Instance in Question
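If you prefer the CLI to the console, the instance can be stopped with the AWS CLI. A sketch, with a placeholder instance ID:

```shell
# Stop the broken instance (placeholder instance ID)
aws ec2 stop-instances --instance-ids i-0123456789abcdef0

# Wait until it is fully stopped before detaching the root volume
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0
```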

Identify the Root Volume and Detach It
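The root volume can also be identified and detached from the CLI; the IDs below are placeholders. Take note of the root device name reported here — you'll need it when reattaching the fixed volume later.

```shell
# Find the root device name and the volume ID backing it
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
  --query 'Reservations[].Instances[].[RootDeviceName,BlockDeviceMappings]'

# Detach the root volume (placeholder volume ID)
aws ec2 detach-volume --volume-id vol-0123456789abcdef0
```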

Attach Said Volume to Another EC2 Instance
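From the CLI, attaching the volume to a healthy instance might look like this (IDs are placeholders). A device name like /dev/sdg typically appears inside the instance as /dev/xvdg:

```shell
# Attach the problematic volume to a working instance as a secondary device
aws ec2 attach-volume \
  --volume-id vol-0123456789abcdef0 \
  --instance-id i-0fedcba9876543210 \
  --device /dev/sdg
```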

Log onto Working Instance and Mount the Problematic Volume

Use lsblk to find the volume in order to mount it:

lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
xvda    202:0    0    8G  0 disk 
└─xvda1 202:1    0    8G  0 part /
xvdg    202:96   0  100G  0 disk 
└─xvdg1 202:97   0  100G  0 part 
loop0     7:0    0 89.1M  1 loop /snap/core/8213
loop1     7:1    0    7M  1 loop /snap/rg/15
loop2     7:2    0 89.1M  1 loop /snap/core/8268
loop3     7:3    0  4.7M  1 loop /snap/rg/7
loop5     7:5    0  7.4M  1 loop /snap/rg/23

In this case the volume in question is /dev/xvdg, and we just need to mount its partition, /dev/xvdg1.

Create a Directory to use for the Mount Point

mkdir /mnt/recovery

Mount the Problematic Volume for Inspection/Repair

mount /dev/xvdg1 /mnt/recovery

At this point the problematic volume will be available at /mnt/recovery. Conduct your investigation and/or repair and proceed with the next steps to restore the fixed volume back to the root partition of the original EC2 instance.
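What the investigation looks like depends on the failure, but common first steps are checking the boot logs and /etc/fstab on the mounted volume, and running a filesystem check if you suspect corruption. Note that fsck should only be run on an unmounted filesystem:

```shell
# Inspect logs and config on the mounted volume
less /mnt/recovery/var/log/syslog
vi /mnt/recovery/etc/fstab

# If filesystem corruption is suspected, unmount first, then check
umount /mnt/recovery
fsck -y /dev/xvdg1
mount /dev/xvdg1 /mnt/recovery
```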

Unmount the Volume

umount /mnt/recovery

Detach the Volume in AWS Console

Find the volume in question in the console:

Volumes -> Actions -> Detach Volume
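The same detach can be done from the CLI (placeholder volume ID):

```shell
aws ec2 detach-volume --volume-id vol-0123456789abcdef0

# Wait until the volume is available before reattaching it
aws ec2 wait volume-available --volume-ids vol-0123456789abcdef0
```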

Attach the Fixed Volume to the Original EC2 Instance

Now that the volume is fixed, we'll reattach it as the root volume of the original instance. When attaching, use the original root device name (typically /dev/sda1 or /dev/xvda) so the instance can boot from it.
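A CLI sketch, assuming the original root device name was /dev/sda1 (use whatever device name the instance originally reported; IDs are placeholders):

```shell
aws ec2 attach-volume \
  --volume-id vol-0123456789abcdef0 \
  --instance-id i-0123456789abcdef0 \
  --device /dev/sda1
```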

Start the Fixed EC2 Instance
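From the CLI (placeholder instance ID):

```shell
aws ec2 start-instances --instance-ids i-0123456789abcdef0

# Wait for it to come up before trying to log in
aws ec2 wait instance-running --instance-ids i-0123456789abcdef0
```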

At this point the original instance should boot, and you'll be able to log back in.