I accidentally the whole /
Instead of running chmod -R 664 ./* inside a script, I just ran
chmod -R 664 /*. Worse, this was inside AWS, with no direct console access. Whoops...
This isn't the worst thing to recover from if you're sitting in front of a physical machine and have access to the physical disk or a recovery mode, but recovering via SSH alone makes it harder. Especially so when SSH itself is broken due to permission errors.
However, this isn't too bad even in AWS, and I say that as someone with very limited AWS knowledge. The quick steps are:
- Stop your current instance (i-alpha)
- Detach your current volume (vol-alpha)
- Create a new instance of the exact same type/OS as your current instance.
- Start your new instance (i-beta) and then attach your old volume (vol-alpha) to this new instance
- SSH into your new instance
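If you prefer the command line to the console, the steps above can be sketched with the AWS CLI. This is a dry-run sketch: the AMI ID, instance type, and device name are placeholder assumptions, and i-alpha / vol-alpha / i-beta are the placeholder IDs from the steps above.

```shell
# Dry-run sketch of the console steps above using the AWS CLI.
# The AMI ID, instance type, and device name are assumptions.
run() { echo "+ $*"; }   # swap echo for real execution when ready

run aws ec2 stop-instances --instance-ids i-alpha
run aws ec2 detach-volume --volume-id vol-alpha
run aws ec2 run-instances --image-id ami-xxxxxxxx --instance-type t2.micro  # same type/OS as i-alpha
run aws ec2 attach-volume --volume-id vol-alpha --instance-id i-beta --device /dev/xvdf
```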
On to actually fixing the permissions! I used Option #2 here: https://superuser.com/questions/1252600/fix-permissions-of-server-after-accidental-chmod-debian/1252665#1252665. All credit to izkon! However, I did find that as the script ran, it would change permissions through symlinks, thus screwing up the original files they point to. I wrote a small Python script that ignores symlinks (credit /u/kalgynirae) and does some other error checking.
- Run the following command to create a list of all proper file permissions using your new instance as the standard:
find / -name '*' -printf '%m %p\0' > working-permissions.txt
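The file this produces is a NUL-separated list of "mode path" records (that's what the %m %p\0 format means). A quick way to sanity-check the format before trusting it, using a couple of hypothetical records:

```python
# Parse a couple of hypothetical records in the same "%m %p\0" format
# that the find command above produces.
sample = "755 /usr/bin\0644 /etc/hosts\0"

for record in sample.split("\0"):
    record = record.strip()
    if not record:
        continue  # skip the empty record after the trailing NUL
    mode, path = record.split(" ", 1)  # split once: paths may contain spaces
    print(int(mode, 8), path)  # mode as an octal int, plus the path
```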
- Mount the volume in your new instance:
mount /dev/xvdf1 /mnt/broken
- Use our list of proper permissions to fix our broken OS. At this point I had trouble with the original Perl script from the Super User link above. I've rewritten it in Python to ignore symlinks and do some error checking:
#!/usr/bin/python
import os

f = open('working-permissions.txt', 'r')
for line in f.read().split('\0'):
    line = line.strip()
    if not line:
        continue  # skip the empty record after the trailing NUL
    args = line.split(" ", 1)
    if len(args) == 2:
        path = "/mnt/broken" + args[1]
        mode = int(args[0], 8)
        if os.path.islink(path):
            # Do not modify symlinks
            pass
        else:
            try:
                os.chmod(path, mode)
            except Exception, e:
                #print "Failure: mode=" + str(mode) + " path=" + path
                #print e
                pass
    else:
        print "WARNING! args != 2: " + line
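The symlink check matters because on Linux, os.chmod() on a symlink follows the link and changes the target file's mode. A small self-contained demonstration (in a temp directory, not your mounted volume; file names are arbitrary):

```python
import os
import tempfile

# Demonstrate that chmod through a symlink changes the target's mode.
d = tempfile.mkdtemp()
target = os.path.join(d, "target.txt")
link = os.path.join(d, "link.txt")

open(target, "w").close()
os.chmod(target, 0o644)
os.symlink(target, link)

os.chmod(link, 0o600)  # chmod *through* the symlink...
print(oct(os.stat(target).st_mode & 0o777))  # ...changes the target file
```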
Now unmount the volume from your new instance, so it can be remounted on the old one:
- In your current SSH session (on i-beta), run:
umount /dev/xvdf1
- in the AWS console, detach the old volume (vol-alpha) from your new instance (i-beta)
- Attach the old volume to your old instance (vol-alpha is now reattached to i-alpha). You'll have to attach the disk as /dev/sda1, the root device, not whatever automatic location AWS suggests.
- Start i-alpha, your original (formerly broken) instance.
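The shuffle back can again be sketched as a dry run with the AWS CLI (same placeholder IDs as before; check what your instance actually uses as its root device before attaching):

```shell
# Dry-run sketch of moving vol-alpha back onto i-alpha.
run() { echo "+ $*"; }   # swap echo for real execution when ready

run aws ec2 detach-volume --volume-id vol-alpha
# Attach as the root device (/dev/sda1 here), not the default AWS offers:
run aws ec2 attach-volume --volume-id vol-alpha --instance-id i-alpha --device /dev/sda1
run aws ec2 start-instances --instance-ids i-alpha
```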
At this point you should, at the very least, be able to SSH to your system and elevate privileges. You can now shut down and terminate the temporary instance (i-beta), unless you need it as a reference to fix anything else.