Re: Swapping Drives - Sanity Check
- Date: Fri, 22 Feb 2019 23:53:22 -0800
- From: David Christensen <dpchrist@xxxxxxxxxxxxxxxx>
- Subject: Re: Swapping Drives - Sanity Check
On 2/22/19 6:17 AM, songbird wrote:
Stephen P. Molnar wrote:
My Debian Stretch system has three HD's. I want to remove one of the
HD's (not sda) and replace it with a new HD.
What I need to be sure of is: if I remove the old drive from the fstab
and delete the mount point, will the system boot after I put in the new
HD, so that I can edit the fstab and create a mount point for the new
drive? Hence, the request for the sanity check.
as long as you don't have anything on the
current one that is being used by the system
it should be ok.
So long as neither the system nor any program is using a drive, you can
remove that drive.
for the short term, just to make sure you
don't have to track the stuff down again, you
can just comment the lines out in the fstab
but leave them there until you are sure things work.
Leave the old drive installed, comment out its entry in fstab, leave the
mount point intact, reboot, and test if everything still works.
If everything still works, then power down, remove the old drive,
install the new drive, boot, and configure the new drive.
If something is broken, then you will need to trouble-shoot.
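That comment-out step can be sketched as follows. The sketch deliberately works on a scratch copy rather than the real /etc/fstab; the UUID is the sdc1 entry from the fstab quoted later in this thread.

```shell
# Demonstrate on a scratch copy; on the real system you would edit
# /etc/fstab itself (as root, after making a backup).
fstab=/tmp/fstab.demo
cat > "$fstab" <<'EOF'
UUID=d65867da-c658-4e35-928c-9dd2d6dd5742 /sdc1 ext4 errors=remount-ro 0 1
EOF

# Comment out the entry for the drive being removed:
sed -i '/sdc1/s/^/#/' "$fstab"
cat "$fstab"
```

After editing the real /etc/fstab, running `mount -a` (or `findmnt --verify`, on recent util-linux) catches fstab mistakes before you commit to a reboot.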
On 2/22/19 6:19 AM, Stephen P. Molnar wrote:
> The OS is on /dev/sda. The disk I am changing is /dev/sdc.
As other readers have noted, device nodes for drives are unpredictable.
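The reason: the kernel hands out sdX names in detection order, which can change when drives are added or removed, while the symlinks udev maintains under /dev/disk/ stay stable. A quick way to map one to the other (the directory names are standard udev ones; a minimal container may lack /dev/disk, hence the guard):

```shell
# List the stable names udev maintains for each block device.
#   by-uuid: filesystem UUIDs (what this fstab uses)
#   by-id:   make/model/serial strings
#   by-label: filesystem labels
for d in by-uuid by-id by-label; do
    if [ -d "/dev/disk/$d" ]; then
        echo "== /dev/disk/$d =="
        ls -l "/dev/disk/$d"
    fi
done
```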
On 2/22/19 8:36 AM, Stephen P. Molnar wrote:
> Here is my fstab:
> # /etc/fstab: static file system information.
> # Use 'blkid' to print the universally unique identifier for a
> # device; this may be used with UUID= as a more robust way to name
> # that works even if disks are added and removed. See fstab(5).
> # <file system> <mount point> <type> <options> <dump> <pass>
> # / was on /dev/sda1 during installation
> UUID=ce25f0e1-610d-4030-ab47-129cd47d974e / ext4 errors=remount-ro 0 1
> # swap was on /dev/sda5 during installation
> UUID=a8f6dc7e-13f1-4495-b68a-27886d386db0 none swap sw 0 0
> /dev/sr0 /media/cdrom0 udf,iso9660 user,noauto 0 0
> UUID=900b5f0b-4f3d-4a64-8c91-29aee4c6fd07 /sdb1 ext4 errors=remount-ro 0 1
> UUID=d65867da-c658-4e35-928c-9dd2d6dd5742 /sdc1 ext4 errors=remount-ro 0 1
> UUID=007c1f16-34a4-438c-9d15-e3df601649ba /sdc2 ext4 errors=remount-ro 0 1
As other readers have noted, using UUID's for the fstab first field
(fs_spec) is okay. Newer Linux systems offer more meaningful options,
such as GPT partition labels and drive make/model/serial-number (by-id)
strings.
As other readers have noted, using device node base names such as
'/sdb1' for the fstab second field (fs_file) is confusing and could
cause you to make a painful mistake. I agree with the suggestions of
using names based upon what the drive contains -- '/data', '/music',
'/sneaker', etc. I also physically mark my drives with the exact same
names.
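For example, the two quoted data-drive entries could become something like the following (the mount-point names are illustrative only; note also that non-root filesystems conventionally use fsck pass 2 rather than 1):

```
UUID=900b5f0b-4f3d-4a64-8c91-29aee4c6fd07  /data   ext4  errors=remount-ro  0  2
UUID=d65867da-c658-4e35-928c-9dd2d6dd5742  /music  ext4  errors=remount-ro  0  2
```

Create the mount points first (`mkdir /data /music` as root) before running `mount -a`.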
> Before disconnecting the power to the drives,
Understand that if you disconnect the power cable to a motherboard,
drive, peripheral, etc., but not all the other cables (e.g. SATA cable),
you can fry electronics. If you're going to unplug something,
completely unplug it.
> I edited out their lines in fstab. I disconnected the power to sdb
> and sdc and started the computer. It booted for a few lines until it
> encountered the line starting with 'start job fgfor device disk by
> . . .' (at least that's what I jotted down). Then it went through the
> three HD's (two of which had the power unplugged) for 1 minute and 30
> seconds, and then went on to tell me that I could log on as root or
> Ctrl-D to continue. Ctrl-D didn't work, so I logged on as root.
You need to capture exact error messages and type them exactly into your
posts. Use a digital camera, smart phone, tablet PC, etc.
> At that point I did 'journalctl -xb and got 1237 lines which were
> meaningless to me.
Take a bunch of pictures, then RTFM, STFW, and/or post here.
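A few journalctl filters will cut those 1237 lines down to the relevant ones. This sketch assumes a systemd system and is guarded so it also runs where no journal is available:

```shell
# Show only error-priority-and-worse messages from the current boot.
if command -v journalctl >/dev/null 2>&1; then
    journalctl -b -p err --no-pager || true
    # The 1min30s waits are systemd device-unit timeouts; find them:
    journalctl -b --no-pager | grep -i 'timed out' || true
else
    echo "journalctl not available here"
fi
```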
> startx got me to the Root Desktop.
I avoid running X as root.
> The only option open to me at that point was to logout as root, the
> options of restart and shutdown were grayed out as being unavailable.
> At this point I admitted defeat, did 'shutdown -h now' in a terminal,
> and put the system back in its original state.
> Obviously, I'm missing something!
Does the machine work now?
If so, follow my suggestion above "Leave the old drive installed...".