<< Episode i: outline
>> Episode iii: docker recipes
In this episode: installing a secondary disk in the machine, making a system backup, installing Debian, and configuring essential services.
Adding a secondary disk to the machine
Why? Because I’d like to keep the old system around while tinkering with the new one, just in case something goes south. Also the old system is full of config files that are still useful.
To this end I grabbed a spare SSD I had lying around and popped it into the machine.
…except it was actually a bit more involved than I expected. (Skip ahead if you’re not interested in this.) The machine has two M.2 slots, with the old system disk occupying one of them, and 6 SATA ports on the motherboard, all in use by the 6 HDDs. The spare SSD would need a 7th SATA port.
I could go get another M.2 drive instead, but populating the second M.2 slot takes away one SATA channel, so I would be back to being one port short. 🤦
The solution was a PCI SATA expansion card which I happened to have around. Alternatively, I could’ve disconnected one of the less immediately useful arrays and freed up some SATA ports that way.
Taking a system backup
This was simplified by the fact that I used a single partition for the old system, so there are no separate partitions for things like /home, /var, /tmp, swap etc.
So:

- fdisk to wipe the partition table on the backup SSD and create one primary Linux partition across the whole disk. In fdisk that basically means hitting Enter-Enter-Enter and accepting all the defaults.
- cp -avx to copy the root filesystem to it.

Regarding this last point: I’ve seen many people recommend dd or clonezilla, but those also duplicate the block ID, which you then need to change or there will be trouble. Others go for rsync or piping through tar, with complicated options, but cp -ax is perfectly fine for this.
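For reference, those two steps look roughly like this on the command line. This is just a sketch: /dev/sdb, /dev/sdb1 and /mnt/backup are assumed names for the backup SSD, its partition and a temporary mount point.

fdisk /dev/sdb                 # o to wipe the table, n for a new partition (accept defaults), w to write
mkfs.ext4 /dev/sdb1            # put a filesystem on the new partition
mkdir -p /mnt/backup
mount /dev/sdb1 /mnt/backup
cp -avx / /mnt/backup          # -a preserve attributes/links, -v verbose, -x stay on this filesystem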
A backup is not complete without making the SSD bootable as well.
- blkid to find the UUID of the SSD partition and change it in the SSD’s /etc/fstab.
- There’s also a /swapfile in the root, which I had created with dd and formatted with mkswap, and which cp copied along. It’s mounted via /swapfile swap swap defaults 0 0, so that entry stays the same.
- mount --bind /dev, /sys and /proc into the SSD’s root, chroot in, and reinstall GRUB.

Here’s an example for that last point, taken from this nice reddit post:
mkdir /mnt/chroot
mount /dev/SSD-partition-1 /mnt/chroot
mount --bind /dev /mnt/chroot/dev
mount --bind /proc /mnt/chroot/proc
mount --bind /sys /mnt/chroot/sys
mount --bind /tmp /mnt/chroot/tmp
chroot /mnt/chroot
Verify /etc/grub.d/30_os-prober or similar is installed and executable
Update GRUB: update-grub
Install GRUB: grub-install /dev/SSD-whole-disk
Update the initrd: update-initramfs -u
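For the fstab and swapfile points earlier, this is roughly what they amount to; the UUID, device name and swap size here are placeholders:

blkid /dev/sdb1                                  # note the UUID of the backup partition
# point the root entry in the SSD's /etc/fstab at that UUID, e.g.:
# UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx / ext4 errors=remount-ro 0 1
# the swapfile itself had been made on the old system along these lines:
dd if=/dev/zero of=/swapfile bs=1M count=4096    # 4 GiB, size is just an example
chmod 600 /swapfile
mkswap /swapfile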
I verified that I could boot the backup disk before proceeding.
Installing Debian stable
Grabbed an amd64 ISO off their website and put it on a flash stick. I used the graphical gnome-disk-utility application for that, but you can also do dd bs=4M if=image.iso of=/dev/stick-device. The usual warnings apply: make sure it’s the right /dev, unmount any pre-existing flash partitions first, etc.
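If you go the dd route, it’s worth double-checking which device the stick actually is before writing, and flushing the buffers afterwards; a quick sketch:

lsblk -o NAME,SIZE,MODEL,MOUNTPOINT   # identify the stick by size/model before pointing dd at it
# ...dd as above...
sync                                  # make sure all writes hit the stick before unplugging it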
Booting from the flash stick should have been uneventful, but the machine would not see it in the BIOS or in the quick-boot BIOS menu, so I had to poke around and disable some kind of security feature (the details will be of no use to you on your BIOS, so good luck with that).
During the install I selected the usual things like keymap, timezone and the disk to install to (the M.2, not the SSD backup with the old system), chose to use the whole disk in one partition as before, created a user, disallowed login as root, and requested explicit installation of an SSH server. Networking works via DHCP so I didn’t need to configure anything.
After install
SSH’d into the machine using user + password, copied the public SSH key from my desktop machine into ~/.ssh/authorized_keys on the Debian box, then disabled password logins in /etc/ssh/sshd_config and did a service ssh restart. Made sure I could still log in with the public key.
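In concrete terms that boils down to something like the following; ssh-copy-id is one convenient way to get the key over, and the hostname is made up:

ssh-copy-id user@nas.lan        # run on the desktop; appends the public key to ~/.ssh/authorized_keys on the server
# then on the server, in /etc/ssh/sshd_config:
#   PasswordAuthentication no
service ssh restart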
Installed ntpdate and added
0 * * * * /usr/sbin/ntpdate router.lan >/dev/null 2>&1
to the root crontab (cron runs jobs through /bin/sh, so the redirection is spelled out the portable way).
Mounting RAID arrays
The RAID arrays are MD (the Linux software RAID driver), so they should already have been detected by the kernel. A quick look at /proc/mdstat and /etc/mdadm/mdadm.conf confirms that the arrays have indeed been detected and configured and are running fine.
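The quick checks in question, with md0 standing in for whatever your arrays are called:

cat /proc/mdstat          # every array should be listed and active
mdadm --detail /dev/md0   # per-array view: state, member disks, degraded/rebuilding status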
All that’s left is to mount the arrays. After creating the mount point directories under /mnt I added them to /etc/fstab with entries such as this:
UUID=array-uuid-here /mnt/nas/array ext4 rw,nosuid,nodev 0 0
…and then systemctl daemon-reload
to pick up the new fstab right away, followed by a mount -a
as root to mount the arrays.
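The UUID for each fstab entry comes from blkid on the md device, and df is a quick way to confirm the mount actually happened (md0 and the path are examples):

blkid /dev/md0            # gives the UUID to paste into /etc/fstab
df -h /mnt/nas/array      # sanity check that the array is mounted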
Publishing the arrays over NFS
Last step in restoring basic functionality to my LAN is to publish the mounted arrays over NFS.
I installed aptitude and used it to poke around the Debian packages for a suitable NFS server, then installed nfs-kernel-server.
Next I added the arrays to /etc/exports
with entries such as this:
/mnt/nas/array desktop.lan(rw,async,secure,no_subtree_check,mp,no_root_squash)
And after a service nfs-kernel-server restart
the desktop machine was able to mount the NFS shares without any issues.
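Two quick sanity checks that the exports are live; exportfs runs on the server, showmount can be run from a client (hostname as in the fstab line below):

exportfs -v        # list active exports and their effective options
showmount -e nas   # query the export list the way a client sees it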
For completeness’ sake: the desktop machine runs Manjaro, uses nfs-utils, and mounts the NFS shares in /etc/fstab like this:
nas:/mnt/nas/array /mnt/nas/array nfs vers=4,rw,hard,intr,noatime,timeo=10,retrans=2,retry=0,rsize=1048576,wsize=1048576 0 0
The first one (with nas: prefix) is the remote dir, the second is the local dir. I usually mirror the locations on the server and clients but you can of course use different ones.
All done
That’s essential functionality restored, with SSH and NFS + RAID working.
In the next episode I will attempt to install something non-essential, like Emby or Deluge, in a docker container. I intend to keep the system installed on the metal very basic: just ssh, nfs and docker on top of the barebones Debian install. Everything else should go into docker containers.
My goal when working with docker containers: the idea is that if I ever lose the system disk, I should be able to simply reinstall a barebones Debian and redo the containers using the stored configs, with all the important mutable files living on RAID.
See you next time! Meanwhile I’d appreciate any comments about what I could’ve done better, as well as suggestions for next steps.