Installing FreeBSD on OVH

While OVH has a number of Linux-based options for their low-end VPS offerings, I wanted to try installing FreeBSD. As far as I can tell, OVH doesn't offer a way to supply a bootable .iso or .img installer image for their VPS offerings (unlike their dedicated server instances). Fortunately, their VPS offers a recovery console with SSH access along with the use of dd, gzip/gunzip, and xz. That makes it possible to SSH into the recovery console and write a disk image directly to the virtual drive, a trick I learned from murf.

The following was all done on a fairly stock FreeBSD 11 install.

My first problem was that the official downloadable .raw hard-drive image was something like 20.1GB, just slightly larger than my available 20.0GB of space, so I couldn't write the image provided on the FreeBSD site directly to the drive.
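
If you want to confirm the mismatch yourself before going down this road, xz can report the uncompressed size of the official VM image without extracting it. The filename below is just an example; substitute whichever release you downloaded.

user@localhost$ xz --list FreeBSD-11.1-RELEASE-amd64.raw.xz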

Create a local image file

First, I need to create a drive-image file of the right size.

user@localhost$ dd if=/dev/zero of=$HOME/freebsd.img bs=20m count=1k

This will create a 20GB image file that should fit exactly in the available space on the OVH VPS. If you're using the lowest-end VPS, change the block size to bs=10m to make a 10GB image instead.
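
As an aside, if you'd rather not wait for dd to write out 20GB of zeros, a sparse file of the same size should also work as a backing file for the next step; truncate(1) is in the base system. I used dd, so treat this as an untested shortcut (use 10G for the smaller VPS):

user@localhost$ truncate -s 20G $HOME/freebsd.img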

Create a device for the file

In order to install to this file as if it were a disk, FreeBSD needs to expose it as a device. This can be done with the mdconfig command as root.

user@localhost$ su -
Password: ********
# mdconfig -f ~user/freebsd.img -u 0

This will create an md0 device to which FreeBSD can be installed.
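
Before pointing the installer at it, it's worth confirming the memory disk is attached and is the size you expect: mdconfig -l -v lists attached memory disks with their backing files, and diskinfo reports the device's media size.

# mdconfig -l -v
# diskinfo -v md0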

Install FreeBSD

With the md0 device created, we can run bsdinstall and choose md0 as our target drive.

If your current setup uses ZFS and already has pools named bootpool and zpool, and you plan to use the automated "Guided root on ZFS" install, you'll need to instruct bsdinstall to use pool names that don't conflict with your existing pools. While the installer lets you change the main pool name, it doesn't offer a way to change the name of the boot pool without setting an environment variable. I recommend just setting them both in the environment for simplicity.
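
If you're not sure which pool names are already in use on the build host, a quick check before picking replacements:

# zpool list -o name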

# export ZFSBOOT_POOL_NAME=myzpool
# export ZFSBOOT_BOOT_POOL_NAME=mybootpool
# bsdinstall

For the Keymap Selection, continue with the default keymap unless you have reason not to.

For the hostname, this post uses "ovh".

For your mirror selection, choose something geographically close. I arbitrarily chose one of the Primary FTP sites.

Since my target OVH instance has 4GB of RAM, I opt for the "Guided root on ZFS" install option. If I only had the 2GB VPS, I might go with UFS instead.

For options, I use:

  • Specify the pool type as a stripe and add the md0 disk to the pool
  • Since I only have 20GB of total disk space to work with, reduce my swap size from the default 2GB down to either 1GB or 512MB
  • Specify that my disks are encrypted
  • Leave the partition scheme set to GPT, keep 4K sectors, and don't bother mirroring or encrypting my swap

And yes, when the installer asks for final confirmation, I do want to destroy the contents of md0, which should just be an empty file.

After the installer fiddles with the disks, partitions, and pools, it will ask which distribution sets you want to install. Again, with only ~20GB of space to work with, I strip this down to the basics.
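
Once the sets are extracted, it can be worth checking how much of the 20GB the base install actually consumed. With the guided ZFS install still mounted under /mnt (e.g. from the post-install shell), something like this, using the pool name chosen above, shows per-dataset usage:

# zfs list -r -o name,used,avail myzpool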

Summary of all commands

# Make note of existing values for
#   $EXTERNAL_IP
#   $GATEWAY_IP
#   $DNS_NAMESERVER
#   $DNS_SEARCH

# if using ZFS on both the host where you're building the .raw image
# and within the image itself, set the pool names so they don't
# conflict if you already have pools named "bootpool" and "zpool"
export ZFSBOOT_POOL_NAME=ovhzpool
export ZFSBOOT_BOOT_POOL_NAME=ovhbootpool

bsdinstall
# can do a UFS or ZFS install
# skip networking for now
# but do enable sshd and ntpd if you want them

# from https://wiki.freebsd.org/MasonLoringBliss/LegacyZFSandGELI
# for my local install, using a shell to do the disk layout
gpart destroy -F md0
gpart create -s gpt md0
gpart add -t freebsd-boot -s 128k -l boot md0
gpart add -t freebsd-zfs -l ${ZFSBOOT_POOL_NAME} -a 1M md0
# optionally add -T to skip passing TRIM commands to the SSD
geli init -e AES-XTS -l 128 -s 4096 -b -g gpt/${ZFSBOOT_POOL_NAME}
geli attach gpt/${ZFSBOOT_POOL_NAME}
zpool create -R /mnt -O canmount=off -O mountpoint=/ -O atime=off -O compression=on ${ZFSBOOT_POOL_NAME} gpt/${ZFSBOOT_POOL_NAME}.eli
zfs create -o mountpoint=/ ${ZFSBOOT_POOL_NAME}/ROOT
zpool set bootfs=${ZFSBOOT_POOL_NAME}/ROOT ${ZFSBOOT_POOL_NAME}
zfs create ${ZFSBOOT_POOL_NAME}/home
zfs create -o canmount=off ${ZFSBOOT_POOL_NAME}/usr
zfs create ${ZFSBOOT_POOL_NAME}/usr/jails
zfs create ${ZFSBOOT_POOL_NAME}/usr/local
zfs create ${ZFSBOOT_POOL_NAME}/usr/obj
zfs create ${ZFSBOOT_POOL_NAME}/usr/src
zfs create ${ZFSBOOT_POOL_NAME}/usr/ports
zfs create ${ZFSBOOT_POOL_NAME}/usr/ports/distfiles
zfs create -o canmount=off ${ZFSBOOT_POOL_NAME}/var
zfs create ${ZFSBOOT_POOL_NAME}/var/log
zfs create ${ZFSBOOT_POOL_NAME}/var/tmp
zfs create ${ZFSBOOT_POOL_NAME}/tmp

# do add a new user that you'll use to log in,
# and make sure they're added to the additional group "wheel"

# at the end of bsdinstall, enter a shell

# edit /etc/rc.conf to add the lines
ifconfig_vtnet0="inet $EXTERNAL_IP netmask 255.255.255.255 broadcast $EXTERNAL_IP"
static_routes="net1 net2"
route_net1="$GATEWAY_IP -interface vtnet0"
route_net2="default $GATEWAY_IP"
ifconfig_vtnet0_ipv6="inet6 $IPV6ADDR prefixlen 64"
ipv6_defaultrouter="$IPV6ROUTE"
# if you plan to use jails, don't let syslogd listen on all addresses
syslogd_flags="-ss"
# and set up a loopback interface to use
cloned_interfaces="lo1"

# edit /etc/resolv.conf
search vps.ovh.ca
nameserver 213.186.33.99

# edit /etc/ssh/sshd_config
ListenAddress $EXTERNAL_IP
PermitRootLogin no

# if ZFS, unmount all the new ZFS items under /mnt
# (the first command just prints the umount commands; the second runs them)
mount | awk '/\/mnt/{print "umount " $3}' | sort -r
mount | awk '/\/mnt/{print "umount " $3}' | sort -r | sh

# ZFS still has these pools active so disconnect them
zpool export $ZFSBOOT_POOL_NAME
zpool export $ZFSBOOT_BOOT_POOL_NAME

# if encrypted, detach the GELI devices if needed
# (the exact provider name depends on your layout,
#  e.g. gpt/${ZFSBOOT_POOL_NAME}.eli for the manual layout above)
geli detach /dev/md0.eli

# done making changes, so unmount it
umount /mnt
mdconfig -d -u 0

# gzip it up to save bandwidth (~400MB instead of 20GB, or roughly 2%)
# --keep keeps the original image around in case we need it again
gzip --keep freebsd.img

# restart the VPS in rescue mode

# ssh into your VPS rescue at $IP
ssh root@$IP
# unmount the drive you'll be writing to
root@vps: umount /mnt/vdb1
root@vps: exit

# upload the image to the drive
ssh root@$IP "gunzip | dd of=/dev/vdb bs=1M" < freebsd.img.gz
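
As an optional sanity check after the write finishes, you can compare a hash of the uncompressed local image against a hash of the same-sized prefix of the target drive. This assumes the OVH rescue environment ships the GNU sha256sum utility (most Linux rescue images do); adjust the block count if you built a 10GB image.

# on the local FreeBSD box (freebsd.img is still present thanks to gzip --keep)
sha256 -q freebsd.img

# on the rescue console: read back exactly the 20480 1M blocks that were
# written for the 20GB image (bs=20m count=1k) and hash them
ssh root@$IP "dd if=/dev/vdb bs=1M count=20480 | sha256sum"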