# Alpine

An Ansible playbook for installing an Alpine container host with encrypted and compressed data. The playbook can run from any live Linux system with network access, SSH and a Python interpreter. The goal is to install the host even if the VPS provider doesn't support Alpine officially.

## How it works

The root (`/`) and boot (`/boot`) directories are unencrypted. Storage (`/srv`), including Docker volumes and static files, is encrypted using plain dm-crypt. With plain encryption you don't need to keep dumped LUKS headers around; just use a sufficiently long passphrase and don't lose it.

Both root and data are Btrfs partitions with zstd compression enabled.

After boot, you can open the `/srv` partition and start Docker. This is a manual process so far.

`/var/log` is mounted in memory, so logs are lost after rebooting. We're working towards sending every log to a remote database using syslog-ng, but OpenRC would need to be replaced, most likely by Monit.

### Why isn't LUKS used?

We've had really bad experiences with LUKS2 encryption, where data was completely lost even though we had external header dumps. The only reply we got about this on the dm-crypt mailing list was that LVM+LUKS setups are discouraged.

With plain encryption we don't have to keep extra metadata around, and according to some obscure comments in the dm-crypt documentation, it provides a 1:1 mapping between blocks, so hypothetically SSD storage can still perform trimming.

### Why encrypt everything except root?

To fully encrypt a VM we would need to use GRUB2 with a LUKS1-encrypted `/boot`. This is possible on a physical machine, where you can type the password somewhat comfortably, but on a remote VM it would have to be done over VNC. Arguably the host operator can access the VM's RAM, so it's game over from the start, but we still want to keep data encrypted at rest.

## Setting up the remote hosts

Boot any live system ISO. This playbook is tested with the latest Arch Linux. Network configuration should be automatic, but sometimes you need to do it manually, especially if the provider doesn't support DHCP or the gateway is outside the netmask, since that confuses some DHCP clients.

A Python interpreter is required to run Ansible, and you also need `parted` to partition the disk.

Once booted, change the root password from the VNC console:

```bash
echo "root:something you can type comfortably over VNC" | chpasswd -
```

Make sure you can log in via SSH as `root`. This is the default on the Arch Linux live ISO.

```bash
grep PermitRootLogin /etc/ssh/sshd_config
```

If it's not set to `yes`, edit the file and restart `sshd`.

## Setting up the local machine

Clone this repository.

```bash
cd ~/Projects/Sutty
git clone https://gitea.sutty.coop.ar/Sutty/ansible-alpine-host.git
cd ansible-alpine-host
```

Install dependencies.

```bash
# For Arch and Arch-based distros
pacman -Sy ansible make pwgen
```

Copy your SSH public key to the remote hosts so Ansible (and you) can log in without a password.

```bash
ssh-copy-id root@your.host.name
```

## Configuring the playbook

### General configuration

#### Syslog

Syslog-ng is used to centralize logging into a single node. Edit the IP address of your log server in `alpines.yml`:

```diff
 vars:
-  log_server: "EKU:MEN:IP:ADD::RESS"
+  log_server: "10.13.12.1"
```

### Create a vault password

```bash
make vault.key
```

> **Important:** Since the `vault.key` file contains the password for
> your vault, **don't commit it into git** and keep a copy in your
> password manager (KeePassXC is a good option). **Always make
> backups!**
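In essence, the rule just writes a long random password next to the playbook. The exact recipe lives in the `Makefile`; the following is only an illustrative sketch, assuming `pwgen` is installed and that the vault password is read from a file named `vault.key`:

```bash
# Illustrative only: generate a long random vault password by hand.
# The real `vault.key` rule in the Makefile may differ.
pwgen --secure 64 1 > vault.key
chmod 600 vault.key
```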
### Inventory

Add your hostnames to the `inventory.yml` file. You can add host variables there or in separate files in the `host_vars/` directory, following the [Ansible documentation](https://docs.ansible.com/ansible/latest/network/getting_started/first_inventory.html). We use `host_vars/`, so after adding a hostname, create a corresponding file.

```yaml
# inventory.yml
alpines:
  hosts:
    your.host.name:
    another.host.name:
```

```yaml
# host_vars/your.host.name.yml
---
# If you don't have DNS yet, set it to the IP address
ansible_host: "192.50.248.13"
# This isn't used but it's good to have it at hand
gateway: "192.99.148.254"
netmask: "32"
# Optional IPv6 configuration.
ip6: "2607:5300:60:555a::4:0"
gateway6: "2607:5300:60:55ff:ff:ff:ff:ff"
netmask6: "112"
# Device where the installation will be performed. If using VirtIO,
# it'll probably be `vda`
disk_device: "/dev/vda"
```

Finally, add an encryption key for the `/srv` partition and a password for the `root` user.

```bash
make encrypt-string host="your.host.name" name="key"
make encrypt-string host="your.host.name" name="root"
```

Do all of this once per host.

## Installing

Once everything's done, run `make`. You should be able to run it several times in a row without anything breaking (this is called idempotency).

```bash
make alpines
```

> We use `make` so we don't have to remember a lot of flags. You can
> open the `Makefile` to see what's actually being run.

## Post-install

Once the hosts are installed and rebooted, you can open their `/srv` partitions and start the Docker service. Move the hostname from the `alpines` group into the `dockers` group in your `inventory.yml`.

```diff
 alpines:
   hosts:
-    your.host.name:
 dockers:
   hosts:
+    your.host.name:
```

And run the `make` rule:

```bash
make dockers
```

## Development

We test this on VMs managed with `virt-manager`. Follow the GUI to create a VM of type "Generic OS", load an ISO (we use Arch Linux) and create a virtual hard drive of at least 1.1 GB.
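If you prefer the command line over the GUI, a roughly equivalent test VM can be created with `virt-install`, which ships with the virt-manager tooling. The name, sizes and ISO path below are illustrative, not part of the playbook:

```bash
# Illustrative sketch: create a small test VM that boots the Arch Linux ISO.
# Adjust the disk size, memory and ISO path to taste.
virt-install \
  --name alpine-host-test \
  --memory 1024 \
  --vcpus 1 \
  --disk size=2 \
  --cdrom ~/Downloads/archlinux-x86_64.iso \
  --os-variant generic
```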