Create a Custom Linux Setup for WSL2

I just came across the amazing Windows Subsystem for Linux 2 (WSL2) feature, which allows you to create a consistent Linux environment on Windows.

If you have not come across WSL2 yet, Microsoft advertises it as:

The Windows Subsystem for Linux lets developers run a GNU/Linux environment – including most command-line tools, utilities, and applications – directly on Windows, unmodified, without the overhead of a traditional virtual machine or dualboot setup.

Although Microsoft provides some Linux distributions through its Windows Store, this leaves open the question of what you should do if your favorite distro is not available through this channel, and how to achieve a consistent setup.

The great news is that you can import an exported container image into WSL2 as your own distro. Microsoft’s documentation Import any Linux distribution to use with WSL shows this with vanilla CentOS 8. But let’s take this a little further and add customizations.

We are going to use CentOS Stream 8 as our base distribution. Then we install some additional tooling, including zsh and Oh My Zsh. And of course, we do all of this in a single Dockerfile.

FROM quay.io/centos/centos:stream8

# Name and password of the default WSL user, passed in at build time
ARG USERNAME
ARG PASSWORD

# Update the base system, enable EPEL (needed for powerline-fonts),
# and install the extra tooling
RUN dnf update -y && \
    dnf install -y epel-release && \
    dnf install -y \
        git \
        man \
        passwd \
        powerline-fonts \
        sudo \
        util-linux-user \
        zsh && \
    dnf clean all && \
    # Create the user, make it the default WSL user, and set its password
    adduser -G wheel ${USERNAME} && \
    echo -e "[user]\ndefault=${USERNAME}" >> /etc/wsl.conf && \
    echo ${PASSWORD} | passwd --stdin ${USERNAME}

# Install Oh My Zsh as the new user
USER ${USERNAME}
RUN curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh | sh -
  1. Build the image: docker build -t wsl-centos:stream8 --build-arg USERNAME=veit --build-arg PASSWORD=password .

  2. Now we need a container instance which we can export afterwards. Create one: docker run -t wsl-centos:stream8 ls /

  3. List all container instances: docker ps -a.

    (Screenshot: terminal with a list of container instances.)

  4. Copy the container ID to the clipboard.

  5. Export the container to a tar file: docker export --output="centos-stream8.tar" [CONTAINER_ID]. Paste the container ID from the previous step into this command.
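
If you want this to be repeatable, the five steps above can be collapsed into a small script. A minimal sketch, assuming the image tag and file name from above (it uses docker create instead of docker run, so the container ID does not have to be copied by hand):

#!/usr/bin/env sh
set -e

# Build the image with the default WSL user baked in
docker build -t wsl-centos:stream8 \
    --build-arg USERNAME=veit \
    --build-arg PASSWORD=password .

# Create a stopped container instance and capture its ID
CONTAINER_ID=$(docker create wsl-centos:stream8)

# Export the container's filesystem to a tar file for wsl --import
docker export --output="centos-stream8.tar" "${CONTAINER_ID}"

# Remove the temporary container again
docker rm "${CONTAINER_ID}"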

Now let’s configure the WSL2 environment on Windows:

  1. Windows needs a storage location for your WSL distro. The storage location should be fast and have enough space for the virtual disk that will be created. I will put this in my home directory: md C:\Users\veit\wsldistrostorage\centos-stream8

  2. We can import the tar file into WSL: wsl --import [DISTRO NAME] [STORAGE LOCATION] [FILE NAME], e.g. wsl --import CentOS-Stream8 C:\Users\veit\wsldistrostorage\centos-stream8 centos-stream8.tar

  3. Finally, let’s start the new WSL distro: wsl -d CentOS-Stream8
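
Put together, the Windows side boils down to three commands (run in PowerShell; the paths and names are from my setup, adjust them to yours):

# Create the storage location for the distro's virtual disk
md C:\Users\veit\wsldistrostorage\centos-stream8

# Import the exported tar file as a new WSL distro
wsl --import CentOS-Stream8 C:\Users\veit\wsldistrostorage\centos-stream8 centos-stream8.tar

# Start the new distro
wsl -d CentOS-Stream8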

Of course, this works with any Linux distribution for which you can find a Docker base image. And the nice part is that the Dockerfile gives you a repeatable setup.

Enforce Non-Root Pods with Pod Security Standards

In my post Friends don’t let friends run containers as root, I took a simplified view that Kubernetes does not add any security policies. I went for this simplified statement because, when I wrote the post, Kubernetes 1.22 was moving away from Pod Security Policies, and there had not been any replacement in a released version.

Beginning with Kubernetes 1.23, Pod Security Standards were introduced to replace Pod Security Policies. Kubernetes administrators can choose between the policy profiles privileged, baseline, and restricted. The profiles are cumulative, which means that baseline contains all the rules from privileged, and restricted contains all rules from privileged and from baseline.

Profile     Description
----------  -----------
Privileged  Unrestricted policy, providing the broadest possible level of permissions. This policy allows for known privilege escalations.
Baseline    Minimally restrictive policy, which prevents known privilege escalations. Permits the default (minimally specified) Pod configuration.
Restricted  Heavily restricted policy, following current Pod hardening best practices.

See Kubernetes - Pod Security Standards for more details.

Note: On the Kubernetes website, both the terms policy profile and level appear. The term policy profile refers to a set of policies, whereas the term level is used when a Pod Security Standard is enabled on a namespace. I am going to follow this pattern in this post as well.

Select the Security Profile to Prevent Pods Running as Root

To prevent any Pod from running as root, we have to select the profile restricted. Restricted requires the fields

  • spec.securityContext.runAsUser
  • spec.containers[*].securityContext.runAsUser
  • spec.initContainers[*].securityContext.runAsUser
  • spec.ephemeralContainers[*].securityContext.runAsUser

to be either null or set to a non-zero value. This is precisely what we want. Restricted has more rules you need to comply with. Please review the list of policies included in the restricted profile to get a better understanding of the other rules.

As you can see, even with the profile restricted, the requirement is only that the container must not run with user ID 0. This means that you could run the pod with user ID 1, 500, or 1827632. And if you set runAsUser to null, the user ID defined in the image will be used. If that is the root user, you will get an error message like “Error: container has runAsNonRoot and image will run as root”.
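
To make this concrete, here is a minimal sketch of a Pod manifest that passes the restricted checks. The name, image, and user ID are placeholders I picked for illustration; besides a non-root user, restricted also demands a seccomp profile, no privilege escalation, and dropping all capabilities:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: restricted-demo                            # hypothetical example name
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000                                # any non-zero user ID satisfies the rule
    seccompProfile:
      type: RuntimeDefault
  containers:
    - name: app
      image: nginxinc/nginx-unprivileged:stable    # example image that runs as non-root
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]
EOF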

Enabling Pod Security Standards

Pod Security Standards are not enforced by default. You need to configure them per namespace by applying a label to the namespace. The label defines which policy profile should be used and what Kubernetes does with a Pod when a policy violation is detected. The mode can be one of the following:

Mode     Description
-------  -----------
enforce  Policy violations will cause the pod to be rejected.
audit    Policy violations will trigger the addition of an audit annotation to the event recorded in the audit log, but are otherwise allowed.
warn     Policy violations will trigger a user-facing warning, but are otherwise allowed.

The label you apply to the namespace takes the form pod-security.kubernetes.io/<MODE>: <LEVEL>. To require all pods to comply with the baseline level, you need to apply the label pod-security.kubernetes.io/enforce: baseline. If you also want to warn users and write violations to the audit log, you can add labels for those modes as well. You end up with the following namespace manifest:

apiVersion: v1
kind: Namespace
metadata:
  name: default
  labels:
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/audit: baseline
    pod-security.kubernetes.io/warn: baseline
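
If you prefer not to edit the namespace manifest, the same labels can be applied imperatively with kubectl. A short sketch, using the default namespace from the manifest above:

# A server-side dry run first reports which existing pods would violate the level
kubectl label --dry-run=server --overwrite namespace default \
    pod-security.kubernetes.io/enforce=baseline

# Then apply enforce, audit, and warn at the baseline level
kubectl label --overwrite namespace default \
    pod-security.kubernetes.io/enforce=baseline \
    pod-security.kubernetes.io/audit=baseline \
    pod-security.kubernetes.io/warn=baseline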

Conclusion

The Pod Security Standard is an excellent next step after Kubernetes deprecated Pod Security Policies in Kubernetes v1.21. Currently, the feature is in the beta stage, and an administrator has to configure it per namespace. Once you begin using profiles baseline and restricted, you have an effective tool to make the cluster more secure.

The recommendation I give in my post Friends don’t let friends run containers as root changes only slightly for Kubernetes:

  • Kubernetes:
    • For running a plain container in Kubernetes, you don’t have to do anything. Kubernetes starts the container with the user ID set by the USER instruction in the Dockerfile.
    • The Deployment manifest might need additional configurations depending on the selected Pod Security Standard profile.

How Snapshots Saved My Time Machine Backups

Until last year, I kept my Time Machine backups on a USB drive next to my computer. And although everything worked fine, I didn’t feel comfortable with so much data stored on a single disk. Hence, during summer 2020, I bought myself a DiskStation DS1520+ to put my Time Machine backups on a much more secure and reliable solution. The DS1520+ supports RAID, so a single disk failure would not cause my data to be lost. Synology has excellent documentation on how to enable Time Machine backups to a NAS over SMB.

And everything ran very smoothly until I upgraded to macOS Monterey. Afterward, the Time Machine backups gave me some headaches. Quite often, macOS told me it could not perform any new backups. The error message was:

Time Machine detected that your backups on "mynas.local" cannot be reliably restored. Time Machine must erase your existing backup history and start a new backup to correct this. (Dialog buttons: "Remind Me Tomorrow" and "Erase Backup History")

This wasn’t an issue with the hard disks; the disks were fine. But for some reason, the Time Machine backups became corrupted. Fortunately, I remembered that I had enabled Btrfs snapshots for most folders on my NAS. This allowed me to go back to a point in time when the Time Machine backup was still intact. In the end, I only lost the incremental backups of a single day.

Btrfs snapshots are a life-saver. I have snapshots enabled for all my shared folders that contain important files, e.g.:

  • Photos
  • Videos
  • Personal Files
  • Time Machine Backups

Snapshots are quick to take, easy to restore, and pretty lightweight. For my 1,800 GB of Time Machine backups, the snapshots take up less than 40 GB (~2.2%). And that is why you should enable them as soon as possible for your valuable files.
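
DSM drives all of this through the Snapshot Replication package described below, but underneath these are ordinary Btrfs operations. For illustration, a minimal sketch of taking and restoring a snapshot on a plain Linux Btrfs volume (the paths and snapshot names are hypothetical; DSM's internal layout differs):

# Take a read-only snapshot, named with a GMT timestamp
btrfs subvolume snapshot -r /volume1/photos \
    "/volume1/.snapshots/photos-$(date -u +%Y-%m-%d_%H%M)"

# Restore: move the damaged subvolume aside and create a
# writable snapshot from the read-only one
mv /volume1/photos /volume1/photos.broken
btrfs subvolume snapshot /volume1/.snapshots/photos-2021-11-01_0000 /volume1/photos

# Delete the damaged copy once everything checks out
btrfs subvolume delete /volume1/photos.broken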

Enabling Btrfs Snapshots

  1. Open the Synology DiskStation Manager and check in the Package Center under Installed that the package Snapshot Replication is available. If it is not installed yet, install it before proceeding.
  2. Open Snapshot Replication, select Snapshots and the Shared Folder you want to configure the snapshots for.
    (Screenshot: the Snapshot Replication window, with Snapshots selected in the sidebar and the shared folder for which snapshots should be activated selected.)
  3. Select Settings and configure the settings for schedule, retention, and advanced.
    The setting for the snapshot schedule depends on how often your data changes and what your accepted amount of data loss is. For most of my shared folders, it is more than enough to take a snapshot every 24 hours.

    I set the number of latest snapshots to keep to the maximum. If you don’t set up a retention policy, snapshots will stop once you have reached the maximum of 1024.

    I use GMT time zone to name the snapshots.

  4. Click OK to complete the configuration.

Restoring from a Btrfs Snapshot

  1. Open Snapshot Replication, select Recovery and the Shared Folder you want to restore.
    (Screenshot: the Snapshot Replication window, with Recovery selected in the sidebar and the shared folder selected that you want to roll back to a snapshot.)
  2. Open Recover and select a snapshot from a point in time when everything was still ok. The more recent the snapshot, the smaller the data loss.
    (Screenshot: after clicking Recover, a list of available snapshots of the selected shared folder is displayed.)
  3. Open the menu Action and select Restore to this Snapshot.
  4. In the dialog, select Take a snapshot before restoring to create a snapshot of the current state, so that you can always return to the point in time before the recovery took place.
    (Screenshot: dialog box with the option "Take a snapshot before restoring" enabled.)