Notes: Linux Containers

Linux Containers are “the new hotness”. If there’s a feature Ubuntu has over other distros, it’s containers built in. LXD is the modern tool for managing containers. It can be installed elsewhere, but that’s not how we roll here.

LXD containers are not VMs, but are designed to work a lot like them. The key difference is that they are fully native, and they access hardware directly (well, networking aside). You can even grant a container access to a GPU. Containers are a very flexible tool for everyday Linux use.

Ubuntu 16.04 ships with LXD 2.0, but for some of the advanced features, you’re going to want the latest.

Working with containers is A LOT like working with Vagrant boxes.
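A minimal session looks something like this (the container name web1 is just an example):

```shell
# Launch a new Ubuntu 16.04 container (the image is downloaded on first use)
lxc launch ubuntu:16.04 web1

# List containers and their IP addresses
lxc list

# Get a shell inside the container (much like `vagrant ssh`)
lxc exec web1 -- bash

# Stop and delete it when done
lxc stop web1
lxc delete web1
```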


It’s also worth noting that while LXD only runs on Linux, the client (lxc) can be run on other OSes, including Windows and Mac. What this lets you do is set up remote connections to LXD hosts. I’m not going to cover remotes here, but infrastructurally speaking, LXD can be used from other platforms (even just other Linux machines).

Disabling IPv6


You can optionally disable IPv6 support in LXD.
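On LXD 2.3 and later, the bridge is managed through the lxc network commands; a sketch:

```shell
# Turn off IPv6 addressing and NAT on the LXD bridge (LXD 2.3+)
lxc network set lxdbr0 ipv6.address none
lxc network set lxdbr0 ipv6.nat false
```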

Where lxdbr0 is the bridge LXD created during setup.

Frankly though, this doesn’t change much. I thought it did more, but the containers themselves still pick up IPv6 addresses; you just can’t see them via lxc list anymore.


Modern Kernels on LTS Ubuntu

Starting with Ubuntu 16.04 LTS, you are able to make your Ubuntu install subscribe to the latest changes to the Linux kernel. There are 3 channels you can subscribe to:

  • GA-16.04 (General Availability)
  • HWE-16.04 (Hardware Enablement)
  • HWE-16.04-Edge (Cutting Edge Hardware Enablement)

By default Ubuntu puts you on the GA track, meaning in Ubuntu 16.04’s case, you’re getting Kernel 4.4.x. Switching to HWE, you get a current Kernel. At the time of this writing, that’s 4.10.x.

HWE channels are good up until the next major LTS release of Ubuntu. Then you effectively get put on the GA track of the now current LTS release (i.e. 18.04 starting April 2018). It is then expected you’ll upgrade to the new LTS release, where you can begin again, switching to the next HWE series.

More details:

How to install HWE:
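On a server, installing the HWE kernel meta-package is enough (swap in the -edge variant for the Edge channel):

```shell
# Switch to the HWE kernel track on Ubuntu 16.04
# (use linux-generic-hwe-16.04-edge for the Edge channel)
sudo apt-get install --install-recommends linux-generic-hwe-16.04
```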

Then reboot to apply the change.

I’m not 100% sure how necessary this is, but I was under the impression I read something that called for a newer-than-4.4.x kernel. Who knows. I’ll make a note here if I find it again.

Canonical also offers a live Kernel patching service.

Notable because rebooting is not required, but beyond 3 machines you need to start paying for a support plan. Also (and this is key), the livepatch service is limited to GA kernels. Yes, no HWE kernels via livepatch.

Linux Network Interfaces

/etc/network/interfaces is a key file on Ubuntu. It’s not even specific to LXD, but Linux in general. To create advanced Linux networking configurations, from bridges to VLANs, you do it here.

A default Ubuntu Server install will give you a relatively simple configuration. The ever important loopback interface (lo), and a list of ethernet adapters.
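For reference, a stock Ubuntu 16.04 Server install generates something close to this (the adapter name varies by machine; this is a sketch, not your exact file):

```
# Files in this directory are also read
source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet dhcp
```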

WiFi and some specialty services (VMs) are handled by other applications. Interestingly, my Ubuntu Desktop machine’s interfaces file is far more bare.

It looks like in Desktop Ubuntu, another service is being run to support plug-and-play networking.

Preparing the Host machine for VLANs and Bridging

Next we need to enable 802.1Q support.
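That means installing the VLAN (and, while we’re at it, bridge) userland tools and making sure the 802.1Q kernel module loads on boot:

```shell
# Install VLAN and bridge userland tools
sudo apt install vlan bridge-utils

# Load the 802.1Q kernel module now, and on every boot
sudo modprobe 8021q
echo 8021q | sudo tee -a /etc/modules
```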

Now would be a good time to move your computer to a Tagged port on your Managed Router.


Bridges, LANs, and VLANs

The bulk of our work is going to be in /etc/network/interfaces

(NOTE: Your default interface might not be named eth0, but what’s important is that you use the correct name anywhere eth0 is used)

Starting with our bare-bones interfaces file above, we’re going to add a VLAN.

At the bottom of the file do the following.
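A sketch, using VLAN ID 10 as an example (substitute your interface name for eth0):

```
auto eth0.10
iface eth0.10 inet manual
    vlan-raw-device eth0
```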

This creates a VLAN interface. Note the .10 appended after the interface name. It is true that VLANs are a bit of a hack, but they’re quite useful.

In theory, the above config should work without the vlan-raw-device line (because the interface name matches), but alas it didn’t want to work for me. It took a lot of iteration and reading to really nail down the right way to do this. 😉

Another thing I want to point out is the “manual” keyword above. Depending on your needs, you could have alternatively gone with “static” or “dhcp”, but if you want to hook a VLAN up to an LXD container, you should set this to “manual”.

Speaking of LXD, we’re not done.

LXD cannot directly connect to an interface (at least not without fully taking it over). Also its VLAN support is limited.

So what you can do instead is create a bridge!

Creating a bridge is extremely similar. To bridge the main interface, letting your LXD container connect to the LAN with its own IP and MAC address, do the following.
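A sketch (again assuming the physical interface is eth0):

```
auto br0
iface br0 inet dhcp
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
```

Note that once eth0 is enslaved to the bridge, its own stanza should be set to manual; br0 now carries the host’s IP.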

With our bridge named br0, we can safely connect an LXD container to it.

Bridging a VLAN is much the same.
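A sketch, bridging the eth0.10 VLAN interface from earlier:

```
auto br0.10
iface br0.10 inet manual
    bridge_ports eth0.10
    bridge_stp off
    bridge_fd 0
```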

Notably, we have bridged the VLAN interface we created.

You can name your bridges whatever you want. Bridges can be re-used across multiple LXD containers, hence the generic names “br0” and “br0.10”. There’s nothing special about the name “br0”. You don’t even need a “br0” to exist if all you want is a “br0.10”.

Once an interface exists in /etc/network/interfaces, you can do the following to start the interface.
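For example, for the interfaces defined above:

```shell
# Bring up each new interface by name
sudo ifup eth0.10
sudo ifup br0
sudo ifup br0.10
```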

Put all together.
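Assembled, a complete /etc/network/interfaces along these lines (eth0 and VLAN ID 10 are examples):

```
auto lo
iface lo inet loopback

# Physical interface, now enslaved to br0
auto eth0
iface eth0 inet manual

# VLAN 10 on eth0
auto eth0.10
iface eth0.10 inet manual
    vlan-raw-device eth0

# Bridge carrying the untagged LAN (the host's IP lives here now)
auto br0
iface br0 inet dhcp
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0

# Bridge carrying VLAN 10, for containers
auto br0.10
iface br0.10 inet manual
    bridge_ports eth0.10
    bridge_stp off
    bridge_fd 0
```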

Instead of bringing up all the interfaces manually, you can just reboot. That’s the point anyway. Make sure they work on boot.

You can check the status of your VLANs easily.
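Two quick ways to inspect them:

```shell
# Summary of all VLAN interfaces and their IDs
cat /proc/net/vlan/config

# Detailed view of one interface (note the "vlan" line)
ip -d link show eth0.10
```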

LXD Containers

Working with containers is extremely easy. We’ve covered the basic syntax above. Once you’ve bash’d into a container, you treat it like a standalone Linux machine (sudo apt update; sudo apt upgrade).

I didn’t bother with ZFS myself, though many docs seem to recommend it. The dir storage backend is really nice, meaning the files exist on your normal file system (so you can access files from the host without starting the container or mounting a volume). I ran into conflicting information about ZFS, or more specifically, analysis suggesting it wasn’t actually worthwhile. That was in a context outside of LXD containers, though. There may be some advantage here, or not. It’s hard to know without trying it and benchmarking.

Let’s move on to the next key feature of LXD: profiles.

LXD Container Profiles

By default, all LXD containers use a profile named “default”. We can view it by doing the following.
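```shell
lxc profile show default
```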

You’ll get something like this.
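On a fresh LXD install, the output is roughly this YAML (exact keys vary by LXD version):

```
name: default
config: {}
description: Default LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
```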

Great starting point, but we have some parts we’d like to change.

Let’s create a profile for our direct LAN bridge. Start by copying the default.
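The profile name “lan” here is just my choice:

```shell
lxc profile copy default lan
```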

Now to make our “lan” profile use our LAN bridge, we change the parent. Currently it’s set to lxdbr0, which is the bridge created when LXD was initialized. We’d prefer br0.
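One way to do that is with lxc profile device set (you could also run lxc profile edit lan and change the parent key by hand):

```shell
# Point the profile's eth0 NIC at our br0 bridge instead of lxdbr0
lxc profile device set lan eth0 parent br0
```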

Easy. Now when you restart your container, it’ll request an IP over DHCP via its new virtual network interface.

VLANs work identically, but again, we must reference a bridge.
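A sketch for VLAN 10, using a profile name of my own choosing:

```shell
lxc profile copy default vlan10
lxc profile device set vlan10 eth0 parent br0.10
```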


Now that we have profiles, we need to apply them to our containers.
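A sketch, with mycontainer as a placeholder name (LXD 2.0 calls this apply; newer LXD renamed it to lxc profile assign):

```shell
# Replace the container's profile list with our "lan" profile
lxc profile apply mycontainer lan
lxc restart mycontainer
```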

Installing UniFi Controller in an LXD container

Phew! Here’s what this was all for.

Create/edit a new source for packages.
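The filename below follows Ubiquiti’s convention, but any name under sources.list.d works:

```shell
sudo nano /etc/apt/sources.list.d/100-ubnt-unifi.list
```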

Paste this in the file for the Stable packages.
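At the time of this writing, Ubiquiti’s stable repo line was the following (check their current install docs, as the host has changed over the years):

```
deb http://www.ubnt.com/downloads/unifi/debian stable ubiquiti
```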

Install the GPG key and install.
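The key ID below is the one Ubiquiti documented at the time; verify it against their current instructions:

```shell
# Ubiquiti's package signing key, then install the controller
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv 06E85760C0A52C50
sudo apt update
sudo apt install unifi
```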

Now figure out the IP address of the LXD container. From inside:
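```shell
# Inside the container; the device name is usually eth0
ip addr show eth0
```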

Or outside:
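```shell
# On the host; lxc list shows each container's addresses
lxc list
```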

For example, let’s say it’s “”.

Once you know the IP, connect to port 8443 over HTTPS (yes, the S matters).

And you’re in!


Appendix: How to allow SSH to work again

Haha! It looks like somewhere along the line I lost the ability to SSH into my host machine.

Anyways, I saw a solution somewhere, but I forgot where. I’ll update this later when I have time.

Appendix: References

I’ve lost track of what is referenced where, so here’s a link dump.