Start using Containers on Ubuntu

trevm999

By now you've probably seen some of the hype around containers, the next level of virtualization. Most of the discussion is about Docker containers and revolves around application deployment. While that's the main focus of Docker, containers can be used for pretty much anything.

Canonical has created a wrapper for the LXC container technology called LXD. With LXD, managing containers is pretty similar to managing VMs, so you can get started quickly and easily.

The scope of this how-to is to show how to get a container set up on Ubuntu 16.04 and communicating on your LAN. The instructions assume everything is running off of DHCP and that you've already got Ubuntu installed.

** Important Note: If you are running your Ubuntu install as a VM, you will have to change a setting on the VM's virtual network adapter in order for the container guests to communicate on the network. In Hyper-V you need to enable MAC address spoofing. In VMware or VirtualBox you need to enable promiscuous mode.
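For example, if your host is VirtualBox, promiscuous mode can be set from the host's command line while the VM is powered off (the VM name "ubuntu-lxd-host" and adapter number 1 here are just placeholders for your own setup):
Code:
VBoxManage modifyvm "ubuntu-lxd-host" --nicpromisc1 allow-all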


First off, you need to create a bridge. This effectively turns your Ubuntu server into a switch so that your containers can access the LAN and pull IPs.

So this is what we need to install for that
Code:
sudo apt-get install bridge-utils

Next we need to configure the networking settings.
Code:
sudo nano /etc/network/interfaces
Or you can use your favorite text editor instead of nano

Remove (or comment out) the eth0 entries in your file
Code:
# auto eth0
# iface eth0 inet dhcp

Add a bridge, and include your network interface in the bridge
Code:
auto br0
iface br0 inet dhcp
    bridge_ports eth0

Save and reboot

Once rebooted, `ip addr show` should now show br0 pulling an IP, and eth0 should still be listed but no longer pulling an IP.
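If you want a second check on the bridge itself, brctl from the bridge-utils package installed above should list eth0 as an interface attached to br0:
Code:
brctl show br0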

By default, LXD uses a bridge that isn't connected to your LAN. So we're going to swap that bridge for your new bridge.

Code:
lxc profile edit default

Change parent: lxdbr0 to
Code:
parent: br0

And save
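To confirm the change took without having to open the editor again, you can dump the profile back out:
Code:
lxc profile show default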

Now create your container. This downloads the container image and sets it up. The image is saved for creating more containers in the future.
Code:
lxc launch ubuntu:16.04 nameofyourchoosing
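The downloaded image ends up in your local image store, so you can confirm it's there (and reuse it for future containers) with:
Code:
lxc image list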

To view your list of containers and see if they're pulling an IP

Code:
lxc list

To enter bash on your container
Code:
lxc exec nameofyourchoosing bash
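You can also run a single command inside the container without opening a shell; the -- tells lxc that everything after it belongs to the command being run. For example:
Code:
lxc exec nameofyourchoosing -- apt-get update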


See more LXD commands for working with your containers here
https://stgraber.org/2016/03/19/lxd-2-0-your-first-lxd-container-312/
 
Just adding that if the container is a trusted container, then the easiest way to deal with container host permissions is to make it a privileged container

Code:
lxc config set containername security.privileged true

Now your container will pretty much operate the same as a native OS (i.e. before, you might have had problems making the container do or install something you would on a native install, because it didn't have enough kernel permissions). If it is not a trusted container, then you should figure out what specific permissions it requires in order to do the task you need it to do.
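As far as I know, the privileged setting only takes effect once the container is restarted, and you can double-check what's set on the container afterwards:
Code:
lxc restart containername
lxc config show containername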
 
Here is how to do it on Ubuntu 17.10. Keep in mind that eth0 may be different depending on the name of your Ethernet adapter.

Code:
sudo apt-get install bridge-utils

Code:
sudo nano /etc/netplan/01-netcfg.yaml

Make the file similar to the following and save
Code:
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: no
      dhcp6: no
  bridges:
    br0:
      interfaces: [eth0]
      dhcp4: yes
      dhcp6: no

Code:
sudo netplan apply
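At this point br0 should come up and pull an address just like in the 16.04 instructions. You can check it with:
Code:
ip addr show br0
networkctl status br0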

Code:
lxd init
Choose the defaults for everything, except answer no when asked to create a new bridge

Code:
lxc profile edit default

Add the eth0 device (or use your network adapter's name) so the profile looks like this
Code:
config: {}
description: Default LXD profile
devices:
  eth0:
    nictype: bridged
    parent: br0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: default
used_by: []
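If you'd rather not edit the profile YAML by hand, the same nic device can be added from the command line (eth0 here is just the device name used in this example):
Code:
lxc profile device add default eth0 nic nictype=bridged parent=br0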
 
To create a Fedora 27 x64 container called cn-saltmaster:

Code:
lxc launch images:fedora/27/amd64 cn-saltmaster
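If you want to browse what else the public images: remote offers before launching, you can list it with a filter string; I believe passing something like fedora as a filter works, though the exact filtering behaviour may vary by LXD version:
Code:
lxc image list images: fedora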
 
Uh! No. Really not. The requirements for "real" virtualization and Linux Containers are very different. LXC has never, in the history of virtualization, been the next level of anything.

I meant it in the sense that containers in general are the next 'layer'.

I'm not proposing using LXD/LXC containers instead of other containers, but as an alternative to either having multiple VMs or combining multiple roles on one system. LXC containers are more lightweight than a VM and provide the benefits of isolation. Hardware virtualization creates a layer of abstraction from the hardware; the goal of Docker and rkt is a layer of abstraction from the OS. LXD/LXC doesn't really fit in with that goal: it provides the OS controls as an interface, which makes it much easier for IT to use, and might be more suitable to situations where you don't want to be all the way abstracted from the OS.
 
I meant it in the sense that containers in general are the next 'layer'.
I disagree again. I have been using LXC since 2008. They are of course not the next 'layer'; they are at most an alternative solution.
LXC containers are more lightweight than a VM
Of course they are; here I agree with you.
and provide the benefits of isolation
Since kernel 3.12 and the usable kernel namespaces, the security risks have decreased.
and might be more suitable to situations where you don't want to be all the way abstracted from the OS
The fact that LXC is more suitable in many cases carries the risk of using LXC instead of a "real" VM where a "real" VM should be used, for example when the operator of that container may not be so trustworthy. If you use LXC only for your own purposes, it is very often the better way because of what you already said: it is more lightweight.
 
I disagree again. I have been using LXC since 2008. They are of course not the next 'layer'; they are at most an alternative solution.

I'm going to assume we're just talking about LXC containers here, since that's where I see you have a valid point.
When I'm talking about spinning up an LXC container, it's as an alternative to spinning up a VM. You interact with it very similarly, and the container and the VM would be doing the same role. So, yes, containers used in this way are pretty much just an alternative solution.

However, I could still say my LXC containers are the next layer in my stack when I use hardware virtualization: I create a VM and then run my LXC management OS on that VM. I can only imagine not doing it this way if the hardware didn't support virtualization.

Since kernel 3.12 and the usable kernel namespaces, the security risks have decreased.
When I mentioned isolation, I was thinking in a change-management sense, not about workloads needing to be separated for confidentiality and integrity reasons, i.e. if you have one server hosting everything and you have to reboot, you have to reboot everything.
 