# Linux Containers and Productization

Linux has improved many things over the last few years. Of all those improvements, the one I’ve started leveraging the most today is Control Groups.

In the past, when there was a need to build a prototype for a solution, we needed hardware.

Then came virtualization richness to Linux. It arrived in two major flavors: KVM (full virtualization) and Xen (para-virtualization). Over the years, the difference between para and full, for both implementations, has shrunk to almost nothing. KVM now has para-virtualization support, with para-virtualized drivers for the most resource-intensive tasks, like network and I/O. Similarly, Xen has full virtualization support with the help of QEMU.

But if you had to build a prototype implementation comprising a multi-node setup, virtualization could still be resource hungry. And even otherwise, if your focus was an application (say, a web framework), virtualization was overkill.

All thanks to Linux Containers, prototyping application-based solutions is now a breeze on Linux. The LXC project is very well designed, and well balanced in terms of features (as compared to the recently introduced Docker implementation).

From an application’s point of view, Linux Containers provide virtualization of namespaces, the network, and resources. That fulfills the vast majority of an application’s needs. For apps that depend on a specific kernel (a custom module, say), Linux Containers will not serve the need, since every container shares the host’s kernel.
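The namespace side of this is visible on any recent kernel: every process carries references to the namespaces it lives in, under `/proc`, and a container simply gets a fresh set of them. A quick way to see them:

```shell
# Each entry here is a namespace the current process belongs to
# (uts for the hostname, net for the network stack, pid, mnt, ipc...).
# An LXC container gets its own copies of these, which is where its
# hostname, network and process isolation comes from.
ls /proc/self/ns
```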

Beyond the defaults provided by the distribution, I like to create a base container with my customizations, as a template. This allows me to quickly create environments, without too much housekeeping to do for the initial setup.

My base config looks like this:

```
rrs@learner:~$ sudo cat /var/lib/lxc/deb-template/config
[sudo] password for rrs:
# Template used to create this container: /usr/share/lxc/templates/lxc-debian
# Parameters passed to the template:
# For additional config options, please look at lxc.container.conf(5)

# CPU
lxc.cgroup.cpuset.cpus = 0,1
lxc.cgroup.cpu.shares = 1234

# Mem
lxc.cgroup.memory.limit_in_bytes = 2000M
lxc.cgroup.memory.soft_limit_in_bytes = 1500M

# Network
lxc.network.type = veth
lxc.network.hwaddr = 00:16:3e:0c:c5:d4
lxc.network.flags = up
lxc.network.link = lxcbr0

# Root file system
lxc.rootfs = /var/lib/lxc/deb-template/rootfs

# Common configuration
lxc.include = /usr/share/lxc/config/debian.common.conf

# Container specific configuration
lxc.mount = /var/lib/lxc/deb-template/fstab
lxc.utsname = deb-template
lxc.arch = amd64

# For apt
lxc.mount.entry = /var/cache/apt/archives var/cache/apt/archives none defaults,bind 0 0
```

Some of the important settings to have in the template are the mount entry pointing to your local apt cache, and the CPU and memory limits.

If I had one feature request for the LXC developers, it would be a util-lxc tools suite. Currently, to know the memory (soft/hard) allocation for a container, one needs to do the following:

```
rrs@learner:/sys/fs/cgroup/memory/lxc/deb-template$ cat memory.soft_limit_in_bytes memory.limit_in_bytes
1572864000
2097152000
rrs@learner:/sys/fs/cgroup/memory/lxc/deb-template$ bc
bc 1.06.95
Copyright 1991-1994, 1997, 1998, 2000, 2004, 2006 Free Software Foundation, Inc.
This is free software with ABSOLUTELY NO WARRANTY.
For details type `warranty'.
1572864000/1024/1024
1500
quit
rrs@learner:/sys/fs/cgroup/memory/lxc/deb-template$
```
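The bc detour can be collapsed into a one-liner; this does the same bytes-to-megabytes arithmetic on the soft limit value from above (in practice you would `cat` the cgroup file into the pipe instead of `echo`):

```shell
# Same conversion the bc session performs: raw cgroup bytes -> MB.
echo 1572864000 | awk '{ printf "%d\n", $1 / 1024 / 1024 }'
# prints 1500
```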

Tools like lxc-cpuinfo and lxc-free would make this much easier.
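As a rough sketch of what such an lxc-free could look like (this is my own illustration, not an existing tool): read the container’s memory cgroup files and print them in megabytes. `CGROUP_ROOT` defaults to the path used on my machine but can be overridden.

```shell
#!/bin/sh
# Hypothetical "lxc-free": report a container's memory limits in MB
# instead of raw bytes. Usage: lxc_free <container-name>
lxc_free() {
    name=$1
    # Default to the memory cgroup hierarchy LXC uses on my system;
    # overridable via CGROUP_ROOT.
    root=${CGROUP_ROOT:-/sys/fs/cgroup/memory/lxc}
    for f in memory.soft_limit_in_bytes memory.limit_in_bytes; do
        bytes=$(cat "$root/$name/$f") || return 1
        printf '%s %d MB\n' "$f" "$((bytes / 1024 / 1024))"
    done
}
```

With the limits from my template, `lxc_free deb-template` would report 1500 MB (soft) and 2000 MB (hard) without any bc gymnastics.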

Finally, there’s been a lot of buzz about Docker. Docker is an alternative offering, like LXC, for Linux Containers. From what I have briefly seen, Docker doesn’t seem to provide any ground-breaking new interface beyond what is already possible with LXC. It does take all the tidbit tools and present you with a unified docker interface, but other than that, I didn’t find it especially appealing. And the assumption that the profiles should be pulled off the internet (GitHub?) is not very exciting. I am hoping they have other options, where dependence on the network is not really required.