Mounting QCOW2 (KVM/QEMU) directly

First, the tools you need

apt-get install qemu-utils

Now, enable NBD

modprobe nbd max_part=8

Once that is enabled, connect the file as a block device

qemu-nbd --connect=/dev/nbd0 /hds/usb/virts/Windows/main.qcow2

Now the block device should appear like any other, along with the partitions inside it!

fdisk -l

On my machine, this resulted in

Disk /dev/nbd0: 95 GiB, 102005473280 bytes, 199229440 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xc5324c42

Device      Boot     Start       End   Sectors  Size Id Type
/dev/nbd0p1 *         2048    104447    102400   50M  7 HPFS/NTFS/exFAT
/dev/nbd0p2         104448 198138958 198034511 94.4G  7 HPFS/NTFS/exFAT
/dev/nbd0p3      198139904 199225343   1085440  530M 27 Hidden NTFS WinRE

The qcow2 file itself only occupied around 40GB on disk, but fdisk reports the image's full virtual size, roughly 100GB (95 GiB) in this case! Let us mount the Windows partition

mount /dev/nbd0p2 /hds/loop

Now, in this case in particular, as with any other block device holding a Windows operating system, more often than not you will get a message saying

The disk contains an unclean file system (0, 0).
Metadata kept in Windows cache, refused to mount.
Falling back to read-only mount because the NTFS partition is in an
unsafe state. Please resume and shutdown Windows fully (no hibernation
or fast restarting.)
Could not mount read-write, trying read-only

The solution is simple: follow the two steps below to remedy the issue, then force-mount the partition using the remove_hiberfile option

ntfsfix /dev/nbd0p2
mount -t ntfs-3g -o remove_hiberfile /dev/nbd0p2 /hds/loop

The output of ntfsfix was

Mounting volume... The disk contains an unclean file system (0, 0).
Metadata kept in Windows cache, refused to mount.
FAILED
Attempting to correct errors...
Processing $MFT and $MFTMirr...
Reading $MFT... OK
Reading $MFTMirr... OK
Comparing $MFTMirr to $MFT... OK
Processing of $MFT and $MFTMirr completed successfully.
Setting required flags on partition... OK
Going to empty the journal ($LogFile)... OK
Checking the alternate boot sector... OK
NTFS volume version is 3.1.
NTFS partition /dev/nbd0p2 was processed successfully.

And the mount command that followed worked as you would expect: silently
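When you are done with the image, reverse the steps so nothing is left holding the qcow2 file; unmount first, then detach the NBD device (same paths as above):

umount /hds/loop
qemu-nbd --disconnect /dev/nbd0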

Step by step: Unprivileged containers on Debian Bookworm

The full version of this, with an explanation of everything, is here; this one is written for copy-paste and speed.

This version creates unprivileged LXC containers owned by root through subordinate IDs, which in my opinion provides the best balance of security and flexibility.

  • Install Debian 12 (bookworm) on a computer or virtual machine or what have you.
  • I personally enable root access under SSH, so all the commands you see here are run as root. You may use another user with sudo if you wish, but I execute as root.
  • Execute the following to install LXC (I am installing both LXC and KVM; you might want to drop the KVM packages)
apt-get update

apt-get install bridge-utils lxc libvirt-clients libvirt-daemon-system debootstrap qemu-kvm virtinst nmap resolvconf iotop net-tools

Most installations will have two users: root and another user you created while installing the operating system.

Unprivileged containers made simple on Debian 12 (Bookworm)

IMPORTANT NOTE: This is the full version; if you just want to come in, copy some commands, and end up with unprivileged containers under root, THERE IS A SEPARATE POST FOR THAT HERE.

0- Intro

Don’t let the length fool you, I am trying to make this the simplest and fastest yet most comprehensive tutorial for getting LXC (both privileged and unprivileged) up and running on Debian Bookworm!

I sent a previous version of this to a friend to spare myself the need to explain what to do, and he found the tutorial confusing! So instead of the old arrangement, with colors denoting which lines belong to which task, I have decided to SEPARATE THIS INTO PARTS…

  1. Intro – About this post (You are already in it)
  2. LXC info
  3. Shared system setup (Privileged and unprivileged)
  4. Privileged LXC step by step
  5. Shared setup for unprivileged containers
  6. Unprivileged LXC run by new user, step by step
  7. Unprivileged LXC run by root user, step by step

I hope this clears things up. The color codes will still exist, mostly because I have already done the work!

Why yet another tutorial?

Most of the tutorials online focus on creating an extra user to use with LXC. That is one way to do it, with a few drawbacks; the other way is to create a range of subordinate IDs for the root user. The advantages of the latter relate to autostart and to filesystem sharing between host and guest.

As per usual, the primary goal of every post on this blog is my own reference; the internet is full of misleading and inaccurate stuff, and when I come back to a similar situation, I don’t want to do the research all over again.

Part 1: About LXC

Privileged vs. unprivileged

Privileged containers are generally unsafe; their only advantage is that they are very easy to set up.

Privileged containers share the same root user with the host, so if the container’s root user gets compromised, the attacker can sneak into the host system. Hence, unprivileged is more secure, but involves some initial setup work.

What is the problem with privileged containers?

It is relatively easy to deploy LXC (which also happens to be what powers LXD)… You install it, run a command to create a container, and voila, a whole new Linux system within your host Linux system, sharing the host’s kernel… But there is one caveat: if a malicious user or application compromises your container, they have automatically compromised the host machine as well. How? The root user on both is the same user!

The solution: unprivileged containers

In come unprivileged containers. In this setup, we either map a user ID to root within the container, or still use root, but through subordinate IDs. Instead of the host’s root ID (usually zero) also being root inside the container, we create a user outside the container (or a subordinate ID of root) and instruct the kernel to map that ID to ID zero inside the container. So if a malicious user gets access to the container and ends up breaking out of it, they will find themselves logged in as a different user, with privileges very close to those of the user nobody; in other words, barely any privileges.

Relevant topic: User namespaces

A topic relevant to unprivileged LXC containers is user namespaces (available since kernel 3.8); namespaces are created with the clone() or unshare() system calls.

Nuff with the theory, what do I need to do?

You set up LXC, then, depending on the type of container and user you need, you may want to tell the Linux kernel to use that user as root in the container. To make that happen, you will need to take a few steps to give that user the required privileges and nothing more than what is required; nothing complicated about those steps either. So let us get started.

2- Shared system setup

Before writing this tutorial, I installed a copy of Bookworm, enabled SSH, and got to work doing the steps you see below. The steps in this section are the same whether you plan to create privileged containers, unprivileged containers, or both.

Step 2-1: Install everything

apt-get update
apt-get install bridge-utils lxc libvirt-clients libvirt-daemon-system debootstrap qemu-kvm virtinst nmap resolvconf iotop net-tools

Step 2-2: Enable IP forwarding

Next, we need to enable IPv4 forwarding by un-commenting a line in sysctl.conf and then running sysctl -p. So open sysctl.conf in your favorite Linux-compatible editor and uncomment the line

net.ipv4.ip_forward=1

Now run the following command for the effects to take place

sysctl -p
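If you want to double check, query the key directly; it should come back as net.ipv4.ip_forward = 1:

sysctl net.ipv4.ip_forward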

Step 2-3: Host Networking

Before creating any containers, we need to make sure the host can bridge the network to them. In Debian, this is done by editing the file /etc/network/interfaces. There are a few ways to connect the containers: your host can become a DHCP server, or you can connect the containers directly to your router.

In the setup below, I am connecting the containers directly to the router. The host machine will have the IP 192.168.7.140. IF YOU ARE USING HYPER-V, YOU WILL NEED TO ENABLE “MAC address spoofing” IN THE HYPER-V VM SETTINGS.

auto br0
	iface br0 inet static
	bridge_ports eno1
	bridge_fd 0
	address 192.168.7.140
	netmask 255.255.255.0
	gateway 192.168.7.1
	bridge_stp off
	bridge_maxwait 0
	dns-nameservers 8.8.8.8
	dns-nameservers 8.8.4.4
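To apply the new interfaces file, restart networking or simply reboot (safer on a remote box); brctl, from the bridge-utils package we installed earlier, can then confirm that the bridge exists and that eno1 is enslaved to it:

systemctl restart networking
brctl show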

3- Privileged LXC

To clarify, making a privileged container does not stop you from making unprivileged containers later, BUT the unprivileged containers need to be different containers 😉 so you might make a privileged one, then replace it with an unprivileged one.

Step 3-1: Download container

The following step is all about downloading your LXC container template! I chose the mirror with the lowest ping time from me, but you can omit the MIRROR variable altogether.

MIRROR=http://ftp.debian.org/debian lxc-create --name vm142 --template download -- --dist debian --release bookworm --arch amd64

Something unexpected happened while I was doing this: I received an error about a problem downloading. By coincidence, I rebooted the machine and it worked; my theory is that the reboot was irrelevant, but if this happens to you, tell me your conclusions in the comments.

"../src/lxc/lxccontainer.c: create_run_template: 1628 Failed to create container from template"

Right after, you have a brand new LXC container, which is unfortunately privileged. You can have it listed with the command “lxc-ls -f”, where the f stands for fancy 😉

lxc-ls -f

Step 3-2: Edit virtual machine config

This container might not be able to start though; some editing of the config file may be necessary!

Here is this machine’s config file. Mind the comments; this is meant to be modified to fit your networking setup, so you will need to change the IP address and relevant network information, the machine name, the rootfs path, etc…

#this is a modified LXC container config file
# Template used to create this container: /usr/share/lxc/templates/lxc-download
# Parameters passed to the template: --dist debian --release bookworm --arch amd64
# For additional config options, please look at lxc.container.conf(5)

# Uncomment the following line to support nesting containers:
#lxc.include = /usr/share/lxc/config/nesting.conf
# (Be aware this has security implications)


# Distribution configuration
lxc.include = /usr/share/lxc/config/common.conf
lxc.arch = linux64

# Container specific configuration
lxc.apparmor.profile = generated
#nesting is for having docker and other similar containerization tech inside the container, disable it if you don't want containers inside the container
lxc.apparmor.allow_nesting = 1
lxc.rootfs.path = dir:/var/lib/lxc/vm142/rootfs
lxc.uts.name = vm142

# Initial Network configuration, disabled...
#lxc.net.0.type = veth
#lxc.net.0.link = lxcbr0
#lxc.net.0.flags = up

#the above config was disabled, so net.0 is better left empty
lxc.net.0.type = empty


#Now, add networking

lxc.net.1.type = veth
lxc.net.1.flags = up
lxc.net.1.link = br0
lxc.net.1.name = eth0
lxc.net.1.ipv4.address = 192.168.7.142/24
lxc.net.1.ipv4.gateway = 192.168.7.1


#App armor profile for this PRIVILEGED container
lxc.apparmor.profile = generated


#If you want this container to start with the host, uncomment the following
#lxc.start.auto = 1
#lxc.start.delay = 10
#the order, the higher the earlier ;)
#lxc.start.order = 30


# Container specific configuration (Not initially there)
lxc.tty.max = 4
lxc.pty.max = 1024

Problem: One remaining problem was that the virtual machine was getting 2 IP addresses, one static as set above, and one dynamic via DHCP. It turns out /etc/systemd/network inside the container forced the machine to get DHCP, so I went in and commented all the lines inside that file!
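If you prefer doing that from the host side, a sed one-liner like the following comments out every active line; eth0.network is the file name my container image had, yours may differ:

sed -i 's/^\([^#]\)/#\1/' /var/lib/lxc/vm142/rootfs/etc/systemd/network/eth0.network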

Step 3-3: Start the machine and change credentials

Now, after starting the machine, you will need to log in to it. To start the virtual machine and attach to it, issue the commands

lxc-start -n vm142 -d
lxc-attach -n vm142

Now you can use the passwd command to change the container’s password, and you will probably want to install SSH (“apt-get install ssh openssh-server”); this way you can log in to it with putty or any other SSH client.
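In other words, once attached, the first login boils down to something like this, run inside the container:

passwd
apt-get update
apt-get install ssh openssh-server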

4- Unprivileged LXC containers (Both)

Everything in this section applies to unprivileged containers, whether they run under the root user or any other user.

Step 4-1: Enable Unprivileged User Namespaces

It is enabled by default. To make sure that it is, run the command below; if it returns “kernel.unprivileged_userns_clone = 1” you are good to go.

sysctl kernel.unprivileged_userns_clone

If for any reason it is not enabled (0), you can enable it by editing the file “/etc/sysctl.d/00-local-userns.conf” and adding the following line; if the file does not exist, create it.

kernel.unprivileged_userns_clone=1

Once done, run the command

service procps restart
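Alternatively, sysctl can reload every configuration drop-in by itself, including the file we just created:

sysctl --system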

5- Unprivileged container under new user

Step 5-1: Create the user

You can call the user whatever you want; I chose lxcadmin, an arbitrary choice. To create the user we issue the following command.

adduser lxcadmin

The output of the adduser command should be something like

Adding user `lxcadmin' ...
Adding new group `lxcadmin' (1001) ...
Adding new user `lxcadmin' (1001) with group `lxcadmin (1001)' ...
Creating home directory `/home/lxcadmin' ...
Copying files from `/etc/skel' ...
...
Adding new user `lxcadmin' to supplemental / extra groups `users' ...
Adding user `lxcadmin' to group `users' ...

So here, our user gets the ID 1001 (since I already have a user with the ID 1000 and the root user with the ID 0). Now if we inspect the two files /etc/subuid (the subordinate UID file) and /etc/subgid, we will find the following content in both (identical contents in the two files).

yazeed:100000:65536
lxcadmin:165536:65536

What the above means is that the user lxcadmin has a range of subordinate UIDs starting at 165536, 65536 UIDs in total, so the last UID lxcadmin can use is 165536 + 65536 - 1 = 231071, and the next user we add will start at 231072.

So to recap, this user has a subordinate ID range from 165536 to 231071; in each line of those files, the first number is the start of the range and the second is how many IDs the range contains.

Step 5-2: Network adapter quota

New users generally do not have the ability to attach a container to a bridge; for that you will need to give the user a network device quota. This quota is defined in the file /etc/lxc/lxc-usernet, and the initial quota for unprivileged users is zero. So edit the file and add the following lines, depending on which adapters you would like to allow lxcadmin to connect containers to; the format is user type bridge quota.

lxcadmin veth lxcbr0 10
lxcadmin veth br0 10

Notice that you can replace the user with a group name, but that is a subject of a different post…

Now you will need to copy the file /etc/lxc/default.conf to the user’s home directory, in my case to /home/lxcadmin/.config/lxc/default.conf; if the config directory does not exist, create it (the exact commands are shown after the snippet below). Now edit the file you just created and add the following lines, depending on the user you are using (I am using the second user, hence these numbers; yours will differ unless your user is the second one added, so copy the values from /etc/subuid)…

    lxc.idmap = u 0 165536 65536
    lxc.idmap = g 0 165536 65536
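For reference, the copy itself amounts to something like this, run as root and assuming the lxcadmin home directory used above:

mkdir -p /home/lxcadmin/.config/lxc
cp /etc/lxc/default.conf /home/lxcadmin/.config/lxc/default.conf
chown -R lxcadmin:lxcadmin /home/lxcadmin/.config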

Now we are closer than ever to making it run; we need to create our first container. Unlike the privileged “lxc-create mycontainer”, this is slightly more complicated: on a systemd host, the unprivileged lxc-create has to run inside a user scope with cgroup delegation, which is what the systemd-run wrapper on the first line below provides; wrap the second command (run as lxcadmin) the same way.

systemd-run --unit=my-unit --user --scope -p "Delegate=yes" -- lxc-create -t download -n my-container
lxc-create -t download -n myunprivcontainer -- -d debian -r bookworm -a amd64

Don’t expect this to work yet…. the following container config file was automatically created

# Template used to create this container: /usr/share/lxc/templates/lxc-download
# Parameters passed to the template: -d debian -r bookworm -a amd64
# For additional config options, please look at lxc.container.conf(5)

# Uncomment the following line to support nesting containers:
#lxc.include = /usr/share/lxc/config/nesting.conf
# (Be aware this has security implications)


# Distribution configuration
lxc.include = /usr/share/lxc/config/common.conf
lxc.include = /usr/share/lxc/config/userns.conf
lxc.arch = linux64

# Container specific configuration
lxc.apparmor.profile = generated
lxc.apparmor.allow_nesting = 1
lxc.idmap = u 0 165536 65536
lxc.idmap = g 0 165536 65536
lxc.rootfs.path = dir:/home/lxcadmin/.local/share/lxc/myunprivcontainer/rootfs
lxc.uts.name = myunprivcontainer

# Network configuration
lxc.net.0.type = veth
lxc.net.0.link = br0
lxc.net.0.flags = up

libpam-cgfs is already installed (it was a dependency in the apt-get install above). libpam-cgfs is a Pluggable Authentication Module (PAM) that provides logged-in users with a set of cgroups which they can administer. This allows, for instance, unprivileged containers and session management using cgroup process tracking.

Configure AppArmor

AppArmor is enabled by default on Debian 10 (Buster) and later. AppArmor is recommended, as it adds a layer of security which may prove vital for a system running your virtual machines.

To check whether it is enabled on your system or not, you can run the following command

cat /sys/module/apparmor/parameters/enabled

If the above returns the letter Y, AppArmor is enabled, and you need to set it up to allow for our unprivileged setup

6- Unprivileged container under root subordinates

This is the most interesting setup. It is a no-compromise setup where a container runs with all the features you see in privileged containers, while still (more or less) maintaining the security provided by the unprivileged setup above.

Step 6-1: root subordinates

The first step is to allocate a UID and GID range to the root user in /etc/subuid and /etc/subgid. This is because the root user, unlike users added with adduser, does not have subordinate IDs by default. In short, figure out what the next free range of IDs is, and assign it to root by adding a line similar to the following at the top of the list in those 2 files. In my case, lxcadmin has the last range, and 165536:65536 means the next free ID is (165536 + 65536 = 231072). I would like a million subordinate IDs, so I can hand every machine a different set of IDs, which should increase security even further.

root:231072:1000000

adduser will recognize the new range the next time you use it, and start from there.

Now reflect that range in /etc/lxc/default.conf using lxc.idmap entries similar to those above, as shown below.
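For example, with the range above, handing the first 65536 IDs of root’s block to containers would mean lines like these in /etc/lxc/default.conf:

lxc.idmap = u 0 231072 65536
lxc.idmap = g 0 231072 65536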

root does not need a network device quota and uses the global configuration file, so those steps from the previous section are not needed.

Any container you create as root from that point on will run unprivileged, able to auto-start and to share filesystems!

Nested virtualization in KVM

The reason I am enabling this in my virtual machine is to develop with Android Studio under Windows or Linux in a dedicated development machine (let us call it an Android development virtual machine); you will need nested virtualization for the virtual Android phone that comes with Android Studio. There are many occasions where you need nested virtualization, so let us see what we need to do.

1- Check if our system allows nested virtualization with the following line

cat /sys/module/kvm_intel/parameters/nested 

If this returns a Y or a 1, then we are good to go to the next step; if not, execute the following to enable the feature on the host system

echo 'options kvm_intel nested=1' >> /etc/modprobe.d/qemu-system-x86.conf 
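Note that the option only takes effect once the module is reloaded, so either reboot or, with all virtual machines shut down, reload kvm_intel:

modprobe -r kvm_intel
modprobe kvm_intel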

Now, with that out of the way, we can move to the next step

2- Enable nested virtualization in the config of the virtual machine, either with virsh edit or by editing the file manually and reloading it; whatever you are used to doing should work

virsh edit androiddev

Now, specify either host-model OR host-passthrough. host-model is more compatible when moving the virtual machine to a new CPU, while host-passthrough will deliver absolutely all CPU features to the guest OS, but is very unfriendly to moving the machine to a different KVM host.

<cpu mode='host-model'> 

Installing MacOS in a virtual machine (KVM) under linux

This is a simple task, and it is only simple because of foxlet (@FoxletFox on twitter)

Anyway, let us get to setting it up. To begin with, you don’t need to download macOS; when using foxlet’s macOS-Simple-KVM, your virtual machine downloads macOS on its own.

Step 1: Make sure you have KVM and the relevant tools!

apt-get install qemu-kvm libvirt-daemon qemu-system qemu-utils python3 python3-pip bridge-utils virtinst libvirt-daemon-system virt-manager

You know, the usual KVM setup ;) I am hoping you already have KVM; if not, see this post and install KVM first.

Now that you have KVM, you need to ensure that vhost_net is installed, loaded, and enabled

modprobe vhost_net
lsmod | grep vhost

You will also need git to download macOS-Simple-KVM

apt-get install git
git clone https://github.com/foxlet/macOS-Simple-KVM.git

Now, download the macOS base image that will fetch the rest of the operating system (Catalina is the latest?). The options in that script are --high-sierra, --mojave, or --catalina.

./jumpstart.sh --catalina
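From there, assuming the upstream repository layout has not changed, the basic.sh script in the same directory boots the virtual machine:

./basic.sh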

Connecting to Windows KVM with VNC and putty tunnel

The setup assumed in this post is as follows: you are working on a remote Windows computer, there is a Linux KVM host somewhere running guest virtual machines, and you would like to connect to a guest machine’s console (the guest may be running Windows, Linux, macOS, or any other operating system).

KVM, by default, only allows VNC connections to a virtual machine’s console from the local host computer, so here are the tips on creating a tunnel to the host computer and connecting to your KVM virtual machine.

Windows does not support VNC very well (most VNC servers don’t run well on Windows), but the VNC server here is not Windows; it is KVM that provides the VNC server for the guest’s console.

1- Create a tunnel (putty on Windows). Simply put, save the connection to the host machine in putty, then under tunnels you will need something like this (and go back and hit save again)

Just create a tunnel for port 5900 with destination localhost:5900 (5901 for the second virtual machine, and so on); leave all other tunnel options unchecked/default.
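By the way, if you would rather skip putty, recent Windows builds ship an OpenSSH client, and the equivalent tunnel is a one-liner (the user and host name here are hypothetical):

ssh -L 5900:localhost:5900 root@kvm-host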

2- To know which VNC ports are listening on your machine, run this command

netstat -tlpn | grep 590

3- VNC should now connect to localhost:5900 for example (I am using TightVNC on Windows), and that connection will be automatically routed to the KVM host, which will display the guest’s console depending on the port (every guest has its own port).

Mounting a multipart vmdk disk on Linux

There are many ways to do that. One of them is using the tools provided by VMware to combine the disks into one, then mapping it with

kpartx -av mydisk.vmdk

Then

mount /dev/mapper/loop0p1 /hds/disk

Another method, which is simpler:

apt-get install qemu-utils
qemu-img convert disk-s001.vmdk s01.raw
....
qemu-img convert disk-s013.vmdk s13.raw
....
qemu-img convert disk-s032.vmdk s32.raw

The files above will be sparse, so disk usage will not be as big as the file size; a “df -h” should not show any loss of disk space beyond the data actually used by files in the image.

Following the above, we need to combine the RAW files like so

cat s01.raw s02.raw s03.raw s04.raw s05.raw s06.raw s07.raw s08.raw s09.raw s10.raw s11.raw s12.raw s13.raw s14.raw s15.raw s16.raw s17.raw s18.raw s19.raw s20.raw s21.raw s22.raw s23.raw s24.raw s25.raw s26.raw s27.raw s28.raw s29.raw s30.raw s31.raw s32.raw > combined.raw
losetup /dev/loop0 combined.raw
kpartx -a /dev/loop0
mount /dev/mapper/loop0p1 /hds/img1
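Since the parts sort lexicographically (s01 through s32), a shell glob builds the same combined file with less typing, and the kpartx/losetup steps can be reversed once you are done with the disk:

cat s??.raw > combined.raw
kpartx -d /dev/loop0
losetup -d /dev/loop0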

gigabit Ethernet VirtIO driver for Windows 10 64bit

By default, KVM gives your virtual machine a Realtek RTL8139 Ethernet adapter with an ancient 100Mbit/second speed; we all need a gigabit Ethernet adapter for the KVM guest.

The answer is replacing the string rtl8139 with virtio in the XML file of the virtual machine, then installing the drivers.
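For orientation, the interface definition in the guest’s XML ends up looking something like this (the MAC address and bridge name are illustrative):

<interface type='bridge'>
  <mac address='52:54:00:11:22:33'/>
  <source bridge='br0'/>
  <model type='virtio'/>
</interface>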

The steps I use are:

  1. Run the virtual machine with the Realtek adapter to download the other adapter’s driver.
  2. Once the driver is there, shut down the virtual machine guest (Windows guest), then edit the XML of the guest and restart libvirtd.
  3. Start the KVM guest again.
  4. Open it with VNC, start the device manager, and install the driver you downloaded.

You are good; the adapter should report a speed of 10Gbit/second.

One annoying thing is that all the Windows drivers come in one big ISO file, when you probably just want the driver you need.

I will add the download links in the coming few days, but you can get them right now from Fedora if you like; the Fedora Windows guest drivers should work from any Linux distribution (Debian, Ubuntu, etc…).

Wheezy is out, so is openVZ, but LXC seems to be in!

This post is somewhat old and kept here for historical reasons; if you want to run LXC containers on Debian Bookworm (12), I have composed a much more useful post here.

Yes, Wheezy is out to the public, and openVZ is out of Wheezy, so what to do?

Basically, what I am doing now is investigating the alternative, LXC; I have no time to learn right now, so I am going to have to do this fast.

I have a gut feeling that LXC is better than openVZ; after all, it is in the mainline kernel, and it is supposed to be marvelously easy to install, so let me start working on this with everyone here.

NOTES: if you want to give away LXC containers to people, you will need to use AppArmor with it. Here, I run my own containers, so I will not be installing AppArmor in this tutorial, but maybe soon I will add a tutorial for the AppArmor part.

So, LXC here we come, to completely replace openVZ with something more open (sorry Parallels Virtuozzo, welcome IBM), something that can keep up with the kernel and not keep us behind.

I will be turning this post into a tutorial on installing and running LXC on Debian Wheezy (7), with memory allocation for containers, using the kernel that shipped with Wheezy. I should be done creating this tutorial in a few days, and it will remain an incremental effort where I add more and more as I learn.

NOTES: the cgroup memory controller is compiled into the kernel but disabled by default; you enable it by adding a parameter to grub. (Not anymore: now memory allocation works out of the box.)
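For the record, the parameter in question was the cgroup memory one; on affected kernels the line went into /etc/default/grub, followed by update-grub and a reboot, something like:

GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"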

1- Install base system of wheezy (debian 7)

2- Install some stuff I can never do without

apt-get update

apt-get upgrade

apt-get install ssh openssh-server fail2ban

fail2ban is a very important application that will prevent outsiders from brute-force cracking your server. It is very important; without it you will be hacked sooner or later (especially if you are in a datacenter), as hackers look for servers to send spam from all the time.

Now we need to specify a hostname for this machine (the LXC HOST); I want to call mine server5.example.com

echo server5.example.com > /etc/hostname

/etc/init.d/hostname.sh start

hostname

hostname -f

apt-get install ntp ntpdate

Now we need to set up networking for LXC; every physical NIC (network adapter) will need a bridge.

To create a bridge, you need to install

apt-get install bridge-utils

Then your /etc/network/interfaces file must look like this

------------------------------------------------
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
# The loopback network interface
  auto lo
  iface lo inet loopback
# The primary network interface
  #allow-hotplug eth0
  #iface eth0 inet dhcp
#Bridge setup
auto br0
  iface br0 inet static
  bridge_ports eth0
  bridge_fd 0
  address 192.168.2.121
  netmask 255.255.255.0
  gateway 192.168.2.1
  dns-nameservers 8.8.8.8
------------------------------------------------

apt-get install lxc

You will be presented with the following prompt; I myself accept the default /var/lib/lxc

Please specify the directory that will be used to store the Linux Containers. If unsure, use /var/lib/lxc (default). LXC directory:

mkdir /cgroup

Add the following line in /etc/fstab using a text editor:

cgroup /cgroup cgroup defaults 0 0

mount -a

Now, to make sure everything is working like it should

lxc-checkconfig

------------------- OUTPUT OF lxc-checkconfig ----------------START

Kernel config /proc/config.gz not found, looking in other places...
Found kernel config file /boot/config-3.2.0-4-amd64
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
Network namespace: enabled
Multiple /dev/pts instances: enabled

--- Control groups ---
Cgroup: enabled
Cgroup clone_children flag: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled

--- Misc ---
Veth pair device: enabled
Macvlan: enabled
Vlan: enabled
File capabilities: enabled

Note : Before booting a new kernel, you can check its configuration
usage : CONFIG=/path/to/config /usr/bin/lxc-checkconfig.

------------------- OUTPUT OF lxc-checkconfig ------------------END

And on the host machine, you need to enable IP forwarding before you fire up any of those LXC containers

 echo 1 > /proc/sys/net/ipv4/ip_forward

But to make that permanent you need to edit the file /etc/sysctl.conf, where we add a line containing net.ipv4.ip_forward = 1

/etc/sysctl.conf:

net.ipv4.ip_forward = 1

You might find that the entry is already there but with the value 0; in that case just flip the zero to a 1. Or you might find it there but commented out; in that case, delete the # that precedes the line to enable it.

To enable the changes made in sysctl.conf (you don’t need to if you already executed the echo statement above) you will need to run the command:

sysctl -p /etc/sysctl.conf

Now that LXC is officially installed, there is more than one way to create containers. debootstrap is one of them (you will need to install it, and the container config will need to be done manually by adding a few lines into a file you create inside the container area). I will use the LXC way, via the lxc-create tool, but you are free to use any tool, including importing containers from VMware (copying VMware containers will work).

Also worth mentioning: I use apt-cacher, so when I am asked for the distro URLs, I simply modify them to read http://192.168.2.133:3142/ftp.us.debian.org/debian/ which is how I access apt-cacher to speed things up and not re-download everything every time.

So, let’s start

lxc-create -t debian -n vm33

On a newer release (7.7), the above gave me an error, and the following command was the solution

MIRROR=http://ftp.us.debian.org/debian lxc-create -n vm10 -t debian -- -r wheezy

Or if you want to use apt-cacher

MIRROR=http://192.168.10.237:3142/ftp.us.debian.org/debian lxc-create -n vm10 -t debian -- -r wheezy

1- Preseed file anyone? Enter (optional) preseed file to use: <== leave this one empty

2- Choose the distro (Debian Wheezy for me)

3- 64 or 32; I use 64

4-
Archives.

[*] Debian Security

[*] Debian Updates

[*] Debian Backports

[ ] Debian Proposed Updates

5- Mirror.

I modify this to read http://192.168.2.133:3142/ftp.us.debian.org/debian/ in order to use my apt-cacher; you can put any mirror here, or leave the defaults provided for you (http://ftp.debian.org/debian/ for Mirror, http://security.debian.org/ for Mirror Security, and the Mirror Backports). Then come the archive areas (Main), Packages (leave blank or specify the packages you want; you can install them later with apt-get), and then the root password.

You must keep in mind that even after you see the message ‘debian’ template installed, ‘vm33’ created, the config file for vm33 is not really ready; you need to enable networking in it manually. So let’s edit the file /var/lib/lxc/vm33/config and add networking support

vi /var/lib/lxc/vm33/config

NOTE: THE BELOW IS FOR TYPICAL SETUPS, FOR HETZNER DATACENTER, PLEASE SEE THE POST ON LXC NETWORK SETUP WITH HETZNER.

Then add the following lines right before #Capabilities and after the ## Container lines

lxc.network.type = veth

lxc.network.flags = up

lxc.network.link = br0

lxc.network.name = eth0

lxc.network.ipv4 = 192.168.2.125/24

Also, before we start the container, there are a few things we need to do…

There seems to be an issue with the SSH keys, so what we will do around this issue is copy the keys from the host (we will generate new ones for the container later)

EXECUTE ON HOST

cp /etc/ssh/ssh_host_dsa_key /var/lib/lxc/vm33/rootfs/etc/ssh/ssh_host_dsa_key
cp /etc/ssh/ssh_host_dsa_key.pub /var/lib/lxc/vm33/rootfs/etc/ssh/ssh_host_dsa_key.pub
cp /etc/ssh/ssh_host_ecdsa_key /var/lib/lxc/vm33/rootfs/etc/ssh/ssh_host_ecdsa_key
cp /etc/ssh/ssh_host_ecdsa_key.pub /var/lib/lxc/vm33/rootfs/etc/ssh/ssh_host_ecdsa_key.pub
cp /etc/ssh/ssh_host_rsa_key /var/lib/lxc/vm33/rootfs/etc/ssh/ssh_host_rsa_key
cp /etc/ssh/ssh_host_rsa_key.pub /var/lib/lxc/vm33/rootfs/etc/ssh/ssh_host_rsa_key.pub

They won’t work without proper permissions, so fix those (the private keys must be readable by root only)

chmod 0600 /var/lib/lxc/vm33/rootfs/etc/ssh/ssh_host_dsa_key
chmod 0600 /var/lib/lxc/vm33/rootfs/etc/ssh/ssh_host_ecdsa_key
chmod 0600 /var/lib/lxc/vm33/rootfs/etc/ssh/ssh_host_rsa_key

Now I reboot the server just to be on the safe side, then I do the following

lxc-start -n vm33 -d
lxc-info -n vm33

When you run the info command, you should see the word RUNNING and a PID.

Then just SSH to the container!

Now if you want to create new host keys for SSH, just delete the files

/var/lib/lxc/vm33/rootfs/etc/ssh/ssh_host_dsa_key.pub
/var/lib/lxc/vm33/rootfs/etc/ssh/ssh_host_ecdsa_key
/var/lib/lxc/vm33/rootfs/etc/ssh/ssh_host_rsa_key

execute

dpkg-reconfigure openssh-server

—————————————

Making LXC auto start at the system boot
The old way: create a symbolic link. This should still work, but I have not tried it.

ln -s /var/lib/lxc/vm34/config /etc/lxc/auto/vm34_config

The new way, which provides better control of the order containers are started in:
Set lxc.start.auto = 1 in the config

Then the following tells the system which containers to start first, and when; a sketch follows.
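Something like this in a container's config does it (the same options shown in the Bookworm post earlier; the delay and order values are just examples):

lxc.start.auto = 1
lxc.start.delay = 10
lxc.start.order = 30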