Headless VirtualBox in a Docker Container:
I’ve been a virtualization guy since VMware was at beta 1 of ESX. For simple virtual machines that I only need occasionally, or for short duration, I’ve found VirtualBox to be an easy solution. I recently made some changes with respect to my personal computing environment such that I really didn’t want to use my desktop’s resources to run VMs in the background while I did work in the foreground. Just wanted to avoid the CPU/memory contention and such. I have a different machine that I run as a headless storage node, and much of the time it’s rather idle. So I put some thought into running these VM workloads in that context. Because the machine’s main job is not virtualization I didn’t want to install a native virtualization stack (Xen, KVM) and wondered how I could continue to use VirtualBox (zero VM conversions!). Docker to the rescue!
If you’ve read my other blogs on containerizing desktop applications (links at the bottom of this page), it should be no surprise that running VirtualBox in Docker would be one of my thoughts. The twist is that those blogs deal with GUI apps that are in your face. In this case, we’re talking about running a VM in a container on Linux in headless mode, so it could be considered a service. It turns out that VirtualBox can accommodate this quite nicely! So let’s set it up…
Note: This blog does not go into installing Docker. There are plenty of Internet resources for that.
For this task I, once again, plagiarized Jess Frazelle’s application container expertise and took a look at her VirtualBox Dockerfile. This gave me a great point to start from.
To have access to the VMs at the VirtualBox level (so we could, potentially, do OS installs and such) we need some form of remote GUI. It is possible to install the normal VirtualBox GUI into the container and use remote X Windows (typically ssh -X) to access it. I’ve chosen to go a different route. The VirtualBox Extension Pack includes the VirtualBox Remote Display Protocol (VRDP), an implementation of RDP at the VirtualBox level. VRDP does not provide the normal VirtualBox interface, but rather just an RDP view of the virtual machine that doesn’t rely on services inside the guest VM. To implement this we’ll need to install the VirtualBox Extension Pack for the specific version of VirtualBox that gets installed, and I’ve incorporated that into the Dockerfile. Another detail has to do with the container lifecycle. We’ll be using the VBoxManage command to spawn the VM, and that command starts the VM and returns shortly thereafter. Without taking this into account, as soon as that command returns the container will be destroyed. So we need a watchdog to let Docker know when the container should be destroyed.
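The watchdog itself can be quite small. Here's a minimal sketch of the idea (the polling loop and interval are my own illustration, not necessarily how the startvm script implements it):

```shell
# Hypothetical watchdog sketch: keep the container's main process alive
# until VirtualBox reports that the VM has stopped running.
wait_for_vm() {
    vmid="$1"
    # Poll VirtualBox; when the VM disappears from the running list,
    # return so the container can exit.
    while VBoxManage list runningvms | grep -q "$vmid"; do
        sleep 5
    done
}
# Inside the container this would follow something like:
#   VBoxManage startvm "$VMID" --type headless
#   wait_for_vm "$VMID"
```

Because the shell blocks in `wait_for_vm`, the container's PID 1 stays alive exactly as long as the VM does.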
Note: Many of the resources I found on the net regarding running VirtualBox in a container use Docker’s --privileged parameter. This actually isn’t necessary and makes the container MUCH less secure! The commands I use below do not include it.
Let’s dig in and get dirty…
- First, get into the BIOS on your target machine and enable the hardware virtualization options (Intel VT-x, AMD-V), as VirtualBox will want to use those.
Note: I believe you can configure a VM to run fully software-virtualized, but I haven’t done that with VirtualBox, and most machines now have hardware acceleration, which offers a better experience.
- Download my VirtualBox project files from GitHub. I won’t go through all of the elements of the Dockerfile, but will point out the following:
- The line
&& VBoxVer=`dpkg -s virtualbox-6.0 | grep "^Version" | cut -d " " -f 2 | cut -d "-" -f 1` \
gets the installed version of VirtualBox from the package manager. At this point in the install process, calling VirtualBox --version will fail, so we’re getting the version a different way. This is used to download the proper Extension Pack version. If the Extension Pack isn’t installing properly, it may be because the package manager is not providing the proper version, and that would be a good place to start looking.
As an aside, there didn’t seem to be much information on the net about automating the install of the Extension Pack, so I’ll discuss it here to hopefully help someone else. One way is to have VBoxManage install the Extension Pack, which requires the acceptance of a license with a key that’s unique to a versioned Extension Pack. In that case, you need to install the Extension Pack manually once, and when you accept the license it returns the key that was used. After that it’s possible to have VBoxManage install the Extension Pack with the following command:
VBoxManage extpack install [--replace] <path/to/Oracle_VM_VirtualBox_Extension_Pack-<version>.vbox-extpack> --accept-license=<key>
Another, simpler way, and the one I employed in my Dockerfile, is to extract the compressed Extension Pack tarball into the correct directory:
tar xzf /tmp/Oracle_VM_VirtualBox_Extension_Pack-<version>.vbox-extpack -C /usr/lib/virtualbox/ExtensionPacks/Oracle_VM_VirtualBox_Extension_Pack/
- There are a few files in the container directory that will get copied to /usr/local/bin and will be used to manage the VM. One point to understand is that I’ve designed this container to run exactly ONE VM. As you progress through the rest of the blog, keeping this in mind will help things make sense.
- The startvm script includes the pieces to control the VM lifecycle, and tweaks the VM to use VRDP and bridged networking. The thing I find cool about this is that the container can be non-persistent: you can tweak your docker command to run different VMs in different containers, and you can tweak the container shell scripts to configure each VM appropriately (if necessary). In the docker command you pass the directory that contains all of the VM files. You also pass two environment variables: one to enable/disable VRDP and one to set the port that VRDP uses. If you run multiple containers/VMs on a single machine, you’ll want the VRDP port set uniquely for each container, as it will apply to your HOST’s listening ports. When the container starts, the startvm script finds the VM and starts it.
- The vmid script is the first thing startvm calls, and can be used a couple different ways. It will check to see if the VM passed into the container has been registered with VirtualBox, and if not it will register it. Once registered, it will set a VMID variable at a level appropriate for your use, depending on how you call it:
- Source it into your shell or script:
If you source it into your shell or a script (. vmid), the VMID environment variable will be populated IN YOUR CURRENT SHELL or the SCRIPT IT’S CALLED FROM.
- Run it to get the VM’s UUID as output:
When you execute the script directly (vmid), the output of the command is the UUID of the VM.
- The getIPaddress script will output the IP address of the running guest VM — if the VirtualBox Guest Additions have been installed.
- The attachGuestAdds, attachDVD, detachDVD and showDVD scripts all manipulate the DVD-ROM device of the VM. These are not run as part of the lifecycle processes, and would be executed in a separate docker exec command or shell. These are really just example scripts and could be used as templates for other function scripts. More about these later.
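To see what that dpkg-parsing line in the Dockerfile actually produces, you can run the pipeline against a sample Version field by hand (the version string below matches the example build later in this post):

```shell
# A sample line in the form `dpkg -s virtualbox-6.0 | grep "^Version"` emits.
version_line='Version: 6.0.4-128413~Ubuntu~bionic'

# Field 2 after the space is the full package version; everything before
# the first "-" is the upstream VirtualBox version.
VBoxVer=$(echo "$version_line" | cut -d " " -f 2 | cut -d "-" -f 1)
echo "$VBoxVer"   # 6.0.4
```

That 6.0.4 value is what gets substituted into the Extension Pack download URL.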
- Build the container:
docker build -t virtualbox .
Note that I’m not including a version in the image tag yet, as that could change over time. We’ll find the version number and re-tag later.
- Run the container for the first time to get the files needed to install the host kernel drivers:
docker run -it --entrypoint /bin/bash --name virtualbox --rm virtualbox:latest
- Find the version of VirtualBox:
(run dpkg -l | grep virtual inside the container)
root@7d5775597faf:/# dpkg -l | grep virtual
ii  virtualbox-6.0    6.0.4-128413~Ubuntu~bionic    amd64    Oracle VM VirtualBox
Note the 6.0.4 in the output above.
- While the container is running, from a separate HOST shell, copy two directories from the container to the host. NOTE: Be cautious if you’re doing this on a host that already has VirtualBox installed. In that case you’ll want the same version installed on the host and in the container, and you can skip down to step 11, as the host installation will already have completed this. If you proceed while VirtualBox is installed on the host, this process will overwrite the virtualbox directories on the host and could cause a driver mismatch with the installed version. (Run the following from the host, not the container.)
sudo docker cp <container id/name>:/usr/lib/virtualbox /usr/lib
sudo docker cp <container id/name>:/usr/share/virtualbox /usr/share
- Kill the container:
(from the container cli)
exit
(from the host)
docker container stop virtualbox
- Add a group to the host:
sudo groupadd vboxusers
- Run the VirtualBox setup script:
sudo /usr/lib/virtualbox/vboxdrv.sh setup
- Remove the directories that were copied from the container. The drivers persist on the host.
sudo rm -rf /usr/lib/virtualbox /usr/share/virtualbox/
- Re-tag the container (with version):
docker tag virtualbox:latest virtualbox:<version>
- Remove old tag:
docker rmi virtualbox:latest
That’s it for building the Docker image.
If you’ve read my other Docker blogs you may be aware that I’m very keen on the x11docker project which is focused on running Docker containers more securely. The default Docker security settings provide a good middle ground with respect to performance and usability. x11docker starts from the standpoint of least privilege and allows you to provide more privilege as necessary. In the case of graphical apps, x11docker spins up a separate X Windows display server. I’m not utilizing that capability here. In addition, x11docker takes additional steps to tighten security such as running the container apps as a non-root user and dropping capabilities that the container doesn’t need. In the steps below, we’ll use x11docker to run the container as securely as possible. Here’s a link to the x11docker install documentation.
I’d like to take this opportunity to say, THANK YOU, to @mviereck, the author of x11docker, who was a big help in figuring out a couple of the nuances of containerizing VirtualBox.
To run the VirtualBox container in a production mode:
x11docker --env VRDE=on --env VBPORT=33389 --quiet --showid --cap-default --hostnet --tty -- -v /path/to/vm:/vm --device /dev/vboxdrv --cap-drop=ALL -- virtualbox:6.0.4 >/dev/null 2>&1 &
x11docker \
  --env VRDE=on \            # enable VRDP - use on/true/1/yes to enable; anything else to disable
  --env VBPORT=33389 \       # the HOST port that VRDP will use
  --quiet \                  # x11docker parameter that disables the display of warning messages
  --showid \                 # x11docker parameter that shows the ID of the container when it's run, similar to docker -d
  --cap-default \            # x11docker parameter that enables normal docker capabilities (more on this below)
  --hostnet \                # x11docker parameter that attaches the container to the host network (more on this below)
  --tty \                    # x11docker parameter that disables the X Windows aspect of x11docker
  -- \                       # x11docker token that denotes the start of normal docker options
  -v /path/to/vm:/vm \       # docker volume that passes the directory that contains the virtualbox vm files (.vbox, .vdi, etc.)
  --device /dev/vboxdrv \    # docker device passing the virtualbox kernel driver
  --cap-drop=ALL \           # docker-level dropping of all capabilities (more on this below)
  -- \                       # x11docker token that denotes the end of normal docker options
  virtualbox:6.0.4 \         # the docker image to run
  >/dev/null 2>&1 &          # discards terminal output and backgrounds the container
Command parameter notes:
- VRDE is the VirtualBox Remote Desktop Extension
- VRDP is the VirtualBox Remote Display Protocol
- The host user that starts the container will be part of the host’s docker group and would have rw permissions to the VirtualBox VM’s files. By default, x11docker will identify the user that launched it and create a similarly configured user within the container.
- As mentioned, x11docker focuses on container security. By default it will use the --security-opt no-new-privileges Docker parameter, which greatly increases security. I’ve found that the no-new-privileges parameter doesn’t work with VirtualBox, so we need to remove it from the equation. To do that we use the --cap-default x11docker parameter to prevent x11docker from implementing --security-opt no-new-privileges, and then use the Docker --cap-drop=ALL parameter to lock down the container security.
- Regarding networking, my expectation is that most users will want the container to be bridged to the host’s network to allow connections from other LAN nodes. If the --hostnet parameter is removed, the container will be connected to a normal isolated Docker bridge, which would require more work to allow access to the VM from LAN nodes (a separate router or firewall container both come to mind). Another aspect to take into account is that normal Docker port mapping will only work to VirtualBox, not to the running VM, which has a separate IP address from the VirtualBox container. There’s no way that I’m aware of to expose a host port to a secondary IP address (the VM’s address) within the container. The startvm script connects the VM’s first NIC as a bridged interface. If your use case doesn’t want that, you’ll need to change the script.
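For reference, putting a VM's first NIC into bridged mode is a one-line VBoxManage call. Here's a sketch of what the startvm script's bridging step amounts to (the helper name and the eth0 interface are my own illustration; check the actual script for the details):

```shell
# Illustrative helper: bridge the VM's first NIC to a host interface.
# The host interface name (eth0 here) varies by machine.
set_bridged_nic() {
    vm="$1"
    host_if="$2"
    VBoxManage modifyvm "$vm" --nic1 bridged --bridgeadapter1 "$host_if"
}
# usage: set_bridged_nic "$VMID" eth0
```

If you want NAT or host-only networking instead, --nic1 accepts those modes as well, and you'd change the script accordingly.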
To run the VirtualBox container in an interactive test mode to manually manipulate the virtual machine:
x11docker --interactive --no-entrypoint --env VRDE=on --env VBPORT=33389 --cap-default --hostnet --tty -- -v /path/to/vm:/vm --device /dev/vboxdrv --cap-drop=ALL -- virtualbox:6.0.4 /bin/bash
x11docker \
  --interactive \            # connects STDIN to the allocated pseudo-TTY
  --no-entrypoint \          # disables the ENTRYPOINT element of the docker image
  --env VRDE=on \            # enable VRDP - use on/true/1/yes to enable; anything else to disable
  --env VBPORT=33389 \       # the HOST port that VRDP will use
  --cap-default \            # x11docker parameter that enables normal docker capabilities (more on this below)
  --hostnet \                # x11docker parameter that attaches the container to the host network (more on this below)
  --tty \                    # x11docker parameter that disables the X Windows aspect of x11docker and allocates a pseudo-TTY
  -- \                       # x11docker token that denotes the start of normal docker options
  -v /path/to/vm:/vm \       # docker volume that passes the directory that contains the virtualbox vm files (.vbox, .vdi, etc.)
  --device /dev/vboxdrv \    # docker device passing the virtualbox kernel driver
  --cap-drop=ALL \           # docker-level dropping of all capabilities (more on this below)
  -- \                       # x11docker token that denotes the end of normal docker options
  virtualbox:6.0.4 \         # the docker image to run
  /bin/bash                  # the shell to run
This will put you in a container shell. From there, startvm will start the VM and wait until the VM is shut down. You can run startvm & to push it into the background if you don’t want your shell to block. You can run VBoxManage commands to figure out what commands you want to put into helper scripts. Some commands won’t make changes while the VM is running, so execute them in the proper sequence.
When running in production (background) mode, you can validate that the container is running and find its x11docker generated container name with:
docker container ls
docker container ls | grep virtual
If you’re running in production (background) mode, and want to make changes to the VM:
docker exec -it `docker container ls | grep virtual | cut -d " " -f 1` bash
This assumes that you’re only running a single VirtualBox container instance, and will drop you into a separate shell for the VM container. Once there, one of the first things you might do is set the VMID environment variable:
. vmid
After that, you don’t have to worry about finding the VM’s uuid or exact name to run VBoxManage commands. Here are some examples of what you might want to run:
- Show the DVD drive contents (e.g. with the showDVD script); sample output with the Guest Additions ISO attached, and with the drive empty:
SATA (1, 0): /usr/share/virtualbox/VBoxGuestAdditions.iso (UUID: af44dc2e-5ad0-4c8a-979b-233c798c01fc)
SATA (1, 0): Empty
attachDVD $VMID </path/to/ISO>
The typical use case for this would be to pass an ISO or a directory of ISOs into the container with an additional -v Docker parameter and then use the container-side path to the ISO.
- List the registered VMs:
VBoxManage list vms
- Show the VM’s configuration:
VBoxManage showvminfo $VMID
- VirtualBox-level suspend and resume:
VBoxManage controlvm $VMID pause | resume
- Power Management:
VBoxManage controlvm $VMID reset | poweroff | savestate | acpipowerbutton
- savestate: VirtualBox-level save state (memory) to disk and stop the VM
- acpipowerbutton: Graceful shutdown via guest
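These controlvm verbs compose nicely into helper scripts of your own. For example, a hypothetical graceful-stop helper (my own sketch, not one of the scripts shipped in the repo) could press the ACPI power button and then wait for VirtualBox to report the VM down:

```shell
# Hypothetical helper: ask the guest OS to shut down via ACPI, then wait
# until VirtualBox no longer lists the VM as running.
graceful_stop() {
    vmid="$1"
    VBoxManage controlvm "$vmid" acpipowerbutton
    while VBoxManage list runningvms | grep -q "$vmid"; do
        sleep 2
    done
}
# usage: graceful_stop "$VMID"
```

Note that acpipowerbutton only works if the guest OS responds to ACPI events; otherwise you'd fall back to savestate or poweroff.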
Look at the scripts to see more examples of the VBoxManage command. If you’re going to do more with VBoxManage you should take a look at Oracle’s documentation, as there are a ton of options. A dated, but still valid, guide to command-line management of VirtualBox can be found here.
To access the VM at the VirtualBox level, you should be able to RDP from any machine that can reach the host you’re running the VM on at <hostIPaddress>:<VBPORT> (port 33389 in the examples above).
When I say, “VirtualBox-level,” I mean below the guest OS. If you were running the VirtualBox GUI this would be the interface where you would be able to do OS installs and watch the VM boot without having remote connectivity services running in the VM.
To access the VM after it’s booted using guest-OS-level services, you’ll connect to the guest’s IP address and the appropriate port for the service you want to reach. For RDP you would connect to port 3389 at the guest’s IP address, or if ssh is running in the VM you would connect to port 22. This raises the question, “What is the guest’s IP address?” You can find out by:
- Connecting to the VirtualBox-level interface and using the normal guest tools to determine the address
- Connect to a shell in the VM’s container using the docker exec command above and execute:
. vmid && getIPaddress $VMID
- Shortcut the previous command from the host shell:
docker exec <container> getIPaddress `docker exec <container> vmid`
(those are backticks — the key below the escape key)
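If the nested backticks feel fragile, the same shortcut reads more cleanly with $() command substitution. Here it is wrapped in an illustrative helper function (the function name is my own; the container name is whatever x11docker generated):

```shell
# Illustrative wrapper around the one-liner above, using $() instead of
# nested backticks so quoting stays sane.
vm_ip() {
    container="$1"
    docker exec "$container" getIPaddress "$(docker exec "$container" vmid)"
}
# usage: vm_ip <x11docker-container-name>
```

The inner docker exec resolves the VM's UUID via vmid, and the outer one feeds it to getIPaddress.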
As a bonus, here’s the process to create a new VirtualBox VM in this context:
- On the docker host, create a directory that will contain the VM’s files and make sure the directory is owned by the user that docker will run as.
- Start the container interactively:
x11docker --interactive --no-entrypoint --env VRDE=on --env VBPORT=33389 --cap-default --hostnet --tty -- -v /path/to/vm:/vm -v /path/to/iso:/iso --device /dev/vboxdrv --cap-drop=ALL -- virtualbox:6.0.4 /bin/bash
- Pick a name and set a variable to it:
VM=<vm name>
- Create a virtual disk for the VM (size in MB):
VBoxManage createhd --filename $VM.vdi --size 51200
- Browse the list of OS types that this version of VirtualBox supports, selecting the ID to be used in the createvm command:
VBoxManage list ostypes | less
- Create the VM config file and register the VM with VirtualBox:
VBoxManage createvm --name $VM --ostype Windows10_64 --basefolder /vm --register
- Change the number of CPUs allocated to the VM:
VBoxManage modifyvm $VM --cpus 6
- Create the virtual storage controller:
VBoxManage storagectl $VM --name "SATA Controller" --add sata --controller IntelAHCI
- Attach the virtual disk to the virtual storage controller:
VBoxManage storageattach $VM --storagectl "SATA Controller" --port 0 --device 0 --type hdd --medium $VM.vdi
- Attach an empty DVD drive to the virtual storage controller:
VBoxManage storageattach $VM --storagectl "SATA Controller" --port 1 --device 0 --type dvddrive --medium emptydrive
- Enable the IO APIC:
VBoxManage modifyvm $VM --ioapic on
- Set the boot order to boot from the DVD first:
VBoxManage modifyvm $VM --boot1 dvd --boot2 disk --boot3 none --boot4 none
- Allocate memory and video memory:
VBoxManage modifyvm $VM --memory 8192 --vram 128
- Attach an ISO file to the DVD drive:
attachDVD $VM /path/to/ISO/file.iso
or directly:
VBoxManage storageattach $VM --storagectl "SATA Controller" --port 1 --type dvddrive --medium /path/to/ISO/file.iso
- Start the VM:
startvm &
- Connect to the VirtualBox-level RDP session using your favorite RDP client: <hostIPaddress:33389>
- Complete the install
- After rebooting into the newly installed OS, connect the Guest Additions ISO with the attachGuestAdds script.
- After rebooting from the Guest Additions install, empty the DVD drive with the detachDVD script.
- Configure the VM
After creating the VM, assuming you have a guest service configured (RDP, SSH, etc.), you could shut down the VM, exit the container, and start a new container in production mode, disabling VRDE. You can then connect to the VM using the guest services.
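For convenience, the creation steps above condense into a single shell function. This is just a restatement of the commands already shown, with the same example values (50 GB disk, Windows10_64, 6 CPUs, 8 GB RAM), not a script from the repo:

```shell
# Condensed recap of the VM-creation steps. Intended to run inside the
# container, with VBoxManage on the PATH and /vm mounted as the VM directory.
create_vm() {
    VM="$1"
    VBoxManage createhd --filename "$VM.vdi" --size 51200
    VBoxManage createvm --name "$VM" --ostype Windows10_64 --basefolder /vm --register
    VBoxManage modifyvm "$VM" --cpus 6
    VBoxManage storagectl "$VM" --name "SATA Controller" --add sata --controller IntelAHCI
    VBoxManage storageattach "$VM" --storagectl "SATA Controller" --port 0 --device 0 --type hdd --medium "$VM.vdi"
    VBoxManage storageattach "$VM" --storagectl "SATA Controller" --port 1 --device 0 --type dvddrive --medium emptydrive
    VBoxManage modifyvm "$VM" --ioapic on
    VBoxManage modifyvm "$VM" --boot1 dvd --boot2 disk --boot3 none --boot4 none
    VBoxManage modifyvm "$VM" --memory 8192 --vram 128
}
# usage: create_vm myvm
```

From there you'd attach an install ISO, run startvm, and connect over VRDP as described above.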
That should get you running VirtualBox VMs in Docker containers, and give you the resources to configure your containers and VMs to your requirements.
I’m interested to hear how you are using this technology in your environment. If you’ve found this valuable, or have questions, please follow the discourse link below and leave a comment. I’d love to hear from you.
Interesting sites I came across while working on this:
- VBoxManage Documentation
- Create VirtualBox VM from the command line
Note: The only thing I didn’t like about this page was the use of the IDE Controller for the DVD-ROM rather than adding a device to the SATA Controller as I have done above.
And here are the links to my prior three-part series on using Docker for Graphical Containers:
- Desktop Docker (1/3): Linux Graphical Containers
- Desktop Docker (2/3): Secure Linux Graphical Containers
- Desktop Docker (3/3): GPU-enabled Linux Graphical Containers