Preparing hands-on conferences: to USB or not to USB
On Friday, October 2nd, Xebia organized the inaugural edition of TestWorks Conf. The conference was born out of the apparent need for a hands-on test automation conference in the Netherlands. Early on, we decided that a high level of engagement from the participants was key to achieving this. Thus, the idea of making everything hands-on was born: not only would we have workshops throughout the day, people should also be able to code along with the speakers during talks. This, however, posed a logistical challenge: how do we make sure that everyone has the right tooling and code available on their laptop?
Constraints of a possible solution
Just getting all code onto people's machines is not sufficient. As we had already learned during our open kitchen events, there is always some edge case causing problems on a particular machine. In order to let participants jump straight into the essentials of a workshop, the solution would need to meet the following requirements:
- Take at most 10 minutes to be up and running for everyone
- Require no internet connection
- Be identical on every machine
- Not be intrusive on people's machines
Virtual machines vs local installations?
The first decision we had to make was between local installations and a virtual machine. Based on the requirements, a local installation would at the very least require participants to install all the software beforehand, as the list of required software is quite large (installing it could easily take 60+ minutes). If we were to go this route, we would have to build a custom installer to make sure everyone has all the contents. Having built deployment packages for a Windows domain in the past, we know this takes a lot of time to get right, especially when multiple platforms need to be supported. Going down this route, it's also questionable whether we could satisfy the final requirement: what happens if the software we install overrides custom settings a user has made? Will the uninstallers revert this properly? This convinced us that using a VM was the way to go.
Provisioning the virtual machine
In order to have all the contents and configuration of the VM under version control, we decided to provision it using Vagrant. This way we could easily synchronize changes to the VM between the speakers while preparing for the workshops and talks. It also posed a nice dilemma: how far do you go in automating the provisioning? Should you provision application-specific settings, or just set these by hand before exporting the VM? In the end, we decided to keep a small list of manual actions:
- Importing all projects in IntelliJ, so all the required indexing is done
- Putting the relevant shortcuts on the Ubuntu launcher
- Installing required IntelliJ plugins
- Setting the desired Atom theme
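The automated part of such a setup can be captured in a Vagrantfile. A minimal sketch along these lines (the box name, memory size, and script path are illustrative, not our exact configuration):

```ruby
# Minimal Vagrantfile sketch; box name and provisioning script are
# placeholders, not the actual TestWorks Conf configuration.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"

  config.vm.provider "virtualbox" do |vb|
    vb.gui = true      # participants work inside the desktop environment
    vb.memory = 2048   # the 2GB we used; in hindsight, more would be better
  end

  # Everything that can be automated lives in one script, so changes to
  # the VM's contents stay under version control.
  config.vm.provision "shell", path: "provision.sh"
end
```

Everything that cannot sensibly be scripted (IntelliJ indexing, launcher shortcuts and the like) then remains on the manual checklist above.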
Distributing the virtual machine
So, now we have a VM, but how do we get it into everybody's hands? We could ask everyone to provision their own beforehand using Vagrant. However, this would require additional work on the Vagrant scripts (to make them robust enough to be sent out into the wild), and we would need to automate all the manual steps. It would also require everybody to actually do these preparations. What if 30 people didn't, and started downloading ~5 GB simultaneously at the start of the conference? That would probably grind the venue's internet connection to a halt.
Because of this, we decided to export the VM image and copy it to a USB thumb drive, together with VirtualBox installers for multiple platforms. Every participant would be asked to install VirtualBox beforehand, and would receive the thumb drive when registering at the conference. The only step left was to copy all the contents to 180 thumb drives. No problem, right?
Flashing large batches of USB drives
In theory, flashing the USB drives was easy: get some USB hubs with a lot of ports, plug in all the drives, and flash an image to each of them. In practice, however, things turned out differently.
First of all, which filesystem should we use? Since we were striving for maximum compatibility, FAT32 would be preferred. However, this was not feasible, since FAT32 has a file size limit of 4GB, and our VM grew to well over 5GB. That leaves two options: exFAT or NTFS. exFAT works by default on OS X and Windows, but requires an additional package to be installed under Linux. NTFS works by default under Windows and Linux, but is read-only under OS X. Since users would not have to write to the drives, NTFS seemed the best choice.
Since the initial drive had to be formatted as NTFS, we opted for a Windows desktop. After preparing the first drive, we created an image from it, which was then to be copied to all the remaining drives.
This got us to plug in 26 drives in total (13 per hub), all ready to start copying data, only to find out that the drive letter assignment Windows does is a bit outdated 🙂
When you run out of available drive letters, you have to use an NTFS volume mount point. However, the software we used for cloning the USB drives (imageUSB) would not recognize these mount points as drives, so this put a limit on the number of drives we could flash at once. When we actually flashed the drives, both hubs turned out to be faulty, disconnecting drives at random and causing writes to fail. This led us to spread the effort of flashing the drives over multiple people (thanks Kishen & Viktor!), as we could do fewer per machine.
Just copying the data is not sufficient, however: we have to verify that the data that was written can be read back intact. During this verification, several USB drives turned out to be faulty. After plugging a lot of drives in and out, this was the result:
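The write-then-verify step boils down to comparing checksums. A minimal sketch, simulated here on plain files instead of real block devices (on actual hardware the target would be a device node like /dev/sdX, and you would need the appropriate permissions):

```shell
# Sketch of flash-and-verify, using plain files to stand in for the
# master image and a target drive.
workdir=$(mktemp -d)

# Create a dummy "master image" (stands in for the exported VM image).
dd if=/dev/urandom of="$workdir/master.img" bs=1M count=4 status=none

# "Flash" the image to a target (a file here; a real drive would be /dev/sdX).
dd if="$workdir/master.img" of="$workdir/target.img" bs=1M status=none

# Verify: read the data back and compare checksums. A faulty drive or hub
# shows up as a mismatch here.
src=$(sha256sum "$workdir/master.img" | cut -d' ' -f1)
dst=$(sha256sum "$workdir/target.img" | cut -d' ' -f1)
if [ "$src" = "$dst" ]; then
  echo "verify OK"
else
  echo "verify FAILED"
fi

rm -rf "$workdir"
```

On real drives the same idea applies: after flashing (with dd, or imageUSB on Windows), read the drive back and compare its checksum against the master image before handing it out.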
What did we learn?
During the conference, it turned out that using the USB drives and virtual machine image (mostly) worked out as planned. There were issues, but they were manageable and usually easy to resolve. To sum up the most important points:
- Some vendors disable the CPU virtualization extensions by default in the BIOS/UEFI. These need to be enabled for the VM to work
- The USB drives were fairly slow. Using faster drives would've smoothed things out more
- A 64-bit VM does not work on a 32-bit host 🙂
- Some machines ran out of memory. The VM was configured with 2GB of RAM, which was probably slightly too low.
- Setting up the machines was generally pain-free, but it would still be good to reserve some time at the beginning of the conference for this.
- A combination of using USB drives together with providing the entire image beforehand is probably the sweet spot
Next year we will host TestWorks Conf again. What we learned here will help us to deliver an even better hands-on experience for all participants.