Choosing a hosting provider when you want the familiarity and trust of your native distribution's resources is actually quite difficult. Many of the juggernaut providers offer features that require them to have control over your kernel and bootloader. We assume for this article:
- You want to use a VPS because you need full backups and snapshots
- You want to install from distribution supplied media into your VPS
- You want block-level backups (in case you're using FDE via LUKS/GELI)
- You're somewhat familiar with VPS providers rolling their own kernels and customizing distributions, and you're a bit uncomfortable with it
The Saga (past years)
This is long and rambly, sorry. I felt it was pretty important to explain the saga of migrations so that others who may have done something similar could relate. Skip the entire section if you'd like to see where I ended up.
I dabbled with my own bare-metal for a while in the late 1990s, eventually hosting in local data centers and OVH until about 2009. Cost was high, but I was new to the avocation, so I was happy to shell out some money and run several applications on a single server despite the risk that if one faulted it could interrupt service for the others.
Arrival of VPS providers
I became aware of VPS providers in 2009 and immediately jumped into Linode ravenously. #linode on OFTC was the first time I really fell in love with IRC, primarily because I was able to speak with people like
@caker who were part of Linode's core staff. I was blown away by the transparency of this organization, and the community was teeming with knowledgeable people helping out on IRC and writing great documentation. Coming from the sterile and corporate-focused OVH, I was surprised to jump into a community that had a sense of camaraderie around small to large scale deployments and open source development.
Cost was low, service was great… I was able to split my couple of bare-metal servers into several VPSs, which made me feel more comfortable with my limited 'fault tolerance', and I picked up snapshots and backups. It was amazing, and I stuck with it for several years.
In 2010 I started dabbling with kvm on my 'core' servers (a server in each of my family's households) so that I could virtualize applications I didn't want running on the metal. It was pretty rough; libvirt felt like I was dicking around with Java beans given the amount of XML I was mucking in, and I won't even begin to lament about network bridging in those days. This drove me to dabble in Xen, but that sent me running right back to kvm. Ultimately, despite the rough edges, I felt like kvm was going to really stick around (unsurprisingly it did, likely in part due to Red Hat pushing it as a primary product).
Deploying virtual machines on kvm straight from the distribution supplied media just felt right. There was no lag time waiting for a provider to pull and prepare an install, which allowed me to do some early testing of releases for core distribution features I had become interested in. About this time, likely due to increased proficiency, I became a bit frustrated with the model of running my remote provider's modified distribution and booting into their custom-built kernel. I didn't yet know why I'd want to run the distribution supplied kernel, because I was in debian/ubuntu land and didn't have awareness of SELinux, but I felt strongly that I should be on the supplied defaults.
In 2011 DigitalOcean sprang onto the scene with fancy SSD-based storage and a core commitment to kvm. I dove toward it because I thought it would be fantastic to use kvm on someone else's beefy remote servers. I migrated all of my Linode hosts, felt sad about leaving them, but looked forward to a future where I'd gain a bit more control. There were some early options for essentially escaping the default install pathway and getting weird things loaded, but those pathways were plugged up. There was a lot of talk within the community about custom images, with comments made in IRC about estimated times it'd be available. I stuck around for quite a while in hopes that they would reverse their decision to block custom installs. You can check on that last link today and see they are still 'gathering feedback' years on.
An update… the custom images feature arrived in September of 2018… pushing almost a decade before they addressed this call for support.
Awareness of provider limitations
Over that time on my local machines I was doing more and more deployments, distro-hopping as necessary to support weird scientific software builds, eventually becoming quite happy at home with Fedora. I felt at home with Fedora for a variety of reasons that likely warrant another article, and wanted to deploy straight from their install media with SELinux enabled. This was impossible on DigitalOcean, so I began searching again and surprisingly ended up back at Linode, because they had moved to KVM and some folks on IRC explained how I could get SELinux running with their Fedora install. Once again, I migrated everything. This time I felt less bad; I was back with Linode, who I'd always felt was a well-run company that stuck to its design principles. I felt at home back in their IRC, even though for my entire time at DigitalOcean I had continued to parasitically learn from #linode.
I'd achieved one thing I wanted: I was able to run the distribution supplied kernel. However, each time an update hit I had to jump into each box and muck about so that it could boot the new kernel (otherwise it would stick on the older one). I was frustrated about this, and I knew I'd also be waiting days to weeks for Linode to get the next major release of Fedora prepped for install. I began the search again, thinking I'd end up back on bare-metal after all… losing backups and snapshots.
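For anyone curious what that mucking about looked like, the fix each time was roughly this (a sketch, assuming a Fedora guest booting via GRUB 2 on BIOS; your provider's boot settings and paths may differ):

```shell
# After a kernel package update, regenerate the GRUB config so the
# newly installed distribution kernel becomes the default boot entry.
# (BIOS-boot path shown; EFI systems keep grub.cfg elsewhere.)
sudo grub2-mkconfig -o /boot/grub2/grub.cfg

# Sanity-check which kernel will actually boot next.
sudo grubby --default-kernel
```

Harmless on a normal Fedora box, but having to remember it after every kernel update on every VPS got old fast.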
A friend of mine who seeks the lowest cost hosting stumbled across Vultr and was singing its praises on performance. I largely ignored them as an option because there seems to be a cesspool of bottom-barrel providers that will carve out space for you anywhere just to get some money. Coming from the self-funded, well-intentioned Linode, or the juggernaut of DigitalOcean, made me scoff at any provider that didn't have some critical mass in the communities I was sampling for tech news. Vultr was new; it was formed as a fresh foray from an older company that seemed to specialize in game servers, and there didn't appear to be much transparency about their direction.
As I began to dig into potential providers I widened my nets and found that Vultr was kvm-based, did block-level snapshots, and had a custom media install option. Within about an hour I had provisioned an Arch and a Fedora system straight from the distribution supplied media onto LUKS encrypted volumes. I was able to run snapshots and scheduled backups, and they even had a very tiny community on #vultr with some of their team members present.
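The LUKS side of those installs is nothing exotic; outside the installer, the shape of it is roughly this (a sketch with example names — /dev/vda2 and cryptroot are placeholders, and in practice Anaconda or the Arch install steps handle this for you):

```shell
# Encrypt the target partition. This destroys existing data and sets a
# passphrase that must then be typed at every boot (e.g. over the
# provider's web console).
sudo cryptsetup luksFormat /dev/vda2

# Open the encrypted volume under a chosen mapper name...
sudo cryptsetup open /dev/vda2 cryptroot

# ...and build the filesystem on the mapped device.
sudo mkfs.ext4 /dev/mapper/cryptroot
```

The distribution installers also wire up /etc/crypttab and the initramfs for you, which is exactly the sort of upstream default behavior I didn't want to lose.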
I've ended up on Vultr, as they do basically everything I had wanted. I have some reservations about their implementation, as I've experienced some hiccups due to naughty co-tenants on some of my servers. I also have reservations about the community, which as of this writing totals 84 on #vultr, compared to over 500 on both #linode and #digitalocean. I've got the functionality I want, but I don't feel like I'm investing in a company that cultivates a strong community through open sourcing their software, or perpetuating best-in-class documentation. I'm happy, yet I miss that good mix of corporate advancement and community participation that Linode did so well.
@vdave is pretty active in IRC; he's stepped in to assist in some cases where I've not gotten ticket advancement as quickly as I needed, and he's relatively candid about the organization's future. Even with Vultr's ability to achieve functionally what I wanted, I wasn't swayed to make the jump until I felt like I had a bit of a relationship with their organization through those interactions.
This is likely something I should write about independently, but just an excerpt here: I feel strongly that people should be aware of how spending their money promotes specific resource allocation. Many are aware of what it means to go farm-to-table, or to purchase their produce at the farmers market… well, I see the same thing occurring in the Internet space. Whoever I choose to host with takes a measure of the overall resource allocation for that service, and I'd like to see providers that represent my own ideals survive and prosper. For now Vultr seems to be a good mix of that, despite the aforementioned drawbacks. If you're thinking of giving them a shot, consider using my affiliate link so I can muck about in their infrastructure more.
Why so serious?
Lots of people along this search were skeptical, likely for good reason, about why I would need to install from the distribution media or use LUKS. Ultimately this is personal preference; however, as I appear to be in the minority compared to most, I'll try to outline a bit:
- Distribution Media: Choosing a distribution is a social contract. You align yourself with the maintainers' philosophy, and through use you run into things that necessitate giving back to the community. I'd always wanted to be able to do early testing on remote servers, which has to be done via early spins of the install media. In the case of Fedora I'm typically ready to make the jump to the next version on or before the Beta release. Not being able to install from their media takes you away from the upstream intended user experience. It also potentially leaves you in a state where you don't understand the origination of your system, which I was always uncomfortable with.
- Distribution Kernels: Same as above in many ways, but furthermore there are particular features that you're conceptually 'signed up' for when you run a distribution, many of which are perpetuated in the specific build and boot parameters of the kernel.
- Full Disk Encryption (FDE): Most of the servers I'm hosting have data on them that is behind authentication for a reason. I wanted FDE in place in the event that another tenant on my VPS host got in trouble for anything and a full copy of the host's disks was made for internal examination, or, even more precarious, law enforcement. This might seem silly when you're entering your boot password over a web-based manager, but I still felt it was one more thing in place that minimally increases complexity but significantly increases the difficulty of examining data within my server.
Looking to the future
There appears to be a new trend of offering bare-metal to people, with a sort of hybrid management that feels like you're provisioning a VPS. I'm keeping my pulse particularly on packet.net, who overnight created an IRC channel to keep goofy people like me happy, and it seems to be growing quite aggressively (doubling a couple times in just one week). They are working on a 'Bring Your Own OS' option to debut in a couple months.
I was pretty happy to hear that this was on their roadmap before I even stumbled across them. I was elated that they responded to my inquiry of ‘do you have an IRC channel?’ by registering a channel and throwing some of their core staff in there. Now it consistently has over 100 people.
I deeply hope that the idea of installing from distribution media doesn't go away. With solutions like Docker Hub and folks doing the su -c "curl <url> | bash" model of deployment… I typically dread seeing a new paradigm of systems deployment and provisioning that hails itself as being "less lines than competing shitty idea".