The One Thing a Developer Can Do To Level Up Infrastructure Skills

Use Docker and Docker Compose to manage your local dev environment.

I’ve worked on backend infrastructure for almost 20 years. In the early 2000s, Atomic Object’s web server was a physical 1U pizza-box server hosted at a local data center. I regularly visited this server in person, as well as a few of its neighbors, which hosted apps we had built for our clients at the time. On occasion, I also got the opportunity to visit servers located in our clients’ data centers. :joy:

The things I had to do during these visits to get (and keep) everything working seemed a bit frustrating at the time and downright harrowing in retrospect.

The Cold Swap

At some point, our original web server running Debian Linux let us know that one of its two physical, spinning-disk, IDE hard drives configured in a RAID 1 array was failing. This wasn’t the first time; this style of hard drive had a tendency to fail after long periods of sustained use.

When I arrived at the data center, the reality of just how cold a swap this operation would be set in. There were no slots on the front of this pizza box for sliding out the failed hard drive. The fact that the spare drive I’d brought with me looked exactly like the one in my home PC should’ve clued me in to that. No, I had to plug the server into a monitor and keyboard stand, shut it down, unplug everything, pull it out onto the floor, then open it up and dig around inside.

Inside is where I encountered my first Indiana Jones dungeon puzzle of an infra task. I was staring at two seemingly unlabeled drives. One was still good and had all of our data on it, just waiting to dutifully rebuild a new RAID 1 buddy. And the other was corrupted trash, ready to do who knows what if I left it in there and replaced the good drive with the spare I brought.

The error logs that had sent me here in the first place designated the bad drive simply as “drive 1.” I had a strong feeling this actually referred to the second of the two drives, as I recalled verifying that the other drive was, in fact, “drive 0.” But which of these unlabeled drives was the first, and which was the second?

I didn’t know. But I found calm in what I can only describe as my zen awakening in the IT/Infrastructure/DevOps world with this thought: “The worst case is I break this and have to rebuild it. A person built this before me. I’m a smart person too. So screw it!”

I pulled out the hard drive on the right because, counting from left to right, it was the second one. Good enough. It turns out I was lucky: the RAID array recognized the new drive and started rebuilding itself when I got everything back together.

I swore that from that day on, all new server hardware we bought would use front-of-rack-accessible, hot-swappable SCSI drives with those fancy LEDs that TELL YOU WHICH DRIVE IS BAD.

The NIC

And that’s exactly what I did. One of the first of these servers was destined for a client’s data center, where software we’d developed was going to be hosted. Our client ordered the server to my specifications: a 1U Dell PowerEdge with two hot-swappable SCSI drives in a RAID 1 configuration for redundancy.

When it arrived, I headed to their location to install Debian Linux and our software. I put in my pre-burned Debian installation CD, booted the installer, and was immediately stymied by a lack of internet connectivity. It turned out the network interface card (NIC) on the server (an Intel e1000, if I recall correctly) was so new that its driver wasn’t included in the installer I was using.

Using my laptop, I was able to find the source for the e1000 network card driver from Intel, and I got it downloaded and compiled. Getting it onto the PowerEdge server during installation required a 3.5″ floppy disk, because old Linux installers rarely loaded USB HID (i.e., mouse/keyboard) or USB mass storage (i.e., USB stick) drivers during the installation process. Fortunately, I had both a 3.5″ floppy disk and a USB floppy drive with me (because, of course, I always did).

This was the last time I can remember using a floppy disk to transfer a device driver to a server during an installation. But it was also the first time I remember feeling a surge of pride and possibility, best summed up as: “I can solve any problem if I refuse to stop learning whenever I feel blocked.”

Continue Forward

Backend hosting and infrastructure providers sprang up and evolved such that managing our own hardware made less and less sense. The industry moved to shared hosting environments, managed servers, and on-demand virtual servers managed through a web interface. I remember the joy of using the service Slicehost to fire up a new Linux virtual server when it prompted me, via a dropdown, for which distro and version I wanted (at which point I shot a glaring frown at my old floppy drive).

And now we have Docker, which I believe to be an incredible peak of progress, sitting at the intersection of the considerations most important to dev and ops. From all my experience getting apps running on physical servers, virtual servers, and container services, I say this: “As a dev, if you get your app running in Docker Compose, you’ve most concisely documented and communicated what it takes to run it on production infrastructure.”
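
To make that concrete, here’s a minimal sketch of what that kind of living documentation might look like. It assumes a hypothetical web app with a PostgreSQL database; the service names, port, image tag, and credentials are all illustrative, not prescriptive.

```yaml
# docker-compose.yml — a hypothetical app plus the database it depends on.
# Everything the app needs in order to run is declared in one readable file.
services:
  app:
    build: .                  # build the app image from this repo's Dockerfile
    ports:
      - "8000:8000"           # expose the app at localhost:8000
    environment:
      DATABASE_URL: postgres://app:secret@db:5432/app_dev
    depends_on:
      - db                    # start the database before the app

  db:
    image: postgres:16        # the exact database version is documented right here
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: app_dev
    volumes:
      - db_data:/var/lib/postgresql/data   # persist data across restarts

volumes:
  db_data:
```

One `docker compose up` later, a new teammate (or the person wiring up production infrastructure) can see at a glance which services exist, how they talk to each other, and which versions and environment variables they expect. That’s the whole point: no floppy disks, no guessing which drive is “drive 1.”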