We recently launched a new website, replacing the venerable old website of 9 years. So as not to completely lose the content of our old website, we decided to archive it to disk so that we would be able to resurrect it at a moment’s notice, both for historical purposes and to ensure that we would be able to retrieve any content or files we had not migrated to our new website.
A fair amount of my work involves solving problems and troubleshooting issues on systems I have no direct access to or insight into. Most often, this involves communicating with someone a world away (or at least several time zones away) who is experiencing a problem with an application I’m responsible for maintaining or supporting. Read more on Remote Troubleshooting Tools…
Back in March I posted about using git subtrees to simplify deploys. I was initially hoping to clean things up a bit by using subtrees. I wanted to reduce the size of my deploy and my Heroku slug by excluding source assets. I also wanted to make sure that it was easy to understand and that other developers didn’t have to worry about extra steps when cloning the repo or when deploying. Submodules force some extra steps when cloning, pulling, and pushing, so they weren’t my first choice.
Using subtrees like this did work fairly well, but I would think twice before using them again for this type of problem. Read more on Simpler Deploys with git Subtrees: A Retrospective…
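For reference, the deploy flow I was experimenting with looked roughly like the sketch below. The branch and prefix names are illustrative, and the scratch repository only exists to make the commands self-contained; the idea is to split the built assets into their own branch, which can be pushed to Heroku without the source assets.

```shell
# Sketch of a subtree-based deploy (illustrative names throughout).
# git-subtree ships in git's contrib directory and may not be installed:
[ -x "$(git --exec-path)/git-subtree" ] || { echo "git-subtree not available"; exit 0; }
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q app && cd app
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m init

# Pretend dist/ holds compiled assets we want to deploy without sources:
mkdir dist && echo 'compiled asset' > dist/app.js
git add dist && git -c user.email=demo@example.com -c user.name=demo commit -qm "add built assets"

# Split the history of dist/ into its own branch; deploying is then
# something like: git push heroku deploy:master
git subtree split --prefix dist -b deploy

git show deploy:app.js   # the deploy branch contains only dist/ contents
```

The appeal is that cloning and day-to-day development need no extra steps, unlike submodules; the split happens only at deploy time.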
Atomic has a real passion for writing high-quality code and for testing. While our internal server infrastructure has often been maintained in a semi-automated fashion, it has traditionally lagged far behind our development practices in terms of code quality, testing, and continuous integration.
Over the past year, however, Mike English and I have been working to revamp much of our server infrastructure using the Chef configuration management tool. Our goal has become to build a Test-Driven Infrastructure (TDI) in which we first write tests to model and validate the code that we later produce to configure and manage our servers and applications. Read more on Test-Driven Infrastructure (TDI)…
First announced almost a month ago, Shellshock continues to endanger unpatched web servers and Linux devices. So what is it? How can you tell if you’re vulnerable? And how can it be addressed?
What Is Shellshock?
Shellshock is a vulnerability in the bash software program. Bash is a shell, installed on Linux and other operating systems in the Unix family. A shell is a software component that is deeply integrated into the operating system, which is what makes this vulnerability so insidious.
The Shellshock vulnerability is a bug in bash’s parser. It was first introduced more than 20 years ago, when a feature allowing functions to be exported was added. The danger is that an attacker who can control the content of an environment variable could potentially execute arbitrary code on a vulnerable system. Remote code execution (RCE) vulnerabilities (also called “arbitrary code execution” vulnerabilities) are among the most dangerous. Paired with privilege escalation vulnerabilities or poor security practices (e.g. allowing web servers to run as privileged users), unaddressed arbitrary code execution vulnerabilities can lead to the complete takeover of vulnerable systems. Read more on Shellshock – CVEs, Patches, Updates, & Other Resources…
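The widely circulated one-liner for the original CVE (CVE-2014-6271) makes the mechanism concrete: an exported environment variable that looks like a function definition, with extra code smuggled in after it.

```shell
# Check for the original Shellshock bug (CVE-2014-6271). A vulnerable
# bash executes the code trailing the function definition when it
# imports the variable, printing "vulnerable" before "test"; a patched
# bash prints only "test".
env x='() { :;}; echo vulnerable' bash -c "echo test"
```

Note that the initial fix turned out to be incomplete, which is why follow-up CVEs and patches appeared in the weeks after the announcement.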
As a Professional Problem Solver, much of my work deals with installing, configuring, and managing the Operating System layer of an application stack.
Managing the OS layer has been the work of System Administrators for many years. With the advent of virtualization, it became relatively easy to create and destroy virtual machines. With the “cloud” many of us no longer even own physical servers. With DevOps tools and configuration management, we’ve created abstractions for configuration and automated provisioning.
The operating systems themselves have remained largely the same. When we’re not using a PaaS like Heroku, our application servers are often full Linux VMs. Even with containerization tools like Docker, the underlying OS is fundamentally the same. The advent of virtualization brought many changes, but we still haven’t seen the full impact of this paradigm shift. Read more on Re-imagining Operating Systems: Xen, Unikernels, and the Library OS…
Recently, we rebuilt Atomic’s SVN server. We wanted to upgrade to the latest Ubuntu LTS release and also wanted to manage the server with Chef. Provisioning the server and bootstrapping it with Chef were straightforward. However, actually preparing the server for hosting our SVN repositories and migrating all of the data posed some challenges. I was reminded of some useful commands and techniques, and I learned how to fix a few problems.
Unlike git, which allows us to clone a new bare repository from any existing one, SVN repositories must be exported or ‘dumped’ to a portable format (called a ‘dumpfile’), transferred to the new location, and then loaded into a new, empty repository.
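The dump-and-load cycle can be sketched as follows. The paths here are illustrative and the scratch repository only makes the commands self-contained; in a real migration you would dump the production repository and transfer the dumpfile (e.g. with scp) to the new host.

```shell
# Illustrative SVN dump/load migration cycle using a throwaway repo.
command -v svnadmin >/dev/null 2>&1 || { echo "svnadmin not installed"; exit 0; }
set -e
tmp=$(mktemp -d)

svnadmin create "$tmp/old-repo"                       # stand-in for the existing repo
svnadmin dump "$tmp/old-repo" > "$tmp/repo.dumpfile"  # export full history to a portable dumpfile

svnadmin create "$tmp/new-repo"                       # new, empty repository
svnadmin load "$tmp/new-repo" < "$tmp/repo.dumpfile"  # replay the history into it

rm -rf "$tmp"
```

The dumpfile preserves the full revision history, so the loaded repository ends up equivalent to the original.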
Security patches for libraries and tools come out quite frequently. Just subscribe to any Linux distribution security list, and you’ll find that security updates are released with astounding frequency, sometimes even daily. Even kernel security updates are fairly common, with two security patches being released for the kernel used by Ubuntu 12.04 LTS in June. To keep current with security fixes, I often find it useful to configure servers to perform automatic security updates. If properly configured, automatic updates can mitigate risk and keep any service interruptions to a minimum.
Are Automatic Upgrades a Good Choice?
Most servers I work with are good candidates for automatic security updates; they aren’t running applications sensitive to the minor changes introduced by security updates. Additionally, quick service interruptions at off-hours aren’t an issue. For example, a quick restart of Apache or MySQL at 2am will not be a problem. If a server is particularly sensitive, I will only set up notifications of security updates, so that I can control the what and when of any update installation. Read more on Debian and Ubuntu Automatic Security Updates…
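On Debian and Ubuntu, the unattended-upgrades package implements this. A minimal configuration looks like the fragment below; the file paths are the conventional ones, and the mail address is illustrative.

```
// /etc/apt/apt.conf.d/20auto-upgrades -- enable the daily apt jobs
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";

// /etc/apt/apt.conf.d/50unattended-upgrades -- restrict upgrades to the
// security origin and (optionally) mail a report after each run
Unattended-Upgrade::Allowed-Origins {
        "${distro_id}:${distro_codename}-security";
};
Unattended-Upgrade::Mail "root@example.com";
```

After installing the package (`apt-get install unattended-upgrades`), running `dpkg-reconfigure -plow unattended-upgrades` will generate the 20auto-upgrades file for you.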
A great deal of the time, I work on the command line — usually logged into a remote system, doing some tasks or troubleshooting some problem. Quite often, this involves checking or manipulating something on the filesystem.
There are dozens of filesystem utilities. Most are well-known file-manipulation utilities such as mkdir. However, there are several less familiar but very powerful tools that I find myself using on a nearly daily basis. The following Linux filesystem utilities are ones I find particularly helpful for diagnosing issues and gathering information to solve problems.
Free Disk Space
Finding the amount of available free disk space is important — especially if a system has a low-capacity hard drive or typically runs close to the margins. Whenever I start seeing strange failures on a system, one of the first things I check is disk utilization. The df command allows me to quickly check whether a system is running near disk capacity. Read more on 5 Linux Filesystem Utilities for Diagnostics…
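As a quick sketch of the kind of check described above:

```shell
# Report disk utilization in human-readable units; a filesystem at or
# near 100% Use% is a common cause of strange failures.
df -h /

# Inode exhaustion can also produce "No space left on device" errors
# even when blocks are free; -i reports inode usage instead.
df -i /
```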
As much as I’d like to see a world where PKI is used to secure digital resources, the status quo is a world often secured by passwords. Passwords are hard to remember and easy to lose. We should use complex, hard-to-guess passwords. We should use separate passwords for every site. We should keep passwords to ourselves instead of sharing accounts with other users. All of these things add up to more than most minds should be taxed with.
The good news is: password managers can help! Read more on GPG + Git: The pass Password Manager…