Unit testing, integration testing, acceptance testing, exploratory testing—all are important methods that contribute to delivering quality software. But there’s another important kind of testing that is often overlooked, and that’s testing the production environment.
For example, has the production environment been configured correctly so the system can successfully send email? Can the system access a third-party web API?
Some features that rely on a configured environment are easy to test manually just by loading a page in a web application or navigating to the right screen in a mobile app. That’s not what I want to focus on in this blog post. (I’m assuming everybody is testing that kind of stuff already.)
I want to focus on the things that “can’t be tested in production.” These are things you can’t just point and click to try out in production. Maybe it’s a feature that’s triggered when an external system writes a file to a known directory on the local filesystem, or the ability to access an external FTP server to deliver a data file to a third party.
What Should Be Tested?
No matter how good a job you’ve done simulating the production environment in your automated test suite or test environment, your system isn’t going to work properly after it’s been deployed if the necessary environment variables haven’t been configured correctly, a background process hasn’t been started, and so on.
Anything that can be changed based on the environment should be tested once it has been deployed to production. This could include:
- Sending email
- Reading/writing to/from a database
- Accessing third-party web APIs
- Reading/writing/monitoring files on a network share
- Accessing an FTP server
- Notifying an error reporting system
- Any other external integrations
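A simple first line of defense for several of these is verifying at startup that the required configuration is even present. A minimal sketch (the variable names here are hypothetical, not from the original system):

```python
import os

def missing_env_vars(required, env=None):
    """Return the names of required environment variables that are unset or empty."""
    env = os.environ if env is None else env
    return [name for name in required if not env.get(name)]

# Hypothetical settings a system like this might depend on.
REQUIRED = ["SMTP_HOST", "DATABASE_URL", "FTP_PASSWORD", "ERROR_REPORTING_KEY"]

missing = missing_env_vars(REQUIRED)
# e.g. log a prominent warning, or refuse to start, if anything is missing
```

This only proves the settings exist, not that they work; the techniques below cover the rest.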
Bake Testing into the System
When working on a feature, writing automated tests is considered “part of the story.” If you don’t have tests that prove your code does what it’s supposed to do, the story isn’t done.
The same philosophy can apply to environment testing. If the feature relies on some external system or is dependent on the environment where it runs, then the story isn’t done unless you can manually test that the environment has been configured correctly once it’s in production (in a way that doesn’t affect “real” production users).
The following are some examples of ways to bake manual environment testing into an application.
Integration status dashboard
Add a page/screen, accessible only to admin/super users, that shows the current ability to connect to external systems that are configured as part of the production environment. Something like GitHub’s Status page. When the status page is loaded, the application can try to connect, read, write (if it’s safe), etc. to each external integration, and display the results all on a single page.
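One way to structure such a page is a registry of named check functions: each check either succeeds or raises, and the page renders the collected results. A hedged sketch, with illustrative check functions standing in for real SMTP/FTP/database connections:

```python
def run_status_checks(checks):
    """Run each named check, catching failures so one broken
    integration doesn't hide the status of the others."""
    results = {}
    for name, check in checks.items():
        try:
            check()
            results[name] = ("ok", None)
        except Exception as exc:
            results[name] = ("failed", str(exc))
    return results

# Illustrative checks -- real ones would open a connection to the
# actual external system (e.g. smtplib.SMTP(host, timeout=5).noop()).
def check_mail_server():
    pass  # pretend the connection succeeded

def check_ftp_server():
    raise ConnectionError("530 Login incorrect")

statuses = run_status_checks({"mail": check_mail_server, "ftp": check_ftp_server})
# statuses -> {"mail": ("ok", None), "ftp": ("failed", "530 Login incorrect")}
```

Because every check is isolated, a dead FTP server shows up as one red row instead of taking the whole status page down.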
Checking the status page should become a standard step in a production deploy. It can tell you immediately if the password for an FTP server isn’t configured correctly, so you don’t have to wait to hear from a production user that they’re getting an error when trying to deliver an important document.
The status page can also come in handy when troubleshooting problems that crop up long after a deploy. For example, a quick check of the status page first thing in the morning could show that the system isn’t able to connect to the mail server, so you don’t have to find out from users who have been unable to send out purchase orders.
Adding a status check to this page when each new integration is added to the system won’t require much additional work, and it will ensure you know the integration point has been configured properly once it’s been deployed to production.
Test input files

One system I worked on had a scheduled task that, every 30 minutes, would ping a server running inside the company’s firewall telling it to check a specified local directory to see if there were any incoming data files that needed to be processed. If it found any, those files would be parsed and sent up to a cloud-hosted server to be imported into the system. Once imported, PDF reports would be generated and then emailed to specific sets of users.
In this process, there are a handful of things that need to be wired up correctly in order for everything to work:
- The scheduler (e.g. cron) needs to be configured to run every 30 minutes.
- The scheduled task needs to be configured with the correct path and environment to run a script.
- The script that’s run by the scheduler needs to tell the correct internal server to process available files.
- The internal server needs to be able to read from the local filesystem.
- The internal server needs to be able to communicate with the cloud-hosted server.
- The cloud-hosted server needs to be configured with the correct mail server credentials.
- The cloud server needs to be able to generate a PDF (meaning the correct utilities/libraries are installed on the cloud server).
- The cloud server needs to be able to send an email with a PDF attachment.
Without doing anything special, the only way to know whether the whole end-to-end process will work is to wait for a real file to come in and see if everything happens as it should.
Instead, bake testing into the system by adding support for a test input file that’s coded to exercise the system as much as possible.
For example, the internal server that parses the incoming files could consider this test file just another type of incoming file, reading from the filesystem to parse the file and sending the parsed document up to the cloud server for importing. The cloud server could treat this test document just like any other, maintaining the status of the import and inserting rows into a table that’s only used for this testing. A test PDF could be generated, displaying some of the contents of the original file to show that it was parsed correctly, and then sent out as an email attachment to support staff.
With this all in place, as soon as the system goes live, or whenever any of the configuration is changed in production, you could run a live test to make sure that all of the pieces are wired up correctly.
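One convention that makes this work is giving the test file a recognizable name and letting it flow through the normal pipeline, diverting only its side effects. A sketch of how the parsing step might flag it (the file-naming convention and table names are assumptions for illustration):

```python
from pathlib import Path

TEST_FILE_PREFIX = "envtest_"  # hypothetical naming convention

def classify_incoming(path):
    """Parse an incoming data file exactly as usual, but mark test
    files so downstream steps write to test-only tables and email
    support staff instead of real users."""
    name = Path(path).name
    is_test = name.startswith(TEST_FILE_PREFIX)
    return {
        "path": str(path),
        "is_test": is_test,
        # Test imports land in a table used only for environment testing.
        "target_table": "import_env_tests" if is_test else "imports",
    }

doc = classify_incoming("/data/incoming/envtest_2024_check.csv")
# doc["is_test"] is True; doc["target_table"] is "import_env_tests"
```

Every other step (file pickup, upload to the cloud server, PDF generation, email) runs unmodified, which is exactly what makes the test meaningful.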
Integration test actions

The integration status dashboard described above will indicate the status of basic connectivity with an external system. But in most cases, there is more to working with a third-party API than just getting the login credentials configured correctly.
For further testing, I suggest adding a page that lets you do things like trigger an error notification using your configured error reporting tool, send an email using the system’s mail server, etc.
Each section of the page could have a form that allows you to specify the necessary information (an error message to send, the recipients/subject/message of an email, etc.) and then a button that would trigger the action.
When first deploying a new integration, the action could be tested as part of the deployment, providing confidence that everything has been configured correctly. It could also be used to troubleshoot problems down the road when there are reports of something not working as expected.
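For the email section of such a page, for example, the form handler can build a clearly labeled test message and push it through the system’s real mail path. A sketch using Python’s standard library (the sender address, SMTP host, and port are placeholders):

```python
import smtplib
from email.message import EmailMessage

def build_test_email(recipients, subject, body):
    """Build a message that is obviously a test, so nobody mistakes
    it for real system output."""
    msg = EmailMessage()
    msg["From"] = "noreply@example.com"  # placeholder sender
    msg["To"] = ", ".join(recipients)
    msg["Subject"] = f"[ENVIRONMENT TEST] {subject}"
    msg.set_content(body)
    return msg

def send_test_email(msg, host="localhost", port=25):
    # Uses the same mail server as real features, so a failure here
    # means real email would fail too.
    with smtplib.SMTP(host, port, timeout=10) as smtp:
        smtp.send_message(msg)

msg = build_test_email(["admin@example.com"], "Deploy check",
                       "Sent from the test actions page.")
```

The point of the `[ENVIRONMENT TEST]` prefix is that the message can safely go to a real inbox without confusing anyone.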
All of these techniques for testing the production environment require some level of additional development. But when quality and correctness are important, it’s not enough to just push to production and hope that everything has been configured correctly.
It’s well worth the effort to make manual testing of the production environment a standard part of your software development process.