The Responsible Application Audit – A 7-Part Checklist

As developers, we’re often asked to review the codebases of existing software applications. The common thinking is that a skilled developer can simply:

  1. “give a look” to the existing codebase,
  2. offer an expert opinion on how readily the application can be extended, and
  3. attach some level of certainty to the cost of adding new functionality.

This is a trap. Don’t fall for it.

Refuse to “Give it a Look”

The “give it a look” strategy is flawed. It sets bad expectations and ends in disappointment. Why?

  • Codebase Size & Complexity – The application under review was likely built over months or years, and it likely delivers a substantial amount of functionality. Imagine being asked to produce a well-thought-through summary and analysis of a novel after skimming only a few key paragraphs. Chances are, you’d miss some significant concepts.
  • Reviewer Temperament – Some of us are pessimistic, and others are optimistic. Without working through a quantitative checklist of discrete best practices (or actually learning the system by developing the next piece of functionality), our review will probably skew either far too rosy or far too negative. Either way, the output is wrong.

A Responsible Application Audit

If you’re going to review an existing codebase, I’m a big believer in using a plan built around best practices. It’s an audit, not a review. A good audit should produce tangible artifacts and unbiased answers about how the codebase measures up against software development best practices.

Here’s an outline of a plan that we’ve successfully used when auditing an existing web application. It’s not perfect for all application types, but it shows a good structure for working through an audit.

1. Get it running on your local machine.

This is the most important thing you can do. Getting the application to run properly on your local machine means you’ve been able to account for and install all the required dependencies. Surprisingly, this effort will likely require a day or two of work, and it will undoubtedly uncover smells in the codebase.

Here are a couple of key questions to answer.

  • Were the setup and build processes easy?
  • Does an automated deployment exist?
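
Once the app boots, a quick smoke test proves it actually responds. Here’s a minimal sketch in Python, assuming a hypothetical health-check endpoint at http://localhost:8000/health (substitute whatever the app actually exposes):

    import urllib.request

    # Hypothetical health-check endpoint; substitute the app's real URL.
    HEALTH_URL = "http://localhost:8000/health"

    def smoke_test(url: str = HEALTH_URL) -> None:
        """Raise if the locally running app doesn't answer with HTTP 200."""
        with urllib.request.urlopen(url, timeout=5) as resp:  # raises on 4xx/5xx
            print(f"Smoke test passed: {url} -> {resp.status}")

    if __name__ == "__main__":
        smoke_test()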

2. Review the architecture.

Start by drawing a high-level picture of the application architecture. Focus on the major components and how they interact.

  • Does the drawing make logical sense?
  • Would you refactor how any of the major components interact?
  • Are there systemic weaknesses that could hamper adding new features?

Next, draw a system diagram that represents the databases, caches, cloud services, dockerization, etc. Then, once again ask yourself the same set of questions that you did for the application architecture.
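
As a hypothetical example, the system diagram for a typical web application might look something like this:

    Browser ──> Load balancer ──> App containers (Docker)
                                      ├──> PostgreSQL (primary datastore)
                                      ├──> Redis (cache / sessions)
                                      └──> Background workers ──> Object storage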

3. Review a feature.

Select a major feature of the application and do a deeper evaluation. Start with a quantitative review.

  • Is there a proper separation of concerns? (See the sketch after this list.)
  • Is the single responsibility principle being followed?
  • Are the required dependencies reliable? Are they being actively maintained and supported?
  • Is there stale or unused code?
  • Do you see any other code smells like complexity, duplication, or churn?
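
To make the first two questions concrete, here’s a hypothetical before-and-after sketch (request and db are stand-ins for whatever the codebase actually uses). The “before” handler mixes input parsing, business rules, and persistence; the “after” gives each responsibility its own function:

    # Before: one function parses input, applies business rules, and
    # writes to the database -- three separate reasons to change.
    def create_order(request, db):
        items = request.json["items"]
        total = sum(i["price"] * i["qty"] for i in items)
        if total > 10_000:
            total *= 0.95  # bulk discount buried in the handler
        db.execute("INSERT INTO orders (total) VALUES (?)", (total,))

    # After: each piece can be tested and modified independently.
    def parse_items(request):
        return request.json["items"]

    def price_order(items):
        total = sum(i["price"] * i["qty"] for i in items)
        return total * 0.95 if total > 10_000 else total

    def save_order(db, total):
        db.execute("INSERT INTO orders (total) VALUES (?)", (total,))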

Next, ask yourself some qualitative questions.

  • Is the code understandable?
  • Does the code do a good job of representing the mental model of the application user?
  • Are you confident that you can safely modify the code to enhance the feature?
  • Do you feel like you could repurpose the code to develop a parallel feature?

4. Measure the test coverage.

This helps establish a high-level understanding of the application’s quality.

  • What types of tests exist?
  • Do the tests that exist run cleanly?
  • Using a code coverage tool, can you find any components of the application that aren’t tested well?
  • Read through a few select test cases. Do they make sense? Do they give you a good understanding of the code that’s being tested?
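
One way to answer the coverage question in a Python codebase is coverage.py. A minimal sketch, assuming coverage and pytest are installed, the application lives in a hypothetical myapp package, and the suite lives in a hypothetical tests/ directory:

    import coverage
    import pytest

    # Record which lines of the application execute while the tests run.
    cov = coverage.Coverage(source=["myapp"])  # "myapp" is a hypothetical package
    cov.start()
    pytest.main(["tests/"])
    cov.stop()
    cov.save()
    cov.report(show_missing=True)  # per-file coverage plus untested line numbers

In practice the pytest-cov plugin (pytest --cov=myapp) wraps the same machinery, and other ecosystems have direct equivalents (Istanbul/nyc, JaCoCo, etc.).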

5. Assess the code readability.

Code developed with consistent standards is much easier to extend.

  • Does the codebase use a consistent coding standard?
  • Are files, classes, functions, methods, and variables named well?
  • Are common web conventions being followed?
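
Naming problems jump out when you put them side by side. A hypothetical example:

    # Hard to audit: the names describe nothing.
    def proc(d, f):
        return [x for x in d if x.st == f]

    # Easy to audit: the same logic, named for the domain.
    def filter_invoices_by_status(invoices, status):
        return [invoice for invoice in invoices if invoice.status == status]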

6. Evaluate basic security precautions.

Validating these helps you understand whether the original developers were aware of the common security risks.

  • Are authentication and authorization best practices used? What tools are being leveraged?
  • Is session management being properly handled?
  • Is secure transmission being used when appropriate (e.g., proper use of HTTPS)?
  • Are common security risks, like file uploads, being handled properly? (See the sketch below.)
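
For the upload question, here’s a sketch of the kind of handling you’d hope to find: an extension allow-list, a size cap, and a server-generated filename so client input never reaches the filesystem path. The specific limits and storage location are illustrative:

    import uuid
    from pathlib import Path

    ALLOWED_EXTENSIONS = {".png", ".jpg", ".pdf"}   # example allow-list
    MAX_UPLOAD_BYTES = 5 * 1024 * 1024              # example 5 MB cap
    UPLOAD_DIR = Path("/var/app/uploads")           # hypothetical location

    def store_upload(original_name: str, data: bytes) -> Path:
        ext = Path(original_name).suffix.lower()
        if ext not in ALLOWED_EXTENSIONS:
            raise ValueError(f"Disallowed file type: {ext}")
        if len(data) > MAX_UPLOAD_BYTES:
            raise ValueError("Upload exceeds size limit")
        # Never trust the client-supplied name; generate our own.
        UPLOAD_DIR.mkdir(parents=True, exist_ok=True)
        destination = UPLOAD_DIR / f"{uuid.uuid4().hex}{ext}"
        destination.write_bytes(data)
        return destination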

If security is a major risk point, consider recommending a more formal security audit. Various organizations specialize in this area.

7. Evaluate basic scalability risk points.

Assuming the application has a fair number of users, review the set of common scalability risk points.

  • Does the database have proper indexes?
  • Are there obvious inefficient queries in the codebase?
  • Are long-running tasks happening in a background process?
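
The index question can be answered empirically rather than by eyeballing the schema. A minimal sketch using SQLite’s EXPLAIN QUERY PLAN (PostgreSQL and MySQL have their own EXPLAIN; the table here is made up):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER)")
    query = "SELECT * FROM orders WHERE customer_id = ?"

    # Without an index, the plan reports a full table scan ("SCAN orders").
    print(conn.execute(f"EXPLAIN QUERY PLAN {query}", (42,)).fetchall())

    conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")

    # With the index, it becomes "SEARCH orders USING INDEX idx_orders_customer".
    print(conn.execute(f"EXPLAIN QUERY PLAN {query}", (42,)).fetchall())

On a real engagement, run the client’s slowest queries through the production database’s own EXPLAIN instead.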

Pro tip: Ask the client what parts of the application run slower than they’d like, and start your evaluation there.

Conclusion

The “give it a look” strategy for reviewing an existing codebase is flawed. Instead, work through a responsible application audit. The audit will provide a good evaluation of what exists and will help guide the next steps for enhancing the application. It will also make you much more confident in your recommendations.