Prepare for a HIPAA Audit Like It’s a Production Launch

If you’re new to working in regulated industries, audits can feel opaque and high-stakes in a way that normal engineering usually doesn’t. They’re unfamiliar, they come wrapped in legal jargon that feels heavy, and they’re a fact of life. But if you shift your perspective just slightly, you may find that you already have the mindset you need to prepare for a successful audit, in the same way that you prepare for a successful production launch.

Not buying it? Let me try to justify the metaphor.

Preparing for a HIPAA Audit

The goal of every project I’ve ever worked on is to ship the software to production, to get the tool, service, whatever, in the hands of its intended customers so they can live a better life by automating the boring stuff through software. If I’ve done my job well, then shipping to production is a low-risk event. Think turning the lights on, not launching a rocket.

Doing my job well means derisking things using good engineering practice and know-how. I write unit tests to catch issues with individual functions, end to end tests to catch issues with entire applications, and use continuous delivery practices like dark launches and feature flags to catch issues with whole production environments. There’s a ladder of checks, and each one is designed to minimize surprises on the next rung up, all the way through production launch.

Every digital health project has all of those characteristics, and also includes a periodic test called a compulsory audit, where an outside authority checks to make sure you aren’t leaking sensitive information, or setting things up in a way that makes leaks more likely. If you’ve done your job well, that audit is a low-risk event because of the ladder of derisking you’ve done ahead of time.

You can use developer checklists to ensure you catch easy-to-address HIPAA audit risks during initial development, when they’re cheap to fix. A delivery teammate can run a separate set of checklists in an adversarial (only a little) way to ensure that the dev team isn’t succumbing to over-optimism or missing issues that are hard to see when they’re deep in the code. You can use AI tools (very carefully) to run increasingly realistic audits with increasingly realistic test data. And you can run your own internal and external audits. This too is a ladder of checks, where each rung minimizes the chances of surprises on the next rung up, all the way through compulsory external audits.

Below are a few ways you and your dev team can leverage your existing skills to prepare for unsurprising audits.

Read the Source Code

HIPAA requirements are mostly common sense and good practice wrapped in legalese, an effects-oriented programming language that runs on a massively distributed, eventually consistent(ish) collection of independent interpreters known as the judiciary.

If you’re the developer who’s been asked to lead the charge in this space, take some time to read the underlying legal documents in the same way that you might read an RFC. Use web pages like this one as a jumping off point, to help you find the original, then go read that to see what’s actually required.

But be careful. Just because you understand what the words would mean if they were interpreted by a lay person, doesn’t necessarily mean that you fully understand their import when interpreted by a judge. Legalese is an old language, and the runtime has been under heavy construction for centuries. It’s even older than Perl. 🤯 There are some gotchas to watch out for. So find a good lawyer and check your understanding with them early and often. Lawyers and developers have a lot more in common than you might think.

HIPAA’s Security Rule

This is one of the big governing documents that you should be familiar with.

Some of the tricky-to-parse basics are as follows:

  • Record and examine activity in any system that contains PHI, per 45 CFR §164.312(b)
  • Regularly review audit logs, access reports, and other activity records, per 45 CFR §164.308(a)(1)(ii)(D)
  • Prevent PHI from being altered or destroyed improperly, per 45 CFR §164.312(c)(1)
  • Strongly authenticate users who attempt to access PHI, per 45 CFR §164.312(d)
  • Retain logs for 6 years, per 45 CFR §164.316(b)(2)
  • Disclose PHI access, per 45 CFR §164.528

If you, like me, space out a bit whenever the § symbol shows up in a document, here are those requirements translated to more standard software developer lingo:

  • Log activity on any system that touches Protected Health Information (PHI)
  • Look at the logs regularly
  • Use tamper-proof stores for PHI
  • Use strong authentication for everyone
  • Keep your logs for 6 years
  • Tell people when their PHI is accessed

Document Your Intent

Look through your codebase, infrastructure, hardware, and org chart and find everything that might touch PHI. Write it all down. This will probably take a while. It needs to include everything you probably expect: databases, event streams, and applications; and some things you might not be thinking about all the time: network configuration, log aggregation, analytics packages. If you’re using a cloud provider like AWS, you’ll want to carefully record which services you’re using and how.

Design an audit log that adequately captures everything you’d need to prove to a HIPAA auditor that you’re doing everything correctly. At a minimum, your log should include:

  • Actor identifiers and roles (who’s who, and what do they do?)
  • The subject of the action (what they accessed)
  • The action itself (what they did)
  • Timestamp (when they did it)
  • Outcome (whether it worked)
  • Reason code (why they tried)
  • Request identifiers (think distributed system traces)
  • Source network attributes (where did they access it from)
  • Cryptographic hashes that link each record into a tamper-evident chain (like a commit hash)
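As a minimal sketch of the last two items, here’s one way to link records into a tamper-evident chain in Python. The field names mirror the list above but are illustrative, not a standard, and a real system would persist the chain rather than hold it in memory:

```python
import hashlib
import json
from datetime import datetime, timezone

GENESIS_HASH = "0" * 64  # sentinel "previous hash" for the first record

def append_audit_record(chain: list, record: dict) -> dict:
    """Build an audit entry, link it to the previous record's hash,
    and return it. Any later edit to an earlier entry breaks the chain."""
    prev_hash = chain[-1]["hash"] if chain else GENESIS_HASH
    entry = {
        "actor_id": record["actor_id"],        # who's who
        "actor_role": record["actor_role"],    # what they do
        "subject": record["subject"],          # what they accessed
        "action": record["action"],            # what they did
        "timestamp": datetime.now(timezone.utc).isoformat(),  # when
        "outcome": record["outcome"],          # whether it worked
        "reason_code": record["reason_code"],  # why they tried
        "request_id": record["request_id"],    # trace correlation
        "source_ip": record["source_ip"],      # where from
        "prev_hash": prev_hash,
    }
    # Hash the canonical JSON of the entry (including prev_hash) to
    # form a tamper-evident link, like a commit hash.
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    return entry

def verify_chain(chain: list) -> bool:
    """Recompute every hash and link; False means something was altered."""
    prev = GENESIS_HASH
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body["prev_hash"] != prev:
            return False
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```

Running `verify_chain` on a schedule is one cheap way to demonstrate log integrity to an auditor: edit any field in any record, and verification fails from that point forward.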

Emit events from every service into a secure ingestion pipeline like Kinesis, through a transformation layer that validates schema conformance and drops sensitive payloads. Store your logs in immutable storage such as S3 with object lock or Azure immutable blobs. Encrypt them in-transit and at-rest.
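A minimal sketch of that transformation step, assuming a hypothetical event shape (the required and sensitive field names are assumptions for illustration):

```python
REQUIRED_FIELDS = {"actor_id", "subject", "action", "timestamp", "outcome"}
SENSITIVE_FIELDS = {"payload", "phi", "notes"}  # illustrative names

def transform(event: dict) -> dict:
    """Validate schema conformance and drop sensitive payloads before
    an event reaches immutable storage. The audit log should record
    *that* PHI was accessed, not the PHI itself."""
    missing = REQUIRED_FIELDS - event.keys()
    if missing:
        # Reject non-conforming events so gaps surface immediately,
        # rather than silently storing unusable records.
        raise ValueError(f"event missing required fields: {sorted(missing)}")
    return {k: v for k, v in event.items() if k not in SENSITIVE_FIELDS}
```

Failing loudly on a malformed event is a deliberate choice here: a silently dropped or half-written audit record is exactly the kind of gap an auditor will ask about.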

Restrict write access to service principals, and require short-lived, least-privilege roles for investigators. Plan to keep your audit logs for at least six years, and verify at least annually that they’re still readable.

Write Tests and Run Them Automatically

Tests mean nothing if you don’t run them, a lot. An audit log is the same. It’s useless if you don’t look at it. And it’s only really useful if someone else looks at it. Write developer and delivery checklists that will only pass once you’ve set up an adequate audit log, and built the infrastructure that would let non-developers (including, ideally, a small computer program) prove that any new changes are emitting logs properly.
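As a toy sketch of that “small computer program,” here’s one shape such a check could take, assuming (purely for illustration) that services write audit events to a shared sink you can inspect during tests:

```python
def assert_emits_audit_event(fn, audit_sink, *args, **kwargs):
    """Fail if calling fn does not append at least one audit event.
    A real suite would inspect your log pipeline, not an in-memory list."""
    before = len(audit_sink)
    result = fn(*args, **kwargs)
    if len(audit_sink) == before:
        raise AssertionError(f"{fn.__name__} emitted no audit events")
    return result

# Toy service under test: reads a record and logs the access.
def read_patient_record(patient_id, audit_sink):
    audit_sink.append({"action": "read", "subject": f"patient/{patient_id}"})
    return {"id": patient_id}
```

Wire a check like this into CI and a new endpoint that forgets to emit audit events fails the build, instead of failing the audit.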

Establish a cadence for evaluating your checklists and reviewing your audit logs. You might, for instance, review a random sampling of failed logins daily, a random sampling of high-risk events weekly, and a random sampling of all events quarterly. You should probably run your developer checklists every time you’re about to publish a PR. You might run your delivery checklists as a sign-off step before those PRs get merged. Just like in regular software development, there will be parts of your test suite that run automatically, and parts that require a human in the loop. Both can be integrated into an overall automated release pipeline.
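That cadence can live in code rather than in a wiki. A sketch, with categories and sampling rates as illustrative assumptions:

```python
import random

# (review interval, fraction of events sampled) -- illustrative values
REVIEW_CADENCE = {
    "failed_login": ("daily", 0.10),
    "high_risk":    ("weekly", 0.05),
    "all":          ("quarterly", 0.01),
}

def sample_for_review(events, category, rng=None):
    """Return a random sample of events for human review, at the
    rate configured for the given category."""
    rng = rng or random.Random()
    _, rate = REVIEW_CADENCE[category]
    return [e for e in events if rng.random() < rate]
```

Keeping the rates in one place makes the review policy itself auditable: you can point at the config, the sampler, and the scheduler that runs it.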

Get Somebody Else to Test the System

It’s great if the tests pass locally, but what really matters is whether they pass in CI. The same is true for audit preparation. Your developer checklists and internal delivery checklists are good low rungs on the ladder that should catch a lot, but you should also bring in external parties to check your work.


Assign clear owners spanning security, compliance, and clinical operations. Publish the runbooks that they follow: investigation checklist, data sources, escalation tree, and the sanctions policy they trigger. Set up automated alerts where appropriate, but be careful to avoid alert fatigue.

Keep an “evidence bundle” that’s always ready to go. Include policies, Standard Operating Procedures (SOPs), data flow diagrams, screenshots of dashboards with PHI masked, sample log exports with checksums, and links to third-party attestations from cloud providers or Security Information and Event Management (SIEM) vendors.

You may want to build some canned queries for questions that auditors are likely to ask, like:

  • Who accessed data for patient X in the last 30 days?
  • Which accounts escalated to admin last quarter?
  • Show failed access attempts for our Electronic Health Records (EHR) integration.
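As an illustration, a canned query for the first question might look like this against a hypothetical `audit_log` table (the schema and column names are assumptions, not a standard; SQLite stands in for whatever store you actually use):

```python
import sqlite3

PATIENT_ACCESS_LAST_30_DAYS = """
    SELECT actor_id, action, timestamp
    FROM audit_log
    WHERE subject = ?
      AND timestamp >= datetime('now', '-30 days')
    ORDER BY timestamp DESC
"""

def who_accessed(conn: sqlite3.Connection, patient_id: str):
    """Answer: who accessed data for patient X in the last 30 days?"""
    subject = f"patient/{patient_id}"
    return conn.execute(PATIENT_ACCESS_LAST_30_DAYS, (subject,)).fetchall()
```

Having these queries written, reviewed, and rehearsed ahead of time means an auditor’s question becomes a function call, not a scramble.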

You’ll use similar tooling to satisfy accounting-of-disclosure requests from patients within the required 60 days. Document how you validate log integrity (hash verification, digital signatures) and how you grant temporary investigator access without breaching least privilege.

Summing up

I know this sounds like a lot. It is. But so is a successful test suite, and you already have one of those. If I were to write down all the things your team already does to make sure the software works reliably, it would sound like an unreasonable burden to place on any team. Begin your audit prep in the same way that you would begin to wrap an untested legacy codebase in a solid test harness: one step at a time.

This post is part of a series on digital health. For further reading, see “Handling HIPAA: A Developer’s First Steps.”
