Here’s a problem my software development team faced recently: proprietary content is served from a public AWS S3 bucket. Because the files’ URLs are publicly accessible, anyone with a link can access them indefinitely. That may not be an issue now, but as our app gains traction among users, it could become a real security concern.
This content is custom-made and central to running the business – it’s valuable and should be protected. Given the resources and services AWS offers, how can we protect it?
There were two approaches brought to the table, either:
- Use AWS Cognito to create a developer identity/role to access S3 content.
- Use AWS S3 to presign each URL given some global expiration period (e.g. 20 minutes).
Both approaches were intended to run on the backend before the content is served to the frontend, for example when a page loads an image or video stored in S3.
Ultimately, I decided that presigning the S3 URLs was the most straightforward choice. Although AWS Cognito can grant credentials that access S3, the AWS SDKs do not seem to offer a way to hand out an S3 object URL without presigning it.
In summary, it seemed to me that by going with approach #1, I would also need to do #2. So, presigning URLs in the backend before serving is essentially cutting out the “middle man” based on what the current SDKs give us.
Because every presigned URL carries an expiry period, the URL must be signed again once that period passes. That creates a potential problem.
If the content being loaded is a video and the expiry period is set low, to something like 30 seconds, what happens if that video doesn’t buffer completely before the URL expires? Videos are streamed, so it would simply stop loading, and the user would need to refresh the page to start a new session. Although this is an edge case, consider it when designing your backend architecture.
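One simple mitigation is to derive the expiry from the asset itself rather than using a single short global value. The sketch below assumes a hypothetical helper where the 20-minute floor and the 2x duration margin are arbitrary illustrative choices, not AWS recommendations.

```python
# Sketch: pick a presign expiry long enough for the asset being served.
# The 1200-second floor and the 2x playback margin are assumptions
# chosen for illustration.
from typing import Optional

def expiry_for(duration_seconds: Optional[int], floor: int = 1200) -> int:
    """Expiry in seconds: at least `floor`, or twice the media duration,
    whichever is larger. `None` covers images and other non-streamed files."""
    if duration_seconds is None:
        return floor
    return max(floor, duration_seconds * 2)
```

For example, a 15-minute video would get a 30-minute URL (`expiry_for(900)` returns 1800), while a short clip or an image still gets the 20-minute floor, so a mid-stream expiry becomes much less likely.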
Closing Thoughts on Presigning URLs