Secrets, Security and Environment Configuration

True story.

Once upon a time there was a web application stack, built atop a particular CI (continuous integration) pattern with particular security constraints, which I had the pleasure of configuring. The application needed certain secrets (third-party API keys, passwords, trade secrets, etc.) in order to run properly, but that data needed to be very well guarded (and definitely not shipped with the codebase). The following is how I tackled the assignment. I think it works well, but I could be mistaken. Please feel free to comment with any questions, suggestions, or other feedback.

First, I decided to put all configuration information in environment-based YAML files, except for the sensitive info. The sensitive data I decided to store in shell environment variables, in memory, referencing them in the codebase where needed. This particular application uses Puppet for CM (configuration management); the provisioning and continuous deployment process is similar to the following...

A fresh server is booted up from an image which executes a bootstrapping script; the script installs a few essentials and prompts for some user input. One of the expected inputs is a password, which is used to access the contents of a password-protected 7zip archive downloaded from a private file hosting server. The 7zip archive contains a GPG (GNU Privacy Guard) public and private keyring, which is used to decrypt the environment config file downloaded from a private GitHub repository that hosts only the various app configurations. The bash-based environment configuration file is then sourced in-place (e.g. eval "$(gpg -d /tmp/envfile)"), executing the commands within the file and exposing certain secrets as environment variables. Certain bits are also copied to disk in case of failure and reboot (each location is secured).
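As a rough sketch, the bootstrap step might look something like the following. Every host, path, and filename here is a placeholder of my own invention, not a value from the actual system:

```shell
# Hypothetical bootstrap sketch of the workflow described above.
# All URLs, paths, and filenames are placeholders.
provision() {
    # Prompt for the archive password rather than storing it anywhere.
    read -r -s -p "Archive password: " archive_pass; echo

    # Fetch the password-protected 7zip archive from the private file host.
    curl -fsS -o /tmp/keys.7z https://files.example.com/private/keys.7z

    # Extract the GPG public and private keyrings using the prompted password.
    7z x -p"$archive_pass" -o/tmp/keyring /tmp/keys.7z

    # Fetch the GPG-encrypted environment file from the private config repo.
    git clone git@github.com:example/app-config.git /tmp/app-config

    # Decrypt and source it in place, exporting secrets into this shell.
    eval "$(gpg --homedir /tmp/keyring -d /tmp/app-config/envfile.gpg)"
}
```

The eval "$(gpg -d ...)" idiom executes the decrypted export statements directly, so the plaintext environment file never needs to be written to disk.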

With the secrets exposed, and a process and workflow which seemed reasonably secure, I just needed a better way of getting at the complex environment variable names which held the sensitive data. So I decided to write Config::Environment, a module which gives you access to app-specific environment variables (with some additional goodies as well).
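To illustrate the problem being solved, the exported names tend to be long and app-namespaced; something like the following (the variable names are invented for illustration):

```shell
# Invented example of app-namespaced secret variables; a helper module's
# job is to give the application a tidier handle on names like these.
export MYAPP_DB_USERNAME="admin"
export MYAPP_DB_PASSWORD="s3cr3t"
export MYAPP_STRIPE_API_KEY="sk_test_xxx"

# Without a helper, code ends up referencing the raw names directly:
echo "connecting as $MYAPP_DB_USERNAME"
# prints: connecting as admin
```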



One thing to be aware of: on various Unix variants, ps -E (e.g. macOS) or ps e (e.g. Linux procps) will show the environment variables of the listed processes.

Debian's/Ubuntu's ps also has an "e" option to view environment variables, but it does not allow a normal user to peek at another user's environment. I'd guess that this is the norm on the other Unices.
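On Linux the same information is exposed through procfs, which is easy to demonstrate against a process of one's own (a minimal sketch; entries in the environ file are NUL-separated):

```shell
# A process's environment is readable by its owner (and root) via
# /proc/<pid>/environ, which matches ps's permission behaviour: you can
# see your own, but not another user's.
SECRET=hunter2 sh -c 'tr "\0" "\n" < "/proc/$$/environ" | grep "^SECRET="'
# prints: SECRET=hunter2
```

This is why passing secrets through the environment is only as private as the accounts on the box: anyone who can run code as that user can read them.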

Specifying passwords on the command-line arguments, on the other hand...

My thoughts? Rube Goldberg security.

As far as can be gleaned from your post, the only relevant security in your system comes from the password protecting the 7zip file, and from prompting the user for that password rather than storing it anywhere. All other steps in your workflow are completely redundant.

You could put the ultimate credentials in a plaintext file inside the 7zip archive, make 7zip decompress it to stdout (instead of a file) with your app capturing its output, and ship the 7zip file alongside your code, and the whole thing would be exactly as secure as before, only much simpler, so that you’re less likely to botch the security somewhere along the way. (E.g. why are you including a GPG private key in the 7zip archive…?)
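A sketch of that simplification, assuming 7zip's -so switch (extract to stdout); the filenames and the app's flag are placeholders, not anything from the original setup:

```shell
# Hypothetical: ship secrets.7z alongside the code, prompt for its
# password, and pipe the plaintext credentials straight into the app
# so they never touch disk.
run_with_creds() {
    read -r -s -p "Archive password: " archive_pass; echo
    # `7z e -so` extracts the named archive member to stdout instead
    # of writing it to a file.
    7z e -so -p"$archive_pass" secrets.7z creds.txt | ./app --creds-from-stdin
}
```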

The build system could be compromised at various points, so there are obstacles at various points in an attempt to make the system more secure.

What is your threat model? Do you expect an attacker who can gain access to some step inside your build system, but not the following ones? In that case I can see the point of downloading credentials from an external source. The remainder still seems redundant either way.

Because once the 7zip file is decrypted, the rest of the credentials all tumble out. Each further step already has all the credentials it needs to proceed by itself. You’ve put the credentials inside a Russian doll system where there’s a padlock only on the outermost one. (Or more accurately, the key to each inner doll’s padlock is right next to its padlock.) Sure you can’t get to the good stuff without opening all the Russian dolls, but once the (first) padlock is off, that’s easy. The Russian dolls do nothing but make it look more complicated.

If you assume an attacker who is able only to compromise one step of this process, you should simply delay obtaining the credentials – pass the password along the chain, up to the point where you need the credentials, and only use it to obtain them at that time. You leave out all the dolls and padlocks except the innermost one. The attacker will see the final key, but will have no more use for it than they did before, because they cannot find the corresponding padlock. (If they can, then they also could before, and it makes no difference.) The security of this scheme is exactly equivalent to that of your current contraption – only, again, it is much simpler.

Upon discovery of a compromise,

(Side note from the rest of my comment: that right there is a tall assumption. Do you have any mechanism that would tip you off about a compromise, other than stuff popping up on your external accounts that you didn’t put there yourself?)

I could change the location of the external files, blacklist the server from the puppetmaster and/or update the 7zip archive. If the environment file is compromised at its location, well, … it’s GPG encrypted. If the 7zip archive is compromised at its location, well, … it’s AES encrypted.

These are all well and good, but how do they improve the situation above and beyond “upon discovery of a compromise I must change all of my credentials on the external services”, which, by all I can tell, is unavoidable anyway?

Here’s a question: why do you not simply download the credentials file from a password-protected URL, and prompt the user for the password? (HTTPS would serve as your encryption.) Upon discovery of a compromise, you change your credentials on the external services, then change the password for the download URL of the credentials file, and maybe ship a new version of the code so you can change the URL it downloads from. (You could also require an SSL client certificate, which you can revoke upon discovery of a compromise. This must be shipped alongside the code so it’s roughly the same security as changing the URL.)
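A sketch of that alternative; the host, username, and certificate paths are placeholders:

```shell
# Fetch the credentials file over HTTPS, authenticating with a prompted
# password (HTTP Basic auth) and, optionally, a client certificate that
# ships alongside the code and can be revoked server-side after a
# compromise.
fetch_creds() {
    read -r -s -p "Download password: " dl_pass; echo
    curl -fsS \
        -u "deploy:$dl_pass" \
        --cert client.pem --key client.key \
        https://secrets.example.com/app/creds
}
```

fetch_creds writes the plaintext credentials to stdout, so the caller can capture them (e.g. into environment variables) without persisting them to disk.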

Presuming that you have measures in place to prevent the attacker from immediately re-compromising the system the same way they did before, you have locked them back out. No need for all the rest of it.


About Al Newkirk

... proud Perl hacker, ask me anything!