Vault HTTP API
Previously, we looked at the many possibilities of using HashiCorp Vault in conjunction with Kubernetes. However, before digging into these, it is important that we comprehend the broader context, which means understanding how to access data directly from Vault via its HTTP API.
We’ll look at how an application authenticates to Vault using various methods, such as token-based authentication or more advanced methods like AppRole or Kubernetes authentication, as well as the strengths and shortcomings of each method.
Token-Based Authentication
Tokens are the core method for authentication in Vault: nearly every request must be accompanied by a token (the default policy does, for instance, allow a token to look up data about itself). Vault clients authenticate with Vault using a defined auth method. Upon successful authentication, Vault generates a token, managed by the token backend, and returns it to the client.
The token auth method is built-in and serves as the core of client authentication. This method allows clients to easily authenticate with Vault.
Thus, the easiest approach for an application to authenticate with Vault is by using a static token. In such a case, an application that needs access to information stored in Vault would most likely access Vault via its REST API.
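For instance, reading a kv-v2 secret is just an authenticated GET against the /v1/&lt;mount&gt;/data/&lt;path&gt; endpoint. Here is a minimal sketch using only the standard library; the kv2_read_request helper is my own, not part of any SDK, and it assumes the VAULT_ADDR and VAULT_TOKEN environment variables as well as the unfriendlygrinch mount used throughout this post:

```python
import json
import os
import urllib.request


# kv-v2 secrets are read via /v1/<mount>/data/<path> --
# note the /data/ segment that kv-v2 inserts into the API path
def kv2_read_request(vault_addr, token, mount, path):
    url = f"{vault_addr}/v1/{mount}/data/{path}"
    return urllib.request.Request(url, headers={"X-Vault-Token": token})


# Only performed when the environment variables are actually set
if "VAULT_ADDR" in os.environ and "VAULT_TOKEN" in os.environ:
    req = kv2_read_request(os.environ["VAULT_ADDR"], os.environ["VAULT_TOKEN"],
                           "unfriendlygrinch", "app/config")
    with urllib.request.urlopen(req) as resp:
        # kv-v2 nests the key/value pairs under data.data
        print(json.load(resp)["data"]["data"]["password"])
```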
Let’s take a quick look at how an nginx pod would authenticate to Vault.
Firstly, we need to create a token associated with a policy which allows read privileges on the unfriendlygrinch kv-v2 secrets engine.
$ vault policy read reader
path "unfriendlygrinch/data/app/config" {
capabilities = ["read"]
}
$ vault token create -policy=reader
Key Value
--- -----
token hvs.CAESIP7hKy46gIixWPgVz3t1p3n-xVb88FDMZEU5H__C2mezGh4KHGh2cy5VZFM4QVBSMldhTUx4N0VYMzlWQVFKM0s
token_accessor jbYnMp2AWcNYjSMGC9WNW5bV
token_duration 768h
token_renewable true
token_policies ["default" "reader"]
identity_policies []
policies ["default" "reader"]
However, the nginx pod is not capable of querying the Vault HTTP API on its own. This is why we are going to build a custom container image which contains a Python script. This will be the actor responsible for retrieving the secret from Vault and passing it to the main container, namely nginx, using a shared volume. To achieve this, we need to pass in the URL of the Vault server, as well as the token to use for authentication.
Let’s quickly go over the Python script:
#!/usr/bin/python
import os
import sys

try:
    import hvac
except ImportError:
    print('Please install hvac to use this script.')
    sys.exit(1)


class InvalidTokenError(Exception):
    pass


def get_vault_client() -> hvac.Client:
    """Get HashiCorp Vault Client."""
    try:
        vault_addr = os.environ['VAULT_ADDR']
    except KeyError:
        print('The VAULT_ADDR environment variable must be set.')
        sys.exit(1)
    try:
        vault_token = os.environ['VAULT_TOKEN']
    except KeyError:
        print('The VAULT_TOKEN environment variable must be set.')
        sys.exit(1)
    vault_client = hvac.Client(url=vault_addr, token=vault_token, verify=False)
    if vault_client.is_authenticated():
        return vault_client
    raise InvalidTokenError('Invalid HashiCorp Vault token')


def read_secret(vault_client: hvac.Client):
    """Read the data written under the provided path."""
    read_response = vault_client.secrets.kv.read_secret_version(
        mount_point=os.getenv('MOUNT_POINT', "unfriendlygrinch"),
        path=os.getenv('SECRET_PATH', "app/config"))
    return read_response['data']['data']['password']


def main():
    with open("{}/index.html".format(os.getenv('SHARED_VOLUME')), mode="wt") as f:
        f.write(read_secret(get_vault_client()))


if __name__ == "__main__":
    main()
This is a rather straightforward approach in which we use the hvac Python library to interact with the Vault HTTP API.
Let’s now build our custom container image using podman:
FROM python:slim
LABEL maintainer="Elif Samedin <elif@unfriendlygrinch.info>"
LABEL company="Unfriendly Grinch"
LABEL description="Vault HTTP API"
RUN pip install hvac=='1.0.0'
ADD read_secret.py .
CMD ["./read_secret.py"]
ENTRYPOINT ["python"]
$ podman build -t quay.io/elifsamedin/nginx-static-content . --no-cache
STEP 1/8: FROM python:slim
STEP 2/8: LABEL maintainer="Elif Samedin <elif@unfriendlygrinch.info>"
--> 5278b655173d
STEP 3/8: LABEL company="Unfriendly Grinch"
--> 039e9cedef26
STEP 4/8: LABEL description="Vault HTTP API"
--> 1ae1666b5752
STEP 5/8: RUN pip install hvac=='1.0.0'
Collecting hvac==1.0.0
Downloading hvac-1.0.0-py3-none-any.whl (143 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 143.5/143.5 kB 1.9 MB/s eta 0:00:00
Collecting pyhcl<0.5.0,>=0.4.4
Downloading pyhcl-0.4.4.tar.gz (61 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 61.1/61.1 kB 11.1 MB/s eta 0:00:00
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Preparing metadata (pyproject.toml): started
Preparing metadata (pyproject.toml): finished with status 'done'
Collecting requests==2.27.1
Downloading requests-2.27.1-py2.py3-none-any.whl (63 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 63.1/63.1 kB 11.1 MB/s eta 0:00:00
Collecting urllib3<1.27,>=1.21.1
Downloading urllib3-1.26.16-py2.py3-none-any.whl (143 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 143.1/143.1 kB 9.4 MB/s eta 0:00:00
Collecting certifi>=2017.4.17
Downloading certifi-2023.7.22-py3-none-any.whl (158 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 158.3/158.3 kB 18.5 MB/s eta 0:00:00
Collecting charset-normalizer~=2.0.0
Downloading charset_normalizer-2.0.12-py3-none-any.whl (39 kB)
Collecting idna<4,>=2.5
Downloading idna-3.4-py3-none-any.whl (61 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 61.5/61.5 kB 12.9 MB/s eta 0:00:00
Building wheels for collected packages: pyhcl
Building wheel for pyhcl (pyproject.toml): started
Building wheel for pyhcl (pyproject.toml): finished with status 'done'
Created wheel for pyhcl: filename=pyhcl-0.4.4-py3-none-any.whl size=50128 sha256=e7b743ec086d0428f1032b34b97f08b0243cab0829e67620e17d9e29ebe30d0c
Stored in directory: /root/.cache/pip/wheels/48/b1/7a/4f7e20dedcb202afa9006ad492bf20e446409da3f379f4952e
Successfully built pyhcl
Installing collected packages: pyhcl, urllib3, idna, charset-normalizer, certifi, requests, hvac
Successfully installed certifi-2023.7.22 charset-normalizer-2.0.12 hvac-1.0.0 idna-3.4 pyhcl-0.4.4 requests-2.27.1 urllib3-1.26.16
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
[notice] A new release of pip available: 22.3.1 -> 23.2.1
[notice] To update, run: pip install --upgrade pip
--> fffdca509177
STEP 6/8: ADD read_secret.py .
--> bc55fadc2e3d
STEP 7/8: CMD ["./read_secret.py"]
--> 5b8ad89427e1
STEP 8/8: ENTRYPOINT ["python"]
COMMIT quay.io/elifsamedin/nginx-static-content
--> 666c0737d344
Successfully tagged quay.io/elifsamedin/nginx-static-content:latest
666c0737d34462143e348ce278c58631161ff4846e3c54034eff418462c9acbd
And, for simplicity, we are going to push the newly created image to quay.io:
$ podman push quay.io/elifsamedin/nginx-static-content
Getting image source signatures
Copying blob 3d8bf696b1fa done
Copying blob aa32521023a9 skipped: already exists
Copying blob 4ef39abb1f06 skipped: already exists
Copying blob 64da778fad94 skipped: already exists
Copying blob 1c288dd2fdfd skipped: already exists
Copying blob 411b5449c991 skipped: already exists
Copying blob 5ae9e0cf1749 skipped: already exists
Copying config 666c0737d3 done
Writing manifest to image destination
Storing signatures
The next step is to create a pod composed of an initContainer based on our image and a container based on the nginx image. These share a volume of type emptyDir so that the nginx container can access the data the initContainer fetches from Vault.
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx
  name: nginx
  namespace: unfriendlygrinch
spec:
  initContainers:
    - name: nginx-static-content
      image: quay.io/elifsamedin/nginx-static-content
      env:
        - name: VAULT_ADDR
          value: "http://<MY_VAULT_ADDR>"
        - name: VAULT_TOKEN
          value: "hvs.CAESIP7hKy46gIixWPgVz3t1p3n-xVb88FDMZEU5H__C2mezGh4KHGh2cy5VZFM4QVBSMldhTUx4N0VYMzlWQVFKM0s"
        - name: SHARED_VOLUME
          value: "/data"
      volumeMounts:
        - name: shared
          mountPath: /data
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: shared
          mountPath: /usr/share/nginx/html
  dnsPolicy: ClusterFirst
  volumes:
    - name: shared
      emptyDir: {}
And checking that it actually works:
$ k -n unfriendlygrinch exec -it nginx -- bash
Defaulted container "nginx" out of: nginx, nginx-static-content (init)
root@nginx:/# curl localhost && echo
dretda1rus
This approach is, in my view, quite convenient for getting data out of Vault. However, there are some concerns with it:
- The token we have created has a TTL of 768h. What happens when it expires? Unless the token is renewed before its expiration, our app no longer has access to Vault, which results in service disruption. Renewing the token in this case is a manual task.
- The token used by the app is actually a service token, which means it has a parent token. What happens if the parent token gets revoked or reaches its TTL? When a parent token expires, Vault revokes all of its child tokens, regardless of their own TTLs. We could create an orphan token for our app, because such tokens do not expire when their parent does; on the other hand, orphan tokens still expire when their own max TTL is reached. Or we could use a periodic service token, which has a TTL but no max TTL, meaning such a token may live for an infinite duration of time as long as it is renewed within its TTL. Either option solves the underlying problem only partially.
- The token is passed to the app as an environment variable, which poses a security issue: if an attacker gains access to it, they gain access to the data stored in Vault, to the extent that the policy associated with the token allows. If the policy is not that restrictive, a great amount of data might end up being compromised.
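The renewal concern can at least be automated from the application side. Below is a minimal sketch using the hvac client from earlier; renew_if_needed and its threshold are my own hypothetical helper, not part of hvac, and the example assumes a renewable token such as the one created above:

```python
import os

try:
    import hvac  # the same client library used in the script above
except ImportError:
    hvac = None


def renew_if_needed(client, threshold_seconds=3600):
    """Renew the client's own token when its remaining TTL drops below a threshold.

    `client` is an authenticated hvac.Client (or anything exposing the same
    auth.token.lookup_self / renew_self calls). Returns True if a renewal
    was performed.
    """
    data = client.auth.token.lookup_self()["data"]
    if data.get("renewable") and data["ttl"] < threshold_seconds:
        client.auth.token.renew_self()
        return True
    return False


# Only performed against a real Vault when the environment is configured
if hvac is not None and "VAULT_ADDR" in os.environ and "VAULT_TOKEN" in os.environ:
    renew_if_needed(hvac.Client(url=os.environ["VAULT_ADDR"],
                                token=os.environ["VAULT_TOKEN"]))
```

In a real deployment this check would run periodically (for example, from a sidecar or a scheduled task), which is exactly the kind of plumbing the token auth method leaves to the application.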
AppRole Auth Method
The AppRole auth method in Vault is designed for application-to-application communication. It allows applications or services to authenticate with Vault using a combination of a Role ID and a Secret ID. This approach avoids the need for human intervention in the authentication process, making it suitable for automated processes.
The AppRole auth method works as follows:
- Create an AppRole role: This role is associated with policies that specify the permissions the application will have when accessing secrets.
- Get the Role ID: The Role ID is an identifier the application uses to authenticate itself to Vault. It is typically known to the application and determines the role the application is to assume.
- Get a Secret ID: Similarly to the Role ID, this is an identifier that the application uses to authenticate itself to Vault. However, the Secret ID changes periodically.
- Application authentication: The application sends the Role ID and Secret ID to Vault during the authentication process. Vault evaluates them and issues a temporary token to the application.
- Access secrets: Using the temporary token, the application can access secrets and perform the operations permitted by the policies associated with the role.
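Under the hood, this authentication exchange is a single unauthenticated POST to the auth/approle/login endpoint. Here is a minimal sketch with the standard library; approle_login_request is a hypothetical helper of mine, and the ROLE_ID/SECRET_ID environment variable names are assumptions for the sake of the example:

```python
import json
import os
import urllib.request


def approle_login_request(vault_addr, role_id, secret_id):
    """Build the POST to auth/approle/login that exchanges a
    Role ID / Secret ID pair for a client token."""
    payload = json.dumps({"role_id": role_id, "secret_id": secret_id}).encode()
    return urllib.request.Request(
        f"{vault_addr}/v1/auth/approle/login",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST")


# Only performed when the environment variables are actually set
if {"VAULT_ADDR", "ROLE_ID", "SECRET_ID"} <= os.environ.keys():
    req = approle_login_request(os.environ["VAULT_ADDR"],
                                os.environ["ROLE_ID"], os.environ["SECRET_ID"])
    with urllib.request.urlopen(req) as resp:
        # the temporary token is returned under auth.client_token
        print(json.load(resp)["auth"]["client_token"])
```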
Let’s now configure the AppRole auth method in HashiCorp Vault:
- Enable the AppRole auth method.
$ vault auth enable approle
Success! Enabled approle auth method at: approle/
- Create an AppRole: Define a role with the reader policy attached.
$ vault policy read reader
path "unfriendlygrinch/data/app/config" {
capabilities = ["read"]
}
$ vault write auth/approle/role/app policies=reader
Success! Data written to: auth/approle/role/app
In the example above, app is the name of the AppRole, and reader is the policy associated with it.
- Get the RoleID of the AppRole:
$ vault read auth/approle/role/app/role-id
Key Value
--- -----
role_id d17b4482-f590-2c02-2643-a4fbd079ab23
- Get a SecretID for the AppRole:
$ vault write -f auth/approle/role/app/secret-id
Key Value
--- -----
secret_id 2fa2aeb7-de94-eb01-b34d-7c67eee6aef2
secret_id_accessor b9d7cec1-2abd-c709-090e-0111ea0f0da7
secret_id_num_uses 0
secret_id_ttl 0s
- Application Authentication: We are going to use the Role ID and Secret ID to authenticate the application with Vault.
$ vault write auth/approle/login role_id=d17b4482-f590-2c02-2643-a4fbd079ab23 secret_id=2fa2aeb7-de94-eb01-b34d-7c67eee6aef2
Key Value
--- -----
token hvs.CAESIBn9ToLV2NEiV_xRojO0fz8UlJU7tW4dRhcGaXaESqVtGh4KHGh2cy5aWGdWR2VVckNkdFdiT1hnNlIzUDlpWXg
token_accessor DcQiWLywEeZycefdjcqLMxUS
token_duration 768h
token_renewable true
token_policies ["default" "reader"]
identity_policies []
policies ["default" "reader"]
token_meta_role_name app
Vault returns a temporary token that can be used by the application to access secrets.
- Fetch secrets: Let’s use the temporary token to perform API queries to Vault and access the secrets based on the AppRole role’s policies.
$ VAULT_TOKEN="hvs.CAESIBn9ToLV2NEiV_xRojO0fz8UlJU7tW4dRhcGaXaESqVtGh4KHGh2cy5aWGdWR2VVckNkdFdiT1hnNlIzUDlpWXg" vault kv get unfriendlygrinch/app/config
========== Secret Path ==========
unfriendlygrinch/data/app/config
======= Metadata =======
Key Value
--- -----
created_time 2023-07-31T14:57:00.611356951Z
custom_metadata <nil>
deletion_time n/a
destroyed false
version 1
====== Data ======
Key Value
--- -----
password dretda1rus
By using AppRole, we avoid hard-coding sensitive credentials like tokens directly into the application’s codebase. On the other hand, the application itself must be able to handle the authentication process, obtain and renew tokens, and handle token rotation.
And what if the effort to change the application codebase is too high, or if we do not own the codebase of the application in question (as an example, an OSS/COTS application)?
In this case, an alternative approach would be to use a Kubernetes pattern that makes use of an initContainer. By adding an initContainer, we are able to run a specific process before the main application process begins. The purpose is to ensure that the application has access to the necessary secret data before it starts its execution. To accomplish this, our initContainer will be based on the official Vault image and run a Vault Agent to retrieve the necessary secret from Vault.
As an example, we will create a configMap that includes an HCL file. This, in turn, contains instructions on how to connect to Vault, which role to use for authentication, and how to handle the retrieved secrets. In this particular case, we will store the secrets in a file for consumption by the main application.
apiVersion: v1
kind: ConfigMap
metadata:
  name: vault-config
  namespace: unfriendlygrinch
data:
  config.hcl: |
    "auto_auth" = {
      "method" = {
        "config" = {
          role_id_file_path = "/etc/vault/approle/roleid"
          secret_id_file_path = "/etc/vault/approle/secretid"
        }
        "type" = "approle"
      }
      "sink" = {
        "config" = {
          "path" = "/home/vault/.token"
        }
        "type" = "file"
      }
    }
    "exit_after_auth" = true
    "pid_file" = "/home/vault/.pid"
    "template" = {
      "contents" = "{{- with secret \"unfriendlygrinch/app/config\" }}Password: {{ .Data.data.password }}{{- end }}"
      "destination" = "/data/index.html"
    }
The Vault Agent expects paths to the files containing the Role ID and the Secret ID, respectively. A straightforward approach is to create an Opaque secret containing them and mount it as files in the pod.
apiVersion: v1
kind: Secret
metadata:
  name: vault-approle
  namespace: unfriendlygrinch
data:
  roleid: ZDE3YjQ0ODItZjU5MC0yYzAyLTI2NDMtYTRmYmQwNzlhYjIzCg==
  secretid: MmZhMmFlYjctZGU5NC1lYjAxLWIzNGQtN2M2N2VlZTZhZWYyCg==
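For reference, the base64 data values above are simply the Role ID and Secret ID obtained earlier, encoded the way kubectl expects; note the trailing newline, which appears when the plain values are piped through echo and base64. A quick sketch of the encoding:

```python
import base64

# The Role ID and Secret ID obtained in the AppRole setup steps above
role_id = "d17b4482-f590-2c02-2643-a4fbd079ab23"
secret_id = "2fa2aeb7-de94-eb01-b34d-7c67eee6aef2"

# echo appends a trailing newline before the value is base64-encoded
roleid_b64 = base64.b64encode((role_id + "\n").encode()).decode()
secretid_b64 = base64.b64encode((secret_id + "\n").encode()).decode()

print(roleid_b64)    # the roleid value in the Secret above
print(secretid_b64)  # the secretid value in the Secret above
```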
And the Pod:
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx
  name: nginx
  namespace: unfriendlygrinch
spec:
  initContainers:
    - name: vault
      image: hashicorp/vault
      args:
        - agent
        - -config=/etc/vault/config.hcl
        - -log-level=debug
      env:
        - name: VAULT_ADDR
          value: "http://176.118.186.135:30000"
      volumeMounts:
        - name: vault-config
          mountPath: /etc/vault
        - name: vault-approle
          mountPath: /etc/vault/approle
        - name: shared
          mountPath: /data
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: shared
          mountPath: /usr/share/nginx/html
  dnsPolicy: ClusterFirst
  volumes:
    - name: vault-config
      configMap:
        name: vault-config
        items:
          - key: config.hcl
            path: config.hcl
    - name: vault-approle
      secret:
        secretName: vault-approle
    - name: shared
      emptyDir: {}
The main container, nginx, serves content from the shared volume.
By using this Kubernetes pattern, the Init Container, we can ensure that the application’s secret requirements are satisfied before it starts, providing a more secure and efficient runtime environment.
Conclusions
In both scenarios, our application is “Vault-aware”. This means that it is designed to connect directly to Vault in order to get the necessary secrets. This can include, among others, token-based authentication or AppRole authentication.
- Token-based authentication: This means that the application is designed to send a pre-configured token to Vault in order to authenticate itself. The token uniquely identifies the application, and based on the policies associated with it, Vault determines which paths the application has access to.
- AppRole authentication: In contrast to direct token assignment, with the AppRole auth method the application is provided with a RoleID and a SecretID, delivered through different channels; both are used to authenticate to Vault. This approach is more secure than the previous one because two values are required for successful authentication. Furthermore, the SecretID can be rotated without impacting the RoleID.
It’s worth noting that Vault-aware applications must manage potential error scenarios, such as failed authentication, secret rotation, and so forth. Making an application Vault-aware entails not only retrieving secrets, but also managing the full secrets lifecycle.
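As a sketch of what such error handling might look like, here is a small retry-with-backoff wrapper around an arbitrary secret read; fetch_with_retry and read_fn are hypothetical names of mine, not part of Vault or hvac:

```python
import time


def fetch_with_retry(read_fn, attempts=3, base_delay=1.0):
    """Call read_fn() (e.g. a Vault secret read), retrying on failure
    with exponential backoff. On the last failed attempt, the original
    exception is re-raised for the caller to handle."""
    for attempt in range(attempts):
        try:
            return read_fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            # back off: base_delay, 2*base_delay, 4*base_delay, ...
            time.sleep(base_delay * (2 ** attempt))
```

A Vault-aware application would wrap its secret reads in a helper like this so that a transient Vault outage or an expired lease does not immediately crash the service.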
Additionally, when adopting a Kubernetes pattern, such as the Init Container, a lot of configuration is required, which might add a significant amount of overhead.
Without an adequate budget to cover these overhead costs, we might not be able to manage Vault-aware applications. Let’s further dive into alternatives, such as integrations…