Grant Access to AWS resources (S3, etc.) from ReleaseHub
This article describes how to allow self-hosted clusters to access AWS resources such as S3 by using IAM policies and roles.

Introduction

There are several ways to access AWS resources from your ReleaseHub self-hosted cluster. The main approaches are described below.

Overview

Using Static Access Key Credentials

The simplest way to access resources is to generate IAM user credentials and add them to environment variables in your application. Ensure that you use secret: true to hide the values! These credentials work as long as they remain valid, and they work for whichever AWS account the user was issued in. The user must have a restrictive policy that grants least-privilege access to only the required resources; typically this means not using a human's account, which tends to have elevated, broad privileges across many resources. Adding the credentials to the environment variables is straightforward, and the values can be encrypted at rest. The other benefit is that static credentials in the environment typically require no code changes to be supported. However, they provide very poor security because they can be easily compromised in the container simply by inspecting the environment.
Here is an example of how you could add static keys to an application:
---
services:
  myapp:
    - key: AWS_ACCESS_KEY_ID
      value: AKIASOMETHING
      secret: false # Or true!
    - key: AWS_SECRET_ACCESS_KEY
      value: abcdefghijklmnopqrstuvwxyz
      secret: true
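Once these variables are present in the container, the AWS SDKs read them from the environment automatically, so no extra wiring is usually needed. Here is a minimal sketch, assuming Python with boto3 and a placeholder bucket name:

import boto3

# The SDK resolves AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from the
# environment automatically; no explicit credential configuration is needed.
s3 = boto3.client("s3")
response = s3.list_objects_v2(Bucket="mybucket")  # "mybucket" is a placeholder
for obj in response.get("Contents", []):
    print(obj["Key"])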
TL;DR: This method is generally not advisable but it will work. We discourage customers from using static keys due to the security risks.

AWS Metadata

The AWS metadata service runs on each Kubernetes node and allows a pod or application to obtain temporary credentials that are refreshed automatically when read from the metadata. These temporary credentials can be used to access resources that have a trust relationship with the account the nodes run in. No credentials are stored in the environment, and in general no changes to code or SDK calls are required to pick up the credentials. The downside of this trust relationship is especially apparent in cross-account or third-party access, because the trust usually traces back to unknown applications in another account. However, the policy applied to the trust relationship can be tailored to the exact permissions required by the application, so it is typically a better security posture than using static keys. Given the low effort to implement metadata-based identity, this is a good middle ground.
Here is an example of a document used in creating a policy; please read the AWS documentation for more examples. You can find the account number required for this policy on the View Clusters page alongside your integrations. If you have any questions, you can contact us for the value of AWS-account-ID (the account ID where your cluster resides).
{
  "Version": "2012-10-17",
  "Statement": {
    "Sid": "S3BucketAccess",
    "Effect": "Allow",
    "Principal": {
      "AWS": "arn:aws:iam::${AWS-account-ID}:root"
    },
    "Action": "s3:*",
    "Resource": [
      "arn:aws:s3:::mybucket",
      "arn:aws:s3:::mybucket/*"
    ]
  }
}
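With a policy like this in place and the application running on the cluster, the SDK's default credential chain falls back to the node's metadata credentials without any keys being configured. A minimal sketch, assuming Python with boto3 and the bucket named above:

import boto3

# No keys are configured in the environment: credentials are resolved from
# the node's instance metadata (the Node Instance Role) and refreshed
# automatically by the SDK's credential provider chain.
s3 = boto3.client("s3")
s3.put_object(Bucket="mybucket", Key="healthcheck.txt", Body=b"ok")
print(boto3.client("sts").get_caller_identity()["Arn"])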
TL;DR: This method works well enough within a single account, where trust can be relaxed to the source account where the cluster lives. It is not well suited for regulated, sensitive, cross-account, or third-party access. At a minimum, we recommend most customers use this method instead of static keys.

Using Assumed Roles

An assumed role allows your application to request credentials for a role that has a predefined policy and a trust relationship with the application. If access is granted, a session token is issued that is valid for a short time (usually one hour, but configurable from 15 minutes up to 12 hours). In almost all cases, this requires you to write code to request the credentials, hold them in your application or in memory, and then use them. You will also need to create a role, a trust policy, and a policy document in your account or another account. No credentials are added to the environment, credentials are not checked into version control, and credentials eventually expire so they cannot be compromised later. The role policy is not used by humans and can be tailored to the exact minimum access needed by the application, which is especially valuable in cross-account or third-party access. A trust relationship can be granted at a very specific level of detail, making remote and third-party requests straightforward. However, the code changes and the coordination with cross-account or third-party account owners can be a significant effort. Despite the effort required, assumed roles are the preferred and most secure way to gain access to resources.
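As a brief sketch of what that code typically looks like (assuming Python with boto3; the role ARN and session name are placeholders for illustration):

import boto3

sts = boto3.client("sts")

# Request short-lived credentials for the target role.
response = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/release/my-elevated-role",
    RoleSessionName="myapp-session",
    DurationSeconds=3600,  # 15 minutes up to the role's configured maximum
)
credentials = response["Credentials"]  # AccessKeyId, SecretAccessKey, SessionToken, Expiration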

Pod-default Node Instance Role (source role)

The "Node Instance Role" is the default role that is assigned to a pod during pod execution in AWS using EKS. This role can be examined in a running container by using the aws sts get-caller-identity call on a container with the AWS CLI installed. You can see an example here:
# aws sts get-caller-identity
{
  "UserId": "AROxxxx:i-00xxxx",
  "Account": "123456789",
  "Arn": "arn:aws:sts::123456789:assumed-role/eksctl-test-release-prod-us-e-NodeInstanceRole-xxxx/i-00xxx"
}
This is the default role for every pod running in the entire cluster, so it is restricted to a very common set of tasks such as reading from and writing to S3 buckets. The reason that more permissions are not granted is so that any pod deployed to the account cannot obtain more privileges than it needs. One of the few things the default Node Instance Role can do is assume another target role in the account or organization that sits underneath the /role/release/ path namespace. As an example, you can confirm the permissions created for the Node Instance Role in the AWS Console:
{
  "Sid": "SessionTokenServiceAssumeRole",
  "Effect": "Allow",
  "Action": [
    "sts:AssumeRole"
  ],
  "Resource": "arn:aws:iam::123456789:role/release/*"
},
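If you prefer to check this outside the console, a small sketch like the following (assuming Python with boto3, and substituting your own Node Instance Role name) prints the role's inline policies:

import boto3

iam = boto3.client("iam")

# Substitute the Node Instance Role name from the get-caller-identity output above.
role_name = "eksctl-test-release-prod-us-e-NodeInstanceRole-xxxx"
for policy_name in iam.list_role_policies(RoleName=role_name)["PolicyNames"]:
    policy = iam.get_role_policy(RoleName=role_name, PolicyName=policy_name)
    print(policy_name, policy["PolicyDocument"])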

Create an Elevated Permissions Role (target role)

To perform actions beyond what the default Node Instance Role allows, you must assume a role that has the higher (or elevated) permissions needed for those functions. As an example, let's say a pod needs to access a Kinesis stream or a DynamoDB table that is not normally allowed. First, create a role under the /role/release/ path with a trust relationship from the source role.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ElevatedRoleTrustPolicy",
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Principal": {
        "AWS": "arn:aws:iam::111111111111:root"
      },
      "Condition": {
        "ArnLike": {
          "aws:PrincipalArn": [
            "arn:aws:iam::111111111111:role/role-name",
            "arn:aws:sts::111111111111:assumed-role/role-name/*"
          ]
        }
      }
    }
  ]
}
Note that wildcards are not allowed in the Principal element for assumed-role session ARNs, which is why the account root is used as the principal and the ArnLike condition restricts which source role (role-name above, i.e. your Node Instance Role) may assume this role.
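If you create the role with an SDK or the CLI rather than the console, the path must be set explicitly. A minimal sketch, assuming Python with boto3, a hypothetical role name, and the trust policy above saved to trust-policy.json:

import boto3

iam = boto3.client("iam")

# "release-elevated-access" is a hypothetical name; the /release/ path is what
# the Node Instance Role is permitted to assume.
with open("trust-policy.json") as f:
    iam.create_role(
        RoleName="release-elevated-access",
        Path="/release/",
        AssumeRolePolicyDocument=f.read(),
    )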
The target role will need access to Kinesis and possibly DynamoDB, so when you create it, attach a permissions policy like the following:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:PutItem*",
        "dynamodb:GetItem*"
      ],
      "Resource": "arn:aws:dynamodb:us-west-2:123456789012:table/Music"
    },
    {
      "Effect": "Allow",
      "Action": [
        "kinesis:PutRecord",
        "kinesis:PutRecords"
      ],
      "Resource": "arn:aws:kinesis:us-west-2:123456789012:stream/Orders"
    }
  ]
}
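To finish wiring this up, the permissions policy is attached to the target role, and the application then uses the temporary credentials from the assume-role call. A minimal sketch, assuming Python with boto3, the hypothetical role name from above, and the policy saved to permissions-policy.json:

import boto3

iam = boto3.client("iam")

# Attach the permissions document above as an inline policy on the target role.
with open("permissions-policy.json") as f:
    iam.put_role_policy(
        RoleName="release-elevated-access",
        PolicyName="kinesis-dynamodb-access",
        PolicyDocument=f.read(),
    )

# From the application, assume the role and use the temporary credentials.
credentials = boto3.client("sts").assume_role(
    RoleArn="arn:aws:iam::123456789012:role/release/release-elevated-access",
    RoleSessionName="myapp-session",
)["Credentials"]

dynamodb = boto3.client(
    "dynamodb",
    region_name="us-west-2",
    aws_access_key_id=credentials["AccessKeyId"],
    aws_secret_access_key=credentials["SecretAccessKey"],
    aws_session_token=credentials["SessionToken"],
)
dynamodb.put_item(
    TableName="Music",
    Item={"Artist": {"S": "Example Artist"}, "SongTitle": {"S": "Example Song"}},
)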

ReleaseHub-supplied Elevated Permissions Role

ReleaseHub provides a default "Elevated Permissions Role" that can be used to assume permissions above what the normal Node Instance Role allows, for example to run administrative (or lower-privileged) services. Contact us for details on what permissions the elevated role includes and for the ARN to use when assuming it.