How to Configure Access to Your K8s Cluster

Learn how to create access controls and view your cluster using kubectl, k9s, and eksctl.

How to Add IAM Users to Your Self-hosted EKS Cluster(s)

This document walks you through setting up a user with access to an EKS Release cluster.

Definitions

  • AWS: Amazon Web Services

  • IAM: Identity and Access Management

  • EKS: Elastic Kubernetes Service

  • ARN: Amazon Resource Name

  • OAuth/SAML: Open Authorization/Security Assertion Markup Language, methods for identifying and authorizing users and applications

Prerequisites

Before you continue, you will need the following:

Administrator

  • IAM Credentials for someone who already has administrator privileges or who is already listed in the EKS configuration map as an administrator

  • The Role or User ARN that identifies the user (looks like arn:aws:iam::ACCTID:user/USERNAME; see the note after this list if you are unsure of the exact value)

  • An existing kubeconfig file for the EKS cluster

  • If you do not have an existing kubeconfig file then you can generate one by following the initial steps for the end user.

  • If you do not already have access to the cluster to generate a kubeconfig file, you must use the original user or role credentials that created the cluster. An AWS administrator should be able to assume that role to generate the configuration. You can contact AWS support or ask Release whether they can identify which user or role created the cluster.

  • We recommend that you install k9s for ease of use
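
If you are unsure of a user's exact ARN, one simple check is for that user to print it from their own credentials with the AWS CLI (assuming they have the AWS CLI configured):

aws sts get-caller-identity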

End User

  • IAM credentials for the AWS account and region where the EKS cluster runs

  • The eksctl and kubectl binaries installed

  • We recommend that you install k9s for ease of use

Administrator Steps

These steps are for administrators to grant access to the cluster. There are two ways to grant privileges: via the k9s visual editor or via the command line.

K9s Instructions

  1. Assuming you already have k9s set up and a kubeconfig file available (if not, you can follow the initial steps for an end user), start up k9s and use the :namespace command to access the kube-system namespace as shown below.

  2. Then use the :configmap command to access the aws-auth configuration as shown below.

  3. Once you find aws-auth, press e to edit the file and insert the user as shown below.

  4. Carefully copy and paste the section outlined in red above to create a new user, making sure to edit the ARN correctly so the user can access the system. In this example, the users are administrators, but you can consult the Kubernetes documentation for default roles such as viewers and ops users. (See the example snippet after this list.)

  5. Save the file and then verify the changes by using the d (describe) command to view the document that was applied.
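
For reference, here is a minimal sketch of what the mapUsers entry might look like after the edit. ACCTID and USERNAME are placeholders, and the system:masters group grants administrator access; consult the EKS documentation for the exact format used in your cluster.

  mapUsers: |
    - userarn: arn:aws:iam::ACCTID:user/USERNAME
      username: USERNAME
      groups:
        - system:masters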

CLI Instructions

You can follow the documentation available from AWS to perform the same procedure shown visually above. The steps are the same, and an example of the commands follows the list:

  1. Download the existing aws-auth ConfigMap from the kube-system namespace

  2. Edit the mapUsers field and add the user

  3. Save the file

  4. Apply the changes to the cluster

  5. Verify the changes have been made
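
A minimal sketch of those steps using kubectl (the local file name aws-auth.yaml is just a working copy; any name will do):

kubectl get configmap aws-auth -n kube-system -o yaml > aws-auth.yaml
# edit aws-auth.yaml and add the user under mapUsers, then save
kubectl apply -f aws-auth.yaml
kubectl describe configmap aws-auth -n kube-system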

End User Steps

Assuming that you have been added to the cluster ConfigMap and that you have the prerequisites installed, you can gain access to the cluster to view status, logs, and perform any other tasks your permissions allow.

Create Kubeconfig File

You will need your AWS credentials available, either in configuration files, environment variables, or named profiles. The actual steps are beyond the scope of this document, but you can read about them in the credentials quickstart. The eksctl binary respects the same configuration directives that the AWS CLI uses. This document assumes the default credentials are available. If you wish to use a set of credentials other than the default, you will need to specify them appropriately (see the example after this paragraph).
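
For example, if your credentials live in a named profile (my-profile here is a placeholder), you can export it before running eksctl, since eksctl honors the same environment variables as the AWS CLI:

export AWS_PROFILE=my-profile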

Your credentials will authenticate you as a user or role in the account and region where the EKS cluster is available. You may have a user configured in a different account and then assume a role into the EKS cluster account, or you may have more complicated setups with OAuth or SAML integrations, which are beyond the scope of this document.

To generate your kubeconfig file, run the following command where your eksctl binary is available and your default AWS credentials are configured:

eksctl utils write-kubeconfig --cluster CLUSTERNAME --region REGION
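
Assuming the command succeeds, you can confirm that the generated kubeconfig works by listing the cluster's namespaces:

kubectl get namespaces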

K9s instructions

We recommend that you use the K9s interface mainly for visualization and viewing logs and status rather than cluster administration, although that might be possible. Here are a few use cases we’ve found useful.

View Application Namespaces

You can use the :namespaces command and filter with the /release search to list applications running from the Release Environments as shown below:

View Pods for a Release Environment

You can then either press Enter on a namespace or type the :pods command to view the applications in the Release Environment as shown below:

View Logs for an Application Container in a Release Environment

You can use the l (or logs) command to view what is happening in your application as shown below:

Access the Container System (if available)

If you have sufficient privileges and configuration, use the s (or shell) command to enter the running container if available:
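
If you prefer the command line, a roughly equivalent kubectl command (assuming the container image ships a /bin/sh shell; RELEASEENV and RELEASESERVICE are placeholders as in the CLI section below) is:

kubectl exec -it -n RELEASEENV RELEASESERVICE -- /bin/sh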

Exit K9s

Use the familiar VI controls to :quit the K9s application:

CLI Instructions

The following CLI commands give you the same information as the visual output above. As stated previously, we recommend that you use these commands to examine the state of the cluster and generally not to change settings or stop/start pods or services, because this should all be handled by the Release website or our own CLI tool.

You can also find great kubectl documentation here.

kubectl get namespaces

Remember that a namespace in Kubernetes maps to a Release Environment.
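
If your Release Environment namespaces contain "release" in their names (an assumption; adjust the filter to match your cluster's naming), you can narrow the list the same way the /release search does in k9s:

kubectl get namespaces | grep release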

kubectl get pods -n RELEASEENV

Remember that a pod in Kubernetes maps to a Release Service in the Environment.

kubectl logs -n RELEASEENV RELEASESERVICE
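
If the pod runs more than one container, or you want to stream logs as they arrive, kubectl logs accepts the -c and -f flags (CONTAINERNAME is a placeholder for one of the pod's containers):

kubectl logs -n RELEASEENV RELEASESERVICE -f
kubectl logs -n RELEASEENV RELEASESERVICE -c CONTAINERNAME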