Application Guide
The application is one of the “top level concepts” within Release. Before we dive into creating our first application, let's introduce some high level information about applications and environments, so you have more context about how to model your services within Release. We start by introducing some terminology that we’ll use throughout this guide:
An application is where you define the configuration required to deploy one or more services that together make up a deployable unit of your system
An environment is a running instance of an application (you can have many environments within an application)
App Imports
App imports are the mechanism by which we connect applications together to form “virtual applications”


Modeling Applications

Within Release, it’s possible to model your existing environments in a number of different ways, and it’s important to perform this mapping before we start building Release applications.
An application within Release can run multiple containers in a single Kubernetes namespace. A typical application might be called “backend”, map to your backend repository, and run your backend service as well as supporting containers such as postgres and redis.
It’s possible to have an application build and pull containers from multiple repositories, and then run all those containers in a single application. Sometimes this is necessary in a mono-repo environment, but we generally recommend modeling smaller applications that can be connected together later in the process. This gives you additional flexibility when modeling an environment. An example of this kind of flexibility is:
“I want to be able to test my backend as a standalone environment, I don’t need an associated frontend, just the backend and the database. However, when I deploy my frontend I also want it to deploy an associated backend because the frontend can’t be tested in isolation!”
App Imports allow this flexibility, and we’ll dig deeper into that part of modeling as the guide progresses. When you set up the configuration for an application, you list a single repository to track (for the purposes of matching builds for updating running environments), so generally we recommend “one application per repository” as the model to follow.

Application Environments

Terminology gets complicated when you introduce Release. Traditionally the word “environment” means the superset of everything required for your system to run. You might have a staging environment that contains every service you have.
Within Release, each application can have any number of “environments” running. When you create an ephemeral environment you select a branch to track and then an environment is created, using the “templates” you define when configuring the application.
This environment, once created, is no longer affected by changes to the template; you can think of it as a complete fork at the time of creation. If you use our App Imports feature, you may have multiple Release environments connected together to form a virtual environment that contains everything necessary.

Virtual Applications (App Imports)

As mentioned above, we have a mechanism for connecting together multiple applications. We call these “app imports” and you can read more comprehensive documentation on our website. At a very high level, these allow you to automatically trigger the creation of environments in multiple applications when an environment for just one application is triggered.
Let's imagine you have three applications:
  • Backend
  • Frontend
  • Admin
These map to three repositories in git with the same names. If you configure app imports for “backend” and “admin” in the frontend application (the documentation covers how to do this), then when you create an environment for a branch called “feature/new” in the frontend repository we’ll:
  1. Look for matching “feature/new” branches in the backend and admin repositories
     a. If we find these branches we’ll build the necessary containers
     b. If we don’t find these branches we’ll fall back on your default branch (master/main)
  2. We’ll kick off new environments in backend, admin, and frontend
  3. We’ll merge all the containers into a single Kubernetes namespace (so they can all talk to each other)
At the end you’ll be able to connect to the url you see for the frontend, and it’ll in turn be able to talk to the urls created for the backend and admin as necessary, giving you a complete virtual environment containing all services you need to test the pull request on the frontend.
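Purely to illustrate the shape of this configuration — the key names below are invented for this sketch, not the documented App Imports schema, so refer to the documentation for the real format — the frontend application might declare something like:

```yaml
# Hypothetical sketch only: key names are illustrative, not the real App Imports schema
app_imports:
  - name: backend   # automatically create a matching backend environment
  - name: admin     # automatically create a matching admin environment
```

The important idea is that the import list lives on the application that triggers environment creation (here, the frontend).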

Routing Traffic to Applications

Now that we have all these applications, we need to route traffic into them. As you can imagine, this can get fairly complicated, and our documentation dives into more specific details.
We have the ability to route traffic into applications publicly (so that anyone with the URL can connect) as well as privately (the URLs are only accessible from your AWS network). Right now this is a full “account wide flip”, and you can talk to your Release TAM to make this change.
An “Ingress” is a kubernetes concept, and is effectively a shared cluster wide resource to handle more sophisticated routing requirements. You can read about the ingress over at the Kubernetes Documentation.
What you need to know about the Ingress for the purposes of Release is:
  1. Some aspects of the Ingress are configurable (timeouts, body sizes, and so on).
  2. We support “rules based routing” for complex scenarios, allowing you to route paths such as /api to one service while routing all other paths to another.
Most of the Ingress is handled for you automatically, and you’ll never need to think about TLS certificates or basic configuration. We map your configured routing rules into Ingress configuration for you.
Most of the time you will only need to adjust this if you know your application has very long requests, massive POST/GET bodies, or other unique needs. You can work with your Release TAM to investigate responses at the Ingress layer, if required, to find appropriate tuning values.
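To make this concrete, rules based routing and the tunable values above map onto a standard Kubernetes Ingress. The example below is an illustration of that underlying resource, not Release’s generated output — the hostname, service names, ports, and annotation values are invented, and the annotations shown are specific to the NGINX ingress controller:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend
  annotations:
    # Tuning values like these are the kind of thing you might adjust with your TAM
    nginx.ingress.kubernetes.io/proxy-body-size: "50m"    # allow larger POST bodies
    nginx.ingress.kubernetes.io/proxy-read-timeout: "120" # allow longer requests
spec:
  rules:
    - host: feature-new.example.com
      http:
        paths:
          - path: /api          # route /api to the backend service
            pathType: Prefix
            backend:
              service:
                name: backend
                port:
                  number: 8080
          - path: /             # route everything else to the frontend service
            pathType: Prefix
            backend:
              service:
                name: frontend
                port:
                  number: 3000
```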

Building Containers

The foundation of any service in Release is the container. When creating applications, the repository link is used to determine which containers need to be built and how to build them.
While we have extensive documentation around the build configuration, what you need to know at a very high level is that we support two kinds of builds.


Docker Builds

Docker builds are the most commonly used option inside Release. When you define a build for a service, Release will look for the corresponding Dockerfile and build that container. We can build in two locations: a Release managed cluster, or your own cluster for enhanced security.
Once containers are built (in either environment) they are pushed to a container registry within your Cloud account.
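As a sketch of what pointing a build at a Dockerfile looks like — the key names below are illustrative rather than the exact Release template schema, so check the build configuration documentation for the real keys:

```yaml
# Hypothetical sketch: key names are illustrative, not the exact Release schema
builds:
  - name: backend
    context: .               # build context within the repository
    dockerfile: Dockerfile   # the Dockerfile Release will look for and build
```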


Static Builds

Our alternative build mechanism is primarily used for static Node-based frontends. A typical pattern seen in the wild is a frontend that is built and pushed to a cloud bucket (typically S3) and served through a CDN for performance reasons. Our static builder replicates this pattern, and allows you to build CDN-served frontends.
This can be problematic if you need “run time information”, such as dynamically generated backend URLs to connect to. If you find you need more information about the environment when the frontend is built, a common pattern is to use a Docker build instead and run a “build” when the container starts, ensuring it has all environment information.

Initializing an Application

Now that you have an overview of how applications work within Release, we’re ready to actually start building and deploying your first environment.

Create Application

To create your first application, at a high level the steps are:
  1. Create a new application from the Release dashboard
  2. Connect the application to a version control system repository (e.g. GitHub, GitLab, etc.)
  3. Review and customize the automatically generated Application Template
  4. Provide the necessary environment variables and secrets required to run the application
  5. Start the first build and deployment
When we generate the initial Application Template, Release will check out your application’s source code from your repository, attempt to detect the various services that compose your application, and determine how to build them.
One of the most effective ways for Release to discover your application’s services is through a well constructed Docker Compose file. For tips on creating a Docker Compose file, refer to this section of the guide.
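For illustration, a minimal Compose file of the kind Release can detect services from might look like this (the service names, image, and credentials are examples):

```yaml
version: "3.8"
services:
  backend:
    build: .            # built from the Dockerfile in this repository
    ports:
      - "8080:8080"
    environment:
      DATABASE_URL: postgres://postgres:password@db:5432/app
    depends_on:
      - db
  db:
    image: postgres:15  # supporting container pulled from a public registry
    environment:
      POSTGRES_PASSWORD: password
```

From a file like this, the detectable pieces are clear: one service to build from the repository, one supporting container to pull, the ports to route, and the environment variables each service expects.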
For a step-by-step guide through application creation see: Create Your Application.


Workflows

Deployments to an environment will follow one of a predefined set of workflows, depending on how the deployment is triggered.
Workflows can be customized to run user-defined jobs, or to control which services are started or recreated (and in which order) in response to changes to the environment.
  • Triggered by: initial environment creation, or environment configuration changes
    Useful for: one-time jobs like database initialization, external resource creation, and service startup
  • Triggered by: a push to the environment’s tracking branch
    Useful for: recreating service instances with images based on new code, and running database migration jobs to update the schema
  • Triggered by: environment deletion
    Useful for: external resource cleanup
For more details on configuring your application’s workflows see: Workflow Schema Definition
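To give a feel for the shape of a workflow definition, the sketch below is purely illustrative — the key and stage names are invented, not the actual Workflow Schema, so rely on the schema documentation for the real format:

```yaml
# Illustrative only: key names here are hypothetical, not the real Release schema
workflows:
  - name: setup              # runs on initial environment creation
    steps:
      - run: db              # start the database first
      - run: migrate-job     # one-time job: initialize the schema
      - run: backend         # then bring up the application services
      - run: frontend
  - name: teardown           # runs on environment deletion
    steps:
      - run: cleanup-job     # e.g. delete external resources
```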
ACTION: Create your first application and configure it appropriately to build

Connect Data Sources

Ephemeral environments work best when they have enough useful “real data” for your teams to test against. Our Instant Datasets feature allows you to track a snapshot of an existing database in your cloud account and generate ephemeral databases that get attached at environment creation time.
There are some things you need to know about this feature up front if you want to use it:
  • Release must be deployed into the same cloud account and region as the database you wish to track
  • Each night we will replace any unattached databases with fresh clones from the most recent snapshot
  • You will have additional cloud costs from maintaining a pool of databases for ephemeral environments
We have documentation around creating and attaching datasets on our website.

Advanced Application Configuration

While most applications configured within Release use a limited set of features, we support a richer set of functionality than we’ve discussed so far. The rest of this guide is intended to introduce you to those concepts so that you know what is possible within Release. Each of the below topics is covered in detail within the documentation on our website.


Multiple Repositories

While we normally map one repo to one application, we do have the ability to connect multiple repositories into one application. This ensures multiple containers are built as necessary to be used in the services section.

Environment Variables

Almost all configuration of services within Release is done by providing appropriate environment variables. This is a common pattern with Kubernetes services, but it means your application should be capable of reading environment variables and configuring itself.
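In Kubernetes terms, that configuration arrives in the container spec. The variable names, image, and secret below are examples:

```yaml
# Example Kubernetes container spec: configuration arrives as environment variables
containers:
  - name: backend
    image: registry.example.com/backend:abc123
    env:
      - name: DATABASE_URL          # your app reads this at startup
        value: postgres://db:5432/app
      - name: API_KEY               # secrets can be injected the same way
        valueFrom:
          secretKeyRef:
            name: backend-secrets
            key: api-key
```

The key requirement on your side is simply that the application reads values like these from the environment at startup, rather than from configuration baked into the image.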


Services

Services are the basic building block within Release, and you’ll have experienced them by now. However, there are a few advanced cases worth talking about so you know these features exist.


Health Checks

Health checks are an optional concept, but they are helpful for ensuring your application is monitored and restarted appropriately. Kubernetes supports two kinds of health checks, liveness and readiness probes, which are documented in the Kubernetes Documentation as well as in our reference documentation. Release supports both exec and httpGet style probes.
A liveness probe is a test to see if your container is working at all. A common test here might be a simple GET on / to see if we get a 200. If a container fails the liveness probe, Kubernetes will delete the container and start a new one. Thus, liveness probes should be used sparingly and only when you can tolerate containers being killed and restarted automatically.
A readiness probe is a test to see if your container is ready for traffic. This is useful if your containers take a while to start, or perform work before they are ready for incoming requests. This is extremely important if you have required services or dependencies between services that must be ready before subsequent services start. This is also important if your service takes a long time (perhaps, more than one minute) to start servicing requests. Without a readiness probe, your service could be refusing to service requests for a long period of time before being ready.
Health checks are not required in Release, but we strongly recommend them because they help the underlying Kubernetes cluster understand if your services are healthy or not. We recommend adding them carefully, however, and only after you are done testing the functionality and initial configuration of your application. Readiness probes are one of the last steps you can add to your application before considering it ready to be used.
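In plain Kubernetes terms, the two probes sit side by side in the container spec (the paths, port, and timings below are examples you would tune for your service):

```yaml
containers:
  - name: backend
    image: registry.example.com/backend:abc123
    livenessProbe:             # failing this causes the container to be killed and restarted
      httpGet:
        path: /
        port: 8080
      periodSeconds: 15
      failureThreshold: 3
    readinessProbe:            # failing this only removes the pod from service traffic
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 10  # give a slow-starting service time before the first check
      periodSeconds: 5
```

Note the asymmetry described above: a failed liveness probe restarts the container, while a failed readiness probe merely holds traffic back until the service is ready.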

Init Containers

Init containers are a way to perform one-time tasks at container startup. They are often used for tasks such as database migrations, or copying static assets from S3 into the container at runtime. You can read more about how to use these in our documentation as well as in the Kubernetes documentation.
The most important thing to know about an init container is that it must run successfully to completion before your main container will be started. If the init container exits with an error code, the pod will be deleted and a new one will be created to try again. Often, this results in the dreaded CrashLoopBackOff if not managed correctly.
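In Kubernetes terms, an init container sits alongside the main container in the pod spec (the image and command here are examples):

```yaml
# The init container must exit 0 before the main container starts
initContainers:
  - name: migrate
    image: registry.example.com/backend:abc123
    command: ["./manage.py", "migrate"]   # example: run database migrations first
containers:
  - name: backend
    image: registry.example.com/backend:abc123
```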


Jobs

Jobs are a “one time service”: they start a container, run once, and then exit. These are great for various setup tasks and other one-time work. By default they do not block other services from coming up, and their failure will not cause your services to fail. You can manage how jobs and services interact, and how they handle failure modes, in the workflow. You can read about how to use these in our documentation.


Migrations

We commonly see Jobs used for migrations. A good example of this is when you run postgres as a service within Release, but want to run a command to run migrations after it’s up. This is similar to the init container above, but jobs won’t block the start of a service unless you want them to stop the deployment.
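A plain Kubernetes Job for the migration case might look like this (the name, image, and command are examples):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: migrate
spec:
  backoffLimit: 3               # retry the migration up to three times on failure
  template:
    spec:
      restartPolicy: Never      # a finished Job container is not restarted in place
      containers:
        - name: migrate
          image: registry.example.com/backend:abc123
          command: ["./manage.py", "migrate"]  # example migration command
```

Unlike an init container, this runs as its own pod, so the service it supports can start independently unless your workflow says otherwise.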


GitOps

Release supports GitOps configuration management by checking your application template and environment variables into your Git repository. This is a beta feature that requires the Release support team to enable it for your account. Once your account has been enabled for GitOps, you will be able to make Release configuration changes via Git commits and pushes to your repository. If you are interested in learning more about using GitOps in Release, please reach out to [email protected].