
Application Template Schema

This configuration template is the basis for all environments you create for this application. Each of the sections and directives in this file helps build the configuration for your specific environment. The environment_templates section describes the differences between your ephemeral and permanent templates; you select one of these when creating an environment. Each section and directive is described in detail below.
---
app:
  type: String
  required: true
  description: Name of your app, can't be changed.
auto_deploy:
  type: Boolean
  required: true
  description: If true, environments will auto deploy on a push
context:
  type: String
  required: true
  description: Cluster context
domain:
  type: String
  required: true
  description: Used to create hostnames
mode:
  type: String
  required: true
  description: What mode your app runs in, in the environment
repo_name:
  type: String
  required: true
  description: Name of the repository, can't be changed.
tracking_branch:
  type: String
  required: false
  description: Default branch for environments to track
tracking_tag:
  type: String
  required: false
  description: Default tag for environments to track
app_imports:
  type: Array
  required: false
  description: Connect multiple apps together
cron_jobs:
  type: Array
  required: false
  description: Cron jobs
environment_templates:
  type: Array
  required: true
  description: Templates for creating environments
hostnames:
  type: Array
  required: false
  description: Hostnames for services
jobs:
  type: Array
  required: false
  description: Arbitrary jobs, scripts to run.
resources:
  type: Hash
  required: true
  description: Default cpu, memory, storage, and replicas
routes:
  type: Array
  required: false
  description: For defining multiple entry points to a service, routing rewrites, and auth
rules:
  type: Array
  required: false
  description: For defining multiple entry points to a service
services:
  type: Array
  required: false
  description: List of services needed for your application
sidecars:
  type: Array
  required: false
  description: Reusable sidecar definitions
workflows:
  type: Array
  required: false
  description: Definitions for deploying config and code updates

auto_deploy

If true, environments will deploy whenever you push to the corresponding repo and tracking branch.

context

This value is used by your application to deploy to a specific cluster. If you have your own EKS cluster through Release, you can change this value to match that cluster; if not, use the generated value.

domain

The domain name where your applications will be hosted. These domains must be AWS Route 53 hosted domains. Release supports first- and second-level domains (e.g. domain.com or release.domain.com).

mode

Mode is a configuration directive that you can use if useful (it is set as an environment variable in your containers), e.g. 'development', 'production', or 'test'.

tracking_branch

By default this will be the default branch of your repository, but it can be changed to any branch you would like to track with your environments.

tracking_tag

A specific git tag that you want your environments to track. You must unset tracking_branch if you use tracking_tag.
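Putting the directives above together, the top of an Application Template might look something like this sketch (all values here are illustrative placeholders, not generated defaults):
---
app: spacedust
auto_deploy: true
context: release-us-east-1
domain: spacedust.example.com
mode: development
repo_name: spacedust
tracking_branch: main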

app_imports

App Imports are a way to connect multiple apps together. When you create an environment on one application, the apps that you import will also get environments created in the same namespace. Click here for more info.

cron_jobs

Cron Jobs are Jobs that run on a schedule. Cron jobs allow you to periodically execute commands within a namespace. They can be used for warming up caches, running database maintenance, etc. Click here for more info.
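As a rough sketch only: a cron job entry could combine the job-style name, command, and from_services fields with a schedule. The schedule directive and its cron syntax here are assumptions; see the linked cron jobs page for the authoritative schema.
cron_jobs:
- name: cache-warmer
  # assumed directive: standard cron syntax, every 30 minutes
  schedule: "*/30 * * * *"
  command:
  - "./warm-caches.sh"
  from_services: backend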

environment_templates

These templates are used when creating an environment. They allow you to override or change any of the defaults in this file for a particular type of environment: ephemeral or permanent. Click here for more info.

hostnames

Hostnames are defined as an array of key/value pairs, under the hostnames directive. The key is a service name, and the value is the hostname. These can be hardcoded hostnames or utilize variable substitution. They are auto-generated for any service with a node_port or static build.

jobs

Jobs are like services except they run to completion. Examples include database migrations, asset compilation, etc. They inherit the image from a service and run a command that ultimately terminates. Click here for more info.

resources

Default resources for all of your services. The structure and values are based on https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/. Click here for more info. For examples of why and how to override these defaults check out Managing Service Resources.

routes

Routes provide an easy way to define multiple endpoints per service. They allow for edge routing rewrites and authentication, and provide full support for nginx ingress rules.

rules

Rules are based on https://kubernetes.github.io/ingress-nginx/user-guide/basic-usage/ and provide an easy way to define multiple endpoints per service. They consist of three parts: service name, hostnames, and a path. Release will take this configuration and create an Nginx Ingress deployment to handle your routing automatically.

services

These services define the most important parts of your application. They can represent services Release builds from your repositories, off-the-shelf containers (postgres, redis, elasticsearch, etc.), external services you need to connect to, or even services from your other applications that are also needed in this one. Click here for more info.

sidecars

Top-level sidecar definitions allow you to create reusable containers that can be applied to several services defined within your application. These are useful for log aggregation. Click here for more info.

workflows

Workflows are an ordered list of what must be done to deploy new configuration or code to your environments. They are a combination of services and jobs (if you have them). Click here for more info.
Release supports three kinds of workflows: setup, patch, and teardown. When a new environment is created, setup is run; when code is pushed, a patch is run against that environment; when an environment is deleted, teardown is run.

Hostnames or Rules

Hostnames and rules can both be used to define entry points to your services, but they cannot be used together at the same level of the config. In other words, you can't have both default hostnames and default rules, but you could have default hostnames and then use rules inside the environment_templates section of the file.
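For example, a config could keep default hostnames while a permanent template switches to rules (a sketch; the template name key and hostname values are illustrative):
---
hostnames:
- frontend: frontend-${env_id}-${domain}
environment_templates:
- name: permanent
  rules:
  - service: frontend
    hostnames:
    - www.${domain}
    path: "/"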

Hostnames

Hostnames are defined as an array of key/value pairs, under the hostnames directive. The key is a service name, and the value is the hostname. These can be hardcoded hostnames or utilize variable substitution. They are auto-generated for any service with a node_port or static build.
---
hostnames:
- frontend: frontend-${env_id}-${domain}
- docs: docs-${env_id}-${domain}
- backend: backend-${env_id}-${domain}
Hostnames by default are generated using two variables: env_id and domain. env_id is a randomly generated string for ephemeral environments, or the name of the environment for permanent ones. Using a random component allows Release to bring up any number of ephemeral environments on the same domain without conflicts. domain is taken directly from your configuration file.

Rules

Rules are based on https://kubernetes.github.io/ingress-nginx/user-guide/basic-usage/ and provide an easy way to define multiple endpoints per service. They consist of three parts: service name, hostnames, and a path. Release will take this configuration and create an Nginx Ingress deployment to handle your routing automatically.
service:
  type: String
  required: true
  description: Service name from your config
hostnames:
  type: Array
  required: true
  description: Same as hostnames above
path:
  type: String
  required: true
  description: Entry point for hostnames
Rules Schema
rules:
- service: backend
  hostnames:
  - backend-${env_id}.${domain}
  path: "/auth/"
- service: frontend
  hostnames:
  - frontend-${env_id}.${domain}
  path: "/graphql"
Rules Example

App Imports

App Imports are optional and not present in the Application Template by default. The app that you are importing from must exist in your account.
name:
  type: String
  description: name of app you want to import
  required: true
branch:
  type: String
  description: setting the branch means you'll always get
    an environment with that branch
  required: false
exclude_services:
  type: Array
  description: if you have a service in your imported app that would
    be a repeat, say both apps have Redis, you can exclude it
  required: false
App Import Schema
name:
  type: String
  description: name of service you want to exclude
  required: true
App Import: Exclude Services Schema
app_imports:
- name: backend
  branch: new-branch
  exclude_services:
  - name: redis
Example: App Imports excluding a service

Environment Templates

There are two types of allowed and required templates: ephemeral and permanent. When creating a new environment, either manually or through a pull request, one of these templates is used to construct the configuration for that particular environment. If the template is empty you get the defaults contained in your Application Template; these templates allow you to override any of those defaults.
The schema for these is a duplicate of the entire default configuration, as it allows you to override anything contained in this file for that particular template. As such, we won't detail the schema twice, but there are examples here showing how to override default configuration in your templates.
Instant Datasets are unique in that they are not allowed at the root of the default config and can only be added under environment_templates. Since Instant Datasets allow you to use instances of RDS databases (often snapshots of production, but they could be snapshots of anything), having this be the default could result in unwanted behavior for your permanent environments.
Release requires you to be explicit about which templates should, by default, use Instant Datasets. Once you have created an environment, you may add Instant Datasets to it through the Environment Configuration file if you don't want all environments of a particular type to use datasets.
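As a sketch of what overriding looks like (assuming each template entry is keyed by name; all override values are illustrative), an ephemeral template might trim resources while the permanent template pins a hostname:
---
environment_templates:
- name: ephemeral
  resources:
    replicas: 1
    memory:
      limits: 512Mi
      requests: 100Mi
- name: permanent
  hostnames:
  - frontend: www.${domain}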

Jobs

Jobs allow you to run arbitrary scripts during a deployment. This lets you do anything before or after a service is deployed that is needed to set up your environment. A common example is running database migrations before your backend comes up, but after you have deployed your database. Another good example is running asset compilation. These tasks and any others can be accomplished using jobs.
name:
  type: String
  required: true
  description: Unique name to use when referencing the job
command:
  type: Array
  required: true
  description: Command to run to do the job
from_services:
  type: String
  required: true
  description: Name of service to inherit configuration from
Jobs Schema
jobs:
- name: migrate
  command:
  - "./run-migrations.sh"
  from_services: backend
- name: setup
  command:
  - "./run-setup.sh"
  from_services: backend
Jobs Example

Resources

Resources are service-level defaults. They represent the resources allocated for each service. Storage is different in that not every container needs storage, so while you can specify defaults, not every container will use storage.
Requests define resource guarantees: containers are guaranteed the requested amount of a resource. If not enough resources are available, the container will not start.
Limits, on the other hand, make sure a container never goes above a certain amount of a resource. The container is never allowed to exceed the limit.
memory: Limits and requests for memory are measured in bytes. You can express memory as a plain integer or as a fixed-point integer using one of these suffixes: E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. For example, the following represent roughly the same value: 128974848, 129e6, 129M, 123Mi.
cpu: Limits and requests for cpu are represented in millicpu, written as '{integer}m', e.g. 100m (guarantees that the service will receive 1/10 of 1000m, or 1/10 of 1 cpu). You can also represent cpu resources as fractions, e.g. 0.1 is equivalent to 100m. Precision finer than '1m' is not allowed.
replicas: The number of containers that will run during normal operation. This field is an integer, e.g. 5 would run 5 copies of each service.
storage: Consists of two values: size and type. Size accepts the same values as memory, and type is the type of storage: aws-efs, empty_dir, or host_path.
---
replicas:
  type: Integer
  required: true
  description: Number of containers, per service
  default: 1
cpu:
  type: Hash
  required: true
  description: Limits and requests for cpus
  default: '{"limits"=>"1000m", "requests"=>"100m"}'
memory:
  type: Hash
  required: true
  description: Limits and requests for memory
  default: '{"limits"=>"1Gi", "requests"=>"100Mi"}'
storage:
  type: Hash
  required: false
  description: Size and type definition
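For example, a complete resources block using the defaults above plus an illustrative storage definition (the storage size shown is an assumption, not a Release default):
---
resources:
  cpu:
    limits: 1000m
    requests: 100m
  memory:
    limits: 1Gi
    requests: 100Mi
  replicas: 1
  storage:
    type: aws-efs
    size: 10Gi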

Services

Services contain descriptions of each of your containers. They include many fields from your docker-compose file as well as fields auto-generated by Release upon application creation. For each service you can:
  • Define static JavaScript builds
  • Open and map any number of ports
  • Create mounts and volumes
  • Use ConfigMaps to modify config at run-time for off-the-shelf containers
  • Override default resources
  • Pin particular services to particular images
  • Create liveness and readiness probes and set other k8s config params (e.g.
    max_surge)
  • Create stateful services
  • Create external DNS entries for cross-namespace services
---
build_base:
  type: String
  required: false
  description: Path to build directory for static builds
build_command:
  type: String
  required: false
  description: Command to do a static build
completed_timeout:
  type: Integer
  required: false
  description: Time to wait for container to reach completed state
image:
  type: String
  required: false
  description: Name of or path to image
max_surge:
  type: String
  required: false
  description: K8s max_surge value
name:
  type: String
  required: true
  description: Name of your service
pinned:
  type: Boolean
  required: false
  description: Pin service to particular image
ready_timeout:
  type: Integer
  required: false
  description: Time to wait for container to reach ready state
replicas:
  type: Integer
  required: false
  description: Same as resources, but for this service only
static:
  type: Boolean
  required: false
  description: If true Release will do a static build
command:
  type: Array
  required: false
  description: Command to run on container start
cpu:
  type: Hash
  required: false
  description: Same as resources, but for this service only
depends_on:
  type: Array
  required: false
  description: List of services that must be deployed before this one
init:
  type: Array
  required: false
  description: List of containers to be invoked before the primary service
liveness_probe:
  type: Hash
  required: false
  description: Test of proper container operation
memory:
  type: Hash
  required: false
  description: Same as resources, but for this service only
readiness_probe:
  type: Hash
  required: false
  description: Test for proper container start-up
sidecars:
  type: Array
  required: false
  description: List of containers run alongside the primary service
storage:
  type: Hash
  required: false
  description: Same as resources, but for this service only
volumes:
  type: Array
  required: false
  description: List of volumes and mount points

Stateful Sets and Deployments

stateful provides a StatefulSet which creates guarantees about the naming, ordering and uniqueness of a service.
  • Stable, unique network identifiers.
  • Stable, persistent storage.
  • Ordered, graceful deployment and scaling.
  • Ordered, automated rolling updates.
If an application doesn’t require any stable identifiers or ordered deployment, deletion, or scaling, you should either set stateful to false or remove it.
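A minimal sketch of a stateful service (the service name, image, and storage values are illustrative):
services:
- name: postgres
  image: postgres:12
  stateful: true
  storage:
    type: aws-efs
    size: 20Gi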

Static Builds

In order to utilize static JavaScript builds you must provide values for these four directives:
  • static: true
  • build_base: path/to/build/directory
  • build_command: build command
  • build_directory: /path/to/output/directory/for/build_artifacts
---
static: true
build_command: GENERATE_SOURCEMAP=false yarn build
build_base: frontend
build_directory: build/
Static Build Example

Service Resources

Resources can be overwritten on a service-by-service basis. The resources key is removed, and each directive (cpu, memory, storage, and replicas) can be defined individually. If they are not specified, the defaults will be used.
cpu, memory, and storage define resource guarantees. The service definition for cpu, memory, and storage overrides the default values set under resources; you can use the service definition to more finely tune the amount of cpu, memory, and storage for each service.
replicas allows you to specify a different number of pods to be deployed for your particular service.
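For example (values are illustrative), a backend service might override the defaults individually, with any unspecified directives falling back to the resources defaults:
services:
- name: backend
  replicas: 3
  cpu:
    limits: 2000m
    requests: 500m
  memory:
    limits: 4Gi
    requests: 256Mi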

Init Containers

Init containers allow you to define additional containers that share volume mounts from the primary service. These can be used to perform setup tasks that are required for the main service to run. Init containers should run to completion with an exit code of zero; a non-zero exit code will result in a CrashLoopBackOff.
---
has_repo:
  type: Boolean
  required: false
  description: Whether the container is built from your connected repo
image:
  type: String
  required: false
  description: Name of or path to image
name:
  type: String
  required: true
  description: Name of the init container
args:
  type: Array
  required: false
  description: Arguments passed to the command
command:
  type: Array
  required: false
  description: Command to run on container start
volumes:
  type: Array
  required: false
  description: List of volumes and mount points
services:
- name: backend
  image: fred/spaceplace/backend
  init:
  - name: sync-seed-data
    command:
    - rsync
    - "-avzh"
    - [email protected]:/home/fred/seed-data
    - /app/seed-data
  - name: build-static-assets
    command:
    - rake assets:precompile
Example init container which inherits image from the main service
You can also define init containers using off-the-shelf images like busybox. This can be useful for performing additional operations which don't require the main service image, or which require binaries not included in the primary service.
- name: backend
  image: fred/spaceplace/backend
  init:
  - name: wait-for-my-other-service
    image: busybox
    command:
    - sh
    - '-c'
    - while ! httping -qc1 http://myhost:myport ; do sleep 1 ; done
Example init container using busybox to wait for another service to startup

Readiness and Liveness Probes

liveness_probe and readiness_probe are used to check the health of your service. When your code is deployed via a rolling deployment, the readiness_probe will determine if the service is ready to serve traffic before adding it to the load balancer. Release will convert the docker-compose healthcheck to a liveness_probe and readiness_probe. Both liveness_probe and readiness_probe allow for more advanced configuration beyond the docker-compose healthcheck definition.
---
services:
# HTTP health check with custom header
- name: frontend
  image: davidgiffin/spacedust/frontend
  command:
  - "./start.sh"
  completed_timeout: 240
  ready_timeout: 1200
  registry: local
  has_repo: true
  ports:
  - type: node_port
    target_port: '4000'
    port: '4000'
  liveness_probe:
    exec:
      command:
      - curl
      - "-Lf"
      - http://localhost:4000
    failure_threshold: 30
    period_seconds: 30
    timeout_seconds: 10
  readiness_probe:
    exec:
      command:
      - curl
      - "-Lf"
      - http://localhost:4000
    failure_threshold: 30
    period_seconds: 30
    timeout_seconds: 10
  cpu:
    limits: 2000m
    requests: 100m
  memory:
    limits: 4Gi
    requests: 100Mi
  static: true
  build_command: GENERATE_SOURCEMAP=false yarn build
  build_base: frontend
  build_directory: build/
- name: web
  readiness_probe:
    httpGet:
      path: /healthz
      port: 8080
      httpHeaders:
      - name: Custom-Header
        value: Awesome
    initialDelaySeconds: 5
    periodSeconds: 10
# TCP health check
- name: redis
  readiness_probe:
    tcpSocket:
      port: 6379
    initialDelaySeconds: 10
    periodSeconds: 30
# Command / shell health check
- name: worker
  readiness_probe:
    exec:
      command:
      - cat
      - /tmp/healthy
    initialDelaySeconds: 5
    periodSeconds: 5
In this example we show the various types of probes that you can define for services along with overrides for resources and timeouts, while also defining static builds.

Ports

Ports can be one of two types: container_port or node_port.
container_port is used to define a port that another service will consume. Internal services like your data stores, caches, and background workers should not be exposed to the internet and should be available only internally to other services.
node_port is used to define a service that you want to expose to the Internet. target_port is the port on the pod that the request gets sent to from a load balancer or internally; your application needs to be listening for network requests on this port for the service to work. port exposes the service on the specified port internally within the cluster.
You can set an optional loadbalancer flag to create a separate load balancer that can be used to access the service over the Internet. loadbalancer is useful for exposing a TCP-based service to the Internet that doesn't support HTTP/HTTPS traffic.
node_port will also define an ingress rule to allow HTTP/HTTPS traffic to be routed to a service. See the documentation on hostnames and rules to understand how to define ingress rules, mount your service at a custom path, etc.
---
services:
# create an ingress rule on port 8080
- name: frontend
  image: example-org/web-app/frontend
  has_repo: true
  ports:
  - type: node_port
    target_port: "8080"
    port: "8080"
# create an ingress rule on port 8080 and listen locally on port 4572
- name: localstack
  image: example-org/web-app/localstack
  has_repo: true
  ports:
  - type: container_port
    port: "4572"
  - type: node_port
    target_port: "8080"
    port: "8080"
# create a load balancer that listens on port 6000
- name: worker
  image: example-org/web-app/frontend
  has_repo: true
  ports:
  - type: node_port
    target_port: "6000"
    port: "6000"
    loadbalancer: true
Container and Node Ports together

Sidecars

It’s a generally accepted principle that a container should address a single concern. Sidecar containers allow you to define additional workloads for a given service. Sidecar containers share resources with the main service container, so they are great for shared filesystems, networking ports, shared process spaces, etc.
---
has_repo:
  type: Boolean
  required: false
  description: Whether the container is built from your connected repo
image:
  type: String
  required: false
  description: Name of or path to image
name:
  type: String
  required: true
  description: Name of the sidecar container
args:
  type: Array
  required: false
  description: Arguments passed to the command
command:
  type: Array
  required: false
  description: Command to run on container start
cpu:
  type: Hash
  required: false
  description: Same as resources, but for this container only
liveness_probe:
  type: Hash
  required: false
  description: Test of proper container operation
memory:
  type: Hash
  required: false
  description: Same as resources, but for this container only
readiness_probe:
  type: Hash
  required: false
  description: Test for proper container start-up
volumes:
  type: Array
  required: false
  description: List of volumes and mount points
services:
- name: frontend
  image: kornkitti/express-hello-world:master
  ports:
  - type: node_port
    target_port: "80"
    port: "80"
  volumes:
  - name: nginx-logs
    type: empty_dir
    mount_path: /var/log/nginx
  sidecars:
  - name: nginx
    image: nginx:1.7.9
  - name: logtail
    from: logtailer
sidecars:
- name: logtailer
  image: docker.elastic.co/logstash/logstash:7.10.1
  command:
  - tail
  - "-f"
  - /var/log/*
Example sidecar definition with a reusable logstash container and nginx for serving static assets

Workflows

By default there are three workflows: setup, patch, and teardown. setup is what creates your environment from scratch. patch is the workflow for deploying your code to an already set-up environment. teardown is the workflow for deleting your environment and removing any cloud-native resources that were created during setup. These are auto-generated, but you can add your own jobs, change the order, add tasks, etc.
---
name:
  type: String
  required: true
  description: Name of the workflow
wait_for_all_tasks_to_complete:
  type: Boolean
  required: false
  description: If true, will wait at the end of the workflow for all tasks to finish
order_from:
  type: Array
  required: false
  description: Jobs and services involved in the workflow
parallelize:
  type: Array
  required: false
  description: Alternative to order_from that runs your workflow steps in parallel
name is the name of the particular workflow; only setup, patch, and teardown are allowed.
wait_for_all_tasks_to_complete is set to true by default. This means that when the deployment finishes, it will wait for all the tasks to finish. If it is set to false, the stage will not wait for everything to finish and the next stage will run immediately.
workflows:
- name: setup
  order_from:
  - jobs.migrate
  - services.all
  - jobs.setup
- name: patch
  order_from:
  - jobs.migrate
  - services.frontend
  - services.backend
- name: teardown
  order_from:
  - release.remove_environment
Example: setup will deploy all your services and patch will only deploy frontend and backend.

Workflow Parallelization

Parallelizing your workflows can significantly decrease the time it takes to deploy your environment. But it may not be as simple as telling Release to run all of your services and/or jobs in parallel: in some cases you may need certain services to wait until, for example, a migration job has run. You can design your application around this issue, but in some cases it will be unavoidable, and parallelize gives you both options.
---
step:
  type: String
  required: true
  description: A name for this step in the workflow
wait_for_finish:
  type: Boolean
  required: false
  description: Some steps you want to start first, but not wait for. A long-running
    static job is a good example of something you may want to run near the beginning
    of your workflows, but not hold up your backends for.
metadata:
  type: Array
  required: false
  description: Data and params you can pass to tasks
tasks:
  type: Array
  required: true
  description: List of jobs and/or services to run in parallel
wait_for_finish allows you to customize the behavior of each step. By default everything in a step is run in parallel, but the workflow will not transition to the next step until every task in the step has finished. By setting it to false, the workflow will start the step and then progress to the next one immediately.
The most obvious use case for setting it to false is a long-running frontend static build. You want to start it at the beginning of the workflow with wait_for_finish: false; this way it will not hold up things that don't depend on it, and by default the workflow will still wait at the end for all tasks to finish before transitioning to the next workflow.
metadata allows you to customize tasks by passing directives or data to them. It allows you to have multiple tasks in your task list, but send different kinds of data or params to specific tasks. See the schema for metadata in Release Tasks.
workflows:
- name: setup
  parallelize:
  - step: frontend
    tasks: [services.frontend]
    wait_for_finish: false
  - step: migrate
    tasks: [jobs.migrate]
  - step: backend
    tasks: [services.backend]
  - step: post-setup-deployment
    tasks: [jobs.setup]
- name: patch
  parallelize:
  - step: frontend
    tasks: [services.frontend]
    wait_for_finish: false
  - step: migrate
    tasks: [jobs.migrate]
  - step: backend
    tasks: [services.backend]
- name: teardown
  parallelize:
  - step: remove_environment
    tasks: [release.remove_environment]
Example: Same services and jobs, but the frontend is started first and everything else is allowed to run without waiting for it to finish. Only after everything else is done will the workflow wait for services.frontend to finish.

Release Tasks (beta)

Release has created a few tasks that you can reference, parameterize, and use in your workflows. These tasks are built into Release.
  • pod_exec: This task allows you to run arbitrary commands on the pods of your choosing. This is very useful for sending messages to the services you have running on each pod. A good use case: you need to send a signal to all of your queue workers. If you used a Job (k8s Job), you would need to find all the pods yourself to run the command on; this way Release does that for you and executes your command on each pod.
  • pod_checker: This task checks the states of the pods and does not finish until at least one of those states is found on every pod. This is very useful if you would like to do something to, run something on, or run something against those pods, but only after they have transitioned to a specific state or one of a list of states.
  • remove_environment: This task is required in the teardown workflow. When it runs, Release will remove your environment from the UI and remove the namespace along with all corresponding objects from Kubernetes.

Pod Exec Metadata Schema

When you are using metadata to parameterize your pod_exec job, only a subset of the metadata directives are available.
task_name:
  value: 'release.pod_exec'
  type: String
  description: complete name of the task this metadata is for
  required: true
wait:
  type: Integer
  description: |
    How long this task will wait to finish
  required: true
command:
  type: Array
  description: |
    Command you wish to run
  required: true
for_pod:
  type: String
  description: exact pod name to run against, only available when using helm. Cannot be used with for_service.
  required: false
for_service:
  type: String
  description: service name to run the command against
  required: false
namespace:
  type: String
  default: current
  description: |
    Can be either `previous` or `current`; by default it is `current`. This defines which namespace to run against. This is most useful when using rainbow deploys because you have multiple namespaces in k8s per environment.
  required: false

Pod Exec Example

workflows:
- name: setup
  parallelize:
  - step: frontend
    tasks: [services.frontend]
    wait_for_finish: false
  - step: migrate
    tasks: [jobs.migrate]
  - step: backend
    tasks: [services.backend]
  - step: send-backend-pod-setup
    tasks: [release.pod_exec]
    metadata:
    - task_name: release.pod_exec
      command: ["ruby ./bin/backend_pod_setup.rb"]
      wait: 300
      for_service: backend
  - step: post-setup-deployment
    tasks: [jobs.setup]
- name: patch
  parallelize:
  - step: frontend
    tasks: [services.frontend]
    wait_for_finish: false
  - step: migrate
    tasks: [jobs.migrate]
  - step: backend
    tasks: [services.backend]
Pod Exec Example: We run ruby ./bin/backend_pod_setup.rb on each pod for the backend service.
Full Metadata Schema
task_name:
  type: String
  description: complete name of the task this metadata is for
  required: true
wait:
  type: Integer
  description: |
    How long this task will wait to finish
  required: true
command:
  type: Array
  description: |
    Command you wish to run
  required: false
for_service:
  type: String
  description: service name to run the command against
  required: false
namespace:
  type: String
  default: current
  description: |
    Can be either `previous` or `current`; by default it is `current`. This defines which namespace to run against. This is most useful when using rainbow deploys because you have multiple namespaces in k8s per environment.
  required: false
states:
  type: Array
  default: [running, terminated]
  description: |
    These are pod states that the `pod_checker` task will check each pod for.
  required: false
exclude:
  type: Array
  description: |
    Services to exclude when running the pod checker
  required: false
include:
  type: Array
  description: |
    Services to include when running the pod checker
  required: false
Examples
workflows:
- name: setup
  parallelize:
  - step: frontend
    tasks: [services.frontend]
    wait_for_finish: false
  - step: migrate
    tasks: [jobs.migrate]
  - step: backend
    tasks: [services.backend]
  - step: check-backend-pods
    tasks: [release.pod_checker]
    metadata:
    - task_name: release.pod_checker
      wait: 300
      states: ['running']
      include: [services.backend]
  - step: send-backend-pod-setup
    tasks: [release.pod_exec]
    metadata:
    - task_name: release.pod_exec
      wait: 300
      command: ["ruby ./bin/backend_pod_setup.rb"]
      for_service: backend
  - step: post-setup-deployment
    tasks: [jobs.setup]
- name: patch
  parallelize:
  - step: frontend
    tasks: [services.frontend]
    wait_for_finish: false
  - step: migrate
    tasks: [jobs.migrate]
  - step: backend
    tasks: [services.backend]
Pod Checker and Pod Exec Combined Example: We are checking the backend pods to make sure they are in the running state before running ruby ./bin/backend_pod_setup.rb on each pod.