Schema Definition

Application Template Schema

This configuration template is the basis for all environments you will create for this application. Each of the sections and directives in this file helps create the configuration for your specific environment. The environment_templates section describes the differences between your ephemeral and permanent templates; you select one of these when creating an environment. Each section and directive is described in detail below.
---
app:
  type: String
  required: true
  description: Name of your app, can't be changed.
auto_deploy:
  type: Boolean
  required: true
  description: If true, environments will auto deploy on a push
context:
  type: String
  required: true
  description: Cluster context
domain:
  type: String
  required: true
  description: Used to create hostnames
mode:
  type: String
  required: false
  description: Deprecated
parallelize_app_imports:
  type: Boolean
  required: false
  description: Parallelize the deployment of all the apps
repo_name:
  type: String
  required: true
  description: Name of the repository, can't be changed.
tracking_branch:
  type: String
  required: false
  description: Default branch for environments to track
tracking_tag:
  type: String
  required: false
  description: Default tag for environments to track
app_imports:
  type: Array
  required: false
  description: Connect multiple apps together
cron_jobs:
  type: Array
  required: false
  description: Cron Jobs
environment_templates:
  type: Array
  required: true
  description: Templates for creating environments
hostnames:
  type: Array
  required: false
  description: Hostnames for services
ingress:
  type: Hash
  required: false
  description: Ingress
jobs:
  type: Array
  required: false
  description: Arbitrary jobs, scripts to run.
node_selector:
  type: Array
  required: false
  description: Node Selector
resources:
  type: Hash
  required: true
  description: Default cpu, memory, storage and replicas.
routes:
  type: Array
  required: false
  description: For defining multiple entry points to a service and routing rewrites and auth
rules:
  type: Array
  required: false
  description: For defining multiple entry points to a service
services:
  type: Array
  required: false
  description: List of services needed for your application
shared_volumes:
  type: Array
  required: false
  description: Volumes that are accessed by multiple services
sidecars:
  type: Array
  required: false
  description: Reusable sidecar definitions
workflows:
  type: Array
  required: true
  description: Definitions for deploying config and code updates

auto_deploy

If true, environments will deploy whenever you push to the corresponding repo and tracking branch.

context

This value is used by your application to deploy to a specific cluster. If you have your own EKS cluster through Release, you can change this value to match that cluster; otherwise, use the generated value.

domain

The domain name where your applications will be hosted. These domains must be AWS Route 53 hosted domains. Release supports first and second level domains (e.g. domain.com or release.domain.com).

mode

Mode is a configuration directive that you can use if useful (it is set as an environment variable in your containers), e.g. 'development', 'production', or 'test'.

parallelize_app_imports

If there are no dependencies on the order in which the apps deploy, use parallelize_app_imports to deploy all the apps at the same time.

tracking_branch

By default this will be the default branch of your repository, but it can be changed to any branch you would like your environments to track.

tracking_tag

A specific git tag that you want your environments to track. You must unset tracking_branch if you use tracking_tag.
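As a minimal sketch (the tag value is illustrative), tracking a tag instead of a branch looks like this at the top level of the template:
---
# tracking_branch must be removed or unset when tracking_tag is used
tracking_tag: "v1.2.3"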

app_imports

App Imports are a way to connect multiple apps together. When you create an environment on one application, the apps that you import will also get environments created in the same namespace. Click here for more info.

cron_jobs

Cron Jobs are Jobs that run on a schedule. Cron jobs allow you to periodically execute commands within a namespace. They can be used for warming up caches, running database maintenance, etc. Click here for more info.

environment_templates

These templates are used when creating an environment. They allow you to override or change any of the defaults in this file for particular types of environments: ephemeral or permanent. Click here for more info.

hostnames

Hostnames are defined as an array of key/value pairs, under the hostnames directive. The key is a service name, and the value is the hostname. These can be hardcoded hostnames or utilize variable substitution. They are auto-generated for any service with a node_port or static build.

ingress

Ingress settings that can control the behavior and functionality of the NGINX ingress controller to access HTTP services in your cluster

jobs

Jobs are like services except they run to completion. Examples include database migrations, asset compilation, etc. They inherit the image from a service, and run a command that ultimately terminates. Click here for more info.

node_selector

Node Selectors allow you to assign workloads to particular nodes based on common labels such as kubernetes.io/os=windows and kubernetes.io/arch=arm64. Click here for more information.

resources

Default resources for all of your services. The structure and values are based on https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/. Click here for more info. For examples of why and how to override these defaults check out Managing Service Resources.

routes

Routes are an easy way to define multiple endpoints per service. Routes allow for edge routing rewrites and authentication, and provide full support for nginx ingress rules.

rules

Rules are based on https://kubernetes.github.io/ingress-nginx/user-guide/basic-usage/ and are an easy way to define multiple endpoints per service. They consist of three parts: service name, hostnames, and a path. Release will take this configuration and create an Nginx Ingress deployment to handle your routing automatically.

services

These services define the most important parts of your application. They can represent services Release builds from your repositories, off-the-shelf containers (postgres, redis, elasticsearch, etc.), external services you need to connect to, or even services from other applications that are also needed in this application. Click here for more info.

shared_volumes

Shared Volumes create a PersistentVolumeClaim that is written to and read from by multiple services.
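As a hypothetical sketch only (the keys shown here — name, size, type, and the per-service mount_path wiring — are assumptions for illustration, not a schema confirmed by this document), a volume shared by two services might look like:
---
shared_volumes:
- name: uploads        # hypothetical key names for illustration
  size: 5Gi
  type: aws-efs
services:
- name: backend
  volumes:
  - name: uploads
    mount_path: /app/uploads
- name: worker
  volumes:
  - name: uploads
    mount_path: /app/uploads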

sidecars

Top level sidecar definitions allow you to create reusable containers that can be applied to several services defined within your application. These are useful for log aggregation. Click here for more info.

workflows

Workflows are an ordered list of what must be done to deploy new configuration or code to your environments. They are a combination of services and jobs (if you have them). Click here for more info.
There are two kinds of workflows Release supports: setup and patch. When a new environment is created, setup is run; when code is pushed, a patch is run against that environment.
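The authoritative workflows schema lives on the linked page; as a hedged sketch only (the parallelize/step/tasks structure and the services.*/jobs.* task references are assumptions modeled on generated templates, not defined in this section), setup and patch workflows might look like:
---
workflows:
- name: setup
  parallelize:
  - step: data-stores
    tasks:
    - services.postgres
    - services.redis
  - step: app
    tasks:
    - jobs.migrate
    - services.backend
- name: patch
  parallelize:
  - step: app
    tasks:
    - jobs.migrate
    - services.backend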

Hostnames or Rules

Hostnames and rules can both be used to define entry points to your services, but they cannot be used together at the same level in the config. In other words, you can't have default hostnames and default rules, but you could have default hostnames and then use rules inside the environment_templates section of the file.
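For example, a hedged sketch of that pattern (the template entry structure is described under Environment Templates below; the service name is illustrative): default hostnames at the top level, with rules inside one template.
---
hostnames:
- frontend: frontend-${env_id}-${domain}
environment_templates:
- name: ephemeral
  rules:
  - service: frontend
    hostnames:
    - frontend-${env_id}.${domain}
    path: "/"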

Hostnames

Hostnames are defined as an array of key/value pairs, under the hostnames directive. The key is a service name, and the value is the hostname. These can be hardcoded hostnames or utilize variable substitution. They are auto-generated for any service with a node_port or static build.
---
hostnames:
- frontend: frontend-${env_id}-${domain}
- docs: docs-${env_id}-${domain}
- backend: backend-${env_id}-${domain}
Hostnames by default are generated using two variables env_id and domain. env_id is a randomly generated string for ephemeral environments or the name of the environment for permanent ones. Using some amount of random values allows Release to bring up any number of ephemeral environments on the same domain without conflicts. Domain is taken directly from your configuration file.

Rules

Rules are based on https://kubernetes.github.io/ingress-nginx/user-guide/basic-usage/ and are an easy way to define multiple endpoints per service. They consist of three parts: service name, hostnames, and a path. Release will take this configuration and create an Nginx Ingress deployment to handle your routing automatically.
service:
  type: String
  required: true
  description: Service name from your config
hostnames:
  type: Array
  required: true
  description: Same as hostnames above
path:
  type: String
  required: true
  description: Entry point for hostnames
Rules Schema
rules:
- service: backend
  hostnames:
  - backend-${env_id}.${domain}
  path: "/auth/"
- service: frontend
  hostnames:
  - frontend-${env_id}.${domain}
  path: "/graphql"
Rules Example

App Imports

App Imports are optional and not present in the Application Template by default.
---
branch:
  type: String
  required: false
  description: Setting the branch pins all created Environments to that branch
name:
  type: String
  required: true
  description: Name of the App you want to import. The imported App must exist in your account.
exclude_services:
  type: Array
  required: false
  description: If you have a service in your imported app that would be a repeat, say both apps have Redis, you can exclude it
app_imports:
- name: backend
  branch: new-branch
  exclude_services:
  - name: redis
Example: App Imports excluding a service
parallelize_app_imports: true
app_imports:
- name: backend
- name: upload-service
- name: worker-service
- name: authentication-service
Example: App Imports with many apps utilizing parallel deploys

Exclude Services

Allows the removal of duplicate services during App Imports
---
name:
  type: String
  required: true
  description: Name of service you want to exclude

Cron Jobs

Cron Job containers allow you to define additional workloads that run on a schedule. Cron Jobs can be used for many different tasks like database maintenance, reporting, warming caches by accessing other containers in the namespace, etc.
---
from_services:
  type: String
  required: false
  description: Service to use for job execution
has_repo:
  type: Boolean
  required: false
  description: Repository is local
image:
  type: String
  required: false
  description: Docker image to execute
name:
  type: String
  required: true
  description: A Name
schedule:
  type: String
  required: true
  description: Cron expression
args:
  type: Array
  required: false
  description: Arguments
command:
  type: Array
  required: false
  description: Entrypoint
Each cron job entry has a mutually exclusive requirement where either image or from_services must be present.
cron_jobs:
- name: poll-frontend
  schedule: "0 * * * *"
  image: busybox
  command:
  - sh
  - "-c"
  - "curl http://frontend:8080"
- name: redis-test
  schedule: "*/15 * * * *"
  from_services: redis
  command:
  - sh
  - "-c"
  - "redis-cli -h redis -p 6390 ping"
Example cron job definitions to poll the frontend service and ping Redis

from_services

A reference to the service name to use as the basis for executing the cron job. Parameters from the service will be copied into creating this cron job.

has_repo

Whether the job should use an image built by Release from your repository.

image

A reference to the Docker image used to execute the job; use this if from_services is not a good fit.

name

What's in a name? That which we call a rose/By any other name would smell as sweet.

schedule

A string representing the schedule when a cron job will execute in the form of minute hour dayofmonth month dayofweek
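For example, using the standard cron field order (the job name and service reference are illustrative):
cron_jobs:
- name: nightly-report
  schedule: "30 2 * * 1"  # minute=30, hour=2, any day of month, any month, day of week=1 (02:30 every Monday)
  from_services: backend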

args

An array of arguments to be passed to the entrypoint of the container.

command

An array of arguments to be passed to override the entrypoint of the container.

Environment Templates

There are two types of allowed and required templates: ephemeral and permanent. When creating a new environment, either manually or through a pull request, one of these templates will be used to construct the configuration for that particular environment. If the template is empty you get the defaults contained in your Application Template, but these templates allow you to override any of the defaults.
The schema for these is a duplicate of the entire default configuration, as it allows you to override anything contained in this file for that particular template. As such, we won't detail the schema twice, but there are examples contained here showing how to override default configuration in your templates.
Instant Datasets are unique in that they are not allowed at the root of the default config and can only be added under environment_templates. Since Instant Datasets allow you to use instances of RDS databases (often snapshots of production, but they could be snapshots of anything), having this be the default could result in unwanted behavior for your permanent environments.
Release requires you to be explicit about which templates should use Instant Datasets by default. Once you have created an environment, you may add Instant Datasets to it through the Environment Configuration file if you don't want all environments of a particular type to use datasets.
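As a hedged sketch (assuming each template entry is keyed by name, matching the generated ephemeral and permanent templates; the overrides shown are illustrative), overriding defaults per template might look like:
---
environment_templates:
- name: ephemeral
  auto_deploy: true
  resources:
    replicas: 1
- name: permanent
  tracking_branch: main
  resources:
    replicas: 3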

Ingresses

Ingress settings that can control the behavior and functionality of the NGINX ingress controller to access HTTP services in your cluster
---
affinity:
  type: String
  required: false
  description: Nginx affinity
affinity_mode:
  type: String
  required: false
  description: The mode for affinity stickiness
proxy_body_size:
  type: String
  required: false
  description: Proxy Body Size maximum
proxy_buffer_size:
  type: String
  required: false
  description: Proxy Initial Buffer Size
proxy_buffering:
  type: Boolean
  required: false
  description: Enable or Disable Proxy Buffering
proxy_buffers_number:
  type: Integer
  required: false
  description: Proxy Initial Buffer Count
proxy_max_temp_file_size:
  type: String
  required: false
  description: Proxy Max Temp File Size
proxy_read_timeout:
  type: String
  required: false
  description: Proxy Read Timeout
proxy_send_timeout:
  type: String
  required: false
  description: Proxy Send Timeout
session_cookie_change_on_failure:
  type: Boolean
  required: false
  description: Session Cookie Change on Failure
session_cookie_max_age:
  type: Integer
  required: false
  description: Session Cookie Maximum Age in Seconds
session_cookie_name:
  type: String
  required: false
  description: Session Cookie Name
session_cookie_path:
  type: String
  required: false
  description: Session Cookie Path
wafv2_acl_arn:
  type: String
  required: false
  description: Web Application Firewall Version 2 Access Control List Amazon Web Services Resource Name
ingress:
  proxy_body_size: 30m
  proxy_buffer_size: 64k
  proxy_buffering: true
  proxy_buffers_number: 4
  proxy_max_temp_file_size: 1024m
  proxy_read_timeout: "180"
  proxy_send_timeout: "180"
Example proxy buffer settings for large web requests
ingress:
  affinity: "cookie"
  affinity_mode: "persistent"
  session_cookie_name: "my_Cookie_name1"
  session_cookie_path: "/"
  session_cookie_max_age: 86440
  session_cookie_change_on_failure: true
Example session stickiness settings using a cookie
ingress:
  wafv2_acl_arn: arn:aws:wafv2:us-west-2:xxxxx:regional/webacl/xxxxxxx/3ab78708-85b0-49d3-b4e1-7a9615a6613b
Example settings for applying a WAF ruleset to the ALB (AWS-only)

Ingress settings schema

affinity

Type of the affinity; set this to cookie to enable session affinity. See https://kubernetes.github.io/ingress-nginx/examples/affinity/cookie/

affinity_mode

The affinity mode defines how sticky a session is. Use balanced to redistribute some sessions when scaling pods or persistent for maximum stickiness.

proxy_body_size

Sets the maximum allowed size of the client request body.

proxy_buffer_size

Sets the size of the buffer used for reading the first part of the response received from the proxied server. This part usually contains a small response header.

proxy_buffering

Enables or disables buffering of responses from the proxied server.

proxy_buffers_number

Sets the number of the buffers used for reading the first part of the response received from the proxied server.

proxy_max_temp_file_size

When buffering of responses from the proxied server is enabled, and the whole response does not fit into the buffers set by the proxy_buffer_size and proxy_buffers directives, a part of the response can be saved to a temporary file. This directive sets the maximum size of the temporary file.

proxy_read_timeout

Sets the timeout in seconds for reading a response from the proxied server. The timeout is set only between two successive read operations, not for the transmission of the whole response.

proxy_send_timeout

Sets the timeout in seconds for transmitting a request to the proxied server. The timeout is set only between two successive write operations, not for the transmission of the whole request.

session_cookie_change_on_failure

When set to false, nginx ingress will send requests to the upstream pointed to by the sticky cookie even if the previous attempt failed. When set to true and the previous attempt failed, the sticky cookie will be changed to point to another upstream.

session_cookie_max_age

Time in seconds until the cookie expires; corresponds to the Max-Age cookie directive.

session_cookie_name

Name of the cookie that will be created (defaults to INGRESSCOOKIE).

session_cookie_path

Path that will be set on the cookie (required because Release Ingress paths use regular expressions).

wafv2_acl_arn

The ARN for an existing WAF ACL to add to the load balancer. AWS-only, and must be created separately.

Jobs

Jobs allow you to run arbitrary scripts during a deployment. This allows you to do anything before or after a service is deployed that is needed to set up your environment. A common example is running database migrations before your backend comes up, but after you have deployed your database. Another good example might be running asset compilation. These tasks and any others can be accomplished using jobs.
---
completed_timeout:
  type: Integer
  required: false
  description: How long Release will wait until the job is considered timed out and raises an error
from_services:
  type: String
  required: false
  description: Name of service to inherit image from
halt_on_error:
  type: Boolean
  required: false
  description: Should the deployment stop running if the job raises an error
image:
  type: String
  required: false
  description: The image to use for the job
name:
  type: String
  required: true
  description: Unique name to use when referencing the job
service_account_name:
  type: String
  required: false
  description: Creates a ServiceAccount object in Kubernetes
args:
  type: Array
  required: false
  description: Arguments that are passed to command on container start
command:
  type: Array
  required: false
  description: Command to run on container start. Overrides what is in the Dockerfile
cpu:
  type: Hash
  required: false
  description: Same as resources, but for this job only. If not specified the default resources will be used.
memory:
  type: Hash
  required: false
  description: Same as resources, but for this job only. If not specified the default resources will be used.
nvidia_com_gpu:
  type: Hash
  required: false
  description: Specify the limits value for gpu count on this job. Do not specify `requests`. Must be an integer and cannot be overprovisioned or shared with other containers.
Each job entry has a mutually exclusive requirement where either image or from_services must be present.
jobs:
- name: migrate
  command:
  - "./run-migrations.sh"
  from_services: backend
- name: setup
  command:
  - "./run-setup.sh"
  from_services: backend
  cpu:
    limits: 100m
    requests: 100m
  memory:
    limits: 1Gi
    requests: 1Gi
- name: mljob
  command:
  - "./run-ml-batch.sh"
  from_services: backend
  node_selector:
    key: "nvidia.com/gpu"
    value: "true"
  nvidia_com_gpu:
    limits: 1
Jobs Example

Resources

Resources are service level defaults. They represent the resources allocated for each service. Storage is different in that not every container needs storage, so while you can specify defaults, not every container will use storage.
Requests define resource guarantees. Containers are guaranteed the requested amount of a resource. If not enough resources are available, the container will not start.
Limits, on the other hand, make sure a container never goes above a certain amount of a resource. The container is never allowed to exceed the limit.
memory: Limits and requests for memory are measured in bytes. You can express memory as a plain integer or as a fixed-point integer using one of these suffixes: E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. For example, the following represent roughly the same value: 128974848, 129e6, 129M, 123Mi
nvidia_com_gpu: Limits for Nvidia GPU units. (Do not specify requests:.) GPU limits can only be integer values and cannot be shared concurrently with other containers. You must also specify a node_selector to schedule a job or service on the correct worker node(s).
cpu: Limits and requests for cpu are represented in millicpu, written as '{integer}m', e.g. 100m (guarantees that the service will receive 1/10 of 1000m, or 1/10 of 1 cpu). You can also represent cpu resources as fractions of integers, e.g. 0.1 is equivalent to 100m. Precision finer than '1m' is not allowed.
replicas: The number of containers that will run during normal operation. This field is an integer, e.g. 5, which would run 5 of each service.
storage: Consists of two values, size and type. Size accepts the same values as memory, and type is the type of storage: aws-efs, empty_dir, or host_path.
---
replicas:
  type: Integer
  required: true
  description: Number of containers, per service
  default: 1
cpu:
  type: Hash
  required: true
  description: Limits and requests for cpus
  default: '{"limits"=>"1000m", "requests"=>"100m"}'
memory:
  type: Hash
  required: true
  description: Limits and requests for memory
  default: '{"limits"=>"1Gi", "requests"=>"100Mi"}'
nvidia_com_gpu:
  type: Hash
  required: false
  description: Limits for nvidia.com/gpu tagged nodes
storage:
  type: Hash
  required: false
  description: Size and type definition
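Putting these directives together, an illustrative resources block (the values are examples, not requirements; the size/type nesting under storage follows the prose above) might look like:
---
resources:
  cpu:
    limits: 1000m
    requests: 100m
  memory:
    limits: 1Gi
    requests: 100Mi
  replicas: 1
  storage:
    size: 8Gi
    type: aws-efs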

Services

Services contain descriptions of each of your containers. They include many fields from your docker-compose file and fields auto-generated by Release upon application creation. For each service you can:
  • Define static Javascript builds
  • Open and map any number of ports
  • Create mounts and volumes
  • Use ConfigMaps to modify config at run-time for off-the-shelf containers
  • Override default resources
  • Pin particular services to particular images
  • Create liveness and readiness probes and set other k8s config params (e.g. max_surge)
  • Create stateful services
  • Create external DNS entries for cross-namespace services
---
build_base:
  type: String
  required: false
  description: Path to the Javascript application if it does not reside at the root
build_command:
  type: String
  required: false
  description: Command to create the static Javascript build.
build_destination_directory:
  type: String
  required: false
  description: Directory to copy the generated output to
build_output_directory:
  type: String
  required: false
  description: Directory where the generated output is located
build_package_install_command:
  type: String
  required: false
  description: Command to install packages such as `npm install` or `yarn`. Defaults to `yarn`
completed_timeout:
  type: Integer
  required: false
  description: Time to wait for container to reach completed state
has_repo:
  type: Boolean
  required: false
  description: If we should reference an image built by Release
image:
  type: String
  required: false
  description: Name of or path to image
max_surge:
  type: String
  required: false
  description: K8s max_surge value
name:
  type: String
  required: true
  description: Name of your service
pinned:
  type: Boolean
  required: false
  description: Pin service to particular image
ready_timeout:
  type: Integer
  required: false
  description: Time to wait for container to reach ready state
replicas:
  type: Integer
  required: false
  description: Same as resources, but for this service only
service_account_name:
  type: String
  required: false
  description: Creates a ServiceAccount object in Kubernetes
static:
  type: Boolean
  required: false
  description: When true, Release will create a static Javascript build. Review the build_* attributes above
args:
  type: Array
  required: false
  description: Arguments that are passed to command on container start
build:
  type: Hash
  required: false
  description: Instructions for Release to build an image.
command:
  type: Array
  required: false
  description: Command to run on container start. Overrides what is in the Dockerfile
cpu:
  type: Hash
  required: false
  description: Same as resources, but for this service only
depends_on:
  type: Array
  required: false
  description: List of services that must be deployed before this one
init:
  type: Array
  required: false
  description: List of containers to be invoked before the primary service
liveness_probe:
  type: Hash
  required: false
  description: Test of proper container operation
memory:
  type: Hash
  required: false
  description: Same as resources, but for this service only
node_selector:
  type: Array
  required: false
  description: Node Selector
nvidia_com_gpu:
  type: Hash
  required: false
  description: Specify the limits value for GPU count on this service.
ports:
  type: Array
  required: false
  description: Set the ports which will be exposed for the service
readiness_probe:
  type: Hash
  required: false
  description: Test for proper container start-up
sidecars:
  type: Array
  required: false
  description: List of containers run alongside the primary service
storage:
  type: Hash
  required: false
  description: Same as resources, but for this service only
volumes:
  type: Array
  required: false
  description: List of volumes and mount points

node_selector

Node Selectors allow you to assign workloads to particular nodes based on common labels such as kubernetes.io/os=windows and kubernetes.io/arch=arm64. Click here for more information.

nvidia_com_gpu

limits: must be an integer value. Do not specify requests:. GPU processors cannot be overprovisioned or shared with other containers.

Stateful Sets and Deployments

stateful provides a StatefulSet which creates guarantees about the naming, ordering and uniqueness of a service.
  • Stable, unique network identifiers.
  • Stable, persistent storage.
  • Ordered, graceful deployment and scaling.
  • Ordered, automated rolling updates.
If an application doesn’t require any stable identifiers or ordered deployment, deletion, or scaling, you should either set stateful to false or remove it.
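A hedged sketch (assuming stateful is a boolean directive on the service, per the description above; the service and storage values are illustrative):
services:
- name: postgres
  image: postgres:15
  stateful: true   # creates a StatefulSet instead of a Deployment
  storage:
    size: 10Gi
    type: aws-efs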

Build

Instructions for Release to build an image. This needs to be combined with has_repo: true.
---
context:
  type: String
  required: false
  description: Path to the files if they do not reside at the root. Defaults to '.'
dockerfile:
  type: String
  required: false
  description: Name of the Dockerfile to use. Defaults to 'Dockerfile'
name:
  type: String
  required: false
  description: Name of build
repo_branch:
  type: String
  required: false
  description: Combined with `repo_url`, use to target a specific branch
repo_commit:
  type: String
  required: false
  description: Combined with `repo_url`, use to target a specific commit
repo_url:
  type: String
  required: false
  description: If you want to create a Build from a different repository
target:
  type: String
  required: false
  description: If a specific build stage should be targeted
args:
  type: Array
  required: false
  description: Args passed into the build command
image_scan:
  type: Hash
  required: false
  description: Release can scan your built images for known security vulnerabilities

Build Image Scan

Release allows for scanning your images for vulnerabilities. If any are found, the build is marked as an error. You are able to designate what level of severity will cause an error and also whitelist specific vulnerabilities to ignore.
---
severity:
  type: String
  required: true
  description: Level of severity that will cause an error
whitelist:
  type: Array
  required: false
  description: List of vulnerabilities to ignore
build:
  context: .
  image_scan:
    severity: high
    whitelist:
    - name: CVE-123
      description: "Release created this CVE"
      reason: "This CVE doesn't exist!"
Example image scan that will fail the build if any CVEs with a severity level of high are found. The scan also skips over CVE-123 because it is known that Release created that fake CVE for this documentation.

Service Resources

Resources can be overridden on a service-by-service basis. The resources key is removed and each directive cpu, memory, storage, and replicas can be defined individually. If they are not specified, the defaults will be used.
cpu, memory, nvidia_com_gpu, and storage define resource guarantees. The service definition for cpu, memory, nvidia_com_gpu, and storage overrides the values in resource_defaults. In the case of nvidia_com_gpu, Kubernetes recommends setting limits: but not requests:, unless they are the same. You can use the service definition to more finely tune the amount of cpu, memory, nvidia_com_gpu, and storage for each service.
replicas allows you to specify a different number of pods to deploy for your particular service.
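For example, an illustrative per-service override (the values are examples; unspecified directives fall back to the defaults):
services:
- name: backend
  replicas: 3
  cpu:
    limits: 2000m
    requests: 500m
  memory:
    limits: 2Gi
    requests: 512Mi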

Init Containers

Init containers allow you to define additional containers that share volume mounts from the primary service. These can be used to perform setup tasks that are required for the main service to run. Init containers should run to completion with an exit code of zero. Non-zero exit codes will result in a CrashLoopBackoff.
---
has_repo:
  type: Boolean
  required: false
  description: If we should reference an image built by Release
image:
  type: String
  required: false
  description: Name of or path to image
name:
  type: String
  required: true
  description: Name of the init container
args:
  type: Array
  required: false
  description: Arguments that are passed to command on container start
command:
  type: Array
  required: false
  description: Command to run on container start. Overrides what is in the Dockerfile
volumes:
  type: Array
  required: false
  description: List of volumes and mount points
services:
- name: backend
  image: fred/spaceplace/backend
  init:
  - name: sync-seed-data
    command:
    - rsync
    - "-avzh"
    - [email protected]:/home/fred/seed-data
    - /app/seed-data
  - name: build-static-assets
    command:
    - rake assets:precompile
Example init container which inherits image from the main service
You can also define init containers using off-the-shelf images like busybox. This can be useful for performing additional operations which don't require the main service image, or which require binaries not included in the primary service.
- name: backend
  image: fred/spaceplace/backend
  init:
  - name: wait-for-my-other-service
    image: busybox
    command:
    - sh
    - '-c'
    - while ! httping -qc1 http://myhost:myport ; do sleep 1 ; done
Example init container using busybox to wait for another service to start up

volumes

See Volumes

Readiness and Liveness Probes

liveness_probe and readiness_probe are used to check the health of your service. When your code is deployed via a rolling deployment, the readiness_probe will determine if the service is ready to serve traffic before adding it to the load balancer. Release will convert the docker-compose healthcheck to a liveness_probe and readiness_probe. Both liveness_probe and readiness_probe allow for more advanced configuration beyond the docker-compose healthcheck definition.
---
services:
# HTTP health check with custom header
- name: frontend
  image: davidgiffin/spacedust/frontend
  command:
  - "./start.sh"
  completed_timeout: 240
  ready_timeout: 1200
  registry: local
  has_repo: true
  ports:
  - type: node_port
    target_port: '4000'
    port: '4000'
  liveness_probe:
    exec:
      command:
      - curl
      - "-Lf"
      - http://localhost:4000
    failure_threshold: 30
    period_seconds: 30
    timeout_seconds: 10
  readiness_probe:
    exec:
      command:
      - curl
      - "-Lf"
      - http://localhost:4000
    failure_threshold: 30
    period_seconds: 30
    timeout_seconds: 10
  cpu:
    limits: 2000m
    requests: 100m
  memory:
    limits: 4Gi
    requests: 100Mi
  static: true
  build_command: GENERATE_SOURCEMAP=false yarn build
  build_base: frontend
  build_directory: build/
- name: web
  readiness_probe:
    http_get:
      path: /healthz
      port: 8080
      http_headers:
      - name: Custom-Header
        value: Awesome
    initial_delay_seconds: 5
    period_seconds: 10
# TCP health check
- name: redis
  readiness_probe:
    tcp_socket:
      port: 6379
    initial_delay_seconds: 10
    period_seconds: 30
# Command / shell health check
- name: worker
  readiness_probe:
    exec:
      command:
      - cat
      - /tmp/healthy
    initial_delay_seconds: 5
    period_seconds: 5
In this example we show the various types of probes that you can define for services along with overrides for resources and timeouts, while also defining static builds.

Service Node Selectors

Node Selector allows pods to choose specific nodes to run on. The most common use case is selecting nodes with a different OS (like Windows) or a different architecture (like ARM64, GPUs), but it can also select specific cloud provider settings such as an AWS availability zone (like us-east-1c).
---
key:
  type: String
  required: true
  description: Label Key
value:
  type: String
  required: true
  description: Label value
services:
- name: frontend
  image: mcr.microsoft.com/windows/servercore:ltsc2019
  node_selector:
  - key: kubernetes.io/os
    value: windows

# Top level default
node_selector:
- key: "topology.kubernetes.io/zone"
  value: "us-east-1c"