
Schema Definition

Application Template Schema

This configuration template is the basis for all environments you will create for this application. Each of the sections and directives in this file helps build the configuration for your specific environment. The environment_templates section describes the differences between your ephemeral and permanent templates; you select one of these when creating an environment. Each section and directive is described in detail below.
---
app:
type: String
required: true
description: Name of your app, can't be changed.
auto_deploy:
type: Boolean
required: true
description: If true, environments will auto deploy on a push
context:
type: String
required: true
description: Cluster context
domain:
type: String
required: true
description: Used to create hostnames
mode:
type: String
required: false
description: Deprecated
parallelize_app_imports:
type: Boolean
required: false
description: Parallelize the deployment of all the apps
repo_name:
type: String
required: true
description: Name of the repository, can't be changed.
tracking_branch:
type: String
required: false
description: Default branch for environments to track
tracking_tag:
type: String
required: false
description: Default tag for environments to track
app_imports:
type: Array
required: false
description: Connect multiple apps together
cron_jobs:
type: Array
required: false
description: Cron Jobs
development_environment:
type: Hash
required: false
description: Set of services configured for remote development
environment_templates:
type: Array
required: true
description: Templates for creating environments
hostnames:
type: Array
required: false
description: Hostnames for services
ingress:
type: Hash
required: false
description: Ingress
jobs:
type: Array
required: false
description: Arbitrary jobs, scripts to run.
node_selector:
type: Array
required: false
description: Node Selector
resources:
type: Hash
required: true
description: Default cpu, memory, storage and replicas.
routes:
type: Array
required: false
description: For defining multiple entry points to a service and routing rewrites
and auth
rules:
type: Array
required: false
description: For defining multiple entry points to a service
service_accounts:
type: Array
required: false
description: Service Accounts
services:
type: Array
required: false
description: List of services needed for your application
shared_volumes:
type: Array
required: false
description: Volumes that are accessed by multiple services
sidecars:
type: Array
required: false
description: Reusable sidecar definitions
workflows:
type: Array
required: true
description: Definitions for deploying config and code updates

auto_deploy

If true, environments will deploy whenever you push to the corresponding repo and tracking branch.

context

This value is used by your application to deploy to a specific cluster. If you have your own EKS cluster through ReleaseHub, you can change this value to match that cluster; otherwise, use the generated value.

domain

The domain name where your applications will be hosted. These domains must be AWS Route 53 hosted domains. ReleaseHub supports first- and second-level domains (e.g., domain.com or release.domain.com).

mode

Mode is a configuration directive that you can use if useful (it is set as an environment variable in your containers), e.g. 'development', 'production', or 'test'. Note that the schema above marks mode as deprecated.

parallelize_app_imports

If the apps have no dependencies on the order in which they deploy, use parallelize_app_imports to deploy all the apps at the same time.

tracking_branch

By default this will be the default branch of your repository, but it can be changed to any branch you would like to track with your environments.

tracking_tag

A specific git tag that you want your environments to track. You must unset tracking_branch if you use tracking_tag.
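For example, a minimal sketch (the tag name is illustrative):

```yaml
# Track a release tag instead of a branch; tracking_branch is omitted/unset.
tracking_tag: "v1.2.3"
```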

app_imports

App Imports are a way to connect multiple apps together. When you create an environment on one application, the apps that you import will also get environments created in the same namespace. Click here for more info.

cron_jobs

Cron Jobs are Jobs that run on a schedule. Cron jobs allow you to periodically execute commands within a namespace. They can be used for warming up caches, running database maintenance, etc. Click here for more info.

development_environment

This allows you to connect from a local machine to the remote environment and sync files and folders. Click here for more info.

environment_templates

These templates are used when creating an environment. They allow you to override or change any of the defaults in this file for a particular type of environment: ephemeral or permanent. Click here for more info.

hostnames

Hostnames are defined as an array of key/value pairs, under the hostnames directive. The key is a service name, and the value is the hostname. These can be hardcoded hostnames or utilize variable substitution. They are auto-generated for any service with a node_port or static build.

ingress

Ingress settings that can control the behavior and functionality of the NGINX ingress controller to access HTTP services in your cluster

jobs

Jobs are like services except they run to completion. Examples include database migrations, asset compilation, etc. They inherit the image from a service and run a command that ultimately terminates. Click here for more info.

node_selector

Node Selectors allow you to assign workloads to particular nodes based on common labels such as kubernetes.io/os=windows and kubernetes.io/arch=arm64. Click here for more information.
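As a sketch, using the standard Kubernetes labels mentioned above (the key/value entry shape follows the node_selector usage shown in the jobs example later in this document):

```yaml
node_selector:
- key: "kubernetes.io/arch"
  value: "arm64"
```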

resources

Default resources for all of your services. The structure and values are based on https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/. Click here for more info. For examples of why and how to override these defaults check out Managing Service Resources.

routes

Routes provide an easy way to define multiple endpoints per service. Routes allow for edge routing rewrites and authentication, and provide full support for NGINX ingress rules.

rules

Rules are based on https://kubernetes.github.io/ingress-nginx/user-guide/basic-usage/ and provide an easy way to define multiple endpoints per service. They consist of three parts: service name, hostnames, and a path. ReleaseHub takes this configuration and creates an NGINX Ingress deployment to handle your routing automatically.

service_accounts

Allow you to define service accounts that can be used to control the cloud permissions assumed by your workloads (services, jobs, cron jobs, etc.)

services

These services define the most important parts of your application. They can represent services ReleaseHub builds from your repositories, off-the-shelf containers (postgres, redis, elasticsearch, etc.), external services you need to connect to, or even services from other applications that are also needed in this application. Click here for more info.

shared_volumes

Shared Volumes create a PersistentVolumeClaim that is written to and read from by multiple services.
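A hypothetical sketch; the field names (name, size, type) are assumptions modeled on the size and type fields of the storage directive under resources:

```yaml
shared_volumes:
- name: uploads
  size: 10Gi
  type: aws-efs
```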

sidecars

Top-level sidecar definitions allow you to create reusable containers that can be applied to several services defined within your application. These are useful for log aggregation. Click here for more info.

workflows

Workflows are an ordered list of what must be done to deploy new configuration or code to your environments. They are a combination of services and jobs (if you have them). Click here for more info.
There are two kinds of workflows ReleaseHub supports: setup and patch. When a new environment is created, setup is run; when code is pushed, patch is run against that environment.
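The setup/patch split can be sketched as follows. This shape (parallelize, step, tasks, and the services./jobs. prefixes) is an assumption for illustration only, not confirmed by this page; consult the workflows documentation for the exact schema.

```yaml
workflows:
- name: setup
  parallelize:
  - step: datastores
    tasks:
    - services.postgres
  - step: migrate
    tasks:
    - jobs.migrate
  - step: app
    tasks:
    - services.backend
- name: patch
  parallelize:
  - step: migrate
    tasks:
    - jobs.migrate
  - step: app
    tasks:
    - services.backend
```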

Hostnames or Rules

Hostnames and rules can both be used to define entry points to your services, but they cannot be used together at the same level in the config. In other words, you can't have default hostnames and default rules, but you could have default hostnames and then use rules inside the environment_templates section of the file.

Hostnames

Hostnames are defined as an array of key/value pairs, under the hostnames directive. The key is a service name, and the value is the hostname. These can be hardcoded hostnames or utilize variable substitution. They are auto-generated for any service with a node_port or static build.
---
hostnames:
- frontend: frontend-${env_id}-${domain}
- docs: docs-${env_id}-${domain}
- backend: backend-${env_id}-${domain}
Hostnames are generated by default using two variables, env_id and domain. env_id is a randomly generated string for ephemeral environments, or the name of the environment for permanent ones. Using a random component allows ReleaseHub to bring up any number of ephemeral environments on the same domain without conflicts. domain is taken directly from your configuration file.

Rules

Rules are based on https://kubernetes.github.io/ingress-nginx/user-guide/basic-usage/ and provide an easy way to define multiple endpoints per service. They consist of three parts: service name, hostnames, and a path. ReleaseHub takes this configuration and creates an NGINX Ingress deployment to handle your routing automatically.
service:
type: String
required: true
description: Service name from your config
hostnames:
type: Array
required: true
description: Same as hostnames above
path:
type: String
required: true
description: Entry point for hostnames
Rules Schema
rules:
- service: backend
  hostnames:
  - backend-${env_id}.${domain}
  path: "/auth/"
- service: frontend
  hostnames:
  - frontend-${env_id}.${domain}
  path: "/graphql"
Rules Example

App Imports

App Imports are optional and not present in the Application Template by default.
---
branch:
type: String
required: false
description: Setting the branch pins all created Environments to that branch
name:
type: String
required: true
description: Name of the App you want to import. The imported App must exist in
your account.
exclude_services:
type: Array
required: false
description: If you have a service in your imported app that would be a duplicate,
say both apps have Redis, you can exclude it
app_imports:
- name: backend
  branch: new-branch
  exclude_services:
  - name: redis
Example: App Imports excluding a service
parallelize_app_imports: true
app_imports:
- name: backend
- name: upload-service
- name: worker-service
- name: authentication-service
Example: App Imports with many apps utilizing the parallel deploys

Exclude Services

Allows the removal of duplicate services during App Imports
---
name:
type: String
required: true
description: Name of service you want to exclude

Cron Jobs

Cron Job containers allow you to define additional workloads that run on a schedule. Cron Jobs can be used for many different tasks, like database maintenance, reporting, or warming caches by accessing other containers in the namespace.
---
completions:
type: Integer
required: false
description: Minimum Required Completions For Success
default: 1
concurrency_policy:
type: String
required: false
description: Policy On Scheduling Cron jobs
default: Forbid
from_services:
type: String
required: false
description: Service To Use For Job Execution
has_repo:
type: Boolean
required: false
description: Repository is local
image:
type: String
required: false
description: Docker Image To Execute
name:
type: String
required: true
description: A Name
parallelism:
type: Integer
required: false
description: Amount Of Parallelism To Allow
default: 1
schedule:
type: String
required: true
description: Cron Expression
args:
type: Array
required: false
description: Arguments
command:
type: Array
required: false
description: Entrypoint
Each cron job entry has a mutually exclusive requirement where either image or from_services must be present.
cron_jobs:
- name: poll-frontend
  schedule: "0 * * * *"
  image: busybox
  command:
  - sh
  - "-c"
  - "curl http://frontend:8080"
- name: redis-test
  schedule: "*/15 * * * *"
  from_services: redis
  command:
  - sh
  - "-c"
  - "redis-cli -h redis -p 6390 ping"
Example cron job definitions to poll the frontend service and ping Redis
parallelism, completions, and concurrency_policy are ways to control how many pods will be spun up for jobs and how preemption will work. By default, a minimum of one job will run successfully to be considered passing. Also by default, we set concurrency_policy to equal Forbid rather than the default Kubernetes setting of Allow. We have found that the default of Allow creates problems for long running jobs or jobs that are intensive and need to be scheduled on a smaller cluster. For example, if a job runs for ten minutes but is scheduled every five minutes, then Kubernetes will gladly keep starting new jobs indefinitely because it does not think the job is finished. This can quickly overwhelm resources. You can use Forbid to prevent rescheduling jobs that should not be rescheduled even if they are not run or fail to start.
A few examples follow.
cron_jobs:
- name: poll-frontend
  concurrency_policy: "Forbid"
  parallelism: 1
  completions: 1
  schedule: "0 * * * *"
  image: busybox
  command:
  - sh
  - "-c"
  - "curl http://frontend:8080"
An example of the default settings (same as leaving them blank).
cron_jobs:
- name: poll-frontend
  concurrency_policy: "Replace"
  parallelism: 2
  completions: 2
  schedule: "*/10 * * * *"
  image: busybox
  command:
  - sh
  - "-c"
  - "curl http://frontend:8080"
An example of a job that will run two polling jobs roughly simultaneously every ten minutes. Two jobs must succeed for the job to be marked complete; if it does not finish within 10 minutes, then the Replace policy will kill the previous job and start a new one in its place.
cron_jobs:
- name: sync-data-lake
  concurrency_policy: "Allow"
  parallelism: 3
  completions: 6
  schedule: "@daily"
  image: busybox
  command:
  - sh
  - "-c"
  - "backup db"
An example of a queue-pulling job that runs 3 parallel self-synchronizing pods and usually takes six runs to complete. The setting of Allow ensures the job starts again if the scheduler decides the jobs did not finish or started late due to resource constraints on the cluster. Please note: completion_mode is not available until v1.24 is supported.

completions

An integer greater than zero indicating how many successful runs should be considered finished. Usually you would set this value equal to or greater than parallelism, but it can be set lower if you do not care about wasted pods being scheduled. Depending on parallelism and concurrency_policy, the combination of settings may cause jobs to run more times than this value. See the Kubernetes documentation.

concurrency_policy

One of Allow, Forbid, or Replace. Kubernetes defaults to Allow, which lets jobs be rescheduled and started if they have failed, haven't started, or haven't finished yet. We prefer Forbid because it prevents pods from being started or restarted again, which is much safer. Replace means that if a job has failed or stalled, the previous job is killed (if it is still running) before a new pod is started.

from_services

A reference to the service name to use as the basis for executing the cron job. Parameters from the service will be copied into creating this cron job.

has_repo

Use an internal repository built by ReleaseHub, or not.

image

A reference to the Docker image used to execute the job; use this if from_services is not a good fit.

name

What's in a name? That which we call a rose/By any other name would smell as sweet.

parallelism

The integer number of pods that can run in parallel. Set to 0 to disable the cron job. See the Kubernetes documentation. This controls how many pods are potentially running at the same time during a scheduled run.

schedule

A string representing the schedule when a cron job will execute in the form of minute hour dayofmonth month dayofweek or @monthly, @weekly, etc. Read the Kubernetes docs
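For instance, an illustrative expression in the minute hour day-of-month month day-of-week form:

```yaml
# Run at 02:30 on weekdays (minute=30, hour=2, any day-of-month, any month, Mon-Fri)
schedule: "30 2 * * 1-5"
```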

args

An array of arguments to be passed to the entrypoint of the container.

command

An array specifying the command to run on container start, overriding the entrypoint of the container.
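To illustrate how the two directives interact (the script and arguments are hypothetical), command replaces the image's entrypoint while args are passed to that command:

```yaml
command:
- "python"
args:
- "manage.py"
- "clearsessions"
```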

Development Environments

Coming Soon! Contact sales to learn more.
Development Environment allows you to configure an environment to be used for remote development. This allows you to connect from a local machine to the remote environment and sync files and folders.
---
services:
type: Array
required: true
description: Set of services which will allow remote development
Each service entry describes:
  • image to use, if not using the same as the one defined on the service
  • command to run on the image, if not using the one defined on the service
  • sync which files and folders to sync from a local machine to the remote container
  • port_forwards which ports to forward from the local machine to the remote container
development_environment:
  services:
  - name: api
    command: "yarn start"
    image: releasehub
    sync:
    - remote_path: "/app/src/api"
      local_path: "./src/api"
    port_forwards:
    - remote_port: 4000
      local_port: 4000
  - name: frontend
    command: "bash"
    sync:
    - remote_path: "/app/src/frontend"
      local_path: "./src/frontend"
    port_forwards:
    - remote_port: 4000
      local_port: 4000
    - remote_port: 4001
      local_port: 4001
Development Environment Example

Development Environment Services

Each service entry configures a single service for remote development: it can override the service's image or command, and it specifies which files and folders to sync and which ports to forward.
---
command:
type: String
required: false
description: Command to run on container start. Overrides any `command` specified
for the `service`.
image:
type: String
required: false
description: The image to use for the container. Overrides any `image` specified
for the `service`.
name:
type: String
required: true
description: Name of the service to use for remote development.
port_forwards:
type: Array
required: true
description: Specify which ports are forwarded.
sync:
type: Array
required: true
description: Specify which files and folders are synchronized.

Port Forwards

Port forwards allow you to configure which local port(s) are mapped to the remote port(s) on your container.
---
local_port:
type: Integer
required: true
description: The local port
remote_port:
type: Integer
required: true
description: The remote port

Sync

Sync allows you to configure which files and folders are synchronized between a local machine and a remote container.
---
local_path:
type: String
required: true
description: The full path or the relative path assumed from the current working
directory.
remote_path:
type: String
required: true
description: The full path on the container.

Environment Templates

There are two types of allowed and required templates: ephemeral and permanent. When creating a new environment, either manually or through a pull request, one of these templates is used to construct the configuration for that particular environment. If the template is empty you get the defaults contained in your Application Template, but these templates allow you to override any of the defaults.
The schema for these is a duplicate of the entire default configuration, as it allows you to override anything contained in this file for that particular template. As such, we won't detail the schema twice, but there are examples here showing how to override default configuration in your templates.
Instant Datasets are unique in that they are not allowed at the root of the default config and can only be added under environment_templates. Since Instant Datasets allow you to use instances of RDS databases (often snapshots of production, but they could be snapshots of anything), having this be the default could result in unwanted behavior for your permanent environments.
ReleaseHub requires you to be explicit about which templates should use Instant Datasets by default. Once you have created an environment, you may add Instant Datasets to it through the Environment Configuration file if you don't want all environments of a particular type to use datasets.
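As a sketch of overriding defaults per template (the replica counts are illustrative; any directive from this file could be overridden the same way):

```yaml
environment_templates:
- name: ephemeral
  resources:
    replicas: 1
- name: permanent
  resources:
    replicas: 3
```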

Ingresses

Ingress settings that can control the behavior and functionality of the NGINX ingress controller to access HTTP services in your cluster
---
affinity:
type: String
required: false
description: Nginx affinity
affinity_mode:
type: String
required: false
description: The mode for affinity stickiness
backend_protocol:
type: String
required: false
description: Protocol to use on the backend
proxy_body_size:
type: String
required: false
description: Proxy Body Size maximum
proxy_buffer_size:
type: String
required: false
description: Proxy Initial Buffer Size
proxy_buffering:
type: Boolean
required: false
description: Enable or Disable Proxy Buffering
proxy_buffers_number:
type: Integer
required: false
description: Proxy Initial Buffer Count
proxy_max_temp_file_size:
type: String
required: false
description: Proxy Max Temp File Size
proxy_read_timeout:
type: String
required: false
description: Proxy Read Timeout
proxy_send_timeout:
type: String
required: false
description: Proxy Send Timeout
session_cookie_change_on_failure:
type: Boolean
required: false
description: Session Cookie Change on Failure
session_cookie_max_age:
type: Integer
required: false
description: Session Cookie Maximum Age in Seconds
session_cookie_name:
type: String
required: false
description: Session Cookie Name
session_cookie_path:
type: String
required: false
description: Session Cookie Path
wafv2_acl_arn:
type: String
required: false
description: Web Application Firewall Version 2 Access Control List Amazon Web Services
Resource Name
ingress:
  proxy_body_size: 30m
  proxy_buffer_size: 64k
  proxy_buffering: true
  proxy_buffers_number: 4
  proxy_max_temp_file_size: 1024m
  proxy_read_timeout: "180"
  proxy_send_timeout: "180"
Example proxy buffer settings for large web requests
ingress:
  affinity: "cookie"
  affinity_mode: "persistent"
  session_cookie_name: "my_Cookie_name1"
  session_cookie_path: "/"
  session_cookie_max_age: 86440
  session_cookie_change_on_failure: true
Example of session stickiness settings using a cookie
ingress:
  wafv2_acl_arn: arn:aws:wafv2:us-west-2:xxxxx:regional/webacl/xxxxxxx/3ab78708-85b0-49d3-b4e1-7a9615a6613b
Example settings for applying a WAF ruleset to the ALB (AWS-only)

Ingress settings schema

affinity

Type of the affinity, set this to cookie to enable session affinity. See https://kubernetes.github.io/ingress-nginx/examples/affinity/cookie/

affinity_mode

The affinity mode defines how sticky a session is. Use balanced to redistribute some sessions when scaling pods or persistent for maximum stickiness.

backend_protocol

Which backend protocol to use (defaults to HTTP; supports HTTP, HTTPS, GRPC, GRPCS, AJP, and FCGI).
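For example, if a service speaks gRPC rather than plain HTTP:

```yaml
ingress:
  backend_protocol: GRPC
```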

proxy_body_size

Sets the maximum allowed size of the client request body.

proxy_buffer_size

Sets the size of the buffer used for reading the first part of the response received from the proxied server. This part usually contains a small response header.

proxy_buffering

Enables or disables buffering of responses from the proxied server.

proxy_buffers_number

Sets the number of the buffers used for reading the first part of the response received from the proxied server.

proxy_max_temp_file_size

When buffering of responses from the proxied server is enabled, and the whole response does not fit into the buffers set by the proxy_buffer_size and proxy_buffers directives, a part of the response can be saved to a temporary file. This directive sets the maximum size of the temporary file.

proxy_read_timeout

Sets the timeout in seconds for reading a response from the proxied server. The timeout is set only between two successive read operations, not for the transmission of the whole response.

proxy_send_timeout

Sets the timeout in seconds for transmitting a request to the proxied server. The timeout is set only between two successive write operations, not for the transmission of the whole request.

session_cookie_change_on_failure

When set to false, the NGINX ingress will send requests to the upstream pointed to by the sticky cookie even if the previous attempt failed. When set to true and the previous attempt failed, the sticky cookie will be changed to point to another upstream.

session_cookie_max_age

Time in seconds until the cookie expires; corresponds to the Max-Age cookie directive.

session_cookie_name

Name of the cookie that will be created (defaults to INGRESSCOOKIE).

session_cookie_path

Path that will be set on the cookie (required because ReleaseHub Ingress paths use regular expressions).

wafv2_acl_arn

The ARN for an existing WAF ACL to add to the load balancer. AWS-only, and must be created separately.

Jobs

Jobs allow you to run arbitrary scripts during a deployment. This lets you do anything before or after a service is deployed that is needed to set up your environment. A common example is running database migrations after you have deployed your database but before your backend comes up. Another good example is asset compilation. These tasks and any others can be accomplished using jobs.
---
completed_timeout:
type: Integer
required: false
description: How long (in seconds) ReleaseHub will wait until the job is considered
timed out and raise an error
default: 1200
completions:
type: Integer
required: false
description: Minimum Required Completions For Success
default: 1
from_services:
type: String
required: false
description: Name of service to inherit image from
halt_on_error:
type: Boolean
required: false
description: When set to `true`, the deployment will be aborted with an error if
the job fails.
image:
type: String
required: false
description: The image to use for the job
name:
type: String
required: true
description: Unique name to use when referencing the job
parallelism:
type: Integer
required: false
description: Amount Of Parallelism To Allow
default: 1
service_account_name:
type: String
required: false
description: Runs the job using the given service account (see [service accounts](#service-accounts))
args:
type: Array
required: false
description: Arguments that are passed to command on container start
command:
type: Array
required: false
description: Command to run on container start. Overrides what is in the Dockerfile
cpu:
type: Hash
required: false
description: Same as resources, but for this job only. If not specified the default
resources will be used. Can include units like `milli`, `centi`, etc.
memory:
type: Hash
required: false
description: Same as resources, but for this job only. If not specified the default
resources will be used. Include the units in gibibytes (Gi) or mebibytes (Mi).
nvidia_com_gpu:
type: Hash
required: false
description: Specify the limits value for gpu count on this job. Do not specify
`requests`. Must be an integer and cannot be overprovisioned or shared with other
containers.
volumes:
type: Array
required: false
description: List of volumes and mount points
Each job entry has a mutually exclusive requirement where either image or from_services must be present.
jobs:
- name: migrate
  completed_timeout: 600
  command:
  - "./run-migrations.sh"
  from_services: backend
- name: setup
  parallelism: 0 # disabled
  command:
  - "./run-setup.sh"
  from_services: backend
  cpu:
    limits: 100
    requests: 100
  memory:
    limits: 1Gi
    requests: 1Gi
- name: mljob
  completed_timeout: 3600
  parallelism: 3
  completions: 3
  command:
  - "./run-ml-batch.sh"
  from_services: backend
  node_selector:
    key: "nvidia.com/gpu"
    value: "true"
  nvidia_com_gpu:
    limits: 1
Jobs Example

completions

An integer greater than zero indicating how many successful runs should be considered finished. Usually you would set this value equal to or greater than parallelism, but it can be set lower if you do not care about wasted pods being scheduled. Depending on parallelism and concurrency_policy, the combination of settings may cause jobs to run more times than this value. See the Kubernetes documentation.

parallelism

The integer number of pods that can run in parallel. Set to 0 to disable the job. See the Kubernetes documentation. This controls how many pods are potentially running at the same time during a scheduled run.

Resources

Resources are service level defaults. They represent the resources allocated for each service. Storage is different in that not every container needs storage, so while you can specify defaults, not every container will use storage.
Requests define resource guarantees. Containers are guaranteed the request amount of resource. If not enough resources are available the container will not start.
Limits, on the other hand, make sure a container never goes above a certain amount of resource. The container is never allowed to exceed the limit.
memory: Limits and requests for memory are measured in bytes. You can express memory as a plain integer or as a fixed-point integer using one of these suffixes: E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. For example, the following represent roughly the same value: 128974848, 129e6, 129M, 123Mi
nvidia_com_gpu: Limits for Nvidia GPU units. (Do not specify requests:). GPU limits can only be integer values and cannot be shared concurrently with other containers. You must also specify a node_selector to schedule a job or service on the correct worker node(s).
cpu: Limits and requests for cpu are measured in millicpu, represented as '{integer}m', e.g. 100m (guarantees that the service will receive 1/10 of 1000m, or 1/10 of 1 CPU). You can also represent cpu resources as fractions, e.g. 0.1 is equivalent to 100m. Precision finer than 1m is not allowed.
replicas: The number of containers that will run during normal operation. This field is an integer, e.g. 5, which would run 5 of each service.
storage: Consists of two values size and type. Size accepts the same values as memory and type is the type of storage, whether aws-efs, empty_dir, or host_path.
---
replicas:
type: Integer
required: true
description: Number of containers, per service
default: 1
cpu:
type: Hash
required: true
description: Limits and requests for cpus
default: '{"limit"=>"1000m", "requests"=>"100m"}'
memory:
type: Hash
required: true
description: Limits and requests for memory
default: '{"limit"=>"1Gi", "requests"=>"100Mi"}'
nvidia_com_gpu:
type: Hash
required: false
description: Limits for nvidia.com/gpu tagged nodes
storage:
type: Hash
required: false
description: Size and type definition
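A sketch spelling out the defaults listed above (the storage values are illustrative, since storage has no stated default):

```yaml
resources:
  cpu:
    limits: 1000m
    requests: 100m
  memory:
    limits: 1Gi
    requests: 100Mi
  replicas: 1
  storage:
    size: 10Gi
    type: aws-efs
```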

Service Accounts

Service accounts allow you to control the cloud permissions granted to your workloads (services, jobs, infrastructure runners, etc.)
To apply a service account to a workload, set its service_account_name field to its name.
---
cloud_role:
type: String
required: false
description: |
Cloud role to assume.
On AWS, this is the IAM Role's ARN. On GCP this is the service account's email address.
name:
type: String
required: true
description: Unique name to use when referencing the service account from `service_account_name`
service_accounts:
- name: custom-role
  cloud_role: arn:aws:iam::111111111111:role/MyCustomRole
services:
- name: aws-cli
  image: amazon/aws-cli
jobs:
- name: aws-whoami
  from_services: aws-cli
  args:
  - sts
  - get-caller-identity
  service_account_name: custom-role
Example: Assuming a custom IAM role from a job

Services

Services contain descriptions of each of your containers. They include many fields from your docker-compose and fields auto-generated by ReleaseHub upon application creation. For each service you can define:
  • Static JavaScript builds
  • Open and map any number of ports
  • Create mounts and volumes
  • Use ConfigMaps to modify config at run-time for off-the-shelf containers
  • Override default resources
  • Pin particular services to particular images
  • Create liveness and readiness probes and set other k8s config params (e.g. max_surge)
  • Create stateful services
  • External DNS entries for cross-namespace services
---
build_base:
type: String
required: false
description: Path to the Javascript application if it does not reside at the root
build_command:
type: String
required: false
description: Command to create the static Javascript build.
build_destination_directory:
type: String
required: false
description: Directory to copy the generated output to
build_output_directory:
type: String
required: false
description: Directory where the generated output is located
build_package_install_command:
type: String
required: false
description: Command to install packages such as `npm install` or `yarn`. Defaults
to `yarn`
completed_timeout:
type: Integer
required: false
description: Time (in seconds) to wait for container to reach completed state
default: 600
has_repo:
type: Boolean
required: false
description: If we should reference an image built by ReleaseHub
image:
type: String
required: false
description: Name of or path to image
max_surge:
type: String
required: false
description: K8s max_surge value (as a percentage from 0 to 100)
default: 25
name:
type: String
required: true
description: Name of your service
pinned:
type: Boolean
required: false
description: Pin service to particular image
ready_timeout:
type: Integer
required: false
description: Time (in seconds) to wait for container to reach ready state
default: 180
replicas:
type: Integer
required: false
description: Same as resources, but for this service only
service_account_name:
type: String
required: false
description: Runs the service using the given service account (see [service accounts](#service-accounts))
static:
type: Boolean
required: false
description: When true, ReleaseHub will create a static Javascript build. Review
the following build_* attributes
args:
type: Array
required: false
description: Arguments that are passed to command on container start
build:
type: Hash
required: false
description: Instructions for ReleaseHub to build an image.
command:
type: Array
required: false
description: Command to run on container start. Overrides what is in the Dockerfile
cpu: