# Foreman deployments design
Use arrow keys to switch the slides.
# Foreman deployments
Agenda:
- goals
- the design
- use case: Satellite
# Goals
- allow defining multi-host provisioning, configuration and orchestration
- instance-independent deployment definitions
- sharing the definitions
- pre-configured "one click" deployments
# Deployment workflow
1. describe the deployment process
   - result: *stack saved in a file locally*
2. import the stack into a Foreman instance
   - can be exported from another instance
   - result: *stack saved in the Foreman's DB*
3. configure the stack on the instance
   - e.g. select a subnet, define the number of hosts to provision
   - result: *deployment record + configuration*
4. deploy the configured stack
   - result: *running deployment*
# Stacks
- describe the deployment process
- consist of multiple tasks dependent on each other
- the task-centric approach makes it easy to integrate additional tools
- example tasks: `PuppetRun`, `ProvisionHost`, `TakeSubnet`
# Tasks
- extended Dynflow actions (Dynflow is a workflow engine: https://github.com/Dynflow/dynflow/)
- take inputs and produce results (which form the dependencies)
- the stack gives each action a name and description
- a stack doesn't need to define all inputs
Example:
```
TakeSubnet
# Either creates a new subnet or lets the user select an existing one
- input
- name
- network_address
- network_mask
- ...
- result:
- subnet instance
```
# Stack definition
- defines tasks, their inputs (and relationships)
- in the stack, a task input can be:
  - a hardcoded value
  - entered by a user in the configuration phase
  - a reference to a value from another task
  - ignored
- written in YAML
Example:
```yaml
DbServerHostgroup(TakeHostgroup):
- allow_select: false
- name: "DB servers"
DbHost(TakeHost):
- description: Database hosts
- count:
- type: input
- data_type: "int"
- min: "1"
- name:
- type: template
- value: "db-%"
- puppet_classes:
- ntp
- postgres
- comment: "Database server"
- hostgroup:
- type: reference
- object: "DbServerHostgroup"
- field: "result"
```
# Stack configuration
When the stack is imported, we map it onto instance-specific configuration and create a deployment.
- the stack defines and limits the available configuration (good UX: users can't set unsupported values)
- we collect the tasks and render a UI for the required inputs
- each input carries its type and validator
- the configuration is validated before it is saved
```plain
┌─ Provisioning subnet ─────────────────────────────────┐
│ Select subnet: [ Test lab ▾ ] [create new|refresh] │
└───────────────────────────────────────────────────────┘
┌─ Database hosts ──────────────────────────────────────┐
│ Count: [ 1 ▾ ] │
│ Name template: [_____________] │
┊ ┊
```
- each task has its own configuration UI
  - can be generated automatically for the simpler ones
# Deploying the configured stack
- tasks are ordered and processed by Dynflow
  - ordering is based on task inputs and results
  - concurrency evaluation for free
- progress tracking
  - the power of the Dynflow console
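For illustration, here is a fragment where the ordering falls out of the references alone; the task names are made up, but the DSL is the one used throughout this deck:

```yaml
# Illustrative fragment; task names are made up.
# DbHost and WebHost both reference only Net, so Dynflow can
# provision them concurrently once Net finishes.
Net(TakeSubnet):
DbHost(TakeHost):
  - subnet:
    - type: reference
    - object: "Net"
    - field: "result.subnet"
WebHost(TakeHost):
  - subnet:
    - type: reference
    - object: "Net"
    - field: "result.subnet"
```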
# Ordering actions without direct dependency
- each task has an `after` input field for explicit ordering
```
FirstRun(PuppetRun):
SecondRun(PuppetRun):
- after:
- type: reference
- object: "FirstRun"
- field: "result"
```
# Stored configurations
- keep instance-specific values separate from the stack
- cloning existing deployments
- stored pre-created configurations
  - partial configs
  - full configs for one-click deployments
- users fill in only the missing values
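A stored partial configuration might then be a small document of its own. The format below is purely illustrative, it is an assumption rather than part of the design:

```yaml
# Illustrative only -- the stored-configuration format is an assumption.
# A partial config pre-fills some task inputs; users supply the rest.
stack: "db-stack"
values:
  DbHost:
    count: 3
  # inputs not listed here (e.g. the subnet) are asked for
  # when the deployment is created
```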
# Extensibility
- via plugins
- plugins can:
  - define their own tasks
  - add properties to existing tasks
- tasks don't need to be limited to Foreman itself:
  - email notifications
  - approvals
  - third-party system integration (e.g. REST calls)
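A plugin-provided task could slot into a stack like any other. The `EmailApproval` task, its inputs, and the referenced task name below are all hypothetical:

```yaml
# Hypothetical plugin task: pause the deployment until an operator
# approves by email. All names here are illustrative.
AwaitApproval(EmailApproval):
  - recipients:
    - "ops@example.com"
  - after:
    - type: reference
    - object: "ProvisionHosts"
    - field: "result"
```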
# Integration with Katello
- via activation keys defined on `TakeHost` and `TakeHostgroup` tasks
- custom task `TakeActivationKey`
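A sketch of how `TakeActivationKey` might feed a host task; the input and result field names are assumptions:

```yaml
# Sketch; the input/result field names are assumptions.
SatelliteKey(TakeActivationKey):
  - description: Key used to register the hosts
SomeHost(TakeHost):
  - activation_key:
    - type: reference
    - object: "SatelliteKey"
    - field: "result.activation_key"
```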
# Editing deployed infrastructure
- can't be done automatically in most cases
- users need to define a special process
- handled via stacks too
  - e.g.: `scale-up`, `scale-down`, `db upgrade`
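A `scale-up` stack might look like the sketch below; the `TakeDeployment` task is hypothetical, the rest reuses tasks from earlier slides:

```yaml
# Illustrative scale-up stack; TakeDeployment is a hypothetical task
# that selects the existing deployment to modify.
Target(TakeDeployment):
NewHosts(TakeHost):
  - count:
    - type: input
    - data_type: integer
    - validate: '> 0'
ProvisionNewHosts(ProvisionHost):
  - hosts:
    - type: reference
    - object: "NewHosts"
    - field: "result.hosts"
```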
# Future features
- composed stacks
  - each stack defines inputs and results too
  - stacks can be coupled by an encapsulating stack
- stack inheritance
  - parents define common behaviour
  - children can add more tasks
- abstract stacks
  - `DB stack` -> `PostgreSQL`, `MySQL`
  - concrete stacks are selected in the configuration phase
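Abstract stacks might be expressed along these lines; the syntax is speculative, since this is a future feature:

```yaml
# Speculative syntax for an abstract stack and its implementations.
DbStack(AbstractStack):
  - implementations:
    - PostgresqlStack
    - MysqlStack
# The concrete stack is chosen in the configuration phase.
```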
# Examples - Satellite 6 deployment
Deploy one Satellite 6 instance with N capsules.
1. provision and configure Sat 6 machine
2. provision and configure capsules
3. couple them together
# Satellite 6
1. provision the machine
2. get content
3. install Satellite 6
![](diagrams/sat_stack.svg)
# Satellite 6
```yaml
ProvisioningSubnet(TakeSubnet):
- description: Subnet to provision the Satellite on
SatelliteOs(TakeOs):
- description: OS for the Satellite machine
- family: RHEL
- version: '>= 6.5'
SatelliteMachine(TakeHost):
- description: The satellite machine
- count: 1
- name:
- type: input
- data_type: string
- interfaces:
- primary: true
subnet:
- type: reference
- object: ProvisioningSubnet
- field: result.subnet
- os:
- type: reference
- object: SatelliteOs
- field: result.os
- activation_key:
- type: input
- data_type: select(ActivationKey)
ProvisionSatellite(ProvisionHost):
- hosts:
- type: reference
object: SatelliteMachine
field: result.hosts
InstallSatellite(RemoteExecution):
- hosts:
- type: reference
object: SatelliteMachine
field: result.hosts
- command: "katello-installer ..."
- parameters:
- name: password
type: string
```
# Capsule
1. provision the machine
2. get content
3. generate certificates on the Satellite machine
4. transport the certs to the Capsule machine
5. install the Capsule (requires the certs and OAuth credentials)
![](diagrams/capsule_stack.svg)
# Capsule
```yaml
ProvisioningSubnet(TakeSubnet):
- description: Subnet to provision the Capsules on
CapsuleOs(TakeOs):
- description: OS for the Capsule machine
- family: RHEL
- version: '>= 6.5'
CapsuleMachine(TakeHost):
- description: The capsule machine
- count:
- type: input
- data_type: integer
- validate: '> 0'
- name:
- type: template
- template: "capsule-#{index}"
- interfaces:
- primary: true
subnet:
- type: reference
- object: ProvisioningSubnet
- field: result.subnet
- os:
- type: reference
- object: CapsuleOs
- field: result.os
- activation_key:
- type: input
- data_type: select(ActivationKey)
ProvisionCapsule(ProvisionHost):
- hosts:
- type: reference
object: CapsuleMachine
field: result.hosts
SatelliteMachine(TakeHost):
- description: The satellite master machine
- count: 1
GenerateAndDistributeCertificates(RemoteExecution):
- hosts:
- type: reference
object: SatelliteMachine
field: result.hosts
- command: "<% hosts.map(&:name).each {|fqdn| `capsule-certs-generate --capsule-fqdn ..."
- params:
- name: capsules
type: reference
object: ProvisionCapsule
field: result.hosts
- result:
- name: hosts
value: "<% hosts %>"
InstallCapsule(RemoteExecution):
- hosts:
- type: reference
object: GenerateAndDistributeCertificates
field: result.hosts
- command: "capsule-installer ..."
- parameters:
- name: oauth_key
type: string
- name: oauth_secret
type: string
```
# Coupling Satellite and Capsule stacks
The encapsulating stack updates the substack tasks so that:
- both stacks use the same subnet
- the capsule is registered against the satellite
- both use the same activation key
![](diagrams/complete_stack.svg)
# Coupling Satellite and Capsule stacks
```yaml
SatelliteStack(Stack):
- name: Satellite
CapsuleStack(Stack):
- name: Capsule
CapsuleStack:CapsuleMachine(TakeHost):
- activation_key:
- type: reference
- object: SatelliteStack:SatelliteMachine
- value: result.host.activation_key
CapsuleStack:ProvisioningSubnet(TakeSubnet):
- subnet:
- type: reference
- object: SatelliteStack:ProvisioningSubnet
- value: result.subnet
CapsuleStack:SatelliteMachine(TakeHost):
- host:
- type: reference
- object: SatelliteStack:ProvisionSatellite
- value: result.host
```
More capsules can be added later by running the Capsule stack again.
# Puppet integration
- `TakeHost` will have an input field for Puppet classes and parameters
- additional tasks:
  - `PuppetRun`
  - `Enable/DisablePuppet`
- validation can check that a `PuppetRun` task is present when classes are assigned to a host
- a similar approach can be taken for other configuration tools
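Putting the Puppet pieces together, a fragment could look like this; `TakeHost`, `PuppetRun` and `puppet_classes` appear in the design above, while the task names and the `hosts` wiring are illustrative:

```yaml
DbHost(TakeHost):
  - puppet_classes:
    - ntp
    - postgres
ConfigureDb(PuppetRun):
  - hosts:
    - type: reference
    - object: "DbHost"
    - field: "result.hosts"
```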
# Questions?