Writing Deploys

The definitive guide to writing pyinfra deploys.

What is a pyinfra deploy?
A deploy represents a collection of inventory (hosts to target), data (configuration, templates, files) and operations (changes/state to apply to the inventory). Deploys are written in standard Python, and other packages can be used as needed.

Layout

The layout of a pyinfra deploy is generally very flexible. Only two paths are hard-coded, both relative to the Python file being executed:

  • group_data/*.py - arbitrary data for host groups
  • config.py - optional configuration

For other files, the following layout is recommended, although not required:

  • *.py - top-level operation definitions
  • inventory.py or inventories/*.py - inventory definitions
  • templates/*.j2 - jinja2 template files
  • files/* - normal/non-template files
  • tasks/*.py - operations to perform a specific task
  • requirements.txt - Python package requirements for the deploy

An example layout:

- setup_server.py  # deploy file containing operations to execute
- update_server.py  # another deploy file with different operations
- config.py  # optional pyinfra configuration
inventories/
    - production.py  # production inventory targets
    - staging.py  # staging inventory targets
group_data/
    - all.py  # global data variables
    - production.py  # production inventory only data variables
tasks/
    - nginx.py  # deploy file containing task-specific operations
files/
    - nginx.conf  # a file that can be uploaded with the `files.put` operation
templates/
    - web.conf.j2  # a template that can be rendered & uploaded with the `files.template` operation

Inventory

Inventory files contain groups of hosts. Groups are defined as a list of hosts. For example, this inventory creates two groups, app_servers and db_servers:

# inventories/production.py

app_servers = [
    'app-1.net',
    'app-2.net'
]

db_servers = [
    'db-1.net',
    'db-2.net',
    'db-3.net'
]

Important

In addition to the groups defined in the inventory, all hosts are added to two more groups: all and the name of the inventory file, in this case production. Both can be overridden by defining them explicitly in the inventory.
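Because inventory files are plain Python, groups can also be composed programmatically. A minimal sketch (the linux_servers group name here is hypothetical):

```python
# Inventory files are plain Python, so groups can be built from one another.
app_servers = [
    'app-1.net',
    'app-2.net',
]

db_servers = [
    'db-1.net',
    'db-2.net',
    'db-3.net',
]

# hypothetical combined group, built by concatenating the lists above
linux_servers = app_servers + db_servers
```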

Data

Data allows you to separate deploy variables from the deploy script. With data per host and per group, you can easily build deploys that work across multiple environments. The data example deploy shows this in action.

Host Data

Arbitrary data can be assigned in the inventory and used at deploy-time. Pass a tuple of (hostname, data) instead of just the hostname:

# inventories/production.py

app_servers = [
    'app-1.net',
    ('app-2.net', {'some_key': True})
]

Group Data

Group data files can be used to attach data to groups of hosts. They are placed in group_data/<group_name>.py, which means group_data/all.py can be used to attach data to all hosts.

Data files are standard Python; any core types (strings, numbers, lists, dicts) defined at the top level will be included:

# group_data/production.py

app_user = 'myuser'
app_dir = '/opt/myapp'

Data Hierarchy

The same keys can be defined for host and group data - this means we can set a default in all.py and override it on a group or host basis. When accessing data, the first match in the following order is returned:

  • “Override” data passed in via CLI args
  • Host data as defined in the inventory file
  • Normal group data
  • “all” group data
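The lookup order above can be sketched in plain Python (this resolve helper is illustrative only, not pyinfra's actual implementation):

```python
# Illustrative sketch of pyinfra's data precedence: the first layer
# containing the key wins. Not pyinfra's real implementation.
def resolve(key, cli_override, host_data, group_data, all_data):
    for layer in (cli_override, host_data, group_data, all_data):
        if key in layer:
            return layer[key]
    return None

# A default set in group_data/all.py, overridden by group data
app_user = resolve(
    'app_user',
    cli_override={},
    host_data={},
    group_data={'app_user': 'produser'},
    all_data={'app_user': 'defaultuser'},
)
print(app_user)  # -> produser
```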

Note

pyinfra includes a debug-inventory command which can be used to explore the per-host data output for a given inventory/deploy, e.g. pyinfra inventory.py debug-inventory.

Connecting with Data

Instead of passing --key, --user, etc. to the CLI, or running an SSH agent, you can define these details within host and group data. Different variables are used depending on the connector - see the connectors page for the full list. For example, the SSH variables available are as follows:

ssh_port = 22
ssh_user = 'ubuntu'
ssh_key = '~/.ssh/some_key'
ssh_key_password = 'password for key'
# ssh_password = 'Using password authentication is discouraged; ssh_key is preferred.'

Operations

Now that you’ve got an inventory of hosts and know how to authenticate with them, you can start writing operations. Operations are used to describe changes to make to the systems in the inventory. Operations are imported from pyinfra.operations.

For example, this deploy will ensure that user “pyinfra” exists with home directory /home/pyinfra, and that the /var/log/pyinfra.log file exists and is owned by that user.

# deploy.py

# Import pyinfra modules, each containing operations to use
from pyinfra.operations import server, files

server.user(
    name='Create pyinfra user',
    user='pyinfra',
    home='/home/pyinfra',
)

files.file(
    name='Create pyinfra log file',
    path='/var/log/pyinfra.log',
    user='pyinfra',
    group='pyinfra',
    mode='644',
    sudo=True,
)

# Execute with: pyinfra my-server.net deploy.py

This deploy uses the server and files modules. You can see all available operations in the operations index.

Important

Operations that rely on one another (interdependency) must be treated with caution. See: deploy limitations.

Global Arguments

In addition to each operation having its own arguments, there are a number of keyword arguments available for all operations:

Privilege & user escalation:
  • sudo: Execute/apply any changes with sudo.
  • sudo_user: Execute/apply any changes with sudo as a non-root user.
  • use_sudo_login: Execute sudo with a login shell.
  • use_sudo_password: Whether to use a password with sudo (will ask).
  • preserve_sudo_env: Preserve the shell environment when using sudo.
  • su_user: Execute/apply any changes with su.
  • use_su_login: Execute su with a login shell.
  • preserve_su_env: Preserve the shell environment when using su.
  • su_shell: Use this shell (instead of the user's login shell) when using su. Only available on Linux, for use when the su target user has nologin or similar as their login shell.
Operation control:
  • name: Name of the operation.
  • shell_executable: The shell to use. Defaults to sh (Unix) or cmd (Windows).
  • chdir: Directory to switch to before executing the command.
  • env: Dictionary of environment variables to set.
  • ignore_errors: Ignore errors when executing the operation.
  • success_exit_codes=[0]: List of exit codes to consider a success.
  • timeout: Timeout for each command executed during the operation.
  • get_pty: Whether to get a pseudo-TTY when executing any commands.
  • stdin: String or buffer to send to the stdin of any commands.
Operation execution:
  • parallel: Run this operation in batches of hosts.
  • run_once: Only execute this operation once, on the first host to see it.
  • serial: Run this operation host by host, rather than in parallel.
  • precondition: Command to execute & check before the operation commands begin.
  • postcondition: Command to execute & check after the operation commands complete.
Callbacks:
  • on_success: Callback function to execute on success.
  • on_error: Callback function to execute on error.

Data & Facts

Both data (supplied by the user as part of the inventory) and facts (information about the target host) are often used in operation arguments or as conditional statements (if ...).

Data

Adding data to inventories was described above - you can access it within a deploy on host.data:

from pyinfra import host
from pyinfra.operations import server

# Ensure the state of a user based on host/group data
server.user(
    name='Setup the app user',
    user=host.data.app_user,
    home=host.data.app_dir,
)

Facts

Facts allow you to use information about the target host to change the operations you use. A good example is switching between apt & yum depending on the Linux distribution. Like data, facts are accessed using host.fact:

from pyinfra import host
from pyinfra.operations import yum

if host.fact.linux_name == 'CentOS':
    yum.packages(
        name='Install nano via yum',
        packages=['nano'],
        sudo=True
    )

Some facts also take a single argument like the directory or file facts. The facts index lists the available facts and their arguments.
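For instance, a fact taking an argument can gate an operation. A sketch deploy fragment (the /opt/myapp path and file names are hypothetical; this runs under the pyinfra CLI, not standalone):

```python
from pyinfra import host
from pyinfra.operations import files

# Only upload the config if the (hypothetical) app directory exists
if host.fact.directory('/opt/myapp'):
    files.put(
        name='Upload app config',
        src='files/app.conf',
        dest='/opt/myapp/app.conf',
    )
```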

Operation Meta

All operations return an operation meta object which provides information about the changes the operation will execute. This can be used for subsequent operations:

from pyinfra.operations import server

# Run an operation, collecting its meta output
create_user = server.user(
    name='Create user myuser',
    user='myuser',
)

# If we added a user above, do something extra
if create_user.changed:
    server.shell(
        name='Add the user to sudoers',
        commands=['usermod -aG sudo myuser'],  # add user to sudo, etc...
        sudo=True,
    )

Includes / Nested operations

Including files can be used to break out operations into multiple files, often referred to as tasks. Files can be included using local.include.

from pyinfra import local

# Include & call all the operations in tasks/install_something.py
local.include('tasks/install_something.py')

See more in examples: groups & roles.

Important

It is also possible to bundle operations into Python functions - this requires a slightly different syntax to maintain correct operation order; see packaging deploys for more information.

Config

There are a number of configuration options for how deploys are managed. These can be defined at the top of a deploy file, or in a config.py alongside the deploy file. See the full list of options & defaults.

# config.py or top of deploy.py

# SSH connect timeout
CONNECT_TIMEOUT = 1

# Fail the entire deploy after 10% of hosts fail
FAIL_PERCENT = 10

Note

When added to config.py (vs the deploy file), these options will take effect for any CLI usage (ie pyinfra host exec -- 'tail -f /var/log/syslog').

Requirements

The config can be used to check Python package requirements before pyinfra executes, helping to prevent unexpected errors. This can either be defined as a requirements text file path or simply a list of requirements:

REQUIRE_PACKAGES = 'requirements.txt'  # path relative to the deploy
REQUIRE_PACKAGES = [
    'pyinfra~=1.1',
    'pyinfra-docker~=1.0',
]

Examples

A great way to learn more about writing pyinfra deploys is to see some in action. There are a number of resources for this: