Automating cloud resources with Python and Pulumi: Separating config from code

Separating the configuration data into a YAML file that serves as input to the Pulumi program. Safespring is a cloud platform built on OpenStack.

Jarle Bjørgeengen

Chief Product Officer

In infrastructure code (and other code, too) it is good practice to separate the program logic from its input data (the configuration). That way, to change the state of our infrastructure we only need to change the input data; the program itself only changes when its logic changes.

In the previous blog post, we went through a basic setup of Pulumi with the Python template for managing OpenStack resources in Safespring. This is a good starting point for understanding how Python and Pulumi can be used together to declaratively manage infrastructure resources without having to write all the resource-graph management from the ground up (which, of course, would also be possible in Python or any other modern programming language).

One problem with the first example is that the configuration (instance name, flavor, network, and so on) is embedded inside the Python code.

While that approach serves as a nice, self-contained example, it quickly becomes both error-prone and tedious to change the Python program every time an object (an instance, for instance ;-)) is to be added, changed, reconfigured or removed.

If the configuration were instead stored outside the program and loaded when it runs, for example in the most ubiquitous “human-friendly data serialization language” in IT today, YAML, that would be an improvement over the initial approach, right?


Reading the instance configuration from a YAML file

Consider the following Python code:

"""An OpenStack Python Pulumi program"""

import pulumi
from pulumi_openstack import compute
from pulumi_openstack import networking
from ruamel.yaml import YAML
import os.path

# Configure the behavior for the yaml module
yaml=YAML(typ='safe')
yaml.default_flow_style = False

# Load config data from YAML file representation
# into Python dictionary representation

config_data_file = "pulumi-config.yaml"
if os.path.isfile(config_data_file):
  with open(config_data_file, "r") as fh:
    config_dict = yaml.load(fh)
else:
  print(f'The file {config_data_file} does not exist!')
  exit(1)


instances = {}
for i in config_dict:
  instances[i['name']] = compute.Instance(i['name'],
        name = i['name'],
        flavor_name = i['flavor'],
        networks = [{"name": i['network']}],
        image_name = i["image"])

In this example, we take the same minimal set of parameters needed to define an instance as in the first example, but instead of specifying the parameters in the code, we read them from a dictionary that comes from de-serializing the pulumi-config.yaml file. In addition, we loop over the list of instances, taking the parameters for each instance from the corresponding list item in the YAML file.

And the pulumi-config.yaml file looks like this:

---
- name: pulumi-snipp
  flavor: l2.c2r4.100
  image: ubuntu-22.04
  network: default
- name: pulumi-snapp
  flavor: l2.c2r4.500
  image: ubuntu-22.04
  network: public
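
When this file is loaded with yaml.load, config_dict is simply a Python list of dictionaries, one per instance, which is what the loop above iterates over. For the sample file above it is roughly equivalent to:

# The de-serialized configuration as a Python data structure
config_dict = [
  {'name': 'pulumi-snipp', 'flavor': 'l2.c2r4.100',
   'image': 'ubuntu-22.04', 'network': 'default'},
  {'name': 'pulumi-snapp', 'flavor': 'l2.c2r4.500',
   'image': 'ubuntu-22.04', 'network': 'public'},
]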

So, can we now just run pulumi up and have Pulumi iterate over the list of instances in the YAML file, converging the actual state towards the desired one? Well, first we need to update the virtualenv the Pulumi program uses so that the ruamel.yaml module is available. To make sure the change persists when the setup is replicated elsewhere (in a pipeline, for instance), we should add the ruamel.yaml module to the requirements.txt file and then run venv/bin/pip install -r requirements.txt to update the installed Python libraries according to the requirements.
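
In practice, that could look something like this (assuming the virtualenv lives in the venv/ directory created by the Pulumi Python template):

# Add the YAML library to the project requirements and install it into the venv
echo "ruamel.yaml" >> requirements.txt
venv/bin/pip install -r requirements.txt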

Now, we can apply the desired state:

(oscli) ubuntu@demo-jumphost:~/pulumi$ pulumi up
Previewing update (dev)

View in Browser (Ctrl+O): https://app.pulumi.com/JarleB/pulumi-demo/dev/previews/9709238f-9230-4029-8bd5-0c6d9a55664d

     Type                           Name             Plan
     pulumi:pulumi:Stack            pulumi-demo-dev
 +   ├─ openstack:compute:Instance  pulumi-snapp     create
 +   └─ openstack:compute:Instance  pulumi-snipp     create


Resources:
    + 2 to create
    1 unchanged

Do you want to perform this update? yes
Updating (dev)

View in Browser (Ctrl+O): https://app.pulumi.com/JarleB/pulumi-demo/dev/updates/24

     Type                           Name             Status
     pulumi:pulumi:Stack            pulumi-demo-dev
 +   ├─ openstack:compute:Instance  pulumi-snapp     created (15s)
 +   └─ openstack:compute:Instance  pulumi-snipp     created (14s)


Resources:
    + 2 created
    1 unchanged

Duration: 17s

(oscli) ubuntu@demo-jumphost:~/pulumi$

Let’s inspect what was created by using OpenStack CLI:

(oscli) ubuntu@demo-jumphost:~/pulumi$ openstack server list |grep pulu
| 48d1cb9f-d732-4684-82e8-aa89ca05c5b9 | pulumi-snapp                          | ACTIVE  | public=212.162.147.53, 2a09:d400:0:1::2b1  | ubuntu-22.04             | l2.c2r4.500  |
| 5870d687-5aac-40b8-8f23-e54755e0fc62 | pulumi-snipp                          | ACTIVE  | default=10.68.3.95, 2a09:d400:0:2::82      | ubuntu-22.04             | l2.c2r4.100  |
(oscli) ubuntu@demo-jumphost:~/pulumi$

Looks like Pulumi kept its promise.

Adding security groups for access

It is not much fun to provision (and pay for) instances that can’t be reached, so let’s extend the setup to add some security groups and rules so the services on the instances will be reachable.

We will therefore change the Pulumi program so that it accepts the configuration of security groups and rules from the YAML configuration file, and passes the list of security group memberships as a parameter to each instance.

The new Python code also reflects a different structure in the YAML configuration file: the list of instances now lives under a new sub-tree called instances, and, unsurprisingly, the security groups sit under the security_groups sub-tree, with the rules for each security group as “leaf nodes” under that group.

Like this:

---
security_groups:
  ssh-from-the-world:
    ssh:
      direction: ingress
      ethertype: IPv4
      protocol: tcp
      port_range_min: 22
      port_range_max: 22
      remote_ip_prefix: 0.0.0.0/0
  web:
    https:
      direction: ingress
      ethertype: IPv4
      protocol: tcp
      port_range_min: 443
      port_range_max: 443
      remote_ip_prefix: 0.0.0.0/0
    http:
      direction: ingress
      ethertype: IPv4
      protocol: tcp
      port_range_min: 80
      port_range_max: 80
      remote_ip_prefix: 0.0.0.0/0

instances:
  - name: pulumi-snipp
    flavor: l2.c2r4.100
    image: ubuntu-22.04
    network: default
    security_groups:
      - ssh-from-the-world
  - name: pulumi-snapp
    flavor: l2.c2r4.500
    image: ubuntu-22.04
    network: public
    security_groups:
      - ssh-from-the-world

And here is the updated Pulumi program that implements the logic for the new structure of the YAML file:

"""An OpenStack Python Pulumi program"""

import pulumi
from pulumi_openstack import compute
from pulumi_openstack import networking
from ruamel.yaml import YAML
import os.path

# Configure the behavior for the yaml module
yaml=YAML(typ='safe')
yaml.default_flow_style = False

# Load config data from YAML file representation
# into Python dictionary representation

config_data_file = "pulumi-config.yaml"
if os.path.isfile(config_data_file):
  with open(config_data_file, "r") as fh:
    config_dict = yaml.load(fh)
else:
  print(f'The file {config_data_file} does not exist!')
  exit(1)


# Create the security groups and their rules from the configuration
security_groups = {}
security_group_rules = {}
for sg in config_dict['security_groups']:
  security_groups[sg] = networking.SecGroup(sg,
        name = sg)
  for sgr in config_dict['security_groups'][sg]:
    rule = config_dict['security_groups'][sg][sgr]
    security_group_rules[sgr] = networking.SecGroupRule(sgr,
      direction = rule['direction'],
      ethertype = rule['ethertype'],
      protocol = rule['protocol'],
      port_range_min = rule['port_range_min'],
      port_range_max = rule['port_range_max'],
      remote_ip_prefix = rule['remote_ip_prefix'],
      security_group_id = security_groups[sg].id)


# Create one compute instance per entry in the instances list,
# attaching the security groups it lists
instances = {}
for i in config_dict['instances']:
  instances[i['name']] = compute.Instance(i['name'],
    name = i['name'],
    flavor_name = i['flavor'],
    networks = [{"name": i['network']}],
    security_groups = i['security_groups'],
    image_name = i["image"])
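
One thing to be aware of: the loop above uses the rule name alone (ssh, http, https) as the Pulumi resource name, so two security groups could not both contain a rule called, say, http, since resource names of the same type must be unique within the stack. If you ever need that, a small (hypothetical) variation is to prefix the rule's Pulumi resource name with its group name:

# Hypothetical variation: include the group name in the resource name so
# rules with the same name in different groups do not collide
security_group_rules[f"{sg}-{sgr}"] = networking.SecGroupRule(f"{sg}-{sgr}",
  direction = rule['direction'],
  ethertype = rule['ethertype'],
  protocol = rule['protocol'],
  port_range_min = rule['port_range_min'],
  port_range_max = rule['port_range_max'],
  remote_ip_prefix = rule['remote_ip_prefix'],
  security_group_id = security_groups[sg].id)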

Let’s run the Pulumi program and see how the desired state of our IaaS changes according to the YAML configuration file structure:

(oscli) ubuntu@demo-jumphost:~/pulumi$ pulumi up
Previewing update (dev)

View in Browser (Ctrl+O): https://app.pulumi.com/JarleB/pulumi-demo/dev/previews/ce560731-1889-42bb-821d-9003e1acfc1e

     Type                                  Name                Plan       Info
     pulumi:pulumi:Stack                   pulumi-demo-dev
 +   ├─ openstack:networking:SecGroup      web                 create
 +   ├─ openstack:networking:SecGroup      ssh-from-the-world  create
 ~   ├─ openstack:compute:Instance         pulumi-snipp        update     [diff: ~securityGroups]
 ~   ├─ openstack:compute:Instance         pulumi-snapp        update     [diff: ~securityGroups]
 +   ├─ openstack:networking:SecGroupRule  https               create
 +   ├─ openstack:networking:SecGroupRule  http                create
 +   └─ openstack:networking:SecGroupRule  ssh                 create


Resources:
    + 5 to create
    ~ 2 to update
    7 changes. 1 unchanged

Do you want to perform this update? yes
Updating (dev)

View in Browser (Ctrl+O): https://app.pulumi.com/JarleB/pulumi-demo/dev/updates/29

     Type                                  Name                Status                  Info
     pulumi:pulumi:Stack                   pulumi-demo-dev     **failed**              1 error
 +   ├─ openstack:networking:SecGroup      web                 created (1s)
 +   ├─ openstack:networking:SecGroup      ssh-from-the-world  created (1s)
 ~   ├─ openstack:compute:Instance         pulumi-snipp        **updating failed**     [diff: ~securityGroups]; 1 error
 ~   ├─ openstack:compute:Instance         pulumi-snapp        updated (5s)            [diff: ~securityGroups]
 +   ├─ openstack:networking:SecGroupRule  https               created (0.88s)
 +   ├─ openstack:networking:SecGroupRule  http                created (1s)
 +   └─ openstack:networking:SecGroupRule  ssh                 created (1s)


Diagnostics:
  openstack:compute:Instance (pulumi-snipp):
    error: 1 error occurred:
    	* updating urn:pulumi:dev::pulumi-demo::openstack:compute/instance:Instance::pulumi-snipp: 1 error occurred:
    	* Gateway Timeout

  pulumi:pulumi:Stack (pulumi-demo-dev):
    error: update failed

Resources:
    + 5 created
    ~ 1 updated
    6 changes. 1 unchanged

Duration: 1m5s

(oscli) ubuntu@demo-jumphost:~/pulumi$

While applying the state, we see that one of the planned actions failed due to a timeout from the OpenStack API. This happens from time to time, and when it does, it is nice to have a tool that keeps track of the current state and of what was actually done, even when some actions fail. In this respect, Pulumi behaves the same as Terraform and will pick up the remaining changes on the next state application. So, let’s run the Pulumi program again and see what happens:

(oscli) ubuntu@demo-jumphost:~/pulumi$ pulumi up
Previewing update (dev)

View in Browser (Ctrl+O): https://app.pulumi.com/JarleB/pulumi-demo/dev/previews/e85ac2cd-53d0-40d1-8f74-8ea1dba35be8

     Type                           Name             Plan       Info
     pulumi:pulumi:Stack            pulumi-demo-dev
 ~   └─ openstack:compute:Instance  pulumi-snipp     update     [diff: +securityGroups]


Resources:
    ~ 1 to update
    7 unchanged

Do you want to perform this update? yes
Updating (dev)

View in Browser (Ctrl+O): https://app.pulumi.com/JarleB/pulumi-demo/dev/updates/30

     Type                           Name             Status           Info
     pulumi:pulumi:Stack            pulumi-demo-dev
 ~   └─ openstack:compute:Instance  pulumi-snipp     updated (1s)     [diff: +securityGroups]


Resources:
    ~ 1 updated
    7 unchanged

Duration: 4s

(oscli) ubuntu@demo-jumphost:~/pulumi$

And just as expected, there was only one update remaining, and it was quickly converged to the desired state described in the YAML configuration file. Now, the desired state should be equal to the actual state.

Let’s check to verify.

(oscli) ubuntu@demo-jumphost:~/pulumi$ openstack security group list |grep pul
| 33765832-f1a8-4afa-a542-c087994fd1a3 | pulumi-ssh             |                        | 74cf3e20e55345d29935625c7b3e5618 | []   |
| 58bc1279-3548-41cb-b918-15430cc983f1 | pulumi-web             |                        | 74cf3e20e55345d29935625c7b3e5618 | []   |
(oscli) ubuntu@demo-jumphost:~/pulumi$

(oscli) ubuntu@demo-jumphost:~/pulumi$ openstack server show -c instance_name -c addresses -c security_groups pulumi-snapp
+-----------------+--------------------------------------------+
| Field           | Value                                      |
+-----------------+--------------------------------------------+
| addresses       | public=212.162.147.166, 2a09:d400:0:1::140 |
| instance_name   | None                                       |
| security_groups | name='pulumi-ssh'                          |
|                 | name='pulumi-web'                          |
+-----------------+--------------------------------------------+
(oscli) ubuntu@demo-jumphost:~/pulumi$ openstack server show -c instance_name -c addresses -c security_groups pulumi-snipp
+-----------------+-----------------------------------------+
| Field           | Value                                   |
+-----------------+-----------------------------------------+
| addresses       | default=10.68.1.105, 2a09:d400:0:2::26a |
| instance_name   | None                                    |
| security_groups | name='pulumi-ssh'                       |
+-----------------+-----------------------------------------+
(oscli) ubuntu@demo-jumphost:~/pulumi$ nc -w 1 212.162.147.166 22
SSH-2.0-OpenSSH_8.9p1 Ubuntu-3
(oscli) ubuntu@demo-jumphost:~/pulumi$ nc -w 1 10.68.1.105 22
SSH-2.0-OpenSSH_8.9p1 Ubuntu-3
(oscli) ubuntu@demo-jumphost:~/pulumi$

It seems like Pulumi kept its promises again. Note that we can immediately reach the RFC1918 address of the instance on the default network. If you wonder why this “just works”, please read the blog post about the Safespring network model.

Conclusion

Starting from where we left off in our first and very basic Pulumi example, we have continued to show the value of the ruamel.yaml Python library in combination with a Python-driven Pulumi program, quickly generalizing the code by separating the concerns of code and configuration data.