General Configuration

Managing LinchPin requires a few configuration files. Beyond linchpin.conf, a few other configuration files (described in the sections below) need to be created. When linchpin runs, it checks four different locations for linchpin.conf files, in the following order:

  1. linchpin/library/path/linchpin.conf
  2. /etc/linchpin.conf
  3. ~/.config/linchpin/linchpin.conf
  4. path/to/workspace/linchpin.conf

The linchpin configuration parser supports overriding and extending configurations. After checking these locations, linchpin reads each configuration file that exists; if two or more files contain the same configuration section header, the file parsed most recently (that is, later in the list above) provides the configuration for that section. Therefore, a user who wants to add their own configurations to a linchpin workspace should place them in a linchpin.conf file in the root of that workspace. That file is parsed last, so its configurations take precedence over all others.
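For example, suppose both /etc/linchpin.conf and the workspace linchpin.conf define the same section. The section and key names below are purely illustrative, not actual LinchPin settings:

# /etc/linchpin.conf
[example_section]
key1 = system_value

# path/to/workspace/linchpin.conf
[example_section]
key1 = workspace_value

Because the workspace file is parsed last, key1 resolves to workspace_value for that section.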

To add your own configurations, simply create a linchpin.conf file in the root of your workspace using your preferred text editor and write the configuration in .ini style. Here’s an example:

[Section Header]
key1 = value1
key2 = value2

Initialization

Running linchpin init will generate the needed directory structure, along with an example PinFile, topology, and layout files. One important option here is --workspace; when this option is passed, linchpin uses that location for the directory structure. The default is the current directory.

$ export WORKSPACE=/tmp/workspace
$ linchpin init
PinFile and file structure created at /tmp/workspace
$ cd /tmp/workspace/
$ tree
.
├── credentials
├── hooks
├── inventories
├── layouts
│   └── example-layout.yml
├── PinFile
├── resources
└── topologies
    └── example-topology.yml
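The workspace can also be given with the --workspace option rather than the WORKSPACE environment variable. A sketch, assuming the option is accepted ahead of the subcommand (option placement may differ between LinchPin versions):

$ linchpin --workspace /tmp/workspace init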

At this point, one could execute linchpin up and provision a single libvirt virtual machine, with a network named linchpin-centos71. An inventory would be generated and placed in inventories/libvirt.inventory. This can be determined by reading topologies/example-topology.yml and noting the topology_name value.
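For example, from the generated workspace (a minimal sketch; provisioning output is omitted):

$ cd /tmp/workspace
$ linchpin up

After the run completes, the generated inventory appears at inventories/libvirt.inventory.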

PinFile

A PinFile takes a topology and an optional layout, among other options, and combines them into a set of configurations used as a resource for provisioning. An example PinFile is shown below.

dummy1:
  topology: dummy-cluster.yml
  layout: dummy-layout.yml

The PinFile collects the given topology and layout into one place. Many targets can be referenced in a single PinFile.

The target above is named dummy1. This target is the reference to the topology named dummy-cluster.yml and layout named dummy-layout.yml. The PinFile can also contain definitions of hooks that can be executed at certain pre-defined states.
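As an illustrative sketch of a multi-target PinFile, the second target (dummy2) and its topology file below are hypothetical; each target simply pairs a topology with an optional layout:

dummy1:
  topology: dummy-cluster.yml
  layout: dummy-layout.yml

dummy2:
  topology: dummy2-cluster.yml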

Topologies

The topology is a set of rules, written in YAML, that define the way the provisioned systems should look after executing linchpin. Generally, the topology and topology_file values are interchangeable, except where the YAML is specifically indicated. A simple dummy topology is shown here.

---
topology_name: "dummy_cluster" # topology name
resource_groups:
  -
    resource_group_name: "dummy"
    resource_group_type: "dummy"
    resource_definitions:
      -
        name: "web"
        type: "dummy_node"
        count: 3

This topology describes a set of three (3) dummy systems that will be provisioned when linchpin up is executed. The names of the systems will be ‘web_#.example.net’, where # indicates the count (usually 0, 1, and 2). Once provisioned, the resources will be output and stored for reference. The output resources data can then be used to generate an inventory, or passed as part of a linchpin destroy action.
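For instance, assuming the dummy1 target from the PinFile above, a provision and teardown cycle might look like the following (a sketch; command output is omitted):

$ linchpin up dummy1
$ linchpin destroy dummy1

The destroy action can use the stored resource outputs from the up run to tear down what was provisioned.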

Inventory Layouts

The terms inventory_layout and layout mean the same thing: a YAML definition for producing an Ansible static inventory file, based upon the provided topology. A YAML layout is stored in a layout_file.

---
inventory_layout:
  vars:
    hostname: __IP__
  hosts:
    example-node:
      count: 3
      host_groups:
        - example
  host_groups:
    example:
      vars:
        test: one

The above YAML allows for interpolation of the IP address or hostname as a component of a generated inventory. A host group called example will be added to the Ansible static inventory, along with a section called example:vars containing test = one. The resulting static Ansible inventory is shown here.

[example:vars]
test = one

[example]
web-2.example.net hostname=web-2.example.net
web-1.example.net hostname=web-1.example.net
web-0.example.net hostname=web-0.example.net

[all]
web-2.example.net hostname=web-2.example.net
web-1.example.net hostname=web-1.example.net
web-0.example.net hostname=web-0.example.net