
How it works

STACKL.IO is a tool that allows you to centralize configuration data and define applications and services as stacks. Stacks describe applications and services in a declarative way, decoupled from orchestrators and platforms, allowing ultimate portability and flexibility.

STACKL.IO can be logically split into two main parts: the Configuration Store and Stacks. The configuration store serves as a central data-lookup system for orchestrators, infrastructure-as-code and configuration management tools. It allows you to centrally manage configuration data in the form of key/value pairs for all your automation tools, avoiding the management and maintenance of multiple stores, which leads to inconsistency. Stacks are a way to describe applications in an abstract manner; the goal of a stack is: “describe your application once and be able to deploy it anywhere”. From a stack perspective, whether the application needs to be deployed on a VMware IaaS platform, on AWS EC2, or elsewhere, the application itself remains the same and the differences relate only to the target platform.

The configuration store and stacks work hand in hand to deliver the resulting key/value pairs to the automation tool (or tools) responsible for executing the deployment. Below we explain both parts by example and show how they work together to achieve this.

The Configuration Store

The configuration store is a hierarchical and relational model that essentially describes your IT infrastructure; in other words, it is queryable documentation representing your IT infrastructure. Every IT infrastructure consists of one or more of the following building blocks: Environments, Locations, and Network or Security Zones. These building blocks typically have a relation to one another, for example:

Example Scenario

Your organization has a hybrid cloud environment, where your developers are required to build and test on a public cloud provider such as AWS, Azure or Google Compute Engine due to the dynamic and elastic nature of their workloads. On the other hand, your production workloads are more predictable, and therefore you might choose to deploy them on a VMware cluster located in your own datacenter.

In the above example you will have at least the following building blocks: Production and Development environments, a VMware vSphere cluster and an AWS EC2 location, and one or more subnets or VLANs as zones. The hierarchical and relational model forming the tree structure in the configuration store will look like this:

https://res.cloudinary.com/stacklio/image/fetch/c_limit,dpr_auto,f_auto,q_80,w_640/https://www.stackl.io/uploads/2018/08/03.png
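
Each of these building blocks is stored as a document in the configuration store. As a purely illustrative sketch, an environment and its related location might be declared along these lines; the type values mirror the object types used later in this article, but the relation field (children) is a hypothetical name used only for illustration:

{
  "name": "development",
  "type": "environment",
  "children": ["aws-eu-west-01"]
}

{
  "name": "aws-eu-west-01",
  "type": "location",
  "children": ["sub-ab123456"]
}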

With the building blocks and their relations to one another defined, note that each building block has a unique purpose; that is why it exists. For example, in order to provision virtual machines, information is required about the target location: credentials, subnet information, templates or AMIs, regions, VMware cluster names, etc. In other words, each building block requires a set of key/value pairs describing its uniqueness. The following table shows how this might look.

| Document       | Key                     | Value         | Description                                                                  |
| -------------- | ----------------------- | ------------- | ---------------------------------------------------------------------------- |
| common         | dns_servers             | [10.10.0.10]  | Global DNS server, used for managing DNS records                             |
| common         | ntp_servers             | [10.10.0.5]   | Global NTP server                                                            |
| common         | automation_handler      | terraform     | Use Terraform for provisioning infrastructure                                |
| production     | iaas_platform           | esx           | Used to determine on which platform we are going to deploy                   |
| development    | iaas_platform           | aws           | Used to determine on which platform we are going to deploy                   |
| aws-eu-west-01 | aws_access_key_id       | xxxxxxx       | AWS access key ID                                                            |
| aws-eu-west-01 | aws_access_key_secret   | xxxxxxx       | AWS secret access key                                                        |
| aws-eu-west-01 | dns_servers             | [172.16.0.10] | Overrides the DNS value from upper blocks when provisioning to AWS           |
| vmw-clus-01    | vmw_prov_admin          | xxxxxx        | Name of the account that has VMware virtual machine provisioning permissions |
| vmw-clus-01    | vmw_prov_admin_password | xxxxxx        | Password for vmw_prov_admin                                                  |
| vmw-clus-01    | vmw_cluster_name        | cluster01     | Name of the VMware cluster where virtual machines will be provisioned        |
| vmw-clus-01    | vmw_dc_name             | dc01          | Name of the datacenter object within vCenter                                 |
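
To make this concrete, the aws-eu-west-01 location from the table could be stored as a single document. The key/value pairs below come straight from the table; the surrounding structure (the name and type fields) mirrors the role and shape documents shown later and is otherwise an assumption:

{
  "name": "aws-eu-west-01",
  "type": "location",
  "aws_access_key_id": "xxxxxxx",
  "aws_access_key_secret": "xxxxxxx",
  "dns_servers": ["172.16.0.10"]
}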

Secret management

The above table is for illustration purposes only. We strongly recommend NOT storing secrets as key/value pairs in the configuration store. Secrets should be managed by a secure vault, and the configuration store should only contain an identifier specifying the location in the vault where the secret is stored.
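
For example, instead of the password itself, the vmw-clus-01 document could hold only a reference to where the secret lives in the vault. The reference format below is purely hypothetical and depends on the vault product you use:

{
  "name": "vmw-clus-01",
  "type": "location",
  "vmw_prov_admin": "xxxxxx",
  "vmw_prov_admin_password": "vault:secret/vmw-clus-01/prov_admin_password"
}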

Stacks

Stacks are common definitions that describe a certain application or infrastructure service, without being tied to a specific automation tool or platform. Stacks are composed of parameters, resources and/or resource groups. Each resource defined within a stack has a relation to a role object type and a shape object type. Consider the following example: you are creating a stack for an application called MyApp. The MyApp application consists of one or more virtual machines running the application and a load balancer.

When creating a stack template for the MyApp application, the stack template will look like this:

{
  "name": "MyApp",
  "type": "stack-template",
  "description": "Stack Template for MyApp",
  "parameters": {
    "app_count": "",
    "environment": "",
    "location": "",
    "zone": "",
    "git_version": ""
  },
  "resources": {
    "myapp": {
      "type": "Spot::ResourceGroup",
      "properties": {
        "count": "<<app_count>>",
        "index_start": "0",
        "resource_def": {
          "properties": {
            "name": "myapp-<<index>>",
            "zone": "<<zone>>",
            "depends_on": "myapp-lbl",
            "git_version": "<<git_version>>",
            "environment": "<<environment>>",
            "shape": "micro",
            "role": "myapp",
            "location": "<<location>>"
          }
        }
      }
    },
    "loadbalancer": {
      "type": "Spot::Server",
      "properties": {
        "name": "myapp-lbl",
        "zone": "<<zone>>",
        "backend_server_names": "<<(Array)['myapp']['name']>>",
        "frontend_ports": 80,
        "environment": "<<environment>>",
        "shape": "no-shape",
        "role": "loadbalancer",
        "location": "<<location>>"
      }
    }
  }
}

The myapp resource is based on a role definition named myapp. The myapp role is an object of type role and looks as follows:

{
  "name": "myapp",
  "type": "role",
  "description": "MyApp role",
  "inherits": "ubuntu",
  "listenport": 80,
  "git_repo": "git@gitlab.com:myapps/myapp.git"
}

Roles can inherit from other roles, which makes it easy to organize and structure your definitions and avoids having to specify keys multiple times, which can lead to inconsistency and make them difficult to manage.

The ubuntu role looks as follows:

{
  "name": "ubuntu",
  "type": "role",
  "description": "Base ubuntu 16.04",
  "aws_image": "ami-12345678",
  "vmw_vcenter_template": "ubuntu-base-1604"
}
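
Conceptually, once inheritance is resolved, the myapp role carries both its own keys and those of ubuntu. The merged result might look roughly like the following; this is an illustration of the concept, not necessarily the exact document STACKL.IO produces:

{
  "name": "myapp",
  "type": "role",
  "description": "MyApp role",
  "listenport": 80,
  "git_repo": "git@gitlab.com:myapps/myapp.git",
  "aws_image": "ami-12345678",
  "vmw_vcenter_template": "ubuntu-base-1604"
}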

The last object type required is the micro shape. micro is an object of type shape and looks as follows:

{
  "name": "micro",
  "type": "shape",
  "description": "Micro shape",
  "aws_instance_type": "t2.micro",
  "memory_mb": 512,
  "cpu_num": 1
}

When the stack template gets instantiated (requested and deployed), a stack instance is created, which logically looks as follows:

https://res.cloudinary.com/stacklio/image/fetch/c_limit,dpr_auto,f_auto,q_80,w_640/https://www.stackl.io/uploads/2018/08/04.png

Inheritance

When the stack template MyApp gets instantiated, the stack instance becomes part of the tree. Assume that we used the following parameters when we requested the stack:

| Key         | Value          |
| ----------- | -------------- |
| app_count   | 1              |
| environment | development    |
| location    | aws-eu-west-01 |
| zone        | sub-ab123456   |
| git_version | 1.1            |
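
A request to instantiate the template with these parameters could look roughly like the JSON below. Only the parameter names and values come from the table above; the wrapper fields (type, stack_template) are assumptions made for illustration:

{
  "name": "MyApp",
  "type": "stack-instance",
  "stack_template": "MyApp",
  "parameters": {
    "app_count": "1",
    "environment": "development",
    "location": "aws-eu-west-01",
    "zone": "sub-ab123456",
    "git_version": "1.1"
  }
}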

If we look at the entire tree structure in relation to the stack instance MyApp, it looks as follows:

https://res.cloudinary.com/stacklio/image/fetch/c_limit,dpr_auto,f_auto,q_80,w_640/https://www.stackl.io/uploads/2018/08/05.png

When requesting the resulting set of key/value pairs for a resource (for example myapp-01) via the STACKL.IO REST API, the result looks as follows:

| Key                   | Value                           | Document                               |
| --------------------- | ------------------------------- | -------------------------------------- |
| ntp_servers           | [10.10.0.5]                     | common (common object type)            |
| automation_handler    | terraform                       | common (common object type)            |
| iaas_platform         | aws                             | development (environment object type)  |
| dns_servers           | [172.16.0.10]                   | aws-eu-west-01 (location object type)  |
| aws_access_key_id     | xxxxxxxx                        | aws-eu-west-01 (location object type)  |
| aws_access_key_secret | xxxxxxxx                        | aws-eu-west-01 (location object type)  |
|                       |                                 | sub-ab123456 (zone object type)        |
| git_repo              | git@gitlab.com:myapps/myapp.git | myapp (role object type)               |
| aws_image             | ami-12345678                    | myapp (role object type)               |
| aws_instance_type     | t2.micro                        | myapp (role object type)               |
| git_version           | 1.1                             | myapp-01 (resource object type)        |
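
Rendered as JSON, the resolved key/value set for myapp-01 might look like this. The keys and values are taken from the table above; the exact shape of the REST API response is not shown in this article and is assumed here:

{
  "ntp_servers": ["10.10.0.5"],
  "automation_handler": "terraform",
  "iaas_platform": "aws",
  "dns_servers": ["172.16.0.10"],
  "aws_access_key_id": "xxxxxxxx",
  "aws_access_key_secret": "xxxxxxxx",
  "git_repo": "git@gitlab.com:myapps/myapp.git",
  "aws_image": "ami-12345678",
  "aws_instance_type": "t2.micro",
  "git_version": "1.1"
}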

Invocations

While the configuration store can be used purely as a data-lookup system for retrieving key/value pairs for resources, by default it will perform an invocation to an automation tool. Whether invocations are triggered can be controlled with the automation_invocation_enabled key, which is set to true by default. Invocations only happen when stack instances are created, updated or deleted.

The way the platform performs invocations works as follows:
1. For any CRUD operation on stack instances, an event is generated.
2. The corresponding proxy agent (controlled by the proxyTag key, see Proxy Agent) receives the event.
3. The proxy agent determines which automation handler (controlled by the automationHandler key) must be used for each requested resource in the stack; a sketch of these keys follows this list.
4. The automation handler then invokes the commands on the automation endpoint.
5. The automation endpoint runs the workflow and performs the necessary actions as coded.
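
These controls are ordinary key/value pairs in the configuration store. A minimal sketch, assuming they are set on the common document (placing them there, and the proxyTag value, are assumptions for illustration; the remaining keys come from the earlier table):

{
  "name": "common",
  "type": "common",
  "dns_servers": ["10.10.0.10"],
  "ntp_servers": ["10.10.0.5"],
  "automation_handler": "terraform",
  "automation_invocation_enabled": true,
  "proxyTag": "local-proxy-01"
}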

With all these key/value pairs available, you can write your code with your favorite automation tool or tools.


Last updated on October 10, 2018