AutoScaling in SteelApp
AutoScaling enables a Traffic Manager to scale a pool of back-end nodes up or down based on application response time. An obvious use case for AutoScaling is a website whose traffic varies over the course of a day. When server load increases, application response time increases, so the number of web or application servers is increased to handle the additional load; conversely, when the load drops, servers are removed as they are no longer needed. This can be especially useful in environments where customers pay for each of their compute resources (i.e. chargeback). The AutoScaling functionality is enabled through a Cloud API in Traffic Manager, and a previous article covered some of the basics of creating a custom Cloud API.
Docker is a framework for building, packaging, and deploying applications. It works closely with Linux Containers and stores everything a container needs in a reusable form. Docker and Linux Containers have many use cases, but returning to the web application already mentioned, a Docker container would have all of a web application's dependencies built in: things like Apache libraries and binaries, HTML files, database connectors, and more. When the Docker application is deployed, its details, such as memory, image file location, and networking, are recorded and reported back by Docker. This lets Traffic Manager query Docker for the information it needs to populate a Pool.
Gluing everything together
There are a few things that need to be enabled and configured for Traffic Manager and Docker to play well together:
Docker must be listening for REST API calls.
A Docker image must be deployable and networkable.
A Cloud API plugin must be installed on Traffic Manager.
When the two APIs are put together, they form the framework for building the scalable application pool. The scale-up flow starts when Traffic Manager needs a node for a Pool: Traffic Manager tells Docker it needs a new container, Docker creates the container, Traffic Manager tells Docker to start the container, Traffic Manager waits for it to start, and then Traffic Manager asks Docker for the container's IP address and adds it to its Pool. When Traffic Manager decides that it no longer needs a node, it asks Docker for the list of containers, finds the oldest container, and tells Docker to destroy it. There can be more or fewer steps in the process, such as a “holding period” for a container before it is destroyed, but it is otherwise fairly straightforward.
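The scale-up and scale-down flows above can be sketched as ordered sequences of Docker Remote API calls. This is a minimal illustration, not the attached script: the endpoint paths follow Docker's Remote API, while the function names and the `{id}` placeholder are assumptions made for this sketch.

```python
# Sketch of the AutoScaling flows as ordered Docker Remote API calls.
# Each entry is (HTTP method, endpoint path, JSON payload or None).

def scale_up_calls(image_id):
    """Calls Traffic Manager would issue, in order, to add a node."""
    return [
        ("POST", "/containers/create", {"Image": image_id}),  # create the container
        ("POST", "/containers/{id}/start", None),             # start it
        ("GET",  "/containers/{id}/json", None),              # inspect it for its IP address
    ]

def scale_down_calls():
    """Calls Traffic Manager would issue, in order, to remove the oldest node."""
    return [
        ("GET",    "/containers/json", None),       # list containers, find the oldest
        ("POST",   "/containers/{id}/stop", None),  # stop it
        ("DELETE", "/containers/{id}", None),       # destroy it
    ]
```

After the inspect call returns the container's IP address, the Cloud API plugin reports it back so Traffic Manager can add the node to the Pool.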
Boiled down, Traffic Manager's Cloud API only has a few tasks to execute: create instances, destroy instances, and get the status of instances. Each of these tasks involves a few additional steps, but that is essentially all it needs to do, and the Docker API provides a way to perform each of these functions.
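A Cloud API plugin is essentially a dispatcher over those three tasks. The sketch below is hypothetical: the action names and the JSON response shape are illustrative placeholders, not Traffic Manager's exact driver contract (which the previous article documents), and the comments note the Docker calls each branch would make.

```python
import json
import sys

def dispatch(action):
    """Map a hypothetical Cloud API action to a result (illustrative only)."""
    if action == "createnode":
        # Would POST /containers/create, then POST /containers/{id}/start.
        return {"status": "created"}
    if action == "destroynode":
        # Would POST /containers/{id}/stop, then DELETE /containers/{id}.
        return {"status": "destroyed"}
    if action == "statusrequest":
        # Would GET /containers/json and report each node's state.
        return {"status": "active"}
    raise ValueError("unknown action: %s" % action)

if __name__ == "__main__":
    print(json.dumps(dispatch(sys.argv[1])))
```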
These few tasks translate to a handful of specific Docker API calls:
Get all container statuses
Get individual container details
Create a container
Start a container
Stop a container
Destroy a container
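The original examples for these calls did not survive in this copy; as an illustration, the Docker Remote API endpoints for each task can be tabulated as below. The endpoint paths are Docker's; the `<id>` placeholder stands for a container ID.

```python
# Docker Remote API endpoints for each Cloud API task.
# Each value is (HTTP method, endpoint path).
DOCKER_API_CALLS = {
    "all container statuses":       ("GET",    "/containers/json?all=1"),
    "individual container details": ("GET",    "/containers/<id>/json"),
    "create a container":           ("POST",   "/containers/create"),
    "start a container":            ("POST",   "/containers/<id>/start"),
    "stop a container":             ("POST",   "/containers/<id>/stop"),
    "destroy a container":          ("DELETE", "/containers/<id>"),
}
```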
The attached Python script provides basic AutoScaling functionality and only needs the appropriate Docker host set (a variable on line 239) to get started. The image ID and name prefix can be specified during the Pool setup. Additional parameters can be added using a separate options file; that is not covered in this post, but can be understood from the previous article.
Docker API Docs
Pulse vADC - Application Delivery Controller
Feature Brief: Traffic Manager's Autoscaling capability