AutoScaling in SteelApp
AutoScaling enables a Traffic Manager to scale a pool of back-end nodes up or down based on application response time. An obvious use case for AutoScaling is a website whose traffic varies over the course of a day. When the server load increases, the application response time increases, so the number of web or application servers is increased to handle the additional load; conversely, when the load drops, servers are removed because they are no longer needed. This can be especially useful in environments where customers pay for each of their compute resources (i.e., chargeback). The AutoScaling functionality is enabled through a Cloud API in Traffic Manager, and this previous article covered some of the basics of creating a custom Cloud API.
Docker is a framework for building, packaging and deploying applications. It works closely with Linux Containers and stores everything a container needs in a re-usable form. Docker, along with Linux Containers, has many use cases, but going back to the web application already mentioned, a Docker container would have everything the web application needs built in: Apache libraries and binaries, HTML files, database connectors and more. When a container is deployed, its details, such as memory, image file location and networking, are given to it and are reported back by Docker. This lets Traffic Manager query Docker for the information it needs to populate a Pool.
Gluing everything together
There are a few things that need to be enabled and configured to allow Traffic Manager and Docker to play well together:
Docker must be listening for REST API calls.
A Docker image that is deployable and networkable must be available.
A Cloud API plugin for Traffic Manager must be installed.
When the two APIs are put together, they form the framework for building the scalable application pool. The scale-up flow starts when Traffic Manager needs a node for a Pool: Traffic Manager tells Docker to create a new container, then tells it to start the container; once the container has started, Traffic Manager looks up its IP address via Docker and adds it to the Pool. When Traffic Manager decides that it no longer needs a node, it asks Docker about the containers, finds the oldest one, and tells Docker to destroy it. There can be more or fewer steps in the process, such as a “holding period” for an instance before it is destroyed, but it is otherwise fairly straightforward.
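Below is a minimal sketch of that flow in Python, assuming the Docker host exposes its Remote API over plain HTTP; DOCKER_HOST, IMAGE_ID and NAME_PREFIX are illustrative placeholders, not names taken from the attached script.

import requests

DOCKER_HOST = "http://docker-host.example.com:2375"  # assumed API endpoint
IMAGE_ID = "my-web-app"                              # assumed image name
NAME_PREFIX = "autoscale"                            # assumed node name prefix

def create_node(suffix):
    """Create and start a container, then return its IP address for the Pool."""
    # 1. Traffic Manager asks Docker for a new container.
    r = requests.post(
        "%s/containers/create?name=%s-%s" % (DOCKER_HOST, NAME_PREFIX, suffix),
        json={"Image": IMAGE_ID})
    r.raise_for_status()
    container_id = r.json()["Id"]
    # 2. Traffic Manager tells Docker to start the container.
    requests.post("%s/containers/%s/start" % (DOCKER_HOST, container_id)).raise_for_status()
    # 3. Once started, look up the container's IP address so it can be added to the Pool.
    info = requests.get("%s/containers/%s/json" % (DOCKER_HOST, container_id)).json()
    return info["NetworkSettings"]["IPAddress"]

def destroy_oldest_node():
    """Find the oldest autoscaled container and tell Docker to destroy it."""
    containers = requests.get("%s/containers/json" % DOCKER_HOST).json()
    # Container names are reported with a leading slash.
    ours = [c for c in containers
            if any(n.startswith("/" + NAME_PREFIX) for n in c.get("Names", []))]
    if ours:
        oldest = min(ours, key=lambda c: c["Created"])  # "Created" is a Unix timestamp
        requests.post("%s/containers/%s/stop" % (DOCKER_HOST, oldest["Id"]))
        requests.delete("%s/containers/%s" % (DOCKER_HOST, oldest["Id"]))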
When it is boiled down, the Traffic Manager’s Cloud API only has a few tasks to execute: create instances, destroy instances, and get the status of instances. Each task involves a few additional steps, but that is essentially all it needs to do, and the Docker API provides a way to perform each of these functions.
These few tasks translate to some specific Docker API calls: listing all container statuses, fetching individual container details, starting a container, stopping a container, and destroying a container.
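Based on the Docker API documentation linked below, the mapping from task to endpoint looks roughly like this, shown here as a Python dictionary for compactness (the task names on the left are just labels, and the create call is included since the flow above needs it):

DOCKER_CALLS = {
    "all_container_statuses": ("GET",    "/containers/json?all=1"),
    "container_details":      ("GET",    "/containers/{id}/json"),
    "create_container":       ("POST",   "/containers/create"),
    "start_container":        ("POST",   "/containers/{id}/start"),
    "stop_container":         ("POST",   "/containers/{id}/stop"),
    "destroy_container":      ("DELETE", "/containers/{id}"),
}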
The attached Python script provides some basic AutoScaling functionality; to get started, it only needs to be modified to set the appropriate Docker host (a variable on line 239). The Image ID and Name Prefix can be specified during Pool setup. Additional parameters can be added using a separate options file; that mechanism is not covered in this post, but is explained in the previous article.
Docker API Docs
Pulse vADC - Application Delivery Controller
Feature Brief: Traffic Manager's Autoscaling capability
This video is a demo of Elastic Application Delivery using the Stingray Services Controller. It shows how a monitored metric, such as throughput or concurrent connections, can be used as a trigger for the Services Controller to spin up new instances of Stingray. This type of model can be used when building a very dynamic datacenter. The Flexible Licensing of the Services Controller enables a pool of licensed bandwidth to "follow an application".
Imagine an enterprise data center where different workloads are busy at different times of the day. For instance, early in the morning the VMware View servers may be busy as desktops are booted, but after that "boot storm" they are relatively idle; a half hour later, something like Microsoft Exchange or Siebel becomes busy. Using the flexible licenses of the Stingray Services Controller, an ADC instance can spin up during one application's busy period, then shut down and return its resources to the pool; as the next application becomes busy, a different instance can be brought up with bandwidth assigned to it, and that bandwidth returned when it is done.
This type of model is only available with the Stingray Services Controller.
Hi Jay. Here's what I've done to extend it (and I'm assuming you already have root access):
Add an additional hard disk to the VM.
fdisk the new disk and add a partition; set it to the Linux LVM type.
Run partprobe to re-scan the partitions.
pvcreate /dev/<new partition>
vgextend rvbd-ssc-host-vg /dev/<new partition>
lvextend -l +100%FREE /dev/rvbd-ssc-host-vg/root
resize2fs -F /dev/rvbd-ssc-host-vg/root
As you can see, it's really just running standard Linux commands to add a disk, extend the VG and LV, then extend the filesystem. I would caution you on the last step, since you have to force it because you can't fsck the root filesystem in the running state (i.e., do it at your own risk). I would suggest taking backups or snapshots, or following whatever your normal process is, to make sure you have something you can roll back to if things go awry. --Brian
This is a short video showing the deployment of 20 Stingray Traffic Managers using the Stingray Services Controller. Both the Web User Interface and the REST API are used during the demo. The Web UI is used here to see what is happening on the Controller and Hosts, while the REST API is used to programmatically deploy the instances. For additional information on the Stingray Services Controller refer to the User Guide available at Riverbed Support: Software - Stingray Services Controller
In a previous video we showed how to bring up Stingray Traffic Manager in a Virtual Private Cloud (VPC) on Amazon Web Services (AWS). Now we will walk through the deployment of a cluster of Stingray Traffic Managers in a VPC on AWS. Unlike competing products, all instances of Stingray include the clustering feature, as well as other features such as SSL offloading and compression. Also new in Stingray Traffic Manager version 9.5 is Active-Active clustering within AWS; we’ll take a look at that in this video as well. References on the AWS deployment of the 9.5 release can be found in the EC2 Getting Started Guide. Additional Stingray Traffic Manager references on the deployment of common enterprise applications can be found in the Stingray Solution Guides. Many other Stingray related documents can be found at https://splash.riverbed.com/community/product-lines/stingray/content. Lastly, I've attached a copy of the script I used for setting up the VPC. While scripting a VPC setup is not usually a regular occurrence in a production environment, it certainly can be in a DevOps environment.
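For a sense of what such a script can look like, here is a minimal, hypothetical sketch of a basic VPC setup using the boto3 library; the CIDR blocks and the choice of boto3 are assumptions for illustration, not details taken from the attached script.

import boto3

ec2 = boto3.client("ec2")

# Create the VPC and a subnet for the Traffic Manager instances
# (illustrative CIDR blocks, not taken from the attached script).
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]
subnet_id = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.0.0/24")["Subnet"]["SubnetId"]

# Attach an internet gateway so the instances are reachable from outside.
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

# Route all outbound traffic through the gateway.
rt_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rt_id, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id)
ec2.associate_route_table(RouteTableId=rt_id, SubnetId=subnet_id)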