Traffic Manager can manage node deployment automatically: it scales the number of nodes in a service pool up and down by monitoring the response times of the nodes themselves. Scaling decisions are based on the percentage of requests that comply with a pre-set response time. When compliance falls below the configured scale_up level for a sustained period, Traffic Manager will initiate node provisioning; when compliance rises above the scale_down level for a sustained period, it will initiate de-provisioning. Upper and lower limits can be set on the number of nodes in service.
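As a rough illustration of the compliance logic described above, the decision the autoscaler makes for a given compliance percentage can be sketched as follows (the threshold values and function name here are illustrative, not Traffic Manager defaults):

```python
def scaling_decision(compliance, scale_up=80.0, scale_down=95.0):
    """Return the action taken for a given compliance percentage,
    i.e. the percentage of requests meeting the target response time.
    Threshold values are illustrative placeholders."""
    if compliance < scale_up:
        return "scale up"      # too many slow requests: provision a node
    if compliance > scale_down:
        return "scale down"    # comfortably within target: de-provision a node
    return "hold"              # within band: leave the pool as it is

print(scaling_decision(70.0))  # scale up
```

In the real product these checks must also hold for a sustained period, and the node count is clamped between the configured upper and lower limits.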
For more information, see Feature Brief: Traffic Manager's Autoscaling Capability
This AutoScaling driver for OpSource provides an interface for managing node deployment within OpSource clouds using the OpSource RESTful API. This document describes the steps needed to install the driver on a Traffic Manager and configure an AutoScaled pool for use in the OpSource Cloud.
The OpSource driver is written in Python, so you will need Python installed and available in your Linux VM. You will also need the “requests” and “elementtree” modules. Both of these modules can be installed with easy_install if they are not available directly from your Linux distribution.
Try installing “python-setuptools” to get access to easy_install, and then run “easy_install requests”.
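Before uploading the driver, it can be worth confirming that the required modules are importable on the VM. This pre-flight sketch checks for “requests” and for ElementTree (which ships with modern Python as xml.etree.ElementTree; older systems may need the standalone “elementtree” package); the function name is just an illustration:

```python
# Pre-flight check for the OpSource driver's Python dependencies.
import importlib.util

def check_modules(names):
    """Map each module name to True if it can be found on this system."""
    return {name: importlib.util.find_spec(name) is not None for name in names}

if __name__ == "__main__":
    for name, found in check_modules(["requests", "xml.etree.ElementTree"]).items():
        print(name, "OK" if found else "MISSING - try easy_install")
```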
This guide assumes that you have already deployed a Linux VM within the OpSource cloud and installed the Traffic Manager software. To install the driver, log in to the Stingray web UI and upload the “DiData.OpSource.py” script to Catalogs -> Extra Files -> Miscellaneous. Do not forget to check the “Executable” box when you upload it.
The driver uses an additional configuration file to hold OpSource-specific settings. Enter the settings and their values into a text file as space- or tab-separated key/value pairs, and upload it alongside the AutoScaling driver in Extra Files. In the screen shot above you can see my configuration is in a file called “DD-US-Cloud.cfg”. The configuration file will need to contain the following settings: user, apiHost, orgID, destroy, vlanID.
The user should be a user within your organisation that has full permissions to manage nodes in the VLAN in which you intend to deploy nodes.
The apiHost should be set to the FQDN of the API server for the region you are using.
The orgID should be set to the organisation ID as displayed in the Account tab of the OpSource admin interface.
The destroy flag should be set to “true” or “false” and indicates whether the driver should destroy nodes when scaling down or simply power them off. Powering a stopped node back on when scaling up is much faster than creating a new one from scratch.
The vlanID should be set to the ID of the VLAN (network) into which the nodes are being deployed. This ID is displayed when you expand a cloud section under the “Clouds” tab of the OpSource UI.
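Putting the five settings together, the configuration file might look something like this (every value below is a placeholder for illustration — substitute your own user name, API host, organisation ID and VLAN ID):

```
user     autoscale-admin
apiHost  api.opsourcecloud.net
orgID    01234567-89ab-cdef-0123-456789abcdef
destroy  false
vlanID   fedcba98-7654-3210-fedc-ba9876543210
```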
The final step before your driver is ready for use in an Auto Scaled pool is to create a set of Cloud Credentials. A “Cloud Credential” is a configuration object which links the driver with its configuration options. In the Traffic Manager UI you must navigate to Catalogs -> Cloud Credentials and create a new set of credentials for your OpSource cloud.
The first field in the CC configuration is a name, so choose something appropriate.
Next you will need to select the “DiData.OpSource.py” script from the Cloud API drop down box.
In credential1, enter the name of the configuration file which you uploaded previously.
In credential2 enter the password for the user specified in the configuration file.
Save the Cloud credentials.
Your Cloud Credentials are now available for use within a pool. Navigate to the Services -> Pools tab in the UI and create a new pool. Leave the nodes box empty, but tick the Auto-Scaling check box.
You will need to set the “autoscale!enabled” option to “Yes”, and the “autoscale!external” option to “No”. Next you should be able to select your “Cloud Credentials” file from the CC drop down box.
Now we need an imageID. At this point you will need to have created a clone of the node which you intend to use in the pool; it should be available in the “Customer Images” section of your cloud. Traffic Manager will provision nodes from this template. To find the ID, click on the customer image and you should see the ID displayed under the image name in the pop-up box.
OpSource Cloning Ref: How to Clone a Cloud Server to Create a Customer Image Using the Administrative UI
The machine type isn’t used in OpSource because the hardware configuration is all stored with the image ID. Set the name prefix to something appropriate for the service, and ensure that “autoscale!ipstouse” is set to “private IP addresses”, as your nodes will not be given public ones.
The remaining options control how and when Traffic Manager deploys nodes; they are not OpSource specific. Set them to appropriate values for your service, and when you are finished click Update.
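Pulling the pool settings above together, the auto-scaling part of the configuration amounts to key/value pairs along these lines. This is a summary sketch, not an exact dump of the pool configuration file — key names and value formats may differ slightly between Traffic Manager versions, and the credentials name, image ID and name prefix shown are placeholders:

```
autoscale!enabled            Yes
autoscale!external           No
autoscale!cloud_credentials  My-OpSource-Credentials
autoscale!imageid            01234567-89ab-cdef-0123-456789abcdef
autoscale!name_prefix        web-node
autoscale!ipstouse           private IP addresses
autoscale!min_nodes          1
```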
The Traffic Manager autoscaler will now attempt to deploy nodes up to the autoscale!min_nodes setting, which defaults to 1. You should see messages in the Traffic Manager event log, and nodes being provisioned in the OpSource admin portal.
You will now want to create a Virtual Server and Traffic IP Group to use with the AutoScaling pool.