If you are just coming off the high from the AWS re:Invent conference this past year, you probably still remember BMC Software's pitch for their Cloud Lifecycle Management (CLM) solution. It was touted as a Cloud Broker of all clouds, specifically facilitating and delivering compliance and "Trust" to Amazon Web Services.
The offering is pretty compelling, but little was shown or discussed about how it actually works. This is not uncommon, as there was not enough time in the 1-hour slot provided to BMC. If you were able to visit the BMC booth, then maybe you got the tour. If not, we'll be covering in this blog the major functional components that make BMC's CLM solution tick. Hold on to your hat, as it's packed with capabilities.
Part 1 – Providers
In this first part, we'll review CLM's Providers for AWS. BMC had the intuition that one cloud will not fit all. Some organizations want to leverage their internal resources (Private Cloud), so BMC built a provider for that.
Some organizations leverage VMware as their internal hypervisor, others KVM, Hyper-V, or Xen – BMC supports them all. When it came to public clouds, BMC also had to think future-proof: the Amazon Web Services provider is the first release of their public Cloud Provider support.
Others are supported (albeit more of a Professional Services gig), such as Rackspace and OpenStack, and more are planned for general availability.
Providers are a great way for CLM to grow as technology matures. Without getting into the nitty-gritty, CLM has a full REST-based API which it leverages for all cloud transactions between its Platform Manager and both the out-of-the-box providers and 3rd-party providers. The official definition by BMC:
[CLM defines a] Provider API as an integration API between CLM and third-party systems
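To make the idea concrete, here is a minimal sketch of what a Platform Manager-to-provider request *might* look like. The actual Provider API schema is not public, so the operation and field names below are purely hypothetical illustrations of the pattern:

```python
import json

# Hypothetical illustration only: CLM's real Provider API schema is not
# public, so "operation", "providerId", and "parameters" are made-up names.
def build_provider_request(operation, provider_id, params):
    """Assemble the JSON body a platform manager might POST to a provider."""
    return json.dumps({
        "operation": operation,      # e.g. "createServer"
        "providerId": provider_id,   # which registered provider handles the call
        "parameters": params,        # operation-specific arguments
    })

body = build_provider_request("createServer", "aws-ec2",
                              {"instanceType": "m1.small"})
```

The point is simply that a REST contract like this lets BMC (or a third party) plug in a new cloud by implementing the same set of operations.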
Providers are broken into 3 distinct categories:
- Compute – Compute providers are the interfaces responsible for manipulating cloud operations for Compute platforms: servers (virtual, physical, or a mixture). The main native Compute provider is BMC Software's BladeLogic Server Automation (BBSA). Another 3rd-party provider (the focus of this blog) is Amazon's AWS Compute Provider, which can target either EC2 or VPC.
- Network – For Network, CLM natively leverages BMC Software's BladeLogic Network Automation (BBNA). This component is leveraged for Data Center Management of private clouds (today) such as VMware, KVM, and Cisco Nexus, including all of the fancy Virtual Firewall (VFW) and Load Balancer capabilities used in the Private Cloud. Now, rumor has it that the next-gen version of CLM (4.0?) will allow cloud users of CLM to manipulate AWS Elastic Load Balancers and Security Groups natively via CLM. Cool!
- Storage – For Storage, CLM leverages the built-in BMC Software Atrium Orchestrator component (BAO), which is deployed as part of any CLM implementation. Within BAO, CLM leverages a combination of Adapters and Modules for interfacing with AWS Storage (S3/EBS) requests, primarily when spinning up an instance from a pre-defined AMI.
As you can see, CLM’s Platform Manager is a busy bee – it’s the traffic cop for all cloud requests.
So how is the AWS provider configured in CLM? Let's look. As you can see from the image, there are three distinct AWS providers registered with CLM (this is version 3.0 being shown, which is the GA release). All three leverage the BAO Amazon Adapter, which provides the orchestration for all calls to the AWS public API. This includes Compute, Storage, and Network calls.
Next, we need to cover Resources. Resources are how CLM allocates cloud requests to a defined capacity in a private, public, or hybrid cloud offering.
Part 2 – Resources
In CLM, resources are units of consumable goods in a Data Center. They are broken into the following categories:
- Locations – Physical location of your data centers, supporting your cloud resources.
- Pods – Represent a portion of the cloud bound by a set of physical network equipment – routers, firewalls, load balancers
- Network Containers – Represent network segments of the cloud used to isolate workloads or tenants based on specific policies/rules. Think of it as a virtual data center.
- Compute Pools – A combination of resources which are consumed as a whole.
- Resources – Individual data center resources.
In the private cloud world, these could be VMware Clusters, ESX Hosts, Virtual Disk Repositories, Distributed Virtual Switches, etc. In the AWS world, they are Regions, Availability Zones, EC2 or VPC boundaries, and Availability Zone compute resources.
The CLM AWS Provider is a unique resource provider because, unlike private data center components (e.g. VMware), you cherry-pick resources based on how they are exposed by the AWS API. Meaning, you don't get to pick an individual disk or an individual server/cluster to deploy to. You choose:
- Which Region you are interfacing with (Location).
- Which Availability Zone (Pods).
- Which EC2 or VPC resource (Network Container – closer to the term Virtual Data Center when dealing with VPC specifically).
- Which Availability Zone (Compute Pools – again, because Amazon abstracts the machine details into AZs).
- Which Availability Zone (Resources – again, because of how AWS works).
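The mapping just described can be summarized in a few lines. This is only a restatement of the list above, not anything CLM exposes programmatically:

```python
# CLM resource tiers on the left, the AWS construct each one is
# onboarded from on the right (per the list above).
CLM_TO_AWS = {
    "Location": "Region",
    "Pod": "Availability Zone",
    "Network Container": "EC2 or VPC boundary",
    "Compute Pool": "Availability Zone",
    "Resource": "Availability Zone",
}

for tier, construct in CLM_TO_AWS.items():
    print(f"{tier:18} -> {construct}")
```

Note how three of the five tiers all collapse onto the Availability Zone – AWS simply doesn't expose anything finer-grained.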
So how does this look in CLM’s Administrator Console?
CLM Resources – Location
You can see from the image above that the physical location is a bit abstracted in my example. I chose to represent AWS as a single Location. You could, in fact, have distinct Locations to represent different AWS Regions (EAST, WEST, etc.) and then onboard resources accordingly.
CLM Resources – Pods
For the Pods Resources, I on-boarded four cloud boundaries: US-EAST 1A, US-EAST 1B, US-EAST 1C, US-EAST 1D. These are the Amazon Web Services Availability Zones I chose to leverage as data center resources. This comes in very handy later on when we layer on top Service Offerings to be consumed by Cloud Users.
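Inside CLM this discovery runs through the BAO Amazon Adapter, but the underlying AWS call is the same `DescribeAvailabilityZones` operation you could make yourself. A minimal sketch (the function name and stub are mine, not CLM's):

```python
def list_availability_zones(ec2_client):
    """Return the AZ names a region exposes -- the candidates for CLM Pods."""
    resp = ec2_client.describe_availability_zones()
    return [az["ZoneName"] for az in resp["AvailabilityZones"]]

# Outside CLM, the same data comes from boto3 (same underlying AWS API):
#   import boto3
#   pods = list_availability_zones(boto3.client("ec2", region_name="us-east-1"))
```

In us-east-1 this would return the same 1a/1b/1c/1d zones onboarded above.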
CLM Resources – Network Containers
For the Network Containers, this is where it gets interesting. VPCs are represented in CLM as a Compute Pool (you onboard Compute Pools into Network Containers). I chose to onboard a defined VPC Compute Pool from my AWS account. This is a VPC previously created in AWS which provides me with the ability to burst into AWS securely, instead of bursting into EC2.
This VPC is hosted out of the US-EAST 1C Availability Zone, but you could onboard one VPC Compute Pool per Region/AZ if you wish. Here’s how it looks in AWS (note how the VPC ID is cross referenced into CLM):
This is relevant because when you onboard a VPC, CLM will dynamically query your AWS account and identify which compute boundary you wish to use (EC2 or VPC):
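That dynamic query maps onto the AWS `DescribeVpcs` operation. A hedged sketch of the lookup (helper name and stub are mine; CLM does this through the BAO adapter):

```python
def list_vpcs(ec2_client):
    """Return (VpcId, CidrBlock) pairs -- the IDs CLM cross-references
    when you onboard a VPC Compute Pool."""
    resp = ec2_client.describe_vpcs()
    return [(v["VpcId"], v["CidrBlock"]) for v in resp["Vpcs"]]
```

The VPC ID returned here is the same identifier you saw cross-referenced between the AWS console and CLM above.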
CLM Resources – Compute Pools
The traditional concepts of Clusters and Virtual Disk Repositories (VDR) are onboarded into CLM as Compute Pools. In AWS-relative terms, an Availability Zone can be both a Cluster and a VDR, which is why the image above displays two Compute Pools per AZ.
CLM Resources – Resources
For raw compute resources, you can see how we are simply onboarding the AWS Availability Zones. Note how the CPU and Memory counts for these resources are set to zero (this is because AWS doesn't publish via API how much CPU or Memory an AZ currently has available). Wouldn't it be nice to know? 🙂
Now we have AWS on-boarded into CLM. Let’s look at the next section – Service Blueprints – which allows us to define services which will leverage these CLM Resources.
Part 3 – Service Blueprints
If you haven't explored BMC's approach to service design for the cloud via Service Blueprints, you don't know what you're missing. It's some pretty interesting stuff. It allows you to define units of your service, which can be reused regardless of which cloud you wish to deploy onto.
Let's use a more concrete example. Say you want to deploy a LAMP stack (Linux, Apache, MySQL, PHP). In a traditional AWS model, you would use an AMI pre-built with the Linux OS, Apache, MySQL, and PHP. Simple enough, right? Sure, for one or two or a handful of equivalent deployments. What if you need to upgrade Apache to a newer release? You have to:
- Spin up an instance from your AMI.
- Upgrade the Apache component.
- Convert the instance to another AMI.
What about the X number of currently running EC2 instances based on your previous AMI? You have to destroy them (no merging) or manually upgrade Apache in each one of them – remember, AWS does not provide you with an application automation method. (Note: CloudFormation is starting to do this, but it is still limited to certain stacks.)
BMC's approach is much more modular than that. Yes, you still need to use an AWS AMI, but simply as infrastructure (a bare OS). CLM handles the dynamic deployment of Apache, MySQL, and PHP every time you wish to deploy a LAMP instance.
The benefit? Repeatability and reusability. If you go through the Apache update cycle mentioned above, you would simply update the Apache package in CLM (through the BladeLogic Server Automation product). Any new LAMP deployments into AWS would automatically leverage the newer version of Apache. You no longer have to change the original AMI, and you can leverage the same BSA Apache package to upgrade existing instances running an older version of Apache, via an automated Software Deployment Job in BSA.
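CLM drives the software layer through BSA packages and jobs, which are not scriptable from outside the product. But outside CLM, a rough approximation of the same "bare AMI + scripted stack" pattern is EC2 user data. The package names below assume a yum-based Linux AMI:

```python
# Approximation only: CLM itself uses BSA jobs, not user data. This just
# shows the "bare OS image + install the stack at boot" idea in AWS terms.
LAMP_PACKAGES = ["httpd", "mysql-server", "php"]  # assumes a yum-based AMI

def build_user_data(packages):
    """Render a boot-time shell script that installs the stack on first start."""
    lines = ["#!/bin/bash", "yum -y update"]
    lines += [f"yum -y install {pkg}" for pkg in packages]
    lines.append("service httpd start")
    return "\n".join(lines)
```

Bumping the Apache version then means changing one package definition, not rebuilding the AMI – which is exactly the modularity argument BMC is making.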
So let’s look at how this is constructed in CLM.
The Service Definition
The first step in a Service Blueprint is to define the Service Definition. The Service Def identifies the individual Components that make up your service. In the screenshot below, I have a couple of components which make up a WAMP stack (Windows Operating System – Apache – MySQL – PHP).
The Service Deployment Definition
Next, we have to define how you wish to deploy this Service Definition – called the "Service Deployment Definition". This identifies the different options for implementing your service: location awareness (such as which Availability Zone), sizing (micro, small, large, etc.), and networking. Note how I chose to deploy to the VPC instance we discussed earlier, as well as hard-coded some of the location awareness. All of this can be exposed to the user so they choose where to deploy this service.
Note how you can also hard-code the AWS-specific parameters for deployment – SSH Instance Key Name, Security Group, and VPC Public IP – or prompt the user to enter their own information.
The next series of configurations enables you to define the Components you wish to include in this deployment definition, the compute resource allocation (AWS sizes), network resources (such as VPC IP addressing), and any post-deployment actions you wish to perform (such as a compliance check to ensure your cloud deployments are within your standards).
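To see how those deployment-definition fields line up with an actual EC2 launch, here is a sketch. The field names on the left are hypothetical (CLM's real schema is internal); the keyword arguments on the right are the standard EC2 `RunInstances` parameters as boto3 names them:

```python
# Hypothetical input field names; the output keys match boto3's
# EC2 run_instances() keyword arguments.
def to_run_instances_kwargs(deploy_def):
    kwargs = {
        "ImageId": deploy_def["ami_id"],
        "InstanceType": deploy_def["size"],          # sizing option (micro, small, ...)
        "MinCount": 1,
        "MaxCount": deploy_def.get("count", 1),
        "KeyName": deploy_def["ssh_key_name"],       # SSH Instance Key Name
        "SecurityGroupIds": deploy_def["security_groups"],
    }
    if "subnet_id" in deploy_def:                    # VPC deployment target
        kwargs["SubnetId"] = deploy_def["subnet_id"]
    return kwargs
```

Whether each value is hard-coded in the deployment definition or prompted from the user, it ends up as one of these launch parameters.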
Now that you have a defined Service Blueprint, you can leverage this blueprint to create an actual service – what the end user will actually be able to request. Let’s look at this next section.
Part 4 – Services and Service Offerings
In this section, you actually define the Service based on A) Service Blueprint Deployment Definitions and B) Options. We covered the Deployment Defs above but didn't talk much about options. Options are simply that – options – extended to the user as part of service request processing. Things like AWS Instance Sizes, Region, and Availability Zones can all be offered as "options".
You can offer a Service as simple as the one below (basically, IaaS consisting of a single Windows 2008 AMI):
Or something more elaborate with various options. You can see the difference from the user’s perspective:
Next stop, service request!
Part 5 – Service Offering Request
This step is probably what you have already seen. A user logs in as a cloud user, selects a Service Offering (like the ones shown above), and then submits the request. What happens? In general terms:
- The CLM Platform Manager submits a Web Services call with an XML request to the BAO Amazon EC2 Adapter, which consists of your service offering request.
- BAO transforms the input XML into pieces that mean something to AWS – one of those is the AMI ID to spin instances from – and calls the AWS API.
- The AWS API takes the request, spins up the number of instances requested in the right network construct (such as a specific VPC), applies the security group, and assigns IP addressing.
- CLM (via BAO) polls the AWS API to know when the request has completed (or failed).
- Once completed, CLM returns the details of the requested instances (such as public IP address information).
- If your Service Blueprint contains deployable software, CLM then enrolls your AWS instance into BSA, after which Software Deployment Jobs run to install the application stack.
CLM displays a "Running" instance to the cloud end user, which looks like this:
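The polling step in that flow is a plain state-watching loop. A generic sketch of the pattern (the function and its failure states are my illustration, not CLM's implementation; in EC2 terms the states would come from `DescribeInstances`):

```python
import time

def wait_for_state(get_state, target="running", timeout=300, interval=5,
                   sleep=time.sleep):
    """Poll get_state() until it reports the target state (or a terminal
    failure), the way an orchestrator polls the AWS API after a request."""
    waited = 0
    while waited <= timeout:
        state = get_state()
        if state == target:
            return state
        if state in ("terminated", "failed"):  # illustrative terminal states
            raise RuntimeError(f"provisioning ended in state {state!r}")
        sleep(interval)
        waited += interval
    raise TimeoutError("instance never reached the target state")
```

With boto3 you would typically let a built-in waiter (`get_waiter("instance_running")`) do this for you, but the loop above is what any such waiter boils down to.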
Hopefully this has given you a glimpse into the "behind the scenes" workings of CLM and AWS! We welcome any comments you may have regarding this solution!
BMC, BMC Software, the BMC logos, and other BMC marks are trademarks or registered trademarks of BMC Software, Inc. in the U.S. and/or certain other countries.