Focus on Configuration Management Debt First

It has been a few years since Managing Software Debt was published, and I regularly think about how it could have been better. The content was useful to put in a book and aligns with current movements such as DevOps, Microservices, Continuous Delivery, Lean and Agile. On the other hand, I may have tended to focus on Technical Debt at the beginning of the book more than its importance for managing software debt warranted. It has been my experience that managing technical debt (the activities that a team or team members choose not to do well now and that will impede future development if left undone) is less important than the other four types of software debt. In fact, here are the priorities that I typically used with companies to help them manage software debt more effectively:

  1. Configuration Management Debt: Integration and release management become more risky, complex, and error-prone.
  2. Platform Experience Debt: The availability of people to work on software changes is becoming limited or cost-prohibitive.
  3. Design Debt: The cost of adding features is increasing toward the point where it is more than the cost of writing from scratch.
  4. Quality Debt: There is a diminishing ability to verify the functional and technical quality of software.

Along the way, I might work with teams on processes and techniques that help reduce technical debt, but that was to get them ready to take advantage of the efficiencies that result from the other four. I wonder if the popularity of the topic “technical debt” and my background as a developer led me to focus chapters 2 through 4 on managing technical debt. No matter the reasoning, I think it is good to discuss the reasons for demoting it in priority with regard to managing software debt on your team and in your organization.

Why Configuration Management Debt First?

The value of software is only potential value until it is put into users’ hands. There can be many roadblocks in an organization’s processes to getting software into users’ hands:

  • Proliferation of long-lived branches
  • Overburdened release engineering and operations teams
  • Poor integration processes across architecture components and scaled team delivery
  • High coupling with centrally managed architecture element/component
  • Too many variations/versions of the software supported in production
  • Code changes that feel too risky and take too long to validate before releasing into production
  • Poor documentation practices
  • Too many hand-offs between teams in order to release software to users

As the tagline of chapter 6, “Configuration Management Debt,” says:

If releases are like giving birth, then you must be doing something wrong.
— Robert Benefield

In organizations that have effective configuration management practices, it is common to see deployment pipelines with a smaller number of hand-offs between teams, architectures that tend to be more malleable, and efficient validation processes. What is interesting about this list is that its items align with managing three other types of software debt more effectively:

  • Smaller number of hand-offs: Platform Experience Debt
  • Malleable architectures: Design Debt
  • Efficient validation processes: Quality Debt

What I have found is that by focusing on Configuration Management Debt, it is simpler to identify the aspects of the integration and release management process that need to be tackled in order to get working software into the hands of users sooner. Reducing these bottlenecks in organizational processes and practices also leads to further optimizations in the future.

Alignment with Today’s Industry Conversations

It is interesting to note that this focus aligns well with the tenets of the DevOps movement. Moving toward Feature Team configurations, in which a team has all of the capabilities needed to design, implement, validate, integrate, configure, deploy and maintain its software, reduces hand-offs and increases quality because the team that builds the software maintains it as well. It’s amazing how quality goes up when a team member on rotation has to respond to production issues in the middle of the night.

Not only does this align with DevOps, but malleable architectures also tend to align with the Microservices movement. Given Feature Teams, we can take advantage of Conway’s Law, rather than be a victim of it, to align team delivery with providing a specific capability in the architecture. Since these capabilities are more specific, their implementations tend to be smaller and therefore easier to change later. There are still operational issues to overcome in order to make this an efficient approach. Platforms such as Cloud Foundry, which provide application runtimes and access to services with low operational overhead, will make Microservices-based architectures even more approachable and efficient to attain.

The Agile movement has already encouraged significant changes in how we validate software. Continuous Delivery has continued to push the validation efficiencies forward. As more teams become aware and more effective with test frameworks, tools, platforms and SaaS offerings we will continue to see more efficient validation processes in our delivery pipelines.

There is a great book by Jez Humble, Lean Enterprise: How High Performance Organizations Innovate at Scale, that I recommend if you want to explore the topics above further. Let me know in the comments section if you have any additional or different thoughts around focusing on configuration management debt first. I’m always interested in learning from others tackling difficult problems and the approaches that they see working.

Deploying Meteor Apps to Cloud Foundry

Recently, I was playing around with Meteor to test out how our Cloud Foundry deployments would handle web socket connections from client browsers. For those that haven’t seen Meteor before, it is a platform for building web and mobile applications that update automatically when changes are made by other clients. For instance, if our application has cards that move between columns on a Kanban board and someone else moves a card in their browser session, then the card would also move in my browser if I had it open when the event occurred. Meteor uses MongoDB as its data persistence layer, so our Cloud Foundry environment had to have MongoDB available as a service in the marketplace.

The Cloud Foundry Meteor Buildpack

When I started there wasn’t a de facto Meteor buildpack that worked with Cloud Foundry, at least that I could find. I did find a Heroku Meteor buildpack, which provided many of the pieces needed for Cloud Foundry Meteor application deployments. I originally forked the repo but then created a new repo with a name that declared the buildpack as being Cloud Foundry oriented. You can check out the Cloud Foundry Meteor buildpack repo on GitHub. The major changes needed for Cloud Foundry had to do with where the MongoDB and root URL environment variables, which must be set for Meteor to start the application successfully, were being pulled from. In Cloud Foundry these are pulled by convention from the VCAP_SERVICES environment variable, rather than being set from the command line as in Heroku.

Creating a Meteor Application

You can install the Meteor binaries using the following command:

curl | sh

Once installed, creating your first example Meteor application is as simple as running the following in the directory of your choice:

meteor create --example todos

You can change the name of your application from “todos” to something else if you’d like.

Deploying a Meteor Application to Cloud Foundry

Now that we have an example Meteor application, let’s push it to a Cloud Foundry environment. In order to push our application we will need to be logged into a Cloud Foundry environment using the Cloud Foundry CLI. Here is how I do this in our Cloud Foundry environment:

cf login -a -o myorg -u csterwa

After I put in my password, I’m logged in. We can now use the Cloud Foundry CLI to push our application. First, let’s create a MongoDB instance for our persistence layer that our Meteor application will bind to and use.

cf create-service mongodb default todos-mongo

We named our MongoDB instance “todos-mongo”. We can now create a file called manifest.yml in our Meteor application’s top-level directory with the following contents:

applications:
- name: todos
  memory: 512M
  buildpack: <Cloud Foundry Meteor buildpack URL>
  services:
  - todos-mongo

This manifest sets the name of the application to todos and reserves 512 MB of memory for the application to use. It also tells Cloud Foundry to use the Cloud Foundry Meteor buildpack to bundle up the application for running on Cloud Foundry. And finally, it binds the application to a service named todos-mongo, the name of the MongoDB instance that we just created. With this manifest.yml file created in our top-level Meteor application directory, it is now time to push our application to Cloud Foundry.

cf push

Since we have the manifest.yml we don’t need to send any arguments to the Cloud Foundry CLI push command. It should now upload our application files, build a deployment package, bind the application to our MongoDB instance and then start the application. When it is finished it will say “App started” and provide a URL, based on the application name and the default domain of the Cloud Foundry environment, to access the running application. You should now be able to open up multiple browser windows to the URL provided and see changes in one show up in the other.

I’m looking forward to hearing from others on their experiences deploying Meteor applications using the Cloud Foundry Meteor buildpack. If you have any feedback or issues, please add them to the issue tracker on the GitHub repo.

Node.js, MySQL and Redis on Cloud Foundry

The Cloud Foundry marketplace provides services that applications deployed into Cloud Foundry can take advantage of. Service plans are added into a Cloud Foundry marketplace by connecting their associated Service Brokers as I described in a previous post.

Recently, I was making a demo Node.js application which used MySQL for storing data and Redis to enable scaled Express session management across application instances. This article is intended to describe how to bind MySQL and Redis service instances to your application deployed into Cloud Foundry. The service brokers used in the Cloud Foundry environment that I deployed the demo Node.js application into were the open source MySQL and Redis service brokers from Pivotal:


Node.js and Cloud Foundry

To deploy a Node.js application into Cloud Foundry the proper buildpack must be available. You can check whether the Node.js buildpack is installed in your Cloud Foundry environment by running the following command:

$ cf buildpacks
buildpack          position   enabled   locked
nodejs_buildpack   3          true      false

Don’t fret if the Node.js buildpack is not available in your Cloud Foundry environment. There is another way to use the buildpack when you deploy your application. When deploying applications into Cloud Foundry we use the `cf` CLI and the `push` command, so deploying an application with the Node.js buildpack would look like this:

$ cf push my_app_01 -b

Adding MySQL to Node.js in Cloud Foundry

As mentioned at the beginning of this article, we will assume that the open source MySQL and Redis service brokers are available in your Cloud Foundry marketplace. You can check if these services are available in your marketplace from a shell:

$ cf marketplace
service   plans                     description
p-mysql   100mb-dev, 1gb-dev        A MySQL service for application development and testing
p-redis   shared-vm, dedicated-vm   Redis service to provide a key-value store

To create a MySQL service instance we can run the following:

$ cf create-service p-mysql 100mb-dev my_service_01

Once we have a service instance we can bind it to our application:

$ cf bind-service my_app_01 my_service_01

Now that the service instance is bound to our application, we need to use the service in our Node.js application. When a service instance is bound to an application, information on how to connect with the service is typically put into the application’s environment. You can view an application’s environment from a shell like so:

$ cf env my_app_01

Services by convention will put their environment information into the VCAP_SERVICES variable. An example of VCAP_SERVICES might look like the following:

{
  "p-mysql": [
    {
      "credentials": {
        "hostname": "[some IP]",
        "jdbcUrl": "jdbc:mysql://[some IP]:3306/[gen DB name]?user=[gen username]\u0026password=[gen password]",
        "name": "[gen DB name]",
        "password": "[gen password]",
        "port": 3306,
        "uri": "mysql://[gen username]:[gen password]@[some IP]:3306/[gen DB name]?reconnect=true",
        "username": "[gen username]"
      },
      "label": "p-mysql",
      "name": "my_service_01",
      "plan": "100mb-dev",
      "tags": []
    }
  ]
}
For MySQL it will provide a full connection URI that can be used by the Node.js mysql module to interact with the service instance. To do this, we can write some JavaScript to pull the environment information we need from VCAP_SERVICES so we can execute statements against our MySQL service instance. First, we need to install the Node.js mysql module:

$ npm install mysql --save

Next, we need code for local development with the Node.js mysql module. Just put in the username, password and database name that you will create for local development.

var mysql = require('mysql');
var connectionInfo = {
  user: 'myuser',
  password: 'aPassword',
  database: 'mydb'
};

Now we will pluck the environment credentials and connection information for our MySQL service instance for when we deploy into Cloud Foundry. We check whether the VCAP_SERVICES environment variable is defined; if it is, we know that the application is deployed into Cloud Foundry.

if (process.env.VCAP_SERVICES) {
  var services = JSON.parse(process.env.VCAP_SERVICES);
  var mysqlConfig = services["p-mysql"];
  if (mysqlConfig) {
    var node = mysqlConfig[0];
    connectionInfo = {
      host: node.credentials.hostname,
      port: node.credentials.port,
      user: node.credentials.username,
      password: node.credentials.password,
      database: node.credentials.name
    };
  }
}

We can then create a generic function to execute SQL commands against the MySQL database we’re connected to.

exports.query = function(query, callback) {
  var connection = mysql.createConnection(connectionInfo);
  connection.query(query, function(queryError, result) {
    callback(queryError, result);
    connection.end();
  });
};

A query using this function might look like:

db.query('select * from tasks order by due_date', function(err, result) {
  // handle err and result here
});

Adding Redis to Node.js in Cloud Foundry

Adding Redis is similar to adding MySQL. To create a Redis service instance we can run the following:

$ cf create-service p-redis shared-vm my_cache_01

Once we have a service instance we can bind it to our application:

$ cf bind-service my_app_01 my_cache_01

The VCAP_SERVICES environment information for Redis looks just a little different but is accessed in the same way.

{
  "p-redis": [
    {
      "credentials": {
        "host": "[some IP]",
        "password": "[gen password]",
        "port": 55645
      },
      "label": "p-redis",
      "name": "my_cache_01",
      "plan": "shared-vm",
      "tags": []
    }
  ]
}

Express is not able to share user sessions across application instances out of the box. In order to support shared sessions across application instances we can use Redis with Express on Node.js. The connect-redis module enables the use of Redis for Express session management. To install connect-redis run the following:

$ npm install connect-redis --save

Once the connect-redis module is available we can use it in our code. We must pass the Express session module into connect-redis so that it can enhance session management:

module.exports = function(session) {
  var RedisStore = require('connect-redis')(session);
  var options = {};

And again we pull out the relevant connection and credentials from the VCAP_SERVICES environment variable available to our application:

  if (process.env.VCAP_SERVICES) {
    var services = JSON.parse(process.env.VCAP_SERVICES);
    var redisConfig = services["p-redis"];
    if (redisConfig) {
      var node = redisConfig[0];
      options = {
        host: node.credentials.host,
        port: node.credentials.port,
        pass: node.credentials.password
      };
    }
  }

And finally make the Redis store available with the configuration options as a function:

  return {
    getRedisStore: function() {
      return new RedisStore(options);
    }
  };
};

Finally, we can configure our application to use Redis as the store for Express sessions:

app.use(session({
  store: cacheConfig.getRedisStore(),
  secret: 'mysecret'
}));

Scaling Application Instances

With Cloud Foundry, it is simple to scale the number of instances for our running application. After we push our application again to Cloud Foundry:

$ cf push my_app_01

We can scale the number of running instances by using the `cf scale` command:

$ cf scale my_app_01 -i 3

And we can verify the number of instances running by looking up our application information from Cloud Foundry:

$ cf app my_app_01
name        requested state   instances   memory   disk   urls
my_app_01   started           3/3         256M     1G

Now a user should be able to authenticate to your application, and if you save the user in the session it will be available across all running instances through the Redis session store.


It is incredible how fast you can deploy scalable applications with add-on services that enable fairly complex behavior. I’ll post some more code examples for applications in more languages and other platforms over the next couple months.

Service Brokers in Cloud Foundry

An Introduction to Cloud Foundry Service Brokers

Service Brokers are a critical aspect of Cloud Foundry. They enable service instances to be created and bound to applications deployed into Cloud Foundry. An example could be asking a Service Broker to create a MongoDB cluster instance, magically done behind the scenes from within the broker. Once created, an app can bind that MongoDB cluster instance to its environment enabling access to information such as its DB URL.

Service Broker API

In order for Service Brokers to do this they must implement the following API endpoints (see the Service Broker API documentation):

GET /v2/catalog - returns information about service offerings and service plans provided by service broker
PUT /v2/service_instances/:instance_id - creates a new service instance with the specified unique ID, service offering and service plan
PATCH /v2/service_instances/:instance_id - updates an existing service instance to change service plan
PUT /v2/service_instances/:instance_id/service_bindings/:binding_id - bind a service instance to an application by specified unique binding ID
DELETE /v2/service_instances/:instance_id/service_bindings/:binding_id - remove a service application binding for a specified service instance
DELETE /v2/service_instances/:instance_id - remove a service instance

When an application is bound to a service, the service binding PUT response can include credentials that the application can use to work with the service. This response comes in JSON format and is populated into the application’s environment. An example could be binding my application to the MongoDB service and then running the following with the Cloud Foundry CLI for the deployed application:

$ cf env card-maps
Getting env variables for app card-maps in org myorg / space dev as myuser...
{
  "mongodb-2.6": [
    {
      "credentials": {
        "db": "db",
        "host": "",
        "hostname": "",
        "password": "genpassword",
        "port": 11001,
        "url": "mongodb://genusername:genpassword@",
        "username": "genusername"
      },
      "label": "mongodb-2.6",
      "name": "mymongodb",
      "plan": "free",
      "tags": []
    }
  ]
}

Registering Service Brokers with Cloud Foundry

Once you have a Service Broker, you can use the Cloud Foundry CLI to enable it in Cloud Foundry:

$ cf create-service-broker my-service-broker-name username password https://my-broker.example.com

The credentials, username and password, sent through the command line must be valid credentials for accessing the broker URL. These could be basic auth credentials configured at the service broker URL. Once your service broker is created you can see it in the list of service brokers and then enable access to its service offerings and service plans for your Cloud Foundry application developers:

$ cf service-brokers
Getting service brokers as myuser...
name   url

$ cf service-access
Getting service access as myuser...
broker: my-service-broker-name
   service      plan   access   orgs
   offering-1   free   all

$ cf enable-service-access offering-1

Notice that we must enable access to a specific service offering. You can be more specific about what aspects of the service offerings you want to enable:

$ cf help enable-service-access
enable-service-access - Enable access to a service or service plan for one or all orgs

Usage:
   cf enable-service-access SERVICE [-p PLAN] [-o ORG]

Options:
   -p   Enable access to a specified service plan
   -o   Enable access for a specified organization

More information on how to manage your Service Brokers can be found in the Cloud Foundry documentation.

Community Contributed Services

There are some existing community-contributed, open source services available for Cloud Foundry. Here are a few that might be of interest, though this is definitely not an exhaustive list:

And there are and will be many more commercial and free open source Service Brokers for Cloud Foundry. Just do a search if you have a specific service you’d like to enable Cloud Foundry application access to. Now go forth and bind services to your Cloud Foundry applications!

Managing Software Debt 2015 Predictions

2015 is fast approaching and for the first time I felt the urge to make public predictions on what the new year will bring through the lens of Managing Software Debt. Interestingly enough, many of these predictions revolve around topics this blog was constructed to discuss. This blog has been a long time coming for me, since my last official blog post prior to November was in 2012, so let me take a slight diversion to describe my reflections on how this blog came into being.

My last blog had run its course after 178 posts from 2005 to 2012 and it seemed to me that we were Beyond Agile and I needed a different focus. After publishing Managing Software Debt: Building for Inevitable Change in 2010, a reference on 5 types of software debt and how they can be managed and monitored, it seemed that focusing on leading change through Continuous Delivery and reducing Configuration Management Debt had the best results in my consulting experience. Since that time, the DevOps movement has hit a full stride and embodies this approach for leading change in organizations. I think we are on the precipice of our next industry revolution to reduce the cost of change for software as cloud has become a common enterprise platform of choice. With that, here are areas of the software development, deployment and operations ecosystem that I think are going to see significant interest in 2015.

  • PaaS (Platform as a Service)
  • Twelve-Factor Apps and Microservices
  • Feature Teams around Business Capabilities
  • Deployment Orchestration


PaaS (Platform as a Service)

OK, OK. I know that my new role is Product Owner for PaaS at CenturyLink Cloud, but there is a good reason I took this role in November (time to note my disclaimer that the views expressed on this blog are mine alone). In my last few roles, the impact of infrastructure development enabling deployment of applications and services to cloud platforms was significant enough to give me pause. It seemed that the problems being solved were similar across development efforts: load balancing apps & services, event publishing & processing, service discovery, continuous delivery pipelining, blue/green deployments and infrastructure provisioning, just to name a few. We had looked at multiple vendor offerings, but what we saw prior to 2014 had been, in our opinion, immature. As 2014 progressed, the popularity of containers kicked off a valuable conversation about separation of concerns for deployment and infrastructure. Containers provided a piece of the solution that allowed infrastructure and application/service development to execute within their own life cycles, beyond what Puppet and Chef had done for configuration management.

After I heard about the Product Owner role here at CenturyLink Cloud, where I’d be working with a team to deliver PaaS based on Cloud Foundry, I updated my knowledge of the PaaS space. I had already been playing with Docker and had worked with others who implemented a build pipeline for our services based on Docker containers. Through this process it was clear that there were still significant problems to solve beyond the ease-of-development story that Docker was just an introductory chapter to. While researching Cloud Foundry again, after trying it out back in 2012 and deciding not to use it, I was pleasantly surprised how far the platform had come. Immediately I took notice of an aspect of Cloud Foundry called Warden, which manages isolated, ephemeral, and resource-controlled environments (aka containers). It had been around since November 2011 and had the full Cloud Foundry ecosystem surrounding it, which looked to help solve more of the problems in the infrastructure and application/service deployment space than other alternatives available today.

As more enterprise developers see the benefits of PaaS, such as ease of development and deployment with low overhead for configuration management and operations involvement, there will be a large upswing in its adoption. Also, folks in operations will further benefit and enable the DevOps culture as PaaS continues to mature allowing for more self-service provisioning and deployment while still getting the visibility needed to support service level agreements and control operating costs. Look for more on PaaS in this blog as we learn more about how customers innovate and deliver on our upcoming PaaS offering.

Twelve-Factor Apps and Microservices

My last post on The Imminent Acceleration of the Twelve-Factor Apps Approach already discussed why I think The Twelve-Factor App and Microservices will be big in 2015. Some folks in our industry are already realizing the benefits of these approaches. It is probably not surprising that this realization was found mostly outside of the monolithic ESB and SOA tool vendor offerings. Instead, the rise of SPAs (single page apps), RESTful APIs, OpenID/OAuth, cloud computing, open source and many other emergent approaches from the community have been the potion for increased adoption of service-oriented architectures. Look for significant changes in enterprise application development to support The Twelve-Factor App approach and implementation of Microservices (or at least less monolithic architectures) as 2015 progresses.

Feature Teams around Business Capabilities

As the chapter “Platform Experience Debt” from the Managing Software Debt book explained, organizations are more flexible when there is clearer alignment of teams to business capabilities and ultimately to their users. The rise of DevOps has brought to light the cultural changes needed to be more adaptive (and dare I say “agile”) and ignited a follow-on movement from the software development centric agile movement to incorporating production operations as an aspect of team responsibility. This has made it even more apparent that the Feature Teams collaborative team configuration helps drive alignment of business capabilities with user needs. Not only that, this organizational alignment creates less brittle boundaries between teams than component (or functional) team configurations do. DevOps has definitely made its mark with Gene Kim’s book The Phoenix Project and the outbreak of DevOps oriented conferences around the world. Look for the DevOps movement to accelerate as more real world change stories are shared in the new year.

Deployment Orchestration

With the rise of PaaS, The Twelve-Factor App and Microservices, the need for more effective deployment orchestration tools and processes will grow. Enterprise Operations groups will need strategies for dealing with the increased frequency of deployment, proliferation of environments and running processes, and hybrid models with internal data centers and cloud architectures used in conjunction with public cloud provider offerings. Continuous Integration servers and access to server instances are not enough. The number of deployment models, platforms, and network topologies will make governance a mess. In 2015 we will need to start finding solutions to orchestrating deployments from build to validation to deployment and to governance. There is a lot of room to innovate and make a significant impact in Deployment Orchestration. I’m excited to see what is coming to solve Deployment Orchestration challenges in the new year.

This is my first attempt at a predictions blog. Let me know how I did on Twitter (@csterwa). I’m always looking for feedback.

Have a Happy New Year 2015!

The Imminent Acceleration of the Twelve-Factor Apps Approach

Software that takes advantage of what the cloud has to offer must take on a new shape. Applications deployed to the cloud need to show resilience in the face of VMs or server instances going down or not behaving as expected. Applications deployed to cloud infrastructures also need to scale automatically and temporarily, based on patterns of usage, to keep costs reasonable. Learning faster than your competition is of utmost importance to many who deploy software into a cloud, therefore deploying updates on a continuous basis becomes critical. There is an approach to developing and deploying cloud applications that addresses these needs: The Twelve-Factor App.

The Twelve Factors of this approach are:

I. Codebase – One codebase tracked in revision control, many deploys
II. Dependencies – Explicitly declare and isolate dependencies
III. Config – Store config in the environment
IV. Backing Services – Treat backing services as attached resources
V. Build, release, run – Strictly separate build and run stages
VI. Processes – Execute the app as one or more stateless processes
VII. Port binding – Export services via port binding
VIII. Concurrency – Scale out via the process model
IX. Disposability – Maximize robustness with fast startup and graceful shutdown
X. Dev/prod parity – Keep development, staging, and production as similar as possible
XI. Logs – Treat logs as event streams
XII. Admin processes – Run admin/management tasks as one-off processes

These factors tend to drive highly cohesive applications and services that also exhibit low coupling with their dependencies. Each application or service should be in a single repository that can be built and deployed on its own. Rather than branching or forking to create multiple deployable versions of the application, we should externalize configuration so that the maintenance costs are not growing exponentially to support all versions of the application. These applications should also have understandable boundaries and implement a single concept or responsibility to increase their disposability. Creating stateless applications and services enables horizontal scaling and redundancy across multiple nodes in a cloud. Deploying multiple instances of an application or service results in multiple ports that can be load balanced and registered for use by others.

Some of these factors are no-brainers for many experienced developers and some are not as easy to implement due to access restrictions to configuration management and flexible operating platforms. For those of us fortunate enough to use “real” cloud infrastructure, today’s cloud vendors are starting to provide services that enable Twelve-Factor Apps. The Cloud Foundry Foundation officially launched this week as part of the Linux Foundation Collaborative Projects. Cloud Foundry is the most robust and mature open source Platform-as-a-Service (PaaS) offering in the market. With Cloud Foundry it has become much easier to apply The Twelve-Factor App approach to applications and services. Buildpacks enable polyglot software development across applications and services that still deploy into a single PaaS. Software can be deployed to a PaaS with a single command: `cf push <appname>`. Deployments with zero scheduled downtime can be performed using a Blue/Green Deployment approach, described well by Martin Fowler, that keeps the existing version of the software (Blue) up while deploying and testing the new version (Green), which can run alongside or replace the old version when validated as ready. And binding to dependent services, such as DB clusters and message queues, can be as simple as creating a named service deployment with `cf create-service mongodb cluster my-mongo-cluster` and then binding it to your application with `cf bind-service my-app-1 my-mongo-cluster`. It is incredible how Cloud Foundry can make creating and continuously delivering a Twelve-Factor App orders of magnitude easier than constructing and managing your own infrastructure or building atop an Infrastructure-as-a-Service (IaaS) platform.
When you need to optimize the deployment environment, you can always take a single application or service and adjust it to deploy into your own data center or an IaaS platform, but since your applications take The Twelve-Factor approach you don’t have to do it for all of them.

I hope this article provides awareness of The Twelve-Factor App approach and how a PaaS, such as Cloud Foundry, can enable effective use of it. I recommend clicking the links provided in this article to read more details about the approach and how to take advantage of what Cloud Foundry has to offer. In the next few years, Twelve-Factor Apps will become more the norm in software development shops and PaaS will be a common deployment platform. Take the time to read about and experiment with Cloud Foundry to get a leg up on this imminent acceleration of PaaS and The Twelve-Factor App approach.