The Monolith: Penny Wise, Pound Foolish

I just arrived in Santa Clara for the 2016 Cloud Foundry Summit. I’ve been meaning to write this article for some time but for some reason haven’t found the time to do so. Although not typical nor preferable, I’m sitting in a hotel lobby with a beer and a burger with a bit of time on my hands before the Unconference begins later this afternoon. This must be the right time to get this off my mind and out into public to spark feedback and conversation.

The Monolith

So let's jump right in head first. I make no secret that I'm a fan of the Microservices and Serverless architectural approaches, Continuous Delivery, Containers and Feature Teams, which I consider aspects of the Cloud Native development approach. This is not just theoretical admiration but hands-on experience from years of applying these approaches and technologies, and consulting on their use. Over and over again I've seen that they lead to more scalable, understandable and, most importantly, changeable systems. The alternative, the Monolith architecture approach, has, in my experience, tended in the other direction on all of these dimensions.

The Monolith seems easier to develop and understand, at least for a while. Yet it is well understood that "project size is easily the most significant determinant of effort, cost and schedule [for a software project]," as Steve McConnell pointed out in his book "Software Estimation: Demystifying the Black Art". Let's break down some of the reasons why a Monolith approach usually turns into a Big Ball of Mud.

The Scalability Problem

When most people think of scaling they think of "web scale", cluster size or problems only the largest companies have to deal with. Although these are areas where the Monolith architecture approach breaks down, there are much more common scalability situations to deal with.

Have you ever tried to get two or more teams working on the same codebase? What problems start to emerge? How do we handle them? These are the questions that all software development organizations should ask themselves. Here are some of the problems that I’ve seen and how organizations attempted to solve them:

  • Trampling each other's changes tends to lead towards complicated source control management, build and deployment approaches including:
    • Slow code review processes
    • Delayed merging of code
    • Reverse code trampling into developer environment
    • Not allowing multiple teams to work on the same sections of the codebase
  • Breaking changes to dependent code leads to:
    • Processes for more up front design reviews
    • More comprehensive documentation of proposed changes

To most, these problems seem like just the way software development works. As time passes, the unapologetic grip of legacy code takes hold of a system. Unintended coupling between different parts of the system creeps in, increasing cyclomatic complexity as McCabe defined it in 1976. Later studies showed that systems with higher cyclomatic complexity tend to have lower cohesion and higher defect counts. Rather than responding with apathy and cynicism to the onslaught of legacy code, this should scream at us that there must be a better way.
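To make that concrete, here is a rough sketch of what cyclomatic complexity measures for a JavaScript function: one plus the number of decision points in the code. Counting keywords with a regex is a simplification I'm using for illustration; real analysis tools walk the parsed syntax tree.

```javascript
// Rough sketch of McCabe's cyclomatic complexity: 1 plus the number of
// decision points (branches) in a piece of source code. The regex-based
// counting here is an illustrative approximation, not a real analyzer.
function roughCyclomaticComplexity(source) {
  var decisionPoints = source.match(/\b(if|for|while|case|catch)\b|&&|\|\|/g);
  return 1 + (decisionPoints ? decisionPoints.length : 0);
}

// A straight-line function has complexity 1: one path through the code.
var simple = 'function add(a, b) { return a + b; }';

// Each branch adds a path a test suite must cover.
var branchy =
  'function grade(n) {' +
  '  if (n >= 90) { return "A"; }' +
  '  if (n >= 80) { return "B"; }' +
  '  return "C";' +
  '}';

console.log(roughCyclomaticComplexity(simple));  // 1
console.log(roughCyclomaticComplexity(branchy)); // 3
```

As coupling creeps in, branches multiply, and this number (and the testing burden with it) climbs.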

The Understandability Problem

How long does it take for a brand-new team member to get up to speed with your system, or even one aspect of it? Can a new team member be trusted to check in at least somewhat significant code on day 1 without causing great harm? Do all of your existing team members understand the system well enough to make changes?

We've all been there. Downloading an existing codebase with tens or hundreds of thousands of lines of code. Taking a look at it to figure out how you should get started understanding it. Even the best-intentioned and most capable teams can find themselves in this dilemma of the Monolith architecture. Where did it all go wrong?

Requirements change. Business changes. Code changes. When developing in a Monolith architecture, these shifts tend to result in leaky abstractions, muddled models, and broken bounded contexts. I believe this is due to the convenience, a kind of moral hazard, of making changes that cross bounded contexts inside a single system. These changes are not made with ill intent or even without forethought. As the great Gerald Weinberg said:

Things are the way they are because they got that way.

The Changeability Problem

Have you ever been in a situation where you've had to tell a product manager, stakeholder or customer that a seemingly simple change may take weeks or months because of technical debt? Were there ever situations where an architectural redesign of a component within the system was deemed too risky, with a significant and unknown duration to resolve? Are there components of the system that team members aren't supposed to touch, with wrappers around them to interact with them from the outside?

I’ve seen all of these and have also been part of a team that has brought such codebases back to life. How did we do it? Break it apart into understandable, isolated, cohesive pieces through careful refactoring of the code and infrastructure over long periods of time. Why did we do it? Because we knew it would lead to accelerated delivery of improvements to those components. We could then scale up the number of teams developing on the system by creating feature teams around those components.

Changeability is one of the most important aspects of any software system. Smaller, cohesive components that are decoupled from other components allow the system to stay malleable as the need for changes emerges. An application should not equal a single codebase. Instead, an application is the interaction of multiple components that together provide the desired behavior and value to users. How can we get to this desired state?

Which Way Next?

It is my opinion that the Cloud Native development approach is going to change how we deliver software even more than most people think. The days of developing using monolithic software architectures are numbered. The platforms, organizational team structures and architectural patterns & practices that will support this change are maturing quickly. My upcoming articles will describe how I think some of this will play out in the industry. Make way for the Cloud Native way!

Deploying Meteor Apps to Cloud Foundry

Recently, I was playing around with Meteor to test out how our Cloud Foundry deployments would handle web socket connections from client browsers. For those that haven't seen Meteor before, Meteor is a platform for building web and mobile applications that get updated automatically when changes are made through other clients. For instance, if our application has cards that move between columns on a Kanban board and someone else moves a card in their browser session, then the card would also move in my browser if I had it open at the time the event occurred. Meteor uses MongoDB as its data persistence layer, so our Cloud Foundry environment had to have MongoDB available as a service in the marketplace.

The Cloud Foundry Meteor Buildpack

When I started there wasn't a de facto Meteor buildpack that worked with Cloud Foundry, at least not that I could find. I did find a Heroku Meteor buildpack which provided many of the pieces needed for Cloud Foundry Meteor application deployments. I originally forked the repo but then created a new repo with a name that declared the buildpack as being Cloud Foundry oriented. You can check out the Cloud Foundry Meteor buildpack repo on GitHub. The major changes needed for Cloud Foundry had to do with where the MongoDB and root URL environment variables, which must be set for Meteor to start the application successfully, were being pulled from. In Cloud Foundry these are pulled by convention from the VCAP_SERVICES environment variable rather than being set from the command line as in Heroku.
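As a sketch of the wiring the buildpack's start script has to do, the Node.js snippet below derives MONGO_URL and ROOT_URL from the VCAP_* environment variables Cloud Foundry provides. The credential field names and service layout here are assumptions based on common brokers; they depend on the MongoDB service in your marketplace.

```javascript
// Sketch: derive the environment variables Meteor requires (MONGO_URL and
// ROOT_URL) from the VCAP_* variables Cloud Foundry sets on an application.
// Assumption: the bound MongoDB service exposes a "uri" credential.
function findMongoUri(services) {
  // Scan every bound service instance for a mongodb:// connection URI.
  var uri = null;
  Object.keys(services).forEach(function (label) {
    services[label].forEach(function (instance) {
      var candidate = instance.credentials && instance.credentials.uri;
      if (!uri && candidate && candidate.indexOf('mongodb://') === 0) {
        uri = candidate;
      }
    });
  });
  return uri;
}

var vcapServices = JSON.parse(process.env.VCAP_SERVICES || '{}');
var vcapApplication = JSON.parse(process.env.VCAP_APPLICATION || '{}');

var mongoUri = findMongoUri(vcapServices);
if (mongoUri) {
  process.env.MONGO_URL = mongoUri;
}
// Cloud Foundry lists the application's routes under "uris" in
// VCAP_APPLICATION; Meteor needs one of them as its ROOT_URL.
if (vcapApplication.uris && vcapApplication.uris.length > 0) {
  process.env.ROOT_URL = 'http://' + vcapApplication.uris[0];
}
```

On Heroku the equivalent values are set by hand with config vars, which is exactly the difference the buildpack changes had to account for.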

Creating a Meteor Application

You can install the Meteor binaries using the following command:

curl | sh

Once installed, creating your first example Meteor application is as simple as running the following in the directory of your choice:

meteor create --example todos

You can change the name of your application from “todos” to something else if you’d like.

Deploying a Meteor Application to Cloud Foundry

Now that we have an example Meteor application, let's push it to a Cloud Foundry environment. In order to push our application we need to be logged in. We will use the Cloud Foundry CLI to log in and push our application. Here is how I do this in our Cloud Foundry environment:

cf login -a -o myorg -u csterwa

And after I put in my password I'm logged in. We can now use the Cloud Foundry CLI to push our application. First, let's create a MongoDB instance for the persistence layer that our Meteor application will bind to and use.

cf create-service mongodb default todos-mongo

We named our MongoDB instance "todos-mongo". We can now create a file called manifest.yml in our Meteor application's top-level directory with the following contents:

applications:
- name: todos
  memory: 512M
  buildpack: [Cloud Foundry Meteor buildpack URL]
  services:
  - todos-mongo

This manifest sets the name of the application to todos and reserves 512 MB of memory for the application to use. It also tells Cloud Foundry to use the Cloud Foundry Meteor buildpack to bundle up the application for running on Cloud Foundry. And finally, it binds to a service named todos-mongo, the name of the MongoDB instance that we just created. With this manifest.yml file created in our top-level Meteor application directory, it is now time to push our application to Cloud Foundry.

cf push

Since we have the manifest.yml we don't need to pass any arguments to the Cloud Foundry CLI push command. It should now upload our application files, build a deployment package, bind the application to our MongoDB instance and then start the application. When it is finished it will say "App started" and provide a URL, on the default domain of the Cloud Foundry environment, to access the running application. You should now be able to open multiple browser windows to the URL provided and see changes in one show up in the others.

I'm looking forward to hearing from others on their experiences deploying Meteor applications using the Cloud Foundry Meteor buildpack. If you have any feedback or issues please add them to the issue tracker on the GitHub repo.

Node.js, MySQL and Redis on Cloud Foundry

The Cloud Foundry marketplace provides services that applications deployed into Cloud Foundry can take advantage of. Service plans are added into a Cloud Foundry marketplace by connecting their associated Service Brokers as I described in a previous post.

Recently, I was making a demo Node.js application which used MySQL for storing data and Redis to enable scaled Express session management across application instances. This article describes how to bind MySQL and Redis service instances to your application deployed into Cloud Foundry. The service brokers used in the Cloud Foundry environment that I deployed the demo Node.js application into were the open source MySQL and Redis service brokers from Pivotal.


Node.js and Cloud Foundry

To deploy a Node.js application into Cloud Foundry the proper buildpack must be available. You can check if the Node.js buildpack is installed in your Cloud Foundry environment by running the following command:

$ cf buildpacks
buildpack          position   enabled   locked
nodejs_buildpack   3          true      false

Don't fret if the Node.js buildpack is not installed in your Cloud Foundry environment. You can point to the buildpack directly when you deploy your application. When deploying applications into Cloud Foundry we use the `cf` CLI and the `push` command, so deploying with the Node.js buildpack specified looks like this:

$ cf push my_app_01 -b

Adding MySQL to Node.js in Cloud Foundry

As mentioned at the beginning of this article, we will assume that the open source MySQL and Redis service brokers are available in your Cloud Foundry marketplace. You can check if these services are available in your marketplace from a shell:

$ cf marketplace
service   plans                     description
p-mysql   100mb-dev, 1gb-dev        A MySQL service for application development and testing
p-redis   shared-vm, dedicated-vm   Redis service to provide a key-value store

To create a MySQL service instance we can run the following:

$ cf create-service p-mysql 100mb-dev my_service_01

Once we have a service instance we can bind it to our application:

$ cf bind-service my_app_01 my_service_01

Now that the service instance is bound to our application, we need to use the service in our Node.js application. When a service instance is bound to an application it typically puts information on how to connect with the service inside the application’s environment. You can view an application’s environment from a shell like so:

$ cf env my_app_01

Services by convention will put their environment information into the VCAP_SERVICES variable. An example of VCAP_SERVICES might look like the following:

{
  "p-mysql": [
    {
      "credentials": {
        "hostname": "[some IP]",
        "jdbcUrl": "jdbc:mysql://[some IP]:3306/[gen DB name]?user=[gen username]\u0026password=[gen password]",
        "name": "[gen DB name]",
        "password": "[gen password]",
        "port": 3306,
        "uri": "mysql://[gen username]:[gen password]@[some IP]:3306/[gen DB name]?reconnect=true",
        "username": "[gen username]"
      },
      "label": "p-mysql",
      "name": "my_service_01",
      "plan": "100mb-dev",
      "tags": []
    }
  ]
}
For MySQL the service provides a full connection URI that can be used by the Node.js mysql module to interact with the service instance. To use it, we can write some JavaScript that pulls what we need from VCAP_SERVICES to execute statements against our MySQL service instance. First, we need to install the Node.js mysql module:

$ npm install mysql --save

Next, we set up default connection info for local development with the Node.js mysql module. Just fill in the username, password and database name that you create for local development.

var mysql = require('mysql');
var connectionInfo = {
  user: 'myuser',
  password: 'aPassword',
  database: 'mydb'
};

Now we will pluck the credentials and connection information for our MySQL service instance from the environment for when we deploy into Cloud Foundry. We check whether the VCAP_SERVICES environment variable is defined; if it is, we know the application is deployed into Cloud Foundry.

if (process.env.VCAP_SERVICES) {
  var services = JSON.parse(process.env.VCAP_SERVICES);
  var mysqlConfig = services["p-mysql"];
  if (mysqlConfig) {
    var node = mysqlConfig[0];
    connectionInfo = {
      host: node.credentials.hostname,
      port: node.credentials.port,
      user: node.credentials.username,
      password: node.credentials.password,
      database: node.credentials.name
    };
  }
}

We can then create a generic function to execute SQL commands against the MySQL database we’re connected to.

exports.query = function(query, callback) {
  var connection = mysql.createConnection(connectionInfo);
  connection.query(query, function(queryError, result) {
    connection.end();
    callback(queryError, result);
  });
};
A query using this function might look like:

db.query('select * from tasks order by due_date', function(err, result) {
  // render the resulting task rows here
});

Adding Redis to Node.js in Cloud Foundry

Adding Redis is similar to adding MySQL. To create a Redis service instance we can run the following:

$ cf create-service p-redis shared-vm my_cache_01

Once we have a service instance we can bind it to our application:

$ cf bind-service my_app_01 my_cache_01

The VCAP_SERVICES environment information for Redis looks just a little different but is accessed in the same way.

{
  "p-redis": [
    {
      "credentials": {
        "host": "[some IP]",
        "password": "[gen password]",
        "port": 55645
      },
      "label": "p-redis",
      "name": "my_cache_01",
      "plan": "shared-vm",
      "tags": []
    }
  ]
}

Express is not able to share user sessions across application instances out of the box. To support shared sessions across application instances we can use Redis with Express on Node.js. The connect-redis module enables the use of Redis for Express session management. To install connect-redis run the following:

$ npm install connect-redis --save

Once the connect-redis module is available we can use it in our code. We must pass the Express session module into connect-redis so that it can enhance session management:

module.exports = function(session) {
  var RedisStore = require('connect-redis')(session);
  var options = {};

And again we pull out the relevant connection and credentials from the VCAP_SERVICES environment variable available to our application:

  if (process.env.VCAP_SERVICES) {
    var services = JSON.parse(process.env.VCAP_SERVICES);
    var redisConfig = services["p-redis"];
    if (redisConfig) {
      var node = redisConfig[0];
      options = {
        host: node.credentials.host,
        port: node.credentials.port,
        pass: node.credentials.password
      };
    }
  }

And finally we expose the Redis store, configured with those options, through a function:

  return {
    getRedisStore: function() {
      return new RedisStore(options);
    }
  };
};

Finally, we can configure our application to use Redis as the store for Express sessions:

app.use(session({
  store: cacheConfig.getRedisStore(),
  secret: 'mysecret'
}));

Scaling Application Instances

With Cloud Foundry, it is simple to scale the number of instances of our running application. First, we push our application to Cloud Foundry again:

$ cf push my_app_01

We can scale the number of running instances by using the `cf scale` command:

$ cf scale my_app_01 -i 3

And we can verify the number of instances running by looking up our application information from Cloud Foundry:

$ cf app my_app_01
name        requested state   instances   memory   disk   urls
my_app_01   started           3/3         256M     1G

Now a user should be able to authenticate to your application, and if you save the user in the session it will be available across all running instances through the Redis session store.
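The reason a shared store matters can be seen in a toy sketch (plain JavaScript, not Express or Redis themselves): a session written by one instance into a per-instance in-memory store is invisible to another instance, while a shared store, the role Redis plays here, makes it visible everywhere.

```javascript
// Toy illustration: app "instances" that share one session store see each
// other's sessions; instances with separate in-memory stores do not.
function makeInstance(store) {
  return {
    saveSession: function (id, data) { store[id] = data; },
    loadSession: function (id) { return store[id] || null; }
  };
}

// Shared store, as with a Redis session store: a session saved through
// instance a is visible when the load balancer routes to instance b.
var sharedStore = {};
var a = makeInstance(sharedStore);
var b = makeInstance(sharedStore);
a.saveSession('sess-1', { user: 'csterwa' });
console.log(b.loadSession('sess-1')); // the session saved by instance a

// Separate in-memory stores, Express's default: instance d never sees
// the session instance c created, so the user appears logged out.
var c = makeInstance({});
var d = makeInstance({});
c.saveSession('sess-2', { user: 'csterwa' });
console.log(d.loadSession('sess-2')); // null
```

This is exactly why the Express default in-memory session store breaks once `cf scale` runs more than one instance behind the router.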


It is incredible how fast you can deploy scalable applications with add-on services that enable fairly complex behavior. I'll post more code examples for applications in other languages and on other platforms over the next couple of months.