Node.js, MySQL and Redis on Cloud Foundry

The Cloud Foundry marketplace provides services that applications deployed into Cloud Foundry can take advantage of. Service plans are added into a Cloud Foundry marketplace by connecting their associated Service Brokers as I described in a previous post.

Recently, I was building a demo Node.js application that used MySQL for storing data and Redis to enable scaled Express session management across application instances. This article describes how to bind MySQL and Redis service instances to an application deployed into Cloud Foundry. The service brokers used in the Cloud Foundry environment that I deployed the demo Node.js application into were the open source MySQL and Redis service brokers from Pivotal:

cf-mysql-broker: https://github.com/cloudfoundry/cf-mysql-release
cf-redis-broker: https://github.com/pivotal-cf/cf-redis-release

Node.js and Cloud Foundry

To deploy a Node.js application into Cloud Foundry, the proper buildpack must be available. You can check whether the Node.js buildpack is installed in your Cloud Foundry environment by running the following command:

$ cf buildpacks
...
buildpack          position   enabled   locked   filename
nodejs_buildpack   3          true      false    nodejs_buildpack-offline-v1.0.4.zip
...

Don’t fret if the Node.js buildpack is not available in your Cloud Foundry environment. You can point directly at a buildpack when you deploy. Applications are deployed into Cloud Foundry with the `cf` CLI and the `push` command, so deploying an application with the Node.js buildpack looks like this:

$ cf push my_app_01 -b https://github.com/cloudfoundry/nodejs-buildpack.git
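
One related detail worth noting: Cloud Foundry tells your application which port to listen on through the PORT environment variable. A minimal sketch of an Express entry point that works both locally and on Cloud Foundry might look like this (the local fallback port of 3000 is just an assumption):

var express = require('express');
var app = express();

app.get('/', function(req, res) {
  res.send('Hello from Cloud Foundry!');
});

// Cloud Foundry provides the port to bind to via the PORT environment
// variable; fall back to 3000 for local development.
var port = process.env.PORT || 3000;
app.listen(port, function() {
  console.log('Listening on port ' + port);
});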

Adding MySQL to Node.js in Cloud Foundry

As mentioned at the beginning of this article, we will assume that the open source MySQL and Redis service brokers are available in your Cloud Foundry marketplace. You can check if these services are available in your marketplace from a shell:

$ cf marketplace
...
service   plans                     description
p-mysql   100mb-dev, 1gb-dev        A MySQL service for application development and testing
p-redis   shared-vm, dedicated-vm   Redis service to provide a key-value store

To create a MySQL service instance we can run the following:

$ cf create-service p-mysql 100mb-dev my_service_01

Once we have a service instance we can bind it to our application:

$ cf bind-service my_app_01 my_service_01

Now that the service instance is bound to our application, we need to use the service in our Node.js application. When a service instance is bound to an application it typically puts information on how to connect with the service inside the application’s environment. You can view an application’s environment from a shell like so:

$ cf env my_app_01

Services by convention will put their environment information into the VCAP_SERVICES variable. An example of VCAP_SERVICES might look like the following:

{
  "VCAP_SERVICES": {
    "p-mysql": [
      {
        "credentials": {
          "hostname": "[some IP]",
          "jdbcUrl": "jdbc:mysql://[some IP]:3306/[gen DB name]?user=[gen username]\u0026password=[gen password]",
          "name": "[gen DB name]",
          "password": "[gen password]",
          "port": 3306,
          "uri": "mysql://[gen username]:[gen password]@[some IP]:3306/[gen DB name]?reconnect=true",
          "username": "[gen username]"
        },
        "label": "p-mysql",
        "name": "my_service_01",
        "plan": "100mb-dev",
        "tags": [
          "mysql"
        ]
      }
    ]
  }
}

For MySQL, the binding provides a full connection URI that can be used by the Node.js mysql module to interact with the service instance. We can write some JavaScript to pull the information we need out of VCAP_SERVICES in order to execute statements against our MySQL service instance. First, we need to install the Node.js mysql module:

$ npm install mysql --save

Next, we set up default connection information for local development with the Node.js mysql module. Fill in the username, password, and database name that you created for local development.

var mysql = require('mysql');
var connectionInfo = {
  user: 'myuser',
  password: 'aPassword',
  database: 'mydb'
};

Now we will pluck the credentials and connection information for our MySQL service instance out of the environment for when we deploy into Cloud Foundry. We check whether the VCAP_SERVICES environment variable is defined; if it is, we know the application is running in Cloud Foundry.

if (process.env.VCAP_SERVICES) {
  var services = JSON.parse(process.env.VCAP_SERVICES);
  var mysqlConfig = services["p-mysql"];
  if (mysqlConfig) {
    var node = mysqlConfig[0];
    connectionInfo = {
      host: node.credentials.hostname,
      port: node.credentials.port,
      user: node.credentials.username,
      password: node.credentials.password,
      database: node.credentials.name
    };
  }
}

We can then create a generic function to execute SQL commands against the MySQL database we’re connected to.

exports.query = function(query, callback) {
  var connection = mysql.createConnection(connectionInfo);
  connection.query(query, function(queryError, result) {
    callback(queryError, result);
  });
  // end() waits for queued queries to finish before closing the connection.
  connection.end();
};

A query using this function might look like:

db.query('select * from tasks order by due_date', function(err, result) {
  ...
});
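
One caution: building SQL strings by hand invites injection problems. The Node.js mysql module supports ? placeholders, so a hedged variation of the query helper with parameters (the tasks table and owner_id column are illustrative only) might look like:

exports.queryWithParams = function(query, params, callback) {
  var connection = mysql.createConnection(connectionInfo);
  // The mysql module escapes each value in params and substitutes it
  // for the matching ? placeholder in the query string.
  connection.query(query, params, function(queryError, result) {
    callback(queryError, result);
  });
  connection.end();
};

// Usage:
db.queryWithParams('select * from tasks where owner_id = ?', [ownerId], function(err, result) {
  ...
});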

Adding Redis to Node.js in Cloud Foundry

Adding Redis is similar to adding MySQL. To create a Redis service instance we can run the following:

$ cf create-service p-redis shared-vm my_cache_01

Once we have a service instance we can bind it to our application:

$ cf bind-service my_app_01 my_cache_01

The VCAP_SERVICES environment information for Redis looks just a little different but is accessed in the same way.

{
  "VCAP_SERVICES": {
    "p-redis": [
      {
        "credentials": {
          "host": "[some IP]",
          "password": "[gen password]",
          "port": 55645
        },
        "label": "p-redis",
        "name": "my_cache_01",
        "plan": "shared-vm",
        "tags": [
          "pivotal",
          "redis"
        ]
      }
    ]
  }
}

Out of the box, Express is not able to share user sessions across application instances. To support shared sessions across application instances, we can use Redis with Express on Node.js. The connect-redis module enables the use of Redis with Express session management. To install connect-redis, run the following:

$ npm install connect-redis --save

Once the connect-redis module is available we can use it in our code. We must pass the Express session module into connect-redis so that it can enhance session management:

module.exports = function(session) {
  var RedisStore = require('connect-redis')(session);
  var options = {};

And again we pull the relevant connection information and credentials out of the VCAP_SERVICES environment variable available to our application:

  if (process.env.VCAP_SERVICES) {
    var services = JSON.parse(process.env.VCAP_SERVICES);
    var redisConfig = services["p-redis"];
    if (redisConfig) {
      var node = redisConfig[0];
      options = {
        host: node.credentials.host,
        port: node.credentials.port,
        pass: node.credentials.password
      };
    }
  }

And finally make the Redis store available with the configuration options as a function:

  return {
    getRedisStore: function() {
      return new RedisStore(options);
    }
  };
};

Finally, we can configure our application to use Redis as the store for Express sessions:

app.use(session({
  store: cacheConfig.getRedisStore(),
  secret: 'mysecret'
}));
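
For completeness, here is a sketch of how these pieces might be wired together in the application's entry point; the cache-config.js module name is an assumption drawn from the code above, not anything mandated by connect-redis:

var express = require('express');
var session = require('express-session');
// cache-config.js exports the function defined above, which takes the
// session module and returns an object exposing getRedisStore().
var cacheConfig = require('./cache-config')(session);

var app = express();
app.use(session({
  store: cacheConfig.getRedisStore(),
  secret: 'mysecret',
  resave: false,
  saveUninitialized: false
}));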

Scaling Application Instances

With Cloud Foundry, it is simple to scale the number of instances of our running application. First we push our application to Cloud Foundry again:

$ cf push my_app_01

We can scale the number of running instances by using the `cf scale` command:

$ cf scale my_app_01 -i 3

And we can verify the number of instances running by looking up our application information from Cloud Foundry:

$ cf app my_app_01
...
name        requested state   instances   memory   disk   urls
my_app_01   started           3/3         256M     1G     my_app_01.mycfdomain.com

Now a user should be able to authenticate to your application, and if you save the user in the session it will be available across all running instances through the Redis session store.
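
For example, storing the authenticated user in the session is all it takes for every instance to see it. A sketch of a login handler (the authenticate helper below is hypothetical):

app.post('/login', function(req, res) {
  // authenticate(...) is a hypothetical helper; on success we store the
  // user in the session, which connect-redis persists to Redis so any
  // application instance can read it on later requests.
  authenticate(req.body.username, req.body.password, function(err, user) {
    if (err || !user) {
      return res.status(401).send('Invalid credentials');
    }
    req.session.user = user;
    res.redirect('/');
  });
});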

Conclusion

It is incredible how fast you can deploy scalable applications with add-on services that enable fairly complex behavior. I’ll post more code examples for applications in other languages and on other platforms over the next couple of months.

Service Brokers in Cloud Foundry

An Introduction to Cloud Foundry Service Brokers

Service Brokers are a critical aspect of Cloud Foundry. They enable service instances to be created and bound to applications deployed into Cloud Foundry. An example could be asking a Service Broker to create a MongoDB cluster instance, magically done behind the scenes from within the broker. Once created, an app can bind that MongoDB cluster instance to its environment enabling access to information such as its DB URL.

Service Broker API

In order for Service Brokers to do this, they must implement the following Service Broker API endpoints:

GET /v2/catalog - returns information about the service offerings and service plans provided by the service broker
PUT /v2/service_instances/:instance_id - creates a new service instance with the specified unique ID, service offering, and service plan
PATCH /v2/service_instances/:instance_id - updates an existing service instance to change its service plan
PUT /v2/service_instances/:instance_id/service_bindings/:binding_id - binds a service instance to an application with the specified unique binding ID
DELETE /v2/service_instances/:instance_id/service_bindings/:binding_id - removes a service-application binding for a specified service instance
DELETE /v2/service_instances/:instance_id - removes a service instance
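
To make the catalog endpoint concrete, a minimal sketch of a GET /v2/catalog response might look like the following; the GUIDs, names, and descriptions are made-up placeholders:

{
  "services": [
    {
      "id": "b7a9b0fe-3c2a-4a0c-9d6a-0000example1",
      "name": "offering-1",
      "description": "An example service offering",
      "bindable": true,
      "plans": [
        {
          "id": "0f3e1a9d-5b7c-4d2e-8f1a-0000example2",
          "name": "free",
          "description": "A free plan for development and testing"
        }
      ]
    }
  ]
}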

When an application is bound to a service, the service binding PUT response can include credentials that the application can use to work with the service. This response comes in JSON format and is populated into the application’s environment. For example, after binding my application to the MongoDB service, I can inspect the deployed application’s environment with the Cloud Foundry CLI:

$ cf env card-maps
Getting env variables for app in org myorg / space dev as myuser...
OK

System-Provided:
{
  "VCAP_SERVICES": {
    "mongodb-2.6": [
      {
        "credentials": {
          "db": "db",
          "host": "10.100.1.22",
          "hostname": "10.100.1.22",
          "password": "genpassword",
          "port": 11001,
          "url": "mongodb://genusername:genpassword@10.100.1.22:11001/db",
          "username": "genusername"
        },
        "label": "mongodb-2.6",
        "name": "mymongodb",
        "plan": "free",
        "tags": [
          "nosql",
          "document",
          "mongodb"
        ]
      }
    ]
  }
}
...

Registering Service Brokers with Cloud Foundry

Once you have a Service Broker, you can use the Cloud Foundry CLI to enable it in Cloud Foundry:

$ cf create-service-broker my-service-broker-name username password http://my-service-broker.mysite.com/api

The username and password sent on the command line must be valid credentials for accessing the broker URL; typically these are HTTP basic auth credentials configured on the service broker itself. Once your service broker is created, you can see it in the list of service brokers and then enable access to its service offerings and service plans for your Cloud Foundry application developers:

$ cf service-brokers
Getting service brokers as myuser...

name                     url
my-service-broker-name   http://my-service-broker.mysite.com/api

$ cf service-access
Getting service access as myuser...
broker: my-service-broker-name
   service      plan   access   orgs
   offering-1   free   none

$ cf enable-service-access offering-1

Notice that we must enable access to a specific service offering. You can be more specific about what aspects of the service offerings you want to enable:

$ cf help enable-service-access
NAME:
   enable-service-access - Enable access to a service or service plan for one or all orgs

USAGE:
   cf enable-service-access SERVICE [-p PLAN] [-o ORG]

OPTIONS:
   -p   Enable access to a specified service plan
   -o   Enable access for a specified organization

More information on how to manage your Service Brokers can be found in the Cloud Foundry documentation.

Community Contributed Services

There are a number of community-contributed, open source services available for Cloud Foundry, and there are and will be many more commercial and free open source Service Brokers. Just do a search if there is a specific service you’d like to give Cloud Foundry applications access to. Now go forth and bind services to your Cloud Foundry applications!

Managing Software Debt 2015 Predictions

2015 is fast approaching, and for the first time I felt the urge to make public predictions on what the new year will bring through the lens of Managing Software Debt. Interestingly enough, many of these predictions revolve around topics this blog was constructed to discuss. This blog has been a long time coming for me, since my last official blog post prior to November was in 2012, so let me take a slight diversion to describe my reflections on how this blog came into being.

My last blog gettingagile.com had run its course after 178 posts from 2005 to 2012 and it seemed to me that we were Beyond Agile and I needed a different focus. After publishing Managing Software Debt: Building for Inevitable Change in 2010, a reference on 5 types of software debt and how they can be managed and monitored, it seemed that focusing on leading change through Continuous Delivery and reducing Configuration Management Debt had the best results in my consulting experience. Since that time, the DevOps movement has hit a full stride and embodies this approach for leading change in organizations. I think we are on the precipice of our next industry revolution to reduce the cost of change for software as cloud has become a common enterprise platform of choice. With that, here are areas of the software development, deployment and operations ecosystem that I think are going to see significant interest in 2015.

  • PaaS (Platform as a Service)
  • Twelve-Factor Apps and Microservices
  • Feature Teams around Business Capabilities
  • Deployment Orchestration

PaaS

OK, OK. I know that my new role is Product Owner for PaaS at CenturyLink Cloud, but there is a good reason I took this role in November (time to note my disclaimer that the views expressed on this blog are mine alone). In my last few roles, the impact of infrastructure development enabling deployment of applications and services to cloud platforms was significant enough to give me pause. It seemed that the problems being solved were similar across development efforts: load balancing apps & services, event publishing & processing, service discovery, continuous delivery pipelining, blue/green deployments, and infrastructure provisioning, just to name a few. We had looked at multiple vendor offerings, but what we saw prior to 2014 had been, in our opinion, immature. As 2014 progressed, the popularity of containers kicked off a valuable conversation about separation of concerns for deployment and infrastructure. Containers provided a piece of the solution that allowed infrastructure and application/service development to execute within their own life cycles, beyond what Puppet and Chef had done for configuration management.

After I heard about the Product Owner role here at CenturyLink Cloud, where I’d be working with a team to deliver PaaS based on Cloud Foundry, I updated my knowledge of the PaaS space. I had already been playing with Docker and had worked with others who implemented a build pipeline for our services based on Docker containers. Through this process it was clear that there were still significant problems to solve beyond the ease-of-development story that Docker was just an introductory chapter to. While researching Cloud Foundry again, after trying it out back in 2012 and deciding not to use it, I was pleasantly surprised by how far the platform had come. Immediately I took notice of an aspect of Cloud Foundry called Warden, which manages isolated, ephemeral, and resource-controlled environments (aka containers). It had been around since November 2011 and had the full Cloud Foundry ecosystem surrounding it, which looked to help solve more of the problems in the infrastructure and application/service deployment space than other alternatives available today.

As more enterprise developers see the benefits of PaaS, such as ease of development and deployment with low overhead for configuration management and operations involvement, there will be a large upswing in its adoption. Also, folks in operations will further benefit and enable the DevOps culture as PaaS continues to mature allowing for more self-service provisioning and deployment while still getting the visibility needed to support service level agreements and control operating costs. Look for more on PaaS in this blog as we learn more about how customers innovate and deliver on our upcoming PaaS offering.

Twelve-Factor Apps and Microservices

My last post on The Imminent Acceleration of the Twelve-Factor Apps Approach already discussed why I think The Twelve-Factor App and Microservices will be big in 2015. Some folks in our industry are already realizing the benefits of these approaches. It is probably not surprising that this realization was found mostly outside of the monolithic ESB and SOA tool vendor offerings. Instead, the rise of SPAs (single page apps), RESTful APIs, OpenID/OAuth, cloud computing, open source and many other emergent approaches from the community have been the potion for increased adoption of service-oriented architectures. Look for significant changes in enterprise application development to support The Twelve-Factor Apps approach and implementation of Microservices (or at least less monolithic) as 2015 progresses.

Feature Teams around Business Capabilities

As the chapter “Platform Experience Debt” from the Managing Software Debt book explained, organizations are more flexible when there is clearer alignment of teams to business capabilities and, ultimately, to their users. The rise of DevOps has brought to light the cultural changes needed to be more adaptive (and dare I say “agile”) and ignited a follow-on movement from the software-development-centric agile movement to incorporating production operations as an aspect of team responsibility. This has made it even more apparent that the Feature Teams collaborative team configuration helps drive alignment of business capabilities with user needs. Not only that, this organizational alignment creates less brittle boundaries between teams than component (or functional) team configurations do. DevOps has definitely made its mark with Gene Kim’s book The Phoenix Project and the outbreak of DevOps oriented conferences around the world. Look for the DevOps movement to accelerate as more real world change stories are shared in the new year.

Deployment Orchestration

With the rise of PaaS, The Twelve-Factor App and Microservices, the need for more effective deployment orchestration tools and processes will grow. Enterprise Operations groups will need strategies for dealing with the increased frequency of deployment, proliferation of environments and running processes, and hybrid models with internal data centers and cloud architectures used in conjunction with public cloud provider offerings. Continuous Integration servers and access to server instances are not enough. The number of deployment models, platforms, and network topologies will make governance a mess. In 2015 we will need to start finding solutions to orchestrating deployments from build to validation to deployment and to governance. There is a lot of room to innovate and make a significant impact in Deployment Orchestration. I’m excited to see what is coming to solve Deployment Orchestration challenges in the new year.

This is my first attempt at a predictions blog. Let me know how I did on Twitter (@csterwa). I’m always looking for feedback.

Have a Happy New Year 2015!

The Imminent Acceleration of the Twelve-Factor Apps Approach

Software that takes advantage of what the cloud has to offer must take on a new shape. Applications deployed to the cloud need to show resilience in the face of VMs or server instances going down or not behaving as expected. Applications deployed to cloud infrastructures also tend to need to scale temporarily and automatically, based on patterns of usage, to keep costs reasonable. Learning faster than your competition is of utmost importance to many who deploy software into a cloud, therefore deploying updates on a continuous basis becomes critical. There is an approach to developing and deploying applications into the cloud that embraces these needs: The Twelve-Factor App.

The Twelve Factors of this approach are:

I. Codebase – One codebase tracked in revision control, many deploys
II. Dependencies – Explicitly declare and isolate dependencies
III. Config – Store config in the environment
IV. Backing Services – Treat backing services as attached resources
V. Build, release, run – Strictly separate build and run stages
VI. Processes – Execute the app as one or more stateless processes
VII. Port binding – Export services via port binding
VIII. Concurrency – Scale out via the process model
IX. Disposability – Maximize robustness with fast startup and graceful shutdown
X. Dev/prod parity – Keep development, staging, and production as similar as possible
XI. Logs – Treat logs as event streams
XII. Admin processes – Run admin/management tasks as one-off processes

These factors tend to drive highly cohesive applications and services that also exhibit low coupling with their dependencies. Each application or service should be in a single repository that can be built and deployed on its own. Rather than branching or forking to create multiple deployable versions of the application, we should externalize configuration so that maintenance costs are not growing exponentially to support all versions of the application. These applications should also have understandable boundaries and implement a single concept or responsibility to increase their disposability. Creating stateless applications and services enables horizontal scaling and redundancy across multiple nodes in a cloud. Deploying multiple instances of an application or service results in multiple ports that can be load balanced and registered for use by others.
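
To make factor III (Config) concrete, here is a small sketch of storing config in the environment instead of in per-deploy branches, written in Node.js; the variable names are illustrative assumptions:

// All deploy-specific values come from the environment, so one build
// can be promoted unchanged from development to staging to production.
var config = {
  port: process.env.PORT || 3000,
  databaseUrl: process.env.DATABASE_URL,      // e.g. injected by the platform
  sessionSecret: process.env.SESSION_SECRET   // never hard-coded in the repo
};

module.exports = config;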

Some of these factors are no-brainers for many experienced developers, and some are not as easy to implement due to access restrictions to configuration management and flexible operating platforms. For those of us fortunate enough to use “real” cloud infrastructure, today’s cloud vendors are starting to provide services that enable Twelve-Factor Apps. The Cloud Foundry Foundation officially launched this week as part of the Linux Foundation Collaborative Projects. Cloud Foundry is the most robust and mature open source Platform-as-a-Service (PaaS) offering in the market. With Cloud Foundry it has become much easier to apply The Twelve-Factor App approach to applications and services. Buildpacks enable polyglot software development across applications and services that still deploy into a single PaaS. Software can be deployed to a PaaS using a single command: `cf push <appname>`. Zero-scheduled-downtime deployments can be performed using a Blue/Green Deployment approach, described well by Martin Fowler, that keeps existing versions of the software (Blue) up while deploying and testing new versions (Green) that can run alongside or replace the old version (Blue) when validated as ready. And binding to dependent services, such as DB clusters and message queues, can be simplified to creating a named service deployment with `cf create-service mongodb cluster my-mongo-cluster` and then binding it to your application with `cf bind-service my-app-1 my-mongo-cluster`. It is incredible how Cloud Foundry can make creating and continuously delivering a Twelve-Factor App orders of magnitude easier than constructing and managing your own infrastructure or building on an Infrastructure-as-a-Service (IaaS) platform. When you need to optimize the deployment environment, you can always take a single application or service and adjust it to deploy into your own data center or an IaaS platform; since your applications take The Twelve-Factor App approach, you don’t have to do it for all of them.
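
As a sketch of the Blue/Green mechanics on Cloud Foundry (the application and route names here are assumptions), the cutover can be done entirely with route mapping:

$ cf push my-app-green                                  # deploy the new (Green) version alongside Blue
$ cf map-route my-app-green mycfdomain.com -n my-app    # Green starts receiving production traffic
$ cf unmap-route my-app-blue mycfdomain.com -n my-app   # retire Blue once Green is validated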

I hope this article provides awareness of The Twelve-Factor App approach and how a PaaS such as Cloud Foundry can enable effective use of it. I recommend clicking the links provided in this article to read more details about the approach and how to take advantage of what Cloud Foundry has to offer. In the next few years, Twelve-Factor Apps will become more the norm in software development shops and PaaS will be a common deployment platform. Take the time to read about and experiment with Cloud Foundry to get a leg up on this imminent acceleration of PaaS and The Twelve-Factor App approach.

Managing Configuration Management Debt

Introduction

In my book “Managing Software Debt: Building for Inevitable Change”, chapter 6 discusses Configuration Management Debt and approaches to help keep this type of debt to a minimum. Since I originally wrote this chapter, a lot has gone on in the software development and operations communities. There have been a couple of huge uprisings: Continuous Delivery and DevOps. Of course, the ideas expressed as part of these movements are not necessarily new; rather, they stand on the proverbial shoulders of giants.

Managing Configuration Management Debt

What I wrote in this chapter is only a small portion of the ideas that are being feverishly adopted across our industry. It is exciting to see just how fast truly Lean approaches such as DevOps are motivating teams, enhancing customer experiences, and driving business results. Applications of DevOps and Continuous Delivery are quickly maturing and seem to be a stronger force than even the Manifesto for Agile Software Development.

I would like to review the 3 areas that organizations can work on to improve their Configuration Management performance to put some perspective on how these hold up 5 or so years later:

  • Transfer some Configuration Management responsibilities to the teams
  • Increase automated feedback
  • Track issues in a collaborative manner

Transfer Responsibilities to Teams

Release Engineering teams tend to get overburdened as time goes by. They tend to be centralized and support more teams than they have people. The DevOps movement has put a spotlight onto this problem and provided cultural, process, and technical approaches for reducing the risk of overburdening Release Engineering teams. Taking a DevOps approach implies moving some responsibilities for configuration management to the team.

Increase Automated Feedback

To reduce the impact of increasing the team’s platform complexity by adding these responsibilities, organizations must invest in increasing automation and the developer user experience of automation tools. The other aspect of transferring responsibilities to teams from the book was automating install and rollback procedures. Going beyond Continuous Integration towards the fully automated build, test, deployment, and monitoring approaches contained in Continuous Delivery encompasses this idea. There has been a tremendous amount of innovation making Continuous Delivery more approachable, from the growth of cloud computing adoption, to Chef and Puppet for infrastructure automation, to the promise of Platform as a Service (PaaS).

Track Issues Collaboratively

Automated build pipelines and teams that have more responsibility for software operations lead us to ask who tracks issues. Again, the DevOps movement has pushed our organizations to think about the responsibility for production deployments. The more responsibility teams can take, given appropriate automation and developer user experience with the tools, the more responsive they will be to customer issues and operational changes needed around their software. It is my contention that the more responsibility teams have for production operations, the better production will operate.

Conclusion

In review, I probably wrote enough on Configuration Management Debt. In my consulting and product development activities since the book, I have realized that focusing on Configuration Management Debt first makes steps to address the other four types of software debt (technical, quality, design, and platform experience) easier to see. The build pipeline(s) and the spread of responsibilities can tell teams a lot about where they can find optimizations.

An Experience with Microservices Approach

James Lewis and Martin Fowler published an article on Microservices in March 2014. The tendencies of a microservices-based architecture were well laid out by these highly regarded authors. In this article I would like to share some first-hand learnings we had implementing software using the Microservices architecture approach.

The Why

To start with, let’s describe why we approached the software in this manner. When our team was forming into a cohesive unit, we were using existing legacy platform tools within a company on a new product in an adjacent market. These platform tools were fairly progressive, yet they were still under heavy development and showing the warts of a monolithic architecture approach accumulated over the previous 5-10 years. The platform had tight coupling and circular dependencies, and teams could not work in isolation on cross-cutting aspects of the platform such as the UI controls and client-side data stores. Also, there were performance issues on client and service APIs that were starting to become visible with larger customers who had more data to manage. Since we were creating a new product, we soon found that using the same platform tools and APIs was going to slow us down, and potentially we would inhibit other teams working on resolving these issues.

In my previous engagements, the architecture patterns that supported long-term needs were those that allowed for changeability. Changeability tended to go hand in hand with a *nix-like approach of components that do a single thing (Single Responsibility Principle) and involved low coupling with adjacent and/or dependent components (just check out the SOLID principles for more detailed information that every developer should learn). I had success with approaches that supported these 2 main ideas on many software teams, and as a consultant witnessed many more architectures that I would also deem successful even over time. The visibility into the infrastructure and service design at Netflix also influenced just how far we should go to develop software that would evolve naturally with the changes in the business. Thus we embarked on a journey to implement our software in a manner that would allow for flexible deployment of business capabilities in microservices.

The Domain

We were developing software for an adjacent market that we had co-defined with customers through years of consulting experience in the domain and running experiments for problem/solution fit using a Lean Startup approach. The business capability had become fairly coherent from a high-level domain model perspective. We knew the parts and how they would fit together in order to create our first MVP (Minimum Viable Product). The size of the business capability was still such that it involved multiple responsible components that each had their own logic and user interactions at the client and API. To not over-complicate our development, we decided to create RESTful stateless services based on Dropwizard, an authorization and external API consumer layer, and a client-side UI based on AngularJS. We used MongoDB as the main persistent storage due to the nature of the data we were supporting, and PostgreSQL for user permission management.

Even though we had sufficient learning to focus on delivering the MVP to customers there was still learning to be had with those customers we were co-creating the software with closely in a beta capacity in real world situations. This meant that we needed to absorb change in all aspects of the product, client-side or in our services. Not only that, we had to deploy those changes quickly to learn if they provided an actual solution to our customer’s need. We had an effective Continuous Delivery (CD) pipeline that allowed for all services and client-side UI to be built, tested from multiple perspectives, and deployed into staging and production environments. This also included a separate pipeline for Chef cookbooks that were used to bootstrap instances on Amazon EC2 from scratch. All of this infrastructure allowed us to deploy changes at any commit to master on any source code repository that was being watched by our CD pipelines.

The Product Owner had a button they could push at any time to deploy what was in staging into production without any scheduled downtime. This was enabled through our rolling deployment approach that involved taking vertical slices of our environment out of rotation, deploying to them, running smoke test verifications on that slice, and then putting them back into rotation and then continuing to the next slice. None of this necessitated a microservices approach although it was not much more difficult than other approaches I’ve had first hand knowledge of and it provided nice isolation of capabilities within the product.

And Finally, the Learnings…

There were many learnings that we came away with. Some were specific to the context of our company and others, the ones I will share here, were more general in nature. Of course, these are in retrospect and with some (OK, maybe a lot) of opinion baked into them and I hope they are useful to others whether or not they are followed directly or just spark conversations.

Aggregate Logging

Effective logging is essential for resolving issues in any software. When you have services and clients running across many instances, the need to aggregate logs becomes even more important. On top of that, if you have production access policies, such as those found in FDA, HIPAA, and PCI regulated environments just to name a few, then development teams are restricted from direct access to the running instances, personally identifiable data, and network traffic. Therefore logs must capture and identify not only levels of logging but also follow consistent patterns. Teams should discuss and agree on their logging patterns, including “backstops” for exceptions or unexpected issues that aren’t captured in the implementation code, and then pull these logs into a central service such as Logstash or Splunk.
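
As one possible sketch of an agreed-upon pattern plus a process-level backstop, using the winston logging library (an assumption for illustration, not necessarily what our team used):

var winston = require('winston');

// Structured JSON logs with a service identifier make aggregation and
// searching in a central service much easier.
var logger = winston.createLogger({
  level: 'info',
  format: winston.format.json(),
  defaultMeta: { service: 'tasks-service' },
  transports: [new winston.transports.Console()]
});

// Backstop: catch anything that escapes the implementation code so it
// still lands in the aggregated logs before the process exits.
process.on('uncaughtException', function(err) {
  logger.error('backstop: uncaught exception', { message: err.message, stack: err.stack });
  process.exit(1);
});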

Focus on Bounded Context of Services

Using techniques from Domain-Driven Design (DDD), with special attention paid to domain models, ubiquitous language, and bounded contexts, will help in defining where capabilities are to be separated into their own services. The time put in by the whole team in defining and understanding the language and bounded context of each capability in the domain enabled client-side code to easily separate access to each service without coupling calls across multiple services. We could have a pair of developers working on one view and service and another pair working on a separate view and service, typically without affecting each other’s work.

Lookup Configurations from Deployment Environment

When deploying into multiple environments, such as development, staging, and production, it is important to allow per-environment configuration. These environment configurations could include service endpoints, database access, logging, access tokens, and more. There are many techniques for setting configurations for lookup by running processes. Some examples are shell environment variables, lookup by URL with an XML or JSON response, and coordination services such as ZooKeeper and etcd. This allows operational configuration of services, and access policies for environment authorization tokens, to be supported.
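
A minimal sketch of the environment-variable technique, failing fast when a required value is missing (the variable names and defaults here are illustrative assumptions):

// Read per-environment settings at startup, failing fast when a
// required value is missing rather than running misconfigured.
function requireEnv(name) {
  var value = process.env[name];
  if (!value) {
    throw new Error('Missing required environment variable: ' + name);
  }
  return value;
}

var config = {
  environment: process.env.NODE_ENV || 'development',
  serviceEndpoint: requireEnv('TASKS_SERVICE_URL'),   // hypothetical endpoint variable
  accessToken: requireEnv('TASKS_ACCESS_TOKEN')       // hypothetical token variable
};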

Cohabitate Highly Cohesive Code

For some reason, putting code based on multiple languages into the same source code repository feels a bit dirty, at least it did for me. At the same time, there are many different aspects of a service within its bounded context that may necessitate multiple tools to be used. For example, we may want to provide shell scripts to deploy our code alongside the service’s business capability focused code. Some other aspects that should be considered as part of the service’s source code repository are Chef or Ansible instance configuration code, PaaS (Platform as a Service) configuration files, build scripts, instance launch automation scripts, and probably the most controversial suggestion I will make is also serving client-side code that specifically interacts with the service’s endpoints. Since serving client-side code from the service itself may be controversial, here is an example:

Given the following service API endpoints:

GET /api/items
PUT /api/items
POST /api/items/{id}

We might serve a JavaScript API client that has functions for interacting with each endpoint:

{
  getItems: function() { … },
  addItem: function(item) { … },
  updateItem: function(item) { … }
}

This allows the client-side code that interacts with the service to be tested and updated at the same time as the service itself. If the client-side code is put into a separate source code repository and is built and deployed separately, there will be situations where the client and service code changes are not independent, and code must be deployed in a specific order that is not in alignment with the Continuous Delivery pipeline.

Monitoring and Alerting are Essential

The flexibility of horizontally scaled stateless services and federated data comes at a cost. It is at times difficult to know the path that is being taken through the application and which instances are involved. Finding ways to identify specific service instances in logs, on clients, and in monitoring tools such as New Relic will reduce the Time to Resolution in many cases and keep you sane when tracking down the causes of issues.

On top of monitoring, finding ways to alert when issues need to be looked into, using tools such as PagerDuty, will help the team track down issues quickly after their introduction into the environment. Virtual machines get rebooted, instances fail, networks get blocked, syntax errors in configurations cause outages, and any number of other issues can cause problems in an environment. I recommend becoming familiar with the Fallacies of Distributed Computing to help think about the ways your software can fail in any distributed system. Even browser or mobile clients connecting with services form a distributed system and can fall victim to these fallacies.

Isolate Deployment Slices for Verification

To keep your services highly available, it is important to have a deployment process that allows changes to be introduced without scheduled downtime maintenance, especially if teams are deploying frequently, maybe multiple times per day, into an environment. Finding mechanisms to continue serving consumers of the service while deploying new versions makes deploying software a business decision rather than a technical hurdle to leap over. Our process was something like the following:

1. Have at least two isolated slices deployed in an environment
2. Take one isolated slice out of rotation and direct all consumers to the other slice(s)
3. Deploy to the out-of-rotation isolated slice
4. Run smoke tests that are configured to test the out-of-rotation isolated slice
5. If the smoke tests pass, bring the isolated slice back into rotation
6. Rinse and repeat with the other slice(s)

This overly simplified high level process overview has many complications that need to be resolved based on the configuration of the environment being deployed to. PaaS and other approaches to automate service deployment, networking, and configuration management can go a long way to help make this process less impactful to a team’s feature delivery by taking care of the complications through tools and APIs.

Conclusion

The Microservices architecture approach can provide effective boundaries between capabilities in a system and provide tremendous flexibility not easily attained through other more monolithic approaches. There is a cost to the microservices architecture approach in terms of a more complicated environment setup and deployment. These can be overcome through the learnings above and applying techniques provided online by many folks using this approach. If your team is creating new business capabilities I highly recommend taking a deeper look into how a microservices approach could help provide essential flexibility and scalability that is needed in modern software solutions.

From Test-Driven to DevOps and PaaS

Over a decade ago, two of the main practices that I looked for in teams were automated testing and continuous integration. If teams were able to do these well, then it seemed like the “golden bits” would be ready at all times for Operations to deploy. Many teams were thrilled to take delivery from planning through local development to continuous integration and then let Ops handle it from there.

Earlier in my career I had worked in a couple of companies where there was no separate Ops team. We had to deploy, manage, and maintain our own operational environments. I recognize now that not having the ability to take delivery from plan to operational made me feel uncomfortable.

Around a decade ago I started asking the teams I was working on to focus on “Push Button Release”. Even if we couldn’t access the deployment environment we would ensure that the number of steps to deploy our software was 1 or as close to 1 as we could possibly get. Our team should automate the deployment process as though we would have to manage the actual deployment ourselves. Not only that, we should also tend to the rollback deployment process in the same way. In fact, we created CI jobs that deployed and rolled back just to ensure that these automated scripts acted as expected after all the builds, tests, and packaging jobs completed.

A decade later our industry is tending towards new possibilities due to progressive and somewhat obsessive approaches revealed publicly by companies such as IMVU, Netflix, Etsy, and others. It is no longer enough for teams to hand off a validated build with a “push button” deployment script. Teams now run their own configuration management via Puppet, Chef, Ansible and Docker and are finding many different ways to manage the deployment environment configuration dynamically.

The next few years will bring many tools and platforms that will attempt to tackle deployment environment configuration challenges such as service discovery, log aggregation, monitoring, alerting and notification, attached storage, security, policy management, data transport, and more. It is an exciting time to see how we are progressing past automated testing and continuous integration, as an industry looking to automate all the way into operations. With the momentum of the DevOps movement and PaaS (Platform as a Service) innovation, I intend to be part of it as much as I can. Let’s make the promise of DevOps and PaaS a reality.