Sunday, 23 November 2014

Ruby and NO_PROXY

Proxy Servers

Like most workplaces, mine has a proxy server. Love them or hate them, they are here to stay. Recently we faced a problem: all HTTP traffic goes through the proxy, but there was one specific case where we did not want it to.

The Problem

When we sent a request to a particular URL (an internal address, so I can't share it), it would resolve to a load-balanced address, and for some reason (I'm not a networking guy) the proxy could not connect to it.

The Solution

What I needed to do was the following:

  1. I edited the /etc/hosts file and added an entry for IP_ADDRESS.
  2. I added export no_proxy="" so that requests to that host do not use the proxy.
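Together, the two changes would look something like this (the host name and address are placeholders, since the real ones are internal):

# /etc/hosts
10.0.0.42    internal-service.example

# shell profile (e.g. ~/.bashrc)
export no_proxy="internal-service.example"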
However, the fun did not stop there. We were using Faraday to connect to this server, and what I realised is that the default Ruby HTTP implementation (Net::HTTP) does not honour the no_proxy environment variable. After some research I found that the httpclient gem does. So all I needed to do was install the gem and use it as the Faraday adapter:

require 'faraday'

conn ='') do |faraday|   # internal URL omitted
  faraday.request  :url_encoded
  faraday.response :logger
  faraday.adapter  :httpclient   # the httpclient gem honours no_proxy
end

Call conn.get and everything worked.
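For completeness, a quick usage sketch (the path is made up):

response = conn.get('/status')
puts response.status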

As I found it hard to track down a solution, I thought I would write it down in case someone else finds it useful.

Saturday, 25 October 2014

Configuring Chef Development Environment


At my current workplace, one of the things we are lacking is the concept of Infrastructure as Code. So I have decided to put my DevOps hat on and get the ball rolling.

I decided to use Chef, as it feels more natural to me as a Ruby developer. The setup process was not as straightforward as I thought, so I decided to document it before I forget.


  1. Download and Install Virtual Box.
  2. Download and Install Vagrant.
  3. Download and Install the Chef Development Kit.
  4. We are going to be using Berkshelf so we need to set this up:
    1. If you are using RVM, find where your shell profile runs source "$HOME/.rvm/scripts/rvm" and place the following after that line: export PATH=$HOME/.chefdk/gem/ruby/2.1.0/bin:/opt/chefdk/bin:$PATH
    2. Type which berks and you should get /opt/chefdk/bin/berks
  5. Let's create a cookbook to test things out:
    1. Type berks cookbook chef-test
    2. Type cd chef-test
    3. Open up the Gemfile and remove gem 'berkshelf'
    4. Type bundle install
    5. Open up your Vagrantfile and comment out ' :private_network, type: "dhcp"' (see the Vagrantfile sketch after this list)
  6. Let's install some Vagrant plugins:
    1. Type vagrant plugin install vagrant-berkshelf
    2. Type vagrant plugin install vagrant-omnibus
  7. Let's fire up the cookbook by typing vagrant up.
  8. Profit!
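For reference, once the plugins are installed, the interesting parts of the Vagrantfile look something like this sketch (the box name and run list are illustrative; the generated file contains much more):

Vagrant.configure('2') do |config|
  config.vm.box = 'ubuntu-14.04'          # whichever base box you use
  config.omnibus.chef_version = :latest   # provided by vagrant-omnibus
  config.berkshelf.enabled = true         # provided by vagrant-berkshelf

  config.vm.provision :chef_solo do |chef|
    chef.run_list = ['recipe[chef-test::default]']
  end
end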

Monday, 17 February 2014

Implementing Specification By Example

Recently I have had the pleasure of working with different teams and companies. The interesting part of seeing how different people work is that I learn a lot from their codebases. At my current work we follow the concept of specification by example; however, these specifications are often a pain point for our teams. I wanted to take the time to explore some successful ways of implementing specification by example.


Many terms have been used to describe this set of practices, and I won't go into them. However, I think specification by example is the better term, as we tend to describe software via interactions (or examples). To get yourself up to speed, I recommend two great books on the topic:


Below is a set of topics that I believe are important to doing specification by example well. Please take them as recommendations and not as rules. I also encourage you to challenge these recommendations to find better ones.


The biggest thing I notice when writing these features is the language being used. We have to remember that the language we use needs to reflect the domain we are trying to model. Too often these features are written through the eyes of the computer.

What we have to remember when writing these features is to focus on the WHAT, not the HOW. A great friend of mine has done a fantastic job of explaining how to do this, and I urge you all to read it.

The language is really important. Try to get other people to read your features and have them explain what each example is trying to do.

Pick your battle

Follow the guidelines of the testing pyramid. Often I see people complain about their tests taking too long and being brittle. The UI is brittle at times; however, in my view the end-to-end test is the most important one. If you can comfortably test at a layer below it, then go ahead. Use your judgement.

Informative Steps

Often I have seen information left out of a scenario because there is no action for that step. This should be avoided. It does not matter whether a step performs an action; if it helps convey the intent, write it in.

Behind the Scenes

Here is the big thing that I think people get wrong (this is, of course, my opinion). When writing features we seem to forget that we still need to apply all the great concepts we use in our main application. The biggest thing here is knowing which layer does what.

Usually, when writing these features, people tend to dump all of their code into the steps file. This quickly becomes a nightmare to manage. Here are some guidelines I have used to keep this code easy to follow:

  1. The steps should only contain the builders and expectations.
  2. Use the concept of a PageObject pattern. If you want a great library use dill.
  3. Limit the use of the World. This just becomes a massive god class.
  4. If you find yourself writing similar code, move it to a reusable module that all your projects can use.
Remember: Treat your test code the same way as your production code.
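To make points 1 and 2 concrete, here is a minimal sketch (Cucumber with a hand-rolled page object; every name in it is illustrative):

# features/step_definitions/sign_in_steps.rb
Given(/^a registered user named "(.*)"$/) do |name|
  @user = create_user(name: name)   # builder, defined in a shared support module
end

When(/^that user signs in$/) do

Then(/^they should see their dashboard$/) do
  expect(page).to have_content("Welcome, #{}")   # expectation only
end

# features/support/pages/sign_in_page.rb
class SignInPage
  include Capybara::DSL   # the page object owns all the UI mechanics

  def sign_in(user)
    visit '/sign_in'
    fill_in 'Email', with:
    fill_in 'Password', with: user.password
    click_button 'Sign in'
  end
end

The steps stay thin, and the UI mechanics live in one place that every scenario can reuse.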

Creating Preconditions

Often we need to create a precondition (the Given step, in Gherkin speak). What I have found is that people create a table whose keys are put through a mapping layer to the properties of the model. This should be avoided: one of the worst things you can do is create a translation layer. DDD has the concept of a Ubiquitous Language; I suggest you follow it.
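For example, if the domain talks about a customer's credit limit, let the table say exactly that and map the keys straight onto the model (an illustrative scenario):

Given the following customers exist:
  | name  | credit limit |
  | Alice | 500          |
  | Bob   | 1000         |

The words in the feature are the words of the domain; there is nothing to translate in the steps.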

Watch your Failures

One of the most underrated techniques: people don't look at the failure message. How often have we seen expected true but got false? These errors don't help. Make sure you see the scenario fail first and read the message. Does it make sense? Could you diagnose the issue from it?
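As a tiny illustration (RSpec-style expectations, with illustrative names), here is the same check written two ways:

# fails with something like: expected true, got false -- tells you nothing
expect(order.shipped?).to be true

# fails with something like: expected :shipped, got :pending -- tells you what went wrong
expect(order.status).to eq(:shipped)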


These patterns have served me well in the past, so I wanted to share them with you all. Feel free to challenge them so we can all learn to do things better. Of course, context is king, so if you have any specific examples you would like discussed, feel free to share.

Sunday, 29 December 2013

Micro and Nano Services

Recently there has been a lot of hype around services. Why would there be any hype when we have had the concept of Service Oriented Architecture for a long time? Well, for some people SOA has not served them well. I think this just comes down to a lack of understanding, and to the fact that SOA can be a complicated topic to get your head around.

Technologies like SOAP, RPC, DCOM and CORBA made SOA a very hard thing to get right. None of them are compatible with one another, or even interchangeable.

After going through some of these technologies, we realised that maybe we should embrace HTTP as the transport. This gave rise to REST. Along with this realisation, people started to remember what smart people before us had done to build software. If you look at the UNIX Philosophy, there are some points that have really fascinated me:

  1. Small is beautiful.
  2. Make each program do one thing well.
  3. Choose portability over efficiency.
I think REST and the UNIX philosophy gave rise to micro services.

Micro Services

I first heard this term a couple of years ago. We have been building services for a very long time, though I always felt there was something wrong. Oh, that's right: it was always a monolithic pile of crap I was building. The basic idea of a micro service is to break your problem up into smaller problems, where the only coupling between them is the HTTP protocol (which really has proven to be stable). A micro service can be thought of as a bounded context in DDD.
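To make that concrete, here is a minimal sketch of such a service in Ruby (Sinatra is my choice here for brevity; the endpoint and data are made up):

require 'sinatra'
require 'json'

# A tiny service that does exactly one thing: price lookups.
# Everything else talks to it over HTTP; nothing else is shared.
get '/prices/:product_id' do
  content_type :json
  { product_id: params[:product_id], price: 9.99 }.to_json   # stubbed price
end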

So what are some of the good properties that a micro service can bring us?

  • We can implement the service in whatever language we want.
  • We can deploy each service individually.
  • We can scale each service individually.
I will try out a few things and see what the best way to build these services is. This was inspired by a great talk I heard at YOW 2013 by Sam Newman, called Practical Considerations of Micro Services.

Nano Services 

The concept of nano services was brought up by Steve Vinoski while I was at YOW 2013. When he first raised it, he mentioned that he had found an article describing it as an SOA anti-pattern. What he meant, though, was having the ability to build really small services within the language that you use.

These services are really about some of the new concurrency and parallelism patterns, such as the following (a toy sketch appears after the list):

  1. Actor Model
  2. Communicating Sequential Processes
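To give a flavour of the CSP style in Ruby (a toy sketch using threads and a Queue as the channel; a real system would use a language or library with first-class support):

queue =   # the channel between two tiny in-process "services"

producer = do
  5.times { |i| queue << i }   # send messages down the channel
  queue << :done               # then signal completion
end

consumer = do
  loop do
    msg = queue.pop            # blocks until a message arrives
    break if msg == :done
    puts "processed #{msg}"
  end
end

[producer, consumer].each(&:join)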
A very interesting term indeed.

Friday, 13 December 2013

Git Flow

This post is not about git's branching model or how you should promote your code to different branches. Rather, this is a post about how I have learnt to use git, which I hope will serve as a starting point for other people.

If you are looking for a way to promote your code, please have a look at the following pages:

When I first started using git, I was just using it for my own projects, like any other version control system. It was only when I actually used it in a team environment that I started to get the concepts. Luckily for me, at the time I had a great teacher who really helped me understand the basics of git.


The biggest thing I had to get used to was creating a branch for every story/feature that we worked on. We never worked off the master branch. The reason for this will become more apparent when I describe the other features. Branching in git is cheap and easy. To create a branch you run the following command:

git checkout -b branch_name

The name of the branch also needed to follow a convention: it started with your initials (in my case arf), followed by a small description (two words or so), and maybe even the number of the story you were working on. This was so you knew who was working on what branch without looking at the commit history.
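A hypothetical branch name following that convention:

git checkout -b arf_login_redirect_1234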

At first I was against branching. Like many of you, I initially thought this goes against continuous integration: if you only work on a branch, you will never find out whether it integrates properly, and it will never run on CI, as CI is usually configured to run against master. This turned out not to be the case. There are many SaaS CI tools that will build all of your branches, and if you constantly rebase against master you will always see whether your work integrates correctly. This also allowed us to keep master clean.


When I first started committing code, I did it just like in my SVN days: do a bunch of work, commit it, oh I broke the build, commit some more, and on and on and on.

What is the problem with this style? 

Well, it does not tell the story of your implementation. Usually there are files committed together that don't belong with each other, and the commits are fragmented.

When I was working on this project I was introduced to the concept of Single Responsibility for commits. This is a powerful concept. Here are some of the basic rules that we followed:

  1. Changes that are related to one another need to be in the same commit (to achieve that we squashed changes together if necessary)
  2. Refactoring of code would be done on a separate commit. Meaning you would never change something and refactor it in the same commit.
  3. Introduction of libraries (gems for our project) would be done on a separate commit.
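As an illustration of rule 2 (the file names and messages are made up):

git add app/models/order.rb
git commit -m "Charge shipping on international orders"

# the refactoring goes in its own commit, after the behaviour change is in
git add app/models/order.rb
git commit -m "Refactor: extract shipping calculation into its own method"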

Achieving some of these points was really context driven; luckily for me, I had smarter people helping me.

Committing code with git is fairly easy. I would generally follow two patterns:

git add files
git commit -m "Descriptive message"

or (once I got better at single responsibility)

git commit -a -m "Descriptive message"

Now, sometimes you find yourself having gone trigger happy and changed the same file for multiple reasons. This violates single responsibility, so git has a wonderful command to deal with it:

git add -p

You can read about this command here.

Once we were done with our changes, we would push the branch up to GitHub with the following command:

git push -f origin branch_name

You may have noticed that we used the force flag. This is because when you rewrite history (squashing), git will not otherwise allow you to push your changes. Since we are on our own branch, this is OK.


A while ago, while reading about git, I came across an interview with Linus Torvalds, in which he stated the following:

Eventually you’ll discover the Easter egg in Git: all meaningful operations can be expressed in terms of the rebase command. Once you figure that out it all makes sense.

Let's say you want to get the latest changes from master into your branch. You would issue the following command:

git fetch && git rebase origin/master

Let's say you wanted to rewrite some history. You would use the interactive rebase command:

git rebase -i how_far_back

This command is so powerful that I suggest you spend some time getting comfortable with it.
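For example, to rework the last three commits (the hashes and messages are made up):

git rebase -i HEAD~3

# git then opens a todo list like this, oldest commit first:
pick   a1b2c3d Add order model
squash f4e5d6c Fix typo in order model
reword 9c8b7a6 Add shipping cost

Here squash folds the second commit into the one above it, and reword stops so you can rewrite the third commit's message.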

Pull Request

This feature has to be one of my favourites. Since we are branching, once we think we are complete we issue a pull request. We would try to only issue a pull request if the build was green (though this was not always the case). If you have used GitHub then you will know what I am talking about. Finally, a tool that can be used for meaningful code reviews. This is so important.

What do I mean by this?

How many times have you asked someone for a code review and, because your commits were all over the place, you just showed them what you wanted to show them? Well, no more.

Here we get to see everything that we have done as developers, and we have to explain and justify our architectural decisions. I won't lie, sometimes this got frustrating (mostly to do with my ego); however, I learnt so much from those discussions that I miss them to this day.

Once the reviewer was happy they would give it the thumbs up or the ship it sign.


There would be the off chance that a merge back into master would cause some sort of issue, which would mean that we would need to revert. I didn't get to do this much; however, there is a great guide from the man himself on how to do it.


Hopefully I have been able to show you how powerful git is and how you can use it to make sure you have a beautiful commit history. Git has multiple ways to accomplish things, so if you have a better way, please share it with me. I hope this was helpful.

Thursday, 5 December 2013

Leiningen - HTTPS issues


As I have started my adventure in Clojure programming, I wanted to start playing with the Leiningen tool.

Unfortunately, the place I was at had some strict policies around access. We had this crazy proxy, so we had to do some magic to get it working. Since I hadn't done any Java development in a while, I was lucky to have a great friend at work to help me.


I was trying to get lein working and was getting the following error:

Could not transfer artifact clojure-complete:clojure-complete:pom:0.2.3 from/to clojars ( peer not authenticated
This could be due to a typo in :dependencies or network issues.
If you are behind a proxy, try setting the 'http_proxy' environment variable.


To get this to work, we needed to do the following:

1. We need to get the certificate:

echo -n | openssl s_client -connect | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > /tmp/clojars.cert

2. Import the key that we just extracted:

sudo keytool -importcert -alias 'clojars_cert'  -file /tmp/clojars.cert -keystore $(find $JAVA_HOME -name cacerts)

3. Make sure the key is there:

sudo keytool -list -v -keystore $(find $JAVA_HOME -name cacerts) | grep clojars

Now when you run the lein command, it should work.

Hopefully this helps someone else in the future.

Monday, 28 October 2013

AWS Elastic Beanstalk

During my last adventure we decided to use AWS Elastic Beanstalk. This technology from Amazon gives developers an easier way to manage their deployments; in other words, it is a PaaS. We used it for a Ruby on Rails application.

You can control your deployments from git, and magically it will send your code and deploy it to an instance. So let's decipher some of this magic.
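From memory, the day-to-day flow with the old eb command-line tools went roughly like this (treat it as approximate; the tooling has changed a lot since):

eb init        # answer the prompts: credentials, region, environment
eb start       # create and boot the environment
git aws.push   # deploy the current branch to the running environment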

To get started using EB you need the AWS Elastic Beanstalk Command Line Tools. To set them up on your Mac, all you need to do is:

brew install aws-elasticbeanstalk

Once you have that set up, getting started is quite straightforward. All you need to do is follow the instructions here. As you can see, this is very easy if you want to launch a basic site. However, as I started to go deeper with the platform, I realised that I would need to do some more work. Why is that, you ask? Well, for two reasons:

  1. The flow of how you typically install a Rails app is not exactly how the team at Amazon designed it.
  2. A deployment always brings the site down.

Extending EB

One of the things that we quickly realised is that we needed to extend the platform. Luckily this was easy to do; the downside is that we were changing the internals of the scripts provided by Amazon. Fortunately, other people had encountered similar problems. So what I decided to do was create a repository where I could collect the extensions we needed to make it all work.

Please have a look here. There are some interesting things that needed to be done in order to make it work. If you need further explanation, don't hesitate to reach out.


As I mentioned, deployments are really easy with EB if you don't mind bringing the website down. I, however, don't believe that a site should go down for a deployment, so we knew we had to build a more robust deployment pipeline. Luckily for me, I am a massive believer in Continuous Delivery. We decided that we would build multiple stacks (QA, UAT, LIVE), each having two environments within them (A and B; this is modelled on Blue/Green Deployments).
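The cutover between the A and B environments essentially boils down to a CNAME swap. With the current AWS CLI (which did not exist in this form back then; we drove it through mist instead) it would look something like this, with illustrative environment names:

aws elasticbeanstalk swap-environment-cnames \
  --source-environment-name myapp-live-a \
  --destination-environment-name myapp-live-b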

To accomplish these deployments we built a tool called mist. This tool takes the pain out of deployments, and we were extremely proud of it, as any developer could easily deploy the latest (or a specific) version to an environment. I urge you to take a look at the code and tell me what you think.

The Future

As with any platform, there is definitely room for improvement. EB is still considered beta (which, in some people's eyes, means not production ready). Here are some of the limitations that I hit:

  1. An old version of Amazon Linux is used. Unfortunately, upgrading the AMI breaks EB.
  2. The Ruby version is ruby 1.9.3p286 (2012-10-12 revision 37165) [x86_64-linux], which is quite old. I could not successfully upgrade it.
  3. EB uses Phusion Passenger, version 3.0.17, which again is quite old. I could not successfully upgrade it either.

We did explore other avenues like JRuby, and I fell in love with that platform. Given enough time, I would have liked to get EB to work with JRuby.

Hopefully this is of help to anyone else out there that is thinking of using EB with Ruby on Rails.