Sunday, 29 December 2013

Micro and Nano Services

Recently there has been a lot of hype around services. Why would there be any hype when we have had the concept of Service Oriented Architecture (SOA) for a long time? Well, for some people SOA has not served them well. I think this just comes down to a lack of understanding, and the fact that SOA can be a complicated topic to get your head around.

Technologies like SOAP, RPC, DCOM and CORBA made SOA a very hard thing to get right. None of these technologies are compatible with one another, or even interchangeable.

After going through some of these technologies we realised that maybe we should embrace HTTP as the transport. This gave rise to REST. Along with this realisation, people started to remember what smart people before us had done to build software. If you look at the UNIX philosophy, we see some points that have really fascinated me:

  1. Small is beautiful.
  2. Make each program do one thing well.
  3. Choose portability over efficiency.

I think REST and the UNIX philosophy gave rise to micro services.

Micro Services

I first heard of this term a couple of years ago. We have been building services for a very long time, though I always felt there was something wrong. Oh that's right: it was always a monolithic pile of crap I was building. The basic idea of a micro service is to break your problem up into smaller problems, where the only coupling between them is the HTTP protocol (which really has proven to be stable). A micro service can be thought of as a bounded context in DDD.

So what are some of the good properties that a micro service can bring us?

  • We can implement the service in whatever language we want.
  • We can deploy each service individually.
  • We can scale each service individually.

I will try out a few things and see what the best way to build these services is. This was inspired by a great talk I heard at YOW 2013 by Sam Newman, called Practical Considerations of Micro Services.

Nano Services 

The concept of nano services was brought up by Steve Vinoski while I was at YOW 2013. When he first raised it, he mentioned that he had found an article describing it as an SOA anti-pattern. What he meant, though, was having the ability to build really small services within the language that you use.

These services are really about some of the new concurrency and parallelism patterns, such as:

  1. Actor Model
  2. Communicating Sequential Processes

A very interesting term indeed.

Friday, 13 December 2013

Git Flow

This post is not about git's branching model or how you should promote your code through different branches. Rather, this is a post about how I have learnt to use git, which I hope will serve as a starting point for other people.

If you are looking for a way to promote your code, please have a look at the following pages:

When I first started using git I was just using it for my own projects, like any other version control system. It was only when I actually used it in a team environment that I started to get the concepts. Luckily for me, at the time I had a great teacher who really helped me understand the basics of git.


The biggest thing I had to get used to was creating a branch for every story/feature that we worked on. We never worked off the master branch. The reason for this will become more apparent when I describe the other features. Branching in git is so cheap and easy. To create a branch you perform the following command:

git checkout -b branch_name

The name of the branch also needed to follow a convention. It needed to start with your initials (in my case arf), followed by a small description (2 words or so), and maybe even the number of the story you were working on. This was so you knew who was working on what branch without looking at the commit history.
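As a concrete (and entirely made-up) example of that convention, initials plus a short description plus a story number might look like this, sketched in a throwaway repository:

```shell
# demo setup: a throwaway repository (any existing repo works the same way)
cd "$(mktemp -d)" && git init -q .
git config user.email demo@example.com && git config user.name demo
git commit -q --allow-empty -m "initial commit"

# initials + short description + story number, so "git branch" alone tells
# you who owns which branch:
git checkout -b arf_user_search_4211
```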

At first I was against branching. Like many of you, I initially thought this went against continuous integration: if you only work on a branch, you will never find out if it integrates properly, and it will never run on CI, as CI is usually configured to run against master. This turned out not to be the case. There are many SaaS CI tools that will build all of your branches, and if you constantly rebase against master you will always see whether your code integrates correctly. This also allowed us to keep master clean.


When I first started committing code I was committing just like I did in my SVN days: do a bunch of work, commit it, oh I broke the build, commit some more, and on and on and on.

What is the problem with this style? 

Well, it does not tell the story of your implementation. Usually files that don't belong together end up committed together, and the commits are fragmented.

When I was working on this project I was introduced to the concept of single responsibility for your commits. This was a powerful concept. Here are some of the basic rules that we followed:

  1. Changes that are related to one another need to be in the same commit (to achieve this we squashed changes together when necessary)
  2. Refactoring would be done in a separate commit, meaning you would never change something and refactor it in the same commit.
  3. Introduction of libraries (gems for our project) would be done in a separate commit.
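For rule 1, one way to squash the last few commits together is a soft reset followed by a fresh commit. This is just a minimal sketch in a throwaway repository (file names and messages are invented), not the only way to squash:

```shell
# demo setup: a throwaway repo with two commits that belong together
cd "$(mktemp -d)" && git init -q .
git config user.email demo@example.com && git config user.name demo
git commit -q --allow-empty -m "initial commit"
echo one > f.txt && git add f.txt && git commit -q -m "add f.txt"
echo two >> f.txt && git add f.txt && git commit -q -m "extend f.txt"

# squash the last two commits into one; this rewrites history, so only
# do it on your own branch, before anyone else has pulled it
git reset --soft HEAD~2
git commit -q -m "Add and extend f.txt in one commit"
```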

Achieving some of these points was really context driven; luckily for me, I had smarter people helping me.

Committing code with git is fairly easy. I would generally follow one of 2 patterns:

git add files
git commit -m "Descriptive message"

or (once I got better at single responsibility)

git commit -a -m "Descriptive message"

Now sometimes you find yourself having gone trigger happy and changed the same file for multiple reasons. This violates single responsibility, so git has a wonderful command to deal with it:

git add -p

You can read about this command here.
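In case that link ever goes stale, here is a small self-contained sketch of git add -p in action (the file name and edits are invented for the demo). The y/n answers are piped in only so the session is reproducible; normally you would type them at the prompts:

```shell
# demo setup: a throwaway repo with a committed 40-line file
cd "$(mktemp -d)" && git init -q .
git config user.email demo@example.com && git config user.name demo
seq 1 40 > notes.txt && git add notes.txt && git commit -q -m "add notes"

# two unrelated edits, one at each end of the file, so git sees two hunks
sed -i '1s/.*/ONE/'    notes.txt
sed -i '40s/.*/FORTY/' notes.txt

# git add -p walks you through each hunk; answer y to stage it, n to skip it
printf 'y\nn\n' | git add -p notes.txt

git diff --cached   # shows only the first hunk (staged)
git diff            # shows the second hunk, still in the working tree
```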

Once we were done with the changes we would push the branch up to Github with the following command:

git push -f origin branch_name

You may have noticed that we used the force flag. This is because when you rewrite history (squashing), git will not otherwise allow you to push your changes. Since we are on our own branch this is OK.
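Git also has a gentler force flag, --force-with-lease, which behaves like -f but aborts the push if the remote branch has moved since you last fetched, so you cannot silently overwrite someone else's work. A self-contained sketch against a throwaway local "origin":

```shell
# demo setup: a local bare "origin" and a clone of it
t=$(mktemp -d)
git -c init.defaultBranch=master init -q --bare "$t/origin.git"
git clone -q "$t/origin.git" "$t/work" 2>/dev/null && cd "$t/work"
git config user.email demo@example.com && git config user.name demo
git commit -q --allow-empty -m "first draft" && git push -q origin master

# rewrite history (amend), then force push with the safety net:
git commit -q --allow-empty --amend -m "first draft, reworded"
git push -q --force-with-lease origin master
```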


A while ago, while reading about git, I came across an interview with Linus Torvalds, where he stated the following:

Eventually you’ll discover the Easter egg in Git: all meaningful operations can be expressed in terms of the rebase command. Once you figure that out it all makes sense.

Let's say you want to get the latest changes from master into your branch. You would issue the following command:

git fetch && git rebase origin/master
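To make the whole loop concrete, here is a self-contained sketch (the branch name and files are made up) that builds a throwaway origin, lets master move ahead, and then rebases the feature branch on top of it:

```shell
# demo setup: an "origin" whose master has moved on since we branched
t=$(mktemp -d)
git -c init.defaultBranch=master init -q --bare "$t/origin.git"
git clone -q "$t/origin.git" "$t/work" 2>/dev/null && cd "$t/work"
git config user.email demo@example.com && git config user.name demo
echo base > base.txt && git add base.txt && git commit -q -m "base"
git push -q origin master
git checkout -q -b arf_user_search_4211
echo f > feature.txt && git add feature.txt && git commit -q -m "feature work"
git checkout -q master
echo u > upstream.txt && git add upstream.txt && git commit -q -m "upstream change"
git push -q origin master && git checkout -q arf_user_search_4211

# the actual flow: fetch what changed, then replay our commits on top
git fetch -q origin
git rebase origin/master
# if a commit conflicts, the rebase pauses; fix the files, then
#   git add <file> && git rebase --continue
# or put the branch back exactly as it was with: git rebase --abort
```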

Let's say you wanted to rewrite some history; you would use the interactive rebase command:

git rebase -i how_far_back

This command is so powerful that I suggest you spend some time getting to know it.

Pull Request

This feature has to be one of my favourites. Since we are branching, once we think we are done we issue a pull request. We would try to only issue a pull request if the build was green (though this was not always the case). If you have used Github then you will know what I am talking about. Finally, a tool that can be used for meaningful code reviews. This is so important.

What do I mean by this?

How many times have you asked someone for a code review and, because your commits were all over the place, you just showed them what you wanted to show them? Well, no more.

Here we get to see everything we have done as developers, and we have to explain and justify our architectural decisions. I won't lie, sometimes this got frustrating (mostly to do with my ego); however, I actually learnt so much from those discussions that I really miss them now.

Once the reviewer was happy, they would give it the thumbs up or the "ship it" sign.


There was the off chance that a merge back into master would cause some sort of issue, which would mean we needed to revert. I didn't get to do this much; however, there is a great guide from the man himself on how to do it.


Hopefully I have been able to show you how powerful git is and how you can use it to make sure you have a beautiful commit history. Git has multiple ways of accomplishing things, so if you have a better way, please share it with me. I hope this was helpful.

Thursday, 5 December 2013

Leiningen - HTTPS issues


As I started my adventure in Clojure programming, I wanted to start playing with the Leiningen tool.

Unfortunately, the place where I was working had some strict policies around access. We had this crazy proxy, so we had to do some magic to get things working. Since I hadn't done any Java development in a while, I was lucky to have a great friend at work to help me.


I was trying to get lein working and was getting the following error:

Could not transfer artifact clojure-complete:clojure-complete:pom:0.2.3 from/to clojars ( peer not authenticated
This could be due to a typo in :dependencies or network issues.
If you are behind a proxy, try setting the 'http_proxy' environment variable.
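The last line of that error is worth taking literally: before touching certificates, you can try the standard proxy environment variables, which Leiningen picks up. The host and port below are hypothetical placeholders; substitute your own proxy:

```shell
# hypothetical proxy host and port -- substitute your own
export http_proxy=http://proxy.example.com:3128
export https_proxy=$http_proxy
# then try lein again, e.g.: lein deps
```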


To get this to work, we needed to do the following:

1. Get the certificate:

echo -n | openssl s_client -connect clojars.org:443 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > /tmp/clojars.cert

2. Import the certificate we just extracted (keytool will prompt for the keystore password; the JDK default for cacerts is "changeit"):

sudo keytool -importcert -alias 'clojars_cert'  -file /tmp/clojars.cert -keystore $(find $JAVA_HOME -name cacerts)

3. Make sure the key is there:

sudo keytool -list -v -keystore $(find $JAVA_HOME -name cacerts) | grep clojars

When you run the lein command again, it should now work.

Hopefully this helps someone else in the future.