Recently I had the privilege to read the book The
Lean Startup: How Today's Entrepreneurs Use Continuous Innovation to
Create Radically Successful Businesses, by Eric Ries. I must say
it was a great read.
For years I have wondered how we know whether what we are building is the right thing. In the development world we realised the need for BDD because we needed a way to rationalise our requirements and give the development team a place to begin; however, no methodology assures you that your backlog is prioritised in the right order.
Don't get me wrong, the Lean Startup is not a silver bullet for the problem mentioned above. It just got me thinking about validating what we build.
"Lean Startup" is an approach for launching
businesses and products, that relies on validated learning,
scientific experimentation, and iterative product releases to shorten
product development cycles, measure progress, and gain valuable
customer feedback.
To me, that means we can build great products by measuring rather than by guessing or letting emotion get in the way.
Minimum Viable Product
The idea behind an MVP is to build just enough to validate that we are heading in the right direction. One example of that mindset:
“If Apple can launch a smartphone without Find or
Cut-and-Paste, what can you cut out of your product requirements?”
Well, since we just want to validate, we should reuse as much as possible of what other people have already built. From a code perspective this means the MVP does not have to be perfect; however, quality should not be thrown out.
Continuous Delivery
The book talks about Continuous Deployment; however, I believe the better phrase is Continuous Delivery. As described here:
While continuous deployment implies continuous
delivery the converse is not true. Continuous delivery is about
putting the release schedule in the hands of the business, not in the
hands of IT. Implementing continuous delivery means making sure your
software is always production ready throughout its entire lifecycle –
that any build could potentially be released to users at the touch of
a button using a fully automated process in a matter of seconds or
minutes.
The point is that we want to get our MVP out there as soon as possible, so from a software point of view we really need to get our release cycle automated. I was never really a fan of platforms where you just check in code and it is deployed straight to a live environment, because I believe you need to understand how software is released. For validating an MVP, however, such a platform is great, so my thinking is slowly changing.
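To make that concrete, here is a minimal sketch of what an automated release gate could look like, assuming a Python project with a test suite and a deploy script; the pytest command and the deploy.sh script are placeholders of my own, not something prescribed by the book.

```python
# A deliberately simple sketch of the "release at the touch of a button" idea.
# The test command and deploy script below are placeholders, not a real pipeline.
import subprocess
import sys


def run(step, command):
    """Run a pipeline step and report whether it succeeded."""
    print(f"--> {step}")
    return subprocess.call(command) == 0


def release():
    # Every build goes through the same automated gate, so any build that
    # passes could be pushed live in minutes rather than weeks.
    if not run("running tests", ["python", "-m", "pytest"]):
        sys.exit("tests failed: the build is not production ready")
    if not run("deploying", ["./deploy.sh"]):  # hypothetical deploy script
        sys.exit("deploy failed")
    print("released")


if __name__ == "__main__":
    release()
```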
Split Testing
Split testing, or A/B testing as it is sometimes called, is a way to run two versions of the application and validate which version is better. The way I have seen split testing performed is that an existing product is split against the new product that we are building.
However, this got me thinking: shouldn't we really be doing split testing of an MVP within the same product?
So how would one go about doing split testing in an existing product? These are just some of my thoughts that I look forward to validating.
- To build these MVPs we need a way to create feature branches easily. These feature branches should not be long lived. The best tool for this is git. The beauty of this is that if we realise our feature is crap we can simply revert it.
- The MVP needs to be built behind a feature flag. This feature needs to be turned on for a specific group of people, which can be done in numerous ways; for example, if it is a web application your load balancer could inject an HTTP header that turns the feature on (see the sketch after this list).
- Lastly, we need a way to gauge whether the MVP is worth it. One needs to choose metrics that matter.
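As a rough illustration of the feature-flag point above, here is a minimal sketch assuming the load balancer injects a hypothetical X-Feature-Group header for the users who should see the MVP; the header name and functions are made up for illustration, not part of any real product.

```python
# A minimal sketch of a header-driven feature flag for split testing an MVP.
# The "X-Feature-Group" header and its "mvp" value are assumptions for
# illustration; a load balancer would be configured to add the header for a
# chosen slice of users.

def mvp_feature_enabled(headers):
    """Return True if this request should see the new MVP feature."""
    return headers.get("X-Feature-Group", "").lower() == "mvp"


def render_dashboard(headers):
    # Split the traffic: the flagged group gets the MVP version of the page,
    # everyone else keeps the existing version.
    if mvp_feature_enabled(headers):
        return "dashboard with the new MVP widget"
    return "existing dashboard"


if __name__ == "__main__":
    print(render_dashboard({"X-Feature-Group": "mvp"}))  # new version
    print(render_dashboard({}))                          # current version
```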
Actionable Metrics
The only metrics that I've been exposed to are what are called vanity metrics. Examples are: one million downloads, 10 million registered users, 200 million tweets per day.
These metrics show growth, however they don't really tell you the inside story. It is important to realise that we need to keep the actionable metrics closer to the user. Actionable metrics can lead to informed business decisions and subsequent action.
So how does this relate to the MVP? Well, sometimes we just want to get a sense of whether the feature is worth it, so getting an idea of whether people actually interact with it may be enough.
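As a bare-bones example, an actionable metric for the MVP could be as simple as the interaction rate per variant; the event names and in-memory counters below are my own assumptions, just to contrast a per-user signal with a vanity total.

```python
# A rough sketch of an actionable metric for an MVP: how many of the people
# who saw the feature actually used it, broken down per variant. The events
# and storage are invented for illustration.
from collections import defaultdict

# variant -> {"seen": count, "used": count}
counts = defaultdict(lambda: {"seen": 0, "used": 0})


def record(variant, event):
    """Record that a user in `variant` ("mvp" or "current") saw or used the feature."""
    counts[variant][event] += 1


def interaction_rate(variant):
    """Fraction of users who used the feature after seeing it; an actionable
    number, unlike a vanity total such as overall downloads."""
    seen = counts[variant]["seen"]
    return counts[variant]["used"] / seen if seen else 0.0


if __name__ == "__main__":
    for _ in range(100):
        record("mvp", "seen")
    for _ in range(23):
        record("mvp", "used")
    print(f"MVP interaction rate: {interaction_rate('mvp'):.0%}")  # 23%
```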
Pivot
A pivot is a “structured course correction designed
to test a new fundamental hypothesis about the product, strategy, and
engine of growth.”
This basically tells us that at times we will have the wrong idea and will need to change direction. The key to being able to pivot and abandon an MVP is to have an easy way to revert the code and release the result quickly.
Conclusion
This book really made me think about how to develop a product without incurring much waste, and how, as a team, we can be sure that we are building the right features.