“Expectation is the root of all heartache” (a quote often attributed to William Shakespeare), and Hadoop is suffering from some of this. There have been articles and blog posts with phrases such as:
- It’s hit the wall
- It’s failed
- Adoption has stagnated
I’ve heard Big Data described as teenage sex (and no, I’m not going to explain this). The general problem is that the great excitement and expectation, as usual, end in disappointment, as the technology finds it impossible to live up to the hype. So is Hadoop a case of a solution looking for a problem to solve?
Gartner have captured this problem in the ‘Gartner Hype Cycle’, which describes the different stages a new technology goes through – from the ‘Peak of Inflated Expectations’ to the ‘Plateau of Productivity’. In the case of Hadoop and Big Data I think there are three main reasons behind this:
- How many organisations really have enough data to warrant a Hadoop cluster? Be honest – is your business really dealing with the same volumes of data as Google, Facebook, LinkedIn and the like? Now I know there are companies that do (I’ve worked for a few of them), but the large proportion of companies don’t fit this mould. Hadoop is designed for massive data volumes, whilst most businesses can get by with less than 10 TB.
- Engineers love Hadoop because it’s “proper engineering”, not a shrink-wrapped, productised product. It requires code and looks great on your CV (I know, I’m being cynical again).
- It’s free, and organisations were enticed by this. They glossed over the fact that replacing over 25 years of data warehouse experience is not simple. Even though it’s open source, the costs can be considerable, with vendors, consultants and contractors becoming the main source of skills. And obviously you still have the cost of servers and storage.
Now I’m not suggesting there isn’t value to be had, but I’m just putting it out there that, as we should all know, there is no such thing as a silver bullet.