Pivotal HD is not Hadoop
Neither is Cloudera’s Distribution Including Apache Hadoop.
Nor the Hortonworks Data Platform.
Nor the MapR Distribution.
Nor IBM’s InfoSphere BigInsights.
Nor the WANdisco Distro.
Nor Intel’s Distribution for Apache Hadoop.
Apache Hadoop is Hadoop. And Hadoop is Apache Hadoop.
I don’t write that to be pedantic, or controversial, but because it is the only logical conclusion you can reach after reading Defining Apache Hadoop from the Apache Hadoop Wiki.
“The key point is that the only products that may be called Apache Hadoop or Hadoop are the official releases by the Apache Hadoop project as managed by that Project Management Committee (PMC)… Products that are derivative works of Apache Hadoop are not Apache Hadoop, and may not call themselves versions of Apache Hadoop, nor Distributions of Apache Hadoop.”
It is with this in mind that one should view the reaction to EMC Greenplum’s recent launch of Pivotal HD; and in particular this statement from Scott Yara, EMC Greenplum Senior Vice President, Products, and Co-Founder:
“We’re all in on Hadoop, period.”
What does it mean to be “all in on Hadoop”? Based on a strict reading of Defining Apache Hadoop (a document that demands by its own words to be read strictly), being “all in” on Hadoop means only one thing: being “all in” on Apache Hadoop.
I have no doubt that EMC Greenplum is “all in” on Pivotal HD, but that’s not the same thing at all.
Not a purity debate
There is nothing wrong with offering additional functionality beyond the scope of Apache Hadoop – the licensing terms clearly encourage it.
As my fellow analyst Merv Adrian notes:
“Having some components of your solution stack provided by the open source community is a fact of life and a benefit for all. So are roads, but nobody accuses Fedex or your pizza delivery guy of being evil for using them without contributing some asphalt.”
That is true. However, to continue the analogy, you would expect any company that claimed to be “all in on roads” to be getting involved in laying and maintaining them, rather than just driving on top of them.
Despite what some people may think this isn’t a matter of arguing about which vendor has the most Hadoop committers. It is a matter of defining what users understand Hadoop to be, and what they understand it not to be. It is a matter of drawing a line between Hadoop – Apache Hadoop – and additional, proprietary, functionality beyond the scope of the project.
User preference
Whether users will choose to go with a pure approach to Hadoop-based products and services is another matter. Dan Woods, for one, clearly believes that products like Pivotal HD will drive further mainstream adoption beyond “the limits of open source.”
The idea is that most enterprises don’t care if it meets the Apache definition of Hadoop or not, as long as it works.
While I have no doubt that some companies will be drawn to the additional features and confidence that vendors such as EMC and Intel can provide, I have also spoken to multiple enterprises – including one very large enterprise just last week – for which the preference is to default to open in order to avoid any potential for lock-in and vendor-specific architecture choices.
There are many users that do very much care whether what they are adopting meets the Apache definition of Hadoop.
Which of these attitudes will dominate? I’m not going to pretend I know the answer to that question at this point, but our previous coverage of open source adoption suggests that once the door to openness has been unlocked it’s very hard to force it shut again.
Dan Woods responded to my (sarcastic) comment about this as follows:
@maslett Linux is an enterprise product. The use-value players (IBM, HP, Intel) took it over, invested, and adapted it to enterprise needs.
— Dan Woods (@danwoodscito) March 5, 2013
I would dispute that players like IBM, HP, and Intel “took Linux over” but in any case it is undeniable that they had a significant role to play – alongside Red Hat, Novell et al, and individual developers – in turning Linux into an enterprise-grade operating system.
The point is though that they did so by engaging with the Linux project, not by launching their own differentiated versions of Linux.
1 comment
I think there are subtle distinctions between:
– packaging up tested-together versions of a bunch of Apache packages, along with patches which are all out in public on Apache’s JIRA;
– packaging up tested-together versions of Apache packages, along with additional proprietary tools;
– packaging up tested-together versions of Apache packages, but with proprietary modifications to their codebases;
– releasing entirely proprietary core software which implements the right interfaces to be API-compatible with Hadoop.
Putting all the vendors listed at the top together muddies the waters for people who don’t understand these distinctions.
I’m not implying any moral judgement though. Ultimately, this is the entire point of the ASL, and its USP compared to the GPL.
This kind of thing makes me all nostalgic for the Unix Wars. If only SCO were still around, they could do their own Hadoop distro…