NoSQL consolidation begins…

The predicted consolidation of the NoSQL database landscape has begun. Membase and CouchOne have announced that they are merging to form Couchbase.

And in more interesting NoSQL news, Danish IT company Trifork has announced that it has acquired an 8% stake in Basho as part of the NoSQL vendor’s $7.4m series D round, and has become the European distributor for Riak.

The formation of Couchbase brings together two of the leading companies in the NoSQL space, and the complementary nature of their technology and business plans highlights that the term NoSQL has been applied to many different database technologies, which are being adopted for different reasons.

While Membase had focused on improving the performance of distributed applications through its Membase Server distributed database, CouchOne focused on developer interest in flexible document data stores and mobile applications, rather than performance at scale.

Additionally, while Membase was focused on operational adoption with a small (albeit significant) developer community, the priority for CouchOne has been growing adoption of Apache CouchDB, with commercial efforts only recently becoming the focus of attention.

The technology is also complementary. Couchbase will combine the Membase and CouchDB projects to form a new distributed document store project of the same name that combines the caching and clustering technology of Membase with the CouchDB document data store.

The result will be a new distributed document database covering a variety of use cases from mobile applications (Mobile Couchbase) to scalable clusters (Elastic Couchbase), with synchronization of data between the various Couchbase implementations enabled by CouchSync.

The merged company will be led by Bob Wiederhold, formerly CEO of Membase, while Damien Katz, formerly CEO of CouchOne and creator of the CouchDB database, becomes CTO.

Couchbase is claiming more than 200 customers, which would indicate phenomenal growth for both companies since the launch of their CouchOne Mobile and Membase Server products in September and October 2010 respectively.

Prior to the launch of those products the two companies claimed just a handful of customers each, although CouchOne had signed up thousands of users to its free hosted services, so it had a large and willing audience ready for conversion.

Additionally, the company claims millions of combined users, since CouchDB has been included in every installation of the Ubuntu Linux distribution since late 2009 and Heroku (now part of Salesforce.com) offers a Membase-driven service to thousands of its hosting customers.

We previously predicted that we would see the NoSQL market both consolidate and proliferate this year, and it is worth noting that the merger of CouchOne and Membase will not result in a similar consolidation of open source projects.

While Couchbase.org can be expected to replace membase.org over time, the Couchbase project will be independent of Apache CouchDB, which will not be impacted by the merger. Couchbase will continue to contribute to both the CouchDB and memcached projects.

While we’re on the subject of NoSQL, it is also interesting to see that Danish IT vendor Trifork has not only signed up to be European distributor of the Riak database, but has also taken a stake in Basho Technologies.

Trifork has acquired newly issued shares in Basho representing 8.35% of the company as part of its series D round, with an option to acquire an additional 3.96% at the end of Q1 2011.

NoSQL – consolidating and proliferating in 2011

Among the numerous prediction pieces doing the rounds at the moment, Bradford Stephens, founder of Drawn to Scale, suggested we could be in for continued proliferation of NoSQL database technologies in 2011, while RedMonk’s Stephen O’Grady predicted consolidation. I agree with both of them.

To understand how NoSQL could both proliferate and consolidate in 2011 it’s important to look at the small print. Bradford was talking specifically about open source tools, while Stephen was writing about commercially successful projects.

Given the levels of interest in NoSQL database technologies, the vast array of use cases, and the various interfaces and development languages – most of which are open source – I predict we’ll continue to see cross-pollination and the emergence of new projects as developers (corporate and individual) continue to scratch their own data-based itches.

However, I think we are also beginning to see a narrowing of the commercial focus on those projects and companies that have enough traction to generate significant business opportunities and revenue, and that a few clear leaders will emerge in the various NoSQL sub-categories (key-value stores, document stores, graph databases and distributed column stores).

We can see previous evidence of the dual impact of proliferation and consolidation in the Linux market. While commercial opportunities are dominated by Red Hat, Novell and Canonical, that has not stopped the continued proliferation of Linux distributions.

The main difference between the NoSQL and Linux markets, of course, is that the various Linux distributions all have a common core, and the diversity in the NoSQL space means that we are unlikely to see proliferation on the scale of Linux.

However, I think we’ll see a similar two-tier market emerge with a large number of technically interesting and differentiated open source projects, and a small number of commercially-viable general-purpose category leaders.

Sizing the big data problem: ‘big data’ is the problem

Big data has been one of the big topics of the year in terms of client queries coming into The 451 Group, and one of the recurring questions (especially from vendors and investors) has been: “how big is the big data market?”

The only way to answer that is to ask another question: “what do you mean by ‘big data’?” We have mentioned before that the term is ill-defined, so it is essential to work out what an individual means when they use the term.

In our experience they usually mean one of two things:

  • Big data as a subset of overall data: specific volumes or classes of data that cannot be processed or analyzed by traditional approaches
  • Big data as a superset of the entire data management market, driven by the ever-increasing volume and complexity of data

Our perspective is that big data, if it means anything at all, represents a subset of overall data. However, it is not one that can be measurably defined by the size of the data volume. Specifically, as we recently articulated, we believe:

    “Big data is a term applied to data sets that are large, complex and dynamic (or a combination thereof) and for which there is a requirement to capture, manage and process the data set in its entirety, such that it is not possible to process the data using traditional software tools and analytic techniques within tolerable time frames.”

The confusion around the term big data also partly explains why we introduced the term “total data” to refer to a broader approach to data management, managing the storage and processing of all available data to deliver the necessary business intelligence.

The distinction is clearly important when it comes to sizing the potential opportunity. I recently came across a report from one of the big banks that put a figure on what it referred to as the “big data market”. However, they had used the superset definition.

The result was therefore not a calculation of the big data market but of the total data management sector, since the approach taken was to add together the revenue estimates for all data management technologies – traditional and non-traditional (although the method is in itself too simplistic for us to endorse the end result).


Specifically, the bank had added up current market estimates for database software, storage and servers for databases, BI and analytics software, data integration, master data management, text analytics, database-related cloud revenue, complex event processing and NoSQL databases.

In comparison, the big data market is clearly a lot smaller, and represents a subset of revenue from traditional and non-traditional data management technologies, with a leaning towards the non-traditional technologies.

It is important to note, however, that big data cannot be measurably defined by the technology used to store and process it. As we have recently seen, not every use case for Hadoop or a NoSQL database – for example – involves big data.

Clearly this is a market that is a lot smaller than the one calculated by the bank, and the calculation required is a lot more complicated. We know, for example, that Teradata generated revenue of $489m in its third quarter. How much of that was attributable to big data?

Answering that requires a stricter definition of big data than is currently in use (by anyone). But as we have noted above, ‘big data’ cannot be defined by data volume, or by the technology used to store or process it.

There’s a lot of talk about the “big data problem”. The biggest problem with big data, however, is that the term has not been – and arguably cannot be – defined in any measurable way.

How big is the big data market? You may as well ask “how long is a piece of string?”

If we are to understand the opportunity for storing and processing big data sets then the industry needs to get much more specific about what it is that is being stored and processed, and what we are using to store and process it.

The beginning of the end of NoSQL

CouchOne has become the first of the major NoSQL database vendors to publicly distance itself from the term NoSQL, something we have been expecting for some time.

While the term NoSQL enabled the likes of 10gen, Basho, CouchOne, Membase, Neo Technology and Riptano to generate significant attention for their various database projects/products, it was always something of a flag of convenience.

Somewhat less convenient is the fact that grouping the key-value, document, graph and column family data stores together under the NoSQL banner masked their differentiating features and potential use cases.

As CouchOne’s Mikeal notes in the post: “The term ‘NoSQL’ continues to lump all the companies together and drowns out the real differences in the problems we try to tackle and the challenges we face.”

It was inevitable, therefore, that as the products and vendors matured the focus would shift towards specific use cases and the NoSQL movement would fragment.

CouchOne is by no means the only vendor thinking about distancing itself from NoSQL, especially since some of them are working on SQL interfaces. Again, we would see this fragmentation as a sign of maturity, rather than crisis.

The ongoing differentiation is something we plan to cover in depth with a report looking at the specific use cases of the “database alternatives” early in 2011.

It is also interesting that CouchOne is distancing itself from NoSQL in part due to the conflation of the term with Big Data. We have observed this ourselves and would agree that it is a mistake.

While some of the use cases for some of the NoSQL databases do involve large distributed data sets, not all of them do, and we had noted that the launch of the CouchOne Mobile development environment was designed to play to the specific strengths of Apache CouchDB: peer-based bidirectional replication, including disconnected mode, and a crash-only design.
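
Those replication strengths are exposed through CouchDB’s standard HTTP API. As a minimal sketch (the host names, database names and use of the Python requests library are my own assumptions, not from the original post), bidirectional sync amounts to two one-way requests to the _replicate endpoint:

```python
# Sketch of CouchDB peer-based replication via the /_replicate endpoint.
# COUCH_A and COUCH_B are hypothetical instances.
import requests

COUCH_A = "http://localhost:5984"          # e.g. a CouchDB on a mobile device
COUCH_B = "http://couch.example.com:5984"  # e.g. a central server

def replicate(source, target):
    """Ask the local CouchDB instance to replicate source into target."""
    resp = requests.post(f"{COUCH_A}/_replicate",
                         json={"source": source, "target": target})
    resp.raise_for_status()
    return resp.json()

# Bidirectional sync is simply two one-way replications. Replication is
# incremental and resumable, which is what makes disconnected operation
# (sync whenever a connection is available) practical.
replicate(f"{COUCH_A}/notes", f"{COUCH_B}/notes")
replicate(f"{COUCH_B}/notes", f"{COUCH_A}/notes")
```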

Incidentally, Big Data is another term we expect to diminish in usage in 2011, since Bigdata is a trademark of a company called SYSTAP.

Witness the fact that the Data Analytics Summit, which I’ll be attending next week, was previously the Big Data Summit. We assume that is also the reason Big Data News has been upgraded to Massive Data News.

The focus on big data sets and solving big data problems will continue, of course, but expect much less use of Big Data as a brand.

Similarly, while we expect many of the “NoSQL” databases to have a bright future, expect much less focus on the term NoSQL.

Webinar: navigating the changing landscape of open source databases

When we published our 2008 report on the impact of open source on the database market the overall conclusion was that adoption had been widespread but shallow.

Since then we’ve seen increased adoption of open source software, as well as the acquisition of MySQL by Oracle. Perhaps the most significant shift in the market since early 2008 has been the explosion in the number of open source database and data management projects, including the various NoSQL data stores, and of course Hadoop and its associated projects.

On Tuesday, November 9, 2010 at 11:00 am EST, I’ll be joining Robin Schumacher, Director of Product Strategy at EnterpriseDB, to present a webinar on navigating the changing landscape of open source databases.

Among the topics to be discussed are:

  • the needs of organizations with hybrid mixed-workload environments
  • how to choose the right tool for the job
  • the involvement of user corporations (for better or for worse) in open source projects today

You can find further details about the event and register here.

Scalable SQL: more than the mullet of the database world?

In the first part of our coverage on emerging database products and vendors we examined the new NoSQL databases and suggested that the incumbent database vendors would likely respond to the growing threat with a mix of in-memory and distributed caching technologies.

That is yet to happen, although it has only been a few months and the NoSQL databases have generated more noise than revenue at this stage. In the meantime, however, a new set of database vendors and products has emerged that could pose a more direct threat to the database incumbents while thwarting the potential of the NoSQL upstarts.

For want of a better phrase we have taken to referring to these products collectively as scalable SQL databases, and have just published a new spotlight report pulling together our various reports on the runners and riders.

Some of the vendors promise to deliver the scalability and flexibility promised by NoSQL while retaining the support for SQL queries and/or ACID (atomicity, consistency, isolation, durability). That is not an insignificant boast and it will be tough to offer the best of both worlds.

“SQL For Business, NoSQL For Partay!” is the explanation offered by MulletDB, a project that promises scalability and SQL queries. The danger is that scalable SQL ends up being the database equivalent of the celebrated mullet hairstyle or its business-attire equivalent: the jacket and jeans.

One of the companies trying to avoid that problem is GenieDB (coverage). The London-based company’s GenieDB Engine is a fully replicated distributed database that combines a key-value store with a ‘sharded’ memcached layer. Another example is Clustrix, which was founded in December 2006 to develop a new database appliance offering both scalability and durability in a single product.

Meanwhile VoltDB emerged earlier this summer with a transactional database management system that is designed to scale across clusters of industry-standard servers while retaining transactional integrity.

Additionally, Xeround has recently confirmed its intention to reposition its Intelligent Data Grid (IDG) technology as Xeround Data Service, a scalable SQL database with support for ACID-compliant transactional capabilities for cloud computing environments, while New Technology/enterprise’s CloudTran is designed to bring enterprise-level transaction management to GigaSpaces’ XAP in-memory data grid for on-premises deployment, and eventually to any PaaS offering.

Meanwhile, we are intrigued by VMware’s acquisition of distributed data management vendor GemStone and its positioning of GemFire as a next-generation data management layer for cloud applications, as well as the forthcoming introduction of SQL querying in GigaSpaces’ eXtreme Application Platform (XAP), which will enable in-memory management of relational data.

It is early days for all of these vendors, and they have yet to prove that they have truly solved the problem of consistency and partition tolerance. In the meantime there are plenty of other contenders waiting in line.

Akiban is promising that it has the secret to SQL scalability with an approach that pre-groups data in order to overcome latency, caching and data-distribution issues. Another company, currently in stealth mode, is JustOne Database, which is working on perfecting a new storage model to deliver the performance and scalability required to support transactions and analytics on the same data simultaneously.

That is also the goal of Tokutek, whose TokuDB MySQL storage engine is based on Fractal Tree indexing technology designed to reduce data-insertion times and improve the performance of MySQL for both read and write applications.

JustOne and Tokutek are part of a slightly different set of vendors we are viewing under the scalable SQL umbrella: those that promise to improve performance for appropriate workloads to the extent that the advanced scale-out capabilities promised by some NoSQL databases become irrelevant.

While we’re on the subject of existing database vendors that could be considered part of the scalable SQL set, it is also worth mentioning MarkLogic. The company has recently been associating itself with NoSQL, and while the fact that it does not support SQL makes it a better literal fit with NoSQL, the company’s support for ACID means that we would see it as an option for customers looking to improve performance without losing consistency, especially for unstructured or semi-structured data.*

As we previously noted, to some degree the rise of NoSQL has resulted from the inability of the MySQL database to scale consistently. It is therefore no surprise to see many of the scalable SQL vendors promising to improve the performance and scalability of MySQL, while others promote a clean-slate approach to address new big data management problems.

We have more details on each of the products and projects mentioned above (as well as some not mentioned), their potential use cases, how they relate to MySQL, and what potential impact they may have on the adoption of NoSQL technologies, in the full report.

This is very much the start of our coverage of these vendors, however. Expect more coverage in the near future, as well as a wider perspective on the potential for alternatives to the incumbent database suppliers, into 2011.

*Additionally, since the absence of SQL is only really tangential to many of the projects and products referred to as NoSQL it seems to me to be appropriate to have a database that does not support SQL in the scalable SQL category.

User perspectives on NoSQL

The NoSQL EU event in London this week offered interesting perspectives from both vendors – Basho, Neo Technology, 10gen, Riptano – and users – The Guardian, the BBC, Amazon, Twitter. In particular, I was interested in learning from the latter how and why they ended up using alternatives to the traditional relational database model.

Some of the reasons for using NoSQL have been well-documented: Amazon CTO Werner Vogels talked about how the traditional database offerings were unable to meet the scalability Amazon.com requires. Filling a functionality void also explains why Facebook created Cassandra, Google created BigTable, and Twitter created FlockDB (etc etc). As Werner said, “We couldn’t bet the company on other companies building the answer for us.”

As Werner also explained, however, the motivation for creating Dynamo was also about enabling choice and ensuring that Amazon was not trying to force the relational database to do something it was not designed to do. “Choosing the right tool for the job” was a recurring theme at NoSQL EU.

Given the NoSQL name, it is easy to assume that the relational database is by default “the wrong tool”. However, the most important element in that statement is arguably not “tool” but “job”, and The Guardian discussed how it was using non-relational data tools to create new applications that complement its ongoing investment in the Oracle database.

For example, the Guardian’s application to manage the progress of crowdsourcing the investigation of MPs’ expenses is based on Redis, while the Zeitgeist trending-news application runs on Google’s App Engine, as did its live poll during the recent party leaders’ election debate. Datablog, meanwhile, relies on Google Spreadsheets to serve up usable and downloadable data – we’ll ignore for a moment whether Google Spreadsheets is a NoSQL database 😉
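
Redis suits that kind of crowdsourcing workload because its native data structures map directly onto the bookkeeping involved. The following is a rough sketch of the idea – not The Guardian’s actual code; the key names and redis-py usage are my own illustration:

```python
# Hypothetical sketch of tracking crowdsourced document review in Redis.
import redis

r = redis.Redis(host="localhost", port=6379)

def mark_reviewed(page_id, user):
    # SADD returns 1 only if the page was not already in the set, so
    # duplicate submissions don't inflate the progress counter.
    if r.sadd("expenses:reviewed_pages", page_id):
        r.incr("expenses:pages_done")
    r.incr(f"expenses:user:{user}:count")  # per-user tally for a leaderboard

mark_reviewed(12345, "reader42")
done = int(r.get("expenses:pages_done") or 0)
print(f"{done} pages reviewed so far")
```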

Long-term, The Guardian is looking towards the adoption of a schema-free database to sit alongside its Oracle database, and is investigating CouchDB. The overarching theme, as Matthew Wall and Simon Willison explained, is that the relational database is now just one component in the overall data management story, alongside data caching, data stores, search engines etc.

On the subject of choosing the right tool for the job, Basho’s engineering manager Brian Fink pointed out that using NoSQL technology alongside a relational SQL database may actually improve the performance of the SQL database: storing data that does not need SQL features in a relational database slows down access to the data that does.

Another perspective on this came from Werner Vogels, who noted that, unlike database administrators/systems architects, users don’t care where data resides or what model it uses – as long as they get the service they require. Werner explained that the Amazon.com homepage is a combination of 200-300 different services, with multiple data systems. Users do not think about data sources in isolation; they care about the amalgamated service.

This was also a theme that cropped up in the presentation by Enda Farrell, software architect at the BBC, who noted that the BBC’s homepage is a PHP application integrated with multiple data sources at multiple data centers, and in that of Twitter’s analytics lead Kevin Weil, who described Twitter’s use of Hadoop, Pig, HBase, Cassandra and FlockDB.

While the company is using HBase for low-latency analytic applications such as people search, and moving from MySQL to Cassandra for its online applications, it uses its recently open-sourced FlockDB graph database to serve up data on followers and correlate the intersection of followers to (for example) ensure that Tweets between two people are only sent to the followers of both. (As something of an aside, Twitter is using Hadoop to store the 7TB of data it generates a day from Tweets, and Pig for non-real-time analytics.)
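
That follower-intersection logic is easy to picture as a set operation. Here is a toy illustration – my own sketch with invented user names, not Twitter’s actual FlockDB implementation:

```python
# Toy illustration of the fan-out rule described above: a Tweet between
# two users should only appear in the timelines of people who follow both.
alice_followers = {"bob", "carol", "dave"}
ted_followers = {"carol", "dave", "erin"}

# The intersection is the set of timelines the Tweet is delivered to.
mutual_followers = alice_followers & ted_followers
print(mutual_followers)  # {'carol', 'dave'}
```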

Kevin noted that the company is also working with Digg to build real-time analytics for Cassandra and will be releasing the results as open source, and also discussed how Twitter has made use of open source technologies created by others such as Facebook (both Cassandra and the Scribe log-data aggregation server).

One of the issues that has arisen from the fact that organizations such as Amazon and Facebook have had to create their own data management technologies is the proliferation of NoSQL databases and a certain amount of wheel re-invention.

Werner explained that SmugMug creator Don MacAskill ended up being a MySQL expert not because he necessarily wanted to be, but because he had to be in order to keep his applications running.

“He doesn’t want to have to become an expert in Cassandra,” noted Werner. “What he wants is to have someone run it for him and take care of that.” Presumably Riptano, the new Cassandra vendor formed by Jonathan Ellis – project chair for the Cassandra database – will take care of that, but in the meantime Werner raised another long-term alternative.

“We shouldn’t all be doing this,” he said, adding that Dynamo is not as popular within Amazon Web Services as it once was because it is a product that requires configuration and management, rather than a service, and Amazon employees “have better things to do.”

Which raises the question – don’t Twitter, Facebook, the BBC, the Guardian et al have better things to do than developing and maintaining database architecture? In a perfect world, yes. But in a perfect world they’d all have strongly consistent, scalable distributed database systems/services that are suited to their various applications.

Interestingly, describing S3 as “a better key/value store than Dynamo”, Werner noted that SimpleDB and S3 are “a good start to provide that service”.

Looking forward to NoSQL EU

I was asked a few weeks ago whether I thought NoSQL was largely a US (and specifically West Coast) phenomenon. While it might seem that way for some of those in the bubble that is the Bay Area (and to be fair, that’s where I was at the time), the answer is a definite “no”.

As if to prove it, NoSQL EU is being held in London next week, with a great program of presentations from NoSQL vendors, projects and users.

April 20 features presentations on The Guardian’s use of NoSQL, as well as an overview from Alex Popescu of MyNoSQL, followed by presentations from Basho, 10gen, Rackspace and Neo Technology.

April 21 sees Amazon CTO Werner Vogels describing the birth of Dynamo, as well as presentations on the use of NoSQL databases from the BBC, Twitter, and Comcast. That is followed by presentations on Redis, Tokyo Cabinet (et al) and “the fate of the relational database”. Oh, and a panel debate moderated by some bloke called James Governor 😉

Then on the 22nd there’s a day of workshops involving MongoDB, Redis, Riak and Neo4J.

It’s shaping up to be a great event and I’m really looking forward to it. If you’re going to be there and want to say hi (between sessions!) let me know.

How will pro-SQL respond to NoSQL?

Gear6’s Mark Atwood is less than impressed with my recent statement: “Memcached is not a key value store. It is a cache. Hence the name.”

Mark has responded with a post in which he explains how memcached can be used as a key value store with the assistance of “persistent memcached” from Gear6, or by combining memcached with something like Tokyo Cabinet.

As much as I agree with Mark that other technologies can be used to turn memcached into a key value store, I can’t help thinking his post actually proves my point: that memcached itself is not a key value store.
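
The distinction matters in practice. In the classic cache-aside pattern, memcached sits in front of a system of record precisely because anything it holds can be evicted or lost on restart. A minimal sketch, assuming the pymemcache Python client and a hypothetical load_user_from_db stand-in for the persistent database:

```python
from pymemcache.client.base import Client

cache = Client(("localhost", 11211))

def load_user_from_db(user_id):
    ...  # hypothetical: fetch the record from MySQL or similar

def get_user(user_id):
    key = f"user:{user_id}"
    value = cache.get(key)
    if value is not None:
        return value
    # Cache miss: memcached may never have had the key, or may have
    # evicted it under memory pressure. The durable copy lives
    # elsewhere -- hence "cache", not "store".
    value = load_user_from_db(user_id)
    cache.set(key, value, expire=300)
    return value
```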

Either way it brings me to the next post in the NoSQL series (see also The 451 Group’s recent Spotlight report), looking at what the existing technology providers are likely to do in response.

I spent last week in San Francisco at the Open Source Business Conference where David Recordon, head of open source initiatives at Facebook, outlined how the company makes use of various open source projects, including memcached and MySQL, to scale its infrastructure.

It was an interesting presentation, although the thing that stood out for me was that Recordon didn’t once mention Cassandra, the open source key value store created by Facebook, despite being asked directly about the company’s plans for what was rather quaintly referred to as “non-relational databases”.

In fact, this recent post from Recordon puts Cassandra in context: “we use it for Inbox search, but the majority of development is now being led by Digg, Rackspace, and Twitter”. It is technologies like MySQL and memcached that Facebook is scaling to provide its core horsepower.

The death of memcached, as they say, has been greatly exaggerated.

That said, it is clear that to some extent the rise of NoSQL can be explained by CAP Theorem and the inability of the MySQL database to scale consistently. Sharding is a popular method of increasing the scalability of the MySQL database to serve the requirements of high-traffic websites, but it’s manually intensive. The memcached distributed memory object-caching system can also be used to improve performance, but does not provide persistence.
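
To make that manual burden concrete, here is a minimal sketch of application-level sharding, with hypothetical shard hosts; the application, not the database, decides which MySQL instance owns a given key, and must apply that logic consistently everywhere:

```python
import hashlib

# Hypothetical shard hosts; adding a shard means rebalancing data by hand.
SHARDS = [
    "mysql-shard-0.example.com",
    "mysql-shard-1.example.com",
    "mysql-shard-2.example.com",
]

def shard_for(user_id):
    """Route a key to a shard by hashing it."""
    digest = hashlib.md5(str(user_id).encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# Every query for this user must go to the same host; cross-shard joins
# and transactions are left entirely to the application.
print(shard_for(42))
```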

An alternative to throwing out investments in MySQL and memcached in favor of NoSQL, however, is to improve the MySQL/memcached combination. A number of vendors, including Gear6 and NorthScale, are developing and delivering technologies that add persistence to memcached (see recent 451 Group coverage on Gear6 and NorthScale), while Schooner Information Technology (451 coverage) and Virident Systems (451 coverage) have taken an appliance-based approach to adding persistence.

Another approach is to improve the performance of MySQL itself. ScaleDB (451 coverage) has a shared-disk storage engine for MySQL that promises to improve its scalability. We have also recently come across GenieDB (451 coverage), which is promising a massively distributed data storage engine for MySQL. Additionally, Tokutek’s TokuDB MySQL storage engine is based on Fractal Tree indexing technology that reduces data-insertion times, improving the performance of MySQL for both read and write applications.

As we noted in our recent assessment of Tokutek, while TokuDB is effectively an operational database technology, it does blur the line between operations and analytics since the company claims it delivers a performance improvement sufficient to run ad hoc queries against live data.

Beyond MySQL, while we expect the database incumbents to feel the impact of NoSQL in certain use cases, the lack of consistency (in the CAP Theorem sense) inevitably enables quick dismissal of their wider applicability. Additionally, we expect to see the data management vendors take steps to improve performance and scalability. One method is through the use of in-memory databases to improve performance for repeatedly accessed data, another is through the use of in-memory data grid caching technologies, which are designed to solve both performance and scalability issues.

Although these technologies do not provide the scalability required by Facebook, Amazon, et al., the question is, how many applications need that level of scalability? Returning again to CAP Theorem, if we assume that most applications do not require the levels of partition tolerance seen at Google, expect the incumbents to argue that what they lack in partition tolerance they can make up for in consistency and availability.

Somewhat inevitably, the requirements mandated by NoSQL advocates will be watered down for enterprise adoption. At that level, it may arguably be easier for incumbent vendors to sacrifice a little consistency and availability for partition tolerance than it will be for NoSQL projects to add consistency and availability.

Much will depend on the workload in question, which is something that is being hidden by debates that assume a confrontational relationship between SQL and NoSQL databases. As the example of Facebook suggests, there is room for both MySQL/memcached and NoSQL.

Categorizing the “Foo” fighters – making sense of NoSQL

One of the essential problems with covering the NoSQL movement is that the term describes not what the associated databases are, but what they are not (and doesn’t even do that very well, since SQL itself is in many cases orthogonal to the problem the databases are designed to solve).

It is interesting to see fellow analyst Curt Monash facing the same problem. As he notes, while there seems to be a common theme that “NoSQL is Foo without joins and transactions,” no one has adequately defined what “Foo” is.

Curt has proposed HVSP (High-Volume Simple Processing) as an alternative to NoSQL, and while I’m not jumping on the bandwagon just yet, it does pass the Ronseal test (it does what it says on the tin), and it also matches my view of what defines these distributed data store technologies.

Some observations:

  • I agree with Curt’s view that object-oriented and XML databases should not be considered part of this new breed of distributed data store technologies. There is a danger that NoSQL simply comes to mean non-relational.
  • I also agree that MapReduce and Hadoop should not be considered part of this category of data management technologies (which is somewhat ironic since if there is any technology for which the terms NoSQL or Not Only SQL are applicable, it is MapReduce).
  • The vendors associated with the NoSQL movement (Basho, Couchio and MongoDB) are in a problematic position. While they are benefiting from, and to some extent encouraging, interest in NoSQL, the overall term masks their individual benefits. My sense is they will look to move away from it sooner rather than later.
  • Memcached is not a key value store. It is a cache. Hence the name.
There are numerous categorizations of the various NoSQL technologies available on the Internet. Without wishing to add yet another to the mix, I have created another one – more for my benefit than anything else.

It includes a list of users for the various projects (where available), and also some sense of how the various projects fit into CAP Theorem, an understanding of which is, to my mind, essential to understanding how and why the NoSQL/HVSP movement has emerged (look out for more on CAP Theorem in a follow-up post on alternatives to NoSQL).

Here’s my take, for those that are interested. As you can see, there’s a graph database-shaped hole in my knowledge. I’m hoping to fill that sooner rather than later.

By the way, our Spotlight report introducing The 451 Group’s formal coverage of NoSQL databases will be available here imminently.

Update: VMware has announced that it has hired Redis creator Salvatore Sanfilippo, and is taking on the Redis key value store project. The image below has been updated to reflect that, as well as the launch of NorthScale’s Membase.