
Now we know: Alterian buys Mediasurface

Mediasurface announced there was an acquisition in the works; we just didn’t know who the acquirer would be. But word came on Friday that it is fellow UK-based Alterian, a provider of email, database and operational marketing tools.

So it’s another step in the direction of WCM becoming a key component of online marketing. Though, as Tony Byrne rightly points out, not all WCM deployments are for marketing purposes. And I would say Mediasurface in particular has not been as focused on marketing as some competitors.

It will be interesting to watch as this one shakes out. Alterian is a more profitable company than Mediasurface, particularly as Mediasurface has struggled with losses of late. Alterian also has a stronger presence in North America, something Mediasurface has failed to attain.

Alterian doesn’t have a direct sales force, though; its products are sold almost entirely via channel partners. These partners could help bring Mediasurface to more geos, but at the high end where the flagship Morello product from Mediasurface plays, WCM sales cycles can be long and are not typically entirely channel-based. Mediasurface also has not one but three WCM products in its portfolio, and Alterian will have to determine where best to focus its efforts if it’s looking at an ‘integrated suite.’

More exec changes at Vignette

Vignette’s industry analyst day was last Thursday and, as Guy Creese notes, these are often interesting because “Vignette personnel vanish and new people turn up to take their place with nary a word, so it’s always fun to figure out who’s missing in action based on last year’s agenda.”

Guy’s having some fun at Vignette’s expense of course, but it’s no secret Vignette has had a lot of executive turnover over the last couple of years, and it hasn’t stopped. Execs on last year’s analyst day agenda who are gone this year include Cathie Frazzini, who led Vignette’s partner efforts for a little more than a year, and long-time head of products Leo Brunnick. Dave Dutch, most recently of Level 3 Communications, has just replaced Brunnick to run product management and marketing. And Rob Amor, long-time head of Vignette’s EMEA services org, has taken on the corporate BD role from the UK.

Like Guy, I’ve attended Vignette analyst days before, and he’s also right in noting that “Vignette’s Analyst Day is typically heavy with customer testimonials.” This year was no exception. Four of the six customers that presented (including HBS and Vertrue) were fairly new to Vignette, which is interesting since Vignette has struggled with new license revenue in recent quarters.

Overall Vignette presented a more upbeat outlook than one might expect given the company’s recent financial results. The company has introduced several new products already this year (yes, some are OEMs and some are just enhancements, but it’s still better than what we’ve seen from Vignette in a while) and has a few more planned before year end. It also acquired video management play Vidavee, which it claims will be integrated before the end of this quarter.

It will likely be a couple more quarters before Vignette’s largely-revamped field organization can make some hay with these new products. If it’s able to do so, the license numbers might start to turn around. We’ll certainly be watching to see.

FAST-Stellent – what might have been

The combination of search, text analysis and content management is turning into one of the central memes of this blog. This wasn’t deliberate, although it’s something we’ve deliberated internally for a couple of years.

There were plenty of partnerships between search and content management vendors around, but they seemed to us to be either at the press release level, i.e. little more than marketing, or the result of a small handful of one-off projects in the field.

But it turns out others within the industry were thinking about much deeper integrations even if they weren’t saying so publicly.

About a year after Stellent and FAST (both then independent, of course) announced a partnership that resulted in Stellent OEMing FAST’s engine, FAST seriously considered buying Stellent.

I’ve heard from a couple of reliable sources that this was discussed at the highest level within FAST, but it chose not to pursue the deal. Instead it veered way off its core business and distracted itself to such an extent that it got tied up in knots. This ended with it being forced to take about $55m in charges in 2007, which sent its share price plummeting and ultimately made FAST a lot cheaper for Microsoft than it would otherwise have been.

Incidentally, one of those sidebars – Ezmo – a music community site (presented to analysts in February 2007 as a “customer” of FAST, when in fact the phrase that should have been used was “wholly-owned subsidiary”) was shut down in March.

Of course Stellent went on to be acquired by Oracle in 2007 and we’ve been impressed by the way the database giant has integrated the company so far.

But FAST and Stellent could have made for an interesting combination, marrying the ability to manage unstructured content with the ability to analyze it; who knows, FAST-Stellent might’ve been a force to be reckoned with. Now we look to see what Microsoft (something of a toe-dipper when it comes to content management) and Oracle (armed with a pretty decent search engine) do to prolong this meme.

Who might be after Mediasurface?

Tony Byrne picked up on this statement from Mediasurface that “notes the recent share price and announces that it has received a preliminary approach, which may or may not lead to an offer for the Company.”

The statement goes on to say that “The approach has been received from a UK company that does not compete directly with Mediasurface and the Directors expect that regardless of the outcome of these discussions, the services that Mediasurface provides to existing and future customers will be unaffected.”

What UK company that does not compete directly with Mediasurface might be interested in acquiring it? SDL already took Tridion, a decision it is no doubt happy with given Tridion’s 2007 financial results. Autonomy is the only other company that comes to mind as a substantial UK-based player in the information management realm (one without WCM, we might add). WCM doesn’t seem a logical fit for Autonomy’s current portfolio though, which has certainly grown with its 2007 acquisitions of Zantaz and Meridio. These are pretty clear-cut compliance / e-discovery related buys without explicit ties to WCM.

So maybe it is an SI or design agency looking to own the delivery technology itself. Mediasurface has checked boxes at the low, mid-market, and high end in WCM, in part by acquiring fellow UK-based WCM play Immediacy in June of last year and Silverbullet out of Holland back in 2005.

Mediasurface may not be the most attractive candidate at the moment, as it had a difficult fiscal 2007, reporting an EBITDA loss of £1.3m on revenues of £11.6m. Losses were blamed on the low-end Pepperio service and on market difficulty for the high-end Morello product in the financial services industry and in accounts using Microsoft SharePoint. The company’s stock tumbled to 4p per share on that news, but has been up in April on acquisition rumors.

Growth in WCM remains strong overall though (451 clients can read analysis of sector growth rates across vendors here) and there are still too many independent players with revenues in the $20m-ish range. More consolidation certainly seems likely.

GroupSwim: text analysis meets collaboration

This blog post led us to GroupSwim, a company we met with the other day. I found GroupSwim to be a particularly interesting example of the value text analysis can lend to content management, something Nick wrote about the other day.

GroupSwim isn’t selling content management software in the classic sense. Its SaaS offering is for collaboration, either for internal teams or externally-facing communities. It actually reminds me most of Koral, which Salesforce.com acquired a year ago and which has since become Salesforce Content.

There’s a bit more meat to what GroupSwim offers, though, as it uses natural language processing to recommend tags, auto-tag content added to the system and recommend related content. We spoke to an early GroupSwim customer yesterday who raved about the system’s ability to auto-categorize emails and other docs, making it easier to get content into the system in an organized way and to find content on a particular topic or customer account (this customer is using the service as a collaboration tool for sales and marketing).
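GroupSwim hasn’t published how its NLP works, so as a purely hypothetical sketch of the simplest form of tag recommendation (rank a document’s non-stop-word terms by frequency), consider the following; the stop-word list and function names are my own placeholders, and a real system would use entities, phrases and context rather than raw counts:

```python
import re
from collections import Counter

# Tiny stop-word list, for illustration only
STOP_WORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is",
              "for", "on", "with", "that", "this", "it", "as", "be"}

def recommend_tags(text, max_tags=5):
    """Suggest tags by ranking non-stop-word terms by frequency."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOP_WORDS and len(w) > 2)
    return [term for term, _ in counts.most_common(max_tags)]

doc = ("The sales team met with the customer account team to review "
       "the quarterly sales pipeline and customer renewals.")
print(recommend_tags(doc, 3))  # → ['sales', 'team', 'customer']
```

Even this crude approach hints at the value: content arrives pre-organized instead of depending on users to tag it by hand.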

Applying this sort of text analysis in a group collaboration / social software tool isn’t something I’ve heard much about lately, and it’s this sort of thing that will differentiate vendors from the increasingly large pack going forward. GroupSwim is still tiny, and with its service not generally available until this past December, it’s perhaps a little late to this party; it will need to ramp up its own sales and marketing efforts significantly. 451 Group clients can expect a full write-up on GroupSwim in the coming days.

IWOV and VIGN: A tale of two earnings calls

Interwoven and Vignette both released Q1 numbers in the last two days and their numbers highlight the different paths these long-time competitors are now on.

Vignette announced disappointing results. Vignette’s total revenue for the quarter was $44.8m, a 6% decrease from Q1 2007, with a net loss of $0.8m, compared with a $4.8m profit a year ago. Particularly disappointing for Vignette was license revenue, which declined 36% to $9.7m. Vignette warned three weeks ago that its results would be weaker than expected, so the news wasn’t a surprise, but the mood of the call was still somber.

Interwoven, on the other hand, announced a 17% increase in revenue to $61.5m with a 12% increase in license revenue and net income of $6.1m. Interwoven has always been a fairly conservative company but even so, one of the execs on the call said something along the lines of “we’re not claiming our earnings are recession proof but…” They were downright cheerful — and with good reason.

Emergent systems and WCM

I’ve noted before that I sort of wear two hats at The 451 Group, covering both content management and collaborative technologies. They’re related surely and perhaps more so every day, but traditionally have been rather separate. In any event, I have the benefit of looking at most things through (at least) two sets of lenses and am not so far in the weeds in one market that I miss related implications.

For example, Infovark has an interesting post about emergent systems and as I was looking specifically at their table, it struck me how much their definition of “explicit” (rules-based, top down, centralized, push) defines what most WCM vendors are trying to do today with targeted content delivery. But “emergent” technologies are the opposite (outcome-based, bottom up, decentralized, pull).

In short, it’s the difference between content targeting and user-generated content. There seems to me to be a real gap between those vendors doing the former and those supporting the latter.

We’ve been spending a lot of time with vendors in the customer community realm of late, while still keeping a close eye on the WCM marketplace. The relationship between the two seems quite obvious to me, though we’re only starting to see bits of it in the market. There is some partnering going on, and at least one OEM deal I know of, though I’ve been asked not to publicize it yet. I suspect we’ll see more of this in the coming months.

Text analysis + content management = insight

We have long wondered why more content management vendors don’t fully embrace text analysis (or even enterprise search for that matter).

These guardians of most organizations’ unstructured data were beaten to the punch in exploiting text by business intelligence companies, which are more accustomed to manipulating structured data. While it’s great that the BI companies are starting (slowly) to embrace the idea of unlocking the value locked within unstructured text, it’s somewhat bizarre that content management vendors didn’t get there first.

We said this many years ago, in the most coherent form in mid-2005 with our report called Text-aware applications: the endgame for unstructured data (the clue’s in the title).

In that report we said:

“…while the penetration of content management systems is relatively high when compared with other ways of managing unstructured data, these systems do little at present to help analyze that unstructured data.”

and somewhat optimistically:

“Indeed, despite the CMS’s [content management systems] ability to organize, most implementations rarely attempt to push into anything that could be considered a semantic understanding of the content. This may be set to change, however, with some vendors, such as EMC, making headway in automatically parsing documents at a deeper level than just file-level metadata.”

That was a tad premature on our part.

Think about the main players and what they do to understand what resides in the documents they ‘manage.’

EMC Documentum – it has its content intelligence services classification engine, sure, and it bought a federated search product many moons ago, but neither is exactly front and center in its product strategy. And ILM (try searching on that now at EMC and see what you get) only dealt with file-level metadata, not semantic metadata. However, the X-Hive acquisition was an interesting one from this standpoint (see below for more on XML databases).

Vignette – bar an OEM relationship with Autonomy (which most vendors have), nothing much doing here, despite the need for Web content management to increase its understanding of the text it’s managing to make websites more attractive to advertisers (think of using text analysis to automatically build links to other content to keep visitors on the site longer).

Interwoven – Metatagger isn’t exactly at the bleeding edge any more, although the idea is sound.

IBM FileNet – here there is hope. IBM has taken a classifier it got from its iPhrase acquisition and used it to do initial classification to help determine what should or should not be deemed a record. IBM has all sorts of text analysis toys to play with, and we expect more from it in the future.

Open Text – it once had five search engines, and was a pioneer in that space. But I’m not aware of anything it does to extract meaning from the content it manages.

Autonomy – Its tagline is ‘Meaning-based computing.’ It owns a powerful classification engine but now also owns records management and a bunch of other stuff. It’s the one company that checks most of the boxes here (but isn’t a document or Web content management vendor). But as the company currently refuses to talk to us, we’re in the dark as to which bit fits where and are unable to tell our clients what benefits Autonomy could bring them as a result. If the company cares to get in touch with me, I’m here.

This post was prompted partly by a recent conversation I had with Nstein. It is morphing from a struggling text analysis vendor laden with debt (it’s publicly traded in Canada, so the numbers don’t lie) into a fast-growing combination of Web content management, digital asset management (via acquisitions in 2006 and 2007) and text analysis, built atop an XML database licensed from IxiaSoft. It’s focusing exclusively on the largest publishing companies, using the text analysis to automatically create links between new and archived content (thus pushing it up Google rankings). It competes mainly with Mark Logic and Interwoven.
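Nstein’s linking technology is proprietary, but the underlying idea of relating a new article to archived ones can be sketched, under my own simplifying assumptions, as cosine similarity over bag-of-words vectors; the article titles and helper names below are invented for illustration:

```python
import math
import re
from collections import Counter

def vectorize(text):
    """Turn text into a term-frequency vector."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values())) *
            math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def related_links(new_article, archive, top_n=2):
    """Rank archived articles by similarity to a new one."""
    v = vectorize(new_article)
    scored = sorted(archive.items(),
                    key=lambda kv: cosine(v, vectorize(kv[1])),
                    reverse=True)
    return [title for title, _ in scored[:top_n]]

archive = {
    "Oracle buys Stellent": "Oracle acquires content management vendor Stellent",
    "FAST earnings": "Search vendor FAST reports charges and a falling share price",
    "Cookery corner": "How to bake a pie in forty five minutes",
}
print(related_links("Microsoft to acquire search vendor FAST", archive))
# → ['FAST earnings', 'Oracle buys Stellent']
```

Production systems add entity extraction, phrase weighting and editorial rules on top, but even this toy version shows why auto-linking helps keep archived content discoverable (and crawlable).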

Any Gmail user who looks in their spam folder and sees ads for “Spam Swiss Pie – Bake 45-55 minutes or until eggs are set” can appreciate that crude keyword matching against content is next to useless.

There’s so much more that can be done here and so much insight being left on the table, whether it be in better website management to attract readers, voice of the customer analysis tied to BI, or government intelligence.

Tools that manage content need to understand that content – its language, its meaning, its sentiment. Otherwise, they are missing a trick.

Recent ECM (and/or) SharePoint links

It’s been a long week in the Reidy household…coughing, pink eye…anyone with little kids knows the drill. I’m finally catching up on some feed reading and there’s been some interesting dialogue this week about SharePoint. Is it possible to post about content management or social software these days without involving SharePoint?

Bridging the “storage-information” gap

When Nick first unveiled this blog last month, he rightly noted ‘storage’ as one of the many categories that fall into the capacious bucket we term ‘information management.’ With this in mind he reminded me that it would be appropriate for The 451 Group’s storage research team to contribute to the debate, so here it is!

For the uninitiated, storage can appear to be either a bit of a black hole, or just a lot of spinning rust, so I’m not going to start with a storage 101 (although if you have a 451 password you can peruse our recent research here). Suffice to say that storage is just one element of the information management infrastructure, but its role is certainly evolving.

Storage systems and associated software traditionally have provided applications and users with the data they need, when they need it, along with the required levels of protection. Clearly, storage has had to become smarter (not to mention cheaper) to deal with issues like data growth; technologies such as data deduplication help firms grapple with the “too much” part of information management. But up until now the lines of demarcation between “storage” (and data management) and “information” management have been fairly clear. Even though larger “portfolio” vendors such as EMC and IBM have feet in both camps, the reality is that such products and services are organized, managed and sold separately.
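As a toy illustration of the deduplication idea mentioned above (store each identical chunk of data only once, keyed by its hash), here is a fixed-size-block sketch of my own; real products use content-defined chunking, compression and far more robust indexing:

```python
import hashlib

class DedupStore:
    """Naive fixed-size-block deduplicating store: identical blocks
    are stored once and referenced by their SHA-256 digest."""

    def __init__(self, block_size=8):
        self.block_size = block_size
        self.blocks = {}   # digest -> block bytes (stored once)
        self.files = {}    # file name -> ordered list of digests

    def put(self, name, data):
        digests = []
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            d = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(d, block)  # skip blocks already stored
            digests.append(d)
        self.files[name] = digests

    def get(self, name):
        """Reassemble a file from its block references."""
        return b"".join(self.blocks[d] for d in self.files[name])

store = DedupStore()
store.put("a.txt", b"ABCDEFGH" * 4)                    # four identical blocks
store.put("b.txt", b"ABCDEFGH" * 2 + b"ZZZZZZZZ")      # mostly duplicates
print(len(store.blocks))  # → 2 unique blocks stored for both files
```

The saving comes from the `setdefault` line: repeated blocks across (and within) files consume index entries but no new storage, which is why dedup pays off so well on backup workloads full of near-identical copies.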

That said, there’s no doubt these worlds are coming together. The issues we as analysts are grappling with relate to where and why this is taking place, how it manifests itself, the role of technology, and the impact of all this on vendor, investor and end-user strategies. At the very least there is a demand for technologies that help organizations bridge the gap – and the juxtaposition – between the fairly closeted, back-end storage “silo” and the more, shall we say, liberated, front-end interface where information meets its consumers.

Here, a number of competing forces are challenging, even forcing, organizations to become smarter about understanding what “information” they have in their storage infrastructure: data retention vs. data disposition, regulated vs. unregulated data and public vs. private data, to name just three. Armed with such intelligence, firms can, in theory, make better decisions about how (and how long) data is stored, protected, retained and made available to support changing business requirements.

“Hang on a minute,” I hear you cry. “Isn’t this what Information Lifecycle Management (ILM) was supposed to be about?” Well, yes, I’m afraid it was. And one thing that covering the storage industry for almost a decade has taught me is that it moves at a glacial pace. In the case of ILM, the iceberg has probably lapped it by now. The hows and whys of ILM’s failure to capture the imagination of the industry are probably best left for another day, but I believe that at least one aim of ILM – helping organizations better understand their data so it can better support the business – still makes perfect sense.

What we are now seeing is the emergence of some real business drivers that are compelling a variety of stakeholders – from CIOs to General Counsel – to take an active interest in better understanding their data. This, in turn, is driving industry consolidation as larger vendors in particular move to fill out their product portfolios; the latest example is the news of HP’s acquisition of Australia-based records management specialist Tower Software. Over the next few weeks I’ll be exploring in more detail three areas where we think this storage-information gap is being bridged: e-discovery, archiving and security. Stay tuned for our deeper thoughts and perspectives in this fast-moving space.