First let me clarify. I believe mobile IS important and a huge emerging channel. Sources of traffic have shifted dramatically, and I don’t have my head buried in the sand in that regard. Across many of my companies, mobile origination (tablet included) comprises anywhere from 30-50%+ of traffic. I recognize that access patterns have structurally changed.
When I say API first, I mean that an idealized service needs to start with a core infrastructure with robust APIs that is tapped into via any number of “front ends”: web, mobile, and even 3rd party ecosystems. If you look behind many “web first” companies today, including in our portfolio, you’ll see a very clean architectural split between the front end and the back end. The back end exposes a range of services that allows the front end to innovate independently and be re-purposed in interesting ways depending on changing business needs. The rate of change on the front end is usually a LOT higher than in the back; the scale and stability requirements on the back are far more demanding than on the front.
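The split described above can be sketched in a few lines. This is a toy illustration under my own assumptions (the names `ItemService`, `web_render`, `mobile_render`, and `api_response` are hypothetical, and a real back end would expose HTTP/JSON endpoints rather than in-process calls): one stable core service, three thin surfaces on top of it.

```python
# Hypothetical sketch: one core service API, multiple thin front ends.
import json


class ItemService:
    """The 'back end': stable, versioned operations exposed to any client."""

    def __init__(self):
        self._items = {1: {"id": 1, "name": "Widget", "price_cents": 999}}

    def get_item(self, item_id: int) -> dict:
        # In a real system this would sit behind an HTTP/JSON endpoint.
        return self._items[item_id]


def web_render(service: ItemService, item_id: int) -> str:
    """The web front end: rich, exploratory presentation."""
    item = service.get_item(item_id)
    return f"<h1>{item['name']}</h1><p>${item['price_cents'] / 100:.2f}</p>"


def mobile_render(service: ItemService, item_id: int) -> str:
    """The mobile front end: terse, transactional presentation."""
    item = service.get_item(item_id)
    return f"{item['name']} ${item['price_cents'] / 100:.2f} [Buy]"


def api_response(service: ItemService, item_id: int) -> str:
    """The third-party surface: raw JSON for programmatic distribution."""
    return json.dumps(service.get_item(item_id))
```

Each front end iterates at its own pace while the service contract underneath stays stable — which is exactly why the rate of change can be so much higher on the front than on the back.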
“Mobile first” companies really are just a front end selection accessing a solid API-driven backend infrastructure. The use case, the logic, and what the app is optimized for may be a subset of, or different from, the Web experience, and I think this is what Fred Wilson and others are focused on.
But as I look at the world, while the point of entry may vary, I believe having all three elements of web, mobile, and 3rd party is going to be table stakes in the future. You CANNOT be one only. Users want different experiences for their different points of engagement. Mobile is about speed of access, much more transactional and timely, very much about getting something done. The web is great for researching, deliberating, and exploring. Both are different aspects of the same service, and as a user I’d want both, depending on the moment. Finally, enabling third parties is a realization of the web services and SOA manifestos from the late 90s that allow for programmatic distribution and can launch powerful new economic models.
Facebook has already shown us the above and what a powerful, mature, winning service looks like. They have their core site, their massively used mobile applications, and the various graphs 3rd parties access, which give them tremendous power, platform extension, and plata. Instagram, often cited as the poster child for “mobile first”, recently announced they intend to move consumption to their core web site.
So to wrap up, sure, there might be some apps that are best started purely in a mobile context. But I’d bet 99% of the services out there will have to incorporate all three elements, and that starts with building an incredibly solid foundation. API first, front end second, all screens third.
StraighterLine is an online, low-cost, subscription-based provider of the general education courses that many take in their first two years of college (Algebra, Biology, Calculus, US History, etc). The courses are ACE Credit recommended and can be transferred for credit to various degree-granting institutions (25+ automatically transfer today, over 200 universities around the country have accepted them post review, and growing). What does that mean in lay terms? Well, you can flexibly and cheaply take a variety of high quality courses at a much lower cost than anywhere else, transfer into institutions that accept StraighterLine’s courses for credit, and bring your blended cost of a degree down dramatically.
The two charts below summarize well the drivers for an investment like StraighterLine:
Costs have skyrocketed faster than healthcare costs over the last few decades. Student debt has ballooned to over $1 trillion, surpassing credit card debt according to the Federal Reserve Bank of New York. StraighterLine’s students pay $99/month plus $39/course for the pay-as-you-go service, or $999 undiscounted for a Freshman Year equivalent. Against even public two-year institutions, StraighterLine offers very significant savings for the student.
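As a rough back-of-the-envelope sketch of the pricing above (the course load and the number of months are my own assumptions, not StraighterLine’s published figures):

```python
# Published prices from the post; course counts/months below are assumptions.
MONTHLY_FEE = 99        # $/month subscription
PER_COURSE_FEE = 39     # $ per course
FRESHMAN_BUNDLE = 999   # undiscounted freshman-year-equivalent bundle


def pay_as_you_go_cost(months: int, courses: int) -> int:
    """Total pay-as-you-go cost for a given pace of study."""
    return months * MONTHLY_FEE + courses * PER_COURSE_FEE


# A hypothetical 8-course load over a 9-month academic year:
slow_pace = pay_as_you_go_cost(months=9, courses=8)   # 9*99 + 8*39 = 1203
# The same load completed in 6 months:
fast_pace = pay_as_you_go_cost(months=6, courses=8)   # 6*99 + 8*39 = 906
```

Note the incentive built into the model: at a leisurely 9-month pace the $999 bundle is the cheaper option, while a student who finishes faster pays less going month to month. Either way, the total sits far below typical per-year tuition.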
In addition to pricing, there are other issues lurking beneath the surface. Funding for public education is getting slashed. California’s 112 community colleges are having their budgets cut by hundreds of millions of dollars. The system is having to turn away students because it is no longer able to find enough space to serve them. The unfortunate incidents at Santa Monica College — where the school’s attempt to balance demand with supply by creating a higher-priced tier for the most in-demand courses instead led to riots and maced students — underscore this point.
Taxpayer funding aside, the federal government is looking much more closely at graduation rates and successful job placements at institutions that accept students with federal aid. As institutions begin to trim enrollments and focus on academic quality, their acceptance criteria will continue to grow more selective. An institution like StraighterLine can be an effective partner in preparatory coursework to ease the transition and improve a student’s chances of success prior to formal enrollment.
Finally, as we think about structural unemployment challenges, the ability to easily access new learning, complete coursework in a flexible manner, and base competency on outcomes of learning and not on time spent in a course (ie, “credit hours”) will be a key part of solving the country’s labor issues. The influx of non-traditional students (older, single mothers, workers retraining) is expected to grow at a much faster rate than traditional college students, and we will need institutions that can cater to this class.
StraighterLine offers a scalable solution to these challenges, where all parties benefit – easing the burden on taxpayers who fund institutions, saving money for students seeking to improve skills, improving student selection for institutions seeking to raise academic performance, and democratizing access to education for a newly mobile work force. The ambitions of StraighterLine do not end there. Burck Smith, founder & CEO of StraighterLine, has been a passionate advocate and visionary in the education space for many years. His last company, SmartThinking, pioneered post-secondary online tutoring and student support services and was acquired by Pearson.
With the round, we will invest heavily in building out a unique platform and set of services that innovate on behalf of students, embracing all of the things an online, data driven platform can do. We are working with a number of providers to build assessments to help the industry shift towards a competency based view of learning. And we are also engaging the employer community, to create better linkages between the education students receive and the more tangible successful outcome of employment.
Stay tuned for more, but suffice to say there is a fantastic opportunity to use technology and innovation to leapfrog America once again to the head of the global class! We are delighted to play a small part and partner with a great team in doing so.
1 [Source: New York Times, Lewin, Tamar.“Higher Education May Soon Become Unaffordable for Most in U.S.”]
2 [Source: LiveScience]
Reading today about Apple redesigning their store experience got me thinking about the role of the store in the future. New online companies are exploding into multi-billion dollar vertical categories and growing rapidly. Does that mean there is no room for the store? Not in my mind, but its fundamental role will evolve consistent with our thesis around the convergence of offline and online models.
For the last few decades, retail companies expanded by driving a greater and greater geographic footprint. Stores were the only point for transacting product, and even today they represent the preponderance of sales. “Same store” numbers helped to gauge the productivity of a store, but the top line was most of the time driven by the number of new stores. Stores had P&Ls they were responsible for, and this system has created an entrenched infrastructure and KPI bias where many retailers measure and optimize around in-store traffic, rather than driving the “most efficient transaction” regardless of where it occurs.
I believe this model has to change. Consumers have a myriad of choices on how and where to transact today. Online ordering, with its very liberal return policies, has made it far easier to buy something, try it, and send it back than to spend a few hours physically going somewhere. Buying has bifurcated into two buckets: routine transactions and inspired transactions. Routine transactions are things that we know we need, that don’t require a lot of thinking, that get replenished on a regular or reasonably regular basis, etc. We either know what we want (new jeans) or we’re indifferent across a broad enough range and need to solve a functional task (toothbrush). Inspired transactions are things that I’m not sure about, that I’d love to learn to fall in love with, that I am trying to discover. I want a new look instead of just jeans; I want to understand the philosophy of the brand that I am going to wear, as I view it as an expression of who I am.
I’d argue that the entire domain of routine transactions is destined to go online. It’s far more efficient and the best use of a consumer’s time. Educate me online and let me transact. Don’t make me spend hours to do a chore. The second category is where offline shines. I walk into the Apple store to engage with product. I’ll go into the Apple store to experience an iPad or the MacBook Air to see if I really care about how thin it is. I’ll subconsciously feel how “smart” the brand is in servicing me and differentiating. If I go to the showrooms on 5th Avenue, it’s to feel the luxury and curation of what Cartier or Saks represents. And it is about entertainment as much as anything else.
What does this all mean? Well, I see huge reductions in the number of points of presence for many retailers (not including same-day consumables like coffee or food). Retailers have to align with consumers, liberate their time, and allow them to transact in whatever manner provides the greatest utility. Retailers should be in the business of selling things wherever that most efficiently occurs. My partner Larry told me about one innovative retailer that uses its physical store to crowdsource and showcase new products only – as soon as something sells in meaningful volume (indicative of a repetitive buyer in a “routine transaction”), it is moved to the online store and that shelf space is freed for a new product to experience.
Over time, I see stores being “owned” by marketing and viewed as a brand expense instead of the revenue-bearing, full-P&L centers they are today. Should I penalize the store if you came in, had a great experience, and then bought online for convenience? Or, more likely, believe that it served its purpose? Controversial, maybe. But 10 years from now I think the retailers that survive the transition to digital will look at it more that way than they do today. I also wouldn’t be surprised if, 10 years from now, we see a flagship Gilt store on 5th Avenue and in major cities around the world. It sounds crazy now, but that might be the most “efficient” way on a blended marketing basis to create mass awareness (like TV today).
The bottom line is that the role of the store has to change. It cannot be about the purchase. Too much of buying is moving online. It has to be about experience, education, and inspired serendipity!
I recently tweeted about the acquisition of HauteLook by Nordstrom. I think this is one example of many we will see in the coming years of large scale, “offline” incumbents buying their way into the future.
I believe every business today is going to be rewritten for the web, or “Internet optimized” as I call it. This is not about putting up a website or selling online. This is much more fundamental. The Internet affects literally every part of a business system and makes each much more cost efficient than its legacy comparable. Let’s take a few examples:
- Marketing – Paid, organic, display, affiliate and other channels are far more precise and cost-effective at pinpointing your audiences than any blunt mass-market tool of the past. You are connected to all your customers; it’s just a matter of finding them (and vice versa).
- Product – In an Internet optimized business, the product is instrumented to see real metrics on how people are logging into your application, which functions people are accessing, what breaks and what doesn’t, etc. Each of those tells you in real time which features to focus on or not, develop or discard, etc. Customers participate in the product development process.
- Development – Multi-tenancy, single instancing, and SaaS make development easier and faster than the complex install matrix of the past. Cloud services like AWS, Rackspace, Engine Yard and others are fully variable infrastructure available to build upon. Agile and other development methodologies create output on a regular basis.
Any “new” company is doing things efficiently across virtually all departments from the ground up (including areas not mentioned like sales, hiring, finance, and more). At scale, they will have a fundamentally better cost model than any legacy player possibly could. The legacy company still has those very expensive relationship-based sales reps, or the high-touch TV-driven ad model, or the “divine from above”/“decide by committee” product model. These are all points of friction that make them hard to change, slow to adopt new business models, and not innovative. It also leaves them at a fundamental economic disadvantage.
If you think about it, this concept is true for almost all businesses. We see retail in the Nordstrom/HauteLook example. The same is true in traditional advertising versus ad networks; console-based gaming versus virtual goods businesses; large media publishers versus blog aggregators/publishing platforms; stock-fit retail brands versus custom manufacturers; etc. The Web is as deflationary across the internals of a business as anything else! This wholesale rewiring is happening now, creating a unique moment in time and a litany of new companies looking to lead the pack.
The most likely way for offline players to evolve is to buy these Internet optimized businesses, incent those organizations to grow as rapidly as they can, retain the talent for as long as possible, and hope they can eventually remake their overall business by being led by example. Those that do nothing will not survive, and there will be many; those that act aggressively have a shot, and I think we’ll see many more of these partnerships between traditional brands and Internet optimized companies going forward.
[Update 4/08 - Random House leads round of financing at Flat World Knowledge]
[Update: 4/25 - The Travel Channel announces $7.5MM investment in Oyster.com]
This post was long overdue, but it’s been a busy year. Fitting that it comes as we head into Thanksgiving. Our investment in Boomi came at an interesting time. There were plenty of scars from the legacy integration 1.0 and EAI worlds. Those companies were marked by significant services implementation relative to license sales to deal with unique customer environments. That made integrations complex, costly, and brittle. Companies like Grand Central, Bowstreet, and others had all tried to ride the Web services, SOA, and interconnected-enterprise wave in the early 2000s. Most were way ahead of their time, leaving lots of dead companies on the road of venture capital.
We believed Boomi’s timing was different. The emergence of cloud compute services and the growing maturation of SaaS was a stark change from the past. Both were important backdrops for answering the question “what had changed”. We’ve had a thesis on how the cloud would require the re-writing of various middleware services. While the team had a long history in EAI, they decided to bet the farm on the cloud in 2007 and wrote an innovative, forward-looking platform from the ground up. They launched in early 2008, and we invested in the summer of 2008 on the backs of healthy customer activity. The business wound up growing very rapidly (300%+ CAGR), continued to launch innovation upon innovation, won major awards, struck some good strategic partnerships, and eventually was purchased by Dell in an outstanding result for us as investors and for the employees. From the outside, it was how you’d script it. But there were definitely things we learned along the way. Below are a few of them:
• SOA and Web services (WS) are foundational to integration, not competitive with it. Many had a view that, as a result of the maturation of Web services, integration was built in and no longer needed. In fact, it turns out WS were foundational to doing integration in a flexible, repeatable manner. They allowed us to connect more easily to systems, but you still needed a platform to orchestrate, move, transform, and connect these WS endpoints. We believe we are finally, after a decade, scratching the surface on how SOA will empower and impact applications going forward.
• It takes time to find your sweet spot in the pyramid. Boomi launched with incredibly disruptive pricing, which led to a lot of customers quickly adopting. Early on, it turns out many were very small businesses only looking to connect two low end applications, where the value of the platform was less obvious and there were simple alternatives in the “point to point” world. The value of an integration platform grows non-linearly with the number of points connected. We pivoted to focus on companies with slightly greater needs, where our platform value would be clear and our innovation led to high stickiness. It takes time to tease out who the *right* customers are for a new category product. Once we understood that, it helped clarify decisions around product roadmap, hiring, sales model, etc.
• Don’t be afraid to raise prices. Related to above, low price, high quantity led to a lot of early customers, but it didn’t scale exactly the way we wanted or attract the best fit customers for our product. But it led to a lot of buzz. As we realized our best customers were a little further up the pyramid, we worried that increasing pricing would also mean losing the very small business segment and perhaps impact buzz. We spent a lot of time thinking about the tradeoffs, but decided it was more important to align with our target customer. We increased prices three times and the business didn’t skip a beat (in fact inflected upwards). If you find your spot on the pyramid, align all parts of the business to it.
• SaaS delivery model changed everything. Unlike the legacy world, which was plagued by high services and one off implementations, true SaaS allowed us new functionality and velocity the market hadn’t seen before. We could do exciting things like using multi-tenancy to figure out what most people do when connecting applications, and auto recommend process maps. This eliminated 90% of the manual work in integration. Our platform could be opened up, allowing people to build connections and make them available to the entire community. We could get reasonably complex integrations done quickly and reliably.
• SIs say they love SaaS, but it’s hard to break economic incentives. We worked with a number of larger SIs who individually loved what Boomi was doing, but collectively found it difficult to leverage the product. It broke the model of “billable hours”. “Easier to configure” made for efficiency, but not more revenue. Some newer, more progressive SIs, like WDCi out of Australia, were great, but bigger shops found it hard to change.
• Indirect channels are hard to predictably scale early on. In addition to SIs, we also worked with dozens of ISVs who were go-to-market partners for the Company. We began to see success, but that came after years of effort. Mark Suster has a great perspective that fits our case pretty well. No one could care about our success as much as we did, nor did our success matter as much to others as it did to us.
• Conviction is important. When we first invested in Boomi, we planned to split the round with a co-investor and introduced the Company to a few shops. Most folks could not get there, so we decided to write the entire check. After the market collapse in 2008, we told the guys to just focus on the business and be smart with cash, which they did a great job of. Given the company’s profile there was constant inbound poking, but mostly off-and-on conversations that distracted from the business. We decided to write an additional check so the team could focus entirely on the business. And that conviction was ever so rewarded!
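The multi-tenant “auto-recommend” idea from the SaaS lesson above can be sketched as simple frequency counting over anonymized, aggregated field mappings. This is a toy illustration under my own assumptions, not Boomi’s actual algorithm; the function and field names are hypothetical.

```python
from collections import Counter


def recommend_mappings(observed: list) -> dict:
    """Given many tenants' source->target field mappings for the same pair
    of applications, suggest the most common target for each source field."""
    votes = {}  # source field -> Counter of target fields chosen by tenants
    for mapping in observed:
        for src, dst in mapping.items():
            votes.setdefault(src, Counter())[dst] += 1
    # Recommend the majority choice per source field.
    return {src: counts.most_common(1)[0][0] for src, counts in votes.items()}


# Three tenants mapped a CRM contact to an ERP customer record:
tenant_maps = [
    {"FirstName": "fname", "Phone": "phone1"},
    {"FirstName": "fname", "Phone": "phone1"},
    {"FirstName": "given_name", "Phone": "phone1"},
]
suggested = recommend_mappings(tenant_maps)
# -> {"FirstName": "fname", "Phone": "phone1"}
```

The new tenant starts from the crowd’s consensus map rather than a blank canvas, which is how a multi-tenant platform can eliminate most of the manual work that one-off installed integrations could never avoid.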
Looking forward, we’re always sad to see a market-defining company go. The team did an outstanding job and I’d work with them in a heartbeat. We are glad to have been a part of it. We think there continues to be a huge opportunity in cloud infrastructure software. The strategic interest in Boomi underscored that. Dell has a fantastic opportunity to own one of the cornerstone building blocks for public or private cloud offerings, and exploit that as a real differentiator versus others out there. Meanwhile, we’ll go back and look for the next great company to back!
One big theme that we continue to see unfold is the pressure on the distribution part of the value chain. The whole value proposition of the Internet is that it allows you to connect with all of the customers you care about instantly, assuming you know where to find them or they know where to find you. That assumption, of course, does not hold for many and leads to many successful intermediaries. But we are seeing a ton of examples of people in the middle getting squeezed across industries:
- FOX withholding rights to content from Cablevision is a great (but not unique) example in the content arena. After getting killed in the advertising and market meltdown of 2008, many of the content producers now want to be a part of that lovely predictable subscription revenue stream. After a game of chicken, FOX got its deal. The recent Netflix deals are a great example from the opposite end of the spectrum.
- Online e-tailers versus brick-and-mortar retailers. The explosive growth of companies like Gilt Groupe, Bonobos, J. Hilburn, ModCloth and others is a great example of people choosing either to design directly for a captive audience base or to bypass the traditional fulfillment hubs. Cutting out large retailers and the markups they require allows much better pricing, as margin savings can be passed on as value to end customers.
- American Airlines, in its recent dispute with Orbitz, pushing AA Direct Connect instead of going through traditional GDS systems. Initially, the aggregators and online pricing engines had better deals than the airlines did at their own sites. Quickly, the airlines moved to low-price guarantees for their own sites. And now this is the first salvo against the traditional supplier-distribution setup.
A number of people wind up benefiting from this trend. Many companies have grown on the backs of helping brands and retailers find those customers online. Fulfillment and logistics for physical items wind up being far more important as the idea of buying and sending back gets ingrained in the psyche. As we continue down this path, however, we’ll see increasing pressure on those who solely sit in the middle.
Been listening to a lot of the chatter about the group buying bonanza that is going on these days. The latest news is a rumored large round for Groupon at a reported $1.2B valuation, close on the heels of a $25MM round announced by LivingSocial, which was shortly after Buywithme.com completed their $5.5MM round. The count is now more than 70 group buying companies that have launched, with more coming each day. In addition, many content publishers are now beginning to think about entering the space. Is this insanity?
My simple answer: No.
I think the dynamics of group buying are very different than people think. In fact, I don’t like to call it group buying. I also think it has very little to do with retail merchandising. Instead, I put it in the category of perfected local performance advertising.
People have talked for many years about how the local market is the holy grail for the next stage of online ad spend. The problem is how to convince the corner pizza shop or spa to value a “click” and spend money on this thing called “Google”. These merchants are way too busy in their day to day and have none of the time we have to study TechCrunch or Read/Write/Web to follow all the twisted ways we have come up with to advertise online. The companies that have become successful in local advertising have had to solve that problem in some form or fashion. ReachLocal kicked this off by creating a large overlay sales force to go in, talk to these local merchants, and deliver “in person” translation. Companies like Yext have skyrocketed by translating online advertising into the currency of the local merchant. “Have a gym? We’ll book you appointments.” Monetization at Yelp and OpenTable is tied to things restaurants have done for decades – reviews & reservations. One clear takeaway – make something simple and transaction oriented, and local merchants will pay attention.
Coming back to “group buying”. What is group buying? Well, it’s a way for a merchant to give up some margin (aka, advertising dollars) to secure a purchase and hopefully build some incidental brand goodwill. In this case, activation by sufficient buyers and selling out are simply the game mechanic. If you’ve used any one of these sites, the offer almost always gets activated and many times sells out. The group buying craze is really a merchant paying some money for the best possible performance advertising you can have – A CLOSED SALE! (Not to mention some “in person CPM” thrown in for side benefit).
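The game mechanic described above is just a tipping threshold plus a capacity cap. A minimal sketch, with the class name and threshold numbers entirely my own illustration:

```python
class Deal:
    """A group-buy offer: activates at a minimum buyer count, sells out at a cap."""

    def __init__(self, min_buyers: int, max_buyers: int):
        self.min_buyers = min_buyers
        self.max_buyers = max_buyers
        self.buyers = 0

    def buy(self) -> bool:
        """Record a purchase attempt; returns False once the deal is sold out."""
        if self.buyers >= self.max_buyers:
            return False  # sold out
        self.buyers += 1
        return True

    @property
    def activated(self) -> bool:
        # The merchant only honors (and pays for) the deal past the threshold;
        # below it, no one is charged and the merchant risks nothing.
        return self.buyers >= self.min_buyers


deal = Deal(min_buyers=25, max_buyers=200)
for _ in range(30):
    deal.buy()
# deal.activated is now True: the merchant holds 30 closed sales.
```

The threshold removes the merchant’s downside risk, and the cap creates scarcity and rotation — which is why almost every offer you see both activates and frequently sells out.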
And now back to Groupon et al. Well, their model is optimized around a tight, high-volume operation versus a costly field sales approach. Over time, Groupon is building a database of every local merchant out there, and also a database of who has bought what in a local area. They’ve proven their ability to execute in 30+ markets and now will just manufacture the same city widgets 100 times over. As they become more successful, they can automate greater pieces of the system. Word of mouth begins to kick in. Data synergies such as re-marketing and recommendations become possible. Is that worth $1.2B today? I have no idea. But I do know local is big. Hundreds of billions of dollars big.
What happens to all the other companies out there? The big venture outcome game is probably over, with the leaders already staking out their positions. Other large local content players will get into the mix. But local has a few dynamics other segments don’t have. First, scale is less relevant than in other industries – a merchant can only service so many of the offers, and the offers are inherently relevant only to the people in a neighborhood, particularly since most offers are services like a massage or a spa visit. Second, to create good ‘rotation’, the Groupons of the world limit how many times a merchant can run a deal and the number of deals shown. If this perfected local performance advertising works for a merchant, they will gladly go to the next guy and see if it works on LivingSocial’s members, and after them, to the next person, and so on. Most scale businesses invite more of an activity, not restrict it. Third, relationships can matter. Small players can get to their local merchants and use charm to rope them in. So while many are highly skeptical, I believe there will be a slew of companies that exist in this market and can grow profitably for the next few years. Many should never take venture capital. But there is a long way to go to activate this market!
[Update: Space is really moving. BuyWithMe Inc. announced that Cheryl Rosner, former president and chief executive of Ticketmaster company TicketsNow, is its new CEO.]
For the past decade, business on the Web has focused on driving usage and user base independent of a clear financial model. Charging for products or services with utility was anathema to the cause of driving user adoption. Systems were designed to create as much “automation” as possible to allow for massive scalability with minimal cost. And given the Web as a new medium, those strategies made a ton of sense.
With viral loops and massive usage, services like Facebook and Twitter were able to create fundamental platform businesses that took the connectivity of the Internet and created “connections”. The goal of driving audiences brought content walled gardens down and drove a whole new generation of folks to the Web. Automated activities like user-generated content and self-service models became the hallmarks of success. Get other people to create your site’s content or sign up for a service, and make money off of their effort. No better business, right? Those mantras created a stark positive value proposition and led to a huge critical mass of online activity.
But the world of usage, automation and free has some collateral effects. Given how easy it is to start a site or a service, we now also have a world of noise. People are dealing with the problem of excess. Spam email, offers, products, content, tweets, updates – you name it, almost every category has infinite shelf space competing for finite attention.
That is part of the reason why I see the pendulum swinging again towards simplification, organization, and curation. Paid content walls are going up again, as businesses identify, out of the masses, the customers willing to pay for content, and create unique ways of interacting with that content. It’s not that the same perspective or content isn’t available for free somewhere on the Web; it’s that people don’t have the time to sift through and find all of it. The same is true for products and services. We’ve seen a number of businesses growing rapidly whose primary value proposition is not showing customers 1000s of SKUs, but a few really good options. And automation? Perhaps not fully. Virtual call centers, email communication, and on-demand conversations all seem to be getting layered back into the equation. Of course this will all be done in a much more efficient and productive way than ever before, but it seems to me the human touch is fighting its way back into the dogma of long tail and free.
I’ve been reading curiously about the new beta Facebook Credits platform. Most coverage tends to focus on the unique elements of allowing users to vote economically for better content. Give a good content producer some credits, and perhaps that will incent them to produce more. Think Digg with economic value. I think the launch of Credits again reflects the brilliance of Facebook and I for one see a much bigger play at hand.
Facebook understands very well the amount of money flowing into virtual goods, both from their own virtual goods as well as from the money machine created by their gaming partners like Zynga, SGN, and the like (who buy large chunks of advertising to feed that machine). Enabling users to generate credits that work across games and applications would be of huge value, and allows Facebook to generate different and ultimately more economics from platform developers. In addition, Facebook now represents over 1 in 4 US pageviews. Their user base is over 200 million. They have HUGE scale, which gives them the credibility to pull off a payment play. Users would inherently trust the FB platform over fragmented app creators. This creates the perfect recipe for a PayPal alternative, with inherent distribution that a Google Checkout or Amazon may not have.
So why not just come out with the grand plan? Well, the launch of a payment platform is non-trivial. There are hundreds of ways it can go wrong; PayPal has spent years and huge sums of money learning lessons on how to deal with fraud. Amazon, Google Checkout and others are all working through their own issues. It also deals with one of the most sensitive items for people (i.e., their money). On a social platform like Facebook, the last thing you want to do is alienate users. Facebook cannot turn on a major transactional system that would be the immediate target of phishing, fraud, and rip offs without understanding the issues thoroughly. The initial Credits approach lets them dip a toe into the water, quietly and under the radar, and rapidly gain feedback/experience without exposing themselves to major financial or reputational damage. With that knowledge, they can slowly work their way into the PayPal market.
I have a lot of respect for what Facebook has built. And per my prior post, I think they are going to spread their tentacles broadly. Facebook controls the social graph, Facebook Connect controls identity, Facebook “Communications” will come, and Facebook Payments is on the roadmap…. Stay tuned, and note the date and time of publication, but that’s my highly speculative, uncorroborated and unsolicited vision for their future.
[Update 11/26/09: looks like things may be happening behind the scenes. This seems much more like a transactional fee, but I'd bet once it works internally to Facebook, it'll show up externally as a third party service with a more PayPal-competitive pricing model.]
[Update 1/12/2010: News flow indicating this is likely.]
Certain things are all about timing. My situation with my smartphone is one of them. I have grown incredibly frustrated with AT&T’s service on the iPhone, to the point where I am close to a breaking point. 3-4 drops on a stationary 30 minute call with full bars? As much as I love the iPhone with all its applications, there are definitely a few things I would change about how it handles email support. I can’t help but think back to the simple but reliable days of my Verizon Blackberry (putting aside of course my VC requirement to have an iPhone). I am vulnerable, I am questioning, I am searching for how this gets better. Timing could not be better for some solution to this, as so far, the only answer has been to hope the iPhone continues to innovate and launches on Verizon in 18 months. As those doubts have come creeping in, I see the promising “iDon’t”, “Droid Does” ads from Verizon, causing me to pause and think. And I don’t think I’m the only one.
Android itself comes at an impeccable time. The entire industry is in pain, with the exception of Apple, who is now suffering from the woes of its partner’s network. The industry is crying out for a viable third party, open solution. Windows Mobile is currently getting terrible reviews, Linux on mobile has had fleeting momentum, and Android is benefiting from the major halo surrounding Google. Motorola is staking the next generation of its franchise on the device. Verizon, with its strong network and reputation among users, is using Droid as its play against the iPhone until Apple comes to the table with more reasonable terms. New specific function devices are proliferating, with the launch of e-readers, tablets, slim phones, smartphones, TV/movie devices, etc, all requiring a system to manage resources. And a whole community of developers is inspired to make Android successful – in and outside of cell phones.
My belief is that Android will become a lasting, successful platform in the mobile device space. I also believe the ecosystem around it – including an open store, applications, games – will follow. Apple has the clear lead, but with no other player having the critical mass to build an alternative (other than Microsoft, who seems to be losing momentum), Android becomes a real galvanizing alternative. Whatever the outcome, I hope it leads to reliability and choice for consumers!
With Apple’s 3.0 version of the iPhone quickly approaching, one of the most widely anticipated features is the “Push” functionality. This allows developers to send alerts, notifications, and other communications to the phone without the application actively being run.
While one can see the obvious utility in the feature, the part of me that manages my email inbox is dreading it. I am not as bad (or efficient, you pick the term) as those who manage to a “zero inbox”, but I do make an effort to have no unread emails every few days. With this new Push feature, I’m envisioning throngs of app developers, desirous of keeping me engaged with their apps, sending daily, hourly, and minutely notifications. I’m imagining paging across the screens of my iPhone and seeing 40+ apps each claiming I have 30+ new notifications. And I’m thinking the email manager in me will start to feel very behind….
So what will happen? I’d bet the following:
- I will find exceptional utility from the few apps that I use regularly that provide me with notifications, and will try to stay as current as possible with them. The Push feature will enhance my productivity.
- I will no longer feel comfortable looking at screen after screen of apps I barely recognize indicating I have a bunch of missed messages. I will start deleting apps that I currently don’t use but keep on my phone in the background.
- I would bet my reaction will not be dissimilar to that of others, and notification “spam” will eventually hit a tipping point. Apple will step in to regulate the push feature. They will ensure all notifications are explicitly opt-in and customizable, not enabled simply by virtue of agreeing to download the app.
All of the above is with the caveat that I don’t have the details of how Apple will make the feature available to developers. But I’m hoping I don’t have a new stack of attention draining activities to manage….
Though I don’t have time to be a hardcore gamer, I do dabble with a few to keep myself current with the state of the art in games, tools, infrastructure, and services. My experience last night validated an extensive post I did a few months back on the world of Games as a Service.
I decided to fire up Halo3 (yes, I know, old and far behind other new FPS games) to log onto the “Team Slayer” playlist. In this mode, you are linked by rank and skill level to other random players on the Xbox Live network to form a team. Your “red” team attacks another similarly formed “blue” team, with the goal of being the first team to get to 50 kills. You play on maps, which differ in environment, layout, buildings, weapons, etc.
Curiously, I could not log onto Team Slayer mode because I did not have “the required maps” (Non-Mythic DLC for those that care). Upon doing some digging, it turns out that Bungie/Microsoft was requiring players to purchase newer map packs that previously had been optional upgrades. Historically, if you did not buy the new maps, the servers would match you to players that had your same map packs. This of course would lead many players to play whatever maps were free, and only download newer map packs when they became free. Hard core players who wanted to learn the best strategies before anyone else would pay for early access to the new packs, but they would have a much smaller universe of players to compete against in those worlds.
Requiring subscribers to pay for the new maps to access the Team Slayer mode raises some really interesting questions. The blogosphere and forums were full of strong opinions. On the one side were the hardcore players who wanted everyone else to pay so their network would have more players. They also defended the need for Bungie to keep getting paid for an entertainment offering to keep it alive. On the other side were gamers who believe they had paid for the game, which included the Team Slayer function, and they should be allowed to play with whatever maps they chose to have and not be forced to upgrade. They would also claim they already pay Microsoft a monthly subscription fee for the Xbox Live network, which is intended to link them to other players.
I think this approach is a perfect example of a publisher extracting economics in a continuing GaaS driven model. The new maps cost me about $10, roughly 20% of the original game cost. As an aside, that seems magically to be about the same as the annual percentage charge for maintenance with licensed software, and the rule of thumb for what annual SaaS prices should be versus comparable license charges. And one can likely bet there will be new maps in the future for which I will have to pay. I also pay $50/year or $5/month for the Xbox Live membership. If I were not forced to upgrade, then Bungie/Microsoft would have little incentive to keep developing new maps, and eventually a large portion of the audience would move on to a different game. From their perspective, it makes complete sense to communicate continuously with me through the game, enticing or forcing me to upgrade my game to continue to play the content. It extends the life of the service to a wider audience and helps them build a strong recurring revenue base. Both are great examples of GaaS offerings and a marked departure from the old CD based model!
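The rough arithmetic above can be sketched out. The figures are the approximate ones from this post (a ~$50 game, $10 map packs, $50/year Xbox Live), used only to show the parallel with software maintenance pricing:

```python
# Illustrative figures from the post (approximate, not official pricing).
game_price = 50.0          # original retail cost of the game, roughly
map_pack_price = 10.0      # cost of the newly required map packs
xbox_live_annual = 50.0    # annual Xbox Live membership

# The map pack works out to ~20% of the original game price, echoing the
# classic ~20%/year maintenance charge on licensed software.
maintenance_ratio = map_pack_price / game_price
print(f"Map pack as share of game price: {maintenance_ratio:.0%}")  # 20%

# Effective annual recurring spend to keep playing the "service":
annual_spend = map_pack_price + xbox_live_annual
print(f"Annual recurring spend: ${annual_spend:.0f}")  # $60
```

The point of the sketch: once upgrades become mandatory, the one-time CD purchase quietly converts into a recurring revenue stream.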
Disagree? Or more importantly, do you have a strong opinion on the debate?
TimeWarner Cable made a lot of news over the last few weeks when they introduced their tiered pricing strategy for high speed data services. The plans ranged from $15 to $150/month depending on the amount of bandwidth consumed. Their argument was that: 1) as a facilities based provider, the growth in network usage is forcing their costs to go up, which they need to recoup; and 2) this should reduce the bill for the many customers that don’t even reach the lowest level of usage (so the poor user saves) and affect the super users who extract massive benefits from the network (and the rich user pays). From TWC’s COO: “When you go to lunch with a friend, do you split the bill in half if he gets steak and you have a salad?” I’m not opposed to the rationale in concept, but I do think there are several issues with it.
Plenty of people have talked about how the magic of photonics over fiber based plant has reduced the marginal cost of adding bandwidth fairly significantly. Bandwidth has an advantage over Moore’s law, in that it has two dimensions which can demonstrate improvement: concurrency of streams (number of waves sent over a medium) and rate of modulation/encoding of those streams (10Gb/s, 40 Gb/s, 100 Gb/s, etc). That multiplication creates huge drops in the cost of providing an incremental bit.
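The two-dimensional improvement argument can be made concrete with a quick sketch. The wavelength counts and line rates below are hypothetical round numbers for illustration, not any specific vendor's specs:

```python
# Illustrative DWDM arithmetic (hypothetical figures, for the
# multiplication argument only).
wavelengths = 80   # concurrent waves carried on one fiber
rate_gbps = 40     # modulation/encoding rate per wave, in Gb/s

capacity_gbps = wavelengths * rate_gbps
print(f"Fiber capacity: {capacity_gbps} Gb/s")  # 3200 Gb/s

# Improving either dimension multiplies total capacity; improving both
# compounds. Doubling the wave count AND moving 40 -> 100 Gb/s per wave
# yields a 5x gain on the same strand of glass.
upgraded_gbps = (wavelengths * 2) * 100
print(f"Upgraded: {upgraded_gbps} Gb/s, a {upgraded_gbps / capacity_gbps:.0f}x gain")
```

Because both factors keep improving, the cost of an incremental bit on an already-lit fiber falls faster than single-dimension Moore's law scaling would suggest.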
More telling to me is how vehemently the Cable industry fought a-la-carte pricing for television. This was the idea of forcing the MSOs to allow consumers to pick the channels they wanted to subscribe to and only pay for those a-la-carte, rather than the current model of buying a monolithic stack of hundreds of channels, where the vast majority are never consumed. In the interest of philosophical consistency, wouldn’t the a-la-carte argument be just as eligible for the “consumption based pricing” label as the data plan argument? I tend to think so, and can only reason that it’s simply not in their economic interest to offer that argument.
Clearly, the industry has no interest in shooting its cash cow in the foot. It is only natural to fight mandated a-la-carte pricing. But the industry can also not be blind to outside threats. The availability of premium shows online in high quality over the Internet, the rise of on demand time and place shifted viewing, and high broadband penetration have created a competitor to the proprietary, linear world of coax. I tell many people that if ESPN360.com were not blocked by TimeWarner, I would have little reason to pay the $160/month I currently pay for cable television and high speed data. I’d be able to watch live streaming sports via ESPN360 or CBSSports for March Madness, and I’d watch the 5-7 shows I DVR online at HULU, Boxee, or some other destination. All of a sudden, my $160/month bill would be compressed to just over $40 for unlimited data access.
I’m sure the executives at the various cable companies have also done that math. And I believe they see customers doing it at a much more rapid pace. What better way to ensure one’s revenues are not cannibalized, and in fact are allowed to thrive, than to introduce consumption based pricing for data? Streaming a few HD shows a few times a month would automatically push one into the $150-200/month consumer category. At that price point, the MSOs are absolutely indifferent to whether I watch my shows over their proprietary network or over the Internet on my data pipe. You can go a-la-carte but pay them just as much. In fact, they are probably incented to switch me over for revenue generation and cost efficiency gains – it’s way more profitable for them!
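The back-of-the-envelope version of that math looks like this. The $160 bundle and ~$40 data-only figures come from above; the per-show bandwidth and tier thresholds are hypothetical illustrations in the spirit of TWC's $15-$150 range:

```python
# Back-of-the-envelope cord-cutting math. Tier caps and per-show sizes
# are assumed for illustration, not TWC's actual plan details.
bundle_monthly = 160.0     # today's cable TV + high speed data bill
flat_data_monthly = 40.0   # unlimited data alone, under flat pricing

gb_per_hd_show = 4         # assumed size of one streamed HD episode
shows_per_month = 30       # a modest DVR habit moved online

usage_gb = gb_per_hd_show * shows_per_month  # 120 GB/month

# Hypothetical consumption tiers: (cap in GB, monthly price)
tiers = [(5, 15), (40, 40), (100, 75), (float("inf"), 150)]
data_bill = next(price for cap, price in tiers if usage_gb <= cap)

print(f"Streaming {usage_gb} GB lands in the ${data_bill}/month tier")
```

Under flat pricing the cord-cutter's bill drops from $160 to $40; under tiered pricing the same viewing habit climbs right back toward the bundle price, which is exactly the MSO's point.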
The path ahead will be tricky. TimeWarner has already rescinded plans for testing of tiered pricing, because of the consumer fury it has set off. If they move too quickly, they risk net neutrality legislation being thrust upon them. Better to let consumers think they won and come out with another plan, lest their hands get tied. But I think we are crazy to think tiers won’t be introduced somehow in the future. The MSOs are too smart to let their analog dollars get turned into digital quarters.
What do you think? Am I being too skeptical?
In the enterprise world, since the advent of Salesforce.com in the late 90s, we have heard about this notion of software delivered from the cloud and offered as a shared, multi-tenant service to customers, with the web browser acting as the universal interface to access the application. Over the past decade, SaaS based applications have become mainstream, and are rapidly being adopted by small and medium sized enterprises globally because of their alignment of service delivery and value. Interestingly, the same concepts are now beginning to affect the gaming industry.
In the old world of gaming, there were large hardware manufacturers who built specialized consoles to run and execute CD and DVD based games. Game developers would create games that were stored on DVDs, and distributed through a vast retail infrastructure. The game would have a multi-year timeline, and the developers went off building a new version of the game, which would completely replace the old DVD (much like writing new versions of licensed software). Over time, those consoles introduced networking connectivity, and services like Xbox Live were launched. You still bought the DVD as a starting point, but game updates became available online and you could even download new games in entirety over the network.
Today, a new era is emerging. It started with the incredible success of World of Warcraft, which showed that a game could be delivered over the web, onto a PC, and create a “services” style game that continually grew and upgraded. There are over 11.5 million subscribers to WoW, nearly half of which pay $15/month to play the game in North America and Europe. While the premium subscription model has proven to be wildly successful in North America and Europe, over 5 million WoW players in China continue to play via prepaid game cards at a rate of $0.07/hour. As most Massive Multiplayer Online games (MMOGs) in China are still played within PC cafes, the primary revenue model continues to be through prepaid cards via a time-based pay to play model combined with in-game item sales through micro-transactions, the latter being another gaming trend that is fast gaining traction in western markets.
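A rough revenue sketch from the subscriber figures cited above shows why WoW changed publishers' thinking. The China play-time assumption is mine, not a reported number:

```python
# Rough monthly revenue estimate from the figures cited in the post.
total_subs = 11_500_000
western_subs = total_subs // 2        # "nearly half" pay $15/month
western_monthly = western_subs * 15   # Western subscription revenue

china_players = 5_000_000
hours_per_month = 20                  # ASSUMED average play time per player
china_monthly = china_players * hours_per_month * 0.07  # $0.07/hour prepaid

print(f"Western subscriptions: ${western_monthly / 1e6:.0f}M/month")
print(f"China prepaid (at an assumed 20 hrs/player): ${china_monthly / 1e6:.0f}M/month")
```

Even with a conservative play-time assumption, a single continuously operated game throws off subscription-software economics that a boxed DVD title could never match.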
WoW’s success has led to a revolution in thinking about game development and delivery. There are many examples of PC based games launching as a single instance, multi-tenant, shared game application with a monthly subscription price that customers are rapidly adopting. Two recent examples include Lord of the Rings Online (developed by Turbine and published by Midway/Codemasters) and Warhammer Online (developed by Mythic and published by EA), two western MMOGs that have each attracted over 300k paying subscribers, each paying $15/month to play. Additionally, after having great success in markets like South Korea and China, game publishers are now experimenting with new models that allow users to play games for free upfront, and buy virtual items and characters via micro-transactions and P2P trading within the games. Want to get the Penguin Micropet in GoPets? Pay $2. Want a level 80 character in Everquest 2 without investing weeks of gameplay? Pay $500. Companies like Nexon (publisher of Maple Story, Kart Rider and Crazy Arcade) in Korea have generated hundreds of millions of dollars in annual revenue with this free to play, micro-transactions based model.
In addition, game content distribution is going through a massive shift. Platforms like Steam from Valve are changing how we think of buying and interacting with gaming content. Steam is a digital distribution and digital rights management platform that delivers gaming content directly to gamers via a web connected client. Steam allows gamers to purchase games and receive game patches and updates in an entirely digital manner. Steam offers both first party games from parent company Valve as well as titles from third party publishers, and currently offers over 350 games to 20 million registered users in 21 different languages.
Underlying this is a significant shift that will put pressure on the largest publishers of games, and create some great opportunities for creative destruction in the gaming industry. This new “GaaS” based ecosystem will share many of the attributes of the “SaaS” world we have seen thrive:
Games will be sold and played over the Internet;
The game itself will be a shared instance, with foundational upgrades instantly being applied to all players;
Game titles will have “continuous” economics, as new levels, variations, and challenges can be dynamically inserted or purchased;
Free to play model will remove barriers to adoption and encourage initial and immediate game exploration;
Micro-transactions via web payments, mobile payments and prepaid cards will allow game publishers to monetize users instantly and directly;
Game publishers will have unprecedented ability to interact with their customers directly – measuring navigation and usage as one does on the internet, creating unique 1:1 marketing experiences, and watching for dips or spikes in activity to modify the environment in response;
Game publishers will be able to collect real-time gameplay data to provide a better and more personalized gaming experience for gamers, leading to more accurate leveling, improved matchmaking and increased socialization within games.
At FirstMark Capital, we have invested in a number of companies that follow these trends, and they are seeing tremendous success in the market. Riot Games is developing a session based MMO built around the very popular DotA community; its game is entering beta and is already getting exciting user feedback. LiveGamer is an exchange for virtual goods, and has seen transaction volumes and activity rise as more and more publishers introduce virtual items into their economic stream. We have a number of other initiatives under way, but I believe this notion of GaaS will be an exciting one for the next few years.
(Special thanks to Jason Yeh for his contributions to this post.)
I just read an initial report out of ComScore indicating this year’s Friday retail e-commerce numbers were up slightly over last year. Online, nontravel e-tailer sales grew 1% for the day to $534MM from $531MM last year. For the month of November, retail e-commerce sales were down 4% from last year’s numbers. The National Retail Federation, on the other hand, is forecasting an increase of 2.2% for the full Thanksgiving weekend (with only Sunday being an estimate), on total spend of $41 billion and average customer spend up 7% from $347.55 to $372.57.
All in all, I’d consider the data to be encouraging (relatively speaking). It seems to me all retailers were very concerned about spend and pushed heavy discounts to the forefront to ensure the holiday season got off to a good start. It may not bode well for retailer margins, or for the overall health of the industry for that matter, but at least a strategy of heavy discounting did create elasticity and spend with consumers. It would have been far worse to heavily discount and feel like one was simply pushing on a rope. People could have easily refused to put any money out this holiday season, and frankly I would have guessed we would see declines in spend. I’m still not sure I believe the increase in average purchase size.
Walking around, things seemed to be pretty busy. I put a couple of pictures from Macy’s and the Apple Store in NYC this weekend below. They were jammed.
The next step is to see whether people have “forward bought” and all retailers have done is rob from tomorrow to get paid today. I noticed several retailers offering discounts for future period purchases. For example, at Banana Republic, upon completing a purchase, they offered a card for 20% off any item between December 2 and 22nd. The goal is clearly to get me back into the shop. It will be interesting to hear the data come in over the next month. If anyone has other good anecdotal data, would certainly love to hear it!
Lots of chatter about Twitter being offered $500MM by Facebook. Some think Twitter is crazy not to take it, while many others correctly point out that $500MM is not $500MM if it’s in stock. While Facebook may hold out the Microsoft $15 billion valuation (an artificial auction given how strategically important the advertising deal was to Microsoft, not to mention that they received preferred stock), my discussions with a number of people tell me Facebook common stock has been trading hands at somewhere between $3 and $4 billion in value. If you’re Twitter, that’s the difference between owning 3% of Facebook and 12.5%. That’s a huge difference in ownership when it comes to upside!
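The ownership math behind that last sentence is worth spelling out:

```python
# The dilution math behind "the difference between owning 3% and 12.5%".
# All figures in $MM, as cited in the post.
offer = 500                   # the reported $500MM all-stock offer

pct_at_15b = offer / 15_000   # at the Microsoft-deal valuation
pct_at_4b = offer / 4_000     # at where common stock reportedly trades

print(f"At $15B: {pct_at_15b:.1%} of Facebook")  # 3.3%
print(f"At  $4B: {pct_at_4b:.1%} of Facebook")   # 12.5%
```

A roughly 4x gap in the assumed valuation translates directly into a roughly 4x gap in the stake Twitter's shareholders would end up holding.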
Facebook is an unbelievable social hub, where casual communications amongst friends are mainstay. Facebook has also been incredibly successful with mobile usage. There are over 15 million active users of Facebook Mobile, growing over 300% from last year! By comparison, Twitter “only” has 6 million active users of the product. If Facebook is the dominant player in casual communications and has an incredibly strong product in the Mobile space, why would it make a “buy” decision versus a “build” one?
I think Facebook is looking to take advantage of this downturn in the economy to become the largest social network and communications hub out there. It turned down some huge offers to stay independent. There’s no turning back now. Twitter is the poster child “Web 2.0” company – incredible usage, no revenues. If Facebook could get Twitter for a reasonable price (ie, selling them on a $15 billion valuation), they could clearly capitalize on Twitter’s market momentum. Pick up a viral service that has a high degree of overlap with your own users, and use the integrated service to draw everyone else from Facebook onto it. Even if they don’t buy Twitter, Facebook must be working on some sort of SMS-based Twitter-like feature. They might even add a Loopt style service alongside the same platform. Extrapolating from the chart below, text messaging is a very important communications medium for Facebook’s core audience, and clearly offering a full feature set would rank high in ensuring Facebook’s dialogue with their core audiences.
Looking at the chart above also provides some hint as to where Facebook might be headed next. In my mind, the next most obvious place for them to go is email. While younger kids view email as the “formal” way of communicating with adults, its usage is uniform across age demographics (see below). And we all know how incredibly sticky email addresses are. Yahoo! has over 260 million users of its email service, and AOL has long maintained audiences with its legacy email accounts. Gmail by Google, while growing, is a surprisingly distant follower. I’d bet many of the younger users of Facebook would easily use an “@facebook” account, or any separate brand Facebook might come up with, especially if it was appropriately integrated into their social messaging platform. Facebook might even do something really interesting by providing POP access to its social messages to drive adoption. Putting aside the details of how Facebook sorts/presents email from chat or social messages, it would seem like a great way to start building an organic presence in email for a huge audience you control.
Other possibilities could include expanding Facebook’s chat platform. While they have their own internal chat function, why not approach Meebo or eBuddy to acquire their tens of millions of interoperable IM users? Like Twitter, they likely share the attributes of “high usage, light revenues”. In addition, Facebook could launch a VoIP based voice service embedded into its chat platform and its smartphone mobile applications.
Imagine the converged communications possibilities. Facebook would have the SMS market cornered via Twitter or its own offering; it could have not only the internal usage of its Chat application but also interoperable IM services cornered via acquisition; it could have users starting their sticky email “lives” with the launch of @Facebook/@nameyourbrand so users can communicate with all those “adults” outside the Facebook ecosystem; it could have application messaging enabled by the open Facebook platform; and it could have voice (VoIP) services embedded via the web and the downloadable mobile app. While pure speculation on my part, one can see how the innocent Twitter play could be one small step towards Facebook aggressively trying to converge our messaging platforms.
We are on the eve of Google announcing their results for the third quarter. Google has become a major force in discovery and advertising by virtue of their ability to surface the closest result relevant to a user across the broadest set of queries on the Internet. Dozens of start-ups and certainly a few large players have tried to dethrone Google’s supremacy, but few have been successful. The switching costs are zero, yet Google’s market share has only gone up. Narrowing the domain has helped, and by limiting topical areas to things like shopping or health, companies have created market share distributions more favorable than in broad search; however, an end user is not going to use or remember 100 different search engines optimized for 100 different topics. In fact, as it has in Health or in Local, Google has picked off verticals one by one to super-optimize. This all got me thinking about how a start-up could ever beat Google at the broad game of search.
Search is decomposed into a few different elements. The first is a “spider” – a virtual bot that scours the web, parses web pages, and builds a representation of the web; the second is an algorithm that takes those spliced pieces and decides what pages are more important than others given a set of constraints or inputs; the third is a massive index that takes all this analysis and stores it so that at “query time”, an engine can quickly take the digested knowledge and weights, and return a result.
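The three-part decomposition above can be shown in miniature: a "spider" that walks linked pages, an inverted index built from what it parses, and a query-time scorer over that index. The pages here are a hypothetical in-memory dict standing in for the web, and the scoring is bare term frequency rather than anything like PageRank:

```python
# Toy spider -> index -> query pipeline illustrating the decomposition.
from collections import defaultdict

PAGES = {  # hypothetical mini-web: url -> (text, outgoing links)
    "a.com": ("cloud computing platforms and services", ["b.com", "c.com"]),
    "b.com": ("search engine index and spider design", ["c.com"]),
    "c.com": ("cloud index storage at massive scale", []),
}

def spider(start):
    """Crawl all reachable pages, returning url -> token list."""
    seen, frontier, parsed = set(), [start], {}
    while frontier:
        url = frontier.pop()
        if url in seen:
            continue
        seen.add(url)
        text, links = PAGES[url]
        parsed[url] = text.split()
        frontier.extend(links)
    return parsed

def build_index(parsed):
    """Inverted index: term -> {url: term frequency}."""
    index = defaultdict(dict)
    for url, tokens in parsed.items():
        for tok in tokens:
            index[tok][url] = index[tok].get(url, 0) + 1
    return index

def query(index, terms):
    """Score pages by summed term frequency; a real engine layers on
    link analysis and hundreds of other signals here."""
    scores = defaultdict(int)
    for term in terms:
        for url, tf in index.get(term, {}).items():
            scores[url] += tf
    return sorted(scores, key=scores.get, reverse=True)

index = build_index(spider("a.com"))
print(query(index, ["cloud", "index"]))  # c.com ranks first: matches both terms
```

The sketch also makes the scale argument visible: the spider and index are where all the storage and compute live, while the query function is comparatively tiny, which is exactly where the CAPEX discussion below comes in.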
It’s my view that algorithms are not people or resource intensive. A few guys thinking very hard can come up with simple, revolutionary ideas, as Sergey Brin and Larry Page did. Sure, Google has an incredible number of variables and residual terms that help refine its algorithm, but at the end of the day, it’s very rare that new math is invented or discovered. In fact, I’d wager a “better algorithm” already exists somewhere in academic labs throughout the country. If it can be written or built by a few, it is within the realm of startup possibility today.
I tend to believe the biggest challenge for a start-up remains circumventing the need to re-create Google’s infrastructure against an algorithm. Google spends over $2.8bln in CAPEX a year. They spend significantly more in CAPEX than they do on search algorithm specific R&D. I have heard estimates that maintenance and improvement of Google’s algorithms can be satisfied by a few hundred engineers, a small number relative to the 5,800 headcount in R&D. Google’s CAPEX purchases machines that process huge streams of information, run calculations, and store all that data into massive repositories. In fact, it is estimated that a normal Google search query involves anywhere from 700 to 1000 servers! Their compute farms grow as the web grows.
To fundamentally change the playing field, a breakthrough is needed on the indexing and spidering schema. An index can’t require anywhere near the amount of storage that Google currently has on its disks; the spider must more efficiently parse pages to go into that index. Perhaps the spider performs distributed analysis while out in the web rather than in a central location; maybe the index is broken up or organized in a completely novel way. Without breaking Google’s CAPEX curve, a startup would be hard pressed to go as broad and yet be more relevant than Google with the head start in investment that Google already has.
I fully acknowledge the first objection to the above: Microsoft has all the resources in the world, and has not been able to replicate Google’s effectiveness. I cannot claim to know how Microsoft’s money has been spent, but my hunch is that Microsoft has tried to catch up by using variants of the same approach as Google. The problem with that is Microsoft started significantly behind, and playing by the same rules will continue to leave them behind. Cashback is an interesting attempt to buy traffic, but startups don’t have that option. I would also concede that the more Google feeds its algorithm with data it gets by increased usage of the engine, the more disadvantaged any new approach would be.
All that being said, my current bias is that for a start-up, we need massive innovations in spidering and indexing (or the concepts they represent) to defeat the Google machine, not better algorithms. The few that have started with a better algorithm have always had to constrain their bounds as a result of running into the wall of how much money they spend on capital equipment. I am fascinated by the discussion and would love any feedback to the above. I’d also enjoy reading about anything going on in academia that shows promise. And if you’d like my views on particular sub-segments within search (vertical, social, etc), feel free to ping me…
This was the supposition of Richard Stallman, founder of the Free Software Foundation. As a venture investor hoping to invest in businesses that are ultimately profitable, with strong customer stickiness, and sustainable defensibility, you might be shocked to hear that I find some of Stallman’s assertions to be quite reasonable. The cloud does have the potential to create lock-in under a certain set of circumstances, and clouds can fairly be called proprietary development platforms. Where I disagree is that, as a result of the above, customers should stay far away from cloud computing platforms (such as CPUoD, SaaS, and PaaS, as defined in my last post). In fact, I believe that given the rise of open systems, APIs, and standardized data access and retrieval layers, customers can enjoy all the benefits of a cloud platform while maintaining sufficiently healthy competitive dynamics between vendors to keep them open and honest.
There is the obvious issue with Stallman’s position, which is that only 0.01% of customers have the expertise and resources to build one’s own server farm using all open source components and manage a fully controlled applications and data environment. Putting that aside, I’m focused on the rest of the customers out there, large and small, that only have time to focus on their own value proposition, and for whom time to market makes clouds a very seductive option.
Most SaaS applications today can be decomposed into forms that collect data, links to connect to data, workflow that pushes data to people in the right order, analytics that repurpose data “A” into new data “B”, and presentation to display data. These SaaS applications are “multi-tenant” in nature – meaning there is one version of the application that all customers use. While there are customizations, 90%+ of the app looks the same from customer to customer. IF an application boils down to a calculation and presentation layer between various “rest states” of data, and a single application is fungible to many customers, then “uniqueness” lies in the data, not the application. Therefore, the primary inhibitor to switching to a different application revolves around the concern for one’s data. The easier I can get my data into and out of an application, the less beholden I am to any one vendor. And if I am not beholden to a vendor, I can insist on the value proposition I need when purchasing the application. Thus, to me, the argument all boils down to data portability.
As a very simple consumer analogy, let’s pick the fun world of photo upload applications. If I could easily extract all my Flickr photos and pump them into any other competing service (Ofoto, Shutterfly, Picasa), then I can feel fairly comfortable that Flickr is highly incented to offer best functionality at best cost. If they do not, I take my photos out, and push them into the superior offering. While many services do not provide such photo portability, I believe those that will win long term will be those that do, as savvy consumers will flock to such services.
In the old days, data was stored in proprietary formats that could only be read by the application writing the data. In fact, way back, the physical storage of data to disk was proprietary! Things have come a long way with the advent of standards such as SCSI, SQL, ODBC/JDBC, and XML, as well as published ways to extract information via APIs over the ubiquitous transport layer of TCP/IP. Data is isolated from the application, and able to be extracted via a variety of methods. Almost all of the major SaaS suppliers today offer APIs (perhaps of varying quality) to push and pull information out of their application. Many also allow connectivity at the database layer, and have built-in export functionality. The means to get at the data are provided by the application provider, and I would expect this to increase significantly over time.
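The data-portability argument can be sketched in a few lines. Everything below is a toy simulation, not any vendor’s real API: two stand-in "vendors" share the same generic application logic, and switching between them reduces to a standards-based export (plain JSON here) followed by an import.

```python
import json

class SaaSVendor:
    """Toy stand-in for a multi-tenant SaaS app: the application is
    generic and shared; only the customer's data is unique."""
    def __init__(self, name):
        self.name = name
        self.records = []

    def create(self, record):
        self.records.append(record)

    def export_all(self):
        # A standards-based export: plain JSON, readable by any other app.
        return json.dumps(self.records)

    def import_all(self, payload):
        self.records = json.loads(payload)

# Customer builds up data in vendor A...
a = SaaSVendor("VendorA")
a.create({"contact": "Jane Doe", "email": "jane@example.com"})

# ...and, unhappy with price or features, moves wholesale to vendor B.
b = SaaSVendor("VendorB")
b.import_all(a.export_all())

assert b.records == a.records  # nothing lost: the data, not the app, was the asset
```

The point of the sketch is the last line: when the export format is open, the customer’s leverage survives the switch, and the vendor must compete on the application itself.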
The next challenge after being able to access the data is to be able to take data on one side and make sure it is intelligible to any other application one might want to use. Fortunately, there are a number of vendors who offer data integration and migration capabilities in the “cloud”. As an example, FirstMark has an investment in a company called Boomi. There are others. These companies build software that takes the “taxonomy” of one application and translates it for other applications to use. These can be comparable applications, to migrate from one to another, or they can be complementary applications, so that one set of data can be leveraged in multiple dimensions and avoid data input redundancies.
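At its core, this kind of taxonomy translation is a field-by-field mapping between two applications’ schemas. The sketch below is my own illustration, not how Boomi or any integration vendor actually implements it; the field names are hypothetical.

```python
# Hypothetical field taxonomies for two comparable CRM-style applications.
APP_A_TO_APP_B = {
    "full_name": "contact_name",
    "email_addr": "email",
    "acct_value": "deal_size_usd",
}

def translate(record, mapping):
    """Re-key a record from one application's taxonomy to another's,
    dropping fields the target application does not understand."""
    return {mapping[k]: v for k, v in record.items() if k in mapping}

exported = {"full_name": "Jane Doe", "email_addr": "jane@example.com",
            "acct_value": 50000, "internal_id": 17}
print(translate(exported, APP_A_TO_APP_B))
# {'contact_name': 'Jane Doe', 'email': 'jane@example.com', 'deal_size_usd': 50000}
```

The real work for integration vendors is building and maintaining these mappings across hundreds of applications, plus handling type conversions and partial matches, but the shape of the problem is the dictionary above.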
If data is portable, then customers benefit greatly by leveraging a “cloud”. Cloud vendors have extraordinary CAPEX leverage that few companies can match. The bandwidth and storage consumed by users of EC2 & S3 now exceed those of Amazon.com and all its other sites combined! Quite a striking example, and it’s hard to fathom matching that kind of purchasing power. In addition, the people and software investments to scale the infrastructure, the processes and procedures, the knowledge, all are very costly to duplicate. If done right, clouds can be a much cheaper place to operate and allow customers to focus on their core value proposition, as long as they insist on data flexibility.
The above is also true for PaaS vendors. Most PaaS vendors go out of their way to note that applications built on their platform have APIs built in out of the gate. Now, it is true that ISVs choosing to use a PaaS platform are buying into a proprietary programming style. In addition, they are at the mercy of the viability of the PaaS vendor, and must trust that the PaaS vendor will not jump into the SaaS game by building competing applications. But ISVs have the same data portability options as an end customer. If they choose to build on another PaaS, they simply have to ensure their PaaS vendor allows them to pump data from one platform to the other.
None of this is easy. Data movement has always been challenging. But I believe we are now in a permanent era where you cannot “hide” data behind layers upon layers of proprietary code. Customers and ISVs must insist that any cloud vendor they choose provide easy and standardized means to access and move their data. If we all do a good job insisting and asking the right questions, the winners in the cloud battles will be those that embrace openness and portability, and who focus on retaining customers by having the best application instead of by scaring them with lock-in.
Given Larry Ellison’s recent objections to the term “cloud computing”, and that I will likely write about the space often, I thought I would take a shot at defining things that get lumped into the term.
I tend to agree that “cloud computing” is an abused term, but if you parse the various definitions, I think you come out with four categories:
· Co-location and web hosters: The forefathers of the cloud computing space. They created specialized data centers with redundant infrastructure (such as power, network connectivity, etc.) for third parties to leverage. Customers were separated by cages, where they could put their own servers into racks (or lease the hoster’s servers). Applications and data were technically outside the offices of the customer, and accessed via IP protocol and the Internet cloud. Put the Internet cloud together with computing housed elsewhere, and one could conceptually call that “cloud computing”.
· CPU/Storage on demand (“CPUoD”): These players start with their own data center facilities and servers, but have leveraged the explosion in hypervisors to virtualize server pools. They then layer on a standardized OS environment, web servers, load balancers, databases, etc. The application must be built for that run-time environment, but if it is, one can simply focus on developing the application and buy compute/storage that executes the software and stores the data in a usage-driven pricing model. Some folks optimize for specific languages, such as Google’s AppEngine in Python, while others provide specialized diagnostics and monitoring services on top of their cloud to differentiate. Some are stateful, some are stateless, some with persistent storage, some with dynamic storage. But at the end of the day, it is a standardized operating environment for which one pays per GHz and/or GB to run ANY application. I’d view this as the basic “brick” in cloud computing.
· Software as a service (“SaaS”): On the other end of the spectrum, software as a service providers build all the way up through the application/UI layer to offer a business function to the end user in a shared, multi-tenant, recurring revenue model. While extensible and customizable, it is one instance of the software that serves many customers. It is often lumped into cloud computing because the data center cost (where the software executes and data resides) and assumed scalability are bundled into the cost charged to the end user for the application. The vendor can either 1) take their own racks, cages, and servers (as in the first option above) to build their own internal CPUoD environment and write their application on top of their own controlled stack, or 2) use a CPUoD provider and write their application for that environment. The end user pays for an application that scales by usage of the application (which may or may not need more compute), but the scalability and cost of the infrastructure are hidden from the user. From the customer’s standpoint, this is a “cloud” + application. But buyer beware: as Bob Moul of Boomi points out, many things calling themselves SaaS are not.
· Platform as a service (“PaaS”): This is the newest category. It began when Salesforce realized that their SaaS application could be decomposed into more basic units that could be building blocks for any application. Forms, tabs, and links, tied together with workflow logic and wrapped around data. Force.com is a generic representation of an application – no data, no logic, but all the means to present, push, and pull information. To build an application, one “programs visually”. Customize a form, create a workflow for the application, specify the data types via fields, and your app is built. PaaS removes the engineering-level concepts in writing code in computer languages like C++ or Java (compiling, debugging, inheritance, message passing, etc.), and incorporates the infrastructure scalability of CPUoD. Like SaaS, the purchaser of an application built on a PaaS platform pays an application fee that assumes the infrastructure scales transparently to them. Unlike SaaS, PaaS creates multi-tenancy across applications! There is a single shared instance of a platform that supports multiple applications running on one or many CPUoD infrastructures.
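The “decomposed application” idea above can be sketched as metadata plus a tiny interpreter. Everything below is my own illustration, not the actual Force.com object model: an application is just declared fields and a workflow, and the platform supplies the generic machinery to validate records and advance them through steps.

```python
# A declarative "application": no code, just metadata describing fields
# and a workflow. All names here are illustrative.
expense_app = {
    "fields": {"employee": str, "amount": float, "approved": bool},
    "workflow": ["submit", "manager_review", "reimburse"],
}

def validate(record, app):
    """The platform, not the app author, enforces field types and structure."""
    for name, typ in app["fields"].items():
        if name not in record or not isinstance(record[name], typ):
            raise ValueError(f"bad or missing field: {name}")
    return record

def next_step(current, app):
    """Generic workflow engine: advance a record to the next declared step."""
    steps = app["workflow"]
    i = steps.index(current)
    return steps[i + 1] if i + 1 < len(steps) else None

validate({"employee": "Jane", "amount": 42.0, "approved": False}, expense_app)
print(next_step("submit", expense_app))  # manager_review
```

Note that `validate` and `next_step` know nothing about expenses; swap in a different metadata dictionary and the same machinery runs a different application, which is exactly the multi-tenancy-across-applications point.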
Where’s the opportunity for startups? Well, building and running clouds is a complex and costly activity. It’s hard to envision a young company having any comparable buying leverage on the CAPEX side. One cannot hope to get anywhere near the same discount as Google on CPUs and motherboards. And people use Amazon because it’s cheap. The only hope I see for companies to make it is 1) in differentiated scaling systems that drive down the OPEX cost equation, 2) such a differentiated coding/support environment that people are willing to pay a real premium, or 3) gaining critical mass in a specific ecosystem of diverse applications that generates a network effect for one’s cloud. The other area I like is plays that ride on top of clouds, providing value-added services that fill gaps for the CPUoD/SaaS/PaaS provider. That shifts the game from an economic capital to an intellectual capital exercise, where nimble innovators thrive!