
Practical Tips from FirstMark Capital’s Online Marketing Summit

I did a post a few days ago about the high-level themes from our Online Marketing Summit, called “Stop Selling, Start Giving”.  Enough very practical tactics emerged from the event that I thought I would share some below.

SEO

  • The best time to think about SEO is when building a new site.  When using any good CMS, such as Drupal or Joomla, be sure to use its SEO plug-in modules.  They make it very hard not to build SEO into the site. 
  • Links are very important, particularly the ratio of inbound links to outbound links.  The deeper and more specific the links into your site (rather than all pointing to the homepage), the better the SEO of both the site and its pages.  Structure your content in a way that incents people to point to deeper pages.
  • SEO is a process involving content creation, engineering head count, links, technology, and budget.  Build commitment to SEO across the organization.  Hiring one person cannot change an organization or generate real SEO value.  Consider allocating 10% of engineering time to SEO work.  The best practitioners have everyone in the organization focused and thinking about it.
  • Resources:  SEOmoz.com, Conductor.com

SEM/PPC

  • Before you spend your budget on an SEM campaign, be sure to take 10% of it FIRST and do a test run.  You can save yourself some major embarrassment if something was not set up right, and use the results to tweak further. 
  • Be very careful using broad match – you can spend money in a heartbeat on terms that are not related to your product or service.
  • Keyword research is critical.  Lots of tools out there can help, but thinking about negative keywords, plural vs. singular terms, and so on are all ways to create variation. 
  • Resources:  Clickable’s free guide to SEM best practices and tips

Community

  • Create a community and empower it to set its own direction – a censored community is not one at all.  Manage, but “with a light touch”.  Allow users to moderate content.
  • Recognition is key for community growth – tiered structures, badges, experts, rewards (virtual or physical) are great ways to accomplish this.
  • Transparency is critical – if you have an issue, publicly engage the community and tell them what is going on.  Building trust is paramount to a vibrant community.
  • Measure the community –  post activities, engagement, session lengths, etc.   The numbers will tell you if your community is active and thriving.  If it’s not working, find out why!  It’s usually something you did.

Email

  • Email is NOT for acquisition, it is for retention! 
  • The FROM and SUBJECT alone determine if someone opens – the questions they are asking are “DO I KNOW YOU?” AND “DO I CARE?” respectively.  Answer those questions well.
  • Build your lists organically by providing VALUE to users such that they want the information rather than a marketing message.  Use things like the questions your customer service team receives as material for future newsletters.  You don’t need dozens of articles – a few targeted ones that serve a purpose and give value to customers are better.
  • Create links back to specific pages on your site so you can track activity and users’ interests. 
  • Make sure you have a sign-up form on ALL pages of your site.  Customize the thank-you note when someone does sign up – show genuine appreciation.
  • Most people have images turned off in their email clients – don’t put a huge picture at the top, or users will see a big X instead of a message in their preview pane.
  • Testing is key – treat email just like PPC.
  • Use the word “Feedback” instead of “Survey” – people are much more willing to provide feedback than take a survey.  One improves their life, another takes time from their life.

Marketing Automation

  • If you can read the “Digital Body Language” of how customers are interacting with your site, content, and marketing activities, you can calculate how likely they are to buy and where they are in the sales cycle.
  • Lead scoring is critical for understanding when marketing activities should hand off to sales activities.
  • Separating the FIT of a buyer from the ENGAGEMENT of a user is critical.  A key decision maker doing a few things online and a summer intern doing a lot online should not have the same lead score.  A CEO doing A LOT is the ideal.  Segment rigorously along both dimensions and pass to sales only the leads at the intersection of strong fit and strong engagement to improve MQL close rates (a rough sketch of this separation follows below).
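To make the fit-versus-engagement split concrete, here is a minimal sketch in Python.  The titles, point values, and thresholds are purely illustrative assumptions, not any particular vendor’s scoring model.

```python
# Minimal fit-vs-engagement lead scoring sketch (illustrative weights only).

FIT_SCORES = {"ceo": 40, "vp": 30, "manager": 15, "intern": 0}

ENGAGEMENT_POINTS = {
    "pricing_page_view": 10,
    "whitepaper_download": 5,
    "webinar_attended": 8,
    "free_trial_signup": 20,
}

def score_lead(title, activities):
    """Return (fit, engagement) so the two dimensions stay separate."""
    fit = FIT_SCORES.get(title.lower(), 5)
    engagement = sum(ENGAGEMENT_POINTS.get(a, 1) for a in activities)
    return fit, engagement

def route(fit, engagement, fit_threshold=30, eng_threshold=25):
    """Pass only leads that are strong on BOTH dimensions to sales."""
    if fit >= fit_threshold and engagement >= eng_threshold:
        return "pass to sales"
    if fit >= fit_threshold:
        return "nurture: right buyer, not yet engaged"
    if engagement >= eng_threshold:
        return "nurture: engaged, but wrong buyer"
    return "keep in marketing"

# A CEO doing a lot online goes to sales; an intern doing a lot does not.
print(route(*score_lead("CEO", ["pricing_page_view", "free_trial_signup"])))
print(route(*score_lead("Intern", ["whitepaper_download"] * 10)))
```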

Integrated Marketing Approach – Case Study of Omniture

  • Marketing commits to generating 35-80% of sales-accepted leads, and to closing 35-40% of deals in a quarter.  If you do not know what number you are responsible for, you are not strategic.
  • Don’t do live webinars – record them and push them out there – allow your customers to sign on when they can, fast forward to what they want, and interact as they wish.
  • It’s hard to find online-marketing-savvy folks.  If you can’t find someone experienced, hire an inexperienced, smart person and send him/her to get certifications:  DMA, AdWords, etc.  Make sure they have gone through the formal trainings – well worth the investment, and smart people without legacy biases will get this system.
  • Map your marketing process to a sales process – someone digging deep into a product page is much further along the funnel than someone downloading whitepapers.  Know that and automate.
  • Sample budget mix:  Site and Content 25%, SEO 15%, SEM 15%, Email 20%, 3rd Party Emails 10%, Display Ads 5%, Newsletters 3%, Tradeshows 7%.

John Deighton’s definition of Interactive Marketing:  “The ability to address the customer, remember what the customer says and address the customer again in a way that illustrates that we remember what the customer has told us.”

Any other suggestions, please post below!!


Stop Selling, Start Giving

FirstMark Capital yesterday hosted an Online Marketing Summit for our portfolio companies and friends in the community.  The goal was to bring together the latest thinking across a variety of functions (SEO, SEM, Email, Community, Social, Automation, and others) and to improve the overall fluency of our companies regardless of their field.  If I had to summarize everything I learned at the event, it would be this: “Stop Selling, Start Giving”.

The Internet has democratized customers’ abilities to learn about new products, instantly provide feedback, and share their experiences with others.  The traditional model, in which sales controlled the product message to buyers, carefully built relationships, and used those relationships to close deals, has been permanently broken.  One-way marketing strategies can now be easily sidestepped by a user who self-selects how to use products and researches decisions on his or her own.  As a result, marketing’s role has changed to finding buyers when they are ready to make a decision, based on their OWN actions.  Steven Woods from Eloqua calls it “Digital Body Language”.  By reading the Digital Body Language, sales can step in at the right moment of intent to facilitate the close of a deal.

What does it mean to “Stop Selling, Start Giving”?  It means you should try to begin the dialogue with a customer with a value proposition and an insight that addresses a problem they have.  If they don’t have a problem, they don’t need your solution.  If they do, actively help them understand the PROBLEM better, not your PRODUCT better.  One tactic could be a whitepaper, another could be giving away your product for free initially, another could be hosting a community forum where experts comment on industry issues.  In addition, by actively participating in the customer’s pain and facilitating their dialogue, you gain a precious opportunity to subtly influence and learn from the conversation.  Transparency exists whether you want it or not – embrace it!

By the act of giving, you’ll begin to engage a prospective customer in a series of activities.  Each of those activities can be measured online and used to decode where a customer is in their buying process.  Are they just exploring the web site?  Which sections?  Have they downloaded a couple of specific whitepapers?  Have they moved on to using the product?  Asked for some help?  These data points can be mapped to a buying cycle where you can insert a sales activity at the appropriate point.  Done well, you can tie all of these data into one contiguous funnel that starts with first contact at the top and closes with a sale.  But it all starts by giving, not selling!


Zappos & Amazon – Happy News For All

I have been asked a few times over the last week about my thoughts on the Zappos transaction.  I think this is a great story for innovation and startups.  Zappos started in a space many believed could not work online: selling shoes without people trying them on… Of course, as the world grew increasingly comfortable transacting on the Web, that changed pretty quickly and Zappos took off.  With their focus on customer service and company culture (you can watch a video by Tony Hsieh on that here), they were able to build a sustaining brand advantage.

Ultimately, I think Zappos could have gone public, but Amazon stepped in and paid over 20x Zappos’ reported EBITDA.  That’s a serious multiple, healthier than the public markets now.  And of course, in an online business at this scale there are significant capex costs, so I’m sure if you looked at cash flow, you would get an even bigger premium.  Zappos built a dominant brand in a category, and Amazon stepped up and paid a premium to get the company.  To me, that’s a textbook entrepreneurial story.  I think you will continue to see next-generation e-retailing companies thrive, but with an innovative new spin.  Gilt just raised money at a reported $400MM valuation, and had multiple bidders competing to get in.  There is a whole generation of companies pushing the ‘mass customization’ or ‘personalization’ theme, and doing well.  It’s all about finding a novel approach, attacking it quickly, and building scale at a brand level before someone can catch up.


Marketing is the New Sales

Many have written about the rise of highly capital-efficient companies, the growth of SaaS, the penetration of technology into the SMB community, and the new requirement to deliver value to customers at the time of purchase.  Each of these forces has radically altered the sales process.  No longer are armies of sales people sitting in customers’ offices or spending endless hours over dinner trying to forge relationships they can lean on in the sales process.  Instead, customers are “pulling” solutions they are interested in, trying them out for free early on, and selecting products that meet their needs.  If the best way to do sales is to have customers “declare” their interest, then one needs an exceptionally broad funnel to ensure productive sales activity.

As a result, marketing in this new era is undergoing a rapid transformation.  Marketing is no longer about softer concepts like brand building and trade shows; no longer simply providing the appropriate message and collateral for the sales organization; no longer sitting with industry analysts, hoping for positive coverage.  Instead, it has become a much more active, tactical, and quantitative function.  Done right, it becomes a highly integrated and critical part of the overall funnel. 

The next generation of marketing leaders will be fluent in online acquisition channels and in implementing a real-time, transparent, measurable system.  The areas of spend are highly fragmented: SEO, SEM, Email Marketing, Affiliate, Social Media, Community, Video, and on and on, balanced with the traditional channels of PR, magazines, and trade shows.  Quantifying cost per lead, customer acquisition costs, conversion rates, and value per customer across each of these channels requires significant discipline.  Architecting follow-up in a highly automated manner and driving to increasing levels of qualification becomes key.  Webinars, emails, and free trials are scalable ways to move potential customers along and require minimal touch.  Site activity, logins, and usage inform how much deeper customers are getting.
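As a rough illustration of that discipline, the sketch below computes cost per lead, acquisition cost, and conversion rate the same way for every channel.  The spend, lead, and customer figures are made up purely for the example.

```python
# Illustrative per-channel funnel math (made-up numbers); the point is to
# compute identical metrics for every channel so they can be compared.

channels = {
    # channel: (monthly spend $, leads generated, customers closed)
    "SEM":        (20_000, 500, 25),
    "Email":      (5_000, 300, 12),
    "Webinars":   (8_000, 150, 9),
    "Tradeshows": (15_000, 100, 4),
}

for name, (spend, leads, customers) in channels.items():
    cost_per_lead = spend / leads
    cac = spend / customers            # customer acquisition cost
    conversion = customers / leads     # lead-to-customer rate
    print(f"{name:10s}  CPL ${cost_per_lead:6.0f}   CAC ${cac:7.0f}   conv {conversion:.1%}")
```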

The challenge is that many of these channels have only been around for a few years at most.  At FirstMark Capital, we are trying to help our CMOs make the transition by sharing the nuggets of best practices across our portfolio.  Luckily, via companies like Clickable, Conductor, and others, we have some of the leaders in these areas within the portfolio.  We are also organizing an annual Online Marketing Bootcamp for our portfolio companies and friends of the firm to augment organic knowledge with leading experts in the field.

If you are one of these types of CMOs, we’d love to hear from you.  We have lots of places you can be used!  🙂


Jeff Bezos at Wired: Disruptive By Design Conference

I attended the Wired: Disruptive by Design Conference earlier today at the Morgan Library in NYC.  One of the best sessions was of course with Jeff Bezos, CEO of Amazon.com.  I have an incredible amount of respect for Jeff, not only because he stayed true to his strategy in spite of an incredible amount of pressure during the bubble bust, but also because of the spectacular innovations that have come out of Amazon over the years.  The Kindle has revolutionized the e-reader market and launched Amazon into a consumer electronics company.  Amazon Web Services of course has transformed Internet economics from fixed costs to variable ones, and unleashed a wave of new companies to boot.  Jeff did not disappoint, and I thought I would share some of his thoughts below.  My favorite – “The trick as an entrepreneur is to be stubborn on the big things and be very flexible on the details.”  Enjoy, and feel free to post any other good ones you have from Jeff. 

On the economics of e-books and the Kindle: 

  • A textbook is resold 5 times over its life, which is why textbooks cost so much.  With digital books, publishers have the opportunity to make those 5 sales to consumers themselves, so the price can come way down.
  • Historically, we have never made money on bestsellers.  We make money on the mix.
  • For books where we have both physical and e-book inventory (300,000 books), Kindle unit sales are 35% of the physical book sales.
  • “We humans do more of what is made easy”.  You do more when you reduce the friction.  Making buying books so easy makes people buy more.
  • Reading is an important enough activity to have its own device.
  • On multi-function devices versus single-function ones:  “I like my phone… I like my Swiss Army knife, but I also like my steak knives too.”
  • “The physical book has had a great 500 year run, but it’s time to change”
  • “Our vision is to have every book ever printed, in any language, available within 60 seconds.”
  • On Google’s pending deal with the US book industry:  “It doesn’t seem right to get a prize for violating a large series of copyrights”

On staying true to the path and entrepreneurship:

  • “We always noticed some of our harshest critics were our best customers.  Told us we must be doing something right.”
  • Regarding the run up in the bubble: “I always told our employees not to feel 30% smarter when the stock went up by that amount because one day it will go down by the same.”
  • “One of the differences between founders and professional managers is that the founders care about the detail of the vision.”
  • Regarding vision and strategy:  “The trick as an entrepreneur is to be stubborn on the big things and be very flexible on the details.”
  • “If you disrupt something, you have to be willing to be misunderstood for long periods of time.”
  • Regarding products that seem very different:  “A question people at large companies don’t ask enough is “Why not?”” 
  • “I wouldn’t know how to respond to someone if they said, ‘We can’t do this because it’s not in our knitting.’”
  • “The two things we do is work backwards from customer needs and work forward from our set of skills.  AWS is an example of us working forward from our skills, while the Kindle is an example of us working backwards from customer needs.”
  • “Many companies believe learning a new skill is akin to leaving your core competency.”
  • “Errors of commission are over-focused on versus errors of omission.  People over-dramatize how expensive failure is.  You never hear of a company getting criticized for failing to try something.”
  • On trying different ideas:  “If you are in the investment phase and you stop doing it, the only thing that happens is your profits go up.  How hard can that be?”
  • On mistakes:  “We launched Auctions, and no one came.  We licensed Google’s search and launched A9, and no one came.  A year after we shut it down it was still my mom’s homepage.”
  • Citing another quote in response to a question about why they didn’t provide better service, and whether that was deliberate or not: “Never attribute to conspiracy what can be explained by incompetence.”

It was a great session and Jeff had some great lessons.


Fearing the iPhone Push

With Apple’s 3.0 version of the iPhone software quickly approaching, one of the most widely anticipated features is the “Push” functionality.  This allows developers to send alerts, notifications, and other communications to the phone without the application actively running. 

While one can see the obvious utility in the feature, the part of me that manages my email inbox is dreading it.  I am not as bad (or efficient, you pick the term) as those who manage to a “zero inbox“, but I do try and make an effort to have no unread emails every few days.  With this new Push feature, I’m envisioning throngs of app developers, desirous of keeping me engaged with their apps, sending daily, hourly, and minutely notifications.  I’m imagining paging across the screens on my iPhone and seeing 40+ apps each claiming I have 30+ new notifications.  And I’m thinking the email manager in me will start to feel very behind….

So what will happen?  I’d bet the following:

  1. I will find exceptional utility from the few apps that I use regularly that provide me with notifications, and will try to stay as current as possible with them.  The Push feature will enhance my productivity.
  2. I will no longer feel comfortable looking at screen after screen of apps I barely recognize indicating I have a bunch of missed messages.  I will start deleting apps that I currently don’t use but keep on my phone in the background. 
  3. I would bet my reaction will not be dissimilar to others’, and notification “spam” will eventually hit a tipping point.  Apple will step in to regulate the push feature.  They will ensure all notifications are explicitly opt-in and customizable, not granted simply by virtue of agreeing to download the app.

All of the above comes with the caveat that I don’t have the details of how Apple will make the feature available to developers.  But I’m hoping I don’t have a new stack of attention-draining activities to manage….


GaaS at Work: Halo3

Though I don’t have time to be a hardcore gamer, I do dabble with a few games to keep myself current with the state of the art in games, tools, infrastructure, and services.  My experience last night validated an extensive post I did a few months back on the world of Games as a Service.

I decided to fire up Halo3 (yes, I know, it’s old and far behind other new FPS games) and log onto the “Team Slayer” playlist.  In this mode, you are linked by rank and skill level to other random players on the Xbox Live network to form a team.  Your “red” team attacks another similarly formed “blue” team, with the goal of being the first team to get to 50 kills.  You play on maps, which differ in environment, layout, buildings, weapons, etc. 

Curiously, I could not log onto Team Slayer mode because I did not have “the required maps” (Non-Mythic DLC for those that care).  Upon doing some digging, it turns out that Bungie/Microsoft was requiring players to purchase newer map packs that previously had been optional upgrades.  Historically, if you did not buy the new maps, the servers would match you to players that had your same map packs.  This of course would lead many players to play whatever maps were free, and only download newer map packs when they became free.  Hard core players who wanted to learn the best strategies before anyone else would pay for early access to the new packs, but they would have a much smaller universe of players to compete against in those worlds. 

Requiring subscribers to pay for the new maps to access the Team Slayer mode raises some really interesting questions.  The blogosphere and forums were full of strong opinions.  On one side were the hardcore players who wanted everyone else to pay so their network would have more players.  They also defended the need for Bungie to keep getting paid for an entertainment offering to keep it alive.  On the other side were gamers who believed they had paid for the game, which included the Team Slayer function, and that they should be allowed to play with whatever maps they chose to have and not be forced to upgrade.  They would also point out that they already pay Microsoft a monthly subscription fee for the Xbox Live network, which is intended to link them to other players. 

I think this approach is a perfect example of a publisher extracting economics in a continuing, GaaS-driven model.  The new maps cost me about $10, roughly 20% of the original game cost.  As an aside, that seems magically to be about the same as the annual percentage charge for maintenance on licensed software, and the rule of thumb for what annual SaaS prices should be versus comparable license charges.  And one can likely bet there will be new maps in the future that I will have to pay for.  I also pay $50/year (roughly $5/month) for the Xbox Live membership.  If I were not forced to upgrade, then Bungie/Microsoft would have little incentive to keep developing new maps, and eventually a large portion of the audience would move on to a different game.  From their perspective, it makes complete sense to communicate continuously with me through the game, enticing or forcing me to upgrade in order to continue to play the content.  It extends the life of the service to a wider audience and helps them build a strong recurring revenue base.  Both are hallmarks of GaaS offerings and a marked departure from the old CD-based model!

Disagree?  Or more importantly have a strong opinion on the debate?


The Real Culprit Behind TimeWarner’s Pricing Strategy

TimeWarner Cable made a lot of news over the last few weeks when it introduced a tiered pricing strategy for high-speed data services.  The plans ranged from $15 to $150/month depending on the amount of bandwidth consumed.  Their argument was that:  1) as a facilities-based provider, the growth in network usage is forcing their costs up, which they need to recoup;  and 2) this should reduce the bill for the many customers who don’t even reach the lowest usage tier (so the poor user saves) and affect the super users who extract massive benefits from the network (and the rich user pays).  From TWC’s COO: “When you go to lunch with a friend, do you split the bill in half if he gets steak and you have a salad?”  I’m not opposed to the rationale in concept, but I do think there are several issues with it. 

Plenty of people have talked about how the magic of photonics over fiber-based plant has reduced the marginal cost of adding bandwidth fairly significantly.  Bandwidth has an advantage over Moore’s law, in that it has two dimensions that can improve:  concurrency of streams (the number of waves sent over a medium) and the rate of modulation/encoding of those streams (10 Gb/s, 40 Gb/s, 100 Gb/s, etc.).  That multiplication creates huge drops in the cost of providing an incremental bit. 
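A back-of-the-envelope illustration of that multiplication (the wavelength count and line rates below are round example numbers, not any operator’s actual plant):

```python
# Total fiber capacity is the product of the two dimensions, so improving
# either one multiplies the whole.  Figures are illustrative only.
wavelengths = 80          # DWDM channels per fiber (example)
rate_gbps = 40            # line rate per wavelength, in Gb/s (example)

print(wavelengths * rate_gbps, "Gb/s per fiber")        # 3200 Gb/s

# Double the channel count and move to 100G line rates on the same glass:
print(160 * 100, "Gb/s per fiber")                      # 16000 Gb/s
```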

More telling to me is how vehemently the Cable industry fought a-la-carte pricing for television.  This was the idea of forcing the MSOs to allow consumers to pick the channels they wanted to subscribe to and only pay for those a-la-carte, rather than the current model of buying a monolithic stack of hundreds of channels, where the vast majority are never consumed.  In the interest of philosophical consistency, wouldn’t the a-la-carte argument be just as eligible for the “consumption based pricing” label as the data plan argument?  I tend to think so, and can only reason that it’s simply not in their economic interest to offer that argument. 

Clearly, the industry has no interest in shooting its cash cow in the foot.  It is only natural to fight mandated a-la-carte pricing.  But the industry also cannot be blind to outside threats.  The availability of premium shows online in high quality, the rise of on-demand, time- and place-shifted viewing, and the high broadband penetration rate have created a competitor to the proprietary, linear world of coax.  I tell many people that if ESPN360.com were not blocked by TimeWarner, I would have little reason to pay the $160/month I currently pay for cable television and high-speed data.  I’d be able to watch live streaming sports via ESPN360 or CBSSports for March Madness, and I’d watch the 5-7 shows I DVR online at Hulu, Boxee, or some other destination.  All of a sudden, my $160/month bill would be compressed to just over $40 for unlimited data access. 

I’m sure the executives at the various cable companies have also done that math.  And I believe they see customers doing it at a much more rapid pace.  What better way to ensure one’s revenues are not cannibalized, and in fact are allowed to thrive, than to introduce consumption-based pricing for data?  Streaming a few HD shows a few times a month would automatically push one into the $150-200/month group of consumers.  At that price point, the MSOs are absolutely indifferent to whether I watch my shows over their proprietary network or over the Internet on my data pipe.  You can go a-la-carte but pay them just as much.  In fact, they are probably incented to switch me over for revenue generation and cost-efficiency gains – it’s way more profitable for them! 

The path ahead will be tricky.  TimeWarner has already rescinded plans for testing of tiered pricing, because of the consumer fury it has set off.  If they move too quickly, they risk net neutrality legislation being thrust upon them.  Better to let consumers think they won and come out with another plan, lest their hands get tied.  But I think we are crazy to think tiers won’t be introduced somehow in the future.  The MSOs are too smart to let their analog dollars get turned into digital quarters.

What do you think?  Am I being too skeptical?


LTV: Another Metric in SaaS?

I recently had an interesting conversation with a very smart hedge fund buddy of mine.  We were of course talking about investment ideas, given many of us were holding either cash or gold, and I threw out Salesforce.com.  It is generating 15-20% free cash flow margins, growing revenues at 30%+, with a solid recurring base.  This led to a discussion of valuing SaaS companies.

As venture folks trying to build companies, we tend to focus on operational metrics like Annual Contract Value (ACV), Monthly Recurring Revenue (MRR), Average Selling Price (ASP), and Churn.  Both Byron Deeter of Bessemer and Will Price, formerly of Hummer Winblad, have done very nice posts here.  My friend’s perspective was entirely different as a public market buyer.  He looks at everything through the valuation lens.  He said the metrics above are all interesting, but he and his peers tend to focus on the Lifetime Value of a Customer – essentially wrapping many of the components above into a DCF value per customer.  It is very similar to how analysts look at cable companies on the overall value per subscriber.  An obvious point he made, but framed from an entirely different angle, was that small changes to churn assumptions would lead to drastic changes in the overall valuation and associated multiples of a company.  While one can focus on the revenue or FCF multiples, it’s really the LTV that he cares about.
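To make that churn sensitivity concrete, here is a minimal discounted-cash-flow sketch of per-customer lifetime value.  The revenue, margin, discount-rate, and churn inputs are illustrative assumptions only, not the specific formulas referenced below.

```python
# Minimal DCF sketch of customer lifetime value (illustrative inputs only).
def ltv(monthly_revenue, gross_margin, monthly_churn,
        annual_discount_rate=0.10, months=120):
    """Present value of the expected gross profit from one customer."""
    monthly_discount = (1 + annual_discount_rate) ** (1 / 12) - 1
    value = 0.0
    for m in range(months):
        survival = (1 - monthly_churn) ** m        # chance the customer is still paying
        cash_flow = monthly_revenue * gross_margin * survival
        value += cash_flow / (1 + monthly_discount) ** m
    return value

# Small changes to the churn assumption swing the value dramatically.
for churn in (0.01, 0.02, 0.03):
    print(f"{churn:.0%} monthly churn -> LTV ${ltv(100, 0.75, churn):,.0f}")
```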

[UPDATE: Many searching for specific LTV calculations come to this site – a great summary of the formulas to use can be found here by Joel York of Chaotic Flow].

As a venture investor, I had never really thought about the public market perspective on my companies.  But it got me thinking about adding it to the key list of metrics our SaaS CEOs think about, because someday, we hope they will be selling that LTV metric to the Street.  Its component parts are made up of all the metrics we track, but creating an explicit metric often generates focus, and it’s probably one to think about early on in building value.

What do you think?


GaaS – The Rising World of Games as a Service

In the enterprise world, since the advent of Salesforce.com in the late 90s, we have heard about this notion of software delivered from the cloud and offered as a shared, multi-tenant service to customers, with the web browser acting as the universal interface to access the application.  Over the past decade, SaaS-based applications have become mainstream, and are rapidly being adopted by small and medium-sized enterprises globally because of the model’s alignment of service delivery and value.  Interestingly, the same concepts are now beginning to affect the gaming industry.

In the old world of gaming, large hardware manufacturers built specialized consoles to run CD- and DVD-based games.  Game developers would create games that were stored on DVDs and distributed through a vast retail infrastructure.  A game would have a multi-year timeline, and the developers would go off and build a new version, which would completely replace the old DVD (much like writing new versions of licensed software).  Over time, those consoles introduced network connectivity, and services like Xbox Live were launched.  You still bought the DVD as a starting point, but game updates became available online and you could even download new games in their entirety over the network.

Today, a new era is emerging.  It started with the incredible success of World of Warcraft, which showed that a game could be delivered over the web, onto a PC, as a “services”-style game that continually grows and upgrades.  There are over 11.5 million subscribers to WoW, nearly half of which pay $15/month to play the game in North America and Europe.  While the premium subscription model has proven to be wildly successful in North America and Europe, over 5 million WoW players in China continue to play via prepaid game cards at a rate of $0.07/hour.  As most Massively Multiplayer Online games (MMOGs) in China are still played within PC cafes, the primary revenue model continues to be prepaid cards via a time-based pay-to-play model, combined with in-game item sales through micro-transactions, the latter being another gaming trend that is fast gaining traction in western markets.
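A quick back-of-the-envelope on the figures above; the hours-per-month assumption for cafe players is purely a guess for illustration, not a reported number.

```python
# Rough revenue comparison using the figures cited above (cafe hours/month is a guess).
western_subscribers = 5_750_000     # roughly half of 11.5M pay the subscription
subscription_per_month = 15         # USD

china_players = 5_000_000
china_rate_per_hour = 0.07          # USD, prepaid cards
assumed_hours_per_month = 40        # illustrative assumption only

western_monthly = western_subscribers * subscription_per_month
china_monthly = china_players * china_rate_per_hour * assumed_hours_per_month

print(f"West:  ~${western_monthly / 1e6:.0f}M per month from subscriptions")
print(f"China: ~${china_monthly / 1e6:.0f}M per month from prepaid time")
```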

WoW’s success has led to a revolution in thinking about game development and delivery.  There are many examples of PC-based games launching as a single-instance, multi-tenant, shared game application with a monthly subscription price that customers are rapidly adopting.  Two recent examples are Lord of the Rings Online (developed by Turbine and published by Midway/Codemasters) and Warhammer Online (developed by Mythic and published by EA), two western MMOGs that have attracted over 300k paying subscribers each, at $15/month.  Additionally, after having great success in markets like South Korea and China, game publishers are now experimenting with new models that allow users to play games for free upfront and buy virtual items and characters via micro-transactions and P2P trading within the games.  Want the Penguin Micropet in GoPets?  Pay $2.  Want a level 80 character in Everquest 2 without investing weeks of gameplay?  Pay $500.  Companies like Nexon (publisher of Maple Story, Kart Rider and Crazy Arcade) in Korea have generated hundreds of millions of dollars in annual revenue with this free-to-play, micro-transactions-based model.

In addition, game content distribution is going through a massive shift.  Platforms like Steam from Valve are changing how we think of buying and interacting with gaming content.  Steam is a digital distribution and digital rights management platform that delivers gaming content directly to gamers via a web connected client. Steam allows gamers to purchase games and receive game patches and updates in an entirely digital manner. Steam offers both first party games from parent company Valve as well as titles from third party publishers, and currently offers over 350 games to 20 million registered users in 21 different languages.

Underlying this is a significant shift that will put pressure on the largest game publishers and create some great opportunities for creative destruction in the gaming industry.  This new “GaaS”-based ecosystem will share many of the attributes of the “SaaS” world we have seen thrive:

  • Games will be sold and played over the Internet;
  • The game itself will be a shared instance, with foundational upgrades instantly being applied to all players;
  • Game titles will have “continuous” economics, as new levels, variations, and challenges can be dynamically inserted or purchased;
  • The free-to-play model will remove barriers to adoption and encourage initial and immediate game exploration;
  • Micro-transactions via web payments, mobile payments and prepaid cards will allow game publishers to monetize users instantly and directly;
  • Game publishers will have unprecedented ability to interact with their customers directly – measuring navigation and usage as one does on the internet, creating unique 1:1 marketing experiences, and watching for dips or spikes in activity so they can modify the environment in response;
  • Game publishers will be able to collect real-time gameplay data to provide a better and more personalized gaming experience for gamers, leading to more accurate leveling, improved matchmaking and increased socialization within games.

At FirstMark Capital, we have invested in a number of companies that build on these trends, and they are seeing tremendous success in the market.  Riot Games is building a session-based MMO inspired by the very popular DotA community; the game is entering beta and is already getting exciting user feedback.  LiveGamer is an exchange for virtual goods, and has seen transaction volumes and activity rise as more and more publishers introduce virtual items into their economic stream.  We have a number of other initiatives under way, but I believe this notion of GaaS will be an exciting one for the next few years.

(Special thanks to Jason Yeh for his contributions to this post.)


NYC: The Media Capital in 2020?

Today, I attended the kick-off dinner for the NYC Economic Development Corporation’s effort to shape and support NYC’s position as the media capital of the world for the next decade.  The event was held at Gracie Mansion and included 40-50 of the media industry’s most notable names.  It was a great cross-section – from traditional media to the largest ad agencies to the newest digital media properties to deans of the leading NY universities to various NYC venture capitalists – all assembled to discuss the changes affecting the industry, and specifically how the city can create long-term initiatives to ensure NYC remains the media capital in 2020. 

The city seems to take this project quite seriously.  It has hired Oliver Wyman as lead consultant, is setting up a website that will leverage the latest technologies to facilitate the dialogue, and has established a rigorous program for regular discourse and detailed action items. 

The session was kicked off by Mayor Bloomberg, who spent much of his time talking about the need for hope and the belief that the bad times would eventually yield a strong recovery.  He made a number of interesting points, including highlighting an article by Fareed Zakaria about how Canada has had zero bank failures, managed consistent government budget surpluses, has home ownership at the same rate as America without the tax deductibility of mortgage interest payments, and has been growing as a country largely on the back of sound fiscal policy, common sense, and an open immigration policy.  He mentioned a number of interesting statistics about NYC, including that it has more fashion houses than Paris, and took the occasional shot at the folks in Albany.  He also pointed out NYC’s fiscal discipline in cutting expenses in the budget by $3 billion.  It felt like he was beginning his campaign.

After the Mayor spoke, members of the NYC EDC set the stage about the media industry in NYC.  Some relevant facts:

  • Media is the second largest industry in NYC, behind financial services;
  • Media employs over 300,000 people, representing 10% of the city’s total, and generates over $30 billion in revenues;
  • The 305 large and very large media businesses account for only 50% of the media jobs in the city; the remaining 50% come from the 15,000 small and medium-sized businesses in the city (driving home the point that supporting innovation and small businesses is high on the agenda).

We then transitioned to a plenary session discussing the positives and negatives of doing business in NYC, and the key issues we would need to deal with to ensure the “Media NYC 2020” vision.  The highlights included:

  • NYC is still “the place” young people want to be, for its energy, arts, culture, and unique mix of people, and that is an important characteristic to maintain and support;
  • The lines between media and technology are blurring, and there is a strong need to improve the quality of the engineering and development talent in the city to face this growing trend, lest Silicon Valley keep all the technology spoils to itself:
      ◦ Many examples were cited of media firms that have simply put their technology development in other geographies because they could not find the needed talent in NYC;
      ◦ There was a universal sentiment that NYC needs to establish a “Media Center” that brings together academia, industry, and the bleeding-edge media technology issues, much as MIT has done for Boston/Cambridge or Stanford for the Valley;
      ◦ There were discussions about creating engineering scholarship programs to attract the best and smartest students from around the globe to the city;
      ◦ There were discussions about tax incentives and credits for startups to combat both the higher cost of living and the more lucrative salaries that financial services firms have paid techies, including one radical idea of creating tax-free zones, as some other countries have done, to foster community and innovation;
  • In the debate about whether the city should support “traditional media” or “new media”, there was an acknowledgement by several that “big media” cannot lead the charge, as it is facing a fundamental shift in market forces and will be permanently in cost-optimization mode for its legacy products.  The key is not who should be protected, but that the industry of content creation, aggregation, distribution, and monetization be supported, without regard to old or new, so that NYC maintains its status as media capital.

It was an interesting night filled with plenty of good conversation.  What other suggestions would you have for the folks at the NYC EDC?  Please post them here, and I’ll be sure to pass them along!


AlwaysOn Panel: “Big Media’s Digital Strategies: Where do Private Companies Fit?”

I moderated a panel this past week at the AlwaysOn OnMedia conference in NYC.  It was an opportunity to get behind what the “big media” folks are thinking in this economy, and how they interact with startups.  The panelists were Jessica Schell, SVP, NBC Universal; Walker Jacobs, SVP at Turner Digital; Vivek Shah, Group President Digital, Time; Jim Spanfeller, President, Forbes.com; and Sanjaya Krishna, Principal & US Digital Services Leader, KPMG.  Below are the most interesting takeaways I got from the session.  For the full panel, click here.

  • On the overall economy, as expected, most of the panelists indicated it was tough going out there, and they were focused on partnerships that drove revenue.  In fact, given the pressures in the broader market, they were “more open than ever” to partner.  Some of the panelists highlighted their willingness to do deals in areas like content as evidence of that openness.
  • One of the key challenges they saw in unlocking more digital dollars was translating brand advertising into value online.  One of the more interesting ideas came from Vivek Shah, who said that while growth in performance-based advertising in a recession is to be expected (as demonstrated by the most recent Google and Yahoo quarterly results), it is akin to harvesting crops.  It’s easy to pull in more food near term by harvesting more (search), but if you don’t plant any seeds (brand advertising), you may find yourself without crops in the future.
  • All the panelists want to find ways to drive additional lift and yield – the “optimization” problem has still not been solved.  Each was working with various contextual, behavioral, and other techniques to try and improve CPMs and deliver a more compelling story for this medium versus other areas of advertiser spend.  A few areas of strength were highlighted, including QSRs (quick service restaurants) and entertainment, to go along with the usual weak spots of finance and autos.
  • In defense of traditional media, the panelists pointed out that people turned first to CNN, not the blogosphere, when news of the airplane landing in the Hudson River broke.  The panel expressed a need for better curation tools.
  • There was lots of discussion around the dearth or plethora of data online, and the need to make better sense of it all.  Data standardization continues to be a recurring theme.
  • Time Warner and NBCU both highlighted their investment arms (Time Warner Investments and Peacock Equity Fund) as one way to get introduced and a way for them to learn about startups, but quickly pointed out that the best way is a direct operational relationship.  An investment does not guarantee a deal, and a deal does not guarantee an investment.
  • In terms of mistakes startups make when engaging with big media, the panel offered the following advice:  1) don’t present a deal that assumes you’d capture the lion’s share of the economics out of the gate; 2) set expectations appropriately – start small and prove success rather than promising the moon; 3) focus on how to drive revenues in this environment; 4) know what items they are willing to outsource and what items they never would (such as the sales relationship).
  • I concluded by asking the panel what company they would start, knowing the problems they currently face in their environments.  The answers:  a next-generation data exchange, improving the mobile experience, new back-office systems designed for the digital era, and improving operational efficiencies.

It was a fun panel to moderate.  The panel cited numerous examples of startups they have successfully partnered with to drive mutual value, but it was clear there was a long way to go.  Those of us part of the startup ecosystem should take heart!


Black Friday Shopping

I just read an initial report out of ComScore indicating this year’s Black Friday retail e-commerce numbers were up slightly over last year.  Online, non-travel e-tailer sales grew 1% for the day, to $534MM from $531MM last year.  For the month of November, retail e-commerce sales were down 4% from last year’s numbers.  The National Retail Federation, on the other hand, is forecasting an increase of 2.2% for the full Thanksgiving weekend (with only Sunday being an estimate), on total spend of $41 billion and average customer spend up 7% from $347.55 to $372.57. 

All in all, I’d consider the data to be encouraging (relatively speaking).  It seems to me all retailers were very concerned about spend and pushed heavy discounts to the forefront to ensure the holiday season got off to a good start.  It may not bode well for retailer margins, or for the overall health of the industry for that matter, but at least a strategy of heavy discounting did create elasticity and spend with consumers.  It would have been far worse to heavily discount and feel like one was simply pushing on a rope.  People could have easily refused to put any money out this holiday season, and frankly I would have guessed we would see declines in spend.  I’m still not sure I believe the increase in average purchase size.

Walking around, things seemed to be pretty busy.  I put a couple of pictures from Macy’s and the Apple Store in NYC this weekend below.  They were jammed.

Apple Store NYC

Macy's

The next step is to see whether people have “forward bought” and all retailers have done is rob from tomorrow to get paid today.  I noticed several retailers offering discounts for future period purchases.  For example, at Banana Republic, upon completing a purchase, they offered a card for 20% off any item between December 2 and 22nd.  The goal is clearly to get me back into the shop.  It will be interesting to hear the data come in over the next month.  If anyone has other good anecdotal data, would certainly love to hear it!


Facebook, Twitter, and the Convergence of Messaging

Lots of chatter about Twitter being offered $500MM by Facebook.  Some think Twitter is crazy not to take it, while many others correctly point out that $500MM is not $500MM if it’s in stock.  While Facebook may hold up the Microsoft $15 billion valuation (an artificial benchmark, given how strategically important the advertising deal was to Microsoft, not to mention that Microsoft received preferred stock), my discussions with a number of people tell me Facebook common stock has been trading hands at somewhere between $3 and $4 billion in value.  If you’re Twitter, that’s the difference between owning 3% of Facebook and 12.5%.  That’s a huge difference in ownership when it comes to upside! 

Facebook is an unbelievable social hub, where casual communications amongst friends are mainstay.  Facebook has also been incredibly successful with mobile usage.  There are over 15 million active users of Facebook Mobile, growing over 300% from last year!  By comparison, Twitter “only” has 6 million active users of the product.  If Facebook is the dominant player in casual communications and has an incredibly strong product in the Mobile space, why would it make a “buy” decision versus a “build” one? 

I think Facebook is looking to take advantage of this downturn in the economy to become the largest social network and communications hub out there.  It turned down some huge offers to stay independent.  There’s no turning back now.  Twitter is the poster-child “Web 2.0” company – incredible usage, no revenues.  If Facebook could get Twitter for a reasonable price (i.e., selling them on a $15 billion valuation), it could clearly capitalize on Twitter’s market momentum.  Pick up a viral service that has a high degree of overlap with your own users, and use the integrated service to draw everyone else on Facebook onto it.  Even if they don’t buy Twitter, Facebook must be working on some sort of SMS-based, Twitter-like feature.  They might even add a Loopt-style service alongside the same platform.  Extrapolating from the chart below, text messaging is a very important communications medium for Facebook’s core audience, and clearly offering a full feature set would rank high in ensuring Facebook’s dialogue with its core audiences. 

 

[Chart: teen communication channel usage]

Looking at the chart above also provides some hint as to where Facebook might be headed next.  In my mind, the next most obvious place for them to go is email.  While younger kids view email as the “formal” way of communicating with adults, its usage is uniform across age demographics (see below).  And we all know how incredibly sticky email addresses are.  Yahoo! has over 260 million users of its email service, and AOL has long maintained audiences with its legacy email accounts.  Gmail, by Google, while growing, is a surprisingly distant follower.  I’d bet many of the younger users of Facebook would easily use an “@facebook” account, or any separate brand Facebook might come up with, especially if it were appropriately integrated into their social messaging platform.  Facebook might even do something really interesting by providing POP access to its social messages to drive adoption.  Putting aside the details of how Facebook sorts/presents email from chat or social messages, it would seem like a great way to start building an organic presence in email for a huge audience you control.

Other possibilities could include expanding Facebook’s chat platform.  While they have their own internal chat function, why not approach Meebo or eBuddy to acquire their tens of millions of interoperable IM users?  Like Twitter, they likely share the attributes of “high usage, light revenues”.  In addition, Facebook could launch a VoIP-based voice service that it embeds into its chat platform and its smartphone mobile applications. 

Imagine the converged communications possibilities.  Facebook would have the SMS market cornered via Twitter or its own offering; it could have not only the internal usage of its Chat application but also corner interoperable IM services via acquisition; it could have users starting their sticky email “lives” with the launch of @facebook (or @nameyourbrand) so users can communicate with all those “adults” outside the Facebook ecosystem; it could have application messaging enabled by the open Facebook platform; and it could have voice (VoIP) services embedded via the web and the downloadable mobile app.  While pure speculation on my part, one can see how the innocent Twitter play could be one small step towards Facebook aggressively trying to converge our messaging platforms.


Improving Our Infrastructure, One Election Booth At a Time

What a week!  There has been an enormous amount of press coverage, both domestically and internationally, about the implications of Barack Obama becoming the next President of the United States.  I do not think they can be overstated.  This election is a testament to the ideals of this country and the power of impassioned masses.  While I am a registered Independent, and prefer to keep my politics out of my regular dialogues, the avid traveler in me cannot help but feel great about the universal refrain of support and acknowledgement from every corner of the globe.  I am quite hopeful in a period that only offers bleak challenges.

Among President-elect Obama’s many policies, one that particularly resonated was the call to upgrade and improve our infrastructure.  Obama’s use of technology has been well documented, from his online campaigns to his engagement with Facebook audiences to the mass SMS message to campaign volunteers acknowledging their work before he delivered his acceptance speech.  I believe this country has not come close to generating the efficiencies possible by leveraging the latest information technology.  Nowhere was this more evident than at the voting booth where I cast my vote.

On an electrifying Tuesday morning, rather than casting my absentee vote, I walked over to the Prince Georges Hotel on 28th Street in Manhattan to cast my vote in person.  The line rounded the corner, but this was my day to participate in all this country offers.  I waited patiently for an hour before getting inside.  When I entered the hotel, I felt like I had been thrown back centuries.  I was greeted by a woman who had a crumpled up piece of paper with handwritten numbers, which would identify which machine I would stand in line to vote from.   These numbers were misaligned and looked like chicken scratch.  She directed me to the line for “12”. 

The line for “12” overlapped with the lines for “28”, “51”, “13”, and others.  They wound and zigzagged around each other in a swirling mess.  Once I got to the front, I was greeted by a woman who checked my ID and pointed me towards a booth.  The booth itself was enormous – twice the size of those “Polaroid” photo booths you would take pictures in as a kid at an amusement park.  Inside this big, old grey piece of metal were columns for the candidates.  Next to the names were manual black knobs.  At the bottom of this machine was a massive (3-foot-long) rusted red lever.  I had no idea what I was supposed to do, and was still uncertain after reading the three-point instructions at the top.  It turns out you have to take this massive lever and push it all the way to the right.  Then you turn each manual knob counter-clockwise for the candidates you want.  And when you are done, you pull the lever all the way to the left again.  After that, you stand there and hope something happened.  There was no feedback, no click, no guidance.  I walked out scratching my head, until another woman came over and pulled some other lever in the back to reset the machine for the next person.  It somehow implicitly validated that I did something.  Nothing about the registration of my vote was tied in the machine to who I was (i.e., did I vote?).  Maybe some manual reconciliation happens afterwards.  And I couldn’t tell how this machine could possibly do anything but offer very high-level total calculations via an internal abacus.  All in all, this machine could have easily been built in the era of the Gutenberg press back in the 1400s. 

My tiny experience at the voting booth could have gone radically differently, and generated massive savings all around.  The first issue that struck me was the opportunity for human error.  From the first woman and her handwritten notes, to the swirling chaotic lines, to the archaic machine, each of these had a material (>5%) chance of error, especially in light of the volume of people.  Compounded, it could lead to material errors in votes – the fundamental privilege of our citizenship!  Second, there is a huge manual effort that could be entirely eliminated by using a modern booth, as is available in select other states.  No counting votes or manually inputting data into computers; it would happen instantaneously.  This could lead to substantive savings if implemented in a standardized manner on a national basis.  Third, while safeguarding the privacy and sanctity of the actual vote, there is the opportunity to use the information from these machines to understand and improve our democratic process. 

Technology cannot cure all ills.  It is not magic.  But if used effectively it can transform how we leverage and process information.  This little example is one of an infinite number of inefficiencies that exist within our government.  Even the smallest focus on these can lead to substantive productivity and cost improvements that can go towards any number of the major issues facing the country.  I am hopeful our new President-elect will push the drive to modernize our state.


Advertising in 2009

As many of you know, Ad-Tech is in NYC this week.  It’s a great conference that brings together some of the leading traditional and digital thinkers to explore the latest topics affecting the industry.  The timing of this Ad-Tech was particularly interesting given the broader market environment. 

I had the pleasure of being on a panel entitled “The Digital Economy” with David Moore of 24/7, Bob Raciti of GE, Imran Khan of JP Morgan Chase, and moderated by Henry Blodget.  Much of the discussion focused on the state of the online advertising market.  I thought I’d share some of my predictions:

·      2009 will mark a very, very tough year for overall advertising, and I would not be surprised if the total ad market (which exceeds $230 billion) declines by 10% or more.  Mary Meeker had put out an interesting analysis that showed the correlation between GDP and advertising spend at 81%.  Based on that analysis, at a 0% GDP growth rate, one would see a 4% decline in overall advertising.  With a 2% contraction in GDP, one would expect to see 8% decline in advertising.  I believe 10% is a real possibility.

·      There will be a continuing rotation of dollars from legacy advertising markets to online advertising.  The overall online advertising market (which is only $25 billion out of that $230 billion pie) will grow, though at much more muted levels than the 15%+ currently predicted by the market – more likely mid single digits overall.  History shows that advertising eventually follows the user, and given how woefully behind ad dollars are relative to time spent online, growth should be expected.  This will be offset by declines in unit pricing, on both a CPM and a CPC/A basis.  Clicks or actions won’t matter if the consumer cannot ultimately convert because they don’t have the money.   

·      We will not see the 25% drop that we saw between 2001 and 2003, for two reasons:  1) This time, overinflated tech startups are not buying from other overinflated startups – online is mainstream and touches nearly every industry in a meaningful way.  2) The inventory being offered has evolved from display-only many years back to display, search, SEO, email, lead generation, affiliate, etc. 

·      This contraction could put MAJOR pressure on the traditional media players.  In particular, I worry about the newspapers, who still generate over $38 billion in advertising with content that is often readily available from hundreds of sources, including blogs, many of which are viewed as more “authentic” by young readers.  I think we could see some major failures over the next few years.  Those who produce premium content, or content that has a high cost of production, controlled distribution, and long shelf life (eg the networks, film studios, etc) will have to work through their transitional issues and the current tough environment, but will survive and thrive online.  

·      Within online advertising, consistent with prior recessions, we will see retrenchment to direct response/performance oriented spending.  Search will grow much faster than display, as people will release dollars only to the extent they are certain they will see them back very shortly. 

·      Other areas of robust growth will include online video and in-game advertising, as people increase time spent on leisure entertainment.  Online video will get even more compelling as we get beyond pre-roll alone.  There is such a rich opportunity to make advertising within a video context much more engaging and real-time:  you can engage users with calls to action, make real-time “hot lead” phone connections, and offer incentives to induce immediate behavior.  We should watch for some exciting innovations.
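
To make the GDP point above concrete, here is a minimal back-of-envelope sketch of the straight line those two data points imply.  To be clear, this is my own illustrative arithmetic, not Meeker’s actual model – the 81% correlation figure by itself does not determine the slope.

```python
# Back-of-envelope sketch of the implied linear relationship between GDP growth
# and total advertising growth, based on the two data points cited above
# (0% GDP growth -> -4% ad growth, -2% GDP growth -> -8% ad growth).
# Purely illustrative arithmetic, not Meeker's actual analysis.

def implied_ad_growth(gdp_growth_pct: float) -> float:
    """Linear fit through (0, -4) and (-2, -8): slope 2, intercept -4."""
    slope = (-8.0 - (-4.0)) / (-2.0 - 0.0)   # = 2.0
    intercept = -4.0
    return slope * gdp_growth_pct + intercept

for gdp in (0.0, -1.0, -2.0, -3.0):
    print(f"GDP growth {gdp:+.0f}% -> implied ad growth {implied_ad_growth(gdp):+.0f}%")

# On this simple line, a roughly 3% GDP contraction is what it would take to
# produce the ~10% ad decline I suggest above.
```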

We are still early in many aspects of the online revolution.  One of my companies, Conductor, just released a report showing that over 75% of the Fortune 500 have no presence for their keywords and brands in natural search.  Consistency of measurement has continued to be a challenge to unlocking more spend – we have plenty of data, just no good way to agree on it.  Increasing fragmentation in the sources of online spend, in a market where people once had their hands full with just TV and newspapers, will require much more robust technology for automation.  The whole concept of de-portalization and free-flowing content will necessitate a re-writing of all of our Web 1.0 and 2.0 tools.  There is still a lot more innovation needed to move the rest of the $200 billion or so that is not yet online, and so while the short-term market looks tough, the long-term opportunities remain exciting.

Tagged , , , , , , , , ,

Winning at Search: The Algorithm or The Infrastructure?

We are on the eve of Google announcing their results for 3Q.  Google has become a major force in discovery and advertising by virtue of its ability to surface the most relevant result for a user across the broadest set of queries on the Internet.  Dozens of start-ups and certainly a few large players have tried to dethrone Google, but few have been successful.  The switching costs are zero, yet Google’s market share has only gone up.  Narrowing the domain has helped – by limiting topical areas to things like shopping or health, companies have created market share distributions more favorable than in broad search; however, an end user is not going to use or remember 100 different search engines optimized for 100 different topics.  In fact, as it has in Health and in Local, Google has picked off verticals one by one to super-optimize.  This all got me thinking about how a start-up could ever beat Google at the broad game of search.

Search can be decomposed into a few different elements.  The first is a “spider” – a virtual bot that scours the web, parses web pages, and builds a representation of the web; the second is an algorithm that takes those parsed pieces and decides which pages are more important than others given a set of constraints or inputs; the third is a massive index that stores all of this analysis so that at “query time” an engine can quickly take the digested knowledge and weights and return a result. 
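
To make that decomposition concrete, here is a deliberately toy sketch of the three elements – a spider that parses pages, a weighting step, and an inverted index consulted at query time.  Everything in it (the pages, the scoring, the index structure) is invented for illustration; it says nothing about how Google actually implements any of these pieces.

```python
# Toy illustration of the three elements: spider -> algorithm -> index.
from collections import defaultdict

pages = {
    "example.com/a": "cloud computing platforms and data portability",
    "example.com/b": "search algorithms and index infrastructure",
}

# 1) Spider: parse each page into terms (here, a trivial whitespace split).
parsed = {url: text.lower().split() for url, text in pages.items()}

# 2) Algorithm: assign each page an importance weight (here, just its length;
#    a real engine uses far richer signals, such as links).
weights = {url: len(terms) for url, terms in parsed.items()}

# 3) Index: invert terms -> pages so lookups are fast at query time.
index = defaultdict(set)
for url, terms in parsed.items():
    for term in terms:
        index[term].add(url)

def query(term: str):
    """Return matching pages, best-weighted first."""
    return sorted(index.get(term.lower(), ()), key=weights.get, reverse=True)

print(query("index"))   # -> ['example.com/b']
```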

It’s my view that algorithms are not people or resource intensive.  A few guys thinking very hard can come up with simple, revolutionary ideas as Sergey Brin and Larry Page did.  Sure, Google has an incredible number of variables and residual terms that help refine its algorithm, but at the end of the day, it’s very rare that math is invented or discovered.   In fact, I’d wager a “better algorithm” already exists somewhere in academic labs throughout the country.   If it can be written or built by few, it is within the realm of startup possibility today. 

I tend to believe the biggest challenge for a start-up remains circumventing the need to re-create Google’s infrastructure behind the algorithm.  Google spends over $2.8 billion in CAPEX a year – significantly more than it spends on search-algorithm-specific R&D.  I have heard estimates that maintenance and improvement of Google’s algorithms can be satisfied by a few hundred engineers, a small number relative to the 5,800 headcount in R&D.  Google’s CAPEX buys machines that process huge streams of information, run calculations, and store all that data in massive repositories.  In fact, it is estimated that a normal Google search query involves anywhere from 700 to 1,000 servers!  Their compute farms grow as the web grows.

To fundamentally change the playing field, a breakthrough is needed on the indexing and spidering schema.  An index can’t require anywhere near the amount of storage that Google currently has on its disks; the spider must more efficiently parse pages to go into that index.  Perhaps the spider performs distributed analysis while out in the web rather than in a central location; maybe the index is broken up or organized in a completely novel way.  Without breaking Google’s CAPEX curve, a startup would be hard pressed to go as broad and yet be more relevant than Google with the head start in investment that Google already has. 
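
As one purely hypothetical illustration of “distributed analysis while out in the web”, imagine each crawling node summarizing a page locally and shipping back only a small digest, so the central index stores a fraction of the raw data.  The sketch below is my own toy interpretation of that idea, not a design I know to work at scale.

```python
# Hypothetical sketch: the spider summarizes pages at the edge and the central
# index only ever receives compact digests (here, the top-k terms per page).
from collections import Counter

TOP_K = 5

def summarize_at_the_edge(url: str, raw_text: str) -> dict:
    """Runs on the crawling node: keep only the k most frequent terms."""
    terms = Counter(raw_text.lower().split())
    return {"url": url, "top_terms": dict(terms.most_common(TOP_K))}

central_index = []   # stores digests, never full pages
digest = summarize_at_the_edge(
    "example.com/a",
    "search search index spider spider spider algorithm",
)
central_index.append(digest)
print(digest)
# {'url': 'example.com/a', 'top_terms': {'spider': 3, 'search': 2, 'index': 1, 'algorithm': 1}}
```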

I fully acknowledge the first objection to the above:  Microsoft has all the resources in the world and has not been able to replicate Google’s effectiveness.  I cannot claim to know how Microsoft’s money has been spent, but my hunch is that Microsoft has tried to catch up by using variants of the same approach as Google.  The problem is that Microsoft started significantly behind, and playing by the same rules will continue to leave them behind.  Cashback is an interesting attempt to buy traffic, but startups don’t have that option.  I would also concede that the more Google feeds its algorithm with the data it gets from increased usage of the engine, the more disadvantaged any new approach becomes.

All that being said, my current bias is that for a start-up, we need massive innovations in spidering and indexing (or the concepts they represent) to defeat the Google machine, not better algorithms.  The few that have started with a better algorithm have always had to constrain their bounds as a result of running into the wall of how much money they spend on capital equipment.  I am fascinated by the discussion and would love any feedback to the above.  I’d also enjoy reading about anything going on in academia that shows promise.  And if you’d like my views on particular sub-segments within search (vertical, social, etc), feel free to ping me…

Tagged ,

Is Cloud Computing Stupid?

This was the supposition of Richard Stallman, founder of the Free Software Foundation.  As a venture investor hoping to invest in businesses that are ultimately profitable, with strong customer stickiness and sustainable defensibility, you might be shocked to hear that I find some of Stallman’s assertions quite reasonable.  The cloud does have the potential to create lock-in under a certain set of circumstances, and some clouds can fairly be called proprietary development platforms.  Where I disagree is with the conclusion that customers should therefore stay far away from cloud computing platforms (such as CPUoD, SaaS, and PaaS, as defined in my last post).  In fact, I believe that given the rise of open systems, APIs, and standardized data access and retrieval layers, customers can enjoy all the benefits of a cloud platform while maintaining sufficiently healthy competitive dynamics between vendors to keep them open and honest.

There is an obvious issue in Stallman’s position, which is that only 0.01% of customers have the expertise and resources to build their own server farm using all open source components and manage a fully controlled applications and data environment.  Putting that aside, I’m focused on the rest of the customers out there, large and small, that only have time to focus on their own value proposition, and for whom time to market makes the use of clouds a very seductive option. 

Most SaaS applications today can be decomposed into forms that collect data, links that connect data, workflow that pushes data to people in the right order, analytics that repurpose data “A” into new data “B”, and presentation that displays data.  These SaaS applications are “multi-tenant” in nature – meaning there is one version of the application that all customers use.  While there are customizations, 90%+ of the app looks the same from customer to customer.  If an application boils down to a calculation and presentation layer between various “rest states” of data, and a single application is fungible across many customers, then the “uniqueness” lies in the data, not the application.  Therefore, the primary inhibitor to switching to a different application is concern for one’s data.  The easier I can get my data into and out of an application, the less beholden I am to any one vendor.  And if I am not beholden to a vendor, I can insist on the value proposition I need when purchasing the application.  Thus, to me, the argument all boils down to data portability. 

As a very simple consumer analogy, let’s pick the fun world of photo upload applications.  If I could easily extract all my Flickr photos and pump them into any competing service (Ofoto, Shutterfly, Picasa), then I can feel fairly comfortable that Flickr is highly incented to offer the best functionality at the best cost.  If they do not, I take my photos out and push them into the superior offering.  While many services do not provide such photo portability, I believe those that win long term will be those that do, as savvy consumers will flock to such services.

In the old days, data was stored in proprietary formats that could only be read by the application writing the data.  In fact, way back, even the physical storage of data to disk was proprietary!  Things have come a long way with the advent of standards such as SCSI, SQL, ODBC/JDBC, and XML, as well as published ways to extract information via APIs over a ubiquitous transport layer in TCP/IP.  Data is isolated from the application and can be extracted via a variety of methods.  Almost all of the major SaaS suppliers today offer APIs (of varying quality, perhaps) to push and pull information out of their applications.  Many also allow connectivity at the database layer and have built-in export functionality.  The means to get at the data are provided by the application provider, and I would expect this to increase significantly over time.
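
As a concrete (and entirely hypothetical) sketch of what that looks like in practice, the snippet below pulls records out of an imaginary SaaS vendor’s REST API and writes them to a neutral CSV file that any other application or migration tool could consume.  The endpoint, auth scheme, and field names are invented for illustration.

```python
# Hypothetical sketch of extracting your data from a SaaS vendor via its API.
import csv
import json
from urllib.request import Request, urlopen

API_URL = "https://api.example-saas.com/v1/contacts"   # invented endpoint
API_KEY = "YOUR_API_KEY"                                # invented auth scheme

req = Request(API_URL, headers={"Authorization": f"Bearer {API_KEY}"})
with urlopen(req) as resp:
    records = json.load(resp)   # assume the API returns a JSON list of records

# Write the extracted records to a neutral format (CSV) that any other
# application, or a migration tool, can consume; assumes records share fields.
with open("contacts_export.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=sorted(records[0].keys()))
    writer.writeheader()
    writer.writerows(records)
```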

The next challenge, after being able to access the data, is making sure data from one side is intelligible to any other application one might want to use.  Fortunately, there are a number of vendors who offer data integration and migration capabilities in the “cloud”.  As an example, FirstMark has an investment in a company called Boomi.  There are others.  These companies build software that takes the “taxonomy” of one application and translates it for other applications to use.  These can be comparable applications, to migrate from one to another, or complementary applications, so that one set of data can be leveraged in multiple dimensions without redundant data entry. 
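
At its simplest, that “taxonomy translation” is a mapping of one application’s field names (and, where needed, formats) onto another’s.  The toy sketch below illustrates the idea with invented field names; real integration products like Boomi obviously handle far more than re-keying records.

```python
# Toy illustration of taxonomy translation between two applications.
FIELD_MAP = {           # App A field -> App B field (hypothetical names)
    "full_name":  "contact_name",
    "email_addr": "email",
    "acct_value": "annual_revenue",
}

def translate(record_a: dict) -> dict:
    """Re-key a record from App A's taxonomy into App B's."""
    return {FIELD_MAP[k]: v for k, v in record_a.items() if k in FIELD_MAP}

record_from_a = {"full_name": "Jane Doe", "email_addr": "jane@example.com",
                 "acct_value": 120000, "internal_only_flag": True}
print(translate(record_from_a))
# {'contact_name': 'Jane Doe', 'email': 'jane@example.com', 'annual_revenue': 120000}
```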

If data is portable, then customers benefit greatly by leveraging a “cloud”.  Cloud vendors have extraordinary leverage in CAPEX, one that few companies can match.   The bandwidth and storage consumed by users of EC2 & S3 now exceed that from Amazon.com and all its other sites combined!  Quite a striking example, and it’s hard to fathom matching that kind of purchasing power.  In addition, the people and software investments to scale the infrastructure, the processes and procedures, the knowledge, all are very costly to duplicate.  If done right, clouds can be a much cheaper place to operate and allow customers to focus on their core value proposition as long as they insist on data flexibility.   

The above is also true for PaaS vendors.  Most PaaS vendors go out of their way to note that applications built on their platform have APIs out of the gate.  Now, it is true that ISVs choosing to use a PaaS platform are buying into a proprietary programming style.  In addition, they are at the mercy of the PaaS vendor’s viability, and must trust that the PaaS vendor will not jump into the SaaS game by building competing applications.  But ISVs have the same data portability options as an end customer.  If they choose to build on another PaaS, they simply have to ensure their current PaaS vendor allows them to pump data from one platform to the other. 

None of this is easy.  Data movement has always been challenging.  But I believe we are now in a permanent era where you cannot “hide” data behind layers upon layers of proprietary code.  Customers and ISVs must insist that any cloud vendor they choose provide easy and standardized means to access and move their data.  If we all do a good job insisting and asking the right questions, the winners in the cloud battles will be those that embrace openness and portability, and who focus on retaining customers by having the best application instead of by scaring them with lock-in.

Tagged , ,

What is Cloud Computing?

Given Larry Ellison’s recent objections to the term “cloud computing”, and that I will likely write about the space often, I thought I would take a shot at defining things that get lumped into the term. 

I tend to agree that “cloud computing” is an abused term, but if you parse the various definitions, I think you come out with four categories:

·         Co-location and web hosters:  The forefathers of the cloud computing space.  They created specialized data centers with redundant infrastructure (power, network connectivity, etc) for third parties to leverage.  Customers were separated by cages, where they could put their own servers into racks (or lease the hoster’s servers).  Applications and data were technically outside the offices of the customer, and accessed via IP protocols and the Internet cloud.  Put the Internet cloud together with computing located elsewhere, and one could conceptually call that “cloud computing”.

·         CPU/Storage on demand (“CPUoD”):  These players start with their own data center facilities and servers, but have leveraged the explosion in hypervisors to virtualize server pools.  They then layer on a standardized OS environment, web servers, load balancers, databases, etc.  The application must be built for that run-time environment, but if it is, one simply focuses on developing the application and buys compute/storage that executes the software and stores the data in a usage-driven pricing model.  Some folks optimize for specific languages, such as Google’s AppEngine with Python, while others provide specialized diagnostics and monitoring services on top of their cloud to differentiate.  Some are stateful, some are stateless, some have persistent storage, some dynamic storage.  But at the end of the day, it is a standardized operating environment that one pays for per GHz and/or GB to run ANY application.  I’d view this as the basic “brick” of cloud computing.

·         Software as a service (“SaaS”):  On the other end of the spectrum, software as a service providers build all the way up through the application/UI layer to offer a business function to the end user in a shared, multi-tenant, recurring-revenue model.  While extensible and customizable, it is one instance of the software that serves many customers.  It is often lumped into cloud computing because the data center cost (where the software executes and data resides) and assumed scalability are bundled into the cost charged to the end user for the application.  The vendor can either:  1) take their own racks, cages, and servers (as in the first option above) to build their own internal CPUoD environment and write their application on top of their own controlled stack, or 2) use a CPUoD provider and write their application for that environment.  The end user pays for an application that scales by usage of the application (which may or may not need more compute), but the scalability and cost of the infrastructure are hidden from the user.  From the customer’s standpoint, this is a “cloud” + application.  But buyer beware, as Bob Moul of Boomi points out, many things calling themselves SaaS are not.

·          Platform as a service (“PaaS”):   This is the newest category.  It began when Salesforce realized that their SaaS application could be decomposed into more basic units that could be building blocks for any application:  forms, tabs, and links, tied together with workflow logic and wrapped around data.  Force.com is a generic representation of an application – no data, no logic, but all the means to present, push, and pull information.  To build an application, one “programs visually”:  customize a form, create a workflow for the application, specify the data types via fields, and your app is built (a rough sketch of what such a declarative definition might reduce to follows this list).  PaaS removes the engineering-level concepts of writing code in computer languages like C++ or Java (compiling, debugging, inheritance, message passing, etc), and incorporates the infrastructure scalability of CPUoD.  Like SaaS, the purchaser of an application built on a PaaS platform pays an application fee that assumes the infrastructure scales transparently to them.  Unlike SaaS, PaaS creates multi-tenancy across applications!  There is a single shared instance of a platform that supports multiple applications running on one or many CPUoD infrastructures.
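
To make the PaaS idea a bit more tangible, here is a rough, hypothetical sketch of what such a “visually programmed” application might reduce to under the hood: a declarative description of objects, fields, forms, and workflow that the platform interprets, rather than compiled code.  The structure below is invented for illustration and is not Force.com’s actual format.

```python
# Hypothetical sketch: a PaaS application as a declarative definition rather
# than compiled code. Names and structure are invented for illustration.
app_definition = {
    "name": "Expense Approval",
    "objects": {
        "Expense": {
            "fields": {
                "employee": {"type": "lookup", "to": "User"},
                "amount":   {"type": "currency"},
                "status":   {"type": "picklist",
                             "values": ["Submitted", "Approved", "Rejected"]},
            },
            "form": ["employee", "amount", "status"],   # presentation layer
        }
    },
    "workflow": [
        # "When an Expense over $500 is created, route it to the manager."
        {"on": "Expense.created",
         "if": {"field": "amount", "op": ">", "value": 500},
         "then": {"assign_to": "manager", "set": {"status": "Submitted"}}},
    ],
}

# The platform, not the ISV, supplies the engine that interprets this
# definition, renders the form, runs the workflow, and scales the infrastructure.
print(app_definition["objects"]["Expense"]["form"])
```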

Where’s the opportunity for startups?  Well, building and running clouds is a complex and costly activity.  It’s hard to envision a young company having any comparable buying leverage on the CAPEX side.  One cannot hope to get anywhere near the same discount as Google on CPUs and motherboards.  And people use Amazon because it’s cheap.  The only hope I see for companies to make it is:  1) differentiated scaling systems that drive down the OPEX cost equation, 2) such a differentiated coding/support environment that people are willing to pay a real premium, or 3) gaining critical mass in a specific ecosystem of diverse applications that generates a network effect for one’s cloud.  The other area I like is plays that ride on top of clouds, providing value-added services that fill gaps for the CPUoD/SaaS/PaaS providers.  That shifts the game from an economic capital exercise to an intellectual capital exercise, where nimble innovators thrive!

Tagged , ,

Entrepreneur’s Guide to Surviving the Credit Crunch

Given the events of the last few weeks, there are many provocative questions being asked about what the subprime implosion and subsequent bailout mean to all of us.  Will the bailout bring liquidity back to the market?  Are we descending into the worst recession we have seen since the 1930s?  Is the US secretly a socialist regime shrouded in capitalist clothes?   Amongst all the questions, very few are oriented towards the startup that has no leverage and is still building product or traction into the market. 

While I’m not here predicting the “bread lines” scenario, for most entrepreneurs and even us VCs, our companies are our lives.  The last time around, many “whistled past the graveyard,” refusing to believe things were different until it was too late.  It would be irresponsible not to consider a tougher environment.  The following are things to consider for any entrepreneur beginning to prepare for upcoming market volatility:

1.   Cash is king once again, and is all that matters – preserving it, extending it, replenishing it.  If you have less than six months of cash, you need to seriously evaluate how to replenish your balance sheet.  Venture is generally the last industry to be impacted by market implosions, given the long-term nature of LP capital commitments and the horizon for our investments – raise your money now if you can.

2.   Planning to hire a lot more people?  Especially sales folks?  Slower, more responsible growth will be cheered, but running out of cash will not generate sympathy.  Rather than hiring a bunch of sales people, who may take months to get productive and still be pushing on a rope, keep the team lean until you sense that your customers have found the bottom.  People do not make any decisions when they are worried about their own jobs.  No point subsidizing commiserating Happy Hours.

3.   Focus resources on a great, specific product.  In a tough market, large companies are cutting wholesale.  R&D groups are in disarray or not as productive because they too are worried about their jobs.  Smaller companies have finite resources and are all playing the “last person standing” game.  Building a narrower product that is incredible at one thing and working outward is better than building a broad set of functions in parallel.  Get to the 10x customer value proposition (3x improvement at 1/3rd the cost) and start selling as quickly as you can.  This will help you leave competitors in the dust.

4.   If you have venture investors, ask them how much they have reserved for this investment.  All responsible venture firms create budgets for how much capital a specific company will need over its life.  It’s how we know how many new investments we can make.  If there are no other investors out there, your existing investors will be your support structure.  Their summed reserve amount is the most capital you can plan on being available to you.

5.   Work even closer with your investors to define value creating events.  No VC will simply hand over all that reserved cash.  Start early, work with them to adjust near and longer term goals to realistic levels, and document them, so that when you visit the partnership having accomplished your objectives, they’ll have your check ready to go.

6.   Check in with your local banker.  Many of the emerging growth banks specifically steered clear of credit derivatives and subprime mortgages.  Their balance sheets are strong and their leverage relatively low.  This market could be an opportunity for them to grow share.  The stronger the relationship you build now, the more likely they could be a supportive source of venture debt or capital.  

7.   Maximize any existing space and avoid signing new long term liabilities.  This benefits you in a few ways:  first, you avoid things like security deposits that tie up cash that can be used for operations; second, the commercial markets could be affected just as much as residential ones and you could negotiate a better deal after the ripple effect hits the economy;  third, it can foster a “cash is king” mindset amongst the team.  Everyone is in this together.

8.   Try to do deals where you get paid upfront, and avoid doing deals that require significant cash upfront.  Again, cash is king.  Better to use cash for business deals than for security deposits (see number 7).  EVEN better to structure a win-win partnership that allows you to operate longer, rather than letting someone else carry your cash on their balance sheet.  If the only deal is an upfront cash deal, hold out a little longer.  If someone else jumps in now, you’ll be there when they go away.  If no one else does, people will really begin to feel pain and you might be able to structure the “death blow” deal against a competitor or lock up some invaluable web inventory for a song.  The flip side is that if you can get money upfront, that is worth a lot!  Whether that’s prepaid inventory, annual billing terms with cash paid upfront, or non-recurring engineering that actually subsidizes your team, keep all the chips in your corner that you can. 

9.   Quantify marketing and shift it towards DR (direct response).  There is a reason why in recessions, even before Google, brand advertising pulled back but DR grew – it has a defined ROI!  Examine all “goodwill”-oriented marketing costs.  Paid search, display advertising (especially with its recent contraction), email marketing, webinars, and other forms of spend can be a much more efficient way to generate leads or acquire customers.  All of them are measurable – what better way to know if your cash is well spent (see the back-of-envelope illustration after this list)?!  In addition, many ad networks will do guaranteed placement or conversion deals in tough times IF YOU HAVE THE CASH.  This is the Internet era – don’t let marketing spend a dime without knowing what you get back.

10. Start thinking about potential HR upgrades!  Tough markets mean top candidates are much more available than normal.  Think about swapping out B- employees for A+ folks.  Every person in a tough market has to contribute – sharpen the blade and drive productivity!
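
As a back-of-envelope illustration of item 9, here is how measurable a direct-response campaign can be: spend, cost per click, and conversion rate roll straight up to a cost per acquisition and an ROI.  All of the numbers below are hypothetical.

```python
# Hypothetical direct-response math: every dollar of spend is traceable.
spend          = 10_000.0   # monthly paid-search budget ($)
cpc            = 2.00       # average cost per click ($)
conv_rate      = 0.03       # click -> customer conversion rate
customer_value = 90.0       # expected gross profit per acquired customer ($)

clicks    = spend / cpc                  # 5,000 clicks
customers = clicks * conv_rate           # 150 customers
cpa       = spend / customers            # ~$66.67 per customer
roi       = (customers * customer_value - spend) / spend

print(f"CPA: ${cpa:,.2f}  ROI: {roi:.0%}")   # CPA: $66.67  ROI: 35%
```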

Got any other tidbits?  Feel free to post them here….

Tagged ,