A while ago, we started an R&D program called Integrated Digital Experiences (IDE), where our goal was to develop solutions for new digital channels such as social networks, and to integrate those channels to provide a single consistent multichannel relationship between our clients and their customers. The previous sentence will take many blog articles to fully explain, but today I want to focus on social networks, social media, and all things “social”, and set the stage for later, more detailed articles.
I often have the pleasure of presenting to clients and conference audiences about current trends and opportunities around social. By and large, people now recognize that sites like Facebook and YouTube can be powerful marketing channels, but I still get people who ask if Twitter is a fad, or if the excitement around social networking will die down. If you’re reading this, I’m probably preaching to the choir, but I want to spend some time answering those questions, and give some examples of why I think we’ll be talking about Facebook and social tools in general for quite some time.
Let’s talk about “social” in general. If you’re over 30, Facebook feels like a newfangled tool, but it (and many others) is simply supporting a human behavior that has existed as long as we have. We communicate, we share, we express our interests, and we align ourselves with different communities and interest groups. Facebook didn’t invent these behaviors. It simplified them and, in the process, made them more manageable and visible to others. Similarly, we can describe Twitter in Web 2.0 terms like “microblogging”, but the important part is that it’s a manageable communication channel between individuals and the people who care about them. These are capabilities people have always wanted, but technology never let us exercise them at scale. So, what are people doing with these “new” social tools? They are doing the same things they’ve always done, at dinner parties, on postcards, in fan clubs, and at reunions. The difference is that they are doing it more broadly and more easily.
Now, let’s talk for a moment about businesses using these tools to communicate with their customers. I will say much more about this in other posts, but I’ll talk in broad strokes here. If the consumers are communicating on social networks in a way that is both personal and social, then the same should be true for businesses. However, many businesses are locked into a mindset of mass marketing and broadcast messages. This leads them to do counterproductive things like sending mass emails to their Facebook fans instead of recognizing the fact that their fans have established a much richer connection with them over Facebook itself. Instead, businesses need to reset their thinking to match the new capabilities of social channels. They need to adopt the mindset of the small town shopkeeper. This was a person who knew his customers’ faces, their names, their habits, and their friends. He said hello to them when they came in, and chatted about the things that were important to them. He listened to them. He didn’t scream slogans at them when they walked by.
Everything I just described can now be done on Facebook, at a global scale for millions of customers. In later posts I’ll talk more about how to do it successfully. In the meantime, ask yourself where you are on the spectrum between mass marketer and shopkeeper, and if you need to change your mindset.
*Kelly Dempski is the Director of Research for Accenture Technology Labs, Sophia-Antipolis, France. He can be followed at @accenturesocial.
** photo by Gauldo.
In my last post (On Web + TV), I talked about a star-crossed couple, the Web and the TV, sitting in a tree. It's been about 15 years now since friends and relatives started trying to set these two up on a date. But somehow, Web and TV never really hit it off. Why is this?
Interface is one answer. The mouse and keyboard were simply never a natural way of interfacing with technology in the living room. Both of them require a flat surface, for example – lots of extra horizontal space – and they reward an upright seated posture and a high degree of accuracy. Which makes sense, since both were designed to be used on a desk, not a couch and a coffee table. The Web + TV marriage was definitely going to be rocky until something came along to displace the mouse and keyboard. (Something like the iPad, perhaps?)
On its own, however, the interface factor isn't really what has consigned Web + TV solutions to niche status thus far. No, I think there's a simpler reason even than that.
It boils down to what people actually use the web to do. And that is: read. Also: write.
The web is a text medium.
And no one uses a TV to read – and no one’s going to any time soon.
Hear me out. 1 in every 6 minutes on the web is spent on a social network – reading and writing. By some estimates another 35% of internet time is spent in an email client – reading and writing. Shopping is reading, just like flipping through a paper catalog is reading. Then there’s time spent reading blogs, news, wikis, tweets, and so forth. These are all text media. HTML is a text markup format. The fact is that so far in its history, the dominant use for the web has been for exchanging textual and tabular information. Images and graphics are included, of course, but they play essentially a subordinate role. This is why the blind can, and do, use the vast majority of web sites.
Some people have speculated over the years about the ‘death of reading’, but in fact, in the last 50 years, as our media diet has grown to encompass more and more hours of the day, the written word has actually grown substantially as a fraction of that diet. According to this study on information consumption, of all the words the typical person encountered in a day in 1960, only 25% were written down. (The rest was mostly radio and TV.) Now the written share is up to 40%, split between print and screen. The market share of the written word is up by 60%.
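The arithmetic behind that last figure is worth making explicit: going from a 25% share to a 40% share is a 15-point absolute gain, but a 60% relative increase. A quick sketch, assuming the 25% and 40% figures quoted above:

```python
# Share of words encountered per day that were written down,
# per the information-consumption study cited above.
share_1960 = 0.25
share_now = 0.40

# 15 percentage points of absolute gain...
absolute_gain = share_now - share_1960
# ...which is a 60% increase relative to the 1960 baseline.
relative_gain = (share_now - share_1960) / share_1960

print(f"Absolute gain: {absolute_gain:.0%}")   # 15%
print(f"Relative gain: {relative_gain:.0%}")   # 60%
```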
Now, I do realize that so far this account flagrantly ignores one minor detail: web video. Yes – I have heard of a little thing called YouTube.
But let’s do a video reality check. Out of the 66 hours the average user spends on the Internet in a month, a fairly puny 1.8 hours of that time is spent watching video. (Compare to 150 hours of TV, per month, per user!)
If we are willing to grant that most of the other 64 hours of web time is reading and writing time, then we can see why Web + TV is a niche product thus far. It just so happens that TVs are intrinsically quite bad as an interface for dealing with text. Even if there were enough resolution on a standard or HD TV screen to make text good and sharp (which, frankly, there is not), the screen is far away. Text has to be scaled up to be a comfortable size, and once scaled up, it has to be scrolled frequently. Tables and charts are troublesome – if they fit onscreen, they are too small, and if they are zoomed in, they must be panned around.
Even more basic than that factor is the way that people seem to crave physical proximity to the written signs they are working with. For example, despite the fact that many people could, with a nice projector and a white wall, give themselves a larger monitor – larger in size, and larger in their field of view – almost no one does this. (I had a professor who tried it. He ended up switching back to a regular monitor on his desk.)
Then again, this preference for keeping our words close at hand might be a historical artifact which will fade away. The practice of reading has changed before. In classical times, literate people always read the written word aloud (even to themselves!). Almost no one knew how to read silently, and the task was thought to be extremely difficult. It was not until medieval times (and the invention of the space character) that silent reading became the norm in Europe. So perhaps we will learn a new skill: to read and write from across the room. But I doubt it.
No, I think we’ll sit on the couch and do something much like what we already do there: watch video. This is what the rise of YouTube really means, and why Google is getting together with Sony and Intel to work on Web + TV. It’s not about convincing people to put down their laptops and spend their 64 monthly hours browsing with a TV instead. No, it’s about a bigger target: the 150 hours of TV viewing itself. These companies want to figure out how to replace huge swaths of broadcast hours with web video hours. (And it makes perfect sense if you’re Google, and you believe that you’ll eventually be able to (figure out how to) generate more revenue per hour than any TV network ever could.)
So the talk of Web + TV isn’t about the web at all. It’s about a different, parallel, and still basically uncharted medium: the video web. For my money, the video web and the written web are two separate media, begetting totally different user experiences, different hardware setups, different patterns of consumption. They’re hot and cool media, respectively, just like Marshall McLuhan said. And what it would take to unite them in some new kind of fusion medium – well, that we have not seen yet.
So I don’t see Web and TV getting together, not anytime soon. But TV and Web’s little brother – Video Web – will get along famously.
If you ask three people what is the cloud (as in cloud computing), you'll probably get back 10 different definitions. Even though no one can agree on the definition, most will agree that "on-demand" and "pay-per-use" are its key characteristics, and all would use Amazon Web Services as an example of the cloud. So it's really confusing when people say MapReduce is a cloud technology, because MapReduce is not associated with "on-demand" or "pay-per-use". If you're wondering about the connection between MapReduce and cloud, read on.
First of all, if you have not heard of MapReduce, it's a technology first proposed by Google in 2003 to cope with the challenge of processing an exponentially growing amount of data. In the same year the technology was invented, Google's production index system was converted to MapReduce. Since then, it has quickly proven to be applicable to a wide range of problems. For example, roughly 10,000 MapReduce programs had been written at Google by June 2007, and some 2,217,000 MapReduce jobs ran in the month of September 2007.
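For readers who have never seen the model, the core idea is small: the programmer supplies a map function that turns each input record into key/value pairs, and a reduce function that combines all the values sharing a key; the framework takes care of distributing the work. Here is a minimal, single-machine sketch of the canonical word-count example (the real systems, of course, run the map and reduce tasks across many machines):

```python
from collections import defaultdict

def map_fn(document):
    # Map phase: emit a (word, 1) pair for every word in the record.
    for word in document.split():
        yield (word, 1)

def reduce_fn(word, counts):
    # Reduce phase: combine all values emitted for the same key.
    return (word, sum(counts))

def map_reduce(documents):
    # "Shuffle" step: group mapped values by key.
    groups = defaultdict(list)
    for doc in documents:
        for key, value in map_fn(doc):
            groups[key].append(value)
    # One reduce call per distinct key.
    return dict(reduce_fn(k, v) for k, v in groups.items())

print(map_reduce(["the cloud", "the web"]))
# {'the': 2, 'cloud': 1, 'web': 1}
```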
MapReduce has enjoyed wide adoption outside of Google too. Many enterprises are increasingly facing the same challenge of dealing with a large amount of data. They want to analyze and act on their data quickly to gain competitive advantages, but their existing technology cannot keep up with the workload. Facebook is using MapReduce in production, and many large traditional enterprises are experimenting with the technology. It turns out that MapReduce can perform most tasks a database management system (e.g., Oracle) can, and it has many advantages over other technologies, including its scale, its ad-hoc query capability and its flexibility.
The first connection between MapReduce and cloud is that MapReduce can benefit from cloud technology. This is demonstrated in the Cloud MapReduce project, which is an implementation of the MapReduce programming model on top of the Amazon services (EC2, S3, SQS and SimpleDB). Back in late 2008, we saw the emergence of a cloud Operating System (OS) -- a set of cloud services managing a large cloud infrastructure rather than an individual PC. We asked ourselves the following questions: what if we built systems on top of a cloud OS instead of directly on bare metal? Could we dramatically simplify system design? We decided to implement MapReduce as a proof of concept. In the course of the project, we encountered a lot of problems working with the Amazon cloud OS, most of which could be attributed to the weaker consistency model it presents. Fortunately, we were able to work through all the issues and successfully built MapReduce on top of the Amazon cloud OS. The end result surprised us somewhat, because Cloud MapReduce has several advantages over other implementations:
- It is faster. In one case, it is 60 times faster than Hadoop (actual speedup depends on the application and the input data).
- It is more scalable. It has a fully distributed architecture, so there is no single point of bottleneck.
- It is more fault tolerant. Again due to its fully distributed architecture, it has no single point of failure.
- It is dramatically simpler. It has only 3,000 lines of code, two orders of magnitude smaller than Hadoop.
Cloud MapReduce advocates building more cloud services. If we can separate out a common component as a stand-alone cloud service, the component not only can be leveraged for other systems, but it can also evolve independently. As we have seen in other contexts (e.g., SOA, virtualization), decoupling enables faster innovation.
The second connection between MapReduce and cloud is that MapReduce is building the foundation of cloud. Other MapReduce implementations, such as Hadoop, are building cloud services too, except that those services are embedded in the project today and cannot be easily used by other projects. Fortunately, those implementations are moving towards cloud services separation. In the recent 0.20.1 release of Hadoop, the HDFS file system was separated out as an independent component. This makes a lot of sense, because HDFS is useful as a stand-alone component to store large data sets, even if the users are not interested in MapReduce at all. In the future, MapReduce may indeed be a cloud technology, when it starts to include cloud services implementations.
Ever since Tim O’Reilly, et al., coined the term “Web 2.0” to describe the second major wave of web technologies, use cases, and business models, technologists, sociologists, and futurists have all struggled to be the first to plausibly identify and characterize what comes next. I would assert, and I am probably not the first, that the proverbial “Web 3.0” has in fact already quietly emerged, and just like Web 1.0 and 2.0, represents significant opportunities for enterprises that are prepared to aggressively pursue innovative business strategies.
To set the stage for this hypothesis, let’s first take a look at what attributes have emerged to define the current “web”:
- A network designed for a limited audience / purpose that has been exploded in scale and expanded in purpose
- Initial limited functionality (data networking) being expanded to become a broad multi-modal communication medium for individuals and corporations
- Information sharing expanded from simple binary/text to graphical content, photos, audio, video
- Expansion of messaging capabilities from asynchronous to fully synchronous, real-time, streaming
- Single geography expansion to global reach
- Emergence of standards and interface schemas (XML) to facilitate integration of heterogeneous systems
- Text morphed to flat HTML and now becoming robust Rich Internet Applications (RIA)
- Static content home pages evolved to storefronts and are now rapidly transforming into eCommerce
- Revenue derived from eCommerce and ad sales
- SPAM, Scams, privacy concerns, and viruses
And while my categorization may not be perfectly complete nor exactly precise, I think it conveys the point. So, given all this, what is Web 3.0? Well, I’m suggesting Web 3.0 is in fact effectively a “parallel web” that has emerged, has taken on and mimicked many of the classic web attributes, has seen explosive early growth, and is poised to become a major economic engine and ecosystem in its own right. And just what is this magic, wonderful, parallel web of the future? FACEBOOK.
Let’s run it down (note: stats are all based on my recall of stuff I read on the internet so they must be true. Check your favorite search engine for more actual, accurate, and timely facts and figures):
- Started as an on-campus tool to replace hard copy “freshman facebooks” with a closed online network -> now more than 380 million users including my mom
- On-campus people networking via the web -> global people networking, games, community sites, commerce, chat, public & private groups, mobile (>65 million users), XBox 360 (2 million users in the first week)
- Simple basic UI (still predominantly in place) -> more photos than Flickr, videos, wall art
- Messaging: wall-to-wall, “poke”, IM, live voice
- Geographic expansion -> usage globally close to, if not yet exceeding, use in the US
- Facebook Connect: 80,000+ websites use to exchange data with Facebook and other sites
- Mostly a “flat” experience – site decorations are the “flaming icons” of the web circa 1997. (we’ll see how the UI progresses)
- eCommerce: Enterprises have been able to build “fan pages” and now can build full-fledged storefronts (www.bigcommerce.com)
- Revenue? VC funds, ad dollars, and the game vendors sure seem to be making some measurable cash.
- SPAM (Mafia Wars / Farmville updates), Scams? (http://techcrunch.com/2009/10/31/scamville-the-social-gaming-ecosystem-of-hell/), privacy concerns (Facebook “Beacon” anyone?), viruses = yes
So to wrap it up, Facebook is the web of 1999 with an extra 11 years of technology and social change mixed in – explosive growth, tons of potential for those enterprises willing to make a move, baseline capabilities in place, active re-definition of online behaviors, and nothing but upside. Now if we can collectively help it to avoid becoming the web of 2001!
PS. A topic for further discussion and debate: at what point does Facebook become a legitimate Google threat? The debate is already raging…
* Michael Redding is the Director for Accenture Technology Labs. He can be followed on his Twitter account, @michaeljredding.
That’s right. I said “aaS”—as in the “as-a-Service” characterization typically used for cloud. Today’s “as-a-Service” characterization of cloud fails to adequately categorize the architecture components.
The cloud landscape is confusing—with over 85 cloud vendors and various definitions of cloud, it is difficult to compare services.
Consider Amazon’s EC2 for virtual machines and S3 for object-level storage. Often, these offerings are both grouped as Infrastructure-as-a-Service. However, the interface and the required solutions to work with EC2 are very different from those for S3. Users submit objects for storage on S3 without visibility into the virtual machine appliances, their configuration, or the way they scale. Amazon handles these functions, and the user accepts the composite storage platform’s availability and access guarantees. Conversely, with EC2, these functions are the user’s responsibility: determining the middleware installed to create the virtual appliance, adding the provisioning scripts to configure the deployed appliances, and then implementing the algorithms to scale. In exchange for these efforts, the user controls the implementation, the configuration, and the SLAs. This comparison of Amazon services demonstrates the need for a concrete description of cloud and the different architectural components of cloud.
As such, I define a Cloud Reference Model that brings order to this cloud landscape. Like the OSI Model for networks, this Cloud Model is layered to separate concerns and abstract details. The Cloud Model divides cloud-based application architecture into seven layers: Application, Transformation, Control, Instantiation, Appliance, Virtual, and Physical. Each layer focuses IT functionality on supporting a specific area of concern.
Then application architecture design becomes an exercise in determining the necessary functionality at each layer—fulfilling functional, security, and reliability requirements is decoupled. For example, security encompasses solutions ranging from access control, to encryption, to network protections, to physical safeguards. The Model decouples these decisions into what needs to be placed at each layer (Physical, Virtual, Appliance, etc.). Then fulfilling these needs is an exercise in mapping vendor offerings and do-it-yourself responsibilities.
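One way to make the decoupling concrete is to walk a single concern down the stack, layer by layer. The seven layer names below are the ones defined above; the per-layer security controls are my own illustrative examples, not part of the Model's definition:

```python
# The seven layers of the Cloud Reference Model, top to bottom.
LAYERS = ["Application", "Transformation", "Control",
          "Instantiation", "Appliance", "Virtual", "Physical"]

# Illustrative (hypothetical) mapping of one concern -- security --
# onto a decision at each layer.
SECURITY_DECISIONS = {
    "Application":    "user authentication and access control",
    "Transformation": "encryption of data moving between services",
    "Control":        "audit logging of provisioning decisions",
    "Instantiation":  "hardened provisioning scripts and images",
    "Appliance":      "patched middleware inside the virtual appliance",
    "Virtual":        "hypervisor isolation between tenants",
    "Physical":       "data-center access controls",
}

def plan(decisions):
    # Walk the layers in order and report the decision owed at each one.
    return [f"{layer}: {decisions[layer]}" for layer in LAYERS]

for line in plan(SECURITY_DECISIONS):
    print(line)
```

The same walk could be repeated for a reliability or functional concern; the point of the Model is that each row can then be filled by a vendor offering or a do-it-yourself component independently of the others.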
Stay tuned for a detailed example illustrating the components of this model.
*image courtesy of Noel Coates.
A cloud is a large distributed system, whose design requires trade-offs among competing goals. Notably, the CAP theorem, conjectured by Eric Brewer -- a professor at UC Berkeley and the founder of Inktomi -- at a PODC keynote talk, governs the trade-off. The CAP theorem states that a system can only achieve two out of three desirable properties: Consistency, Availability and Partition tolerance. Since distributed systems, by definition, use a cluster of machines, and since the network connecting them could fail, these systems have to tolerate network partitions. So, in reality, the trade-off is often between Availability and Consistency.
However, such a decision is not necessarily good for enterprise applications -- a big target audience of the Amazon Web Services offerings. Most enterprise applications require the data to be correct. It does not matter that the system is available: if the result is wrong, you cannot make progress in your application. As a concrete example, we recently worked on a project at Labs, called Cloud MapReduce, which implements Google's MapReduce programming model on the Amazon cloud. We saw many manifestations of eventual consistency at work, which created many problems for our implementation. Cloud MapReduce cannot progress correctly when it reads wrong results from the cloud; instead, it has to spend a lot of effort detecting and correcting the wrong results when consistency problems arise.
So, the natural question is: if you are not running an e-commerce business, why not choose an eventually available system over an eventually consistent one? "Eventually available" is a term I coined. It describes a system design that guarantees strong consistency while trading off availability. The downside of such a system is that it may be unavailable for brief periods of time, but it guarantees that the system will eventually become available.
Why choose eventually available? The key reason is that it is much easier to deal with than an eventually consistent system. In Cloud MapReduce, we had to invent all sorts of techniques to detect and get around consistency problems, which was not easy. In an eventually available system, by contrast, it is very easy both to detect (check the error code) and to get around (just retry) periods when the system is unavailable.
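To see why, compare the client code the two models demand. Against an eventually available store, the entire handling strategy is a generic retry loop with backoff; nothing about it is specific to your application. A sketch of that strategy, using a toy in-memory stand-in for the store (not the Amazon API itself):

```python
import time

class EventuallyAvailableStore:
    """Toy store: strongly consistent, but unavailable for the
    first few requests after a write (simulated)."""
    def __init__(self, failures_before_success=2):
        self.value = None
        self.failures_left = 0
        self.failures_before_success = failures_before_success

    def put(self, value):
        self.value = value
        self.failures_left = self.failures_before_success

    def get(self):
        if self.failures_left > 0:
            self.failures_left -= 1
            raise IOError("service unavailable, try again")
        return self.value  # always the latest write, never a stale one

def get_with_retry(store, retries=10, backoff=0.01):
    # The whole client-side strategy: detect (exception) and retry.
    for attempt in range(retries):
        try:
            return store.get()
        except IOError:
            time.sleep(backoff * (2 ** attempt))  # exponential backoff
    raise IOError("still unavailable after retries")

store = EventuallyAvailableStore()
store.put("reduce output, partition 7")
print(get_with_retry(store))
```

Compare this with the eventually consistent case, where a read can silently return a stale value with no error code to check, so the detection logic has to be invented per application.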
Fortunately, the Amazon cloud is moving towards offering eventual availability (even though they do not call it that yet) as an option, to be enterprise-application friendly. A couple of weeks ago, Amazon announced Consistent Read and Conditional Put & Delete features for SimpleDB (see Werner's and Jeff Barr's posts). Both of these new features guarantee strong consistency. We have done an extensive performance study on the cost of eventual consistency, and surprisingly, our study found that strong consistency introduces no additional overhead, in terms of either latency or throughput. In addition, during three days of testing, we were not able to observe any system unavailability. Maybe eventually available is even good enough for an e-commerce application?
The Times reported last week that Google is working with Intel and Sony on some future (perhaps near-future?) system for "bringing the Web into the living room". In case you hadn't noticed any similar headlines over the last fifteen years, I can assure you that this is a technology frontier that has been explored before. A final frontier, if you will...or at least a final resting place for many technology investment dollars. (It seems like just yesterday that Microsoft bought WebTV for $500 million, but actually it was 1997.)
It's fascinating how certain technology nuts prove to be so hard to crack, while others, which look forbidding at the outset, turn out not to be so tough after all. Imagine going back in time to 1997 and asking a well-informed technologist which was more likely to be considered a solved problem in the year 2010: searching the web, or surfing the web with a TV.
If your trapped-in-amber nerd knew anything at all about the web -- had the least inkling of how it was growing, and how it might grow -- they'd wonder why you were even asking the question. Search is a hard problem. Sifting through dozens of petabytes of unsorted, unstructured, often-updated data and returning an answer in less than a second...? That sounds like the labor of a generation of computer scientists, a labor that seemed, in 1997, to have just begun.
Putting the web on TV, on the other hand, is just a matter of making the necessary hardware cheap enough that people will buy it. Right?
If you told this historical nerd that in 2010 most people get a good result from their search engine, most of the time, in the first 10 or even 5 (!) results, they might believe you. Maybe. But if you told him/her that in 2010 most people have never even looked at a web page on their TV, you might lose all credibility, and be sent packing, back to the future.
I won't prognosticate about what form web + TV will take when it does roll around. But I am interested in just why it is that the living room is such a tricky place for the web to go. Various reasons suggest themselves. I'll get into those in another post.
In many circles, it’s becoming accepted wisdom that small- and medium-size businesses will be strong adopters of cloud computing (or SaaS, which in this case generally amounts to the same thing). According to a recent study from Microsoft, 65% of SMBs use at least some form of hosted software already. Of those who are still on the sidelines, about three-quarters of them have at least considered SaaS. Based on numbers like these, as well as anecdotal evidence, startups and SMBs do indeed seem to be leading the charge into the cloud.
What is the affinity between small business and cloud computing? It boils down to three factors:
- Being up and running. Cloud computing providers may not have 100% uptime, but they probably achieve better reliability than small shops can manage on their own.
- Being nimble. Small businesses need to be able to turn on a dime (and then pick up the dime). Example: Flickr, the photo sharing site, actually started as an attempt to build a content support system for video game development. Managing pictures was just one piece of what they were trying to do – but it turned out to be the important piece. Forecasts are generally wrong, so rather than tie up capital based on forecasts, with cloud you buy for what’s actually happening.
- Being timely. One of the most precious commodities for a small company is the time of its employees. Time not spent choosing, buying, configuring, and babying technology is time that can be devoted to building value.
For reasons like these, most small businesses are already used to buying hosted software. Almost no small business would run their own email server, for example: email-as-a-service has been a mature category for some time. What’s changing now is that the as-a-Service model is making headway in software markets that have historically been conservative. In every category you might think of -- from patient outreach for doctors’ offices, to helping college athletics scouts manage their recruiting process -- new SaaS applications are springing up to take the place of client-server systems. And there’s a national market for these kinds of products, where before many business-software verticals were regional and fragmented. So competition is increasing, and the products are improving fast.
Some larger enterprises who have no plans to hurry into cloud computing may say that what happens in the SMB market is all well and good, but what does it matter for them? And indeed, sometimes IT practices vary from sector to sector, and remain that way for a long time. Look at the late 1990s, when the educational market was almost universally Mac OS-based, while business was equivalently dominated by the Wintel platform. They existed as parallel worlds.
But the relationship between the small and large business sectors is a bit more interesting, and there’s more need for technology transfer back and forth. Firstly, small business is actually fairly large: SMBs employ around 40% of the workforce in the United States. So if this sector goes one way, nearly half the workforce is along for the ride. Secondly, hiring can happen in either direction, from small to large and from large to small, senior or junior people. So skill diffusion is a bigger factor. Thirdly, there’s another kind of pipeline into large companies: acquisition. Here, the acquirer has to deal very directly with the data and applications of the acquired – even if they don’t match “how we do things around here”. And fourthly, small and large businesses often are one another’s customers.
All this means that before too long, even the most cloud-proof enterprises will probably be home to people with cloud experience and cloud skills, people who are accustomed to buying software on a SaaS basis. When these folks do a skunkworks project or a proof-of-concept, make a purchasing decision, or reach for a tool to fill a gap in a business process, expect them to reach for the technologies they know and like.
And remember how easy this is to do. It just takes a web browser and a credit card. It’s not a Mac vs. Windows or Sun vs. IBM situation, where some upfront capital investment locks the business into one kind of platform. The “platform” (the network pipe) is already in place. Beyond actually site-blocking SaaS vendors (which could quickly become untenable), it will be increasingly difficult to lock an enterprise into one particular set of IT choices.
In some enterprises, the cloud may be part of a broad, top-down IT initiative. But in other cases, in other enterprises, cloud adoption may grow simply by starting small.
The two trends I’ve been talking about in my last couple of posts (1st, 2nd) are growth in the network and growing strength in the endpoints. These trends mutually reinforce. If more of people's valued information -- their 'stuff,’ as George Carlin would have it -- is in the network, they’ll want to be able to get to it, using whatever device is at hand. This helps drive demand for devices. Conversely, if people carry around, and are surrounded by, devices which can let them get at (and have a good experience of) their stuff, then they feel that much more comfortable leaving their stuff in the network. This helps drive demand for network services.
With papyrus scrolls, compasses, sundials, record players, GPS units, paintings, and a plethora of other informational devices throughout history, some special object has provided us privileged access to some special information: symbols, directions, time, music, location, imagery. The device is the data.
Gutenberg helped humanity learn to make copies of (at least some kinds of) data. And naturally, this changed everything. The book is a powerful device. But even with many printed copies of a given book in existence, you still must get physical access to one of those copies to have an experience of the content inside. Perhaps you can memorize some of a book, but probably not all of it. To have an experience of “War and Peace,” you probably need to have the physical object. And it's precisely this tight relationship – between the object and the data it conveys – that's now starting to fade.
If access to information and the experience of that information used to be the same thing, now access and experience are being disaggregated. We are entering a world where any device can deliver any content: the world where devices are not data, they are doorways.
When a new kind of endpoint device is invented, users begin to consume (and produce) in unanticipated ways. One example from the headlines is Pandora. In 2008, after almost a decade in the business of streaming music to desktop and laptop users, Pandora released an iPhone app. It wasn't a radical change in the Pandora product; it simply meant that people could now stream music from Pandora directly to their phones, over the Internet. But the company’s growth rate doubled at that point -- almost that same week -- and it hasn't slowed down since. Being able to access the service in a new way changed how people use Pandora: when they listen, and perhaps what ads and what music they respond to. Almost overnight, mobile listening became a new and important use case for the Pandora service. Now, building on this success, Pandora is expanding its mobile footprint to include, among other things, embedding the Pandora service in car audio systems.
So that's just one example. Got any others in mind? Please leave a comment!
I'd like to spell out in a bit more depth some of our thinking about devices. We talk in our intro to the Devices topic about a historical shift from a paradigm of device-as-data into a new paradigm of device-as-doorway. To make the case for this shift, we point to two main trends: growth in the network, and growing strength in the endpoints.
In this post, I’ll start explaining all this by talking more about just the network trend.
We all know that the amount of content that is available on the web — from the History Channel to individual medical histories — keeps growing exponentially. Not just public archives and entertainment, but also personal, governmental, and corporate content is migrating online. One medium after another will start to use the cloud as its default location: music, documents, pictures, slide presentations, academic papers, video, and on and on. (None of these shifts, of course, is complete yet. Music is probably closest to a tipping point, though -- see recent news about Apple's acquisition of cloud-based music service LaLa.)
While the bulk of content stored in the network grows, the number of network end-nodes – devices that can connect – is growing faster than ever before. Sometime in 2010, there will be 5 billion mobile subscriptions out of a world population of 7 billion. This means there are already more people with mobile phones than running water. (Indoor plumbing has a worldwide adoption rate of 55-60%.) Within the next three years, more people will have mobile phones than have electricity (and yes, charging these phones is an issue).
The web we have now has about 1.6 billion devices connected to it, serving about 1.7 billion users. The PC revolution has come a long way. But it’s important to realize that the PC is not actually going to be the device that first introduces most human beings to the Internet. The device that brings the next two to three billion users to the network will probably be the web-enabled phone.
By 2011, more smartphones will be sold each year than laptops. And two years later, according to Gartner, the installed base of web-ready phones will reach 1.8 billion units, permanently surpassing the installed base of PCs. Let's repeat that: the PC will permanently become a minority participant in the web, starting in 2013.
People who talk about how the ‘mobile web’ is going to be a big deal? This is what they’re talking about. The web we know is going to get a lot more content-heavy, a lot more populated, and lot more mobile in a very short period of time.