I'm not obsessed with Web + TV, but here I am posting again about it. Mostly because I’m interested in where technology is going, and part of the question of where it’s going is a question of media. Whether the iPad succeeds as a technology platform, for example, is largely dependent on whether it succeeds as a media platform: a venue for books, for comic books, for cookbooks, for movies, for TV shows. Establishing a new technology platform, now, is largely a matter of establishing a new channel, with all that entails.
Now one claim that I made in a previous post was that
the video web and the written web are still basically two separate media, begetting totally different user experiences, different hardware setups, different patterns of consumption, even different rooms of the house. They’re hot and cool media, respectively, just like Marshall McLuhan said. And what it would take to unite them in some new kind of fusion medium – well, that we have not seen yet.
The distinction between lean-back experiences and lean-forward experiences is probably a cliché at this point, but it’s also an accurate reflection of a real chasm. In one kind of media world, you take in a story; in another, you reach out to type, click, scroll, parse. They are usually separate worlds.
One more bit of evidence for this chasm: "cut-scenes" in video games. Cut-scenes are those expensive-looking video segments that are often screened between game levels, to help establish the story or the mood. But it’s a strange idea, if you think about it: so much energy is devoted to “enhancing” a (highly interactive) game with elements that are so completely. . . non-interactive. It’s just a video. When the cut-scene begins, there’s nothing for game players to do but drop their game controllers slack in their laps and sit back to watch TV for a few minutes. Interaction ceases; perhaps somebody gets up to go to the bathroom. This is a 'game'?
Steven Spielberg, who I was surprised to learn is a serious player (and occasionally a creator) of video games, makes a similar point:
You know the thing that doesn't work for me in these games are the little movies where they attempt to tell a story in between the playable levels. That's where there hasn't been a synergy between storytelling and gaming.
However, I did recently encounter a Nielsen study on media consumption, which paints a slightly different picture. It’s helped me understand how this chasm – between storytelling and gaming, between the flow of video and the interactivity of the web – can sometimes be bridged. Because it turns out that people are, in fact, discovering their own new ways to interact and consume at the same time.
In the last quarter of 2009, simultaneous use of the Internet while watching TV reached three and a half hours a month, up 35% from the previous quarter. Nearly 60% of TV viewers now use the Internet once a month while also watching TV.
Now, is this just multi-tasking in the living room? At least partially, it is. People do unrelated activities in front of the television all the time. That’s not interesting.
What’s interesting is that it’s not all multitasking – and the Twitter traffic says so. Online chatter about American Idol, for example, is substantial, but what’s even bigger is chatter traffic about American Idol while American Idol is on. That is, people are turning to the web – Twitter, Facebook, chat rooms, discussion boards -- to talk about the media experience they’re having . . . even as they’re having it. They’re using the web as one big backchannel.
So despite what Spielberg and I both feel is a lack of “synergy” between lean-back and lean-forward experiences, this is one kind of real synergy. You lean back and enjoy the show – you lean forward to give your two cents about Ellen DeGeneres’ new haircut. The video tells the story, while the layer of commentary – the social layer – reflects a shared experience of that story.
I’m certainly not the first one to have noticed this phenomenon. There’s already a new breed of mobile apps like tvChatter, designed expressly for commenting and chatting about live media as it’s happening. And there’s research interest, too, in what this kind of live commentary tells us about the live event – and about the audience. How was a televised debate received, blow by blow? How does an audience connect with certain scenes, certain characters, certain themes?
Call it the Mystery Science Theater effect, in honor of the campy TV show where robot puppets, silhouetted against the screen of a bad movie, call out snarky comments. Just substitute tweets into the speech balloons.
Cloud computing promises many advantages: on-demand pricing, less IT overhead, lower cost through economies of scale, a lower barrier to entry into new territories, and so forth. All of these are definite nice-to-haves, but is this just a minor chapter in the IT saga or a proverbial paradigm shift? In other words, is this just a passing cloud or a rainmaker?
If I wind my mental clock forward 3-5 years, I see three radical changes that cloud computing could bring.
Prognosis 1: Cloud computing will lead to a dramatic increase in cross-company business processes that will dwarf today’s “business ecosystems”.
Prognosis 2: Cloud computing enables an “exoskeleton” model (as opposed to today’s “endoskeleton” model) for corporate computing. This will open up new white spaces for IT services in many large but fragmented industries such as construction, education, healthcare etc.
Prognosis 3: Cloud computing will give rise to what could be called business process “utilities” – i.e., companies that provide simple and common business processes (e.g., sales tax calculation and remission) but at such a massive scale that they’ll dwarf today’s SaaS companies.
There’s a lot of wealth to be created. But there are also lots of technical problems to be solved. The first three parts of this series will examine each of the three prognoses above. The fourth will outline the set of technical problems that need to be solved in order for these prognoses to come true.
Prognosis 1: Cloud computing will lead to a dramatic increase in cross-company business processes that will dwarf today’s “business ecosystems”.
The moment a company’s IT systems migrate outside the firewall, they can much more easily communicate and exchange information with the IT systems of other companies to execute business processes that cross company boundaries.
To be sure, cross-company processes are not new. In the ’80s, EDI (electronic data interchange) let companies exchange information across a supply chain within different “business ecosystems” (most notably, within the automobile industry). The travel industry has integrated systems across airlines, car rental companies and hotels into business ecosystems (e.g., the oneworld and Star alliances) to offer passengers a seamless travel experience. However, today such processes are handcrafted and hardwired across systems from a small number of business partners, or orchestrated by third-party “clearing houses.”
Cloud computing in combination with integration standards like web services and REST has the potential to create cross-enterprise processes at an industrial scale: complex, yet flexible business processes that snake through multiple companies that are part of fluid and ever-changing business ecosystems. One may very well ask: “even if this is technologically possible, what is the business driver for it?”
Practically any human experience you can think of – whether it’s a vacation, a stint at the hospital, or just living your average humdrum day – involves products and services provided by multiple companies. Today, companies provide discrete products and services that we, as individuals, manage and orchestrate. The ability to flexibly weave together a business process with services from multiple companies around an individual and his or her life seems like a strong driver in the business-to-consumer world.
Much as an individual’s life involves touch points with multiple products and services, almost every process in an organization also involves interactions with multiple business partners. Today, each business partner sells a discrete product or provides a discrete service, and organizations manage and orchestrate these internally into business processes. Cloud computing makes it considerably easier for companies to weave internal components and many external components into complex yet fluid processes shaped around their business needs. This seems like a strong driver in the business-to-business world.
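To make the mechanics a bit more concrete, here is a minimal, hypothetical sketch of such a cross-company process composed from REST services. Every endpoint, payload, and field name below is invented for illustration; no real partner API is implied.

```python
# Hypothetical sketch: composing a cross-company travel process from
# REST services exposed by two partner companies. All endpoints and
# payloads are invented for illustration.
import requests

def book_trip(traveler, flight_segments):
    # Step 1: book the flight with a (hypothetical) airline partner.
    flight = requests.post(
        "https://api.example-airline.com/v1/bookings",
        json={"traveler": traveler, "segments": flight_segments},
    ).json()

    # Step 2: hand the confirmed arrival time to a (hypothetical) hotel
    # partner, so the reservation lines up with the flight automatically.
    hotel = requests.post(
        "https://api.example-hotel.com/v1/reservations",
        json={"guest": traveler, "check_in": flight["arrival_time"]},
    ).json()

    # Step 3: an internal system records the composite process.
    return {"flight": flight["id"], "hotel": hotel["id"]}
```

The point is less the code than the shape: once each partner exposes its service outside the firewall, the orchestration collapses into a few calls that any participant in the ecosystem can rearrange.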
To be continued.
Hi, I’m a first-year analyst at Accenture Technology Labs (ATL) in Silicon Valley. Many people might be wondering what life is like as an analyst in ATL. What is the average week or day like? What about the average project? I can tell you right now that the only norm is that there is none. You have to expect the unexpected, and your project work varies nearly every day.
I’m currently finishing up a project where I’m helping a company develop a mobile app. The client is across the country on the East Coast, so I have the option of traveling every other week. This is nice, considering the commute to the client is five hours there and six hours back.
When I'm at home in California, I usually get up around 7:30 A.M., get ready, check the day's news headlines, have a quick breakfast, and then walk to work. I like to get there quickly and hit the ground running. I don't work with anyone offshore, so I don't get many emails overnight. I am the only person on my project working at my home site, so all of my meetings are phone calls or web conferences. I have three status meetings a week with either my client or my Accenture manager, and that is mostly it. The rest of my time is spent working on the project independently.
On my own, I could be doing one of many things: design work, presentations, spreadsheets, coding, testing, planning for the next release, and so on. I try to keep my day varied, so I'll usually spend two to three hours on a specific task and then switch to another one, since there is always a variety of things to do. I guess that is one of the advantages of being on a small project: I get to see the project from beginning to end and am involved in almost every part of it.
When I'm in Virginia, I usually fly out Sunday afternoon and return Thursday or Friday evening. On each flight, I can average three to four hours of work, usually coding or documentation. Weekdays at the client site always seem busier and more hectic than whenever I am at my home office, perhaps because I simply have more client meetings that take up my time. Because I'm working independently every other week, face time is a key factor while I am at the client, and everyone will tell you: building a strong professional relationship with the client really goes a long way.
Monday and Tuesday afternoons are usually booked with meetings regarding status, workplans, design, and defect handling. I usually get off around 6:00 or 6:30 in the evening. I'm not done working then, I just take a dinner break. Two or three nights of the week, I have some extra work to catch up on at the hotel after dinner. It's mostly coding or design work that I couldn't get done due to daytime meetings. At the end of the week, I fly back to California to enjoy a relaxing weekend at home.
That's it! I'm transitioning next week to another project, this one being internal. After six months at the client, all I can say is that it was an awesome experience. I learned a fantastic array of new skills and built a strong relationship with a great client. I can't wait to see what my next project has in store.
It turns out that by far the biggest cloud computing systems in operation are . . . botnets! Beyond Google, Amazon, Microsoft, and Yahoo, the biggest cloud on the planet is controlled by the Conficker computer worm. But instead of living in a data center, this cloud is made up of ordinary end-user machines – maybe yours! Maybe mine.
Conficker controls 6.4 million computer systems in 230 countries, more than 18 million CPUs and 28 terabits per second of bandwidth, said Rodney Joffe, senior vice president and senior technologist at the infrastructure services firm Neustar.
[It] is controlled by a vast criminal enterprise that uses that botnet to send spam, hack computers, spread malware and steal personal information and money. . .
Like legitimate cloud vendors, Conficker is available for rent and is just about anywhere in the world a user would want their cloud to be based. Users can choose the amount of bandwidth they want, the kind of operating system they want to use and more. Customers have a variety of options for what services to put in the Conficker cloud, be it a denial-of-service attack, spam distribution or data exfiltration.
Botnet software resembles a virus in the way it silently infiltrates a user’s machine through some software flaw, or some innocent action on the user’s part. But unlike a virus, which is usually simply destructive, a botnet has a useful goal in mind – useful to someone else, that is.
I find botnets scary, but also fascinating. How did we get here? How did we get to the point where the French Navy had to order staff not even to open their own computers?
Not very surprisingly, some of the biggest, juiciest targets for botnets are large organizations with fleets of machines. These fleets can present a soft target for a number of reasons, but one of these has to do with organizational attitudes toward computers. I’m talking about the tendency to treat computers like hardware assets – physical things in space – when computers are really organisms in an ecosystem.
Companies know how to take care of machines, with regular (but hopefully minimal) maintenance. Machines operate in the physical world, subject to regular forces like friction and load. The same kind of care and feeding will work for typewriters, sedans, and even printers.
But software is a different kind of investment. It lives in a parallel universe, subject to frictions and loads that are ill-understood. Its environment is always changing – it’s an unstable ecosystem. And it’s an ecosystem that has predators.
Organizations are used to taking care of printers -- not prey.
New predators can emerge day to day; but many organizations with locked software images are on a 2-, 3-, perhaps 4-year "upgrade cycle". In the gap between 2 days and 2 years, the botnet thrives.
In a funny way, a software investment doesn’t just depreciate over time, like most assets. If you leave it alone for long enough, it can gain the potential for active destruction of value. Look again at that occasionally-handy Windows 2000 machine under Fred’s desk (which he leaves running 24x7, of course). As each day goes by, the odds improve, just a little bit, that this machine has sent spam; has stored illegal information; has helped crack a password; has leaked your own company’s information far and wide.
At what point is a machine’s ability to do useful work outweighed by its potential for doing anti-work?
(Looked at this way, even donations of old computer systems could have a negative value (to the world) that is greater than the positive value of the gift. If a company gives all of its Windows 2000 machines to a school in Peru, then the school might gain a new computer lab, but the Conficker cloud might have just gained a new node which will last for many years.)
So what's the solution?
Fundamentally, these kinds of botnets exist not because organizations (and users) can’t keep their systems up to date, but because end-user system software is simply designed wrong. Or rather, it was designed for a different environment. It was effective in its original environment -- but now it's prey.
Vendors can keep issuing patches; organizations and individuals can apply them; but the botnet authors can, and do, adapt to the patches. There's a reason they're called "patches" and not "fixes".
What would a fundamentally redesigned end-user OS -- designed for today's (and tomorrow's) software ecosystem -- look like? It would most likely involve a top-to-bottom rewrite, from the kernel on up, with predators in mind. Every system design choice would have to be made with an eye to empowering the user but balancing the user’s privileges against threats from outside. System updates, for example, should be seamless, secure, immediate, and not optional. The system itself should be capable of certifying its own secure state.
There are lots of different ways that a botnet-proof system might look -- here is one of them.
Last time I defined a Cloud Reference Model to bring concreteness to cloud-based application architecture. Here I provide an example that illustrates the components of this 7-layer stack. Consider media transcoding, where users input data files in one format and the application outputs the files in a different format.
Here the Application Layer contains the program that transcodes each file, along with the end-user interface that accepts commands and presents the data files. The code interfaces with APIs from the Transformation Layer with minimal concern for the underlying computing platform.
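As a minimal sketch of what that separation might look like in code: the helpers get_file, put_file and log_status are hypothetical stand-ins for the Transformation Layer’s generic APIs, stubbed out here only so the example is self-contained (ffmpeg is assumed to be installed).

```python
# Minimal sketch of Application Layer code for the transcoding example.
# get_file, put_file and log_status are hypothetical Transformation
# Layer APIs; the stubs below exist only to keep the sketch runnable.
import shutil
import subprocess

def get_file(key):            # stub: the real layer would fetch from S3
    return f"/tmp/{key}.in"

def put_file(key, path):      # stub: the real layer would store to S3
    shutil.copy(path, f"/tmp/{key}.out")

def log_status(job_id, msg):  # stub: the real layer would post to SQS
    print(job_id, msg)

def transcode(job_id, source_key, target_key):
    src = get_file(source_key)   # where the bytes live is hidden from us
    dst = "/tmp/output.mp4"
    # The actual work: transcode with ffmpeg. The application neither
    # knows nor cares which cloud platform sits underneath.
    subprocess.run(["ffmpeg", "-i", src, dst], check=True)
    put_file(target_key, dst)    # store the result
    log_status(job_id, "done")   # report progress via the queue
```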
Next, the Transformation Layer adapts the program code and collected data to suit the platform. Where Amazon provides the platform for storage and message queuing, these transformations map the Application Layer’s generic “put” and “get” instructions onto the underlying Amazon interfaces: the code calls the SQS APIs to log status and the S3 APIs to access the files, and the data is packaged for storage in S3. For example, the Transformation Layer might break a large data object into smaller pieces for storage in the cloud and, upon retrieval, recombine and check the object. Moving from Amazon to another platform then requires updating only the “put” and “get” implementations.
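Here is a minimal sketch of that chunking concern, assuming Amazon’s boto3 SDK. The bucket layout and the 5 MB chunk size are arbitrary choices for illustration; real code would add error handling and use multipart uploads.

```python
# Sketch of a Transformation Layer concern: split a large object into
# chunks for S3 storage, then recombine and verify it on retrieval.
import hashlib
import boto3

s3 = boto3.client("s3")
CHUNK = 5 * 1024 * 1024  # split large objects into 5 MB pieces

def put(bucket, key, data: bytes):
    # Store the object as numbered parts plus a checksum manifest.
    for i in range(0, len(data), CHUNK):
        s3.put_object(Bucket=bucket, Key=f"{key}/part-{i // CHUNK:05d}",
                      Body=data[i:i + CHUNK])
    s3.put_object(Bucket=bucket, Key=f"{key}/sha256",
                  Body=hashlib.sha256(data).hexdigest().encode())

def get(bucket, key) -> bytes:
    # Recombine the parts in order, then verify against the checksum.
    parts = s3.list_objects_v2(Bucket=bucket, Prefix=f"{key}/part-")
    data = b"".join(
        s3.get_object(Bucket=bucket, Key=p["Key"])["Body"].read()
        for p in sorted(parts["Contents"], key=lambda p: p["Key"]))
    stored = s3.get_object(Bucket=bucket, Key=f"{key}/sha256")["Body"].read()
    if hashlib.sha256(data).hexdigest().encode() != stored:
        raise IOError("object failed checksum after reassembly")
    return data
```

The Application Layer never sees any of this; swapping Amazon for another provider means rewriting only these two functions.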
At this point, let’s assume that the lower-layer implementations of the message queue and storage are handled by Amazon. In return, we accept Amazon’s guarantees for this functionality: 99.9% availability and no latency guarantees. What remains our responsibility is to complete the platform to handle the processing.
Within the Control Layer, controller logic determines the number and type of appliances needed to meet the desired guarantees: for example, it might determine that five large instances are required to meet a time constraint of 24 hours, and that they should be located in Amazon’s west coast region to be physically close to the data stored in S3. With the help of constant monitoring from the Instantiation Layer below, the logic scales the number of appliances to meet deadlines. By implementing this logic ourselves, we control the placement of appliances, the time constraint, and the availability.
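A minimal sketch of that sizing decision follows; the file count and per-file estimate are invented, chosen so the arithmetic reproduces the five-instance example above.

```python
# Sketch of Control Layer sizing logic: how many worker appliances are
# needed to finish all files within the deadline?
import math

def instances_needed(num_files, minutes_per_file, deadline_hours=24):
    # Total work divided by what one appliance can do before the deadline.
    total_minutes = num_files * minutes_per_file
    return math.ceil(total_minutes / (deadline_hours * 60))

# e.g., 240 files at ~30 minutes each is 7,200 minutes of work; one
# instance has 1,440 minutes before a 24-hour deadline, so:
print(instances_needed(240, 30))  # -> 5 large instances
```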
Based on the determination of the Control Layer, the Instantiation Layer runs scripts to provision the appliances. For Amazon’s EC2, these provisioning scripts supply Amazon credentials, apply configuration, load the content, and perform error handling if a returned appliance is faulty (e.g., memory errors).
Responsibility for creating the EC2 machine images lies with the Appliance Layer. In this case, we create a job-scheduler appliance by installing the BOINC control software that choreographs and distributes the files, and a worker appliance using the BOINC client software that runs the transcoding program.
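Here is a rough sketch of how these two layers might meet in code, again assuming boto3. The AMI id, scheduler URL, and account key are placeholders, and a real provisioning script would also poll instance health and replace faulty appliances, as described above.

```python
# Sketch of the Instantiation Layer provisioning BOINC worker
# appliances on EC2. All identifiers below are placeholders.
import boto3

ec2 = boto3.resource("ec2", region_name="us-west-1")  # near the S3 data

# Appliance Layer: user-data that turns a stock image into a BOINC
# worker appliance at boot. The project URL and key are placeholders.
USER_DATA = """#!/bin/bash
apt-get update && apt-get install -y boinc-client
boinccmd --project_attach http://scheduler.example.internal/ WORKER_KEY
"""

def provision_workers(count):
    # Credentials come from the boto3 environment; configuration and
    # content loading happen via the user-data script above.
    return ec2.create_instances(
        ImageId="ami-00000000",    # placeholder appliance image
        InstanceType="m1.large",   # the "large" type sized by the Control Layer
        MinCount=count, MaxCount=count,
        UserData=USER_DATA,
    )
```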
Finally, within Amazon, Xen virtualization software carves the virtual machines out of the physical resources at the Virtualization Layer. And the Physical Layer deals with data center concerns: supplying the power, cooling, and compute hardware that support the virtual machines.
At each layer, the IT functions focus on one level of abstraction with minimal concern for other details. Combining the solutions forms the overall cloud architecture. Stay tuned next time for an overdue definition of cloud and how to use the model.
How can a company take advantage of these fundamental changes in communication? Here are some relatively simple things to keep in mind, along with actions you can take immediately.
1. Not only are your employees, customers and competitors all talking and talking openly—so are your competitors’ employees and customers, and customers you never knew you had. Technology exists to process unstructured text for you to listen to them all—and listen at an industrial scale—enabling you to gauge your customers’ latent needs, how they feel about your products and brand, and what your competitors are up to. There’s no reason to spend money on surveys anymore: Everything you want to know—and more—is out there in the open for you to harvest. With technology from vendors such as Sentimine and The Nielsen Co., companies ranging from carmakers to agribusinesses are monitoring and measuring customer sentiment over the Web. (A minimal code sketch of this kind of sentiment monitoring appears after this list.)
2. Embrace video as a communication medium. If you are like most companies, you produce tons of written instruction manuals in multiple languages and package them with your products. Very few of your customers read them, in any language. With video, you can show them rather than tell them. The Home Depot, Dell, Best Buy and many others are increasingly turning to video as a means of customer support.
3. Create an online community of people who use, like, love—and, don’t forget, hate—your products. Facilitate but don’t dominate the conversation. Face it: Whatever your product or service may be, it’s just a small part of your customers’ lives. By creating as much conversation as you can among your customers about as many subjects as possible, you’ll ensure that your products become legitimate topics of conversation from time to time. This kind of publicity is more genuine and credible than any ad campaign you can run.
4. Caution: A good, active community is a double-edged sword. On the one hand, your community members are your ambassadors. On the other, your community effectively offers your customers to your competitors on a platter, as well as providing a forum where your disgruntled customers can complain. The bigger your community, the more vulnerable you are. Which is why you need to create and support brand ambassadors.
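As promised under point 1, here is a deliberately tiny sketch of lexicon-based sentiment monitoring. Commercial tools are far more sophisticated; the word lists and sample posts below are invented for illustration.

```python
# Score each post by counting positive and negative words; a positive
# total suggests a favorable mention. Lexicons and posts are invented.
import re

POSITIVE = {"love", "great", "reliable", "recommend"}
NEGATIVE = {"hate", "broken", "awful", "refund"}

def score(post: str) -> int:
    words = re.findall(r"[a-z']+", post.lower())
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

posts = [
    "I love my new hatchback, great mileage",   # scores +2
    "Dealer service was awful, I want a refund",  # scores -2
]
for p in posts:
    print(score(p), p)
```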
*Kishore Swaminathan is the chief scientist for Accenture.
Remember how simple it was to communicate with your customers in the good old days? You initiated and managed specific conversations about your company and your products. For example, you could buy a couple of 10-second spots during the Super Bowl. Or you could entice customers to participate in focus groups and fill out surveys. Whatever the medium or venue, you ensured that in any conversation about you, you were the subject as well as the object.

Things have changed. Today, your customers are having various kinds of conversations among themselves out in the open. They write blogs; they argue with each other on bulletin boards; they tweet on Twitter about whatever; they post videos of their kids online; they talk about their lives and feelings in social networks. And they post reviews and commentaries about you. In other words, you no longer manage the conversations among your customers in which your company or products may be mentioned, often in unpredictable contexts.

What’s a brand-conscious company to do? Just shut up and listen? No. But in light of this significant social change, companies need to recalibrate the way they see their marketing, branding and corporate communications.
Successful technologies often create discontinuities between the past and the present that go well beyond the technologies themselves—discontinuities in individual and social behavior that alter some aspect of society forever. Here are some examples of how recent advances in information technology are leading to important changes in how we communicate and consume information.
From “need to know” to “good to know”:
Quaint as it may seem, communication used to have a purpose—typically, to convey information that I needed to tell and you needed to know. No longer. Technology now gives me an extraordinary ability to talk a lot about nothing to no one in particular for no reason.
As a result, communication today is less a matter of my decision about what to tell whom than it is about your choice of what to pay attention to from whom.
From “tell me” to “show me”:
Cheap digital and cell phone cameras that can shoot video plus free distribution media such as YouTube are fueling an explosion in video communication. Want to know how to fix your plumbing? Or learn how to make a Mexican tamale or play the guitar? Perhaps you’d like to see how arthroscopic surgery is done?
Today, on the Web, you can find video clips—by amateurs as well as professionals—on almost any subject.
From “talk at you” to “talk with you”:
Not long ago, when producing and distributing information was relatively expensive, organizations ranging from companies to hospitals to governments controlled, and were at the center of, the communication with their public. By necessity, they talked at you. Today, two-way dialogue is not only possible but almost expected by individuals from these same organizations – especially from doctors and health care providers.
In my next post, I will share how companies can take advantage of these fundamental changes in communication.
*Kishore Swaminathan is the Chief Scientist of Accenture.