We all know exercising is good for us, and we all made our new year's resolutions, but few of us are able to carry them through. Can we inject better motivation to help us get in shape? To answer that question, the Labs started the Steptacular pilot, in collaboration with Stanford University and Live Well at Accenture, which applies game concepts as a motivator for people to get healthy. The pilot includes several gaming concepts:
- Clear goals: We set a target for our participants of 10,000 steps a day, which many fitness experts recommend as the minimum daily exercise. In addition, we set smaller goals for those who are less active: Silver, Gold and Platinum levels, depending on how much you walk each week.
- Instant feedback: Clear goals do not make a difference if there is no way of knowing how far you are from them. We ask our participants to carry an Omron HJ-720 pedometer, which has a digital display that instantly shows what you have achieved. In addition, the Steptacular website shows participants' step history and how they stack up against other participants.
- Social: Social games have been a runaway success. Steptacular also leverages social features to encourage peer pressure. In Steptacular, you can connect with friends and then watch (and, more importantly, push for) each other's progress.
- Engaging user interface: Steptacular uses a very engaging game that lets users redeem random rewards for their achievements. We got many love emails describing how participants were motivated to walk more in order to play the game more.
The Steptacular pilot has just concluded. Although we will be publishing the research results shortly, I thought I would share some high-level statistics. 5,105 people signed up for the pilot. Collectively, the participants walked more than 1.8 billion steps, which is more than three times the distance to the moon. Along the way, we got many fan emails describing how motivating the pilot was, and how participants were able to lose weight and lower their cholesterol. I will post a link to the research paper when we publish it, so that you can see in detail which game mechanisms worked and how effective they were.
From a technical standpoint, launching such a pilot is not an easy task. We operated under very tight constraints. First, we had a very short window to launch the pilot. The pilot had to end by a deadline due to other HR constraints, so to maximize the pilot duration, we had to act fast. Through the hard work of several Stanford students, we were able to launch in 3.5 weeks. Second, the pilot was only scheduled to run for a few months, so we did not want to waste money and time procuring hardware to power our application. Third, we had to manage a large number of participants; in particular, the distribution of pedometers presented a major challenge for our small team.
Fortunately, the cloud came to the rescue. Our application was a prime candidate for it: not only was the application temporary (needed for only a few months), we also needed capacity to scale up quickly (we had to launch fast). A quick TCO analysis clearly showed that the cloud was the more economical choice. In the end, we chose Amazon as the technology platform, and we ended up using several services, including
- Amazon EC2: We were able to get our servers quickly. With only a few weeks to launch, we had no time to wait for server procurement. In addition to spinning up servers quickly, we also leveraged the free CloudWatch service to closely monitor our system's performance.
- Amazon SES: We had to send thousands of emails to our participants, for example, for email verification during sign-up or for sending out announcements. We could not get an internal email account set up quickly (it takes time to build a business case and time to provision), and we did not have access to an external email service that would let us send a massive number of messages. It took us only a couple of hours to set up Amazon's Simple Email Service, allowing us to focus on application design.
- Amazon retail: I have been doing cloud research for the past several years, so spinning up server instances is easy. Unfortunately, running a supply chain to get pedometers into our participants' hands is no easy task. We looked into being our own dealer (buying in bulk from Omron, then shipping) and into using Amazon Fulfillment services, but in the end we chose to just use Amazon retail. It turned out to be cheaper than what we could achieve ourselves. Within a couple of weeks of launching, we helped sell 3,000+ pedometers through the Amazon retail site.
It is the ultimate dream of the cloud that you can provision any service you need by yourself and pay only for what you use. The Steptacular pilot is definitely a beneficiary of that grand vision.
There is no question that the cloud is making a big impact in the industry, but how will it impact consulting firms specifically? If you were to believe some analyses of cloud impacts (see reason #3), we, Accenture, will be out of business very soon, because we cannot adjust to "the smaller projects, agile approach, and lower margins". But if you were to believe me, I think the cloud is the best thing that has happened to consultants.
One reason is that I believe faster innovation and the myriad of technology choices require deep technology skills and experience. It used to be simple to determine the infrastructure choice: simply buy hardware from HP or IBM, install Oracle software, and most of your stack is in place. In the cloud, however, there are many options to choose from. On the server side alone, there are many infrastructure cloud providers, and each offers many different virtual machine configurations. Even a simple apples-to-apples cloud cost comparison is non-trivial. Once you choose the hardware, you then have to choose the software stack. Again, there are many options. For example, if you were to use a NoSQL data store, you are faced with many NoSQL platforms. There are many dimensions along which to compare NoSQL stores, and understanding your application requirements and choosing the right one is a daunting task.
To make matters worse, the rate of innovation is much higher in the cloud era. As an example, every month Amazon pushes out new services or new features. Even for someone like me who is focused 100% on the cloud, it is difficult to keep up, let alone someone who is focused on a business application and just wants to choose the right platform. For those who do not want to be bothered with the intricacies of the cloud (that is most of us), it is better to contact a consultant.
Indeed, with the cloud, infrastructure cost will be greatly reduced, which could eat into the deal size. But in large-scale projects, infrastructure is typically a small fraction of the cost, often less than 20%. The majority of the cost is in application development. As the recent Accenture technology vision points out, the value (and hence the margin) is in the higher layers of the stack. Consultants' value is in solving clients' business problems, not in saving infrastructure cost.
One thing that is accurate in the cloud impact analysis is that I would not golf with my clients much. But, sadly, that has been true all along :-(.
Although MapReduce has found widespread usage within the startup and SMB community, its enterprise adoption is just beginning. We are seeing more and more enterprises evaluating the technology and developing PoCs (Proofs of Concept) to see where it may fit in the enterprise landscape. There are also several pioneering clients who have already implemented Hadoop in production. For example, I was recently informed that one of our government clients has already deployed 12 Hadoop clusters.
For enterprises looking to deploy the technology, there is now one more option to choose from. Accenture has recently partnered with Appistry -- an internal cloud platform provider -- to release the next version of Cloud MapReduce (CMR). Built on a different architecture, the CMR implementation achieves many advantages over other implementations, such as Hadoop, including better scalability and higher performance. The Appistry integration allows an enterprise to run CMR inside its firewall instead of, or in addition to, running in the Amazon cloud environment. The partnership was officially announced by Appistry and covered by GigaOM and the New York Times. You can read about the technical details of Cloud MapReduce on Appistry.
An enterprise choosing which platform to deploy has several dimensions to consider. In addition to scalability and performance, there are several key reasons to consider Appistry's version of CMR.
- No single point of failure. If you cannot tolerate downtime and potential data loss, then you should look into a fully distributed architecture as used by CMR instead of a master/slave architecture as used by Hadoop.
- Streaming support and incremental batch. If you constantly have new data coming in and need to continuously re-run your batch analysis to include the new data, then think about deploying a framework that supports streaming. The latest CMR implements streaming support, conceptually similar to what HOP provides, but implemented at commercial grade.
- Support. If you are looking for a product company (instead of a consulting company) to support your deployment, Appistry is there for you.
Obviously, there are lots of reasons to choose Hadoop as well. For one, there is a huge community around the Hadoop platform. It is our intention to continue developing CMR to make it interface-compatible with Hadoop, so that CMR can be an integral part of that community. Stay tuned for the next version.
Good news from the infrastructure cloud, such as Amazon's EC2 and S3 services: for about $100, you can have a cluster of 1,000 servers in your hands for an hour. Data center and server configuration? Done. Management and maintenance? Done.
So, the question is “what are you going to do with it?” Now, it’s time to sit back and think about what we couldn’t do before, and what we can do now with this elastic infrastructure cloud.
Many have already discussed applications with seasonal load. For example, you no longer need to size your web server farm for peak load. Rather, you can build an elastic web server farm that automatically scales its capacity to follow the traffic demand. Or perhaps your company must generate a quarterly forecast that requires a business analytics application with computing power equivalent to a hundred servers. Because you only run it four times a year, three hours each time, you could not financially justify purchasing 100 servers for that purpose alone. Now, with the cloud, twelve hours on 100 servers is only about $120. (Yes, network bandwidth fees and so on will make it more than $120, but it is still much better than buying 100 servers.)
I want to introduce the Opportunity Eco-system. This is a place where software providers and users can meet safely. It is hosted in a public cloud. Before we get into details, let's take a look at a typical setup scenario.
Bob is a manager in the quant department of an investment firm. His job is to determine a target price for a mutual fund by running sophisticated pricing algorithms. Just yesterday, Bob received an email advertising AccuPricer, a new pricing engine. Bob has three strategies: 1) investment is all about taking a risk for a bigger return, so try it out! 2) when it comes to IT, "don't change it until it breaks!" or 3) wait 6 months and see what the reaction from early adopters is. None of these are optimal strategies, though.
Opportunity Eco-system for users
Users can try out new software on pre-configured virtual machines. Since you have a practically unlimited resource pool, you can try multiple software packages from multiple vendors at the same time. Remember, "at the same time," not "one by one." In this way, you can give a new offering a chance without interrupting your current workflow. In the first phase, you can treat the new offering's workflow as a "what-if" scenario. The Opportunity Eco-system makes this flow seamless.
Now you are running multiple options for one task in parallel in the cloud, so you want a comparison metric, for several reasons. First, Bob needs to choose one of the many results from the many engines, as his target price should be one number. Second, Bob may want to draw a conclusion from many results using a "voting scheme" or an "averaging scheme." Even in this case, Bob may not want to give equal weight to all sources until they have earned his confidence. Lastly, after Bob tries out a new engine several times, he may want to filter it out if he is not satisfied with it; otherwise, a year later he will be running more than 100 engines all at once, quickly burning through his IT budget. The Trial Eco-system provides an interface where you can set up your comparison metric objectives (accuracy, time, cost, etc.) and collect statistics for comparison. Interestingly, sometimes "accuracy" cannot be scored in real time: if a software engine predicts a future value of some property, we can only tell the real accuracy when that future arrives. The Trial Eco-system provides a future-feedback feature that collects the real value at that future time and updates the credibility of the engine.
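To make the comparison concrete, here is a minimal sketch (in Python) of the kind of confidence-weighted scoring a system like this could apply. The engine names, weights, and update rule are purely illustrative assumptions, not the Trial Eco-system's actual metric configuration.

```python
# A toy sketch of confidence-weighted comparison with future feedback.
# Engine names, weights, and the scoring rule are hypothetical.

def weighted_price(results, weights):
    """Combine prices from several engines using confidence weights."""
    total = sum(weights[e] for e in results)
    return sum(results[e] * weights[e] for e in results) / total

def future_feedback(results, weights, actual, learning_rate=0.1):
    """Once the real value is known, nudge each engine's weight up or down
    according to how close its prediction was."""
    for engine, predicted in results.items():
        error = abs(predicted - actual) / actual
        weights[engine] = max(0.01, weights[engine] * (1 + learning_rate * (0.05 - error)))
    return weights

# Example: three engines price the same fund; SosoPricer has earned more trust so far.
results = {"AccuPricer": 101.2, "SosoPricer": 99.8, "OtherPricer": 103.5}
weights = {"AccuPricer": 0.5, "SosoPricer": 1.0, "OtherPricer": 0.3}

print(weighted_price(results, weights))                     # Bob's single target price
weights = future_feedback(results, weights, actual=100.4)   # update once the future arrives
```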
So, what should Bob pay for? Bob pays a usage-based license for the engines he chooses. The Trial Eco-system is responsible for monitoring usage, billing, and accounting as a disinterested third party. Note that Bob already purchased a site license for some pricers, like SosoPricer, and it is not reasonable to pay an extra license fee for a SosoPricer Trial Eco-system edition. The Trial Eco-system therefore provides a BYOL (Bring Your Own License) feature so that Bob does not need to pay extra.
Trial Eco-system for software providers
Software providers are the other crucial party in the Trial Eco-system. Unless the Trial Eco-system can invite most of the market-leading providers into the system, users' choices will be limited and the whole system will collapse. The Trial Eco-system lowers the market entry barrier for software providers. It provides a standard virtual machine image with security features to protect their proprietary software and accounting features for charging license fees. Moreover, it provides a standard framework so that comparable tools from different vendors can compete fairly with each other.
With the Trial Eco-system, from now on, Bob will get promotional emails like this:
"We are pleased to introduce AccuPricer, our new pricing engine. More than 100 customers have already tried it out, and 80% of them experienced better results with AccuPricer (see chart 1). To compare the performance of AccuPricer to what you have now, click this link. By subscribing today, you will get a $100 AccuPricer credit, equivalent to a 30-day license."
Bob clicks the link to let the Trial Eco-system add AccuPricer to his parallel pricing engine list. He still does exactly the same thing he did before to run a pricing calculation: he presses the "Price it" button. AccuPricer then runs in parallel with SosoPricer and the 4 other engines on Bob's list. At the end of the day, the Trial Eco-system sends him a report showing the statistics for all the engines on his list. The report shows that AccuPricer performs quite well. After a month of consistently high performance, Bob and his team decide to make AccuPricer official and discontinue their SosoPricer subscription.
Many of our clients are interested in migrating to the cloud, but all of them are concerned about security. I wrote before that a cloud is more secure than one's own data center. Following on that thread, today I will focus on a set of security best practices you can follow to enhance your cloud security even further. Since a lot of our clients are evaluating the Amazon cloud as a potential choice, I will focus on best practices in the Amazon cloud, but the principles should apply to other clouds as well.
1. Check before connecting.
Since a cloud VM (virtual machine) is outside your firewall, you have no control over the path used to reach it. For example, it is well known that DNS servers can be hacked. Just a few months ago, China's largest search engine, Baidu, suffered a DNS attack. So it is possible that your connection to your VM could be hijacked too. One best practice is to always check your VM's signature before connecting. Amazon instances generate a random SSH server key at boot. This SSH server key can be obtained by querying Amazon's (secure) API for the console output, and it should always be checked against the SSH key reported when you SSH into your VM. To ensure this best practice is always followed, we wrote a wrapper around the SSH client for our clients, which automatically checks the key before connecting. You can quickly code up something like ours.
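As an illustration, here is a minimal Python sketch of such a wrapper, assuming the boto library and the standard OpenSSH tools are installed. The fingerprint regular expression and the console-output format are assumptions; adjust them to what your AMI actually prints at boot (newer OpenSSH may need "-E md5" on ssh-keygen to produce the colon-separated hex form matched here).

```python
# Sketch of a "check before connecting" SSH wrapper.
import re
import subprocess
import boto.ec2

FINGERPRINT = re.compile(r"[0-9a-f]{2}(?::[0-9a-f]{2}){15}")

def console_fingerprints(region, instance_id):
    """Pull SSH host key fingerprints out of the instance's console output,
    fetched over Amazon's (secure) API."""
    conn = boto.ec2.connect_to_region(region)
    output = conn.get_console_output(instance_id).output or ""
    return set(FINGERPRINT.findall(output))

def live_fingerprint(host):
    """Compute the fingerprint of the key the server is presenting right now."""
    key = subprocess.check_output(["ssh-keyscan", "-t", "rsa", host])
    proc = subprocess.Popen(["ssh-keygen", "-lf", "/dev/stdin"],
                            stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    out, _ = proc.communicate(key)
    match = FINGERPRINT.search(out.decode())
    return match.group(0) if match else None

def safe_ssh(region, instance_id, host, user="root"):
    """Refuse to connect unless the live host key matches the console output."""
    if live_fingerprint(host) not in console_fingerprints(region, instance_id):
        raise RuntimeError("SSH host key does not match console output; possible hijack")
    subprocess.call(["ssh", "%s@%s" % (user, host)])
```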
2. Encrypt as much as you can.
You should always use SSH or SSL to connect to your VM, which encrypts all traffic. In addition, you should encrypt your data to guard against hard disk theft or improper hard disk disposal. On Linux, this is easy to do: you just need to set up an encrypted loopback file system. On Windows, there are a number of products you can use for encryption.
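For example, on a Linux instance with cryptsetup installed and root access, setting up a file-backed encrypted volume might look roughly like the following sketch; the paths, size, and volume name are illustrative only.

```python
# Sketch: create and mount a LUKS-encrypted, file-backed loopback volume.
import subprocess

def run(cmd):
    print("+ " + " ".join(cmd))
    subprocess.check_call(cmd)

def create_encrypted_volume(backing_file="/mnt/secure.img", size_mb=1024,
                            name="securevol", mount_point="/secure"):
    # Allocate the backing file that will hold the encrypted bits.
    run(["dd", "if=/dev/zero", "of=" + backing_file, "bs=1M", "count=%d" % size_mb])
    # Attach it to a loop device and layer LUKS encryption on top
    # (luksFormat prompts for a passphrase).
    loop_dev = subprocess.check_output(
        ["losetup", "--show", "-f", backing_file]).decode().strip()
    run(["cryptsetup", "luksFormat", "-q", loop_dev])
    run(["cryptsetup", "luksOpen", loop_dev, name])
    # Put a file system inside the encrypted container and mount it.
    run(["mkfs.ext3", "/dev/mapper/" + name])
    run(["mkdir", "-p", mount_point])
    run(["mount", "/dev/mapper/" + name, mount_point])
```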
3. Wipe when you quit.
As an added precaution, you should wipe out the encrypted loopback file when you shut down your instances. This prevents even the most determined hackers from trying to decrypt your bits if they get hold of your file, for example by stealing the hard disk. When you shut down your Linux instances by calling Amazon's API, the proper shutdown procedure is invoked, so all you need to do is hook into the shutdown script and wipe the disk in the process. In our experiments, we found that there is enough time to wipe out about 7GB of data before the instance is forcibly terminated. That should be long enough for you to wipe the most critical sections so that no one can reconstruct your bits.
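A minimal sketch of the wipe step, assuming the encrypted volume from the previous example; in practice you would call something like this from a script hooked into the instance's shutdown sequence (for example an init.d stop script), wiping the most critical files first since shutdown time is limited.

```python
# Sketch: unmount, close, and overwrite the encrypted backing file on shutdown.
import subprocess

def wipe_on_shutdown(backing_file="/mnt/secure.img", name="securevol",
                     mount_point="/secure"):
    subprocess.call(["umount", mount_point])
    subprocess.call(["cryptsetup", "luksClose", name])
    # Overwrite the backing file in place before unlinking it, so the
    # encrypted bits cannot be recovered from the disk.
    subprocess.call(["shred", "-n", "1", "-u", backing_file])
```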
4. Stand all by yourself.
One key difference between the cloud and your internal data center is that your VM may sit next to a stranger's VM on the same physical hardware. Even though hypervisor isolation has been robust so far, there is always the concern that a vulnerability could be discovered someday and your VM could be attacked by a neighboring VM. One solution is to launch a VM onto hardware it has all to itself. We analyzed Amazon's cloud hardware configuration recently and concluded that there are two instance types that occupy an entire physical machine. Since Amazon does not have the capability to live-migrate your VM to other hardware, you can be sure that your VM is standing all by itself. You will receive an email notification if Amazon needs to change the underlying hardware, which happened to us recently.
5. Open to only those you trust.
Amazon offers a powerful software firewall called Security Groups. You can have as many Security Groups, and as many rules per Security Group, as you want. You should use Security Groups to lock down access to your application to as narrow a list as possible. For example, if you enable SSH access, you should open port 22, but make sure you open it only to the IP addresses from which you will access it. Never open a port to the whole world (i.e., 0.0.0.0/0) unless that part of your application is public facing.
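As a sketch using the boto library, locking SSH down to a single trusted address might look like this; the group name, region, and the 203.0.113.10 office address are placeholders.

```python
# Sketch: a Security Group that exposes the web port publicly but SSH to one IP only.
import boto.ec2

conn = boto.ec2.connect_to_region("us-east-1")
group = conn.create_security_group("app-web", "Web tier with locked-down admin access")

# SSH from one trusted IP only; never 0.0.0.0/0.
group.authorize(ip_protocol="tcp", from_port=22, to_port=22, cidr_ip="203.0.113.10/32")

# The public-facing web port can be open to the world.
group.authorize(ip_protocol="tcp", from_port=80, to_port=80, cidr_ip="0.0.0.0/0")
```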
Because of the fine-grained control a cloud offers you, if you follow the above best practices, you can be sure that your application is more secure hosted in the cloud than hosted in your own data center.
One of the cloud's success stories involves a startup called Animoto. The company's website was hit hard when it became famous, but, by running on top of the Amazon cloud, Animoto handled the onslaught gracefully. As traffic increased, they scaled their infrastructure from roughly 50 servers to a peak of 4,000 servers. The story demonstrates the power of the cloud: dynamically scaling based on usage. This is particularly important for web applications, where the number of users fluctuates unpredictably, not only over the application's life cycle but also throughout the day.
In comparison, in a traditional infrastructure, you have to provision a fixed capacity because it takes too long to provision additional capacity. In a couple of client projects I have seen, people grossly over-provision relative to what they need, because it is better to be safe than sorry. As a result, they pay far more than they should, sometimes by orders of magnitude.
The order-of-magnitude infrastructure cost savings alone should convince you that the cloud is the right way to go. But I have to break the bad news that you cannot simply drop your application into the cloud and expect it to scale all by itself. Although there are many solutions out there now, it is difficult to determine which solution works best for your application and which is the most robust. As proof, I visited Animoto's website recently and was surprised to find that it was down (see below). Apparently, even the cloud's poster child did not get it completely right, at least not the failure-handling part.
Which solution should you choose if you decide to host your web application in the cloud? Obviously, you can build it on your own. I have talked about how to choose a load balancer in the cloud, but that is just a start. You still need to figure out how to add auto-scaling and fault-tolerance capabilities. Instead of designing a new solution from scratch, you can leverage a pre-built Accenture solution.
I would like to introduce WebScalar, a prebuilt platform developed by Accenture that can host any web application. WebScalar is auto-scaling, which means servers are brought up or down depending on traffic demand. It is also fault-tolerant: any single point of failure is repaired automatically. For example, if the load balancer fails, an active standby load balancer immediately takes over; at the same time, a new standby load balancer is spun up to protect against future failures. Lastly, WebScalar has a set of load balancing modules that can be plugged in, so depending on your application profile, we can plug in the optimal load balancer for your needs. This demo is a capture of the monitoring front end, which shows the status of the moving parts in the system, including the load balancers and the web servers.
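To give a feel for what an auto-scaling policy of this kind does (this is not WebScalar's implementation, just a toy illustration), consider the following sketch; the threshold, minimum pool size, and provisioning callbacks are assumptions.

```python
# Toy auto-scaling policy: keep each web server within a target request rate.
import math

TARGET_RPS_PER_SERVER = 200.0   # assumed comfortable load per web server
MIN_SERVERS = 2                 # keep a minimum pool for fault tolerance

def desired_capacity(current_rps):
    return max(MIN_SERVERS, int(math.ceil(current_rps / TARGET_RPS_PER_SERVER)))

def reconcile(servers, current_rps, launch, terminate):
    """launch() provisions one new server; terminate(s) retires one."""
    want = desired_capacity(current_rps)
    while len(servers) < want:
        servers.append(launch())
    while len(servers) > want:
        terminate(servers.pop())
    return servers
```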
WebScalar is free of charge to our client teams, and it is typically used as part of a larger cloud application migration project. If you are interested in using this solution, you can reach out to your client director or to us directly.
Cloud computing promises many advantages: on-demand pricing, less IT overhead, lower cost through economies of scale, a lower barrier to entry into new territories, and so forth. All of these are definite nice-to-haves, but is this just a minor chapter in the IT saga or a proverbial paradigm shift? In other words, is this just a passing cloud or a rainmaker?
If I wind my mental clock forward 3-5 years, I see three radical changes that cloud computing could bring.
Prognosis 1: Cloud computing will lead to a dramatic increase in cross-company business processes that will dwarf today’s “business ecosystems”.
Prognosis 2: Cloud computing enables an “exoskeleton” model (as opposed to today’s “endoskeleton” model) for corporate computing. This will open up new white spaces for IT services in many large but fragmented industries such as construction, education, and healthcare.
Prognosis 3: Cloud computing will give rise to what could be called business process “utilities” – i.e., companies that provide simple and common business processes (e.g., sales tax calculation and remission) but at such a massive scale that they’ll dwarf today’s SaaS companies.
There’s a lot of wealth to be created. But then there are also lots of technical problems to be solved. The first 3 parts of this series will examine each of the three prognoses above. The fourth will outline the set of technical problems that need to be solved in order for these prognoses to come true.
Prognosis 1: Cloud computing will lead to a dramatic increase in cross-company business processes that will dwarf today’s “business ecosystems”.
The moment a company’s IT systems migrate outside the firewall, they can much more easily communicate and exchange information with other IT systems from other companies to execute business processes that cross company boundaries.
To be sure, cross-company processes are not new. In the '80s, EDI was aimed at communication across companies to exchange information along a supply chain within different "business ecosystems" (most notably, within the automobile industry). The travel industry has integrated systems across airlines, car rental companies and hotels to create business ecosystems (e.g., the oneworld alliance and the Star Alliance) that offer passengers a seamless travel experience across multiple airlines, hotels and rental car companies. Today, however, such processes are handcrafted and hardwired across systems from a small number of business partners, or orchestrated by third-party "clearing houses."
Cloud computing in combination with integration standards like web services and REST has the potential to create cross-enterprise processes at an industrial scale: complex, yet flexible business processes that snake through multiple companies that are part of fluid and ever-changing business ecosystems. One may very well ask: “even if this is technologically possible, what is the business driver for it?”
Practically any human experience you can think of – whether it’s a vacation, a stint at the hospital, or just living your average humdrum day – involves products and services provided by multiple companies. Today, companies provide discrete products and services that we, as individuals, manage and orchestrate. The ability to flexibly weave together a business process with services from multiple companies around an individual and his or her life seems like a strong driver in the business-to-consumer world.
Much as an individual's life involves touch points with multiple products and services, almost every process in an organization also involves interactions with multiple business partners. Today, each business partner sells a discrete product or provides a discrete service, and organizations manage and orchestrate these internally into business processes. Cloud computing makes it considerably easier for companies to configure business processes that weave internal components and many external components into complex yet fluid processes around their business needs. This seems like a strong driver in the business-to-business world.
To be continued.
Last time I defined a Cloud Reference Model to bring concreteness to cloud-based application architecture. Here I provide an example that illustrates the components of this 7-layer stack. Consider media transcoding where users input data files of one format and the application outputs the files in a different format.
Here the Application Layer contains the program that transcodes each file and that creates the end-user interface for accepting the commands and presenting the data files. The code interfaces with the APIs from the Transformation Layer with minimal concern for the underlying computing platform.
Next, the Transformation Layer transforms the program code and the collected data to suit the platform. In the case where Amazon provides the platform for storage and message queuing, these transformations handle the details of the Application Layer's "put" and "get" instructions for the underlying Amazon interface: the code calls the SQS API to log status and the S3 API to access the files, and the data is formatted for storage in S3. For example, a large data object may be broken into smaller pieces for storage in the cloud and, upon retrieval, recombined and checked. Moving off the Amazon platform then only requires updating the "put" and "get" implementations.
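As an illustration, a chunked "put"/"get" of this sort might look roughly like the following sketch, assuming the boto library (2.x, Python 2 era); bucket and key names are placeholders.

```python
# Sketch: split a large object into S3 parts plus a checksum manifest,
# then recombine and verify it on retrieval.
import hashlib
import boto
from boto.s3.key import Key

CHUNK_SIZE = 5 * 1024 * 1024  # 5 MB pieces

def put(bucket_name, object_name, data):
    bucket = boto.connect_s3().get_bucket(bucket_name)
    for i in range(0, len(data), CHUNK_SIZE):
        part = Key(bucket, "%s/part-%05d" % (object_name, i // CHUNK_SIZE))
        part.set_contents_from_string(data[i:i + CHUNK_SIZE])
    Key(bucket, "%s/manifest" % object_name).set_contents_from_string(
        hashlib.md5(data).hexdigest())

def get(bucket_name, object_name):
    bucket = boto.connect_s3().get_bucket(bucket_name)
    parts = sorted(bucket.list(prefix="%s/part-" % object_name), key=lambda k: k.name)
    data = "".join(p.get_contents_as_string() for p in parts)
    expected = Key(bucket, "%s/manifest" % object_name).get_contents_as_string()
    if hashlib.md5(data).hexdigest() != expected:
        raise IOError("object %s failed its integrity check" % object_name)
    return data
```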
At this point, let's assume that the lower-layer implementation of the message queue and storage is handled by Amazon. In return, we accept Amazon's guarantees for this functionality: 99.9% availability and no latency guarantees. What remains our responsibility is to complete the platform to handle the processing.
Within the Control Layer, controller logic determines the number and type of appliances needed to meet the desired guarantees; for example, it may require 5 large instances to meet a time constraint of 24 hours, located in Amazon's West Coast region to be physically close to the data stored in S3. With the help of constant monitoring from the Instantiation Layer below, the logic scales the number of appliances to meet the deadline. By implementing this logic, we control the placement of appliances, the time constraint, and the availability.
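A minimal sketch of such controller logic might look like this; the file counts, per-file transcode time, and instance type are illustrative assumptions, not measurements.

```python
# Sketch: size the worker pool to meet the deadline and pin it near the data.
import math

def plan_capacity(num_files, minutes_per_file, deadline_hours,
                  data_region="us-west-1", instance_type="m1.large"):
    total_minutes = num_files * minutes_per_file
    workers = int(math.ceil(total_minutes / (deadline_hours * 60.0)))
    return {
        "instance_type": instance_type,   # the "large image" appliances
        "count": workers,
        "region": data_region,            # stay close to the data stored in S3
    }

# e.g., 720 files at 10 minutes each, to finish within 24 hours -> 5 workers
print(plan_capacity(720, 10, 24))
```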
Based on the determination of the Control Layer, the Instantiation Layer runs the scripts that provision the appliances. For Amazon's EC2, these provisioning scripts supply Amazon credentials, apply configuration, load the content, and perform error handling if the returned appliance is faulty (e.g., has memory errors).
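A provisioning step of this kind, sketched with the boto library and consuming the plan from the previous sketch, might look roughly like this; the AMI ID, key name, and health check are placeholders.

```python
# Sketch: launch the appliances chosen by the Control Layer, wait for them to
# come up, and retire any that come back faulty.
import time
import boto.ec2

def provision(plan, ami_id="ami-12345678", key_name="transcode-key"):
    conn = boto.ec2.connect_to_region(plan["region"])
    reservation = conn.run_instances(ami_id,
                                     min_count=plan["count"],
                                     max_count=plan["count"],
                                     instance_type=plan["instance_type"],
                                     key_name=key_name)
    healthy = []
    for instance in reservation.instances:
        while instance.state == "pending":
            time.sleep(10)
            instance.update()
        if instance.state == "running" and passes_health_check(instance):
            healthy.append(instance)
        else:
            # Faulty appliance: discard it and let the Control Layer
            # request a replacement.
            instance.terminate()
    return healthy

def passes_health_check(instance):
    # Placeholder: e.g., run a memory test or ping the appliance's agent.
    return True
```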
The responsibility for creating the EC2 machine image sits at the Appliance Layer. In this case, we create a job scheduler appliance by installing the BOINC control process software to choreograph and distribute the files, and a worker appliance using the BOINC client software that runs the transcoding program.
Finally, within Amazon, Xen virtualization software carves the virtual machines out of the physical resources at the Virtualization Layer. And the Physical Layer deals with data center concerns: supplying the power, cooling, and compute to support the virtual machines.
At each layer, the IT functions focus on one level of abstraction with minimal concern for other details. Combining the solutions at each layer forms the overall cloud architecture. Stay tuned next time for an overdue definition of cloud and how to use the model.
If you ask three people what the cloud (as in cloud computing) is, you'll probably get back 10 different definitions. Even though no one can agree on the definition, most will agree that "on-demand" and "pay-per-use" are its key characteristics, and all would use Amazon Web Services as an example of the cloud. So it's really confusing when people say MapReduce is a cloud technology, because MapReduce is not associated with "on-demand" or "pay-per-use". If you're wondering about the connection between MapReduce and the cloud, read on.
First of all, if you have not heard of MapReduce, it's a technology first proposed by Google in 2003 to cope with the challenge of processing an exponentially growing amount of data. In the same year the technology was invented, Google's production indexing system was converted to MapReduce. Since then, it has quickly proven to be applicable to a wide range of problems. For example, roughly 10,000 MapReduce programs had been written at Google by June 2007, and 2,217,000 MapReduce jobs ran there in September 2007 alone.
MapReduce has enjoyed wide adoption outside of Google too. Many enterprises are increasingly facing the same challenge of dealing with a large amount of data. They want to analyze and act on their data quickly to gain competitive advantages, but their existing technology cannot keep up with the workload. Facebook is using MapReduce in production, and many large traditional enterprises are experimenting with the technology. It turns out that MapReduce can perform most tasks a database management system (e.g., Oracle) can, and it has many advantages over other technologies, including its scalability, its ad-hoc query capability and its flexibility.
The first connection between MapReduce and the cloud is that MapReduce can benefit from cloud technology. This is demonstrated by the Cloud MapReduce project, which is an implementation of the MapReduce programming model on top of the Amazon services (EC2, S3, SQS and SimpleDB). Back in late 2008, we saw the emergence of a cloud Operating System (OS) -- a set of cloud services managing a large cloud infrastructure rather than an individual PC. We asked ourselves the following questions: what if we build systems on top of a cloud OS instead of directly on bare metal? Can we dramatically simplify system design? We decided to try implementing MapReduce as a proof of concept. In the course of the project, we encountered a lot of problems working with the Amazon cloud OS, most of which could be attributed to the weaker consistency model it presents. Fortunately, we were able to work through all the issues and successfully built MapReduce on top of the Amazon cloud OS. The end result surprised us somewhat, because Cloud MapReduce has several advantages over other implementations (a sketch of the worker model follows the list below):
- It is faster. In one case, it is 60 times faster than Hadoop (Actual speedup depends on the application and the input data).
- It is more scalable. It has a fully distributed architecture, so there is no single point that becomes a bottleneck.
- It is more fault tolerant. Again due to its fully distributed architecture, it has no single point of failure.
- It is dramatically simpler. It has only 3,000 lines of code, two orders of magnitude smaller than Hadoop.
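To give a flavor of the architecture (this is not the actual Cloud MapReduce code, just a minimal sketch using boto), a map worker that coordinates purely through Amazon's queue service instead of a master node might look like this; the queue names and user-supplied map function are placeholders.

```python
# Sketch: a map worker that pulls input splits from SQS and pushes
# intermediate key/value pairs to a reduce queue.
import boto.sqs
from boto.sqs.message import Message

def map_worker(region, user_map):
    conn = boto.sqs.connect_to_region(region)
    input_q = conn.get_queue("cmr-input")    # holds references to input splits
    reduce_q = conn.get_queue("cmr-reduce")  # holds intermediate key/value pairs
    while True:
        msg = input_q.read(visibility_timeout=600)
        if msg is None:
            break                            # no more splits: this worker is done
        for key, value in user_map(msg.get_body()):
            reduce_q.write(Message(body="%s\t%s" % (key, value)))
        input_q.delete_message(msg)          # commit: the split has been processed
```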
Cloud MapReduce advocates building more cloud services. If we can separate out a common component as a stand-alone cloud service, not only can the component be leveraged by other systems, but it can also evolve independently. As we have seen in other contexts (e.g., SOA, virtualization), decoupling enables faster innovation.
The second connection between MapReduce and the cloud is that MapReduce is building the foundation of the cloud. Other MapReduce implementations, such as Hadoop, are building cloud services, except that those services are embedded in the project today and cannot easily be used by other projects. Fortunately, those implementations are moving towards separating out cloud services. In the recent 0.20.1 release of Hadoop, the HDFS file system was separated out as an independent component. This makes a lot of sense, because HDFS is useful as a stand-alone component for storing large data sets, even for users who are not interested in MapReduce at all. In the future, MapReduce may indeed be a cloud technology, when it starts to include cloud service implementations.
That’s right. I said “aaS”—as in the “as-a-Service” characterization typically used for cloud. Today’s “as-a-Service” characterization of cloud fails to adequately categorize the architecture components.
The cloud landscape is confusing: with over 85 cloud vendors and various definitions of cloud, it is difficult to compare services.
Consider Amazon's EC2 for virtual machines and S3 for object-level storage. Often, these offerings are both grouped under Infrastructure-as-a-Service. However, the interface and the required solutions for working with EC2 are very different from those for S3. Users submit objects for storage on S3 without visibility into the virtual machine appliances, their configuration, or the way they scale; Amazon handles these functions, and the user accepts the composite storage platform's availability and access guarantees. Conversely, with EC2, these functions are the user's responsibility: determining the middleware installed to create the virtual appliance, adding the provisioning scripts to configure the deployed appliances, and then implementing the algorithms to scale. In exchange for this effort, the user controls the implementation, the configuration, and the SLAs. This comparison of Amazon services demonstrates the need for a concrete description of cloud and of its different architectural components.
As such, I define a Cloud Reference Model that brings order to this cloud landscape. Like the OSI Model for networks, this Cloud Model is layered to separate concerns and abstract details. The Cloud Model divides cloud-based application architecture into seven layers: Application, Transformation, Control, Instantiation, Appliance, Virtual, and Physical. Each layer focuses IT functionality on supporting a specific area of concern.
Application architecture design then becomes an exercise in determining the necessary functionality at each layer, with functional, security, and reliability requirements decoupled. For example, security encompasses solutions ranging from access control to encryption, network protections, and physical security. The Model decouples these decisions into what needs to be placed at each layer (Physical, Virtual, Appliance, etc.). Fulfilling those needs then becomes an exercise in mapping vendor offerings and do-it-yourself responsibilities.
Stay tuned for a detailed example illustrating the components of this model.