Tag Archives: IaaS

Cisco Is Shutting Down Its Public Cloud, Exploring Hybrid IT Strategy

Cisco just confirmed it’ll shut down its Intercloud by March 2017. Intercloud was supposed to be Cisco’s move into the public cloud, addressing both businesses and service providers. But Cisco learned the same painful lesson of the cloud as HPE, which shut down its Helion public cloud a year ago, and Verizon, which shut down its cloud earlier this year. In its statement, Cisco explained:

Cisco has evolved its cloud strategy from federating clouds to helping customers build and manage hybrid IT environments.

It appears Cisco realized that hybrid cloud may be its answer to Amazon, Microsoft and Google. It may have learned from Rackspace, which abandoned its own cloud product and turned to partnering with Amazon, a strategy shift that paid off big time a few months ago. VMware is another datacenter giant that realized it would be better off partnering with Amazon for cloud services, and it announced such a partnership a couple of months ago.

With open source taking over networking, Cisco also foresees rough times on the traditional networking side. The big players go as far as building their own datacenters from the ground up out of commodity hardware, skipping Cisco’s expensive purpose-built high-end boxes.

While many abandon their public cloud aspirations, it’s interesting to see that last month Oracle launched bare metal cloud services, which is in fact just the first step of its new cloud strategy announced back in September. Will it succeed where the others have failed?

Follow Horovits on Twitter!



Filed under Cloud, OpenStack

Oracle Launches Bare Metal Cloud Services, Challenges Amazon AWS

Oracle made a big announcement back in September at its Oracle OpenWorld conference about its plan to add Infrastructure as a Service (IaaS) to its Oracle Cloud Solutions. Now the first piece of that IaaS has been announced: Oracle Bare Metal Cloud Services. Oracle’s service offers integrated network block storage, object storage, identity and access management, VPN connectivity, and a software-defined Virtual Cloud Network (VCN) – its implementation of Software Defined Networking (SDN). The new service launches in Oracle’s new Phoenix Region (Arizona, USA), with the promise of expanding to additional regions. The Phoenix region has 3 Availability Domains (similar to Availability Zones in Amazon Web Services).

Oracle has been exploring the cloud for a while and has made several startup acquisitions in that direction. With this move Oracle is jumping head-on into the ruthless cloud IaaS wars. In fact, it seems Oracle lured in some cloud experts from Amazon, Microsoft and Google to build its new IaaS.

One amazing thing Oracle did with its IaaS is that it designed its entire data center, down to the hardware stack, on its own. Oracle learned well the lesson from Amazon, Google et al. Thanks to that design it claims to offer competitive pricing that will challenge the legendary AWS pricing.

For more information on Oracle Bare Metal Cloud Services see here.

Follow Horovits on Twitter!


Filed under Cloud, Uncategorized

Building Cloud Applications the Easy Way Using Elastic Application Platforms

Patterns, Guidelines and Best Practices Revisited

In my previous post I analyzed Amazon’s recent AWS outage and the patterns and best practices that enabled some of the businesses hosted on Amazon’s affected availability zones to survive the outage.

The patterns and best practices I presented are essential to guarantee robust and scalable architectures in general, and on the cloud in particular. Those who dismissed my latest post as an exaggeration of an isolated incident got affirmation of my point last week, when Amazon found itself apologizing once again after its Cloud Drive service was overwhelmed by unpredicted peak demand for Lady Gaga’s newly-released album (99 cents, who wouldn’t buy it?!) and was rendered non-responsive. This failure to scale up/out to accommodate fluctuating demand raises the scalability concern in the public cloud, in addition to the resilience concern raised by the AWS outage.

Surprisingly, as obvious as the patterns I listed may seem, they are clearly not common practice, judging by the number of applications that went down when AWS did, and by how many other applications have similar issues with public cloud providers.

Why are such fundamental principles not prevalent in today’s architectures on the cloud?

One of the reasons these patterns are not prevalent in today’s cloud applications is that designing such architectures requires an architect experienced and confident in distributed and scalable systems. The typical public cloud APIs also require developers to perform complex coding and use various non-standard APIs that are usually not common knowledge. Similar difficulties arise in testing, operating, monitoring and maintaining such systems. This makes it quite difficult to implement the above patterns to ensure the application’s resilience and scalability, and it diverts valuable development time and resources from the application’s business logic, which is its core value.

How can we make the introduction of these patterns and best practices smoother and simpler? Can we get these patterns as a service to our application? We are all used to traditional application servers that provide our enterprise applications with underlying services such as database connection pooling, transaction management and security, and free us from worrying about these concerns so that we can focus on designing our business logic. Similarly, Elastic Application Platforms (EAP) allow your application to easily employ the patterns and best practices I enumerated in my previous post for high availability and elasticity, without having to become an expert in the field, so you can focus on your business logic.

So what is an Elastic Application Platform? Forrester defines an elastic application platform as:

An application platform that automates elasticity of application transactions, services, and data, delivering high availability and performance using elastic resources.

Last month Forrester published a review under the title “Cloud Computing Brings Demand For Elastic Application Platforms”. The review is the result of comprehensive research and spans 17 pages (a blog post introducing it can be found on the Forrester blog). It analyzes the difficulties companies encounter in implementing their applications on top of cloud infrastructure, and identifies elastic application platforms as the emerging solution for a smooth path into the cloud. It then maps the potential providers of such solutions. For its research Forrester interviewed 17 vendor and user companies. Out of all the reviewed vendors, Forrester identified only 3 that are “offering comprehensive EAPs today”: Microsoft, SalesForce.com and GigaSpaces.

Forrester did an amazing job in their research reviewing and comparing today’s EAP solutions, so I won’t repeat that here. Instead, I’d like to review the GigaSpaces EAP solution in light of the patterns discussed in my previous post, and see how building your solution on top of GigaSpaces lets you introduce these patterns easily, without having to divert your focus from your business logic.

Patterns, Guidelines and Best Practices Revisited

Design for failure

Well, that’s GigaSpaces’ bread and butter. Whereas thinking about failure diverts you from your core business, in our case it is our core business. The GigaSpaces platform provides underlying services that enable high availability and elasticity, so that you don’t have to take care of them yourself. Now that we’ve established that, let’s see how it’s done.

Stateless and autonomous services

The GigaSpaces architecture segregates your application into Processing Units. A Processing Unit (PU) is an autonomous unit of your application. It can be a pure business-logic (stateless) unit, hold data in-memory, serve a web application, or mix these and other functions together. You define the required Service Level Agreement (SLA) for your Processing Unit, and the GigaSpaces platform makes sure to enforce it. When your Processing Unit’s SLA requires high availability, the platform deploys a (hot) backup instance (or multiple backups) of the Processing Unit, to which the PU fails over if the primary instance fails. When your application needs to scale out, the platform adds another instance of the Processing Unit (possibly on a newly provisioned virtual machine booted automatically by the platform). When your application needs to distribute data and/or data processing, the platform shards the data evenly across several instances of the Processing Unit, so that each instance handles a subset of the data independently of the other instances.
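
To make this concrete, here is a minimal sketch of deploying a Processing Unit with such an SLA through the OpenSpaces Admin API. This is not from the original post; the PU name, locator address and method names are assumptions recalled from the public Admin API and may differ between GigaSpaces versions.

```java
import org.openspaces.admin.Admin;
import org.openspaces.admin.AdminFactory;
import org.openspaces.admin.gsm.GridServiceManager;
import org.openspaces.admin.pu.ProcessingUnit;
import org.openspaces.admin.pu.ProcessingUnitDeployment;

public class DeployOrderServicePU {
    public static void main(String[] args) {
        // Discover the grid and grab a Grid Service Manager to deploy through.
        Admin admin = new AdminFactory().addLocator("localhost:4174").createAdmin();
        GridServiceManager gsm = admin.getGridServiceManagers().waitForAtLeastOne();

        // 2 partitions, each with 1 hot backup; the platform enforces this SLA,
        // re-provisioning instances when one fails.
        ProcessingUnit pu = gsm.deploy(new ProcessingUnitDeployment("order-service")
                .numberOfInstances(2)
                .numberOfBackups(1)
                .maxInstancesPerVM(1));   // never co-locate a primary with its backup in one JVM

        pu.waitFor(pu.getTotalNumberOfInstances());
        admin.close();
    }
}
```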

Redundant hot copies spread across zones

You can divide your deployment environment into virtual zones. These zones can represent different data centers, different cloud infrastructure vendors, or any physical or logical division you see fit. You can then tell the platform (as part of the SLA) not to place a primary and its backup instances of the Processing Unit in the same zone, thus making sure the data stored within the Processing Unit is backed up in two different zones. This gives your application resilience across two data centers, two cloud vendors or two regions, depending on your required resilience, all with a uniform development API. Want a higher level of resilience? Just define more zones and more backups for each PU.
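
As a rough, unverified illustration of how such a placement rule could be expressed (the exact zone-placement knobs vary across GigaSpaces versions, and the zone names are invented), the deployment sketch above might be extended like this:

```java
// Hypothetical zone-aware variant of the earlier deployment sketch.
gsm.deploy(new ProcessingUnitDeployment("order-service")
        .numberOfInstances(2)
        .numberOfBackups(1)
        .addZone("dc-east")                  // invented zone names
        .addZone("dc-west")
        .maxInstancesPerZone("dc-east", 1)   // at most one instance of each partition per zone,
        .maxInstancesPerZone("dc-west", 1)); // so a primary and its backup never share a zone
```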

Spread across several public cloud vendors and/or private cloud

GigaSpaces abstracts the details of the underlying infrastructure from your application. GigaSpaces’ Multi-Cloud Adaptor technology provides built-in integration with several major cloud providers, including the JClouds open-source abstraction layer, thus supporting any cloud vendor that JClouds supports. So all you need to do is plug your desired cloud providers into the platform, and your application logic remains agnostic to the cloud infrastructure details. Plugging in two vendors to ensure resilience then becomes just a matter of configuration. The integration with JClouds is an open-source project under OpenSpaces.org, so feel free to review it and even pitch in to extend and enhance the integration with cloud vendors.
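
To show what that abstraction buys you, here is a small standalone sketch using the Apache jclouds library directly (not the GigaSpaces adaptor itself). The provider id, group name and credentials are placeholders, and the API shown is from later jclouds releases than were available when this post was written; switching cloud providers is essentially a one-string change.

```java
import org.jclouds.ContextBuilder;
import org.jclouds.compute.ComputeService;
import org.jclouds.compute.ComputeServiceContext;
import org.jclouds.compute.domain.NodeMetadata;

import java.util.Set;

public class ProvisionNodes {
    public static void main(String[] args) throws Exception {
        // "aws-ec2" could just as well be another supported provider id.
        ComputeServiceContext context = ContextBuilder.newBuilder("aws-ec2")
                .credentials("identity", "credential")
                .buildView(ComputeServiceContext.class);
        ComputeService compute = context.getComputeService();

        // Provision two nodes in a named group; the same call works unchanged
        // against any provider jclouds integrates with.
        Set<? extends NodeMetadata> nodes = compute.createNodesInGroup("eap-workers", 2);
        for (NodeMetadata node : nodes) {
            System.out.println(node.getId() + " " + node.getPublicAddresses());
        }
        context.close();
    }
}
```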

Automation and Monitoring

GigaSpaces offers a powerful set of tools that allow you to automate your system. First, it offers the Elastic Processing Unit, which can automatically monitor CPU and memory utilization and take corrective actions based on your defined SLA. GigaSpaces also offers a rich Administration and Monitoring API that enables administration and monitoring of all the GigaSpaces services and components, as well as the layers running beneath the platform, such as the transport layer, the machine and the operating system. GigaSpaces also offers a web-based dashboard and a management center client. Another powerful tool for monitoring and automation is the administrative alerts, which can be configured and then viewed through GigaSpaces or external tools (e.g. via SNMP traps).
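
For a taste of the Administration and Monitoring API, here is a minimal sketch that lists deployed Processing Units and their status, the kind of data an automation or alerting script would react to. The method names are recalled from the public Admin API (and may differ by version); the PU name is the hypothetical one from the earlier sketch.

```java
import org.openspaces.admin.Admin;
import org.openspaces.admin.AdminFactory;
import org.openspaces.admin.pu.ProcessingUnit;

public class ListProcessingUnits {
    public static void main(String[] args) {
        Admin admin = new AdminFactory().addLocator("localhost:4174").createAdmin();
        admin.getProcessingUnits().waitFor("order-service"); // wait until the PU is discovered

        for (ProcessingUnit pu : admin.getProcessingUnits()) {
            System.out.println(pu.getName()
                    + ": planned=" + pu.getTotalNumberOfInstances()
                    + ", running=" + pu.getInstances().length
                    + ", status=" + pu.getStatus());
        }
        admin.close();
    }
}
```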

Avoiding ACID services and leveraging on NoSQL solutions

GigaSpaces does not rule out SQL for querying your data. We believe that true NoSQL stands for “Not Only SQL”, and that SQL as a language is good for certain uses, whereas other uses call for other query semantics. GigaSpaces supports a subset of the SQL language through its SQLQuery API or through standard JDBC. However, GigaSpaces also provides a rich set of alternative standards and protocols for accessing your data, such as a Map API for key/value access, a Document API for dynamic schemas, object-oriented access (through the proprietary Space API or standard JPA), and the Memcached protocol.
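
As an illustration of the SQL-style access, here is a sketch based on the public SQLQuery API; the Person class is a hypothetical user-defined POJO invented for this example, not something from the post, and exact class names may vary by GigaSpaces version.

```java
import com.gigaspaces.annotation.pojo.SpaceClass;
import com.gigaspaces.annotation.pojo.SpaceId;
import com.j_spaces.core.client.SQLQuery;
import org.openspaces.core.GigaSpace;

@SpaceClass
public class Person {
    private String id;
    private Integer age;
    private String country;

    @SpaceId(autoGenerate = true)
    public String getId() { return id; }
    public void setId(String id) { this.id = id; }
    public Integer getAge() { return age; }
    public void setAge(Integer age) { this.age = age; }
    public String getCountry() { return country; }
    public void setCountry(String country) { this.country = country; }

    // Query the in-memory data grid with SQL-like syntax; the clustered proxy
    // fans the query out to the relevant partitions.
    public static Person[] adults(GigaSpace gigaSpace) {
        SQLQuery<Person> query = new SQLQuery<Person>(Person.class, "age >= ? AND country = ?");
        query.setParameters(18, "US");
        return gigaSpace.readMultiple(query, Integer.MAX_VALUE);
    }
}
```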

Another challenge with traditional relational databases is scaling data storage in a read-write environment. Distributed relational databases were enough to deal with read-mostly environments, but Web 2.0 brought social concepts to the web, with customers feeding data into the websites. Several NoSQL solutions try to address distributed data storage and querying. GigaSpaces provides this via its support for clustered topologies of the in-memory data grid (the “space”) and for distributing queries and execution using patterns such as Map/Reduce and event-driven design.
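
Below is a hedged sketch of the scatter/gather (Map/Reduce-style) side of this, using the OpenSpaces executor API. The interface and annotation names are recalled from the documentation and may differ by version; Person is the hypothetical POJO from the previous snippet.

```java
import com.gigaspaces.async.AsyncResult;
import org.openspaces.core.GigaSpace;
import org.openspaces.core.executor.DistributedTask;
import org.openspaces.core.executor.TaskGigaSpace;

import java.util.List;

public class CountPeopleTask implements DistributedTask<Integer, Integer> {

    @TaskGigaSpace                       // injected with the co-located partition on each node
    private transient GigaSpace colocatedSpace;

    @Override
    public Integer execute() throws Exception {
        // "Map" phase: runs inside each partition's JVM, close to the data.
        return colocatedSpace.count(new Person());
    }

    @Override
    public Integer reduce(List<AsyncResult<Integer>> results) throws Exception {
        // "Reduce" phase: aggregate the partial counts on the caller's side.
        int total = 0;
        for (AsyncResult<Integer> result : results) {
            if (result.getException() != null) throw result.getException();
            total += result.getResult();
        }
        return total;
    }
    // Usage (assuming a clustered GigaSpace proxy):
    //   int total = gigaSpace.execute(new CountPeopleTask()).get();
}
```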

Load Balancing

The elastic nature of the GigaSpaces platform allows it to automatically detect the CPU and memory capacity of the deployment environment and optimize the load dynamically based on your defined SLA, instead of employing an arbitrary division of the data into fixed zones. This dynamic nature also allows your system to adjust in case an entire zone fails (such as what happened with Amazon’s availability zones), so that your system doesn’t go down even in such extreme cases, and maintains optimal balance under the new conditions.

Furthermore, the GigaSpaces platform supports content-based routing, which allows for smart load balancing based on your business model and logic. Content-based routing allows your application to route related data to the same host and then execute entire business flows within the same JVM, thus avoiding the network hops and complex distributed transaction locking that hinder your application’s scalability.
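
Content-based routing is typically declared on the data model itself. A minimal hypothetical sketch follows (the Order class is invented for illustration; the annotations follow GigaSpaces' POJO support, so treat names as approximate):

```java
import com.gigaspaces.annotation.pojo.SpaceClass;
import com.gigaspaces.annotation.pojo.SpaceId;
import com.gigaspaces.annotation.pojo.SpaceRouting;

@SpaceClass
public class Order {
    private String orderId;
    private String customerId;
    private Double amount;

    @SpaceId(autoGenerate = false)
    public String getOrderId() { return orderId; }
    public void setOrderId(String orderId) { this.orderId = orderId; }

    // All orders of the same customer are routed to the same partition, so an
    // entire per-customer business flow can run inside one JVM with no network hops.
    @SpaceRouting
    public String getCustomerId() { return customerId; }
    public void setCustomerId(String customerId) { this.customerId = customerId; }

    public Double getAmount() { return amount; }
    public void setAmount(Double amount) { this.amount = amount; }
}
```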

Conclusion

Most significant advancements do not happen in slow gradual steps but rather in leaps. These leaps happen when the predominant conception crashes in the face of a new reality, leaving behind chaos and uncertainty, and out of that chaos emerges the next stage in the maturity of the system.

This is the case with the maturing of the young cloud realm as well: the AWS outage was a major reality check that opened the eyes of the IT world. Systems crashed with AWS because they counted on their cloud infrastructure provider to handle their application’s high availability and elasticity using its generic logic, and that concept proved to be wrong. Now the IT world is in confusion, and much discussion is going on about whether the faith in cloud was mistaken, with titles like “EC2 Failure Feeds Worries About Cloud Services”.

The next step in the cloud’s maturity was the realization that cloud infrastructure is just infrastructure, and that you need to implement your application correctly, using patterns and best practices such as the ones I raised in my previous post, to leverage the cloud infrastructure for high availability and elasticity.

The next step in the evolution is to start leveraging designated application platforms that handle these concerns for you and virtualize the cloud away from your application, so that you can simply define your application’s SLA for high availability and elasticity and leave it to the platform to manipulate the cloud infrastructure to enforce that SLA, while you concentrate on writing your application’s business logic. As Forrester said:

… A new generation of application platforms for elastic applications is arriving to help remove this barrier to realizing cloud’s benefits. Elastic application platforms will reduce the skill required to design, deliver, and manage elastic applications, making automatic scaling of cloud available to all shops …

 

Follow Dotan on Twitter!


Filed under Cloud, PaaS

Retrospect on recent AWS Outage and Resilient Cloud-Based Architecture

According to the television series “Terminator: The Sarah Connor Chronicles”, the Skynet computer system began its attack against humanity on April 21, 2011. Luckily that hasn’t happened (or has it?), but on that very day another predominant computing system provided us with a painful reminder of how much humanity relies on computers to run the world.

A couple of weeks ago, on April 21, the IT world experienced a tsunami: the Amazon Web Services (AWS) cloud went down in the US East Region for over 3 days (!!), taking down with it numerous systems and services that rely on AWS, such as HootSuite, Reddit, Foursquare, Quora and many more, with damage estimated at $2M. The affected services were EC2 and RDS, and Amazon provided a detailed technical summary of the event.

This tsunami was a wake-up call. It wasn’t the first outage in the cloud arena, and not even Amazon’s first (in fact it was their second outage this year). But its impact was so vast that it finally brought the realization that the cloud is not a silver bullet. Those who counted on the cloud’s generic provision of scalability and resilience did not survive the AWS outage. Your application’s resilience (as well as its scalability) does not exist unless you take care of it.
So how do we achieve this resilience in our application?

Why not learn from the experience of those who survived the outage?
As a cloud evangelist, I was intrigued by the history of the outage as it unfolded. There were great posts during and after the outage from those who went down. But more interesting to me as an architect were the detailed posts of those who managed to survive the outage relatively unharmed, such as SimpleGeo, Netflix, SmugMug, SmugMug’s CTO, Twilio, Bizo and others.

In this post I’d like to summarize the patterns, principles and best practices that emerge from these posts, as I believe we can learn a lot from them about how to design our business applications to truly leverage the benefits the cloud offers in resilience and scalability.

Patterns, Guidelines and Best Practices

Design for failure

The first and fundamental principle in building robust architecture is to design for failure. As SmugMug states:

… we designed for failure from day one. Any of our instances, or any group of instances in an AZ [Availability Zone], can be “shot in the head” and our system will recover …

This principle should be prevalent during design, development, deployment and maintenance stages of the system. SimpleGeo presents an excellent work practice:

… At SimpleGeo failure is a first class citizen. We talk a lot about it in design discussions, it influences our operational procedures, we think about it when we’re coding, and we joke about it at lunch …

Some companies even embedded random failure simulation in their work procedures, such as SmugMug:

… once your system seems stable enough, start surprising your Ops and Engineering teams by killing stuff in the middle of the day without warning them. They’ll love you …

Netflix even automated random failure simulation using a designated Chaos Monkey service to get their engineering team used to a “constant level of failure in the cloud”.
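
The idea is simple enough to sketch. The following is a hypothetical, stripped-down illustration in Java using the AWS SDK (a newer SDK style than existed at the time of this post, and the opt-in tag is an assumption), not Netflix's actual implementation: pick one tagged instance at random and terminate it.

```java
import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
import com.amazonaws.services.ec2.model.DescribeInstancesRequest;
import com.amazonaws.services.ec2.model.Filter;
import com.amazonaws.services.ec2.model.Instance;
import com.amazonaws.services.ec2.model.Reservation;
import com.amazonaws.services.ec2.model.TerminateInstancesRequest;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Random;

public class TinyChaosMonkey {
    public static void main(String[] args) {
        AmazonEC2 ec2 = AmazonEC2ClientBuilder.defaultClient();

        // Only instances explicitly opted in to chaos testing (hypothetical tag).
        DescribeInstancesRequest request = new DescribeInstancesRequest()
                .withFilters(new Filter("tag:chaos-monkey", Arrays.asList("enabled")));

        List<Instance> candidates = new ArrayList<>();
        for (Reservation reservation : ec2.describeInstances(request).getReservations()) {
            candidates.addAll(reservation.getInstances());
        }
        if (candidates.isEmpty()) return;

        // Kill one victim at random; a system designed for failure should barely notice.
        Instance victim = candidates.get(new Random().nextInt(candidates.size()));
        ec2.terminateInstances(new TerminateInstancesRequest().withInstanceIds(victim.getInstanceId()));
        System.out.println("Terminated " + victim.getInstanceId());
    }
}
```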

Stateless and autonomous services

If possible, divide your business logic into stateless services, to allow easy fail-over and scalability. Netflix explained the fail-over benefits:

… if a server fails it’s not a big deal. In the failure case requests can be routed to another service instance and we can automatically spin up a new node to replace it …

Twilio aggregates their stateless services into homogeneous pools, which provides them both fail-over and elasticity:

… The pool of stateless recording services allows upstream services to retry failed requests on other instances of the recording service. In addition, the size of the recording server pool can easily be scaled up and down in real-time based on load …

To contain the ripple effect of the failure, make the services well-encapsulated, as SmugMug states:

… Make your system divided into well-encapsulated components that can fail individually without failing the entire system …

Redundant hot copies spread across zones

By replicating your data to other zones, you insulate your service from zone-wide failure, as SmugMug, Twilio, Netflix and others describe. As Netflix explains:

… we ensure that there are multiple redundant hot copies of the data spread across zones. In the case of a failure we retry in another zone, or switch over to the hot standby …

Twilio also emphasizes the configuration of timeout and retry to avoid delays in failing over to another copy:

… By running multiple redundant copies of each service, one can use quick timeouts and retries to route around failed or unreachable services …
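
Here is a tiny, generic illustration of that quick-timeout-and-retry idea in plain Java (no particular library implied, and the replica endpoints are invented): try each redundant copy with a short timeout before moving on to the next.

```java
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Arrays;
import java.util.List;

public class RetryAcrossReplicas {
    // Hypothetical endpoints of redundant copies of the same stateless service.
    static final List<String> REPLICAS = Arrays.asList(
            "http://recording-1.internal/record", "http://recording-2.internal/record");

    public static int callWithFailover() throws Exception {
        Exception last = null;
        for (String replica : REPLICAS) {
            try {
                HttpURLConnection conn = (HttpURLConnection) new URL(replica).openConnection();
                conn.setConnectTimeout(500);   // fail fast instead of hanging on a dead replica
                conn.setReadTimeout(1000);
                return conn.getResponseCode(); // success: stop retrying
            } catch (Exception e) {
                last = e;                      // route around the failed copy and try the next one
            }
        }
        throw last;                            // all redundant copies failed
    }
}
```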

Spread across several public cloud vendors and/or private cloud

Most IT organizations avoid depending on a single ISP by having another ISP as backup. Even Amazon uses this strategy internally to ensure the high availability of its cloud, by using a primary and a backup network. Similarly, you would like to avoid dependency on a single cloud vendor by having another vendor as backup. This holds true even if the vendor provides a certain level of resilience, as we saw with Amazon’s multi-AZ failure in the recent outage. Many of the companies that survived the recent AWS outage owe it to using their own datacenters, to using other vendors, or to using the US West Region of AWS. SmugMug, for instance, kept their critical data in their own datacenter:

… the exact types of data that would have potentially been disabled by the EBS meltdown don’t actually live at AWS at all – it all still lives in our own datacenters, where we can provide predictable performance …

and also recommends to “spread across many providers”, although admitting that

… This is getting more and more difficult as AWS continues to put distance between themselves and their competitors …

When considering using different regions of AWS for resilience, it’s interesting to note Amazon’s statement about the effort required on your application’s side to work with multiple regions, which makes you wonder if it’s that much easier than working with a different vendor altogether:

… if you want to move data between Regions, you need to do it via your applications as we don’t replicate any data between Regions on our users’ behalf. You also need to use a separate set of APIs to manage each Region. Regions provide users with a powerful availability building block, but it requires effort on the part of application builders to take advantage of this isolation …
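
To ground that: with the AWS SDK for Java, each Region gets its own client, and moving data between Regions means your own code reads from one and writes to the other. A hypothetical sketch (bucket and key names are invented, and the SDK style is newer than the 2011-era API):

```java
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.S3Object;

public class CrossRegionCopy {
    public static void main(String[] args) {
        // Separate clients: each Region is managed independently of the others.
        AmazonS3 east = AmazonS3ClientBuilder.standard().withRegion(Regions.US_EAST_1).build();
        AmazonS3 west = AmazonS3ClientBuilder.standard().withRegion(Regions.US_WEST_1).build();

        // Per the quote above, AWS does not replicate this for you: the
        // application itself reads from one Region and writes to the other.
        S3Object source = east.getObject("my-backups-east", "db-snapshot.gz");
        ObjectMetadata meta = source.getObjectMetadata();
        west.putObject("my-backups-west", "db-snapshot.gz", source.getObjectContent(), meta);
    }
}
```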

Automation and Monitoring

Automation is the key. Your application needs to automatically pick up alerts on system events, and should be able to react to them automatically. As SimpleGeo’s architect states:

… Everything needs to be automated. Spinning up new instances, expanding your clusters, backups, restoring from backups, metrics, monitoring, configurations, deployments, etc. should all be automated …

It is interesting to see that even Netflix, which took pride in surviving the failure, admitted that the manual responses their engineers used this time will not work in the future, as they grow into a

… worldwide service with dozens of availability zones, even with top engineers we simply won’t be able to scale our responses manually …

Detailed alerting mechanisms are also essential for manual control of the system, as Bizo states:

… we have our own internal alarms and dashboards that give us up to the minute metrics such as request rate, cpu utilization etc. …

Avoiding ACID services and leveraging on NoSQL solutions

The CTO of SimpleGeo recommends avoiding reliance on ACID services, as it inhibits the distributed nature of the cloud. To achieve that, Twilio recommends to “relax consistency requirements”. Netflix implemented that by

… leveraging NoSQL solutions wherever possible to take advantage of the added availability and durability that they provide, even though it meant sacrificing some consistency guarantees …

Load Balancing

Use dynamic balancing, regardless of the zone. When balancing equally by zone, as Amazon’s Elastic Load Balancer (ELB) does, a zone failure can bring down the system. As Netflix describes:

… Netflix uses its own software load balancing service that does balance across instances evenly, independent of which zone they are in. Services using middle tier load balancing are able to handle uneven zone capacity with no intervention …
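
A toy sketch of the difference, in plain Java (not Netflix's actual load balancer): balance across the currently healthy instances themselves, so a dead zone simply means fewer instances in the rotation rather than a fixed share of traffic going nowhere.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.atomic.AtomicLong;

public class InstanceAwareBalancer {
    // Healthy instances across all zones; a health checker would keep this list current.
    private final List<String> healthyInstances = new CopyOnWriteArrayList<>();
    private final AtomicLong counter = new AtomicLong();

    public void setHealthyInstances(List<String> instances) {
        healthyInstances.clear();
        healthyInstances.addAll(instances);
    }

    // Round-robin over instances, ignoring which zone they sit in. If a zone
    // dies, its instances drop out of the list and traffic redistributes evenly.
    public String next() {
        List<String> snapshot = new ArrayList<>(healthyInstances);
        if (snapshot.isEmpty()) throw new IllegalStateException("no healthy instances");
        return snapshot.get((int) (counter.getAndIncrement() % snapshot.size()));
    }
}
```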

Conclusion

The recent AWS outage serves as an important lesson to the IT world, and an important milestone in the maturing of our use of the cloud. The most important thing to do now is to learn from the mistakes made by those who went down with AWS, as well as from the success of those who survived it, and come up with proper methodology, patterns, guidelines and best practices for doing it right, so that Skynet will not take down humanity.

Update: added links to my follow-up posts with some more thoughts I had following subsequent AWS outages, around Disaster Recovery Policies and Moving from Multi-AZ to Multi-Cloud.

Resources

· Netflix, http://techblog.netflix.com/2011/04/lessons-netflix-learned-from-aws-outage.html

· Joe Stump, SimpleGeo, http://stu.mp/2011/04/the-cloud-is-not-a-silver-bullet.html

· SimpleGeo, http://developers.simplegeo.com/blog/2011/04/26/how-simplegeo-stayed-up/

· Ted Theodoropoulos, http://blog.acrowire.com/cloud-computing/failing-to-plan-is-planning-to-fail


· SmugMug, http://don.blogs.smugmug.com/2011/04/24/how-smugmug-survived-the-amazonpocalypse

· Twilio, http://www.twilio.com/engineering/2011/04/22/why-twilio-wasnt-affected-by-todays-aws-issues/

· Bizo, http://dev.bizo.com/2011/04/how-bizo-survived-great-aws-outage-of.html

· Heroku, http://status.heroku.com/incident/151

· http://groups.google.com/group/cloud-computing/browse_thread/thread/e8079a54e6a8c4b9/72756bf9e587869d?show_docid=72756bf9e587869d

· Amazon, http://aws.amazon.com/message/65648/

· Amazon, http://aws.amazon.com/architecture/

Follow Dotan on Twitter!


Filed under Cloud, IaaS