Category Archives: PaaS

Enterprises Taking Off to the Cloud(s)

Cloud Deployment: the enterprise angle

The Cloud is no longer the exclusive realm of young and small start-up companies. Enterprises are now joining the game and examining how to migrate their application ecosystems to the cloud. A recent survey conducted by research firm MeriTalk showed that one-third of respondents plan to move some mission-critical applications to the cloud in the next year. Within two years, the IT managers said, they will move 26 percent of their mission-critical apps to the cloud, and in five years they expect 44 percent of their mission-critical apps to run in the cloud. Similar results arise from surveys conducted by HP, Cisco and others.

SaaS on the rise in enterprises

Enterprises are replacing their legacy applications with SaaS-based applications. A comprehensive survey published by Gartner last week, which surveyed nearly 600 respondents in over 10 countries, shows that

Companies are not only buying into SaaS (software as a service) more than ever, they are also ripping out legacy on-premises applications and replacing them with SaaS

IaaS providers see the potential of the migration of enterprises to the cloud and are adapting their offerings. Amazon, having spearheaded cloud infrastructure, leads the way in on-boarding enterprise applications to its AWS cloud. Only a couple of weeks ago Amazon announced that AWS is now certified to run SAP Business Suite (SAP’s CRM, ERP, SCM, PLM) for production applications. SAP Business Suite joins Microsoft SharePoint and other widely adopted enterprise business applications now supported by AWS, making it easier than ever for enterprises to migrate their IT to AWS.

Mission-critical apps call for PaaS

Running your CRM or ERP as SaaS in the cloud is very useful. But what about your enterprise’s mission-critical applications? Whether in the Telco, Financial Services, Healthcare or other domains, the core business of the organization’s IT usually lies in a complex ecosystem of hundreds of interacting applications. How can we on-board the entire ecosystem to the cloud in a simple and consistent manner? One approach that is gaining steam for such enterprise ecosystems is using PaaS, with Gartner predicting PaaS will increase from “three percent to 43 percent of all enterprises by 2015”.

Running your ecosystem of applications on a cloud-based platform provides a good way to build applications for the cloud in a consistent and unified manner. But what about legacy applications? Many of the mission-critical applications in enterprises have been around for quite some time, were not designed for the cloud, and are not supported by any cloud provider. Migrating such applications to the cloud often seems to call for a major overhaul, as stated in MeriTalk’s report on the Federal market:

Federal IT managers see the benefits of moving mission-critical applications to the cloud, but they say many of those applications require major re-engineering to modernize them for the cloud

The more veteran PaaS vendors such as Google App Engine and Heroku provide great productivity for developing new applications, but do not provide an answer for such legacy applications, which gets us back to square one: having to do the cloud migration ourselves. This migration work seems too daunting for most enterprises to even attempt, and that is one of the main inhibitors of cloud adoption despite the incentives.

It is only recently that organizations have started examining PaaS for critical functions and mission-critical applications. According to a recent survey conducted by Engine Yard among some 162 management and technical professionals from various companies:

PaaS is now seen as a way to boost agility, improve operational efficiency, and increase the performance, scalability, and reliability of mission-critical applications.

What IT organizations are looking for is a way to on-board their existing application ecosystem to the cloud in a consistent manner, as PaaS provides, but with IaaS-like low-level control over the environment and the application life cycle. IT organizations seek the means to keep working the way they are used to in the data center even when moving to the cloud. A new class of PaaS products has emerged over the past couple of years to answer this need, with products such as OpenShift, CloudFoundry and Cloudify. In my MySQL example discussion I demonstrated how the classic MySQL relational database can be on-boarded to the cloud using Cloudify without re-engineering MySQL, and without locking into any specific IaaS vendor API.

Summary

Enterprises are migrating their applications to the cloud at an increasing rate. Some applications are easily migrated using existing SaaS offerings. But mission-critical applications are complex and call for PaaS to on-board them to the cloud. If the mission-critical application contains legacy systems or requires low-level control of the OS and other environment configuration, then not every PaaS will fit the job. There are many cloud technologies, infrastructures, platforms, tools and vendors out there, and the right choice is not trivial. It is important to make a proper assessment of the enterprise system at hand and choose the right tool for the job, to ensure a smooth migration, avoid re-engineering as much as possible, and stay flexible to accommodate future evolution of the application.

If you are interested in consulting around assessment of your application’s on-boarding to the cloud, feel free to contact me directly or email ps@gigaspaces.com

Follow Dotan on Twitter!


Filed under cloud deployment, IaaS, PaaS

AWS Outage: Moving from Multi-Availability-Zone to Multi-Cloud

A couple of days ago Amazon Web Services (AWS) suffered a significant outage in their US-EAST-1 region. This has been the 5th major outage in that region in the past 18 months. The outage affected leading services such as Reddit, Netflix, Foursquare and Heroku.

How should you architect your cloud-hosted system to sustain such outages? Much has been written on this question during this outage, as well as during past outages. Many recommend basing your architecture on multiple AWS Availability Zones (AZs) to spread the risk. But during this outage we saw even multi-Availability-Zone applications severely affected. Even Amazon published during the outage that

Customers can launch replacement instances in the unaffected availability zones but may experience elevated launch latencies or receive ResourceLimitExceeded errors on their API calls, which are being issued to manage load on the system during recovery.

The reason is that there is underlying infrastructure that escalates the traffic from the affected AZ to other AZs in a way that overwhelms the system. In the case of this outage it was the AWS API platform that was rendered unavailable, as nicely explained in this great post:

The waterfall effect seems to happen, where the AWS API stack gets overwhelmed to the point of being useless for any management task in the region.

But it doesn’t really matter to us as users which exact piece of infrastructure failed in this specific outage. 18 months ago, during the first major outage, the cause was another infrastructure component, the Elastic Block Store (“EBS”) volumes, that cascaded the problem. Back then I wrote a post on how to architect your system to sustain such outages, and one of my recommendations was:

Spread across several public cloud vendors and/or private cloud

The rule of thumb in IT is that there will always be extreme and rare situations (and don’t forget, Amazon only commits to a 99.95% SLA) causing such major outages. And there will always be some common infrastructure that, under that extreme and rare situation, will carry the ripple effect of the outage to other Availability Zones in the region.

Of course, you can mitigate risk by spreading your system across several AWS Regions (e.g. between US-EAST and US-WEST), as they have much looser coupling, but as I stated in my previous post, that loose coupling comes with a price: it is up to your application to replicate data, using a separate set of APIs for each region. As Amazon itself states: “it requires effort on the part of application builders to take advantage of this isolation”.

The most resilient architecture would therefore be to mitigate risk by spreading your system across different cloud vendors, providing the best isolation level. The advantages in terms of resilience are clear. But how can that be implemented, given that the vendors are so different in their characteristics and APIs?

There are two approaches to deploying across multiple cloud vendors while staying cloud-vendor-agnostic:

  1. Open standards and APIs that are supported by multiple cloud vendors. That way you write your application against a common standard and get immediate support from all conforming cloud vendors; see the sketch just after this list. Examples of such emerging standards are OpenStack and jclouds. However, the Cloud is still a young domain with many competing standards and APIs, and it is yet to be determined which one will become the de-facto standard of the industry and where to “place our bet”.
  2. Open PaaS platforms that abstract the underlying cloud infrastructure and provide transparent support for all major vendors. You build your application on top of the platform, and leave it up to the platform to communicate with the underlying cloud vendors (whether public or private clouds, or even a hybrid). Examples of such platforms are CloudFoundry and Cloudify. I dedicated one of my posts to exploring how to build your application using such platforms.
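
To give a taste of the first approach, here is a minimal sketch in Groovy using the jclouds ComputeService API; the provider id, credentials and group name are placeholders, and switching cloud vendors is essentially a matter of changing the provider id:

import org.jclouds.ContextBuilder
import org.jclouds.compute.ComputeServiceContext

// "aws-ec2" could be swapped for any other jclouds-supported provider
// (e.g. a Rackspace or OpenStack-based cloud) without touching the rest of the code.
def provider = "aws-ec2"
def identity = "MY_ACCESS_KEY"      // placeholder credentials
def credential = "MY_SECRET_KEY"

ComputeServiceContext context = ContextBuilder.newBuilder(provider)
        .credentials(identity, credential)
        .buildView(ComputeServiceContext)
def compute = context.computeService

// Provision a single node from the provider's default template, in a group named "my-app".
def nodes = compute.createNodesInGroup("my-app", 1, compute.templateBuilder().build())
println "Provisioned node: ${nodes.first().id}"

context.close()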

Conclusion

System architects need to face the reality of the Service Level Agreements provided by Amazon and other cloud vendors and their limitations, and start designing for resilience by spreading across isolated environments, deploying DR sites, and taking similar redundancy measures to keep their service up and running and their data safe. Only that way can we guarantee that we will not be the next ones to fall off the 99.95% SLA.

This post was originally posted here.


Filed under cloud deployment, Disaster-Recovery, IaaS, PaaS, Solution Architecture, Uncategorized

Cloud Deployment: It’s All About Cloud Automation

Not only for modern applications

Many organizations are facing the challenge of migrating their IT to the cloud. But not many know how to actually approach this undertaking. In my recent post – Cloud Deployment: The True Story – I started sketching best practices for performing the cloud on-boarding task in a manageable fashion. But many think this methodology is only good for modern applications that were built with some dynamic/cloud orientation in mind, such as the Cassandra NoSQL DB from my previous blog, and that existing legacy application stacks cannot use the same pattern. For example, how different would the cloud on-boarding process be if I modified the PetClinic example application from my previous post to use a MySQL relational database instead of the modern Cassandra NoSQL clustered database? In this blog post I intend to demonstrate that cloud on-boarding of brownfield applications doesn’t have to be a huge, high-risk, monolithic migration project. Cloud on-boarding can take a pragmatic approach and can be performed as a gradual process that both mitigates the risk and lets you enjoy the immediate benefits of automation and easier management of your application’s operational lifecycle even before moving to the cloud.

MySQL case study

Let’s look at the above challenge of taking a standard, long-standing MySQL database and adapting it to the cloud. In fact, this challenge was already met by Amazon for their cloud. Amazon Web Services (AWS) includes the very popular Relational Database Service (RDS). This service is an adaptation of the MySQL database to the Amazon cloud. MySQL was not built or designed for a cloud environment, and yet it proved highly popular; even the new SimpleDB service that Amazon built from scratch with cloud orientation in mind was unable to overthrow RDS’s reign. The adaptation of MySQL to AWS was achieved using some pre-tuning of MySQL to the Amazon environment and extensive automation of the installation and management of the DB instances. The case study of Amazon RDS teaches us that on-boarding an existing application is not only doable but may even prove better than developing a new implementation from scratch to suit the cloud.

I will follow the MySQL example throughout this post and examine how this traditional pre-cloud database can be made ready for the cloud.

Automation is the key

We have our existing application stack running within our data center, knowing nothing of the cloud, and we would like to deploy it to the cloud. How shall we begin?

Automation is the key. Experts say automated application deployment tools are a requirement when hosting an application in the cloud. Once automation is in place, and given a PaaS layer that abstracts the underlying IaaS, your application can easily be migrated to any common cloud provider with minimal effort.

Furthermore, automation has a value in its own right. The emerging agile movements such as Agile ALM (Application Lifecycle Management) and DevOps endorse automation as a means to support the Continuous Deployment methodology and the ever-increasing frequency of releases to multiple environments. Some even go beyond DevOps, as far as NoOps. Forrester analyst Mike Gualtieri states that “NoOps is the peak of DevOps”, where “DevOps Is About Collaboration; NoOps Is About Automation”:

DevOps is a noble and necessary movement for immature organizations. Mature organizations have DevOps down pat. They aspire to automate to speed release increments.

This value of automation in providing more robust and agile management of your application is a no-brainer and will prove useful even before migrating to the cloud. It is also much easier to test and verify the automation while staying in the well-familiar environment in which the system has been working until now. Once you decide to migrate to the cloud, automation will make the process much simpler and smoother.

Automating application deployment

Let’s take the pragmatic approach. The first step is to automate the installation and deployment of the application in the current environment, namely within the same data center. We capture the operational flows of deploying the application and start automating these processes, either using scripts or using higher-level DevOps automation tools such as Chef and Puppet for Change and Configuration Management (CCM).

Let’s revisit our MySQL example: MySQL doesn’t come with built-in deployment automation. Let’s examine the manual process involved in installing MySQL from scratch and capture it in a simple shell script so we can launch the process automatically:
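
A minimal sketch of such a script, assuming a yum-based Linux distribution and a root password supplied through an environment variable, might look like this:

#!/bin/bash
# Minimal sketch: automate a basic MySQL installation. Package and service names assume a
# yum-based distribution (RedHat/CentOS/Fedora); the root password comes from MYSQL_ROOT_PASSWORD.
set -e

echo "Installing MySQL server..."
yum install -y mysql mysql-server

echo "Starting the MySQL service..."
service mysqld start

echo "Setting the MySQL root password..."
mysqladmin -u root password "${MYSQL_ROOT_PASSWORD}"

echo "MySQL installation completed successfully"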

This script is only the basics. A more complete automation should take care of additional concerns such as super-user permissions, updating ‘yum’, killing old processes and cleaning up previous installations, and maybe even handling differences between flavors of Linux (e.g. Ubuntu’s quirks…). You can check out the more complete version of the installation script for Linux here (mainstream Linux, e.g. RedHat, CentOS, Fedora), as well as a variant for Ubuntu (adapting to its quirks) here. This is open source and a work in progress so feel free to fork the GitHub repo and contribute!

Automating post-deployment operations

Once automation of the application deployment is complete we can then move to automating other operational flows of the application’s lifecycle, such as fail-over or shut down of the application. This aligns with cloud on-boarding, since “Deployment in the cloud is attached to the whole idea of running the application in the cloud”, as Paul Burns, president and analyst at Neovise, says:

People don’t say, ‘Should I automate my deployment in the cloud?’ It’s, ‘Should I run it in the cloud?’ Then, ‘How do I get it to the cloud?’

In our MySQL example we will of course want to automate the start-up of the MySQL service, stopping it, and even uninstalling it. More interestingly, we may also want to automate operational steps unique to MySQL, such as granting DB permissions, creating a new database, generating a dump (snapshot) of our database content, or importing a DB dump into our database. Let’s look at a snippet to capture and automate dump generation. This time we’ll use the Groovy scripting language, which provides higher-level utilities for automation and, better yet, is portable between OSs, so we don’t have the headache we described above with Ubuntu (not to mention Windows…):
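
A sketch of such a dump-generation step might look like this (the credentials, database name and dump path are illustrative placeholders that would normally come from the recipe’s configuration):

// Sketch: generate a MySQL dump from Groovy by driving the mysqldump utility.
def dbUser = "root"
def dbPassword = "password"
def dbName = "petclinic"
def dumpFile = new File("dumps", "${dbName}-${new Date().format('yyyyMMdd-HHmmss')}.sql")
dumpFile.parentFile.mkdirs()

def process = ["mysqldump", "-u", dbUser, "--password=${dbPassword}", dbName].execute()
dumpFile.withOutputStream { out ->
    process.waitForProcessOutput(out, System.err)   // stream the dump to the file, errors to the console
}
assert process.exitValue() == 0 : "mysqldump failed - see the error output above"
println "Generated dump at ${dumpFile.absolutePath}"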

Adding automation of these post-deployment steps will provide us with end-to-end automation of the entire lifecycle of the application from start-up to tear-down within our data center. Such automation can be performed using elaborate scripting, or can leverage modern open PaaS platforms such as CloudFoundry, Cloudify, and OpenShift to manage the full application lifecycle. For this MySQL automation example I used the Cloudify open source platform, where I modeled the MySQL lifecycle using a Groovy-based DSL as follows:
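
A sketch of such a recipe is shown below; the script file names and custom command names are illustrative, and the complete recipe is in the GitHub repo linked further down:

service {
    name "mysql"
    type "DATABASE"
    numInstances 1

    lifecycle {
        install "mysql_install.groovy"
        start "mysql_start.groovy"
        stop "mysql_stop.groovy"
        shutdown "mysql_uninstall.groovy"
    }

    customCommands ([
        "generateDump" : "mysql_generateDump.groovy",
        "importDump" : "mysql_importDump.groovy"
    ])
}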

As you can see, the lifecycle is pretty clear from the DSL, and maps to individual scripts similar to the ones we scripted above. We even have the custom commands for generating dumps and more. With the above in place, we can now install and start MySQL automatically with a single command line:

install-service mysql

Similarly, we can later perform other steps such as tearing it down or generating dumps with a single command line.
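
For example, assuming the custom command names from the sketch above, tearing the service down or generating a dump would look something like:

uninstall-service mysql
invoke mysql generateDump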

You can view the full automation of the MySQL lifecycle in the scripts and recipes in this GitHub repo.

Monitoring application metrics

We may also want to have better visibility into the availability and performance of our application for better operational decision-making, whether for manual processes (e.g. via logs or monitoring tools) or automated processes (e.g. auto-scaling based on load). This is becoming common practice in methodologies such as Application Performance Management (APM). This will also prove useful once in the cloud, as visibility is essential for successful cloud utilization. Rick Blaisdell, CTO at ConnectEDU, explains:

… the key to successful cloud utilization lays in the management and automation tools’ capability to provide visibility into ongoing capacity

In our MySQL example we can sample several interesting metrics that MySQL exposes (e.g. using the SHOW STATUS syntax or ‘mysqladmin’), such as the number of client connections, query counts or query throughput.
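
As a sketch, sampling a couple of these metrics from Groovy could look like this (assuming the MySQL JDBC driver is available; the credentials are placeholders):

import groovy.sql.Sql

// Sketch: sample a few MySQL status metrics, e.g. to feed a monitoring plug-in or an auto-scaling rule.
def sql = Sql.newInstance("jdbc:mysql://localhost:3306/", "root", "password", "com.mysql.jdbc.Driver")
try {
    def connections = sql.firstRow("SHOW GLOBAL STATUS LIKE 'Threads_connected'").Value
    def questions = sql.firstRow("SHOW GLOBAL STATUS LIKE 'Questions'").Value
    println "Client connections: ${connections}, statements executed: ${questions}"
} finally {
    sql.close()
}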

Summary

On-boarding existing applications to the cloud does not have to be a painful and high-risk migration process. On-boarding can be done in a gradual “baby-step” manner to mitigate risk.

The first step is automation. Automating your application’s management within your existing environment is a no-brainer, and has its own value in making your application deployment, management and monitoring easier and more robust.

Once automation of the full application lifecycle is in place, migrating your application to the cloud becomes smooth sailing, especially if you use PaaS platforms that abstract the underlying cloud provider specifics.

This post was originally posted here.

For the full MySQL cloud automation open source code see this public GitHub repo. Feel free to download, play around, and also fork and contribute.


Filed under Cloud, cloud automation, cloud deployment, DevOps, IaaS, PaaS

Cloud Deployment: The True Story

Everyone wants to be in the cloud. Organizations have internalized the notion and have plans in place to migrate their applications to the cloud in the immediate future. According to Cisco’s recent global cloud survey:

Presently, only 5 percent of IT decision makers have been able to migrate at least half of their total applications to the cloud. By the end of 2012, that number is expected to significantly rise, as one in five (20 percent) will have deployed over half of their total applications to the cloud.

But that survey also reveals that on-boarding your application to the cloud “is harder, and it takes longer than many thought”, as David Linthicum said in his excellent blog post summarizing the above Cisco survey. Taking standard enterprise applications that were designed to run in the data center and on-boarding them to the cloud is in essence a reincarnation of the well-known challenge of platform migration, which is never easy. But why is there a sense of extra difficulty in on-boarding to the cloud? The first reason David identifies for the extra difficulty is the misconception that the cloud is a “silver bullet”. Such a misconception can lead to a lack of proper design of the system, which may result in application outages, as I outlined in my previous blogs. Another reason David states for the extra difficulty is the lack of a well-defined process and best practices for on-boarding applications to the cloud:

What makes the migration to the cloud even more difficult is the lack of information about the process. Many new cloud users are lost in a sea of hype-driven desire to move to cloud computing, without many proven best practices and metrics.

It is about time for a field-proven process for on-boarding applications to the cloud. In this post I’d like to start examining the accumulated experience in on-boarding various types of applications to the cloud, and see if we can extract a simple process for the migration. This is of course based on my experience and that of my colleagues, and not the result of any academic research, so I would very much like it to serve as a cornerstone that triggers an open discussion in the community, sharing experience from different types of migration projects and applications, and iteratively refining the suggested process based on our joint experience.

Examining the n-tier enterprise application use case

As a first use case, it makes sense to examine a classic n-tier enterprise application. For the sake of discussion, I’d like to use common open-source modules, assuming they are well known, and to allow us to play with them freely. For the test-case application let’s take Spring’s PetClinic Sample Application and adapt it. We’ll use the Apache Tomcat web container and the Grails platform for the web and business-logic tiers, and the MongoDB NoSQL DB for the data tier, to simulate a Big Data use case. We can later add the Apache HTTP Server as a front-end load balancer. To those who are starting to wonder: I’m not invested in the Apache Foundation, just an open-source enthusiast.

The first step of on-boarding the application to the cloud is to identify the individual services which comprise the application, and the dependencies between these services. In this use case, since the application is well divided into tiers, it is quite easy to map the services from the tiers. The dependencies between the tiers are also quite clear. For example, the Tomcat instances depend on the back-end database. Mapping the application’s services and their dependencies will help us determine which VMs we should spin up, from which images, how many of each, and in which order. In later posts I’ll address additional benefits of the services paradigm.
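
To illustrate, such a mapping can be captured in an application descriptor. A sketch in the spirit of the Cloudify application DSL (the exact syntax may vary between versions) for our use case would look roughly like this:

application {
    name = "petclinic"

    service {
        name = "mongod"
    }

    service {
        name = "tomcat"
        dependsOn = ["mongod"]   // spin up the database tier before the web tier
    }
}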

Next let’s dive into the specific services and see what it takes to prepare them for on-boarding to the cloud. The first step is to identify the operational phases which comprise the service’s lifecycle. Typically, a service will undergo a lifecycle of install-init-start-stop-shutdown. We should capture the operational process for each such phase and formalize it into an automated DevOps process, for example in the form of a script. This process of capturing and formalizing the steps also helps expose many important issues that need to be addressed to enable the application to run in the cloud, and may even call for further lifecycle phases or intermediate steps. For example, in the case of Tomcat we may want to support deploying a new WAR file to Tomcat without restarting the container. Another example: with MongoDB we noticed that it may fail during start-up without any failure indication in the OS process status, so simple generic monitoring of the process status wasn’t enough, and we needed a more accurate and customized way to know when the service had successfully completed start-up and was ready to serve. Similar considerations arise with almost every application. I will touch on these considerations further in a follow-up post.
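
For instance, the MongoDB start-up concern can be addressed with a custom start-detection step in the service’s lifecycle. A sketch in the style of the public Cloudify recipes (the script names and port are illustrative) could look like this:

lifecycle {
    install "mongod_install.groovy"
    start "mongod_start.groovy"

    // Consider the service started only once it actually accepts connections on its port,
    // rather than trusting the OS process status alone.
    startDetection {
        ServiceUtils.isPortOccupied(27017)
    }
}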

With the break-down of the application into services, and the break-down of each service into its individual lifecycle stages, we have a good skeleton for automating the work on the cloud. You are welcome to review the result of the experimentation, available as open source under the CloudifySource GitHub. In my next post I will further examine the n-tier use case and discuss additional concerns that need to be addressed to bring it to a full solution.

 

Follow Dotan on Twitter!


Filed under Cloud, DevOps, IaaS, PaaS

Cloud integration and DevOps automation experience shared

The Cloud carries the message of automation to system architecture. The ability to spin up VMs on demand and take them down when no longer needed, as per the application’s real-time requirements and metrics, is the key to making the system truly elastic, scalable and self-healing. When using external IaaS providers, this also saves the hassle of managing the IT aspects of the on-demand infrastructure.

But with the potential of automation comes the challenge of integrating with the cloud provider (or providers) and automating the management of the VMs, dealing with DevOps aspects such as accessing the VM, transferring content to it, performing installations, running and stopping processes on it, coordinating between the services, etc. In this post I’d like to share with you some of my experience integrating with IaaS cloud providers, as part of my work with customers using the open source Cloudify PaaS product. Cloudify provides out-of-the-box integration with many popular cloud providers, such as Amazon EC2 and the Rackspace Cloud, as well as integration with the popular jclouds framework and the OpenStack open standard. But when encountering an emerging cloud provider or standard, you just need to roll up your sleeves and write your own integration. As a best practice, I use Java for the cloud integration and try to leverage well-proven and community-backed open source projects wherever possible. Let’s see how I did it.

First we need to integrate with the IaaS API to enable automation of resource allocation and deallocation. The main integration point is called a Cloud Driver, which is basically a Java class that adheres to a simple API for accessing the cloud for resources. Different clouds expose different APIs for accessing them. Programmatic access is native and easy to implement from the Cloud Driver code. A REST API is also quite popular, in which case I found the Jersey client open source library quite convenient for implementing a RESTful client. Jersey is based on the JAX-RS Java community standard, and offers easy handling of various flavors of calls, cookie handling, policy governance, etc. Cloudify offers a convenient Groovy-based DSL that enables you to configure the cloud provider’s parameters and properties in a declarative and easy-to-read manner, and takes care of the wiring for you. When writing your custom cloud driver you should make sure to sample and use the values from the Groovy configuration (you can add custom properties as needed), so that once the cloud driver is ready for a given cloud provider, you can use it in any deployment by simply setting the configuration. I used the source code of the cloud drivers on the CloudifySource public GitHub repository as a great reference for writing my own cloud driver.
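
To give a flavor of such a RESTful integration, here is a sketch of a “create server” call from a cloud driver using the Jersey 1.x client; the endpoint, headers and request body are purely illustrative, since every cloud exposes its own API:

import com.sun.jersey.api.client.Client
import com.sun.jersey.api.client.ClientResponse
import javax.ws.rs.core.MediaType

// Illustrative endpoint and payload - replace with the target cloud's actual API.
def authToken = "my-auth-token"
def client = Client.create()
def resource = client.resource("https://api.example-cloud.com/v1/servers")
def requestBody = '{"server": {"name": "app-node-1", "flavor": "medium", "image": "ubuntu-12.04"}}'

ClientResponse response = resource
        .type(MediaType.APPLICATION_JSON)
        .accept(MediaType.APPLICATION_JSON)
        .header("X-Auth-Token", authToken)
        .post(ClientResponse, requestBody)

assert response.status in [200, 201, 202] : "Server provisioning request failed with HTTP ${response.status}"
println "Provisioning request accepted: HTTP ${response.status}"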

The next DevOps aspect of the integration is accessing the VMs and managing them. Linux/Unix VMs are accessed via SSH for executing scripts, and via SFTP for file transfer. For a generic file transfer layer there’s Apache Commons VFS2 (Virtual File System), which offers a uniform view of files from various different sources (local FS, remote over HTTP, etc.). For remote command execution over SSH there’s JCraft’s JSch library, providing a Java implementation of SSH2. Authentication also needs to be addressed with the above. Luckily, many of these things that we used to do manually as part of the DevOps integration are now being taken care of by Cloudify. Indeed, there’s still a lot of integration headache with ports not opened, incorrect passwords, etc., which takes up most of the time, and more logging is definitely needed in Cloudify to figure things out and troubleshoot. What I did was simply fork the open source project from GitHub and debug right through the code, which has the side benefit of fixing and improving the project on the fly and contributing back to the community. I should mention that although the environments I integrated with were Linux-based, Cloudify also provides support for Windows-based systems (based on WinRM, CIFS and PowerShell).

One of the coolest things added in Cloudify 2.1 that was launched last week was the BYON (Bring Your Own Node) driver, which allows you to take your existing bare-metal servers and use them as managed resources for deployment by Cloudify, as if they were on-demand resources. This provides a neat answer to the growing demand for bare-metal cloud services. I’m still waiting for the opportunity to give this one a wet run with a customer in the field …

All in all, it turned out to be a straightforward task to integrate with a new cloud provider. Just make sure you have a stable environment and test code for consuming the APIs, use the existing examples as a reference, and you’re good to go.

 

Follow Dotan on Twitter!


Filed under Cloud, DevOps, IaaS, PaaS

Building Cloud Applications the Easy Way Using Elastic Application Platforms

Patterns, Guidelines and Best Practices Revisited

In my previous post I analyzed Amazon’s recent AWS outage and the patterns and best practices that enabled some of the businesses hosted on Amazon’s affected availability zones to survive the outage.

The patterns and best practices I presented are essential to guarantee robust and scalable architectures in general, and on the cloud in particular. Those who dismissed my latest post as an exaggeration of an isolated incident got affirmation of my statement last week, when Amazon found itself apologizing once again after its Cloud Drive service was overwhelmed by unpredictable peak demand for Lady Gaga’s newly-released album (99 cents, who wouldn’t buy it?!) and was rendered non-responsive. This failure to scale up/out to accommodate fluctuating demand raises the scalability concern in the public cloud, in addition to the resilience concern raised by the AWS outage.

Surprisingly, as obvious as the patterns I listed may seem, they are definitely not common practice, judging by the number of applications that went down when AWS did, and by how many other applications have similar issues on public cloud providers.

Why are such fundamental principles not prevalent in today’s architectures on the cloud?

One of the reasons these patterns are not prevalent in today’s cloud applications is that designing such architectures requires an architect experienced and confident in the areas of distributed and scalable systems. The typical public cloud APIs also require developers to perform complex coding and utilize various non-standard APIs that are usually not common knowledge. Similar difficulties are found in testing, operating, monitoring and maintaining such systems. This makes it quite difficult to implement the above patterns to ensure the application’s resilience and scalability, and diverts valuable development time and resources from the application’s business logic, which is the core value of the application.

How can we make the introduction of these patterns and best practices smoother and simpler? Can we get these patterns as a service to our application? We are all used to traditional application servers that provide our enterprise applications with underlying services such as database connection pooling, transaction management and security, and free us from worrying about these concerns so that we can focus on designing our business logic. Similarly, Elastic Application Platforms (EAPs) allow your application to easily employ the patterns and best practices I enumerated in my previous post for high availability and elasticity, without your having to become an expert in the field, allowing you to focus on your business logic.

So what is an Elastic Application Platform? Forrester defines an elastic application platform as:

An application platform that automates elasticity of application transactions, services, and data, delivering high availability and performance using elastic resources.

Last month Forrester published a review under the title “Cloud Computing Brings Demand For Elastic Application Platforms”. The review is the result of comprehensive research and spans 17 pages (a blog post introducing it can be found on the Forrester blog). It analyzes the difficulties companies encounter in implementing their applications on top of cloud infrastructure, and recognizes elastic application platforms as the emerging solution for a smooth path into the cloud. It then maps the potential providers of such solutions. For its research Forrester interviewed 17 vendor and user companies. Out of all the reviewed vendors, Forrester identified only three that are “offering comprehensive EAPs today”: Microsoft, SalesForce.com and GigaSpaces.

As Forrester did an amazing job in their research reviewing and comparing EAP solutions, I’ll avoid repeating that. Instead, I’d like to review the GigaSpaces EAP solution in light of the patterns discussed in my previous post, and see how building your solution on top of GigaSpaces enables you to introduce these patterns easily, without having to divert your focus from your business logic.

Patterns, Guidelines and Best Practices Revisited

Design for failure

Well, that’s GigaSpaces’ bread and butter. Whereas thinking about failure diverts you from your core business, in our case it is our core business. The GigaSpaces platform provides underlying services to enable high availability and elasticity, so that you don’t have to take care of them yourself. Now that we’ve established that, let’s see how it’s done.

Stateless and autonomous services

The GigaSpaces architecture segregates your application into Processing Units. A Processing Unit (PU) is an autonomous unit of your application. It can be a pure business-logic (stateless) unit, hold data in-memory, provide a web application, or mix these and other functions together. You can define the required Service Level Agreement (SLA) for your Processing Unit, and the GigaSpaces platform will make sure to enforce it. When your Processing Unit SLA requires high availability, the platform will deploy a (hot) backup instance (or multiple backups) of the Processing Unit, to which the PU will fail over in case the primary instance fails. When your application needs to scale out, the platform will add another instance of the Processing Unit (perhaps on a newly-provisioned virtual machine booted automatically by the platform). When your application needs to distribute data and/or data processing, the platform will shard the data evenly across several instances of the Processing Unit, so that each instance handles a subset of the data independently of the other instances.

Redundant hot copies spread across zones

You can divide your deployment environment into virtual zones. These zones can represent different data centers, different cloud infrastructure vendors, or any physical or logical division you see fit. Then you can tell the platform (as part of the SLA) not to place a primary and its backup instances of the Processing Unit in the same zone, thus making sure the data stored within the Processing Unit is backed up across two different zones. This will provide your application resilience across two data centers, two cloud vendors, or two regions, depending on your required resilience, all with a uniform development API. You want a higher level of resilience? Just define more zones and more backups for each PU.

Spread across several public cloud vendors and/or private cloud

GigaSpaces abstracts the details of the underlying infrastructure from your application. GigaSpaces’ Multi-Cloud Adaptor technology provides built-in integration with several major cloud providers, including the JClouds open source abstraction layer, thus supporting any cloud vendor that conforms to the JClouds standard. So all you need to do is plug your desired cloud providers into the platform, and your application logic remains agnostic to the cloud infrastructure details. Plugging in two vendors to ensure resilience now becomes just a matter of configuration. The integration with JClouds is an open-source project under OpenSpaces.org, so feel free to review it and even pitch in to extend and enhance the integration with cloud vendors.

Automation and Monitoring

GigaSpaces offers a powerful set of tools that allow you to automate your system. First, it offers the Elastic Processing Unit, which can automatically monitor CPU and memory utilization and employ corrective actions based on your defined SLA. GigaSpaces also offers a rich Administration and Monitoring API that enables administration and monitoring of all the GigaSpaces services and components, as well as the layers running beneath the platform such as the transport layer, machine and operating system. GigaSpaces also offers a web-based dashboard and a management center client. Another powerful tool for monitoring and automation is the administrative alerts that can be configured and then viewed through GigaSpaces or external tools (e.g. via SNMP traps).

Avoiding ACID services and leveraging on NoSQL solutions

GigaSpaces does not rule out SQL for querying your data. We believe that true NoSQL stands for “Not Only SQL”, and that SQL as a language is good for certain uses, whereas other uses require other query semantics. GigaSpaces supports some of the SQL language through its SQLQuery API or through standard JDBC. However, GigaSpaces also provides a rich set of alternative standards and protocols for accessing your data, such as a Map API for key/value access, a Document API for dynamic schemas, object-oriented access (through the proprietary Space API or standard JPA), and the Memcached protocol.

Another challenge of traditional relational databases is scaling data storage in a read-write environment. Distributed relational databases were enough to deal with read-mostly environments, but Web 2.0 brought social concepts into the web, with customers feeding data into the websites. Several NoSQL solutions try to address distributed data storage and querying. GigaSpaces provides this via its support for clustered topologies of the in-memory data grid (the “space”) and for distributing queries and execution using patterns such as Map/Reduce and event-driven design.

Load Balancing

The elastic nature of the GigaSpaces platform allows it to automatically detect the CPU and memory capacity of the deployment environment and optimize the load dynamically based on your defined SLA, instead of employing an arbitrary division of the data into fixed zones. Such a dynamic nature also allows your system to adjust in case of a failure of an entire zone (such as what happened with Amazon’s availability zones), so that your system doesn’t go down even in such extreme cases, and maintains optimal balance under the new conditions.

Furthermore, the GigaSpaces platform supports content-based routing, which allows for smart load balancing based on your business model and logic. Content-based routing allows your application to route related data to the same host and then execute entire business flows within the same JVM, thus avoiding network hops and complex distributed transaction locking that hinder your application’s scalability.

Conclusion

Most significant advancements do not happen in slow, gradual steps but rather in leaps. These leaps happen when the predominant conception crashes in the face of a new reality, leaving behind chaos and uncertainty; out of the chaos then emerges the next stage in the maturity of the system.

This is the case with the maturity process of the young cloud realm as well: the AWS outage was a major reality check that opened the eyes of the IT world. Systems crashed with AWS because they counted on the cloud infrastructure provider to handle the application’s high availability and elasticity using its generic logic. This concept proved to be wrong. Now the IT world is in confusion, and many discussions are taking place on whether the faith in the cloud was mistaken, with titles like “EC2 Failure Feeds Worries About Cloud Services”.

The next step in the cloud’s maturity was the realization that cloud infrastructure is just infrastructure, and that you need to implement your application correctly, using patterns and best practices such as the ones I raised in my previous post, to leverage the cloud infrastructure and gain high availability and elasticity.

The next step in the evolution is to start leveraging designated application platforms that handle these concerns for you and abstract the cloud away from your application, so that you can simply define your application’s SLA for high availability and elasticity, and leave it up to the platform to manipulate the cloud infrastructure to enforce that SLA, while you concentrate on writing your application’s business logic. As Forrester said:

… A new generation of application platforms for elastic applications is arriving to help remove this barrier to realizing cloud’s benefits. Elastic application platforms will reduce the skill required to design, deliver, and manage elastic applications, making automatic scaling of cloud available to all shops …

 

Follow Dotan on Twitter!


Filed under Cloud, PaaS