Category Archives: cloud deployment

Google-Amazon Fight Over Big Data In The Cloud Is Heating Up

Google today announced that it is releasing Cloud Dataflow in open beta. This big data analytics service was launched in closed beta at Google’s annual developer conference last June and received a major update last December, when Google released an open-source Java SDK to make it easier for developers to integrate with the new service.

Just last month Google announced that it was moving its Cloud Pub/Sub into public beta. This service for real-time messaging is yet another layer in the overall big data and analytics suite that Google has been building up.

Google’s strategy aims to cover the full big data and analytics cycle of Capture->Store->Process->Analyze from within Google Cloud Platform’s native services (such as Pub/Sub, Dataflow and BigQuery), as well as by plugging in popular external frameworks such as Hadoop, Spark and Kafka in a modular way.

Google Cloud Platform BI Suite

Google’s offering comes as a response to Amazon’s services in the big data and analytics area, such as Kinesis, Redshift, Elastic MapReduce and Lambda. It is interesting to note that last week, at the AWS Summit in San Francisco, Amazon announced that the Lambda service is generally available for production use. Amazon also maintains its smart strategy of tightening the integration between its services, now making it possible to run AWS Lambda functions in response to events in Amazon Cognito.

Amazon also puts emphasis on optimizing its infrastructure services for big data. A couple of weeks ago AWS launched a new type of EC2 instance with high-density storage, optimized for storing and processing multi-terabyte data sets.

Another very interesting announcement from AWS last week was the new Amazon Machine Learning service, which adds an important analytics dimension to the suite.

Amazon and Google are not the only players in big data cloud services. With big companies such as Oracle and Microsoft also in the game, this market is definitely heating up.

Follow Dotan on Twitter!


Filed under Big Data, Cloud, cloud deployment

Enterprises Taking Off to the Cloud(s)

Cloud Deployment: the enterprise angle

The Cloud is no longer the exclusive realm of young, small start-up companies. Enterprises are now joining the game and examining how to migrate their application ecosystems to the cloud. A recent survey conducted by the research firm MeriTalk showed that one-third of respondents plan to move some mission-critical applications to the cloud in the next year. Within two years, the IT managers surveyed said they will move 26 percent of their mission-critical apps to the cloud, and in five years they expect 44 percent of their mission-critical apps to run in the cloud. Similar results arise from surveys conducted by HP, Cisco and others.

SaaS on the rise in enterprises

Enterprises are replacing their legacy applications with SaaS-based applications. A comprehensive survey published by Gartner last week, covering nearly 600 respondents in over 10 countries, shows that

Companies are not only buying into SaaS (software as a service) more than ever, they are also ripping out legacy on-premises applications and replacing them with SaaS

IaaS providers see the potential of enterprises migrating to the cloud and are adapting their offerings. Amazon, having spearheaded cloud infrastructure, leads in on-boarding enterprise applications onto its AWS cloud. Only a couple of weeks ago Amazon announced that AWS is now certified to run SAP Business Suite (SAP’s CRM, ERP, SCM, PLM) for production applications. That joins Microsoft SharePoint and other widely-adopted enterprise business applications now supported by AWS, making it easier than ever for enterprises to migrate their IT to AWS.

Mission-critical apps call for PaaS

Running your CRM or ERP as SaaS in the cloud is very useful. But what about your enterprise’s mission-critical applications? Whether in the Telco, Financial Services, Healthcare or other domains, the core business of the organization’s IT usually takes the form of a complex ecosystem of hundreds of interacting applications. How can we on-board the entire ecosystem to the cloud in a simple and consistent manner? One approach that is gaining steam for such enterprise ecosystems is using PaaS. Gartner predicts PaaS adoption will increase from “three percent to 43 percent of all enterprises by 2015”.

Running your ecosystem of applications on a cloud-based platform provides a good way to build applications for the cloud in a consistent and unified manner. But what about legacy applications? Many of the mission-critical applications in enterprises have been around for quite some time, were not designed for the cloud, and are not supported by any cloud provider. Migrating such applications to the cloud often seems to call for a major overhaul, as stated in MeriTalk’s report on the Federal market:

Federal IT managers see the benefits of moving mission-critical applications to the cloud, but they say many of those applications require major re-engineering to modernize them for the cloud

The more veteran PaaS vendors, such as Google App Engine and Heroku, provide great productivity for developing new applications, but they do not provide an answer for such legacy applications, which gets us back to square one: having to do the cloud migration ourselves. This migration work seems too daunting for most enterprises to even attempt, and that is one of the main inhibitors of cloud adoption despite the incentives.

It is only recently that organizations have started examining PaaS for critical functions and mission-critical applications. According to a recent survey conducted by Engine Yard among some 162 management and technical professionals from various companies:

PaaS is now seen as a way to boost agility, improve operational efficiency, and increase the performance, scalability, and reliability of mission-critical applications.

What IT organizations are looking for is a way to on-board their existing application ecosystem to the cloud in a consistent manner, as PaaS provides, while keeping IaaS-like low-level control over the environment and the application lifecycle. IT organizations seek the means to keep doing things the way they are used to in the data center, even when moving to the cloud. A new class of PaaS products has emerged over the past couple of years to answer this need, with products such as OpenShift, CloudFoundry and Cloudify. In my MySQL example discussion I demonstrated how the classic MySQL relational database can be on-boarded to the cloud using Cloudify, without re-engineering MySQL and without locking into any specific IaaS vendor API.

Summary

Enterprises are migrating their applications to the cloud at an increasing rate. Some applications are easily migrated using existing SaaS offerings. But mission-critical applications are complex and call for PaaS to on-board them to the cloud. If the mission-critical application contains legacy systems, or requires low-level control of the OS and other environment configuration, then not every PaaS will fit the job. There are many cloud technologies, infrastructures, platforms, tools and vendors out there, and the right choice is not trivial. It is important to make a proper assessment of the enterprise system at hand and choose the right tool for the job, to ensure a smooth migration, avoid re-engineering as much as possible, and stay flexible enough to accommodate the future evolution of the application.

If you are interested in consulting on assessing your application’s on-boarding to the cloud, feel free to contact me directly or email ps@gigaspaces.com

Follow Dotan on Twitter!


Filed under cloud deployment, IaaS, PaaS

AWS Outage: Moving from Multi-Availability-Zone to Multi-Cloud

A couple of days ago Amazon Web Services (AWS) suffered a significant outage in its US-EAST-1 region. This was the fifth major outage in that region in the past 18 months. The outage affected leading services such as Reddit, Netflix, Foursquare and Heroku.

How should you architect your cloud-hosted system to sustain such outages? Much has been written on this question during this outage, as well as during past ones. Many recommend basing your architecture on multiple AWS Availability Zones (AZs) to spread the risk. But during this outage we saw even multi-Availability-Zone applications severely affected. Even Amazon published during the outage that

Customers can launch replacement instances in the unaffected availability zones but may experience elevated launch latencies or receive ResourceLimitExceeded errors on their API calls, which are being issued to manage load on the system during recovery.

The reason is that there is underlying infrastructure that shifts the traffic from the affected AZ to the other AZs in a way that overwhelms the system. In the case of this outage it was the AWS API Platform that was rendered unavailable, as nicely explained in this great post:

The waterfall effect seems to happen, where the AWS API stack gets overwhelmed to the point of being useless for any management task in the region.

But it doesn’t really matter to us as users exactly which piece of infrastructure failed in this specific outage. Eighteen months ago, during the first major outage, the cause was another infrastructure component, the Elastic Block Store (“EBS”) volumes, which cascaded the problem. Back then I wrote a post on how to architect your system to sustain such outages, and one of my recommendations was:

Spread across several public cloud vendors and/or private cloud

The rule of thumb in IT is that there will always be extreme and rare situations (and don’t forget, Amazon only commits to a 99.995% SLA) causing such major outages. And there will always be some common infrastructure that, under that extreme and rare situation, will carry the ripple effect of the outage to other Availability Zones in the region.

Of course, you can mitigate risk by spreading your system across several AWS Regions (e.g. between US-EAST and US-WEST), as they are much more loosely coupled, but as I stated in my previous post, that loose coupling comes with a price: it is up to your application to replicate data, using a separate set of APIs for each region. As Amazon themselves state: “it requires effort on the part of application builders to take advantage of this isolation”.

The most resilient architecture would therefore mitigate risk by spreading your system across different cloud vendors, providing the best isolation level. The advantages in terms of resilience are clear. But how can that be implemented, given that the vendors are so different in their characteristics and APIs?

There are two approaches to deploying across multiple cloud vendors while staying cloud-vendor-agnostic:

  1. Open standards and APIs that are supported by multiple cloud vendors. That way you write your application against a common standard and get immediate support from all conforming cloud vendors. Examples of such emerging standards are OpenStack and JClouds (see the sketch after this list). However, the Cloud is still a young domain with many competing standards and APIs, and it is yet to be determined which one will become the industry’s de-facto standard and where to “place our bet”.
  2. Open PaaS platforms that abstract the underlying cloud infrastructure and provide transparent support for all major vendors. You build your application on top of the platform and leave it up to the platform to communicate with the underlying cloud vendors (whether public clouds, private clouds, or even a hybrid). Examples of such platforms are CloudFoundry and Cloudify. I dedicated one of my posts to exploring how to build your application using such platforms.
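To make the first approach concrete, here is a minimal sketch of vendor-agnostic code using the jclouds compute abstraction. The dependency version and the credentials are assumptions; the point is that only the provider id changes between vendors.

// Minimal sketch: list compute nodes through the jclouds abstraction (assumed version).
@Grab('org.apache.jclouds:jclouds-all:1.8.1')
import org.jclouds.ContextBuilder
import org.jclouds.compute.ComputeServiceContext

def provider = "aws-ec2"   // swap for e.g. "openstack-nova" without touching the rest of the code
def context = ContextBuilder.newBuilder(provider)
        .credentials("IDENTITY", "CREDENTIAL")   // placeholder credentials
        .buildView(ComputeServiceContext)
def compute = context.computeService

// the same call works against any provider supported by jclouds
compute.listNodes().each { node ->
    println "${node.id} (${node.name})"
}
context.close()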

Conclusion

System architects need to face the reality of the Service Level Agreements provided by Amazon and other cloud vendors, and their limitations, and start designing for resilience: spreading across isolated environments, deploying DR sites, and applying similar redundancy measures to keep their services up and running and their data safe. Only that way can we make sure we will not be the next ones to fall off the 99.995% SLA.

This post was originally posted here.


Filed under cloud deployment, Disaster-Recovery, IaaS, PaaS, Solution Architecture, Uncategorized

Cloud Deployment: It’s All About Cloud Automation

Not only for modern applications

Many organizations are facing the challenge of migrating their IT to the cloud, but not many know how to actually approach this undertaking. In my recent post – Cloud Deployment: The True Story – I started sketching best practices for performing the cloud on-boarding task in a manageable fashion. But many think this methodology is only good for modern applications that were built with some dynamic/cloud orientation in mind, such as the Cassandra NoSQL DB from my previous post, and that existing legacy application stacks cannot use the same pattern.

For example, how different would the cloud on-boarding process be if I modified the PetClinic example application from my previous post to use a MySQL relational database instead of the modern Cassandra NoSQL clustered database? In this blog post I intend to demonstrate that cloud on-boarding of brownfield applications doesn’t have to be a huge, high-risk, monolithic migration project. Cloud on-boarding can take a pragmatic approach and be performed as a gradual process that both mitigates the risk and lets you enjoy the immediate benefits of automation and easier management of your application’s operational lifecycle even before moving to the cloud.

MySQL case study

Let’s look at the above challenge of taking a standard, long-standing MySQL database and adapting it to the cloud. In fact, this challenge has already been met by Amazon for its cloud. Amazon Web Services (AWS) includes the very popular Relational Database Service (RDS). This service is an adaptation of the MySQL database to the Amazon cloud. MySQL was not built or designed for a cloud environment, and yet it proved highly popular; even the new SimpleDB service that Amazon built from scratch with cloud orientation in mind was unable to overthrow RDS’s reign. The adaptation of MySQL to AWS was achieved with some pre-tuning of MySQL to the Amazon environment and extensive automation of the installation and management of the DB instances. The Amazon RDS case study teaches us that on-boarding an existing application is not only doable but may even prove better than developing a new implementation from scratch to suit the cloud.

I will follow the MySQL example throughout this post and examine how this traditional pre-cloud database can be made ready for the cloud.

Automation is the key

We have our existing application stack running within our data center, knowing nothing of the cloud, and we would like to deploy it to the cloud. How shall we begin?

Automation is the key. Experts say automated application deployment tools are a requirement when hosting an application in the cloud. Once automation is in place, and given a PaaS layer that abstracts the underlying IaaS, your application can easily be migrated to any common cloud provider with minimal effort.

Furthermore, automation has a value in its own right. Emerging agile movements such as Agile ALM (Application Lifecycle Management) and DevOps endorse automation as a means to support the Continuous Deployment methodology and the ever-increasing frequency of releases to multiple environments. Some even go beyond DevOps, as far as NoOps. Forrester analyst Mike Gualtieri states that “NoOps is the peak of DevOps”, where “DevOps Is About Collaboration; NoOps Is About Automation”:

DevOps is a noble and necessary movement for immature organizations. Mature organizations have DevOps down pat. They aspire to automate to speed release increments.

The value of automation in providing more robust and agile management of your application is a no-brainer, and it will prove useful even before migrating to the cloud. It is also much easier to test and verify the automation while staying in the familiar environment in which the system has been running until now. Once you decide to migrate to the cloud, automation will make the process much simpler and smoother.

Automating application deployment

Let’s take the pragmatic approach. The first step is to automate the installation and deployment of the application in the current environment, namely within the same data center. We capture the operational flows of deploying the application and start automating these processes, either using scripts or using higher-level DevOps automation tools such as Chef and Puppet for Change and Configuration Management (CCM).

Let’s revisit our MySQL example: MySQL doesn’t come with built-in deployment automation. Let’s examine the manual processes involved with installing MySQL DB from scratch and capture that in a simple shell script so we can launch the process automatically:
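A minimal sketch of such a script is shown below, assuming a yum-based Linux (e.g. CentOS) and a root password supplied by the caller; package and service names may differ per distribution.

#!/bin/bash
# Minimal sketch: install, start and secure MySQL on a yum-based Linux (e.g. CentOS).
# Assumptions: sudo privileges, and MYSQL_ROOT_PASSWORD exported by the caller.
set -e

echo "Installing the MySQL server package..."
sudo yum install -y mysql-server

echo "Starting the MySQL service..."
sudo service mysqld start

echo "Setting the root password..."
mysqladmin -u root password "$MYSQL_ROOT_PASSWORD"

echo "MySQL installation completed."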

This script is only the basics. A more complete automation should take care of additional concerns such as super-user permissions, updating ‘yum’, killing old processes and cleaning up previous installations, and maybe even handling differences between flavors of Linux (e.g. Ubuntu’s quirks…). You can check out the more complete version of the installation script for Linux here (mainstream Linux, e.g. RedHat, CentOS, Fedora), as well as a variant for Ubuntu (adapting to its quirks) here. This is open source and a work in progress so feel free to fork the GitHub repo and contribute!

Automating post-deployment operations

Once automation of the application deployment is complete, we can move on to automating other operational flows of the application’s lifecycle, such as failover or shutdown. This aligns with cloud on-boarding, since “deployment in the cloud is attached to the whole idea of running the application in the cloud”, as Paul Burns, president and analyst at Neovise, says:

People don’t say, ‘Should I automate my deployment in the cloud?’ It’s, ‘Should I run it in the cloud?’ Then, ‘How do I get it to the cloud?’

In our MySQL example we will of course want to automate starting up the MySQL service, stopping it and even uninstalling it. More interestingly, we may also want to automate operational steps unique to MySQL, such as granting DB permissions, creating a new database, generating a dump (snapshot) of our database content or importing a DB dump into our database. Let’s look at a snippet that captures and automates dump generation. This time we’ll use the Groovy scripting language, which provides higher-level utilities for automation and, better yet, is portable between operating systems, so we avoid the Ubuntu headaches described above (not to mention Windows…):
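Here is a minimal sketch of such a dump-generation step; the database name, credentials and target path are illustrative assumptions, and a full recipe would read them from its configuration.

// Minimal sketch: generate a MySQL dump from Groovy using the mysqldump CLI.
// Assumptions: mysqldump is on the PATH; credentials and database name are illustrative.
def dbUser = "root"
def dbPassword = System.getenv("MYSQL_ROOT_PASSWORD") ?: "secret"   // illustrative placeholder
def dbName = "petclinic"
def dumpFile = new File(System.getProperty("java.io.tmpdir"), "${dbName}-${System.currentTimeMillis()}.sql")

println "Generating dump of ${dbName} into ${dumpFile}"
def process = ["mysqldump", "-u${dbUser}", "-p${dbPassword}", dbName].execute()
def out = new FileOutputStream(dumpFile)
def err = new ByteArrayOutputStream()
process.waitForProcessOutput(out, err)   // stream the dump to the file, capture errors
out.close()

if (process.exitValue() != 0) {
    throw new RuntimeException("mysqldump failed: ${err}")
}
println "Dump generated successfully"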

Adding automation of these post-deployment steps will provide us with end-to-end automation of the entire lifecycle of the application from start-up to tear-down within our data center. Such automation can be performed using elaborate scripting, or can leverage modern open PaaS platforms such as CloudFoundry, Cloudify, and OpenShift to manage the full application lifecycle. For this MySQL automation example I used the Cloudify open source platform, where I modeled the MySQL lifecycle using a Groovy-based DSL as follows:
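A sketch of such a recipe might look like the following, structured after the Cloudify 2.x Groovy recipe DSL; the script file names are hypothetical placeholders for scripts like the ones discussed above.

service {
    name "mysql"
    type "DATABASE"
    numInstances 1

    lifecycle {
        // each lifecycle event maps to a standalone script, like the ones shown above
        install "mysql_install.groovy"
        start "mysql_start.groovy"
        stop "mysql_stop.groovy"
        shutdown "mysql_uninstall.groovy"
    }

    // custom operational commands, invokable on demand from the Cloudify shell
    customCommands ([
        "generateDump" : "mysql_generateDump.groovy",
        "importDump"   : "mysql_importDump.groovy"
    ])
}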

As you can see, the lifecycle is pretty clear from the DSL, and maps to individual scripts similar to the ones we scripted above. We even have the custom commands for generating dumps and more. With the above in place, we can now install and start MySQL automatically with a single command line:

install-service mysql

Similarly, we can later perform other steps such as tearing it down or generating dumps with a single command line.
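For illustration, those follow-up operations might look like this from the Cloudify shell (command and recipe names as assumed in the sketch above):

invoke mysql generateDump
uninstall-service mysql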

You can view the full automation of the MySQL lifecycle in the scripts and recipes in this GitHub repo.

Monitoring application metrics

We may also want to have better visibility into the availability and performance of our application for better operational decision-making, whether for manual processes (e.g. via logs or monitoring tools) or automated processes (e.g. auto-scaling based on load). This is becoming common practice in methodologies such as Application Performance Management (APM). This will also prove useful once in the cloud, as visibility is essential for successful cloud utilization. Rick Blaisdell, CTO at ConnectEDU, explains:

… the key to successful cloud utilization lies in the management and automation tools’ capability to provide visibility into ongoing capacity

In our MySQL example we can sample several interesting metrics that MySQL exposes (e.g. using the SHOW STATUS syntax or ‘mysqladmin’), such as the number of client connections, query counts or query throughput.
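As an illustration, a monitoring probe could sample a few of these counters with a short Groovy snippet like the one below; the credentials are assumptions, and a real plugin would report the values to its monitoring framework rather than print them.

// Minimal sketch: sample MySQL status counters via the mysqladmin CLI (assumed to be on the PATH).
def rootPassword = System.getenv("MYSQL_ROOT_PASSWORD") ?: "secret"   // illustrative placeholder
def process = ["mysqladmin", "-uroot", "-p${rootPassword}", "extended-status"].execute()
def output = process.text
process.waitFor()

// pick a few interesting counters out of the table mysqladmin prints
def wanted = ["Threads_connected", "Queries", "Slow_queries"]
def metrics = [:]
output.eachLine { line ->
    def parts = line.tokenize("|")*.trim()
    if (parts.size() >= 2 && wanted.contains(parts[0])) {
        metrics[parts[0]] = parts[1]
    }
}
println metrics   // e.g. [Threads_connected:3, Queries:1042, Slow_queries:0]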

Summary

On-boarding existing applications to the cloud does not have to be a painful, high-risk migration process. On-boarding can be done in a gradual, “baby-step” manner to mitigate risk.

The first step is automation. Automating your application’s management within your existing environment is a no-brainer, and has its own value in making your application deployment, management and monitoring easier and more robust.

Once automation of the full application lifecycle is in place, migrating your application to the cloud becomes smooth sailing, especially if you use PaaS platforms that abstract the underlying cloud provider specifics.

This post was originally posted here.

For the full MySQL cloud automation open source code see this public GitHub repo. Feel free to download, play around, and also fork and contribute.


Filed under Cloud, cloud automation, cloud deployment, DevOps, IaaS, PaaS