Tag Archives: OpenStack

Amazon and VMware: Strange Allies In The Game Of Clouds

Before cloud there was datacenter virtualization. The king of virtualization was VMware, who had ruled enterprise datacenters uninterrupted for decades. Then a new force arose – Public Cloud – ruled by the reincarnated online retailer Amazon, which swiftly won the hearts of startups and web apps alike. As enterprises started exploring the cloud, VMware adapted its offering in the form of Private Cloud in an attempt to keep the lucrative enterprises under its dominion, while Amazon fought to convert them to its public cloud with relentless price cuts and innovative services. War was fierce.

But in the Game of Clouds strange alliances are formed…

Now VMware is striking an alliance with Amazon. The new strategic partnership announced this month brings forth a hybrid child: VMware Cloud on AWS, which promises to let enterprises have their cake and eat it too – keeping them working in their good-old VMware vSphere environment while letting VMware operate it for them as a managed service on the Amazon Web Services (AWS) bare metal infrastructure. The new service is currently in Technology Preview, with general availability expected mid-2017.


What could bring together these bitter rivals? In the land of private cloud VMware has been suffering fierce competition from the OpenStack open source community, so fierce that ultimately VMware jumped on the OpenStack bandwagon. Flanked by OpenStack on the private cloud side and by Amazon on the public cloud side, VMware came to realize what HP, Verizon and others learned the hard way – that hybrid cloud can be the alternative. A similar strategy shift got Rackspace acquired a couple of months ago.

And what’s Amazon’s angle with VMware, you ask? Amazon has been eyeing the lucrative enterprises for a long time, but has largely failed to convert them to the public cloud. Microsoft, Amazon’s public cloud competitor, identified that and launched Azure Stack (currently in Technical Preview 2), a flavor of its Azure public cloud that extends into the enterprise’s datacenter. Amazon has so far been dogmatic in its public cloud vision, preaching full migration to the public cloud and refusing to provide variants for private cloud. But market forces are stronger, and Amazon’s way to climb down from the proverbial tree came in the form of VMware. With Microsoft’s Azure Stack expected to reach general availability in mid-2017, Amazon had to prepare its counter move towards the same mark.

In the Game of Clouds great forces are at play: private and public clouds, open source communities and vendor-locked solutions, incumbents and startups. And everyone’s eyeing the holy grail of enterprises.

Who will win the Cloud Throne?


Follow Horovits on Twitter!



Filed under Cloud

HP Acquires Stackato Aiming to Bring Hybrid Cloud to Enterprises

Enterprises are looking to transform their IT into a leaner operation, in the spirit of the recent trends of cloud computing and hybrid clouds, DevOps and containers, which have emerged from the open-source communities. Major IT vendors have identified this potential and are putting a lot of effort into developing Platform-as-a-Service (PaaS) offerings to enable that transition for enterprises.

Last October HP launched (as part of the HP Helion cloud platform) its own PaaS offering, code-named HP Helion Development Platform, based on the CloudFoundry open-source PaaS. Now HP has taken a step further and acquired Stackato, a platform based on CloudFoundry and Docker containers, to enhance its PaaS offering with support for the hybrid cloud model and to speed up delivery times and ease IT configuration. According to the statement:

HP’s acquisition of Stackato further demonstrates our commitment to Cloud Foundry technology and broadens our hybrid cloud capabilities.

While HP is betting heavily on CloudFoundry, it is also betting seriously on containers. A month and a half ago HP joined the Open Container Initiative (OCI). The open approach of OCI is also aligned with HP’s strategic choice to base HP Helion on open-source, community-backed projects such as CloudFoundry, Eucalyptus (which HP acquired last year) and OpenStack. It is interesting to note that the OpenStack community has also addressed hybrid cloud and containers in its recent releases.

HP is not the only one to recognize the trend. Cloud and containers have been growing closer to bring hybrid IT to enterprises, with all major players rolling out combined offerings, including IBM, Google, Amazon, Microsoft and VMware.

Last month HP filed to split into two companies – HP Enterprise and HP Inc. – to enable each one to be more focused and flexible “to adapt quickly to market and customer dynamics”. The newly-formed HP Enterprise will focus on the enterprise business including servers, storage, networking, converged systems and Helion cloud platform, without the burden of HP’s traditional printers and PC businesses (left for HP Inc.). In this fast-paced, dynamic and highly competitive realm of agile IT and cloud computing, HP Enterprise would need that flexibility and agility to gain the lead.


* Update: a couple of months after the above acquisition HP announced it is shutting down its public cloud and focusing on its hybrid cloud offering, in line with the above. You can read more in this post.

Follow Dotan on Twitter!


Filed under Cloud, cloud automation, DevOps, PaaS

OpenStack Kilo Is Out, Major Updates on Bare Metal, Containers, NFV And More

OpenStack’s latest release, code-named Kilo, is out, and it brings some interesting updates. The most prominent part is by far Ironic, OpenStack’s bare-metal provisioning support, which is now officially released. It enables provisioning of VM-based as well as container-based deployments on top of bare-metal machines using the same familiar APIs and conventions. The community keeps promoting containers and making them first-class citizens in the new features.


On the networking side, network virtualization gets attention, with port security for OpenVSwitch, VLAN transparency and MTU API extensions. IPv6 is now also supported, providing the extended address space to fit the current demand and the proliferation of connected devices that comes with the Internet of Things (IoT).

On the storage front, Swift now supports erasure-coded storage in addition to replicas, so users can choose the right tradeoff per case. Kilo also brings container-level temporary URLs, which allow time-limited access to a set of objects in a container, as well as improvements to global cluster replication, storage policy metrics and full Chinese translation.
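To make the temporary URL feature a bit more concrete, here is a minimal Java sketch of how a Swift-style temporary URL is typically signed: an HMAC-SHA1 over the HTTP method, expiry time and object path, using the temp-URL key set on the account or (with the new container-level support) on the container. The endpoint, account, container and key names below are hypothetical.

    import java.nio.charset.StandardCharsets;
    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;

    public class SwiftTempUrlSketch {
        public static void main(String[] args) throws Exception {
            String key = "container-temp-url-key";               // hypothetical temp-URL key
            String path = "/v1/AUTH_demo/reports/summary.csv";   // hypothetical account/container/object
            long expires = System.currentTimeMillis() / 1000 + 3600; // link valid for one hour

            // Swift signs "METHOD \n EXPIRES \n PATH" with HMAC-SHA1
            String body = "GET\n" + expires + "\n" + path;
            Mac mac = Mac.getInstance("HmacSHA1");
            mac.init(new SecretKeySpec(key.getBytes(StandardCharsets.UTF_8), "HmacSHA1"));
            StringBuilder sig = new StringBuilder();
            for (byte b : mac.doFinal(body.getBytes(StandardCharsets.UTF_8))) {
                sig.append(String.format("%02x", b));
            }

            // Anyone holding this URL can GET the object until it expires
            System.out.println("https://swift.example.com" + path
                    + "?temp_url_sig=" + sig + "&temp_url_expires=" + expires);
        }
    }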

Other aspects addressed are support for hybrid-cloud and multi-cloud models in the Keystone identity management service, and automated provisioning and orchestration of entire application environments from a single template, much like Amazon’s CloudFormation.

No less important is the great emphasis given to the maturity, stability, scalability and usability of OpenStack, subjects of much debate and major barriers to adoption by organizations. It remains to be seen whether that brings smoother adoption of OpenStack in the industry for production use cases.

Read the full details here.


Filed under Cloud

Cloud and Docker Grow Closer To Bring Hybrid IT To Enterprises

Cloud computing and Linux containers (a la Docker and LXC) are two hot trends in the modern software delivery paradigm. They are both backed by strong and active global open communities, have rich tooling, and new startup companies and projects are building their solutions with them. But enterprises, while acknowledging the new paradigms, are still struggling to implement them. One of the biggest concerns of enterprises is their vast infrastructure, which spans multiple systems, technologies, vendors, data centers and even (private/public) clouds. Enterprises therefore require support for hybrid IT, with adequate automation to manage it. Gartner research shows that 75% of clients will have some type of hybrid strategy in place by the end of the year.

Puppet, the popular DevOps tool used for automating complex deployment environments, identified the need for provisioning and managing such a mesh of cloud, containers and even bare metal. This week, in its latest Puppet Enterprise release, it added support for AWS, Docker and bare metal.

IBM, at its InterConnect 2015 conference this week, announced an update to the container service beta on Bluemix, providing capabilities such as pushing and pulling containers between on-premises and off-premises services to support private and public clouds uniformly, and a hosted private registry for container images for enterprise security and privacy.

The OpenStack community has been making efforts over the past few releases to integrate containers, initially treating them as if they were virtual machine instances and later with full support, so you can deploy containers through Heat orchestration just like you deploy applications and services. The Docker driver for Nova (OpenStack Compute), which has been developed out of tree so far, is expected to return to mainline in the upcoming ‘Kilo’ release next month.

Public cloud vendors are not staying behind either. Google adopted the technology internally, saying that “everything at Google runs in a container”, as well as developing the Kubernetes open source project for orchestrating pods of Docker containers, and actively pushing it into OpenStack via its recently announced partnership with Mirantis. Amazon, at its last re:Invent conference, announced its EC2 Container Service, which lets you start, stop, query and manage container-enabled applications via API and using the rich AWS set of services.

VMware, which rules the traditional enterprise virtualization domain, has made moves to adopt both open cloud and containers. First, it started getting actively involved in the communities and contributing code. On the cloud front VMware launched an OpenStack-compatible version of its private cloud. On the containers front VMware partnered with Docker, Google and Pivotal to offer enterprises a simplified path to containers over a hybrid cloud model. There are others exploring this field, such as RedHat, Cisco and even Microsoft, offering container integration at all levels, from hardware through operating systems, hypervisors and cloud management systems, to orchestration and monitoring tools.

We shall be seeing a growing number of such converged offerings for hybrid IT, targeted at the complex environments of enterprises and with enterprise-grade tooling and maturity levels.
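As a quick illustration of the EC2 Container Service API mentioned above, here is a minimal sketch using the AWS SDK for Java to run, list and stop a container task. The cluster and task definition names are hypothetical and error handling is omitted.

    import com.amazonaws.services.ecs.AmazonECSClient;
    import com.amazonaws.services.ecs.model.ListTasksRequest;
    import com.amazonaws.services.ecs.model.RunTaskRequest;
    import com.amazonaws.services.ecs.model.RunTaskResult;
    import com.amazonaws.services.ecs.model.StopTaskRequest;

    public class EcsTaskSketch {
        public static void main(String[] args) {
            AmazonECSClient ecs = new AmazonECSClient(); // credentials from the default provider chain

            // Start one task from a registered task definition (hypothetical names)
            RunTaskResult run = ecs.runTask(new RunTaskRequest()
                    .withCluster("web-cluster")
                    .withTaskDefinition("web-app:1")
                    .withCount(1));
            String taskArn = run.getTasks().get(0).getTaskArn();

            // Query the tasks currently running on the cluster
            System.out.println(ecs.listTasks(new ListTasksRequest().withCluster("web-cluster")).getTaskArns());

            // Stop the task we just started
            ecs.stopTask(new StopTaskRequest().withCluster("web-cluster").withTask(taskArn));
        }
    }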

——————————————————————————

Update: Microsoft officially announced adding support for Docker containers to its Windows Server and Azure cloud. You can read the full details in this post.

Follow Dotan on Twitter!


Filed under Cloud, DevOps

Amazon, Google Public Clouds Drive Networking to Next Gen

As more enterprises and telcos move their infrastructure to private cloud, they raise the need for advanced networking to meet their modern, dynamic and virtualized architectures. This trend is fueled by the recent flux of telcos looking for a carrier-grade private cloud solution to virtualize their IT. These needs from the community took center stage at the OpenStack Summit a couple of weeks ago.

But while the OpenStack community is only now getting to address the next-gen networking needs of the private cloud, the major public cloud providers, the likes of Amazon and Google, have long been facing these challenges.

Amazon’s cloud networking strategy

Amazon, at last week’s annual AWS re:Invent event in Las Vegas, shared some of its networking strategy for managing its global IT deployment, with 11 regions and 28 Availability Zones (AZs) across 5 continents. You can read the full technical details in this great article, but the interesting point I find beyond the details is that Amazon realized that traditional networking backbones and paradigms cannot meet the challenges it is facing, and therefore set out to explore next-gen networking for its organization. One such example was cutting the cost of high-end networking equipment. Instead:

it buys routing equipment from original design manufacturers… that it hooks up to a custom network-protocol software that’s supposedly more efficient than commodity gear

Another interesting example was achieving network virtualization by utilizing single-root I/O virtualization (SR-IOV), supporting multiple virtual functions on the same infrastructure while maintaining good network performance.

Amazon didn’t come out with its internal networking strategy for no reason. Amazon’s strategy has been to externalize its networking capabilities as cloud services for its end customers. Five years ago it introduced VPC (Virtual Private Cloud) – logically isolated AWS clusters which can be connected to the customer’s data center using VPN. At last year’s AWS re:Invent Amazon announced its “Enhanced Networking” for the AWS cloud, providing SR-IOV support on its new high-end instances. Then in March this year it announced support for VPC peering within a region, to enable private connectivity between VPCs.
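For a sense of what the VPC peering API looks like in practice, here is a minimal sketch using the AWS SDK for Java: one account requests a peering connection between two VPCs in the same region, and the peer side accepts it. The VPC IDs are hypothetical.

    import com.amazonaws.services.ec2.AmazonEC2Client;
    import com.amazonaws.services.ec2.model.AcceptVpcPeeringConnectionRequest;
    import com.amazonaws.services.ec2.model.CreateVpcPeeringConnectionRequest;

    public class VpcPeeringSketch {
        public static void main(String[] args) {
            AmazonEC2Client ec2 = new AmazonEC2Client(); // credentials from the default provider chain

            // Request a peering connection between two VPCs in the same region (hypothetical IDs)
            String peeringId = ec2.createVpcPeeringConnection(new CreateVpcPeeringConnectionRequest()
                            .withVpcId("vpc-11111111")
                            .withPeerVpcId("vpc-22222222"))
                    .getVpcPeeringConnection().getVpcPeeringConnectionId();

            // The owner of the peer VPC accepts the request; route tables still need updating afterwards
            ec2.acceptVpcPeeringConnection(
                    new AcceptVpcPeeringConnectionRequest().withVpcPeeringConnectionId(peeringId));
        }
    }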


Google’s take on cloud networking

While the Stackers had their conference and announcements in Paris a couple of weeks ago, Google ran its own Cloud Platform Live event in San Francisco, where it announced Google Cloud Interconnect. Google has been investing in its networking for over a decade, and is now starting to externalize some of it as network cloud services, much in response to Amazon’s aforementioned networking services.

Google’s first important announcement was made in March at the Open Networking Summit with the launch of Andromeda – Google’s network virtualization stack – which has now received a new release and increased performance. With its Cloud Interconnect Google also responded to Amazon with its own capabilities around VPN connectivity (to be GA in Q1 2015) and Direct Peering. It is interesting to note that Google specifically targets telcos, namely access network operators and ISPs, offering to meet the demanding carrier-grade challenge of the telecommunications industry with its global infrastructure and services.

Public clouds heading for network virtualization

Amazon and Google own massive infrastructure and cater to massive and diverse workloads. As such they face the networking challenges and limitations ahead of the market, and lead with innovation around next-gen networking and virtualization. I expect we shall see more work around SDN and network virtualization to meet these challenges, with the private clouds following and perhaps even taking the lead with telco-driven carrier-grade requirements and workloads.

Follow Dotan on Twitter!


Filed under Cloud, SDN

AWS Outage: Moving from Multi-Availability-Zone to Multi-Cloud

A couple of days ago Amazon Web Services (AWS) suffered a significant outage in their US-EAST-1 region. This has been the 5th major outage in that region in the past 18 months. The outage affected leading services such as Reddit, Netflix, Foursquare and Heroku.

How should you architect your cloud-hosted system to sustain such outages? Much has been written on this question during this outage, as well as during past ones. Many recommend basing your architecture on multiple AWS Availability Zones (AZs) to spread the risk. But during this outage we saw even multi-Availability-Zone applications severely affected. Even Amazon published during the outage that

Customers can launch replacement instances in the unaffected availability zones but may experience elevated launch latencies or receive ResourceLimitExceeded errors on their API calls, which are being issued to manage load on the system during recovery.

The reason is that there is an underlying infrastructure that escalates the traffic from the affected AZ to other AZs in a way that overwhelms the system. In the case of this outage it was the AWS API platform that was rendered unavailable, as nicely explained in this great post:

The waterfall effect seems to happen, where the AWS API stack gets overwhelmed to the point of being useless for any management task in the region.

But it doesn’t really matter for us as users which exact piece of infrastructure failed in this specific outage. 18 months ago, during the first major outage, it was another infrastructure component, the Elastic Block Store (“EBS”) volumes, that cascaded the problem. Back then I wrote a post on how to architect your system to sustain such outages, and one of my recommendations was:

Spread across several public cloud vendors and/or private cloud

The rule of thumb in IT is that there will always be extreme and rare situations (and don’t forget, Amazon only commits to a 99.995% SLA) causing such major outages. And there will always be some common infrastructure that, under those extreme and rare circumstances, will carry the ripple effect of the outage to other Availability Zones in the region.

Of course, you can mitigate risk by spreading your system across several AWS regions (e.g. between US-EAST and US-WEST), as they have much looser coupling. But as I stated in my previous post, that loose coupling comes with a price: it is up to your application to replicate data, using a separate set of APIs for each region. As Amazon themselves state: “it requires effort on the part of application builders to take advantage of this isolation”.
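To illustrate what “a separate set of APIs for each region” means in practice, here is a naive Java sketch that copies a single S3 object from a US-EAST bucket to a US-WEST bucket using one region-bound client per side. The bucket and key names are hypothetical, and a real setup would replicate continuously rather than copy one object.

    import com.amazonaws.regions.Region;
    import com.amazonaws.regions.Regions;
    import com.amazonaws.services.s3.AmazonS3Client;
    import com.amazonaws.services.s3.model.S3Object;

    public class CrossRegionCopySketch {
        public static void main(String[] args) {
            // One client per region – each region is addressed through its own endpoint
            AmazonS3Client east = new AmazonS3Client();
            east.setRegion(Region.getRegion(Regions.US_EAST_1));
            AmazonS3Client west = new AmazonS3Client();
            west.setRegion(Region.getRegion(Regions.US_WEST_2));

            // Naive copy of a single object between regions (hypothetical bucket/key names)
            S3Object obj = east.getObject("my-app-data-east", "reports/latest.json");
            west.putObject("my-app-data-west", "reports/latest.json",
                    obj.getObjectContent(), obj.getObjectMetadata());
        }
    }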

The most resilient architecture would therefore be to mitigate risk by spreading your system across different cloud vendors, providing the best isolation level. The advantages in terms of resilience are clear. But how can that be implemented, given that the vendors differ so much in their characteristics and APIs?

There are two approaches to deploying across multiple cloud vendors while staying cloud-vendor-agnostic:

  1. Open standards and APIs for the cloud that are supported by multiple cloud vendors. That way you write your application against a common standard and get immediate support from all conforming cloud vendors. Examples of such emerging standards are OpenStack and jclouds (see the jclouds sketch after this list). However, the cloud is still a young domain with many competing standards and APIs, and it is yet to be determined which one will become the de-facto standard of the industry and where to “place our bet”.
  2. Open PaaS platforms that abstract the underlying cloud infrastructure and provide transparent support for all major vendors. You build your application on top of the platform and leave it to the platform to communicate with the underlying cloud vendors (whether public or private clouds, or even a hybrid). Examples of such platforms are CloudFoundry and Cloudify. I dedicated one of my posts to exploring how to build your application using such platforms.
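As a taste of the first approach, here is a minimal sketch using the jclouds compute abstraction to provision nodes on AWS EC2; switching to another supported provider is largely a matter of changing the provider name and credentials. The group name and sizing below are arbitrary, and the credentials are placeholders.

    import java.util.Set;

    import org.jclouds.ContextBuilder;
    import org.jclouds.compute.ComputeService;
    import org.jclouds.compute.ComputeServiceContext;
    import org.jclouds.compute.domain.NodeMetadata;
    import org.jclouds.compute.domain.OsFamily;
    import org.jclouds.compute.domain.Template;

    public class JcloudsProvisionSketch {
        public static void main(String[] args) throws Exception {
            // Provider-agnostic view over a specific provider ("aws-ec2" here)
            ComputeServiceContext context = ContextBuilder.newBuilder("aws-ec2")
                    .credentials("accessKeyId", "secretKey")   // placeholders
                    .buildView(ComputeServiceContext.class);
            ComputeService compute = context.getComputeService();

            // Describe what we need, rather than how each cloud names it
            Template template = compute.templateBuilder()
                    .osFamily(OsFamily.UBUNTU)
                    .minRam(2048)
                    .build();

            Set<? extends NodeMetadata> nodes = compute.createNodesInGroup("web", 2, template);
            for (NodeMetadata node : nodes) {
                System.out.println(node.getId() + " -> " + node.getPublicAddresses());
            }
            context.close();
        }
    }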

Conclusion

System architects need to face the reality of the Service Level Agreements provided by Amazon and other cloud vendors and their limitations, and start designing for resilience by spreading across isolated environments, deploying DR sites and taking similar redundancy measures to keep their service up and running and their data safe. Only that way can we guarantee that we will not be the next one to fall off the 99.995% SLA.

This post was originally posted here.


Filed under cloud deployment, Disaster-Recovery, IaaS, PaaS, Solution Architecture, Uncategorized

Cloud integration and DevOps automation experience shared

The Cloud carries the message of automation to system architecture. The ability to spin up VMs on demand and take them down when no longer needed, as per the application’s real-time requirements and metrics, is the key to making the system truly elastic, scalable and self-healing. When using external IaaS providers, this also saves the hassle of managing the IT aspects of the on-demand infrastructure.

But with the potential of automation comes the challenge of integrating with the cloud provider (or providers) and automating the management of the VMs, dealing with DevOps aspects such as accessing the VM, transferring content to it, performing installations, running and stopping processes on it, coordinating between the services, etc. In this post I’d like to share some of my experience integrating with IaaS cloud providers, as part of my work with customers using the open source Cloudify PaaS product. Cloudify provides out-of-the-box integration with many popular cloud providers, such as Amazon EC2 and the Rackspace Cloud, as well as integration with the popular jclouds framework and the OpenStack open standard. But when encountering an emerging cloud provider or standard, you just need to roll up your sleeves and write your own integration. As a best practice, I use Java for the cloud integration and try to leverage well-proven and community-backed open source projects wherever possible. Let’s see how I did it.

First we need to integrate with the IaaS API to enable automation of resource allocation and deallocation. The main integration point is called a Cloud Driver, which is basically a Java class that adheres to a simple API for requesting resources from the cloud. Different clouds expose different APIs for accessing them. Programmatic access is native and easy to implement from the Cloud Driver code. A REST API is also quite popular, in which case I found the Apache Jersey client open source library quite convenient for implementing a RESTful client. Jersey is based on the JAX-RS Java community standard, and offers easy handling of various flavors of calls, cookie handling, policy governance, etc. Cloudify offers a convenient Groovy-based DSL that lets you configure the cloud provider’s parameters and properties in a declarative and easy-to-read manner, and takes care of the wiring for you. When writing your custom cloud driver you should make sure to read and use the values from the Groovy configuration (you can add custom properties as needed), so that once the cloud driver is ready for a given cloud provider, you can use it in any deployment by simply setting the configuration. I used the source code of the cloud drivers in the CloudifySource public GitHub repository as a great reference for writing my own.
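To give a flavor of the REST integration inside such a cloud driver, here is a minimal sketch using the standard JAX-RS client API (which recent Jersey versions implement) to request a machine from a cloud provider’s REST endpoint. The URL, path, auth header and payload are hypothetical, made up purely for illustration.

    import javax.ws.rs.client.Client;
    import javax.ws.rs.client.ClientBuilder;
    import javax.ws.rs.client.Entity;
    import javax.ws.rs.core.MediaType;
    import javax.ws.rs.core.Response;

    public class CloudDriverRestSketch {
        public static void main(String[] args) {
            Client client = ClientBuilder.newClient();

            // Hypothetical "create server" call against the provider's REST API
            Response response = client.target("https://cloud.example.com/api")
                    .path("servers")
                    .request(MediaType.APPLICATION_JSON)
                    .header("X-Auth-Token", "my-token")        // placeholder auth header
                    .post(Entity.json("{\"name\":\"app-node-1\",\"flavor\":\"m1.small\"}"));

            System.out.println("HTTP " + response.getStatus());
            System.out.println(response.readEntity(String.class)); // e.g. the new server's ID and IP
            client.close();
        }
    }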

The next DevOps aspect of the integration is accessing the VMs and managing them. Linux/Unix VMs are accessed via SSH for executing scripts, and via SFTP for file transfer. For a generic file transfer layer there’s Apache Commons VFS2 (Virtual File System), which offers a uniform view of files from various sources (local FS, remote over HTTP, etc.). For remote command execution over SSH there’s JCraft’s JSch library, providing a Java implementation of SSH2. Authentication also needs to be addressed with the above. Luckily, many of the things we used to do manually as part of DevOps integration are now being taken care of by Cloudify. There’s still plenty of integration headache with ports not opened, incorrect passwords, etc., which takes up most of the time, and more logs are definitely needed in Cloudify to figure things out and troubleshoot. What I did was simply fork the open source project from GitHub and debug right through the code, which has the side benefit of fixing and improving the project on the fly and contributing back to the community. I should mention that although the environments I integrated with were Linux-based, Cloudify also provides support for Windows-based systems (based on WinRM, CIFS and PowerShell).
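Here is a minimal JSch sketch of the kind of remote execution described above: open an SSH session to a freshly provisioned VM with a key pair and run a bootstrap script. The host, user, key path and command are placeholders, and host-key checking is disabled only to keep the example short.

    import java.io.ByteArrayOutputStream;

    import com.jcraft.jsch.ChannelExec;
    import com.jcraft.jsch.JSch;
    import com.jcraft.jsch.Session;

    public class SshBootstrapSketch {
        public static void main(String[] args) throws Exception {
            JSch jsch = new JSch();
            jsch.addIdentity("/path/to/key.pem");                 // key-based authentication

            Session session = jsch.getSession("ubuntu", "10.0.0.12", 22);
            session.setConfig("StrictHostKeyChecking", "no");     // demo only – verify host keys in real use
            session.connect();

            ChannelExec channel = (ChannelExec) session.openChannel("exec");
            channel.setCommand("bash ./bootstrap-service.sh");    // placeholder bootstrap command
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            channel.setOutputStream(out);
            channel.connect();

            while (!channel.isClosed()) {                         // wait for the remote command to finish
                Thread.sleep(200);
            }
            System.out.println(out.toString("UTF-8"));
            System.out.println("exit status: " + channel.getExitStatus());

            channel.disconnect();
            session.disconnect();
        }
    }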

One of the coolest things added in Cloudify 2.1, which was launched last week, is the BYON (Bring Your Own Node) driver, which allows you to take your existing bare-metal servers and use them as managed resources for deployment by Cloudify, as if they were on-demand resources. This provides a neat answer to the growing demand for bare-metal cloud services. I’m still waiting for the opportunity to give this one a wet run with a customer in the field…

All in all, it turned out to be a straightforward task to integrate with a new cloud provider. Just make sure you have a stable environment and test code for consuming the APIs, use the existing examples as reference, and you’re good to go.

 

Follow Dotan on Twitter!


Filed under Cloud, DevOps, IaaS, PaaS