Google Unveils Its Next Gen Datacenter Network Architecture

Organizations such as Google, Amazon and Facebook operate at a size, scale and distribution of data that pose a new class of networking challenges, one that traditional networking vendors cannot meet. According to Google’s team:

Ten years ago, we realized that we could not purchase, at any price, a datacenter network that could meet the combination of our scale and speed requirements.

Facebook’s engineering team ran into similar problems. Late last year Facebook published its datacenter networking architecture, called “data center fabric”, to meet this exact challenge, and has continued expanding the architecture this year.

Now Google is joining the game, sharing its in-house datacenter network architecture in a new paper published this week. The current (5th) generation of Google’s architecture, called Jupiter, is able to deliver more than 1 petabit/sec of total bisection bandwidth. This means that each of 100,000 servers can communicate with one another in an arbitrary pattern at 10Gb/s. The new architecture also means substantially improved efficiency of the compute and storage infrastructure, and ultimately much higher utilization in job scheduling.
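The stated numbers are easy to sanity-check: 100,000 servers, each sending at 10 Gb/s, works out to exactly 1 petabit per second of bisection bandwidth.

```python
# Back-of-the-envelope check of Jupiter's stated bisection bandwidth.
servers = 100_000
per_server_gbps = 10

total_gbps = servers * per_server_gbps   # 1,000,000 Gb/s
total_pbps = total_gbps / 1_000_000      # convert Gb/s to Pb/s

print(total_pbps)  # 1.0 -> the "more than 1 petabit/sec" figure
```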

Google based its new networking architecture on the principle of Software-Defined Networking (SDN). Using the SDN approach, Google was able to move away from traditional distributed networking protocols, with their slow dissemination, high bandwidth overhead and manual switch configuration, to a single global configuration for the entire network that is pushed to all switches, with each switch taking its part of the scheme.
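The "single global configuration" idea can be sketched in a few lines. The following is a hypothetical illustration, not Google's actual implementation: a controller holds one network-wide config, pushes it to every switch, and each switch applies only its own slice. All names and structures here are invented for the example.

```python
# Hypothetical sketch of centralized SDN configuration: one global config,
# pushed to all switches, each switch taking only its part of the scheme.

# The controller's single global view: per-switch forwarding rules.
global_config = {
    "sw-1": [{"match": "10.0.1.0/24", "out_port": 1}],
    "sw-2": [{"match": "10.0.2.0/24", "out_port": 3}],
}

class Switch:
    def __init__(self, name):
        self.name = name
        self.flow_table = []

    def apply(self, config):
        # Each switch extracts only its own slice of the global scheme;
        # no distributed protocol dissemination is needed.
        self.flow_table = config.get(self.name, [])

switches = [Switch("sw-1"), Switch("sw-2")]
for sw in switches:
    sw.apply(global_config)  # controller pushes the same config everywhere

print(switches[0].flow_table)
```

The contrast with distributed protocols is that correctness comes from one authoritative source rather than from convergence of many independent routers.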

Google has been an advocate of SDN for quite some time, and is a member of the Open Networking Foundation (ONF), a consortium of industry leaders such as Facebook, Microsoft, Deutsche Telekom, Verizon and, of course, Google, promoting open standards for SDN, primarily the OpenFlow project, which Google has fully adopted.

SDN and network virtualization have been major trends in the networking realm, especially in cloud-based deployments with their highly distributed, scalable and dynamic environments. All major cloud vendors have been innovating in their next-gen networking. Most notably, Google has been actively competing with Amazon to drive its cloud networking forward, presenting its Andromeda project for network virtualization.

The big players will continue to be at the forefront of the networking and scalability challenges of the new cloud and distributed era, and will lead innovation in the field. The open approach these players have adopted, with open standards, open source and sharing with the community, will enable the smaller players to benefit from this innovation and push the industry forward.

You can read Google’s paper on Jupiter here.

Follow Dotan on Twitter!


Filed under Cloud, SDN

New ‘Cloud Native Computing Foundation’ Trying to Standardize on Cloud and Containers

Cloud Native Computing Foundation (CNCF) is a new open standardization initiative recently formed under the Linux Foundation with the mission of providing a standard reference architecture for cloud-native applications and services, based on open-source software (OSS). The foundation’s first project is Google’s Kubernetes, which was released as v1.0 the same day and donated by Google to the foundation.

Google is one of the 22 founding members, together with big names such as IBM, Intel, Red Hat, VMware, AT&T, Cisco and Twitter, as well as important names in the containers realm such as Docker, Mesosphere, CoreOS and Joyent.

The announcement of the new foundation came only a few weeks after the announcement of the Open Container Initiative (OCI), also formed under the Linux Foundation. It is even more interesting to note that almost half of the founding companies of CNCF are among the founders of OCI. According to the founders, the two initiatives are complementary: while OCI focuses on standardizing the image and runtime format for containers, CNCF targets the bigger picture of how to assemble components to address a comprehensive set of container application infrastructure needs, starting with the orchestration level, based on Kubernetes. These are the same bottom-up dynamics we see in most other initiatives and projects, which start by standardizing the infrastructure and then continue upwards: cloud computing evolved the same way from IaaS to PaaS to SaaS, Network Function Virtualization (NFV) evolved from the NFV Infrastructure to Management and Orchestration (MANO), and so on.

An open strategy has become the name of the game, and all the big companies realize that in order to take the technology out of infancy and enable its adoption in large-scale production deployments in enterprises, they need to take the lead in the open field. Google’s Kubernetes and its recent contribution to CNCF is one example. Now we’ll wait to see which other open-source ingredients will be incorporated, which blueprint will emerge, and how it succeeds in meeting the industry’s varying use cases.

Follow Dotan on Twitter!


Filed under Cloud, cloud automation, DevOps

HP Acquires Stackato Aiming to Bring Hybrid Cloud to Enterprises

Enterprises are looking to transform their IT into a leaner operation, in the spirit of the recent trends of cloud computing and hybrid clouds, DevOps and containers, which have emerged from the open-source communities. Major IT vendors have identified this potential and are putting a lot of effort into developing Platform-as-a-Service (PaaS) offerings to enable this transition for enterprises.

Last October HP launched (as part of the HP Helion cloud platform) its own PaaS offering, code-named HP Helion Development Platform, based on the Cloud Foundry open-source PaaS. Now HP has taken a step further and acquired Stackato, a platform based on Cloud Foundry and Docker containers, to enhance its PaaS offering with support for the hybrid cloud model, speed up delivery times and simplify IT configuration. According to the statement:

HP’s acquisition of Stackato further demonstrates our commitment to Cloud Foundry technology and broadens our hybrid cloud capabilities.

While HP is betting heavily on Cloud Foundry, it is also betting seriously on containers. A month and a half ago HP joined the Open Container Initiative (OCI). The open approach of OCI is also aligned with HP’s strategic choice to base HP Helion on open-source, community-backed projects such as Cloud Foundry, Eucalyptus (which HP acquired last year) and OpenStack. It is interesting to note that the OpenStack community has also addressed hybrid cloud and containers in its recent releases.

HP is not the only one to realize the trend. Cloud and containers have been growing closer to bring hybrid IT to enterprises, with all major players offering combined offerings, including IBM, Google, Amazon, Microsoft and VMware.

Last month HP filed to split into two companies – HP Enterprise and HP Inc. – to enable each one to be more focused and flexible “to adapt quickly to market and customer dynamics”. The newly-formed HP Enterprise will focus on the enterprise business including servers, storage, networking, converged systems and Helion cloud platform, without the burden of HP’s traditional printers and PC businesses (left for HP Inc.). In this fast-paced, dynamic and highly competitive realm of agile IT and cloud computing, HP Enterprise would need that flexibility and agility to gain the lead.

Follow Dotan on Twitter!


Filed under Cloud, cloud automation, DevOps, PaaS

Industry Standardizing on Containers with Open Container Project

* Update: the foundation has decided to rename itself from the Open Container Project (OCP) to the Open Container Initiative (OCI)

“Open” is not just about providing open-source software. It is also, and perhaps more importantly, about open standards, which enable the community to converge on a single path and work together on improving it. The absence of such agreement drives the community into wars for domination, especially in emerging fields. We see this with the Internet of Things, with cloud computing and with network virtualization.

The containers community, headed by Docker, was no different. Docker’s success drew the attention of every major player in the cloud and DevOps world, and spawned competing standards that threatened to draw everyone into battles for domination. But there’s good news: this week at DockerCon 2015 these players joined forces to form the Open Container Project (OCP). The new governance body, formed under The Linux Foundation, aims to create standards around container format and runtime. And although it sits under Linux Foundation governance, it certainly targets other operating systems as well, with Microsoft pushing Windows support.

The Open Container Project includes all the major cloud players: Amazon, Google (which promotes Kubernetes), Microsoft, HP and IBM. It also includes players from the DevOps scene such as Docker itself, CoreOS (which backs the competing appc container specification), Mesosphere, Rancher Labs, Red Hat and VMware/EMC.

It seems Docker will lead the way, writing the first draft of the format specification and using Docker’s implementation as the baseline. Docker’s first contribution is runC, which is already available on the project’s GitHub page. But that’s only the beginning. The true test will be adoption within enterprises, which have been struggling to adopt the technology.

Follow Dotan on Twitter!


Filed under DevOps

Samsung Launches IoT Open Cloud and Artik IoT Platform

At last year’s CES (Consumer Electronics Show), Samsung announced its new Smart Home Service, which was followed by the acquisition of SmartThings for its smart home hub (you can read my coverage of both in this post). Then, at this year’s CES, Samsung reinforced those statements, declaring that within five years all of its hardware will be able to connect to the Internet, and putting great emphasis on openness and vendor collaboration as the way to fulfill the IoT promise. In his keynote at CES 2015, Samsung President and CEO Boo-Keun Yoon said that

Without this kind of openness, there won’t be an Internet of Things because the things will not fit together

Samsung has been pursuing openness by actively collaborating in several open standardization groups in the IoT field, such as Google’s Thread Group and the Open Interconnect Consortium, as well as investing in an open developer community. SmartThings, Samsung’s aforementioned acquisition, is also an open platform, compatible with several different smart home standards.

Now Samsung is taking another step forward with the recent announcement of SmartThings Open Cloud, a new open software and data aggregation cloud for the Internet of Things (IoT), coupled with Samsung’s SAMI architecture. The new platform promises to ease the lives of device manufacturers and developers looking to innovate with connected devices and related applications.

In an attempt to make it easier for developers to build IoT solutions, Samsung also recently launched the Artik platform, with an initial suite of modules optimized for performance, battery life and small form factor, to meet the typical range of IoT use cases. The modules come in different specs, with built-in sensors and hardware security, and support various communication protocols such as Bluetooth, Wi-Fi and the popular IoT protocol ZigBee. The Artik platform comes with development tools and open APIs that aim to ease development. Samsung is initially opening the platform only to a limited group of developers as alpha users.

Follow Dotan on Twitter!


Filed under Cloud, Internet of Things

Google’s Secret Android OS To Rule The Internet Of Things

See updates fresh from the Google I/O conference, added at the bottom.

Google is reportedly developing a new operating system (OS) under the Android brand, aimed at running low-powered devices (with as little as 64 or even 32 MB of RAM), which are very common in today’s connected world. If the new operating system, code-named ‘Brillo’, gains traction similar to that of the Android brand, it may become the engine running the multitude of connected devices now looking for a common platform. Google may also offer it free of charge to OEMs to increase penetration.

This is not the first indication that Google wants to be the framework that drives the Internet of Things (IoT). Last year Google initiated the Thread Group, an open consortium of industry leaders which by now has over 80 members, with the goal of defining the communications protocol for the Internet of Things. Last month the Thread Group also partnered with the ZigBee Alliance, the alliance behind the popular ZigBee open wireless standard for IoT, to support interoperability and enable the ZigBee Cluster Library to run over Thread networks.


Google also promotes IoT on the research front. Last December Google launched an open IoT research program called the “Open Web of Things”, to encourage research around burning topics in IoT such as security, privacy and protocols.

Another interesting angle that I bet Google will explore is integration with its latest big data cloud offering, to enable processing, storage and analytics of the massive amounts of data generated by the IoT, enhancing its cloud’s IoT solutions.

What exactly is the new ‘Brillo’ OS? How does it relate to the Thread Group’s protocol? How will it integrate with Google’s cloud offering? We’ll probably get more information in a couple of days at the Google I/O conference.

Follow Dotan on Twitter!


Update: Today at the Google I/O conference we got the official announcement of Brillo OS. There wasn’t much more detail than the above, but one note was made on developer tools for voice commands, so people could “order” their devices around in natural language. And here’s the developer website for Project Brillo.

More importantly, together with Brillo, Google announced Weave, which seems like yet another standard for the common language of the Internet of Things. With the Weave communications protocol, events can be defined by one device and followed by others to trigger custom actions. We’ll have to wait for the full details to understand how Weave differs from the multitude of other standards.
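The "events defined by one device and followed by others" model the announcement hints at can be sketched as a simple publish/follow event bus. This is a generic toy illustration of that pattern, not Weave's actual API; the class and event names are invented for the example.

```python
# Toy sketch of a publish/follow event model: one device emits named events,
# other devices follow those events and run custom actions when they fire.
# Not Weave's real API; purely illustrative.
from collections import defaultdict

class EventBus:
    def __init__(self):
        # event name -> list of actions registered by following devices
        self.followers = defaultdict(list)

    def follow(self, event_name, action):
        self.followers[event_name].append(action)

    def emit(self, event_name, payload):
        # The emitting device doesn't know who follows; the bus fans out.
        for action in self.followers[event_name]:
            action(payload)

bus = EventBus()
log = []

# Example: a thermostat follows a door sensor's "door.opened" event.
bus.follow("door.opened", lambda p: log.append(f"lower heating in {p['room']}"))
bus.emit("door.opened", {"room": "kitchen"})

print(log)  # ['lower heating in kitchen']
```

The interesting design question for Weave, as for any such standard, is who defines the shared vocabulary of event names and payloads so that devices from different vendors can actually follow one another.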


Filed under Internet of Things

The Community Needs Quality Tech News. Don’t Let TechCrunch Go Down Like Gigaom

Only the fast-paced hi-tech industry can produce such drama: TechCrunch just reported its own acquisition, together with its parent company AOL, by Verizon. Verizon will buy AOL for $4.4 billion. However, it is still unclear what Verizon is going to do with AOL’s various activities, including TechCrunch. As TechCrunch writer and editor Ingrid Lunden wrote:

There are lingering questions about whether it’s an all-in deal for the longer term, or whether certain operations that are not as central to Verizon’s bigger strategy may eventually get offloaded.

Reading this, I could not help remembering what happened with the successful tech blog Gigaom just a couple of months ago. Back then, Gigaom’s publisher fired all of its employees in quite a surprising announcement. In that case, it turned out that financial debts had been accumulating quietly. The drama was big, from the first trickling tweets to the flood across the network that followed, with rumors, guesses, protests, bits shared by Gigaom employees, best wishes from community members, and one touching statement by Gigaom founder Om Malik, confessing:

It is not how you want the story of a company you founded to end.

But beyond the financial details and drama, the bigger picture is that these websites and blogs serve as a vital voice for the tech community, and a place for the community to get together, both virtually and physically at their impressive conferences. We need these channels up and running, and we need them to stay high-quality and objective.

Good luck to the entire TechCrunch team!

Read the TechCrunch coverage of the acquisition here.


Filed under technology