Tag Archives: datacenter

IoT, Big Data and Machine Learning Push Cloud Computing To The Edge

“The End Of Cloud Computing” – that’s the dramatic title of a talk given by Peter Levine at the a16z Summit last month. Levine, a partner at the Andreessen Horowitz (a16z) VC fund, exercised his investor foresight and tried to imagine the world beyond cloud computing. The result was an insightful and fluent talk, arguing that centralized cloud computing as we know it is about to be superseded by a distributed cloud inherent in a multitude of edge devices. Levine highlights the rising forces driving this change:

The Internet of Things (IoT). Though the notion of IoT has been around for a few decades, it seems it’s really happening now, and our world will soon be inhabited by a multitude of smart cars, smart homes and smart everything, each with embedded compute, storage and networking. Levine gives a great example of a computer card found in today’s luxury cars, containing around 100 CPUs. Having several such cards in a car makes it a mini data center on wheels; having thousands of such cars on the roads makes them a massive distributed data center.


Big Data Analytics. The growing number of connected devices and sensors around us, constantly collecting real-world input, generates massive amounts of data of different types, from temperature and pressure to images and videos. This unstructured and highly variable data stream needs to be processed and analyzed in real time, so that the little brains of the smart devices can extract insights and make decisions. Just imagine your smart car approaching a stop sign: it needs to process the image input, recognize the sign and make the decision to stop, all in a matter of a second or less. Would you send that over to a remote cloud for the answer?

Machine Learning. While traditional computer algorithms are well suited to well-defined problem spaces, real-world data is complex, diverse and unstructured. Levine believes that endpoints will need to execute Machine Learning algorithms to decipher this data effectively and derive intelligent insights and decisions for the countless situations, and permutations of situations, that can occur in the real world.
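
To make the latency argument concrete, here is a minimal sketch (in Python, and not from Levine’s talk) of why such a decision has to run at the edge: a cloud round-trip alone can blow the decision budget, so inference must run on a local model. All numbers and names below are illustrative assumptions.

```python
DECISION_BUDGET_MS = 100     # assumed budget from "see sign" to "start braking"
CLOUD_ROUND_TRIP_MS = 150    # assumed WAN latency; blows the budget by itself
LOCAL_INFERENCE_MS = 20      # assumed latency of a small on-board model

def classify_frame_locally(frame_pixels):
    """Stand-in for an on-board ML model (a real system would run a trained CNN)."""
    return "stop_sign" if sum(frame_pixels) > 1000 else "background"

def decide(frame_pixels):
    # The cloud path is ruled out on latency alone, before any model even runs;
    # only the local path fits the budget, so inference happens at the edge.
    if CLOUD_ROUND_TRIP_MS > DECISION_BUDGET_MS >= LOCAL_INFERENCE_MS:
        label = classify_frame_locally(frame_pixels)
        return "BRAKE" if label == "stop_sign" else "CONTINUE"
    raise RuntimeError("no inference path fits the latency budget")

print(decide([5] * 300))  # -> BRAKE
```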

So should Amazon, Microsoft and Google start worrying? Not really. The central cloud services will still be there, but with a different focus. Levine sees the central cloud’s role as curating data from the edge, performing central non-real-time learning that can then be pushed back to the edge, and providing long-term storage and archiving of the data. In its new incarnation, the entire world becomes the domain of IT.

You can watch the recording of Levine’s full talk here.

Follow Horovits on Twitter!


Filed under Big Data, Cloud, IoT

Open Source Is Taking Over Networks, Startups Lead The Way

Innovating in the networking world is hard, with its purpose-built boxes, protocols, technologies, legacy and processes. But when industry veterans from the likes of Apple, Juniper and Big Switch start fresh and think outside the box – that’s when networks get shaken up. Just see the updates from the last couple of weeks:

After building the complex networks for iCloud, Apple engineering veterans decided to leverage their experience and last week launched their new startup SnapRoute. SnapRoute promises to bring a “developer friendly and operations focused network protocol stack that runs on all commoditized network and hardware with any Linux operating system”. This open stack will remove the dependency on software provided by the network equipment vendors (of routers, switches and the like) and will enable innovation decoupled from the vendor.


SnapRoute’s first open source project is its FlexSwitch, which it contributed to the Facebook-founded Open Compute Project. FlexSwitch will also be offered as an option for the OpenSwitch operating system. OpenSwitch is an open source, Linux-based network operating system designed to power enterprise-grade switches from multiple hardware vendors, enabling organizations to rapidly build data center networks customized for unique business needs. Earlier this month OpenSwitch was accepted into the Linux Foundation, which will surely facilitate and boost its open source community activity.


Another promising startup, which made headlines recently following Google’s investment, is Barefoot Networks, which brings the vision of programmable networks. Its innovative switch chips can be programmed using the P4 language to run various network tasks, replacing today’s purpose-built networking equipment. It is interesting to note that both Barefoot Networks and P4.org are also members of the OpenSwitch project.
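
P4 itself is a domain-specific language compiled onto the switch chip, so to keep this page’s examples in one language, here is a toy Python model of the match-action abstraction that a P4 program expresses. This is for illustration only; none of these names come from the P4 spec.

```python
# A match-action table: the switch matches packet fields against table
# entries and applies the programmed action, instead of running fixed,
# purpose-built forwarding logic.
match_action_table = {
    # dst-IP prefix -> (action, egress port)
    "10.0.1.": ("forward", 1),
    "10.0.2.": ("forward", 2),
}

def process_packet(dst_ip):
    """Apply the first matching table entry; default action is drop."""
    for prefix, (action, port) in match_action_table.items():
        if dst_ip.startswith(prefix):
            return (action, port)
    return ("drop", None)

print(process_packet("10.0.2.7"))     # -> ('forward', 2)
print(process_packet("192.168.0.1"))  # -> ('drop', None)
```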

Apstra is another interesting startup, launched last week and founded by networking veterans from Big Switch, Arista and Juniper, which offers data center network automation. It employs an intent-driven approach to network operations, and treats the network with the methodologies of distributed systems:

“You need to recognize that your network is a distributed system. This allows you to operate your network as a system”

To be fair, startups are not alone on this front. Check out what Google, Facebook and Amazon have been doing in their data centers. Together, startups, big players and open communities are pushing the traditional networking world into the modern era.

Follow Horovits on Twitter!


Filed under Cloud, SDN, Telecommunications

An Inside Peek At Google’s Data Centers

At its GCP Next event last week, Google released a cool clip on YouTube which takes you on a virtual tour of their data center. You can even look around (a 360° view) by simply tilting your smartphone, which is pretty neat. The tour gives a glimpse of how Google’s data centers are built, from compute racks, storage and networking to power and cooling.

Why would Google bother giving such an intimate inside view? To build credibility for its Google Cloud Platform, by providing visibility into aspects such as its design for scale, reliability and security. As part of that effort Google recently started sharing its data center design, and even open-sourced some of it. Google’s recent strategic move in this direction was joining the Open Compute Project (joining Facebook, Intel, Microsoft and others) and donating its data center rack design to the open community.
For more on this, check out this post.



Filed under Cloud, IT

Want To Scale Like Google, Amazon? Design Your Own Data Center

Google is joining the Open Compute Project (OCP), the community-driven open project founded by Facebook to standardize IT infrastructure. OCP’s mission statement is to

“break open the black box of proprietary IT infrastructure to achieve greater choice, customization, and cost savings”

Google strategically announced joining the OCP at last week’s OCP Summit, together with its first contribution: a new energy-efficient rack specification with 48V power distribution. According to Google, the new rack design is at least 30% more energy efficient and more cost effective in supporting their higher-performance systems.

The OCP includes, in addition to Facebook and Google, other big names such as Intel, Goldman Sachs, Microsoft and Deutsche Telekom. The member list also includes some traditional server and networking manufacturers such as Ericsson, Cisco, HP and Lenovo, which are expected to be seriously disrupted by the new open standards initiative, as it undermines their domination of this $140B industry.


Last year Google already made an important move by sharing its next-generation data center network architecture. In last week’s announcement, Google hinted at additional upcoming contributions to OCP, such as better disk solutions for cloud-based applications. In his post, John Zipfel shared Google’s longer-term vision for OCP:

And we think that we can work with OCP to go even further, looking up the software stack to standardize server and networking management systems.

Google and Facebook are among the “big guys” running massive data centers and infrastructure, whose sheer scale drove them to drop commodity IT infrastructure and start developing their own in-house optimized infrastructure to reduce costs and improve performance.

Amazon is another such big guy, especially with the massive infrastructure required to power Amazon Web Services, which holds the lion’s share of the public cloud market, followed by Microsoft and Google (both of which are OCP members). In an interview last week, Amazon’s CTO Werner Vogels said:

“To be able to operate at scale like we do it makes sense to start designing your own server infrastructure as well as your network. There is great advantages in [doing so].”

With the growing popularity of cloud computing, many of the “smaller guys” (even enterprises and banks) will migrate their IT to a cloud hosting service, saving them from buying and managing their own infrastructure, which in turn means even more of the world’s IT will sit with the “big guys”. Moreover, the public cloud market is undergoing consolidation, with big names such as HP, Verizon and Dell dropping out of the race, which would leave most of the world’s IT in the hands of a few top-tier cloud vendors and Facebook-scale giants. These truly “big guys” will not settle for anything short of the best for their IT.

Follow Dotan on Twitter!

———————————————————————-
Update: At the GCP Next conference the following week, Google released a 360° virtual tour of its data center. See more here.


Filed under Cloud, IT

Google Unveils Its Next Gen Datacenter Network Architecture

Organizations such as Google, Amazon and Facebook possess data of a sheer size, scale and distribution that poses a new class of networking challenges, one which traditional networking vendors cannot meet. According to Google’s team:

Ten years ago, we realized that we could not purchase, at any price, a datacenter network that could meet the combination of our scale and speed requirements.

Facebook’s engineering team ran into much the same problems. Late last year Facebook published its datacenter networking architecture, called “data center fabric”, which is meant to meet this exact challenge, and it has continued expanding the architecture this year.

Now Google is joining the game, sharing its in-house datacenter network architecture in a new paper published this week. The current (5th) generation of Google’s architecture, called Jupiter, is able to deliver more than 1 petabit/sec of total bisection bandwidth. This means that each of 100,000 servers can communicate with one another in an arbitrary pattern at 10Gb/s (100,000 × 10Gb/s = 1,000,000 Gb/s = 1 Pb/s). The new architecture also means substantially improved efficiency of the compute and storage infrastructure, and ultimately much higher utilization in job scheduling.

Google based its new networking architecture on the principles of Software-Defined Networking (SDN). Using the SDN approach, Google was able to escape traditional distributed networking protocols, with their slow dissemination, high bandwidth overhead and manual switch configurations, and move to a single global configuration for the entire network that is pushed to all switches, with each switch taking its part of the scheme.
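
Here is a minimal sketch of that pattern as I read it from the paper: one global configuration computed centrally, with each switch extracting only its own slice. This is plain Python for illustration, not Google’s code; all names are made up.

```python
# One globally computed configuration for the whole network.
GLOBAL_CONFIG = {
    "switch-a": {"routes": {"10.0.0.0/8": "switch-b"}},
    "switch-b": {"routes": {"10.0.0.0/8": "local"}},
}

class Switch:
    def __init__(self, name):
        self.name = name
        self.routes = {}

    def apply(self, global_config):
        # Each switch takes its part of the single global scheme, instead
        # of converging on state via slow distributed protocols.
        self.routes = global_config[self.name]["routes"]

switches = [Switch("switch-a"), Switch("switch-b")]
for sw in switches:          # the "controller push" step
    sw.apply(GLOBAL_CONFIG)

print(switches[0].routes)    # -> {'10.0.0.0/8': 'switch-b'}
```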

Google has been an advocate of SDN for quite some time, and is a member of the Open Networking Foundation (ONF), a consortium of industry leaders such as Facebook, Microsoft, Deutsche Telekom, Verizon and of course Google, promoting open standards for SDN, primarily the OpenFlow project, which Google has fully adopted.

SDN and network virtualization have been major trends in the networking realm, especially in cloud-based deployments with their highly distributed, scalable and dynamic environments. All the major cloud vendors have been innovating in their next-gen networking. Most notably, Google has been actively competing with Amazon on driving its cloud networking to the next generation, presenting its Andromeda project for network virtualization.

The big players will continue to face the networking and scalability challenges of the new cloud and distributed era first, and will lead innovation in that field. The open approach they have adopted, with open standards, open source and sharing with the community, will enable the smaller players to benefit from this innovation and push the industry forward.

You can read Google’s paper on Jupiter here.

Follow Dotan on Twitter!


Filed under Cloud, SDN

Microsoft Brings Azure Cloud To The Enterprise Datacenter

Cloud computing is a market with huge potential, as the financial reports from Amazon and Microsoft earlier this month showed. But the really big, yet vastly untapped, potential lies in the enterprise cloud. Enterprises, with their large arrays of existing applications, datacenters and security requirements, find it difficult to transition their IT to the cloud. This is the holy grail for the cloud providers.

While the big cloud providers Amazon and Google come from the consumer market and are now trying to make their way into the enterprise, for Microsoft the enterprise is its traditional playground, and Microsoft is trying to build on that and position its Azure public cloud as the enterprise’s preferred cloud. Hearing Microsoft’s Scott Guthrie, Executive Vice President of the Cloud and Enterprise group (the man and the red shirt), lay out his vision this week, it was clear Microsoft is pushing harder than ever.


Now Microsoft wants to put Azure in the enterprise’s own data center as well, with its new offering announced this week: Azure Stack. Built on the same core technology as Azure, the new offering takes the compute, networking, storage and security solutions and brings them on-premises in a consistent way. Existing Microsoft-based customers will have the advantage of keeping their existing Microsoft assets, such as SQL Server, SharePoint and Exchange, in their data center, and connecting them to modern distributed applications and services while maintaining centralized oversight.

The new Azure Stack will enable a hybrid cloud strategy for enterprises: customers can create applications once and decide where to deploy them later. This will give Microsoft the desired agent for transitioning enterprises to the public cloud in a gradual, controlled and smooth path.
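
A hedged sketch of that “write once, decide where to deploy later” idea, in Python: the same application template is handed either to a public cloud endpoint or to an on-premises one. The endpoint URLs and the deploy() call are hypothetical placeholders, not the actual Azure API.

```python
# One application definition, written once.
APP_TEMPLATE = {"name": "billing-service", "vm_size": "small", "instances": 3}

# Two consistent targets: public Azure vs. an on-premises Azure Stack.
TARGETS = {
    "public": "https://management.azure.example",   # placeholder URL
    "onprem": "https://azurestack.corp.example",    # placeholder URL
}

def deploy(template, target):
    """Pretend-deploy: a real system would call the management API here."""
    print(f"deploying {template['name']} to {TARGETS[target]}")

# The template is identical; only the runtime decision differs.
deploy(APP_TEMPLATE, "public")
deploy(APP_TEMPLATE, "onprem")
```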

A preview of Azure Stack will become available this summer. In its first stage Azure Stack will focus on Linux and Windows virtual machines. But seeing how cloud and containers are growing closer, and Microsoft integrating Docker into Azure, I expect we’ll be seeing container support pretty soon as well.

Check out the full details from Microsoft’s official site.

Follow Dotan on Twitter!


Filed under Cloud

Facebook Shares Open Networking Switch Design, Part of its Next Gen Networking

Facebook’s enormous scale comes with enormous technological challenges, ones that go beyond conventionally available solutions. For example, Facebook decided to abandon Microsoft’s Bing search engine and instead develop its own revamped search capabilities. Another important area is Facebook’s massive networking needs, which called for a whole new paradigm, code-named data center fabric.


The next step in Facebook’s next-gen networking architecture is “6-pack”, a new open and modular switch announced just a few days ago. It is interesting to note that Facebook chose to announce the new switch on the same day Cisco reported its earnings. This is more than a hint at the networking equipment giant, the embodiment of “traditional networking”. As Facebook says in its announcement, it started the quest for next-gen networking due to

the limits of traditional networking technologies, which tend to be too closed, too monolithic, and too iterative for the scale at which we operate and the pace at which we move.

The new “6-pack” is a modular, high-volume switch built on merchant-silicon-based hardware. It enables building a switch of any size from a simple set of common building blocks. The design takes a hybrid Software Defined Networking (SDN) approach: while classic SDN separates the control plane from the forwarding plane and centralizes control decisions, in Facebook’s hybrid architecture each switching element contains a full local control plane on a microserver that communicates with a centralized controller.
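
A toy Python sketch of that hybrid split: each switching element keeps a small local control plane for fast local decisions, while a central controller owns global policy. This is an illustration of the idea as described, not Facebook’s actual design; all names are made up.

```python
class LocalControlPlane:
    """Runs on the switch's microserver; reacts to local events quickly."""
    def __init__(self, name):
        self.name = name
        self.policy = {}

    def handle_link_down(self, port):
        # Local failover decision -- no round-trip to the controller needed.
        return f"{self.name}: rerouting around port {port}"

class CentralController:
    """Computes global policy and pushes it to every local control plane."""
    def push_policy(self, planes, policy):
        for plane in planes:
            plane.policy = dict(policy)

planes = [LocalControlPlane("6pack-1"), LocalControlPlane("6pack-2")]
CentralController().push_policy(planes, {"ecmp_paths": 4})

print(planes[0].handle_link_down(3))  # -> '6pack-1: rerouting around port 3'
print(planes[1].policy)               # -> {'ecmp_paths': 4}
```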

Facebook made the design of “6-pack” open as part of the Open Compute Project, together with all the other components of its data center fabric. This is certainly not good news for Cisco and the other vendors, but great news for the community. You can find the full technical design details in Facebook’s post.

Facebook is not the only one on the front line of scaling challenges. The open cloud community OpenStack, as well as the leading public cloud vendors Google and Amazon, have also shared networking strategies to meet the new challenges that come with the new workloads in modern cloud computing environments.

Cloud and Big Data innovations were born out of necessity in IT, driven by the companies with the most challenging use cases and backed by the open community. The same innovation is now happening in networking, paving the way to simpler, scalable, virtual and programmable networking based on merchant silicon.

Follow Dotan on Twitter!


Filed under Cloud, IT, SDN