Tag Archives: networking

One Open Source To Orchestrate Them All

First the change happened in Information Technology (IT): moving from hardware to software; virtualization inspired by cloud computing; data centers becoming configurable and programmable as software using a DevOps approach; traditional vendor-locked solutions superseded by new-world open source initiatives such as OpenStack, the Open Compute Project and the Cloud Native Computing Foundation.

Then Communications Technology (CT) followed the lead, making its move into the new world with notions such as software defined networking (SDN), network functions virtualization (NFV) and central office re-architected as a data center (CORD). Inevitably open source took a lead role here as well, with a multitude of projects popping up, led by different industry forces.

In fact, too many projects, which left the Telecom industry perplexed and unable to converge on one de-facto standard. Have you tried to orchestrate with each player requiring a different sign language from the maestro?

But then came the twist in the plot, when the Chinese and the Americans joined forces: ECOMP (Enhanced Control, Orchestration, Management and Policy), open sourced by AT&T, and the Open-O (Open Orchestrator) project, led primarily by China Mobile, China Telecom and Huawei, merged under the Linux Foundation’s umbrella to create the Open Network Automation Platform (ONAP).

What shape will the merged project take? That is yet to be decided by the community. The topic was much discussed in February at the announcement during Mobile World Congress, and even more so during the Open Networking Summit this month, but there are still more questions than answers for ONAP, around modeling, protocols, descriptors, architecture…

The most important question, however, is whether the new merged mega-project will carry the critical mass required to pull the industry towards it, to become the converging force, the de-facto standard. Seeing the forces behind ECOMP, Open-O and now ONAP, including Intel, IBM, Cisco, Nokia and others, it looks promising. And the Linux Foundation is a proven vehicle for widely adopted open source projects. If it succeeds, this may very well be the turning point, taking the NFV & SDN wagon out of the mud and onto the fast track to production.

*Disclaimer: The writer has been working on the orchestration initiatives of ONAP members Amdocs and GigaSpaces.

Follow Horovits on Twitter!

Filed under Cloud, DevOps, NFV, SDN, Telecommunications

Open Source Is Taking Over Networks, Startups Lead The Way

Innovating in the networking world is hard, with its purpose-built boxes, protocols, technologies, legacy and processes. But when industry veterans from the likes of Apple, Juniper and Big Switch start up fresh and think outside the box, that’s when networks get shaken up. Just see the updates from the last couple of weeks:

After building the complex networks behind iCloud, Apple engineering veterans decided to leverage their experience and last week launched their new startup, SnapRoute. SnapRoute promises to bring a “developer friendly and operations focused network protocol stack that runs on all commoditized network and hardware with any Linux operating system”. This open stack will remove the dependency on the software provided by network equipment vendors (makers of routers, switches and the like) and enable innovation decoupled from the vendor.

SnapRoute’s first open source project is FlexSwitch, which it contributed to the Facebook-founded Open Compute Project. FlexSwitch will also be offered as an option for the OpenSwitch operating system. OpenSwitch is an open source, Linux-based network operating system designed to power enterprise-grade switches from multiple hardware vendors, enabling organizations to rapidly build data center networks customized for their unique business needs. Earlier this month OpenSwitch was accepted into the Linux Foundation, which will surely facilitate and boost its open source community activity.

Another promising startup, which made headlines recently following Google’s investment, is Barefoot Networks, which brings the vision of programmable networks. Its innovative switch chips can be programmed using the P4 language to run various network tasks, replacing today’s purpose-built networking equipment. It is interesting to note that both Barefoot Networks and P4.org are also members of the OpenSwitch project.

Apstra is another interesting startup, launched last week and founded by networking veterans from Big Switch, Arista and Juniper, which offers data center network automation. It employs an intent-driven approach to network operations, and treats the network using the methodologies of distributed systems:

“You need to recognize that your network is a distributed system. This allows you to operate your network as a system”

To be fair, startups are not alone on this front. Check out what Google, Facebook and Amazon have been doing in their data centers. Together, startups, big players and open communities are pushing the traditional networking world into the modern era.

Follow Horovits on Twitter!

Filed under Cloud, SDN, Telecommunications

Verizon Shutting Down Its Public Cloud

Verizon has officially decided to shut down the majority of its public cloud operation. In an announcement sent to its cloud customers, which made waves on social media, Verizon stated its decision to “discontinue its Public Cloud, Reserve Performance and Marketplace services on April the 12th”, leaving customers two months to migrate their data to another safe haven before it disappears together with the cloud services.

The company is offering its Virtual Private Cloud services as an alternative, which indicates Verizon will now focus its cloud offering on private cloud, probably aimed at enterprises.

Verizon is not alone in its decision. Last October HP made a similar choice to quit the public cloud, and so has Dell. The reason is the harsh price competition in the public cloud arena, led by Amazon, which controls the vast majority of this market, followed by Microsoft, Google and IBM. In addition to very competitive prices for their infrastructure-as-a-service (IaaS), these leading vendors offer an ever-growing plethora of platform services (PaaS) which ease development on the cloud.

Traditionally, enterprises have used the likes of Verizon for reliable, high-quality networking. But the public cloud players quickly stepped up and provided next-generation networking, just as they did with compute and storage (at the expense of HP, Dell and the likes), and gained a foothold with enterprises. Even the very conservative banking sector is now moving to the public cloud with these vendors.

Another important advantage of the leading vendors is their global geographical presence, which local vendors find hard to match. We can expect further consolidation in the public cloud, with traditional enterprise vendors increasingly pressed to innovate and reinvent themselves. IBM is one good example. We’ll see who else stays relevant in the public cloud age.

Follow Dotan on Twitter!

 

Filed under Cloud

Google Unveils Its Next Gen Datacenter Network Architecture

Organizations such as Google, Amazon and Facebook possess a sheer size, scale and distribution of data that pose a new class of networking challenges, one that traditional networking vendors cannot meet. According to Google’s team:

Ten years ago, we realized that we could not purchase, at any price, a datacenter network that could meet the combination of our scale and speed requirements.

Facebook’s engineering team ran into very similar problems. Late last year Facebook published its datacenter networking architecture, called “data center fabric”, which is meant to meet this exact challenge, and it has continued expanding the architecture this year.

Now Google is joining the game, sharing its in-house datacenter network architecture in a new paper published this week. The current (5th) generation of Google’s architecture, called Jupiter, is able to deliver more than 1 petabit/sec of total bisection bandwidth. This means that each of 100,000 servers can communicate with one another in an arbitrary pattern at 10Gb/s. The new architecture also means substantially improved efficiency of the compute and storage infrastructure, and ultimately much higher utilization in job scheduling.
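A quick back-of-the-envelope check shows how those two numbers line up (a sketch; the server count and per-server rate are the ones quoted above):

```python
# Sanity check of Jupiter's headline figure: 100,000 servers, each
# able to talk to any other at 10 Gb/s in an arbitrary pattern.
servers = 100_000
per_server_gbps = 10

total_gbps = servers * per_server_gbps   # 1,000,000 Gb/s
total_pbps = total_gbps / 1_000_000      # 10^6 gigabits = 1 petabit

print(f"{total_pbps:.0f} petabit/sec")   # -> 1 petabit/sec
```

So the 10Gb/s-per-server claim is exactly the 1 petabit/sec of bisection bandwidth, restated per host.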

Google based its new networking architecture on the principles of Software-Defined Networking (SDN). Using the SDN approach, Google was able to escape the traditional distributed networking protocols, with their slow dissemination, high bandwidth overhead and manual switch configurations, and move to a single global configuration for the entire network that is then pushed to all switches, with each switch taking its part of the scheme.
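The “single global configuration” idea can be sketched in a few lines (a toy model; the switch names and config fields are illustrative, not Google’s actual format): one network-wide config is computed centrally, and each switch extracts only the slice that applies to it.

```python
# Toy sketch: a single, centrally computed global configuration,
# from which each switch takes only its own part of the scheme.
# Switch names and fields are made up for illustration.
global_config = {
    "spine-1": {"links": ["tor-1", "tor-2"], "speed_gbps": 40},
    "tor-1":   {"links": ["spine-1"],        "speed_gbps": 10},
}

def configure(switch_id, config):
    # No per-switch manual configuration and no distributed protocol
    # chatter: the switch simply applies its slice of the global view.
    return config.get(switch_id, {})

print(configure("tor-1", global_config))
# -> {'links': ['spine-1'], 'speed_gbps': 10}
```

The contrast with distributed protocols is that consistency comes from the single source of truth, not from switches converging on a view by exchanging messages.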

Google has been an advocate of SDN for quite some time, and is a member of the Open Networking Foundation (ONF), a consortium of industry leaders such as Facebook, Microsoft, Deutsche Telekom, Verizon and, of course, Google, promoting open standards for SDN, primarily the OpenFlow project, which Google fully adopted.

SDN and network virtualization have been major trends in the networking realm, especially in cloud-based deployments with their highly distributed, scalable and dynamic environments. All major cloud vendors have been innovating in their next-gen networking. Most notably, Google has been actively competing with Amazon on driving its cloud networking to the next generation, presenting its Andromeda project for network virtualization.

The big players will continue to face the networking and scalability challenges of the new cloud and distributed era first, and will lead innovation in that field. The open approach they have adopted, with open standards, open source and sharing with the community, will enable the smaller players to benefit from this innovation and push the industry forward.

You can read Google’s paper on Jupiter here.

Follow Dotan on Twitter!

Filed under Cloud, SDN

Facebook Shares Open Networking Switch Design, Part of its Next Gen Networking

Facebook’s enormous scale comes with enormous technological challenges, which go beyond conventionally available solutions. For example, Facebook decided to abandon Microsoft’s Bing search engine and instead develop its own revamped search capabilities. Another important area is Facebook’s massive networking needs, which called for a whole new paradigm, code-named “data center fabric”.

The next step in Facebook’s next-gen networking architecture is “6-pack”, a new open and modular switch announced just a few days ago. It is interesting to note that Facebook chose to announce the new switch on the same day Cisco reported its earnings. This is more than a hint to the networking equipment giant, the representative of “traditional networking”. As Facebook says in its announcement, it started the quest for next-gen networking due to

the limits of traditional networking technologies, which tend to be too closed, too monolithic, and too iterative for the scale at which we operate and the pace at which we move.

The new “6-pack” is a modular, high-volume switch built on merchant-silicon-based hardware. It enables building a switch of any size using a simple set of common building blocks. The design uses a hybrid Software-Defined Networking (SDN) approach: while classic SDN separates the control plane from the forwarding plane and centralizes control decisions, in Facebook’s hybrid architecture each switching element contains a full local control plane on a microserver that communicates with a centralized controller.
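The hybrid split can be sketched as a toy model (all class and route names here are illustrative, not Facebook’s actual software): a central controller disseminates global routing state, while each switch keeps a local control plane that makes per-packet forwarding decisions on its own, without a round-trip to the controller.

```python
# Toy sketch of a hybrid SDN split: a centralized controller computes
# global state, while each switch's local control plane (running on a
# microserver) caches it and forwards traffic independently.

class CentralController:
    def __init__(self):
        self.global_routes = {}              # prefix -> next hop, global view

    def set_route(self, prefix, next_hop):
        self.global_routes[prefix] = next_hop

    def push(self, switches):
        for sw in switches:                  # disseminate the global state
            sw.sync(dict(self.global_routes))

class SwitchLocalControlPlane:
    """Per-switch control plane; owns a local copy of the routes."""
    def __init__(self, name):
        self.name = name
        self.forwarding_table = {}           # survives controller outages

    def sync(self, routes):
        self.forwarding_table = routes

    def forward(self, prefix):
        # Purely local decision: no controller round-trip per packet.
        return self.forwarding_table.get(prefix, "drop")

controller = CentralController()
switches = [SwitchLocalControlPlane("rack-1"), SwitchLocalControlPlane("rack-2")]
controller.set_route("10.0.0.0/8", "spine-3")
controller.push(switches)
print(switches[0].forward("10.0.0.0/8"))     # -> spine-3
```

The design choice the hybrid buys is resilience: because each switch holds a full local control plane, forwarding keeps working even if the centralized controller is temporarily unreachable.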

Facebook made the design of “6-pack” open as part of the Open Compute Project, together with all the other components of its data center fabric. This is certainly not good news for Cisco and the other vendors, but it is great news for the community. You can find the full technical design details in Facebook’s post.

Facebook is not the only one on the front line of scaling challenges. The open cloud community OpenStack, as well as the leading public cloud vendors Google and Amazon, have also shared networking strategies to meet the new challenges coming with the new workloads of modern cloud computing environments.

Cloud and Big Data innovations were born out of necessity in IT, driven by the companies with the most challenging use cases and backed by open communities. The same innovation is now happening in networking, paving the way to simpler, scalable, virtual and programmable networking based on merchant silicon.

Follow Dotan on Twitter!

Filed under Cloud, IT, SDN

Facebook’s Big Data Analytics Boosts Search Capabilities

A few days ago Facebook announced its new search capabilities. These are Google-like capabilities for searching your history, the feature that was the crown jewel of Google+, Google’s attempt to fight off Facebook. Want to find that funny thing you posted when you took the ice bucket challenge a few months ago? It’s now easier than ever. And it’s also now supported on your phone.

You may think this is a simple (yet highly useful) feature. But when you come to think of it, it is quite a challenge, considering the 1.3 billion active users generating millions of events per second. The likes of Facebook, Google and Twitter cannot settle for traditional processing capabilities, and need to develop innovative ways to do stream processing at high volume.

A challenge just as big comes with queries: Facebook’s big data stores process tens of petabytes and hundreds of thousands of queries per day. Serving such volumes while keeping most response times under 1 second is hardly a challenge traditional databases can meet.

These challenges called for an innovative approach. For example, it was Facebook’s Data Infrastructure Team that developed and open-sourced Hive, the popular Hadoop-based software framework for Big Data queries. Facebook also took an innovative approach in building its data centers, both in the design of the servers and in its next-gen networking, designed to meet the high and constantly increasing traffic volumes within its data centers.

Facebook is taking its data challenge very seriously, investing in internal research as well as in collaboration with academia and the open-source community. In a data faculty summit hosted by Facebook a few months ago, Facebook shared its top open data problems, raising many interesting challenges in managing Small Data, Big Data and the related hardware. With the announced release of Facebook Search for mobile, I was reminded in particular of the challenges raised at that summit around adapting their systems to the mobile realm: where the network is flaky, where much of the content is pre-fetched rather than pulled on demand, and where privacy checks need to be done much earlier in the process. The recent release may indicate new innovative solutions to these challenges. I am looking forward to hearing some insights from the technical team.

Facebook, Twitter and the like face the Big Data challenges early on. As I said before:

These volumes challenge the traditional paradigms and trigger innovative approaches. I would keep a close eye on Facebook as a case study for the challenges we’d all face very soon.

 

Follow Dotan on Twitter!

Filed under Big Data, Real Time Analytics, Solution Architecture

Facebook Shares Its Next Gen Networking

In this age of cloud-based services, social media and the Internet of Things, when everyone and everything is connected and even our once-local assets such as documents, spreadsheets and photos are now stored and edited online, network connectivity has become more precious than gold. Naturally, the biggest players with the biggest workloads face the challenges first, and pave the way beyond current technologies, protocols and methodologies. Recently we got great case studies when Amazon and Google shared their next-gen networking strategies.

Another major player that recently shared its next-gen networking strategy is Facebook. In a detailed blog post, Alexey Andreyev, a Facebook network engineer, gave a thorough technical overview of the new “data center fabric” piloted in Facebook’s Altoona data center. This caught the attention of GigaOm, which last week invited Facebook’s Director of Network Engineering, Najam Ahmad, to a dedicated podcast to gain some more insight.

Facebook moved away from the old cluster-based architecture to a modern fabric-based one. This helped it escape the endless race for bleeding-edge, high-end networking equipment and the associated vendor lock-in:

To build the biggest clusters we needed the biggest networking devices, and those devices are available only from a limited set of vendors.

Another interesting point was about the move to a bottom-up Software Defined Networking (SDN) approach:

The only difference is that we’re essentially saying that we don’t want to build the networks in the traditional way. We want to build them in more of the SDN philosophy, and the vendors need to catch up, and so whoever provides the solution will be part of the system overall.

We see the trend of SDN and virtual networking also with vendors such as Amazon and Google, as well as in the cloud community, as was evident at the last OpenStack Summit. I expect network virtualization and software-defined methodologies will become even more prominent in Facebook’s architecture as it evolves and as Facebook’s volumes and complexity grow.

Facebook is a great example of an online company at the largest scale, with more than 1.35 billion users around the globe, a diverse set of services, applications and workloads, and an ever-increasing traffic volume (the vast majority of which is machine-to-machine). These volumes challenge the traditional paradigms and trigger innovative approaches. I would keep a close eye on Facebook as a case study for the challenges we’d all face very soon.

————————————————————————-

Update: In February 2015 Facebook shared details on “6-pack”, a new open and modular switch at the heart of its datacenter networking architecture. You can read more about it in this post.

Follow Dotan on Twitter!

Filed under Cloud, SDN