Tag Archives: Open Source

Edge Computing Gets A Push From Telcos To Power Self-Driving Cars, AR/VR and Other Future 5G Applications

The next revolution after Cloud Computing is Edge Computing, a revolution pushed by industry trends such as the Internet of Things (IoT), Big Data Analytics and Machine Learning. The idea behind Edge Computing is simple: doing the processing not in central cloud data centers hundreds of miles away but rather “at the edge”, in close proximity to the source (end user, cellphone, smart car etc.). Running at the edge, according to AT&T, can “boost the potential of self-driving cars, augmented and virtual reality, robotic manufacturing, and more”.

But where is this “edge”? And who provides it to us?

Could public Cloud Computing vendors serve Edge Computing? Cloud vendors make their money off centralized services, leveraging their economies of scale to serve the masses from their monstrous, state-of-the-art centralized data centers. But when it comes to Edge Computing this winning formula breaks, since cloud vendors simply don’t have a localized edge presence within a few miles of the end user (not even their distributed caching/CDN sites). Indeed, cloud vendors are starting to recognize the potential threat and are trying to mitigate it by providing some edge computing solutions, but these depend on others to provide the edge location. One might even speculate that Amazon’s recent purchase of the Whole Foods store chain may also serve its edge computing aspirations by providing local real estate.

So who has the edge presence?

The perfect candidates are the Telcos, the communications service providers who own the access networks that deliver data, telephony and even TV to every home, business and cellphone. A prime example is AT&T, which last month announced its plans to deliver Edge Computing:

Instead of sending commands hundreds of miles to a handful of data centers scattered around the country, we’ll send them to the tens of thousands of central offices, macro towers, and small cells usually never farther than a few miles from our customers.

AT&T will start deploying Edge Computing in dense urban areas. The first deployed service is FlexWare, targeted at enterprise customers. But that’s just the first step. AT&T sets out to “reinvent the cloud through Edge Computing”, leveraging its other cutting-edge technologies of Software Defined Networking and Network Virtualization. Later on, with its next-generation 5G networks, AT&T says it expects to reach “single-digit millisecond latency” – an ambitious goal indeed.
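To see why edge proximity is central to that latency goal, consider propagation delay alone. The sketch below is a back-of-the-envelope calculation; the fiber speed and distances are illustrative assumptions, not AT&T’s figures:

```python
# Back-of-the-envelope: round-trip propagation delay over fiber,
# ignoring processing and queuing. All figures are illustrative.

FIBER_SPEED_KM_S = 200_000  # light in fiber travels at roughly 2/3 of c

def round_trip_ms(distance_km: float) -> float:
    """Round-trip propagation delay in milliseconds."""
    return 2 * distance_km / FIBER_SPEED_KM_S * 1000

central_dc = round_trip_ms(800)  # a data center hundreds of miles away
edge_site = round_trip_ms(5)     # a central office a few miles away

print(f"central: {central_dc:.2f} ms, edge: {edge_site:.3f} ms")
```

Propagation alone eats most of a single-digit-millisecond budget over hundreds of miles, while a site a few miles away contributes a negligible fraction of a millisecond, leaving room in the budget for radio access and processing.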

AT&T is not the only one in Telecoms to explore Edge Computing

The European Telecommunications Standards Institute (ETSI) set out in 2014 to standardize edge computing for the telcos under its Multi-Access Edge Computing architecture (MEC, originally Mobile Edge Computing, expanded last year to cover both mobile and fixed access networks). There are also open-source initiatives such as the Open Fog Consortium (initiated by Cisco, which coined the term “Fog Computing”), Open Edge Computing, and the recently-announced Automotive Edge Computing Consortium (which focuses on connected cars), each with its proud list of member telcos teamed up with vendors and academic institutions (some members participating in more than one). Edge Computing is also widely discussed by telcos in 5G forums, seeing that the upcoming 5th-generation networks will face not just a surge in bandwidth demand but also rising demands from massive IoT communications and latency-sensitive applications.

Open Edge Computing


Telcos can leverage their unique footprint to provide Edge Computing services

The world of Edge Computing is getting a serious boost from the Telco industry, with its existing ubiquitous local points of presence, customer base and service provider capabilities – all the ingredients needed to provide edge computing as a service. This is also a life-saver for the telcos, which are facing the risk of becoming “just a dumb pipe”. While telcos largely failed to compete in the public cloud arena, Edge Computing enables telcos to fight off the cloud vendors and other over-the-top players biting off their business, and bring much-needed value-added services right to the very edge.




Filed under Edge Computing, Telecommunications

One Open Source To Orchestrate Them All

First the change happened in Information Technology (IT): moving from hardware to software; virtualization inspired by cloud computing; data centers becoming configurable and programmable as software using a DevOps approach; traditional vendor-locked solutions superseded by new-world open source initiatives such as OpenStack, the Open Compute Project and the Cloud Native Computing Foundation.

Then Communications Technology (CT) followed the lead, making its move into the new world with notions such as software defined networking (SDN), network functions virtualization (NFV) and central office re-architected as a data center (CORD). Inevitably open source took a lead role here as well, with a multitude of projects popping up, led by different industry forces.

In fact, too many projects popped up, leaving the Telecom industry perplexed and unable to converge on one de-facto standard. Have you ever tried to conduct an orchestra in which each player requires a different sign language from the maestro?

But then came the twist in the plot, when the Chinese and the Americans joined forces: ECOMP (Enhanced Control, Orchestration, Management and Policy), which was open-sourced by AT&T, and the Open-O (Open Orchestrator) project, led primarily by China Mobile, China Telecom and Huawei, merged under the Linux Foundation’s umbrella to create the Open Network Automation Platform (ONAP).

What shape will the merged project take? That is yet to be decided by the community. The topic was much discussed in February at the announcement at Mobile World Congress, and even more so during the Open Networking Summit this month, but there are still more questions than answers for ONAP, around modeling, protocols, descriptors, architecture…

The most important question, however, is whether the new merged mega-project will carry the critical mass required to pull the industry towards it, to become the converging force, the de-facto standard. Seeing the forces behind ECOMP, Open-O and now ONAP, including Intel, IBM, Cisco, Nokia and others, it looks promising. And the Linux Foundation is a proven vehicle for widely adopted open source projects. If it succeeds, this may very well be the turning point, taking the NFV & SDN wagon out of the mud and onto the fast track to production.

*Disclaimer: The writer has been working on the orchestration initiatives of ONAP members Amdocs and GigaSpaces.



Filed under Cloud, DevOps, NFV, SDN, Telecommunications

Open Source Is Taking Over Networks, Startups Lead The Way

Innovating in the networking world is hard. With purpose-built boxes, protocols, technologies, legacy, processes… But when industry veterans from the likes of Apple, Juniper and Big Switch start up fresh and think outside the box – that’s when networks get shaken up. Just see the updates from the last couple of weeks:

After building the complex networks for iCloud, Apple engineering veterans decided to leverage their experience and last week launched their new startup, SnapRoute. SnapRoute promises to bring a “developer friendly and operations focused network protocol stack that runs on all commoditized network and hardware with any Linux operating system”. This open stack removes the dependency on the software supplied by network equipment vendors (of routers, switches and the like) and enables innovation decoupled from the vendor.


SnapRoute’s first open source project is its FlexSwitch, which it contributed to the Facebook-founded Open Compute Project. FlexSwitch will also be offered as an option for the OpenSwitch operating system. OpenSwitch is an open source, Linux-based network operating system designed to power enterprise grade switches from multiple hardware vendors that will enable organizations to rapidly build data center networks that are customized for unique business needs. Earlier this month OpenSwitch got accepted to the Linux Foundation, which will surely facilitate and boost its open source community activity.


Another promising startup, which made headlines recently following Google’s investment, is Barefoot Networks, which brings the vision of programmable networks. Its innovative switch chips can be programmed using the P4 language to run various network tasks, replacing today’s purpose-built networking equipment. It is interesting to note that both Barefoot Networks and P4.org are also members of the OpenSwitch project.
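P4 describes packet processing as match-action tables that the switch chip executes at line rate. As a rough conceptual sketch of that abstraction (written in Python for readability, not actual P4; the addresses and port numbers are made up):

```python
# Toy illustration of the match-action abstraction behind P4-programmable
# switch chips: a table maps a packet header field to an action, with a
# default action for packets that match no entry.

class MatchActionTable:
    def __init__(self, default_action):
        self.entries = {}            # match key -> action callable
        self.default = default_action

    def add_entry(self, key, action):
        self.entries[key] = action

    def apply(self, packet):
        action = self.entries.get(packet["dst"], self.default)
        return action(packet)

def forward(port):
    # Action: send the packet out of the given port.
    return lambda pkt: {**pkt, "out_port": port}

def drop(pkt):
    # Default action: no output port means the packet is dropped.
    return {**pkt, "out_port": None}

# Populate a forwarding table keyed on destination address.
table = MatchActionTable(default_action=drop)
table.add_entry("10.0.0.1", forward(1))
table.add_entry("10.0.0.2", forward(2))

print(table.apply({"dst": "10.0.0.1"}))  # matches an entry
print(table.apply({"dst": "10.9.9.9"}))  # falls through to drop
```

In a real P4 target the tables and actions are compiled into hardware pipeline stages; the programmability lies in defining the tables, keys and actions rather than accepting a vendor’s fixed-function set.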

Apstra is another interesting startup, launched last week and founded by networking veterans from Big Switch, Arista and Juniper, which offers data center network automation. It employs an intent-driven approach for network operations, and treats the network using the methodologies of distributed systems:

“You need to recognize that your network is a distributed system. This allows you to operate your network as a system”

To be fair, startups are not alone on this front. Check out what Google, Facebook and Amazon have been doing in their data centers. Together, startups, big players and open communities are pushing the traditional networking world into the modern era.



Filed under Cloud, SDN, Telecommunications

Mesosphere Open-Sources Its Containers Management System

The containers movement received major news yesterday when Mesosphere announced it has open-sourced its Data Center Operating System (DC/OS). The core will be released under the Apache 2.0 open source license, with enterprise-grade tools and features, such as security, performance, compliance and monitoring, kept for the paid enterprise version. The new DC/OS community already has more than 60 partner companies, including major names such as Microsoft, HPE, Cisco, Accenture and Verizon. There are also important names from DevOps automation, including Chef and Puppet.

Mesosphere’s open source strategy is primarily rooted in the fact that it is the commercial backer of the Apache Mesos open source project. But Mesosphere took additional steps and joined the founding teams of the Open Container Initiative (OCI) and the Cloud Native Computing Foundation (CNCF), which were founded in the past year by big names such as Google, Microsoft, IBM and HPE to standardize on containers. In fact, in its announcement yesterday Mesosphere said it was considering hosting DC/OS externally under the CNCF (among other alternatives).

Mesosphere’s open source move comes a month after it joined the prestigious unicorn club, closing its Series C round with $73.5 million raised at a reported valuation of over $1 billion. Not surprisingly, Mesosphere’s investors Microsoft and HPE, which also collaborate with Mesosphere at the Open Container Initiative, joined the DC/OS project as founding members. In fact, Microsoft announced yesterday that it is adding support for DC/OS in its Azure cloud, a year after it added support for Docker on Azure. This is part of the fierce cloud competition over containers (so fierce that it drove HP out of the race last year).

Google, a competitor of Microsoft in the public cloud, used a similar open source strategy last year when it decided to open-source its Kubernetes container management system and contribute it to the CNCF upon that foundation’s establishment. Kubernetes powers Google Container Engine, Google’s own response in the cloud wars. While some consider Kubernetes a competitor of Mesosphere, Mesosphere took a collaborative strategy, providing support (namely a package) for Kubernetes alongside its own Marathon product, as well as contributing code to the Kubernetes open source project.
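For a flavor of what Marathon orchestrates: applications are described as JSON specs submitted to Marathon’s REST API. The sketch below builds a minimal app definition in Python; the field values are illustrative, not from any real deployment:

```python
# A minimal, illustrative Marathon-style app definition. Marathon accepts
# JSON app specs like this via its REST API; the values here are examples.
import json

app = {
    "id": "/demo/web",          # app path in Marathon's group hierarchy
    "cmd": "python3 -m http.server $PORT0",
    "cpus": 0.25,               # fractional CPU share per instance
    "mem": 128,                 # MB of memory per instance
    "instances": 3,             # the scheduler keeps this many copies running
}

payload = json.dumps(app)
# Against a live cluster this payload would be POSTed to the /v2/apps
# endpoint of the Marathon master (host/port here are hypothetical).
print(payload)
```

The declarative shape is the point: the operator states how many instances should exist and with what resources, and the scheduler continuously reconciles the cluster toward that state, much as Kubernetes does with its own manifests.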



Filed under Cloud, Containers, DevOps

Want To Scale Like Google, Amazon? Design Your Own Data Center

Google is joining the Open Compute Project (OCP), the community-driven open project founded by Facebook for standardizing on IT infrastructure. OCP’s mission statement is to

“break open the black box of proprietary IT infrastructure to achieve greater choice, customization, and cost savings”

Google strategically announced joining the OCP at last week’s OCP Summit, together with its first contribution: a new energy-efficient rack specification that includes 48V power distribution. According to Google, the new rack design is at least 30% more energy efficient and more cost effective in supporting its higher-performance systems.
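The physics behind 48V distribution is straightforward: for a fixed power draw, raising the voltage lowers the current, and resistive loss in the distribution path scales with the square of the current. A toy calculation (the wattage and resistance figures are arbitrary assumptions; Google’s quoted 30% gain reflects its full design, not just this one effect):

```python
# Resistive distribution loss: I = P / V, loss = I^2 * R.
# Moving from 12V to 48V cuts current 4x, hence resistive loss 16x.
# The wattage and resistance below are arbitrary illustrative values.

def distribution_loss_w(power_w: float, volts: float, resistance_ohm: float) -> float:
    current_a = power_w / volts
    return current_a ** 2 * resistance_ohm

loss_12v = distribution_loss_w(1000, 12, 0.01)
loss_48v = distribution_loss_w(1000, 48, 0.01)

print(f"12V loss: {loss_12v:.1f} W, 48V loss: {loss_48v:.1f} W")
```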

The OCP includes, in addition to Facebook and Google, other big names such as Intel, Goldman Sachs, Microsoft and Deutsche Telekom. The member list also includes traditional server and networking manufacturers such as Ericsson, Cisco, HP and Lenovo, which are expected to be seriously disrupted by the new open standards initiative, as it undermines their domination of this $140B industry.


Last year Google already made an important move, sharing its next-generation data center network architecture. In its announcement last week, Google hinted at additional upcoming contributions to OCP, such as better disk solutions for cloud-based applications. In his post, John Zipfel shared Google’s longer-term vision for OCP:

And we think that we can work with OCP to go even further, looking up the software stack to standardize server and networking management systems.

Google and Facebook are among the “big guys” running massive data centers and infrastructure, whose sheer scale drove them to drop commodity IT infrastructure and start developing their own in-house optimized infrastructure to reduce costs and improve performance.

Amazon is another such big guy, especially with the massive infrastructure required to power Amazon Web Services, which holds the lion’s share of the public cloud market, followed by Microsoft and Google (both of which are OCP members). In an interview last week, Amazon’s CTO Werner Vogels said:

“To be able to operate at scale like we do it makes sense to start designing your own server infrastructure as well as your network. There is great advantages in [doing so].”

With the growing popularity of cloud computing, many of the “smaller guys” (even enterprises and banks) will migrate their IT to some cloud hosting service to save themselves from buying and managing their own infrastructure, which in turn means even more of the world’s IT will sit with the “big guys”. Compounding this, the public cloud market is undergoing consolidation, with big names such as HP, Verizon and Dell dropping out of the race, which would leave most of the world’s IT in the hands of a few top-tier cloud vendors and Facebook-scale giants. These truly “big guys” will not settle for anything short of the best for their IT.


———————————————————————-
Update: At the GCP Next conference the following week, Google released a 360° virtual tour of its data center. See more here.


Filed under Cloud, IT

HP Quits Public Cloud Race, Focusing On Hybrid Cloud For Enterprises

If you don’t see how the IT world is changing, just follow the recent tectonic shifts: while some tectonic plates merge (see Dell & EMC), others split (see the HP split). The big players are assessing their play in this new world of IT, where people and companies consume services rather than products, and where businesses run entire operations without owning “stuff” (think of the biggest taxi company not owning a single vehicle…). In this world, the game shifts from selling boxes and licenses to cloud-based services and open-source software and standards. And that’s the shift the big guys are now facing.

In its recent evaluation of the company’s future, HP (soon to be HP Enterprise) realized it cannot compete in the global public cloud arena, and decided to shut down its HP Helion Public Cloud offering on January 31, 2016. This arena is heavily dominated by Amazon, followed by Google and Microsoft; it requires a lot of upfront investment to gain significant global coverage, and it is locked in a fierce war over price and performance.


Instead, HP will focus on hybrid cloud, helping its traditional enterprise customers combine their on-premises data centers with different public cloud vendors. In this way HP actually plans to partner with the big public cloud vendors. In a recent blog post, Bill Hilf, SVP and GM, HP Cloud, stated that:

To support this new model, we will continue to aggressively grow our partner ecosystem and integrate different public cloud environments.

HP’s strategic choice to focus on hybrid cloud should come as no surprise. With the agenda of bringing hybrid cloud to enterprises, HP acquired Stackato from ActiveState three months ago. Also, late last year HP acquired the open-source software maker Eucalyptus to “accelerate hybrid cloud adoption in the enterprise”, which paved HP’s way to offering compatibility with Amazon’s AWS cloud. On the Microsoft front, HP has been working to support the Azure cloud and the Office 365 SaaS offering. This may compete with Microsoft’s own hybrid cloud offering announced earlier this year. And Amazon is debating its position on hybrid cloud as well, so these partnerships will be interesting. If formed well, they could lead HP to a true multi-cloud offering.


The big players are all eyeing how to bring the hybrid model to the enterprises, where the big money lies, and where complex environments, systems and constraints mandate such hybrid models and enterprise-grade tooling. We’ll also be seeing more use of open source, such as HP’s adoption of Eucalyptus, Cloud Foundry (for PaaS) and OpenStack. In fact, the OpenStack Summit started today in Tokyo; it’d be interesting to hear HP executives elaborate on their recent and expected moves for Helion.

You can read more on HP’s recent moves around cloud, containers, open source, the HP company split and more in this post.


Filed under Cloud

Biggest Tech Takeover of All Time: Dell Acquires EMC For $67B

Can two veteran giants re-invent themselves by joining forces? That’s what Dell believes in acquiring EMC for a staggering $67 billion, marking the largest tech acquisition of all time. You heard it right: Dell is trying to swallow EMC, a serial acquirer that bought over 20 companies in the past 5 years alone, in a variety of areas such as security, virtualization and, of course, storage, and has recently been trying to make sense of that mammoth as part of its EMC Federation.

What’s the purpose of all that? Take a look at the trends popping out of the press release:

The transaction… brings together strong capabilities in the fastest growing areas of the industry, including digital transformation, software-defined data center, hybrid cloud, converged infrastructure, mobile and security.

All the hot trends and buzzwords are there, the same new trends which made much of their respective traditional businesses irrelevant for the modern age.

Also, beyond technology, there was a cultural gap. The companies painfully discovered that the modern age is much less tolerant of vendor lock-in and black-box types of products, formats and protocols, and expects a more open approach. For example, EMC’s VMware, which dominated the enterprise virtualization realm, has become less relevant with cloud and containers. With this lesson learnt, EMC teamed up with emerging open standards around containers, while VMware adopted OpenStack, the open-source cloud platform. Dell also made its move to team up with open standards around the Internet of Things.

It’s not trivial for such giants to join forces and re-invent themselves. I would expect to see more modern approaches in their products and services, with more openness and a collaborative mindset as a means to regain relevance, while they focus on expanding their offerings to the new technologies.



Filed under Cloud, technology