OpenStack Targets Edge Computing, Launches OpenDev Event

OpenStack has become the clear open source choice for turning organizations’ data centers into private clouds. But now OpenStack is looking beyond the data center and out towards Edge Computing.

OpenStack’s 16th (and latest) release, codenamed Pike, puts emphasis on composable infrastructure, which is stated to “make possible use cases like edge computing and NFV”. While Network Function Virtualization (NFV) has been picking up in OpenStack over the past few years, with good traction in the telecommunications industry, edge computing hasn’t been properly addressed so far. Now OpenStack is set to change that.

The OpenStack Foundation organized the OpenDev 2017 conference last month in San Francisco to “advance the future of edge computing”. The event drew much attention, with participants from over 30 organizations. Given the great interest in edge computing in the telecoms industry, it wasn’t surprising to see at OpenDev major telecom carriers such as Verizon, AT&T and NTT (which last month founded the Automotive Edge Computing Consortium together with Toyota and others), as well as vendors such as Intel, VMware, Ericsson, Red Hat and Huawei. Beyond the telecom industry, OpenDev also drew retail giants such as eBay and Walmart, among others.


OpenStack also collaborates with other edge computing groups, such as Open Edge Computing and the European Telecommunications Standards Institute (ETSI).

While OpenStack promotes a private cloud approach to edge computing, the public cloud vendors are targeting edge computing as well. The battle between private and public cloud options, which began in the centralized cloud, will surely continue at the edge.

Here are a few of the interesting bits from OpenDev 2017:

Verizon’s Beth Cohen presented Verizon’s Virtual Network Services, which offers cloud-based services such as Software-Defined WAN (SD-WAN), security, and routing on a universal CPE (uCPE), an “OpenStack in a Box” deployed at customer premises.

AT&T’s Kandan Kathirvel and Rodolfo Pacheco talked about telco challenges, such as supporting a massive scale of millions of edge nodes, and presented AT&T’s prototyped solution, based entirely on open source software such as Google’s Kubernetes and ONAP orchestration (based on AT&T’s ECOMP merged with OPEN-O under the Linux Foundation).
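Purely as an illustration of the kind of orchestration involved (this is my own sketch, not AT&T’s actual design; the label key, node name and container image are made up), here is how a workload might be pinned to a specific edge site on top of Kubernetes, using the official Python client:

```python
# Minimal sketch: label a node as belonging to an edge site, then schedule
# a workload onto that site via a nodeSelector. Hypothetical names throughout.
from kubernetes import client, config

config.load_kube_config()  # assumes kubeconfig access to the cluster
core = client.CoreV1Api()

# Tag a node with a (made-up) edge-site label.
core.patch_node(
    "edge-node-01",
    {"metadata": {"labels": {"example.com/edge-site": "dallas-co-07"}}},
)

# Run a pod pinned to that edge site.
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="edge-vnf"),
    spec=client.V1PodSpec(
        node_selector={"example.com/edge-site": "dallas-co-07"},
        containers=[client.V1Container(name="vnf", image="nginx:alpine")],
    ),
)
core.create_namespaced_pod(namespace="default", body=pod)
```

Multiply that by millions of nodes and the real challenge becomes the management plane, which is exactly what AT&T’s prototype targets.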

Jonathan Bryce of the OpenStack Foundation shared more of OpenStack’s view of, and plans for, edge computing in his keynote.

For more information on OpenStack’s Edge Computing click here.

For more information on OpenDev 2017 click here.

Follow Horovits on Twitter!



Filed under Edge Computing, OpenStack, Telecommunications

Edge Computing Draws Startups And Venture Capital

Edge computing is the new hype; some see it as the next big thing after cloud computing. Edge computing is drawing attention from the major cloud vendors, as well as from Tier 1 telecom carriers, standards bodies and consortia.

But the big guys are not alone here. As with anything hot and innovative, edge computing has also drawn the attention of the lean and mean: the startups.

Vapor IO is an interesting US-based startup which provides an edge computing platform enabling a simple way to deploy and manage cloud servers. Vapor IO provides both the hardware and the software to remotely administer, manage and monitor the distributed environment. Its main focus is helping telecom carriers and wireless base-station landowners offer cloud compute capabilities in close proximity to the Radio Access Network (RAN). In June Vapor IO launched Project Volutus, with the ambitious mission statement:

Project Volutus seeks to build the world’s largest network of distributed edge data centers by placing thousands of Vapor Chambers at the base of cell towers and directly cross-connecting them to the wireless networks. This will make it possible to push true cloud capabilities to within yards of the end device or application, one hop from the wireless network.


Vapor IO backs its ambitious statement with a strategic investor: Crown Castle, the largest wireless tower company in the US, which leases towers to all the top wireless carriers, including Verizon, AT&T, and T-Mobile. With Vapor IO tapping into Crown Castle’s existing network of 40,000 cell towers and 60,000 miles of fiber optic lines in metropolitan areas, the startup seems up to fulfilling its vision. Vapor IO is also among the founding members of Open19, an open foundation formed by LinkedIn together with HPE and GE to establish open standards for a truly open, innovative platform for data centers and edge platforms.

Another interesting US-based startup is Packet, with its bare-metal distributed micro data centers. In July Packet announced an expansion to Ashburn, Atlanta, Chicago, Dallas, Los Angeles, and Seattle, along with new international locations in Frankfurt, Toronto, Hong Kong, Singapore, and Sydney, bringing it to 15 global locations to date. Packet’s technology is based on the hottest industry trends: cloud and containers. It has also partnered with major new-age technology players such as Docker, Mesosphere and Cloud66. Packet’s vision is also well backed, with its latest funding round of $9.4 million led by telecom and internet giant SoftBank. Its customer base includes Cisco, the industry leader in networking.

Follow Horovits on Twitter!


Filed under Edge Computing

Samsung Boosts Autonomous Driving Investment With $300 Million Fund And A Strategic Business Unit

Samsung’s acquisition of auto parts maker Harman earlier this year was just the beginning. Now Samsung is establishing a strategic business unit under Harman’s Connected Car division, focused on autonomous driving and Advanced Driver Assistance Systems (ADAS). This follows a similar move by Google, which formalized its autonomous vehicle research in a business unit late last year, about the same time the Harman acquisition was conceived.

The new strategic business unit will focus on engineering, high-performance computing, sensor technologies, algorithms, artificial intelligence, as well as connectivity and cloud solutions. One may speculate that this will ultimately also connect to Samsung’s Internet of Things (IoT) offering, with its IoT cloud and connectivity solutions.

In addition, Samsung is launching a new $300-million fund focused on connected car and autonomous technologies. With the new fund Samsung plans to invest in automotive start-ups and technology. The fund has already made its first investment of €75 million in TTTech, a startup focusing on the safety and reliability of networked computer systems. With these moves Samsung clearly signals that it is joining the race to disrupt transportation. According to Young Sohn, President and Chief Strategy Officer of Samsung Electronics and Chairman of the Board of HARMAN, in the press release:

The Autonomous/ADAS Strategic Business Unit and automotive fund reflect the company’s commitment to the values of open innovation and collaboration. In partnership with OEMs and startups, we will make the driver and passenger experience safer, more convenient, and more enjoyable.

Last month Samsung got approval to test self-driving cars in California, after receiving a similar approval in its home country of South Korea a few months before.


Samsung is not alone in this race. Its competitor Intel has also been investing heavily in autonomous and connected cars, with the Mobileye acquisition earlier this year and the founding of the new Automotive Edge Computing Consortium last month together with Toyota, Denso Corp, NTT and Ericsson. Qualcomm is also eyeing a merger with automotive chip maker giant NXP Semiconductors (50% bigger than its closest competitor in the automotive industry), though the deal currently seems to be held up by competitors and regulators.

Follow Horovits on Twitter!


Filed under Autonomous Car

Edge Computing Gets A Push From Telcos To Power Self-Driving Cars, AR/VR and Other Future 5G Applications

The next revolution after Cloud Computing is Edge Computing, a revolution pushed by industry trends such as the Internet of Things (IoT), Big Data Analytics and Machine Learning. The idea behind Edge Computing is simple: doing the processing not in central cloud data centers hundreds of miles away but rather “at the edge”, in close proximity to the source (end user, cellphone, smart car etc.). Running at the edge, according to AT&T, can “boost the potential of self-driving cars, augmented and virtual reality, robotic manufacturing, and more”.

But where is this “edge”? And who provides it to us?

Could public cloud vendors serve edge computing? Cloud vendors make their money off centralized services, leveraging their economies of scale to serve the masses from their monstrous, state-of-the-art central data centers. But when it comes to edge computing this winning formula breaks down, since cloud vendors simply don’t have a localized edge presence within a few miles of the end user (not even their distributed caching/CDN sites). Indeed, cloud vendors are starting to recognize the potential threat and are trying to mitigate it by providing some edge computing solutions, but these depend on others to provide the edge location. One might even speculate that Amazon’s recent purchase of the Whole Foods store chain may also serve its edge computing aspirations by providing local real estate.

So who has the edge presence?

The perfect candidates are the Telcos, the communications service providers who own the access networks that deliver data, telephony and even TV to every home, business and cellphone. A prime example is AT&T, which last month announced its plans to deliver Edge Computing:

Instead of sending commands hundreds of miles to a handful of data centers scattered around the country, we’ll send them to the tens of thousands of central offices, macro towers, and small cells usually never farther than a few miles from our customers.

AT&T will start deploying Edge Computing in dense urban areas. The first deployed service is FlexWare℠, targeted at enterprise customers. But that’s just the first step. AT&T sets out to “reinvent the cloud through edge computing”, leveraging its other cutting-edge technologies of Software-Defined Networking and Network Virtualization. Later on, with its next-generation 5G networks, AT&T says it expects to reach “single-digit millisecond latency” – an ambitious goal indeed.
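To get a feel for how ambitious that is, here is a back-of-envelope propagation-delay calculation (the distances are my own illustrative numbers, not AT&T’s). Light in optical fiber travels at roughly two-thirds of its vacuum speed, so distance alone puts a hard floor under round-trip latency, before any queuing or processing time is counted:

```python
# Propagation-delay floor on round-trip time (RTT), fiber path assumed.
SPEED_IN_FIBER_KM_PER_MS = 200  # ~2/3 of c (300,000 km/s), per millisecond

def min_rtt_ms(distance_km: float) -> float:
    """Lower bound on RTT from distance alone (no queuing, no processing)."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS

print(min_rtt_ms(800))  # remote cloud region ~800 km away: >= 8.0 ms
print(min_rtt_ms(5))    # edge site a few km away:          >= 0.05 ms
```

In other words, a data center hundreds of kilometers away burns the entire single-digit-millisecond budget on the speed of light alone; an edge site a few miles out leaves almost all of it for actual processing.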

AT&T is not the only one in Telecoms to explore Edge Computing

The European Telecommunications Standards Institute (ETSI) set out in 2014 to standardize edge computing for the telcos under its Multi-access Edge Computing (MEC) architecture (originally Mobile Edge Computing, expanded last year to cover both mobile and fixed access networks). There are also open initiatives such as the OpenFog Consortium (initiated by Cisco, which coined the term “Fog Computing”), Open Edge Computing, and the recently announced Automotive Edge Computing Consortium (which focuses on connected cars), each with its proud list of member telcos teamed up with vendors and academic institutions (some members participating in more than one). Edge computing is also widely discussed by telcos in 5G forums, seeing that the upcoming 5th-generation networks will face not just a surge in bandwidth demand but also rising demands from massive IoT communications and latency-sensitive applications.


Telcos can leverage their unique footprint to provide Edge Computing services

The world of Edge Computing is getting a serious boost from the telco industry, with its existing ubiquitous local points of presence, customer base and service-provider capabilities – all the ingredients needed to provide edge computing as a service. This is also a lifesaver for the telcos, which face the risk of becoming “just a dumb pipe”. While telcos have largely failed to compete in the public cloud arena, edge computing enables them to fight off the cloud vendors and other over-the-top players biting off their business, and to bring much-needed value-added services right to the very edge.

Follow Horovits on Twitter!


Filed under Edge Computing, Telecommunications

Toyota Launches Automotive Edge Computing Consortium To Address Big Data From Connected and Self Driving Cars

The age of smart connected cars and autonomous vehicles brings with it a new challenge to the automotive industry: big data. Japanese auto manufacturer Toyota estimates that the data volume between vehicles and the cloud will reach 10 exabytes (roughly 10.7 billion gigabytes) per month by 2025, approximately 10,000 times the present volume. This sort of big data challenge calls for Edge Computing.
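As a quick sanity check on those figures (my own arithmetic, assuming binary gigabytes, which is how the 10.7 billion figure appears to be derived):

```python
# 1 exabyte = 2**60 bytes and 1 gigabyte = 2**30 bytes, so 1 EB = 2**30 GB.
GB_PER_EB = 2 ** 30

monthly_2025_gb = 10 * GB_PER_EB  # 10,737,418,240 GB, i.e. ~10.7 billion GB
present_monthly_gb = monthly_2025_gb / 10_000  # ~1.07 million GB
print(monthly_2025_gb, present_monthly_gb)
```

Dividing by the quoted 10,000x growth factor implies that today’s vehicle-to-cloud volume is on the order of a petabyte per month.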

This challenge brought Toyota to team up with Japanese auto parts maker Denso Corp, Japanese telecom carrier NTT, Intel and Ericsson to form the Automotive Edge Computing Consortium, announced a few days ago. This consortium will

develop an ecosystem for connected cars to support emerging services such as intelligent driving and transport, creation of maps with real-time data as well as driving assistance based on cloud computing.

The consortium will use Edge Computing and network design to accommodate automotive big data in a reasonable fashion between vehicles and the cloud.

Last March Toyota showed off its first autonomous test vehicle developed entirely by the Toyota Research Institute, following Google, Tesla, Uber and others in the race to disrupt transportation. Even consortium member Intel announced last week that it is starting to build a fleet of fully autonomous (SAE Level 4) test cars, based on its acquisition of Mobileye earlier this year.

Toyota states its exploration of autonomous vehicles dates back as far as 2005. Now, with an edge computing architecture, it can also face the associated big data challenge.

Follow Horovits on Twitter!


Filed under Autonomous Car, Big Data, Cloud, Edge Computing, Internet of Things, IoT

The Internet of Things Drives Amazon To Edge Computing

Amazon.com is the bearer of the e-commerce vision: buy anything online. Or, as Amazon’s mission statement goes: a place where people can come to find and discover anything they might want to buy online.

But then the earth shook: a few days ago Amazon acquired organic-food chain Whole Foods for $13.7 billion, Amazon’s largest deal ever. Against its very vision and DNA, the e-commerce giant planted a firm foot in brick-and-mortar, with hundreds of physical stores.

WHY? Simply put, Amazon realized that we don’t want to shop for ANYTHING online. Some products, such as groceries, people still like to smell, hand-pick, try out and buy at the store nearby. That’s where Amazon loses ground to Walmart et al. So Amazon adapted its vision and made a serious investment to get in the game and augment its leading e-commerce play (going for M&A after some failed home-grown trials such as AmazonFresh).

Coincidentally(?), a similar earthquake happened in Amazon’s cloud around the same time. Amazon Web Services (AWS) has been the pioneer of public cloud and a strong advocate of the vision that everything shall run over the web in the public cloud (hence the name “web services”). Even hybrid cloud (private + public), which Microsoft, IBM and other public cloud vendors adopted well, Amazon had a hard time accepting, to the point that it partnered with its rival VMware to complement that piece externally.

But then, a couple of weeks ago, Amazon released Greengrass. Don’t let the innocent-sounding name mislead you – it is nothing short of a revolution for Amazon. Greengrass enables users for the first time to run their favorite AWS services LOCALLY, executing serverless logic and inter-device communication without necessarily connecting to the AWS cloud.

WHY? Simply put, the Internet of Things (IoT). At the recent AWS Summit I heard Amazonians for the first time admitting out loud that some use cases, especially those derived from IoT, disallow connecting to a central remote cloud data center. In his blog post, AWS CTO Werner Vogels himself outlines the categories (he calls them “laws”) of these use cases:

  1. Law of Physics. Customers want to build applications that make the most interactive and critical decisions locally, such as safety-critical control. This is determined by basic laws of physics: it takes time to send data to the cloud, and networks don’t have 100% availability. Customers in physically remote environments, such as mining and agriculture, are more affected by these issues.
  2. Law of Economics. In many industries, data production has grown more quickly than bandwidth, and much of this data is low value. Local aggregation and filtering of data allows customers to send only high-value data to the cloud for storage and analysis.
  3. Law of the Land. In some industries, customers have regulatory or compliance requirements to isolate or duplicate data in particular locations. Some governments impose data sovereignty restrictions on where data may be stored and processed.

In fact it’s bigger than merely IoT. Amazon tried launching its IoT API service with direct connectivity to the cloud, and it didn’t catch on for many types of IoT use cases. The missing ingredient was Edge Computing. As I wrote before, IoT, Big Data and Machine Learning Push Cloud Computing To The Edge, and that’s what Amazon realized. At the AWS Summit I saw this lesson simply put: AWS IoT going to the edge.
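To make that concrete, here is a minimal sketch of the pattern (my own example, not Amazon’s reference code; the topic name and threshold are made up): an AWS Lambda function deployed to a Greengrass core runs locally, filters raw sensor readings on-site, and publishes only the high-value ones toward the cloud – the “Law of Economics” above in action:

```python
import json

import greengrasssdk  # Greengrass Core SDK, available on the core device

# Local iot-data client: publish calls are handled by the Greengrass core,
# which syncs with the AWS cloud when connectivity allows.
iot = greengrasssdk.client("iot-data")

TEMP_THRESHOLD_C = 80.0  # hypothetical alert threshold

def handler(event, context):
    """Invoked locally for each device reading, even with no cloud link."""
    reading = float(event.get("temperature_c", 0.0))
    if reading > TEMP_THRESHOLD_C:
        # Only high-value data leaves the site.
        iot.publish(
            topic="factory/alerts",  # example topic
            payload=json.dumps({"temperature_c": reading, "alert": True}),
        )
```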


The two groundbreaking Amazon stories this month come down to the same essential truth: Amazon started out from digital, internet-driven services, which matched many common use cases. But now it realizes the power of the edge, whether a physical store available down the street from my home, or edge computing executing in my smart home or connected factory (or perhaps at the Whole Foods store down the street?). That’s me – living on the edge – and apparently I’m not alone.

Follow Horovits on Twitter!


Filed under Uncategorized

One Open Source To Orchestrate Them All

First the change happened in Information Technology (IT): moving from hardware to software; virtualization inspired by cloud computing; data centers becoming configurable and programmable as software using the DevOps approach; traditional vendor-locked solutions superseded by new-world open source initiatives such as OpenStack, the Open Compute Project and the Cloud Native Computing Foundation.

Then Communications Technology (CT) followed the lead, making its move into the new world with notions such as software defined networking (SDN), network functions virtualization (NFV) and central office re-architected as a data center (CORD). Inevitably open source took a lead role here as well, with a multitude of projects popping up, led by different industry forces.

In fact, too many projects, which left the telecom industry perplexed and unable to converge on one de-facto standard. Have you ever tried conducting an orchestra where each player requires a different sign language from the maestro?

But then came the twist in the plot, when the Chinese and the Americans decided to join forces: ECOMP (Enhanced Control, Orchestration, Management and Policy), open sourced by AT&T, and the Open-O (Open Orchestrator) project, led primarily by China Mobile, China Telecom and Huawei, merged under the Linux Foundation’s umbrella to create the Open Network Automation Platform (ONAP).

What shape will the merged project take? That is yet to be decided by the community. The topic was much discussed in February at the announcement at Mobile World Congress, and even more so during the Open Networking Summit this month, but there are still more questions than answers for ONAP, around modeling, protocols, descriptors, architecture and more.

The most important question, however, is whether the new merged mega-project will carry the critical mass required to gravitate the industry towards it and become the converging force, the de-facto standard. Seeing the forces behind ECOMP, OPEN-O and now ONAP, including Intel, IBM, Cisco, Nokia and others, it looks promising. And the Linux Foundation is a proven vehicle for widely adopted open source projects. If it succeeds, this may very well be the turning point, taking the NFV & SDN wagon out of the mud and onto the fast track to production.

*Disclaimer: The writer has been working on the orchestration initiatives of ONAP members Amdocs and GigaSpaces.

Follow Horovits on Twitter!


Filed under Cloud, DevOps, NFV, SDN, Telecommunications