Toyota Launches Automotive Edge Computing Consortium To Address Big Data From Connected and Self Driving Cars

The age of smart connected cars and autonomous vehicles brings with it a new challenge to the automotive industry: Big Data. Japanese auto manufacturer Toyota estimates that the data volume between vehicles and the cloud will reach 10 exabytes (roughly 10.7 billion gigabytes) per month by 2025, approximately 10,000 times the present volume. This sort of big data challenge calls for Edge Computing.

This challenge led Toyota to team up with Japanese auto parts maker Denso Corp, Japanese telecom carrier NTT, Intel and Ericsson to form the Automotive Edge Computing Consortium, announced a few days ago. This consortium will

develop an ecosystem for connected cars to support emerging services such as intelligent driving and transport, the creation of maps with real-time data, as well as driving assistance based on cloud computing.

The consortium will use Edge Computing and network design to accommodate automotive big data in a reasonable fashion between vehicles and the cloud.

Last March Toyota showed off its first autonomous test vehicle developed entirely by Toyota Research Institute, following Google, Tesla, Uber and others in the race to disrupt transportation. Even consortium member Intel announced last week that it is starting to build a fleet of fully autonomous (SAE Level 4) test cars, building on its acquisition of Mobileye earlier this year.

Toyota states that its exploration of autonomous vehicles dates back as far as 2005. Now, with an edge computing architecture, it can also face the associated big data challenge.

Filed under Autonomous Car, Big Data, Cloud, Edge Computing, Internet of Things, IoT

The Internet of Things Drives Amazon To Edge Computing

Amazon.com is the bearer of the e-commerce vision: buy anything online. Or, as Amazon’s mission statement goes: a place where people can come to find and discover anything they might want to buy online.

But then the earth shook: a few days ago Amazon acquired the organic-food chain Whole Foods for $13.7 billion, Amazon’s largest deal ever. Against its very vision and DNA, the e-commerce giant planted a firm foot in brick-and-mortar, with hundreds of physical stores.

WHY? Simply put, Amazon realized that we don’t want to shop for EVERYTHING online. Some products, such as groceries, people still like to smell, hand-pick, try out and buy at the store nearby. That’s where Amazon loses ground to Walmart et al. So Amazon adapted its vision and made a serious investment to get into the game and augment its leading e-commerce play (going for M&A after some failed home-grown trials such as AmazonFresh).

Coincidentally(?), a similar earthquake happened in Amazon’s cloud around the same time. Amazon Web Services (AWS) has been the pioneer of the public cloud and a strong advocate of the vision that everything shall run over the web in the public cloud (hence the name “web services”). Even hybrid cloud (private + public), which Microsoft, IBM and other public cloud vendors adopted readily, Amazon had a hard time accepting, to the point that it partnered with its rival VMware to complement that piece externally.

But then, a couple of weeks ago, Amazon released Greengrass. Don’t let the innocent-sounding name mislead you – it is nothing short of a revolution for Amazon. Greengrass enables users, for the first time, to run their favorite AWS services LOCALLY, executing serverless logic and inter-device communication without necessarily connecting to the AWS cloud.
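Before getting to the why, here is a minimal sketch of what that looks like in practice: a Lambda-style function deployed to a Greengrass core that reacts to a local sensor message and publishes its verdict over the local broker, never leaving the premises. The topic names and payload fields are hypothetical, and the greengrasssdk calls are shown as I understand the SDK, so treat this as an illustration rather than a reference.

```python
# Minimal sketch of a Lambda function running locally on an AWS Greengrass core.
# Topic names and payload fields below are hypothetical examples.
import json

import greengrasssdk

# The Greengrass SDK routes this client to the local message broker,
# not to the remote AWS cloud.
client = greengrasssdk.client("iot-data")


def function_handler(event, context):
    """Invoked locally when a message arrives on the subscribed topic."""
    temperature = event.get("temperature")

    # Make the latency-critical decision on the device itself.
    if temperature is not None and temperature > 75:
        client.publish(
            topic="home/alerts",  # hypothetical local topic
            payload=json.dumps({"alert": "overheating", "temperature": temperature}),
        )
    return {"processed": True}
```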

WHY? Simply put, the Internet of Things (IoT). At the recent AWS Summit I heard Amazonians, for the first time, admitting out loud that some use cases, especially those derived from IoT, disallow connecting to a central remote cloud data center. In his blog post, AWS CTO Werner Vogels himself outlines the categories (he calls them “laws”) of these use cases:

  1. Law of Physics. Customers want to build applications that make the most interactive and critical decisions locally, such as safety-critical control. This is determined by basic laws of physics: it takes time to send data to the cloud, and networks don’t have 100% availability. Customers in physically remote environments, such as mining and agriculture, are more affected by these issues.
  2. Law of Economics. In many industries, data production has grown more quickly than bandwidth, and much of this data is low value. Local aggregation and filtering of data allows customers to send only high-value data to the cloud for storage and analysis (see the sketch right after this list).
  3. Law of the Land. In some industries, customers have regulatory or compliance requirements to isolate or duplicate data in particular locations. Some governments impose data sovereignty restrictions on where data may be stored and processed.
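The Law of Economics in particular maps directly to code. Here is a minimal, self-contained sketch of the aggregate-and-filter pattern it describes; the threshold value and the upload_to_cloud() stub are hypothetical placeholders, not any particular AWS API.

```python
# Sketch of edge-side aggregation and filtering ("Law of Economics").
# The 40.0 threshold and upload_to_cloud() are hypothetical placeholders.
from statistics import mean
from typing import Iterable, List


def summarize_and_filter(readings: Iterable[float], threshold: float = 40.0) -> dict:
    """Aggregate raw readings locally and keep only the high-value outliers."""
    values: List[float] = list(readings)
    return {
        "count": len(values),
        "mean": mean(values) if values else None,
        # Only the anomalous readings are worth the bandwidth.
        "outliers": [v for v in values if v > threshold],
    }


def upload_to_cloud(summary: dict) -> None:
    """Placeholder for the actual cloud upload (e.g. an HTTPS or MQTT call)."""
    print("sending to cloud:", summary)


if __name__ == "__main__":
    raw = [21.5, 22.0, 21.8, 47.3, 22.1]  # one anomalous reading
    upload_to_cloud(summarize_and_filter(raw))
```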

In fact, it’s bigger than merely IoT. Amazon tried launching its IoT API service with direct connectivity to the cloud, and it didn’t catch on for many types of IoT use cases. The missing ingredient was Edge Computing. As I wrote before, IoT, Big Data and Machine Learning Push Cloud Computing To The Edge, and that’s what Amazon realized. At the AWS Summit I saw this lesson put simply on a slide: AWS IoT Going to the edge.

The two groundbreaking Amazon stories this month come down to the same essential truth: Amazon started out with digital, internet-driven services, which matched many common use cases. But now it realizes the power of the edge, whether it’s a physical store down the street from my home or edge computing executing in my smart home or connected factory (or perhaps at the Whole Foods store down the street?). That’s me – living on the edge – and apparently I’m not alone.

Filed under Uncategorized

One Open Source To Orchestrate Them All

First the change happened in Information Technology (IT): moving from hardware to software; virtualization inspired by cloud computing; data centers becoming configurable and programmable as software using the DevOps approach; traditional vendor-locked solutions superseded by new-world open source initiatives such as OpenStack, the Open Compute Project and the Cloud Native Computing Foundation.

Then Communications Technology (CT) followed the lead, making its move into the new world with notions such as software defined networking (SDN), network functions virtualization (NFV) and central office re-architected as a data center (CORD). Inevitably open source took a lead role here as well, with a multitude of projects popping up, led by different industry forces.

In fact, too many projects, which left the telecom industry perplexed and unable to converge on one de-facto standard. Have you ever tried to orchestrate when each player requires a different sign language from the maestro?

But then came the twist in the plot, when the Chinese and the Americans decided to join forces: ECOMP (Enhanced Control, Orchestration, Management and Policy), open sourced by AT&T, and the Open-O (Open Orchestrator) project, led primarily by China Mobile, China Telecom and Huawei, merged under the Linux Foundation’s umbrella to create the Open Network Automation Platform (ONAP).

What shape will the merged project take? That is yet to be decided by the community. The topic was much discussed in February at the announcement at Mobile World Congress, and even more so during the Open Networking Summit this month, but there are still more questions than answers for ONAP, around modeling, protocols, descriptors, architecture…

The most important question, however, is whether the new merged mega-project will carry the critical mass required to gravitate the industry towards it, to become the converging force, the de-facto standard. Seeing the forces behind ECOMP, Open-O and now ONAP, including Intel, IBM, Cisco, Nokia and others, it looks promising. And the Linux Foundation is a proven vehicle for widely adopted open source projects. If it succeeds, this may very well be the turning point, taking the NFV & SDN wagon out of the mud and onto the fast track to production.

*Disclaimer: The writer has been working on the orchestration initiatives of ONAP members Amdocs and GigaSpaces.

Filed under Cloud, DevOps, NFV, SDN, Telecommunications

Amazon Cloud Outage Hits Dozens Of Sites, But Not Amazon

The outage in Amazon Web Services’ (AWS) popular storage service S3 a couple of days ago was severe. Over 50 businesses that entrusted their websites, photos, videos and documents to S3 buckets found themselves unreachable for around four hours. Among them were high-profile names such as Disney, Target and Nike. And it’s not the first such outage either. This time, again, the outage took place in Amazon’s veteran Northern Virginia (US-EAST-1) region.

Amazon’s own websites, however, were not affected by the outage. According to Business Insider the reason is that

They have designed their sites to spread themselves across multiple Amazon geographic zones, so if a problem crops up in one zone, it doesn’t hurt them.

Put simply: Amazon designed its websites the right way – with high availability and a disaster recovery plan (DRP) in mind.
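To make that concrete, here is a minimal sketch of one such pattern, assuming your objects are already replicated to a bucket in a second region. The bucket names and regions are hypothetical, and this illustrates the failover-read idea rather than a full disaster recovery plan.

```python
# Sketch of a multi-region read with failover, assuming cross-region replication
# is already configured. Bucket names and regions are hypothetical examples.
import boto3
from botocore.exceptions import BotoCoreError, ClientError

REGIONS_AND_BUCKETS = [
    ("us-east-1", "my-site-assets"),          # primary
    ("us-west-2", "my-site-assets-replica"),  # replica in a second region
]


def fetch_object(key: str) -> bytes:
    """Try the primary region first, then fall back to the replica."""
    last_error = None
    for region, bucket in REGIONS_AND_BUCKETS:
        try:
            s3 = boto3.client("s3", region_name=region)
            response = s3.get_object(Bucket=bucket, Key=key)
            return response["Body"].read()
        except (ClientError, BotoCoreError) as err:
            last_error = err  # region unreachable or object missing; try the next one
    raise RuntimeError(f"all regions failed for {key}") from last_error
```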

If you want your website to sustain such outages – follow Amazon’s example! Here’s a piece of advice I wrote a few years ago after another major AWS outage:

AWS Outage – Thoughts on Disaster Recovery Policies

For more best practices on resilient cloud-based architecture check this out:

Retrospect on recent AWS Outage and Resilient Cloud-Based Architecture

And if policies, regulations or your own paranoia level prohibit putting all your eggs in Amazon’s bucket, then you may be interested in this:

AWS Outage: Moving from Multi-Availability-Zone to Multi-Cloud

So Keep Calm – there is Disaster Recovery!

Filed under Cloud, Disaster-Recovery

IoT, Big Data and Machine Learning Push Cloud Computing To The Edge

“The End Of Cloud Computing” – that’s the dramatic title of a talk given by Peter Levine at the a16z Summit last month. Levine, a partner at the Andreessen Horowitz (a16z) VC fund, exercised his investor foresight and tried to imagine the world beyond cloud computing. The result was an insightful and fluent talk, stating that centralized cloud computing as we know it is about to be superseded by a distributed cloud inherent in a multitude of edge devices. Levine highlights the rising forces driving this change:

The Internet of Things (IoT). Though the notion of IoT has been around for a few decades, it seems it’s really happening now, and our world will soon be inhabited by a multitude of smart cars, smart homes and smart everything, each with embedded compute, storage and networking. Levine gives a great example of a computer card found in today’s luxury cars, containing around 100 CPUs. Having several such cards in a car would make it a mini data center on wheels, and having thousands of such cars on the roads makes for a massive distributed data center.

Big Data Analytics. The growing number of connected devices and sensors around us, constantly collecting real-world input, generates massive amounts of data of different types, from temperature and pressure to images and videos. That unstructured and highly variable data stream needs to be processed and analyzed in real time by the little brains of the smart devices in order to extract insights and make decisions. Just imagine your smart car approaching a stop sign: it needs to process the image input, recognize the sign and make the decision to stop, all in a matter of a second or less. Would you send that over to the remote cloud for the answer? (A back-of-the-envelope sketch after these three forces spells out the time budget.)

Machine Learning. While traditional computer algorithms are well suited to well-defined problem spaces, real-world data is complex, diverse and unstructured. Levine believes that endpoints will need to execute Machine Learning algorithms to decipher this data effectively and derive intelligent insights and decisions for the countless permutations of situations that can occur in the real world.
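Here is the back-of-the-envelope sketch promised above for the stop-sign decision. Every number in it is an illustrative assumption, not a measurement, but it captures why the decision has to stay on the device once the network’s worst case is factored in.

```python
# Back-of-the-envelope latency budget for the stop-sign example.
# All numbers are illustrative assumptions, not measurements.

DEADLINE_MS = 500             # the car must react within roughly half a second
LOCAL_INFERENCE_MS = 50       # assumed on-board image-recognition time
CLOUD_TAIL_LATENCY_MS = 2000  # assumed cloud round trip under congestion or retries


def worst_case_latency_ms(use_cloud: bool) -> int:
    """Worst-case time to a stop/no-stop decision for each option."""
    # The cloud path inherits the network's worst case, and it fails outright
    # if connectivity drops on a remote road; the local path does not.
    return CLOUD_TAIL_LATENCY_MS if use_cloud else LOCAL_INFERENCE_MS


for use_cloud in (False, True):
    where = "cloud" if use_cloud else "local"
    verdict = "meets" if worst_case_latency_ms(use_cloud) <= DEADLINE_MS else "misses"
    print(f"{where}: {verdict} the {DEADLINE_MS} ms deadline")
```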

So should Amazon, Microsoft and Google start worrying? Not really. The central cloud services will still be there, but with a different focus. Levine sees the central cloud’s role as curating data from the edge, performing central, non-real-time learning that can then be pushed back to the edge, and providing long-term storage and archiving of the data. In this new incarnation, the entire world becomes the domain of IT.

You can watch the recording of Levine’s full talk here.

Filed under Big Data, Cloud, IoT

Cisco Is Shutting Down Its Public Cloud, Exploring Hybrid IT Strategy

Cisco just confirmed it will shut down its Intercloud by March 2017. Intercloud was supposed to be Cisco’s move into the public cloud, addressing both businesses and service providers. But Cisco learned the painful lesson of the cloud, the same lesson learned by HPE, which shut down its Helion public cloud a year ago, and Verizon, which shut down its cloud earlier this year. In its statement, Cisco explained:

Cisco has evolved its cloud strategy from federating clouds to helping customers build and manage hybrid IT environments.

It appears Cisco realized that hybrid cloud may be its answer to Amazon, Microsoft and Google. It may have learned from Rackspace, which abandoned its cloud product and turned to partnering with Amazon, a strategy shift that paid off big time a few months ago. VMware is another data center giant that realized it would be better off partnering with Amazon for cloud services, announcing a partnership a couple of months ago.

With open source taking over networking, Cisco foresees rough times on the traditional networking side as well. The big players go as far as building their own data centers from the ground up out of commodity hardware, skipping Cisco’s expensive purpose-built high-end boxes.

While many abandon their public cloud aspirations, it’s interesting to see that last month Oracle launched bare metal cloud services, which is in fact just the first step of its new cloud strategy announced back in September. Will it succeed where the others have failed?

Filed under Cloud, OpenStack

Oracle Launches Bare Metal Cloud Services, Challenges Amazon AWS

Oracle made some big announcements back in September at its OpenWorld conference about its plan to add Infrastructure as a Service (IaaS) to its Oracle Cloud Solutions. Now the first piece of that IaaS has been announced: Oracle Bare Metal Cloud Services. Oracle’s service offers integrated network block storage, object storage, identity and access management, VPN connectivity, and a software-defined Virtual Cloud Network (VCN) – its implementation of Software Defined Networking (SDN). The new service launches in Oracle’s new Phoenix region (Arizona, USA), with the promise of expanding to additional regions. The Phoenix region has three Availability Domains (similar to Availability Zones in Amazon Web Services).

Oracle has been exploring the cloud for a while and has made several startup acquisitions in that direction. With this move Oracle is jumping head-on into the ruthless cloud IaaS wars. In fact, it seems Oracle lured some cloud experts away from Amazon, Microsoft and Google to build its new IaaS.

One amazing thing Oracle did with its IaaS is that it designed its entire data center, down to the hardware stack, on its own. Oracle learned well the lesson from Amazon, Google et al. Thanks to that design, it claims to offer competitive pricing that will challenge the legendary AWS pricing.

For more information on Oracle Bare Metal Cloud Services see here.

Filed under Cloud, Uncategorized