Category Archives: Uncategorized

Can Hybrid Cloud Present An Alternative To Amazon, Microsoft, Google?

It's not easy to be a public cloud vendor these days. The public cloud world has undergone serious consolidation in the past few years. Amazon, the pioneer of the cloud, has kept a clear lead, while Microsoft and Google have been closing the gap, leveraging their accumulated experience, global data centers and software platforms to position themselves as next in line. Together this trio serves the vast majority of the workloads running on the public cloud.

This consolidation drove out many vendors, including big incumbent names such as HP, which shut down its public cloud late last year, and Verizon, which did the same a couple of months ago.


So what's the answer for the vendors left behind? I'd say it's threefold:

  1. Multi-cloud model: if you can't beat them, join them. Support the Amazon, Microsoft and Google public clouds. If done via a good generic platform, this can also help avoid vendor lock-in.
  2. Hybrid model: combine public cloud support with support for private cloud and bare metal to offer a public-private-hosted hybrid approach.
  3. Private model: concentrate strictly on private cloud. The popular open-source project OpenStack is a leading candidate for this strategy, which suits customers that insist on running things on their own premises.

HP (now HPE), after shutting down its public cloud, moved to a hybrid cloud strategy through a series of acquisitions and by endorsing the OpenStack open-source private cloud project. Verizon went for the private cloud approach.

An interesting case is Rackspace, which eased off its own cloud and instead offers managed services and third-party support for the public clouds of Amazon and Microsoft, leveraging its Fanatical Support brand. In parallel, Rackspace keeps its longstanding support for private cloud deployments based on OpenStack, the popular open-source platform it co-founded.


Rackspace's strategy seems to be paying off: quarterly results published this week show revenue of $518 million, up 7.9% from the year-ago quarter. Executives noted the success was buoyed in particular by a growing number of Fanatical Support customers for its Microsoft Azure and Amazon Web Services (AWS) offerings, as well as customers on its OpenStack private cloud.

Hybrid cloud strategies are gaining traction with enterprises. While Amazon, Microsoft and Google try to convince enterprises to go all-in on the public cloud, that is too big a change for most to swallow. Even Microsoft recognized that hurdle and is trying to bring its Azure cloud into the enterprise's data center. Hybrid cloud clearly has demand, and may well become the focus of those who failed to take the lead in the public cloud.

Follow Horovits on Twitter!


Filed under Cloud, Uncategorized

The Mysterious Creator Of Bitcoin and Blockchain Comes Out Of The Shadows

The birth of the virtual currency Bitcoin was accompanied by great mystery: its creator chose to remain in the shadows, known only by his (or her?) pseudonym, Satoshi Nakamoto. Over the years rumors raised various potential "suspects", but nothing was proven and no one confessed. Until now.

Today Dr. Craig Wright, a 45-year-old computer scientist from Australia, announced that he is Satoshi Nakamoto. In his post, Wright says:

If I sign Craig Wright, it is not the same as if I sign Craig Wright, Satoshi.

Wright then goes on to thank those who supported the project, one that started with a monumental paper by the mysterious Satoshi Nakamoto, followed by the release of an even more monumental implementation of Blockchain, the revolutionary open distributed ledger technology underlying Bitcoin. In fact, the impact of Blockchain currently seems to surpass that of Bitcoin, with blockchain-based innovation brewing in startups and financial institutions alike. The interest is so great (and the skill set so rare) that IBM and Microsoft have launched Blockchain-as-a-Service offerings on their clouds to help companies innovate with Blockchain.


Wright dedicated most of his post to convincing the community of the authenticity of his claim. He provides signed evidence, supposedly produced with a private key associated with Satoshi Nakamoto (the key used in block 9), and elaborates on the process of verifying cryptographic signatures, effectively inviting readers to verify his evidence themselves.
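For readers curious what such verification looks like in practice, here is a minimal sketch in Python using the ecdsa package, on Bitcoin's secp256k1 curve. The key pair and message below are throwaway placeholders, not Wright's actual evidence; the point is only that anyone holding the public key can check the signature.

```python
# A toy illustration of signature verification, NOT Wright's actual evidence:
# the key pair below is freshly generated for this example.
import hashlib
from ecdsa import SigningKey, SECP256k1

# Bitcoin uses the secp256k1 curve.
private_key = SigningKey.generate(curve=SECP256k1)
public_key = private_key.get_verifying_key()

message = b"Craig Wright, Satoshi"

# Sign the double-SHA256 digest of the message, Bitcoin's conventional hashing.
digest = hashlib.sha256(hashlib.sha256(message).digest()).digest()
signature = private_key.sign_digest(digest)

# Verification needs only the public key; it raises BadSignatureError on mismatch.
assert public_key.verify_digest(signature, digest)
print("signature matches the public key")
```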

Wright knew such an announcement would not go by without a storm, so he summoned in advance three high-profile media outlets, The Economist, the BBC and GQ Magazine, to present his claim and evidence exclusively, so they could accompany his post with their own coverage. The outlets jumped on the scoop and dug deep into his claims. You can read their full coverage here:

The Economist: Craig Steven Wright claims to be Satoshi Nakamoto. Is he?

The BBC: Australian Craig Wright claims to be Bitcoin creator

GQ Magazine: Dr Craig Wright Outs Himself As Bitcoin Creator Satoshi Nakamoto

Is Dr. Wright the real Satoshi Nakamoto? The community will be debating that in the weeks to come, together with further evidence to be released by Wright. More importantly, Wright hints at new work he has done in this field with "an exceptional group". It may very well be that the current announcement just sets the stage for bigger announcements or releases yet to come.

Follow Horovits on Twitter!


Filed under Blockchain, Uncategorized

IBM, Microsoft Offer Blockchain In Their Cloud Services

Recently blockchain fans got major news, with two giants, IBM and Microsoft, announcing Blockchain-as-a-Service (BaaS) offerings in their clouds. Are we going to see cloud-based blockchain development taking off soon? Sounds like it.

Blockchain emerged from the Bitcoin cryptocurrency hype as the innovative distributed ledger technology behind Bitcoin. But while cryptocurrencies are well past Gartner's peak of inflated expectations, blockchain is gaining growing interest from startups and enterprises alike. That interest isn't limited to cryptocurrencies; it extends to other financial use cases, and even transcends the FinTech realm into non-financial use cases such as electronic voting, smart contracts, and ownership verification for art and diamonds.


The interest in blockchain drove the creation of different "flavors" of the distributed ledger notion, beyond the original one used for Bitcoin. One interesting initiative recently launched is the Hyperledger project, a community-backed open-source standard for distributed ledgers. It was launched in December 2015 under the Linux Foundation by big financial services names such as J.P. Morgan, Wells Fargo, London Stock Exchange Group and Deutsche Börse, as well as equally big IT players such as IBM, Intel, Cisco and VMware. As part of joining Hyperledger, IBM open-sourced a significant chunk of the blockchain code it had been working on.


IBM launched its Blockchain-as-a-Service into production in February. To encourage adoption of the new cloud service, IBM is also opening garages for blockchain app design and implementation in London, New York, Singapore and Tokyo.

Microsoft was the first to move in on blockchain. Last November Microsoft launched a Blockchain-as-a-Service on its Azure cloud based on Ethereum, in partnership with ConsenSys. But while IBM bet on the Hyperledger project, Microsoft took a different approach and spread its bets across multiple projects and partnerships. Over the last month Microsoft added Augur, Lisk, BitShares, Syscoin and Slock.it to its blockchain partnerships, and this month it added Storj as well.

I estimate IBM and Microsoft will not remain alone in this game. Other vendors will join in to offer platforms and cloud services that accelerate the development of blockchain-based applications. This will be a serious enabler of innovation around this fascinating technology, whether for young startups bootstrapping on a low budget or for financial institutions (and other enterprises) lacking in-house skills in this cutting-edge field.

Follow Horovits on Twitter!



Filed under Blockchain, Cloud, Uncategorized

Live Video Streaming At Facebook Scale

Operating at Facebook scale is far from trivial. With 1.49 billion monthly active users (and growing 13 percent yearly), every 60 seconds on Facebook 510,000 comments are posted, 293,000 statuses are updated, and 136,000 photos are uploaded. Therein lies the challenge of serving the masses efficiently and reliably, without outages.

For serving offline content, whether text (updates, comments, etc.), photos or videos, Facebook developed a sophisticated architecture that includes state-of-the-art data center technology and a search engine to traverse and fetch content quickly and efficiently.

But now comes a new type of challenge: a few months ago Facebook rolled out a new live-streaming service called Live for Facebook Mentions, which allows celebrities to broadcast live video to their followers. The service is quite similar to Twitter's Periscope (acquired by Twitter at the beginning of this year) and the popular Meerkat app, both of which offer live video streaming to everyone, not just celebrities. In fact, Facebook announced this month that it is piloting a new service that will offer live streaming to the general public as well.


While offline photos and videos are uploaded in full and then distributed and made accessible to followers and friends, serving live video streams is much more challenging to implement at scale. To make things even harder, the viral nature of social media (and of celebrity content in particular) often creates spikes in which thousands of followers demand the same popular content at the same time, a phenomenon the Facebook team calls the "thundering herd" problem.

An interesting post by Facebook engineering shares these challenges and the approaches taken. Facebook's system uses a Content Delivery Network (CDN) architecture with two layers of caching, the edge cache sitting closest to the users and serving 98 percent of the content. This design reduces the load on the backend server that processes the incoming live feed from the broadcaster. Another useful optimization for further reducing backend load is request coalescing: when many followers (for celebrities it could reach millions!) ask for content that is missing from the cache (a cache miss), only one request proceeds to the backend to fetch the content on behalf of all, avoiding a flood.
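To make the coalescing idea concrete, here is a minimal sketch in Python of an edge cache that collapses concurrent misses for the same key into a single backend fetch. The names (EdgeCache, fetch_from_origin) are illustrative, not Facebook's actual implementation.

```python
# A minimal sketch of request coalescing, assuming a threaded edge-cache server.
import threading

class EdgeCache:
    def __init__(self, fetch_from_origin):
        self.fetch_from_origin = fetch_from_origin  # call that hits the backend
        self.cache = {}       # key -> cached content
        self.inflight = {}    # key -> Event signaling a fetch already in progress
        self.lock = threading.Lock()

    def get(self, key):
        with self.lock:
            if key in self.cache:                  # cache hit: serve from edge
                return self.cache[key]
            event = self.inflight.get(key)
            is_owner = event is None
            if is_owner:                           # first miss: own the fetch
                event = self.inflight[key] = threading.Event()
        if is_owner:
            value = self.fetch_from_origin(key)    # only one request hits origin
            with self.lock:
                self.cache[key] = value
                del self.inflight[key]
            event.set()                            # release the waiting "herd"
            return value
        event.wait()                               # everyone else just waits
        with self.lock:
            return self.cache[key]
```

If a thousand threads call get for the same missing key at once, the origin is hit exactly once, while the other 999 callers are served from the freshly filled cache.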


It's interesting to note that the celebrity service and the newer public service involve different considerations and trade-offs between throughput and latency, which led Facebook's engineering team to adapt the architecture to the new service:

Where building Live for Facebook Mentions was an exercise in making sure the system didn’t get overloaded, building Live for people was an exercise in reducing latency.

The content itself is broken down into tiny segments of multiplexed audio and video for more efficient distribution and lower latency. The new Live service (for the general public) even called for changing the underlying streaming protocol to achieve still better latency, reducing the lag between broadcaster and viewer by 5x.
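As a rough illustration of the segmentation idea (and only that: real systems multiplex audio and video into container formats such as fragmented MP4 or MPEG-TS), here is a toy Python generator that groups incoming frames into small segments that can be cached and shipped independently:

```python
# Toy sketch: group a live sequence of encoded frames into small segments.
# Players fetch such segments one by one; smaller segments mean lower lag.
def segment_stream(frames, frames_per_segment=30):
    segment = []
    for frame in frames:               # frames arrive continuously from encoder
        segment.append(frame)
        if len(segment) == frames_per_segment:
            yield b"".join(segment)    # ship a ~1-second segment to the CDN edge
            segment = []
    if segment:                        # flush the tail when the broadcast ends
        yield b"".join(segment)
```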

This is a fascinating exercise in scalable architecture for live streaming, one that is said to scale effectively to millions of broadcasters. Such open discussions can pave the way for smaller companies in social media, the Internet of Things (IoT) and the ever-more-connected world. You can read the full post here.

Follow Horovits on Twitter!


Filed under Solution Architecture, Uncategorized

How IBM is using big data to fix Beijing’s pollution crisis

A fascinating way to leverage big data to help the world

Reblogged from Quartz:

Of China's major cities, Beijing's pollution problem is probably the worst, causing thousands of premature deaths every year. Its residents are fed up. The growing outrage has forced leaders to declare a "war on pollution," including the goal of slashing Beijing's PM2.5 (the concentration of the particles that pose the greatest risk to human health) by 25% by 2017. The Beijing municipal government will earmark nearly 1 trillion yuan ($160 billion) to meet that target.


Why, then, are the city’s own government officials skeptical about hitting that 2017 goal? Perhaps because Beijing’s pollution woes are unusually complicated. The city is flanked on three sides by smog-trapping mountain ranges. There are numerous sources of foul air, and a multitude of subtle ways the chemicals interact with each other, which make it hard to identify what problems need fixing.

IBM thinks it can change that outlook. On Monday, the company will unveil a 10-year initiative launched in partnership with the Beijing Municipal Government…

Read the original post in full on Quartz.


Filed under Uncategorized

Facebook outage reported now worldwide

Facebook is down. Trying to access it shows a page with the laconic message that "something went wrong".


According to downdetector.com, the outage started at 3:55 a.m. EDT.


Reports are flooding the net. The outage seems to be worldwide.


So far no explanation from Facebook.

Stay tuned.



Filed under Uncategorized

AWS Outage: Moving from Multi-Availability-Zone to Multi-Cloud

A couple of days ago Amazon Web Services (AWS) suffered a significant outage in its US-EAST-1 region, the fifth major outage in that region in the past 18 months. The outage affected leading services such as Reddit, Netflix, Foursquare and Heroku.

How should you architect your cloud-hosted system to sustain such outages? Much has been written on this question during this outage, as well as during past ones. Many recommend basing your architecture on multiple AWS Availability Zones (AZs) to spread the risk. But during this outage we saw even multi-Availability-Zone applications severely affected. Amazon itself published during the outage that:

Customers can launch replacement instances in the unaffected availability zones but may experience elevated launch latencies or receive ResourceLimitExceeded errors on their API calls, which are being issued to manage load on the system during recovery.

The reason is that some underlying piece of infrastructure escalates traffic from the affected AZ to the other AZs in a way that overwhelms the system. In this outage it was the AWS API platform that was rendered unavailable, as nicely explained in this great post:

The waterfall effect seems to happen, where the AWS API stack gets overwhelmed to the point of being useless for any management task in the region.

But it doesn't really matter to us as users exactly which piece of infrastructure failed in this specific outage. Eighteen months ago, during the first major outage, the culprit was another infrastructure component, the Elastic Block Store (EBS) volumes, which cascaded the problem. Back then I wrote a post on how to architect your system to sustain such outages, and one of my recommendations was:

Spread across several public cloud vendors and/or private cloud

The rule of thumb in IT is that there will always be extreme and rare situations (and don't forget, Amazon commits to only a 99.995% SLA) causing such major outages. And there will always be some common infrastructure that, under such an extreme and rare situation, will carry the ripple effect of the outage over to other Availability Zones in the region.

Of course, you can mitigate risk by spreading your system across several AWS regions (e.g. between US-EAST and US-WEST), as they are much more loosely coupled. But as I stated in my previous post, that loose coupling comes at a price: it is up to your application to replicate data, using a separate set of APIs for each region. As Amazon itself states: "it requires effort on the part of application builders to take advantage of this isolation".
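To illustrate what that application-level effort can look like, here is a minimal sketch using boto3, the AWS SDK for Python; the bucket names are hypothetical placeholders. Each region gets its own client and its own copy of the data:

```python
# A minimal sketch of application-level cross-region replication with boto3.
# Bucket names are hypothetical; each region needs its own client and its own
# copy of the data, which is the extra effort Amazon alludes to.
import boto3

REPLICAS = [
    ("us-east-1", "myapp-data-east"),
    ("us-west-2", "myapp-data-west"),
]

def put_replicated(key, body):
    for region, bucket in REPLICAS:
        s3 = boto3.client("s3", region_name=region)
        s3.put_object(Bucket=bucket, Key=key, Body=body)

put_replicated("orders/1234.json", b'{"status": "paid"}')
```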

The most resilient architecture would therefore mitigate risk by spreading your system across different cloud vendors, providing the best possible isolation. The advantages in terms of resilience are clear. But how can that be implemented, given that the vendors differ so much in their characteristics and APIs?

There are two approaches to deploying across multiple cloud vendors while staying cloud-vendor-agnostic (a code sketch of the idea follows the list):

  1. Open standards and APIs for the cloud, supported by multiple cloud vendors. That way you write your application against a common standard and get immediate support from all conforming cloud vendors. Examples of such emerging standards are OpenStack and JClouds. However, the cloud is still a young domain with many competing standards and APIs, and it is yet to be determined which will become the industry's de-facto standard and where to "place our bet".
  2. Open PaaS platforms that abstract the underlying cloud infrastructure and provide transparent support for all major vendors. You build your application on top of the platform and leave it to the platform to communicate with the underlying cloud vendors (whether public clouds, private clouds, or even a hybrid). Examples of such platforms are CloudFoundry and Cloudify. I dedicated one of my posts to exploring how to build your application using such platforms.
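As a minimal sketch of the vendor-agnostic idea, here is what provisioning through a common abstraction can look like with Apache Libcloud, a Python library in the same spirit as JClouds. Credentials, node names and the choice of image/size below are hypothetical placeholders, not a production recipe:

```python
# A sketch of cloud-vendor-agnostic provisioning with Apache Libcloud
# (pip install apache-libcloud). Credentials and chosen image/size are
# hypothetical placeholders.
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

def boot_node(provider, key, secret, **driver_kwargs):
    """Boot a compute node through the same API, whatever the cloud behind it."""
    driver = get_driver(provider)(key, secret, **driver_kwargs)
    size = driver.list_sizes()[0]      # smallest offering, for illustration
    image = driver.list_images()[0]
    return driver.create_node(name="my-app-node", size=size, image=image)

# Only the Provider constant (and credentials) change between vendors.
aws_node = boot_node(Provider.EC2, "AWS_KEY", "AWS_SECRET", region="us-west-2")
rax_node = boot_node(Provider.RACKSPACE, "RS_USER", "RS_API_KEY")
```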

Conclusion

System architects need to face the reality of the Service Level Agreements provided by Amazon and other cloud vendors, acknowledge their limitations, and start designing for resilience: spreading across isolated environments, deploying DR sites, and taking similar redundancy measures to keep their service up and running and their data safe. Only that way can we make sure we are not the next to fall off the 99.995% SLA.

This post was originally posted here.


Filed under cloud deployment, Disaster-Recovery, IaaS, PaaS, Solution Architecture, Uncategorized