The Internet-based sharing economy — exemplified by services such as ride-sharing, bike-sharing, and (in China) even basketball-sharing — gains ideal support from cloud technologies. After all, the cloud is all about sharing resources. Taking advantage of growth in the cloud resource market, sharing-economy companies have access to easily expandable networks that deliver the high performance needed to manage complex business ventures.
Sharing companies have grown especially quickly in China, where huge amounts of capital have been invested in sharing startups. The growth rate of companies such as Didi Chuxing and Mobike far outpaces that of traditional Internet companies.
Didi Chuxing grew out of Didi Dache, a Chinese taxi-hailing app operator established in 2012. In 2014, the company integrated its service with WeChat, the online messaging system of Chinese Internet giant Tencent, and offered cash discounts to both passengers and taxi drivers over a period of 77 days. During this time, Didi Dache’s user base skyrocketed from 22 million to 100 million, and daily bookings reportedly saw a 15-fold increase, from approximately 350,000 to 5.2 million. In 2015, Didi Dache and Kuaidi Dache merged into what is now Didi Chuxing. It took the company only 3.5 years to increase its daily bookings to over 10 million. In contrast, Taobao, a Chinese online shopping website similar to eBay, took eight years to reach this figure.
Figure: Didi Chuxing sustains a high rate of growth
Sharing Economy Grows with the Cloud
Sharing-economy companies such as Didi Chuxing began as small startups, and as they expanded their business into more regions their networks enlarged. Managing and maintaining these networks was a huge challenge, but cloud technology offered ways to reduce costs while flexibly scaling the necessary business support platforms.
Whether it is vehicle scheduling or passenger-driver pairing, Didi Chuxing has to deal with a complex array of factors, many of them in real time. The company is leveraging innovative technologies, such as machine learning and Big Data, to achieve smarter vehicle scheduling and make more accurate supply and demand forecasts. In this process, the company is constantly optimizing computing models to improve forecast efficiency. Didi Chuxing has aggregated driver supply and passenger demand data onto a cloud data center that implements unified resource matching and scheduling using a Big Data engine in the cloud.
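The matching step above can be sketched as a toy greedy pairing of ride requests with the nearest free driver. All names and coordinates here are illustrative; Didi Chuxing's production engine is far more sophisticated, weighing real-time traffic, demand forecasts, and machine-learned models rather than raw distance alone.

```python
import math

def haversine_km(a, b):
    """Approximate great-circle distance between two (lat, lon) points in km."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * 6371 * math.asin(math.sqrt(h))

def match_rides(requests, drivers):
    """Greedily pair each ride request with the nearest free driver.

    requests: list of (request_id, (lat, lon)) tuples
    drivers:  dict mapping driver_id -> (lat, lon)
    Returns a list of (request_id, driver_id) pairs.
    """
    free = dict(drivers)  # drivers still available for assignment
    pairs = []
    for req_id, origin in requests:
        if not free:
            break
        nearest = min(free, key=lambda d: haversine_km(origin, free[d]))
        pairs.append((req_id, nearest))
        del free[nearest]  # a driver serves one request at a time
    return pairs

# Hypothetical requests and drivers around Shanghai
requests = [("r1", (31.23, 121.47)), ("r2", (31.30, 121.50))]
drivers = {"d1": (31.24, 121.48), "d2": (31.29, 121.51)}
print(match_rides(requests, drivers))  # → [('r1', 'd1'), ('r2', 'd2')]
```

A real matching engine would run this kind of assignment continuously over a streaming feed of supply and demand, which is why the Big Data platform described above is hosted in an elastic cloud data center.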
How Cloud Networks Help
At the core of any sharing company is a resource-matching engine. Basing this engine in the cloud helps meet multiple requirements, including easy expansion, high-performance networking, and support for new applications and processes.
The success of a sharing company is based largely on scaling up to serve an expanding user base. Data centers built initially to support relatively small sharing-economy businesses must be able to scale elastically to support explosive growth. Mobike, one of China’s largest bike-sharing companies, took only nine months to grow its user base to approximately 10 million. Didi Dache reported that its user base doubled from 20 million to 40 million in a single month, then doubled again over the next 15 days; the company currently has 300 million users. Elastic and scalable network architectures make this type of business growth feasible.
Another requirement is the use of non-blocking networks for efficiently processing Big Data. By collecting enormous amounts of supply and demand information, and then performing Big Data analytics, companies are able to achieve the best match between supply and demand. The data released by Didi Chuxing shows that the company logs around 20 million ride requests, processes more than 2,000 terabytes of data, and plans over 9 billion routes every day. A non-blocking network is needed to process this huge amount of data. The non-blocking network implements wire-speed forwarding between any server pair, and effectively handles traffic bursts brought about by the many-to-one incast model used in Big Data processing.
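A back-of-the-envelope calculation based on the figures quoted above shows why a non-blocking fabric matters. The one-pass traversal and the sender count below are simplifying assumptions for illustration, not measured values.

```python
# Rough throughput implied by the figures quoted above (~2,000 TB of
# data processed per day), assuming each byte traverses the fabric once;
# real shuffle/replication traffic is typically several times higher.
TB = 10 ** 12                      # terabyte in bytes (decimal)
data_per_day_bytes = 2000 * TB
seconds_per_day = 86400

avg_bytes_per_s = data_per_day_bytes / seconds_per_day
avg_gbit_per_s = avg_bytes_per_s * 8 / 10 ** 9
print(f"Average fabric load: {avg_gbit_per_s:.0f} Gbit/s")

# Incast: N workers answering one aggregator at once on a 10 GE port.
# The burst arrives at N x line rate but drains at 1 x line rate, so the
# switch must buffer the excess or senders must back off.
senders, line_rate_gbps = 32, 10   # assumed fan-in for illustration
print(f"Instantaneous incast arrival: {senders * line_rate_gbps} Gbit/s "
      f"into a single {line_rate_gbps} GE port")
```

Even the sustained average is in the hundreds of gigabits per second, and the many-to-one incast bursts are an order of magnitude above any single port's drain rate, which is what drives the need for wire-speed, non-blocking forwarding and deep buffering.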
Artificial Intelligence (AI) applications impose a higher requirement for network quality. In particular, the sharing economy needs AI to deal with the enormous amounts of supply, demand, and historical transaction data. For example, AI-based data policies are needed to determine how to push the current demand information to the most appropriate suppliers — including supply-side subsidies — for best results. Stable AI operations depend on large-scale clusters of High-Performance Computing (HPC) servers and a low-latency, packet-loss-free network architecture.
Cloud Network Solutions
Huawei provides holistic cloud network solutions that support the development of enterprises based on the sharing economy. Specifically, the Huawei CloudFabric solution helps companies build simple, efficient, open cloud data center networks, while the CloudDCI solution assists in constructing flexible, ultra-high bandwidth, energy-saving Data Center Interconnect (DCI) networks. The two solutions combine to support the long-term evolution of enterprise cloud services, ensure the efficient operation of sharing-economy enterprises, and spur the rapid development of companies planning to enter the sharing-economy marketplace.
To create the elastic and scalable networks that enable the explosive growth of sharing-economy enterprises, companies begin by setting up data center networks based on a spine-leaf architecture that can support as many as 50,000 10 GE servers in a single cluster. Additionally, Virtual Extensible LAN (VXLAN) technology is used to build unified resource pools for on-demand resource scheduling. ‘IP + optical’ technologies are deployed for data center interconnection and cloud-based network migration to support further growth as well as data center backup. VXLAN also enables a unified controller to adjust bandwidth on demand for data center interconnection services, helping sharing-economy Internet companies deal with bursty traffic. This type of elastic and scalable network greatly increases resource utilization and allows for the fast service rollouts and quick capacity expansions needed by sharing-economy enterprises.
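The scale of such a cluster can be sanity-checked with simple arithmetic. The per-leaf port counts below are illustrative assumptions for a generic spine-leaf design, not a specific Huawei configuration.

```python
import math

# Illustrative spine-leaf sizing for a cluster of 50,000 10 GE servers;
# the leaf port counts are assumptions, not vendor hardware specs.
servers = 50_000
leaf_downlinks = 48              # 10 GE server-facing ports per leaf (assumed)
leaf_uplink_gbps = 6 * 40        # six 40 GE uplinks per leaf (assumed)

leaves = math.ceil(servers / leaf_downlinks)
oversubscription = (leaf_downlinks * 10) / leaf_uplink_gbps

print(f"Leaf switches needed: {leaves}")
print(f"Leaf oversubscription ratio: {oversubscription:.1f}:1")
```

With these assumed port counts, the cluster needs over a thousand leaf switches, which illustrates why automated, controller-driven provisioning (VXLAN overlays and on-demand scheduling) is essential at this scale.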
To accommodate the processing needs of Big Data, data center networks must be equipped with strong, non-blocking switching capabilities. Two-stage Clos networks are used to set up non-blocking network architectures, and core switching equipment is configured with Clos switching topologies to enable switch-fabric capacity expansion. Variable Size Cell (VSC) technology can be used for dynamic routing, with load balancing implemented between paths to prevent the switching matrix from being blocked. All these measures are designed to cope with dynamically changing traffic models within the data centers. Up to 96,000 Virtual Output Queues (VOQs) are supported. The VOQ mechanism, combined with ultra-large buffers on the ingress ports, implements end-to-end flow control of traffic destined for different egress ports. This approach ensures centralized scheduling and orderly forwarding of services to deliver truly non-blocking switching.
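The idea behind VOQs can be shown with a toy model: each ingress port keeps a separate queue per egress port, so congestion toward one egress cannot block cells headed elsewhere (no head-of-line blocking). The class and method names here are hypothetical, and real switch hardware implements this in silicon with a centralized scheduler.

```python
from collections import deque, defaultdict

class IngressPort:
    """Toy Virtual Output Queue model: one queue per egress port."""

    def __init__(self):
        self.voqs = defaultdict(deque)  # egress_port -> FIFO of cells

    def enqueue(self, egress, cell):
        self.voqs[egress].append(cell)

    def dequeue_for(self, egress):
        """Scheduler grants service toward one egress; drain its VOQ only."""
        q = self.voqs.get(egress)
        return q.popleft() if q else None

ingress = IngressPort()
ingress.enqueue("egress-1", "cell-A")  # suppose egress-1 is congested
ingress.enqueue("egress-2", "cell-B")

# With a single FIFO, cell-B would sit behind cell-A until egress-1
# recovered. With VOQs, the scheduler can serve egress-2 immediately:
print(ingress.dequeue_for("egress-2"))  # → cell-B
```

This separation is what lets a scheduler coordinate ingress buffers with egress availability and keep the fabric non-blocking even under bursty, many-to-one traffic.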
Ultra-large bandwidths must be available for data center interconnection. To meet this need, 400G routing and dual-carrier 400G Wavelength-Division Multiplexing (WDM) form ultra-large network pipes, allowing a single fiber pair to carry more than 20 Tbit/s. These technologies provide almost unlimited headroom for the expansion of cloud service capacity.
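A figure of this magnitude can be reconstructed with simple channel arithmetic. The spectrum width and grid spacing below are illustrative assumptions for a flexible-grid C-band system, not vendor specifications.

```python
# Illustrative capacity arithmetic for a 400G WDM line system; the
# spectrum and grid figures are assumptions, not vendor specs.
c_band_ghz = 4800            # usable C-band spectrum (assumed)
slot_ghz = 75                # flex-grid slot per 400G channel (assumed)
rate_per_channel_gbps = 400

channels = c_band_ghz // slot_ghz
capacity_tbps = channels * rate_per_channel_gbps / 1000
print(f"{channels} x 400G channels = {capacity_tbps:.1f} Tbit/s per fiber pair")
```

Under these assumptions a single fiber pair yields roughly 25 Tbit/s, consistent with the more-than-20 Tbit/s scale cited above.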
To meet the sharing economy’s growing demands for better service quality, companies need to set up AI-capable, packet-loss-free, low-latency data center networks. Huawei’s CloudFabric solution supports Data Center Bridging (DCB) to achieve zero-packet-loss traffic forwarding across more than 1,000 Layer 2 switching devices.
With the pervasive use of AI applications, the scale of HPC server platforms is growing. For example, Chinese Internet giants such as Baidu and Tencent operate tens of thousands of HPC servers. For future deployments, Huawei and its partners are currently developing CloudFabric solutions for Layer 3-oriented, large-scale, packet-loss-free, low-latency data center networks.
Companies also need low-latency network pipes for data center interconnection. Latency optimization at the Optical Transport Network (OTN) layer, together with automated path optimization, keeps latency as low as possible and delivers a good customer experience for performance-sensitive data center services.
The success of the sharing economy relies on cloud services. The Huawei CloudFabric and CloudDCI solutions help Internet companies build elastic, automated, low-latency, and ultra-large-bandwidth networks, to unleash the maximum potential of the sharing economy.