The Bureau of Geophysical Prospecting (BGP) of China National Petroleum Corporation is a geophysical technical services company that develops devices and software for land and shallow-water seismic prospecting, processing, interpretation, and physical exploration.
BGP has 26,000 employees, including more than 3,000 IT personnel and 300 software R&D staff. The company provides services in 34 countries and ranks first in the global market for land seismic prospecting.
BGP operates 23 processing centers worldwide that store more than 25 PB of data and provide 2 petaflops of computing capability.
Ever-growing oil-exploration data placed great strain on BGP's computing performance and storage systems. The poor parallel-processing capabilities and limited capacity of the existing storage systems were major challenges.
The Lustre cluster architecture used by BGP's legacy storage systems could not rebalance workloads after expansion, so data hotspots easily formed in certain parts of the system. Engineers had to manually redistribute data or stop services, wasting time and effort.
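To see why a scale-out system can rebalance automatically where a statically partitioned one cannot, consider consistent hashing, one common placement technique in distributed storage. This is a generic illustration only; the source does not describe OceanStor 9000's actual placement algorithm, and all names below are hypothetical.

```python
import hashlib
from bisect import bisect_right

# Generic consistent-hashing sketch (illustrative, not OceanStor's
# actual algorithm). Virtual nodes smooth out the key distribution.
VNODES = 100

def _hash(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

def build_ring(nodes):
    # Each physical node owns VNODES points on the hash ring.
    return sorted((_hash(f"{n}#{v}"), n) for n in nodes for v in range(VNODES))

def locate(ring, key):
    # A key belongs to the first ring point clockwise from its hash.
    hashes = [h for h, _ in ring]
    i = bisect_right(hashes, _hash(key)) % len(ring)
    return ring[i][1]

nodes = ["node1", "node2", "node3"]
ring = build_ring(nodes)
keys = [f"block-{i}" for i in range(10000)]
before = {k: locate(ring, k) for k in keys}

# Expand the cluster by one node: statistically only ~1/4 of the
# blocks relocate, and all of them move to the new node -- no manual
# redistribution and no hotspots on the old nodes.
ring2 = build_ring(nodes + ["node4"])
moved = sum(1 for k in keys if locate(ring2, k) != before[k])
print(f"{moved} of {len(keys)} blocks relocated")
```

Contrast this with a fixed striping scheme, where adding a node changes nothing until data is manually moved onto it.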
BGP's services ran at full load every day, so stopping services meant recomputing data and delaying the delivery of results. The single metadata engine in the Lustre architecture could not reliably handle large volumes of metadata queries, creating a performance bottleneck. BGP needed a new system architecture to address these challenges.
After testing its functions and performance, BGP purchased a three-node OceanStor 9000 Big Data Storage System for Phase I of the project, built on a fully symmetric scale-out architecture.
The system provides 800 MB/s of stable bandwidth per node and 60 PB of capacity. It can scale to 288 nodes to meet current and future storage needs, and it supports a variety of disk types so that BGP can classify data by service and computing model and optimize resource usage.
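The figures above imply a substantial theoretical ceiling. Assuming the 800 MB/s per-node figure scales linearly to the 288-node maximum (an idealization; real aggregate throughput depends on workload and network), a quick back-of-the-envelope calculation gives:

```python
# Back-of-the-envelope aggregate bandwidth at maximum scale.
# Assumes perfectly linear scaling of the quoted per-node figure,
# which real deployments only approximate.
per_node_mb_s = 800    # stable bandwidth per node (MB/s)
max_nodes = 288        # maximum cluster size

aggregate_gb_s = per_node_mb_s * max_nodes / 1000
print(f"theoretical aggregate: {aggregate_gb_s:.1f} GB/s")  # 230.4 GB/s
```

That is roughly 230 GB/s of aggregate read/write bandwidth at full build-out, under the linear-scaling assumption.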
The system is also configured redundantly for maximum uptime: nodes work in active-active mode; critical hardware such as fan modules, CPUs, power modules, and network components is duplicated; and RAID is distributed across nodes, so the system stays available when any single point of failure occurs.
The OceanStor 9000 can be deployed within half a day, and a new node becomes functional within 60 seconds.