CLC number: TP311
On-line Access: 2022-04-22
Received: 2018-08-16
Revision Accepted: 2018-09-14
Crosschecked: 2018-10-15
Xiang-ke Liao, Kai Lu, Can-qun Yang, Jin-wen Li, Yuan Yuan, Ming-che Lai, Li-bo Huang, Ping-jing Lu, Jian-bin Fang, Jing Ren, Jie Shen. Moving from exascale to zettascale computing: challenges and techniques[J]. Frontiers of Information Technology & Electronic Engineering, 2018, 19(10): 1236-1244.
@article{liao2018zettascale,
title="Moving from exascale to zettascale computing: challenges and techniques",
author="Xiang-ke Liao, Kai Lu, Can-qun Yang, Jin-wen Li, Yuan Yuan, Ming-che Lai, Li-bo Huang, Ping-jing Lu, Jian-bin Fang, Jing Ren, Jie Shen",
journal="Frontiers of Information Technology & Electronic Engineering",
volume="19",
number="10",
pages="1236-1244",
year="2018",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.1800494"
}
Abstract: High-performance computing (HPC) is essential to both traditional and emerging scientific fields, enabling progress in scientific research. With the continued development of HPC, exascale computing is expected to be put into practice around 2020. As Moore's law approaches its limit, HPC will face severe challenges in moving from exascale to zettascale, making the decade after 2020 a vital period for developing key HPC techniques. In this study, we discuss the challenges of enabling zettascale computing with respect to both hardware and software. We then present a perspective on the evolution and revolution of future HPC technology, leading to our main recommendations in support of zettascale computing in the years ahead.
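To give a rough sense of the scale involved, the back-of-the-envelope sketch below (in Python) works out the performance gap the abstract refers to: zettascale (10^21 FLOPS) is a 1000-fold jump over exascale (10^18 FLOPS), and closing that gap within the decade after 2020 would require sustaining roughly a 2x improvement per year. The ten-year window is an assumption taken from the abstract's framing, not a figure computed in the paper.

# Back-of-the-envelope scaling from exascale to zettascale.
# The 10-year window is an assumption based on the abstract's framing
# of 2020-2030 as the vital period; it is not a figure from the paper.

EXA_FLOPS = 1e18     # exascale: 10^18 floating-point operations per second
ZETTA_FLOPS = 1e21   # zettascale: 10^21 floating-point operations per second
YEARS = 10           # assumed transition window

overall_speedup = ZETTA_FLOPS / EXA_FLOPS          # 1000x overall
annual_growth = overall_speedup ** (1.0 / YEARS)   # ~2x per year, sustained

print(f"Overall speedup required: {overall_speedup:.0f}x")
print(f"Implied sustained annual growth: {annual_growth:.2f}x per year")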