CLC number: TP316.4
On-line Access: 2024-08-27
Crosschecked: 2017-09-23
Ji-guang Wan, Da-ping Li, Xiao-yang Qu, Chao Yin, Jun Wang, Chang-sheng Xie. A reliable and energy-efficient storage system with erasure coding cache[J]. Frontiers of Information Technology & Electronic Engineering, 2017, 18(9): 1370-1384.
@article{title="A reliable and energy-efficient storage system with erasure coding cache",
author="Ji-guang Wan, Da-ping Li, Xiao-yang Qu, Chao Yin, Jun Wang, Chang-sheng Xie",
journal="Frontiers of Information Technology & Electronic Engineering",
volume="18",
number="9",
pages="1370-1384",
year="2017",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.1600972"
}
Abstract: In modern energy-saving replication storage systems, a primary group of disks is kept powered up to serve incoming requests, while the remaining disks are spun down to save energy during slack periods. However, because new writes cannot be immediately synchronized to all disks, system reliability is degraded. In this paper, we develop a high-reliability, energy-efficient replication storage system, named RERAID, based on RAID10. RERAID employs part of the free space in the primary disk group and uses erasure coding to construct a code cache at the front end to absorb new writes. Because the code cache uses erasure coding to support recovery from the failure of two or more disks, RERAID guarantees reliability comparable to that of a RAID10 storage system. In addition, we develop an algorithm, called erasure coding write (ECW), that buffers many small random writes into a few large sequential writes, which are then written to the code cache in parallel to improve write performance. Experimental results show that RERAID significantly improves write performance and saves more energy than existing solutions.
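The core ECW idea described in the abstract, coalescing many small random writes into full, parity-protected stripes that are flushed as a few large writes, can be sketched as below. This is an illustrative single-parity (XOR) version only: the paper's code cache uses erasure codes that tolerate two or more disk failures (e.g., EVENODD or RDP), and all names, block sizes, and the `CodeCache` class here are hypothetical.

```python
from typing import List, Tuple

BLOCK = 4096          # block size in bytes (assumption, not from the paper)
DATA_BLOCKS = 4       # data blocks per stripe (assumption)

def xor_parity(blocks: List[bytes]) -> bytes:
    """Byte-wise XOR across equal-sized blocks; with single parity,
    any one lost block is the XOR of the parity and the survivors."""
    parity = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, b in enumerate(blk):
            parity[i] ^= b
    return bytes(parity)

class CodeCache:
    """Toy write buffer: accumulate small random writes until a full
    stripe is formed, then encode parity and flush the stripe as one
    large sequential write (the flush here just records the stripe)."""

    def __init__(self) -> None:
        self.pending: List[Tuple[int, bytes]] = []   # (logical addr, block)
        self.flushed: List[Tuple[List[int], List[bytes], bytes]] = []

    def write(self, addr: int, block: bytes) -> None:
        assert len(block) == BLOCK
        self.pending.append((addr, block))
        if len(self.pending) == DATA_BLOCKS:
            self._flush()

    def _flush(self) -> None:
        addrs = [a for a, _ in self.pending]
        data = [d for _, d in self.pending]
        parity = xor_parity(data)        # erasure-code the full stripe
        self.flushed.append((addrs, data, parity))
        self.pending.clear()
```

For example, four 4 KB random writes are absorbed as one stripe plus parity, and a lost data block can be rebuilt by XOR-ing the parity with the surviving blocks; a production system would replace `xor_parity` with a double-fault-tolerant code.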