Is it time to revisit Erasure Coding in Data-intensive clusters?

Published in the 27th IEEE International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems (MASCOTS), 2019

Recommended citation: Jad Darrous, Shadi Ibrahim, Christian Perez. "Is it time to revisit Erasure Coding in Data-intensive clusters?". In Proceedings of the 27th IEEE International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems (MASCOTS '19), Oct. 2019, Rennes, France. https://hal.inria.fr/hal-02263116

Abstract - Data-intensive clusters rely heavily on distributed storage systems to accommodate the unprecedented growth of data. The Hadoop Distributed File System (HDFS) is the primary storage for Big Data analytics frameworks such as Spark and Hadoop. Traditionally, HDFS has operated under replication to ensure data availability and to allow locality-aware task execution of data-intensive applications. Recently, erasure coding (EC) has been emerging as an alternative to replication in storage systems, thanks to the continuous reduction in its computation overhead. In this work, we conduct an extensive experimental study to understand the performance of data-intensive applications under replication and EC. We use representative benchmarks on the Grid'5000 testbed to evaluate how analytics workloads, data persistence, failures, back-end storage devices, and network configuration impact their performance. Our study sheds light not only on the potential benefits of erasure coding in data-intensive clusters but also on the aspects that may help to realize those benefits effectively.
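As background for the replication-vs-EC trade-off the abstract discusses, here is a minimal sketch (not from the paper; the (4,1) single-parity scheme and all names are illustrative) of how erasure coding tolerates a lost block at a fraction of replication's storage cost. HDFS 3.x ships Reed-Solomon policies such as RS(6,3); the toy XOR parity below illustrates the same idea in its simplest form.

```python
# Illustrative sketch: single-parity erasure coding over equal-size
# byte blocks. One lost data block can be rebuilt by XOR-ing the
# surviving data blocks with the parity block.

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

# Split an object into k data blocks and compute one parity block.
k = 4
data = [bytes([i] * 8) for i in range(k)]  # 4 data blocks of 8 bytes each
parity = xor_blocks(data)                  # 1 parity block

# Simulate losing data block 2 and recovering it from the survivors.
surviving = data[:2] + data[3:] + [parity]
recovered = xor_blocks(surviving)
assert recovered == data[2]

# Storage overhead: (k + parity blocks) / k for EC vs. 3x replication.
ec_overhead = (k + 1) / k  # 1.25x for this toy (4,1) scheme
rep_overhead = 3.0         # 3x under HDFS's default triple replication
```

The overhead gap (1.25x here, 1.5x for RS(6,3), versus 3x for replication) is what makes EC attractive, at the cost of losing per-block data locality and paying reconstruction traffic on failure, which is precisely the tension the paper evaluates.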