MLModelScope: A Distributed Platform for Model Evaluation and Benchmarking at Scale
   
Publication Year:
  2020
Authors:
  Abdul Dakkak, Cheng Li, Jinjun Xiong, Wen-mei Hwu
   
Published:
  https://arxiv.org/abs/2002.08295
   
Abstract:

Machine Learning (ML) and Deep Learning (DL) innovations are being introduced at such a rapid pace that researchers are hard-pressed to analyze and study them. The complicated procedures for evaluating innovations, along with the lack of standard and efficient ways of specifying and provisioning ML/DL evaluation, are a major "pain point" for the community. This paper proposes MLModelScope, an open-source, framework/hardware-agnostic, extensible, and customizable design that enables repeatable, fair, and scalable model evaluation and benchmarking. We implement the distributed design with support for all major frameworks and hardware, and equip it with web, command-line, and library interfaces. To demonstrate MLModelScope's capabilities, we perform parallel evaluations and show how subtle changes to the model evaluation pipeline affect accuracy and how HW/SW stack choices affect performance.
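
The abstract emphasizes specifying and provisioning evaluations in a framework/hardware-agnostic way. The sketch below is a minimal, hypothetical illustration of what such a specification and a library-style entry point could look like; the names (EvaluationSpec, run_evaluation, the field names) are illustrative assumptions, not MLModelScope's actual API.

# Hypothetical sketch only -- field and function names are illustrative
# assumptions, not MLModelScope's actual library interface.
from dataclasses import dataclass, field

@dataclass
class EvaluationSpec:
    """Framework/hardware-agnostic description of one model evaluation."""
    model_name: str
    model_version: str
    framework: str          # e.g. "TensorFlow", "PyTorch", "MXNet"
    hardware: str           # e.g. "CPU", "GPU"
    batch_size: int = 1
    preprocessing: dict = field(default_factory=dict)  # e.g. resize, normalization

def run_evaluation(spec: EvaluationSpec, inputs: list) -> dict:
    """Dispatch an evaluation request.

    In a distributed deployment this would be routed to an agent that has the
    requested framework/hardware stack installed; here the request is only
    echoed so the sketch stays self-contained and runnable.
    """
    return {
        "model": f"{spec.model_name}:{spec.model_version}",
        "framework": spec.framework,
        "hardware": spec.hardware,
        "num_inputs": len(inputs),
        "status": "submitted",
    }

if __name__ == "__main__":
    spec = EvaluationSpec(
        model_name="ResNet50",
        model_version="1.0",
        framework="TensorFlow",
        hardware="GPU",
        batch_size=32,
        preprocessing={"resize": [224, 224], "mean": [123.68, 116.78, 103.94]},
    )
    print(run_evaluation(spec, inputs=["cat.jpg", "dog.jpg"]))

Recording preprocessing parameters (such as resize dimensions and normalization constants) alongside the model and stack choices is what makes it possible to study how subtle pipeline changes affect reported accuracy, as the abstract notes.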


   
BibTeX:
 
@misc{dakkak2020mlmodelscope,
  title={MLModelScope: A Distributed Platform for Model Evaluation and Benchmarking at Scale},
  author={Abdul Dakkak and Cheng Li and Jinjun Xiong and Wen-mei Hwu},
  year={2020},
  eprint={2002.08295},
  archivePrefix={arXiv},
  primaryClass={cs.DC}
}