Frustrated with Replicating Claims of a Shared Model? A Solution
   
Publication Year:
  2019
Authors:
  Abdul Dakkak, Cheng Li, Abhishek Srivastava, Jinjun Xiong, Wen-Mei W. Hwu
   
Published:
  https://arxiv.org/abs/1811.09737
   
Abstract:

Machine Learning (ML) and Deep Learning (DL) innovations are being introduced at such a rapid pace that model owners and evaluators are hard-pressed to analyze and study them. This is exacerbated by complicated evaluation procedures. The main cause of this "pain point" is the lack of standard systems and efficient techniques for specifying and provisioning ML/DL evaluation. This work discusses common pitfalls in replicating DL model evaluation and shows that these subtle pitfalls can affect both accuracy and performance. It then proposes MLModelScope, a specification for repeatable model evaluation and a runtime to provision and measure experiments, as a remedy for these pitfalls. We show that by easing the model specification and evaluation process, MLModelScope facilitates rapid adoption of ML/DL innovations.

   
BibTeX:
 
@article{DBLP:journals/corr/abs-1811-09737,
  author        = {Abdul Dakkak and
                   Cheng Li and
                   Abhishek Srivastava and
                   Jinjun Xiong and
                   Wen{-}Mei W. Hwu},
  title         = {MLModelScope: Evaluate and Measure {ML} Models within {AI} Pipelines},
  journal       = {CoRR},
  volume        = {abs/1811.09737},
  year          = {2018},
  url           = {http://arxiv.org/abs/1811.09737},
  archivePrefix = {arXiv},
  eprint        = {1811.09737},
  timestamp     = {Fri, 30 Nov 2018 12:44:28 +0100},
  biburl        = {https://dblp.org/rec/bib/journals/corr/abs-1811-09737},
  bibsource     = {dblp computer science bibliography, https://dblp.org}
}