Benchmarking and Understanding ML Inference
   
Publication Year:
  2019
Authors:
  Cheng Li, Abdul Dakkak, Jinjun Xiong, Wen-mei Hwu
   
Published:
  https://arxiv.org/abs/1904.12437
   
Abstract:
An increasingly complex and diverse collection of Machine Learning (ML) models and hardware/software stacks, collectively referred to as "ML artifacts", is being proposed, leading to a diverse ML landscape. These ML innovations have outpaced researchers' ability to analyze, study, and adapt them. This is exacerbated by the complicated and sometimes non-reproducible procedures for ML evaluation. The current practice of sharing ML artifacts is through repositories where artifact authors post ad hoc code and some documentation. Authors often omit critical information needed for others to reproduce their results. As a consequence, others frequently cannot reproduce the artifact authors' claims, let alone adapt the models to their own use. This article discusses the common challenges and pitfalls of reproducing ML artifacts, which can serve as a guideline for ML researchers when sharing or reproducing artifacts.