Paper accepted for SIGMOD 2020
Our paper in collaboration with the DIMA group was accepted for publication at SIGMOD 2020.
Paper Title: Optimizing Machine Learning Workloads in Collaborative Environments
Authors: Behrouz Derakhshan, Alireza Rezaei Mahdiraji, Ziawasch Abedjan, Tilmann Rabl, Volker Markl
Effective collaboration among data scientists results in high-quality and efficient machine learning (ML) workloads.
In a collaborative environment, such as Kaggle or Google Colaboratory, users typically re-execute or modify published scripts to recreate or improve the result.
This introduces many redundant data processing and model training operations.
Reusing the data generated by the redundant operations leads to more efficient execution of future workloads.
However, existing collaborative environments lack a data management component for storing and reusing the result of previously executed operations.
In this paper, we present a system to optimize the execution of ML workloads in collaborative environments by reusing previously performed operations and their results.
We utilize a so-called Experiment Graph (EG) to store the artifacts, i.e., raw and intermediate data or ML models, as vertices and operations of ML workloads as edges.
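A minimal sketch of the structure the abstract describes, assuming a dictionary-based representation; the class and field names are illustrative and not taken from the paper's implementation:

```python
class ExperimentGraph:
    """Hypothetical Experiment Graph (EG) sketch: artifacts (raw data,
    intermediate data, ML models) are vertices; workload operations
    are directed edges between the artifacts they consume and produce."""

    def __init__(self):
        self.vertices = {}  # artifact_id -> metadata (size, compute cost, ...)
        self.edges = {}     # (src_id, dst_id) -> operation name

    def add_artifact(self, artifact_id, size, compute_cost):
        self.vertices[artifact_id] = {"size": size, "compute_cost": compute_cost}

    def add_operation(self, src_id, dst_id, op_name):
        self.edges[(src_id, dst_id)] = op_name


# Illustrative usage: a raw dataset transformed into a feature matrix.
eg = ExperimentGraph()
eg.add_artifact("raw.csv", size=100, compute_cost=0)
eg.add_artifact("features", size=40, compute_cost=5)
eg.add_operation("raw.csv", "features", "vectorize")
```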
In theory, the size of EG can become unnecessarily large, while the storage budget might be limited.
At the same time, for some artifacts, the overall storage and retrieval cost might outweigh the recomputation cost.
To address this issue, we propose two algorithms for materializing artifacts based on their likelihood of future reuse.
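The materialization trade-off could be sketched as follows: keep an artifact only if its expected recomputation savings, weighted by the likelihood of reuse, outweigh its storage-and-retrieval cost, subject to a storage budget. The cost model and greedy ranking below are illustrative assumptions, not the paper's actual algorithms:

```python
def should_materialize(artifact, reuse_probability):
    """Hypothetical heuristic: materialize only if expected recomputation
    savings exceed the storage-and-retrieval cost."""
    expected_saving = reuse_probability * artifact["compute_cost"]
    storage_cost = artifact["store_cost"] + artifact["retrieve_cost"]
    return expected_saving > storage_cost


def select_within_budget(artifacts, probs, budget):
    """Greedy selection under a storage budget: rank artifacts by
    expected savings per unit of storage, then pick while space remains."""
    ranked = sorted(
        artifacts.items(),
        key=lambda kv: probs[kv[0]] * kv[1]["compute_cost"] / kv[1]["size"],
        reverse=True,
    )
    chosen, used = [], 0
    for artifact_id, a in ranked:
        if used + a["size"] <= budget and should_materialize(a, probs[artifact_id]):
            chosen.append(artifact_id)
            used += a["size"]
    return chosen


# Illustrative data: an expensive-to-compute artifact wins the budget.
artifacts = {
    "features": {"size": 10, "compute_cost": 50, "store_cost": 1, "retrieve_cost": 1},
    "tokens":   {"size": 30, "compute_cost": 2,  "store_cost": 1, "retrieve_cost": 1},
}
probs = {"features": 0.5, "tokens": 0.5}
chosen = select_within_budget(artifacts, probs, budget=20)
```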
Given the materialized artifacts inside EG, we devise a linear-time reuse algorithm to find the optimal execution plan for incoming ML workloads.
Our reuse algorithm incurs only a negligible overhead and scales to the large number of incoming ML workloads in collaborative environments.
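One way such a linear-time reuse pass could work, sketched under the assumption that the incoming workload is a DAG visited in topological order: for each artifact, choose the cheaper of loading a materialized copy from EG or recomputing it from its parents' already-decided costs. This single-pass formulation is an illustrative assumption, not the paper's exact procedure:

```python
def plan_execution(workload_order, materialized, load_cost, compute_cost, parents):
    """Hypothetical linear-pass reuse sketch: one decision per artifact,
    visited in topological order, so the whole plan costs O(V + E)."""
    best, plan = {}, {}
    for a in workload_order:
        # Cost of recomputing: own operation cost plus parents' chosen costs.
        recompute = compute_cost[a] + sum(best[p] for p in parents.get(a, []))
        if a in materialized and load_cost[a] < recompute:
            best[a], plan[a] = load_cost[a], "load"
        else:
            best[a], plan[a] = recompute, "compute"
    return plan, best


# Illustrative workload: raw -> features -> model, with "features"
# already materialized in EG and cheap to load.
parents = {"features": ["raw"], "model": ["features"]}
compute_cost = {"raw": 0, "features": 5, "model": 10}
plan, cost = plan_execution(
    ["raw", "features", "model"], {"features"}, {"features": 1}, compute_cost, parents
)
```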
Our experiments show that we improve the run-time by one order of magnitude for repeated execution of workloads and by 50% for the execution of modified workloads in collaborative environments.