💱 A curated list of data valuation (DV) resources to help you design your next data marketplace. DV aims to understand the value of a data point for a given machine learning task and is an essential primitive in the design of data marketplaces and explainable AI.
💻 Code available
🎥 Talk / Slides
Towards Efficient Data Valuation Based on the Shapley Value | Ruoxi Jia & David Dao, Boxin Wang, Frances Ann Hubis, Nick Hynes, Nezihe Merve Gürel, Bo Li, Ce Zhang, Dawn Song, Costas J. Spanos | 2019 | SummaryJia et al. (2019) contribute theoretical and practical results on efficient methods for approximating the Shapley value (SV). They show that methods requiring only a sublinear number of model evaluations are possible and that further reductions can be made for sparse SVs. Lastly, they introduce two practical SV estimation methods for ML tasks, one for uniformly stable learning algorithms and one for smooth loss functions. |
Bibtex@inproceedings{jia2019towards, |
💻 | |
Data Shapley: Equitable Valuation of Data for Machine Learning | Amirata Ghorbani, James Zou | 2019 | SummaryGhorbani & Zou (2019) introduce the (data) Shapley value to equitably measure the value of each training point to a supervised learner's performance. They further outline several benefits of the Shapley value, e.g. being able to capture outliers or inform what new data to acquire, as well as develop Monte Carlo and gradient-based methods for its efficient estimation. |
Bibtex@inproceedings{ghorbani2019data, |
💻 | |
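The Monte Carlo estimator from Ghorbani & Zou can be sketched with permutation sampling: each sampled permutation credits every point with its marginal contribution to the points preceding it. A minimal sketch, assuming `utility` is a user-supplied black box (e.g. validation accuracy of a model retrained on the subset); the paper adds a truncation heuristic not shown here.

```python
import random

def monte_carlo_shapley(points, utility, n_perms=200, seed=0):
    """Permutation-sampling estimate of data Shapley values.

    `utility` maps a frozenset of points to a performance score, e.g. the
    validation accuracy of a model trained on that subset.
    """
    rng = random.Random(seed)
    values = {p: 0.0 for p in points}
    for _ in range(n_perms):
        perm = list(points)
        rng.shuffle(perm)
        coalition = set()
        prev_u = utility(frozenset())
        for p in perm:                        # marginal contribution of p
            coalition.add(p)
            u = utility(frozenset(coalition))
            values[p] += u - prev_u
            prev_u = u
    return {p: v / n_perms for p, v in values.items()}
```

For an additive utility the estimate is exact, since every marginal contribution of a point equals its weight regardless of ordering.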
A Distributional Framework for Data Valuation | Amirata Ghorbani, Michael P. Kim, James Zou | 2020 | SummaryGhorbani et al. (2020) formulate the Shapley value as a distributional quantity in the context of an underlying data distribution instead of a fixed dataset. They further introduce a novel sampling-based algorithm for the distributional Shapley value with strong approximation guarantees. |
Bibtex@inproceedings{ghorbani2020distributional, |
💻 | |
Asymmetric Shapley values: incorporating causal knowledge into model-agnostic explainability | Christopher Frye, Colin Rowat, Ilya Feige | 2020 | SummaryFrye et al. (2020) incorporate causality into the Shapley value framework. Importantly, their framework can handle any amount of causal knowledge and does not require the complete causal graph underlying the data. |
Bibtex@article{frye2020asymmetric, |
🎥 | |
Collaborative Machine Learning with Incentive-Aware Model Rewards | Rachael Hwee Ling Sim, Yehong Zhang, Mun Choon Chan, Bryan Kian Hsiang Low | 2020 | SummarySim et al. (2020) introduce a data valuation method with separate ML models as rewards based on the Shapley value and information gain on model parameters given its data. They further define several conditions for incentives such as Shapley fairness, stability, individual rationality, and group welfare, that are suitable for the freely replicable nature of their model reward scheme. |
Bibtex@inproceedings{sim2020collaborative, |
||
Validation free and replication robust volume-based data valuation | Xinyi Xu, Zhaoxuan Wu, Chuan Sheng Foo, Bryan Kian Hsiang Low | 2021 | SummaryXu et al. (2021) propose measuring the value of data by its diversity, via a robust notion of volume. This removes the need for a validation set and yields guarantees on replication robustness, but it suffers from the curse of dimensionality and may ignore useful information that a validation set would provide. |
Bibtex@article{xu2021validation, |
💻 | |
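The base quantity behind this method is the volume of the feature matrix, sqrt(det(X^T X)). A minimal sketch of that base measure only; the paper's *robust* volume additionally discretizes the feature space into cubes so that replicated points add little value, which is not shown here.

```python
import numpy as np

def volume(X):
    """Volume of feature matrix X (rows = data points): sqrt(det(X^T X)).

    This is the diversity measure underlying volume-based valuation; note
    that the plain volume rewards naive replication, which motivates the
    robust variant in the paper.
    """
    X = np.asarray(X, dtype=float)
    return float(np.sqrt(np.linalg.det(X.T @ X)))
```

Duplicating a 2-D dataset doubles det(X^T X) along each dimension, so the plain volume grows by a factor of 2^(d/2), illustrating why replication robustness requires the discretized variant.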
Beta Shapley: a Unified and Noise-reduced Data Valuation Framework for Machine Learning | Yongchan Kwon, James Zou | 2021 | SummaryKwon & Zou (2022) introduce Beta Shapley, a generalization of Data Shapley by relaxing the efficiency axiom. |
Bibtex@article{kwon2021beta, |
||
Gradient-Driven Rewards to Guarantee Fairness in Collaborative Machine Learning | Xinyi Xu, Lingjuan Lyu, Xingjun Ma, Chenglin Miao, Chuan Sheng Foo, Bryan Kian Hsiang Low | 2021 | SummaryXu et al. (2021) propose the cosine gradient Shapley value to fairly evaluate the expected contribution of each agent's update in the federated learning setting, removing the need for an auxiliary validation dataset. They further introduce a novel training-time gradient reward mechanism with a fairness guarantee. |
Bibtex@article{xu2021gradient, |
||
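The idea can be illustrated by scoring each agent's gradient by its cosine similarity to the aggregated gradient. A simplified single-round sketch: the paper defines a Shapley-style expectation over coalitions and this direct cosine-to-aggregate score is only the practical approximation of that quantity.

```python
import math

def cosine_gradient_scores(agent_grads):
    """Score each agent's gradient by cosine similarity to the aggregate
    gradient -- a validation-free contribution measure (single-round sketch
    of the cosine gradient Shapley value's practical approximation)."""
    agg = [sum(coords) for coords in zip(*agent_grads)]
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)
    return [cos(g, agg) for g in agent_grads]
```

Agents whose updates align with the consensus direction score higher, which is what the training-time reward mechanism then exploits.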
Improving Cooperative Game Theory-based Data Valuation via Data Utility Learning | Tianhao Wang, Yu Yang, Ruoxi Jia | 2022 | SummaryWang et al. (2022) propose a general framework to improve the effectiveness of sampling-based Shapley value (SV) or Least core (LC) estimation heuristics. They propose learning a predictor of the learning algorithm's performance (termed data utility learning) and using this predictor to estimate performance on unseen subsets without retraining, making SV and LC estimation cheaper. |
Bibtex@article{wang2021improving, |
||
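The core ingredient is a model that maps a subset-membership vector to a predicted utility. A minimal sketch using a linear least-squares predictor as the utility model — an assumption for illustration, as the paper uses more expressive learners.

```python
import numpy as np

def learn_utility(samples):
    """Fit a linear 'data utility model' from (membership mask, utility) pairs.

    Once fitted, the returned function can score arbitrary subsets without
    retraining the underlying learning algorithm, which is what makes
    sampling-based SV/LC estimation cheaper.
    """
    masks = np.array([m for m, _ in samples], dtype=float)
    A = np.hstack([masks, np.ones((len(masks), 1))])   # add a bias column
    u = np.array([v for _, v in samples], dtype=float)
    w, *_ = np.linalg.lstsq(A, u, rcond=None)
    return lambda mask: float(np.asarray(mask, dtype=float) @ w[:-1] + w[-1])
```

For an additive utility this linear model recovers the true utility exactly; for real learning curves the predictor is only an approximation, traded off against the cost of retraining.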
Data Banzhaf: A Robust Data Valuation Framework for Machine Learning | Jiachen T. Wang, Ruoxi Jia | 2023 | SummaryWang et al. (2023) propose using the Banzhaf value for data valuation, providing better robustness against noisy performance scores and an efficient estimator based on the Maximum Sample Reuse (MSR) principle. |
Bibtex@InProceedings{pmlr-v206-wang23e, |
💻 |
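MSR exploits the identity that, under uniformly random coalitions, a point's Banzhaf value equals the difference between the average utility of coalitions containing it and the average utility of those that do not, so every sampled coalition updates all points' estimates at once. A minimal sketch of that estimator, with `utility` a user-supplied black box:

```python
import random

def banzhaf_msr(points, utility, n_samples=4000, seed=0):
    """Maximum-Sample-Reuse estimate of Banzhaf data values.

    Each coalition S is sampled uniformly (each point included with
    probability 1/2); the Banzhaf value of point p is estimated as
    mean(U(S) | p in S) - mean(U(S) | p not in S).
    """
    rng = random.Random(seed)
    sums = {p: [0.0, 0.0] for p in points}    # [sum if in S, sum if out]
    counts = {p: [0, 0] for p in points}
    for _ in range(n_samples):
        S = frozenset(p for p in points if rng.random() < 0.5)
        u = utility(S)
        for p in points:
            side = 0 if p in S else 1
            sums[p][side] += u
            counts[p][side] += 1
    return {p: sums[p][0] / max(counts[p][0], 1)
               - sums[p][1] / max(counts[p][1], 1)
            for p in points}
```

Unlike permutation sampling, no utility evaluation is ever "spent" on a single point, which is the source of the estimator's sample efficiency.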
Efficient Task-Specific Data Valuation for Nearest Neighbor Algorithms | Ruoxi Jia, David Dao, Boxin Wang, Frances Ann Hubis, Nezihe Merve Gurel, Bo Li, Ce Zhang, Costas J. Spanos, Dawn Song | 2019 | SummaryJia et al. (2019) present algorithms to compute the Shapley value exactly in quasi-linear time and approximations in sublinear time for k-nearest-neighbor models. They empirically evaluate their algorithms at scale and extend them to several other settings. |
Bibtex@article{jia12efficient, |
💻 | |
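For unweighted K-NN classifiers, Jia et al.'s key observation is that the Shapley value admits an exact recursion over training points sorted by distance to the test point. A sketch of that recursion for a single test point, assuming no distance ties; the utility is the fraction of the K nearest retained points whose label matches `y_test`.

```python
def knn_shapley(distances, labels, y_test, K):
    """Exact data Shapley values for an unweighted K-NN classifier and a
    single test point, via the sorted-by-distance recursion."""
    N = len(labels)
    order = sorted(range(N), key=lambda i: distances[i])   # nearest first
    s = [0.0] * N
    s[order[-1]] = float(labels[order[-1]] == y_test) / N
    for pos in range(N - 1, 0, -1):        # 1-indexed positions N-1 .. 1
        i, j = order[pos - 1], order[pos]
        s[i] = s[j] + (float(labels[i] == y_test)
                       - float(labels[j] == y_test)) / K * min(K, pos) / pos
    return s
```

The recursion runs in O(N log N) (dominated by sorting), versus the exponential cost of generic Shapley computation; values for multiple test points are averaged in the paper's full algorithm.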
Efficient computation and analysis of distributional Shapley values | Yongchan Kwon, Manuel A. Rivas, James Zou | 2021 | SummaryKwon et al. (2021) develop tractable analytic expressions for the distributional data Shapley value for linear regression, binary classification, and non-parametric density estimation as well as new efficient methods for its estimation. |
Bibtex@inproceedings{kwon2021efficient, |
💻 |
Scalability vs. Utility: Do We Have to Sacrifice One for the Other in Data Importance Quantification? | Ruoxi Jia, Fan Wu, Xuehui Sun, Jiacen Xu, David Dao, Bhavya Kailkhura, Ce Zhang, Bo Li, Dawn Song | 2021 | SummaryJia et al. (2021) perform a theoretical analysis on the differences between leave-one-out-based and Shapley value-based methods as well as an empirical study across several ML tasks investigating the two aforementioned methods as well as exact Shapley value-based methods and Shapley over KNN Surrogates. |
Bibtex@misc{jia2021scalability, |
💻 | |
Shapley values for feature selection: The good, the bad, and the axioms | Daniel Fryer, Inga Strümke, Hien Nguyen | 2021 | SummaryFryer et al. (2021) call into question the appropriateness of using the Shapley value for feature selection and advise caution against the magical thinking that presenting its abstract general axioms as "favourable and fair" may introduce. They further point out that the four axioms of "efficiency", "null player", "symmetry", and "additivity" do not guarantee that the Shapley value is suited to feature selection and may sometimes even imply the opposite. |
Bibtex@misc{fryer2021shapley, |
Understanding Black-box Predictions via Influence Functions | Pang Wei Koh, Percy Liang | 2017 | SummaryKoh & Liang (2017) introduce the use of influence functions, a technique borrowed from robust statistics, to identify training points most responsible for a model's given prediction without needing to retrain. They further develop a simple and efficient implementation of influence functions that scales to large ML settings. |
Bibtex@inproceedings{koh2017understanding, |
💻 | 🎥 |
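For models with closed-form solutions the influence approximation is easy to sanity-check against actual leave-one-out retraining. A sketch for ridge regression, assuming the mean-loss objective (1/n)·Σ(xᵢᵀθ−yᵢ)² + λ‖θ‖²: the predicted change in test loss from removing point i is (1/n)·∇L_testᵀ H⁻¹ ∇Lᵢ, with H the Hessian of the regularized training loss.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form minimizer of (1/n)*sum_i (x_i^T t - y_i)^2 + lam*||t||^2."""
    n, d = X.shape
    return np.linalg.solve(X.T @ X + n * lam * np.eye(d), X.T @ y)

def influence_on_test_loss(X, y, lam, x_test, y_test):
    """Influence-function estimate of the change in test squared error
    caused by removing each training point -- no retraining needed."""
    n, d = X.shape
    theta = ridge_fit(X, y, lam)
    H = (2.0 / n) * X.T @ X + 2.0 * lam * np.eye(d)    # loss Hessian
    grads = 2.0 * (X @ theta - y)[:, None] * X          # per-point gradients
    g_test = 2.0 * (x_test @ theta - y_test) * x_test   # test-loss gradient
    return (grads @ np.linalg.solve(H, g_test)) / n
```

On well-conditioned problems these predictions correlate very strongly with the actual leave-one-out effects, which is the property Koh & Liang verify at much larger scale with implicit Hessian-vector products.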
On the accuracy of influence functions for measuring group effects | Pang Wei Koh*, Kai-Siang Ang*, Hubert H. K. Teo*, and Percy Liang | 2019 | SummaryKoh et al. (2019) study influence functions for measuring the effects of large groups of training points instead of individual points. They empirically find that predicted and actual effects correlate, though the predictions often underestimate the actual effects, and they show theoretically that this correlation need not hold in general, realistic settings. |
Bibtex@article{koh2019accuracy, |
💻 | 🎥 |
Data Valuation using Reinforcement Learning | Jinsung Yoon, Sercan Ö Arık, Tomas Pfister | 2020 | SummaryYoon et al. (2020) propose using reinforcement learning for data valuation to learn data values jointly with the predictor model. |
Bibtex@inproceedings{49189, |
💻 | 🎥 |
DAVINZ: Data Valuation using Deep Neural Networks at Initialization | Zhaoxuan Wu, Yao Shu, Bryan Kian Hsiang Low | 2022 | SummaryWu et al. (2022) introduce a validation-based and training-free method for efficient data valuation with large and complex deep neural networks (DNNs). They derive and exploit a domain-aware generalization bound for DNNs to characterize their performance without training and use this bound as the scoring function, while keeping conventional techniques such as Shapley values as the valuation function. |
Bibtex@inproceedings{wu2022davinz, |
🎥 |
Data-OOB: Out-of-bag Estimate as a Simple and Efficient Data Value | Yongchan Kwon, James Zou | 2023 | SummaryKwon & Zou (2023) propose using the out-of-bag estimate of a bagging estimator for computationally efficient data valuation. |
Bibtex@inproceedings{DBLP:conf/icml/Kwon023, |
💻 | 🎥 |
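The Data-OOB value of a training point is the average correctness of the base models whose bootstrap bag happened to omit it, so valuation comes almost for free once a bagging ensemble is trained. A minimal sketch, assuming a generic `fit(Xb, yb)` callback returning a predictor; the paper uses the weak learners of a bagging estimator such as a random forest.

```python
import random

def data_oob(X, y, fit, n_boot=500, seed=0):
    """Data-OOB values: per-point average correctness over base models
    whose bootstrap bag did not contain that point."""
    rng = random.Random(seed)
    n = len(y)
    hits = [0.0] * n
    counts = [0] * n
    for _ in range(n_boot):
        bag = [rng.randrange(n) for _ in range(n)]            # bootstrap bag
        predict = fit([X[i] for i in bag], [y[i] for i in bag])
        for i in set(range(n)) - set(bag):                    # out-of-bag points
            hits[i] += float(predict(X[i]) == y[i])
            counts[i] += 1
    return [h / c if c else 0.0 for h, c in zip(hits, counts)]
```

Mislabeled points tend to receive low values, since models trained without them usually disagree with their labels.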
OpenDataVal: a Unified Benchmark for Data Valuations | Kevin Jiang, Weixin Liang, James Zou, Yongchan Kwon | 2023 | SummaryJiang et al. (2023) provide a unified Python library for building and testing data evaluators across different datasets and models, along with a set of new benchmarks. |
Bibtex@article{jiang2023opendataval, |
💻 | 🎥 |
Data Valuation in Machine Learning: “Ingredients”, Strategies, and Open Challenges | Rachael Hwee Ling Sim*, Xinyi Xu*, Bryan Kian Hsiang Low | 2022 | SummarySim et al. (2022) present a technical survey of data valuation and its "ingredients" and properties. The paper outlines common desiderata as well as some open research challenges. |
Bibtex@inproceedings{sim2022data, |
🎥 |
A demonstration of sterling: a privacy-preserving data marketplace | Nick Hynes, David Dao, David Yan, Raymond Cheng, Dawn Song | 2018 | Bibtex@article{hynes2018demonstration, |
|||
DataBright: Towards a Global Exchange for Decentralized Data Ownership and Trusted Computation | David Dao, Dan Alistarh, Claudiu Musat, Ce Zhang | 2018 | Bibtex@article{dao2018databright, |
|||
A Marketplace for Data: An Algorithmic Solution | Anish Agarwal, Munther Dahleh, Tuhin Sarkar | 2019 | Bibtex@inproceedings{agarwal2019marketplace, |
|||
Computing a Data Dividend | Eric Bax | 2019 | Bibtex@misc{bax2019computing, |
|||
Incentivizing Collaboration in Machine Learning via Synthetic Data Rewards | Sebastian Shenghong Tay, Xinyi Xu, Chuan Sheng Foo, Bryan Kian Hsiang Low | 2021 | Bibtex@article{tay2021incentivizing, |
Data Capsule: A New Paradigm for Automatic Compliance with Data Privacy Regulations | Lun Wang, Joseph P. Near, Neel Somani, Peng Gao, Andrew Low, David Dao, Dawn Song | 2019 | Bibtex@misc{wang2019data, |
💻 |
A Principled Approach to Data Valuation for Federated Learning | Tianhao Wang, Johannes Rausch, Ce Zhang, Ruoxi Jia, Dawn Song | 2020 | Bibtex@misc{wang2020principled, |
|||
Data valuation for medical imaging using Shapley value and application to a large-scale chest X-ray dataset | Siyi Tang, Amirata Ghorbani, Rikiya Yamashita, Sameer Rehman, Jared A Dunnmon, James Zou, Daniel L Rubin | 2021 | Bibtex@article{tang2021data, |
Nonrivalry and the Economics of Data | Charles I. Jones, Christopher Tonetti | 2019 | Bibtex@article{10.1257/aer.20191330, |
Chapter 5: Data as Labor, Radical Markets | Eric A. Posner and E Glen Weyl | 2019 | Bibtex@book{posner2019radical, |
|||
Should We Treat Data as Labor? Moving beyond "Free" | Imanol Arrieta-Ibarra, Leonard Goff, Diego Jiménez-Hernández, Jaron Lanier, E. Glen Weyl | 2018 | Bibtex@article{10.1257/pandp.20181003, |
Performative Prediction | Juan Perdomo, Tijana Zrnic, Celestine Mendler-Dünner, Moritz Hardt | 2020 | SummaryPerdomo et al. (2020) introduce the concept of "performative prediction", dealing with predictions that influence the target they aim to predict, e.g. when actions taken based on a prediction cause a distribution shift. The authors develop a risk minimization framework for performative prediction and introduce the equilibrium notion of performative stability, where predictions are calibrated against the future outcomes that manifest from acting on the prediction. |
Bibtex@inproceedings{perdomo2020performative, |
||
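A performatively stable point is a fixed point of repeated risk minimization: retraining on the distribution the current model induces leaves the model unchanged. A hypothetical toy example (not from the paper): deploying parameter theta shifts a Gaussian outcome's mean to mu + eps*theta, and each round refits the population squared-loss minimizer on the induced distribution.

```python
def repeated_risk_minimization(mu, eps, theta0=0.0, steps=50):
    """Toy performative prediction: deploying theta shifts the outcome
    distribution to N(mu + eps*theta, 1); each round refits the squared-loss
    minimizer on the distribution induced by the previous model."""
    theta = theta0
    for _ in range(steps):
        # argmin_t E_{z ~ N(mu + eps*theta, 1)} (z - t)^2 is the mean itself
        theta = mu + eps * theta
    return theta
```

For |eps| < 1 the iterates contract to the performatively stable point mu / (1 - eps), matching the convergence behavior the framework formalizes; stability is generally distinct from performative optimality.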
Stochastic Optimization for Performative Prediction | Celestine Mendler-Dünner, Juan Perdomo, Tijana Zrnic, Moritz Hardt | 2020 | SummaryMendler-Dünner et al. (2020) look at stochastic optimization for performative prediction and prove convergence rates for greedily deploying models after each stochastic update (which may cause distribution shift affecting convergence to a stability point) or lazily deploying the model after several updates. |
Bibtex@article{mendler2020stochastic, |
Strategic Classification is Causal Modeling in Disguise | John Miller, Smitha Milli, Moritz Hardt | 2020 | SummaryMiller et al. (2020) argue that strategic classification involves causal modeling and that designing incentives for improvement requires solving a non-trivial causal inference problem. The authors draw a distinction between gaming and improvement and provide a causal framework for strategic adaptation. |
Bibtex@inproceedings{miller2020strategic, |
||
Alternative Microfoundations for Strategic Classification | Meena Jagadeesan, Celestine Mendler-Dünner, Moritz Hardt | 2021 | SummaryJagadeesan et al. (2021) show that standard microfoundations in strategic classification, which typically use individual-level behavior to deduce aggregate-level responses, can lead to degenerate aggregate behavior: discontinuities in the aggregate response, stable points ceasing to exist, and maximization of social burden. The authors introduce a noisy response model, inspired by performative prediction, that mitigates these limitations for binary classification. |
Bibtex@inproceedings{jagadeesan2021alternative, |