Special Seminar in CMS and HSS
Even in the age of big data and machine learning, human knowledge and preferences still play a large role in decision-making. For some tasks, such as predicting complex events like recessions or global conflicts, human input remains a crucial component, either on its own or as a complement to algorithms and statistical models. In other cases, a decision maker must aggregate human preferences, for example to choose a popular option over an unpopular one. However, while often useful, eliciting data from humans poses significant challenges. First, humans are strategic and may misrepresent their private information if doing so benefits them. Second, when decisions affect humans, we often want outcomes to be fair, not systematically favoring one individual or group over another.
In this talk, I discuss two settings that exemplify these considerations. First, I consider the participatory budgeting problem, in which a shared budget must be divided among competing public projects. Building on classic literature in economics, I present a class of truthful mechanisms and exhibit a tradeoff between fairness and economic efficiency within this class. Second, I examine the classic online learning problem of learning with expert advice in a setting where experts are strategic and act to maximize their influence on the learner. I present algorithms that incentivize truthful reporting from experts while achieving optimal regret bounds.
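For readers unfamiliar with the first setting above, the sketch below illustrates one truthful aggregation rule of the kind this literature studies (a "moving phantom" style rule built from coordinate-wise medians with phantom votes); it is an illustrative assumption about the model, not necessarily the class of mechanisms presented in the talk. The assumed setting: each agent reports an ideal division of a unit budget over the projects, and the rule outputs a single division by tuning a parameter t until the per-project medians sum to the full budget. Rules of this form are known to be truthful for agents with L1 (taxicab) disutilities.

```python
"""Hedged sketch of a moving-phantom budget aggregation rule.

Assumptions: n agents each report a point in the simplex over m projects;
the output must also lie in the simplex. This is an illustration, not the
talk's mechanism.
"""
import numpy as np

def phantom(k: int, t: float) -> float:
    # A simple valid phantom system: n+1 nondecreasing curves from 0 to 1,
    # ordered in k. Other phantom systems give other rules in this family.
    return min(1.0, (k + 1) * t)

def shares_at(reports: np.ndarray, t: float) -> np.ndarray:
    """Per-project share at parameter t: median of the n reports and n+1 phantoms."""
    n, m = reports.shape
    phantoms = np.array([phantom(k, t) for k in range(n + 1)])
    out = np.empty(m)
    for j in range(m):
        out[j] = np.median(np.concatenate([reports[:, j], phantoms]))  # 2n+1 values
    return out

def aggregate_budget(reports: np.ndarray, iters: int = 60) -> np.ndarray:
    """Binary-search t so that the project shares sum to the full budget (1.0)."""
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if shares_at(reports, mid).sum() < 1.0:
            lo = mid
        else:
            hi = mid
    return shares_at(reports, (lo + hi) / 2)

# Example: 3 voters, 3 projects; each row is a reported ideal division of the budget.
reports = np.array([[1.0, 0.0, 0.0],
                    [0.5, 0.5, 0.0],
                    [0.0, 0.5, 0.5]])
print(aggregate_budget(reports))  # a division summing (approximately) to 1
```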
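For the second setting above, the following sketch shows the standard non-strategic baseline, the multiplicative-weights (Hedge) algorithm for learning with expert advice, which attains regret O(sqrt(T log n)) against the best expert in hindsight when experts report honestly. The incentive-aligned algorithms for strategic experts presented in the talk are not reproduced here.

```python
"""Hedged sketch: the classic Hedge / multiplicative-weights baseline."""
import numpy as np

def hedge(expert_losses: np.ndarray, eta: float) -> float:
    """Run Hedge on a T x n matrix of per-round expert losses in [0, 1].

    Returns the learner's total expected loss; weights are updated
    multiplicatively so that low-loss experts gain influence over time.
    """
    T, n = expert_losses.shape
    weights = np.ones(n)
    total_loss = 0.0
    for t in range(T):
        probs = weights / weights.sum()             # distribution over experts
        total_loss += probs @ expert_losses[t]      # expected loss this round
        weights *= np.exp(-eta * expert_losses[t])  # penalize lossy experts
    return total_loss

# Example: 1000 rounds, 5 experts with random losses; eta set to sqrt(log(n) / T).
rng = np.random.default_rng(0)
losses = rng.random((1000, 5))
T, n = losses.shape
print(hedge(losses, eta=np.sqrt(np.log(n) / T)))  # learner's total loss
print(losses.sum(axis=0).min())                   # best expert's total loss in hindsight
```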