Working Papers
Below are links to the working papers by current and former Caltech social sciences faculty and their collaborators. Papers can be sorted by number, title, author name, or date by clicking on the corresponding column heading.
Certain working papers with numbers below 1000 are not available online. Paper copies may be requested from the HSS working paper coordinator (workingpapers@hss.caltech.edu), who can also answer any other questions about working papers. Please include the working paper number, author, title, and your full mailing address in your request; copies are sent at no charge.
# | Title | Authors | Date | Length | Paper | Abstract | |
---|---|---|---|---|---|---|---|
1468 | Dynamic Collective Action and the Power of Large Numbers | Battaglini, Marco Palfrey, Thomas R. | 04/30/2024 | 56 | sswp1468.pdf | Collective action is a dynamic process where individuals in a group assess over time the benefits and costs of participating toward the success of a collective goal. Early participation improves the expectation of success and thus stimulates the subsequent participation of other individuals who might otherwise be unwilling to engage. On the other hand, a slow start can depress expectations and lead to failure for the group. Individuals have an incentive to procrastinate, not only in the hope of free riding, but also in order to observe the flow of participation by others, which allows them to better gauge whether their own participation will be useful or simply wasted. How do these phenomena affect the probability of success for a group? As the size of the group increases, will a "power of large numbers" prevail, producing successful outcomes, or will a "curse of large numbers" lead to failure? In this paper, we address these questions by studying a dynamic collective action problem in which n individuals can achieve a collective goal if a share a_n of them takes a costly action (e.g., participate in a protest, join a picket line, or sign an environmental agreement). Individuals have privately known participation costs and decide over time if and when to participate. We characterize the equilibria of this game and show that under general conditions the eventual success of collective action is necessarily probabilistic. The process starts for sure, and hence there is always a positive probability of success; however, the process "gets stuck" with positive probability, in the sense that participation stops short of the goal. Equilibrium outcomes have a simple characterization in large populations: welfare converges to either full efficiency or zero as n → ∞ depending on a precise condition on the rate at which a_n converges to zero. Whether success is achievable or not, delays are always irrelevant: in the limit, success is achieved either instantly or never. | |
1467 | A Note on Cursed Sequential Equilibrium and Sequential Cursed Equilibrium | Fong, Meng-Jhang Lin, Po-Hsuan Palfrey, Thomas R. | 04/11/2023 | 27 | SSWP_1467.pdf | In this short note, we compare the cursed sequential equilibrium (CSE) by Fong et al. (2023) and the sequential cursed equilibrium (SCE) by Cohen and Li (2023). We identify eight main differences between CSE and SCE with respect to the following features: (1) the family of applicable games, (2) the number of free parameters, (3) the belief updating process, (4) the treatment of public histories, (5) effects in games of complete information, (6) violations of subgame perfection and sequential rationality, (7) re-labeling of actions, and (8) effects in one-stage simultaneous-move games. | |
1466 | Organizing for Collective Action: Olson Revisited | Battaglini, Marco Palfrey, Thomas R. | 11/17/2023 | 57 | sswp_1466_revised-nov2023.pdf | We study a standard collective action problem in which successful achievement of a group interest requires costly participation by some fraction of its members. How should we model the internal organization of these groups when there is asymmetric information about the preferences of their members? How effective should we expect it to be as we increase the group's size n? We model the organization as an honest and obedient communication mechanism and obtain three main results: (1) For large n it can be implemented with a very simple mechanism that we call the Volunteer Based Organization. (2) The limit probability of success as n goes to infinity in the optimal honest and obedient mechanism is no better than that of an unorganized group, which is not generally true if obedience is replaced by the usual (weaker) requirement of interim individual rationality. (3) In spite of this asymptotic equivalence, an optimal organization provides substantial gains when the probability of success converges to zero, because it does so at a much slower rate than for an unorganized group. Because of this, significant probabilities of success are achievable with simple honest and obedient organizations even in very large groups. | |
1465 | Cursed Sequential Equilibrium | Fong, Meng-Jhang Lin, Po-Hsuan Palfrey, Thomas R. | 04/11/2023 | 61 | sswp1465_updated_041123.pdf | This paper develops a framework to extend the strategic form analysis of cursed equilibrium (CE) developed by Eyster and Rabin (2005) to multi-stage games. The approach uses behavioral strategies rather than normal form mixed strategies, and imposes sequential rationality. We define cursed sequential equilibrium (CSE) and compare it to sequential equilibrium and standard normal-form CE. We provide a general characterization of CSE and establish its properties. We apply CSE to five applications in economics and political science. These applications illustrate a wide range of differences between CSE and Bayesian Nash equilibrium or CE: in signaling games; games with preplay communication; reputation building; sequential voting; and the dirty faces game where higher order beliefs play a key role. A common theme in several of these applications is showing how and why CSE implies systematically different behavior than Bayesian Nash equilibrium in dynamic games of incomplete information with private values, while CE coincides with Bayesian Nash equilibrium for such games. | |
1463 | Competition Shocks, Rival Reactions and Return Comovement | Roll, Richard de Bodt, Eric Eckbo, B. Espen | 03/21/2022 | 51 | sswp1463.pdf | We estimate changes in within-industry stock-return comovement caused by the reaction of rival firms to significant tariff cuts. In theory, rivals react by either increasing or decreasing product differentiation. Increased differentiation lowers cash flow correlation and return comovement, while reduced differentiation increases comovement. Large-sample tests show that tariff cuts in manufacturing industries increase comovement, and more so for within-industry 'followers' than 'leaders'. The notion that this comovement-increase reflects efficiency-enhancing rival reactions is also supported by evidence of increased cost-efficiency measures. One channel for this efficiency-increase is M&A activity among industry followers. | |
1462 | The (Un)intended Consequences of M&A Regulatory Enforcements | Roll, Richard de Bodt, Eric Cousin, Jean-Gabriel Officer, Micah | 03/21/2022 | 56 | sswp1462.pdf | Economic and policy uncertainty affect merger and acquisition (M&A) activity. In this paper, we use Department of Justice (DOJ) and Federal Trade Commission (FTC) interventions in the M&A market to investigate whether uncertainty around regulatory enforcements also matters. Our results support this conjecture. Using the Hoberg and Phillips (2010) similarity scores to identify product market competitors, we confirm a clear and significant deterrence effect of DOJ/FTC regulatory enforcements on future M&A transaction attempts, a result robust to many alternative specifications and confirmed in additional tests. This deterrence effect is (at least partly) driven by the length of the regulatory process, a factor that exacerbates enforcement uncertainty. Our results identify an (un)intended channel through which M&A regulation hampers efficient resource allocation. | |
1461 | Mixed Logit and Pure Characteristics Models | Lu, Jay Saito, Kota | 02/01/2022 | 38 | sswp1461.pdf | Mixed logit or random coefficients logit models are used extensively in empirical work, while pure characteristics models feature in much of the theoretical work. We provide a theoretical analysis of the relationship between the two classes of models. First, we show an approximation theorem that precisely characterizes the extent and limitations of mixed logit approximations of pure characteristics models. Second, we present two conditions that highlight novel behavioral differences. The first is a substitutability condition that is satisfied by many pure characteristics models (including models of horizontal differentiation such as Hotelling) but is violated by almost all mixed logit models. The second is a continuity condition that is satisfied by all pure characteristics models but is violated by all mixed logit models. Both conditions pertain to choice patterns when product characteristics change or new products are introduced and illustrate the limitations of using mixed logit models for counterfactual analysis. | |
1460 | Cognitive Hierarchies in Extensive Form Games | Lin, Po-Hsuan Palfrey, Thomas R. | 08/10/2022 | 47 | sswp1460_updated_081022.pdf | The cognitive hierarchy (CH) approach posits that players in a game are heterogeneous with respect to levels of strategic sophistication. A level-k player believes all other players in the game have lower levels of sophistication distributed from 0 to k − 1, and these beliefs correspond to the truncated distribution of a "true" distribution of levels. We extend the CH framework to extensive form games, where these initial beliefs over lower levels are updated as the history of play in the game unfolds, providing information to players about other players' levels of sophistication. For a class of centipede games with a linearly increasing pie, we fully characterize the dynamic CH solution and show that it leads to the game terminating earlier than in the static CH solution for the centipede game in reduced normal form. | |
1459 | Bilateral Conflict: An Experimental Study of Strategic Effectiveness and Equilibrium | Holt, Charles A. Palfrey, Thomas R. | 01/17/2022 | 36 | sswp1459.pdf | Bilateral conflict involves an attacker with several alternative attack methods and a defender who can take various actions to better respond to different types of attack. These situations have wide applicability to political, legal, and economic disputes, but are particularly challenging to study empirically because the payoffs are unknown. Moreover, each party has an incentive to behave unpredictably, so theoretical predictions are stochastic. This paper reports results of an experiment where the details of the environment are tightly controlled. The results sharply contradict the Nash equilibrium predictions about how the two parties' choice frequencies change in response to the relative effectiveness of alternative attack strategies. In contrast, nonparametric quantal response equilibrium predictions match the observed treatment effects. Estimation of the experimentally controlled payoff parameters across treatments accurately recovers the true values of those parameters with the logit quantal response equilibrium model but not with the Nash equilibrium model. | |
1458 | Voter Attention and Electoral Accountability | Devdariani, Saba Hirsch, Alexander V. | 08/01/2021 | 76 | sswp1458_2.pdf | What sorts of policy decisions do voters pay attention to, and why? And how does rational voter attention affect the behavior of politicians in office? We extend the Canes-Wrone, Herron and Shotts (2001) model of electoral agency to allow the voter to rationally choose when to "pay attention" to an incumbent's policy choice by expending costly effort to learn its consequences. In our model, the voter is sometimes motivated to pay costly attention to improve selection, but that attention influences accountability as a by-product. When attention is moderately costly, the voter generally pays more of it after the ex-ante unpopular policy than after the ex-ante popular one. Rational attention may improve accountability by decreasing the incumbent's rewards to choosing the ex-ante popular policy, increasing her rewards to choosing the ex-ante unpopular one, or both. However, it may also severely harm accountability, either by inducing a strong incumbent to "play it safe" by choosing a policy that avoids attention, or by inducing a weak incumbent to "gamble for resurrection" by choosing a policy that draws it. Finally, rational attention can induce or worsen pandering (that is, a bias toward the ex-ante popular policy) but never "fake leadership" (that is, a bias toward the ex-ante unpopular policy). The latter phenomenon thus requires an asymmetry in voter learning that derives from a process separate from costly information acquisition by the voter, and that is also sufficient to overcome its countervailing effects. | |
1456 | The Efficient Frontier: A Note on the Curious Difference Between Variance and Standard Deviation | Roll, Richard | 07/01/2021 | 9 | sswp_1456.pdf | The Markowitz Frontier of optimal portfolios is valid in both mean/variance space and in mean/standard deviation space. But there are some curious differences because lines in one space become curves in the other. This note explores and explains the curiosity. | |
1455 | The Politics of Asymmetric Extremism | Hirsch, Alexander V. | 07/26/2019 | 52 | sswp_1455_s8ztgrf.pdf | In real-world policymaking, concrete and viable policy alternatives do not just appear out of thin air; they must be developed by someone with both the expertise and willingness to do so. We develop a model that explores the implications of strategic policy development by ideologically motivated actors, who craft competing high quality policies for a decisionmaker. We find that the process is characterized by unequal participation, inefficiently unpredictable and extreme outcomes, wasted effort, and an apparent bias toward extreme policies. When one proposer becomes asymmetrically extreme or capable, they develop more extreme proposals, while their competitor moderates their proposals, increasingly declines to participate, and is harmed. Despite this, the decisionmaker benefits due to the increasing quality investments of the more extreme or capable proposer. The model thus provides a rationale for why an ideologically extreme faction may come to dominate policymaking that is rooted in the nature of productive policy competition. | |
1454 | Lobbyists as Gatekeepers: Theory and Evidence | Hirsch, Alexander V. Kang, Karam Montagnes, B. Pablo You, Hye Young | 02/12/2021 | 55 | sswp_1454_x1Q4ZbI.pdf | Lobbyists are omnipresent in the policymaking process, but the value that they bring to both clients and politicians remains poorly understood. We develop a model in which a lobbyist's value derives from his ability to selectively screen which clients he brings to a politician, thereby earning the politician's trust and preferential treatment for his clients. Lobbyists face a dilemma, as their ability to screen also increases their value to special interests, and the prices they can charge. A lobbyist's profit motive undermines his ability to solve this dilemma, but an interest in policy outcomes—due either to a political ideology or a personal connection—enhances it, which paradoxically increases his profits. Using a unique dataset from reports mandated by the Foreign Agents Registration Act, we find that lobbyists become more selective when they are more ideologically aligned with politicians, consistent with our prediction. | |
1453 | A Theory of Policy Sabotage | Hirsch, Alexander V. Kastellec, Jonathan P. | 06/10/2019 | 50 | sswp_1453.pdf | We develop a theory of policy making that examines when policy sabotage—that is, the deliberate choice by an opposition party to interfere with the implementation of a policy—can be an effective electoral strategy, even if rational voters can see that it is happening. In our model, a potential saboteur chooses whether to sabotage an incumbent's policy by blocking its successful implementation. Following this decision, a voter decides whether to retain the incumbent, who is of unknown quality, or to select a challenger. We find that the incentives for sabotage are broadly shaped by the underlying popularity of the incumbent—it is most attractive when an incumbent is somewhat unpopular. If so, sabotage may decrease the probability the incumbent is reelected, even though sabotage is observable to the voter. We illustrate our theory with the implementation of the Affordable Care Act since its passage in 2010. | |
1452 | Polarization and Campaign Spending in Elections | Hirsch, Alexander V. | 01/22/2019 | 27 | sswp_1452.pdf | We develop a Downsian model of electoral competition in which candidates with both policy and office-motivations use a mixture of platforms and campaign spending to gain the median voter's support. The unique equilibrium involves randomizing over both platforms and spending, and exhibits the following properties: (i) ex-ante uncertainty in platforms, spending, and the election winner, (ii) platform divergence, (iii) inefficiency in spending and outcomes, (iv) polarization, and (v) voter extremism. We also show that platform polarization and campaign spending move in tandem, since spending is used by candidates to gain support for extreme platforms. Factors that contribute to both phenomena include the candidates' desire for extreme platforms, and their ability to translate campaign spending into support for them. The latter insight generates new hypotheses about the potential causes of both rising polarization and spending. | |
1451 | Veto Players and Policy Development | Hirsch, Alexander V. Shotts, Kenneth W. | 11/12/2023 | 83 | sswp1451_revised-nov2023.pdf | We analyze the effects of veto players when the set of available policies isn't exogenously fixed, but rather is determined by policy developers who work to craft new high-quality proposals. If veto players are moderate then there is active competition between policy developers on both sides of the political spectrum. However, more extreme veto players induce asymmetric activity, as one side disengages from policy development. With highly extreme veto players, policy development ceases and gridlock results. We also analyze effects on centrists' utility. Moderate veto players dampen productive policy development and extreme ones eliminate it entirely, either of which is bad for centrists. But some effects are surprisingly positive. In particular, somewhat extreme veto players can induce policy developers who dislike the status quo to craft moderate, high-quality proposals. Our model accounts for changing patterns of policymaking in the U.S. Senate and predicts that if polarization continues, centrists will become increasingly inclined to eliminate the filibuster. | |
1450 | Price Formation in Multiple, Simultaneous Continuous Double Auctions, with Implications for Asset Pricing | Ledyard, John O. Asparouhova, Elena Bossaerts, Peter | 07/27/2020 | 65 | sswp_1450.pdf | We propose a Marshallian model for price and allocation adjustments in parallel continuous double auctions. Agents quote prices that they expect will maximize local utility improvements. The process generates Pareto optimal allocations in the limit. In experiments designed to induce CAPM equilibrium, price and allocation dynamics are in line with the model's predictions. Walrasian aggregate excess demands do not provide additional predictive power. We identify, theoretically and empirically, a portfolio that is closer to mean-variance optimal throughout equilibration. This portfolio can serve as a benchmark for asset returns even if markets are not in equilibrium, unlike the market portfolio, which only works at equilibrium. The theory also has implications for momentum, volume and liquidity. | |
1449 | Repeated Choice: A Theory of Stochastic Intertemporal Preferences | Saito, Kota Lu, Jay | 01/01/2020 | 80 | sswp1449.pdf | We provide a repeated-choice foundation for stochastic choice. We obtain necessary and sufficient conditions under which an agent's observed stochastic choice can be represented as a limit frequency of optimal choices over time. In our model, the agent repeatedly chooses today's consumption and tomorrow's continuation menu, aware that future preferences will evolve according to a subjective ergodic utility process. Using our model, we demonstrate how not taking into account the intertemporal structure of the problem may lead an analyst to biased estimates of risk preferences. Estimation of preferences can be performed by the analyst without explicitly modeling continuation problems (i.e. stochastic choice is independent of continuation menus) if and only if the utility process takes on the standard additive and separable form. Applications include dynamic discrete choice models when agents have non-trivial intertemporal preferences, such as Epstein-Zin preferences. We provide a numerical example which shows the significance of biases caused by ignoring the agent's Epstein-Zin preferences. | |
1448 | Using Theory, Markets and Experimental Methods to Improve a Complex Administrative Decision Process: School Transportation for Disadvantaged Students | Plott, Charles R. Stoneham, Gary Lee, Hsing Yang Maron, Travis | 02/25/2021 | 44 | sswp1448_FEB2021.pdf | The paper studies structural inefficiencies of administrative decisions demonstrated in an example from the growing government service sector. The provision of school transportation to disadvantaged children touches social concerns and regulations related to classical conditions of limited competition. The decision process is partitioned into key economic functions related to the challenges faced by the government when dealing with environments in which theory predicts market failures. Theories drawn from public choice theory and theories of auctions suggest policy changes that were experimentally tested in the challenging environment. Field data demonstrate the new processes produced improved allocations. Subsequent laboratory tests demonstrate the success can be attributed to the principles used in the theory and demonstrate a robust link connecting efficient behaviors observed in laboratory experiments and behaviors found in field events. | |
1447 | General Equilibrium Methodology Applied to the Design, Implementation and Performance Evaluation of Large, Multi-Market and Multi-Unit Policy Constrained Auctions | Plott, Charles R. Cason, Timothy N. Gillen, Benjamin J. Lee, Hsing Yang Maron, Travis | 01/13/2022 | 62 | sswp1447_revJAN2022.pdf | The paper reports on the methodology, design and outcome of a large auction with multiple, interdependent markets constructed from principles of general equilibrium as opposed to game theoretic auction theory. It distributed 18,788 entitlements to operate electronic gaming machines in 176 interconnected markets to 363 potential buyers representing gaming establishments subject to multiple policy constraints on the allocation. The multi-round auction, conducted in one day, produced over $600M in revenue. All policy constraints were satisfied. Revealed dynamics of interim allocations and new statistical tests provide evidence of multiple market convergence hypothesized by classical theories of general equilibrium. Results support the use of computer supported, "tâtonnement-like" market adjustments as reliable empirical processes and not as purely theoretical constructs. | |
1446 | Changing Expected Returns Can Induce Spurious Serial Correlation | Pukthuanthong, Kuntara Roll, Richard Subrahmanyam, Avanidhar | 09/21/2021 | 59 | sswp1446_revised.pdf | Changing expected returns can induce spurious autocorrelation in returns. We show why this happens with simple examples and investigate its prevalence in actual equity data. In a key contribution, we use ex ante expected return estimates from options prices, factor models, and analysts' price targets to investigate our premise. Absolute shifts in expected returns are indeed strongly and positively related to autocorrelations in the cross-section of individual stocks, as predicted by our analysis. Well-studied risk factors show no evidence of spurious components. We also show how our analysis implies spurious cross-autocorrelation and find supporting evidence for this phenomenon as well. | |
1445 | An Experimental Study of Vote Trading | Casella, Alessandra Palfrey, Thomas R. | 12/18/2018 | 59 | sswp1445.pdf | Vote trading is believed to be ubiquitous in committees and legislatures, and yet we know very little of its properties. We return to this old question with a laboratory experiment. We posit that pairs of voters exchange votes whenever doing so is mutually advantageous. This generates trading dynamics that always converge to stable vote allocations: allocations where no further improving trades exist. The data show that stability has predictive power: vote allocations in the lab converge towards stable allocations, and individual vote holdings at the end of trading are in line with theoretical predictions. However, there is only weak support for the dynamic trading process itself. | |
1444 | Trading Votes for Votes. A Dynamic Theory | Casella, Alessandra Palfrey, Thomas R. | 12/10/2018 | 38 | sswp1444.pdf | We develop a framework to study the dynamics of vote trading over multiple binary issues. We prove that there always exists a stable allocation of votes that is reachable in a finite number of trades, for any number of voters and issues, any separable preference profile, and any restrictions on the coalitions that may form. If at every step all blocking trades are chosen with positive probability, convergence to a stable allocation occurs in finite time with probability one. If coalitions are unrestricted, the outcome of vote trading must be Pareto optimal, but unless there are three voters or two issues, it need not correspond to the Condorcet winner. If trading is farsighted, a non-empty set of stable vote allocations reachable from a starting vote allocation need not exist, and if it does exist it need not include the Condorcet winner, even in the case of two issues. | |
1443 | Mechanism Design with Limited Commitment | Doval, Laura Skreta, Vasiliki | 11/13/2018 | 80 | sswp1443.pdf | We develop a tool akin to the revelation principle for mechanism design with limited commitment. We identify a canonical class of mechanisms rich enough to replicate the payoffs of any equilibrium in a mechanism-selection game between an uninformed designer and a privately informed agent. A cornerstone of our methodology is the idea that a mechanism should encode not only the rules that determine the allocation, but also the information the designer obtains from the interaction with the agent. Therefore, how much the designer learns, which is the key tension in design with limited commitment, becomes an explicit part of the design. We show how this insight can be used to transform the designer's problem into a constrained optimization one: To the usual truthtelling and participation constraints, one must add the designer's sequential rationality constraint. | |
1442 | Fake News, Information Herds, Cascades and Economic Knowledge | Butkovich, Lazarina Butkovich, Nina Plott, Charles R. Seo, Han | 03/21/2019 | 24 | sswp1442.pdf | The paper addresses the issue of “fake news” through a well-known and widely studied experiment that illustrates a possible science behind the phenomenon. Public news is viewed as an aggregation of decentralized pieces of valuable information about complex events. Such systems rely on accumulated investment in trust in news sources. In the case of fake news, news source reliability is not known. The experiment demonstrates how fake news can destroy both the investment in trust and also the benefits that news provides. | |
1441 | Approximate Expected Utility Rationalization | Echenique, Federico Imai, Taisuke Saito, Kota | 06/22/2018 | 52 | sswp1441.pdf | We propose a new measure of deviations from expected utility, given data on economic choices under risk and uncertainty. In a revealed preference setup, and given a positive number e, we provide a characterization of the datasets whose deviation (in beliefs, utility, or perceived prices) is within e of expected utility theory. The number e can then be used as a distance to the theory. We apply our methodology to three recent large-scale experiments. Many subjects in those experiments are consistent with utility maximization, but not expected utility maximization. The correlation of our measure with demographics is also interesting, and provides new and intuitive findings on expected utility. | |
1440 | A Characterization of "Phelpsian" Statistical Discrimination | Chambers, Christopher P. Echenique, Federico | 06/13/2018 | 14 | sswp1440.pdf | We establish that statistical discrimination is possible if and only if it is impossible to uniquely identify the signal structure observed by an employer from a realized empirical distribution of skills. The impossibility of statistical discrimination is shown to be equivalent to the existence of a fair, skill-dependent remuneration for every set of tasks and every signal-dependent optimal assignment of workers to tasks. Finally, we connect this literature to Bayesian persuasion, establishing that if the possibility of discrimination is absent, then the optimal signalling problem results in a linear payoff function (as well as a kind of converse). | |
1439 | Statistical Discrimination and Affirmative Action in the Lab | Dianat, Ahrash Echenique, Federico Yariv, Leeat | 04/30/2018 | 44 | sswp1439.pdf | We present results from laboratory experiments studying the impacts of affirmative-action policies. We induce statistical discrimination in simple labor-market interactions between firms and workers. We then introduce affirmative-action policies that vary in the size and duration of a subsidy firms receive for hiring discriminated-against workers. These different affirmative-action policies have nearly the same effect and practically eliminate discriminatory hiring practices. However, once lifted, few positive effects remain and discrimination reverts to its initial levels. One exception is lengthy affirmative-action policies, which exhibit somewhat longer-lived effects. Stickiness of beliefs, which we elicit, helps explain the evolution of these outcomes. | |
1438 | Design of Tradable Permit Programs under Imprecise Measurement | Ledyard, John O. | 03/20/2018 | 27 | sswp1438.pdf | If the measurement of production in a commons is accurate and precise, it is possible to design a tradable permit program such that, under a fairly general set of conditions, the market equilibrium is efficient for the given aggregate permit level and everyone is better off after the permit program than before. Often, however, implementation of a tradable permit system is postponed or never undertaken because an inexpensive technology able to provide accurate and precise measurements does not exist. However, there often is an inexpensive technology which is accurate but not precise. I study the possibilities for the design of a tradable permit system when the measurement technology involves an imprecise, indirect measure of production that contains statistical uncertainty. To the best of my knowledge, this has not been studied before. | |
1437 | Learning to Alternate | Arifovic, Jasmina Ledyard, John O. | 02/23/2018 | 34 | sswp1437_-_revised.pdf | The Individual Evolutionary Learning (IEL) model explains human subjects' behavior in a wide range of repeated games which have unique Nash equilibria. Using a variation of `better response' strategies, IEL agents quickly learn to play Nash equilibrium strategies and their dynamic behavior is like that of human subjects. In this paper we study whether IEL can also explain behavior in games with gains from coordination. We focus on the simplest such game: the 2-person repeated Battle of Sexes game. In laboratory experiments, two patterns of behavior often emerge: players either converge rapidly to one of the stage game Nash equilibria and stay there or learn to coordinate their actions and alternate between the two Nash equilibria every other round. We show that IEL explains this behavior if the human subjects are truly in the dark and do not know or believe they know their opponent's payoffs. To explain the behavior when agents are not in the dark, we need to modify the basic IEL model and allow some agents to begin with a good idea about how to play. We show that if the proportion of inspired agents with good ideas is chosen judiciously, the behavior of IEL agents looks remarkably similar to that of human subjects in laboratory experiments. | |
1436 | Mimicking Portfolios | Roll, Richard Srivastava, Akshay | 01/08/2018 | 29 | sswp1436.pdf | Mimicking portfolios have many applications in the practice of finance. Here, we present a new method for constructing them. We illustrate its application by creating portfolios that mimic individual NYSE stocks. On the construction date, a mimicking portfolio exactly matches its target stock's exposures (betas) to a set of ETFs, which serve as proxies for global factors, and the portfolio has much lower idiosyncratic volatility than its target. Mimicking portfolios require only modest subsequent rebalancing in response to instabilities in target assets and assets used for portfolio construction. Although composed here exclusively of equities, mimicking portfolios show potential for mimicking non-equity assets as well. | |
1435 | Tick Size, Price Grids and Market Performance: Stable Matches as a Model of Market Dynamics and Equilibrium | Plott, Charles R. Roll, Richard Seo, Han Zhao, Hao | 01/08/2018 | 58 | sswp1435_r_X1x1ZD3.pdf | This paper reports experiments motivated by ongoing controversies regarding tick size in markets. The minimum tick size in a market dictates discrete values at which bids and asks can be tendered by market participants. All transaction prices must occur at these discrete values, which are established by the rules of each exchange. The simplicity of experiments helps to distinguish among competing models of complex real-world securities markets. We observe patterns predicted by a matching (cooperative game) model. Because a price grid damages the equilibrium of the competitive model, the matching model provides predictions where the competitive model cannot; their predictions are the same when a competitive equilibrium exists. The experiment examines stable allocations, average prices, timing of order flow and price dynamics. Larger tick size invites more speculation, which in turn increases liquidity. However, increased speculation leads to inefficient trades that otherwise would not have occurred. | |
1434 | Fairness and efficiency for probabilistic allocations with endowments | Echenique, Federico Zhang, Jun Miralles, Antonio | 12/19/2017 | 38 | sswp1434_-_revised.pdf | We propose to use endowments as a policy instrument in market design. Endowments give agents the right to enjoy certain resources. For example, in school choice, one can ensure that low-income families have a shot at high-quality schools by endowing them with a chance of admission. We introduce two new criteria in resource allocation problems with endowments. The first adapts the notion of justified envy to a model with endowments, while the second is based on market equilibrium. Using either criterion, we show that fairness (understood as the absence of justified envy, or as a market outcome) can be obtained together with efficiency and individual rationality. Revised January 2018 | |
1433 | Axiomatizations of the Mixed Logit Model | Saito, Kota | 09/15/2017 | 43 pages | sswp1433.pdf | A mixed logit function, also known as a random-coefficients logit function, is an integral of logit functions. The mixed logit model is one of the most widely used models in the analysis of discrete choice. Observed behavior is described by a random choice function, which associates with each choice set a probability measure over the choice set. I obtain several necessary and sufficient conditions under which a random choice function becomes a mixed logit function. One condition is easy to interpret and another condition is easy to test. | |
1432 | A Testbed Experiment of a (Smart) Market Based, Student Transportation Policy: Non Convexities, Coordination, Non Existence | Lee, Hsing Yang Maron, Travis Plott, Charles R. Seo, Han | 01/11/2018 | 41 | sswp1432.pdf | The paper develops and studies a decentralized mechanism for pricing and allocation challenges typically met with administrative processes. Traditional forms of markets are not used due to conditions associated with market failure, such as complex coordination problems, thin markets, and non-convexities, including zero marginal cost due to lumpy transportation capacities. The mechanism rests on an assignment process that is guided by a computational process, which enforces rules and channels information feedback to participants. Special testbed experimental methods show that the mechanism produces high levels of efficiency when confronted with individual behaviors that are consistent with traditional models of strategic behavior. | |
1431 | A Protocol for Factor Identification | Pukthuanthong, Kuntara Roll, Richard Subrahmanyam, Avanidhar | 07/28/2017 | 51 | sswp1431.pdf | We propose a protocol for identifying genuine risk factors. The underlying premise is that a risk factor must be related to the covariance matrix of returns, must be priced in the cross-section of returns, and should yield a reward-to-risk ratio that is reasonable enough to be consistent with risk pricing. A market factor, a profitability factor, and traded versions of macroeconomic factors pass our protocol, but many characteristic-based factors do not. Several of the underlying characteristics, however, do command material premiums in the cross-section. | |
1430 | Generalized Portfolio Performance Measures: Optimal Overweighting of Fees Relative to Sample Returns | Levy, Moshe Roll, Richard | 07/27/2017 | 24 | sswp1430.pdf | Performance measures such as alpha and the Sharpe ratio are typically based on sample returns net of fees. This implies the same weighting to sample returns and to fees. However, sample return parameters are noisy estimates of true parameters, while fees are known with certainty. Thus, intuition suggests that fees should be given more weight than sample returns. We formalize this intuition, and derive the optimal overweighting of fees. We show that the resulting generalized performance measures are better predictors of future net performance than the standard performance measures, and they better explain future fund flows. | |
1429 | Manipulation | Plott, Charles R. | 06/19/2017 | 23 pages | sswp1429.pdf | Systematic opportunities for manipulation emerge as a by-product of the structure of all group decision processes. Theory suggests that no process is immune. The study of manipulation provides principles and insights about how parts of complex decision systems work together and how changes in one part can have broad impact. Thus, manipulation strategies are derived from many features of voting processes. Often they are the product of changes in the decision environment, including rules, procedures and influence on others, in order to achieve a specific purpose. The issues and variables go beyond an individual's own voting strategy within a specific setting and whether or not preferences are truthfully revealed – an issue often studied. Hopefully, the insights can lead to avenues for improving decision processes and thus produce a better understanding of process vulnerabilities. | |
1428 | Preference Identification | Chambers, Christopher P. Echenique, Federico Lambert, Nicolas S | 04/11/2017 | 37 | sswp1428.pdf | An experimenter seeks to learn a subject's preference relation. The experimenter produces pairs of alternatives. For each pair, the subject is asked to choose. We argue that, in general, large but finite data do not give close approximations of the subject's preference, even when countably infinitely many data points are enough to infer the preference perfectly. We then provide sufficient conditions on the set of alternatives, preferences, and sequences of pairs so that the observation of finitely many choices allows the experimenter to learn the subject's preference with arbitrary precision. The sufficient conditions are strong, but encompass many situations of interest. And while preferences are approximated, we show that it is harder to identify utility functions. We illustrate our results with several examples, including expected utility, and preferences in the Anscombe-Aumann model. | |
1427 | Candidate entry and political polarization: An experimental study | Großer, Jens Palfrey, Thomas R. | 12/15/2016 | 51 | sswp1427.pdf | We report the results of a laboratory experiment based on a citizen‐candidate model with private information about ideal points. Inefficient political polarization is observed in all treatments; that is, citizens with extreme ideal points enter as candidates more often than moderate citizens. Second, less entry occurs, with even greater polarization, when voters have directional information about candidates' ideal points, using ideological party labels. Nonetheless, this directional information is welfare enhancing because the inefficiency from greater polarization is outweighed by lower total entry costs and better voter information. Third, entry rates are decreasing in group size and the entry cost. These findings are all implied by properties of the unique symmetric Bayesian equilibrium of the entry game. Quantitatively, we observe too little (too much) entry when the theoretical entry rates are high (low). This general pattern of observed biases in entry rates is implied by logit quantal response equilibrium. | |
1426 | Empirical Evidence of Overbidding in M&A Contests | de Bodt, Eric Cousin, Jean-Gabriel Roll, Richard | 11/15/2016 | 61 | Empirical_Evidence_of_Overbidding_in_MA.pdf | Surprisingly few papers have attempted to develop a direct empirical test for overbidding in M&A contests. We develop such a test grounded on a necessary condition for profit maximizing bidding behavior. The test is not subject to endogeneity concerns. Our results strongly support the existence of overbidding. We provide evidence that overbidding is related to conflicts of interest, but also some indirect evidence that it arises from failing to fully account for the winner's curse. | |
1425 | Nowhere to Run, Nowhere to Hide: Asset Diversification in a Flat World | Cotter, John Gabriel, Stuart Roll, Richard | 10/20/2016 | 58 | SSWP_1425.pdf | We present new international diversification indexes across equity, sovereign debt, and real estate. The indexes reveal a marked and near ubiquitous decline in diversification potential across asset classes and markets for the post-2000 period. Analysis of panel data suggests that the decline is related to higher levels of market credit risk and volatility as well as to technology and communications innovation as proxied by internet diffusion. The decline in diversification opportunity is associated with sharply higher levels of investment risk. | |
1424 | ACE: A Combinatorial Market Mechanism | Fine, Leslie Goeree, Jacob K. Ishikida, Takashi Ledyard, John O. | 11/03/2016 | 48 | SSWP_1424.pdf | In 1990 the South Coast Air Quality Management District created a tradable emissions program to more efficiently manage the extremely bad emissions in the Los Angeles basin. The program created 136 different assets that an environmental engineer could use to cover emissions in place of installing expensive abatement equipment. Standard markets could not deal with this complexity and little trading occurred. A new combinatorial market was created in response and operated successfully for many years. That market design, called ACE (approximate competitive equilibrium), is described in detail and its successful performance in practice is analyzed. | |
1423 | The Effects of Income Mobility and Tax Persistence on Income Redistribution and Inequality | Agranov, Marina Palfrey, Thomas R. | 10/17/2016 | 48 | SSWP_1423.pdf | We explore the effect of income mobility and the persistence of redistributive tax policy on the level of redistribution in democratic societies. An infinite-horizon theoretical model is developed, and the properties of the equilibrium tax rate and the degree of after-tax inequality are characterized. Mobility and stickiness of tax policy are both negatively related to the equilibrium tax rate. However, neither is sufficient by itself. Social mobility has no effect on equilibrium taxes if tax policy is voted on in every period, and tax persistence has no effect in the absence of social mobility. The two forces are complementary. Tax persistence leads to higher levels of post-tax inequality, for any amount of mobility. The effect of mobility on inequality is less clear-cut and depends on the degree of tax persistence. A laboratory experiment is conducted to test the main comparative static predictions of the theory, and the results are generally supportive. | |
1422 | Communication Among Voters Benefits the Majority Party | Palfrey, Thomas R. Pogorelskiy, Kirill | 10/17/2016 | 56 | SSWP_1422.pdf | How does communication among voters affect turnout? And who benefits from it? In a laboratory experiment in which subjects, divided into two competing parties, choose between costly voting and abstaining, we study three pre-play communication treatments: No Communication, a control; Public Communication, where all voters exchange public messages through computer chat; and Party Communication, where messages are also exchanged but only within one's own party. Our main finding is that communication always benefits the majority party by increasing its expected turnout margin and, hence, its expected margin of victory and probability of winning the election. Party communication increases overall turnout, while public communication increases turnout with a high voting cost but decreases it with a low voting cost. With communication, we find essentially no support for the standard Nash equilibrium predictions and limited consistency with correlated equilibrium. | |
1421 | Subset Optimization for Asset Allocation | Gillen, Benjamin J. | 06/02/2016 | 72 | SSWP_1421.pdf | Subset optimization provides a new algorithm for asset allocation that's particularly useful in settings with many securities and short return histories. Rather than optimizing weights for all N securities jointly, subset optimization constructs Complete Subset Portfolios (CSPs) that naïvely aggregate many "Subset Portfolios," each optimizing weights over a subset of only N̂ randomly selected securities. With known means and variances, the complete subset efficient frontier for different subset sizes characterizes CSPs' utility loss due to satisficing, which generally decreases with N̂. In finite samples, the bound on CSPs' expected out-of-sample performance loss due to sampling error generally increases with N̂. By balancing this tradeoff, CSPs' expected out-of-sample performance dominates both the 1/N rule and sample-based optimization. Simulation and backtest experiments illustrate CSPs' robust performance against existing asset allocation strategies. | |
1420 | A Characterization of Combinatorial Demand | Chambers, Christopher P. Echenique, Federico | 05/31/2016 | 9 | sswp1420.pdf | We prove that combinatorial demand functions are characterized by two properties: continuity and the law of demand. | |
1419 | Improved Methods for Detecting Acquirer Skills | de Bodt, Eric Cousin, Jean-Gabriel Roll, Richard | 05/17/2016 | 42 | SSWP1419.pdf | Large merger and acquisition (M&A) samples feature the pervasive presence of repetitive acquirers. They offer an attractive empirical context for revealing the presence of acquirer skills (persistent superior performance). But panel data M&A are quite heterogeneous; just a few acquirers undertake many M&As. Does this feature affect statistical inference? We show that it does: statistical support for the presence of acquirer skills appears compromised. We introduce a new resampling method to detect acquirer skills with attractive statistical properties (size and power); it confirms the presence of acquirer skills, but only for a marginal fraction of the acquirer population. This result is robust to endogenous attrition and varying time periods between successive transactions. Claims according to which acquirer skills are a first-order factor explaining acquirer cross-sectional cumulated abnormal returns appear overstated. | |
1418 | On Multiple Discount Rates | Chambers, Christopher P. Echenique, Federico | 05/09/2016 | 33 | sswp1418.pdf | We propose a theory of intertemporal choice that is robust to specific assumptions on the discount rate. One class of models requires that one utility stream be chosen over another if and only if its discounted value is higher for all discount factors in a set. Another model focuses on an average discount factor. Yet another model is pessimistic, and evaluates a flow by the lowest available discounted value. | |
1417 | Full Stock Payment Marginalization in M&A Transactions | de Bodt, Eric Cousin, Jean-Gabriel Roll, Richard | 04/05/2016 | 55 | SSWP_1417.pdf | The number of merger and acquisition (M&A) transactions paid fully in stock in the U.S. market declined sharply after 2001, when pooling and goodwill amortization were abolished by the Financial Accounting Standards Board. Did this accounting rule change really have such far reaching implications? Using a differences-in-differences test and Canada as a counterfactual, this study reveals that it did. We also report several other results confirming the role of pooling abolishment, including (i) that the decrease in full stock payment relates to CEO incentives and (ii) that previously documented determinants of the M&A mode of payment cannot explain the post pooling abolishment pattern. These results are also robust to controls for various factors, such as the Internet bubble, the exclusion of cross-border deals, the presence of Canadian cross-listed firms, the use of a constant sample of acquirers across the pooling and post pooling abolishment periods, the use of Europe as an alternative counterfactual, and controls for the SEC Rule 10b-18 share repurchase safe harbor amendments of 2003. |
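The Complete Subset Portfolio (CSP) procedure described in working paper 1421 above can be sketched in a few lines. The following is a minimal illustration under simplifying assumptions that are not taken from the paper: a diagonal covariance matrix (so each subset's minimum-variance weights reduce to inverse-variance weighting) and equal-weighted averaging across subsets; the function names and toy data are purely illustrative.

```python
import random

def subset_min_variance_weights(subset, variances):
    # Minimum-variance weights within one subset, assuming a diagonal
    # covariance matrix: each weight is proportional to 1 / variance.
    inv = {i: 1.0 / variances[i] for i in subset}
    total = sum(inv.values())
    return {i: v / total for i, v in inv.items()}

def complete_subset_portfolio(variances, subset_size, n_subsets, seed=0):
    """Naively average many randomly drawn subset portfolios into one CSP."""
    rng = random.Random(seed)
    n = len(variances)
    agg = [0.0] * n
    for _ in range(n_subsets):
        subset = rng.sample(range(n), subset_size)  # N-hat random securities
        for i, wi in subset_min_variance_weights(subset, variances).items():
            agg[i] += wi
    # Each subset portfolio sums to one, so the average also sums to one.
    return [a / n_subsets for a in agg]

# Example: 5 assets, asset 0 has the lowest variance.
variances = [0.01, 0.04, 0.04, 0.09, 0.09]
w = complete_subset_portfolio(variances, subset_size=2, n_subsets=2000)
print(round(sum(w), 6))  # 1.0
```

The subset size plays the role of N̂ in the abstract: larger subsets reduce the satisficing loss but, in finite samples, increase exposure to estimation error in the inputs.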