Publications

Ridge Distributions and Information Design in Simultaneous All-Pay Auction Contests (with Zhonghong Kuang and Jie Zheng). Games and Economic Behavior (2024).

Abstract

Two informed contestants compete in a contest, and the organizer ex ante designs a public, anonymous disclosure policy to maximize the contestants' total effort. We fully characterize ridge distributions, under which the organizer achieves the first-best outcome in equilibrium: the allocation is efficient, and the entire surplus goes to the organizer. When the prior is more positively correlated than ridge distributions, the first-best outcome is achievable by a signal that generates only ridge distributions as posteriors.

Working Papers

Algorithmic Collusion of Pricing and Advertising on E-commerce Platforms (Major Revision at Marketing Science)

with Ron Berman.

Finalist, 2025 ASA Marketing Section Doctoral Dissertation Research Award

Abstract

When online sellers use AI learning algorithms to compete automatically on e-commerce platforms, there is concern that they will learn to coordinate on higher-than-competitive prices. However, this concern has primarily been raised in the context of single-dimensional price competition. We investigate whether this prediction holds when sellers make pricing and advertising decisions jointly, i.e., two-dimensional decisions. We analyze competition with multi-agent reinforcement learning and use a large-scale dataset from Amazon.com to provide empirical evidence. We show that when consumers have high search costs, learning algorithms can coordinate on prices below competitive levels, facilitating a win-win-win for consumers, sellers, and platforms. This occurs because the algorithms learn to coordinate on lower advertising bids, which lowers advertising costs, leading to lower prices and larger demand on the platform. We also show that our results generalize to any learning algorithm that explores over prices and advertising bids. Consistent with our predictions, an empirical analysis shows that price levels exhibit a negative interaction between estimated consumer search costs and an algorithm usage index. We analyze the platform's strategic response and find that reserve price adjustments will not increase platform profits, but commission adjustments will, while maintaining the beneficial outcomes for both sellers and consumers.

The Impact of LLMs on Online News Consumption and Production (Under Review)

with Ron Berman.

Abstract

Large language models (LLMs) change how consumers acquire information online; their bots also crawl news publishers' websites for training data and to answer consumer queries; and they provide tools that can lower the cost of content creation. These changes lead to predictions of adverse impacts on news publishers in the form of lower consumer demand, reduced demand for newsroom employees, and an increase in news "slop." Consequently, some publishers have responded strategically by blocking LLM access to their websites using the robots.txt standard. Using high-frequency, granular data, we document four effects related to the predicted shifts in news publishing following the introduction of generative AI (GenAI). First, we find a moderate decline in traffic to news publishers after August 2024. Second, using a difference-in-differences approach, we find that blocking GenAI bots is associated with a reduction in total website traffic for large publishers compared to not blocking. Third, on the hiring side, we do not yet find evidence that LLMs are replacing editorial or content-production jobs; the share of new editorial and content-production job listings increases over time. Fourth, regarding content production, we find no evidence that large publishers increased text volume; instead, they significantly increased rich content and use more advertising and targeting technologies. Together, these findings provide early evidence of some unforeseen impacts of the introduction of LLMs on news production and consumption.

Choosing the Winner: When and How to Correct for Selection Bias in Randomized Experiments (Under Review)

with Ron Berman and Walter W. Zhang.

Abstract

Decision-makers often select the best-performing treatment in a randomized experiment for deployment. This practice leads to the winner's curse: the estimated performance of the selected treatment is biased upward because selection may favor treatments that had higher outcomes by chance. We analyze this problem by distinguishing three objectives: (1) the global winner's curse, the bias relative to the truly best treatment; (2) the selected winner's curse, the bias relative to the deployed treatment's true mean; and (3) regret, the loss from selecting the wrong treatment compared to the truly best one. We derive an identity linking these three quantities and show that methods optimal for one objective can underperform on the others. We evaluate proposed solutions including sample splitting, cross-fitting, bootstrap bias correction, adaptive resampling, conditional inference, and a novel empirical likelihood approach. In scenarios that reflect realistic experimental decision-making, our results provide practical guidance: cross-fitting excels when treatments have similar effects, bootstrap correction offers good MSE properties for moderate differences between treatments, and the simple plug-in estimator dominates when treatment effects are large or in the asymptotic regime. Our proposed adaptive empirical likelihood method provides valid confidence intervals without the sensitivity to a tuning parameter that resampling methods exhibit.
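The winner's curse described in the abstract can be illustrated with a small simulation (a hypothetical sketch for intuition, not code or parameter values from the paper): even when all treatments have identical true means, the arm with the highest estimated mean looks better than it truly is.

```python
import numpy as np

# Hypothetical sketch of the winner's curse: simulate many experiments in
# which the best-performing arm is selected for deployment. All numbers
# (arm count, sample size, outcome noise) are illustrative assumptions.
rng = np.random.default_rng(0)

true_means = np.array([0.10, 0.10, 0.10])  # three truly identical treatments
n = 500                                    # observations per treatment arm
reps = 2000                                # number of simulated experiments

selected_estimates = []
for _ in range(reps):
    # each arm's estimated mean, noisy around its true mean
    estimates = rng.normal(true_means, 1.0 / np.sqrt(n))
    # "deploy" the arm with the highest estimate
    selected_estimates.append(estimates.max())

# positive bias: the selected arm's estimate systematically exceeds the truth
bias = np.mean(selected_estimates) - true_means.max()
print(f"selected-arm bias: {bias:.4f}")
```

Because selection favors arms whose noise happened to be positive, the average selected estimate exceeds the true maximum, which is the upward bias the paper's correction methods target.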

Strategic Design of Recommendation Algorithms

with Ron Berman and Yi Zhu.

Abstract

We analyze recommendation algorithms that firms can engineer to strategically provide information to consumers about products whose match to their tastes is uncertain. A monopolist that cannot alter prices can design its recommendation algorithm to oversell, i.e., recommend products even when they are not a perfect fit, rather than recommending only perfectly matching products. However, when prices are endogenous or competition is intense, firms reduce overselling and instead fully reveal the product's match (i.e., maximize recall and precision). As competition strengthens, the algorithms shift to demarketing, i.e., under-recommending highly fitting products, in order to soften price competition. When a platform designs a recommendation algorithm for products sold by third-party sellers, we find that demarketing may be a more prevalent strategy for the platform. Additionally, we find that platforms bound by fairness constraints may earn lower profits than if they let sellers compete, while discriminatory designs do not necessarily result in preferential outcomes for a specific seller.

Work in Progress

The Effectiveness of Digital Advertising Across Multiple Platforms

with Kenneth C. Wilbur.

Reinforcement Learning and Optimal Credit Allocation

with Vitaly M. Bord, Agnes Kovacs, and Patrick Moran.

A Transformer-Based Framework for Consumer Search Modeling

with Zhenling Jiang.