Can startups be designed with algorithms?

Working Paper (coming soon)

(with Rembrand Koning)

Prediction tasks are fundamental to startup team design. To create a high-performance team, founders, incubators, and venture capitalists must use available information about potential team members to predict how a new team might perform. Presently, team design in such contexts relies on limited information and simple heuristics. We introduce and evaluate a machine-learning-based framework for data-driven team design. Our empirical evaluation leverages data from the "Innovate Delhi" bootcamp and field experiment. The structure of the experiment, which collected detailed data on 112 individuals and the performance of the 117 teams in which they worked over three weeks, provides an opportunity to evaluate the proposed framework. We find that out-of-sample predictions can explain between 10% and 15% of the variation in the actual performance of teams. The best-performing models use "social" information about team members, specifically their social networks and evaluations by peers on previous teams.
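
The following is a minimal, illustrative sketch of the kind of out-of-sample evaluation described above: predict team performance from team-level features and score held-out predictions by R-squared. The feature construction, the random-forest model, and the synthetic data are assumptions for illustration, not the paper's actual specification.

```python
# Illustrative sketch only: model choice, features, and data are assumptions,
# not the paper's actual framework.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical team-level features, e.g., averaged skill scores plus the
# "social" information discussed above: members' network centrality and
# peer evaluations from previous teams.
n_teams = 117
X = rng.normal(size=(n_teams, 6))  # stand-in feature matrix
# Stand-in performance outcome, partly driven by two of the features.
y = 0.5 * X[:, 3] + 0.4 * X[:, 4] + rng.normal(scale=1.5, size=n_teams)

model = RandomForestRegressor(n_estimators=500, random_state=0)

# Out-of-sample R^2 via cross-validation: the share of variation in team
# performance that held-out predictions explain.
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"mean out-of-sample R^2: {scores.mean():.2f}")
```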

When does A/B testing impact startup performance?

Working Paper (coming soon)

(with Rembrand Koning and Aaron Chatterji)

A/B testing, or controlled digital experimentation, has been advocated by practitioners as a tool for learning about consumer demand and increasing firm performance. The academic literature on experimentation, however, offers mixed guidance on this proposition. While some scholars have found that A/B tests can increase key performance indicators, other work casts doubt on whether A/B tests are broad enough to affect firm-level outcomes. Even more pessimistically, some have argued that A/B testing might push startups toward chasing minor but testable improvements, leading to incremental product innovation. We present the first evidence of the firm-level impact of A/B testing by creating a unique data set that tracks startup growth metrics, technology stacks, design changes, financing information, and product launches for thousands of firms. Using both a rich set of fixed effects and an instrument leveraging price changes, we find evidence consistent with the argument that A/B testing drives increases in firm growth, as measured by website page views. While A/B testing leads websites to build more incremental and vanilla page designs, we find little evidence that A/B testing leads to more incremental product innovation. Instead, A/B testing appears to lead to more extreme outcomes, increased funding, and more product launches.
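
For intuition, here is a minimal two-stage least squares sketch of the identification strategy: instrument A/B-testing adoption with a price-change shock, then regress growth (log page views) on predicted adoption. The variable names, the synthetic data, and the by-hand 2SLS are illustrative assumptions, not the paper's estimator.

```python
# Illustrative 2SLS sketch only: variables and data are assumptions,
# not the paper's actual specification.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "price_change": rng.binomial(1, 0.3, n),  # stand-in price-change instrument
    "firm_fe": rng.normal(size=n),            # stand-in for firm fixed effects
})
# Adoption of A/B testing is partly driven by the instrument...
df["ab_testing"] = (0.8 * df.price_change + rng.normal(size=n) > 0.5).astype(float)
# ...and growth (log page views) is partly driven by adoption.
df["log_views"] = 0.4 * df.ab_testing + df.firm_fe + rng.normal(size=n)

# First stage: predict adoption from the instrument and controls.
X1 = sm.add_constant(df[["price_change", "firm_fe"]])
df["ab_hat"] = sm.OLS(df["ab_testing"], X1).fit().fittedvalues

# Second stage: regress growth on predicted adoption. (Note: by-hand 2SLS
# standard errors need correction; a dedicated IV estimator handles this.)
X2 = sm.add_constant(df[["ab_hat", "firm_fe"]])
print(sm.OLS(df["log_views"], X2).fit().params["ab_hat"])
```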