Forthcoming in Strategic Management Journal
(with Rembrand Koning)
We conduct a field experiment at an entrepreneurship bootcamp to investigate whether interaction with proximate peers shapes a nascent startup team’s performance. We find that teams whose members lack prior ties to others at the bootcamp experience peer effects that influence the quality of their product prototypes. A one-standard-deviation increase in the performance of proximate teams is associated with a two-thirds-standard-deviation improvement for a focal team. In contrast, we find that teams whose members have many prior ties interact less frequently with proximate peers, and thus their performance is unaffected by nearby teams. Our findings highlight how prior social connections, which are often a source of knowledge and influence, can limit new interactions and thus the ability of organizations to leverage peer effects to improve the performance of their members.
Strategic Management Journal, 40(3), 331-356
(with Aaron Chatterji, Solene Delecourt and Rembrand Koning)
Why do some entrepreneurs thrive while others fail? We explore whether the advice entrepreneurs receive about managing their employees influences their startup’s performance. We conducted a randomized field experiment in India with 100 high-growth technology firms whose founders received in-person advice from other entrepreneurs who varied in their managerial style. We find that, two years after our intervention, entrepreneurs who received advice from peers with a formal approach to managing people—instituting regular meetings, setting goals consistently, and providing frequent feedback to employees—grew 28% larger and were 10 percentage points less likely to fail than those who received advice from peers with an informal approach to managing people. Entrepreneurs with MBAs or accelerator experience did not respond to this intervention, suggesting that formal training can limit the spread of peer advice.
Working Paper (coming soon)
(with Rembrand Koning)
Prediction tasks are fundamental to startup team design. To create a high-performing team, founders, incubators, and venture capitalists must use available information about potential team members to predict how a new team might perform. Presently, team design in such contexts relies on limited information and simple heuristics. We introduce and evaluate a machine-learning-based framework for data-driven team design. Our empirical evaluation leverages data from the ‘Innovate Delhi’ bootcamp and field experiment. The structure of the experiment, which collected detailed data on 112 individuals and the performance of 117 teams in which they worked over three weeks, provides an opportunity to evaluate the proposed framework. We find that out-of-sample predictions explain between 10% and 15% of the variation in the actual performance of teams. The best-performing models use ‘social’ information about team members, specifically their social networks and evaluations by peers on previous teams.
Working Paper (coming soon)
(with Rembrand Koning and Aaron Chatterji)
A/B testing, controlled digital experimentation, has been advocated by practitioners as a tool to learn about consumer demand and increase firm performance. The academic literature on experimentation, however, offers mixed guidance on this proposition. While some scholars have found that A/B tests can increase key performance indicators, other work casts doubt on whether A/B tests are broad enough to affect firm-level outcomes. Even more pessimistically, some have argued that A/B testing might push startups to chase minor but testable improvements, leading to incremental product innovation. We present the first evidence of the firm-level impact of A/B testing by creating a unique data set tracking startup growth metrics, technology stacks, design changes, financing information, and product launches for thousands of firms. Using both a rich set of fixed effects and an instrument leveraging price changes, we find evidence consistent with the argument that A/B testing drives increases in firm growth, as measured by website page views. While A/B testing leads websites to build incremental and vanilla page designs, we find little evidence that it leads to more incremental product innovation. Instead, A/B testing appears to lead to more extreme outcomes, increased funding, and more product launches.
Working Paper (November 2018)
(with Anuj Kumar)
We analyze whether widespread online access to school-quality information affected economic and social segregation in America. We leverage the staged roll-out of GreatSchools.org school ratings across America from 2006 to 2015 to answer this question. Across a range of outcomes and specifications, we find that the mass availability of school ratings has accelerated divergence in housing values, income distributions, and education levels, as well as in the racial and ethnic composition of communities. Affluent and more educated families were better positioned to leverage this new information to capture educational opportunities in communities with the best schools. An unintended consequence of better information was less, rather than more, equity in education.
(with Rembrand Koning)
Why do some people generate better ideas than others? We conducted a field experiment at a startup bootcamp to evaluate the impact of informal conversations on the quality of product ideas generated by participants. Specifically, we examine how the personality of an innovator (their openness to experience, capturing creativity) and the personalities of their randomly assigned conversational peers (their extraversion, measuring willingness to share information) affect the innovator’s ideas. We find that open innovators who spoke with extroverted peers generated significantly better ideas than others at the bootcamp. However, closed individuals produced mediocre ideas regardless of with whom they spoke, suggesting limited benefits of conversations for these people. More surprisingly, open individuals, who are believed to be inherently creative, produced worse ideas after they spoke with introverted peers, suggesting that individual creativity depends on external information. Our study demonstrates the importance of considering the traits of both innovators and their conversational peers in predicting who will generate the best ideas.
(with Sampsa Samila and Alexander Oettl)
We explore whether helpful behavior makes collaborative networks more resilient to decay. Using a novel research design, we study whether research collaborations among 11,000 pairs of research immunologists persist after the unexpected loss of a third collaborator. We find that dyads whose departed third collaborator was helpful—as indicated by acknowledgments in journal articles—continue to collaborate after that collaborator’s death. In contrast, dyads who lost a non-helpful third collaborator experienced a 5–12 percentage point decline in their probability of repeat collaboration. The effect of third-party helpfulness was particularly strong when the departed collaborator was high status and when the treated dyad did not have a prior history of helpful behavior. Our results speak to the central role that helpfulness plays in shaping the collaborative relationships that underpin science and innovation.