Sequential testing of product designs: Implications for learning

Publication type:
Article
Authors:
Erat, Sanjiv; Kavadias, Stylianos
Affiliations:
University of California System; University of California San Diego; University System of Georgia; Georgia Institute of Technology
Journal:
MANAGEMENT SCIENCE
ISSN/ISBN:
0025-1909
DOI:
10.1287/mnsc.1070.0784
Publication date:
2008
Pages:
956-968
Keywords:
sequential testing; design space; complexity; contingency analysis
Abstract:
Past research in new product development (NPD) has conceptualized prototyping as a design-build-test-analyze cycle to emphasize the importance of the analysis of test results in guiding the decisions made during the experimentation process. New product designs often involve complex architectures and incorporate numerous components, which makes the ex ante assessment of their performance difficult. Still, design teams often learn from test outcomes during iterative test cycles, enabling them to infer valuable information about the performances of (as yet) untested designs. We conceptualize the extent of useful learning from analysis of a test outcome as depending on two key structural characteristics of the design space: whether the designs are close to each other (i.e., the designs are similar on an attribute level) and whether the design attributes exhibit nontrivial interactions (i.e., the performance function is complex). This study explicitly considers the design space structure and the resulting correlations among design performances, and examines their implications for learning. We derive the optimal dynamic testing policy and analyze its qualitative properties. Our results suggest optimal continuation only when the previous test outcomes lie between two thresholds. Outcomes below the lower threshold indicate an overall low-performing design space, and consequently continued testing is suboptimal. Test outcomes above the upper threshold, on the other hand, merit termination because they signal to the design team that the likelihood of obtaining a design with a still higher performance (given the experimentation cost) is low.
We find that accounting for the design space structure splits the experimentation process into two phases: an initial exploration phase, in which the design team focuses on obtaining information about the design space, and a subsequent exploitation phase, in which the design team, given its understanding of the design space, focuses on obtaining a good-enough configuration. Our analysis also provides useful contingency-based guidelines for managerial action as information is revealed through the testing cycle. Finally, we extend the optimal policy to account for design spaces that contain distinct design subclasses.
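The two-threshold continuation rule described in the abstract can be sketched as a simple stopping loop. This is only an illustrative simplification: the function name, the fixed threshold values, and the sampled performance distribution are placeholder assumptions, whereas in the paper's model the optimal thresholds are derived dynamically and depend on the test history and the experimentation cost.

```python
import random


def two_threshold_policy(draw, lower, upper, max_tests):
    """Illustrative sequential testing rule (not the paper's derived policy).

    Continue testing only while each observed outcome lies strictly between
    `lower` and `upper`:
      - an outcome <= lower signals a low-performing design space (abandon);
      - an outcome >= upper is good enough to stop and keep the design.
    `draw` is a zero-argument callable returning one design's performance.
    Returns (last outcome, number of tests run, decision).
    """
    outcome = None
    for n in range(1, max_tests + 1):
        outcome = draw()
        if outcome <= lower:
            return outcome, n, "abandon"   # whole design space looks poor
        if outcome >= upper:
            return outcome, n, "accept"    # further search unlikely to pay off
    return outcome, max_tests, "budget exhausted"


# Demo with an assumed Gaussian performance distribution and ad hoc thresholds.
rng = random.Random(0)
result = two_threshold_policy(lambda: rng.gauss(0.5, 0.2),
                              lower=0.2, upper=0.8, max_tests=10)
```

Under this simplification, the exploration/exploitation split the authors describe would correspond to the thresholds tightening as early outcomes reveal the overall quality of the design space; the constant thresholds here omit that dependence.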