Dear Editor,
We read the recent study by Rabinowitz and coworkers with great interest. The authors compared symptomatic and cognitive outcomes in patients with mild traumatic brain injury (mTBI) with those in patients with orthopedic injuries and in healthy controls. They also studied predictors of symptomatic and cognitive sequelae in adolescents and young adults with mTBI. The authors should be applauded for their well-executed study and for the novel finding that the rate of elevated post-concussive symptoms was markedly greater in patients with mTBI than in the control groups. The study identified three significant predictors of symptomatic sequelae and one of cognitive sequelae among mTBI patients. We would like to offer some comments and recommendations, from a methodological perspective, for future prediction studies in mTBI.
A rule of thumb in prediction research is that no more than one candidate predictor should be tested for every 10 outcome events, to prevent statistical overfitting. Overfitting refers to the phenomenon whereby a model performs well in the study sample but not in other patient populations, limiting the generalizability of the results. Rabinowitz and coworkers1 included 66 patients, of whom 34 and 22 had symptomatic and cognitive sequelae, respectively ("events"). They subsequently considered 14 candidate predictors for each outcome, yielding ratios of one candidate predictor per 2.4 and 1.6 events, respectively. Such a modeling strategy should hence be considered exploratory. If reliable prediction models are to be developed, a more limited set of predictors should be considered, selected on the basis of clinical knowledge or previous studies. A related concern is that the authors used a standard backward selection procedure with a p value of 0.05. Such data-driven selection of predictors leads to unstable selections in small data sets. An internal validation procedure such as bootstrap validation can provide important insight into model instability and into the optimism of the apparent predictive performance. Such a validation was not performed, but would be advisable as a first assessment of the generalizability of the findings. In line with the authors, we recommend external validation, irrespective of the specific modeling strategy, before a prediction model is implemented in clinical practice. Therefore, in addition to exploring new predictors, future studies should also aim to validate the findings of Rabinowitz and coworkers and of other previous prediction studies in mTBI.
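Purely as an illustration of the points above, and not a reconstruction of the authors' analysis, the following sketch computes the events-per-variable ratio from the figures cited in this letter and demonstrates how bootstrap validation estimates the optimism of a logistic model fitted with many candidate predictors in a small sample. The data here are synthetic noise; the sample size and predictor count are taken from the letter, but everything else is hypothetical.

```python
# Hedged illustration: EPV arithmetic and bootstrap optimism estimation.
# Synthetic noise data; not the study's data or the authors' method.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Events-per-variable (EPV), using the letter's figures:
events, predictors = 34, 14
epv = events / predictors  # about 2.4, far below the rule-of-thumb EPV of 10

rng = np.random.default_rng(0)
n, p = 66, 14                           # sample size, candidate predictors
X = rng.normal(size=(n, p))
y = (rng.random(n) < 0.5).astype(int)   # outcome unrelated to X by design

model = LogisticRegression(max_iter=1000).fit(X, y)
apparent_auc = roc_auc_score(y, model.predict_proba(X)[:, 1])

# Bootstrap optimism: refit on resamples, score each refit both on its
# own bootstrap sample and on the original data; the average gap
# estimates how much apparent performance overstates true performance.
optimism = []
for _ in range(200):
    idx = rng.integers(0, n, n)
    if len(np.unique(y[idx])) < 2:
        continue  # skip degenerate resamples with only one outcome class
    m = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    boot_auc = roc_auc_score(y[idx], m.predict_proba(X[idx])[:, 1])
    test_auc = roc_auc_score(y, m.predict_proba(X)[:, 1])
    optimism.append(boot_auc - test_auc)

corrected_auc = apparent_auc - np.mean(optimism)
print(f"EPV: {epv:.1f}")
print(f"Apparent AUC: {apparent_auc:.2f}, optimism-corrected: {corrected_auc:.2f}")
```

Because the outcome is pure noise, any apparent discrimination is overfitting, and the bootstrap-corrected AUC falls well below the apparent AUC, which is the instability this letter warns about at an EPV of 2.4.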