Model-based decision support systems (DSSs) improve performance in many contexts that are data-rich, uncertain, and require repetitive decisions. But such DSSs are often not designed to help users understand and internalize the underlying factors driving DSS recommendations. Users then feel uncertain about DSS recommendations, which may lead them to avoid using the system. We argue that a DSS must be designed to induce an alignment of a decision maker’s mental model with the decision model embedded in the DSS. Such an alignment requires effort from the decision maker and guidance from the DSS. We experimentally evaluate two DSS design characteristics that facilitate such alignment: (i) feedback on the upside potential for performance improvement and (ii) feedback on corrective actions to improve decisions. We show that, in tandem, these two types of DSS feedback induce decision makers to align their mental models with the decision model, a process we call deep learning, whereas individually these two types of feedback have little effect on deep learning. We also show that deep learning, in turn, improves user evaluations of the DSS. We discuss how our findings can potentially lead to DSS design improvements and better returns on DSS investments.

Additional Metadata
Keywords DSS design, decision support systems, evaluations, feedback, learning, mental models
Persistent URL dx.doi.org/10.1287/isre.1080.0198, hdl.handle.net/1765/15059
Citation
Kayande, U., De Bruyn, A., Lilien, G. L., Rangaswamy, A., & Van Bruggen, G. H. (2009). How Incorporating Feedback Mechanisms in a DSS Affects DSS Evaluations. Information Systems Research, 20(4), 527–546. doi:10.1287/isre.1080.0198