This talk focuses on the role of expectations in designing explanations from Artificial Intelligence/Machine Learning (AI/ML)-based systems. Explanations are crucial for system understanding, which, in turn, is highly relevant to supporting trust and trust calibration in such systems. I discuss the connections between expectations, explanations, and trust in human-AI/ML system interaction.
I present two recent studies [1, 2] investigating whether expectations modulate what people want to see from an AI/ML system, and when, while carrying out analytical tasks.
We found that, overall, user expectations are a significant variable in determining the most suitable content of explanations (including whether an explanation is needed at all). More research is needed to investigate the relationship between expectations and explanations, and how they support trust calibration.