
Conversational Explainable Artificial Intelligence: Towards More Human-Centered Explanations

Ingénierie et Architecture

Alexandra Kovacs

Due to rapid advances in Artificial Intelligence (AI), many AI-based systems have grown more powerful while increasingly functioning as "black boxes". In response, the field of Explainable AI (xAI) emerged to turn these boxes into "glass" ones by making their decisions transparent through human-understandable explanations. Paradoxically, however, while the goal is to foster trust, acceptance, and understanding, xAI solutions are often designed for people with experience in AI (developers and researchers) rather than the actual target end-users (lay users), a phenomenon sometimes described as "inmates running the asylum".

As explanations are the pillar of xAI, the way we model and present them, as well as the information they contain, matters. Beyond that, we must acknowledge that there is no "one-size-fits-all" solution: people differ as individuals, in their domain knowledge and backgrounds, and in their needs and goals, and these differences shape what constitutes a "good explanation" for them. Accordingly, explanations should be adapted to the needs of the explainee.

Furthermore, since explanations imply an exchange of information between two parties, Conversational Interfaces (CIs) are a natural medium for delivering them, especially with today's Large Language Models. The social sciences, often neglected in xAI, highlight how central dialogue is to explanation and motivate drawing inspiration from how humans explain things to one another. At the same time, user studies remain scarce relative to the field's emerging user-centered goals.

Therefore, this thesis aims to contribute to conversational xAI by integrating explainability into CIs in a human-centered manner, with a focus on explanation modeling.