Christiane Attig, Tim Schrills, Markus Gödker, Christiane Wiebel, Patricia Wollstadt, Thomas Franke,
"Enhancing Trust in Smart Charging Agents—The Role of Traceability for Human-Agent-Cooperation",
HCI International 2023 – Late Breaking Papers (HCII 2023), vol. 14059, 2023.
Abstract
Introduction and background: The EU aims for climate neutrality by 2050, which necessitates
a comprehensive transformation of the transport sector, including a 90% reduction in
emissions [3]. Consequently, the demand for electric vehicles (EVs) will rise strongly in the
coming years. It has been argued that this demand will pose a challenge for the stability of the
power grid [6] – particularly if EVs are charged with renewable electricity, which is subject to
strong fluctuations in supply and might not be flexible enough to meet user needs at all times
[1]. Conversely, EVs offer a great potential for increasing grid stability through bidirectional
charging, that is, EVs can store or provide excess energy to the grid as needed [7]. As a
consequence, the complexity of the charging process increases (e.g., in terms of planning,
technical understanding). Thus, the collective benefit of grid stability may come at a cost for
the individual user, who might face a restriction of personal resources (e.g., time, comfort [6]).
Smart charging agents relying on techniques from the field of artificial intelligence (AI) offer
one solution to combine user comfort with optimal utilization of renewable energy resources.
To realize this solution, smart charging agents need to be perceived as cooperative partners
within a joint activity [5] who assist users in achieving not only individual, but also collective
goals. Therefore, it is crucial to maximize users’ perception of advantages from cooperating
with the system. One core variable for enhancing cooperation between users and an AI system
such as a smart charging agent is trust, which can be increased by AI traceability [9].
Objective and significance: The present research aimed at understanding the potential of AI
traceability (i.e., transparency, understandability, and predictability [8]) for enhancing trust in
the context of smart charging in car sharing fleets.
Method: For an online experiment, a basic algorithm was designed to calculate the resource
efficiency of booking an EV from a car-sharing fleet based on simulated data. The data was
based on 10 features (e.g., time of booking start and end, expected network power demand,
likelihood of a peak load). In five consecutive observation blocks, N = 57 participants were
asked to observe 10 cost calculations made by the algorithm (i.e., 50 observations in total).
After each observation block, participants rated their subjective experience with the algorithm
(i.e., trust via the Facets of Systems Trustworthiness scale [4]; traceability with the Subjective
Information Processing Awareness scale [8]). To evaluate participants’ ability to predict the
algorithm’s results, a performance block followed, in which participants were asked to
estimate booking costs based on the disclosed information (20 estimations in total). The
traceability of the algorithm was experimentally manipulated by varying the amount of
disclosed information that formed the basis of the cost calculation (high, medium, or low
information; between-subjects design).
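The cost calculation itself is not specified in the abstract; a minimal sketch of how such a feature-based booking cost could be computed is shown below. All feature names, weights, and the linear form are illustrative assumptions, not the authors' actual algorithm.

```python
# Hypothetical sketch (not the authors' actual algorithm): a linear cost
# model over simulated booking features, illustrating how a booking's
# cost could be derived from disclosed information.

def booking_cost(features: dict, weights: dict, base_cost: float = 5.0) -> float:
    """Combine weighted feature values into a single cost estimate."""
    return base_cost + sum(weights[name] * value for name, value in features.items())

# Three of the ten features named in the abstract; values and weights
# are purely illustrative.
weights = {
    "booking_duration_h": 1.2,       # longer bookings cost more
    "expected_demand_kw": 0.05,      # high expected network demand raises cost
    "peak_load_probability": 4.0,    # likely peak load raises cost
}
features = {
    "booking_duration_h": 3.0,
    "expected_demand_kw": 40.0,
    "peak_load_probability": 0.25,
}
print(round(booking_cost(features, weights), 2))  # 5.0 + 3.6 + 2.0 + 1.0 = 11.6
```

In the experiment, the traceability manipulation would then correspond to how many of these features (and their contributions) are disclosed to participants.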
Results: Using planned contrast analyses, it was shown that trust partially varied with the
amount of disclosed information (higher amount of information related to higher reported
trust). Moreover, traceability was partially higher in the high information group than the
medium and low information groups. Analyses of the three subscales of traceability revealed
that effects were particularly pronounced for understandability and predictability, while no
effect was found for transparency. In addition, participants’ performance in estimating the
booking costs did not vary with the amount of disclosed information.
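A planned contrast such as "high information vs. medium and low information" can be computed as a t statistic from group means and the pooled error variance. The sketch below uses simulated trust ratings (illustrative values, not the study's data) and standard contrast weights (+2, −1, −1).

```python
import math
from statistics import mean

def planned_contrast(groups, weights):
    """t statistic for a planned contrast across independent groups.

    groups: list of lists of scores; weights: contrast coefficients summing to 0.
    Uses the one-way ANOVA mean square error as the pooled error term.
    """
    assert abs(sum(weights)) < 1e-9, "contrast weights must sum to zero"
    means = [mean(g) for g in groups]
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    df_error = sum(len(g) for g in groups) - len(groups)
    mse = ss_within / df_error
    estimate = sum(w * m for w, m in zip(weights, means))
    se = math.sqrt(mse * sum(w ** 2 / len(g) for w, g in zip(weights, groups)))
    return estimate / se

# Simulated trust ratings per information group (illustrative only)
high = [4.1, 4.5, 3.9, 4.2]
medium = [3.5, 3.8, 3.2, 3.6]
low = [3.4, 3.1, 3.7, 3.3]

# High vs. (medium + low): weights +2, -1, -1
t = planned_contrast([high, medium, low], [2, -1, -1])
print(round(t, 2))
```

A second contrast (e.g., medium vs. low with weights 0, +1, −1) would complete an orthogonal set for three groups.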
Discussion: While additional information enhanced subjective experiences of trust,
understandability, and predictability of a smart charging agent for EV car sharing, it did not
improve transparency ratings or participants' estimation of the algorithm's output. This pattern of results
might reflect an explainability pitfall [2]: Users of smart charging agents might trust these
systems more as traceability increases, regardless of how well they understand the system.