Publications (3 of 3)
Thill, S., Riveiro, M., Lagerstedt, E., Lebram, M., Hemeren, P., Habibovic, A. & Klingegård, M. (2018). Driver adherence to recommendations from support systems improves if the systems explain why they are given: A simulator study. Transportation Research Part F: Traffic Psychology and Behaviour, 56, 420-435
2018 (English) In: Transportation Research Part F: Traffic Psychology and Behaviour, ISSN 1369-8478, E-ISSN 1873-5517, Vol. 56, p. 420-435. Article in journal (Refereed) Published
Abstract [en]

This paper presents a large-scale simulator study on driver adherence to recommendations given by driver support systems, specifically eco-driving support and navigation support. 123 participants took part in this study, and drove a vehicle simulator through a pre-defined environment for a duration of approximately 10 min. Depending on the experimental condition, participants were either given no eco-driving recommendations, or a system whose provided support was either basic (recommendations were given in the form of an icon displayed in a manner that simulates a heads-up display) or informative (the system additionally displayed a line of text justifying its recommendations). A navigation system that likewise provided either basic or informative support, depending on the condition, was also provided.

Effects are measured in terms of estimated simulated fuel savings as well as engine braking/coasting behaviour and gear change efficiency. Results indicate improvements in all variables. In particular, participants who had the support of an eco-driving system spent a significantly higher proportion of the time coasting. Participants also changed gears at lower engine RPM when using an eco-driving support system, and significantly more so when the system provided justifications. Overall, the results support the notion that providing reasons why a support system puts forward a certain recommendation improves adherence to it over mere presentation of the recommendation.

Finally, results indicate that participants’ driving style was less eco-friendly if the navigation system provided justifications but the eco-system did not. This may be due to participants considering the two systems as one whole rather than separate entities with individual merits. This has implications for how to design and evaluate a given driver support system since its effectiveness may depend on the performance of other systems in the vehicle.
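
As a purely illustrative sketch (the type names, field names, and schema below are hypothetical and not taken from the paper), the design summarised above combines an eco-driving support factor (absent, basic, or informative) with a navigation support factor (basic or informative), and records outcomes such as coasting proportion and gear-change RPM per drive. One drive per participant might be encoded along these lines:

    from dataclasses import dataclass
    from enum import Enum


    class Support(Enum):
        NONE = "none"                 # no recommendations shown
        BASIC = "basic"               # icon only, shown like a heads-up display
        INFORMATIVE = "informative"   # icon plus a line of text justifying the advice


    @dataclass
    class Drive:
        """One roughly 10-minute simulator drive by a single participant (hypothetical schema)."""
        participant_id: int
        eco_support: Support          # eco-driving support condition
        nav_support: Support          # navigation support condition
        coasting_proportion: float    # share of driving time spent engine braking / coasting
        mean_gearchange_rpm: float    # mean engine speed (RPM) at gear changes


    def mean_coasting(drives: list[Drive], eco: Support) -> float:
        """Average coasting proportion across drives in one eco-support condition."""
        selected = [d.coasting_proportion for d in drives if d.eco_support == eco]
        return sum(selected) / len(selected) if selected else float("nan")

Comparing mean_coasting for the BASIC and INFORMATIVE conditions mirrors the kind of contrast reported in the abstract; the study itself of course relied on proper statistical tests over the recorded simulator data.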

Keywords
Driver behaviour, System awareness, Eco-friendly behaviour, Driver recommendation systems
National Category
Psychology; Human Computer Interaction; Information Systems
Research subject
Interaction Lab (ILAB); Skövde Artificial Intelligence Lab (SAIL); INF301 Data Science; INF302 Autonomous Intelligent Systems
Identifiers
urn:nbn:se:hj:diva-43234 (URN)
10.1016/j.trf.2018.05.009 (DOI)
000437997700037 ()
2-s2.0-85048505654 (Scopus ID)
0;0;miljJAIL (Local ID)
Projects
TIEB
Funder
Swedish Energy Agency
Available from: 2018-06-04 Created: 2019-03-05 Last updated: 2019-08-23 Bibliographically approved
Lagerstedt, E., Riveiro, M. & Thill, S. (2017). Agent Autonomy and Locus of Responsibility for Team Situation Awareness. In: HAI '17: Proceedings of the 5th International Conference on Human Agent Interaction. Paper presented at 5th International Conference on Human Agent Interaction, Bielefeld, October 17-20, 2017 (pp. 261-269). New York: Association for Computing Machinery (ACM)
2017 (English) In: HAI '17: Proceedings of the 5th International Conference on Human Agent Interaction, New York: Association for Computing Machinery (ACM), 2017, p. 261-269. Conference paper, Published paper (Refereed)
Abstract [en]

Rapid technical advancements have dramatically improved the abilities of artificial agents and have thus opened up new ways for humans to cooperate with them, from disembodied agents such as Siri to virtual avatars, robot companions, and autonomous vehicles. It is therefore relevant to study not only how to maintain appropriate cooperation, but also where the responsibility for doing so resides and how it may be affected. While agents and HAI research have previously been organised and categorised into taxonomies, situations with highly responsible artificial agents are rarely covered. Here, we propose a way to categorise agents in terms of such responsibility and agent autonomy, covering the range of cooperation from humans getting help from agents to humans providing help to the agents. The resulting diagram presented in this paper makes it possible to relate different kinds of agents to other taxonomies and typical properties. A particular advantage of this taxonomy is that it highlights under what conditions certain effects known to modulate the relationship between agents (such as the protégé effect or the "we"-feeling) arise.
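
A minimal sketch of such a two-axis categorisation (the axis levels and the example placements below are illustrative assumptions, not the paper's actual diagram):

    from dataclasses import dataclass
    from enum import Enum


    class Autonomy(Enum):
        LOW = 1
        HIGH = 2


    class Responsibility(Enum):
        """Locus of responsibility for maintaining team situation awareness."""
        HUMAN = 1    # the human carries most of the responsibility
        SHARED = 2
        AGENT = 3    # the artificial agent carries most of the responsibility


    @dataclass
    class AgentProfile:
        name: str
        autonomy: Autonomy
        responsibility: Responsibility


    # Illustrative placements only; where a real system falls is an empirical question.
    examples = [
        AgentProfile("voice assistant", Autonomy.LOW, Responsibility.HUMAN),
        AgentProfile("autonomous vehicle", Autonomy.HIGH, Responsibility.AGENT),
    ]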

Place, publisher, year, edition, pages
New York: Association for Computing Machinery (ACM), 2017
Keywords
HAI, Locus of Responsibility, Agent Relationship, Classification of Artificial Agents
National Category
Interaction Technologies
Research subject
Interaction Lab (ILAB); Skövde Artificial Intelligence Lab (SAIL); INF302 Autonomous Intelligent Systems
Identifiers
urn:nbn:se:hj:diva-43241 (URN)
10.1145/3125739.3125768 (DOI)
2-s2.0-85034847392 (Scopus ID)
978-1-4503-5113-3 (ISBN)
0;0;miljJAIL (Local ID)
Conference
5th International Conference on Human Agent Interaction, Bielefeld, October 17-20, 2017
Projects
Dreams4Cars
Funder
EU, Horizon 2020, 731593
Available from: 2017-10-30 Created: 2019-03-05 Last updated: 2019-08-23 Bibliographically approved
Lagerstedt, E., Riveiro, M. & Thill, S. (2015). Interacting with Artificial Agents. In: Sławomir Nowaczyk (Ed.), Thirteenth Scandinavian Conference on Artificial Intelligence. Paper presented at 13th Scandinavian Conference on Artificial Intelligence, SCAI 2015, Halmstad, Sweden, 4 November 2015 through 5 November 2015 (pp. 184-185). IOS Press
2015 (English) In: Thirteenth Scandinavian Conference on Artificial Intelligence / [ed] Sławomir Nowaczyk, IOS Press, 2015, p. 184-185. Conference paper, Oral presentation with published abstract (Refereed)
Place, publisher, year, edition, pages
IOS Press, 2015
Series
Frontiers in Artificial Intelligence and Applications, ISSN 1879-8314 ; 278
Keywords
Human-Machine Interaction, Trust, Cooperation, Locus of Control
National Category
Human Computer Interaction
Research subject
Technology; Interaction Lab (ILAB); Skövde Artificial Intelligence Lab (SAIL)
Identifiers
urn:nbn:se:hj:diva-43257 (URN)
10.3233/978-1-61499-589-0-184 (DOI)
2-s2.0-84963706294 (Scopus ID)
978-1-61499-589-0 (ISBN)
978-1-61499-588-3 (ISBN)
0;0;miljJAIL (Local ID)
Conference
13th Scandinavian Conference on Artificial Intelligence, SCAI 2015, Halmstad, Sweden, 4 November 2015 through 5 November 2015
Projects
TIEB
Available from: 2016-07-14 Created: 2019-03-05 Last updated: 2019-08-23 Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0002-8937-8063
