Would you trust AI to mediate conflict?

YAHYADIN

Oct. 22, 2019

How close is artificial intelligence to being usable for conflict resolution? And if it gets there, which political leaders would be the first to trust it? A new study suggests we may soon be listening to virtual agents for advice.
At the moment, if asked 'do we trust artificial intelligence agents to mediate conflict?', most people would answer 'no' or 'not entirely', depending on their familiarity with the technology and how far they trust machines to process data and arrive at a logical, consensual conclusion.
An inquiry into artificial intelligence and its ability to weigh up conflict data has been undertaken by political scientists at the University of Southern California and the University of Denver. To examine how advanced the technology is and how people might react to it, the researchers developed a simulation requiring a three-person team to work with a virtual agent avatar on screen. The exercise involved a mission designed to ensure failure and elicit conflict.
The purpose of the study was to consider how virtual agents might work as potential mediators to improve team collaboration during conflict mediation. Preliminary work had shown that one-on-one human interactions with a virtual agent therapist can lead to greater amounts of information being revealed. The findings were presented at the 28th IEEE International Conference on Robot and Human Interactive Communication, which took place on October 15, 2019 in New Delhi, India.
The new research was run in a military academy environment. The participants were presented with some 27 different scenarios, not all of which included the virtual agent. The teams were tested to see whether the presence of a virtual agent influenced whether a scenario led to conflict.
The outcome was that the teams did interact with the virtual agent when it was present, especially during the planning stages. However, as the exercise progressed, the participants' engagement with the virtual agent fell away.
Based on this, lead researcher Kerstin Haring surmises: "Our results show that virtual agents and potentially social robots might be a good conflict mediator in all kinds of teams. It will be very interesting to find out the interventions and social responses to ultimately seamlessly integrate virtual agents in human teams to make them perform better."
Further study is planned, including tests of what happens as artificial intelligence progresses and whether those engaged in weighing up conflict scenarios start to trust machines more.