Interview with Bryan Crump on Nights at RNZ

I had the pleasure of talking with Bryan Crump from RNZ about our work on the morality of robot abuse. Having the opportunity to discuss this topic in depth is much better than a short TV appearance. Here is the full interview:

TVNZ reports on our study on The Morality Of Abusing A Robot

Our study was featured on 1 News.

TVNZ’s reporter Lisa Davis interviewed us about our latest study, “The Morality Of Abusing A Robot”. The paper was published under a Creative Commons license in the Paladyn Journal. Merel did an excellent job speaking in the TV interview.

Expressing uncertainty in Human-Robot interaction

PLOS One published our new article on Expressing uncertainty in Human-Robot interaction. This was another successful collaboration with Elena Moltchanova from Maths & Stats. The goal of the study was to explore ways of communicating the uncertainty inherent in human-robot interaction, more specifically in the interaction between a passenger and their autonomous vehicle. This is of particular importance since riding in an autonomous vehicle can result in the loss of life. So how do you tell a passenger that their chance of surviving the trip is almost certain?

Most people struggle to understand probability, which is an issue for Human-Robot Interaction (HRI) researchers who need to communicate risks and uncertainties to the participants in their studies, the media and policy makers. Previous work showed that even the use of numerical values to express probabilities does not guarantee an accurate understanding by laypeople. We therefore investigated whether words, such as “likely” and “almost certainly not”, can be used to communicate probability. We embedded these phrases in the context of the use of autonomous vehicles. The results show that the association of phrases with percentages is not random and that there is a preferred order of phrases. The association is, however, not as consistent as hoped for. Hence, it is advisable to complement the use of words with numerical expressions of uncertainty. This study provides an empirically verified list of probability phrases that HRI researchers can use alongside numerical values.
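To make that recommendation concrete, here is a minimal sketch of what pairing a verbal probability phrase with its numerical counterpart could look like. The phrase-to-percentage mapping below is purely illustrative and not taken from the paper; a real application should use empirically derived values such as those we report.

```python
# Illustrative only: these phrase-to-probability pairs are made-up
# placeholders, not the empirically verified values from the paper.
PHRASE_TO_PROBABILITY = {
    "almost certain": 0.95,
    "likely": 0.70,
    "unlikely": 0.25,
    "almost certainly not": 0.05,
}

def express_uncertainty(phrase: str) -> str:
    """Complement a verbal probability phrase with its numerical value."""
    p = PHRASE_TO_PROBABILITY[phrase]
    return f"It is {phrase} ({p:.0%}) that you will arrive safely."

print(express_uncertainty("almost certain"))
# -> It is almost certain (95%) that you will arrive safely.
```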

Visual Metaphor Gone Wrong

UC uses wrong design for cyber security campaign.

The University of Canterbury is making an effort to raise awareness of the need for strong passwords. To this end, it ran the “Longer Is Stronger” campaign, including a poster that is still being shown on displays across the campus.

The problem is that the visual metaphor of a chain is completely wrong. A chain is only as strong as its weakest link, and as a chain gets longer, the likelihood that it contains a particularly weak link increases. A longer chain is weaker than a short one. I really hope that our IT security experts are smarter than our visual designers.
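To spell out the weakest-link argument: if each link independently has a small probability p of being weak, the probability that a chain of n links contains at least one weak link is 1 − (1 − p)^n, which grows towards certainty as n increases. A minimal sketch, using an arbitrary illustrative defect rate:

```python
# Probability that a chain of n links contains at least one weak link,
# assuming each link is independently weak with probability p_weak.
# The 1% rate is an arbitrary illustrative figure, not real data.
def chain_weakness_probability(n_links: int, p_weak: float = 0.01) -> float:
    return 1 - (1 - p_weak) ** n_links

for n in (5, 20, 100):
    print(f"{n} links: {chain_weakness_probability(n):.1%}")
# 5 links: 4.9%, 20 links: 18.2%, 100 links: 63.4% -- longer is weaker
```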

A password, unlike a chain, is indeed usually stronger the longer it is (the sketch below shows why). Encouraging students and staff to use long passwords is a step in the right direction. It would be even better if UC offered password managers, such as 1Password or LastPass, to all its members. That way our passwords could not only be long, but also conveniently accessible. Putting its money where its mouth is, however, is a skill that UC still needs to practice: purchases of password managers are still processed on an individual basis, and completing a purchase can take months.
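The reason longer is stronger is simple combinatorics: for a password drawn uniformly at random from an alphabet of size k, every extra character multiplies the number of possibilities by k, so the entropy in bits grows linearly with length, roughly length × log2(k). A back-of-the-envelope sketch:

```python
import math

# Entropy (in bits) of a password of a given length drawn uniformly at
# random from an alphabet of a given size. Every extra character adds
# log2(alphabet_size) bits, so longer really is stronger.
def password_entropy_bits(length: int, alphabet_size: int = 62) -> float:
    return length * math.log2(alphabet_size)

for length in (8, 12, 20):
    print(f"{length} chars: {password_entropy_bits(length):.1f} bits")
# 8 chars: 47.6 bits, 12 chars: 71.5 bits, 20 chars: 119.1 bits
# (62 = the letters a-z, A-Z and the digits 0-9)
```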

The Morality Of Abusing A Robot

Our paper The Morality Of Abusing A Robot has been published.

We are happy to announce that our paper “The Morality Of Abusing A Robot” has been published under a Creative Commons license in the Paladyn Journal. You can also download the PDF directly.

It is not uncommon for humans to exhibit abusive behaviour towards robots. This study compares how abusive behaviour towards a human is perceived in comparison with identical behaviour towards a robot. We showed participants 16 high-quality video clips that depicted different levels of violence and abuse. For each video, we asked participants to rate the moral acceptability of the action, the violence depicted, the intention to harm, and how abusive the action was. The results indicate no significant difference in the perceived morality of the actions shown in the videos across the two victim agents. When the agents started to fight back, however, their reactive aggressive behaviour was rated differently: humans fighting back were seen as less immoral than robots fighting back. A mediation analysis showed that this was predominantly due to participants perceiving the robot’s response as more abusive than the human’s response.

We created a short video to demonstrate the two main conditions of the experiment: a human or a robot being abused and then fighting back. We would like to thank Jake Watson and Sam Gorski from Corridor Digital, who made the stimuli for this experiment available.