Participation in the DMCA Conference 2022

We’re proud to present our ongoing work at the fully online conference DMCA 2022.

On Thursday, November 3, we will present at the Digital Meeting for Conversation Analysis.

Here you can have a look at our interactive poster, which gives an overview of the project:

poster

Here is our abstract:

Our poster presents a research project that brings together researchers in artificial intelligence (AI) and conversation analysis (CA): we want to show our first results, but first and foremost we want to address the caveats we have encountered so far in this project and discuss our methodology. We are conducting a three-year (2021–2024) exploratory study that analyzes task-oriented human-robot interaction (HRI) in order to improve the social/interactional abilities of robots. The focus of this collaborative work is on the multi-modal and situated practices that achieve the temporal coordination of turn-taking (Levinson & Torreira 2015) and the progressivity (Fischer et al. 2019) of interaction, dealing with trouble, repair, reformulations, or accounts. In order to focus on these dimensions, we delimited a particular use case and set of actions that will remain the same across different software improvements: the humanoid robot Pepper is supposed to orient and inform users of a university library in Lyon (in French). The poster shows the different steps, from

  1. the project’s goals,
  2. the development of the first version of the robot, taking into account the sequential organization of human-human interactions in service encounters,
  3. the data acquisition of 17 hours of video-recorded data (and encountered problems),
  4. the sequential and multi-modal analysis of the data with regard to failures and repairs in interaction, such as the management of the participation framework in multi-party interactions. First collections document how people contingently humanize the robot by indexing preferences, rights, and obligations found in regular interactions, and how they orient to its alignment (or not) for the practical purpose of building humorous activities or complaints addressed to the other humans (see the paper by Tisserand & Baldauf-Quilliatre). The last two steps are
  5. the annotation of approximately 500 interactions (according to the most relevant phenomena identified as problematic or consequential) and
  6. the first solutions for a new programming approach based on developmental machine learning. The corpus has to be systematically annotated so that AI researchers can teach the machine to recognize multi-modal features of interaction and propose new AI models that account for sequentiality (prospective and retrospective indexicalities). We address the issue of how to mix top-down and bottom-up approaches (Levinson 2013) in order to process indexical and embodied conduct (e.g. humans do not gaze “to their right”; they might gaze “at someone” on their right).

You can also listen to our presentation about humans’ reactions to the robot’s first offers.

paper

Here is our abstract:

Our paper investigates the opening of human-robot interactions (HRI) in a service encounter. In our data, students freely interact with a robot (Pepper) that is supposed to inform and orient them in a university library in Lyon. We developed an interaction script based on the robot’s tasks and on conversational findings about human-human interactions in service encounters (e.g. Mondada 2021, Harjunpää et al. 2018). The script was then adapted to the robot’s built-in software and capabilities. While the robot’s initial programming is based on basic word recognition, the analyses are part of an interdisciplinary project that aims at creating new algorithms that take into account the multi-modality and progressivity (Fischer et al. 2019) of social interactions. We video-recorded (multiple cameras and a first-person view) 17 hours of data, from which we extracted approximately 500 HRIs lasting from 6 seconds to 3 minutes (a total of 4 hours and 42 minutes).

However, the data show that the library users do not align with the categories and interactional formats of the expected (human-human) service encounter. In our paper, we are interested in the responses to the first and second scripted turns produced by the robot. As a first turn, a generic offer of assistance is packaged with a greeting (“Hello (.) can I help you”). At this point, if the humans accept this first offer, or if they respond only to the greeting, a second offer (“How can I help you”), which projects a set of relevant requests or information questions, is issued. We analyse how the turn design of users’ responses (rejections, acceptances, and requests) frames different actions and preferences than a regular service encounter (e.g. designing a request as an instructed action).
Focusing on the organisation of turn-taking and the participation framework during multiparty HRI, we then show how these user responses display a contingent treatment of the robot as a co-participant or as just another technological device, like a voice agent (Avgustis et al. 2021), and how students understand its purpose, its relevant institutional identity, and the rights and obligations attached to these. Our conclusions problematize the setting that we aimed to reproduce when we designed the robot as a service provider, as institutions and companies do. In particular, the robot’s script expects human participants to act with benefactive statuses and stances (Clayman and Heritage 2014) that do not seem to operate with robots.
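As a rough illustration only, the two-turn opening script described in the abstract can be sketched as a small decision routine. This is not the project’s actual code (which runs on Pepper’s built-in software); the function names and keyword lists are illustrative assumptions standing in for the robot’s basic word recognition.

```python
# Hypothetical sketch of the robot's scripted opening, assuming simple
# keyword-based word recognition. Keyword lists are invented for illustration.

def classify_response(utterance: str) -> str:
    """Crudely classify the user's response to the first offer."""
    text = utterance.lower()
    if any(w in text for w in ("where", "how do i", "looking for", "book")):
        return "request"      # the user produces a request right away
    if any(w in text for w in ("yes", "sure", "okay")):
        return "acceptance"   # the user accepts the generic offer
    if any(w in text for w in ("hello", "hi ", "good morning")):
        return "greeting"     # the user responds only to the greeting
    return "other"

def opening_script(user_response: str) -> str:
    """After the first turn ("Hello (.) can I help you"), pick the next turn."""
    kind = classify_response(user_response)
    if kind in ("acceptance", "greeting"):
        # Second offer, projecting a set of relevant requests or questions
        return "How can I help you"
    if kind == "request":
        return "HANDLE_REQUEST"   # hand over to task-specific handling
    return "Sorry, can you repeat that?"
```

Keyword matching of this kind is exactly the sort of design that ignores sequentiality and multi-modality, which is why the project aims at models that account for both.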