In high-stress or dangerous situations, virtual assistants and collaborative robots that can make recommendations or physically intervene could help minimize damage and even save human lives. By recognizing unexpected obstacles and finding optimal paths around them, they could, for example, be deployed to fix machinery in hazardous or difficult-to-access terrain.
Such assistants would need to be able to assimilate information both from the visual scene and from conversational interaction with human controllers, which they would then use to define, update, or correct plans on the fly. They would also need to be able to actively collaborate with the controllers, participating via language or action in the joint conception and execution of tasks. This in turn would entail a capacity to represent the intentions and plans of their interlocutors and to recognize and repair any inconsistencies in these representations when necessary.
DISCUTER combines work from linguistics on models of multimodal conversation with research in robotics on the representation of cognitive states and plans. Its principal goal is to develop a dialogue module capable of managing a conversation using input from the non-linguistic context together with dynamically evolving belief-state representations. In so doing, the project will bring us one step closer to cobots and virtual agents that, like humans, can update their representations in real time in response to information from the conversation or from the external world.
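To make the notion of a dynamically evolving belief state concrete, here is a minimal, purely illustrative Python sketch of a dialogue module that folds both visual observations and conversational input into a single belief state. It is not DISCUTER's actual design; every name in it (BeliefState, DialogueModule, observe_scene, hear, propose_action) is hypothetical.

```python
# Purely illustrative sketch; not DISCUTER's actual architecture.
# All class, method, and field names here are hypothetical.
from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class BeliefState:
    """The agent's running estimate of the scene and the controller's goal."""
    obstacles: set[str] = field(default_factory=set)
    human_goal: str | None = None


class DialogueModule:
    def __init__(self) -> None:
        self.belief = BeliefState()

    def observe_scene(self, detected_obstacles: set[str]) -> None:
        # Non-linguistic input: fold visual detections into the belief state.
        self.belief.obstacles |= detected_obstacles

    def hear(self, utterance: str) -> str:
        # Toy interpretation step: a real system would parse the utterance
        # into dialogue acts and revise its plan representation accordingly.
        if utterance.startswith("goal:"):
            self.belief.human_goal = utterance.removeprefix("goal:").strip()
            return f"Understood; working toward: {self.belief.human_goal}"
        return "Could you clarify?"

    def propose_action(self) -> str:
        # Repair step: flag an inconsistency between the stated goal
        # and what the robot currently perceives.
        if self.belief.human_goal in self.belief.obstacles:
            return f"Warning: '{self.belief.human_goal}' is blocked; rerouting."
        return "Proceeding with the current plan."


if __name__ == "__main__":
    dm = DialogueModule()
    dm.observe_scene({"valve room"})      # visual scene update
    print(dm.hear("goal: valve room"))    # conversational update
    print(dm.propose_action())            # detects and reports the conflict
```

The only point of the sketch is that perception and dialogue both write into one shared belief state, which the agent consults and repairs before acting.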
ASTRID project (ANR-21-ASIA-0005)
1 January 2022 – 31 December 2024
Lead: Linagora