Language

The ability to reliably transcribe spoken conversation opens up the possibility of exploiting transcribed data for tasks that require advanced language understanding capabilities, ranging from automatic summarization, to more fruitful dialogues with artificial assistants, to full-fledged situated interactions with assistants able to exploit information from the visual context during conversation. In collaboration with academic and industrial research partners, our team is strongly invested in developing innovative models of language understanding that draw on our solid experience in machine learning and on a hybrid approach to research that brings linguistic expertise to bear on machine learning algorithms.


Automatic Summarization

Our research on automatic summarization has led to improved models of lexical importance and discourse similarity that allow us to more reliably identify the utterances most central to a conversation. To extend this work to models capable of producing detailed summaries and meeting minutes, we are currently working on algorithms that track how utterances relate to one another in a conversation: does an utterance answer a question that was asked, explain something that was said, or correct or disagree with an argument that was put forward? Identifying such relations requires integrating our work on summarization with our work on dialogue and discourse modeling.
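The extractive core of this kind of approach can be illustrated with a toy centrality scorer: each utterance gets a TF-IDF vector, is scored by its average cosine similarity to the rest of the conversation, and the top-scoring utterances are kept as most central. This is a minimal sketch for illustration only, not the team's actual model; all function names are ours.

```python
import math
from collections import Counter

def tfidf_vectors(utterances):
    """Compute a simple bag-of-words TF-IDF vector for each utterance."""
    docs = [u.lower().split() for u in utterances]
    df = Counter(w for doc in docs for w in set(doc))  # document frequency
    n = len(docs)
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        vecs.append({w: tf[w] * math.log((1 + n) / (1 + df[w])) for w in tf})
    return vecs

def cosine(a, b):
    """Cosine similarity between two sparse (dict) vectors."""
    dot = sum(v * b.get(w, 0.0) for w, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def central_utterances(utterances, k=2):
    """Rank utterances by average similarity to the rest of the conversation."""
    vecs = tfidf_vectors(utterances)
    scores = [
        sum(cosine(v, w) for j, w in enumerate(vecs) if j != i) / (len(vecs) - 1)
        for i, v in enumerate(vecs)
    ]
    ranked = sorted(range(len(utterances)), key=lambda i: scores[i], reverse=True)
    return [utterances[i] for i in ranked[:k]]
```

On a short meeting transcript, off-topic small talk scores near zero while utterances that share vocabulary with the rest of the discussion rise to the top; richer models replace the raw lexical overlap with learned importance and discourse-aware similarity.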

Dialogue Modeling

Drawing on our team’s expertise in modeling discourse structure, our research on dialogue extends work on discourse parsing for text and chat to multi-party, spoken conversation. Progress in discourse parsing is greatly hindered by a dearth of annotated conversational data, as well as by the linguistic expertise needed to exploit it. We are tackling both problems with an approach to weak supervision in which expert annotators study a small but representative sample of data and write labeling rules that can then be used to automatically annotate large data sets. This approach also lets us easily incorporate heterogeneous sources of information useful for dialogue modeling, from discursive cues to acoustic information.
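The labeling-rule idea can be sketched as follows: each rule inspects a pair of utterances and either votes for a discourse relation or abstains, and the votes are aggregated (here by simple majority; data-programming frameworks instead learn to weight and denoise the rules). The rules and cue lists below are hypothetical examples, not the team's actual rule set.

```python
from collections import Counter

# Each labeling rule looks at two adjacent utterances and either votes for a
# discourse relation or abstains (returns None). Cues here are illustrative.
def lf_question_answer(first, second):
    return "question-answer" if first.rstrip().endswith("?") else None

def lf_explanation(first, second):
    cues = ("because", "since", "that's why")
    return "explanation" if second.lower().startswith(cues) else None

def lf_correction(first, second):
    cues = ("no,", "actually", "i disagree")
    return "correction" if any(c in second.lower() for c in cues) else None

RULES = [lf_question_answer, lf_explanation, lf_correction]

def weak_label(first, second):
    """Aggregate rule votes by majority; None if every rule abstains."""
    votes = [r(first, second) for r in RULES]
    votes = [v for v in votes if v is not None]
    return Counter(votes).most_common(1)[0][0] if votes else None
```

Run over a large corpus, such rules produce noisy but plentiful labels on which a discourse parser can then be trained; because rules can test anything computable over an utterance pair, acoustic features (pauses, pitch) can vote alongside lexical cues.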


Multimodal Dialogue

A final aspect of our work on language concerns the multimodality of face-to-face conversations, and even video conferences, in which gestures and other meaningful movements, as well as objects and actions visible in the context, can be semantically relevant. Understanding how the nonlinguistic context adds content to a conversation, and conversely, how the content of a conversation can help us understand what is going on in the visual scene, will be crucial for developing models of conversation sophisticated enough to support natural interaction between humans on the one hand and assistants or embodied agents, such as collaborative robots, on the other.

Publications

2020

LinTO Platform: A Smart Open Voice Assistant for Business Environments

The 1st International Workshop on Language Technology Platforms (IWLTP)

#Ilyes Rebai, #Kate Thompson, #Sami Benhamiche, #Zied Selami, #Damien Lainé, #Jean-Pierre Lorré

#Speech, #Language, #LinTO

Modelling Structures for Situated Discourse

Dialogue & Discourse, vol. 11 (1): 89-121

Nicholas Asher, #Julie Hunter, #Kate Thompson

#Language, #LinTO

Speaker-change Aware CRF for Dialogue Act Classification

The 28th International Conference on Computational Linguistics (COLING)

#Guokan Shang, Antoine Jean-Pierre Tixier, Michalis Vazirgiannis, #Jean-Pierre Lorré

#Language, #LinTO

Energy-based Self-attentive Learning of Abstractive Communities for Spoken Language Understanding

The 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing (AACL-IJCNLP)

#Guokan Shang, Antoine Jean-Pierre Tixier, Michalis Vazirgiannis, #Jean-Pierre Lorré

#Language, #LinTO

2019

Meeting Intents Detection Based on Ontology for Automatic Email Answering

IC 2019: Journées francophones d’Ingénierie des Connaissances

Manon Cassier, #Zied Sellami, #Jean-Pierre Lorré

#Language, #OpenPaaS::NG

Weak Supervision for Learning Discourse Structure

Conference on Empirical Methods in Natural Language Processing (EMNLP)

#Sonia Badene, #Kate Thompson, #Jean-Pierre Lorré, Nicholas Asher

#Language, #LinTO

Learning Multi-Party Discourse Structure Using Weak Supervision

The 25th International Conference on Computational Linguistics and Intellectual Technologies (Dialogue)

#Sonia Badene, #Kate Thompson, #Jean-Pierre Lorré, Nicholas Asher

#LinTO, #Language

Data Programming for Learning Discourse Structure

The 57th Annual Meeting of the Association for Computational Linguistics (ACL)

#Sonia Badene, #Kate Thompson, #Jean-Pierre Lorré, Nicholas Asher

#LinTO, #Language

Apprentissage faiblement supervisé de la structure discursive

Conférence sur le Traitement Automatique des Langues Naturelles (TALN)

#Sonia Badene, #Kate Thompson, #Jean-Pierre Lorré, Nicholas Asher

#LinTO, #Language

2018

Unsupervised Abstractive Meeting Summarization with Multi-Sentence Compression and Budgeted Submodular Maximization

The 56th Annual Meeting of the Association for Computational Linguistics (ACL)

#Guokan Shang, Wensi Ding, Zekun Zhang, Antoine J.-P. Tixier, Polykarpos Meladianos, Michalis Vazirgiannis, #Jean-Pierre Lorré

#LinTO, #Language

Blog Posts

Next Word Prediction: A Complete Guide

Construction of 360° images dataset for image recognition

Data augmentation for Natural Language Understanding

Ontology-based Meeting Intents Detection for Automatic Email Answering

Pourquoi modéliser la conversation orale spontanée reste un défi de taille ?