Have you ever found your interaction with avatars, robots, or chatbots highly "unnatural", especially when it involves more than one modality?

More often than not, this is because these artificial agents lack cognitive and social skills. Although we rarely notice such skills explicitly during human-to-human interaction, they are largely the gap that must be bridged for machines, computers, and robots to be perceived as interacting in a more human-like manner.

MULTISIMO (MULTImodal and MULTIparty Social Interactions MOdelling) is a Marie Skłodowska-Curie action that takes on this challenge, specifically addressing multiparty social interactions.

Tangible objectives set out by the action are:

Implementing such novel models will allow the creation of more sophisticated, human-like agent-based interfaces that offer a high-quality communication experience, especially in scenarios where social and cognitive skills, in addition to communicative ones, are crucial to the success of the interaction, e.g. in education, customer care, and web applications with interfaces driven by human communication.

MULTISIMO is a project funded by the European Commission's Horizon 2020 programme, specifically through the Marie Skłodowska-Curie Individual Fellowships (IF-EF) funding scheme (MSCA-IF-2015-EF). Formal details of the action can be found here.