Researchers funded by the National Science Foundation have created a robot called “Emo” that can mimic the facial expressions of a conversation partner in real time. Emo uses predictive algorithms, trained on video of human facial expressions, to anticipate the expressions its partner is about to make. It then drives its 26 motors and actuators to recreate those expressions on its own face, which has interchangeable silicone “skin” and camera “eyes” for observing the person it mirrors. The goal is “coexpression”: making the robot seem friendlier, more human-like, and more socially responsive by mimicking facial expressions simultaneously during conversation. Emo improves on previous robots’ ability to facially mirror human interlocutors.

Summarized by Claude 3