Unheard
Unheard is an installation by artists Ymer Marinus and Martina Raponi, who identifies as a CODA (Child of Deaf Adults), in collaboration with the Groningen Deaf Institute Turkoois. Together with the Deaf community, the artists developed a statement that led to this unique visual and vibratory interactive installation, created especially for the Pixel Perceptions exhibition. The installation bridges the hearing and Deaf worlds and can be activated by visitors using sign language.
Before the artists began Unheard, they worked with the Deaf community to explore avatars that automatically translate sign language. They found that these avatars often lacked the expressiveness sign language requires: the avatars missed the facial expressions and nuances that are so important in the language.* They felt far too “neutral” and failed to capture variations in space, intensity, dialect, and tone.
- An avatar is a digital representation of a person in a virtual or online environment. This can be a simple icon, but it can also be a more detailed, three-dimensional model, like those used in video games or virtual worlds.
- One example is the sign for "fall." There is no single correct sign for "fall" in sign language. The sign changes depending on what is falling: a bottle, a book, a standing person, or someone rolling off a table. The sign also varies based on how the object falls, such as water flowing from a waterfall or a book falling off a table. AI often focuses on the general concept but misses these crucial details, which are essential for a complete language experience. Sign language is deeply connected to physical and emotional expression.
Unheard does not focus on what AI cannot do but instead uses the limitations of technology to enhance the depth and beauty of sign language. The work presents sign language as a symbol of pride for the Deaf community.
We invite you to enter this space, interact with the installation, and experience how sensory stimuli can create an inclusive environment for everyone. Learn three signs: "sign language," "data," and "representation." These signs and their specific movements activate a specially trained AI model that controls low-frequency vibrations and lights, creating an immersive sensory space.