Analyzing and exploring multimodal data (images, videos, and recordings such as gaze trajectories, EEG, emotional states, and heart rate) usually requires customized applications. But how can such data be made accessible to a broad audience, for example researchers from diverse disciplines? One possible solution is to use multimedia containers that support the visualization and sonification of the scientific data, allowing explorative multimodal data analysis with any multimedia player. To test this approach, it was prototyped on several datasets and then evaluated in a user study. Read on to learn more about the benefits and requirements, and about how artificial neural networks can be trained on such standardized containers.
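To give an impression of the sonification idea, here is a minimal sketch (not the project's actual pipeline): a recorded data series, such as a heart-rate trace, is mapped to audible tones and written out as a standard audio track. The function name, the frequency mapping, and the sample values are all hypothetical; in a full workflow the resulting track would be muxed alongside video streams into a multimedia container with a tool such as ffmpeg.

```python
import math
import struct
import wave

def sonify(values, sample_rate=8000, seg_dur=0.25, base_freq=220.0):
    """Map each data value to a short tone; pitch rises with the value.

    Hypothetical sketch: values are scaled linearly into the range
    base_freq .. 2*base_freq (here 220-440 Hz), one tone per sample.
    """
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    frames = bytearray()
    for v in values:
        freq = base_freq * (1.0 + (v - lo) / span)
        n = int(sample_rate * seg_dur)  # samples per tone segment
        for i in range(n):
            amp = int(32767 * 0.5 * math.sin(2 * math.pi * freq * i / sample_rate))
            frames += struct.pack("<h", amp)  # 16-bit little-endian PCM
    return bytes(frames)

# Hypothetical heart-rate trace in BPM
heart_rate = [62, 64, 70, 85, 90, 74, 66]
pcm = sonify(heart_rate)

# Write a mono 16-bit WAV file that any multimedia player can open
with wave.open("heart_rate.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(8000)
    w.writeframes(pcm)
```

Because the output is a standard WAV track, no customized application is needed to listen to the data; any player, or a container muxer, can consume it directly.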
Read the article here.
Find out more on the project webpage.