Towards a data generation framework for affective shared perception and social cue learning using virtual avatars

Workshop on Affective Shared Perception, ICDL 2020 - Oct 2020 (Open Access)
Research on machine learning models for affective shared perception, social cue learning, and crossmodal conflict learning generates a high demand for large data sets of accurately annotated and unbiased training samples. While many existing data sets rely on freely available “in-the-wild” video material or paid actors, using fully controlled virtual avatars has a series of advantages: 1) Once scripted, virtual avatars and environments can be automatically varied and randomized to generate any desired number of training samples. 2) Generated video material can be automatically annotated with exact information about the avatar's behavior, e.g., the gaze target, the position of the hands, and the body pose at each point in time, obviating the tedious hand-annotation process. 3) The generated behavior is fully controllable, allowing a detailed analysis of how individual behaviors contribute to machine learning and participant study results. 4) Biases can be fully controlled: actor appearance and positioning in the scene can be balanced, and unwanted behavior can be excluded.
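
To make the described pipeline concrete, below is a minimal, hypothetical Python sketch of such a generation loop. All names (AvatarConfig, render_clip, the parameter spaces and ranges) are illustrative assumptions for this sketch, not part of the authors' framework: it randomizes scripted behavior parameters (advantage 1), emits exact annotations as a by-product of generation (advantage 2), keeps every behavior parameter explicit (advantage 3), and balances avatar appearance to control bias (advantage 4).

import json
import random
from dataclasses import dataclass, asdict

# Illustrative parameter spaces; a real framework would expose many more.
GAZE_TARGETS = ["object_left", "object_right", "camera"]
APPEARANCES = ["avatar_a", "avatar_b", "avatar_c"]

@dataclass
class AvatarConfig:
    appearance: str          # balanced round-robin to control appearance bias (4)
    gaze_target: str         # known exactly, so no hand-annotation is needed (2)
    hand_position_xy: tuple  # scripted, hence exactly annotated (2)
    gaze_onset_s: float      # exact time point of the gaze behavior (2)

def sample_config(i: int, rng: random.Random) -> AvatarConfig:
    """Randomize the scripted behavior parameters (1), balancing appearance (4)."""
    return AvatarConfig(
        appearance=APPEARANCES[i % len(APPEARANCES)],  # round-robin = balanced
        gaze_target=rng.choice(GAZE_TARGETS),
        hand_position_xy=(round(rng.uniform(-0.5, 0.5), 2),
                          round(rng.uniform(0.8, 1.2), 2)),
        gaze_onset_s=round(rng.uniform(0.5, 2.0), 2),
    )

def render_clip(config: AvatarConfig, path: str) -> None:
    """Placeholder: a real framework would drive a rendering or game engine here."""
    pass

def generate_dataset(n_samples: int, seed: int = 42) -> list:
    rng = random.Random(seed)  # seeded, so the generated data set is reproducible
    annotations = []
    for i in range(n_samples):
        cfg = sample_config(i, rng)
        clip_path = f"clip_{i:05d}.mp4"
        render_clip(cfg, clip_path)
        # The annotation is exact by construction: it is the generation script itself.
        annotations.append({"clip": clip_path, **asdict(cfg)})
    return annotations

if __name__ == "__main__":
    print(json.dumps(generate_dataset(3), indent=2))

Running the sketch prints three fully specified annotation records; in an actual pipeline, the render_clip stub would be replaced by calls into the chosen avatar rendering engine.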

 

@InProceedings{KW20,
	author = {Kerzel, Matthias and Wermter, Stefan},
	title = {Towards a data generation framework for affective shared perception and social cue learning using virtual avatars},
	booktitle = {Workshop on Affective Shared Perception, ICDL 2020},
	year = {2020},
	month = {Oct},
}