ByteBuddy is planned as a multimodal educational game that teaches elementary school students basic programming concepts. Players use speech, touch gestures, and facial expressions to solve maze levels.
ByteBuddy B, the player's personal robot companion that walks through the mazes, supports the player in learning these programming concepts.
OpenFace 2.0 is used to extract Action Units (AUs) and facial landmarks for further processing.
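OpenFace 2.0 writes its per-frame output as a CSV with one column per feature: AU intensities (`AU.._r`, on a 0–5 scale), AU presence flags (`AU.._c`), and the 68 2-D landmarks as `x_0 … x_67` / `y_0 … y_67`. A minimal Python sketch of loading such output for further processing (the file name and confidence cutoff are illustrative):

```python
import pandas as pd

# Load the per-frame CSV produced by OpenFace 2.0's FeatureExtraction tool.
# "webcam_output.csv" is a placeholder path.
df = pd.read_csv("webcam_output.csv")
df.columns = df.columns.str.strip()  # some OpenFace builds pad column names with spaces

# Keep only frames where OpenFace reports a successful, confident track.
df = df[(df["success"] == 1) & (df["confidence"] > 0.8)]

# Action Unit intensities (0-5) and presence flags (0/1).
au_intensities = df.filter(regex=r"^AU\d+_r$")
au_presence = df.filter(regex=r"^AU\d+_c$")

# 2-D facial landmarks: 68 points as x_0..x_67 / y_0..y_67.
landmarks_x = df.filter(regex=r"^x_\d+$")
landmarks_y = df.filter(regex=r"^y_\d+$")
```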
The touch-gesture recognition code is based on the $Q implementation from https://github.com/shakyaprashant/qdollar .
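For context: $Q, like its predecessor $P, treats a gesture as an unordered point cloud. Candidate and template are resampled to the same number of points and normalized for scale and position, then compared with a greedy cloud-matching distance. The sketch below illustrates that core matching step from the published algorithm (it is not the code from the linked repository, and it omits $Q's lower-bounding and early-abandoning optimizations):

```python
import math

def cloud_distance(pts, tmpl, start):
    """One directional greedy match between two equal-size point clouds,
    starting the pairing at index `start` (as in $P/$Q)."""
    n = len(pts)
    matched = [False] * n
    total = 0.0
    i = start
    while True:
        # Greedily pair pts[i] with the closest still-unmatched template point.
        best_j, best_d = -1, math.inf
        for j in range(n):
            if not matched[j]:
                d = math.dist(pts[i], tmpl[j])
                if d < best_d:
                    best_j, best_d = j, d
        matched[best_j] = True
        # Early pairings are weighted higher than late, forced ones.
        weight = 1 - ((i - start + n) % n) / n
        total += weight * best_d
        i = (i + 1) % n
        if i == start:
            return total

def greedy_cloud_match(pts, tmpl):
    """Symmetric matching over several start indices; the smallest
    distance wins. `pts` and `tmpl` are lists of (x, y) tuples."""
    n = len(pts)
    step = max(1, int(math.sqrt(n)))
    return min(
        min(cloud_distance(pts, tmpl, i), cloud_distance(tmpl, pts, i))
        for i in range(0, n, step)
    )
```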
The prototype can be tried out here.
For now, only the Start Screen, the Intro, and the Intro to Level 1 are included; they demonstrate how the player receives the information needed to play the game.
- include speech input
- evaluate with real users which facial feature (Action Units, landmarks, or a combination of both) is the most robust to use
- Action Units: currently only the smile-related AUs are checked for activation against a threshold. It may make sense to also check which other AUs are activated: if other AUs are active over a certain time window as well, the expression may not be a smile (a rough sketch of this check follows below).
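A rough sketch of the idea in that last item, assuming OpenFace-style AU intensity columns. The specific AUs, thresholds, and window length here are illustrative guesses, not tuned values from the project:

```python
import pandas as pd

SMILE_AUS = ["AU06_r", "AU12_r"]            # assumed smile AUs (cheek raiser, lip corner puller)
OTHER_AUS = ["AU04_r", "AU09_r", "AU15_r"]  # e.g. brow lowerer, nose wrinkler, lip corner depressor
SMILE_THRESHOLD = 2.0                       # illustrative intensity threshold (OpenFace scale: 0-5)
OTHER_THRESHOLD = 2.0
WINDOW = 15                                 # illustrative window length in frames

def is_genuine_smile(window: pd.DataFrame) -> bool:
    """Return True if the smile AUs are active over most of the window
    while conflicting AUs stay quiet, following the TODO item above."""
    # Fraction of frames in which *all* smile AUs exceed the threshold.
    smile_active = (window[SMILE_AUS] > SMILE_THRESHOLD).all(axis=1).mean() > 0.8
    # No conflicting AU may be active in half the window or more.
    others_quiet = (window[OTHER_AUS] > OTHER_THRESHOLD).mean().max() < 0.5
    return smile_active and others_quiet

# Usage over a stream of OpenFace frames in `df`:
# for end in range(WINDOW, len(df)):
#     if is_genuine_smile(df.iloc[end - WINDOW:end]):
#         trigger_smile_action()  # hypothetical game hook
```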