Research Projects Overview


   

Tracking and training perceptual-motor expertise in basketball

The objective of this project is to track and train the key determinants of perceptual-motor expertise in basketball throwing. We have developed an immersive, highly realistic simulator that allows players to throw a basketball naturally in virtual reality. This simulator lets us study expertise related to throwing skills (e.g., the free throw), expertise in duel situations through interaction with a virtual opponent, and expertise under social pressure through a virtual coach capable of giving feedback. The data collected show that the simulator can predict players' on-court expertise. Furthermore, we have established that players pick up different visual information from the basket and the opponent depending on their level of expertise. We are currently developing training methods based on individual feedback in augmented virtuality and augmented reality.

   

Assisting helicopter ship landing with AR

This project aims at designing visual aids for helicopter ship landing. The first step of the project consists in studying and modelling the perceptual-motor coupling strategies of expert pilots during approach and landing. We use an immersive simulator that allows us to manipulate the sea state and the resulting movements of the ship. In the second step, ecological interfaces are designed by basing their architecture on the models of perceptual-motor strategies obtained in the first step. We use augmented virtuality to prototype these assistive displays and to experimentally demonstrate their benefit for pilot performance and behaviour.


Modeling driver behaviour while approaching an intersection and improving it with AR

This project aims at modeling drivers' perceptual-motor coupling while approaching an intersection, in order to design decision aids and driving-behaviour regulation. To do this, we use a fixed-base virtual reality system in which the kinematics of the driven vehicle and of the traffic are fully controlled, so that different manoeuvring possibilities (e.g., emergency braking, acceleration) can be manipulated experimentally. Our results show that drivers decide which manoeuvre to perform, and regulate it, through a single mechanism that tells them at any moment which actions are possible given the morphological and kinematic characteristics of the driven vehicle (i.e., affordances). We are developing visual assistance based on these affordances and experimentally validating its benefit in augmented virtuality.
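To make the affordance idea concrete, the crossing and braking possibilities can be sketched as simple kinematic boundaries. The following Python snippet is an illustrative simplification only (constant accelerations, point vehicles, hypothetical function and parameter names), not the project's actual model:

```python
import math

def time_to_cross(d, v, a):
    """Earliest time to cover distance d (m) from speed v (m/s)
    with constant acceleration a (m/s^2)."""
    if a == 0:
        return d / v
    # Positive root of d = v*t + 0.5*a*t^2.
    return (-v + math.sqrt(v * v + 2 * a * d)) / a

def affordances(d, v, t_gap, a_max, b_max):
    """Which manoeuvres are currently afforded (toy model).

    d     : distance to the intersection (m)
    v     : current speed of the driven vehicle (m/s)
    t_gap : time until the crossing vehicle reaches the intersection (s)
    a_max : maximum forward acceleration (m/s^2)
    b_max : maximum braking deceleration (m/s^2, positive)
    """
    # Crossing is afforded if the gap can be cleared before the other vehicle arrives.
    can_cross = time_to_cross(d, v, a_max) < t_gap
    # Braking is afforded if the stopping distance fits before the intersection.
    can_brake = v * v / (2 * b_max) <= d
    return {"cross": can_cross, "brake": can_brake}
```

In this toy model, a situation where both affordances hold leaves the driver a genuine choice, whereas a situation where neither holds is an unavoidable conflict; experimentally manipulating vehicle kinematics moves these boundaries.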


Modeling drivers' overtaking behaviour with affordance-based models in VR

This project aims at demonstrating that drivers' perceptual-motor coupling relies on their vehicle's action limits when initiating and regulating overtaking manoeuvres. Our approach consisted in manipulating, in virtual reality, both the overtaking situation (by varying, for example, the speed of oncoming traffic) and the kinematic limits of the driven vehicle (by varying its maximum speed and/or acceleration). Our results revealed that drivers decide whether or not to initiate an overtaking manoeuvre, and regulate it, by perceiving their overtaking possibilities through higher-order (speed-dependent) and lower-order (acceleration-dependent) affordances. Drivers are also able to take into account changes in their overtaking ability when changing gears. It is this model's failure to account for the possibility of braking that gave rise to the intersection-crossing paradigm, in which the possibilities of crossing and braking are considered simultaneously.
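The distinction between the two orders of affordance can be sketched in code. The snippet below is a toy illustration under strong simplifying assumptions (the closing speed on the oncoming vehicle is frozen at its initial value, maximum acceleration is constant); all names and numbers are hypothetical, not the project's model:

```python
def overtake_afforded(d_pass, v_self, v_lead, v_max, a_max, d_oncoming, v_oncoming):
    """Toy check of two overtaking affordances.

    d_pass     : relative distance to gain on the lead car to complete the pass (m)
    v_self     : current speed of the driven vehicle (m/s)
    v_lead     : speed of the car being overtaken (m/s)
    v_max      : maximum speed of the driven vehicle (m/s)
    a_max      : maximum acceleration of the driven vehicle (m/s^2)
    d_oncoming : distance to the oncoming vehicle (m)
    v_oncoming : speed of the oncoming vehicle (m/s)
    """
    # Time available before meeting the oncoming vehicle
    # (simplification: closing speed frozen at its initial value).
    t_avail = d_oncoming / (v_self + v_oncoming)
    if v_max <= v_lead:
        return False
    # Higher-order (speed-dependent) affordance: even at top speed,
    # can the relative gap be closed in the available time?
    speed_ok = d_pass / (v_max - v_lead) < t_avail
    # Lower-order (acceleration-dependent) affordance: integrate the
    # relative distance gained while accelerating from v_self toward v_max.
    t, gained, v, dt = 0.0, 0.0, v_self, 0.01
    while t < t_avail:
        v = min(v + a_max * dt, v_max)
        gained += (v - v_lead) * dt
        t += dt
    accel_ok = gained >= d_pass
    return speed_ok and accel_ok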

   

Modeling information-movement coupling and identifying information-pickup while walking to intercept moving targets

This project aims at identifying the visual information picked up by operators, and at modelling the perceptual-motor architecture underlying the interception of a moving target while walking. I used basic virtual environments slaved to different peripherals (e.g., a treadmill or a joystick) to study the coupling between the operators' walking speed and the changes in the visual scene. These virtual reality devices allowed me to bias the operators' perception, either by de-correlating the available optical information (e.g., the optical expansion of the target, the optical flow carried by the ground) from the operator-environment property it usually specifies, or by impoverishing or enriching the visual environment (e.g., by adding predictive information about the target's trajectory). The data collected show that the Constant Bearing Angle strategy is an ideal architecture for modelling the operators' perception-action coupling in many conditions. I am currently developing tactile vision-substitution devices to enable blind users to intercept moving targets.
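The Constant Bearing Angle strategy can be illustrated with a toy simulation: the walker adjusts speed so as to null out any change in the bearing angle to the target, which leads to interception without predicting the target's trajectory. This is only a minimal sketch of the general principle (the geometry, gain, and time step are hypothetical choices, not the study's model):

```python
import math

def simulate_cba(agent_speed=1.0, k=40.0, dt=0.01, t_max=20.0):
    """Walker on the y-axis intercepts a target crossing its path using the
    Constant Bearing Angle strategy. Returns the minimum agent-target distance."""
    ax, ay = 0.0, 0.0        # agent position (walks along +y)
    tx, ty = 10.0, 15.0      # target position (hypothetical geometry)
    tvx, tvy = -1.0, 0.0     # target velocity: crosses the agent's path
    v = agent_speed
    prev_theta = math.atan2(tx - ax, ty - ay)
    min_dist = float("inf")
    t = 0.0
    while t < t_max:
        # Bearing of the target relative to the walking direction (+y).
        theta = math.atan2(tx - ax, ty - ay)
        theta_dot = (theta - prev_theta) / dt
        # CBA control law: change speed to cancel bearing drift, dv/dt = -k * dtheta/dt.
        v = max(0.0, v - k * theta_dot * dt)
        prev_theta = theta
        ay += v * dt
        tx += tvx * dt
        ty += tvy * dt
        min_dist = min(min_dist, math.hypot(tx - ax, ty - ay))
        t += dt
    return min_dist
```

With the control active the walker settles on the speed that keeps the bearing constant and passes very close to the target, whereas with the gain set to zero (constant speed) it misses by several metres, which is what makes the strategy a useful architecture to test against operators' behaviour.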