Virtual, Augmented and Mixed Reality, Computer Graphics, and Human-Machine Interaction
Immersive technologies, vision and collaborative robotics for next-generation production lines
Thesis with an external company or at Politecnico di Torino
Computer graphics and computer vision techniques are essential for the analysis and visual representation of key information about the state of production processes. In processes that envisage human intervention, e.g., in scenarios based on the use of collaborative robots (cobots), these techniques can support the identification and management of defects and issues in general; they can additionally contribute to increasing the safety of the workplace through the study and simulation of ergonomics, the delivery of training content, and the monitoring/prediction of human actions in relation to machine ones.
In these fields, several theses are available, framed in the context of the MANAGE 5.0 project and developed in collaboration with Stellantis and other companies, with the support of the Ministry of Enterprises and Made in Italy (MIMIT, formerly MISE) and the National Recovery and Resilience Plan (PNRR).
In particular, the MANAGE 5.0 project will focus on the definition of intelligent approaches for the creation of digital twins of production lines involving cobots and of their operators, and on the use of immersive technologies (virtual, augmented and mixed reality) for the optimization of human-machine collaboration, both on site and at a distance via telepresence solutions.

Virtual simulations and human-computer interfaces for advanced air mobility
Thesis with an external company or research center
Several thesis proposals are available in the context of the "Centro Nazionale Mobilità Sostenibile" ("National Center for Sustainable Mobility"), in particular of the "Air mobility" project funded by the National Recovery and Resilience Plan (PNRR). These proposals will concern the design and development of virtual simulations and human-computer interfaces for advanced air mobility, and could be developed in collaboration with the other project partners.
One of these proposals will concern the creation of a simulation environment for flying vehicles. Starting from existing software, an immersive environment capable of simulating, both visually and physically, the cabin/cockpit of a Vertical Take-Off and Landing (VTOL) aircraft (manned or unmanned) will be created. The physical simulation will leverage a motion platform available at VR@POLITO. Typical tasks for VTOL aircraft, like passenger transport, emergency medical services, or search and rescue (SAR), will be considered.
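As a rough illustration of how such a platform is typically driven, the sketch below implements a classical washout motion-cueing scheme, a standard approach in vehicle simulators: transient accelerations are rendered as short platform excursions, while sustained accelerations are simulated through tilt coordination. This is only a minimal sketch under stated assumptions; the cutoff frequencies, limits, and returned commands are illustrative, not the actual interface of the VR@POLITO platform.

```python
import numpy as np

DT = 0.01  # control step [s], assumed

class FirstOrderFilter:
    """Complementary first-order filter: low-pass output, or input minus low-pass."""
    def __init__(self, cutoff_hz, mode):
        rc = 1.0 / (2 * np.pi * cutoff_hz)
        self.alpha = DT / (DT + rc)
        self.mode = mode
        self.y = 0.0

    def step(self, x):
        self.y += self.alpha * (x - self.y)           # low-pass update
        return self.y if self.mode == "low" else x - self.y

hp = FirstOrderFilter(cutoff_hz=0.5, mode="high")     # transient (onset) cues
lp = FirstOrderFilter(cutoff_hz=0.1, mode="low")      # sustained cues via tilt

def motion_cue(ax_vehicle):
    """Map simulated longitudinal acceleration [m/s^2] to platform commands."""
    surge_cue = hp.step(ax_vehicle)                   # short platform excursion
    tilt_rad = np.arctan2(lp.step(ax_vehicle), 9.81)  # tilt so gravity mimics accel
    return surge_cue, float(np.clip(tilt_rad, -0.2, 0.2))  # perception/actuator limit
```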
Another proposal, strictly related to the previous one, will concern the design and evaluation of interfaces capable of enhancing the situational awareness of pilots of manned aircraft. Since the two main human-vehicle communication channels, i.e., the visual and aural ones, are typically overloaded, the aim is to investigate how to leverage the haptic channel. In particular, by using a commercial haptic suit, the plan is to investigate the effectiveness of a multi-sensory stimulation combining the above channels with the haptic one by means of full-body touch stimuli.
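To make the idea concrete, here is a minimal sketch of how cockpit alerts could be mapped to full-body vibrotactile patterns. The alert names, actuator indices, and the suit.vibrate call are all hypothetical stand-ins for whatever the commercial suit's SDK actually exposes.

```python
from dataclasses import dataclass

@dataclass
class HapticPattern:
    actuators: list[int]   # motor indices on the suit (hypothetical layout)
    intensity: float       # 0.0 .. 1.0
    pulse_ms: int          # duration of each pulse
    repeats: int

# Directional cues: e.g., traffic approaching from the left vibrates the left torso.
ALERT_PATTERNS = {
    "traffic_left":  HapticPattern(actuators=[0, 1, 2], intensity=0.6, pulse_ms=120, repeats=3),
    "traffic_right": HapticPattern(actuators=[3, 4, 5], intensity=0.6, pulse_ms=120, repeats=3),
    "low_altitude":  HapticPattern(actuators=[10, 11],  intensity=0.9, pulse_ms=300, repeats=5),
}

def trigger_alert(suit, name: str):
    """Play a pattern; 'suit.vibrate' is a placeholder for the vendor SDK call."""
    p = ALERT_PATTERNS[name]
    for _ in range(p.repeats):
        for a in p.actuators:
            suit.vibrate(motor=a, intensity=p.intensity, duration_ms=p.pulse_ms)
```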
Another proposal will concern the use of spatial audio in immersive VR simulations to investigate the acoustic experience in flying vehicles. Through a collaboration with the DIMEAS department, acoustic simulations of the vehicle's mechanical structure and motors will be generated. By means of a VR simulation, users will be immersed in a synthetic environment and will be allowed to evaluate different configurations based on their acoustic impact.

Generating 3D synthetic data to train AI algorithms for intelligent vehicle applications
Thesis with an external company or research center
Supervisors: Fabrizio Lamberti
The recognition of the vehicle's owner, or of a person authorized to operate a given vehicle, will be a challenge for future intelligent vehicle applications. Moreover, the person to be recognized stands outside the car, in varied and complex conditions (recognition "in the wild"): highly variable lighting, presence of other subjects, etc. Apart from the difficulty and complexity of the recognition algorithm itself, one of the fundamental problems becomes the data needed to train the required AI models, which is not available yet.
In this case, so-called synthetic data need to be employed. Synthetic data, generated through simulation environments, can mimic operational conditions while also making it possible to take edge cases into account. In fact, real-world datasets often contain imbalances, because edge cases, which do not happen frequently in real life, are not sufficiently represented. Finally, with synthetic data, other issues, e.g., related to privacy and GDPR compliance, can be easily overcome.
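A hedged sketch of how such a generator could be parameterized is shown below: scene parameters are sampled with domain randomization, deliberately over-representing rare conditions relative to their real-world frequency. All parameter names, ranges, and weights are illustrative assumptions, not the project's actual configuration.

```python
import random

def sample_scene():
    """Sample one randomized scene configuration for the rendering engine."""
    return {
        "sun_elevation_deg":  random.uniform(-5, 60),         # includes dusk/dawn edge cases
        "light_intensity":    random.lognormvariate(0, 0.8),  # highly variable lighting
        "n_bystanders":       random.choices([0, 1, 2, 5, 10],
                                             weights=[3, 3, 2, 1, 1])[0],
        "subject_distance_m": random.uniform(0.5, 8.0),
        "occlusion_ratio":    random.betavariate(1, 4),       # mostly small, sometimes heavy
        "rain":               random.random() < 0.25,         # oversampled vs. real frequency
    }

# Each sampled configuration would drive the renderer to produce an image plus
# pixel-perfect labels (identity, bounding box, pose) "for free".
dataset_config = [sample_scene() for _ in range(10_000)]
```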
The focus of this thesis, developed in collaboration with Centro Ricerche Fiat (CRF) and Stellantis, will be the use of computer graphics simulation for the generation of synthetic data that could later be used to train owner-recognition AI algorithms that are robust to variations in terms of, e.g., illumination, presence of other human and non-human subjects, etc.

Intelligent virtual humans
Thesis with an external company or at Politecnico di Torino
Virtual humans (or digital humans) are computer simulations of human beings, which are today considered a kind of commodity in movies, video games and the entertainment domain in general. In fact, virtual humans are becoming commonplace in various industries and domains, from medicine to film, fashion, education, automotive, telecommunications, etc. Depending on the application, various levels of realism may be needed.
A medical application requires an exact simulation of specific internal organs or of human emotional states; an education or cultural heritage application requires effective communication/storytelling and interaction abilities to deliver the intended contents and manage dialogues; the film industry requires extremely pleasant appearances, as well as natural movements and expressions; ergonomic studies, like clothing applications, require faithful body proportions, realistic locomotion, etc.; the metaverse requires social behavior to be replicated in a credible way via easy-to-use interfaces.
The research domain of virtual humans is concerned with their representation, movement and behavior, and is addressing related challenges through a multi-disciplinary approach that encompasses, among others, computer graphics, computer animation, computer vision and artificial intelligence.
The aim of this thesis is to explore possible methods and tools for the creation and the control of intelligent virtual humans that can be used in different projects currently ongoing at the VR@POLITO lab.
In particular, activities could concern:
- the creation of digital avatars for the virtual visits being developed for cultural institutions in Piemonte and in Italy (in collaboration with the DAD and DISEG departments);
- the development of empathic avatars of patients and doctors to support training in the medical domain, in collaboration with Ontario Tech University (Ontario, Canada);
- the implementation of conversational agents that can be used in customer service-oriented applications;
- the comparison of consumer technologies that can be exploited to enable a smooth transfer of the user's movements (body posture and gait, as well as facial expressions) onto his or her virtual avatar (a minimal retargeting sketch follows this list).
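Regarding the last point, the sketch below shows the naive core of such a transfer: local joint rotations tracked on the user are copied onto the corresponding avatar bones. The bone names, the mapping, and the skeleton interface are hypothetical; real pipelines must additionally handle different skeleton proportions and rest poses.

```python
# Hypothetical bone-name mapping from a consumer body tracker to an avatar rig.
TRACKER_TO_AVATAR = {
    "LeftUpperArm":  "mixamorig:LeftArm",
    "RightUpperArm": "mixamorig:RightArm",
    "Spine":         "mixamorig:Spine",
    "Head":          "mixamorig:Head",
}

def retarget(tracked_pose: dict, avatar_skeleton: dict):
    """tracked_pose: bone name -> local rotation quaternion (x, y, z, w).
    avatar_skeleton: bone name -> bone object exposing .local_rotation."""
    for src, dst in TRACKER_TO_AVATAR.items():
        if src in tracked_pose and dst in avatar_skeleton:
            # Naive copy: assumes both skeletons share the same rest orientation.
            avatar_skeleton[dst].local_rotation = tracked_pose[src]
```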
Depending on the domain, the thesis can be carried out in collaboration with a company, as a thesis at Politecnico di Torino, or even as a thesis abroad at a foreign university.

Immersive driving simulations for autonomous vehicles and road users
Thesis with an external company or at Politecnico di Torino
Autonomous vehicles are routinely advertised as a world-changing invention that will greatly impact people's lives by, on the one hand, improving safety, reducing fuel consumption and decreasing urban infrastructure strain, and, on the other hand, freeing up huge amounts of time for their occupants, who will be allowed to read, consume, work, play, etc. Despite the enormous business opportunities, autonomous vehicles also come with some key issues.
One of the major issues is related to the fact that, in order to use this technology, people will first have to trust it. Besides education and experience, a factor regarded as the possible keystone for building trust towards autonomous vehicles is HMI, i.e., the paradigms chosen to communicate with the users (the vehicle occupants, but also the other road users) and to let them interact with the vehicle.
At present, since just a few vehicles are actually available for real-world experiments (and not yet at the highest levels of autonomy envisaged by SAE), the only way to design and validate approaches for coping with issues like those above is through simulation.
The VR@POLITO lab has developed an immersive VR driving simulation platform that has been used, e.g.,
- to explore in-vehicle interfaces based on AR Head Up Displays for enhancing drivers'/passengers' trust;
- to study pedestrians' behavior at crosswalks with incoming vehicles mounting different types of signaling interfaces;
- to investigate the effectiveness of multi-modal interfaces in urban, suburban and highway drives with autonomous vehicles.
Several thesis works are available in this field, which can address the open problems from different perspectives, possibly in collaboration with a company.
Suggested readings:
Building Trust in Autonomous Vehicles: Role of Virtual Reality Driving Simulators in HMI Design
https://arxiv.org/abs/2007.13371
Comparing State-of-the-Art and Emerging Augmented Reality Interfaces for Autonomous Vehicle-to-Pedestrian Communication
https://arxiv.org/abs/2102.02783

Serious games and virtual/augmented reality for emergency management
Thesis with an external company or at Politecnico di Torino
Supervisors: Fabrizio Lamberti
Several collaborations are in place with various institutions involved in emergency management (Italian Air Force, Piedmont Region Civil Protection and Forest Firefighting Unit, Fréjus Tunnel Authority, Fire and Rescue Department of the Savoy Region, France, etc.). In this context, a number of serious games and interactive experiences leveraging Virtual, Augmented and Mixed Reality have been developed to support the education and training of first responders, volunteers, students, etc.
Three thesis proposals are available, to be developed mostly at Politecnico di Torino in collaboration with the aforementioned institutions.
1) The first thesis proposal aims at extending several previous thesis works that have created a prototype strategic planning and debriefing tool for forest firefighting based on a sand table. The prototype already combines a fire simulator with AR projection and mobile app-based interaction. Planned extensions envisage, e.g., the integration of depth-scan reconstruction of the shaped sand (see the sketch after this list) and the use of wearable AR headsets (HoloLens) for collaborative discussion.
2) The second thesis's goal is to build a multi-layer, multi-player training tool by integrating a series of previously developed interactive experiences. For instance, immersive VR-based applications for training in the use of low-flame firefighting tools or high-pressure pumping equipment have already been created. The idea is to make it possible for decision makers to observe, from a kind of centralized control room, the effects of the various teams operating in such applications.
3) The third thesis will focus on the evolution of a VR game based on a "travel-in-time" dynamic, created to teach middle school students the correct procedures to follow in emergency situations. So far, hydrogeological risks (specifically, floods) have been addressed; other risks will be considered now.
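Regarding the first proposal, the following minimal sketch shows how a depth scan of the shaped sand could be turned into a normalized heightmap that the fire simulator and the AR projection could share as terrain input; the camera setup and all numbers are assumptions, not the actual prototype's.

```python
import numpy as np

def depth_to_heightmap(depth_mm: np.ndarray, base_depth_mm: float) -> np.ndarray:
    """depth_mm: per-pixel distance from a downward-facing depth camera.
    base_depth_mm: depth reading for the empty, flat table.
    Returns sand elevation above the base, normalized to [0, 1]."""
    elevation = np.clip(base_depth_mm - depth_mm, 0, None)  # closer pixel = higher sand
    return elevation / max(float(elevation.max()), 1e-6)

# Example with fake data: a 480x640 scan from a camera ~1 m above the table.
fake_depth = 1000.0 - np.random.rand(480, 640) * 150.0
heightmap = depth_to_heightmap(fake_depth, base_depth_mm=1000.0)
```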
In all the thesis works, experiments with end users will be carried out to validate the effectiveness of the devised solutions.

Virtual laboratories in eXtended Reality
Thesis at Politecnico di Torino
Supervisors: Fabrizio Lamberti
A "virtual laboratory" is defined as an interactive environment that enables a group of researchers located around the world to work together on creating and conducting simulated experiments. Virtual laboratories are also receiving increasing attention from the world of education, as they could represent the practical side of classes implemented according to the e-learning paradigm. This became particularly relevant during the lockdown phase, and the potential of this technology could finally be unleashed now that in-person and distance education have started to be delivered simultaneously in a hybrid mode.
In this context, several thesis proposals are available, aimed at creating multiple virtual laboratory experiences supporting, among others, the educational activities at Politecnico di Torino. Contacts have been activated with colleagues in various Departments interested in developing learning tools in the fields of physics, chemistry, math, etc. that can be accessed by students through both Virtual Reality (VR) and Augmented Reality (AR) technologies. Various directions could be investigated, encompassing online and offline contents, self-learning modules, teacher-student and student-student collaboration modalities, etc.
An example of such a tool, developed for the Department of Energy in the context of electrical engineering, is presented in the paper below.
Immersive Virtual Reality for procedural training: Comparing traditional and learning by teaching approaches
Computers in Industry, January 2023
https://dx.doi.org/10.1016/j.compind.2022.103785

Cyber-sickness mitigation techniques in virtual scenarios
Thesis at Politecnico di Torino
Cyber-sickness is a disorder characterized by symptoms like nausea or discomfort that can occur during or after the use of Virtual Reality (VR) technologies. Similarly to motion sickness, it is mainly caused by the inconsistency between the visual and aural stimuli coming from the simulated virtual environment and the feedback expected from the vestibular system. Several mitigation and prevention techniques have been proposed and adopted in many commercial products. However, most of them are highly situational, or require implementation choices that have a negative impact on core aspects such as immersion, naturalness, and sense of presence.
In order to support a comprehensive evaluation and fair comparison of such techniques, a testbed including a set of four immersive scenarios able to stimulate different, controlled levels of cyber-sickness has been developed in a previous thesis.
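As a concrete example of the kind of techniques to be compared, the sketch below implements the control logic of dynamic field-of-view restriction (vignetting), one widely studied mitigation strategy: the visible field of view shrinks as self-motion intensity grows, reducing peripheral optical flow. The thresholds are assumptions, and set_vignette stands in for whatever the rendering engine actually exposes.

```python
def vignette_radius(linear_speed, angular_speed_deg,
                    full_fov=1.0, min_fov=0.45):
    """Return a normalized vignette radius in [min_fov, full_fov]."""
    # Normalize motion intensity against assumed comfort thresholds.
    intensity = min(1.0, linear_speed / 5.0 + angular_speed_deg / 90.0)
    return full_fov - (full_fov - min_fov) * intensity

def update(frame, renderer):
    """Per-frame update; 'frame' carries the current self-motion state."""
    target = vignette_radius(frame.linear_speed, frame.angular_speed_deg)
    # Smooth changes over time so the vignette itself is not noticeable.
    renderer.vignette += 0.1 * (target - renderer.vignette)
    renderer.set_vignette(renderer.vignette)  # placeholder engine call
```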
The aim of this new thesis will be to study the state of the art of cyber-sickness mitigation and prevention techniques by implementing them in the above testbed, and possibly to design new, more effective strategies for addressing this major issue of immersive Virtual Reality.

Animating virtual characters in immersive environments
Thesis at Politecnico di Torino
Generating computer animations, particularly of virtual characters, is a very labor-intensive task that requires animators to operate sophisticated interfaces. Hence, researchers continuously experiment with alternative interaction paradigms that could ease this task. Among others, Virtual Reality represents a valid alternative to traditional interfaces (based on mouse and keyboard), since it can make interactions more expressive and intuitive. However, although the literature proposes several solutions leveraging VR to solve different computer graphics challenges, these solutions are generally not fully integrated into the computer animation pipeline, or were not designed to support collaborative work. Moreover, most of the existing solutions are still based on common hand controllers, even though the literature has already shown the benefits provided by alternatives such as sketch-based interfaces [1].
The objective of this thesis proposal (more than one student will be accepted) is to design and develop an immersive and collaborative animation system that allows animators to produce and edit 3D character animations using alternative interfaces. The system could also feature intelligent mechanisms able to support the animator in the creation of virtual character animations, e.g., by automatically generating the next/previous frame given the current character pose. Related works have been proposed in the literature: for example, in [2] the Disney research lab proposed a system to support collaborative animation through mixed reality. Additional examples for authoring contents, 3D modeling, or directing actors in virtual scenes have been proposed in [3-5]; however, according to a recent survey [6], no works could be identified that make use of immersive and collaborative environments to produce computer animations. This research gap is probably due to the number of challenges that researchers have to face when approaching the design of collaborative AR/VR systems [7].
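As a baseline for the frame-suggestion idea, before any learned model, classic in-betweening can already propose intermediate poses by spherically interpolating (slerp) each joint rotation between two keyframes. A minimal sketch follows; quaternions are (x, y, z, w) tuples and the joint names are illustrative.

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between two unit quaternions."""
    q0, q1 = np.asarray(q0, float), np.asarray(q1, float)
    dot = np.dot(q0, q1)
    if dot < 0.0:            # take the short path on the quaternion sphere
        q1, dot = -q1, -dot
    if dot > 0.9995:         # nearly parallel: fall back to normalized lerp
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

def inbetween(pose_a, pose_b, t):
    """pose_*: dict joint name -> quaternion. Returns the interpolated pose."""
    return {joint: slerp(pose_a[joint], pose_b[joint], t) for joint in pose_a}
```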
Related works:
[1] Cannavò, Alberto, et al. "Posing 3D Characters in Virtual Reality Through In-the-Air Sketches." International Conference on Computer Animation and Social Agents. Springer, Cham, 2020, pp. 51-61.
[2] Pan, Ye, and Kenny Mitchell (Disney Research). "PoseMMR: a collaborative mixed reality authoring tool for character animation." 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW). IEEE, 2020
[3] Nebeling, Michael, et al. "Xrdirector: A role-based collaborative immersive authoring system." Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 2020.
[4] Larpkiattaworn, Noppasorn, et al. "Multiuser Virtual Reality for Designing and Editing 3D Models." International Conference on Human-Computer Interaction. Springer, Cham, 2020.
[5] Wang, Cheng Yao, et al. "VideoPoseVR: authoring virtual reality character animations with online videos." Proceedings of the ACM on Human-Computer Interaction 6.ISS (2022): 448-467.
[6] Schäfer, Alexander, Gerd Reis, and Didier Stricker. "A Survey on Synchronous Augmented, Virtual and Mixed Reality Remote Collaboration Systems." ACM Computing Surveys (CSUR) (2021).
[7] Krauß, Veronika, et al. "Current practices, challenges, and design implications for collaborative AR/VR application development." Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 2021.

Experiencing past events through extended reality experiences in the Metaverse
Thesis at Politecnico di Torino
Extended reality applications are becoming commonplace in several domains, ranging from entertainment to education. Moreover, the growing interest in the Metaverse is increasingly attracting the attention of researchers and companies. In the Metaverse, different communities can meet each other with the aim of exchanging ideas or knowledge and working together on a given task. But what happens if someone missed an online event held in the recent past, or is interested in meeting people who lived in the previous century, etc.?
The aim of this thesis is to develop an extended reality-based system giving users the possibility to seamlessly experience physical events even though they happened in the past. When attendance is or was not possible, the system will be able to capture and reconstruct co-located or remote interactions, both physical and virtual, thus allowing immersed users to revisit them at a convenient time. Intelligent systems could be leveraged to bring famous people back to life or to reconstruct historical events, with the aim of making them available in the Metaverse.
This capability of a system to reconstruct interactions and events is generally referred to as "asynchronous reality". The system will also allow users to transition along the reality-virtuality continuum and collaborate with other users living the same experience through different technologies. In this way, the benefits of both virtual and augmented reality can be leveraged in the same application. Developing such asynchronous- and cross-reality systems usually entails numerous challenges, which will be addressed in this thesis. The outcomes of this research could be applied to different use cases, such as education and training, cultural heritage, entertainment, sport, etc.
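A minimal sketch of the recording/replay core that such a system would need is shown below: every interaction in the shared space is stored as a timestamped event so that a later visitor can re-experience the session at their own pace. The event fields and class interface are illustrative assumptions.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class Event:
    t: float        # seconds since session start
    actor: str      # user or object identifier
    kind: str       # e.g., "move", "grab", "speak"
    payload: dict   # pose, audio reference, etc.

class SessionLog:
    def __init__(self):
        self.start = time.time()
        self.events: list[Event] = []

    def record(self, actor, kind, payload):
        self.events.append(Event(time.time() - self.start, actor, kind, payload))

    def save(self, path):
        with open(path, "w") as f:
            json.dump([asdict(e) for e in self.events], f)

    def replay(self, speed=1.0):
        """Yield events with their original relative timing (scaled by 'speed')."""
        prev = 0.0
        for e in sorted(self.events, key=lambda ev: ev.t):
            time.sleep((e.t - prev) / speed)
            prev = e.t
            yield e   # the XR client re-enacts the event on avatars/objects
```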
Suggested readings:
[1] Gruenefeld, Uwe, et al. "VRception: Rapid Prototyping of Cross-Reality Systems in Virtual Reality." CHI Conference on Human Factors in Computing Systems. 2022.
[2] Wang, Nanjia, and Frank Maurer. "A Design Space for Single-User Cross-Reality Applications." Proceedings of the 2022 International Conference on Advanced Visual Interfaces. 2022.
[3] Kikuchi, Yusuke, et al. "Mobile Cross Reality (XR) Space for Remote Collaboration." Human Factors in Virtual Environments and Game Design 50 (2022): 25.
[4] Woodworth, Jason W., David Broussard, and Christoph W. Borst. "Redirecting Desktop Interface Input to Animate Cross-Reality Avatars." 2022 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). IEEE, 2022.
[5] Fender, Andreas Rene, and Christian Holz. "Causality-preserving asynchronous reality." Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems. 2022.

Extended reality (XR) for the cinema industry
Thesis at Politecnico di Torino
Extended Reality (XR) technologies, encompassing Virtual (VR), Augmented (AR) and Mixed (MR) Reality, are disrupting the way films are produced and viewed. Although these technologies are becoming commonplace, they present new, unexplored challenges to both actors and final viewers. For instance, when motion capture (mocap) is used to record actors' movements with the aim of animating digital characters, an increase in workload is observed for people on stage, as they have to rely largely on their imagination to understand how the digitally created characters will behave in the scene. An interesting research direction pertains to how XR can be used to support mocap actors' job. Another research direction regards the impact of innovative cinematic techniques on the viewers' experience. For instance, a key step for immersive movies would be the selection of the ideal points of view that would give the viewer the best experience while watching them.
In this context, several thesis proposals are available. For instance, one proposal could concern the improvement of an AR system designed to support mocap actors while rehearsing scenes containing visual effects. Another proposal could focus on cinematic AR/VR and envisage in-depth analyses aimed at supporting the creation of immersive content best matching the viewers' expectations.
Suggested readings:
[1] Gödde M, Gabler F, Siegmund D, Braun A. Cinematic narration in VR—rethinking film conventions for 360 degrees, pp. 184–20, 2018.
[2] Cannavò A, Pratticò F G, Bruno A, Lamberti F. "AR-MoCap: Using augmented reality to support motion capture acting". In: 30th IEEE Conference on Virtual Reality and 3D User Interfaces (IEEE VR 2023), 2023.
[3] Cannavò A, Castiello A, Pratticò F G, Mazali T, Lamberti F. "Immersive Movies: The Effect of Point of View on Narrative Engagement". AI & Society, 2023.

Haptics in eXtended Reality experiences and Human-Robot Interaction
Thesis with an external company or research center
Supervisors: Fabrizio Lamberti
Several thesis proposals are available regarding the use of haptic feedback in Virtual, Augmented and Mixed Reality experiences. The proposals are in collaboration with different teams of the IRISA-Inria center at the University of Rennes and include an internship at their premises (http://www.irisa.fr/en/scientific-departments-irisa). The duration of the thesis/internship is approximately 5-6 months. Experience with C/C++/C#, Unity3D, VR/AR tools, and human-machine interaction is required. Details are provided below and at the following link:
inria-irisa-rennes-haptics-thesis-internship-april2023.pdf
HapticMic: Studying the Persuasive Effects of Haptics during Speech-Based Interactions in Virtual Reality
Vibrotactile feedback is directly related to sound. When we speak, we make our body vibrate. At a concert, we feel vibratory feedback if we get close to a speaker. Vibrations seem to be an interesting way to emphasize sound feedback. Persuasive speech is broadly used in verbal communication, for instance in meetings, advertisements, informal discussions among friends, etc. Being able to modulate speaker leadership or persuasion in a collaborative environment could be of interest to increase inclusivity (e.g. of shy participants) or to solve conflicts. Previous studies have suggested that leadership is influenced by visual feedback. Very recently, we conducted two experiments where participants embody a first-person avatar attending a virtual meeting in immersive VR. Results showed that vibrotactile-reinforced speech can significantly improve the perceived co-presence but also the persuasiveness and leadership of the haptically-augmented agent. The objective of this research work is to design and evaluate a haptic-enabled microphone, able to reinforce speech in real-time through vibrotactile feedback.
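A hedged sketch of the signal path follows: the short-time amplitude envelope of the microphone signal is extracted and mapped to vibrotactile intensity in real time. The frame size, smoothing factor, gain, and suit call are assumptions, not the project's actual design.

```python
import numpy as np

FRAME = 256    # samples per analysis frame (~5 ms at 48 kHz), assumed
SMOOTH = 0.3   # envelope smoothing factor, assumed

def speech_to_vibration(samples: np.ndarray, prev_env: float = 0.0):
    """samples: mono float audio in [-1, 1]. Returns per-frame intensities in [0, 1]."""
    intensities, env = [], prev_env
    for i in range(0, len(samples) - FRAME, FRAME):
        rms = float(np.sqrt(np.mean(samples[i:i + FRAME] ** 2)))  # frame energy
        env = (1 - SMOOTH) * env + SMOOTH * rms                   # smooth the envelope
        intensities.append(min(1.0, env * 4.0))                   # gain is an assumption
    return intensities

# In a live loop each intensity would be sent to the actuators, e.g.:
# suit.vibrate(motor=CHEST, intensity=i, duration_ms=5)  # hypothetical SDK call
```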
Wearable haptics for augmenting tangible objects in Augmented Reality
Augmented Reality (AR) integrates virtual content into our real-world surroundings, giving the illusion of one unique environment. Virtual object manipulation is critical for useful and effective AR usage, such as in medical applications, training, or entertainment. However, there still exist several issues that affect manipulation in AR, degrading the overall user experience. One of the most important limitations is the lack of haptic feedback, even though it has been proven to significantly improve both manipulation performance and user experience during manipulation in VR and AR. The lack of haptic sensations is due to multiple reasons, including the degradation of tracking performance when using, e.g., wearable haptic interfaces, and the resulting mismatch between the pose of the user's hand and that of its avatar representation in the AR environment. The objective of this research work is to design and evaluate (wearable) haptic approaches to augment tangible objects during AR manipulation, e.g., altering the haptic perception of a tangible object through additional wearable haptic stimuli.
Mixed Reality (MR) for heterogeneous multi-robot systems
Teams of coordinated robots have been successfully used in a plethora of different applications, including disaster response, exploration, patrolling, and surveillance. Multi-robot systems are able to perform actions according to the perception of the single robots and their understanding of the environment, as the team (physically) interacts with it. This ensemble of abilities for sensing, interpreting, modeling, predicting, and interacting with the physical world offers concrete applications for Artificial Intelligence (AI) tools and methodologies. However, when multiple robots are involved, it is unclear how to provide spatially accurate and efficient information to a (leader) human user, who either controls the team from a remote (and secure) location or moves in the same environment as the robots. In this respect, Mixed and Augmented Reality can be used to provide the human user with additional information about the target environment and task, reconstructing (part of) it from the information provided by the robotic team. By enhancing the world with virtual information, the user can provide additional feedback to the robots, which can process this information using sensors and cameras in a completely agnostic way with respect to the environment they move in. The objective of this work is to design and deploy a mixed-reality system able to treat real and virtual robots agnostically, as well as to provide the user with feedback about the status of the robots. Imagine a human user moving in the same environment as a team of coordinated drones: he or she can directly see some of the drones, while those that are not visible are shown through AR rendering. Moreover, virtual content can be added to the real world to show additional relevant details, such as the location of targets.
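A minimal sketch of the visibility-dependent rendering decision described above; in_frustum and line_of_sight_clear are placeholders for queries an engine would provide, and the decision labels are illustrative.

```python
def classify_drones(drones, user_pose, scene):
    """drones: id -> 3D position. Returns id -> rendering decision."""
    decisions = {}
    for drone_id, pos in drones.items():
        if not scene.in_frustum(user_pose, pos):
            decisions[drone_id] = "offscreen_indicator"  # arrow at screen edge
        elif scene.line_of_sight_clear(user_pose.position, pos):
            decisions[drone_id] = "none"                 # physically visible as-is
        else:
            decisions[drone_id] = "ar_ghost"             # render through occluders
    return decisions
```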
Immersive Virtual Reality (VR) multi-sensory User Interface (UI) for controlling multi-robot systems at the microscale
Information feedback during the teleoperation of microrobotic systems is of course fundamental, enabling the human user to understand what is happening in the remote environment. A typical example of the visual feedback provided during the teleoperation of such systems is the following: two microrobots are controlled in 3D, and the user is provided with the front and side views of the environment as captured by the cameras. Input information regarding, e.g., where a robot should move is usually provided by simply clicking on the screen to indicate the target reference positions. While this approach is widely employed and provides all the information necessary to drive the robots, it does not enable intuitive control and visualization of the robots and their environment. The objective of this research work is to design and evaluate a series of immersive Virtual Reality (VR) User Interfaces (UIs) for controlling multi-robot systems at the microscale in an intuitive and natural way, as well as for providing the user with feedback about the status of the robots and the current task.

3D reconstruction and VR visualization of nuclear scattering events
Thesis with an external company or research center
Supervisors: Fabrizio Lamberti
Reconstructing the complete kinematics of a collision between particles at accelerators is a fundamental step in the investigation of subtle processes hidden in the 10⁻¹⁵ m and 10⁻²³ s space-time windows where the interaction takes place. The study of the pion-Helium nucleus interaction, at the temperature at which a transition phase from a bound nucleus to a gas of fermions is expected to occur, is of high interest because it makes it possible to investigate a number of fundamental open issues in nuclear and particle physics. It can provide information on the evolution of the Universe during the transition from the hadron era to the primordial nucleosynthesis era (from 10⁻³ s, 10¹¹ K to 10⁷ s, 10⁹ K). During this transition, the strong force (Quantum ChromoDynamics) was responsible for a non-trivial set of physics phenomena involving, at the same time, the emergence of collective self-interacting complex systems existing at the quantum scale.
At the Joint Institute for Nuclear Research (JINR) in Dubna, Russia, a set of several thousand pion-Helium nucleus scattering events has been observed with a detector, namely a self-shunted streamer chamber (SSSC) filled with Helium, capable of revealing the complete interaction kinematics, down to the momenta of very low-energy charged secondary interaction products such as protons with an energy of merely 1 MeV. A charged particle leaves electron-ion pairs along its trajectory; when a high-voltage pulse is applied to the SSSC electrodes (triggered by a signal from scintillation counters revealing that a nuclear event has occurred inside the chamber volume), these pairs give rise to electric discharges in their initial stage (streamers). The visible light, emitted by excitation and ionization processes taking place within the streamers in the Helium gas filling the detector volume, traces the particle tracks and is captured by two CCDs and saved in two photograms. The kinematics (the momenta of the particles that take part in the collision) is reconstructed from the analysis of the tracks left by the charged particles.
This thesis will focus on the 3D reconstruction and VR visualization of nuclear scattering events produced and photographed in the PAINUC experiment at the JINR phasotron (particle accelerator) of the Laboratory of Nuclear Problems (LNP). In particular, the work will focus on the automatic 3D reconstruction of each pion-Helium nucleus scattering event on the basis of the two photograms taken by the CCDs. The work will involve the use of software tools for stereoscopic view reconstruction, as well as for background reduction and filtering. The effectiveness of the various tools, from standard image filtering approaches to AI-based systems, will be studied.
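The core stereo step can be illustrated with the standard linear triangulation (DLT) method: given the calibrated projection matrices of the two CCDs, a track point matched in both photograms yields a 3D point. The sketch below assumes known calibration; the matrices and image points are placeholders, not the experiment's actual data.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) image points.
    Returns the 3D point minimizing the algebraic reprojection error."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)   # solution is the right singular vector
    X = vt[-1]
    return X[:3] / X[3]           # dehomogenize

# A track is then reconstructed by triangulating matched points along the
# streamer images in the two views and fitting a curve to the 3D points.
```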
In addition, visualization of the reconstructed events in a VR environment is to be implemented, in order to provide both a new tool for the analysis of the phenomena involved and a powerful environment for STEM didactics and science communication.
The thesis will be developed in collaboration with CERN, JINR and DISAT.