2024-03-29T11:10:40Z
https://zenodo.org/oai2d
oai:zenodo.org:7863461
2023-04-25T14:26:42Z
user-guestxr
user-eu
Amber Maimon
Iddo Yehoshua Wald
Meshi Ben Oz
Sophie Codron
Ophir Netzer
Benedetta Heimler
Amir Amedi
2023-01-26
<p>Humans, like most animals, integrate sensory input in the brain from different sensory modalities. Yet humans are distinct in their ability to grasp symbolic input, which is interpreted into a cognitive mental representation of the world. This representation merges with external sensory input, providing modality integration of a different sort. This study evaluates the Topo-Speech algorithm in the blind and visually impaired. The system provides spatial information about the external world by applying sensory substitution alongside symbolic representations in a manner that corresponds with the unique way our brains acquire and process information. This is done by conveying spatial information, customarily acquired through vision, through the auditory channel, in a combination of sensory (auditory) features and symbolic language (named/spoken) features. Topo-Speech sweeps the visual scene or image, representing each object's identity by naming it with a spoken word and simultaneously conveying its location: the x-axis of the scene or image is mapped to the time at which the object is announced, and the y-axis to the pitch of the voice. This proof-of-concept study primarily explores the practical applicability of this approach in 22 visually impaired and blind individuals. The findings showed that individuals from both populations could effectively interpret and use the algorithm after a single training session. The blind showed an accuracy of 74.45%, while the visually impaired had an average accuracy of 72.74%. These results are comparable to those of the sighted, as shown in previous research, with all participants above chance level. As such, we demonstrate practically how aspects of spatial information can be transmitted through non-visual channels. To complement the findings, we weigh in on debates concerning models of spatial knowledge (the persistent, cumulative, or convergent models) and the capacity for spatial representation in the blind. We suggest that the present study's findings support the convergence model and the scenario that the blind are capable of some aspects of spatial representation, as conveyed by the algorithm, comparable to those of the sighted. Finally, we present possible future developments, implementations, and use cases for the system as an aid for the blind and visually impaired.</p>
https://doi.org/10.3389/fnhum.2022.1058093
oai:zenodo.org:7863461
Zenodo
https://zenodo.org/communities/guestxr
https://zenodo.org/communities/eu
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
Frontiers in Human Neuroscience, (2023-01-26)
sensory substitution
spatial perception
sensory substitution device (SSD)
blind and visually impaired people
sensory development
sensory perception
The Topo-Speech sensory substitution system as a method of conveying spatial information to the blind and vision impaired
info:eu-repo/semantics/article
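The record above describes an x-to-time, y-to-pitch sonification. As a rough illustration of that mapping (not the published implementation; the sweep duration, pitch range, and example scene below are hypothetical choices), a minimal Python sketch:

```python
# Minimal sketch of the Topo-Speech style mapping described above:
# x-position -> announcement time within a left-to-right sweep,
# y-position -> voice pitch. Sweep duration and pitch range are
# hypothetical choices, not the published system's parameters.

SWEEP_SECONDS = 2.0          # time to sweep the scene left to right
PITCH_RANGE_HZ = (100, 300)  # low y -> low pitch, high y -> high pitch

def schedule_announcements(objects, scene_width, scene_height):
    """objects: list of (name, x, y) with the origin at the bottom-left."""
    events = []
    lo, hi = PITCH_RANGE_HZ
    for name, x, y in objects:
        onset = (x / scene_width) * SWEEP_SECONDS     # x-axis -> time
        pitch = lo + (y / scene_height) * (hi - lo)   # y-axis -> pitch
        events.append({"word": name, "onset_s": onset, "pitch_hz": pitch})
    return sorted(events, key=lambda e: e["onset_s"])

if __name__ == "__main__":
    scene = [("cup", 40, 90), ("chair", 250, 20), ("lamp", 180, 160)]
    for e in schedule_announcements(scene, scene_width=320, scene_height=240):
        print(f'{e["onset_s"]:.2f}s  {e["pitch_hz"]:.0f} Hz  "{e["word"]}"')
```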
oai:zenodo.org:10605201
2024-02-01T11:37:58Z
openaire
user-guestxr
user-eu
Hecquard, Jeanne
Saint-Aubert, Justine
Manson, Julien
Argelaguet, Ferran
Pacchierotti, Claudio
Lécuyer, Anatole
Macé, Marc
2023-07-28
<p>Poster presented by Jeanne Hecquard, project partner from Inria, during the International Summer School on eXtended Reality Technology and eXperience, held on July 18th-21st, 2023, in Madrid (Spain).</p>
https://doi.org/10.5281/zenodo.10605201
oai:zenodo.org:10605201
Zenodo
https://zenodo.org/communities/guestxr
https://zenodo.org/communities/eu
https://doi.org/10.5281/zenodo.10605200
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
International Summer School on eXtended Reality Technology and eXperience
Empathetic haptics: feel the stress of a virtual agent
info:eu-repo/semantics/conferencePoster
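These entries are OAI-PMH records harvested from the Zenodo endpoint listed in their headers (https://zenodo.org/oai2d). A minimal harvesting sketch, assuming only the standard OAI-PMH 2.0 ListRecords verb, the oai_dc metadata prefix, and the user-guestxr set that appears in these records; error handling is omitted:

```python
# Minimal OAI-PMH harvester sketch for the endpoint these records came from.
# Uses only the standard ListRecords verb, the oai_dc prefix, and the
# resumptionToken paging defined by the OAI-PMH 2.0 protocol.
import requests
import xml.etree.ElementTree as ET

BASE = "https://zenodo.org/oai2d"
NS = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "dc": "http://purl.org/dc/elements/1.1/",
}

def harvest(set_spec="user-guestxr"):
    params = {"verb": "ListRecords", "metadataPrefix": "oai_dc", "set": set_spec}
    while True:
        root = ET.fromstring(requests.get(BASE, params=params, timeout=30).content)
        for rec in root.iter("{http://www.openarchives.org/OAI/2.0/}record"):
            ident = rec.findtext(".//oai:identifier", namespaces=NS)
            title = rec.findtext(".//dc:title", namespaces=NS)
            yield ident, title
        token = root.findtext(".//oai:resumptionToken", namespaces=NS)
        if not token:           # empty or missing token ends the harvest
            break
        params = {"verb": "ListRecords", "resumptionToken": token}

if __name__ == "__main__":
    for ident, title in harvest():
        print(ident, "-", title)
```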
oai:zenodo.org:10606918
2024-02-01T16:54:38Z
user-guestxr
user-eu
GuestXR partners
2024-02-01
<p>The GuestXR project is geared towards developing an all-encompassing virtual social platform using extended reality. The project offers an XR/VR (metaverse) technology as an immersive meeting environment where people can connect and interact. The central feature of this project is a machine learning program known as "The Guest," whose purpose is to facilitate communication between users and assist them in accomplishing their objectives. This innovation aims to address concerns such as digital conflict and cyberbullying, facilitate engagement for individuals with communication difficulties, and identify antisocial behaviours, among other related issues.</p>
<p>GuestXR is launching an Open Call for interested and capable individuals or organizations to participate in the development and implementation of a use case that integrates the project technology. We are looking for innovative approaches to utilizing our resources to address social issues and create a safe and inclusive environment for individuals to connect and interact.</p>
<p>We welcome applications from individuals or organizations with diverse backgrounds, experiences and perspectives. We encourage proposals that contribute to the project's objectives, are committed to creating an inclusive and diverse community, and demonstrate innovative and creative ideas for the development of this ground-breaking project.</p>
https://doi.org/10.5281/zenodo.10606918
oai:zenodo.org:10606918
Zenodo
https://zenodo.org/communities/guestxr
https://zenodo.org/communities/eu
https://doi.org/10.5281/zenodo.10606917
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
D6.4 GuestXR Open Call
info:eu-repo/semantics/report
oai:zenodo.org:7311300
2022-11-11T02:26:27Z
openaire
user-guestxr
user-eu
Sayın, Umut
2022-11-10
<p>Real-life situations are hard to replicate in the laboratory and are often discarded during Hearing Aid (HA) optimization, leading to performance inconsistencies and user dissatisfaction. In recent years, virtual sound environments (VSEs) have become important to research groups because of this discrepancy. Difficulties in setting up realistic situations and a lack of scriptable end-to-end virtual acoustic simulations make it difficult to generate an abundant amount of properly labeled data for the possible use of machine learning in HA devices.</p>
https://doi.org/10.5281/zenodo.7311300
oai:zenodo.org:7311300
eng
Zenodo
https://zenodo.org/communities/guestxr
https://zenodo.org/communities/eu
https://doi.org/10.5281/zenodo.7311299
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
Synthesized Virtual Sound Environments for Hearing Aids Research
info:eu-repo/semantics/conferencePoster
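The abstract above motivates scriptable generation of labeled training data. A minimal sketch of that idea under stated assumptions (synthetic placeholder signals, a toy exponential-decay impulse response, and the mixing SNR used as the label), not the poster's actual framework:

```python
# Sketch of scriptable labeled-data generation in the spirit of the
# abstract above: dry speech * room impulse response + noise at a known
# SNR, with the SNR kept as the training label. Signals are synthetic
# placeholders; a real pipeline would load recorded speech and RIRs.
import numpy as np
from scipy.signal import fftconvolve

def make_example(speech, rir, noise, snr_db):
    """Return (noisy reverberant speech, SNR label in dB)."""
    reverberant = fftconvolve(speech, rir)[: len(speech)]
    sig_pow = np.mean(reverberant**2)
    noise_pow = np.mean(noise**2) + 1e-12
    # scale the noise so the mixture hits the requested SNR
    scaled = noise[: len(reverberant)] * np.sqrt(
        sig_pow / (noise_pow * 10 ** (snr_db / 10))
    )
    return reverberant + scaled, snr_db

rng = np.random.default_rng(0)
fs = 16000
speech = rng.standard_normal(fs)      # stand-in for 1 s of dry speech
rir = np.exp(-np.linspace(0, 8, fs // 4)) * rng.standard_normal(fs // 4)
noise = rng.standard_normal(fs)
mix, label = make_example(speech, rir, noise, snr_db=5.0)
print(mix.shape, "labeled SNR:", label, "dB")
```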
oai:zenodo.org:10605984
2024-02-01T14:01:43Z
openaire
user-guestxr
user-eu
Slater, Mel
2022-12-16
<p>Presentation delivered by project partner Mel Slater during the ICIR 2022 conference.</p>
https://doi.org/10.5281/zenodo.10605984
oai:zenodo.org:10605984
Zenodo
https://zenodo.org/communities/guestxr
https://zenodo.org/communities/eu
https://doi.org/10.5281/zenodo.10605983
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
IEEE 2nd International Conference on Intelligent Reality (ICIR 2022)
Reinforcing and Eliciting Behaviour in Virtual Reality Using Closed Loop Learning
info:eu-repo/semantics/lecture
oai:zenodo.org:10213099
2023-11-28T10:45:53Z
user-guestxr
user-eu
Gusó, Enric
Luberadzka, Joanna
Baig, Martí
Sayin, Umut
Serra, Xavier
2023-07-24
<p>We investigate the objective performance of five high-end commercially available Hearing Aid (HA) devices compared to DNN-based speech enhancement algorithms in complex acoustic environments. To this end, we measure the HRTFs of a single HA device to synthesize a binaural dataset for training two state-of-the-art causal and non-causal DNN enhancement models. We then generate an evaluation set of realistic speech-in-noise situations using an Ambisonics loudspeaker setup and record with a KU100 dummy head wearing each of the HA devices, both with and without the conventional HA algorithms, applying the DNN enhancers to the latter. We find that the DNN-based enhancement outperforms the HA algorithms in terms of noise suppression and objective intelligibility metrics.</p>
https://doi.org/10.48550/arXiv.2307.12888
oai:zenodo.org:10213099
Zenodo
https://zenodo.org/communities/guestxr
https://zenodo.org/communities/eu
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
WASPAA23, IEEE Workshop on Applications of Signal Processing to Audio and Acoustics
An objective evaluation of Hearing Aids and DNN-based speech enhancement in complex acoustic scenes
info:eu-repo/semantics/conferenceProceedings
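The paper above reports objective intelligibility metrics for the enhanced signals. As a hedged illustration of such scoring, the sketch below computes STOI with the third-party pystoi package (pip install pystoi) on synthetic stand-in signals; it is not the paper's evaluation pipeline:

```python
# Hedged illustration of objective intelligibility scoring, assuming the
# third-party `pystoi` package. The "clean", "noisy", and "processed"
# signals below are synthetic stand-ins for real recordings.
import numpy as np
from pystoi import stoi

fs = 10000                                # STOI operates natively at 10 kHz
rng = np.random.default_rng(1)
clean = rng.standard_normal(3 * fs)       # stand-in for 3 s of clean speech
noisy = clean + 0.5 * rng.standard_normal(3 * fs)
processed = clean + 0.1 * rng.standard_normal(3 * fs)  # "enhanced" output

print("noisy     STOI:", stoi(clean, noisy, fs))
print("processed STOI:", stoi(clean, processed, fs))
```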
oai:zenodo.org:10606183
2024-02-01T14:30:48Z
openaire
user-guestxr
user-eu
Küçüktütüncü, Esen
2023-11-23
<p>Presentation delivered by project partner Esen Küçüktütüncü during a SECTG cluster meeting targeted at Early Stage Researchers (ESRs) on November 23rd, 2023.</p>
https://doi.org/10.5281/zenodo.10606183
oai:zenodo.org:10606183
Zenodo
https://zenodo.org/communities/guestxr
https://zenodo.org/communities/eu
https://doi.org/10.5281/zenodo.10606182
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
Influence of Prior Acquaintance on the Shared VR Experience
info:eu-repo/semantics/lecture
oai:zenodo.org:10606876
2024-02-01T16:46:28Z
user-guestxr
user-eu
GuestXR partners
2024-02-01
<p>In this document we report on the GuestXR baseline system, which is available to the public. The baseline system, developed by Virtual Bodyworks (VBW), is an immersive shared virtual and extended reality system in which participants can meet as full-body avatars and talk about a topic of their choice. The architecture and system components are described in more detail in D3.1. The system is available as an AppLab app on the Meta App store, and participants are invited by email. The system also requires authentication on an online system maintained by VBW in order to deliver personalised avatars. The system offers a choice of experiences, representing test spaces and prototypes of the different use cases in WP5. Human participants can be joined by the Guest system, which is implemented as a Python library providing a set of preliminary actions.</p>
https://doi.org/10.5281/zenodo.10606876
oai:zenodo.org:10606876
Zenodo
https://zenodo.org/communities/guestxr
https://zenodo.org/communities/eu
https://doi.org/10.5281/zenodo.10606875
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
D5.3 Public launch of Baseline Application
info:eu-repo/semantics/report
oai:zenodo.org:10606922
2024-02-01T16:55:53Z
user-guestxr
user-eu
GuestXR partners
2024-02-01
<p>This deliverable, the Data Management Plan (DMP), describes the methodology for data management that is planned to be employed in the framework of the GUESTXR project. The described methodology aims to safeguard the sound management of the data collected and generated during the course of the project's activities across their entire lifecycle, while also making them FAIR. Moreover, the DMP identifies the anticipated activities required for making data FAIR, outlines the provisions pertaining to their security, and addresses the ethical aspects revolving around their collection/generation.</p>
<p>The DMP is considered to be a living document in the framework of GUESTXR and will be updated as needed throughout the course of the project, taking into account its latest developments and available results. In fact, the DMP will be reviewed and revalidated before each periodic project management report, and any updates will be included in the periodic project management reports. Ad hoc updates may also be realised when deemed necessary, with a view to delivering an accurate, up-to-date and comprehensive DMP before the completion of the project (D7.5 Final Data Management Plan). These updates will also be appended to the periodic project management reports. This deliverable is delivered at M6 of the project and is updated based on the latest information available up to the month of delivery.</p>
https://doi.org/10.5281/zenodo.10606922
oai:zenodo.org:10606922
Zenodo
https://zenodo.org/communities/guestxr
https://zenodo.org/communities/eu
https://doi.org/10.5281/zenodo.10606921
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
D7.2 Initial Data Management Plan
info:eu-repo/semantics/report
oai:zenodo.org:10606005
2024-02-01T14:03:57Z
openaire
user-guestxr
user-eu
Slater, Mel
2022-12-06
<p>Presentation delivered by project partner Mel Slater during the Digital Media & Human Well-Being (DIGEING) Conference.</p>
https://doi.org/10.5281/zenodo.10606005
oai:zenodo.org:10606005
Zenodo
https://zenodo.org/communities/guestxr
https://zenodo.org/communities/eu
https://doi.org/10.5281/zenodo.10606004
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
DIGEING
Immersive Social Media and the Metaverse
info:eu-repo/semantics/lecture
oai:zenodo.org:10606738
2024-02-01T16:28:53Z
openaire
user-guestxr
user-eu
Maimon, Amber
Wald, Iddo
2023-06-22
<p>Slides of the webinar "Use of multisensory features to improve accessibility to VR environments", held on June 22nd, 2023, as part of the GuestXR webinar series "Fostering inclusion and social interaction in XR".</p>
https://doi.org/10.5281/zenodo.10606738
oai:zenodo.org:10606738
Zenodo
https://zenodo.org/communities/guestxr
https://zenodo.org/communities/eu
https://doi.org/10.5281/zenodo.10606737
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
Use of multisensory features to improve accessibility to VR environments
info:eu-repo/semantics/lecture
oai:zenodo.org:10606694
2024-02-01T16:20:20Z
openaire
user-guestxr
user-eu
Spanlang, Bernhard
2023-05-17
<p>Slides of the webinar "A shared XR system with full body avatars and AI agent integration for enhanced inclusivity", held on May 17th, 2023, as part of the GuestXR webinar series "Fostering inclusion and social interaction in XR".</p>
https://doi.org/10.5281/zenodo.10606694
oai:zenodo.org:10606694
Zenodo
https://zenodo.org/communities/guestxr
https://zenodo.org/communities/eu
https://doi.org/10.5281/zenodo.10606693
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
A shared XR system with full body avatars and AI agent integration for enhanced inclusivity
info:eu-repo/semantics/lecture
oai:zenodo.org:10606716
2024-02-01T16:23:15Z
openaire
user-guestxr
user-eu
Luberadzka, Joanna
2023-06-01
<p>Slides of the webinar "Binaural 3D sound in VR environments", held on June 1st, 2023, as part of the GuestXR webinar series "Fostering inclusion and social interaction in XR".</p>
https://doi.org/10.5281/zenodo.10606716
oai:zenodo.org:10606716
Zenodo
https://zenodo.org/communities/guestxr
https://zenodo.org/communities/eu
https://doi.org/10.5281/zenodo.10606715
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
Binaural 3D sound in VR environments
info:eu-repo/semantics/lecture
oai:zenodo.org:10606149
2024-02-01T14:28:15Z
openaire
user-guestxr
user-eu
Luberadzka, Joanna
2023-11-23
<p>Presentation delivered by project partner Joanna Luberadzka during a SECTG cluster meeting targeted at Early Stage Researchers (ESRs) on November 23rd, 2023.</p>
https://doi.org/10.5281/zenodo.10606149
oai:zenodo.org:10606149
Zenodo
https://zenodo.org/communities/guestxr
https://zenodo.org/communities/eu
https://doi.org/10.5281/zenodo.10606148
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
Deep learning for acoustic matching in the VR context
info:eu-repo/semantics/lecture
oai:zenodo.org:10605974
2024-02-01T13:57:59Z
openaire
user-guestxr
user-eu
Slater, Mel
2022-12-01
<p>Presentation delivered by project partner Mel Slater during VR Days Europe.</p>
https://doi.org/10.5281/zenodo.10605974
oai:zenodo.org:10605974
Zenodo
https://zenodo.org/communities/guestxr
https://zenodo.org/communities/eu
https://doi.org/10.5281/zenodo.10605973
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
VR Days Europe
Meeting Yourself and Celebrities in VR
info:eu-repo/semantics/lecture
oai:zenodo.org:10605156
2024-02-01T11:25:43Z
openaire
user-guestxr
user-eu
Smekal, Vojtěch
Poyo Solanas, Marta
de Gelder, Beatrice
2022-09-14
<p>Poster presented by Vojtěch Smekal, partner from Maastricht University, during the workshop Comparative Neurobiology of Higher Cognitive Functions.</p>
https://doi.org/10.5281/zenodo.10605156
oai:zenodo.org:10605156
Zenodo
https://zenodo.org/communities/guestxr
https://zenodo.org/communities/eu
https://doi.org/10.5281/zenodo.10605155
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
Workshop "Comparative Neurobiology of Higher Cognitive Functions"
Computing a unique neural fingerprint of human bodily actions and expressions
info:eu-repo/semantics/conferencePoster
oai:zenodo.org:6782741
2022-06-30T13:50:45Z
user-guestxr
user-eu
K. Cieśla
T. Wolak
A. Lorens
M. Mentzel
H. Skarżyński
Amir Amedi
2022-02-25
<p>Understanding speech in background noise is challenging. Wearing face masks, imposed by the COVID-19 pandemic, makes it even harder. We developed a multi-sensory setup, including a sensory substitution device (SSD) that can deliver speech simultaneously through audition and as vibrations on the fingertips. The vibrations correspond to low frequencies extracted from the speech input. We trained two groups of non-native English speakers in understanding distorted speech in noise. After a short session (30–45 min) of repeating sentences, with or without concurrent matching vibrations, we showed comparable mean group improvement of 14–16 dB in Speech Reception Threshold (SRT) in two test conditions, i.e., when the participants were asked to repeat sentences only from hearing and also when matching vibrations on the fingertips were present. This is a very strong effect, if one considers that a 10 dB difference corresponds to a doubling of the perceived loudness. The number of sentence repetitions needed to complete the task was comparable for both types of training. Meanwhile, the mean group SNR for the audio-tactile training (14.7 ± 8.7) was significantly lower (harder) than for the auditory training (23.9 ± 11.8), which indicates a potential facilitating effect of the added vibrations. In addition, both before and after training most of the participants (70–80%) showed better performance (by a mean of 4–6 dB) in speech-in-noise understanding when the audio sentences were accompanied by matching vibrations. This is the same magnitude of multisensory benefit that we reported, with no training at all, in our previous study using the same experimental procedures. After training, performance in this test condition was also best in both groups (SRT ~ 2 dB). The least significant effect of both training types was found in the third test condition, i.e., when participants were repeating sentences accompanied by non-matching tactile vibrations, and performance in this condition was also poorest after training. The results indicate that both types of training may remove some level of difficulty in sound perception, which might enable more effective use of speech inputs delivered via vibrotactile stimulation. We discuss the implications of these novel findings with respect to basic science. In particular, we show that even in adulthood, i.e., long after the classical "critical periods" of development have passed, a new pairing between a certain computation (here, speech processing) and an atypical sensory modality (here, touch) can be established and trained, and that this process can be rapid and intuitive. We further present possible applications of our training program and the SSD for auditory rehabilitation in patients with hearing (and sight) deficits, as well as healthy individuals in suboptimal acoustic situations.</p>
https://doi.org/10.1038/s41598-022-06855-8
oai:zenodo.org:6782741
eng
Zenodo
https://zenodo.org/communities/guestxr
https://zenodo.org/communities/eu
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
Scientific Reports, 12((2022) 12:3206), (2022-02-25)
Effects of training and using an audio-tactile sensory substitution device on speech-in-noise understanding
info:eu-repo/semantics/article
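The abstract above states that the fingertip vibrations correspond to low frequencies extracted from the speech input. A minimal sketch of one plausible way to derive such a drive signal (low-pass filtering plus envelope extraction); the 250 Hz cutoff and sample rate are assumed values, not the published parameters:

```python
# Sketch of deriving a vibrotactile drive signal from speech as described
# above: keep only low frequencies, then take their envelope. The 250 Hz
# cutoff and 16 kHz rate are hypothetical, not the published parameters.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def vibrotactile_drive(speech, fs, cutoff_hz=250.0):
    b, a = butter(4, cutoff_hz / (fs / 2), btype="low")
    low = filtfilt(b, a, speech)     # low-frequency speech content
    return np.abs(hilbert(low))      # envelope to drive the actuator

fs = 16000
t = np.arange(fs) / fs
# toy amplitude-modulated tone standing in for a speech signal
speech = np.sin(2 * np.pi * 120 * t) * (1 + np.sin(2 * np.pi * 3 * t))
drive = vibrotactile_drive(speech, fs)
print(drive.shape, float(drive.max()))
```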
oai:zenodo.org:10606733
2024-02-01T16:25:39Z
openaire
user-guestxr
user-eu
Lécuyer, Anatole
2023-06-15
<p>Slides of the webinar "Studying haptics as a way to support social interactions in XR", held on June 15th, 2023, as part of the GuestXR webinar series "Fostering inclusion and social interaction in XR".</p>
https://doi.org/10.5281/zenodo.10606733
oai:zenodo.org:10606733
Zenodo
https://zenodo.org/communities/guestxr
https://zenodo.org/communities/eu
https://doi.org/10.5281/zenodo.10606732
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
Studying haptics as a way to support social interactions in XR
info:eu-repo/semantics/lecture
oai:zenodo.org:7311279
2022-11-11T02:26:27Z
openaire
user-guestxr
user-eu
GuestXR partners
2022-11-10
<p>General infosheet for the GuestXR project</p>
https://doi.org/10.5281/zenodo.7311279
oai:zenodo.org:7311279
eng
Zenodo
https://zenodo.org/communities/guestxr
https://zenodo.org/communities/eu
https://doi.org/10.5281/zenodo.7311278
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
GuestXR infosheet
info:eu-repo/semantics/conferencePoster
oai:zenodo.org:10606838
2024-02-01T16:42:20Z
user-guestxr
user-eu
GuestXR partners
2024-02-01
<p>The present deliverable builds on the findings of deliverable 5.1 (state of the art and literature review) and also informs deliverable 5.2 (survey and interview design). The objective of this deliverable is to provide guidelines and tools for the operationalisation of the "ethics-by-design" methodology within the GuestXR project. The guidelines and tools contained in the deliverable are intended to be used by members of the consortium or associated projects together with the participation of ethics and responsible innovation researchers. However, researchers in XR and artificial intelligence can also use insights and resources from these guidelines independently. The deliverable begins (section 2) by sketching several relevant approaches that fall under the broader umbrella term of ethics-by-design. These include value-sensitive design as well as Virtual Ethnography and Future Studies. The deliverable then provides a toolkit (section 3) for researchers to use in the implementation of ethics-by-design approaches within design and innovation processes. This toolkit is best implemented in the context of close collaboration with ethics and/or responsible innovation researchers, for example during a research stay or "embedding" of an ethics and/or responsible innovation researcher within a laboratory or relevant scientific group. The toolkit is divided into four potential stages of implementation in the ethics-by-design approach: Prepare, Explore, Understand, and Improve & Implement. Each stage within section 3 then contains several tools that can be utilised in the implementation of the ethics-by-design approach.</p>
https://doi.org/10.5281/zenodo.10606838
oai:zenodo.org:10606838
Zenodo
https://zenodo.org/communities/guestxr
https://zenodo.org/communities/eu
https://doi.org/10.5281/zenodo.10606837
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
D1.1 Guidelines for ethics by design
info:eu-repo/semantics/report
oai:zenodo.org:10606664
2024-02-01T16:13:33Z
openaire
user-guestxr
user-eu
Friedman, Doron
2022-07-06
<p>Slides of the webinar "Deep Neural Networks for Virtual Humans", held on July 6th, 2022, as part of the GuestXR webinar series "Striving for Social Harmony in XR".</p>
https://doi.org/10.5281/zenodo.10606664
oai:zenodo.org:10606664
Zenodo
https://zenodo.org/communities/guestxr
https://zenodo.org/communities/eu
https://doi.org/10.5281/zenodo.10606663
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
Deep Neural Networks for Virtual Humans
info:eu-repo/semantics/lecture
oai:zenodo.org:7311271
2022-11-11T02:26:28Z
openaire
user-guestxr
user-eu
GuestXR partners
2022-11-10
<p>General brochure for the GuestXR project</p>
https://doi.org/10.5281/zenodo.7311271
oai:zenodo.org:7311271
eng
Zenodo
https://zenodo.org/communities/guestxr
https://zenodo.org/communities/eu
https://doi.org/10.5281/zenodo.7311270
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
GuestXR brochure
info:eu-repo/semantics/conferencePoster
oai:zenodo.org:7311263
2022-11-11T02:26:27Z
openaire
user-guestxr
user-eu
GuestXR partners
2022-11-10
<p>General rollup for the GuestXR project</p>
https://doi.org/10.5281/zenodo.7311263
oai:zenodo.org:7311263
eng
Zenodo
https://zenodo.org/communities/guestxr
https://zenodo.org/communities/eu
https://doi.org/10.5281/zenodo.7311262
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
GuestXR rollup
info:eu-repo/semantics/conferencePoster
oai:zenodo.org:10606897
2024-02-01T16:49:55Z
user-guestxr
user-eu
GuestXR partners
2024-02-01
<p>The present document details the GuestXR project's communication and public engagement activities, aiming to reach as many relevant actors as possible and inform them about the activities and results derived from the project. EUT is responsible for designing and implementing the plan, but all consortium partners will be involved in it.</p>
<p>The channels and platforms considered in the communication and public engagement plan are: social media channels, the official website, materials to be distributed at key events, media relations with specialized media outlets, the creation of a community of interest, and the organisation of engagement activities, among others.</p>
<p>This deliverable details the target groups, the key messages addressed to each main group of stakeholders, the activities planned, the requirements, and the process of reporting communication activities.</p>
https://doi.org/10.5281/zenodo.10606897
oai:zenodo.org:10606897
Zenodo
https://zenodo.org/communities/guestxr
https://zenodo.org/communities/eu
https://doi.org/10.5281/zenodo.10606896
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
D6.2 Public Launch of the project, and Communication, and Public Engagement Plans
info:eu-repo/semantics/report
oai:zenodo.org:10566468
2024-01-25T09:31:46Z
user-guestxr
user-eu
Vaessen, Marten
Van der Heijden, Kiki
De Gelder, Beatrice
2023-10-06
<p>A central question in affective science, and one that is relevant for its clinical applications, is how emotions provided by different stimuli are experienced and represented in the brain. Following the traditional view, emotional signals are recognized with the help of emotion concepts that are typically used in descriptions of mental states and emotional experiences, irrespective of the sensory modality. This perspective motivated the search for abstract representations of emotions in the brain, shared across variations in stimulus type (face, body, voice) and sensory origin (visual, auditory). On the other hand, emotion signals, like for example an aggressive gesture, trigger rapid automatic behavioral responses, and this may take place before or independently of full abstract representation of the emotion. This pleads in favor of specific emotion signals that may trigger rapid adaptive behavior only by mobilizing modality- and stimulus-specific brain representations without relying on higher-order abstract emotion categories. To test this hypothesis, we presented participants with naturalistic dynamic emotion expressions of the face, the whole body, or the voice in a functional magnetic resonance imaging (fMRI) study. To focus on automatic emotion processing and sidestep explicit concept-based emotion recognition, participants performed an unrelated target detection task presented in a different sensory modality than the stimulus. By using multivariate analyses to assess neural activity patterns in response to the different stimulus types, we reveal a stimulus-category- and modality-specific brain organization of affective signals. Our findings are consistent with the notion that under ecological conditions emotion expressions of the face, body and voice may have different functional roles in triggering rapid adaptive behavior, even if, when viewed from an abstract conceptual vantage point, they may all exemplify the same emotion. This has implications for a neuroethologically grounded emotion research program that should start from detailed behavioral observations of how face, body, and voice expressions function in naturalistic contexts.</p>
https://doi.org/10.3389/fnins.2023.1132088
oai:zenodo.org:10566468
Zenodo
https://zenodo.org/communities/guestxr
https://zenodo.org/communities/eu
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
Frontiers in Neuroscience, 17, (2023-10-06)
Modality-specific brain representations during automatic processing of face, voice and body expressions
info:eu-repo/semantics/article
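The study above uses multivariate analyses to test whether activity patterns distinguish stimulus categories. A generic, illustrative MVPA sketch with scikit-learn on simulated voxel patterns (not the study's data or its specific analysis):

```python
# Generic multivariate pattern analysis (MVPA) sketch in the spirit of the
# study above: cross-validated classification of stimulus category from
# simulated voxel activity patterns using scikit-learn.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 120, 200
labels = rng.integers(0, 3, n_trials)          # 0=face, 1=body, 2=voice
patterns = rng.standard_normal((n_trials, n_voxels))
patterns[np.arange(n_trials), labels] += 1.5   # inject a weak category signal

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, patterns, labels, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.33)")
```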
oai:zenodo.org:8158545
2023-07-19T02:26:50Z
user-guestxr
user-eu
Elena Aggius-Vella
Daniel-Robert Chebat
Shachar Maidenbaum
Amir Amedi
2023-04-10
<p>V6 is a retinotopic area located in the dorsal visual stream that integrates eye movements with retinal and visuo-motor signals. Despite the known role of V6 in visual motion, it is unknown whether it is involved in navigation and how sensory experiences shape its functional properties. We explored the involvement of V6 in egocentric navigation in sighted and in congenitally blind (CB) participants navigating via an in-house distance-to-sound sensory substitution device (SSD), the EyeCane. We performed two fMRI experiments on two independent datasets. In the first experiment, CB and sighted participants navigated the same mazes. The sighted performed the mazes via vision, while the CB performed them via audition. The CB performed the mazes before and after a training session, using the EyeCane SSD. In the second experiment, a group of sighted participants performed a motor topography task. Our results show that right V6 (rhV6) is selectively involved in egocentric navigation independently of the sensory modality used. Indeed, after training, rhV6 of CB is selectively recruited for auditory navigation, similarly to rhV6 in the sighted. Moreover, we found activation for body movement in area V6, which can putatively contribute to its involvement in egocentric navigation. Taken together, our findings suggest that area rhV6 is a unique hub that transforms spatially relevant sensory information into an egocentric representation for navigation. While vision is clearly the dominant modality, rhV6 is in fact a supramodal area that can develop its selectivity for navigation in the absence of visual experience.</p>
Published in Current Biology.
https://doi.org/10.1016/j.cub.2023.02.025
oai:zenodo.org:8158545
Zenodo
https://zenodo.org/communities/guestxr
https://zenodo.org/communities/eu
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
Current Biology, (2023-04-10)
sensory substitution
congenital blindness
task-specific sensory-independent brain organization
TSSI
V6
spatial cognition
fMRI
egocentric reference space
navigation
vision
Activation of human visual area V6 during egocentric navigation with and without visual experience
info:eu-repo/semantics/article
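The EyeCane used in the study above is described as a distance-to-sound SSD. A toy sketch of one plausible distance encoding (nearer obstacles yield faster beeps); the rate law and ranges are illustrative assumptions, not the device's documented behaviour:

```python
# Toy sketch of a distance-to-sound mapping in the spirit of the EyeCane
# SSD described above. The linear beep-interval law and the 0.1-5 m range
# are assumed for illustration, not the device's documented encoding.
def beep_interval_s(distance_m, min_interval=0.05, max_interval=1.0):
    """Nearer obstacles -> shorter intervals between beeps."""
    d = max(0.1, min(distance_m, 5.0))   # clamp to an assumed 0.1-5 m range
    frac = (d - 0.1) / (5.0 - 0.1)       # 0 (near) .. 1 (far)
    return min_interval + frac * (max_interval - min_interval)

for d in (0.2, 1.0, 2.5, 5.0):
    print(f"{d:.1f} m -> beep every {beep_interval_s(d) * 1000:.0f} ms")
```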
oai:zenodo.org:10213137
2024-01-22T08:08:15Z
user-guestxr
user-eu
Oliva, Ramon
Beacco, Alejandro
Gallego, Jaime
Gallego, Raul
Slater, Mel
2023-11-07
<p>VR United is a virtual reality application that we have developed to support multiple people simultaneously interacting in the same environment. Each person is represented by a virtual body that looks like themselves. Such immersive shared environments have existed and been the subject of research for the past 30 years. Here, we demonstrate how VR United meets criteria for successful interaction, in a case where a journalist from the Financial Times in London interviewed a professor in New York for two hours. The virtual location of the interview was a restaurant, in line with the series of interviews published as "Lunch with the FT." We show that the interview was successful as a substitute for a physically present one. The article based on the interview was published in the Financial Times as normal for the series. We finally consider the future development of such systems, including some implications for immersive journalism.</p>
https://doi.org/10.1109/MCG.2023.3315761
oai:zenodo.org:10213137
Zenodo
https://zenodo.org/communities/guestxr
https://zenodo.org/communities/eu
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
IEEE Computer Graphics and Applications
The Making of a Newspaper Interview in Virtual Reality: Realistic Avatars, Philosophy, and Sushi
info:eu-repo/semantics/conferenceProceedings
oai:zenodo.org:8158802
2023-07-19T02:26:55Z
user-guestxr
user-eu
Jeanne Hecquard
Justine Saint-Aubert
Ferran Argelaguet
Claudio Pacchierotti
Anatole Lécuyer
Marc Macé
2023-05-16
<p>We study the promotion of positive social interactions in VR by fostering empathy with other users present in the virtual scene. For this purpose, we propose using affective haptic feedback to reinforce the connection with another user through the direct perception of their physiological state. We developed a virtual meeting scenario where a human user attends a presentation with several virtual agents. Throughout the meeting, the presenting virtual agent faces various difficulties that alter her stress level. The human user directly feels her stress via two physiologically based affective haptic interfaces: a compression belt and a vibrator, simulating the breathing and the heart rate of the presenter, respectively. We conducted a user study that compared such a "sympathetic" haptic rendering with an "indifferent" one that does not communicate the presenter's stress, remaining constant and relaxed at all times. Results are rather mixed and user-dependent, but they show that sympathetic haptic feedback is globally preferred and can enhance empathy and perceived connection to the presenter. The results promote the use of affective haptics in social VR applications, in which fostering positive relationships plays an important role.</p>
https://doi.org/10.5281/zenodo.8158802
oai:zenodo.org:8158802
Zenodo
https://zenodo.org/communities/guestxr
https://zenodo.org/communities/eu
https://doi.org/10.5281/zenodo.8158801
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
IEEE World Haptics Conference
Virtual Reality
Affective Haptics
Empathy
User Experience
Fostering empathy in social Virtual Reality through physiologically based affective haptic feedback
info:eu-repo/semantics/conferencePaper
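The paper above renders the presenter's stress through two channels: a compression belt simulating breathing and a vibrator simulating heart rate. A hedged sketch of mapping a normalized stress level to those two actuation rates; the resting and maximal rates are assumptions for illustration, not the paper's calibration:

```python
# Hedged sketch of driving the two affective haptic channels described
# above from a normalized stress level. Resting/maximum breathing and
# heart rates are assumed values for illustration only.
def haptic_params(stress):
    """stress in [0, 1] -> (breaths per minute, heartbeats per minute)."""
    s = min(max(stress, 0.0), 1.0)
    breathing_bpm = 12 + s * (30 - 12)  # assumed: 12 calm, 30 stressed
    heart_bpm = 60 + s * (140 - 60)     # assumed: 60 calm, 140 stressed
    return breathing_bpm, heart_bpm

for s in (0.0, 0.5, 1.0):
    br, hr = haptic_params(s)
    print(f"stress={s:.1f}: belt at {br:.0f} breaths/min, "
          f"vibrator at {hr:.0f} beats/min")
```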
oai:zenodo.org:8158322
2023-07-19T02:26:53Z
user-guestxr
user-eu
Amber Maimon
Ophir Netzer
Benedetta Heimler
Amir Amedi
2023-01-11
<p>As neuroscience and rehabilitative techniques advance, age-old questions concerning the visual experience of those who gain sight after blindness, once thought to be philosophical alone, take center stage and become the target of scientific inquiries. In this study, we employ a battery of visual perception tasks to study the unique experience of a small group of children who have undergone vision-restoring cataract removal surgery as part of the Himalayan Cataract Project. We tested their abilities to perceive in three dimensions (3D) using a binocular rivalry task and the Brock string task, perceive visual illusions, use cross-modal mappings between touch and vision, and spatially group based on geometric cues. Some of the children in this study gained a sense of sight for the first time in their lives, having been born with bilateral congenital cataracts, while others suffered late-onset blindness in one eye alone. This study simultaneously supports, yet raises further questions concerning, Hubel and Wiesel's critical periods theory, and provides additional insight into Molyneux's problem, i.e., the ability to quickly correlate vision with touch. We suggest that our findings present a relatively unexplored intermediate stage of 3D vision development. Importantly, we spotlight some essential geometrical perception visual abilities that strengthen the idea that spontaneous geometry intuitions arise independently from visual experience (and education), thus replicating and extending previous studies. We introduce a new, previously unexplored model of testing children who have undergone congenital cataract removal surgery and perform the task <em>via</em> vision. In contrast, previous work has explored these abilities in the congenitally blind <em>via</em> touch. Taken together, our findings provide insight into the development of what is commonly known as the visual system in the visually deprived and highlight the need to further empirically explore an amodal, task-based interpretation of specializations in the development and structure of the brain. Moreover, we propose a novel objective method, based on a simple binocular rivalry task and the Brock string task, for determining congenital (early) vs. late blindness where medical history and records are partial or lacking (e.g., as is often the case in cataract removal cases).</p>
Published in Frontiers in Neuroscience.
https://doi.org/10.3389/fnins.2022.962817
oai:zenodo.org:8158322
Zenodo
https://zenodo.org/communities/guestxr
https://zenodo.org/communities/eu
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
Frontiers in Neuroscience, (2023-01-11)
vision restoration
sensory perception
sensory development
visual perception
cataract removal
visual development
geometry
3D perception
Testing geometry and 3D perception in children following vision restoring cataract-removal surgery
info:eu-repo/semantics/article
oai:zenodo.org:7258075
2022-11-08T13:21:31Z
openaire
user-guestxr
user-eu
GuestXR Partners
2022-10-27
<p>General presentation for the GuestXR project</p>
https://doi.org/10.5281/zenodo.7258075
oai:zenodo.org:7258075
Zenodo
https://zenodo.org/communities/guestxr
https://zenodo.org/communities/eu
https://doi.org/10.5281/zenodo.7258074
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
GuestXR General Presentation
info:eu-repo/semantics/conferencePoster
oai:zenodo.org:7889411
2023-05-03T14:26:42Z
user-guestxr
user-eu
Iddo Yehoshua Wald
Amber Maimon
Lucas Keniger de Andrade Gensas
Noémi Guiot
Meshi Ben Oz
Benjamin W. Corn MD
Amir Amedi
2023-04-28
<p>This work explores utilizing representations of one's physiological breath (embreathment) in immersive experiences for enhancing presence and body awareness. Particularly, embreathment is proposed for reducing claustrophobia and associated negative cognitions such as feelings of restriction, loss of agency, and a sense of suffocation, by enhancing agency and interoception in circumstances where one's ability to act is restricted. The informed design process of an experience designed for this purpose is presented, alongside an experiment employing the experience, evaluating embodiment, presence, and interoception. The results indicate that embreathment leads to significantly greater levels of embodiment and presence than either an entrainment or control condition. In addition, a modest trend was observed in a heartbeat detection task, implying better interoception in the intervention conditions than the control. These findings support the initial assumptions regarding presence and body awareness, paving the way for further evaluation with individuals and situations related to the claustrophobia use case.</p>
https://doi.org/10.1145/3544549.3585897
oai:zenodo.org:7889411
Zenodo
https://zenodo.org/communities/guestxr
https://zenodo.org/communities/eu
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
CHI EA '23, 2023 CHI Conference on Human Factors in Computing Systems
embodiment
presence
respiration
sense of control
agency
embreathment
breathing
negative cognitions
claustrophobia
Breathing based immersive interactions for enhanced agency and body awareness: a claustrophobia motivated study
info:eu-repo/semantics/conferencePaper
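Driving an immersive representation from physiological breath, as in the paper above, requires normalizing the raw respiration signal before it can modulate anything visual. A minimal sketch of that normalization step (raw belt samples to a 0-1 value, e.g. for scaling an avatar's chest); the window length and readings are hypothetical:

```python
# Minimal sketch of normalizing a raw respiration-belt signal into a 0-1
# "breath" value that a VR engine could map to a visual representation
# (e.g. avatar chest scale). The running min/max window is hypothetical.
from collections import deque

class BreathNormalizer:
    def __init__(self, window=600):            # e.g. ~10 s at 60 Hz
        self.samples = deque(maxlen=window)

    def update(self, raw):
        self.samples.append(raw)
        lo, hi = min(self.samples), max(self.samples)
        return 0.5 if hi == lo else (raw - lo) / (hi - lo)

norm = BreathNormalizer()
for raw in (512, 530, 580, 640, 600, 540):     # fake ADC readings
    print(f"raw={raw} -> breath={norm.update(raw):.2f}")
```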
oai:zenodo.org:10606193
2024-02-01T14:36:46Z
openaire
user-guestxr
user-eu
Hecquard, Jeanne
Saint-Aubert, Justine
Pacchierotti, Claudio
Argelaguet, Ferran
Lécuyer, Anatole
Macé, Marc
2023-11-23
<p>Presentation delivered by project partner Jeanne Hecquard during a SECTG cluster meeting targeted at Early Stage Researchers (ESRs) on November 23rd, 2023.</p>
https://doi.org/10.5281/zenodo.10606193
oai:zenodo.org:10606193
Zenodo
https://zenodo.org/communities/guestxr
https://zenodo.org/communities/eu
https://doi.org/10.5281/zenodo.10606192
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
Fostering empathy in social Virtual Reality through physiologically based affective haptic feedback
info:eu-repo/semantics/lecture
oai:zenodo.org:6818127
2022-07-11T13:48:41Z
user-guestxr
user-eu
Amber Maimon
Or Yizhar
Galit Buchs
Benedetta Heimler
Amir Amedi
2022-08-13
<p>The <a href="https://www.sciencedirect.com/topics/neuroscience/phenomenology">phenomenology</a> of the blind has provided an age-old, unparalleled means of exploring the enigmatic link between the brain and mind. This paper delves into the unique phenomenological experience of a man who became blind in adulthood. He subsequently underwent both an Argus II retinal prosthesis implant and training, and extensive training on the EyeMusic visual-to-auditory sensory substitution device (SSD), thereby becoming the first reported case to date of dual proficiency with both devices. He offers a firsthand account of what he considers the great potential of combining sensory substitution devices with visual prostheses as part of a complete visual restoration protocol. While the Argus II retinal prosthesis alone provided him with immediate visual percepts by way of electrically stimulated phosphenes elicited by the device, the EyeMusic SSD requires extensive training from the outset. Yet following the extensive training program with the EyeMusic sensory substitution device, our subject reports that the sensory substitution device allowed him to experience a richer, more complex <a href="https://www.sciencedirect.com/topics/psychology/perceptual-experience">perceptual experience</a> that felt more "second nature" to him, while the Argus II prosthesis (which also requires training) did not allow him to achieve the same levels of automaticity and transparency. Following long-term use of the EyeMusic SSD, our subject reported that visual percepts representing mainly, but not limited to, colors portrayed by the EyeMusic SSD are elicited in association with auditory stimuli, indicating the acquisition of a high level of automaticity. Finally, the case study indicates an additive benefit of the combination of both devices on the user's subjective phenomenological visual experience.</p>
https://doi.org/10.1016/j.neuropsychologia.2022.108305
oai:zenodo.org:6818127
eng
Zenodo
https://zenodo.org/communities/guestxr
https://zenodo.org/communities/eu
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
Neuropsychologia, 173(108305), (2022-08-13)
Sensory substitution
Visual prosthesis
Blindness
Vision
Visual experience
Vision restoration
A case study in phenomenology of visual experience with retinal prosthesis versus visual-to-auditory sensory substitution
info:eu-repo/semantics/article
oai:zenodo.org:10076114
2023-11-06T16:39:57Z
user-guestxr
user-eu
Sayin, Umut
2023-11-06
<p>Real-life situations are hard to replicate in the laboratory and are often discarded during hearing aids optimisation, leading to performance inconsistencies and user dissatisfaction. As a solution, the authors propose a tool set to incorporate real-life conditions in the design, testing and fitting of hearing aids. This tool set includes a spatial audio simulation framework for generating a large number of realistic situations, a machine learning algorithm focused on prominent hearing aids problems and trained with the newly generated data, and a low-cost spatial audio solution for audiological clinics for improved fitting of hearing aids. The current article presents the first results of the spatial audio simulation framework compared to a reference scenario and other existing solutions in the literature. First findings demonstrate that synthesizing impulse responses with arbitrary source directivity, combined with hearing-aid head-related transfer functions, spatial upsampling and Ambisonic-domain optimizations to generate simulated binaural audio, can be a powerful tool for recreating a range of real-life situations for further hearing aids research.</p>
https://doi.org/10.5281/zenodo.10076114
oai:zenodo.org:10076114
Zenodo
https://zenodo.org/communities/guestxr
https://zenodo.org/communities/eu
https://doi.org/10.5281/zenodo.10076113
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
155th Convention of the Audio Engineering Society, New York, 25-27 October 2023
Comparison of synthesized Virtual Sound Environments with validated Hearing Aid experiments
info:eu-repo/semantics/conferencePaper
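The framework above synthesizes binaural audio from hearing-aid HRTFs. A generic sketch of that core operation (convolving a mono source with a left/right head-related impulse-response pair); the HRIRs below are crude synthetic placeholders standing in for measured ones:

```python
# Generic binaural-rendering sketch: convolve a mono source with a
# left/right head-related impulse response (HRIR) pair. The HRIRs below
# are synthetic placeholders standing in for measured hearing-aid HRTFs.
import numpy as np
from scipy.signal import fftconvolve

def render_binaural(mono, hrir_left, hrir_right):
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    return np.stack([left, right], axis=0)  # (2, n) stereo buffer

fs = 48000
rng = np.random.default_rng(2)
mono = rng.standard_normal(fs)              # 1 s placeholder source
# crude placeholder HRIRs: right ear delayed and attenuated
hrir_l = np.zeros(256)
hrir_l[0] = 1.0
hrir_r = np.zeros(256)
hrir_r[20] = 0.6                            # ~0.4 ms interaural delay at 48 kHz
stereo = render_binaural(mono, hrir_l, hrir_r)
print(stereo.shape)
```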
oai:zenodo.org:10605421
2024-02-01T12:11:40Z
openaire
user-guestxr
user-eu
Sayin, Umut
2023-10-27
<p>Poster presented by Umut Sayin during the AESNY 2023 convention, an event organised by the Audio Engineering Society (AES) and held on October 25th-27th at the Jacob Javits Convention Center in New York City, USA.</p>
https://doi.org/10.5281/zenodo.10605421
oai:zenodo.org:10605421
Zenodo
https://zenodo.org/communities/guestxr
https://zenodo.org/communities/eu
https://doi.org/10.5281/zenodo.10605420
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
AES Fall Convention 2023
Comparison of synthesized Virtual Sound Environments with validated Hearing Aid experiments
info:eu-repo/semantics/conferencePoster
oai:zenodo.org:10606866
2024-02-01T16:44:11Z
user-guestxr
user-eu
GuestXR partners
2024-02-01
<p>This deliverable provides a literature review of several areas of research and investigation in the field of ethics, broadly conceived, that are relevant to the topic areas of the GuestXR project. As the GuestXR project brings together research on virtual reality (VR) and VR systems with artificial intelligence (AI) and reinforcement machine learning techniques, as well as neuroscience and social psychology, the state-of-the-art provided in this deliverable includes discussion of the following areas:</p>
<ul>
<li>the ethics of AI;</li>
<li>ethical considerations related to the development and use of VR systems and intelligent virtual environments (IVE), bringing ethical investigations of VR into dialogue with the ethics of social AI systems;</li>
<li>value-sensitive design methodologies, to be iterated within the GuestXR project.</li>
</ul>
<p>The deliverable utilises a semi-systematic review approach. Semi-systematic reviews map dominant themes that have emerged over time, drawing attention to how topics have developed across different research traditions. This allows for the synthesis of multiple perspectives across several areas of research. Using this approach provides a broad overview of the main issues, and helps to make sense of complex and often contested topics. A semi-systematic review can then be used as a decision aid. This review will serve as the knowledge basis for the development of an "ethics-by-design" approach that will be utilised throughout the project.</p>
<p>In relation to the development of AI and machine learning applications, this review loosely adopts a framework proposed by Carly Kind, director of the Ada Lovelace Institute. This framework looks at the development of the rapidly expanding field of AI ethics in three, now overlapping, phases:</p>
<ul>
<li>Abstract guiding principles and norms generally derived from systematic approaches in philosophical ethics and applied ethics;</li>
<li>More technically oriented approaches that look primarily at how interventions and work programmes led by computer scientists and developers can address questions concerning issues like fairness, bias, and accessibility. This approach operationalised many of the more abstract or high-level ethical concerns raised in the first phase, but also tended to reframe them as technical problems requiring technical solutions.</li>
<li>A third phase has shifted the discussion of ethical AI towards more social and political questions relating to questions of justice, including "social justice, racial justice, economic justice, and environmental justice" (Kind, 2020). Within this phase, technologies are considered embedded in and co-producing socio-technical systems. As such, it is also concerned with questions relating to power and structure as well as high-level ethical concerns, legal constraints and technical "fixes". The practical focus in this approach is on co-creation processes that involve affected individuals and communities at early stages of technological development and innovation processes, giving them voice and capacity for action within the design and application of technical systems.</li>
</ul>
<p>The third phase is also characterised by the further development of Responsible Innovation and “by-design” approaches. These approaches also characterise the approach to ethics within the GuestXR project.</p>
<p>The European Commission’s Ethical Guidelines for Trustworthy AI and the related seven requirements produced by the High-Level Expert Group on Artificial Intelligence (AI HLEG) nonetheless remain extremely important foundational guidelines for the development of ethics-by-design methodologies and practices within the GuestXR project.</p>
<p>Based on fundamental rights and ethical principles, the Guidelines list seven key requirements that AI systems should meet in order to be trustworthy:</p>
<ol>
<li>Human agency and oversight</li>
<li>Technical robustness and safety</li>
<li>Privacy and Data governance</li>
<li>Transparency</li>
<li>Diversity, non-discrimination and fairness</li>
<li>Societal and environmental well-being</li>
<li>Accountability</li>
</ol>
<p>These requirements are based on the fundamental rights elaborated in the European treaties and other foundational documents as well as foundational ethical principles.</p>
<p>Ethical and social considerations of VR systems are closely intertwined with technical, scientific and indeed philosophical questions about embodiment, immersion and presence within VR environments. Thus, the ethical examination of VR systems and environments has evolved alongside technical, psychological and sociological investigations. An initial focus has been on potential harms to individuals or vulnerable groups in VR environments, and on how to prevent them. This focus has addressed questions concerning discrimination, stereotyping, anti-social behaviour such as sexual harassment, as well as accessibility relating to equipment and the affordances within VR environments.</p>
<p>As the potential use of VR environments in various arenas of social life has increased and with many companies now eyeing the commercial potentials of these technologies, the nascent field of VR ethics has followed the development of AI ethics in starting to increasingly take into consideration broader sociopolitical questions relating to the governance of immersive virtual environments.</p>
<p>As the possibilities for individuals to spend greater amounts of time within VR environments (for professional, leisure, and commercial activities) continues to increase, the field of VR ethics has also begun to explore how habits and affinities developed within VR environments might transfer into physical reality. This has opened up further possibilities for the use of VR, such as in therapeutic contexts. The growing potential of VR and the expanding scope of use applications has made the necessity for serious ethical consideration and upstream value-sensitive intervention increasingly important.</p>
<p>The GuestXR project, for this reason, has committed to the implementation of ethics-by-design approaches within the scientific work programme of the project. This deliverable will be followed by further guidelines for implementation of ethics-by-design methods and processes. These guidelines will therefore build, at least in part, upon the initial review of ethical and related literature in this deliverable.</p>
https://doi.org/10.5281/zenodo.10606866
oai:zenodo.org:10606866
Zenodo
https://zenodo.org/communities/guestxr
https://zenodo.org/communities/eu
https://doi.org/10.5281/zenodo.10606865
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
D5.1 Ethics State-of-the-Art and Literature Review
info:eu-repo/semantics/report
oai:zenodo.org:10605255
2024-02-01T12:05:33Z
openaire
user-guestxr
user-eu
Küçüktütüncü, Esen
Oliva, Ramon
Slater, Mel
2023-07-28
<p>Poster presented by Esen Küçüktütüncü, PhD student at Event Lab Barcelona, during the International Summer School on eXtended Reality Technology and eXperience, held on July 18th-21st, 2023, in Madrid (Spain).</p>
https://doi.org/10.5281/zenodo.10605255
oai:zenodo.org:10605255
Zenodo
https://zenodo.org/communities/guestxr
https://zenodo.org/communities/eu
https://doi.org/10.5281/zenodo.10605254
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
International Summer School on eXtended Reality Technology and eXperience
Preliminary Results for Small Groups in VRUnited
info:eu-repo/semantics/conferencePoster
oai:zenodo.org:10606870
2024-02-01T16:45:25Z
user-guestxr
user-eu
GuestXR partners
2024-02-01
<p>The present deliverable builds on the findings of deliverable 5.1 (state-of-the-art and literature review) and is also informed by deliverable 1.1 (guidelines for ethics by design). Its objective is to support D1.1 by providing an initial overview of the interview and survey protocol to be used in implementing the ethics-by-design methodology. It is important to point out that the questions contained within this document have been extracted from existing batteries (see Bechmann et al., 2020; AoIR, 2019; Institute for the Future, 2018) and will need to be tailored more specifically to the GuestXR context. The interview questions contained in the deliverable will be modified and used by the ethics and responsible innovation researchers in conversation with project partners, as well as by project partners within their own meetings to encourage ethical reflection. The deliverable also outlines the “7 questions” technique, pioneered by Shell in its scenario-planning process, and lists questions that can likewise be adapted for use within the context of GuestXR. In addition, this document contains the protocol for a visioning/scenario workshop, which will be used in place of a standard survey to engage project partners in ethical questions about GuestXR during the next face-to-face consortium meeting in June 2023. The protocol is currently under development and will be sent to all partners and added to this document before the workshop takes place. As with D1.1, the script and survey contained herein will be implemented by ethics and/or responsible innovation researchers in close collaboration with project partners, for example during a research stay or “embedding” of such a researcher within a laboratory or relevant scientific group.</p>
https://doi.org/10.5281/zenodo.10606870
oai:zenodo.org:10606870
Zenodo
https://zenodo.org/communities/guestxr
https://zenodo.org/communities/eu
https://doi.org/10.5281/zenodo.10606869
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
D5.2 Survey and interview design
info:eu-repo/semantics/report
oai:zenodo.org:10606904
2024-02-01T16:51:23Z
user-guestxr
user-eu
GuestXR partners
2024-02-01
<p>The present confidential document details the GuestXR project’s dissemination and exploitation activities, which aim to reach as many relevant actors as possible and inform them about the activities and results derived from the project.</p>
<p>The channels and platforms considered in the dissemination report include dissemination tools and activities to reach academia and industry, such as scientific publications, the organisation of engagement activities, and clustering activities, among others.</p>
<p>Finally, the innovation management, knowledge transfer and exploitation task detailed in this deliverable explains how the methodology has been planned in two distinct phases: (a) an initial diagnosis, and (b) innovation sessions and IP & BP conceptualisation, which will be developed through two pathways: the Innovation Management pathway and the IPR & Exploitation pathway.</p>
https://doi.org/10.5281/zenodo.10606904
oai:zenodo.org:10606904
Zenodo
https://zenodo.org/communities/guestxr
https://zenodo.org/communities/eu
https://doi.org/10.5281/zenodo.10606903
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
D6.3 Dissemination and Exploitation Plans
info:eu-repo/semantics/report
oai:zenodo.org:10606889
2024-02-01T16:48:39Z
user-guestxr
user-eu
GuestXR partners
2024-02-01
<p>This deliverable reports on the development of the GuestXR Visual Corporate Image (VCI) and project webpage. The document includes the project’s logo, Word templates for project activities, a PowerPoint template for presentations, and a visual identity standards guidebook defining and outlining how to use identifying elements pertaining to the project, such as logos, colours, fonts and graphic elements.</p>
<p>In addition, this deliverable describes how the GuestXR project webpage was created, summarising its structure, content and functionalities.</p>
https://doi.org/10.5281/zenodo.10606889
oai:zenodo.org:10606889
Zenodo
https://zenodo.org/communities/guestxr
https://zenodo.org/communities/eu
https://doi.org/10.5281/zenodo.10606888
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
D6.1 Project logo and website
info:eu-repo/semantics/report
oai:zenodo.org:7544844
2023-01-18T14:26:40Z
user-guestxr
user-eu
Yann Moullec
Justine Saint-Aubert
Julien Manson
Melanie Cogne
Anatole Lecuyer
2022-11-17
<p>In this paper we explore the multi-sensory display of self-avatars' physiological state in Virtual Reality (VR), as a means to enhance the connection between the users and their avatar. Our approach consists of designing and combining a coherent set of visual, auditory and haptic cues to represent the avatar's cardiac and respiratory activity. These sensory cues are modulated depending on the avatar's simulated physical exertion. We notably introduce a novel haptic technique to represent respiratory activity using a compression belt simulating abdominal movements that occur during a breathing cycle. A series of experiments was conducted to evaluate the influence of our multi-sensory rendering techniques on various aspects of the VR user experience, including the sense of virtual embodiment and the sensation of effort during a walking simulation. A first study (N=30) that focused on displaying cardiac activity showed that combining sensory modalities significantly enhances the sensation of effort. A second study (N=20) that focused on respiratory activity showed that combining sensory modalities significantly enhances the sensation of effort as well as two sub-components of the sense of embodiment. Interestingly, the user's actual breathing tended to synchronize with the simulated breathing, especially with the multi-sensory and haptic displays. A third study (N=18) that focused on the combination of cardiac and respiratory activity showed that combining both rendering techniques significantly enhances the sensation of effort. Taken together, our results promote the use of our novel breathing display technique and multi-sensory rendering of physiological parameters in VR applications where effort sensations are prominent, such as for rehabilitation, sport training, or exergames.</p>
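<p>For illustration, the sketch below shows one way a simulated exertion level could jointly drive a heartbeat vibration pulse and the pressure of a breathing compression belt. This is a minimal sketch with illustrative parameter ranges and hypothetical function names, not the paper's actual implementation.</p>
<pre>
import math

def physiological_cues(exertion, t):
    """Map a simulated exertion level (0..1) and time t (seconds) to cue
    parameters for heart and breathing displays. Ranges are illustrative
    assumptions: 60-160 bpm for heart rate, 12-30 breaths/min for respiration."""
    heart_rate_hz = (60 + 100 * exertion) / 60.0    # beats per second
    breath_rate_hz = (12 + 18 * exertion) / 60.0    # breaths per second

    heart_phase = (t * heart_rate_hz) % 1.0         # 0..1 within each beat
    breath_phase = (t * breath_rate_hz) % 1.0       # 0..1 within each breath

    # Short vibrotactile pulse at the start of each cardiac cycle
    heart_pulse = 1.0 if heart_phase < 0.1 else 0.0
    # Smooth inhale/exhale curve driving the compression belt (0..1)
    belt_pressure = 0.5 * (1 - math.cos(2 * math.pi * breath_phase))
    return heart_pulse, belt_pressure

# Sample the cues at 10 Hz for one second of moderate effort
for i in range(10):
    pulse, pressure = physiological_cues(exertion=0.5, t=i / 10.0)
    print(f"t={i / 10.0:.1f}s pulse={pulse:.0f} belt={pressure:.2f}")
</pre>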
https://doi.org/10.1109/TVCG.2022.3203120
oai:zenodo.org:7544844
Zenodo
https://zenodo.org/communities/guestxr
https://zenodo.org/communities/eu
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
IEEE Transactions on Visualization and Computer Graphics, 28(11), 3596 - 3606, (2022-11-17)
IEEE Transactions on Visualization and Computer Graphics
Avatar
multi-sensory display
haptic
physiological computing
effort sensation
embodiment
cardiac
respiration
Multi-sensory display of self-avatar's physiological state: virtual breathing and heart beating can increase sensation of effort in VR
info:eu-repo/semantics/article
oai:zenodo.org:8405321
2024-01-22T08:12:18Z
user-guestxr
user-eu
Alon Shoa
Ramon Oliva
Mel Slater
Doron Friedman
2023-10-04
<p>It is becoming increasingly easy to set up multi-user virtual reality sessions, and these can become viable alternatives to video conferencing for events such as international conferences. Moreover, it is possible to enhance such events with automated virtual humans who may participate in the discussion. This paper presents the behind-the-scenes work of a panel session titled “Is virtual reality genuine reality?”, held during a physical symposium, “XR for the people,” in June 2022. The panel featured a virtual Albert Einstein, based on a large language model (LLM), as a panelist, alongside three international experts in a live conference panel discussion. The VR discussion was broadcast live on stage, and a moderator was able to communicate with the live audience, the virtual-world participants, and the virtual agent (Einstein). We provide lessons learned from the implementation and the live production, and discuss the potential and pitfalls of using LLM-based virtual humans for multi-user VR in live hybrid events.</p>
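<p>As a rough illustration of the kind of persona-conditioned dialogue loop such a virtual panelist could run, consider the sketch below. The function llm_complete, the prompt format, and the canned reply are hypothetical stand-ins, not the system described in the paper.</p>
<pre>
# Minimal sketch of an LLM-driven persona agent for a VR panel.
# `llm_complete` is a hypothetical stand-in for a real completion API.

PERSONA = (
    "You are Albert Einstein, taking part in a panel discussion titled "
    "'Is virtual reality genuine reality?'. Answer in character, concisely."
)

def llm_complete(prompt: str) -> str:
    # Hypothetical backend; returns a canned line so the sketch runs offline.
    return "Reality is merely an illusion, albeit a very persistent one."

def einstein_reply(history: list[str], utterance: str) -> str:
    """Build a persona-conditioned prompt from the running transcript and
    return the agent's next line of dialogue."""
    transcript = "\n".join(history + [f"Moderator: {utterance}", "Einstein:"])
    reply = llm_complete(f"{PERSONA}\n\n{transcript}")
    history.extend([f"Moderator: {utterance}", f"Einstein: {reply}"])
    return reply

history: list[str] = []
print(einstein_reply(history, "Is virtual reality genuine reality?"))
</pre>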
https://doi.org/10.1145/3570945.3607317
oai:zenodo.org:8405321
Zenodo
https://zenodo.org/communities/guestxr
https://zenodo.org/communities/eu
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
23rd ACM International Conference on Intelligent Virtual Agents
VR
AI
Persona Reconstruction
GPT3
Sushi with Einstein: Enhancing Hybrid Live Events with LLM-Based Virtual Humans
info:eu-repo/semantics/conferencePaper
oai:zenodo.org:7863523
2023-04-25T14:26:42Z
user-guestxr
user-eu
Justine Saint-Aubert
Ferran Argelaguet
Marc J.-M. Macé
Claudio Pacchierotti
Amir Amedi
Anatole Lécuyer
2023-02-20
<p>In Virtual Reality (VR), a growing number of applications involve verbal communication with avatars, such as in teleconferencing, entertainment, virtual training, and social networks. In this context, our paper investigates how tactile feedback consisting of vibrations synchronized with speech could influence aspects of VR social interactions such as persuasion, co-presence and leadership. We conducted two experiments where participants embodied a first-person avatar attending a virtual meeting in immersive VR. In the first experiment, participants listened to two speaking virtual agents, and the speech of one agent was augmented with vibrotactile feedback. Interestingly, the results show that such vibrotactile feedback could significantly improve not only the perceived co-presence but also the persuasiveness and leadership of the haptically augmented agent. In the second experiment, participants were asked to speak to two agents, and their own speech was either augmented or not with vibrotactile feedback. The results show that vibrotactile feedback again had a positive effect on co-presence, and that participants perceived their speech as more persuasive in the presence of haptic feedback. Taken together, our results demonstrate the strong potential of haptic feedback for supporting social interactions in VR, and pave the way for novel uses of vibrations in a wide range of applications in which verbal communication plays a prominent role.</p>
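<p>One plausible way to synchronize vibrations with speech, sketched below, is to drive the actuator with a frame-wise RMS energy envelope of the speech waveform. The frame length and normalisation here are illustrative assumptions, not the parameters used in the paper.</p>
<pre>
import numpy as np

def speech_to_vibration(samples: np.ndarray, sr: int, frame_ms: int = 20) -> np.ndarray:
    """Map a mono speech waveform to a vibrotactile amplitude envelope by
    taking the RMS energy of short frames (one value per frame, 0..1)."""
    frame = int(sr * frame_ms / 1000)
    n = len(samples) // frame
    rms = np.sqrt(np.mean(samples[:n * frame].reshape(n, frame) ** 2, axis=1))
    return rms / max(rms.max(), 1e-9)  # normalised actuator drive signal

# Example with a synthetic 1-s "speech-like" signal (amplitude-modulated noise)
sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
speech = np.random.randn(sr) * (0.5 + 0.5 * np.sin(2 * np.pi * 3 * t))
print(speech_to_vibration(speech, sr)[:10])
</pre>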
https://doi.org/10.5281/zenodo.7863523
oai:zenodo.org:7863523
Zenodo
https://zenodo.org/communities/guestxr
https://zenodo.org/communities/eu
https://doi.org/10.5281/zenodo.7863522
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
IEEE Conference on Virtual Reality and 3D User Interfaces
Audio
Haptic
Vibrotactile feedback
Speech
Co-Presence
Leadership
Persuasion
Persuasive Vibrations: Effects of Speech-Based Vibrations on Persuasion, Leadership, and Co-Presence During Verbal Communication in VR
info:eu-repo/semantics/conferencePaper
oai:zenodo.org:10606471
2024-02-01T16:10:30Z
openaire
user-guestxr
user-eu
Meacham, Darian
Shanley, Dani
2022-06-22
<p>Slides of the webinar "Getting Real about Ethics in Virtual Environments", held on June 22nd, 2022, as part of the GuestXR webinar series "Striving for Social Harmony in XR".</p>
https://doi.org/10.5281/zenodo.10606471
oai:zenodo.org:10606471
Zenodo
https://zenodo.org/communities/guestxr
https://zenodo.org/communities/eu
https://doi.org/10.5281/zenodo.10606470
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
Getting Real about Ethics in Virtual Environments
info:eu-repo/semantics/lecture
oai:zenodo.org:8158504
2023-07-19T02:26:54Z
user-guestxr
user-eu
Shira Shvadron
Adi Snir
Amber Maimon
Or Yizhar
Sapir Harel
Keinan Poradosu
Amir Amedi
2023-03-02
<p>Current advancements in both technology and science allow us to manipulate our sensory modalities in new and unexpected ways. In the present study, we explore the potential of expanding what we perceive through our natural senses by utilizing a visual-to-auditory sensory substitution device (SSD), the EyeMusic, an algorithm that converts images to sound. The EyeMusic was initially developed to allow blind individuals to create a spatial representation of information arriving from a video feed at a slow sampling rate. In this study, we aimed to use the EyeMusic to convey information from the areas outside the visual field of sighted individuals. In this initial proof-of-concept study, we tested the ability of sighted subjects to combine visual input with surrounding auditory sonification of the visual scene. Participants were tasked with recognizing and adequately placing stimuli, using sound to represent the areas outside the standard human visual field. They were asked to report the shapes’ identities as well as their spatial orientation (front/right/back/left), requiring combined visual (90° frontal) and auditory input (the remaining 270°) for successful performance of the task; content in both vision and audition was presented in a sweeping clockwise motion around the participant. We found that participants performed well above chance level after a brief 1-h online training session and one on-site training session averaging 20 min, and in some cases could even draw a 2D representation of the perceived image. Participants could also generalize, recognizing new shapes they were not explicitly trained on. Our findings provide an initial proof of concept indicating that sensory augmentation devices and techniques can potentially be used in combination with natural sensory information in order to expand the natural fields of sensory perception.</p>
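<p>The clockwise sweep described above amounts to an angle-to-time, height-to-pitch mapping, sketched below. The sweep duration, frequency range and log-spaced pitch axis are illustrative assumptions, not the EyeMusic's actual musical mapping.</p>
<pre>
def sweep_sonification(points, sweep_s=2.0, f_low=220.0, f_high=880.0):
    """Map points around the listener to (onset_time_s, pitch_hz) events:
    azimuth (degrees, clockwise from front) -> onset within the sweep,
    normalised height (0..1, bottom..top) -> frequency on a log scale."""
    events = []
    for azimuth_deg, height in points:
        onset = (azimuth_deg % 360) / 360.0 * sweep_s
        freq = f_low * (f_high / f_low) ** height  # log-spaced pitch axis
        events.append((round(onset, 3), round(freq, 1)))
    return sorted(events)

# A diagonal line of points behind and to the left of the listener
shape = [(200 + 10 * i, i / 5) for i in range(6)]
print(sweep_sonification(shape))
</pre>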
Published in Frontiers in Human Neuroscience.
https://doi.org/10.5281/zenodo.8158504
oai:zenodo.org:8158504
Zenodo
https://zenodo.org/communities/guestxr
https://zenodo.org/communities/eu
https://doi.org/10.5281/zenodo.8158503
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
Frontiers in Human Neuroscience, (2023-03-02)
spatial perception
visual-auditory
sensory substitution
sensory substitution device (SSD)
visual-spatial perception
auditory spatial perception
multisensory spatial perception
multisensory perception
Shape detection beyond the visual field using a visual-to-auditory sensory augmentation device
info:eu-repo/semantics/article
oai:zenodo.org:10606015
2024-02-01T14:07:17Z
openaire
user-guestxr
user-eu
Sayin, Umut
Spanlang, Bernhard
2022-06-23
<p>Presentation delivered by Umut Sayin, partner from Eurecat, and Bernhard Spanlang, from Virtual Bodyworks, during the Sonar Festival 2023.</p>
https://doi.org/10.5281/zenodo.10606015
oai:zenodo.org:10606015
Zenodo
https://zenodo.org/communities/guestxr
https://zenodo.org/communities/eu
https://doi.org/10.5281/zenodo.10606014
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
A Machine Learning Agent for eXtended Reality
info:eu-repo/semantics/lecture
oai:zenodo.org:8019916
2023-06-09T14:26:49Z
openaire
user-guestxr
user-eu
Smekal, Vojtěch
Poyo Solanas, Marta
Szucs, Tamas
Lappe, Alexander
Giese, Martin
de Gelder, Beatrice
2023-06-09
<p>Presented at the Vision Sciences Society 2023 meeting in St. Pete Beach, FL, USA. </p>
https://doi.org/10.5281/zenodo.8019916
oai:zenodo.org:8019916
eng
Zenodo
https://zenodo.org/communities/guestxr
https://zenodo.org/communities/eu
https://doi.org/10.5281/zenodo.8019915
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
Computing a unique neural fingerprint of human bodily expressions and actions
info:eu-repo/semantics/conferencePoster
oai:zenodo.org:6782779
2022-06-30T13:50:39Z
user-guestxr
user-eu
Roni Arbel
Benedetta Heimler
Amir Amedi
2022-03-14
<p>Unlike sighted individuals, congenitally blind individuals have little to no experience with face shapes. Instead, they rely on non-shape cues, such as voices, to perform character identification. The extent to which face-shape perception can be learned in adulthood via a different sensory modality (i.e., not vision) remains poorly explored. We used a visual-to-auditory Sensory Substitution Device (SSD) that enables conversion of visual images to the auditory modality while preserving their visual characteristics. Expert SSD users were systematically taught to identify cartoon faces via audition. Following a tailored training program lasting ~ 12 h, congenitally blind participants successfully identified six trained faces with high accuracy.</p>
<p>Furthermore, they effectively generalized their identification to the untrained, inverted orientation of the learned faces. Finally, after completing the extensive 12-h training program, participants learned six new faces within 2 additional hours of training, suggesting internalization of face-identification processes. Our results document for the first time that facial features can be processed through audition, even in the absence of visual experience across the lifespan. Overall, these findings have important implications for both non-visual object recognition and visual rehabilitation practices and prompt the study of the neural processes underlying auditory face perception in the absence of vision.</p>
https://doi.org/10.1038/s41598-022-08187-z
oai:zenodo.org:6782779
Zenodo
https://zenodo.org/communities/guestxr
https://zenodo.org/communities/eu
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
Scientific Reports, 12(Article number: 4330 (2022)), (2022-03-14)
Congenitally blind adults can learn to identify face-shapes via auditory sensory substitution and successfully generalize some of the learned features
info:eu-repo/semantics/article
oai:zenodo.org:10606453
2024-02-01T15:34:51Z
openaire
user-guestxr
user-eu
Slater, Mel
2022-05-25
<p>Slides of the webinar "The affordances and problems of meeting in virtual reality", held on May 25th, 2022, as part of the series "Striving for Social Harmony in XR".</p>
https://doi.org/10.5281/zenodo.10606453
oai:zenodo.org:10606453
Zenodo
https://zenodo.org/communities/guestxr
https://zenodo.org/communities/eu
https://doi.org/10.5281/zenodo.10606452
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
Affordances and problems of meeting in virtual reality
info:eu-repo/semantics/lecture
oai:zenodo.org:10809512
2024-03-12T14:28:39Z
openaire
user-guestxr
Küçüktütüncü, Esen
Oliva, Ramon
Slater, Mel
2024-03-08
<p>Poster presented by Esen Küçüktütüncü, PhD candidate from the Event Lab and the Institute of Neurosciences of the University of Barcelona, during the 2<sup>nd</sup> edition of the spring school on Social XR, organised by the Research Institute for Mathematics & Computer Science in the Netherlands (CWI), which took place from March 4<sup>th</sup> to 8<sup>th</sup>, 2024, in Amsterdam (the Netherlands).</p>
https://doi.org/10.5281/zenodo.10809512
oai:zenodo.org:10809512
Zenodo
https://zenodo.org/communities/guestxr
https://doi.org/10.5281/zenodo.10809511
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
CWI Spring School on Social XR
Preliminary Results on Conflict Resolution in VR
info:eu-repo/semantics/conferencePoster
oai:zenodo.org:8157962
2023-07-19T02:26:54Z
user-guestxr
user-eu
Dan Pollak
Jonathan Giron
Doron Friedman
2023-05-01
<p>Inceptor is a tool designed for non-expert users to develop social VR scenarios that include virtual humans. The tool takes input through a text-based interface, processes it with natural language processing models, and generates complete 3D/VR Unity scenarios as output. The tool is currently based on the Rocketbox asset library. We release the tool as an open-source project in order to empower the extended reality research community.</p>
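<p>A minimal sketch of such a text-to-scenario pipeline is shown below. The JSON spec format, the command strings and the asset name are hypothetical illustrations, not Inceptor's actual interfaces or real Rocketbox identifiers.</p>
<pre>
import json

def nlp_to_spec(description: str) -> dict:
    # Stand-in for the NLP step: returns a canned scenario spec so the
    # sketch runs offline. A real system would query a language model.
    return {
        "environment": "office",
        "agents": [
            {"asset": "business_male_01", "position": [0, 0, 2],
             "behaviour": "greets the user, then sits down"},
        ],
    }

def spec_to_commands(spec: dict) -> list[str]:
    """Flatten the scenario spec into instantiation commands that a
    Unity-side loader could interpret (hypothetical command format)."""
    cmds = [f"LOAD_ENV {spec['environment']}"]
    for agent in spec["agents"]:
        x, y, z = agent["position"]
        cmds.append(f"SPAWN {agent['asset']} AT {x} {y} {z}")
        cmds.append(f'SCRIPT {agent["asset"]} "{agent["behaviour"]}"')
    return cmds

spec = nlp_to_spec("A colleague greets me in an office, then sits down.")
print(json.dumps(spec, indent=2))
print("\n".join(spec_to_commands(spec)))
</pre>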
https://doi.org/10.1109/VRW58643.2023.00102
oai:zenodo.org:8157962
Zenodo
https://zenodo.org/communities/guestxr
https://zenodo.org/communities/eu
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
2023 IEEE Conference on Virtual Reality and 3D User Interfaces
Solid modeling
Three-dimensional displays
Extended reality
Conferences
User interfaces
Natural language processing
Libraries
Inceptor: An Open Source Tool for Automated Creation of 3D Social Scenarios
info:eu-repo/semantics/conferencePaper