Still wrestling with mixed reality and what it might or might not offer in an academic / learning setting. With DICATS we had the experience of viewing each other’s sessions in 2D and in 3D 360 for some or all of the hives / regions, and this is part of an ongoing conversation I’m having with colleagues. When I have facilitated international conferences online in 2D before, using a virtual classroom, it was different: most times I would be sitting with a device and some kind of webcam or small camera, so I would probably appear as a thumbnail video, if the video was on at all.
But DICATS was different. In London we had partial 3D 360 alongside 2D online for the Moscow section. But how do you create empathy with both the virtual presenters and the audience – where to stand, what to do, what to talk about, which words to use?
I attended a session last week about academic presentations by Dr Gail Forey who specialises in written and spoken workplace discourse, systemic functional linguistics, discourse analysis, language education and teaching development.
So, to dissect this further:
So I was talking to Varvara, but in physical reality I was talking to a roll-up screen onto which Zoom was projected, while attendees in the room watched the screen. Our AV team livestreamed from the back, and remote attendees viewed via YouTube, seeing whatever our AV team chose to focus on (we discussed this at length for each hive when planning DICATS because of the added complications of 360). So fairly standard stuff, and even if the 360 camera had been streaming at this point (I don’t think I was aware it wasn’t), it still couldn’t bring too much alive because of the projected image from Zoom. Varvara in reality, and Sofia in a sense, were physically closer to the computer on the left and to my phone, not to the projected image, but our minds tell us it is more natural to talk to the larger image, especially in front of an audience – and to use gestures both in the direction of the projected image and as cues to the audiences.
This is Natalie Carlton from Philadelphia on the 360 livestream, but played back here as a 2D video.
Is it possible to present to virtual and live audiences in different ways (and realities) at the same time?
These are areas that Gail and others will be researching in more depth particularly interpersonal meaning: https://researchportal.bath.ac.uk/en/publications/interpersonal-meaning-and-audience-engagement-in-academic-present
Depending on the future of physical/virtual realities and mixed reality presenting – for now let’s say within one hour of a day but across multiple time zones, with playback possible at any time in the future – will it be just one person who brings a topic alive and creates meaning? What will students be looking for? Will it matter if they don’t find it in a live session but can find it in a recorded one? Or will trying to play it back within a limited time, in some unspecified context, make it less meaningful anyway? Presenters may move towards presenting as holograms, appearing at the edge of a physical surface or piece of furniture – a strange option, but probably not too far in the future.
Maybe sonic augmented reality will make this easier to figure out and make it more meaningful – it’s an area of research in CVSSP here at Surrey at the moment: