Capturing footage for creating virtual scenarios or environments – brief update

When I was last at Surrey in 2008, I was involved with various projects in Second Life. Now, with our virtual and augmented reality network, this is relevant because Second Life still exists, but with the improvements to VR hardware and the development of consumer/prosumer video capture hardware and software (covered in this brilliant summary), there are now options for developing a 3D 360 virtual learning environment or 3D 360 virtual learning scenarios. Smartphone development has also produced a preference for taking part in immersive experiences whilst viewing on a smaller screen, or in augmented experiences such as Pokemon Go.

On the negative side, the pressures on higher education in the UK mean that existing resources are stretched further in making the most of these opportunities. Second Life was free to use and there were open source alternatives. Some basic 3D creation could be done inside the Second Life app, which led to faster adoption as resources, items and processes were shared amongst the education community. For people who didn't play virtual games, similar knowledge was shared about how to navigate around and understand 3D space. Psychology, psychiatry and other scenarios evolved to help learners understand something about their emotional responses whilst inside a virtual environment, and artificial intelligence scenarios and avatars developed – notably through the JISC PREVIEW project. So the barriers were lowered, but higher quality scenarios and infrastructure which enhanced the immersion were often contracted out to 3D designers, developers and so on.

Where are we now?

Capturing content

Taking photos with a 2D, and possibly 360-capable, camera or smartphone, then using software to develop 3D images from the photographs.
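
For anyone wanting to experiment with this step, here is a minimal sketch of driving a photogrammetry tool from a script. It assumes the open source Meshroom (AliceVision) pipeline is installed and that its meshroom_batch command is on the PATH; the folder paths are placeholders and flag names can vary between Meshroom versions.

```ts
// Sketch: batch photogrammetry over a folder of overlapping photos.
// Assumes Meshroom (AliceVision) is installed and its meshroom_batch CLI is on the PATH;
// flag names may differ between Meshroom versions.
import { spawnSync } from "node:child_process";

function photosToMesh(photoDir: string, outDir: string): void {
  const result = spawnSync(
    "meshroom_batch",
    ["--input", photoDir, "--output", outDir],
    { stdio: "inherit" } // stream Meshroom's own progress output to the console
  );
  if (result.status !== 0) {
    throw new Error(`meshroom_batch exited with status ${result.status}`);
  }
}

// Example (hypothetical paths):
// photosToMesh("./lab-photos", "./lab-mesh");
```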

Recording video with a 2D, and possibly 360-capable, camera or smartphone, and using software to develop 3D moving images. After much discussion we will be purchasing a 3D 360 camera with up to 8K resolution, which will hopefully future-proof the scenarios we want to build, at least for a while. The majority of what we develop will probably be 4K.
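
As an illustration of the software step, here is a rough sketch of reprojecting raw dual-fisheye footage (which many 360 cameras produce) into the equirectangular format most players expect, using ffmpeg's v360 filter. The file names are placeholders, and some cameras need different projection or field-of-view settings, or their own stitching software, first.

```ts
// Sketch: reproject dual-fisheye 360 footage to equirectangular with ffmpeg's v360 filter
// (available in ffmpeg 4.3+). Projection names and field-of-view options vary by camera.
import { spawnSync } from "node:child_process";

function toEquirectangular(inputFile: string, outputFile: string): void {
  const result = spawnSync(
    "ffmpeg",
    [
      "-i", inputFile,
      "-vf", "v360=input=dfisheye:output=equirect", // dual fisheye -> equirectangular
      "-c:a", "copy",                               // keep the audio track as-is
      outputFile,
    ],
    { stdio: "inherit" }
  );
  if (result.status !== 0) {
    throw new Error(`ffmpeg exited with status ${result.status}`);
  }
}

// Example (hypothetical file names):
// toEquirectangular("camera-raw.mp4", "scenario-equirect.mp4");
```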

Using 3D 360 cameras to record footage, and using software to convert or edit the 3D still or moving images into formats that can be viewed through virtual reality hardware such as goggles, or through a web virtual reality browser.
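
For the web viewing route, here is a small sketch using the three.js library to display an equirectangular still inside the browser by mapping it onto the inside of a sphere. The image path is a placeholder, and a video texture would work the same way for moving footage.

```ts
// Sketch: view an equirectangular 360 image in the browser with three.js by mapping
// it onto the inside of a large sphere. The image path is a placeholder.
import * as THREE from "three";

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1100);
const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
renderer.xr.enabled = true; // allows an immersive WebXR session to be started later
document.body.appendChild(renderer.domElement);

// Flip the sphere inside out so the texture faces the camera at its centre.
const geometry = new THREE.SphereGeometry(500, 60, 40);
geometry.scale(-1, 1, 1);
const texture = new THREE.TextureLoader().load("scenario-equirect.jpg");
const material = new THREE.MeshBasicMaterial({ map: texture });
scene.add(new THREE.Mesh(geometry, material));

renderer.setAnimationLoop(() => renderer.render(scene, camera));
```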

3D scanners, often with in-built software, which scan an object and build an editable 3D version of the scanned footage.

This is an excellent overview for smaller objects.

As I’m in a faculty of engineering and physical sciences, the students work with larger environments too. I hope to find out over the summer what kinds of tools they are using for this (e.g. from Autodesk, or for atmospheric scanning) and will provide an update.

Viewing content

In Second Life there were a number of different experimental web browsers, so that computers which didn’t have suitable graphics cards, or were unable to manage the processing demands of the Second Life application, could also take part. There are ongoing discussions about whether web browsers for VR are better than applications; the picture is slightly different now because of the advanced capabilities offered by smartphone apps and smartphone browsers as well as desktop applications.
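
As a small illustration of the browser route, this sketch uses the standard WebXR Device API to check whether an immersive VR session is available before falling back to a flat in-page view (it assumes WebXR type definitions such as @types/webxr are present in a TypeScript project).

```ts
// Sketch: pick between an immersive headset session and a flat in-page fallback
// using the standard WebXR Device API (navigator.xr).
async function chooseViewingMode(): Promise<"immersive-vr" | "inline"> {
  if (navigator.xr && (await navigator.xr.isSessionSupported("immersive-vr"))) {
    return "immersive-vr"; // a headset (or headset emulator) is available
  }
  return "inline"; // fall back to dragging/tilting the 360 view on a flat screen
}

chooseViewingMode().then((mode) => console.log(`Viewing mode: ${mode}`));
```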

3D virtual environments or platforms
Altspace (used briefly, unfortunately now owned by Microsoft) – anyone remember how Oracle acquired Sun and binned their 3D virtual environment? (See also Open Wonderland.) There is a good post from VentureBeat in 2016 about where investment is likely to go: https://venturebeat.com/2016/06/12/where-to-place-your-bets-in-vr-content-platform-or-hardware/
I will also investigate EngageVR, WondaVR, BrioVR and InstaVR in the next few weeks, as well as two options from the Second Life founders (HighFidelity, Sansar).

A post on capturing experiences will follow soon, and a separate one on any augmented projects starting in 19/20.