What is extended reality (XR)?

Extended reality (XR) is a term commonly used to describe all environments and interactions that combine real and virtual elements. Whilst XR usually encompasses AR (augmented reality), MR (mixed reality) and VR (virtual reality), it has a more specific meaning when used in relation to film, broadcast, and live entertainment production. In this article, we explain what that meaning is, and why XR is on course to become a studio staple. 

Extended Reality meaning 

When used as an umbrella term, XR denotes all AR, MR and VR technologies – it’s the overarching label given to systems that integrate virtual and real worlds. In this sense, XR can be applied equally to certain motion capture techniques, augmented reality applications, or VR gaming. 

In the production world, however, it means something much more specific. XR production refers to a workflow that combines LED screens, camera tracking systems, and powerful graphics engines. Often the LED wall operates with set extensions: tracked AR masks that allow the virtual scene to extend seamlessly beyond the edges of the LED wall.

How does XR production work? 

In XR production, a pre-configured 3D virtual environment generated by the graphics engine is displayed on one (or across multiple) high-quality LED screens that form the background to live-action, real-world events. When combined with a precision motion tracking system, cameras are able to move in and around the virtual environment, with the real and virtual elements merged and locked together to create a seamless combined illusion.
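The per-frame logic can be sketched in a few lines. This is a hypothetical illustration rather than real tracking-system or engine API code: the `CameraPose` type, the `render_xr_frame` function, and the scene dictionary are all invented for the sketch, standing in for a real camera tracking feed driving a real graphics engine.

```python
from dataclasses import dataclass

@dataclass
class CameraPose:
    """Hypothetical pose report from a camera tracking system."""
    x: float
    y: float
    z: float          # position in metres
    pan: float
    tilt: float
    roll: float       # orientation in degrees

def render_xr_frame(tracked_pose: CameraPose, scene: dict) -> dict:
    """Lock the virtual camera to the tracked physical camera, then
    render the scene from that viewpoint for display on the LED wall."""
    virtual_camera = {
        "position": (tracked_pose.x, tracked_pose.y, tracked_pose.z),
        "rotation": (tracked_pose.pan, tracked_pose.tilt, tracked_pose.roll),
    }
    # A real graphics engine would rasterise the 3D environment here;
    # this sketch just records what would be rendered each frame.
    return {"camera": virtual_camera, "environment": scene["environment"]}

# One frame: the physical camera is at (1.0, 1.7, -3.0), panned 15° and
# tilted -2°, so the virtual background is rendered from that exact viewpoint.
frame = render_xr_frame(CameraPose(1.0, 1.7, -3.0, 15.0, -2.0, 0.0),
                        {"environment": "frozen planet"})
```

Because this runs every frame, any camera move is mirrored by the virtual camera immediately, which is what keeps the real and virtual elements locked together.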

The benefits of XR production 

Immersive production and real-time edits 

Immersive technology enables actors, hosts, and producers to see the virtual environments whilst shooting. This means that they can adapt their performances or make edits live on set, which reduces time (and budget) spent in post-production.  


Natural lighting 

On an XR set, lighting is provided by the LED screens themselves. This common illumination helps real-world people and objects blend seamlessly into virtual environments, and further reduces time spent on set adjusting lighting. 

No colour spill or chroma key compositing 

On certain green screen setups, colour spill and the need for chroma key compositing can add to the time spent in post-production. Neither is an issue with XR screens, which, again, reduces the time needed in post-production. 

Rapid calibration 

The calibration of camera tracking systems on XR sets takes minutes rather than hours (as can be the case with green screen sets). This allows scenes to be shot across multiple sessions with minimal disruption and preparation. However, where set extensions are used or AR objects are added, more precise calibration is still required.

Examples of XR in production 

The Mandalorian 

Having experimented with XR sets in the making of The Lion King (2019), producer Jon Favreau used them to complete half of all scenes for his production of Disney’s The Mandalorian (2019). With a twenty-foot-tall LED wall that spanned 270°, Favreau’s team filmed scenes across a range of environments, from frozen planets to barren deserts and the insides of intergalactic spaceships. Apart from giving the cast and crew a photo-realistic backdrop to work against, the XR set saved a significant amount of production time. Rather than continually changing location and studio setup, the design team could rapidly switch out props and partial sets inside the 75-foot diameter of the volume. 

Dave at the BRIT Awards 

At the 2020 BRIT Awards, an XR setup was used to enhance Dave’s performance of his single ‘Black’. In a collaboration between Mo-Sys, Notch, and Disguise, a 3D animation was projected onto Dave’s piano, giving live audiences around the world an engaging visual experience. With effective camera tracking provided by Mo-Sys StarTracker, camera operators could move freely around the stage without any slip or disturbance in the imagery mapped onto the piano. 

HP OMEN Challenge 2019 

With the rise of eSports, developers are exploring new ways to enhance gaming and bring immersive experiences to ever larger audiences. In 2019, HP did this by broadcasting their OMEN eSports tournament live from an XR stage. There were two main benefits of using extended reality: firstly, audiences around the world could immerse themselves in the virtual environments of the game; secondly, gamers in the studio could review their gameplay from ‘within’ the game. The end result was an interactive, immersive experience that blurred the lines between the real and virtual world. 

Mo-Sys Academy is committed to sharing the latest developments in film and broadcasting technology with those looking to enter the field. If you’re interested in learning more, check out our previous articles explaining what a film crew does, and the difference between AR and VR broadcasting.

SOC Mo-Sys presentation: Remote Heads Technology Overview

The Remote Heads Technology Overview webinar that we presented with the Society Of Camera Operators last month is now available to view online.

Led by a panel of experts in their respective fields, this is a great opportunity for camera operators to learn first-hand about the industry-standard Mo-Sys L40 remote head, used on high-profile productions such as ‘The Shape of Water’, ‘Aquaman’ and ‘Stranger Things’. 
Mo-Sys founder and CEO Michael Geissler gives a technical overview of the different types of remote heads. Bob Gorelick, SOC Camera Operator of the Year 2018 (for ‘Stranger Things’), talks about his on-set experience and demonstrates how to set up and operate the L40. And guest speakers Kenneth McKenzie (key grip) and Joe Allegro (general manager at Pro-Cam Rental Atlanta) talk about their experience with Mo-Sys remote heads from a grip and rental house perspective respectively.

Edit: This webinar was held just before we launched our next generation of gyro-stabilized remote heads, the Mo-Sys G30. Discover how the G30’s novel 45-degree frame geometry confers enhanced features on this heavy-duty head – including eliminating gimbal lock – by visiting our website.

Find out more about the Mo-Sys G30 >

Mo-Sys launches radical new design for camera gyro-stabilization

G30 provides universal support with very high-performance stabilization and movement

Mo-Sys today launched a new generation of gyro-stabilized remote head, the G30. Its radical new design, with a compact, 45-degree frame, allows it to support virtually any broadcast or digital cinematography camera rig for precise movement and stabilization.

“In our conversations with the production community, we know that there is a real need for excellent stabilization and precision camera positioning without the expense and limitations of device-specific and proprietary mounts,” said Michael Geissler, CEO of Mo-Sys. “Whether it is on a vehicle, a remote mount or a crane, producers and directors want to be unrestricted creatively, with a device that is quick to set up and balance, and will accept whatever camera and accessories they need.”

The G30’s 45˚ frame geometry provides easy access to all the camera connections and accessories, making it simple to install any type of camera quickly and securely. The short, stiff frame provides rigidity for rigs of up to 30kg, and high-torque direct drive motors deliver crisp, precise camera movement alongside excellent stabilization. Open hubs for the three drive motors mean cable routing is clear and tidy, avoiding the need for sliprings and camera-specific cables.

Mo-Sys G30

The unique frame design eliminates a serious limitation of some existing gyro head designs: gimbal lock, where pan axis movement – including stabilization – is impossible when the camera is pointing directly down. The G30 has impressive pan and tilt movement ranges, along with ±45˚ roll, suitable for most creative productions. Axis encoders are built into each motor assembly for direct input into virtual production systems.
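Gimbal lock is easy to demonstrate numerically. The sketch below assumes a conventional pan-tilt-roll axis order (it is an illustration of the general problem, not a description of the G30’s internal kinematics): once the tilt axis reaches -90°, the pan and roll axes coincide, so any pan/roll combination with the same sum gives an identical camera orientation and one degree of freedom is lost.

```python
import math

def rot_y(deg):
    """Pan: rotation about the vertical (Y-up) axis."""
    a = math.radians(deg); c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_x(deg):
    """Tilt: rotation about the horizontal side axis."""
    a = math.radians(deg); c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_z(deg):
    """Roll: rotation about the lens axis."""
    a = math.radians(deg); c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def head_orientation(pan, tilt, roll):
    """Compose the three axes in pan -> tilt -> roll order."""
    return matmul(rot_y(pan), matmul(rot_x(tilt), rot_z(roll)))

# Camera pointing straight down (tilt = -90°): pan and roll axes align,
# so only pan + roll matters. Both calls below have pan + roll = 40°,
# and they produce the same orientation matrix.
a = head_orientation(30, -90, 10)
b = head_orientation(25, -90, 15)
```

At any other tilt angle the two calls would give different orientations; it is exactly this degenerate alignment at straight-down that a 45-degree frame geometry is designed to avoid.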

The launch customer for the G30 is Thoroughbred Racing Productions (TRP), based in Melbourne, Australia. It provides comprehensive coverage of more than 525 race meetings a year, including a camera car tracking each race. That car previously used an earlier Mo-Sys stabilization gimbal, and TRP has now used the G30 for several months.

“We took the G30 out of the box, put it on our mount and turned it on,” said Charles Cole, technical operations manager at TRP. “The stabilization of our picture is significantly better than anything we had seen before – the results have been very, very good.”

Cole pointed to the excellent stabilization of shots from a fast-moving car, including the effective reduction of low frequency disturbances due to potholes on the course-side track. He also praised the simplicity of set-up, particularly as different courses require different zoom settings and therefore different rates of panning to track the action smoothly.

The controlling software of the G30 includes the ability to fine-tune the balance of the camera quickly and largely automatically, significantly reducing the set-up time. The rigid frame design and semi-automated balance system ensure that any camera rig up to 30kg can be installed without counterweights and be ready for use very quickly. Users can store pre-sets for frequently used camera combinations to speed up set-up even more.

What does a film crew do?

As filmmaking has become more technologically advanced, film crews have changed in composition. Whilst the core remains the same (directors, producers, technicians, and camera operators), many new roles have opened up as a result of virtual production (VP). If you’re considering a career within the film industry and are wondering ‘what does a film crew do?’, read on to learn about the main positions, and how they’re changing as VP progresses.  

Film crew roles explained 


Producer 

Producers are the driving force behind productions. They oversee projects from start to finish, making decisions about key concepts, creative processes, and finances. Whilst they’re mostly focused on organisational and operational functions, they also hold sway over script selection, directing, and editing. 


Director 

Directors, on the other hand, are much more involved in the creative side of filmmaking. They set the artistic direction of the production, and guide the technicians on how to achieve it. Alongside deciding on shots and angles, directors oversee casting, set design, and the musical score. 


Scriptwriter 

Scriptwriters are often the starting point of film productions. They provide the initial idea of a story, and craft it into a compelling narrative. They shape characters, give them a voice, and make them believable. Although scriptwriters are often in the shadow of producers and directors, they’re a fundamental part of the filmmaking process. 

Production designer 

Working closely with producers and directors, production designers are responsible for the visual concept of a film. This covers location scouting, set design, costume design, lighting, and visual effects. As VP becomes more popular, production designers are collaborating more with graphic designers and VFX artists in the pre-production phase – ultimately reducing the time and money spent in post-production. 

Production manager 

Reporting to the producer, production managers deal with the day-to-day running of film projects. They’re responsible for hiring film crew members, managing running budgets, and organising shooting schedules. You might consider them the on-the-ground project managers of the filmmaking process. 

Cinematographer / Director of Photography (DP)

The DP transforms the director’s vision into reality through their technical knowledge. Traditionally, they advise on which cameras, lenses, filters, and stock to use to achieve the desired shots. Similarly to production designers, their role is increasingly influenced by VP. They need to be aware of how virtual production systems integrate with standard studio equipment such as lighting and rigging, and how to set up and calibrate VP gear. 

Focus puller 

In traditional filmmaking, a focus puller works alongside a camera operator to manually bring actors and objects into focus at the right time. Moving with the camera, they adjust the lens according to the distance between the camera and the subject they want to focus on. They may put markers down before filming (like sellotape on the floor of the set) to help them, or they may rely on their own spatial awareness as the camera is rolling. Whilst their role is likely to change with the increasing use of VP methods, they’ll remain incredibly important on set. The production team of Jon Favreau’s The Lion King (2019) showed how focus pullers can be integrated into new multi-camera filming methods.

Director of Virtual Production (DVP)

As VP progresses, it’s becoming more common to hire a separate Director of Virtual Production (DVP), who concentrates solely on VP during a project. Although the role is developing along with the technology, DVPs are generally responsible for managing virtual props, dealing with VFX vendors, overseeing the pre-production phase, and transferring assets to graphics engines.

Digital Image Technician (DIT)

In a broad sense, the DIT ensures that the highest technical standards are maintained when filming. As experts in the latest camera technology and associated software, they advise the DP and DVP on any issues relating to digital (rather than film) recording, including contrast, exposure, framing, and focus. They’re also responsible for ensuring that all footage is stored and managed correctly, making regular backups and transferring files into formats accessible to other departments in post-production. 

Film crews and changing technology 

As VP technology advances, the traditional roles within film crews are adapting. Whilst directors, producers, and DPs are becoming more aware of the advantages of VP – including greater creative freedom, fewer resources required, and less time needed – new roles are emerging to operate highly specialised VP tech. Ultimately, the days of large film crews moving between locations like travelling circuses are gone – replaced with remote, more agile and technically-enabled teams. 

Want to learn more about virtual production and the film industry? Take a look at articles from Mo-Sys Academy. We’re here to help educate and inform those looking to make their first steps into the world of filmmaking.

What is motion capture and how does it work?

Motion capture (mo-cap) refers to a group of technologies that record the movements of people and objects, and transfer the corresponding data to another application. It’s been used for many purposes, from sports therapy, farming, and healthcare, to film and gaming. By mapping real-world movement onto computer-generated frames, motion capture allows for photorealistic dynamics in a virtual environment. Here’s how it developed and how it works. 

The birth of mo-cap 

The first major step in the development of mo-cap was taken by American animator Lee Harrison III in the 1960s. Using a series of analogue circuits, cathode ray tubes, and adjustable resistors, Harrison devised a system that could record and animate a person’s movement in real time. 

Harrison’s Animac and Scanimate technology was developed in the late 1960s and allowed real-time animations to be created and processed by a computer. With Animac, actors would wear what was described as an ‘electrical harness’ or ‘data suit’ wired up to a computer. Using potentiometers attached to the suit to pick up movements, an actor’s motion could be translated into crude animations on a monitor.

Though the result was fairly basic, it was soon being utilised in various TV shows and advertisements across the States. The abstract images that could be produced with the rudimentary mocap technology of Animac and Scanimate, however, just weren’t good enough to attract mainstream attention.

The development of mo-cap 

The following decades saw improvements on Harrison’s designs, with bodysuits more accurately recording movement. They were also helped by the development of large tracking cameras; as useful as they were, however, each was about the size of a fridge. 

While mo-cap had been used sparingly in the 1980s and 1990s in films like American Pop (1981) and Cool World (1992), the first film to be made entirely using the technology was Sinbad: Beyond the Veil of Mists (2000). The film was a flop, but its use of mo-cap was picked up and expanded on by Peter Jackson in the making of The Lord of the Rings trilogy in the early 2000s. 

For the first time ever, actors wearing their bodysuits (complete with retroreflective ping-pong balls) could perform alongside their non-animated colleagues in the same scene. Among CG-created characters, The Lord of the Rings’ Gollum is recognised as one of the most impressive Hollywood has ever produced. The combination of the character’s voice and intricate facial expressions performed by Andy Serkis resulted in an unforgettable motion-capture performance. The character and technology were created on the fly by Weta Digital’s Bay Raitt.

Facial capture 

With an increasing awareness of how motion capture techniques can enhance productions, more attention has been given specifically to facial capture. A number of companies have developed highly accurate systems, which, when paired with powerful graphics engines, result in life-like, photo-realistic facial images. Cubic Motion (now partnered with Unreal Engine) is one of these, and has worked on a number of high-profile games and virtual experiences, including Spider-Man, League of Legends, and Apex Legends. 

Motion capture techniques 

Nowadays, there are four main motion capture techniques: 

  • Optical (passive) – With this technique, retroreflective markers are attached to bodies or objects, and reflect light generated from near the camera lens. Once reflected, the light is used to calculate the position of the markers within a three-dimensional space, and recorded. 
  • Optical (active) – This technique is much the same, but the markers emit light rather than reflect it, and therefore require a power source. 
  • Marker-less – This technique doesn’t require markers of any sort. It relies on depth-sensitive cameras and specialised software in order to track and record moving people and objects. Whilst more convenient in some ways, it’s generally considered less accurate than its optical or mechanical-tracking alternatives. 
  • Inertial – This technique doesn’t necessarily need cameras to operate. It records movement through IMUs (inertial measurement units), which contain sensors to measure rotational rates. The most common sensors used in IMUs are gyroscopes, magnetometers, and accelerometers. 
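As an illustration of the inertial approach, the sketch below dead-reckons a single rotation angle from gyroscope rate samples. The function name and sample data are invented for this example; real inertial systems fuse gyroscope readings with accelerometer and magnetometer data to correct the drift that pure integration accumulates.

```python
def integrate_gyro(rates_dps, dt):
    """Dead-reckon an orientation angle by integrating angular rate.

    rates_dps: gyroscope samples in degrees per second
    dt: fixed interval between samples, in seconds
    """
    angle = 0.0
    angles = []
    for rate in rates_dps:
        angle += rate * dt    # simple Euler integration of angular rate
        angles.append(angle)
    return angles

# A joint rotating at a constant 90°/s, sampled for one second at 10 Hz:
# the integrated angle climbs to roughly 90°.
trajectory = integrate_gyro([90.0] * 10, 0.1)
```

Each IMU on a suit produces a stream like this for all three rotation axes, which is why inertial capture works without cameras or line of sight.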

What is motion tracking used for? 

Motion tracking and capture has a broad range of uses across various industries, including: 

  • Film and Gaming – Motion capture is used to record the movements of actors and transfer them onto virtual or computer-augmented characters.
  • Sports Therapy and Healthcare – Health professionals can use motion capture to analyse the movement of patients and diagnose problems, e.g. gait analysis. 
  • Military – When combined with virtual reality, motion capture technologies have been used to enhance military training experiences. 

If you’re interested in learning more about the future of the film industry, check back regularly for more articles from Mo-Sys Academy. Drawing from years of experience in virtual production for film and TV, and as one of the UK’s leading camera tracking suppliers, we aim to educate the next generation of producers.