What is Motion Capture Acting?

When the unmistakably quirky Gollum first appeared on our screens in the early 2000s, audiences were captivated by a unique character conveyed purely through CGI. Despite the fact that MoCap had already been around for a while, this moment marked the shift of motion capture acting into the mainstream. 

Andy Serkis’ take on Tolkien’s now iconic Lord of the Rings character gave the movie industry a true taste of the potential of motion capture acting. The result was a complete revolution – not just for the film industry, but for creating realistic, cinematic gaming experiences, as well as for applications in sports therapy, farming and healthcare.

From Middle Earth to Pandora: How motion capture works

In the last two decades we’ve seen numerous ‘behind the scenes’ images of Serkis jumping around in a MoCap suit equipped with retroreflective markers. During the filming of Peter Jackson’s Lord of the Rings (2001), these retroreflective optical markers allowed motion capture technology to accurately record his facial and body movements through a series of motion tracking cameras. This data was then transferred to a graphics engine to create a ‘skeleton’, which acted as a base for the animated body of Gollum. 
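
To make that pipeline a little more concrete, here’s a minimal sketch (in Python, purely illustrative) of the core maths an ‘outside-in’ optical system relies on: each tracking camera sees a retroreflective marker as a 2D dot, and two or more calibrated views are triangulated into a single 3D point. The function name and camera matrices here are our own illustration, not any specific vendor’s API.

```python
import numpy as np

def triangulate_marker(P1, P2, xy1, xy2):
    """Linear (DLT) triangulation of one marker seen by two calibrated cameras.

    P1, P2   : 3x4 projection matrices of the two tracking cameras.
    xy1, xy2 : (x, y) pixel positions of the marker in each camera's image.
    Returns the estimated 3D marker position in world coordinates.
    """
    A = np.vstack([
        xy1[0] * P1[2] - P1[0],
        xy1[1] * P1[2] - P1[1],
        xy2[0] * P2[2] - P2[0],
        xy2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)      # solve A @ X = 0 for the homogeneous point X
    X = Vt[-1]
    return X[:3] / X[3]              # de-homogenise to (x, y, z)

# Repeating this for every marker in every frame produces the point cloud that
# a solver then fits the digital 'skeleton' to, which in turn drives the CG character.
```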

This early MoCap technique, which is still used today by some, is known as an ‘outside-in’ system; this means that the cameras look from a perspective ‘outside’ of the MoCap environment, and into the movement of the actor. The second, more recent technique (which we’ll explain further below) involves the use of Inertial Measurement Units (IMUs) to capture an actor’s movement regardless of space or location (Xsens’ MVN system is an example of this type of MoCap setup). 

Performance capture  

Seeing the potential of the technology to enhance productions, a number of companies have since invested in technology that more accurately records facial, hand, and finger motion. Known informally as ‘performance capture’ tech, these more targeted systems give motion capture actors a greater degree of creative freedom whilst improving the credibility of their CGI characters. 

And their increased use in film production has not gone unnoticed. James Cameron’s Avatar (2009), for instance, was highlighted by critics for its innovative use of performance capture tech when creating the ethereal Na’vi. Matt Reeves’s War for the Planet of the Apes (2017), furthermore, was praised for its use of facial motion capture; Andy Serkis, who played the leading role, wore 132 mini retroreflective markers on his face in order to record the exact movement of his facial muscles. 

Free-environment motion capture 

In 2011, mocap was upgraded again so it could be taken out of the studio and used on location. This began with Rise of the Planet of the Apes (2011), and allowed actors to fully immerse themselves in mixed reality environments whilst giving producers unparalleled creative freedom. Shooting outside also meant adapting the technology to varying climates. Motion capture suits were made much more robust, with the passive reflective markers upgraded to infrared-pulsing LED markers so directors could film in bright light (an example of active optical motion capture). 

The production teams of Dawn of the Planet of the Apes (2014) and War for the Planet of the Apes (2017) took the tech even further, filming in humid conditions and at night. This came alongside advancements in rendering the textures of fur, skin and eyes, allowing audiences to enjoy cinematically gripping, photo-realistic visuals. 

The magic of Motion Capture Acting

The beauty of motion capture as a form of acting lies in its emphasis on physicality and embodied movement and expression. In an interview with WIRED, Serkis explains, “It’s not just about mimicking behaviour. This is about creating a character.” This includes developing the psychological and emotional journeys to pour into the character. “It’s not just about the physical build of the character, but also the internalisation,” explains the actor. “That’s why the misconception that performance capture acting is a genre or type of acting is completely wrong. It’s no different to any process you go through to create a role… the actor’s performance is the actor’s performance.”

The combination of professional acting skills with advancements in mocap technology has led to the development of many memorable CGI characters in recent years. From Serkis’s portrayal of Caesar in Planet of the Apes to Benedict Cumberbatch and his unique take on the dragon Smaug in The Hobbit, motion capture is giving actors more powerful tools to portray characters and, ultimately, enhance storytelling. 


If you’re interested in learning more about the future of the film industry, check back regularly for more articles from Mo-Sys Academy. Drawing from years of experience in virtual production for film and TV, and as one of the UK’s leading camera tracking suppliers, we’re aiming to educate the next generation of producers.  

What is a jib camera?

A jib camera is simply a camera mounted on a jib, which is a boom or crane device. On the other end of the jib, there’s a counterweight and either manual or automatic controls to direct the position of the camera. The main benefit of using a jib is that it allows camera operators to get shots that would otherwise be difficult to obtain due to obstacles or awkward positions. 

Different types of jib 

Jibs come in many sizes, from small jibs for handheld cameras to enormous booms that can pan over the heads of huge festival crowds. Regardless of the size, however, the purpose remains the same: a camera jib is there to provide producers with stable crane shots. 

What is a crane shot? 

A crane shot is a shot taken from a jib camera. Whilst most jibs can move in all directions, they’re valued primarily for their ability to move on a vertical plane. This gives producers the opportunity to emphasise the scale of a set, and is often used either to introduce or close a setting in a film. 

Crane shot examples 

La La Land (2017) 

The opening scene of Damien Chazelle’s Oscar-nominated La La Land was shot with the use of a camera jib. The scene presented various challenges to camera technicians, as the shot weaves around stationary cars and dancers. An added complication was that the freeway it was filmed on was actually slanted, creating panning perspective problems. Regardless, the end result was a success – the scene set the tone for the rest of the film whilst introducing Los Angeles, the central location of the narrative. 

Once Upon a Time in Hollywood (2019)

Quentin Tarantino is well known for his use of jibs for panoramic and tracking shots. Most recently, he used them in Once Upon a Time in Hollywood (2019) to place characters in context and add atmosphere. At the end of the ‘Rick’s house’ scene, a large jib camera slowly pulls back across the top of a Hollywood home to reveal the neighbourhood’s quiet night-time roads. 

https://www.youtube.com/watch?v=e39FNz9W350

Camera heads for jibs 

To achieve shots like these, operators need to be able to move and adjust the camera at the end of the jib. This can be done either manually through a series of pull-wheels, or automatically with a controller. Either way allows operators to pan, tilt, and zoom the camera.

Camera jibs for virtual production 

Jibs used for virtual production either need to have all axes encoded, or have a tracking system attached to them. This is required to capture camera movement data so that the virtual elements of a shot can be made to move in exactly the same way as the real camera. When it comes to virtual production, which jib you decide to use is extremely important. This is because any unintended movement (i.e. any unencoded or untracked movement) caused by the jib can cause virtual images to ‘float’ and break the illusion. To counter this, VP jibs need to be heavier, sturdier, and more rigid. Mo-Sys’s e-Crane and Robojib were designed specifically with these needs in mind – catering to a growing trend in Virtual Production (VP), Extended Reality (XR), and Augmented Reality (AR). 
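
As a rough illustration of what that tracking data looks like, here’s a hypothetical sketch (in Python) of a per-frame camera sample and how it might be applied to a virtual camera. The field names and the virtual_cam object are illustrative assumptions, not any particular tracking protocol or engine API.

```python
from dataclasses import dataclass

@dataclass
class CameraTrackingSample:
    """One frame of tracking data from an encoded jib or tracking system."""
    timecode: str          # frame timecode, e.g. "10:24:13:07"
    x: float               # camera position in metres from the studio origin
    y: float
    z: float
    pan: float             # orientation in degrees
    tilt: float
    roll: float
    focal_length: float    # lens data, needed to match the virtual field of view
    focus_distance: float

def apply_to_virtual_camera(sample: CameraTrackingSample, virtual_cam) -> None:
    """Copy the real camera's pose and lens state onto the virtual camera.

    virtual_cam is a stand-in for whatever graphics-engine object is being driven.
    Any axis that is unencoded or untracked never reaches this function, which
    is exactly what makes virtual elements appear to 'float'.
    """
    virtual_cam.set_transform((sample.x, sample.y, sample.z),
                              (sample.pan, sample.tilt, sample.roll))
    virtual_cam.set_lens(sample.focal_length, sample.focus_distance)
```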

Mo-Sys Academy is committed to sharing the latest developments in film and broadcasting technology with those looking to enter the field. If you’re interested in learning more, check out our previous articles explaining what a film crew does, and the difference between AR and VR broadcasting.

What is extended reality (XR)?

Extended reality (XR) is a term commonly used to describe all environments and interactions that combine real and virtual elements. Whilst XR usually encompasses AR (augmented reality), MR (mixed reality) and VR (virtual reality), it has a more specific meaning when used in relation to film, broadcast, and live entertainment production. In this article, we explain what that means, and why it’s on course to become a studio staple. 

Extended Reality meaning 

When used as an umbrella term, XR denotes all AR, MR and VR technologies – it’s the overarching label given to systems that integrate virtual and real worlds. In this sense, XR can be applied equally to certain motion capture techniques, augmented reality applications, or VR gaming. 

In the production world, however, it means something much more specific. XR production refers to a workflow that comprises LED screens, camera tracking systems, and powerful graphics engines. Often the LED wall operates with set extensions – tracked AR masks that allow the virtual scene to extend seamlessly beyond the LED wall.

How does XR production work? 

In XR production, a pre-configured 3D virtual environment generated by the graphics engine is displayed on one (or across multiple) high-quality LED screens that form the background to live-action, real-world events. When combined with a precision motion tracking system, cameras are able to move in and around the virtual environment, with the real and virtual elements seamlessly merged and locked together to create the combined illusion.
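
At its simplest, the workflow can be thought of as a per-frame loop. The sketch below (Python, with hypothetical tracking, engine and led_wall objects rather than any real SDK) shows the idea: read the tracked camera pose, render the virtual scene from that pose, and display the result on the LED wall so real and virtual stay locked together.

```python
def xr_frame_loop(tracking, engine, led_wall):
    """Conceptual XR stage loop: the LED wall always shows the virtual world
    rendered from wherever the real, tracked camera currently is."""
    while led_wall.is_live():
        pose = tracking.latest_pose()   # position, orientation and lens data
        engine.set_camera(pose)         # virtual camera copies the real camera
        frame = engine.render()         # photoreal background for this frame
        led_wall.display(frame)         # shown behind the live action
        # Latency between reading the pose and displaying the frame must stay
        # within a few frames, or the background visibly lags the camera move.
```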

The benefits of XR production 

Immersive production and real time edits 

Immersive technology enables actors, hosts, and producers to see the virtual environments whilst shooting. This means that they can adapt their performances or make edits live on set, which reduces time (and budget) spent in post-production.  

Lighting 

Lighting is provided by the LED screens on an XR set. This common illumination helps real-world people and objects blend seamlessly into virtual environments, and further reduces the time spent on set adjusting lighting. 

No Colour Spill or Chroma Key Compositing 

On certain green screen setups, colour spill and the need for chroma key compositing can increase the time spent in post-production. Neither is an issue with XR LED screens, which, again, reduces the time needed in post-production. 

Rapid calibration 

The calibration of camera tracking systems on XR sets takes minutes rather than hours (as can happen with green screen sets). This allows scenes to be shot across multiple sessions with minimal disruption and preparation. However, where set extensions are used or AR objects are added, more precise calibration is still required.

Examples of XR in production 

The Mandalorian 

Having experimented with XR sets in the making of The Lion King (2019), producer Jon Favreau used them to complete half of all scenes for his production of Disney’s The Mandalorian (2019). With a twenty-foot tall LED screen wall that spanned 270°, Favreau’s team filmed scenes across a range of environments, from frozen planets to barren deserts and the insides of intergalactic spaceships. Apart from giving the cast and crew a photo-realistic backdrop to work against, the XR set saved a significant amount of time in production. Rather than continually changing location and studio setup, the design team could rapidly switch out props and partial sets from inside the 75-foot diameter of the volume. 

Dave at the BRIT Awards 

At the 2020 BRIT Awards, an XR setup was used to enhance Dave’s performance of his single ‘Black’. In a collaboration between Mo-Sys, Notch, and Disguise, a 3D animation was mapped onto Dave’s piano, giving audiences around the world an engaging visual experience. With effective camera tracking provided by Mo-Sys StarTracker, camera operators could move freely around the stage without any drift or disturbance in the piano’s moving images. 

HP OMEN Challenge 2019 

With the rise of eSports, developers are exploring new ways to enhance gaming and bring immersive experiences to ever larger audiences. In 2019, HP did this by broadcasting their OMEN eSports tournament live from an XR stage. There were two main benefits of using extended reality: firstly, audiences around the world could immerse themselves in the virtual environments of the game; secondly, gamers in the studio could review their gameplay from ‘within’ the game. The end result was an interactive, immersive experience that blurred the lines between the real and virtual world. 

Mo-Sys Academy is committed to sharing the latest developments in film and broadcasting technology with those looking to enter the field. If you’re interested in learning more, check out our previous articles explaining what a film crew does, and the difference between AR and VR broadcasting.

What does a film crew do?

As filmmaking has become more technologically advanced, film crews have changed in composition. Whilst the core remains the same (directors, producers, technicians, and camera operators), there are many more roles which have opened up as a result of virtual production (VP). If you’re considering a career within the film industry and are wondering, ‘what does a film crew do?’, read on to learn about the main positions, and how they’re changing as VP progresses.  

Film crew roles explained 

Producer 

Producers are the driving force behind productions. They oversee projects from start to finish, making decisions about key concepts, creative processes, and finances. Whilst they’re mostly focused on organisation and operational functions, they also hold sway over script selection, directing, and editing. 

Director 

Directors, on the other hand, are much more involved in the creative side of filmmaking. They set the artistic direction of the production, and guide the technicians on how to achieve it. Alongside deciding on shots and angles, directors oversee casting, set design, and musical score. 

Scriptwriter 

Scriptwriters are often the starting point of film productions. They provide the initial idea of a story, and craft it into a compelling narrative. They shape characters, give them a voice, and make them believable. Although scriptwriters are often in the shadow of producers and directors, they’re a fundamental part of the filmmaking process. 

Production designer 

Working closely with producers and directors, production designers are responsible for the visual concept of a film. This covers location scouting, set design, costume design, lighting, and visual effects. As VP becomes more popular, production designers are collaborating more with graphic designers and VFX artists in the pre-production phase – ultimately reducing the time and money spent in post-production. 

Production manager 

Reporting to the producer, production managers deal with the day-to-day running of film projects. They’re responsible for hiring film crew members, managing running budgets, and organising shooting schedules. You might consider them the on-the-ground project managers of the filmmaking process. 

Cinematographer/ Director of Photography (DP)

The DP transforms the director’s vision into reality through their technical knowledge. Traditionally, they advise on which cameras, lenses, filters, and stock to use to achieve the desired shots. Similarly to production designers, their role is increasingly influenced by VP. They need to be aware of how virtual production systems integrate with standard studio equipment such as lighting and rigging, and how to set up and calibrate VP gear. 

Focus puller 

In traditional filmmaking, a focus puller works alongside a camera operator to manually bring actors and objects into focus at the right time. Moving with the camera, they adjust the lens according to the distance between the camera and the subject they want to focus on. They may put markers down before filming (like sellotape on the floor of the set) to help them, or they may just rely on their own spatial awareness as the camera is rolling. Whilst their role is likely to change with the increasing use of VP methods, they’ll remain incredibly important on set. The production team of Jon Favreau’s The Lion King (2019) showed how focus pullers can be integrated into new, virtual filming methods.

Director of Virtual Production (DVP)

As VP progresses, it’s becoming more common to hire a separate Director of Virtual Production (DVP), who concentrates solely on VP during a project. Although the role is developing along with the tech, DVPs are generally responsible for managing virtual props, dealing with VFX vendors, overseeing the pre-production phase, and transferring assets to graphics engines.

Digital Image Technician (DIT)

In a broad sense, the DIT ensures that the highest technical standards are maintained when filming. As experts in the latest camera technology and associated software, they advise the DP and DVP on any issues relating to digital (rather than film) recording, including contrast, exposure, framing, and focus. They’re also responsible for ensuring that all footage data is stored and managed correctly, making regular backups and transferring files into formats that are accessible to other departments in post-production. 

Film crews and changing technology 

As VP technology advances, the traditional roles within film crews are adapting. Whilst directors, producers, and DPs are becoming more aware of the advantages of VP – including greater creative freedom, fewer resources required, and less time needed – new roles are emerging to operate highly specialised VP tech. Ultimately, the days of large film crews moving between locations like travelling circuses are gone – replaced with remote, more agile and technically-enabled teams. 

Want to learn more about virtual production and the film industry? Take a look at articles from Mo-Sys Academy. We’re here to help educate and inform those looking to make their first steps into the world of filmmaking.

What is motion capture and how does it work?

Motion capture (mocap) refers to a group of technologies that record the movements of people and objects, and transfer the corresponding data to another application. It’s been used for many purposes, from sports therapy, farming, and healthcare, to film and gaming. By mapping real-world movement onto computer-generated models, motion capture allows for photorealistic dynamics in a virtual environment. Here’s how it developed and how it works. 

The birth of mo-cap 

The first major step in the development of mocap came from American animator Lee Harrison III in the 1960s. Using a series of analogue circuits, cathode ray tubes, and adjustable resistors, Harrison devised a system that could record and animate a person’s movement in real time. 

Harrison’s Animac and Scanimate technology was developed in the late 1960s and allowed real-time animations to be created and processed by a computer. With Animac, actors would wear what was described as an ‘electrical harness’ or ‘data suit’ wired up to a computer. Using potentiometers attached to the suit to pick up movements, an actor’s motion could be translated into crude animations on a monitor.

Though the result was fairly basic, it was soon being utilised in various TV shows and advertisements across the States. The abstract images that could be produced with the rudimentary mocap technology of Animac and Scanimate, however, just weren’t good enough to attract mainstream attention.

The development of mo-cap 

The following decades saw improvements on Harrison’s designs, with bodysuits more accurately recording movement. They were also helped by the development of large tracking cameras; as useful as they were, however, each was about the size of a fridge. 

While mocap had been used sparingly in the 1980s and 1990s with films like American Pop (1981) and Cool World (1992), the first film to be made entirely using the technology was Sinbad: Beyond the Veil of Mists (2000). The film was a flop, but its use of mocap was picked up and expanded on by Peter Jackson in his making of The Lord of the Rings trilogy in the early 2000s. 

For the first time ever, actors wearing their bodysuits (complete with retroreflective ping-pong balls) could perform alongside their non-animated colleagues in the same scene. Among CG-created characters, The Lord of the Rings’ Gollum is recognised as one of the most impressive Hollywood has ever produced. The combination of the character’s voice and intricate facial expressions performed by Andy Serkis resulted in an unforgettable motion-capture performance. The character and technology were created on the fly by Weta Digital’s Bay Raitt.

Facial capture 

With an increasing awareness of how motion capture techniques can enhance productions, more attention has been given specifically to facial capture. A number of companies have developed highly accurate systems which, when paired with powerful graphics engines, result in life-like, photo-realistic facial images. Cubic Motion (now partnered with Unreal Engine) is one of these, and has worked on a number of high-profile games and virtual experiences, including Spider-Man, League of Legends, and Apex Legends. 

Motion capture techniques 

Nowadays, there are four main motion capture techniques: 

  • Optical (passive) – With this technique, retroreflective markers are attached to bodies or objects, and reflect light generated from near the camera lens. The reflected light is used to calculate and record the position of the markers within three-dimensional space. 
  • Optical (active) – This technique works in exactly the same way, but the markers emit light rather than reflect it. The markers therefore require a power source. 
  • Marker-less – This technique doesn’t require markers of any sort. It relies on depth-sensitive cameras and specialised software in order to track and record moving people and objects. Whilst more convenient in some ways, it’s generally considered less accurate than its optical or mechanical-tracking alternatives. 
  • Inertial – This technique doesn’t necessarily need cameras to operate. It records movement through IMUs (inertial measurement units), which contain sensors to measure rotational rates. The most common sensors used in IMUs are gyroscopes, magnetometers, and accelerometers (a minimal orientation-filter sketch follows this list). 
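
To give a feel for how the inertial approach works, here’s a minimal, single-axis sketch (in Python) of a complementary filter: the gyroscope’s rotation rate is integrated into an orientation estimate, and the accelerometer’s gravity-derived angle is blended in to stop that estimate drifting. Real suits do this per sensor, in full 3D, with far more sophisticated filtering – the numbers and function below are purely illustrative.

```python
def complementary_filter(angle_deg, gyro_rate_dps, accel_angle_deg,
                         dt=0.01, alpha=0.98):
    """One filter step for a single axis.

    angle_deg       : previous orientation estimate (degrees)
    gyro_rate_dps   : angular rate from the gyroscope (degrees/second)
    accel_angle_deg : tilt angle derived from the accelerometer (degrees)
    dt              : time since the last sample (seconds)
    alpha           : how much to trust the gyro vs. the accelerometer
    """
    gyro_angle = angle_deg + gyro_rate_dps * dt       # integrate the rotation rate
    return alpha * gyro_angle + (1 - alpha) * accel_angle_deg

# Example: one second of 100 Hz samples while a limb rotates at a steady 5 deg/s.
angle = 0.0
true_angle = 0.0
for _ in range(100):
    true_angle += 5.0 * 0.01              # the limb really rotates 0.05 deg per sample
    angle = complementary_filter(angle, gyro_rate_dps=5.0,
                                 accel_angle_deg=true_angle)
print(round(angle, 2))                    # ~5.0 degrees after one second
```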

What is motion tracking used for? 

Motion tracking and capture has a broad range of uses across various industries, including: 

  • Film and Gaming – Motion capture is used to record the movement of actors and transfer it onto virtual or computer-augmented characters.
  • Sports Therapy and Healthcare – Health professionals can use motion capture to analyse the movement of patients and diagnose problems, e.g. gait analysis. 
  • Military – When combined with virtual reality, motion capture technologies have been used to enhance military training experiences. 

If you’re interested in learning more about the future of the film industry, check back regularly for more articles from Mo-Sys Academy. Drawing from years of experience in virtual production for film and TV, and as one of the UK’s leading camera tracking suppliers, we’re aiming to educate the next generation of producers.  

Green Shots: Sustainable film production

According to a joint report from the BFI and ARUP, the average high-budget film production generates 2,840 tonnes of CO2 – roughly the same amount absorbed by 3,709 acres of forest over the course of a year. And to many, such a high figure won’t come as a surprise; film productions are complex operations, spanning numerous geographical locations and relying on energy-sapping specialist equipment and processes. 

Our industry is, however, aware of the need to change. The movement for sustainable film production is picking up pace, and has been given an unexpected boost by the coronavirus pandemic. Production teams are looking to cut both their emissions and costs, whilst ensuring that they continue to innovate and keep their members safe. Here are the main ways that they’re doing so. 

Virtual Production 

Virtual Production (VP) can be defined as the integration of real and virtual production elements on a live set. The combination of camera tracking technology with powerful graphics engines has made it possible to film scenes in photorealistic virtual environments with effects in realtime. Aside from providing producers with unparalleled creative opportunities, VP is considered a sustainable film production technique for the following reasons: 

  • Reduced travel – Around 16% of a film’s carbon emissions derive from international travel, and a further 4% from accommodation. Respectively, this equates to 75 return flights from London to New York, and the average annual electricity consumption of 34 homes. VP reduces this by removing the need to travel to several locations to shoot; whether using green screens or LED walls, different locations are rendered virtually on a single set. 
  • Reduced use of (physical) props and materials – Computer-generated sets and props significantly reduce material waste. Industry expert, Richard James, even predicts the use of ‘digital prop libraries’ which are already commonplace in the gaming industry. 

Sustainable sets 

Where physical sets do need to be built, sustainable design principles should be followed. ‘Design for deconstruction’ and ‘parametric design’, for instance, are ideas which may allow film productions to reduce the environmental impact of their physical sets. The Australian-made ‘X-frame’, for example, is a highly flexible construction option which could be re-used across multiple sets with minimal material waste. 

Another option to make sets more sustainable is to re-use materials. In its production of The Amazing Spider-Man 2 (2014), Sony Pictures managed to divert 52% of its set waste (lumber, steel, and glass) away from landfill and towards future projects. As a result, the studio saved approximately $400,000 in costs and earned a carbon neutral certification. 

Renewable energy 

As it currently stands, many productions rely on costly, inefficient diesel generators to power studios (the average production, in fact, uses enough of the fuel to fill the tanks of 11,478 cars). Although production teams are rarely in control of the source of their power, they do hold sway with the studios that are. Solar photovoltaics and wind turbines are viable options for many, whilst switching to a sustainable energy provider is a simple, fast way to improve green credentials. 


Filmmaking is going through a green revolution. Catalysed by the coronavirus pandemic, film crews are relying more on VP and are paying closer attention to how they can streamline each stage of the production process. What’s emerging as a result is a leaner, greener, more cost-effective way of making films. 

Want to learn more about virtual production and the film industry? Take a look at articles from Mo-Sys Academy. We’re here to help educate and inform those looking to make their first steps into the world of filmmaking. 

Before and After: What is pre and post production?

Filmmaking is a complex process. From the very first conception of an idea right through to the final editing, a production goes through multiple stages before it eventually hits the screen. Actually shooting it is just one link in the chain – a lot of production work happens before the cameras start rolling and long after they stop. These before and after phases are called ‘pre-production’ and ‘post-production’, and are fundamental to the filmmaking process. 

What is pre-production? 

In the broadest sense, pre-production is everything that happens before shooting takes place, to ensure its smooth operation. A pre production checklist would include things like: 

  • Scripting 
  • Storyboarding 
  • Budgeting 
  • Location scouting 
  • Hiring equipment and crew 
  • Casting 

Pre (virtual) production 

Increasingly, this traditional pre production phase is being disrupted by virtual production (VP) methods. VP uses camera tracking technology and advanced graphics engines to create virtual, interactive objects and environments for the film and broadcast industry. The BBC, The Weather Channel, and Dutch-broadcaster NEP are just a few organisations to have utilised VP. 

Among the many benefits of this developing technology is that it reduces uncertainty for producers. During the pre production phase using VP, the smallest of details can be visualised and settled on; environments, settings, and initial effects can all be prepared at the pre-pro stage. This means that producers can start to iterate much earlier than with traditional production methods, saving time (and money) later on.

What is post production? 

Just as pre production is preparing for a shoot, post production is preparing for release. It’s the last phase of the production process and is focused around turning raw footage into the final product with editing and special effects. The post production process involves multiple departments, and generally includes the following: 

  • Picture editing (or ‘cutting’) 
  • Sound editing, sound effects and mixing 
  • Visual effects 
  • Colour correction 
  • Animation

How long does post production editing take? 

Depending on the complexity of the production, the post production phase can take anywhere from 4 to 12 weeks to complete. If significant edits have to be made due to mistakes or a change in creative direction, then this stage can take a lot longer. 

Virtual post production 

When VP methods are used, the post production phase is either shortened considerably or eliminated entirely. As mentioned, this is because there are far fewer surprises. Rather than wait weeks or months to watch back over footage, producers can view and edit shots in real time when in the studio. 

It’s worth mentioning, however, that the time required in post-production is largely dependent on which VP technologies and software are used. As a general rule, LED screens (such as those used by Italian post-production company, Neticks Group Evolution) tend to need almost no time in the post-production phase. This is because they provide a final composite on set in realtime, with natural-looking reflections and shadows, and don’t require colour corrections between the foreground and background. Whilst LED screens give producers the chance to shoot a near-final product on set, though, they only allow a limited degree of movement for actors. 

A more flexible VP option in this sense would be the green screen. Using camera tracking tech, actors can move around a pre-constructed virtual set, which the production team can view through a monitor. This VP option requires slightly more time in post production as lighting needs to be corrected, props removed, and effects added. 

Regardless of which VP technology is used, however, it’s likely to significantly reduce time in post production when compared to traditional production methods. In essence, VP shifts the production team’s mentality from ‘Let’s fix it in post’ to ‘Let’s deal with this now’.


Pre and post production remain core elements of the filmmaking process, but they’re changing. With an increasing use of VP technology, more emphasis is being placed on ‘front-loading’ productions; more accurate decisions and editing can take place earlier on, reducing the risk of guess-work whilst in the studio and stretched budgets in post production. In the future, final edits will be ready almost as soon as the cameras have stopped rolling for the day. 

Mo-Sys is the market leader in virtual production camera tracking technology for film and broadcast. Working with production teams across the world, we’ve helped to create engaging, immersive experiences that have reached large audiences. To learn more about the capabilities of virtual filmmaking, check out Mo-Sys Academy – our dedicated resource centre for emerging filmmaking professionals, and sign up to our newsletter. 

Augmented Reality in Corporate Training: The Future of Learning

Overlaying real-life images with computer-generated graphics, Augmented Reality (AR) technology is becoming increasingly popular. Besides being used for virtual film and broadcast production, it’s adding an extra dimension to gaming, education, and live entertainment.

Another area where it’s beginning to gain traction is that of corporate training. Various organisations have begun to incorporate AR into their training strategies in order to engage, inform, and connect a globally-disparate workforce. Just as AR is revolutionising how we produce and consume media, it’s also changing how we communicate complex ideas at work. If you’re curious about what augmented reality offers your corporate training scheme, read on to find out more. 

What is Augmented Reality (AR)? 

AR blends photo-realistic 3D graphics, effects, and objects into a real-world environment. It combines virtual and real-world elements to create a contextually rich user experience that wouldn’t otherwise be possible. Its real value, however, comes from the fact that users can interact with virtual objects in real-time – this is why it has such a multitude of uses.

What is the difference between virtual and augmented reality?

The difference between Augmented Reality and Virtual Reality (VR) is that the latter creates an entirely computer-generated environment. A VR environment is completely simulated and doesn’t need any real-world input. An augmented environment, on the other hand, blends real-life with computer graphics to create a convincing, stimulating experience. 

And this capability has been utilised for various reasons; The Weather Channel, for instance, use AR (enabled by Mo-Sys StarTracker) to enhance their broadcasts, whilst various apps (such as BBC Civilisations) use a basic version of the tech to create more captivating content. 

How does AR work? 

Broadly speaking, AR relies on cameras, computers, and software to capture real-world images and superimpose virtual objects in real time. 

A key element in a more sophisticated AR system is camera tracking; whether optical or mechanical, tracking systems allow cameras to move freely around a virtual or augmented set whilst maintaining the absolute positions of objects. In effect, this means you can move around virtual objects in a real-world setting without distorting them in any way. The end result can be displayed either on a computer, TV, or phone screen. 
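
The sketch below (Python, with a deliberately simplified pinhole model and made-up numbers) illustrates that idea: given the camera position and rotation supplied each frame by the tracking system, a virtual object’s fixed world position is projected to a pixel location, so it stays locked in place however the camera moves.

```python
import numpy as np

def project_point(world_point, cam_position, cam_rotation, focal_px, image_size):
    """Project a 3D world point into pixel coordinates.

    cam_rotation : 3x3 rotation matrix from world to camera coordinates
                   (supplied each frame by the tracking system).
    focal_px     : focal length expressed in pixels.
    """
    p_cam = cam_rotation @ (np.asarray(world_point) - np.asarray(cam_position))
    if p_cam[2] <= 0:
        return None                      # point is behind the camera
    u = focal_px * p_cam[0] / p_cam[2] + image_size[0] / 2
    v = focal_px * p_cam[1] / p_cam[2] + image_size[1] / 2
    return u, v

# The virtual graphic is drawn at (u, v); because the camera pose is refreshed
# every frame from tracking data, the object never appears to drift.
anchor = (0.0, 1.0, 4.0)                 # virtual object 4 m in front of the origin
print(project_point(anchor, (0, 1.0, 0), np.eye(3), 1500, (1920, 1080)))
# -> the centre of a 1920x1080 frame, since the object is directly ahead
```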

AR in corporate training

According to a 2019 report from PwC, the VR and AR market is expected to provide a boost of $1.5 trillion to the global economy over the next decade – and 20% of this will be driven by their use in corporate training. 

Such a huge figure is justified when you consider the benefits that this technology brings: 

  • Reduced costs – AR tech can reduce training costs by reducing the need to travel, the use of physical training facilities, and the recruitment of multiple human facilitators. 
  • Improved sustainability – Reduced travel (especially for international teams) significantly reduces the carbon emissions associated with training. 
  • Improved scalability – With AR technology, it’s easier to run multiple training sessions simultaneously. This allows you to rapidly scale-up your organisation’s training strategy. 
  • Greater engagement – The immersive nature of AR experiences leads to greater engagement and more effective learning. 

Examples of AR in corporate training

Mercedes-Benz

The German car manufacturer has been using AR technology to train employees across different business functions as well as market new models. Whilst AR is relevant to all business areas, it’s proven particularly useful to engineers and Research & Development teams; the tech virtually disassembles all the components of a vehicle, whilst showing their individual functions and locations. 

Boeing

The aviation industry has long used VR and AR for different types of training. Boeing has been at the forefront of advances, using the tech to train everyone from pilots to engineers; most recently, the aircraft manufacturer has used AR technology to guide technicians through complex aircraft wiring schematics – resulting in a 25% decrease in production time, and a 40% increase in productivity. (Here’s a video of Paul Davis from Boeing discussing how AR is used to manufacture their planes.)

Thames Water

Thames Water has used AR and VR to train its staff and showcase future projects. Back in 2012, it used AR technology to superimpose water-flow graphics onto a static, real-world model of a new sewage system. More recently, it’s used the tech to allow workers to experience on-the-job situations in a safe, adaptable environment, as well as raise awareness of mental health in the workplace. 

Ford

Alongside training, AR technology has also been used to showcase new products. In 2017, Ford unveiled a number of new models at the North American Auto Show, using AR to demonstrate their new capabilities. With photo-realistic, high poly-count 3D animations overlaying the physical vehicles, Ford provided viewers with a privileged look under the sheet metal at how individual parts functioned. 

The Future of Training

The days of stuffy conference rooms with whiteboards and lacklustre presentations are coming to an end. Companies that truly want to empower their staff are turning to augmented reality training solutions. With the right technology, they improve learning, bolster engagement, and save time and money. 

Mo-Sys is a market-leading supplier of Virtual and Augmented Reality Production technology. Our StarTracker Studio is a complete VR and AR studio package designed for a wide range of uses – from film and broadcasts to augmented reality corporate training. If you’re interested in learning more about how our technology can help your organisation, please get in contact.

An introduction to object tracking

Object tracking is becoming more and more essential in modern film and broadcast. The increasing dominance of CGI and digital environments has made highly accurate object tracking technology incredibly useful.

What is object tracking?

Object tracking and object detection can be used to augment a target object into a scene, to ensure that virtual characters share a line of sight with actors, and even to add digital reflections and shadows. Whatever the reason you need object tracking, it’s important to consider the range of options and levels of object tracking available to you.

In this article, the Mo-Sys team review a range of object tracking options, from the more traditional image tracking through to our own StarTracker system. Whatever you are looking to gain from object tracking, there is a range of systems and techniques at your disposal.

Types of object tracking

Image-based tracking

Image tracking relies on the use of a green screen to create a mask comprising a cut-out of an actor or object. Based on the position of that cut-out in the frame, the tracker can then calculate the position of the image plane.

Image tracking is a relatively cheap and easy method of object tracking. However, it is somewhat inaccurate, usually only placing positions within a margin of +/-5 cm from the image plane to the camera.
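
As a rough, hypothetical illustration of the first step described above, the snippet below (using OpenCV and NumPy, assumed to be available) turns a green-screen frame into a cut-out mask and estimates the subject’s position from it. The HSV thresholds are arbitrary and would need tuning to the actual screen and lighting.

```python
import cv2
import numpy as np

def foreground_mask(frame_bgr):
    """Return a binary mask: subject in white, green screen in black."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    green_lo = np.array([40, 60, 60])        # illustrative lower HSV bound for the screen
    green_hi = np.array([85, 255, 255])      # illustrative upper HSV bound
    screen = cv2.inRange(hsv, green_lo, green_hi)
    return cv2.bitwise_not(screen)           # invert: subject = 255, screen = 0

def subject_position(mask):
    """Approximate the subject's 2D position as the centroid of the mask."""
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:
        return None                          # nothing in frame
    return m["m10"] / m["m00"], m["m01"] / m["m00"]   # (x, y) in pixels
```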

Radio tracking

Radio trackers use a beacon and a system of receivers set up in a studio for object tracking. A step up from image tracking in terms of quality, a radio tracker with four receivers set up can track a large area of approximately 30 x 30 m in full three dimensions.

As with image tracking, there is a degree of inaccuracy to be expected when using radio tracking, once again creating a potential margin of +/-5 cm. The main benefit of radio tracking over image tracking is the number of target objects that can be tracked at any one time: each object within the tracking area can be assigned a roaming beacon, allowing the system to track dozens of objects. It is important to remember that radio tracking will only track the position of an object, not its orientation.

Active LED tracking

Initially provided by OptiTrack, Active LED tracking is one of the more precise ways of tracking objects and actors. It works by utilising a blinking LED and a minimum of two cameras to accurately position a target. With four cameras that are able to turn, LED tracking allows users to track objects in full 360º.

The cameras track the blinking LEDs mounted onto an object with a battery-powered tag. Using one LED allows the cameras to track the position of the object, but using more than one also makes it possible to track the object’s orientation. An LED tracking system can accurately track objects down to the millimetre in a 6 x 6 m space, making it ideal for precision tracking in a limited space. However, expanding the tracking area with this technology requires installing a proportional number of cameras, dramatically increasing the cost. The OptiTrack system typically requires 12-16 cameras for a 12 x 12-metre area.

Infrared Tracking

Most commonly used by the HTC Vive and the Vive Tracker for VR, this tracking method uses two infrared base stations to create a sweeping infrared laser line that triggers small sensors on the receiver. By comparing the timing of when these sensors are triggered, angles and position can be calculated with high accuracy and low latency. Although a relatively simple and precise solution that offers great value for money, this method is restricted to a limited tracking area of approximately 10 x 10 m.

In addition, the HTC Vive suffers from occlusion when tracking multiple objects, and it can be difficult to coordinate with other methods of tracking. Furthermore, when switched off and on again, the Vive tracker reports a slightly different position from its previous one. Most importantly, Vive products cannot be genlocked, which causes instability for live broadcasting.

Mo-Sys StarTracker

Mo-Sys’ patented StarTracker technology is the most precise 3D object tracking available, which is why it is primarily used by broadcasters and filmmakers for real-time camera tracking. In terms of orientation in particular, StarTracker stands out from the crowd with its ability to track the most minute camera movements with ease.

The main benefit of using StarTracker for object tracking is that it can be far more cost-effective thanks to the patented retro-reflective sticker method. Simply adding more stickers and mapping them means you can increase the tracking area exponentially. Only one optical sensor is needed to track each object, and this can be achieved across a virtually unlimited space.

It is worth noting, however, that StarTracker is less compact than the LED system used by OptiTrack. It relies on larger electronics and a battery to interact with the markers placed on the studio ceiling.

Object tracking FAQs

What is the difference between object tracking and object detection?

Object tracking applies deep learning, using a program that follows a set of objects – each given a unique identification – across video frames. Object detection uses computer vision technology to detect objects such as faces or cars in images or videos in real time. 

Does object detection use machine learning?

Object detection requires a user to label examples of the objects they want detected, so it’s more of a supervised learning system that requires training.

What is an object tracking algorithm?

After an object’s initial movement has been detected, object tracking algorithms are used to track it as it moves within video frames. There are various types of algorithms used, such as R-CNN, HOG, R-FCN, SSD, SPP-net and YOLO.
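
To show how detection and tracking fit together, here’s a deliberately simple, illustrative sketch (in Python): boxes from any detector are matched to existing tracks by Intersection-over-Union, so each object keeps the same ID from frame to frame. Production trackers add motion models, re-identification and proper assignment, so treat this as a teaching aid rather than a recipe.

```python
def iou(a, b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def update_tracks(tracks, detections, next_id, threshold=0.3):
    """Greedily match this frame's detections to existing tracks by IoU.

    tracks     : {track_id: box} from the previous frame
    detections : list of boxes from the detector for the current frame
    Returns the updated {track_id: box} mapping and the next free ID.
    """
    updated, unused = {}, dict(tracks)
    for det in detections:
        best_id, best_iou = None, threshold
        for track_id, box in unused.items():
            overlap = iou(box, det)
            if overlap > best_iou:
                best_id, best_iou = track_id, overlap
        if best_id is None:                 # no overlap: a new object gets a new ID
            best_id, next_id = next_id, next_id + 1
        else:
            unused.pop(best_id)             # each track is matched at most once
        updated[best_id] = det
    return updated, next_id
```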

If you’re looking for high-precision object tracking technology for your next production, get in touch with the Mo-Sys team today. Our range of products are ideal for all motion tracking and VFX applications.

The key features of camera crane remote heads

When crane shots first began to appear in cinema, they were the epitome of cinematic innovation. Early in their life cycle, crane shots were only utilised by some of cinema’s most highly praised directors – including the highly controversial Leni Riefenstahl, Orson Welles and Jean-Luc Godard – and appear as early as the films of D.W. Griffith.

Until the late 90s and early 2000s, camera operators would have to ‘ride’ camera cranes and manually operate the pan and tilt of the camera to get that perfect shot. All too often, this was a technically involved, time-consuming and even potentially unsafe process. Today, almost all high-end productions have left manual cranes behind, choosing instead to use camera crane remote heads to achieve those sweeping shots.

But what is a camera crane remote head, and what are some of the main features of these camera crane heads? In this article, the Mo-Sys team look into the key features of camera crane heads and how they are used.

What is a remote head?

As we’ve already briefly touched on, remote camera crane heads mount a camera on the end of a camera crane. Thanks to innovative companies like MovieBird, Panther, SuperTechno and Mo-Sys, manual cranes have almost totally been replaced in both film and broadcast production. As such, it’s absolutely essential that remote heads replicate the feeling of ‘riding’ the camera, giving camera operators the immediate feedback they expect, and allow for the level of precision necessary to get the required shots.

Beyond being accurate and precise, remote heads also need to be robust and reliable. As an essential feature of almost any high-end production, the heads need to survive being swung around by a crane and any wear and tear that could occur while on set.

Gears, through-holes and inputs

The importance of backlash

Any camera operator knows that backlash is one of the most important considerations to keep in mind when shooting, especially when using a remote head. Using electronic slow starts can help to prevent backlash, but the delay caused by these features can affect the precision of the operator, making it difficult to get certain shots right.

More often than not, the level of backlash is determined by the gears that a remote head uses and, if they’re not up to scratch, shots can be jerky and elastic.

Worm gears

Worm gears have internal sliding actions that are very sensitive. While this is very positive for operators seeking to achieve a particularly precise shot, they sit on a very fine line between precision and over-sensitivity.

Cycloidal gears

Cycloidal gears are very strong, with a high gear ratio and zero backlash, making them ideal for use in remote heads. Despite this, they can generate a lot of noise, which can often disrupt a shot.

Mo-Sys Jam Drive

The drive used in Mo-Sys camera crane remote heads provides the best of both worlds. With no noise, no backlash and an accessible through-hole that allows for direct cabling, Mo-Sys’ gears are ideal for shooting.

Through-holes and slip rings

Through-holes and slip rings are important aspects of any camera crane head. The cables that keep our cameras rolling are absolutely imperative, so making sure a sturdy through-hole or slip ring is present in a remote head is important.

Slip rings used to be the most obvious way to keep cables secure and connected to the camera that’s shooting, but they must be continually checked to ensure that they remain compatible with whatever camera connections are being made. Furthermore, with so many films now being shot using optical cables for HD and UHD, slip rings are falling out of favour as they have the potential to corrupt the data being sent.

Rather than using a more traditional slip ring, Mo-Sys favours the use of through-holes. These are simply large holes in the middle of the remote head that allow all cables to be kept untangled and connected to the camera, without fear of data corruption.

Input devices

There are three main types of input device used by camera operators to control the remote head on the crane: handwheels, pan bars and joysticks.

Handwheels are mostly used by production teams in Hollywood and Canada. They’re very precise, incredibly smooth and perfect for big-budget blockbuster shooting. However, they demand expertise and good left-hand/right-hand co-ordination.

Pan bars are more common in European and Asian film markets. While not always as smooth as the handwheels used in the US, they are very precise and can make following a subject particularly easy.

Joysticks are mostly used in broadcast rather than film. While not as smooth or as accurate as the other two options, joysticks can be more user-friendly and one joystick can often operate multiple remote heads.


Remote Head Features

Now that we’ve outlined how remote heads work and the technology that goes into them, we’ll look into the key features that every camera crane remote head should possess. If you’re looking to acquire a remote head for your next production, you should ensure that the following key features are present.

Back-pan

Back-pan allows a camera operator to keep the remote head – and the connected camera – pointing in the same direction while the crane is moved. This does not keep the camera pointing at a specific target. Instead, it keeps the camera aimed at a set direction. Back-pan can be switched on and off and can also be used with heads attached to dollies. Mo-Sys heads such as the L40, Lambda 2.0 and B20 all have the optional back-pan feature.
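
The arithmetic behind back-pan is simple enough to sketch. In the illustrative snippet below (Python; the names and angle convention are our own assumptions, not how any specific head implements it), the head pans by the opposite of the crane’s swing so the camera keeps facing the same world direction.

```python
def backpan_head_angle(desired_world_heading, crane_swing_angle):
    """Head pan needed so the camera keeps facing desired_world_heading.

    World heading = crane swing + head pan, so the head pan simply counters
    whatever the crane arm does. Angles are in degrees.
    """
    return (desired_world_heading - crane_swing_angle) % 360.0

# The crane swings from 0 to 90 degrees, but the camera should keep facing 30 degrees:
for swing in (0.0, 30.0, 60.0, 90.0):
    print(swing, backpan_head_angle(30.0, swing))
# head pan: 30, 0, 330 (i.e. -30), 300 (i.e. -60) -> camera heading stays constant
```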

Targeting

Similar to back-pan, the targeting feature available with many remote heads keeps the camera pointing at a specific spot rather than a fixed direction. This makes it a useful feature if a shot needs to stay focused on one specific subject.

Soft stops

Soft stops can be programmed into remote heads to preset where a camera operator needs a camera to stop while a crane is being moved. Feathering can be used while setting a soft stop to determine how abruptly you want the camera to stop.
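
As an illustration of the feathering idea (a hypothetical sketch, not how any particular head implements it), the snippet below scales the operator’s commanded pan rate down smoothly as the head approaches the programmed limit.

```python
def feathered_rate(commanded_rate, current_angle, stop_angle, feather=10.0):
    """Scale the operator's commanded rate as the head nears a soft stop.

    commanded_rate : operator input, degrees/second (positive = towards the stop)
    current_angle  : current pan angle, degrees
    stop_angle     : programmed soft-stop limit, degrees
    feather        : distance over which to ease out, degrees
    """
    remaining = stop_angle - current_angle
    if remaining <= 0:
        return 0.0                            # at or past the stop: hold position
    scale = min(1.0, remaining / feather)     # ease out inside the feather zone
    return commanded_rate * scale

print(feathered_rate(20.0, 60.0, 90.0))   # far from the stop: full 20 deg/s
print(feathered_rate(20.0, 85.0, 90.0))   # inside the feather zone: 10 deg/s
print(feathered_rate(20.0, 90.0, 90.0))   # at the stop: 0 deg/s
```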

Capturing tracking data

While not currently available with the majority of remote heads, Mo-Sys remote heads offer users the ability to download all of the pan/tilt data used in a shoot for use in post-production at a later date.

Roll axis

A roll axis feature will allow the remote head to rotate around the optical axis to get unique shots with a ‘Dutch Angle’. While this is not a standard feature, a roll axis can be added to all Mo-Sys remote heads.

At Mo-Sys, we have a variety of camera crane remote heads suitable for broadcast and film. Get in touch with our helpful team today to find out more.