Test shoot for video & depth capture from live music performances
BBC R&D are researching ways to capture performers for XR experiences without intrusive or cumbersome multi-camera 3D capture rigs. The MAX-R team are specifically looking at leveraging machine learning (ML) to estimate depth data of performers from video. The derived depth data will then be combined with the colour information from the videos to enable photorealistic “2.5D” reconstructions of the performers in 3D virtual worlds. This will give an appearance close to full 3D when viewed from a range of angles, while avoiding the need for more complex set-ups that can otherwise restrict performers to a confined capture space. The images here show a recent test shoot designed to gather video and depth data from performers under controlled conditions. The media will be used to improve the ML techniques in preparation for the more challenging conditions of live performances as the project progresses.
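The article does not describe the team's actual pipeline, but the core idea of a “2.5D” reconstruction — pairing each colour pixel with an estimated depth value and lifting both into 3D — can be sketched with a standard pinhole back-projection. The function, the camera intrinsics (`fx`, `fy`, `cx`, `cy`), and the toy frame below are all illustrative assumptions, not the project's method:

```python
import numpy as np

def backproject_to_point_cloud(rgb, depth, fx, fy, cx, cy):
    """Back-project a per-pixel depth map into a coloured 3D point cloud
    using a pinhole camera model (illustrative sketch, not BBC's pipeline)."""
    h, w = depth.shape
    # Pixel coordinate grids: u runs along columns, v along rows.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    colours = rgb.reshape(-1, 3)
    valid = points[:, 2] > 0  # discard pixels with no depth estimate
    return points[valid], colours[valid]

# Toy example: a 4x4 frame at a uniform depth of 2 m,
# with hypothetical intrinsics.
rgb = np.full((4, 4, 3), 128, dtype=np.uint8)
depth = np.full((4, 4), 2.0)
pts, cols = backproject_to_point_cloud(rgb, depth, fx=500, fy=500, cx=2, cy=2)
print(pts.shape)  # (16, 3)
```

In practice the depth map would come from an ML depth-estimation model rather than being given, and the resulting coloured points would be rendered in the virtual world, appearing close to full 3D over a range of viewing angles.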