MIT researchers develop a drone system that can do a camera operator’s job

The next step in virtual directing

Shooting professional-quality video with a drone is not an easy task, and it often requires multiple human operators. Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) think they’ve found a way to take humans out of the operation side of the equation altogether. This week the team teased a system, to be presented at a conference later this month, that lets filmmakers set certain parameters and then leave the drone to do all the work.

The group calls the system “real-time motion planning for aerial videography,” and it lets a director define basic parameters of a shot, like how tight or how wide the frame should be, or the position of the subject within that frame. They can also change those settings on the fly and the drone will adjust how it’s filming accordingly. And, of course, the drone can dynamically avoid obstacles.

Doing the difficult and dangerous work for directors

While a few consumer drones, like the DJI Mavic Pro, already offer object recognition and tracking, MIT’s project sets itself apart with more robust versions of those technologies and far more granular control. The system constantly measures and estimates the velocities of the objects moving around the drone, updating those estimates 50 times a second.
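The article doesn’t describe how those velocity estimates are computed, but the idea of a fixed-rate estimator can be sketched as a simple finite-difference calculation; the function and variable names below are illustrative, not the CSAIL system’s actual code.

```python
# Minimal sketch of velocity estimation at a fixed update rate.
# The 50 Hz rate comes from the article; everything else is assumed.

UPDATE_RATE_HZ = 50           # updates per second, per the article
DT = 1.0 / UPDATE_RATE_HZ     # time between samples (20 ms)

def estimate_velocity(prev_pos, curr_pos, dt=DT):
    """Finite-difference velocity estimate from two (x, y, z) position samples."""
    return tuple((c - p) / dt for p, c in zip(prev_pos, curr_pos))

# Example: a subject that moved 2 cm along x between consecutive samples
v = estimate_velocity((0.0, 0.0, 0.0), (0.02, 0.0, 0.0))
# v[0] == 1.0 (m/s), since 0.02 m / 0.02 s = 1.0 m/s
```

A real system would likely smooth these raw differences with a filter to suppress sensor noise, but the per-sample arithmetic is the same.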

The researchers say that a director using their system would be able to weigh certain variables differently so the drone knows what to prioritize in a shot, too. From the MIT release:

Unless the actors are extremely well-choreographed, the distances between them, the orientations of their bodies, and their distance from obstacles will vary, making it impossible to meet all constraints simultaneously. But the user can specify how the different factors should be weighed against each other. Preserving the actors’ relative locations onscreen, for instance, might be more important than maintaining a precise distance, or vice versa. The user can also assign a weight to minimize occlusion, ensuring that one actor doesn’t end up blocking another from the camera.
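The trade-off the release describes, where a director weighs competing constraints that can’t all be satisfied at once, is the shape of a weighted cost function. The sketch below is a hypothetical illustration of that idea; the constraint names, weights, and function signature are assumptions, not the paper’s actual formulation.

```python
# Hypothetical weighted cost over shot constraints, illustrating how a
# director might trade off the factors the MIT release describes.

def shot_cost(actual, desired, weights):
    """Sum of weighted squared errors over named shot constraints.

    Larger weights make deviations in that constraint more costly,
    so the planner prioritizes keeping it close to the desired value.
    """
    return sum(weights[k] * (actual[k] - desired[k]) ** 2 for k in weights)

# Prioritize the actors' relative on-screen positions (weight 10) over
# holding a precise camera distance (weight 1), with occlusion in between.
weights = {"relative_position": 10.0, "distance": 1.0, "occlusion": 5.0}
desired = {"relative_position": 0.0, "distance": 3.0, "occlusion": 0.0}
actual  = {"relative_position": 0.1, "distance": 3.5, "occlusion": 0.0}

cost = shot_cost(actual, desired, weights)
# cost == 0.35: 10*(0.1)**2 + 1*(0.5)**2 + 5*(0.0)**2
```

A motion planner minimizing a cost like this would accept a 0.5 m distance error before it accepted a comparable drift in the actors’ relative framing, which is exactly the prioritization behavior the release describes.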

It’s a cool idea that’s reminiscent of, and seemingly a natural extension of, the virtual camera work that directors like James Cameron helped pioneer and that others (like Gareth Edwards and Lucasfilm) have used ever since. Judging from CSAIL’s video, it’s definitely not ready for that kind of work yet. But it’s another important wrinkle in the way new hardware and software are changing filmmaking, big or small.