

Facebook is simulating users’ bad behavior using AI


Bad bots roam free in a parallel version of Facebook

Illustration by Alex Castro / The Verge

Facebook’s engineers have developed a new method to help them identify and prevent harmful behavior like users spreading spam, scamming others, or buying and selling weapons and drugs. They can now simulate the actions of bad actors using AI-powered bots by letting them loose on a parallel version of Facebook. Researchers can then study the bots’ behavior in simulation and experiment with new ways to stop them.

The simulator is known as WW, pronounced “Dub Dub,” and is based on Facebook’s real code base. The company published a paper on WW (so called because the simulator is a truncated version of WWW, the world wide web) earlier this year, but shared more information about the work in a recent roundtable.

The research is being led by Facebook engineer Mark Harman and the company’s AI department in London. Speaking to journalists, Harman said WW was a hugely flexible tool that could be used to limit a wide range of harmful behavior on the site, and he gave the example of using the simulation to develop new defenses against scammers.

In real life, scammers often start their work by prowling users’ friend groups to find potential marks. To model this behavior in WW, Facebook engineers created a group of “innocent” bots to act as targets and trained a number of “bad” bots that explored the network trying to find them. The engineers then tried different ways to stop the bad bots, introducing various constraints, like limiting the number of private messages and posts the bots could send each minute, to see how this affected their behavior.
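A toy sketch of that setup might look like the following. Everything here is invented for illustration — the graph, the function names, and the crawl logic are not from Facebook’s WW, which runs on Facebook’s real infrastructure rather than a stand-alone script. The idea is just that a “bad” bot crawls a friend graph looking for target bots while a per-step contact limit acts as the constraint being tested:

```python
# Hypothetical sketch: a "bad" bot crawls a friend graph looking for
# "innocent" target bots, capped at `msgs_per_step` new contacts per step.

def run_episode(graph, start, targets, msgs_per_step, steps):
    """Crawl outward from `start`; return how many targets were reached."""
    frontier, seen, reached = [start], {start}, set()
    for _ in range(steps):
        budget = msgs_per_step          # the per-step rate limit
        next_frontier = []
        for node in frontier:
            for nbr in graph[node]:
                if budget == 0:
                    break
                if nbr in seen:
                    continue
                seen.add(nbr)           # one "private message" spent
                budget -= 1
                next_frontier.append(nbr)
                if nbr in targets:
                    reached.add(nbr)
            if budget == 0:
                break
        frontier = next_frontier or frontier
    return len(reached)

# A small friend graph: the scammer starts at user 0; user 4 is the mark.
graph = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0, 4], 4: [3]}
loose = run_episode(graph, 0, {4}, msgs_per_step=3, steps=2)  # reaches the mark
tight = run_episode(graph, 0, {4}, msgs_per_step=1, steps=2)  # rate-limited out
```

Running the same episode under different limits and comparing how many targets each bot reaches is, in miniature, the experiment the engineers describe.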

Harman compares the work to that of city planners trying to reduce speeding on busy roads. In that case, engineers model traffic flows in simulators and then experiment with introducing things like speed bumps on certain streets to see what effect they have. The WW simulation lets Facebook do the same thing, with bots standing in for its users.

“We apply ‘speed bumps’ to the actions and observations our bots can perform, and so quickly explore the possible changes that we could make to the products to inhibit harmful behavior without hurting normal behavior,” says Harman. “We can scale this up to tens or hundreds of thousands of bots and therefore, in parallel, search many, many different possible [...] constraint vectors.”
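Harman’s “constraint vectors” can be pictured as a grid search: each candidate is a vector of limits, and each is scored by running bot episodes under those limits. The sketch below is purely illustrative — the scoring functions are stand-ins, not Facebook’s metrics — but it shows the shape of the search: minimize harm without pushing normal-user friction past a floor.

```python
from itertools import product

def harm_score(msg_limit, post_limit):
    """Stand-in for running bot episodes: pretend harm grows with looser limits."""
    return msg_limit * 2 + post_limit

def utility_score(msg_limit, post_limit):
    """Stand-in for normal-user experience: looser limits mean less friction."""
    return min(msg_limit, 10) + min(post_limit, 10)

def search_constraints(msg_limits, post_limits, min_utility):
    """Return the (harm, constraint-vector) pair with the least harm that
    still keeps normal behavior above a utility floor."""
    best = None
    for msg, post in product(msg_limits, post_limits):
        if utility_score(msg, post) < min_utility:
            continue                    # would hurt normal users too much
        score = harm_score(msg, post)
        if best is None or score < best[0]:
            best = (score, (msg, post))
    return best
```

In WW, each call to the scoring function would be a batch of bot episodes rather than an arithmetic stand-in, and the grid would span many thousands of parallel runs.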

Simulating behavior you want to study is a common enough practice in machine learning, but the WW project is notable because the simulation is based on the real version of Facebook. Facebook calls its approach “web-based simulation.”

“Unlike in a traditional simulation, where everything is simulated, in web-based simulation, the actions and observations are actually taking place through the real infrastructure, and so they’re much more realistic,” says Harman.

He stressed, though, that despite this use of real infrastructure, bots are unable to interact with users in any way. “They actually can’t, by construction, interact with anything other than other bots,” he says.

Notably, the simulation is not a visual copy of Facebook. Don’t imagine scientists studying the behavior of bots the same way you might watch people interact with one another in a Facebook group. WW doesn’t produce results via Facebook’s GUI, but instead records all the interactions as numerical data. Think of it as the difference between watching a football game (real Facebook) and simply reading the match statistics (WW).
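The “match statistics” view can be pictured as an event log: each interaction lands as a numeric record rather than a rendered page, and researchers aggregate over it. The field names below are made up for illustration, not taken from WW:

```python
from collections import Counter

# Hypothetical event log: (timestep, actor_id, action, target_id)
events = [
    (0, "bot_7", "friend_request", "bot_2"),
    (1, "bot_7", "message",        "bot_2"),
    (1, "bot_9", "post",           None),
    (2, "bot_7", "message",        "bot_3"),
]

# The "statistics" a researcher reads instead of watching the "game".
actions_per_bot = Counter(actor for _, actor, _, _ in events)
messages_sent = sum(1 for e in events if e[2] == "message")

print(actions_per_bot["bot_7"], messages_sent)  # → 3 2
```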

Right now, WW is still in the research stage, and none of the simulations the company has run with its bots has resulted in real-life changes to Facebook. Harman says his group is still running tests to check that the simulations match real-life behaviors with high enough fidelity to justify real-life changes, but he thinks the work will result in modifications to Facebook’s code by the end of the year.

There are certainly limitations to the simulator, too. WW can’t model user intent, for example, nor can it simulate complex behaviors. Facebook says the bots can search, make friend requests, leave comments, make posts, and send messages, but the actual content of these actions (e.g., the text of a conversation) isn’t simulated.

Harman says the power of WW, though, is its ability to operate on a huge scale. It lets Facebook run thousands of simulations to check all sorts of minor changes to the site without affecting users, and from that, it finds new patterns of behavior. “The statistical power that comes from big data is still not fully appreciated, I think,” he says.

One of the more exciting aspects of the work is the potential for WW to uncover new weaknesses in Facebook’s architecture through the bots’ actions. The bots can be trained in various ways. Sometimes they’re given explicit instructions on how to act; sometimes they are asked to imitate real-life behavior; and sometimes they are just given certain goals and left to decide their own actions. It’s in the latter scenario (an approach akin to reinforcement learning) that unexpected behaviors can occur, as the bots find ways to reach their goal that the engineers did not predict.
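The goal-driven setting can be sketched with the simplest reward-driven learner there is, an epsilon-greedy bandit: the bot is told only its reward and discovers by trial and error which action pays off. This is a toy analogue, not WW’s training method, and the action names and reward values are invented:

```python
import random

def train_bandit(rewards, episodes=500, epsilon=0.1, seed=0):
    """Epsilon-greedy bandit over a fixed reward table; returns the action
    the bot learns to prefer."""
    rng = random.Random(seed)
    values = {a: 0.0 for a in rewards}   # estimated value of each action
    counts = {a: 0 for a in rewards}
    for _ in range(episodes):
        if rng.random() < epsilon:
            action = rng.choice(list(rewards))      # explore
        else:
            action = max(values, key=values.get)    # exploit
        counts[action] += 1
        # incremental mean update toward the observed reward
        values[action] += (rewards[action] - values[action]) / counts[action]
    return max(values, key=values.get)

# Invented action set: the bot is never told which strategy works; it is
# only rewarded when it succeeds, and settles on the highest-paying one.
best = train_bandit({"spam_dm": 0.1, "friend_then_dm": 0.9, "post_link": 0.3})
```

The point of the analogy is the last paragraph’s: nothing in the reward table tells the bot *how* to act, so the strategy it settles on can be one its designers did not anticipate.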

“At the moment, the main focus is training the bots to imitate things we know happen on the platform. But in theory and in practice, the bots can do things we haven’t seen before,” says Harman. “That’s actually something we want, because we ultimately want to get ahead of the bad behavior rather than continually playing catch up.”

Harman says the group has already seen some unexpected behavior from the bots but declined to share any details, saying he didn’t want to give scammers any clues.