Many of the services that we enjoy today are made possible by machine learning algorithms. This type of artificial intelligence uses large amounts of historical data to make predictions based on previous behaviors. So why didn’t everything break when the pandemic suddenly changed our collective behaviors? It comes down to the way algorithms are built and maintained.
Many operate based on a concept called stationarity, which means the system assumes that the statistical properties of the data stay roughly the same over time. That’s how algorithms like the ones that financial institutions use can instantly detect unusual behavior and flag it as potential fraud.
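As a toy illustration of that assumption, a fraud check might flag any transaction that lands far outside the historical distribution of a customer's spending. This is only a sketch: the dollar amounts, threshold, and function name are invented, and real fraud models use far more than one feature.

```python
import statistics

def flag_unusual(history, new_amount, threshold=3.0):
    """Flag a transaction whose amount is far from the historical norm.

    Relies on stationarity: the mean and spread of past amounts are
    assumed to describe future amounts too.
    """
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    z_score = (new_amount - mean) / stdev  # distance from normal, in stdevs
    return abs(z_score) > threshold

# Hypothetical history: typical purchases hover around $40-60.
history = [42.0, 55.0, 48.0, 60.0, 51.0, 45.0, 58.0, 47.0]
print(flag_unusual(history, 52.0))   # False — in line with history
print(flag_unusual(history, 900.0))  # True — far outside it, so flagged
```

When everyone's behavior shifts at once, as in a pandemic, a check like this starts flagging perfectly legitimate transactions, which is exactly the failure mode the article describes.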
But a global event like COVID-19, a natural disaster, or the collapse of an economic bubble can make everyone suddenly behave in unusual ways. Some machine learning models failed in the face of so much upheaval, but many did just fine. The algorithms that can withstand this kind of commotion are designed with built-in resiliency.
Many of Capital One’s machine learning tools, for example, are trained to give more weight to newer data. That way, even abrupt shifts like our new spending habits help guide the model’s decision making.
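One simple way to weight newer data more heavily is an exponentially decaying weighting scheme, where each older observation counts less than the one after it. The sketch below is illustrative only: the decay rate and spending figures are made up, and this is not a description of Capital One's actual models.

```python
def recency_weights(n, decay=0.5):
    """Normalized weights for n observations, oldest first.

    The newest observation gets weight 1 before normalization; each
    step back in time multiplies the weight by `decay`.
    """
    raw = [decay ** (n - 1 - i) for i in range(n)]
    total = sum(raw)
    return [w / total for w in raw]

def weighted_mean(values, decay=0.5):
    """Mean of `values` (oldest first) with recency weighting."""
    weights = recency_weights(len(values), decay)
    return sum(w * v for w, v in zip(weights, values))

# Hypothetical spending that jumps abruptly from ~50 to ~200.
spend = [50, 52, 48, 51, 200, 210, 205]
print(round(weighted_mean(spend), 1))       # 187.4 — tracks the new level
print(round(sum(spend) / len(spend), 1))    # 116.6 — plain mean lags behind
```

The recency-weighted estimate adapts to the abrupt shift within a few observations, while the unweighted mean is still dominated by pre-shift behavior.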
It starts with good model creation. When data scientists build machine learning algorithms, they have to strike the right balance between accuracy and flexibility. When they train the models, they use as much data as possible to make sure the models perform well based on the past. At the same time, they have to consider events that might happen in the future so the algorithms don’t get caught off-guard.
“You’ve really got to be focused on getting different viewpoints,” says Travis Nixon, a Microsoft data scientist. “Feeding data science and machine learning a diversity of opinions and perspectives is absolutely essential and critical.”
Plus, even artificial intelligence needs a human touch. Real people designed the algorithms in the first place, and keeping them running, especially during an unexpected event, requires a collaboration between humans and machines. When COVID-19 rocked our world, data scientists turned their attention toward monitoring existing models to keep them accurate.
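That kind of monitoring can be as simple as comparing the inputs a model sees in production against the baseline it was trained on, and alerting a human when they diverge. The check below is a crude sketch with an invented tolerance and function name; production systems use richer statistical tests of distribution shift.

```python
def drift_alert(baseline, recent, tolerance=0.25):
    """Alert when recent data drifts away from the training baseline.

    Fires when the recent mean moves more than `tolerance` (as a
    fraction of the baseline mean) away from the baseline mean.
    """
    base_mean = sum(baseline) / len(baseline)
    recent_mean = sum(recent) / len(recent)
    shift = abs(recent_mean - base_mean) / abs(base_mean)
    return shift > tolerance

# Hypothetical figures: training-era spending vs. pandemic-era spending.
training_era = [50, 52, 48, 51]
print(drift_alert(training_era, [49, 53]))    # False — business as usual
print(drift_alert(training_era, [200, 210]))  # True — time to recurate
```

An alert like this doesn't fix the model; it tells the data scientists it's time to step in, which is the human-machine collaboration the article describes.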
“You’ve got to come back and curate that model again and see that data coming in over time,” says Nixon.
So, even though machine learning algorithms are based on historical data, a massive shift in new data doesn’t mean we need to throw out all of our old models. They just need to be handled with care.