The next frontier in hiring is AI-driven

Can an AI ease the stress of recruiting?

Frrole’s DeepSense AI doesn’t have kind things to say about me. My stability potential — a person’s willingness to “give it their all” before they quit — ranks as 4.6, a “medium” assessment marked by an ominous red bar. Other traits, like my learning ability and need for autonomy, rank only slightly higher, while a short personality assessment is kinder: an optimistic attitude, a sunny disposition, a good listener.

DeepSense has, for the moment, reduced me to a series of data points. The system has yanked information from my social profiles, from LinkedIn to Twitter, in an effort to sum me up as a person. The purpose? That recruiters might be better equipped to scout and understand potential employees. Hiring professionals bring their own personal bias to the table in how they interpret and understand possible candidates. The tech isn’t infallible, but AI-driven systems promise to eliminate some of those prejudices.

For some tech companies, AI may prove most valuable in shortening the slog of application vetting. Companies like Ideal introduce these systems into that process early. Instead of one person reading through hundreds of resumes, they envision a process in which AI can quickly sort through the data. CEO Somen Mondal compares Ideal’s tech to a recommendation engine, much like Amazon’s or Netflix’s — a first line of defense in high-volume hiring. Ideal connects to a company’s applicant tracking system, works out who has applied, and compares those applicants against people who have already been hired and are doing well.

“The things that AI is not good at doing is the soft skills”

Take a former Microsoft employee, for example. “Our system is able to understand that Microsoft is a technology company, so it adds context just like a human would,” Mondal says. “Then it’s able to train itself on how to select very accurately, intelligently — not only for efficiency but for the quality of hire, which means hiring better people for the job.” The tech then connects those missing pieces and provides the recruiter with a recommendation: here’s who you should hire and why, based on their historical performance and skills. Mondal claims that, much like a human recruiter getting better at the job, Ideal’s AI will improve over time.
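To make the recommendation-engine comparison concrete, here is a minimal sketch of what ranking applicants against past successful hires could look like, assuming resume text is the only input. It uses TF-IDF and cosine similarity from scikit-learn; Ideal’s actual features and models aren’t public, so every name and score below is illustrative.

```python
# Hypothetical sketch: score new applicants by textual similarity to past hires
# who performed well. Feature choice (raw resume text) and model are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

successful_hires = [
    "software engineer, five years at Microsoft, C# and Azure, led a small team",
    "support agent, call center experience, CRM tooling, high customer ratings",
]
applicants = {
    "applicant_a": "former Microsoft developer, Azure certifications, mentored juniors",
    "applicant_b": "recent graduate, retail experience, eager to learn programming",
}

# Fit one vocabulary over all resumes so the vectors are comparable.
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(successful_hires + list(applicants.values()))
hire_vecs = matrix[: len(successful_hires)]
applicant_vecs = matrix[len(successful_hires):]

# Rank each applicant by their best match against the pool of successful hires.
scores = cosine_similarity(applicant_vecs, hire_vecs).max(axis=1)
for name, score in sorted(zip(applicants, scores), key=lambda pair: -pair[1]):
    print(f"{name}: similarity to past successful hires = {score:.2f}")
```

A production system would presumably draw on structured fields from the applicant tracking system and actual performance data rather than free text, but the shape of the problem is the same: compare newcomers to known good hires and surface the closest matches.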

But options like Ideal aren’t, well, ideal for all career fields. The approach is helpful for retail, call center, or banking jobs, for example, but it wouldn’t be the best way to track down a company’s next VP of sales. And it can’t figure out things like cultural fit. “The things that AI is not good at doing — or computers in general — is the soft skills,” says Mondal. “Our goal is to get someone the right people to interview, whereas before, recruitment in high volume, people can’t even look through all the applications... But when it comes to ‘is this person a good fit, can we negotiate with this person to get them,’ those are clearly things that you need a human touch to manage.”

That hasn’t stopped companies like Frrole from trying. DeepSense relies on behavioral predictions and a small personality assessment. It pulls from publicly available social data, uses sentiment analysis, and distills its findings into behavioral traits and personality — categories like teamwork, learning ability, or behavior — with the aim of saving candidates the time they might spend on a test or traditional CV.

“What we’re trying to do here is essentially build a really easy way to create the right match between people and jobs that they do,” says Amarpreet Kalkat, Frrole co-founder and DeepSense co-creator. And what determines a good candidate, he says, often comes down to personality.

Kalkat says that DeepSense is often used in executive and managerial hiring as a first step. It focuses on individuals and their personality, expectations, and behavior in an attempt to better personalize communication, rather than sort through masses of applicants. “We’re not a background check tool,” he says. “We come in at the start of the process where you’re trying to know the candidate or how to speak to that person.” Once a candidate has been analyzed, it offers up a brief summary of the person in question. When I plugged my own LinkedIn profile into DeepSense via a browser, it informed me that I’m a skeptic who will incessantly ask questions — a horoscope-like description you could apply to any journalist. Interpretations of my abilities as a team player or how friendly I am were curious, but vague enough to apply to almost anyone. I’m certain my editor has different opinions on my ability to hit a deadline.

“From a relative point of view, how accurate is human judgement?”

How Frrole presents its findings has changed over time. Kalkat says that while reviewing “what is useful and right” for hiring, they’ve cut any insight that might not pertain to criterion validity or job performance. The first profile they provided me with, which included information from my Twitter, analyzed aspects like anxiety and depression. When I reached out to Kalkat about these metrics, he pointed to a cached version as a result of “significant Twitter data.” These categories no longer exist on my profile. “It is not a measure of mental health anyway,” he says, pointing to a specific mood map their tech uses.

Kalkat says this bio is meant to help recruiters who are interacting with a possible candidate for the first time, to “manage those first impressions” better than they might going in blind. When I suggested that assigning traits based on an AI reading, rather than an in-person meeting, would actually increase a hirer’s bias, Kalkat disagreed. “How does it compare to the judgement that I might form otherwise?” he says. “From a relative point of view, how accurate is human judgement?”

AI will continue to improve with more data, he claims, while humans are “unlikely to make any significant leap” in this area. He says that DeepSense can avoid bias by not accounting for features like race, age, or gender. It also helps to focus on “predicting personality attributes on standard frameworks like DISC and Big Five, frameworks that hiring teams understand easily and in a standardized manner as they have been in use for 50+ years.”
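A rough illustration of the bias-avoidance step Kalkat describes: exclude protected attributes from the features a model is allowed to see. The column names below are hypothetical, and dropping those columns does not by itself remove variables that merely act as proxies for them.

```python
# Hypothetical sketch: strip protected attributes before any scoring happens.
# Column names are invented; only the remaining columns would feed a model.
import pandas as pd

candidates = pd.DataFrame([
    {"name": "A", "age": 29, "gender": "F", "race": "X",
     "positive_word_ratio": 0.12, "pronoun_ratio": 0.08},
    {"name": "B", "age": 41, "gender": "M", "race": "Y",
     "positive_word_ratio": 0.05, "pronoun_ratio": 0.14},
])

PROTECTED = ["age", "gender", "race"]
features = candidates.drop(columns=PROTECTED + ["name"])
print(features)  # only the linguistic signals remain as model inputs
```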

Analyzing prospective employees’ profiles without their consent may sound suspect, but realistically, public information is fair game for employers. It’s not unusual for recruiters to scan social media profiles while checking a candidate for culture fit; LinkedIn exists to draw interested eyes to your professional capabilities; and DeepSense doesn’t hunt for private data.

“It’s critical that we do the ethically and morally right thing,” Kalkat says. He goes on: DeepSense doesn’t judge what you’re talking about, a potentially sticky situation for anyone who wants to loudly talk about topics like politics or religion. “It is not looking at the topic. It is in fact looking through the topic to see linguistic patterns that have been proven in 30+ years of academic research,” he says. “People who use more pronouns are likely to be less open, or people who use more positive words are likely to be more agreeable.” What matters are salient characteristics, not specific sentiments. “The intent is to understand the ‘real’ person without being bogged down by distractions,” he says.
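A toy version of the psycholinguistic scoring Kalkat describes might count pronouns and positive words in a candidate’s public posts and turn the ratios into rough trait signals. The word lists and the mapping onto openness and agreeableness below are invented for illustration; DeepSense’s actual model is proprietary.

```python
# Hypothetical sketch: derive crude personality signals from linguistic patterns.
import re

PRONOUNS = {"i", "me", "my", "we", "us", "our", "you", "he", "she", "they", "them"}
POSITIVE = {"great", "love", "happy", "excited", "wonderful", "thanks", "good"}

def trait_signals(posts):
    words = [w for post in posts for w in re.findall(r"[a-z']+", post.lower())]
    pronoun_ratio = sum(w in PRONOUNS for w in words) / max(len(words), 1)
    positive_ratio = sum(w in POSITIVE for w in words) / max(len(words), 1)
    return {
        # per the article: heavier pronoun use ~ lower openness,
        # more positive words ~ higher agreeableness
        "openness_signal": 1 - pronoun_ratio,
        "agreeableness_signal": positive_ratio,
    }

tweets = ["We love shipping great features!", "I think they missed my point, again."]
print(trait_signals(tweets))
```

Even this toy version shows why context matters: a sarcastic “great, thanks” would register as positive, which is exactly the failure mode discussed below.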

On platforms like Twitter, context is everything

AI’s presence in hiring is intended to streamline a taxing process and cut human bias, but it comes with its share of concerns. In its 2018 report, AI Now found that “the gap between those who develop and profit from AI — and those most likely to suffer the consequences of its negative effects — is growing larger, not smaller,” as it relates to concerns over bias, discrimination, due process, liability, and overall responsibility for harm. Machine learning has already proven to be problematic in many instances. Amazon killed an AI vetting project after learning that the software was biased against women. Even AI-driven assessments on jobs like babysitting raise concerns over how well software can differentiate between actual bullying behaviors online and a joke or movie quote.

On platforms like Twitter, context is everything. The rapid-fire nature of the platform makes it easy for tweets to be stripped of their true meaning. Reading irony or sarcasm in text requires situational clues. Some written memes require understanding of specific capitalization or punctuation. Kalkat says that context is something the company is continuously working on, but he feels “we’re at a good point right now” thanks to its use of multiple years of data across platforms. In the future, he adds, the company plans to expand its tech to include an option for people to upload letters or essays they’ve written to be analyzed by a psycholinguistics engine.

How that might account for internet slang and culture is still unclear. And DeepSense won’t be of much help for potential candidates who don’t have a meaty enough internet presence. Without sufficient online data, it can’t compile a profile. “DeepSense does not say which candidate should be selected for a certain job and which candidate not. It is not an elimination tool,” Kalkat says. “Hiring organizations have 100% control. They learn the characteristics exhibited by successful candidates from past data available, or modify it where necessary. And they then use it to understand incoming candidates in an objective, and proven-by-data manner, so that they can eliminate subjective human biases.”  

But DeepSense, like Ideal, only stands to grow. Researchers are bullish about the growing ability of AI to extract meaning from the written word, and companies that want to automate recruitment could benefit from it. Both Ideal and Frrole say their user bases continue to grow and interest is climbing. “I really think 2019 is going to be a big year for AI and recruiting, kind of getting away from just automation and into real intelligence,” Mondal says. “Being able to make better results. I think that’s the real key moving forward.”