Data analyst Jim Adler started work on his felon classifier with two goals in mind. First, he wanted to see how well so-called "big data" could correlate a behavior with the set of information surrounding it. Second, he wanted to push forward the debate around privacy, data, and profiling. If a computer could scan a data set of convicted felons and find characteristics that were more likely to show up, would politicians, employers, and police try to screen out "dangerous" people? Would we need new laws to protect citizens' data troves?
His current classifier is a rough prototype that locks onto traits like tattoos, skin color, eye color, and previous non-felony convictions as potential identifying factors, with promising but unreliable results. Because it was trained on data from Kentucky police databases, it "learned" from a very limited population, and if the justice system itself unfairly singles out some groups, the bot may simply be mirroring those prejudices. Besides Adler's own site above, Bloomberg explains more of his method and what it could someday mean for privacy and profiling.
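The article doesn't publish Adler's actual model, but the general technique it describes, counting how often traits co-occur with a label and scoring new records by those frequencies, can be sketched in a few lines. Everything below is a hypothetical illustration: the feature names, values, and records are invented toy data, and the naive-Bayes-style scoring is an assumption about how such a classifier might work, not Adler's method. It also makes the article's bias point concrete: the model can only echo whatever patterns, fair or not, exist in its training records.

```python
from collections import Counter, defaultdict

# Hypothetical toy records -- invented for illustration only, not drawn
# from Adler's data. Each record pairs categorical traits with a label.
RECORDS = [
    ({"tattoo": "yes", "prior_misdemeanor": "yes"}, "felon"),
    ({"tattoo": "yes", "prior_misdemeanor": "no"}, "felon"),
    ({"tattoo": "no", "prior_misdemeanor": "yes"}, "non-felon"),
    ({"tattoo": "no", "prior_misdemeanor": "no"}, "non-felon"),
    ({"tattoo": "yes", "prior_misdemeanor": "yes"}, "felon"),
    ({"tattoo": "no", "prior_misdemeanor": "no"}, "non-felon"),
]

def train(records):
    """Count labels, and how often each (feature, value) pair occurs per label."""
    label_counts = Counter()
    feature_counts = defaultdict(Counter)  # (feature, value) -> Counter of labels
    for features, label in records:
        label_counts[label] += 1
        for feat, val in features.items():
            feature_counts[(feat, val)][label] += 1
    return label_counts, feature_counts

def classify(features, label_counts, feature_counts):
    """Naive-Bayes-style score: label prior times per-feature frequencies,
    with add-one smoothing so an unseen (feature, value) pair doesn't
    zero out a label entirely."""
    total = sum(label_counts.values())
    best_label, best_score = None, -1.0
    for label, count in label_counts.items():
        score = count / total
        for feat, val in features.items():
            score *= (feature_counts[(feat, val)][label] + 1) / (count + 2)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

label_counts, feature_counts = train(RECORDS)
print(classify({"tattoo": "yes", "prior_misdemeanor": "yes"},
               label_counts, feature_counts))  # -> "felon" on this toy data
```

Note that the prediction is purely a reflection of the six toy records: if the training data over-represents a group, the scores do too, which is exactly the mirroring-of-prejudice concern the article raises.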