How AI Is Catching People Cheating On Diets, School Work And Job Interviews

Artificial intelligence is paving the way toward combating cheating, whether on homework, in job hunting, or on that diet you swore you'd stay on. 

One California company, Crosschq, is using machine learning and data analytics to help HR departments avoid bad hires, according to MarketWatch. CEO and co-founder Mike Fitzsimmons noticed that job applicants were regularly using friends and former colleagues to overhype their credentials. To counter this, Crosschq's program has candidates rank themselves on factors such as motivation and attention to detail, then has their references rate the candidate on the same factors. 

The software then flags any inconsistencies for decision-makers. 

"It’s when you start to see inconsistencies, that’s when the flags go up," said Fitzsimmons, adding that the program is designed to control "the ability of the candidate to game the system." 
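The idea of surfacing candidate-versus-reference gaps can be illustrated with a minimal sketch. This is purely hypothetical, not Crosschq's actual implementation: the factor names, 1-to-5 rating scale, and gap threshold are all assumptions made for illustration.

```python
# Hypothetical sketch of inconsistency flagging: the candidate and a
# reference each rate the same factors on a 1-5 scale, and large gaps
# between self-rating and reference rating are surfaced for review.
# Factor names, scale, and threshold are illustrative assumptions.

FACTORS = ["motivation", "attention_to_detail"]

def flag_inconsistencies(self_ratings, reference_ratings, threshold=2):
    """Return (factor, gap) pairs where the self-rating exceeds the
    reference rating by at least `threshold` points."""
    flags = []
    for factor in FACTORS:
        gap = self_ratings[factor] - reference_ratings[factor]
        if gap >= threshold:
            flags.append((factor, gap))
    return flags

candidate = {"motivation": 5, "attention_to_detail": 5}
reference = {"motivation": 4, "attention_to_detail": 2}
print(flag_inconsistencies(candidate, reference))
# [('attention_to_detail', 3)]
```

A real system would weight factors, account for rater leniency, and aggregate across multiple references rather than trusting a single one.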

Drexel University, meanwhile, is working on an app that can predict when dieters are likely to break promises to themselves based on the time of day, the user's emotions, and even the temperature of their skin and their heart rate. 

Approximately 45 million Americans diet each year, but many don’t lose weight because they backslide, said Drexel University psychology professor Evan Forman. While there are plenty of apps telling users the foods they should be eating and the activities they should be doing, that only goes so far, said Forman, who is director of the school’s Center for Weight, Eating and Lifestyle Science. -MarketWatch

"It’s easy to understand what change you ought to make. It’s much more difficult to actually make those changes and keep on making them," said Forman, who is developing the app, OnTrack, with others. The app 'learns' when diet lapses are statistically likely, warning users right before it thinks the next one is about to happen.
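A lapse predictor of this kind can be sketched as a simple probabilistic score over the sorts of signals the article mentions (time of day, self-reported stress, heart rate). This is an illustrative toy, not OnTrack's model; the feature names and weights are invented, whereas the real app learns from each user's history.

```python
import math

# Hypothetical sketch of lapse prediction: combine binary features
# (1 = present, 0 = absent) into a logistic probability and warn the
# user when it crosses a threshold. Weights and bias here are
# illustrative assumptions, not a trained model.

WEIGHTS = {"evening": 1.2, "stressed": 0.9, "high_heart_rate": 0.6}
BIAS = -2.0

def lapse_probability(features):
    """Logistic score over binary features."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1 / (1 + math.exp(-z))

def maybe_warn(features, threshold=0.5):
    """Return a warning string if a lapse looks likely, else None."""
    p = lapse_probability(features)
    return f"Warning: {p:.0%} chance of a lapse soon" if p >= threshold else None

print(maybe_warn({"evening": 1, "stressed": 1, "high_heart_rate": 0}))
```

In practice the weights would be fit per user from logged lapses, which is what lets the app anticipate each individual's weak moments.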

Forman hopes OnTrack will be publicly available in the next year or two. Though users had to manually input data in early trials — like telling the program if they felt stressed — Forman said the end goal is to make OnTrack as automated as possible. For example, new versions are incorporating data from sensors, he said.

Forman used OnTrack himself to try breaking his post-dinner habit of snacking on Trader Joe’s tortilla chips. It worked — at least while Forman used the app. He knew it was just a machine acting on data he supplied. Still, it felt like “someone helping me do what I wanted to do,” he said.

He understands if someone would think that could get creepy. But Forman said the goal wasn’t forcing someone to do something against their will. “This was an extension of you helping you do what you want to do,” he said. -MarketWatch

School cheats, meanwhile, may have to contend with a system developed by Danish researchers that can determine with 90% accuracy whether a high school research paper was written by the student who handed it in.

"Ghostwriter" compares a student's current assignment with their past submissions, scrutinizing writing style and word choice for mismatches. 
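The core idea, comparing a new text against an author's prior writing, can be sketched with a basic stylometric similarity measure. This is only in the spirit of Ghostwriter, not its actual method: raw word counts and cosine similarity are simplifying assumptions, and the real system reportedly uses much richer features.

```python
from collections import Counter
import math

# Hypothetical stylometry sketch: build word-frequency profiles of a
# student's past work and a new submission, then measure cosine
# similarity. A low score suggests the new text may not match the
# student's usual style. Raw word counts are an illustrative
# simplification of real stylometric features.

def profile(text):
    """Word-frequency profile of a text."""
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    """Cosine similarity between two frequency profiles (0.0 to 1.0)."""
    dot = sum(a[word] * b[word] for word in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

past = profile("i reckon the experiment kinda shows the result we wanted")
new = profile("the aforementioned empirical paradigm elucidates the result")
print(f"{cosine_similarity(past, new):.2f}")
```

A production system would compare function-word frequencies, sentence lengths, and character n-grams across many past documents before raising any flag.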

"It’s common in higher education to check on student papers. We all know some students are not diligent," according to the Brookings Institution's Darrell West, who runs the institution's Center for Technology Innovation.

Ghostwriter isn't the only AI used to spot school cheats: more than 15,000 K-12 and higher-education institutions in 153 countries currently use software from Oakland, CA-based Turnitin. Ghostwriter's creators, however, say it could be used elsewhere, such as spotting forged documents or aiding police work. Perhaps it could even be used to determine whether a post on Twitter comes from a real person or a bot. 

That said, AI still has its shortcomings: 

Artificial intelligence won’t cure the human weakness to fudge facts and cut corners — and the technology itself isn’t foolproof. “The big challenges are privacy, fairness and transparency,” West said. “No algorithm is perfect,” he said, noting that its conclusions depended deeply on the data it received in the first place. “You have to make sure the conclusion reached by AI actually is true in fact.” One example of that issue: facial recognition algorithms have had trouble recognizing darker skin tones and women’s faces, in part because the algorithms are trained with images of lighter-skinned male faces. -MarketWatch

And just like everything else, we expect people to figure out how to circumvent their AI overlords.