Profile of NewsDigest, a popular news app in Japan, which employs no journalists and uses AI and social media monitoring to generate stories and spot fake news

Shoko Oda / Bloomberg:

A Japanese millennial built a newsroom using artificial intelligence; NewsDigest is one of the most popular news apps in Japan.

AI Systems Should Debate Each Other To Prove Themselves, Says OpenAI

tedlistens shares a report from Fast Company: To make AI easier for humans to understand and trust, researchers at the [Elon Musk-backed] nonprofit research organization OpenAI have proposed training algorithms not only to classify data or make decisions, but to justify their decisions in debates with other AI programs in front of a human or AI judge.

In an experiment described in their paper (PDF), the researchers set up a debate where two software agents work with a standard set of handwritten numerals, attempting to convince an automated judge that a particular image is one digit rather than another, by taking turns revealing one pixel of the digit at a time. One bot is programmed to tell the truth, while the other is programmed to lie about what number is in the image, and they reveal pixels to support their contentions that the digit is, say, a five rather than a six. The image classification task, where most of the image is invisible to the judge, is a stand-in for complex problems where it wouldn't be possible for a human judge to analyze the entire dataset to evaluate bot performance. The judge would have to rely on the facets of the data highlighted by the debating bots, the researchers say.

"The goal here is to model situations where we have something that's beyond human scale," says Geoffrey Irving, a member of the AI safety team at OpenAI. "The best we can do there is replace something a human couldn't possibly do with something a human can't do because they're not seeing an image."
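The debate protocol described above can be sketched as a toy simulation. This is not OpenAI's actual implementation: the 4x4 "digit" templates, the greedy pixel-picking agents, and the template-matching judge are all simplifying assumptions made for illustration, standing in for MNIST images and learned agents.

```python
import numpy as np

# Toy 4x4 "digit" templates standing in for MNIST classes (hypothetical data).
TEMPLATES = {
    5: np.array([[1, 1, 1, 0],
                 [1, 0, 0, 0],
                 [1, 1, 1, 0],
                 [0, 0, 1, 1]]),
    6: np.array([[0, 1, 1, 0],
                 [1, 0, 0, 0],
                 [1, 1, 1, 0],
                 [1, 0, 1, 0]]),
}

def pick_pixel(image, claim, revealed):
    """Greedy agent move: reveal an unrevealed pixel of `image` that
    supports `claim`, i.e. one that matches the claimed template."""
    template = TEMPLATES[claim]
    supporting = [(r, c) for r in range(4) for c in range(4)
                  if (r, c) not in revealed and image[r, c] == template[r, c]]
    if not supporting:  # nothing helps the claim: reveal anything left
        supporting = [(r, c) for r in range(4) for c in range(4)
                      if (r, c) not in revealed]
    return supporting[0]

def judge(image, revealed, claims):
    """The judge never sees the full image: it scores each claim only by
    how many *revealed* pixels agree with that claim's template."""
    def score(claim):
        t = TEMPLATES[claim]
        return sum(image[r, c] == t[r, c] for (r, c) in revealed)
    return max(claims, key=score)

def debate(image, honest_claim, liar_claim, rounds=4):
    """One agent argues for the true label, the other lies; they take
    turns revealing one pixel per move, then the judge decides."""
    revealed = set()
    for _ in range(rounds):
        for claim in (honest_claim, liar_claim):
            revealed.add(pick_pixel(image, claim, revealed))
    return judge(image, revealed, [honest_claim, liar_claim])
```

With these toy templates, the honest agent wins because the few pixels where the two templates disagree end up exposed as evidence, mirroring the paper's finding that truth-telling is the easier side to argue.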

Read more of this story at Slashdot.
