The ability of technology to spread disinformation has been a favorite talking point of the left since the 2016 election, and now it looks as though their liberal poster-boy, Elon Musk, is contributing to the problem instead of helping solve it.
Today in hypocrisy: OpenAI, a company co-founded by Elon Musk, has rolled out a piece of software that can produce realistic-looking fake news articles after being given just a few pieces of information to work with.
One example was recently reported by the technology website Stuff, which detailed a story published last Thursday. The system was given the sample text: "A train carriage containing controlled nuclear materials was stolen in Cincinnati today. Its whereabouts are unknown."
From there, the software was able to write a seven-paragraph news story, complete with quotes from government officials – the only catch being that the story was 100% made up.
We've trained an unsupervised language model that can generate coherent paragraphs and perform rudimentary reading comprehension, machine translation, question answering, and summarization — all without task-specific training: https://t.co/sY30aQM7hU pic.twitter.com/360bGgoea3
— OpenAI (@OpenAI) February 14, 2019
New York University computer scientist Sam Bowman said: "The texts that they are able to generate from prompts are fairly stunning. It's able to do things that are qualitatively much more sophisticated than anything we've seen before."
While OpenAI claims it is "aware of the concerns around fake news," its co-founder Musk has been vehemently outspoken about the quality of news coverage he, and his portfolio of companies, has received over the last few years. Back in May of 2018, Musk was so concerned with truth in news that he famously came up with the idea of creating a site where the public could "rate the core truth" of any article and track the credibility score of its author.
Going to create a site where the public can rate the core truth of any article & track the credibility score over time of each journalist, editor & publication. Thinking of calling it Pravda …
— Elon Musk (@elonmusk) May 23, 2018
OpenAI has decided not to publish or release the most sophisticated versions of its software as a precaution, but it has created a tool that lets people experiment with the algorithm and see what kind of text it can generate. The company says the system's abilities are not yet consistent enough to "pose an immediate threat."
The software is trained on language modeling, which involves predicting the next word or piece of text based on all of the words that came before it – the same way auto-complete works on your phone, in your Gmail account, or in Skype. The software can also be used for translation and question answering. On the positive side, the company says the software can help creative writers generate ideas or dialogue, and it can also be used to check for grammatical errors and hunt for bugs in software code.
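To make "predicting the next word" concrete, here is a minimal toy sketch of that idea. OpenAI's system is a large neural network, and this is not its method – the tiny bigram counter below, with a made-up corpus and a hypothetical `predict_next` function, only illustrates the underlying principle of guessing the next word from the words seen before it.

```python
from collections import Counter, defaultdict

# Toy training text (hypothetical, for illustration only).
corpus = (
    "the train was stolen today . "
    "the train carriage was found today . "
    "the carriage was stolen ."
).split()

# Count, for each word, which words follow it and how often.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "train" follows "the" more often than "carriage"
```

A real language model replaces these raw counts with a neural network that scores candidate next words given the entire preceding context, which is what lets it produce whole coherent paragraphs rather than one plausible word at a time.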
Musk helped kickstart the nonprofit research organization in late 2015 along with Sam Altman. Grants from Musk's foundation that were used to help start the company became a topic of controversy in a recent article we published about Musk's charitable foundation.
