by Gisela Schmalz
Somebody had to do it. Anthony Levandowski heard the call and followed it. In 2015 he founded a church that pays homage to the goddess of artificial intelligence. "Way of the Future" (WOTF) is meant to accompany people when they soon have to cede control over themselves and their planet to artificial intelligence (AI). WOTF wants to make sure that the handover is peaceful and that the new bosses don't destroy the old ones.
The non-profit religious corporation, however, has so far shown hardly any activity and has no publicly professing believers. The most striking events in its church history took place in 2017. Two years after its foundation, the church received around 40,000 US dollars in grants and membership fees. It was granted tax-exempt status, and according to the WOTF bylaws Levandowski enthroned himself as WOTF Dean, irremovable until his death. Coincidentally, all of this happened in the same year in which Levandowski, a robotics engineer born in 1980, was fired from his profane job at Uber.
But it is good for the church that the Internet exists: the most important, because so far only, distinctive feature of "Way of the Future" is its website with its creed. It says that those who believe in WOTF believe in science and in progress, as well as that intelligence is not biological, that the creation of superintelligence is inevitable, and that "everyone can help (and should)" to create it.
Official WOTF documents state that the Church's focus is on "the realization, acceptance, and worship of a Godhead based on artificial intelligence (AI) developed through computer hardware and software." "We're in the process of raising a god," the Dean said in a 2017 interview with Wired. The holy baby is fed large datasets to learn to improve itself through simulations, and everything the church breeds is to be open source. "I wanted a way for everybody to participate in this, to be able to shape it. If you're not a software engineer, you can still help," says Levandowski.
The bylaws disclose something different. According to them, WOTF is not drafted as a universal religion. One learns that the Church wants to promote the improvement of the environmental perception of self-learning robots. Dean Levandowski is one of the world's leading experts in this field. Prior to his startup WOTF, Levandowski founded several companies dealing with autonomous driving. He sold two of them, 510 Systems and Anthony's Robots, to Google, and Otto, founded in 2016, to Uber, where he also worked for a while. In 2018 he founded Pronto AI, which also deals with driverless vehicles based on AI. So the fine print of the WOTF statutes shows that the Church is not interested in just any member, but rather in AI researchers. The Church seems to be a recruiting instrument rather than an invitation to pray. Without engineering talent, the church cannot fulfill its creed of raising a divine AI baby.
Levandowski's statements about the salvific future technology AI ("the gospel") are tainted by dystopian particles. "Change is good, even if a bit scary sometimes," and "it may be important for machines to see who is friendly to their cause and who is not," it says on the WOTF website. Dean Levandowski told Wired journalist Mark Harris: "I don't think it's going to be friendly when the tables are turned."
No church without angst, the shrewd Dean might have thought. In the Wired interview he mimics the reassuring good shepherd: WOTF will "decrease fear of the unknown," Levandowski said.
Elon Musk might as well have started a church. Instead, the CEO of Tesla and SpaceX opened a non-profit institute to tackle his fears of AI. In 2015, the same year Anthony Levandowski founded WOTF, Musk co-founded OpenAI, not only to develop safe AI, but a safer AI than the one he is afraid of. Or should we say: the one he makes others afraid of?
The year before he founded OpenAI, Musk found drastic words to warn about AI: "AI is far more dangerous than nukes," and "with artificial intelligence we're summoning the demon." Now an institute. But OpenAI is no more open to everyone than WOTF is a people's church. Nor is OpenAI purely non-profit anymore, as it was in its early days; meanwhile the institute operates a for-profit arm as well. Elon Musk has recruited AI specialists. He poached the machine learning expert Ilya Sutskever from Google and hired other renowned AI researchers, since Musk wants to be at the forefront of the AI race.
Techies love the devil (or God – depending on perspective). The biggest tech tycoons in the US and China are currently flirting like crazy with AI.
Microsoft, IBM, Amazon, Facebook, Apple, Google, Baidu, Tencent and Alibaba are competing to be the first to develop artificial general intelligence (AGI): AI that carries out thought and action processes similar to those of humans, but far more intelligently. Such an AGI could answer the most complicated scientific, economic, political and social questions ever more precisely, in fractions of a second.
But why do Levandowski and Musk, engineers from Silicon Valley, where rationality is fetishized, bring irrational concepts such as faith and fear into play? Are their fears appropriate? And why do others join the chorus of the anxious? Apple co-founder Steve Wozniak, Microsoft co-founder Bill Gates, the inventor of the World Wide Web, Tim Berners-Lee, and, shortly before his death, the physicist Stephen Hawking also warn(ed) against the extinction of humanity by AI. But there is an opposite camp. Tech thinkers such as Kevin Kelly and Jaron Lanier dismiss the idea of an artificial general intelligence arriving in the coming decades. Top AI researchers like Jeff Hawkins, Geoffrey Hinton (Google Brain) and Demis Hassabis (DeepMind) do not believe that humans will soon be replaced by machines.
In 2016, Amazon, Facebook, Microsoft, IBM, Google and DeepMind founded the "Partnership on Artificial Intelligence to Benefit People and Society," or Partnership on AI, to appease the anxious and the ethics freaks within the AI scene and, above all, outside it. Apple, OpenAI and many other international institutions joined the club. The partnership wants to enlighten the public and to promote a society-friendly AI. It conducts research and initiates discussions about AI. Nevertheless, the partnership remains toothless. Its efforts and position papers do not oblige AI companies to do anything. Rather, Partnership on AI allows tech companies to outsource ethical and social questions.
Google is also a member of this AI partnership. It is noticeable that two of the most prominent developers who avoid spreading fear, Hinton and Hassabis, come from the Google camp. Their boss, Alphabet CEO Larry Page, who oversees the AI projects Google Brain and DeepMind, is also remarkably carefree about AI. That is why Elon Musk approached him at a party in Napa Valley in 2015, as MIT professor Max Tegmark later reported. When Musk mentioned that digitalization could destroy everything important and valuable to mankind, Page dismissed this as AI paranoia. He labelled Musk's fears as "speciesist": morally discriminating against something simply because it belongs to another species (in this case, silicon). Google did open an ethics council in the spring of 2019, only to close it a week later. Alphabet's CEO does business as usual. He remains silent while others argue, lightheartedly or dead seriously, about AI.
Elon Musk keeps on warning. He sounds like Dean Levandowski when he calls for all people to be involved in the development of AI: "… the best defense against the misuse of AI is to empower as many people as possible to have AI. If everyone has AI powers, then there's not any one person or a small set of individuals who can have AI superpower," he said in 2015. In a 2018 interview with Vox, Musk recommended a professional government committee to consult with the tech industry on how to guarantee "a safe advent of AI." Wisely, Musk did not demand regulation of the AI sector. Nobody in the Valley would ever call for the regulation of AI research. Research is sacrosanct.
It is not quite clear what Elon Musk is really afraid of. Sometimes he sounds as if he were less worried about the impact of AI than about his competitors in the field, whether from China or from his own valley, like Larry Page. Apparently Musk wants to prevent others from gaining too much control over a technology that allows its owners to play God or devil. It could be that Musk and Levandowski stir up fear in order to pose as the good wise men from the land of the future. Perhaps they want to present their own companies as lighthouses in the fog of uncertainty, as the only trustworthy sources of potentially dangerous technologies. But perhaps they are genuinely afraid of what AI might become or do, and therefore set up a church and a non-profit institute.
Whatever their motives, it is important that representatives of the tech sector are calling for an AI that is developed for people, not against them. But God and devil? Anyone who is serious about an AI for the people should not hide behind overblown metaphors. They should let the devil roam his hell and let God slumber in his heavenly four-poster bed.
If you tear away the mystifying veil of tech prophets like Levandowski and Musk, you will see what AI experts see: a broadly applicable, monetizable technology that is neither good nor evil. The clouds of fear blown around the idea of AI are just as irritating as the trivialization of its use. But it gets truly disturbing when developers pretend that their own technology could sooner or later slip out of their hands. It is irresponsible to hide behind self-developed monstrosities as if they were higher, potentially dangerous superpowers.
AI researchers and investors must accept responsibility for the vast projects they are bringing forth. Instead of first telling horror stories and then spreading soothing sermons of salvation, they should explain what is happening in their laboratories. Silence à la Larry Page is not an option either. People need no fearmongering, no religion, no promises of salvation and certainly no secrecy. They need sober, continuously updated information about progress in AI research and about new AI-based products. People should be asked what technologies they want. And they should get them.
© Gisela Schmalz
Recommended Citation: Schmalz, Gisela: “FEAR NOT! – Why AI Church?” (2019). https://www.giselaschmalz.com/fear-not-why-ai-church/
German Version: Schmalz, Gisela: “FÜRCHTET EUCH NICHT! – Wozu eine KI-rche?” (2019). Gisela Schmalz. https://www.giselaschmalz.com/4211-2/
Published c/o CARTA.info: Gisela Schmalz: “Der Hype um und die Angst vor Künstlicher Intelligenz sind übertrieben“ (2019)