When artificial intelligence investor Yuri Dasin attended a birthday party in San Francisco last Friday, he saw something striking: a separate room had been set aside for guests who did not want to talk about the resignation of OpenAI CEO Sam Altman. “A non-AI room, for those who want to talk about something else for a change,” says Dasin, who lives in San Francisco. “In WhatsApp groups, nobody is talking about anything else.”
The departure of Altman, the face of artificial intelligence (AI) since the launch of the chatbot ChatGPT a year ago, was one of the most discussed tech stories of the past year. Altman was fired last Friday, seemed set to return on Saturday, left for Microsoft on Sunday and returned to OpenAI on Wednesday. The board that fired him resigned.
Last week, an internal power struggle at the world’s most famous artificial intelligence company played out in public. A battle that ultimately boiled down to a classic dilemma: speed versus safety. Sam Altman saw the competition closing in and wanted to accelerate. The board that was supposed to keep him in check feared the consequences and pulled the handbrake.
OpenAI was founded in 2015 as a non-profit with a mission to develop artificial intelligence for the benefit of humanity. The company was given a powerful six-member board of directors with no financial interests in it, tasked with guarding this mission.
But after all the turmoil, little of this design remains. The world’s most important AI company now has the characteristics of an ordinary startup, where growing as fast as possible is the top priority. And that with a technology that has the potential to radically change our world and carries enormous societal risks.
What went wrong at OpenAI? And how dangerous is it that, even with highly sensitive technology like AI, companies are essentially supervising themselves?
Decades of ups and downs
It is easy to forget, but the field of AI has been around since the 1950s: a branch of science that has gone through decades of ups and downs. Scientific breakthroughs (the self-driving car, the chess computer) alternate with “AI winters,” periods in which technological progress hits its limits.
Since the introduction of large language models, computer systems trained on vast amounts of internet text to imitate human language, AI has entered a new hype cycle. The difference is that the most advanced technology and infrastructure are now in the hands not of universities but of large corporations. Since ChatGPT, the first AI chatbot for a mass audience, attracted 100 million weekly users, Chinese and American technology companies have invested billions in the technology. As a result, development is moving extremely fast.
Experts believe that an AI language model could soon be used for far more than just chatting with a bot. The ideal: the all-knowing digital assistant, one that thinks along with you and helps with work, banking, online shopping, sending emails and managing your schedule. “ChatGPT has given us a glimpse into a world where we can tap into unlimited amounts of intelligence,” says Stuart Russell, an artificial intelligence researcher at the University of California, Berkeley.
The current wave of development in artificial intelligence is, according to Microsoft founder Bill Gates, the biggest technological revolution in decades, no less important than the microprocessor, the personal computer and the internet. “Entire industries will reorient around it,” Gates wrote on his blog in March. “Businesses will distinguish themselves by how well they use it.”
OpenAI leads the field with its AI technology, closely followed by Google and the AI company Anthropic, also based in San Francisco. That lead is “certainly” valuable if you consider what OpenAI could build in the future if its systems become exponentially better, says Dasin, the AI investor. “Automate human intelligence and decision-making, and you have unprecedented economic value.”
The power of Microsoft
For as long as artificial intelligence has existed, people have warned about its consequences. The famous British computer scientist Alan Turing cautioned as early as 1951 that computers would “eventually take over.” That fear has never entirely disappeared; on the contrary.
The main fear is not so much that computers with a certain degree of intelligence will want to harm humanity or develop a “consciousness” of their own. The growing fear is that people will misuse the technology, and that humanity will lose its grip on its digital infrastructure.
OpenAI emerged from this fear in 2015. A group of prominent Silicon Valley figures hoped to be able to calmly build safe systems by developing artificial intelligence in a non-profit environment. The agreement at its founding was that OpenAI would be funded by donations from private backers. Investors’ potential future profits were capped, and everything above that cap would flow back to the non-profit. Founder Sam Altman received the title of CEO and a seat on the board, but no shares.
The brilliant Russian-born computer scientist Ilya Sutskever, who led OpenAI’s research division, also took a seat on the board. Sutskever is known as a deeply worried AI doomsayer: he fears a future world full of data centers and solar panels in which uncontrolled AI systems make all the decisions on humanity’s behalf.
The internal problems began after Altman first opened the door to a major investor. Microsoft invested $1 billion in OpenAI in 2019, an amount that later rose to $13 billion. According to Altman, outside money was needed to keep growing and to pay for the required server capacity and the expensive specialized computer chips that power AI technology.
After OpenAI launched ChatGPT, the pressure to grow and stay ahead of the competition increased. Microsoft appeared to have made a golden move with its investment in OpenAI: it raced past Google, which had dominated artificial intelligence research in previous decades. The technology behind ChatGPT is, in fact, largely based on inventions made at Google.
Exactly what happened between the OpenAI board and Sam Altman remains unknown. What is clear is that tensions had risen sharply in recent months. Sutskever, the AI expert, was said to be especially concerned about the pace of innovation, as US media reported last week.
A few months ago, according to the news agency Reuters and the technology website The Information, OpenAI achieved a technological breakthrough: an invention that would allow language models to also solve complex mathematical problems, something that had not been possible until now. Partly because of Altman’s desire to push the pace of product development even higher, the OpenAI board reportedly decided once and for all to fire him.
But the board underestimated Sam Altman’s popularity among his more than seven hundred employees, who backed their CEO almost without exception, as well as the influence of Microsoft, which despite the non-profit structure had gained a firm grip on OpenAI. In exchange for its billions, Microsoft received a 49 percent stake and, more importantly, the intellectual property rights and source code of OpenAI’s systems, as emerged this week.
When Altman threatened to move to Microsoft, and his entire staff seemed ready to follow, the board had no choice but to reinstate him. The departing board members did stipulate that one of them could remain on the board, on which Microsoft will also get a seat. The non-profit structure stays in place for now, but the question is for how long.
It is up to governments
Now that Altman has won the battle, the question is: what happens next? Who will keep OpenAI in check now that the most important board members have left?
According to AI researcher Stuart Russell, the crisis at OpenAI shows that “even when an organization sets out to serve the interests of humanity, financial considerations end up trumping those interests. Only governments can protect the interests of humanity.”
Marietje Schaake, a researcher at Stanford University and an expert on artificial intelligence policy, also sees the OpenAI crisis as confirmation that these technology companies are incapable of regulating themselves. “The power of the financier has been strengthened, and the authority of the board that was supposed to guard the public interest has been broken,” she says.
Europe, China and the United States are currently working on legislation to bind AI companies to rules. Governments fear AI systems that generate convincing fake news, carry out cyberattacks, or serve as tools for terrorists or fraudsters. The most important European law on artificial intelligence, the AI Act, is due to be finalized on 6 December, although it is questionable whether the EU will meet that date. According to Schaake, “there is hardly a policy table anywhere where this topic is not currently on the agenda.”
According to Schaake, there is no time to waste. “We are seeing all the AI knowledge, all the data, the talent and the computing power end up in the hands of companies like OpenAI,” she says. “Governments cannot make good policy if they do not know exactly how the technology these companies use works. It is time they gained insight into what happens behind closed doors.”