ChatGPT seems scary for two main reasons: its near-term and its long-term effects. Near-term problems include data bias leading to AI bias, misuse, and the sheer scale of the changes it could bring. As far as I have seen, these problems have taken most of the focus in "mainstream" or "general audience" media. The thing is, these problems are relatively easy to solve (relative to the long-term effects of developing AGI). (Don't get me wrong, these are important problems to address.) And they have been covered a huge amount, so I probably can't add much. So what about the long-term effects?
My picture of the long-term problems of AI like GPT-4 mostly revolves around superintelligent AI: alignment problems and the like. This too has been covered at length (Superintelligence, the book!). But what has been covered less is how ChatGPT may have accelerated the march toward these long-term problems. So I can cover that!
Most people know of OpenAI, but fewer know of DeepMind. I think that simple statement encapsulates their difference. OpenAI is bullish on developing AI and handing it to the public to beta test, which has advantages and disadvantages. DeepMind, on the other hand, is developing AI slowly, with safety as a central concern. I would personally trust Demis Hassabis and DeepMind more to take AI alignment seriously; they are more likely to halt AI progress in order to achieve alignment first. But as a result of this more cautious philosophy, DeepMind does not command the same public attention. OpenAI, by contrast, takes a very open (wow!) approach to its AI developments: its failures and successes are tested out and (hopefully) fixed in the public forum. This has benefits and drawbacks, and in this case I think the drawbacks may be worse. All because of one thing.
Capitalism! Bing-GPT, the result of OpenAI's collaboration with Microsoft, shows the view big tech is taking of generative AI. They see it as a new market, something disruptive to their business. Google is genuinely worried about its main profit source, search, being overthrown by Bing. Bing! The engine whose most searched word is Google! We have evidence for this fear in Google's rushed launch of Bard. Big tech companies are racing to develop, deploy, and profit from AI. This could itself be a good thing: more technology, more capability. But with the push to increase profits, and our society's overwhelming short-term bias, there comes a downside. (This is the problem with capitalism as it stands: the incentive to favor short-term profits over long-term ones, even when the long-term profits are greater and safer.) In AI's case, the downside is AI safety. Once again, we have direct evidence: Microsoft recently laid off its entire AI safety team. Not good! Given Microsoft's close connection to OpenAI, and the former's incentive to maximize short-term gain over AI safety, this could be a problem. Some OpenAI researchers have even reportedly said that Microsoft pushed GPT-4 to be released before they could fully test it.
OpenAI's open policy toward AI is fueling this arms race. If we continue this way, AI safety and alignment could be abandoned in favor of immediate profits. DeepMind is somewhat more immune to this than OpenAI. But it also means that even if DeepMind is "ahead" of OpenAI, the effects of OpenAI's work will be greater than DeepMind's today. Many people have called GPT-4 an early AGI. That claim, combined with Microsoft's behavior, should be cause for concern. Let us see where the race takes us.