Big tech companies including Google and Microsoft are doubling down on integrating artificial intelligence into core products like search. Yet time and again, industry veterans and AI experts have warned about what humans can expect in the future if generative AI models, such as large language models (LLMs), are not regulated.
Now, as reported by Engadget, former Google CEO Eric Schmidt has joined the growing list of experts warning humanity about the potential dangers AI may bring. Speaking at the Wall Street Journal’s CEO Council Summit, Schmidt said that AI poses an “existential risk” that could get people “killed.”
He warned that the technology should not fall into the hands of “evil people,” who could use it to find and exploit security flaws in an increasingly digitalised world.
Schmidt, like others including Geoffrey Hinton, said that in its current state AI doesn’t pose much of a risk, but the same can’t be said about what the future holds.
Recently, Geoffrey Hinton, known as the ‘Godfather of AI,’ left Google citing concerns about the technology’s dangers. He said that despite the benefits AI brings, such as increased efficiency and productivity, the prospect of AI outpacing humans and becoming too smart is a legitimate concern. He added that future AI models might soon generate and run their own code, which could make truly autonomous weapons and killer robots a reality.
Concerned by these developments, a number of industry leaders and experts, including Elon Musk, cognitive scientist Gary Marcus, and Apple co-founder Steve Wozniak, signed an open letter calling for a six-month pause in the development of advanced AI models.