OpenAI CEO Sam Altman is a ‘bit scared’ of AI, ChatGPT


The 37-year-old at the heart of the artificial intelligence boom is a little scared, and he wants us to know it.

OpenAI CEO and ChatGPT creator Sam Altman granted a lengthy one-on-one interview to ABC News, which aired last week on its evening news program, in which he discussed the future of his company's AI tech in lofty terms ("this will be the greatest technology humanity has yet developed") while emphasizing society's need to adapt or otherwise ready itself for terrible outcomes.

As he praised his company's product-by-product rollout of chatbot platforms, he said he hopes his firm can make mistakes while the stakes are low, adding that "people should be happy that we're a little bit scared of this."

“You’re a little bit scared?” ABC News’ Rebecca Jarvis prodded. “You personally?”

“A little bit, yeah, of course,” Altman replied, hitting a key point in his apparent campaign to win the trust of the American public. “I think if I said I were not, you should either not trust me or be very unhappy I’m in this job.”

Altman has maintained a balancing act since November, when the ChatGPT platform became a tech supernova and launched the AI boom now sweeping the tech industry. He promotes this fast-moving tech even as his company develops ever stronger models, and he has become the industry's de facto spokesperson.

The CEO envisions artificial intelligence as an "amplifier of human will," he said in the interview, citing its potential for rapid medical advice, creative tools and a "co-pilot" fine-tuned to help every professional do their job better or more easily.

Altman describes AI as transformative for labor, but now in the spotlight, he no longer publicly talks about his 2021 push for universal basic income. He needs feedback on his chatbots to make them work better, but OpenAI isn’t open-sourcing its model or code because of safety fears and competition from other firms. He implores the government to get involved (and even floated the idea of an international coalition for AI governance), but when Jarvis asked for a single suggestion for what regulators could do now, he said the main thing is just getting “up to speed.”

He lauds the technology’s huge educational potential while noting that a glaring issue for chatbots is their tendency to confidently state untrue things. And as he aims to orient our vision of AI away from sci-fi apocalypses, he admits that he worries “a lot” about authoritarian governments developing the powerful tech.

He also worries about large-scale disinformation campaigns and offensive cyberattacks, he said. He’s clearly thoughtful and direct about the limitations of AI and his firm’s own chatbot technologies. But he doesn’t often take ownership of the problems this tech may cause, instead invoking forms of the phrase “society needs time to adapt.”

“We will need to figure out ways to slow down this technology over time,” said the chief executive of the world’s foremost AI research and engineering firm.
