How the collapse of Sam Bankman-Fried’s crypto empire has disrupted AI
SAN FRANCISCO – In April, a San Francisco artificial intelligence lab called Anthropic raised US$580 million (S$781 million) for research involving “AI safety”.

Few in Silicon Valley had heard of the one-year-old lab, which is building AI systems that generate language. But the amount of money promised to the tiny company dwarfed what venture capitalists were investing in other AI start-ups, including those stocked with some of the most experienced researchers in the field.

The funding round was led by Mr Sam Bankman-Fried, the founder of FTX, the cryptocurrency exchange that filed for bankruptcy in November. After FTX’s sudden collapse, a leaked balance sheet showed that Mr Bankman-Fried and his colleagues had fed at least US$500 million into Anthropic.

Their investment was part of a quiet and quixotic effort to explore and mitigate the dangers of AI, which many in Mr Bankman-Fried’s circle believed could eventually damage humanity or even destroy the world.

Over the past two years, the 30-year-old entrepreneur and his FTX colleagues funnelled more than US$530 million – through either grants or investments – into more than 70 AI-related companies, academic labs, think-tanks, independent projects and individual researchers to address concerns over the technology, according to a tally by The New York Times.

Now some of these organisations and individuals are unsure whether they can continue to spend that money, said four sources close to the AI efforts who were not authorised to speak publicly.

They said they were worried that Mr Bankman-Fried’s fall could cast doubt over their research and undermine their reputations.

And some of the AI start-ups and organisations may eventually find themselves embroiled in FTX’s bankruptcy proceedings, with their grants potentially clawed back in court, they said.

The concerns in the AI world are an unexpected fallout from FTX’s disintegration, showing how far the ripple effects of the crypto exchange’s collapse and Mr Bankman-Fried’s vaporising fortune have travelled.

“Some might be surprised by the connection between these two emerging fields of technology,” Mr Andrew Burt, a lawyer and visiting fellow at Yale Law School who specialises in the risks of artificial intelligence, said of AI and crypto. “But under the surface, there are direct links between the two.”

Mr Bankman-Fried, who faces investigations into FTX’s collapse and who spoke at the Times’ DealBook conference last Wednesday, declined to comment.

Anthropic declined to comment on his investment in the company.

Mr Bankman-Fried’s attempts to influence AI stem from his involvement in “effective altruism”, a philanthropic movement in which donors seek to maximise the impact of their giving for the long term. Effective altruists are often concerned with what they call catastrophic risks, such as pandemics, bioweapons and nuclear war.

Their interest in AI is particularly acute.

Many effective altruists believe that increasingly powerful AI can do good for the world, but worry that it can cause serious harm if it is not built in a safe way. While AI experts agree that any doomsday scenario is a long way off – if it happens at all – effective altruists have long argued that such a future is not beyond the realm of possibility and that researchers, companies and governments should prepare for it.

Over the last decade, many effective altruists have worked inside top AI research labs, including DeepMind, which is owned by Google’s parent company, and OpenAI, which was founded by Tesla chief executive Elon Musk and others.

They helped create a research field called AI safety, which aims to explore how AI systems might be used to do harm or might unexpectedly malfunction on their own.

Effective altruists have helped drive similar research at Washington think-tanks that shape policy. Georgetown University’s Centre for Security and Emerging Technology, which studies the impact of AI and other emerging technologies on national security, was largely funded by Open Philanthropy, an effective altruist giving organisation backed by a Facebook co-founder, Mr Dustin Moskovitz. Effective altruists also work as researchers inside these think-tanks.

Mr Bankman-Fried has been a part of the effective altruist movement since 2014. Embracing an approach called earning to give, he told the Times in April that he had deliberately chosen a lucrative career so he could give away much larger amounts of money.

In February, he and several of his FTX colleagues announced the Future Fund, which would support “ambitious projects in order to improve humanity’s long-term prospects”. The fund was led partly by Associate Professor Will MacAskill, a founder of the Centre for Effective Altruism, as well as other key figures in the movement.
