Elon Musk and experts call for immediate pause in AI development

Elon Musk and a group of artificial intelligence experts are calling for a pause in the training of powerful AI systems due to the potential risks to society and humanity.

The letter, published by the nonprofit Future of Life Institute and signed by more than 1,000 people, warned of potential risks to society and civilization from human-competitive AI systems in the form of economic and political disruption.

“AI systems with human-competitive intelligence can pose serious risks to society and humanity,” the letter warns.

“Powerful AI systems should only be developed once we are confident that their effects will be positive and their risks manageable.”

The letter called for a six-month halt to the “dangerous race” to develop systems more powerful than OpenAI’s newly launched GPT-4.

If such a pause cannot be enacted quickly, the letter says governments should step in and institute a moratorium.

“AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent external experts,” the letter says.

“These protocols should ensure that systems adhering to them are secure beyond reasonable doubt.”

The letter was also signed by Apple co-founder Steve Wozniak, Yoshua Bengio, often referred to as one of the “godfathers of AI”, and Stuart Russell, a research pioneer in the field, as well as researchers at Alphabet-owned DeepMind.

The Future of Life Institute is primarily funded by the Musk Foundation, the London-based effective altruism group Founders Pledge and the Silicon Valley Community Foundation, according to the European Union’s Transparency Register.

Musk has previously expressed concerns about AI. His carmaker, Tesla, uses AI in its Autopilot system.

Since its release last year, OpenAI’s ChatGPT, backed by Microsoft, has prompted rivals to accelerate the development of similar large language models and encouraged companies to integrate generative AI models into their products.

UK unveils proposals for ‘lightweight’ regulations around AI

The letter comes as the UK government unveiled proposals for a “lightweight” regulatory framework around AI.

The government’s approach, outlined in a guidance document, would divide responsibility for AI governance among its human rights, health and safety, and competition regulators, rather than creating a new organization dedicated to the technology.

Meanwhile, earlier this week, Europol joined a chorus of ethical and legal concerns about advanced AI like ChatGPT, warning of the system’s potential misuse in phishing attempts, disinformation and cybercrime.
