

OpenAI’s Altman and other AI giants back warning of advanced AI as ‘extinction’ risk

Make way for yet another headline-grabbing AI policy intervention: Hundreds of AI scientists, academics, tech CEOs and public figures — from OpenAI CEO Sam Altman and DeepMind CEO Demis Hassabis, to veteran AI computer scientist Geoffrey Hinton, MIT’s Max Tegmark and Skype co-founder Jaan Tallinn, to the musician Grimes and podcaster Sam Harris, to name a few — have added their names to a statement urging global attention on existential AI risk.

The statement, which is being hosted on the website of a San Francisco-based, privately-funded not-for-profit called the Center for AI Safety (CAIS), seeks to equate AI risk with the existential harms posed by nuclear apocalypse and calls for policymakers to focus their attention on mitigating what they claim is ‘doomsday’ extinction-level AI risk.

Here’s their (intentionally brief) statement in full:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

Per a short explainer on CAIS’ website, the statement has been kept “succinct” because those behind it are concerned their message about “some of advanced AI’s most severe risks” could be drowned out by discussion of other “important and urgent risks from AI” — which, they nonetheless imply, are getting in the way of discussion about extinction-level AI risk.

However, we have actually heard these self-same concerns voiced loudly, and multiple times, in recent months — as AI hype has surged off the back of expanded access to generative AI tools like OpenAI’s ChatGPT and DALL-E, leading to a surfeit of headline-grabbing discussion about the risk of “superintelligent” killer AIs. (Such as this one, from earlier this month, where statement-signatory Hinton warned of the “existential threat” of AI taking control. Or this one, from just last week, where Altman called for regulation to prevent AI destroying humanity.)

There was also the open letter signed by Elon Musk (and scores of others) back in March which called for a six-month pause on development of AI models more powerful than OpenAI’s GPT-4 to allow time for shared safety protocols to be devised and applied to advanced AI — warning over risks posed by “ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control”.

So, in recent months, there has actually been a barrage of heavily publicized warnings over AI risks that don’t exist yet.

This drumbeat of hysterical headlines has arguably distracted attention from deeper scrutiny of existing harms. Such as the tools’ free use of copyrighted data to train AI systems without permission or consent (or payment); the systematic scraping of online personal data in violation of people’s privacy; or the lack of transparency from AI giants vis-à-vis the data used to train these tools. Or, indeed, baked-in flaws like disinformation (“hallucination”) and risks like bias (automated discrimination). Not to mention AI-driven spam!

It’s certainly notable that after a meeting last week between the UK prime minister and a number of major AI execs, including Altman and Hassabis, the government appears to be shifting tack on AI regulation — with a sudden keen interest in existential risk, per the Guardian’s reporting.

Talk of existential AI risk also distracts attention from problems related to market structure and dominance, as Jenna Burrell, director of research at Data & Society, pointed out in this recent Columbia Journalism Review article reviewing media coverage of ChatGPT — where she argued we need to move away from focusing on red herrings like AI’s potential “sentience” to covering how AI is further concentrating wealth and power.

So of course there are clear commercial motives for AI giants to want to route regulatory attention into the far-flung theoretical future, with talk of an AI-driven doomsday — a tactic to draw lawmakers’ minds away from more fundamental competition and antitrust considerations in the here and now. And data exploitation as a tool to concentrate market power is nothing new.

Certainly it speaks volumes about existing AI power structures that tech execs at AI giants including OpenAI, DeepMind, Stability AI and Anthropic are so happy to band together and publicly amplify talk of existential AI risk, yet so much more reticent to get together and discuss the harms their tools can be seen causing right now.

OpenAI was a notable non-signatory to the aforementioned (Musk-signed) open letter, but a number of its employees are backing the CAIS-hosted statement (while Musk apparently is not). So the latest statement appears to offer an (unofficial) commercially self-serving reply by OpenAI (et al.) to Musk’s earlier attempt to hijack the existential AI risk narrative in his own interests (which no longer favor OpenAI leading the AI charge).

Instead of the statement calling for a development pause, which would risk freezing OpenAI’s lead in the generative AI field, it lobbies policymakers to focus on risk mitigation — doing so while OpenAI is simultaneously crowdfunding efforts to shape “democratic processes for steering AI”, as Altman put it. So the company is actively positioning itself (and applying its investors’ wealth) to influence the shape of any future mitigation guardrails, alongside ongoing in-person lobbying efforts targeting international regulators.


Elsewhere, some signatories of the earlier letter have simply been happy to double up on another publicity opportunity — inking their names to both (hi Tristan Harris!).

But who is CAIS? There’s limited public information about the organization hosting this message. However it is certainly involved in lobbying policymakers, by its own admission. Its website says its mission is “to reduce societal-scale risks from AI” and claims it’s dedicated to encouraging research and field-building to this end, including funding research — as well as having a stated policy advocacy role.

An FAQ on the website offers limited information about who is financially backing it (saying it’s funded by private donations). And, in answer to an FAQ question asking “is CAIS an independent organization?”, it offers a brief claim to be “serving the public interest”:

CAIS is a nonprofit organization entirely supported by private contributions. Our policies and research directions are not determined by individual donors, ensuring that our focus remains on serving the public interest.

We’ve reached out to CAIS with questions.

In a Twitter thread accompanying the launch of the statement, CAIS’ director, Dan Hendrycks, expands on the aforementioned statement explainer — naming “systemic bias, misinformation, malicious use, cyberattacks, and weaponization” as examples of “important and urgent risks from AI… not just the risk of extinction”.

“These are all important risks that need to be addressed,” he also suggests, downplaying concerns policymakers have limited bandwidth to address AI harms by arguing: “Societies can manage multiple risks at once; it’s not ‘either/or’ but ‘yes/and.’ From a risk management perspective, just as it would be reckless to exclusively prioritize present harms, it would also be reckless to ignore them as well.”

The thread also credits David Krueger, an assistant professor of Computer Science at the University of Cambridge, with coming up with the idea to have a single-sentence statement about AI risk and “jointly” helping with its development.

OpenAI’s Altman and other AI giants back warning of advanced AI as ‘extinction’ risk by Natasha Lomas originally published on TechCrunch

