
Deck.blue brings a TweetDeck experience to Bluesky users

With over 3 million users and plans to open up more broadly in the months ahead, Bluesky is still establishing itself as an alternative to Twitter/X. However, that hasn’t stopped the developer community from embracing the project and building tools to meet the needs of those fleeing the now Elon Musk-owned social network, formerly known […]

Ethicists fire back at ‘AI Pause’ letter they say ‘ignores the actual harms’

A group of well-known AI ethicists has written a counterpoint to this week’s controversial letter asking for a six-month “pause” on AI development, criticizing it for focusing on hypothetical future threats when real harms are attributable to misuse of the tech today.

Thousands of people, including such familiar names as Steve Wozniak and Elon Musk, signed the open letter from the Future of Life Institute earlier this week, proposing that development of AI models like GPT-4 be put on hold to avoid “loss of control of our civilization,” among other threats.

Timnit Gebru, Emily M. Bender, Angelina McMillan-Major and Margaret Mitchell are all major figures in the domains of AI and ethics, known (in addition to their work) for co-authoring the paper criticizing the capabilities of AI that led Google to push out Gebru and Mitchell. The four are now collaborating through the DAIR Institute, a new research outfit founded by Gebru and aimed at studying, exposing and preventing AI-associated harms.

But they were not to be found on the list of signatories, and they have now published a rebuke calling out the letter’s failure to engage with the existing problems caused by the tech.

“Those hypothetical risks are the focus of a dangerous ideology called longtermism that ignores the actual harms resulting from the deployment of AI systems today,” they wrote, citing worker exploitation, data theft, synthetic media that props up existing power structures and the further concentration of those power structures in fewer hands.

The choice to worry about a Terminator- or Matrix-esque robot apocalypse is a red herring when we have, in the same moment, reports of companies like Clearview AI being used by the police to essentially frame an innocent man. No need for a T-1000 when you’ve got Ring cams on every front door accessible via online rubber-stamp warrant factories.

While the DAIR crew agree with some of the letter’s aims, like identifying synthetic media, they emphasize that action must be taken now, on today’s problems, with remedies we have available to us:

What we need is regulation that enforces transparency. Not only should it always be clear when we are encountering synthetic media, but organizations building these systems should also be required to document and disclose the training data and model architectures. The onus of creating tools that are safe to use should be on the companies that build and deploy generative systems, which means that builders of these systems should be made accountable for the outputs produced by their products.

The current race towards ever larger “AI experiments” is not a preordained path where our only choice is how fast to run, but rather a set of decisions driven by the profit motive. The actions and choices of corporations must be shaped by regulation which protects the rights and interests of people.

It is indeed time to act: but the focus of our concern should not be imaginary “powerful digital minds.” Instead, we should focus on the very real and very present exploitative practices of the companies claiming to build them, who are rapidly centralizing power and increasing social inequities.

Incidentally, this letter echoes a sentiment I heard from Uncharted Power founder Jessica Matthews at yesterday’s AfroTech event in Seattle: “You should not be afraid of AI. You should be afraid of the people building it.” (Her solution: become the people building it.)

While it is vanishingly unlikely that any major company would agree to pause its research efforts in accordance with the open letter, the engagement the letter received makes clear that the risks of AI, real and hypothetical, are of great concern across many segments of society. But if the companies won’t pause themselves, perhaps someone will have to do it for them.

Ethicists fire back at ‘AI Pause’ letter they say ‘ignores the actual harms’ by Devin Coldewey originally published on TechCrunch



