

There’s now an open source alternative to ChatGPT, but good luck running it

The first open-source equivalent of OpenAI’s ChatGPT has arrived, but good luck running it on your laptop — or at all.

This week, Philip Wang, the developer responsible for reverse-engineering closed-source AI systems including Meta’s Make-A-Video, released PaLM + RLHF, a text-generating model that behaves similarly to ChatGPT. The system combines PaLM, a large language model from Google, with a technique called Reinforcement Learning from Human Feedback (RLHF for short) to create a model that can accomplish pretty much any task that ChatGPT can, including drafting emails and suggesting computer code.

But PaLM + RLHF isn’t pretrained. That is to say, the system hasn’t been trained on the example data from the web necessary for it to actually work. Downloading PaLM + RLHF won’t magically install a ChatGPT-like experience — that would require compiling gigabytes of text from which the model can learn and finding hardware beefy enough to handle the training workload.

Like ChatGPT, PaLM + RLHF is essentially a statistical tool to predict words. When fed an enormous number of examples from training data — e.g. posts from Reddit, news articles and ebooks — PaLM + RLHF learns how likely words are to occur based on patterns like the semantic context of surrounding text.
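The "statistical tool to predict words" idea can be sketched at toy scale. The snippet below is purely illustrative (not PaLM itself): it counts word pairs in a tiny corpus and turns the counts into next-word probabilities, the same basic idea that large language models scale up with neural networks and far more context.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "posts from Reddit, news articles and ebooks".
corpus = "the cat sat on the mat the cat ate".split()

# Count how often each word follows each other word.
pair_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    pair_counts[prev][nxt] += 1

def next_word_probs(word):
    """Return a probability distribution over the next word."""
    counts = pair_counts[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("the"))  # "cat" comes out more likely than "mat"
print(next_word_probs("cat"))  # "sat" and "ate" split the probability
```

A real model replaces the pair counts with billions of learned parameters and conditions on long stretches of surrounding text, but the output is the same kind of object: a probability over what word comes next.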

ChatGPT and PaLM + RLHF share a special sauce in Reinforcement Learning from Human Feedback, a technique that aims to better align language models with what users wish them to accomplish. RLHF involves training a language model — in PaLM + RLHF’s case, PaLM — and fine-tuning it on a data set that includes prompts (e.g. “Explain machine learning to a six-year-old”) paired with what human volunteers expect the model to say (e.g. “Machine learning is a form of AI…”). The aforementioned prompts are then fed to the fine-tuned model, which generates several responses, and the volunteers rank all the responses from best to worst. Finally, the rankings are used to train a “reward model” that takes the original model’s responses and sorts them in order of preference, filtering for the top answers to a given prompt.
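The steps above can be sketched as a toy pipeline. Everything in this snippet is a stand-in, not PaLM + RLHF’s actual code: "responses" are plain numbers, the "volunteers" secretly prefer larger ones, and the "reward model" is a trivial scorer fit to match their rankings. The shape of the loop is what matters: sample candidates, collect human rankings, train a reward model, then use it to pick preferred outputs.

```python
import random

random.seed(0)

def generate_responses(prompt, n=4):
    # Stand-in for the fine-tuned language model sampling n candidates.
    return [random.gauss(0, 1) for _ in range(n)]

def volunteer_ranking(responses):
    # Humans rank candidates best-to-worst (here: larger is "better").
    return sorted(responses, reverse=True)

def train_reward_model(ranked_batches):
    # In the real recipe this is a neural net trained on preference data;
    # here an identity scorer already reproduces the volunteers' rankings.
    return lambda response: response

# Collect rankings over a batch of prompts, then fit the reward model.
prompts = ["Explain machine learning to a six-year-old"] * 8
ranked = [volunteer_ranking(generate_responses(p)) for p in prompts]
reward_model = train_reward_model(ranked)

# The reward model can now sort fresh responses by predicted preference.
fresh = generate_responses("Draft an email")
best = max(fresh, key=reward_model)
```

In the full recipe the reward model’s scores then drive a reinforcement learning step that updates the language model itself, which is where most of the engineering effort lives.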

It’s an expensive process, collecting the training data. And training itself isn’t cheap. PaLM is 540 billion parameters in size, “parameters” referring to the parts of the language model learned from the training data. A 2020 study pegged the expenses for developing a text-generating model with only 1.5 billion parameters at as much as $1.6 million. And to train the open source model Bloom, which has 176 billion parameters, it took three months using 384 Nvidia A100 GPUs; a single A100 costs thousands of dollars.
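The Bloom figures above make for easy back-of-the-envelope math. The GPU-hour rate below is a hypothetical illustration, not an actual cloud price; only the GPU count and duration come from the article.

```python
# Bloom's reported training footprint: 384 A100s for roughly three months.
gpus = 384
hours = 3 * 30 * 24            # ~three months of wall-clock time
cost_per_gpu_hour = 1.50       # hypothetical rate in USD, for illustration

gpu_hours = gpus * hours
compute_cost = gpu_hours * cost_per_gpu_hour
print(f"{gpu_hours:,} GPU-hours ≈ ${compute_cost:,.0f}")
```

Even at that modest hypothetical rate, the compute alone lands north of a million dollars, before counting data collection, storage, failed runs or engineering time.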

Running a trained model of PaLM + RLHF’s size isn’t trivial, either. Bloom requires a dedicated PC with around eight A100 GPUs. Cloud alternatives are pricey, with back-of-the-envelope math finding the cost of running OpenAI’s text-generating GPT-3 — which has around 175 billion parameters — on a single Amazon Web Services instance to be around $87,000 per year.
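Working that $87,000-per-year figure backwards gives a feel for the hourly burn, assuming the instance serves traffic around the clock:

```python
# Convert the article's annual serving estimate into an hourly rate.
annual_cost = 87_000           # USD per year, from the estimate above
hourly = annual_cost / (365 * 24)
print(f"≈ ${hourly:.2f} per hour of serving")
```

That is roughly ten dollars an hour just to keep one instance of the model online, independent of how many users it actually serves.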

Sebastian Raschka, an AI researcher, points out in a LinkedIn post about PaLM + RLHF that scaling up the necessary dev workflows could prove to be a challenge as well. “Even if someone provides you with 500 GPUs to train this model, you still need to have to deal with infrastructure and have a software framework that can handle that,” he said. “It’s obviously possible, but it’s a big effort at the moment (of course, we are developing frameworks to make that simpler, but it’s still not trivial, yet).”

That’s all to say that PaLM + RLHF isn’t going to replace ChatGPT today — unless a well-funded venture (or person) goes to the trouble of training and making it available publicly.

In better news, several other efforts to replicate ChatGPT are progressing at a fast clip, including one led by a research group called CarperAI. In partnership with the open AI research organization EleutherAI and startups Scale AI and Hugging Face, CarperAI plans to release the first ready-to-run, ChatGPT-like AI model trained with human feedback.

LAION, the nonprofit that supplied the initial data set used to train Stable Diffusion, is also spearheading a project to replicate ChatGPT using the newest machine learning techniques. Ambitiously, LAION aims to build an “assistant of the future” — one that not only writes emails and cover letters but “does meaningful work, uses APIs, dynamically researches information, and much more.” It’s in the early stages. But a GitHub page with resources for the project went live a few weeks ago.

There’s now an open source alternative to ChatGPT, but good luck running it by Kyle Wiggers originally published on TechCrunch

