
Deck.blue brings a TweetDeck experience to Bluesky users

With over 3 million users and plans to open up more broadly in the months ahead, Bluesky is still establishing itself as an alternative to Twitter/X. However, that hasn’t stopped the developer community from embracing the project and building tools to meet the needs of those fleeing the now Elon Musk-owned social network, formerly known […]

from TechCrunch https://ift.tt/TBbEAPF

How confidential computing could secure generative AI adoption

Generative AI has the potential to change everything. It can inform new products, companies, industries, and even economies. But what makes it different and better than “traditional” AI could also make it dangerous.

Its unique ability to create has opened up an entirely new set of security and privacy concerns.

Enterprises are suddenly having to ask themselves new questions: Do I have the rights to the training data? To the model? To the outputs? Does the system itself have rights to data that’s created in the future? How are rights to that system protected? How do I govern data privacy in a model using generative AI? The list goes on.

It’s no surprise that many enterprises are treading lightly. Blatant security and privacy vulnerabilities coupled with a hesitancy to rely on existing Band-Aid solutions have pushed many to ban these tools entirely. But there is hope.

Confidential computing — a new approach to data security that protects data while in use and ensures code integrity — is the answer to the more complex and serious security concerns of large language models (LLMs). It’s poised to help enterprises embrace the full power of generative AI without compromising on safety. Before I explain, let’s first take a look at what makes generative AI uniquely vulnerable.

Generative AI has the capacity to ingest an entire company’s data, or even a knowledge-rich subset, into a queryable intelligent model that provides brand new ideas on tap. This has massive appeal, but it also makes it extremely difficult for enterprises to maintain control over their proprietary data and stay compliant with evolving regulatory requirements.

This concentration of knowledge and subsequent generative outcomes, without adequate data security and trust control, could inadvertently weaponize generative AI for abuse, theft, and illicit use.

Indeed, employees are increasingly feeding confidential business documents, client data, source code, and other pieces of regulated information into LLMs. Since these models are partly trained on new inputs, this could lead to major leaks of intellectual property in the event of a breach. And if the models themselves are compromised, any content that a company has been legally or contractually obligated to protect might also be leaked. In a worst-case scenario, theft of a model and its training data would allow a competitor or nation-state actor to duplicate everything wholesale.

These are high stakes. Gartner recently found that 41% of organizations have experienced an AI privacy breach or security incident, and more than half of those were the result of a data compromise by an internal party. The advent of generative AI is bound to push those numbers higher.

Separately, enterprises also need to keep up with evolving privacy regulations when they invest in generative AI. Across industries, there’s a deep responsibility and incentive to stay compliant with data requirements. In healthcare, for example, AI-powered personalized medicine has huge potential when it comes to improving patient outcomes and overall efficiency. But providers and researchers will need to access and work with large amounts of sensitive patient data while still staying compliant, presenting a new quandary.

To address these challenges, and the rest that will inevitably arise, generative AI needs a new security foundation. Protecting training data and models must be the top priority; it’s no longer sufficient to encrypt fields in databases or rows on a form.

In scenarios where generative AI outcomes are used for important decisions, evidence of the integrity of the code and data—and the trust it conveys—will be absolutely critical, both for compliance and for potentially legal liability management. There must be a way to provide airtight protection for the entire computation and the state in which it runs.
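
To make that integrity evidence concrete, here is a minimal sketch in Python of the measurement step that attestation schemes build on: hash the code and data artifacts and compare the result against a known-good value recorded at build time. The artifact name and expected digest are illustrative assumptions, not any platform's actual format.

import hashlib
from pathlib import Path

# Illustrative known-good measurement recorded when the model was built.
# Real attestation (e.g., a TEE quote) signs a measurement like this with
# a hardware-backed key; only the comparison step is shown here.
EXPECTED_DIGEST = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

def measure(path: Path) -> str:
    """Return the SHA-256 digest of a code or model artifact."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_integrity(path: Path, expected: str) -> bool:
    """Refuse to trust an artifact whose measurement has drifted."""
    return measure(path) == expected

# Usage (hypothetical artifact name):
#   assert verify_integrity(Path("model_weights.bin"), EXPECTED_DIGEST)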

The advent of “confidential” generative AI

Confidential computing offers a simple yet hugely powerful way out of what would otherwise seem to be an intractable problem. With confidential computing, data and IP are completely isolated from infrastructure owners and made accessible only to trusted applications running on trusted CPUs. Data privacy is ensured through encryption, even during execution.
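
As a rough sketch of that isolation flow, and not any vendor's actual API: the data owner keeps the dataset encrypted, asks the workload for attestation evidence, and releases the decryption key only if the evidence matches code it trusts. The verify_attestation check below is simulated; on real hardware it would validate a signed quote (an SGX, SEV, or TDX report, for example) against the chip vendor's root of trust.

from cryptography.fernet import Fernet

# Hypothetical known-good code measurement the data owner trusts.
TRUSTED_MEASUREMENT = "sha256:illustrative-code-hash"

def verify_attestation(report: dict) -> bool:
    # Simulated: a real verifier checks a hardware-signed quote, not a dict.
    return report.get("measurement") == TRUSTED_MEASUREMENT

def release_key(report: dict, data_key: bytes) -> bytes:
    """Hand the dataset key over only if the workload attests successfully."""
    if not verify_attestation(report):
        raise PermissionError("untrusted workload; key withheld")
    return data_key

# Data owner side: encrypt once; the key is withheld until attestation passes.
data_key = Fernet.generate_key()
ciphertext = Fernet(data_key).encrypt(b"proprietary training records")

# Workload side (simulated): present evidence, then decrypt inside the TEE.
evidence = {"measurement": TRUSTED_MEASUREMENT}
plaintext = Fernet(release_key(evidence, data_key)).decrypt(ciphertext)

The point of the pattern is that the infrastructure operator only ever handles ciphertext; plaintext exists solely inside the attested boundary.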

Data security and privacy become intrinsic properties of cloud computing—so much so that even if a malicious attacker breaches the infrastructure, the data, IP, and code remain completely invisible to that bad actor. This is perfect for generative AI, mitigating its security and privacy risks and shrinking its attack surface.

Confidential computing has been rapidly gaining traction as a security game-changer. Every major cloud provider and chip maker is investing in it, with leaders at Azure, AWS, and GCP all proclaiming its efficacy. Now, the same technology that’s converting even the most steadfast cloud holdouts could be the solution that helps generative AI take off securely. Leaders must begin to take it seriously and understand its profound impacts.

With confidential computing, enterprises gain assurance that generative AI models learn only from data they intend to use, and nothing else. Training with private datasets across a network of trusted sources spanning clouds gives enterprises full control and peace of mind. All information, whether an input or an output, remains completely protected and behind a company’s own four walls.
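
One way to picture that assurance, as a hedged sketch rather than any product's actual behavior: inside the trusted boundary, a policy gate admits only datasets from sources the enterprise has explicitly approved before anything reaches the training loop. The source names here are invented for illustration.

# Hypothetical policy gate inside the trusted boundary: only batches from
# explicitly approved sources ever reach the training loop.
APPROVED_SOURCES = {"crm-exports", "support-tickets"}  # illustrative names

def admit(batches: list[tuple[str, bytes]]) -> list[bytes]:
    """Drop any batch whose source is not on the approved list."""
    return [payload for source, payload in batches if source in APPROVED_SOURCES]

incoming = [("crm-exports", b"..."), ("scraped-web", b"...")]
training_data = admit(incoming)  # only the approved batch survives
assert len(training_data) == 1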

How confidential computing could secure generative AI adoption by Walter Thompson originally published on TechCrunch



from TechCrunch https://ift.tt/490YdMc
