As consumers, businesses, and governments flock to the promise of low-cost, fast, and seemingly magical AI tools, one question keeps nagging: how do you keep your data private?
Tech giants such as OpenAI, Anthropic, xAI, and Google quietly collect and retain user data to improve their models or monitor for safety and security, even in some enterprise contexts where companies assume their information is off limits. For highly regulated industries or companies building on the frontier, that gray area can be a deal-breaker. Fears about where data goes, who can see it, and how it might be used are slowing AI adoption in sectors like healthcare, finance, and government.
Enter Confident Security, a San Francisco-based startup that aims to be "the Signal for AI." The company's product, CONFSEC, is an end-to-end encryption tool that wraps around foundational models, guaranteeing that prompts and metadata cannot be stored, seen, or used for AI training, even by the model provider or any third party.
"The second you give up your data to someone else, you've essentially reduced your privacy," Jonathan Mortensen, founder and CEO of Confident Security, told TechCrunch. "And our product's goal is to remove that trade-off."
Confident Security came out of stealth on Thursday with $4.2 million in seed funding from Decibel, South Park Commons, Ex Ante, and Swyx, TechCrunch has learned. The company wants to serve as an intermediary between AI providers and their customers, such as hyperscalers, governments, and enterprises.
Mortensen said that even the AI companies themselves could see value in offering Confident Security's tool as a way to unlock that market. He added that CONFSEC is also well suited to the new AI browsers hitting the market, such as Perplexity's recently released Comet, to give customers a guarantee that their sensitive data isn't being stored on a server that the company or bad actors could access, or used for AI training.
CONFSEC is modeled on Apple's Private Cloud Compute (PCC) architecture, which Mortensen says "is 10x better than anything out there" at guaranteeing that Apple cannot see your data when it securely runs certain AI tasks in the cloud.
Like Apple's PCC, Confident Security's system works by first anonymizing data through encryption and routing it through services like Cloudflare or Fastly, so servers never see the original source or content. Next, it uses advanced encryption that only allows decryption under strict conditions.
"So you can say, you're only allowed to decrypt this if you're not going to log the data, and you're not going to use it for training, and you're not going to let anyone see it," Mortensen said.
Finally, the software that runs the AI inference is publicly logged and open to review, so that experts can verify its guarantees.
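The flow described above can be sketched in a toy example: the client encrypts its prompt, a relay strips identifying metadata, and the inference node may decrypt only if its attested configuration satisfies a no-logging, no-training, no-access policy. This is a conceptual illustration only; the class names, the policy fields, and the one-time-pad XOR "cipher" are all hypothetical stand-ins, not Confident Security's actual implementation or real cryptography.

```python
import os

# Policy conditions under which decryption is permitted (hypothetical names).
POLICY = {"no_logging": True, "no_training": True, "no_third_party_access": True}

def xor(data: bytes, key: bytes) -> bytes:
    """Toy one-time-pad XOR; a stand-in for real encryption."""
    return bytes(a ^ b for a, b in zip(data, key))

class Relay:
    """Models a routing service (e.g. a CDN) that strips identifying
    metadata so the inference node never sees the original source."""
    def forward(self, ciphertext: bytes) -> bytes:
        return ciphertext  # source IP and headers would be dropped here

class InferenceNode:
    """Decrypts a request only if its attested configuration meets the policy."""
    def __init__(self, attested_config: dict):
        self.attested_config = attested_config

    def decrypt(self, ciphertext: bytes, key: bytes) -> bytes:
        if any(not self.attested_config.get(k) for k in POLICY):
            raise PermissionError("attested config violates policy; refusing to decrypt")
        return xor(ciphertext, key)

prompt = b"summarize my medical records"
key = os.urandom(len(prompt))
ciphertext = xor(prompt, key)
received = Relay().forward(ciphertext)

# A compliant node can decrypt and serve the request.
assert InferenceNode(attested_config=POLICY).decrypt(received, key) == prompt

# A node configured to log data is refused decryption.
try:
    InferenceNode(attested_config={"no_logging": False}).decrypt(received, key)
except PermissionError:
    print("decryption refused")
```

In a real system the policy check would be enforced cryptographically via remote attestation of the node's software, rather than by an honor-system `if` statement as in this sketch.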
"Confident Security is ahead of the curve in recognizing that the future of AI depends on trust built into the infrastructure itself," said Jess Leão, partner at Decibel. "Without solutions like this, many enterprises simply can't move forward with AI."
It's still early days for the year-old company, but Mortensen said CONFSEC has been tested, externally audited, and is production-ready. The team is in talks with banks, browsers, and search engines, among other potential customers, about adding CONFSEC to their infrastructure stacks.
"You bring the AI, we bring the privacy," said Mortensen.
