Elon Musk, the billionaire behind Tesla and SpaceX, said on Monday that he would ban the sale of Apple devices at his firms if the iPhone maker integrates OpenAI artificial intelligence technology at the operating system level. The threat, posted on Musk’s social media platform X.com, formerly known as Twitter, came hours after Apple revealed a broad partnership with OpenAI at its annual Worldwide Developers Conference.
“This is an unacceptable security breach,” Musk wrote in a post on X, referring to Apple’s plans to weave OpenAI’s powerful language models and other artificial intelligence capabilities into the core of its iOS, iPadOS and macOS operating systems. “And visitors will need to check their Apple devices at the door, where they will be stored in a Faraday cage,” he added, apparently referring to the shielded housing that blocks electromagnetic signals.
Intensifying competition between technology giants
Musk’s broadside against Apple and OpenAI highlights the escalating rivalry and tension between tech giants as they race for dominance in the booming generative artificial intelligence market. Tesla’s CEO has been an outspoken critic of OpenAI, the company he helped found as a nonprofit in 2015 before an acrimonious split, and is now positioning his own artificial intelligence startup, xAI, as a direct competitor to Apple, OpenAI and other major players.
But Musk is not alone in raising concerns about the security implications of Apple’s tight integration with OpenAI technology, which will enable developers across the iOS ecosystem to leverage the startup’s advanced language models for applications such as natural language processing, image generation and more. Pliny the Prompter, a pseudonymous but widely respected cybersecurity researcher known for jailbreaking OpenAI’s ChatGPT model, called the move a “bold” but potentially dangerous step given the current state of AI security.
Security concerns are high
“Time will tell! A bold integration move in this regard, given the current state of llm security,” Pliny wrote on X, using an acronym for large language models such as OpenAI’s GPT series. In recent months, Pliny and other researchers have demonstrated that they can bypass the safeguards of ChatGPT and other AI models, getting them to generate malicious content or reveal sensitive information from their training data.
The tech industry has grappled with data breaches, cyberattacks and the theft of sensitive user information in recent years, raising the stakes for Apple as it opens its operating systems to third-party artificial intelligence. While Apple has long advocated for user privacy and insists that OpenAI will follow its strict data privacy policies, some security experts fear the partnership could create new security vulnerabilities for bad actors to exploit.
From this perspective, Apple is essentially installing a black box at the heart of its operating system and trusting that OpenAI’s systems and security are robust enough to keep users protected. Yet even today’s most advanced AI models are prone to errors, biases and potential abuse. This is a calculated risk on Apple’s part.
Musk’s turbulent history with OpenAI
Both Apple and OpenAI insist that AI systems integrated with iOS will run on users’ devices by default, instead of sending sensitive data to the cloud, and that developers using Apple Intelligence tools will be subject to strict guidelines designed to prevent abuse. But details remain scarce, and some fear the allure of user data from Apple’s 1.5 billion active devices could create a temptation for OpenAI to bend its own rules.
Musk’s history with OpenAI has been tumultuous. He previously supported the company and served as its board chairman before leaving in 2018 amid disagreements over its direction. Musk has since criticized OpenAI for transforming from a nonprofit research lab into a for-profit giant and accused it of abandoning its original mission to develop AI that is safe and beneficial to humanity.
Now, with his xAI startup riding a wave of hype and a recent $6 billion fundraising round, Musk seems eager to fuel the narrative of an epic AI battle in the making. By threatening to ban the use of Apple devices in his companies’ offices, factories and facilities around the world, the tech mogul is making it clear that he sees the coming competition as all-out and zero-sum.
Time will tell whether Musk will follow through on a wholesale ban of Apple devices at Tesla, SpaceX and his other firms. As Meta’s chief AI specialist recently noted, Musk often makes “blatantly false predictions” in the press. The sheer logistical and security challenges of enforcing such a policy across tens of thousands of employees would be enormous. Some also question whether Musk, as CEO, actually has the right to unilaterally ban the use of employees’ personal devices.
But the episode shows the strange alliances and hostilities taking shape in Silicon Valley’s AI gold rush, where yesterday’s partners can quickly become today’s rivals and vice versa. With technology superpowers like Apple, Microsoft, Google and Amazon now deeply involved in OpenAI or developing their own advanced AI, the battle lines are being drawn to decide the future of computing.
As the stakes rise and the saber-rattling intensifies, cybersecurity researchers like Pliny the Prompter will be watching closely for any signs of vulnerabilities that could harm consumers. “We’re going to have fun, Pliny!” joked Comed, another prominent AI security tester, in a lighthearted but ominous exchange on X on Monday. Fun, it seems, is one word for it.