Gen AI security issues are forcing enterprises to modernize their AI auditing measures

The constant drumbeat of high-profile mistakes made by customer support AI agents, from big names like Chevy and Air Canada to even New York City, has refocused attention on the need for greater reliability.

If you are an enterprise decision-maker building generative AI applications and strategies, and are struggling to keep up with the latest chatbot technology and how to stay accountable for it, you need to apply to attend our exclusive AI event, “AI Audit,” in New York on June 5.


During this networking event from VentureBeat, aimed at technical leaders of enterprises involved in the design and development of AI products, we’ll hear from three key players in the ecosystem on the latest best practices in AI auditing.

We’ll hear from Michael Raj, Verizon’s vice president of AI and data, on how he uses rigorous AI audits and worker training to shape the framework for the responsible use of generative AI in customer interactions.

Michael Raj of Verizon, Rebecca Qian of Patronus AI, and Justin Greenberger of UiPath

We’ll also hear from Rebecca Qian, co-founder and chief technology officer of Patronus AI, a leader in creating AI audit strategies and technology that help locate and patch security gaps. Qian worked at Meta for over four years and led AI evaluation efforts at Meta AI Research (FAIR).

I’ll be hosting the conversation with my colleague Carl Franzen, editor-in-chief at VentureBeat. We’re thrilled to have UiPath sponsoring the event: Justin Greenberger, vice president at UiPath, will also be on hand to share insights on how audit and compliance guidelines are changing as a result of rapid advances in artificial intelligence, and how to manage these processes throughout the organization. The event is part of our AI Impact Tour, a series of events designed to foster conversations and connections between enterprise decision-makers looking to implement generative AI applications in real-world deployments.

So what exactly is an AI audit, and how does it differ from AI governance? Once you establish your broader AI governance policies, you need to audit your generative AI applications to ensure they actually follow the policies you set. This is becoming increasingly necessary given the rapid pace of change in the technology. Leading LLM providers such as OpenAI and Google are constantly improving the latest versions of ChatGPT and Gemini. As of last week, these AI models can see, hear, speak, and even inject emotion into their interactions. This, combined with advances from other providers including Meta (Llama 3), Anthropic (Claude) and Inflection (and its recent empathy-focused AI), makes keeping pace with accuracy, privacy and other auditing requirements a challenge.

It is worth noting that many new companies, including Patronus AI, have emerged to fill the void in this area by launching benchmarks, datasets and diagnostics to help in areas such as detecting personally identifiable information (PII) in bot outputs. It turns out that even standard techniques such as retrieval-augmented generation (RAG), extended context windows and system prompts are not enough to eliminate errors. Sometimes problems are inherent in the LLM training datasets themselves, which often lack transparency. This makes auditing all the more necessary.
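To make the PII-detection idea concrete, here is a minimal sketch of what an automated audit check on chatbot responses might look like. The regex patterns and function names are hypothetical illustrations; production audit tools from vendors like Patronus AI use far more sophisticated detectors than simple regular expressions.

```python
import re

# Hypothetical patterns for a toy PII audit check. Real auditing systems
# combine many detectors (ML classifiers, entity recognizers, etc.).
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def audit_response(text: str) -> list[str]:
    """Return the PII categories detected in a chatbot response."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

# Flag responses that would leak PII before they reach a customer.
print(audit_response("Your account is registered to jane@example.com"))  # ['email']
print(audit_response("Your order has shipped."))  # []
```

A check like this would typically run as one gate in a larger audit pipeline, alongside accuracy and policy-compliance evaluations, with flagged responses logged for human review.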

Don’t miss this important gathering for enterprise AI decision-makers who want to lead with integrity in the digital age. Apply to join us on the AI Impact Tour and secure your place at the forefront of AI innovation and governance.
