Artificial IntelligenceNews/PR

Enkrypt AI unveils LLM Safety Leaderboard to enable enterprises to adopt GenAI safely and responsibly

3 Mins read
LLM Safety
  • Revolutionizing AI Security: Enkrypt AI debuts the groundbreaking LLM Safety Leaderboard at the RSA conference, setting the benchmark of transparency and security in AI technology.
  • Smart Choices, Safer Tech, Faster Adoption: With the LLM Safety Leaderboard, enterprises can swiftly identify the safest and most reliable AI models for their needs by understanding their vulnerabilities and enhancing tech trustworthiness.
  • Ethics and Compliance Front and Center: Enkrypt AI’s latest innovation allows AI engineers to make informed decisions to uphold the highest ethical and regulatory standards, building a future where AI is safe for all.

The rapid adoption of Generative AI, including in regulated settings, has continued to make the security and safety of Large Language Models (LLMs) a key concern amongst cybersecurity professionals. Policy-makers and security professionals around the world continue to seek new technology to help mitigate the risks of Generative AI technologies. For example; just days ago, the US Government’s Department of Homeland Security appointed a board to advise on the role of artificial intelligence on critical infrastructure.

“LLMs are increasingly seen as potential back-office powerhouses for enterprises, processing data and enabling faster front-office decision-making. Consider a fintech where an LLM-powered application is key in rejecting a loan application from a person of color without clear explanation. This raises concerns about implicit biases, as LLMs often reflect societal inequities present in their training data sourced from the internet. Moreover, cases like Google’s LLM appearing ‘woke’ highlight the risks of overcorrecting these biases. How safe is Anthropic’s Claude3 Model? Is Cohere’s Command R+ LLM really ready for enterprise use? These scenarios underscore the urgent need for careful checks on these models to prevent exacerbating societal inequities and causing harm.”

At the highly anticipated RSA conference, Enkrypt AI, the leader in securing Generative AI technologies, will introduce its latest innovation, the LLM Safety Leaderboard. This product is part of Enkrypt AI’s comprehensive Sentry suite, designed to empower enterprises to deploy LLMs with heightened security and peace of mind.

The LLM Safety Leaderboard will provide essential insights into the vulnerabilities and hallucination risks of various LLMs, enabling technology teams to make informed decisions about which models best suit their specific needs. This tool aims to educate and raise awareness about the relative strengths and potential weaknesses of different LLMs, so AI engineers can make informed decisions about the unique strengths of each.

Highlights of the LLM Safety Leaderboard include: Comprehensive Vulnerability Insights which delivers detailed evaluations of potential security risks, including data leakage, privacy breaches, and susceptibility to cyber-attacks. Ethical and Compliance Risk Assessment which tests for biases, toxicity, and compliance with ethical standards and regulatory requirements, ensuring models align with enterprise and brand values.

The LLM Safety Leaderboard is a new component of Enkrypt’s Sentry suite, which includes Sentry Red Team, Sentry Guardrails, and Sentry Compliance. This suite offers a holistic approach to managing and securing LLMs, aligning with the strictest standards for privacy, security, and compliance within the enterprise environment.

The announcement comes as a new preprint paper by Enkrypt AI, “Increased LLM Vulnerabilities from Fine-tuning and Quantization”, has found that common practices used to implement LLMs in business settings, namely fine-tuning and quantization, lead to increased risk of security vulnerabilities namely from jailbreaking. However, implementing external guardrails platforms like Enkrypt’s Sentry Guardrails solution was successful in mitigating such vulnerabilities. On one model, Enkrypt’s Sentry Guardrails provided a 9x reduction in vulnerability to jailbreaking attacks.

Sahil Agarwal, CEO of Enkrypt AI, said: “With the launch of the LLM Safety Leaderboard, we are enhancing our commitment to enabling the safe, secure, and responsible use of generative AI in the enterprise. This tool will serve as a critical resource for organizations aiming to navigate the complexities of AI implementation with full confidence in their security posture.”

Prashanth Harshangi, CTO of Enkrypt AI, added: “In the last two quarters, our team has been solely focused on generative AI safety and making rapid progress with our Sentry Suite. Comprising three key components – Sentry Red Team, Sentry Guardrails, and Sentry Compliance. With the LLM Safety Leaderboard, we are proud to offer a product that not only identifies potential risks but also empowers businesses to proactively manage and mitigate these challenges, enabling informed and faster decision making.”

Read next: Metaverse ecommerce to become a $200 billion industry by 2030

Leave a Reply

Your email address will not be published. Required fields are marked *

− 5 = 4