Inspiration:

AI is revolutionizing services for every business. But it is a double-edged sword. Advanced AI = advanced manipulation tactics. Honestly, it is pretty hard for businesses to stay up-to-date with the latest manipulation tactics leading to data leakage. And interestingly, this is the number-one concerning thing that demotivates small businesses from integrating AI into their processes.

What it does

We turn AI threats into enterprises' AI solutions' defense. Sniffy's value is that it uses Linkup, APIfy, and Bem to constantly stay up-to-date with the latest AI security threats. Then Sniffy powers Operant AI to use these latest malicious prompts to secure your AI model. Sniffy extracts all the existing AI security threats causing data leakage. Then we use these malicious prompts to test enterprise AI models for vulnerabilities. If your model fails against any of these prompts, we secure it using GateKeeper. This makes sure your model is up-to-date with the latest threat in town, and your customers feel protected. This also increases the trust of small businesses in integrating AI into their systems.

How we built it

Scan & Extract: Leverage the power of Linkup & APIfy to scan for the Urls mentioning latest AI threats and precisely extract critical web data. Structure & Understand: Utilize the cutting-edge Bem AI to transform this unstructured raw url data into precise, structured test prompts. These prompts are the malicious threats. Test & Identify: Automate comprehensive red-teaming with Woodpecker by Operant AI to pinpoint vulnerabilities in your AI. Secure & Protect: Implement robust fixes and safeguard your AI using Gatekeeper by Operant AI.

Challenges we ran into

We ran into issues setting up GateKeeper and Bem SDK

Accomplishments that we're proud of

We are proud of coming up with this valuable idea and executing it in a tight deadline

What we learned

We learned working with APIs, and AI security.

What's next for Sniffy

Next, Sniffy aims to be used by enterprises to periodically test their AI models. We test the models in 3 ways:

  1. if we notice any new security threats
  2. if their is any update in model
  3. At a constant periodic cycle We believe that AI security is one of the most important things we need to focus on. Considering that, if we get the positive feedback we are aiming for, we also plan to contribute more to Operant AI and Woodpecker.

Built With

Share this project:

Updates