{AI Security Research Lab}



What we are

PII Detection & Synthetic Data

We focus on PII detection and synthetic data generation. Our goal is to enhance data privacy and security.


GenAI Security

We concentrate on the security of AI agents, aiming to prevent unauthorized access and malicious use.


LLM Security

We develop strategies to protect large language models from jailbreak and prompt injection attacks.

AI Ethics

Our research focuses on the ethical use of AI, aiming to prevent toxic content and harmful behaviors.


Blog Posts

Our projects

LLMBUS - AI Red Team Tool

LLMBUS is a retro-cyberpunk–themed red team toolkit designed for developers, security professionals, and AI researchers.

It offers tools for prompt transformation, tokenizer inspection, paraphrasing, and multimodal export (audio/image), along with project tracking capabilities.

Download!
LLMBUS Screenshot
Prompt Firewall Chrome Extension

The Prompt Firewall Chrome Extension is a browser add-on designed to safeguard sensitive personal information, including PII, for both individuals and businesses.

This extension offers several features specifically tailored to improve the privacy of data processed by large language models (LLMs), including pseudonymization or blocking functions.

Get Extension Now! Playground Page
LangTsunami - Multi-Lingual GenAI Red Teaming Tool

LangTsunami is a tool designed to facilitate various text manipulation tasks such as scrambling and code-switching in multiple languages.

It offers insights into undesirable LLM behaviors for red teamers and researchers focusing on multi-lingual contexts.

Download!