Skip to content

Conversation

abhijitjavelin
Copy link

@abhijitjavelin abhijitjavelin commented Sep 26, 2025

Title

Add Javelin standalone guardrails integration for LiteLLM Proxy

Relevant issues

Pre-Submission checklist

Please complete all items before asking a LiteLLM maintainer to review your PR

  • I have Added testing in the tests/litellm/ directory, Adding at least 1 test is a hard requirement - see details
  • I have added a screenshot of my new test passing locally
  • My PR passes all unit tests on make test-unit
  • My PR's scope is as isolated as possible, it only solves 1 specific problem

Type

🆕 New Feature

Changes

New Javelin Guardrails Integration

This PR adds support for Javelin's standalone guardrails API, enabling enterprise-grade content safety and security checks for LLM calls through LiteLLM Proxy.

🛡️ Features Added:

  • Prompt injection detection - Identify and block prompt injection attempts and jailbreaks
  • Trust & safety filtering - Content filtering for violence, weapons, hate speech, crime, sexual content, and profanity
  • Language detection - Detect the language of input text
  • Application-specific policies - Support for custom Javelin application configurations

🔧 Implementation:

  • Full integration with LiteLLM's guardrail system using CustomGuardrail base class
  • Support for pre_call, post_call, and during_call execution modes
  • Automatic registration through the guardrail discovery system
  • Proper error handling and HTTP exception mapping

📁 Files Added:

  • litellm/proxy/guardrails/guardrail_hooks/javelin/javelin_guardrail.py - Main implementation
  • litellm/proxy/guardrails/guardrail_hooks/javelin/init.py - Registry integration
  • litellm/proxy/guardrails/guardrail_hooks/javelin/test_javelin_guardrail.py - Unit tests
  • litellm/proxy/guardrails/guardrail_hooks/javelin/example_usage.py - Usage examples
  • docs/my-website/docs/proxy/guardrails/javelin.md - Complete documentation

📝 Configuration Example:

guardrails:
- guardrail_name: "javelin-prompt-injection"
litellm_params:
guardrail: javelin
mode: "pre_call"
guardrail_processor: "promptinjectiondetection"
api_key: os.environ/JAVELIN_API_KEY
api_base: os.environ/JAVELIN_API_BASE

✅ Testing:

  • Unit tests covering initialization, text extraction, and content analysis
  • Multi-guardrail coordination testing with other providers (OpenAI, etc.)
  • Error handling and edge case validation

This integration enables users to leverage Javelin's enterprise-grade guardrails alongside LiteLLM's unified interface for comprehensive LLM safety and security.

Copy link

vercel bot commented Sep 26, 2025

The latest updates on your projects. Learn more about Vercel for GitHub.

Project Deployment Preview Comments Updated (UTC)
litellm Error Error Sep 26, 2025 3:47pm

@CLAassistant
Copy link

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.


Abhijit L seems not to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you have already a GitHub account, please add the email address used for this commit to your account.
You have signed the CLA already but the status is still pending? Let us recheck it.

Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment
Labels
None yet
Projects
None yet
Development

Successfully merging this pull request may close these issues.

2 participants