Guardrails AI
- 194 followers
- United States of America
- http://guardrailsai.com
- contact@guardrailsai.com
Repositories
Showing 4 of 97 repositories
- detect_system_prompt_leakage (Public)
  A validator that checks for system prompt leakage in LLM output.
- provenance_embeddings (Public)
  Provenance Embeddings: validates that LLM-generated text matches some source text, based on distance in embedding space.
- guardrails-api-client (Public)
  OpenAPI specifications and scripts for generating SDKs for the various Guardrails services.
- unusual_prompt (Public)
  An input validator that detects attempts to jailbreak or trick an LLM through unusual prompting techniques.
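To illustrate the kind of check the detect_system_prompt_leakage validator performs, here is a minimal, self-contained sketch that flags an LLM output if it reproduces a contiguous run of characters from the system prompt. The function name, the character n-gram approach, and the `min_overlap` parameter are assumptions for illustration; the actual validator in the repository may use a different detection strategy.

```python
def detect_system_prompt_leakage(output: str, system_prompt: str,
                                 min_overlap: int = 8) -> bool:
    # Flag the output if it contains any contiguous run of at least
    # `min_overlap` characters copied from the system prompt.
    # (A toy n-gram overlap check, not the repository's implementation.)
    prompt = system_prompt.lower()
    out = output.lower()
    for i in range(len(prompt) - min_overlap + 1):
        if prompt[i:i + min_overlap] in out:
            return True
    return False
```

A longer `min_overlap` reduces false positives on common phrases at the cost of missing short verbatim leaks.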
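The provenance_embeddings description (matching generated text to source text by distance in embedding space) can be sketched as follows. To keep the example dependency-free, a bag-of-words vector stands in for a learned sentence embedding, and the function names and `threshold` default are illustrative assumptions, not the repository's API.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; the real validator would use
    # learned sentence embeddings rather than token counts.
    return Counter(text.lower().split())

def cosine_distance(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    if na == 0.0 or nb == 0.0:
        return 1.0
    return 1.0 - dot / (na * nb)

def validate_provenance(llm_text: str, sources: list[str],
                        threshold: float = 0.6) -> bool:
    # Pass if the generated text lies within `threshold` cosine
    # distance of at least one source chunk.
    v = embed(llm_text)
    return any(cosine_distance(v, embed(s)) <= threshold for s in sources)
```

In practice the sources would be pre-chunked and pre-embedded so each validation is a nearest-neighbor lookup rather than a full scan.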
People
This organization has no public members.