__init__.py
"""Guardrails for validating and filtering LLM outputs.

Guardrails provide runtime checks to ensure LLM outputs meet specific criteria
before they are used. They can block, filter, or transform outputs based on
custom rules.

Available guards:
- `PIICheck`: Detect personally identifiable information
- `IncludesWords`: Check if output contains specific words
- `WordsPresence`: Check for presence/absence of word lists
- `PythonFunction`: Custom validation using Python functions

Use the `@guard` decorator to apply guards to LLM functions.

Example:
    ```python
    from evidently.guardrails import guard, PIICheck

    @guard(PIICheck())
    def generate_response(prompt: str) -> str:
        # LLM generation code
        pass
    ```
"""

from .core import GuardException
from .decorators import guard
from .guards import IncludesWords
from .guards import PIICheck
from .guards import PythonFunction
from .guards import WordsPresence

__all__ = ["guard", "PIICheck", "PythonFunction", "IncludesWords", "WordsPresence", "GuardException"]