Fabra

Stop Debugging AI Decisions You Can't Reproduce

When your AI gives a bad answer, you need to know: What did it see? What got dropped? Fabra turns "the AI was wrong" into a fixable ticket.

pip install fabra-ai

30-Second Quickstart

features.py
import hashlib

from fabra.core import FeatureStore, entity, feature
from fabra.context import context, ContextItem
from fabra.retrieval import retriever

store = FeatureStore()

@entity(store)
class User:
    user_id: str

@feature(entity=User, refresh="daily")
def user_tier(user_id: str) -> str:
    # Deterministic toy logic. Python's built-in hash() is salted per
    # process, so it would assign a different tier on every restart.
    digest = hashlib.sha256(user_id.encode()).digest()
    return "premium" if digest[0] % 2 == 0 else "free"

@retriever(index="docs", top_k=3)
async def find_docs(query: str):
    pass  # Body left empty: the decorator runs the vector search against "docs"

@context(store, max_tokens=4000)
async def build_prompt(user_id: str, query: str):
    tier = await store.get_feature("user_tier", user_id)
    docs = await find_docs(query)
    return [
        ContextItem(content=f"User is {tier}.", priority=0),
        ContextItem(content=str(docs), priority=1),
    ]

$ fabra serve features.py
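
Before serving, you can sanity-check the assembled context locally. This is a minimal sketch, not from Fabra's docs: it assumes the @context-decorated build_prompt can be awaited directly and returns the assembled, token-budgeted context.

check_context.py
import asyncio

from features import build_prompt

async def main():
    # Assumption: the decorated function is awaitable outside `fabra serve`
    # and returns the final context (trimmed to max_tokens=4000).
    ctx = await build_prompt(user_id="u_123", query="How do I reset my password?")
    print(ctx)  # inspect exactly what the model would see

asyncio.run(main())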

Why Fabra?

                                Without Fabra        With Fabra
Can you prove what happened?    Logs, maybe          Full Context Record
Can you replay a decision?      No                   Yes, built-in
Why did it miss something?      Unknown              Dropped items logged
Incident resolution             Hours of guesswork   Minutes with replay
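
What makes the replay column possible is the Context Record: a durable log of everything that shaped one answer. The shape below is purely illustrative; the field names are assumptions, not Fabra's actual schema.

record_sketch.py
# Hypothetical Context Record -- field names are illustrative assumptions,
# not Fabra's real schema.
context_record = {
    "request_id": "req_abc123",                     # one record per AI decision
    "inputs": {"user_id": "u_123", "query": "How do I reset my password?"},
    "features": {"user_tier": "premium"},           # feature values as the model saw them
    "retrieved": ["docs/4", "docs/17", "docs/31"],  # what find_docs returned
    "included": ["User is premium."],               # items that fit the 4000-token budget
    "dropped": [],                                  # items cut by the budget, with reasons
}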

Ready to stop guessing?

From pip install to replayable AI decisions in 30 seconds.

Read the Quickstart Guide