Trust and verification
Use Celsus as a legal work aid, not as an autonomous decision-maker.
You should trust output more when:
- the scope is clear
- relevant documents were selected
- the answer is tied to visible sources
- the result matches the known facts of the matter
You should verify or override output when:
- the issue is outcome-determinative
- the answer is broad or inferential
- relevant source material is missing
- the answer conflicts with your legal judgment
Privacy and model use
Celsus uses the OpenAI API for AI runs, not consumer ChatGPT.
The product is designed to keep AI use explicit and scoped: ordinary case notes are not AI runs, and only an explicit AI run sends content to the model.
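As an illustration only, the shape of an explicit, scoped run over the OpenAI API might look like the sketch below. The function name, model choice, and prompt structure are assumptions for the example, not Celsus internals.

```python
# A minimal sketch of an explicit, scoped AI run over the OpenAI API.
# run_scoped_query, the model name, and the prompt layout are illustrative
# assumptions, not Celsus internals.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def run_scoped_query(question: str, documents: list[str]) -> str:
    """Send only the explicitly selected documents as context."""
    context = "\n\n---\n\n".join(documents)
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Answer only from the supplied documents. "
                        "Say so if the documents do not contain the answer."},
            {"role": "user",
             "content": f"Documents:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```

The key property is that nothing outside the selected documents reaches the model, and the call itself is a visible, deliberate step rather than a side effect of note-taking.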
As a user, you should:
- only use Celsus for matters that fit your team’s confidentiality rules
- confirm the scope before running AI actions (a sketch of such a check follows this list)
- not widen the scope casually
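A minimal sketch of what "confirm the scope first" can mean in practice, assuming a hypothetical RunScope record; the actual Celsus run interface may differ.

```python
# A sketch of a scope check before an AI action runs: the run is refused
# unless the caller names the matter and the exact documents. The dataclass
# and field names are hypothetical, not part of Celsus.
from dataclasses import dataclass

@dataclass(frozen=True)
class RunScope:
    matter_id: str           # the matter this run belongs to
    document_ids: list[str]  # the exact documents the model may see

def confirm_scope(scope: RunScope) -> None:
    """Raise before any model call if the scope is empty or implicit."""
    if not scope.matter_id:
        raise ValueError("AI run refused: no matter specified")
    if not scope.document_ids:
        raise ValueError("AI run refused: no documents explicitly selected")

# confirm_scope(RunScope(matter_id="M-1042", document_ids=["contract-v3.pdf"]))
```

Failing loudly on an empty scope keeps "narrow and visible" a property of the run itself, not just a habit the user has to remember.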
For the current public policy wording, see:
Current limits
This is still a pilot-stage product.
Current limits include:
- Celsus does not replace legal review
- broad, unspecific prompts produce weaker results
- missing documents or missing authority sources limit reliability
- AI outputs may still be incomplete, wrong, or overconfident (a lightweight citation check is sketched after this list)
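One way to make "review outputs against sources" concrete is a citation check, sketched below under the assumption that answers carry bracketed source tags like [doc:contract-v3.pdf]. The tag format is hypothetical; the point is that every citation must resolve to a document that was actually in scope.

```python
# A lightweight review aid: find citation tags in an answer that match no
# document in the selected scope. The [doc:...] tag format is a hypothetical
# convention for this example, not a Celsus feature.
import re

def unresolved_citations(answer: str, selected_ids: set[str]) -> set[str]:
    """Return citation tags in the answer that match no selected document."""
    cited = set(re.findall(r"\[doc:([^\]]+)\]", answer))
    return cited - selected_ids

# missing = unresolved_citations(answer, {"contract-v3.pdf", "lease-2021.pdf"})
# if missing is non-empty, flag the answer for manual review
```

A check like this catches dangling citations, but it does not validate the substance of the answer; that remains a human legal judgment.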
Best-practice summary
- keep ordinary notes useful and factual
- run AI only when the task is explicit
- keep the scope narrow and visible
- review outputs against sources
- escalate important points to human legal judgment