Manifesto
Low Tide is a gateway that defaults to restraint. We route intelligently, spend less compute, and make usage legible.
Principles
- Do the minimum work needed to answer.
- Route to the smallest acceptable model before reaching for premium.
- Never hide cost; always surface receipts.
- Prefer explicit upgrades over automatic re-runs.
- Disclose estimation methods in plain language.
How it behaves
- Short outputs by default.
- Routing profiles for common intents.
- Cache reuse to avoid repeat inference.
- Token receipts for auditing and reporting.
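The routing behavior above can be sketched as an intent-to-tier lookup. This is a minimal illustration, not Low Tide's actual configuration format; the profile names and tier labels are hypothetical.

```python
# Hypothetical routing profiles: each intent maps to an ordered list of
# model tiers, smallest acceptable first. An explicit upgrade would move
# to the next tier rather than re-running automatically.
ROUTING_PROFILES = {
    "summarize": ["small", "medium"],
    "code-review": ["medium", "premium"],
    "default": ["small", "medium", "premium"],
}

def route(intent: str) -> str:
    """Return the smallest acceptable tier for a given intent."""
    tiers = ROUTING_PROFILES.get(intent, ROUTING_PROFILES["default"])
    return tiers[0]

print(route("summarize"))    # -> small
print(route("code-review"))  # -> medium
```

Unknown intents fall back to the default profile, which still starts at the smallest tier.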
Transparency
- Provider and model used for each request.
- Input and output token counts (measured when available, otherwise estimated).
- Profile used and whether the response came from cache.
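A receipt covering the fields listed above might look like the sketch below. The field names and schema are illustrative assumptions, not Low Tide's actual receipt format; note that prompt text is deliberately absent.

```python
from dataclasses import dataclass, asdict

@dataclass
class Receipt:
    """Illustrative token receipt: usage metadata only, no prompt text."""
    provider: str
    model: str
    input_tokens: int
    output_tokens: int
    tokens_estimated: bool  # True when counts were estimated, not measured
    profile: str
    from_cache: bool

r = Receipt(
    provider="example-provider",
    model="small-1",
    input_tokens=120,
    output_tokens=80,
    tokens_estimated=False,
    profile="summarize",
    from_cache=True,
)
print(asdict(r))
```

Keeping the receipt to counts and flags is what makes it safe to export into auditing and reporting pipelines.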
Emissions are estimated with a configurable factor per 1k tokens. This is an accounting proxy, not a measurement of electricity use.
Formula:
kgCO2e = tokens / 1000 * factor
Savings display (optional) compares a baseline factor you configure against the actual factor for the same token count. This is an estimate, not a guarantee.
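The formula and the savings comparison translate directly into code. The factor values below are illustrative placeholders, not measurements; both functions are accounting proxies, as stated above.

```python
def estimate_kgco2e(tokens: int, factor_per_1k: float) -> float:
    """Apply the stated accounting formula: kgCO2e = tokens / 1000 * factor."""
    return tokens / 1000 * factor_per_1k

def estimated_savings(tokens: int, baseline_factor: float, actual_factor: float) -> float:
    """Savings display: baseline estimate minus actual estimate, same token count."""
    return estimate_kgco2e(tokens, baseline_factor) - estimate_kgco2e(tokens, actual_factor)

# 2000 tokens at an illustrative factor of 0.5 kg per 1k tokens:
print(estimate_kgco2e(2000, 0.5))          # -> 1.0
# Savings against a higher configured baseline (an estimate, not a guarantee):
print(estimated_savings(2000, 0.5, 0.3))
```

Because both terms use the same token count, the savings number reflects only the difference between the configured factors.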
Data retention
- Receipts exclude prompt text by default.
- Conversation history can be stored locally or synced per workspace.
- Keep logs minimal and redact prompts in any analytics pipeline.
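One way to redact prompts before events reach analytics is to strip the text fields and keep only aggregate sizes. This is a minimal sketch under assumed field names (`prompt`, `response`), not a prescribed pipeline step.

```python
def redact_event(event: dict) -> dict:
    """Drop prompt/response text from an analytics event, keeping only lengths."""
    redacted = {k: v for k, v in event.items() if k not in ("prompt", "response")}
    redacted["prompt_chars"] = len(event.get("prompt", ""))
    redacted["response_chars"] = len(event.get("response", ""))
    return redacted

event = {"prompt": "secret text", "response": "ok", "model": "small-1"}
print(redact_event(event))
```

Length fields preserve enough signal for usage reporting without retaining any prompt content.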
Security posture
- Keys remain server-side.
- Rate limits and size caps prevent abuse.
- Model override is disabled unless explicitly enabled.
What it is
A single entry point for multiple model providers. You send a request once; Low Tide routes it according to your configured profiles, costs, and policies.
What it is not
A claim that every prompt is renewable. Most energy is consumed inside provider infrastructure. We reduce waste, estimate usage, and disclose methodology.