Cut LLM costs by 60%.
Without losing meaning.
Brevit compresses structured data before it reaches your LLM — flattening JSON, abbreviating repeated keys, and summarizing text. Same quality responses, fraction of the tokens.
{
  "friends": ["ana", "luis", "sam"],
  "hikes": [
    {"id": 1, "name": "Blue Lake", "km": 7.5},
    {"id": 2, "name": "Ridge View", "km": 9.2}
  ]
}

friends[3]:ana,luis,sam hikes[2]{id,name,km}: 1,Blue Lake,7.5 2,Ridge View,9.2
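The tabular encoding above can be sketched in a few lines of Python. This is illustrative only, not the Brevit API; `to_table` is a hypothetical helper:

```python
def to_table(name: str, rows: list[dict]) -> str:
    """Render a uniform list of dicts as a compact header + rows string.

    Hypothetical sketch of the tabular encoding: the header lists the
    shared keys once, then each row carries only comma-separated values.
    """
    keys = list(rows[0])  # assume every row has the same keys
    header = f"{name}[{len(rows)}]{{{','.join(keys)}}}:"
    body = " ".join(",".join(str(row[k]) for k in keys) for row in rows)
    return f"{header} {body}"

hikes = [
    {"id": 1, "name": "Blue Lake", "km": 7.5},
    {"id": 2, "name": "Ridge View", "km": 9.2},
]
print(to_table("hikes", hikes))
# hikes[2]{id,name,km}: 1,Blue Lake,7.5 2,Ridge View,9.2
```

The keys appear once instead of once per row, which is where the bulk of the savings on uniform arrays comes from.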
Four steps from raw data to optimized output
Brevit auto-detects your input type and applies the right compression strategy. No configuration needed.
Detect
Auto-routes your input — JSON objects, arrays, plain text, or mixed data — to the right optimization pipeline.
Optimize
Flattens nested JSON to dot-notation, converts uniform arrays to tables, and applies TextRank to compress text.
Abbreviate
Identifies repeated key prefixes and generates @alias definitions. Only abbreviates when it actually saves tokens.
Output
Delivers compact Brevit-format output that any LLM can parse natively. No schema changes, no information lost.
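The Detect step above can be sketched as a small dispatcher. This is hypothetical routing logic for illustration; the pipeline names are assumptions, not Brevit internals:

```python
def detect(value) -> str:
    """Route an input to a compression strategy (illustrative sketch)."""
    if (isinstance(value, list) and value
            and all(isinstance(v, dict) for v in value)
            and len({frozenset(v) for v in value}) == 1):
        return "table"        # uniform array of objects -> tabular encoding
    if isinstance(value, (dict, list)):
        return "flatten"      # nested JSON -> dot-notation pairs
    if isinstance(value, str):
        return "textrank"     # plain text -> extractive summarization
    return "passthrough"      # numbers, booleans, etc. left untouched
```

A uniform array (every element a dict with the same keys) is the only case that qualifies for the tabular path; anything else structured falls back to flattening.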
See it in action
Edit the JSON or paste your own. Brevit compresses instantly in your browser.
Everything you need to optimize LLM inputs
JSON Flattening
Transform nested JSON into dot-notation key-value pairs. Uniform arrays become compact tabular format — up to 60% fewer tokens.
Abbreviation Engine
Detects repeated key prefixes and auto-generates @alias shortcuts. An additional 10–25% reduction on top of flattening.
TextRank Compression
Deterministic extractive summarization using PageRank-style graph scoring. Lossless by default, ratio-controlled when needed.
Multi-Language
Identical API across JavaScript/TypeScript, Python, and C#/.NET. Same patterns, same output, everywhere.
Extensible Plugins
Register custom optimization strategies. Hook into LangChain, Semantic Kernel, Azure AI, or any system you use.
Zero Config
One line to compress any data structure. Smart auto-detection picks the right strategy — no configuration required.
Four techniques for maximum JSON reduction
Brevit analyzes your JSON structure and picks the most efficient encoding for each node.
{
  "user": {
    "name": "Jane",
    "address": {
      "city": "NYC",
      "zip": "10001"
    }
  }
}

user.name:Jane user.address.city:NYC user.address.zip:10001
Dot-Notation: Nested objects become flat key:value pairs using dot separators.
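The dot-notation transformation can be sketched as a short recursive walk. `flatten` is an illustrative helper, not part of the Brevit API:

```python
def flatten(obj: dict, prefix: str = "") -> list[str]:
    """Flatten nested dicts into dot-notation key:value pairs (sketch)."""
    pairs = []
    for key, value in obj.items():
        path = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            pairs.extend(flatten(value, path))  # recurse into nested objects
        else:
            pairs.append(f"{path}:{value}")
    return pairs

data = {"user": {"name": "Jane", "address": {"city": "NYC", "zip": "10001"}}}
print(" ".join(flatten(data)))
# user.name:Jane user.address.city:NYC user.address.zip:10001
```

Braces, quotes, and per-level indentation all disappear; only the paths and leaf values remain.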
TextRank extractive summarization
Deterministic, graph-based sentence scoring. Keep the most important sentences, discard the rest.
How TextRank Works
1. Split text into sentences and build a similarity graph
2. Score each sentence using PageRank-style iterative scoring
3. Keep top-ranked sentences based on the target ratio
4. Preserve original order for coherent output
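The four steps above can be sketched end to end in Python. Word-overlap similarity and the parameter defaults here are assumptions for illustration, not Brevit's actual scoring:

```python
import re
from itertools import combinations

def textrank(text: str, ratio: float = 0.5,
             damping: float = 0.85, iters: int = 30) -> str:
    """Extractive summarization sketch following the four steps above."""
    # 1. Split text into sentences and build a similarity graph.
    sents = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s.strip()]
    words = [set(re.findall(r"\w+", s.lower())) for s in sents]
    n = len(sents)
    sim = [[0.0] * n for _ in range(n)]
    for i, j in combinations(range(n), 2):
        overlap = len(words[i] & words[j])
        if overlap:
            sim[i][j] = sim[j][i] = overlap / (len(words[i]) + len(words[j]))
    # 2. Score each sentence with PageRank-style power iteration.
    score = [1.0 / n] * n
    for _ in range(iters):
        score = [
            (1 - damping) / n + damping * sum(
                sim[j][i] / total * score[j]
                for j in range(n)
                if sim[j][i] and (total := sum(sim[j]))
            )
            for i in range(n)
        ]
    # 3. Keep the top-ranked sentences based on the target ratio.
    keep = sorted(sorted(range(n), key=lambda i: -score[i])[:max(1, round(n * ratio))])
    # 4. Preserve original order for coherent output.
    return " ".join(sents[i] for i in keep)
```

Because scoring is plain power iteration over a fixed graph, the output is deterministic: the same text and ratio always produce the same summary.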
Intelligent @alias generation
Repeated key prefixes are automatically aliased. Brevit only abbreviates when it actually saves tokens.
Uses the first letter of the key prefix as the alias.
@c=customer
c.name:Jane
c.email:jane@ex.com
c.tier:premium
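The "only when it saves" check can be sketched as follows. `abbreviate` is a hypothetical helper, and the break-even heuristic (character counts as a proxy for tokens) is an assumption:

```python
from collections import Counter

def abbreviate(pairs: list[str]) -> list[str]:
    """Alias the most common dotted prefix, but only when it pays off.

    Illustrative sketch: uses the first letter of the prefix as the
    alias, as described above, and compares characters saved against
    the cost of the @alias definition line.
    """
    prefixes = Counter(p.split(".")[0] for p in pairs if "." in p)
    if not prefixes:
        return pairs
    prefix, count = prefixes.most_common(1)[0]
    alias = prefix[0]
    saved = count * (len(prefix) - len(alias))
    cost = len(f"@{alias}={prefix}") + 1  # definition line plus separator
    if saved <= cost:
        return pairs  # aliasing would not pay for its own definition
    return [f"@{alias}={prefix}"] + [
        p.replace(prefix + ".", alias + ".", 1) if p.startswith(prefix + ".") else p
        for p in pairs
    ]
```

For the three `customer.*` keys above, replacing each prefix saves 21 characters against a 12-character definition, so the alias is emitted; a lone short key is left alone.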
Real token savings, real data
Measured across realistic LLM workloads. Brevit consistently outperforms raw JSON and YAML, with tabular optimization delivering the highest compression ratios.
- ✓ 40–60% average reduction across all data types
- ✓ Up to 70% for primitive arrays
- ✓ 10–25% additional savings with abbreviation engine
- ✓ Lossless by default — LLMs read Brevit format natively
Same API. Three ecosystems.
Learn Brevit once and use it in JavaScript, Python, or .NET. Identical patterns, identical output.
JavaScript / TypeScript:

$ npm install brevit

import Brevit from 'brevit';

const client = new Brevit();
const result = client.brevity(data);
console.log(result.compressed);

Python:

$ pip install brevit

from brevit import Brevit

client = Brevit()
result = client.brevity(data)
print(result.compressed)

C# / .NET:

$ dotnet add package Brevit

using Brevit;

var client = new BrevitClient();
var result = client.Brevity(data);
Console.WriteLine(result.Compressed);
When to use Brevit
Perfect for
- ✓ LLM prompt pipelines with structured JSON data
- ✓ RAG systems where context window is limited
- ✓ Batch document processing with high API call volume
- ✓ Any workload where reducing token count saves cost
- ✓ Multi-turn conversations with accumulated context
- ✓ Function calling / tool-use payloads with nested objects
Consider alternatives
- ⚠ Human-readable API responses — Brevit is for LLM input
- ⚠ Data under ~100 tokens — overhead exceeds savings
- ⚠ Strict JSON schema requirements downstream
- ⚠ Real-time streaming where compression latency matters
- ⚠ Binary data or media files
- ⚠ Cases where output must be valid JSON
Brevit isn't just a library.
It's a notation standard.
The Brevit Format Specification (BFS) defines a structured data notation optimized for LLM prompts — analogous to OpenAPI for REST APIs. Versioned, portable, and natively understood by any LLM.
[brevit:1.0]

One command to get started
Available on npm, PyPI, and NuGet. Same API design across all three — learn it once, use it everywhere.
$ npm install brevit

import { BrevitClient, BrevitConfig } from 'brevit';
const brevit = new BrevitClient();
const result = await brevit.brevity({
user: { name: 'Jane', email: 'jane@example.com' },
orders: [
{ id: 'ORD-001', status: 'SHIPPED', total: 79.98 },
{ id: 'ORD-002', status: 'PENDING', total: 29.99 }
]
});
// user.name:Jane
// user.email:jane@example.com
// orders[2]{id,status,total}:
// ORD-001,SHIPPED,79.98
// ORD-002,PENDING,29.99

How much will you save?
Adjust the sliders to estimate your annual savings with Brevit.
Example: $1,200 saved per year at a 50% token reduction.
Start optimizing today
Free, open source, zero configuration. Drop Brevit into any LLM pipeline in minutes.