CPF: Compact Prompt Format — 30-50% Fewer Tokens, Zero Loss
Source: dev.to
LLM prompts are full of repeated English grammar. Every "If a module exists, then recommend it. Do NOT reinvent the wheel." burns tokens on words the model already understands. I built CPF (Compact Prompt Format) to replace that grammar with operators and abbreviations that LLMs decode natively — cutting token costs by 30-50% with zero runtime dependencies.

## The Problem

System prompts, personas, and agent instruction sets get expensive fast. A typical 120-token rule block carries more grammar than meaning: "if", "then", "do not", "otherwise", "and also" are all filler that the model already understands from context. Multiply that across dozens of rules, priorities, tone directives, and scoped behaviors, and you are paying for syntax, not semantics.

I wanted a notation that:

- LLMs already understand without fine-tuning (they process code, math operators, and shorthand every day)
- Compresses the grammar, not the meaning
- Ha
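To make the savings concrete, here is a minimal sketch of the idea. The compact rewrite below is an illustrative CPF-style form I made up for this example, not the format's actual grammar, and the token count is a crude whitespace proxy rather than a real BPE tokenizer:

```python
# Rough illustration of grammar compression. The compact form is a
# hypothetical CPF-style rewrite (assumed syntax, not the real spec),
# and rough_tokens() is a whitespace proxy, not a real tokenizer.

verbose = "If a module exists, then recommend it. Do NOT reinvent the wheel."
compact = "module exists -> recommend; !reinvent"

def rough_tokens(text: str) -> int:
    # Whitespace split: a crude stand-in for BPE token counting.
    return len(text.split())

saving = 1 - rough_tokens(compact) / rough_tokens(verbose)
print(f"{rough_tokens(verbose)} -> {rough_tokens(compact)} words "
      f"({saving:.0%} saved)")
# -> 12 -> 5 words (58% saved)
```

Real token counts differ from word counts, but the shape of the saving is the same: the operators carry the conditional and the negation, so only the meaningful nouns and verbs remain.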