Overview
Loading…
Total Requests
—
translations processed
Tokens Original
—
before optimization
Tokens Optimized
—
after optimization
Tokens Saved
—
0% savings
Savings by source language
No translations yet.
⚡ Tokinensis
Tokinensis v1 replaces expensive English words with shorter BPE-efficient forms: math symbols (∴ ≈ →), tech abbreviations (impl, cfg, infra), and single CJK characters for common concepts. Average savings: ~28% vs standard English.
Tokinensis v2 is a cross-lingual constructed language with ~160 Latin-ASCII root morphemes (2-4 chars each). Words from EN, ES, ZH, JA, HI all map to the same efficient roots (hom=person, sap=know, vel=want). Articles removed, stop words dropped. Average savings: ~35-55% depending on source language.
v1 Encoder
Original
—
⚡ Tokinensis v1
—
Saved: 0 tokens
Reduction: 0%
Word mappings (v2)
Sample Comparisons — v1
Loading…
How v1 works
1. Symbols
∴ = therefore (3→1)
≈ = approximately (4→1)
→ = implies (2→1)
2. Abbreviations
impl = implementation (5→1)
cfg = configuration (4→1)
infra = infrastructure (4→1)
3. CJK Glyphs
好 = good · 全 = all
无 = none · 新 = new
大 = big · 快 = fast
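The v1 pass above can be sketched as a simple lookup-table substitution. This is only an illustration built from the handful of mappings shown on this page; the real encoder's vocabulary is larger and its tokenization-aware matching is not shown here.

```python
# Illustrative Tokinensis v1 pass: replace expensive English words with
# shorter symbol / abbreviation / CJK forms via a lookup table.
# NOTE: this table contains only the examples from this page.
V1_MAP = {
    "therefore": "∴", "approximately": "≈", "implies": "→",
    "implementation": "impl", "configuration": "cfg", "infrastructure": "infra",
    "good": "好", "all": "全", "none": "无",
    "new": "新", "big": "大", "fast": "快",
}

def encode_v1(text: str) -> str:
    # Word-by-word substitution; v1 keeps articles, and punctuation
    # handling is omitted here for brevity.
    return " ".join(V1_MAP.get(word.lower(), word) for word in text.split())

print(encode_v1("the new implementation is approximately fast"))
# → "the 新 impl is ≈ 快"
```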
How v2 works
1. Latin-ASCII Roots
160 roots, 2-4 chars each.
hom = person/humano/人
sap = know/saber/知道
vel = want/querer/想
2. Cross-Lingual
EN, ES, ZH, JA, HI all map to the same roots. A Spanish user and an English user produce the same compressed output.
3. Operators
& = and · | = or
! = not/no · ? = question
~ = approximately
:: = is defined as
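The three v2 steps above can be sketched together: words from different source languages map to the same Latin-ASCII root, stop words are dropped, and connectives become operators. The table below holds only the roots shown on this page, as an assumption-labeled toy; the real vocabulary has ~160 roots.

```python
# Illustrative Tokinensis v2 pass (toy vocabulary from this page only).
ROOTS = {
    # same root regardless of source language
    "person": "hom", "humano": "hom", "人": "hom",
    "know": "sap", "saber": "sap", "知道": "sap",
    "want": "vel", "querer": "vel", "想": "vel",
    # connectives become ASCII operators
    "and": "&", "y": "&", "or": "|", "not": "!",
}
STOP_WORDS = {"the", "a", "an", "to", "el", "la", "un", "una"}

def encode_v2(text: str) -> str:
    words = (w.lower() for w in text.split())
    # articles removed, stop words dropped, remaining words mapped to roots
    return " ".join(ROOTS.get(w, w) for w in words if w not in STOP_WORDS)

# An English user and a Spanish user produce the same compressed output:
print(encode_v2("the person want to know"))  # → "hom vel sap"
print(encode_v2("el humano querer saber"))   # → "hom vel sap"
```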
🔬 Cross-Lingual Token Mapper
Find cheapest form per concept
Enter the same concept expressed in different languages. The mapper will count actual BPE tokens (cl100k_base) for each form and tell you which uses the fewest — this is how Tokinensis v2 determines its root vocabulary.
Enter the same concept in different languages:
Preset examples
Superhuman
EN:2 · ES:3 · ZH:2 · tok:2
Artificial Intelligence
EN:3 · ES:5 · ZH:3 · tok:1
Configuration
EN:4 · ES:5 · ZH:3 · tok:1
Implementation
EN:5 · ES:5 · ZH:2 · tok:1
🌐 Live Translator
Your text (any language)
Optimized (sent to LLM)
—
Detected lang
—
Original tokens
—
Optimized tokens
—
Tokens saved
—
% savings
📋 Recent Activity
| Source | Target | Orig tokens | Opt tokens | Saved | Backend | When |
|---|---|---|---|---|---|---|
| Loading… | | | | | | |
🔑 API Key
Loading…
HTTP · Python · skill prompt
# HTTP Header
X-API-Key: tk_your_key

# Python (translate_in → LLM → translate_out)
tt = TokenTranslationClient("https://tokenstree.eu", api_key="tk_your_key")
optimized, meta = await tt.translate_in(user_prompt)
final = await tt.translate_out(response, meta["source_lang"])

# With Tokinensis v2
optimized, meta = await tt.translate_in(prompt, use_tokinensis=True, tokinensis_version=2)
final = await tt.translate_out(resp, meta["source_lang"], was_tokinensis=True, tokinensis_version=2)