v0.0.5 — Production Ready

Write fewer tokens,
get native performance

Vais (Vibe AI Language for Systems) is a systems programming language optimized for AI code generation. Single-character keywords, expression-oriented syntax, and LLVM-powered native speed.

F fib(n: i64) -> i64 {
  I n <= 1 { n }
  E { @(n - 1) + @(n - 2) }
}

F main() {
  result := fib(10)
  puts("fib(10) = ~{result}")
}

S Vec2 { x: f64, y: f64 }

F len(v: Vec2) -> f64 {
  sqrt(v.x * v.x + v.y * v.y)
}

F main() {
  p := Vec2 { x: 3.0, y: 4.0 }
  puts("len = ~{len(p)}")
}

E Shape {
  Circle(f64),
  Rect(f64, f64),
}

F area(s: Shape) -> f64 {
  M s {
    Circle(r) => 3.14159 * r * r,
    Rect(w, h) => w * h,
  }
}

Token Efficiency Matters

AI models pay per token. Vais uses 33% fewer tokens than Rust and 40% fewer than C for equivalent programs — single-char keywords, expression bodies, and auto-return add up across codebases.

Vais 88 tokens
E Shape { Circle(f64), Rect(f64, f64) }

F area(s: Shape) -> f64 {
  M s {
    Circle(r) => 3.14 * r * r,
    Rect(w, h) => w * h,
  }
}

F classify(s: Shape) -> str {
  a := area(s)
  I a > 100.0 { "large" }
  E I a > 10.0 { "medium" }
  E { "small" }
}
Rust 97 tokens
enum Shape { Circle(f64), Rect(f64, f64) }

fn area(s: &Shape) -> f64 {
    match s {
        Shape::Circle(r) => 3.14 * r * r,
        Shape::Rect(w, h) => w * h,
    }
}

fn classify(s: &Shape) -> &str {
    let a = area(s);
    if a > 100.0 { "large" }
    else if a > 10.0 { "medium" }
    else { "small" }
}
Python
from dataclasses import dataclass

@dataclass
class Circle:
    radius: float

@dataclass
class Rect:
    width: float
    height: float

def area(s) -> float:
    match s:
        case Circle(r): return 3.14 * r * r
        case Rect(w, h): return w * h

def classify(s) -> str:
    a = area(s)
    if a > 100.0: return "large"
    elif a > 10.0: return "medium"
    else: return "small"
Go
type Shape interface { area() float64 }

type Circle struct { R float64 }
type Rect struct   { W, H float64 }

func (c Circle) area() float64 {
    return 3.14 * c.R * c.R
}
func (r Rect) area() float64 {
    return r.W * r.H
}

func classify(s Shape) string {
    a := s.area()
    if a > 100.0 { return "large" }
    if a > 10.0 { return "medium" }
    return "small"
}
C
typedef enum { CIRCLE, RECT } ShapeTag;
typedef struct {
    ShapeTag tag;
    union { double r; struct { double w, h; }; };
} Shape;

double area(Shape s) {
    switch (s.tag) {
        case CIRCLE: return 3.14 * s.r * s.r;
        case RECT:   return s.w * s.h;
    }
}

const char* classify(Shape s) {
    double a = area(s);
    if (a > 100.0) return "large";
    if (a > 10.0)  return "medium";
    return "small";
}

Token Count — 4 Benchmark Programs

Vais: 721
Python: 889
Go: 893
Rust: 1,080
C: 1,211

fibonacci + quicksort + http_types + linked_list (tiktoken cl100k_base). See benchmarks.

Native Performance

Vais compiles to optimized LLVM IR — on par with C and Rust at runtime, with fast compilation.

Runtime — Fibonacci(35)

C: 32ms
Rust: 33ms
Vais: 34ms

Compile Speed — Single File

Vais: 6.4ms
Go: 52ms
C (clang): 55ms
Rust: 122ms

Vais measured with --emit-ir; the other compilers perform full compilation to a binary. 8.6x faster than C, 19x faster than Rust. Updated 2026-02-11.

Why Vais?

Designed from the ground up for AI-assisted development without sacrificing performance.

V

Single-Char Keywords

F for function, S for struct, E for enum, I/E for if/else, L for loop, M for match. Minimal tokens, maximum clarity.

@

Self-Recursion Operator

The @ operator calls the current function recursively. No need to repeat the function name — AI-friendly and concise.
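A minimal sketch, using the syntax from the examples above (factorial in place of the earlier Fibonacci):

```
F fact(n: i64) -> i64 {
  I n <= 1 { 1 }
  E { n * @(n - 1) }
}
```

Renaming fact requires no change to its body, since @ always refers to the enclosing function.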

{}

Everything is an Expression

If/else, match, and blocks all return values. No return keyword needed for the last expression. Functional and ergonomic.
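A short sketch in the syntax shown above: because if/else is an expression, its result can bind directly to a variable, and the last expression of the function body is the return value.

```
F grade(score: i64) -> str {
  label := I score >= 90 { "A" } E { "B" }
  label
}
```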

LLVM Native Speed

Compiles to optimized native code via LLVM. Get C-level performance with high-level ergonomics. Supports LTO and PGO.

T

Type Inference

Bidirectional type checking infers types automatically. Write x := 42 instead of explicit type annotations everywhere.
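A sketch of := in practice (the inferred types noted in the comments are assumptions based on the numeric types used elsewhere on this page):

```
F main() {
  x := 42                        // integer literal, inferred i64
  y := 3.0 * 2.0                 // float arithmetic, inferred f64
  p := Vec2 { x: 1.0, y: 2.0 }   // struct literal, inferred Vec2
}
```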

🔧

Full Toolchain

LSP server, formatter, debugger, REPL, package manager, and IDE plugins for VSCode and IntelliJ. Production-ready tooling.

🌐

Multi-Target Output

Compile to native binaries, WebAssembly, or JavaScript. Run Vais code in the browser with full WASM interop and JS codegen.

🔄

Incremental Compilation

Per-module caching with parallel codegen. Recompile only what changed — 30K-line projects rebuild in under 100ms.

🏗️

Self-Hosting Compiler

Vais can compile itself. The self-hosting compiler achieves bootstrap with 50,000+ lines of Vais, backed by 900 E2E tests and 5,300+ total tests.

New

Build Web Apps with Vais

VaisX is a full-stack web framework powered by the Vais compiler. Compile-time reactivity, file-based routing, and SSR — all in under 3KB.

Compile-Time Reactivity

Reactive state analyzed at build time. Zero runtime overhead for state tracking — just surgical DOM updates.

🗂️

File-Based Routing & SSR

Your app/ directory structure becomes your URL routes. SSR, SSG, and streaming out of the box.

📦

< 3KB Runtime

The entire runtime weighs under 3KB gzipped. Most work happens at compile time, not in the browser.

Counter.vaisx
<script>
  let count = __vx_state(0)
  let doubled = __vx_derived(count * 2)
</script>

<template>
  <button @on:click={count++}>
    Count: {count} (×2 = {doubled})
  </button>
</template>

<style>
  button { padding: 12px 24px; }
</style>

Try Vais Now

Write and run Vais code directly in your browser. No installation required.

Launch Interactive Playground

Open in New Tab

Get Started in Seconds

Windows (PowerShell)

irm https://vais.dev/install.ps1 | iex

macOS / Linux (Homebrew)

brew tap vaislang/tap && brew install vais

Cargo

cargo install vaisc

Docker

docker run -it vaislang/vais:latest