Iron compiles to C and produces native binaries with zero runtime overhead. Manual memory control, first-class concurrency, and game-dev primitives — all with clean, readable syntax.
```iron
import raylib

object Player {
    var pos: Vec2
    var hp: Int
    val speed: Float
    val name: String
}

func Player.update(dt: Float) {
    if is_key_down(.RIGHT) { self.pos.x += self.speed * dt }
    if is_key_down(.LEFT)  { self.pos.x -= self.speed * dt }
}

func main() {
    val window = init_window(800, 600, "My Game")
    defer close_window(window)

    var player = Player(vec2(0.0, 0.0), 100, 200.0, "Victor")

    while not window_should_close() {
        player.update(get_frame_time())
        draw {
            clear(DARKGRAY)
            player.draw()
        }
    }
}
```
Iron gives you the performance of C with a modern syntax that stays out of your way.
Compiles to C and produces native binaries. No garbage collector, no VM, no interpreter overhead. Just raw speed.
You manage memory explicitly — stack, heap, reference counting — with compiler-assisted safety nets. No borrow checker, no hidden magic.
Thread pools, parallel loops, draw {} blocks, and concurrency primitives are first-class language features, not library afterthoughts.
No operator overloading, no implicit conversions, no hidden control flow. When you read Iron code, you know exactly what it does.
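A short sketch of what this explicitness looks like in practice. The `vec2_add` helper is hypothetical (not part of Iron's standard library as shown elsewhere on this page); the `Float(...)` conversion mirrors the one used in the comptime example below.

```iron
-- hypothetical helper: with no operator overloading, vector math is an explicit call
func vec2_add(a: Vec2, b: Vec2) -> Vec2 {
    return vec2(a.x + b.x, a.y + b.y)
}

-- no implicit conversions: widening an Int to a Float is spelled out
val damage: Int = 42
val scaled = Float(damage) * 1.5

-- an explicit call instead of an overloaded '+'
val velocity = vec2_add(vec2(1.0, 0.0), vec2(0.0, 2.0))
```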
Iron gives you fine-grained control without boilerplate.
Choose the right strategy for each allocation. Stack by default, heap when you need it, reference counting for shared ownership. The compiler tracks escapes and auto-frees when safe.
- `rc` for shared ownership
- `defer` for deterministic resource cleanup
- `leak` for intentional permanent allocations

```iron
-- stack: default, auto-freed at scope exit
val pos = vec2(10.0, 20.0)

-- heap: auto-freed when value doesn't escape
val enemies = heap [Enemy; 64]

-- reference counted: shared ownership
val sprite = rc load_texture("hero.png")
var other = sprite  -- ref count = 2

-- deterministic cleanup
val file = open_file("save.dat")
defer close_file(file)

-- intentional permanent allocation
val atlas = heap load_texture("atlas.png")
leak atlas
```
Named thread pools, typed channels, mutexes, and parallel loops are built into the language. No external threading libraries, no callback pyramids.
Parallel loops come with automatic work distribution across the pool's threads.

```iron
val compute = pool("compute", 4)
val io = pool("io", 1)

-- background work on a specific pool
val handle = spawn("load-assets", io) {
    return load_texture("hero.png")
}
val tex = await handle

-- typed channels
val ch = channel[Texture](8)
ch.send(load_texture("hero.png"))
val received = ch.recv()

-- parallel loops across cores
for i in range(len(particles)) parallel(compute) {
    particles[i].update(dt)
}
```
Any pure function can run at compile time. Build lookup tables, embed files, precompute values — all baked into the binary with zero runtime cost.
```iron
func build_sin_table() -> [Float; 360] {
    var table = [Float; 360]
    for i in range(360) {
        table[i] = sin(Float(i) * 3.14159 / 180.0)
    }
    return table
}

-- evaluated at compile time, zero cost
val SIN_TABLE = comptime build_sin_table()

-- embed files directly
val SHADER = comptime read_file("main.glsl")

-- same function also works at runtime
val dynamic = build_sin_table()
```
Clone, build, and run your first Iron program.
```sh
git clone https://github.com/victorl2/iron-lang.git
cd iron-lang && cmake -B build && cmake --build build
./build/iron run examples/hello.iron
```
```iron
func main() {
    println("Hello, Iron!")
}
```