```diff
-For most use cases, **Q4_K_M** provides the best speed/quality tradeoff. Zerfoo achieves **234 tok/s on Gemma 3 1B Q4_K_M** on a DGX Spark (19% faster than Ollama on the same hardware).
+For most use cases, **Q4_K_M** provides the best speed/quality tradeoff. Zerfoo achieves **241 tok/s on Gemma 3 1B Q4_K_M** on a DGX Spark (28% faster than Ollama on the same hardware).
```
**content/docs/blog/01-introducing-zerfoo.md** (+2 −2)

```diff
@@ -98,9 +98,9 @@ This means every tool, library, and application built for the OpenAI API works w
 ## Performance
 
-> **Update 2026-03-27:** Benchmarks updated to reflect multi-model 3-run median methodology. Gemma 3 1B: 235 tok/s (was 245), Ollama: 188 tok/s (was 204). The speedup is now 25%.
+> **Update 2026-03-27:** Benchmarks updated to reflect multi-model 3-run median methodology. Gemma 3 1B: 241 tok/s (was 245), Ollama: 188 tok/s (was 204). The speedup is now 28%.
 
-On an NVIDIA DGX Spark with Gemma 3 1B Q4_K_M, Zerfoo achieves **235 tokens/second** decode throughput — 25% faster than Ollama (188 tok/s) on the same hardware. This comes from three key optimizations:
+On an NVIDIA DGX Spark with Gemma 3 1B Q4_K_M, Zerfoo achieves **241 tokens/second** decode throughput — 28% faster than Ollama (188 tok/s) on the same hardware. This comes from three key optimizations:
```
**content/docs/blog/02-benchmark-comparison.md** (+4 −4)

```diff
@@ -6,9 +6,9 @@ bookToc: true
 # Zerfoo vs Ollama vs llama.cpp: A Performance Comparison
 
-> **Update 2026-03-27:** Benchmarks updated to multi-model 3-run median methodology. Gemma 3 1B: 235 tok/s (Ollama 188 tok/s) = 25% faster. Additional models: DeepSeek R1 1.5B (186 vs 167, +11%), Llama 3.2 3B (92 vs 93, parity), Mistral 7B (44 vs 44, parity).
+> **Update 2026-03-27:** Benchmarks updated to multi-model 3-run median methodology. Gemma 3 1B: 241 tok/s (Ollama 188 tok/s) = 28% faster. Additional models: DeepSeek R1 1.5B (186 vs 167, +11%), Llama 3.2 3B (92 vs 93, parity), Mistral 7B (44 vs 44, parity).
 
-When we set out to build an ML inference framework in Go, the first question everyone asked was: "Can Go actually compete with C++ on inference throughput?" The answer is yes. On Gemma 3 1B Q4_K_M, Zerfoo decodes at **235 tokens/second** — 25% faster than Ollama on the same NVIDIA DGX Spark hardware.
+When we set out to build an ML inference framework in Go, the first question everyone asked was: "Can Go actually compete with C++ on inference throughput?" The answer is yes. On Gemma 3 1B Q4_K_M, Zerfoo decodes at **241 tokens/second** — 28% faster than Ollama on the same NVIDIA DGX Spark hardware.
 
 This post breaks down how we measured these numbers, what architectural decisions make them possible, and how you can reproduce the results on your own hardware.
 
@@ -18,7 +18,7 @@ All measurements use the same GGUF model file, the same prompt ("The meaning of
```
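The "3-run median" methodology these update notes reference is simple to state precisely: run the benchmark three times and report the middle value, which discards a single outlier (thermal throttling, cold cache). A minimal Go sketch, with illustrative run values that are not the actual benchmark data:

```go
package main

import (
	"fmt"
	"sort"
)

// medianOf3 returns the median of three throughput measurements (tok/s).
// With an odd number of runs, the median ignores one outlier in either direction.
func medianOf3(runs [3]float64) float64 {
	sorted := runs // arrays copy by value, so the caller's runs are untouched
	s := sorted[:]
	sort.Float64s(s)
	return s[1]
}

func main() {
	// Hypothetical per-run decode throughput for one model.
	runs := [3]float64{239.8, 241.0, 243.5}
	fmt.Printf("median: %.1f tok/s\n", medianOf3(runs))
}
```

The same function applies per model; each "X vs Y" pair in the update note would be a median for Zerfoo and a median for Ollama.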
**content/docs/blog/03-architecture-deep-dive.md** (+2 −2)

```diff
@@ -6,7 +6,7 @@ bookToc: true
 # Inside Zerfoo: An Architecture Deep Dive
 
-Zerfoo runs LLM inference in Go at 235 tokens/second — 25% faster than Ollama. This post walks through the internal architecture that makes that possible, from loading a GGUF file to streaming tokens over an OpenAI-compatible API.
+Zerfoo runs LLM inference in Go at 241 tokens/second — 28% faster than Ollama. This post walks through the internal architecture that makes that possible, from loading a GGUF file to streaming tokens over an OpenAI-compatible API.
 
 ## The Pipeline
 
@@ -122,7 +122,7 @@ CUDA graph capture is the single biggest performance optimization in Zerfoo. It
 
 Without CUDA graphs, each decode step dispatches hundreds of individual kernel launches — each one costing 5-10 microseconds of CPU-GPU synchronization. With CUDA graphs, the entire decode step is a single graph launch.
 
-The numbers tell the story: 235 tok/s with CUDA graphs vs 174 tok/s without — a 35% throughput increase from this optimization alone.
+The numbers tell the story: 241 tok/s with CUDA graphs vs 174 tok/s without — a 39% throughput increase from this optimization alone.
 
 Zerfoo achieves 99.5% instruction coverage in CUDA graph capture. The remaining 0.5% consists of operations that must run on the host: token sampling and tokenizer lookup.
```
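The launch-overhead arithmetic behind that claim is worth making concrete. A back-of-envelope Go sketch, assuming 300 launches per decode step at 7.5 µs each (the midpoint of the quoted 5-10 µs range; both numbers are illustrative assumptions, not Zerfoo's measured figures):

```go
package main

import "fmt"

// launchOverheadMs estimates per-token CPU-GPU dispatch overhead when every
// kernel is launched individually (i.e., without CUDA graph capture).
func launchOverheadMs(numLaunches int, usPerLaunch float64) float64 {
	return float64(numLaunches) * usPerLaunch / 1000.0
}

func main() {
	overhead := launchOverheadMs(300, 7.5) // 300 launches x 7.5 us = 2.25 ms
	tokenBudget := 1000.0 / 241.0          // ms available per token at 241 tok/s
	fmt.Printf("dispatch overhead: %.2f ms of a %.2f ms token budget\n",
		overhead, tokenBudget)
}
```

Under these assumptions, naive dispatch would consume over half the per-token time budget, which is why collapsing the step into a single graph launch pays off so heavily.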
**content/docs/blog/04-why-go-for-ml.md** (+1 −1)

````diff
@@ -163,6 +163,6 @@ If you're running Go in production and using LLMs, give Zerfoo a try:
 go get github.com/zerfoo/zerfoo@latest
 ```
 
-Seven lines of code to run inference. One binary to deploy. 235 tokens per second on a DGX Spark.
+Seven lines of code to run inference. One binary to deploy. 241 tokens per second on a DGX Spark.
 
 The question isn't whether Go can do ML. The question is why your production inference is still running in a different language than the rest of your stack.
````
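Since these posts lean on OpenAI API compatibility, here is a minimal Go sketch of the standard chat-completions payload such a server accepts. The model name and endpoint URL below are placeholders, not values from Zerfoo's documentation:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// These structs follow the standard OpenAI chat-completions wire format.
type chatMessage struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

type chatRequest struct {
	Model    string        `json:"model"`
	Messages []chatMessage `json:"messages"`
	Stream   bool          `json:"stream"`
}

// buildRequest marshals a single-turn streaming request body.
func buildRequest(model, prompt string) ([]byte, error) {
	return json.Marshal(chatRequest{
		Model:    model,
		Messages: []chatMessage{{Role: "user", Content: prompt}},
		Stream:   true,
	})
}

func main() {
	// POST this body to your server's chat-completions endpoint, e.g.
	// http.Post("http://localhost:8080/v1/chat/completions", "application/json", bytes.NewReader(body))
	// (host and port are hypothetical).
	body, err := buildRequest("gemma-3-1b", "The meaning of life is")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}
```

Any OpenAI client library that lets you override the base URL can speak this same shape.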
**content/docs/blog/gguf-industry-standard-format.md** (+1 −1)

```diff
@@ -29,7 +29,7 @@ The alignment matters. Because tensor data is aligned and the format has no enco
 ## Why Not ONNX, SafeTensors, or PyTorch Pickle
 
-**ONNX** stores computation graphs, not just weights. An ONNX file contains every operation in the model as decomposed primitives -- a single RMSNorm becomes Pow, ReduceMean, Add, Sqrt, Div, Mul. This is useful for portability across runtimes, but it means every inference framework has to either execute the decomposed graph (slow) or reverse-engineer fused operations from the decomposed pattern (fragile). For Zerfoo, the decomposed ONNX graph produced 4-16 tok/s. The architecture-specific GGUF path produces 232+ tok/s. The computation graph belongs in the framework, not the file format.
+**ONNX** stores computation graphs, not just weights. An ONNX file contains every operation in the model as decomposed primitives -- a single RMSNorm becomes Pow, ReduceMean, Add, Sqrt, Div, Mul. This is useful for portability across runtimes, but it means every inference framework has to either execute the decomposed graph (slow) or reverse-engineer fused operations from the decomposed pattern (fragile). For Zerfoo, the decomposed ONNX graph produced 4-16 tok/s. The architecture-specific GGUF path produces 241+ tok/s. The computation graph belongs in the framework, not the file format.
 
 **SafeTensors** is a good format. It is simple, memory-mappable, and safe (no arbitrary code execution). But it stores unquantized weights only. It has no built-in support for the quantization types that make small-model inference practical (Q4_0, Q4_K_M, Q8_0). And its ecosystem is smaller -- while HuggingFace supports SafeTensors natively, GGUF has become the de facto standard for quantized inference models.
```
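For context on the quantization types named above, here is a Go sketch of Q4_0 dequantization following the public GGML block layout as I understand it: 32 weights per block, each stored as a 4-bit quant with a shared scale, value = (q - 8) * scale. The on-disk format stores the scale as fp16; float32 is used here for brevity:

```go
package main

import "fmt"

// q40Block packs 32 weights: 16 bytes of nibbles plus one shared scale.
// Byte i holds weight i in its low nibble and weight i+16 in its high nibble.
type q40Block struct {
	scale float32 // fp16 on disk; float32 here for simplicity
	qs    [16]byte
}

// dequantQ40 expands one block back to 32 float32 weights.
func dequantQ40(b q40Block) [32]float32 {
	var out [32]float32
	for i := 0; i < 16; i++ {
		out[i] = (float32(b.qs[i]&0x0F) - 8) * b.scale
		out[i+16] = (float32(b.qs[i]>>4) - 8) * b.scale
	}
	return out
}

func main() {
	// Every nibble set to 9, so each weight dequantizes to (9-8)*0.5 = 0.5.
	blk := q40Block{scale: 0.5}
	for i := range blk.qs {
		blk.qs[i] = 0x99
	}
	w := dequantQ40(blk)
	fmt.Println(w[0], w[31])
}
```

This is why a Q4_0 model is roughly a quarter the size of its fp16 counterpart: 18 bytes per block of 32 weights instead of 64.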
**content/docs/blog/how-we-beat-ollama-cuda-graph-capture.md** (+1 −1)

```diff
@@ -8,7 +8,7 @@ bookToc: true
 *Performance deep-dive: how CUDA graph capture and fused kernels took Zerfoo from 186 tok/s to 234.30 tok/s on Gemma 3 1B.*
 
-> **Update 2026-03-27:** Current throughput is **235 tok/s** (25% faster than Ollama 188 tok/s, 3-run median from multi-model benchmark). The Phase 6 journey below documents reaching 234.30 tok/s.
+> **Update 2026-03-27:** Current throughput is **241 tok/s** (28% faster than Ollama 188 tok/s, 3-run median from multi-model benchmark). The Phase 6 journey below documents reaching 234.30 tok/s.
```
**content/docs/blog/zero-cgo-pure-go-ml-inference.md** (+2 −2)

```diff
@@ -208,10 +208,10 @@ Here are the numbers. On a DGX Spark (GB10 Grace Blackwell), running Gemma 3 1B
 | Runtime | Decode throughput | Notes |
 |---------|------------------|-------|
-| **Zerfoo** | **235 tok/s** | Pure Go, zero CGo, custom CUDA kernels via dlopen |
+| **Zerfoo** | **241 tok/s** | Pure Go, zero CGo, custom CUDA kernels via dlopen |
 | Ollama | 188 tok/s | Go wrapper around llama.cpp (C++) |
 
-Zerfoo is 25% faster than Ollama on the same hardware, despite Ollama being a thin wrapper around C++. The performance comes from the kernels, not the binding mechanism:
+Zerfoo is 28% faster than Ollama on the same hardware, despite Ollama being a thin wrapper around C++. The performance comes from the kernels, not the binding mechanism:
 
 - **25+ custom CUDA kernels** including fused RoPE, fused SwiGLU, fused Add+RMSNorm, fused QK-Norm+RoPE, flash attention (prefill and decode), quantized GEMM/GEMV (Q4_0, Q4_K_M, Q8_0)
 - **CUDA graph capture** replays the entire decode step as a single graph launch, eliminating per-kernel launch overhead. 99.5% of decode instructions are captured.
```