What the Numbers Say
Real nanosecond traces from 11 languages, captured in Docker on identical hardware. Every number on this page comes from instrumented code you can run yourself: github.com/adamzwasserman/honest-code-traces
1. The Ranking
We measured six kinds of dishonest operation (method dispatch, field mutation, singleton lookup, cache check, computation, timestamp) and four kinds of honest operation (function call, argument passing, pure computation, return value). Same business logic in both. Same hardware.
| Language | Dishonest | Honest | Ratio |
|---|---|---|---|
| Dart | 220 ns | 110 ns | 2.0x |
| Go | 695 ns | 170 ns | 4.1x |
| Swift | 1,312 ns | 220 ns | 6.0x |
| Java | 1,204 ns | 333 ns | 3.6x |
| Kotlin | 1,037 ns | 336 ns | 3.1x |
| C++ | 1,042 ns | 374 ns | 2.8x |
| C# | 897 ns | 662 ns | 1.4x |
| Python | 1,916 ns | 1,417 ns | 1.4x |
| Ruby | 3,542 ns | 1,792 ns | 2.0x |
| TypeScript | 4,498 ns | 1,959 ns | 2.3x |
| PHP | 4,126 ns | 2,000 ns | 2.1x |
Sorted by honest code speed. The ratio column shows how many times slower the dishonest version is. Honest code is faster in every language we tested. The margin ranges from 1.4x (C#, Python) to 6.0x (Swift).
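The shape of the measurement (warm up, time many runs, take the median) can be sketched in a few lines of Python. This is an illustrative harness, not the repo's actual instrumentation, and the default warmup and run counts here are placeholders:

```python
import statistics
import time

def median_ns(fn, warmup=5000, runs=1000):
    """Warm up, then return the median wall-clock cost of fn in nanoseconds."""
    for _ in range(warmup):            # let caches, branch predictors, any JIT settle
        fn()
    samples = []
    for _ in range(runs):
        start = time.perf_counter_ns()
        fn()
        samples.append(time.perf_counter_ns() - start)
    return statistics.median(samples)  # median resists outliers: GC pauses, scheduler noise

# Example: time a trivial pure computation.
print(median_ns(lambda: sum((1.0, 2.0, 3.0))))
```

The median, not the mean, is the honest summary statistic here: a single GC pause or context switch can distort an average by orders of magnitude.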
2. The Surprises
The interesting question isn't whether honest code beats dishonest code in the same language. It always does. The interesting question is: can honest code in a "slower" language beat dishonest code in a "faster" one?
C++ compiles to native machine code. Java compiles to bytecode and runs on the JVM. By every conventional measure, C++ should be faster. It is, when both sides write the same kind of code. But honest Java's 9-step path of pure function calls and returns beats dishonest C++'s 27 steps of method dispatch, field mutation, singleton lookups, and mutable state management. The structural overhead of dishonesty erases C++'s native compilation advantage.
V8 is one of the fastest JIT compilers ever built. Its hidden class optimization and inline caches make property access fast when object shapes are stable. But the dishonest path constantly mutates fields and walks through singleton lookups. Honest Python at 1,417 ns beats dishonest TypeScript at 4,498 ns. Python's 9-step honest path wins by doing less.
Every language we tested (native compiled, JIT compiled, bytecode VM) shows honest code faster than dishonest code. The overhead isn't a language problem or a runtime problem. It's a structural problem. 27 steps of method dispatch, field mutation, singleton lookup, and cache management will always cost more than 9 steps of pure function calls and return values. The runtime determines the absolute speed; the architecture determines the ratio.
The full picture
Honest Java beats dishonest C++ by 3.1x. Let that sink in. JVM bytecode beats native-compiled code because the structural overhead of the dishonest path erases the compilation advantage. Honest Python beats dishonest TypeScript by 3.2x: CPython's bytecode VM, with no JIT, outperforms V8's highly optimized JIT. The same pattern holds across every language: the dishonest tax is proportional, and it compounds across every call site.
3. Why the Numbers Break Down This Way
The crime scene runs 27 operations, among them 7 method dispatches, 5 field mutations, 2 computations, 4 singleton lookups, 5 cache checks, and 2 timestamp captures. The rescue runs 9: 2 function calls, 2 argument passes, 3 pure computations, and 2 return values. Same business logic, same result. The difference is entirely structural.
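The two shapes can be sketched in Python. The class and field names here are hypothetical stand-ins for the repo's harnesses, but the structure matches the description: the dishonest path mutates fields, checks a cache, and walks a singleton; the honest path passes arguments and returns a value.

```python
import time

class _PricingService:
    """Hypothetical global singleton: one extra indirection per lookup."""
    _instance = None

    @classmethod
    def instance(cls):
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

    def __init__(self):
        self.cache = {}

class DishonestOrder:
    def __init__(self, subtotal, discount, tax_rate):
        self.subtotal = subtotal       # mutable fields: every write is a dispatch + store
        self.discount = discount
        self.tax_rate = tax_rate
        self.total = 0.0
        self.updated_at = None

    def calculate(self):
        svc = _PricingService.instance()           # singleton lookup
        key = (self.subtotal, self.discount, self.tax_rate)
        if key in svc.cache:                       # cache check
            self.total = svc.cache[key]            # field mutation
            return self.total
        base = self.subtotal - self.discount       # computation
        self.total = base * (1 + self.tax_rate)    # computation + field mutation
        svc.cache[key] = self.total                # cache write
        self.updated_at = time.time()              # timestamp capture
        return self.total

def honest_total(subtotal, discount, tax_rate):
    """The rescue: arguments in, value out, nothing else touched."""
    return (subtotal - discount) * (1 + tax_rate)

# Same business logic, same answer:
assert DishonestOrder(100.0, 10.0, 0.08).calculate() == honest_total(100.0, 10.0, 0.08)
```

Every line of ceremony in the class version is a real operation at runtime; the pure function simply has nowhere to hide extra work.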
Native / AOT compiled: Go, C++, Swift, Dart
Operations compile to direct machine instructions. A singleton lookup via sync.Once (Go) or static let (Swift) is a single atomic load after the first call, effectively 0 ns. Field writes are direct memory stores. The dishonest path is still slower because 27 steps of ceremony cost more than 9 steps of computation, but each individual step is cheap.
Typical ratio: 2.0x–6.0x
JIT-compiled VM: Java, Kotlin, C#
With proper warmup, the JVM's C2 JIT compiles both honest and dishonest code to native machine instructions. The ratios are high (3.1x–3.6x) because the JIT can't eliminate the structural overhead: virtual dispatch, synchronized singleton acquisition, and mutable field writes still cost more than direct function calls and immutable returns, even after optimization. Java's stream().mapToDouble().sum() compiles to a tight loop at 209 ns. Fast, but the honest for loop at 83 ns is still 2.5x cheaper because it skips the Stream API's iterator and boxing pipeline.
C# with .NET's RyuJIT shows a tighter ratio (1.4x) because the Stopwatch batching granularity compresses small differences. The structural cost is still there.
Typical ratio: 1.4x–3.6x
Bytecode VM without JIT: Python, Ruby, PHP
Every operation goes through the VM's bytecode dispatch loop, so both honest and dishonest code are slower in absolute terms. But the direction is the same: the ceremony of classes, singletons, and mutable state carries its proportional overhead regardless of VM speed. A Python dict subscript like d[key] is a single bytecode instruction (BINARY_SUBSCR). A method call on a class needs an attribute lookup, a call, and argument marshaling: three instructions minimum, plus a new frame.
Typical ratio: 1.4x–2.1x
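You can see the instruction gap yourself with the dis module. Exact opcode names vary by CPython version (LOAD_METHOD vs LOAD_ATTR, CALL_FUNCTION vs CALL), so treat this as a sketch rather than version-exact output:

```python
import dis

class Order:
    def __init__(self, total):
        self._total = total

    def get_total(self):
        return self._total

def via_subscript(d):
    return d["total"]         # one subscript instruction does the work

def via_method(order):
    return order.get_total()  # attribute load + call + frame setup

# Compare the disassembly of the two access patterns:
dis.dis(via_subscript)
dis.dis(via_method)
```

The method version also pays for a new stack frame on every call, which the subscript never creates.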
V8 (TypeScript / JavaScript)
TypeScript erases all types at compile time. By the time V8 sees it, it's plain JavaScript. These numbers are pure V8/Node.js performance, not TypeScript overhead. Pure JS would produce identical results.
V8 sits between these categories. Its hidden classes and inline caches make property access fast when object shapes are stable. But the dishonest path constantly mutates fields (this.total, this.discount, this.tax), which can trigger deoptimization and hidden class transitions. The honest path returns plain objects from pure functions. V8 can optimize these aggressively because the shapes never change.
Ratio: 2.3x
4. Beyond Method Dispatch
The numbers above measure one anti-pattern: mutable classes with singleton lookups. But dishonest code takes many forms. We built separate Java harnesses for six additional architectural crimes, each run under the same conditions (OpenJDK 21, Docker, 5000 warmup iterations, 1000 measured runs, median nanoseconds).
| Anti-pattern | Chapter | Dishonest | Honest | Ratio |
|---|---|---|---|---|
| Try/catch recovery chain | 8 | 2,084 ns | 124 ns | 16.8x |
| Mock-heavy test setup | 10 | 1,376 ns | 334 ns | 4.1x |
| Scattered state (6 locations) | 5 | 335 ns | 84 ns | 4.0x |
| 13-frame middleware chain | 1 | 623 ns | 166 ns | 3.8x |
| Object graph pointer chasing | 3 | 251 ns | 83 ns | 3.0x |
| ORM + cache miss path | 9 | 454 ns | 166 ns | 2.7x |
Most developers think try/catch is free. It isn't. On the JVM, creating an exception object fills in the entire stack trace: walking every frame, allocating strings, building arrays. The crime path catches the exception, wraps it with context (allocating a second exception), checks a retry counter in a HashMap (simulating Redis), updates a circuit breaker, appends to an error log, and builds a dead letter queue entry. Six address spaces touched during "recovery." The honest path: the process dies. The supervisor notices. A fresh process starts with clean state. Three operations, 124 nanoseconds, zero stale state.
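The same contrast can be sketched in Python. The names and the in-memory "retry counter" are hypothetical stand-ins (Redis, the circuit breaker, and the dead letter queue are simulated by plain dicts and lists); on the JVM the stack-trace fill makes the crime path even more expensive:

```python
retry_counts = {}   # stands in for Redis
error_log = []      # stands in for the error log / dead letter queue

def crime_path(order_id):
    """Catch, wrap, count, log: recovery ceremony around a failure."""
    try:
        raise ValueError(f"payment failed for {order_id}")       # fills in a traceback
    except ValueError as original:
        wrapped = RuntimeError(f"order {order_id} needs retry")  # second allocation
        wrapped.__cause__ = original                             # chain the exceptions
        retry_counts[order_id] = retry_counts.get(order_id, 0) + 1
        error_log.append(str(wrapped))
        return None   # caller now handles a None AND the stale counters left behind

def rescue_path(order_id):
    """Let it crash: return a value or die; a supervisor restarts clean."""
    return f"order {order_id} processed"

crime_path("A1")
print(rescue_path("A1"))
```

Note what the crime path leaves behind even on "success": a mutated counter and a growing log, state that outlives the failure it was meant to handle.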
The dishonest test creates a mock CouponRegistry, a mock TaxService, a mock Logger, a dependency injection container, resolves three services, constructs an OrderService with injected dependencies, calls the method under test, verifies mock interactions, and tears down. Ten steps before you even check the result. The honest test calls calculateOrder(100.0, 10.0, 0.08) and asserts the return value. Two steps. The mocks aren't just ceremony. They're hidden dependencies your code can't function without. The pure function needs nothing but its arguments.
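Here are the two test shapes sketched with unittest.mock. The service classes are hypothetical Python stand-ins for the Java harness, simplified to three mocks and no DI container:

```python
from unittest.mock import Mock

def calculate_order(subtotal, discount, tax_rate):
    """Pure function under test: arguments in, total out."""
    return round((subtotal - discount) * (1 + tax_rate), 2)

class OrderService:
    """Service that can't compute without its injected collaborators."""
    def __init__(self, coupons, taxes, logger):
        self.coupons, self.taxes, self.logger = coupons, taxes, logger

    def calculate(self, subtotal):
        discount = self.coupons.lookup(subtotal)
        rate = self.taxes.rate()
        self.logger.info("calculating")
        return round((subtotal - discount) * (1 + rate), 2)

def test_dishonest():
    # Build three mocks, wire them in, then verify the interactions.
    coupons, taxes, logger = Mock(), Mock(), Mock()
    coupons.lookup.return_value = 10.0
    taxes.rate.return_value = 0.08
    service = OrderService(coupons, taxes, logger)
    assert service.calculate(100.0) == 97.2
    coupons.lookup.assert_called_once_with(100.0)
    taxes.rate.assert_called_once()

def test_honest():
    # One call, one assertion.
    assert calculate_order(100.0, 10.0, 0.08) == 97.2

test_dishonest()
test_honest()
```

Both tests verify the same arithmetic. One of them also verifies a wiring diagram that will break every time the wiring changes.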
The ORM path: receive request, check cache key (miss), load Customer entity, pointer-chase through Orders, pointer-chase through LineItems, pointer-chase through Addresses, assemble a response DTO into a HashMap, store the DTO in cache, return. Ten steps. The honest path: receive request, execute a flat query (simulating a single SQL JOIN that returns all fields), return the result dict. Three steps. The cache exists because the object graph is slow to traverse. The object graph is slow because data is scattered across the heap. The cache is solving a problem the architecture created.
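Sketched in Python, with hypothetical entity classes and a tuple standing in for the flat row a single SQL JOIN would return:

```python
# Dishonest: a cached object graph, pointer-chased at every level.
class Address:
    def __init__(self, city): self.city = city

class LineItem:
    def __init__(self, price): self.price = price

class Order:
    def __init__(self, items, address): self.items, self.address = items, address

class Customer:
    def __init__(self, name, orders): self.name, self.orders = name, orders

cache = {}

def graph_path(customer):
    key = customer.name
    if key in cache:                      # the cache exists because traversal is slow
        return cache[key]
    total = round(sum(item.price          # heap-scattered pointer chasing
                      for order in customer.orders
                      for item in order.items), 2)
    dto = {"name": customer.name,
           "city": customer.orders[0].address.city,
           "total": total}
    cache[key] = dto                      # cache write: more state to keep in sync
    return dto

def flat_path(row):
    """Honest: one flat row, as a single JOIN would return it."""
    name, city, total = row
    return {"name": name, "city": city, "total": total}

customer = Customer("Ada", [Order([LineItem(29.99)] * 3, Address("NYC"))])
assert graph_path(customer) == flat_path(("Ada", "NYC", 89.97))
```

Same response either way. The graph version needs a cache and an invalidation story; the flat version is just data.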
The pattern holds across every anti-pattern
Method dispatch overhead (Section 1) is not a special case. The same structural tax appears when you scatter state across locations, when you build deep middleware chains, when you chase pointers through object graphs, when you wrap exceptions in recovery ceremony, and when you bury your test logic under mock scaffolding. In every case, the honest alternative does less work because it has fewer steps, fewer address spaces, fewer indirections. The ratio varies (2.7x to 16.8x in these Java harnesses), but the direction never does.
5. The Point
Performance is a happy side effect. It is not why honest code matters.
The dishonest path has 8 state mutations and 4 global singleton lookups per business operation. Each mutation is a place where state can be corrupted. Each singleton is a place where two threads can disagree about reality. Each cache check is a place where stale data can silently produce wrong results. The rescue path has zero mutations, zero globals, zero caches. The same values arrive at the same answer through pure computation.
When your Order class has 5 mutable fields that can be written from 7 different call sites, your test suite needs to cover the interaction of every possible mutation sequence. When your calculate_order() function takes items, a region, and tax rates, and returns a subtotal, a tax, and a total, you test it with one assertion. assert calculate_order(items, "NY", rates) == (89.97, 7.20, 97.17). Done.
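That assertion is runnable as written once you pick inputs that produce those numbers. The item prices and rate table here are hypothetical (three items at 29.99 give the 89.97 subtotal), and rounding to cents at each step matches the quoted values:

```python
def calculate_order(items, region, rates):
    """Pure: item prices, region, and rate table in; (subtotal, tax, total) out."""
    subtotal = round(sum(items), 2)
    tax = round(subtotal * rates[region], 2)
    total = round(subtotal + tax, 2)
    return subtotal, tax, total

items = [29.99, 29.99, 29.99]   # hypothetical prices: 3 x 29.99 = 89.97
rates = {"NY": 0.08}

assert calculate_order(items, "NY", rates) == (89.97, 7.20, 97.17)
```

No setup, no teardown, no mocks: the function's entire universe is its argument list.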
Every mock in your dishonest test is a hidden dependency. The 4 singleton lookups in your dishonest code are 4 invisible couplings to global state. The 8 state mutations are 8 moments where your object can be observed in an inconsistent state by another thread, another method, another test. No amount of JIT compilation fixes that.
That honest code is also faster, in every language, every runtime, every VM architecture we tested, is the data telling you what good architecture has always looked like.
Source code and raw data: github.com/adamzwasserman/honest-code-traces
Docker image with all 11 runtimes. One command to reproduce every number on this page.