WebAssembly Platform

Buffer on Kotlin/WASM supports both the WasmGC heap (ByteArrayBuffer) and native WASM linear memory (LinearBuffer), combining fast pure-Kotlin operation with zero-copy JavaScript interoperability.

Implementation

| Zone         | WASM Type       | Use Case                                            |
|--------------|-----------------|-----------------------------------------------------|
| Heap         | ByteArrayBuffer | High-frequency allocations, compute-heavy workloads |
| Direct       | LinearBuffer    | JS interop, zero-copy sharing with JavaScript       |
| SharedMemory | LinearBuffer    | Same as Direct                                      |

LinearBuffer: Native WASM Memory

LinearBuffer uses Kotlin/WASM's Pointer API to read/write directly to WASM linear memory. This provides:

  • Native instructions: Pointer.loadInt()/storeInt() compile to single WASM instructions (i32.load/i32.store)
  • Zero-copy JS interop: JavaScript can access the same memory via DataView on wasmMemory.buffer
  • ~10-20% faster primitive operations vs ByteArrayBuffer
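The zero-copy interop point can be seen in plain JavaScript: two views over the same WASM linear memory observe the same bytes with no copying. A minimal sketch, using a standalone WebAssembly.Memory to stand in for the module's exported memory:

```javascript
// Two views over the same WebAssembly linear memory see the same bytes.
// A plain WebAssembly.Memory stands in for the module's exported memory.
const memory = new WebAssembly.Memory({ initial: 1 }); // one 64KB page

// "Kotlin side": store an int at offset 0 (big-endian, as in the examples below)
const writer = new DataView(memory.buffer);
writer.setInt32(0, 42, false);

// "JavaScript side": an independent DataView over the same buffer
const reader = new DataView(memory.buffer, 0, 16);
console.log(reader.getInt32(0, false)); // 42 - same bytes, no copy
```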
Performance Trade-offs

LinearBuffer's main advantage is JavaScript interoperability, not raw speed. For pure Kotlin operations without JS interop, ByteArrayBuffer can be faster for bulk operations since it stays in the WasmGC heap.

When to Use Each Zone

// Use Heap for high-frequency allocations (compute-heavy workloads)
val computeBuffer = PlatformBuffer.allocate(1024, AllocationZone.Heap)

// Use Direct for JS interop (shares memory with JavaScript)
val interopBuffer = PlatformBuffer.allocate(1024, AllocationZone.Direct)

Performance

Benchmark results (WASM Node.js):

| Operation             | LinearBuffer (Direct) | ByteArrayBuffer (Heap) | Winner                |
|-----------------------|-----------------------|------------------------|-----------------------|
| Primitive read/write  | ~68M ops/sec          | ~57M ops/sec           | LinearBuffer (~1.2x)  |
| Buffer-to-buffer copy | ~2.6M ops/sec         | ~5.5M ops/sec          | ByteArrayBuffer (~2x) |
| Allocation            | Bump allocator (fast) | GC-managed             | LinearBuffer          |

Key insight: LinearBuffer is faster for primitive operations, but ByteArrayBuffer is faster for bulk operations that stay within the WasmGC heap. Choose based on your use case:

  • JS interop needed? → Use LinearBuffer (Direct)
  • Pure Kotlin computation? → Use ByteArrayBuffer (Heap)

Memory Management

LinearBuffer uses a bump allocator with pre-allocated memory:

  • 16MB allocated by default at first allocation
  • Configurable via LinearMemoryAllocator.configure()
  • Memory is not freed (bump allocator)
  • Best for buffers with longer lifetimes (interop scenarios)
  • Use AllocationZone.Heap for high-frequency short-lived allocations
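The bump-allocator behavior described above (pre-allocated region, forward-only pointer, no freeing) can be sketched in a few lines of JavaScript. The names here (bumpAlloc, next) are illustrative only; the library's internal allocator may differ:

```javascript
// Illustrative bump allocator over a pre-allocated linear memory region.
// Names are hypothetical - the library's internals may differ.
const memory = new WebAssembly.Memory({ initial: 256 }); // 256 x 64KB pages = 16MB
let next = 0;

function bumpAlloc(size) {
  if (next + size > memory.buffer.byteLength) {
    throw new Error("allocation exceeded pre-allocated memory");
  }
  const offset = next;
  next += size; // memory is never freed: the pointer only moves forward
  return offset;
}

const a = bumpAlloc(1024); // offset 0
const b = bumpAlloc(1024); // offset 1024
```

This is why high-frequency short-lived allocations are a poor fit for the Direct zone: every allocation consumes fresh space until the region is exhausted.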

Configuring Memory Size

// At app startup, BEFORE any LinearBuffer allocation:
LinearMemoryAllocator.configure(initialSizeMB = 32) // Set to 32MB

// Or use a smaller size for lightweight apps:
LinearMemoryAllocator.configure(initialSizeMB = 4) // Set to 4MB

Usage Patterns

// Good: Long-lived buffer for JS interop
val wsBuffer = PlatformBuffer.allocate(8192, AllocationZone.Direct)

// Good: High-frequency allocations
pool.withBuffer(1024, AllocationZone.Heap) { buffer ->
    // Process data
}

JavaScript Interoperability

LinearBuffer enables zero-copy data sharing between Kotlin/WASM and JavaScript:

// Kotlin side: allocate in linear memory and get offset for JS
val buffer = PlatformBuffer.allocate(1024, AllocationZone.Direct) as LinearBuffer
buffer.writeInt(42)
buffer.writeString("Hello from WASM")

// Pass this offset to JavaScript
val jsOffset = buffer.linearMemoryOffset // or buffer.baseOffset for start of buffer
// JavaScript side: access same memory using the offset from Kotlin
const wasmMemory = wasmExports.memory;
const view = new DataView(wasmMemory.buffer, jsOffset, 1024);
const value = view.getInt32(0, false); // 42 - same bytes, zero copy!
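String data in linear memory can be decoded on the JS side with TextDecoder. In this sketch the UTF-8 bytes are written directly rather than by Kotlin, and they are assumed to sit immediately after the 4-byte int; the library's actual string layout (length prefix, encoding) may differ:

```javascript
// Sketch: decoding string bytes from linear memory on the JS side.
// Bytes are written here directly to keep the example self-contained;
// in practice they would come from the Kotlin side.
const memory = new WebAssembly.Memory({ initial: 1 });
const bytes = new TextEncoder().encode("Hello from WASM");
new Uint8Array(memory.buffer, 4, bytes.length).set(bytes); // after a 4-byte int

const text = new TextDecoder().decode(
  new Uint8Array(memory.buffer, 4, bytes.length)
);
console.log(text); // "Hello from WASM"
```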

LinearBuffer also provides helper methods for JS array interop:

// Write from JS Int8Array to buffer
linearBuffer.writeFromJsArray(jsInt8Array, srcOffset = 0, length = 100)

// Read from buffer to JS Int8Array
linearBuffer.readToJsArray(jsInt8Array, dstOffset = 0, length = 100)
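Conceptually, these helpers copy between a JS typed array and linear memory. A sketch of the equivalent raw typed-array operations (the library helpers additionally track buffer position and bounds for you):

```javascript
// What the interop helpers conceptually do: copy between a JS Int8Array
// and WASM linear memory using typed-array views.
const memory = new WebAssembly.Memory({ initial: 1 });
const jsInt8Array = new Int8Array([10, -20, 30, 40]);

// writeFromJsArray equivalent: copy the JS array into linear memory
new Int8Array(memory.buffer, 0, jsInt8Array.length).set(jsInt8Array);

// readToJsArray equivalent: copy back out into a fresh JS array
const roundTrip = new Int8Array(jsInt8Array.length);
roundTrip.set(new Int8Array(memory.buffer, 0, roundTrip.length));
console.log(roundTrip); // Int8Array [10, -20, 30, 40]
```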

Known Limitations

Optimizer Bug Workaround

Due to a Kotlin/WASM production optimizer bug, LinearBuffer pre-allocates memory at initialization rather than growing dynamically. This means:

  1. Configurable limit - Default 16MB, adjustable via LinearMemoryAllocator.configure()
  2. No memory reclamation - Bump allocator doesn't free memory
  3. Use Heap for benchmarks - High-frequency allocation benchmarks should use AllocationZone.Heap

If you exceed the configured limit, you'll get an OutOfMemoryError with guidance:

LinearBuffer allocation exceeded 16MB pre-allocated memory.
Call LinearMemoryAllocator.configure(initialSizeMB = N) at startup with a larger value,
or use AllocationZone.Heap for high-frequency allocation.

ByteArray Conversion

Converting between LinearBuffer and Kotlin ByteArray requires a copy (they live in different memory spaces - linear memory vs WasmGC heap).

Cross-Module Memory

Each WASM module has its own isolated linear memory. Passing buffers between different WASM modules (e.g., Kotlin buffer to a compression WASM module) requires copying:

Kotlin/WASM Module    SSL WASM Module     Compression Module
[Memory A] ──COPY──> [Memory B] ──COPY──> [Memory C]

Workarounds:

  • Use JS as intermediary (create Uint8Array view, pass to other module)
  • Some libraries accept Uint8Array input, allowing a view over LinearBuffer's memory
  • Future: WASM Component Model may enable shared memory regions
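The JS-intermediary workaround can be sketched with two standalone WebAssembly.Memory objects standing in for the isolated memories of two modules:

```javascript
// Sketch of the JS-intermediary workaround: each WebAssembly.Memory
// stands in for one module's isolated linear memory.
const kotlinMemory = new WebAssembly.Memory({ initial: 1 });
const otherModuleMemory = new WebAssembly.Memory({ initial: 1 });

// Fill the "Kotlin" memory with a payload
const payload = new Uint8Array(kotlinMemory.buffer, 0, 4);
payload.set([1, 2, 3, 4]);

// JS copies the bytes across - this is the unavoidable COPY step
new Uint8Array(otherModuleMemory.buffer, 0, 4).set(payload);

const received = new Uint8Array(otherModuleMemory.buffer, 0, 4);
console.log(received); // Uint8Array [1, 2, 3, 4]
```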

Usage

// Standard usage - API is identical to other platforms
val buffer = PlatformBuffer.allocate(1024)
buffer.writeInt(42)
buffer.writeLong(123456789L)
buffer.writeString("Hello WASM")

buffer.resetForRead()
val i = buffer.readInt()
val l = buffer.readLong()
val s = buffer.readString(10)

Native Data Conversion

Convert buffers to WASM-native LinearBuffer for JavaScript interop:

val buffer = PlatformBuffer.allocate(1024, AllocationZone.Direct)
buffer.writeBytes(data)
buffer.resetForRead()

// Get LinearBuffer (zero-copy slice)
val nativeData = buffer.toNativeData()
val linearBuffer: LinearBuffer = nativeData.linearBuffer

// Access memory offset for JS interop
val offset = linearBuffer.baseOffset

Zero-Copy Behavior

| Conversion            | ByteArrayBuffer (Heap)    | LinearBuffer (Direct)   |
|-----------------------|---------------------------|-------------------------|
| toNativeData()        | Copy (different memory)   | Zero-copy (slice)       |
| toMutableNativeData() | Copy (different memory)   | Zero-copy (view)        |
| toByteArray()         | Zero-copy (backing array) | Copy (different memory) |

Memory Spaces

WASM has two memory spaces: WasmGC heap (where ByteArray lives) and linear memory (where LinearBuffer lives). Conversions between these always require a copy.
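The view-vs-copy distinction is visible directly in JavaScript typed arrays: a view reflects later writes to the underlying memory, while a copy is a detached snapshot. A short sketch:

```javascript
// View vs. copy over linear memory: a view sees later writes, a copy does not.
const memory = new WebAssembly.Memory({ initial: 1 });
const view = new Uint8Array(memory.buffer, 0, 4); // zero-copy view
const copy = view.slice();                        // copies the bytes out

new Uint8Array(memory.buffer)[0] = 99;            // later write to memory
console.log(view[0]); // 99 - the view reflects the write
console.log(copy[0]); // 0  - the copy is unaffected
```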

JavaScript Interop with Native Data

// Kotlin side
val buffer = PlatformBuffer.allocate(1024, AllocationZone.Direct) as LinearBuffer
buffer.writeInt(42)
buffer.writeString("Hello from WASM")
buffer.resetForRead()

val nativeData = buffer.toNativeData()
val offset = nativeData.linearBuffer.baseOffset
// JavaScript side - access same memory
const view = new DataView(wasmExports.memory.buffer, offset, 1024);
const value = view.getInt32(0, false); // 42 - zero copy!

See Platform Interop for more details.

Best Practices

  1. Use Direct for JS interop - Zero-copy sharing with JavaScript via wasmMemory.buffer
  2. Use Heap for pure Kotlin workloads - ByteArrayBuffer is faster for bulk operations and has no memory limit concerns
  3. Pool buffers - Reduces allocation overhead for both types
  4. Reuse buffers - Call resetForWrite() instead of allocating new buffers
  5. Consider memory boundaries - Crossing between WasmGC heap and linear memory has overhead