Virtual GPU (vGPU)
The unified GPU abstraction layer that powers every system in the engine.
VirtualGPU wraps raw WebGPU with automatic caching, pooling, and resource management — collapsing dozens of lines of boilerplate into a few calls.
This is the most important concept in Particle Engine v2. Every renderer, every simulation system, every compute shader goes through vGPU. Master this and you understand the entire engine's GPU layer.
Quick Start
Get a vGPU instance and start using it immediately:
import { getVGPU } from './engine/core/gpu/VirtualGPU.js';
// Initialize (once per app — returns singleton)
const vgpu = await getVGPU();
// Create a vertex buffer
const { buffer, id } = vgpu.buffer.create({
size: 1024,
usage: 'vertex',
label: 'myVertices'
});
// Compile a WGSL shader (cached automatically)
const module = vgpu.shader.compile('triangle', `
@vertex fn vs(@builtin(vertex_index) i: u32) -> @builtin(position) vec4f {
var pos = array(vec2f(0, 0.5), vec2f(-0.5, -0.5), vec2f(0.5, -0.5));
return vec4f(pos[i], 0, 1);
}
@fragment fn fs() -> @location(0) vec4f {
return vec4f(0.23, 0.74, 0.97, 1);
}
`);
// Create a render pipeline — blend mode as a simple string
const pipeline = vgpu.pipeline.render({
vertex: { module },
fragment: { module, targets: [{ format: 'bgra8unorm' }] },
label: 'trianglePipeline'
});
That's it. No GPUBufferUsage.VERTEX | GPUBufferUsage.COPY_DST flags. No bind group layout descriptors. No pipeline layout boilerplate. vGPU handles it all.
Initialization
There are three ways to get a vGPU instance:
Singleton (Recommended)
import { getVGPU, vgpu } from './engine/core/gpu/VirtualGPU.js';
// First call creates the instance, subsequent calls return the same one
const gpu = await getVGPU();
// After initialization, any other module can access it synchronously:
const gpu = vgpu(); // throws if not initialized yet
Direct Creation
// Create a new instance (not the singleton)
import { VirtualGPU } from './engine/core/gpu/VirtualGPU.js';
const gpu = await VirtualGPU.create({
powerPreference: 'high-performance',
requiredFeatures: ['timestamp-query']
});
From Existing Device
// Wrap an existing GPUDevice
const gpu = VirtualGPU.fromDevice(myGpuDevice);
The 6 Core Managers
Every vGPU instance exposes 6 manager objects. These are the primary API you'll use daily:
vgpu.buffer — Buffer Management
Create, write, and manage GPU buffers with automatic usage flag resolution and optional pooling.
// Create a buffer with simplified usage strings
const { buffer, id } = vgpu.buffer.create({
size: 4096,
usage: 'storage', // or 'vertex', 'index', 'uniform', 'storage|vertex', etc.
label: 'particleData',
pooled: true, // optional: reuse from buffer pool
});
// Write data
vgpu.buffer.write(buffer, new Float32Array([1, 2, 3, 4]));
// Release (returns to pool if pooled)
vgpu.buffer.release(id);
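The pooling behind `pooled: true` is not specified here; a plausible sketch (an assumption, not the engine's actual implementation) buckets released buffers by power-of-two size class so a later `create()` of a similar size can reuse one:

```javascript
// Hypothetical free-list pool. `sizeClass` rounds a requested size up to a
// power-of-two bucket; `acquire` reuses a released buffer from that bucket
// when one exists, otherwise allocates at the bucket size.
class BufferPool {
  constructor() { this.free = new Map(); } // sizeClass -> array of buffers
  sizeClass(size) {
    let c = 256;                 // assumed minimum bucket size
    while (c < size) c *= 2;
    return c;
  }
  acquire(size, createFn) {
    const cls = this.sizeClass(size);
    const list = this.free.get(cls);
    if (list && list.length) return list.pop();   // reuse pooled buffer
    return createFn(cls);                          // allocate at class size
  }
  release(buffer, size) {
    const cls = this.sizeClass(size);
    if (!this.free.has(cls)) this.free.set(cls, []);
    this.free.get(cls).push(buffer);
  }
}
```

Bucketing trades a little memory (a 1000-byte request gets a 1024-byte buffer) for much higher reuse rates, since any request in the same bucket can take the slot.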
Usage string mapping — vGPU translates simple strings to WebGPU flags:
| String | WebGPU Flag |
|---|---|
| 'vertex' | VERTEX \| COPY_DST |
| 'index' | INDEX \| COPY_DST |
| 'uniform' | UNIFORM \| COPY_DST |
| 'storage' | STORAGE \| COPY_DST \| COPY_SRC |
| 'indirect' | INDIRECT \| COPY_DST \| STORAGE |
| 'map-read' | MAP_READ \| COPY_DST |
| 'map-write' | MAP_WRITE \| COPY_SRC |
| 'storage\|vertex' | Combined flags (pipe-separated) |
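The mapping above can be sketched as a small lookup plus a union for pipe-separated strings. The numeric values mirror the GPUBufferUsage constants from the WebGPU spec; the function itself follows the table, not the engine's source:

```javascript
// GPUBufferUsage bit values per the WebGPU specification.
const F = {
  MAP_READ: 0x0001, MAP_WRITE: 0x0002, COPY_SRC: 0x0004, COPY_DST: 0x0008,
  INDEX: 0x0010, VERTEX: 0x0020, UNIFORM: 0x0040, STORAGE: 0x0080,
  INDIRECT: 0x0100,
};
// One entry per row of the table above.
const USAGE_MAP = {
  vertex: F.VERTEX | F.COPY_DST,
  index: F.INDEX | F.COPY_DST,
  uniform: F.UNIFORM | F.COPY_DST,
  storage: F.STORAGE | F.COPY_DST | F.COPY_SRC,
  indirect: F.INDIRECT | F.COPY_DST | F.STORAGE,
  'map-read': F.MAP_READ | F.COPY_DST,
  'map-write': F.MAP_WRITE | F.COPY_SRC,
};
// Pipe-separated strings union the individual mappings.
function resolveUsage(str) {
  return str.split('|').reduce((acc, part) => acc | USAGE_MAP[part.trim()], 0);
}
```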
vgpu.bindings — Bind Group Layouts & Bind Groups
Define named layouts and create bind groups with caching and deduplication.
// Define a named layout (reusable across pipelines)
const layout = vgpu.bindings.defineLayout('material', [
{ binding: 0, type: 'uniform', visibility: 'vertex|fragment' },
{ binding: 1, type: 'texture', visibility: 'fragment' },
{ binding: 2, type: 'sampler', visibility: 'fragment' },
]);
// Create a bind group using the layout
const group = vgpu.bindings.createGroup('material', [
{ binding: 0, resource: { buffer: uniformBuffer } },
{ binding: 1, resource: textureView },
{ binding: 2, resource: sampler },
], 'materialGroup');
Binding types — simplified strings for entry type:
| Type String | WebGPU Equivalent |
|---|---|
| 'uniform' | { buffer: { type: 'uniform' } } |
| 'storage' | { buffer: { type: 'storage' } } |
| 'read-only-storage' | { buffer: { type: 'read-only-storage' } } |
| 'texture' | { texture: { sampleType: 'float' } } |
| 'sampler' | { sampler: {} } |
| 'storage-texture' | { storageTexture: { format, access } } |
Visibility strings — pipe-separated stage names: 'vertex', 'fragment', 'compute', 'vertex|fragment'
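Parsing those visibility strings reduces to OR-ing stage bits. The values mirror the GPUShaderStage constants from the WebGPU spec (VERTEX = 1, FRAGMENT = 2, COMPUTE = 4); the helper name is illustrative:

```javascript
// GPUShaderStage bit values per the WebGPU specification.
const STAGE = { vertex: 1, fragment: 2, compute: 4 };

// 'vertex|fragment' -> 3, 'compute' -> 4, etc.
function resolveVisibility(str) {
  return str.split('|').reduce((acc, s) => acc | STAGE[s.trim()], 0);
}
```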
vgpu.shader — Shader Compilation
Compile WGSL shaders with automatic caching, preprocessor defines, and hot reload support.
// Compile (cached by name — second call returns cached module)
const module = vgpu.shader.compile('myShader', wgslSource);
// Compile with preprocessor defines
const module = vgpu.shader.compile('myShader_hq', wgslSource, {
MAX_LIGHTS: 16,
ENABLE_SHADOWS: 1
});
// Hot reload (development only)
vgpu.shader.recompile('myShader', updatedSource);
// Check if shader had compilation errors
const info = await module.getCompilationInfo();
info.messages.forEach(m => console.warn(m.message));
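How the defines are applied is not shown here. One common approach (an assumption about the implementation — the engine may instead route through its WGSL preprocessor, `vgpu.preprocessor`) is to prepend constant declarations to the source before compilation:

```javascript
// Hypothetical sketch: numeric defines become WGSL constants prepended to
// the shader source, so `MAX_LIGHTS` etc. are visible to the module.
function applyDefines(source, defines = {}) {
  const header = Object.entries(defines)
    .map(([name, value]) => `const ${name} : i32 = ${value};`)
    .join('\n');
  return header ? header + '\n' + source : source;
}
```

This also explains why defines participate in the cache key implicitly: `myShader_hq` compiled with different defines should use a different name, since the cache is keyed by name.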
vgpu.pipeline — Render & Compute Pipelines
Create and cache render and compute pipelines with simplified blend state resolution.
// Render pipeline with string blend mode
const pipeline = vgpu.pipeline.render({
vertex: { module: vsModule, entryPoint: 'vs_main', buffers: [...] },
fragment: { module: fsModule, entryPoint: 'fs_main', targets: [{ format: 'bgra8unorm' }] },
blend: 'alpha', // or 'additive', 'premultiplied', 'none'
depthStencil: true, // shorthand for depth24plus with less/write
topology: 'triangle-list', // optional, default
label: 'myPipeline'
});
// Compute pipeline
const compute = vgpu.pipeline.compute({
module: csModule,
entryPoint: 'main',
label: 'physicsUpdate'
});
// Async pipeline creation (non-blocking — ideal for loading screens)
const asyncPipeline = await vgpu.pipeline.renderAsync({ /* same options as render() */ });
Blend mode strings:
| String | Effect |
|---|---|
| 'none' | No blending (opaque) |
| 'alpha' | Standard alpha blending (srcAlpha, oneMinusSrcAlpha) |
| 'additive' | Additive blending (one, one) |
| 'premultiplied' | Pre-multiplied alpha (one, oneMinusSrcAlpha) |
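The blend strings plausibly expand into GPUBlendState descriptors like the following. The color factors match the table; the alpha-channel choices are an assumption about the implementation:

```javascript
// Sketch of string -> GPUBlendState resolution. 'none' maps to undefined
// so the `blend` field can simply be omitted for opaque pipelines.
const BLEND_STATES = {
  none: undefined,
  alpha: {
    color: { srcFactor: 'src-alpha', dstFactor: 'one-minus-src-alpha', operation: 'add' },
    alpha: { srcFactor: 'one', dstFactor: 'one-minus-src-alpha', operation: 'add' },
  },
  additive: {
    color: { srcFactor: 'one', dstFactor: 'one', operation: 'add' },
    alpha: { srcFactor: 'one', dstFactor: 'one', operation: 'add' },
  },
  premultiplied: {
    color: { srcFactor: 'one', dstFactor: 'one-minus-src-alpha', operation: 'add' },
    alpha: { srcFactor: 'one', dstFactor: 'one-minus-src-alpha', operation: 'add' },
  },
};

function resolveBlend(name) { return BLEND_STATES[name]; }
```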
Automatic caching: Pipelines are keyed by their full configuration. If you call vgpu.pipeline.render() twice with identical options, the second call returns the cached pipeline instantly — zero GPU work.
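Configuration-keyed caching can be sketched with a stable, key-sorted serialization, so identical options hit the cache regardless of property order. (Real pipeline descriptors contain GPU objects; a production key would substitute object IDs — this is purely illustrative.)

```javascript
// Order-insensitive JSON-like key: object keys are sorted before serializing.
function stableKey(obj) {
  if (obj === null || typeof obj !== 'object') return JSON.stringify(obj);
  if (Array.isArray(obj)) return '[' + obj.map(stableKey).join(',') + ']';
  return '{' + Object.keys(obj).sort()
    .map(k => JSON.stringify(k) + ':' + stableKey(obj[k])).join(',') + '}';
}

const pipelineCache = new Map();

// Create once, return the cached object on every subsequent identical call.
function getOrCreate(desc, createFn) {
  const key = stableKey(desc);
  if (!pipelineCache.has(key)) pipelineCache.set(key, createFn(desc));
  return pipelineCache.get(key);
}
```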
vgpu.texture — Textures & Samplers
Create textures with simplified usage strings and cached samplers.
// Create a texture
const { texture, view, id } = vgpu.texture.create({
width: 512,
height: 512,
format: 'rgba8unorm',
usage: 'render|texture', // render target + sampleable
label: 'colorTarget'
});
// Create (or reuse cached) sampler
const sampler = vgpu.texture.sampler({
filter: 'linear',
wrap: 'repeat'
});
// Release texture
vgpu.texture.release(id);
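Sampler caching is cheap because the descriptor collapses to a small string key, so repeated `sampler()` calls with the same options can return one shared object. A minimal sketch (option names follow the snippet above; the defaults are assumptions):

```javascript
const samplerCache = new Map();

// `createFn` stands in for the underlying device.createSampler call.
function getSampler(createFn, { filter = 'linear', wrap = 'clamp-to-edge' } = {}) {
  const key = `${filter}/${wrap}`;
  if (!samplerCache.has(key)) samplerCache.set(key, createFn({ filter, wrap }));
  return samplerCache.get(key);
}
```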
vgpu.command — Command Encoding
Encode and submit GPU work with convenience helpers for common patterns.
// Manual encoder
const encoder = vgpu.command.encoder('myPass');
// ... set up passes ...
vgpu.command.submit(encoder.finish());
// One-shot compute dispatch (creates encoder, dispatches, submits)
vgpu.command.dispatchCompute({
pipeline: computePipeline,
bindGroups: [group0, group1],
workgroups: [64, 1, 1],
label: 'physicsStep'
});
// Copy buffer
vgpu.command.copyBuffer(srcBuffer, dstBuffer, 0, 0, 4096);
// Read back GPU data to CPU
const data = await vgpu.command.readBuffer(gpuBuffer);
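One detail helpers like copyBuffer and readBuffer have to handle: WebGPU requires buffer-copy offsets and sizes to be multiples of 4 bytes, so odd-sized reads are typically rounded up internally. A minimal alignment helper (presumably something equivalent exists inside the engine):

```javascript
// Round a byte count up to the next multiple of 4, as required by
// copyBufferToBuffer in the WebGPU specification.
function alignCopySize(bytes) {
  return (bytes + 3) & ~3;
}
```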
Advanced Subsystems
Beyond the 6 core managers, vGPU includes 20+ specialized modules. These are available as properties on the vGPU instance:
Enhancement Modules
| Property | Module | Purpose |
|---|---|---|
| vgpu.debug | VGPUDebugManager | Debug labels, validation markers |
| vgpu.ring | VGPURingManager | Ring buffer allocation for streaming uploads |
| vgpu.profiler | VGPUProfilerManager | GPU timing and performance metrics |
| vgpu.scheduler | VGPUSchedulerManager | Work scheduling and prioritization |
| vgpu.bundles | VGPURenderBundleManager | Pre-recorded render bundles |
| vgpu.mipmap | VGPUMipmapGenerator | GPU-based mipmap generation |
| vgpu.queries | VGPUQueryPool | Occlusion queries and statistics |
| vgpu.warmup | VGPUPipelineWarmup | Async pipeline pre-compilation |
vgpu.warmup | VGPUPipelineWarmup | Async pipeline pre-compilation |
Resource Management
| Property | Module | Purpose |
|---|---|---|
| vgpu.readback | VGPUReadbackQueue | Non-blocking GPU→CPU data transfer |
| vgpu.memory | VGPUMemoryTracker | GPU memory usage tracking and budgets |
| vgpu.materials | VGPUBindGroupManager | Material-oriented bind group management |
| vgpu.barriers | VGPUResourceBarriers | Resource transition tracking |
| vgpu.quality | VGPUQualityScaler | Dynamic resolution and quality scaling |
| vgpu.renderStats | VGPURenderStats | Draw calls, triangles, buffer stats |
Advanced Rendering (Lazy-Initialized)
These heavy modules are only created when first accessed via factory methods:
| Factory Method | Module | Purpose |
|---|---|---|
| vgpu.getRenderGraph() | VGPURenderGraph | Automatic pass scheduling and resource aliasing |
| vgpu.getIndirectRenderer() | VGPUIndirectRenderer | GPU-driven indirect draw calls |
| vgpu.getHiZCulling() | VGPUHiZCulling | Hierarchical-Z occlusion culling |
| vgpu.getStreaming() | VGPUStreamingManager | Texture/mesh streaming with priority queues |
| vgpu.getDebugDraw() | VGPUDebugDraw | Immediate-mode debug line/shape rendering |
Compute & Shader Utilities
| Property | Module | Purpose |
|---|---|---|
| vgpu.computeUtils | VGPUComputeUtils | Reduction, scan, fill, copy helpers |
| vgpu.preprocessor | WGSLPreprocessor | WGSL macro and include preprocessing |
| vgpu.reflection | VGPUShaderReflection | WGSL shader introspection |
| vgpu.bindless | VGPUBindless | Bindless resource management |
| vgpu.multiQueue | VGPUMultiQueue | Multi-queue submission |
| vgpu.semaphores | VGPUTimelineSemaphores | Timeline-based GPU synchronization |
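As a reference point for the scan helper in vgpu.computeUtils, here is a CPU exclusive prefix sum — the result a GPU scan kernel would be validated against. (Whether the engine's scan is exclusive or inclusive is an assumption; an inclusive variant would add each element before writing.)

```javascript
// Exclusive prefix sum: out[i] is the sum of all elements before index i.
function exclusiveScan(values) {
  const out = new Array(values.length);
  let sum = 0;
  for (let i = 0; i < values.length; i++) {
    out[i] = sum;
    sum += values[i];
  }
  return out;
}
```

Scans like this are the backbone of GPU stream compaction, e.g. turning per-particle "alive" flags into compact output indices.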
Common Patterns
Pattern 1: Simple Compute Shader
// Assumes the singleton was already initialized (see Initialization)
const vgpu = await getVGPU();
// 1. Create buffers
const { buffer: input } = vgpu.buffer.create({ size: 4096, usage: 'storage', data: inputData });
const { buffer: output } = vgpu.buffer.create({ size: 4096, usage: 'storage' });
// 2. Compile shader
const module = vgpu.shader.compile('transform', `
@group(0) @binding(0) var<storage, read> input: array<f32>;
@group(0) @binding(1) var<storage, read_write> output: array<f32>;
@compute @workgroup_size(64)
fn main(@builtin(global_invocation_id) gid: vec3u) {
output[gid.x] = input[gid.x] * 2.0;
}
`);
// 3. Create pipeline
const pipeline = vgpu.pipeline.compute({ module, entryPoint: 'main' });
// 4. Create bind group
const layout = vgpu.bindings.defineLayout('transform', [
{ binding: 0, type: 'read-only-storage', visibility: 'compute' },
{ binding: 1, type: 'storage', visibility: 'compute' },
]);
const group = vgpu.bindings.createGroup('transform', [
{ binding: 0, resource: { buffer: input } },
{ binding: 1, resource: { buffer: output } },
]);
// 5. Dispatch
vgpu.command.dispatchCompute({
pipeline,
bindGroups: [group],
workgroups: [16] // 16 workgroups × 64 threads = 1024 elements
});
// 6. Read back results
const result = await vgpu.command.readBuffer(output);
const floats = new Float32Array(result);
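Sizing the dispatch in step 5 generalizes to a one-line helper: with @workgroup_size(64), the workgroup count is the element count divided by 64, rounded up so a partial final group still covers the tail.

```javascript
// Workgroups needed to cover `elementCount` items with one thread each.
function workgroupsFor(elementCount, workgroupSize = 64) {
  return Math.ceil(elementCount / workgroupSize);
}
```

The rounding is why compute shaders usually guard with `if (gid.x >= count) { return; }` when the element count is not a multiple of the workgroup size.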
Pattern 2: Render Pass with Depth
// Create depth texture
const { texture: depthTex, view: depthView } = vgpu.texture.create({
width: canvas.width, height: canvas.height,
format: 'depth24plus',
usage: 'render'
});
// Each frame:
const encoder = vgpu.command.encoder('frame');
const pass = encoder.beginRenderPass({
colorAttachments: [{
view: context.getCurrentTexture().createView(),
clearValue: [0.06, 0.09, 0.16, 1],
loadOp: 'clear', storeOp: 'store'
}],
depthStencilAttachment: {
view: depthView,
depthClearValue: 1.0,
depthLoadOp: 'clear', depthStoreOp: 'store'
}
});
pass.setPipeline(pipeline);
pass.setBindGroup(0, cameraGroup);
pass.setVertexBuffer(0, vertexBuffer);
pass.draw(36);
pass.end();
vgpu.command.submit(encoder.finish());
Pattern 3: Frame Lifecycle
// vGPU tracks per-frame stats and profiling
vgpu.beginFrame();
// ... all your rendering and compute work ...
vgpu.endFrame();
// Get comprehensive statistics
const stats = vgpu.getStats();
// → { buffers: { managed, pooled, totalSize }, shaders: { compiled, cached },
// pipelines: { render, compute }, textures: { count }, ... }
Before & After: Why vGPU Matters
Here's a real comparison. Creating a simple particle compute pipeline:
Raw WebGPU
// Buffer creation
const buffer = device.createBuffer({
size: 4096,
usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_DST | GPUBufferUsage.COPY_SRC,
label: 'particles'
});
// Shader compilation
const module = device.createShaderModule({ code: wgslSource, label: 'particleSim' });
// Bind group layout
const bgl = device.createBindGroupLayout({
entries: [{
binding: 0,
visibility: GPUShaderStage.COMPUTE,
buffer: { type: 'storage' }
}]
});
// Pipeline layout
const pipelineLayout = device.createPipelineLayout({
bindGroupLayouts: [bgl]
});
// Compute pipeline
const pipeline = device.createComputePipeline({
layout: pipelineLayout,
compute: { module, entryPoint: 'main' }
});
// Bind group
const bindGroup = device.createBindGroup({
layout: bgl,
entries: [{ binding: 0, resource: { buffer } }]
});
// Dispatch
const encoder = device.createCommandEncoder();
const pass = encoder.beginComputePass();
pass.setPipeline(pipeline);
pass.setBindGroup(0, bindGroup);
pass.dispatchWorkgroups(64);
pass.end();
device.queue.submit([encoder.finish()]);
With vGPU
const { buffer } = vgpu.buffer.create({ size: 4096, usage: 'storage' });
const module = vgpu.shader.compile('particleSim', wgslSource);
const layout = vgpu.bindings.defineLayout('sim', [
{ binding: 0, type: 'storage', visibility: 'compute' }
]);
const pipeline = vgpu.pipeline.compute({ module, entryPoint: 'main' });
const group = vgpu.bindings.createGroup('sim', [
{ binding: 0, resource: { buffer } }
]);
vgpu.command.dispatchCompute({ pipeline, bindGroups: [group], workgroups: [64] });
A fraction of the code, plus automatic caching, pooling, memory tracking, and debug labels — all for free.
Device Properties
vGPU exposes the underlying WebGPU device properties:
vgpu.device // GPUDevice
vgpu.queue // GPUQueue
vgpu.adapter // GPUAdapter
vgpu.limits // GPU limits (maxBufferSize, maxComputeWorkgroupSizeX, etc.)
vgpu.features // Supported features set
vgpu.capabilities // Full capability info
Cleanup
// Release all managed resources
vgpu.destroy();
// Destroys all buffers, textures, pipelines, and the device itself
See Also
vGPU API Reference
Complete method-by-method reference for all 6 managers and advanced modules.
API Reference →
Particle System
See vGPU in action powering 100M+ particle simulation and rendering.
Particles Guide →