StreamingOnnxReader
Streaming ONNX reader that parses model metadata without loading tensor data.
Memory usage is proportional to the metadata size (typically ~1-10 MB), not the file size (which can exceed 100 GB). Individual tensors can be loaded on demand via loadTensorData.
This makes it possible to parse very large ONNX model files without requiring the entire file to fit in memory.
Usage:
StreamingOnnxReader.open(source).use { reader ->
    // Access metadata immediately - only metadata is loaded
    println("Tensors: ${reader.tensors.size}")
    println("IR Version: ${reader.irVersion}")

    // Load a specific tensor only when needed
    val weights = reader.loadTensorData("conv1.weight")
}
Properties
irVersion
The ONNX IR version of the model.
tensors
The tensors described in the model's metadata.
Functions
loadTensorData
Load tensor data by name.
Load tensor data for a specific tensor.
Load tensor data into an existing buffer.
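The key property of these overloads is that tensor bytes are only read when requested, while metadata stays resident. The sketch below illustrates that pattern with a self-contained stub; `StubReader`, its `tensorOffsets` constructor parameter, and the `bytesRead` counter are hypothetical stand-ins, not part of the actual API.

```kotlin
import java.io.Closeable

// Hypothetical stand-in for StreamingOnnxReader: the tensor index (metadata)
// is held in memory, but tensor bytes are "read" only on demand.
class StubReader(private val tensorOffsets: Map<String, Long>) : Closeable {
    // Metadata is available immediately after construction.
    val tensors: Set<String> get() = tensorOffsets.keys

    // Tracks how much tensor data has actually been read.
    var bytesRead = 0L
        private set

    fun loadTensorData(name: String): ByteArray {
        require(name in tensorOffsets) { "No tensor named $name" }
        bytesRead += 4 // pretend we seek to the offset and read 4 bytes
        return ByteArray(4)
    }

    override fun close() {}
}

fun main() {
    StubReader(mapOf("conv1.weight" to 0L, "fc.bias" to 64L)).use { reader ->
        check(reader.tensors.size == 2) // metadata known up front
        check(reader.bytesRead == 0L)   // no tensor data touched yet
        reader.loadTensorData("conv1.weight")
        check(reader.bytesRead == 4L)   // only the requested tensor was read
    }
}
```

The buffer-accepting overload follows the same idea but lets callers reuse a preallocated destination instead of allocating a new array per read.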
Load tensor data asynchronously by name.
Load tensor data asynchronously for a specific tensor.
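The exact return type of the asynchronous overloads is not shown above. As a rough sketch, assuming they return a future-like handle so a large tensor read does not block the caller, the call pattern might look like this; `loadTensorDataAsync` here is a free-standing illustration, not the reader's actual signature.

```kotlin
import java.util.concurrent.CompletableFuture

// Hypothetical sketch: kick off the tensor read on a background thread and
// hand back a handle the caller can join when the bytes are actually needed.
fun loadTensorDataAsync(name: String): CompletableFuture<ByteArray> =
    CompletableFuture.supplyAsync {
        ByteArray(4) // stand-in for reading the named tensor's bytes
    }

fun main() {
    val pending = loadTensorDataAsync("conv1.weight")
    // ... do other work while the read proceeds ...
    val bytes = pending.join() // block only at the point of use
    check(bytes.size == 4)
}
```

Overlapping several such reads is the main reason to prefer the async overloads when loading many tensors from slow storage.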