matmulAutoDispatch
fun matmulAutoDispatch(input: Tensor<FP32, Float>, weight: Tensor<*, *>, ctx: ExecutionContext): Tensor<FP32, Float>
Performs matrix multiplication with automatic dispatch based on the weight's data type. Uses the ternary-optimized path when the weight's backing data is TernaryTensorData; otherwise falls back to the standard FP32 matmul.
Return
The FP32 output tensor of the matrix multiplication
Parameters
input
FP32 input tensor
weight
Weight tensor (either FP32 or ternary)
ctx
Execution context for the operation
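The dispatch pattern described above can be sketched as follows. This is a minimal, self-contained illustration, not the library's implementation: the stand-in types (TensorData, Fp32TensorData, a simplified Tensor and ExecutionContext) and the 1-D dot product are assumptions; only the name matmulAutoDispatch and the TernaryTensorData branching mirror the documentation.

```kotlin
// Hypothetical stand-ins for the library's tensor types (assumed, simplified).
sealed interface TensorData
class Fp32TensorData(val values: FloatArray) : TensorData
class TernaryTensorData(val trits: ByteArray) : TensorData // entries in {-1, 0, +1}

class Tensor(val data: TensorData)
class ExecutionContext // placeholder; a real context would carry device/thread state

// Dispatch on the weight's backing storage, as the documentation describes:
// ternary-optimized path for TernaryTensorData, standard FP32 path otherwise.
// Reduced to a 1-D dot product for illustration.
fun matmulAutoDispatch(input: Tensor, weight: Tensor, ctx: ExecutionContext): Float {
    val x = (input.data as Fp32TensorData).values
    return when (val w = weight.data) {
        is TernaryTensorData ->
            // Ternary kernel: weights are -1/0/+1, so multiplies
            // become adds, subtracts, or skips.
            x.indices.fold(0f) { acc, i ->
                when (w.trits[i].toInt()) {
                    1 -> acc + x[i]
                    -1 -> acc - x[i]
                    else -> acc
                }
            }
        is Fp32TensorData ->
            // Standard FP32 dot product.
            x.indices.fold(0f) { acc, i -> acc + x[i] * w.values[i] }
    }
}
```

The point of the auto-dispatch is that callers need not branch on the weight format themselves; the ternary path avoids floating-point multiplies entirely.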