Gemma3nLayerWeights

data class Gemma3nLayerWeights<T : DType>(
    val inputLayernorm: Tensor<T, Float>,
    val wq: Tensor<T, Float>,
    val wk: Tensor<T, Float>,
    val wv: Tensor<T, Float>,
    val wo: Tensor<T, Float>,
    val postAttentionLayernorm: Tensor<T, Float>,
    val gateProj: Tensor<T, Float>,
    val upProj: Tensor<T, Float>,
    val downProj: Tensor<T, Float>,
    val perLayerInput: Tensor<T, Float>?,
    val perLayerOutput: Tensor<T, Float>?
)

Weights for a single Gemma 3n transformer layer.

Key differences from LLaMA:

  • Variable FFN dimensions per layer (MatFormer architecture)

  • Per-layer embeddings (perLayerInput, perLayerOutput) are optional
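The two differences above can be sketched in plain Kotlin. This is a minimal, self-contained illustration only: the `Tensor` stub below is a hypothetical stand-in (the real `Tensor<T, Float>` type is library-specific), and the shapes and `LayerWeights` mirror class are invented for the example, not taken from the actual model.

```kotlin
// Hypothetical stand-in for the library's Tensor type; tracks shape only.
data class Tensor(val shape: List<Int>)

// Simplified mirror of Gemma3nLayerWeights, keeping just the fields
// needed to show the two Gemma 3n properties discussed above.
data class LayerWeights(
    val gateProj: Tensor,
    val upProj: Tensor,
    val downProj: Tensor,
    val perLayerInput: Tensor?,   // optional per-layer input embedding
    val perLayerOutput: Tensor?,  // optional per-layer output embedding
)

fun main() {
    val hidden = 2048  // illustrative hidden size, not the real config

    // MatFormer: FFN width may differ per layer, so each layer carries
    // its own projection shapes rather than one global FFN dimension.
    val narrowLayer = LayerWeights(
        gateProj = Tensor(listOf(hidden, 4096)),
        upProj = Tensor(listOf(hidden, 4096)),
        downProj = Tensor(listOf(4096, hidden)),
        perLayerInput = Tensor(listOf(256)),
        perLayerOutput = Tensor(listOf(256)),
    )
    val wideLayer = LayerWeights(
        gateProj = Tensor(listOf(hidden, 8192)),  // wider FFN in this layer
        upProj = Tensor(listOf(hidden, 8192)),
        downProj = Tensor(listOf(8192, hidden)),
        perLayerInput = null,   // per-layer embeddings simply omitted
        perLayerOutput = null,
    )

    check(narrowLayer.gateProj.shape != wideLayer.gateProj.shape)  // variable FFN dims
    check(wideLayer.perLayerInput == null)                         // optional embeddings
    println("ok")
}
```

Nullable fields let loading code skip the per-layer embedding weights entirely for checkpoints that do not include them, instead of requiring placeholder tensors.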

Constructors

constructor(
    inputLayernorm: Tensor<T, Float>,
    wq: Tensor<T, Float>,
    wk: Tensor<T, Float>,
    wv: Tensor<T, Float>,
    wo: Tensor<T, Float>,
    postAttentionLayernorm: Tensor<T, Float>,
    gateProj: Tensor<T, Float>,
    upProj: Tensor<T, Float>,
    downProj: Tensor<T, Float>,
    perLayerInput: Tensor<T, Float>?,
    perLayerOutput: Tensor<T, Float>?
)

Properties

val downProj: Tensor<T, Float>

val gateProj: Tensor<T, Float>

val inputLayernorm: Tensor<T, Float>

val perLayerInput: Tensor<T, Float>?

Optional per-layer input embedding

val perLayerOutput: Tensor<T, Float>?

Optional per-layer output embedding

val postAttentionLayernorm: Tensor<T, Float>

val upProj: Tensor<T, Float>

val wk: Tensor<T, Float>

val wo: Tensor<T, Float>

val wq: Tensor<T, Float>

val wv: Tensor<T, Float>