LogCoshLoss

Log-Cosh Loss.

LogCosh(pred, target) = log(cosh(pred - target))

Log-cosh is approximately quadratic for small errors (like MSE) but approximately linear, like L1 (MAE), for large errors, making it robust to outliers while remaining smooth (infinitely differentiable) everywhere.

Properties:

  • log(cosh(x)) ≈ x²/2 for small x

  • log(cosh(x)) ≈ |x| - log(2) for large |x|

This loss is smoother than Huber loss: Huber is differentiable everywhere but not twice differentiable at its transition points, whereas log-cosh has continuous derivatives of all orders.
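A minimal standalone sketch of the computation on plain `DoubleArray`s (the function names and the mean reduction here are illustrative assumptions, not this library's tensor API). It uses the numerically stable identity log(cosh(x)) = |x| + log(1 + exp(-2|x|)) - log(2), which avoids overflowing cosh(x) for large |x|:

```kotlin
import kotlin.math.abs
import kotlin.math.exp
import kotlin.math.ln

// Stable elementwise log-cosh: |x| + log1p(exp(-2|x|)) - log(2).
// For small x this behaves like x^2 / 2; for large |x| like |x| - log(2).
fun logCosh(x: Double): Double {
    val a = abs(x)
    return a + ln(1.0 + exp(-2.0 * a)) - ln(2.0)
}

// Hypothetical loss over arrays with mean reduction (an assumption here).
fun logCoshLoss(preds: DoubleArray, targets: DoubleArray): Double {
    require(preds.size == targets.size) { "shape mismatch" }
    return preds.indices.sumOf { logCosh(preds[it] - targets[it]) } / preds.size
}

fun main() {
    println(logCosh(0.01))   // small error: close to 0.01^2 / 2 = 5e-5
    println(logCosh(100.0))  // large error: close to 100 - log(2)
    println(logCoshLoss(doubleArrayOf(1.0, 2.0), doubleArrayOf(1.5, 1.5)))
}
```

The naive `ln(cosh(x))` would overflow to infinity for errors beyond roughly |x| > 710 in double precision; the rewritten form stays finite for all inputs.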

Constructors

constructor()

Functions

open override fun <T : DType, V> forward(preds: Tensor<T, V>, targets: Tensor<out DType, *>, ctx: ExecutionContext, reduction: Reduction): Tensor<T, V>