\[
\newcommand{\F}{\mathbb{F}}
\newcommand{\R}{\mathbb{R}}
\newcommand{\v}{\mathbf{v}}
\newcommand{\a}{\mathbf{a}}
\newcommand{\b}{\mathbf{b}}
\newcommand{\c}{\mathbf{c}}
\newcommand{\x}{\mathbf{x}}
\newcommand{\y}{\mathbf{y}}
\newcommand{\yhat}{\mathbf{\hat{y}}}
\newcommand{\0}{\mathbf{0}}
\newcommand{\1}{\mathbf{1}}
\]
Mean Absolute Error#
Mean Absolute Error is a risk metric corresponding to the expected value of the absolute error loss, or \(\ell_1\)-norm loss.
Definition (Mean Absolute Error)#
Given a dataset of \(n\) samples, where \(y_i\) is the ground-truth target and \(\hat{y}_i\) the corresponding prediction for the \(i\)-th sample, the mean absolute error (MAE) is defined as:
\[
\textbf{MAE} = \dfrac{\sum_{i=1}^n |\hat{y}_i - y_i|}{n}
\]
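As a quick worked example (using the same numbers as the doctest in the implementation below), with targets \(y = (3, -0.5, 2, 7)\) and predictions \(\hat{y} = (2.5, 0, 2, 8)\):
\[
\textbf{MAE} = \dfrac{|2.5 - 3| + |0 - (-0.5)| + |2 - 2| + |8 - 7|}{4} = \dfrac{0.5 + 0.5 + 0 + 1}{4} = 0.5
\]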
Theorem (Optimality)#
In simple terms, the median minimizes the sum of absolute errors (the \(\ell_1\) loss), just as the mean minimizes the sum of squared errors (and hence the root mean squared error). This is worth keeping in mind when doing regression: training with the \(\ell_1\) loss pushes predictions towards the conditional median, while training with the squared loss pushes them towards the conditional mean.
For a proof sketch and a quick numerical check:
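Here is a minimal sketch of one standard argument (stated informally, assuming \(Y\) has a continuous distribution function \(F\)). For a constant predictor \(c\),
\[
\frac{\partial}{\partial c}\, \mathbb{E}\,|Y - c|
= \frac{\partial}{\partial c}\left[\int_{-\infty}^{c} (c - y)\, dF(y) + \int_{c}^{\infty} (y - c)\, dF(y)\right]
= F(c) - \bigl(1 - F(c)\bigr)
= 2F(c) - 1,
\]
which vanishes exactly when \(F(c) = \tfrac{1}{2}\), i.e. when \(c\) is a median of \(Y\). The same fact can be checked numerically; the snippet below (illustrative numbers only, any 1-D sample works) scans candidate constants and confirms that the minimizer of the sum of absolute deviations coincides with the sample median:
import numpy as np

# Illustrative sample; its median is 2.0.
y = np.array([3.0, -0.5, 2.0, 7.0, 1.0])

# Sum of absolute deviations from each candidate constant c on a grid.
candidates = np.linspace(-1.0, 8.0, 901)
sad = np.array([np.sum(np.abs(y - c)) for c in candidates])

# The grid minimizer agrees (up to grid resolution) with the median.
print(candidates[np.argmin(sad)])  # ~2.0
print(np.median(y))                # 2.0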
Implementation of MAE#
import numpy as np


def mean_absolute_error_(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Mean absolute error regression loss.

    Args:
        y_true (np.ndarray): Ground truth (correct) target values.
        y_pred (np.ndarray): Estimated target values.

    Shape:
        y_true: (n_samples, )
        y_pred: (n_samples, )

    Returns:
        loss (float): The mean absolute error.

    Examples:
        >>> y_true = [3, -0.5, 2, 7]
        >>> y_pred = [2.5, 0.0, 2, 8]
        >>> mean_absolute_error_(y_true, y_pred)
        0.5
    """
    # Accept array-likes (e.g. Python lists) and flatten to 1-D vectors.
    y_true = np.asarray(y_true).flatten()
    y_pred = np.asarray(y_pred).flatten()
    # Average of the element-wise absolute errors.
    loss = np.mean(np.abs(y_true - y_pred))
    return float(loss)
>>> y_true = [3, -0.5, 2, 7]
>>> y_pred = [2.5, 0.0, 2, 8]
>>> mean_absolute_error_(y_true, y_pred)
0.5
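For reference, the result can be cross-checked against scikit-learn's implementation of the same metric (assuming scikit-learn is installed in the environment):
>>> from sklearn.metrics import mean_absolute_error
>>> mean_absolute_error(y_true, y_pred)
0.5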