
Test the normality of a given Time-Series Dataset🔗

Introduction🔗

Summary

As stated by the NIST/SEMATECH e-Handbook of Statistical Methods:

Normal distributions are important in statistics and are often used in the natural and social sciences to represent real-valued random variables whose distributions are not known. Their importance is partly due to the central limit theorem.


For more info, see: Engineering Statistics Handbook: Measures of Skewness and Kurtosis.

Info

The normality test is used to determine whether a data set is well-modeled by a normal distribution. In time series forecasting, we primarily test the residuals (errors) of a model for normality. If the residuals follow a normal distribution, it suggests that the model has successfully captured the systematic patterns in the data, and the remaining errors are random white noise.

If the residuals are not normally distributed, it may indicate that the model is missing important features, such as seasonal patterns or long-term trends, or that a transformation of the data (e.g., Log or Box-Cox) is required before modeling.
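The residual-testing workflow described above can be sketched end-to-end with scipy alone. The data and the straight-line model below are purely illustrative, not part of this package:

```python
# Illustrative sketch: fit a simple trend model with numpy, then test the
# residuals for normality with scipy's D'Agostino-Pearson test.
import numpy as np
from scipy.stats import normaltest

rng = np.random.default_rng(42)
t = np.arange(200)
series = 0.5 * t + rng.normal(scale=2.0, size=t.size)  # linear trend + noise

# Fit a straight line and compute the residuals (actual - fitted).
slope, intercept = np.polyfit(t, series, deg=1)
residuals = series - (slope * t + intercept)

stat, pvalue = normaltest(residuals)
# A p-value >= 0.05 gives no evidence against normality of the residuals.
print(f"statistic={stat:.4f}, p-value={pvalue:.4f}")
```

If the p-value were small here, that would suggest the trend model left structure in the residuals and a richer model or a transformation is needed.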

| library | category | algorithm | short | import script | url |
| --- | --- | --- | --- | --- | --- |
| scipy | Normality | Shapiro-Wilk Test | SW | `from scipy.stats import shapiro` | https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.shapiro.html |
| scipy | Normality | D'Agostino & Pearson's Test | DP | `from scipy.stats import normaltest` | https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.normaltest.html |
| scipy | Normality | Anderson-Darling Test | AD | `from scipy.stats import anderson` | https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.anderson.html |
| statsmodels | Normality | Jarque-Bera Test | JB | `from statsmodels.stats.stattools import jarque_bera` | https://www.statsmodels.org/stable/generated/statsmodels.stats.stattools.jarque_bera.html |
| statsmodels | Normality | Omnibus Test | OB | `from statsmodels.stats.diagnostic import omni_normtest` | https://www.statsmodels.org/stable/generated/statsmodels.stats.diagnostic.omni_normtest.html |
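To make the table concrete, here is an illustrative sketch that runs four of the five listed tests on one simulated sample (the Omnibus import is omitted here; the data is made up for demonstration):

```python
# Illustrative only: run four of the tests listed above on the same sample.
import numpy as np
from scipy.stats import anderson, normaltest, shapiro
from statsmodels.stats.stattools import jarque_bera

rng = np.random.default_rng(0)
x = rng.normal(size=500)

sw_stat, sw_p = shapiro(x)                  # Shapiro-Wilk
dp_stat, dp_p = normaltest(x)               # D'Agostino & Pearson
jb_stat, jb_p, skew, kurt = jarque_bera(x)  # Jarque-Bera
ad_res = anderson(x, dist="norm")           # Anderson-Darling: no p-value,
# compare ad_res.statistic against ad_res.critical_values instead.

print(f"SW p={sw_p:.3f}  DP p={dp_p:.3f}  JB p={jb_p:.3f}")
print(f"AD stat={ad_res.statistic:.3f}  critical values={ad_res.critical_values}")
```

Note that Anderson-Darling is the odd one out: it reports critical values rather than a p-value, which is why its interpretation differs in the functions documented below.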

For more info, see: Hyndman & Athanasopoulos: Forecasting: Principles and Practice.

Source Library

The scipy and statsmodels packages were chosen because they provide standard, reliable implementations of classical statistical tests. scipy.stats provides implementations for Shapiro-Wilk, D'Agostino-Pearson, and Anderson-Darling tests, while statsmodels provides the Jarque-Bera and Omnibus tests.

Source Module

All of the source code can be found within these modules:

Modules🔗

ts_stat_tests.normality.tests 🔗

Summary

This module contains convenience functions and tests for normality measures, allowing for easy access to different normality algorithms.

normality 🔗

normality(
    x: ArrayLike,
    algorithm: str = "dp",
    axis: int = 0,
    nan_policy: VALID_DP_NAN_POLICY_OPTIONS = "propagate",
    dist: VALID_AD_DIST_OPTIONS = "norm",
) -> Union[
    tuple[float, ...],
    NormaltestResult,
    ShapiroResult,
    AndersonResult,
]

Summary

Perform a normality test on the given data.

Details

This function is a convenience wrapper around the five underlying algorithms:
- jb()
- ob()
- sw()
- dp()
- ad()

Parameters:

- `x` (ArrayLike): The data to be checked. Should be a `1-D` or `N-D` data array. Required.
- `algorithm` (str): Which normality algorithm to use.
  - `jb()`: `["jb", "jarque", "jarque-bera"]`
  - `ob()`: `["ob", "omni", "omnibus"]`
  - `sw()`: `["sw", "shapiro", "shapiro-wilk"]`
  - `dp()`: `["dp", "dagostino", "dagostino-pearson"]`
  - `ad()`: `["ad", "anderson", "anderson-darling"]`

  Default: `"dp"`
- `axis` (int): Axis along which to compute the test. Default: `0`
- `nan_policy` (VALID_DP_NAN_POLICY_OPTIONS): Defines how to handle when input contains `NaN`.
  - `propagate`: returns `NaN`
  - `raise`: throws an error
  - `omit`: performs the calculations ignoring `NaN` values

  Default: `"propagate"`
- `dist` (VALID_AD_DIST_OPTIONS): The type of distribution to test against. Only relevant when `algorithm` selects the Anderson-Darling test. Default: `"norm"`

Raises:

- ValueError: When the given value for `algorithm` is not valid.

Returns:

- Union[tuple[float, float], tuple[float, list[float], list[float]]]: If not `"ad"`, returns a tuple of `(stat, pvalue)`. If `"ad"`, returns a tuple of `(stat, critical_values, significance_level)`.
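Because the `"ad"` branch returns critical values rather than a p-value, its result is interpreted differently from the other algorithms. A minimal sketch of that interpretation, using scipy.stats.anderson directly rather than this package (illustrative data):

```python
# Anderson-Darling returns critical values instead of a p-value, so compare
# the statistic against the critical value at the desired significance level.
import numpy as np
from scipy.stats import anderson

rng = np.random.default_rng(1)
res = anderson(rng.normal(size=300), dist="norm")

# significance_level is in percent, e.g. [15. , 10. , 5. , 2.5, 1. ]
idx = list(res.significance_level).index(5.0)
looks_normal = res.statistic < res.critical_values[idx]
print(
    f"AD stat={res.statistic:.3f}, "
    f"5% critical={res.critical_values[idx]:.3f}, normal={looks_normal}"
)
```

This is the same comparison performed internally by `is_normal` for the Anderson-Darling branch.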

Credit

Calculations are performed by scipy.stats and statsmodels.stats.

Examples
Setup

>>> from ts_stat_tests.normality.tests import normality
>>> from ts_stat_tests.utils.data import data_normal
>>> normal = data_normal

Example 1: D'Agostino-Pearson test

>>> stat, pvalue = normality(normal, algorithm="dp")
>>> print(f"DP statistic: {stat:.4f}")
DP statistic: 1.3537
>>> print(f"p-value: {pvalue:.4f}")
p-value: 0.5082

Example 2: Jarque-Bera test

>>> stat, pvalue = normality(normal, algorithm="jb")
>>> print(f"JB statistic: {stat:.4f}")
JB statistic: 1.4168
>>> print(f"p-value: {pvalue:.4f}")
p-value: 0.4924
Source code in src/ts_stat_tests/normality/tests.py
@typechecked
def normality(
    x: ArrayLike,
    algorithm: str = "dp",
    axis: int = 0,
    nan_policy: VALID_DP_NAN_POLICY_OPTIONS = "propagate",
    dist: VALID_AD_DIST_OPTIONS = "norm",
) -> Union[tuple[float, ...], NormaltestResult, ShapiroResult, AndersonResult]:
    """
    !!! note "Summary"
        Perform a normality test on the given data.

    ???+ abstract "Details"
        This function is a convenience wrapper around the five underlying algorithms:<br>
        - [`jb()`][ts_stat_tests.normality.algorithms.jb]<br>
        - [`ob()`][ts_stat_tests.normality.algorithms.ob]<br>
        - [`sw()`][ts_stat_tests.normality.algorithms.sw]<br>
        - [`dp()`][ts_stat_tests.normality.algorithms.dp]<br>
        - [`ad()`][ts_stat_tests.normality.algorithms.ad]

    Params:
        x (ArrayLike):
            The data to be checked. Should be a `1-D` or `N-D` data array.
        algorithm (str):
            Which normality algorithm to use.<br>
            - `jb()`: `["jb", "jarque", "jarque-bera"]`<br>
            - `ob()`: `["ob", "omni", "omnibus"]`<br>
            - `sw()`: `["sw", "shapiro", "shapiro-wilk"]`<br>
            - `dp()`: `["dp", "dagostino", "dagostino-pearson"]`<br>
            - `ad()`: `["ad", "anderson", "anderson-darling"]`<br>
            Default: `"dp"`
        axis (int):
            Axis along which to compute the test.
            Default: `0`
        nan_policy (VALID_DP_NAN_POLICY_OPTIONS):
            Defines how to handle when input contains `NaN`.<br>
            - `propagate`: returns `NaN`<br>
            - `raise`: throws an error<br>
            - `omit`: performs the calculations ignoring `NaN` values<br>
            Default: `"propagate"`
        dist (VALID_AD_DIST_OPTIONS):
            The type of distribution to test against.<br>
            Only relevant when `algorithm=anderson`.<br>
            Default: `"norm"`

    Raises:
        (ValueError):
            When the given value for `algorithm` is not valid.

    Returns:
        (Union[tuple[float, float], tuple[float, list[float], list[float]]]):
            If not `"ad"`, returns a `tuple` of `(stat, pvalue)`.
            If `"ad"`, returns a `tuple` of `(stat, critical_values, significance_level)`.

    !!! success "Credit"
        Calculations are performed by `scipy.stats` and `statsmodels.stats`.

    ???+ example "Examples"

        ```pycon {.py .python linenums="1" title="Setup"}
        >>> from ts_stat_tests.normality.tests import normality
        >>> from ts_stat_tests.utils.data import data_normal
        >>> normal = data_normal

        ```

        ```pycon {.py .python linenums="1" title="Example 1: D'Agostino-Pearson test"}
        >>> stat, pvalue = normality(normal, algorithm="dp")
        >>> print(f"DP statistic: {stat:.4f}")
        DP statistic: 1.3537
        >>> print(f"p-value: {pvalue:.4f}")
        p-value: 0.5082

        ```

        ```pycon {.py .python linenums="1" title="Example 2: Jarque-Bera test"}
        >>> stat, pvalue = normality(normal, algorithm="jb")
        >>> print(f"JB statistic: {stat:.4f}")
        JB statistic: 1.4168
        >>> print(f"p-value: {pvalue:.4f}")
        p-value: 0.4924

        ```
    """
    options: dict[str, tuple[str, ...]] = {
        "jb": ("jb", "jarque", "jarque-bera"),
        "ob": ("ob", "omni", "omnibus"),
        "sw": ("sw", "shapiro", "shapiro-wilk"),
        "dp": ("dp", "dagostino", "dagostino-pearson"),
        "ad": ("ad", "anderson", "anderson-darling"),
    }
    if algorithm in options["jb"]:
        res_jb = _jb(x=x, axis=axis)
        return (res_jb[0], res_jb[1])
    if algorithm in options["ob"]:
        return _ob(x=x, axis=axis)
    if algorithm in options["sw"]:
        return _sw(x=x)
    if algorithm in options["dp"]:
        return _dp(x=x, axis=axis, nan_policy=nan_policy)
    if algorithm in options["ad"]:
        return _ad(x=x, dist=dist)

    raise ValueError(
        generate_error_message(
            parameter_name="algorithm",
            value_parsed=algorithm,
            options=options,
        )
    )

is_normal 🔗

is_normal(
    x: ArrayLike,
    algorithm: str = "dp",
    alpha: float = 0.05,
    axis: int = 0,
    nan_policy: VALID_DP_NAN_POLICY_OPTIONS = "propagate",
    dist: VALID_AD_DIST_OPTIONS = "norm",
) -> dict[str, Union[str, float, bool, None]]

Summary

Test whether a given data set is normal or not.

Details

This function implements the given algorithm (defined in the parameter algorithm), and returns a dictionary containing the relevant data:

{
    "result": ...,  # The result of the test. Will be `True` if `p-value >= alpha`, and `False` otherwise
    "statistic": ...,  # The test statistic
    "p_value": ...,  # The p-value of the test (if applicable)
    "alpha": ...,  # The significance level used
}

Parameters:

- `x` (ArrayLike): The data to be checked. Should be a `1-D` or `N-D` data array. Required.
- `algorithm` (str): Which normality algorithm to use.
  - `jb()`: `["jb", "jarque", "jarque-bera"]`
  - `ob()`: `["ob", "omni", "omnibus"]`
  - `sw()`: `["sw", "shapiro", "shapiro-wilk"]`
  - `dp()`: `["dp", "dagostino", "dagostino-pearson"]`
  - `ad()`: `["ad", "anderson", "anderson-darling"]`

  Default: `"dp"`
- `alpha` (float): Significance level. Default: `0.05`
- `axis` (int): Axis along which to compute the test. Default: `0`
- `nan_policy` (VALID_DP_NAN_POLICY_OPTIONS): Defines how to handle when input contains `NaN`.
  - `propagate`: returns `NaN`
  - `raise`: throws an error
  - `omit`: performs the calculations ignoring `NaN` values

  Default: `"propagate"`
- `dist` (VALID_AD_DIST_OPTIONS): The type of distribution to test against. Only relevant when `algorithm` selects the Anderson-Darling test. Default: `"norm"`

Returns:

- dict[str, Union[str, float, bool, None]]: A dictionary containing:
  - `"result"` (bool): Indicator if the series is normal.
  - `"statistic"` (float): The test statistic.
  - `"p_value"` (float): The p-value of the test (if applicable).
  - `"alpha"` (float): The significance level used.

Credit

Calculations are performed by scipy.stats and statsmodels.stats.

Examples
Setup

>>> from ts_stat_tests.normality.tests import is_normal
>>> from ts_stat_tests.utils.data import data_normal, data_random
>>> normal = data_normal
>>> random = data_random

Example 1: Test normal data

>>> res = is_normal(normal, algorithm="dp")
>>> res["result"]
True
>>> print(f"p-value: {res['p_value']:.4f}")
p-value: 0.5082

Example 2: Test non-normal (random) data

>>> res = is_normal(random, algorithm="sw")
>>> res["result"]
False
Source code in src/ts_stat_tests/normality/tests.py
@typechecked
def is_normal(
    x: ArrayLike,
    algorithm: str = "dp",
    alpha: float = 0.05,
    axis: int = 0,
    nan_policy: VALID_DP_NAN_POLICY_OPTIONS = "propagate",
    dist: VALID_AD_DIST_OPTIONS = "norm",
) -> dict[str, Union[str, float, bool, None]]:
    """
    !!! note "Summary"
        Test whether a given data set is `normal` or not.

    ???+ abstract "Details"
        This function implements the given algorithm (defined in the parameter `algorithm`), and returns a dictionary containing the relevant data:
        ```python
        {
            "result": ...,  # The result of the test. Will be `True` if `p-value >= alpha`, and `False` otherwise
            "statistic": ...,  # The test statistic
            "p_value": ...,  # The p-value of the test (if applicable)
            "alpha": ...,  # The significance level used
        }
        ```

    Params:
        x (ArrayLike):
            The data to be checked. Should be a `1-D` or `N-D` data array.
        algorithm (str):
            Which normality algorithm to use.<br>
            - `jb()`: `["jb", "jarque", "jarque-bera"]`<br>
            - `ob()`: `["ob", "omni", "omnibus"]`<br>
            - `sw()`: `["sw", "shapiro", "shapiro-wilk"]`<br>
            - `dp()`: `["dp", "dagostino", "dagostino-pearson"]`<br>
            - `ad()`: `["ad", "anderson", "anderson-darling"]`<br>
            Default: `"dp"`
        alpha (float):
            Significance level.
            Default: `0.05`
        axis (int):
            Axis along which to compute the test.
            Default: `0`
        nan_policy (VALID_DP_NAN_POLICY_OPTIONS):
            Defines how to handle when input contains `NaN`.<br>
            - `propagate`: returns `NaN`<br>
            - `raise`: throws an error<br>
            - `omit`: performs the calculations ignoring `NaN` values<br>
            Default: `"propagate"`
        dist (VALID_AD_DIST_OPTIONS):
            The type of distribution to test against.<br>
            Only relevant when `algorithm=anderson`.<br>
            Default: `"norm"`

    Returns:
        (dict[str, Union[str, float, bool, None]]):
            A dictionary containing:
            - `"result"` (bool): Indicator if the series is normal.
            - `"statistic"` (float): The test statistic.
            - `"p_value"` (float): The p-value of the test (if applicable).
            - `"alpha"` (float): The significance level used.

    !!! success "Credit"
        Calculations are performed by `scipy.stats` and `statsmodels.stats`.

    ???+ example "Examples"

        ```pycon {.py .python linenums="1" title="Setup"}
        >>> from ts_stat_tests.normality.tests import is_normal
        >>> from ts_stat_tests.utils.data import data_normal, data_random
        >>> normal = data_normal
        >>> random = data_random

        ```

        ```pycon {.py .python linenums="1" title="Example 1: Test normal data"}
        >>> res = is_normal(normal, algorithm="dp")
        >>> res["result"]
        True
        >>> print(f"p-value: {res['p_value']:.4f}")
        p-value: 0.5082

        ```

        ```pycon {.py .python linenums="1" title="Example 2: Test non-normal (random) data"}
        >>> res = is_normal(random, algorithm="sw")
        >>> res["result"]
        False

        ```
    """
    res: Any = normality(x=x, algorithm=algorithm, axis=axis, nan_policy=nan_policy, dist=dist)

    if algorithm in ("ad", "anderson", "anderson-darling"):
        # res is AndersonResult(statistic, critical_values, significance_level, fit_result)
        # indexing only gives the first 3 elements
        res_list: list[Any] = list(res) if isinstance(res, (tuple, list)) else []
        if len(res_list) >= 3:
            v0: Any = res_list[0]
            v1: Any = res_list[1]
            v2: Any = res_list[2]
            stat = v0
            crit = v1
            sig = v2

            # sig is something like [15. , 10. ,  5. ,  2.5,  1. ]
            # alpha is something like 0.05 (which is 5%)
            sig_arr = np.asarray(sig)
            crit_arr = np.asarray(crit)
            idx = np.argmin(np.abs(sig_arr - (alpha * 100)))
            critical_value = crit_arr[idx]
            is_norm = stat < critical_value
            return {
                "result": bool(is_norm),
                "statistic": float(stat),
                "critical_value": float(critical_value),
                "significance_level": float(sig_arr[idx]),
                "alpha": float(alpha),
            }
        # Fallback for unexpected return format
        return {
            "result": False,
            "statistic": 0.0,
            "alpha": float(alpha),
        }

    # For others, they return (statistic, pvalue) or similar
    p_val: Union[float, None] = None
    stat_val: Union[float, None] = None

    # Use getattr to avoid type checker attribute issues
    p_val_attr = getattr(res, "pvalue", None)
    stat_val_attr = getattr(res, "statistic", None)

    if p_val_attr is not None and stat_val_attr is not None:
        p_val = float(p_val_attr)
        stat_val = float(stat_val_attr)
    elif isinstance(res, (tuple, list)) and len(res) >= 2:
        res_tuple: Any = res
        stat_val = float(res_tuple[0])
        p_val = float(res_tuple[1])
    else:
        # Fallback
        if isinstance(res, (float, int)):
            stat_val = float(res)
        p_val = None

    is_norm_val = p_val >= alpha if p_val is not None else False

    return {
        "result": bool(is_norm_val),
        "statistic": stat_val,
        "p_value": p_val,
        "alpha": float(alpha),
    }

ts_stat_tests.normality.algorithms 🔗

Summary

This module provides implementations of various statistical tests to assess the normality of data distributions. These tests are essential in statistical analysis and time series forecasting, as many models assume that the underlying data follows a normal distribution.

VALID_DP_NAN_POLICY_OPTIONS module-attribute 🔗

VALID_DP_NAN_POLICY_OPTIONS = Literal[
    "propagate", "raise", "omit"
]

VALID_AD_DIST_OPTIONS module-attribute 🔗

VALID_AD_DIST_OPTIONS = Literal[
    "norm",
    "expon",
    "logistic",
    "gumbel",
    "gumbel_l",
    "gumbel_r",
    "extreme1",
    "weibull_min",
]

jb 🔗

jb(
    x: ArrayLike, axis: int = 0
) -> tuple[np.float64, np.float64, np.float64, np.float64]

Summary

The Jarque-Bera test is a statistical test used to determine whether a dataset follows a normal distribution. In time series forecasting, the test can be used to evaluate whether the residuals of a model follow a normal distribution.

Details

To apply the Jarque-Bera test to time series data, we first need to estimate the residuals of the forecasting model. The residuals represent the difference between the actual values of the time series and the values predicted by the model. We can then use the Jarque-Bera test to evaluate whether the residuals follow a normal distribution.

The Jarque-Bera test is based on two statistics, skewness and kurtosis, which measure the degree of asymmetry and peakedness in the distribution of the residuals. The test compares the observed skewness and kurtosis of the residuals to the expected values for a normal distribution. If the observed values are significantly different from the expected values, the test rejects the null hypothesis that the residuals follow a normal distribution.

Parameters:

- `x` (ArrayLike): Data to test for normality. Usually regression model residuals that are mean 0. Required.
- `axis` (int): Axis to use if data has more than 1 dimension. Default: `0`

Raises:

- ValueError: If the input data `x` is invalid.

Returns:

- JB (float): The Jarque-Bera test statistic.
- JBpv (float): The p-value of the test statistic.
- skew (float): Estimated skewness of the data.
- kurtosis (float): Estimated kurtosis of the data.

Examples
Setup

>>> from ts_stat_tests.normality.algorithms import jb
>>> from ts_stat_tests.utils.data import data_airline, data_noise
>>> airline = data_airline.values
>>> noise = data_noise

Example 1: Using the airline dataset

>>> jb_value, p_value, skew, kurt = jb(airline)
>>> print(f"{jb_value:.4f}")
8.9225

Example 2: Using random noise

>>> jb_value, p_value, skew, kurt = jb(noise)
>>> print(f"{jb_value:.4f}")
0.7478
>>> print(f"{p_value:.4f}")
0.6881
>>> print(f"{skew:.4f}")
-0.0554
>>> print(f"{kurt:.4f}")
3.0753
Calculation

The Jarque-Bera test statistic is defined as:

\[ JB = \frac{n}{6} \left( S^2 + \frac{(K-3)^2}{4} \right) \]

where:

  • \(n\) is the sample size,
  • \(S\) is the sample skewness, and
  • \(K\) is the sample kurtosis.
Notes

Each output returned has 1 dimension fewer than data. The Jarque-Bera test statistic tests the null that the data is normally distributed against an alternative that the data follow some other distribution. It has an asymptotic \(\chi_2^2\) distribution.
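The statistic can be reproduced directly from the formula above. A sketch using scipy's biased sample moments, cross-checked against `statsmodels.stats.stattools.jarque_bera` (the data is illustrative only):

```python
# Reproduce JB = n/6 * (S^2 + (K - 3)^2 / 4) from scipy's sample skewness
# and Pearson (non-excess) kurtosis, then compare with statsmodels.
import numpy as np
from scipy import stats
from statsmodels.stats.stattools import jarque_bera

rng = np.random.default_rng(7)
x = rng.normal(size=1000)

n = x.size
S = stats.skew(x)                    # sample skewness
K = stats.kurtosis(x, fisher=False)  # Pearson kurtosis (normal -> ~3)
jb_manual = (n / 6) * (S**2 + (K - 3) ** 2 / 4)

jb_ref, jb_pv, _, _ = jarque_bera(x)
print(f"manual={jb_manual:.6f}  statsmodels={jb_ref:.6f}")
```

The two values agree because statsmodels uses the same biased moment estimators.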

Credit

All credit goes to the statsmodels library.

References
  • Jarque, C. and Bera, A. (1980), "Efficient tests for normality, homoscedasticity and serial independence of regression residuals", Economics Letters, 6, 255-259.
See Also
Source code in src/ts_stat_tests/normality/algorithms.py
@typechecked
def jb(x: ArrayLike, axis: int = 0) -> tuple[np.float64, np.float64, np.float64, np.float64]:
    r"""
    !!! note "Summary"
        The Jarque-Bera test is a statistical test used to determine whether a dataset follows a normal distribution. In time series forecasting, the test can be used to evaluate whether the residuals of a model follow a normal distribution.

    ???+ abstract "Details"
        To apply the Jarque-Bera test to time series data, we first need to estimate the residuals of the forecasting model. The residuals represent the difference between the actual values of the time series and the values predicted by the model. We can then use the Jarque-Bera test to evaluate whether the residuals follow a normal distribution.

        The Jarque-Bera test is based on two statistics, skewness and kurtosis, which measure the degree of asymmetry and peakedness in the distribution of the residuals. The test compares the observed skewness and kurtosis of the residuals to the expected values for a normal distribution. If the observed values are significantly different from the expected values, the test rejects the null hypothesis that the residuals follow a normal distribution.

    Params:
        x (ArrayLike):
            Data to test for normality. Usually regression model residuals that are mean 0.
        axis (int):
            Axis to use if data has more than 1 dimension.
            Default: `0`

    Raises:
        (ValueError):
            If the input data `x` is invalid.

    Returns:
        JB (float):
            The Jarque-Bera test statistic.
        JBpv (float):
            The pvalue of the test statistic.
        skew (float):
            Estimated skewness of the data.
        kurtosis (float):
            Estimated kurtosis of the data.

    ???+ example "Examples"

        ```pycon {.py .python linenums="1" title="Setup"}
        >>> from ts_stat_tests.normality.algorithms import jb
        >>> from ts_stat_tests.utils.data import data_airline, data_noise
        >>> airline = data_airline.values
        >>> noise = data_noise

        ```

        ```pycon {.py .python linenums="1" title="Example 1: Using the airline dataset"}
        >>> jb_value, p_value, skew, kurt = jb(airline)
        >>> print(f"{jb_value:.4f}")
        8.9225

        ```

        ```pycon {.py .python linenums="1" title="Example 2: Using random noise"}
        >>> jb_value, p_value, skew, kurt = jb(noise)
        >>> print(f"{jb_value:.4f}")
        0.7478
        >>> print(f"{p_value:.4f}")
        0.6881
        >>> print(f"{skew:.4f}")
        -0.0554
        >>> print(f"{kurt:.4f}")
        3.0753

        ```

    ??? equation "Calculation"
        The Jarque-Bera test statistic is defined as:

        $$
        JB = \frac{n}{6} \left( S^2 + \frac{(K-3)^2}{4} \right)
        $$

        where:

        - $n$ is the sample size,
        - $S$ is the sample skewness, and
        - $K$ is the sample kurtosis.

    ??? note "Notes"
        Each output returned has 1 dimension fewer than data.
        The Jarque-Bera test statistic tests the null that the data is normally distributed against an alternative that the data follow some other distribution. It has an asymptotic $\chi_2^2$ distribution.

    ??? success "Credit"
        All credit goes to the [`statsmodels`](https://www.statsmodels.org) library.

    ??? question "References"
        - Jarque, C. and Bera, A. (1980), "Efficient tests for normality, homoscedasticity and serial independence of regression residuals", Economics Letters, 6, 255-259.

    ??? tip "See Also"
        - [`ob()`][ts_stat_tests.normality.algorithms.ob]
        - [`sw()`][ts_stat_tests.normality.algorithms.sw]
        - [`dp()`][ts_stat_tests.normality.algorithms.dp]
        - [`ad()`][ts_stat_tests.normality.algorithms.ad]
    """
    return _jb(resids=x, axis=axis)  # type: ignore[return-value]

ob 🔗

ob(x: ArrayLike, axis: int = 0) -> tuple[float, float]

Summary

The Omnibus test is a statistical test used to evaluate the normality of a dataset, including time series data. In time series forecasting, the Omnibus test can be used to assess whether the residuals of a model follow a normal distribution.

Details

The Omnibus test uses a combination of skewness and kurtosis measures to assess whether the residuals follow a normal distribution. Skewness measures the degree of asymmetry in the distribution of the residuals, while kurtosis measures the degree of peakedness or flatness. If the residuals follow a normal distribution, their skewness should be close to zero and their kurtosis close to three (i.e. excess kurtosis close to zero).

Parameters:

- `x` (ArrayLike): Data to test for normality. Usually regression model residuals that are mean 0. Required.
- `axis` (int): Axis to use if data has more than 1 dimension. Default: `0`

Raises:

- ValueError: If the input data `x` is invalid.

Returns:

- statistic (float): The Omnibus test statistic.
- pvalue (float): The p-value for the hypothesis test.

Examples
Setup

>>> from ts_stat_tests.normality.algorithms import ob
>>> from ts_stat_tests.utils.data import data_airline, data_noise
>>> airline = data_airline.values
>>> noise = data_noise

Example 1: Using the airline dataset

>>> stat, p_val = ob(airline)
>>> print(f"{stat:.4f}")
8.6554

Example 2: Using random noise

>>> stat, p_val = ob(noise)
>>> print(f"{stat:.4f}")
0.8637
Calculation

The D'Agostino's \(K^2\) test statistic is defined as:

\[ K^2 = Z_1(g_1)^2 + Z_2(g_2)^2 \]

where:

  • \(Z_1(g_1)\) is the standard normal transformation of skewness, and
  • \(Z_2(g_2)\) is the standard normal transformation of kurtosis.
Notes

The Omnibus test statistic tests the null that the data is normally distributed against an alternative that the data follow some other distribution. It is based on D'Agostino's \(K^2\) test statistic.
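scipy exposes the same \(K^2\) statistic as `normaltest`, built from the z-scores returned by `skewtest` and `kurtosistest`. A sketch verifying the decomposition above (the data is illustrative only):

```python
# K^2 = Z_1(g_1)^2 + Z_2(g_2)^2: rebuild it from scipy's skewness and
# kurtosis z-scores and compare with scipy's own normaltest.
import numpy as np
from scipy.stats import kurtosistest, normaltest, skewtest

rng = np.random.default_rng(3)
x = rng.normal(size=400)

z_skew = skewtest(x).statistic      # Z_1(g_1)
z_kurt = kurtosistest(x).statistic  # Z_2(g_2)
k2_manual = z_skew**2 + z_kurt**2

k2, pvalue = normaltest(x)
print(f"manual K^2={k2_manual:.6f}  normaltest={k2:.6f}")
```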

Credit

All credit goes to the statsmodels library.

References
  • D'Agostino, R. B. and Pearson, E. S. (1973), "Tests for departure from normality," Biometrika, 60, 613-622.
  • D'Agostino, R. B. and Stephens, M. A. (1986), "Goodness-of-fit techniques," New York: Marcel Dekker.
See Also
Source code in src/ts_stat_tests/normality/algorithms.py
@typechecked
def ob(x: ArrayLike, axis: int = 0) -> tuple[float, float]:
    r"""
    !!! note "Summary"
        The Omnibus test is a statistical test used to evaluate the normality of a dataset, including time series data. In time series forecasting, the Omnibus test can be used to assess whether the residuals of a model follow a normal distribution.

    ???+ abstract "Details"
        The Omnibus test uses a combination of skewness and kurtosis measures to assess whether the residuals follow a normal distribution. Skewness measures the degree of asymmetry in the distribution of the residuals, while kurtosis measures the degree of peakedness or flatness. If the residuals follow a normal distribution, their skewness should be close to zero and their kurtosis close to three (i.e. excess kurtosis close to zero).

    Params:
        x (ArrayLike):
            Data to test for normality. Usually regression model residuals that are mean 0.
        axis (int):
            Axis to use if data has more than 1 dimension.
            Default: `0`

    Raises:
        (ValueError):
            If the input data `x` is invalid.

    Returns:
        statistic (float):
            The Omnibus test statistic.
        pvalue (float):
            The p-value for the hypothesis test.

    ???+ example "Examples"

        ```pycon {.py .python linenums="1" title="Setup"}
        >>> from ts_stat_tests.normality.algorithms import ob
        >>> from ts_stat_tests.utils.data import data_airline, data_noise
        >>> airline = data_airline.values
        >>> noise = data_noise

        ```

        ```pycon {.py .python linenums="1" title="Example 1: Using the airline dataset"}
        >>> stat, p_val = ob(airline)
        >>> print(f"{stat:.4f}")
        8.6554

        ```

        ```pycon {.py .python linenums="1" title="Example 2: Using random noise"}
        >>> stat, p_val = ob(noise)
        >>> print(f"{stat:.4f}")
        0.8637

        ```

    ??? equation "Calculation"
        The D'Agostino's $K^2$ test statistic is defined as:

        $$
        K^2 = Z_1(g_1)^2 + Z_2(g_2)^2
        $$

        where:

        - $Z_1(g_1)$ is the standard normal transformation of skewness, and
        - $Z_2(g_2)$ is the standard normal transformation of kurtosis.

    ??? note "Notes"
        The Omnibus test statistic tests the null that the data is normally distributed against an alternative that the data follow some other distribution. It is based on D'Agostino's $K^2$ test statistic.

    ??? success "Credit"
        All credit goes to the [`statsmodels`](https://www.statsmodels.org) library.

    ??? question "References"
        - D'Agostino, R. B. and Pearson, E. S. (1973), "Tests for departure from normality," Biometrika, 60, 613-622.
        - D'Agostino, R. B. and Stephens, M. A. (1986), "Goodness-of-fit techniques," New York: Marcel Dekker.

    ??? tip "See Also"
        - [`jb()`][ts_stat_tests.normality.algorithms.jb]
        - [`sw()`][ts_stat_tests.normality.algorithms.sw]
        - [`dp()`][ts_stat_tests.normality.algorithms.dp]
        - [`ad()`][ts_stat_tests.normality.algorithms.ad]
    """
    return _ob(resids=x, axis=axis)

sw 🔗

sw(x: ArrayLike) -> ShapiroResult

Summary

The Shapiro-Wilk test is a statistical test used to determine whether a dataset follows a normal distribution.

Details

The Shapiro-Wilk test is based on the null hypothesis that the residuals of the forecasting model are normally distributed. The test calculates a test statistic that compares the observed distribution of the residuals to the expected distribution under the null hypothesis of normality.

Parameters:

  • x (ArrayLike): Array of sample data. Required.

Raises:

  • ValueError: If the input data x is invalid.

Returns:

  • ShapiroResult: A named tuple containing the test statistic and p-value:
      • statistic (float): The test statistic.
      • pvalue (float): The p-value for the hypothesis test.

Examples

Setup

```pycon
>>> from ts_stat_tests.normality.algorithms import sw
>>> from ts_stat_tests.utils.data import data_airline, data_noise
>>> airline = data_airline.values
>>> noise = data_noise
```

Example 1: Using the airline dataset

```pycon
>>> stat, p_val = sw(airline)
>>> print(f"{stat:.4f}")
0.9520
```

Example 2: Using random noise

```pycon
>>> stat, p_val = sw(noise)
>>> print(f"{stat:.4f}")
0.9985
```
Calculation

The Shapiro-Wilk test statistic is defined as:

\[ W = \frac{\left( \sum_{i=1}^n a_i x_{(i)} \right)^2}{\sum_{i=1}^n (x_i - \bar{x})^2} \]

where:

  • \(x_{(i)}\) are the ordered sample values,
  • \(\bar{x}\) is the sample mean, and
  • \(a_i\) are constants generated from the covariances, variances and means of the order statistics of a sample of size \(n\) from a normal distribution.
Notes

The algorithm used is described in Algorithm AS R94 (Appl. Statist., 1995), but the censoring parameters it describes are not implemented. For \(N > 5000\) the \(W\) test statistic is accurate but the p-value may not be.

Credit

All credit goes to the scipy library.

References
  • Shapiro, S. S. & Wilk, M. B. (1965). An analysis of variance test for normality (complete samples), Biometrika, Vol. 52, pp. 591-611.
  • Algorithm AS R94, Appl. Statist. (1995), Vol. 44, No. 4.
See Also

  • jb(), ob(), dp(), ad()
Source code in src/ts_stat_tests/normality/algorithms.py
@typechecked
def sw(x: ArrayLike) -> ShapiroResult:
    r"""
    !!! note "Summary"
        The Shapiro-Wilk test is a statistical test used to determine whether a dataset follows a normal distribution.

    ???+ abstract "Details"
        The Shapiro-Wilk test is based on the null hypothesis that the residuals of the forecasting model are normally distributed. The test calculates a test statistic that compares the observed distribution of the residuals to the expected distribution under the null hypothesis of normality.

    Params:
        x (ArrayLike):
            Array of sample data.

    Raises:
        (ValueError):
            If the input data `x` is invalid.

    Returns:
        (ShapiroResult):
            A named tuple containing the test statistic and p-value:
            - statistic (float): The test statistic.
            - pvalue (float): The p-value for the hypothesis test.

    ???+ example "Examples"

        ```pycon {.py .python linenums="1" title="Setup"}
        >>> from ts_stat_tests.normality.algorithms import sw
        >>> from ts_stat_tests.utils.data import data_airline, data_noise
        >>> airline = data_airline.values
        >>> noise = data_noise

        ```

        ```pycon {.py .python linenums="1" title="Example 1: Using the airline dataset"}
        >>> stat, p_val = sw(airline)
        >>> print(f"{stat:.4f}")
        0.9520

        ```

        ```pycon {.py .python linenums="1" title="Example 2: Using random noise"}
        >>> stat, p_val = sw(noise)
        >>> print(f"{stat:.4f}")
        0.9985

        ```

    ??? equation "Calculation"
        The Shapiro-Wilk test statistic is defined as:

        $$
        W = \frac{\left( \sum_{i=1}^n a_i x_{(i)} \right)^2}{\sum_{i=1}^n (x_i - \bar{x})^2}
        $$

        where:

        - $x_{(i)}$ are the ordered sample values,
        - $\bar{x}$ is the sample mean, and
        - $a_i$ are constants generated from the covariances, variances and means of the order statistics of a sample of size $n$ from a normal distribution.

    ??? note "Notes"
        The algorithm used is described in Algorithm AS R94 (Appl. Statist., 1995), but the censoring parameters it describes are not implemented. For $N > 5000$ the $W$ test statistic is accurate but the $p$-value may not be.

    ??? success "Credit"
        All credit goes to the [`scipy`](https://docs.scipy.org/) library.

    ??? question "References"
        - Shapiro, S. S. & Wilk, M. B. (1965). An analysis of variance test for normality (complete samples), Biometrika, Vol. 52, pp. 591-611.
        - Algorithm AS R94, Appl. Statist. (1995), Vol. 44, No. 4.

    ??? tip "See Also"
        - [`jb()`][ts_stat_tests.normality.algorithms.jb]
        - [`ob()`][ts_stat_tests.normality.algorithms.ob]
        - [`dp()`][ts_stat_tests.normality.algorithms.dp]
        - [`ad()`][ts_stat_tests.normality.algorithms.ad]
    """
    return _sw(x=x)
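
Since the docstring states that `sw()` wraps `scipy.stats.shapiro`, a typical accept/reject reading of the result can be sketched against `scipy` directly; the synthetic sample and the 5% level below are illustrative assumptions:

```python
import numpy as np
from scipy.stats import shapiro  # the function sw() wraps

rng = np.random.default_rng(42)
sample = rng.exponential(scale=2.0, size=200)  # clearly non-normal data

result = shapiro(sample)
# W is bounded above by 1; values near 1 are consistent with normality.
print(f"W={result.statistic:.4f}, p={result.pvalue:.2e}")
if result.pvalue < 0.05:
    print("Reject the null hypothesis of normality at the 5% level.")
```

For an exponential sample like this, W falls well below 1 and the p-value is tiny, so the null of normality is rejected.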

dp 🔗

dp(
    x: ArrayLike,
    axis: int = 0,
    nan_policy: VALID_DP_NAN_POLICY_OPTIONS = "propagate",
) -> NormaltestResult

Summary

The D'Agostino and Pearson's test is a statistical test used to evaluate whether a dataset follows a normal distribution.

Details

The D'Agostino and Pearson's test uses a combination of skewness and kurtosis measures to assess whether the residuals follow a normal distribution. Skewness measures the degree of asymmetry in the distribution of the residuals, while kurtosis measures the degree of peakedness or flatness.

Parameters:

  • x (ArrayLike): The array containing the sample to be tested. Required.
  • axis (int): Axis along which to compute the test. If None, compute over the whole array x. Default: 0
  • nan_policy (VALID_DP_NAN_POLICY_OPTIONS): Defines how to handle when input contains nan. Default: "propagate"
      • "propagate": returns nan
      • "raise": throws an error
      • "omit": performs the calculations ignoring nan values

Raises:

  • ValueError: If the input data x is invalid.

Returns:

  • NormaltestResult: A named tuple containing the test statistic and p-value:
      • statistic (float): The test statistic (\(K^2\)).
      • pvalue (float): A 2-sided chi-squared probability for the hypothesis test.

Examples

Setup

```pycon
>>> from ts_stat_tests.normality.algorithms import dp
>>> from ts_stat_tests.utils.data import data_airline, data_noise
>>> airline = data_airline.values
>>> noise = data_noise
```

Example 1: Using the airline dataset

```pycon
>>> stat, p_val = dp(airline)
>>> print(f"{stat:.4f}")
8.6554
```

Example 2: Using random noise

```pycon
>>> stat, p_val = dp(noise)
>>> print(f"{stat:.4f}")
0.8637
```
Calculation

The D'Agostino's \(K^2\) test statistic is defined as:

\[ K^2 = Z_1(g_1)^2 + Z_2(g_2)^2 \]

where:

  • \(Z_1(g_1)\) is the standard normal transformation of skewness, and
  • \(Z_2(g_2)\) is the standard normal transformation of kurtosis.
Notes

This function is a wrapper for the scipy.stats.normaltest function.

Credit

All credit goes to the scipy library.

References
  • D'Agostino, R. B. (1971), "An omnibus test of normality for moderate and large sample size", Biometrika, 58, 341-348
  • D'Agostino, R. and Pearson, E. S. (1973), "Tests for departure from normality", Biometrika, 60, 613-622
See Also

  • jb(), ob(), sw(), ad()
Source code in src/ts_stat_tests/normality/algorithms.py
@typechecked
def dp(
    x: ArrayLike,
    axis: int = 0,
    nan_policy: VALID_DP_NAN_POLICY_OPTIONS = "propagate",
) -> NormaltestResult:
    r"""
    !!! note "Summary"
        The D'Agostino and Pearson's test is a statistical test used to evaluate whether a dataset follows a normal distribution.

    ???+ abstract "Details"
        The D'Agostino and Pearson's test uses a combination of skewness and kurtosis measures to assess whether the residuals follow a normal distribution. Skewness measures the degree of asymmetry in the distribution of the residuals, while kurtosis measures the degree of peakedness or flatness.

    Params:
        x (ArrayLike):
            The array containing the sample to be tested.
        axis (int):
            Axis along which to compute the test. If `None`, compute over the whole array `x`.
            Default: `0`
        nan_policy (VALID_DP_NAN_POLICY_OPTIONS):
            Defines how to handle when input contains nan.

            - `"propagate"`: returns nan
            - `"raise"`: throws an error
            - `"omit"`: performs the calculations ignoring nan values

            Default: `"propagate"`

    Raises:
        (ValueError):
            If the input data `x` is invalid.

    Returns:
        (NormaltestResult):
            A named tuple containing the test statistic and p-value:
            - statistic (float): The test statistic ($K^2$).
            - pvalue (float): A 2-sided chi-squared probability for the hypothesis test.

    ???+ example "Examples"

        ```pycon {.py .python linenums="1" title="Setup"}
        >>> from ts_stat_tests.normality.algorithms import dp
        >>> from ts_stat_tests.utils.data import data_airline, data_noise
        >>> airline = data_airline.values
        >>> noise = data_noise

        ```

        ```pycon {.py .python linenums="1" title="Example 1: Using the airline dataset"}
        >>> stat, p_val = dp(airline)
        >>> print(f"{stat:.4f}")
        8.6554

        ```

        ```pycon {.py .python linenums="1" title="Example 2: Using random noise"}
        >>> stat, p_val = dp(noise)
        >>> print(f"{stat:.4f}")
        0.8637

        ```

    ??? equation "Calculation"
        The D'Agostino's $K^2$ test statistic is defined as:

        $$
        K^2 = Z_1(g_1)^2 + Z_2(g_2)^2
        $$

        where:

        - $Z_1(g_1)$ is the standard normal transformation of skewness, and
        - $Z_2(g_2)$ is the standard normal transformation of kurtosis.

    ??? note "Notes"
        This function is a wrapper for the `scipy.stats.normaltest` function.

    ??? success "Credit"
        All credit goes to the [`scipy`](https://docs.scipy.org/) library.

    ??? question "References"
        - D'Agostino, R. B. (1971), "An omnibus test of normality for moderate and large sample size", Biometrika, 58, 341-348
        - D'Agostino, R. and Pearson, E. S. (1973), "Tests for departure from normality", Biometrika, 60, 613-622

    ??? tip "See Also"
        - [`jb()`][ts_stat_tests.normality.algorithms.jb]
        - [`ob()`][ts_stat_tests.normality.algorithms.ob]
        - [`sw()`][ts_stat_tests.normality.algorithms.sw]
        - [`ad()`][ts_stat_tests.normality.algorithms.ad]
    """
    return _dp(a=x, axis=axis, nan_policy=nan_policy)
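
Because `dp()` is documented as a wrapper for `scipy.stats.normaltest`, the `nan_policy` options described above can be sketched against `scipy` directly; the synthetic sample is an illustrative assumption:

```python
import numpy as np
from scipy.stats import normaltest  # the function dp() wraps

rng = np.random.default_rng(1)
sample = rng.normal(size=100)
sample[10] = np.nan  # inject a missing value

# Default "propagate": the NaN flows through to the result.
stat_prop, p_prop = normaltest(sample, nan_policy="propagate")
print(np.isnan(stat_prop))  # True

# "omit": the NaN is dropped and the test runs on the remaining 99 points.
stat_omit, p_omit = normaltest(sample, nan_policy="omit")
print(np.isfinite(stat_omit))  # True

# "raise" would throw a ValueError instead of returning a result.
```

Choosing "omit" is convenient for residual series with occasional gaps, at the cost of silently shrinking the effective sample size.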

ad 🔗

ad(
    x: ArrayLike, dist: VALID_AD_DIST_OPTIONS = "norm"
) -> AndersonResult

Summary

The Anderson-Darling test is a statistical test used to evaluate whether a dataset follows a normal distribution.

Details

The Anderson-Darling test tests the null hypothesis that a sample is drawn from a population that follows a particular distribution. For the Anderson-Darling test, the critical values depend on which distribution is being tested against.

Parameters:

  • x (ArrayLike): Array of sample data. Required.
  • dist (VALID_AD_DIST_OPTIONS): The type of distribution to test against. Default: "norm"

Raises:

  • ValueError: If the input data x is invalid.

Returns:

  • AndersonResult: A named tuple containing the test statistic, critical values, and significance levels:
      • statistic (float): The Anderson-Darling test statistic.
      • critical_values (list[float]): The critical values for this distribution.
      • significance_level (list[float]): The significance levels for the corresponding critical values, in percents.

Examples

Setup

```pycon
>>> from ts_stat_tests.normality.algorithms import ad
>>> from ts_stat_tests.utils.data import data_airline, data_noise
>>> airline = data_airline.values
>>> noise = data_noise
```

Example 1: Using the airline dataset

```pycon
>>> stat, cv, sl = ad(airline)
>>> print(f"{stat:.4f}")
1.8185
```

Example 2: Using random normal data

```pycon
>>> stat, cv, sl = ad(noise)
>>> print(f"{stat:.4f}")
0.2325
```
Calculation

The Anderson-Darling test statistic \(A^2\) is defined as:

\[ A^2 = -n - \sum_{i=1}^n \frac{2i-1}{n} \left[ \ln(F(x_i)) + \ln(1 - F(x_{n-i+1})) \right] \]

where:

  • \(n\) is the sample size,
  • \(F\) is the cumulative distribution function of the specified distribution, and
  • \(x_i\) are the ordered sample values.
Notes

Critical values provided are for the following significance levels:

  • normal/exponential: 15%, 10%, 5%, 2.5%, 1%
  • logistic: 25%, 10%, 5%, 2.5%, 1%, 0.5%
  • Gumbel: 25%, 10%, 5%, 2.5%, 1%

Credit

All credit goes to the scipy library.

References
  • Stephens, M. A. (1974). EDF Statistics for Goodness of Fit and Some Comparisons, Journal of the American Statistical Association, Vol. 69, pp. 730-737.
See Also

  • jb(), ob(), sw(), dp()
Source code in src/ts_stat_tests/normality/algorithms.py
@typechecked
def ad(
    x: ArrayLike,
    dist: VALID_AD_DIST_OPTIONS = "norm",
) -> AndersonResult:
    r"""
    !!! note "Summary"
        The Anderson-Darling test is a statistical test used to evaluate whether a dataset follows a normal distribution.

    ???+ abstract "Details"
        The Anderson-Darling test tests the null hypothesis that a sample is drawn from a population that follows a particular distribution. For the Anderson-Darling test, the critical values depend on which distribution is being tested against.

    Params:
        x (ArrayLike):
            Array of sample data.
        dist (VALID_AD_DIST_OPTIONS):
            The type of distribution to test against.
            Default: `"norm"`

    Raises:
        (ValueError):
            If the input data `x` is invalid.

    Returns:
        (AndersonResult):
            A named tuple containing the test statistic, critical values, and significance levels:
            - statistic (float): The Anderson-Darling test statistic.
            - critical_values (list[float]): The critical values for this distribution.
            - significance_level (list[float]): The significance levels for the corresponding critical values in percents.

    ???+ example "Examples"

        ```pycon {.py .python linenums="1" title="Setup"}
        >>> from ts_stat_tests.normality.algorithms import ad
        >>> from ts_stat_tests.utils.data import data_airline, data_noise
        >>> airline = data_airline.values
        >>> noise = data_noise

        ```

        ```pycon {.py .python linenums="1" title="Example 1: Using the airline dataset"}
        >>> stat, cv, sl = ad(airline)
        >>> print(f"{stat:.4f}")
        1.8185

        ```

        ```pycon {.py .python linenums="1" title="Example 2: Using random normal data"}
        >>> stat, cv, sl = ad(noise)
        >>> print(f"{stat:.4f}")
        0.2325

        ```

    ??? equation "Calculation"
        The Anderson-Darling test statistic $A^2$ is defined as:

        $$
        A^2 = -n - \sum_{i=1}^n \frac{2i-1}{n} \left[ \ln(F(x_i)) + \ln(1 - F(x_{n-i+1})) \right]
        $$

        where:

        - $n$ is the sample size,
        - $F$ is the cumulative distribution function of the specified distribution, and
        - $x_i$ are the ordered sample values.

    ??? note "Notes"
        Critical values provided are for the following significance levels:
        - normal/exponential: 15%, 10%, 5%, 2.5%, 1%
        - logistic: 25%, 10%, 5%, 2.5%, 1%, 0.5%
        - Gumbel: 25%, 10%, 5%, 2.5%, 1%

    ??? success "Credit"
        All credit goes to the [`scipy`](https://docs.scipy.org/) library.

    ??? question "References"
        - Stephens, M. A. (1974). EDF Statistics for Goodness of Fit and Some Comparisons, Journal of the American Statistical Association, Vol. 69, pp. 730-737.

    ??? tip "See Also"
        - [`jb()`][ts_stat_tests.normality.algorithms.jb]
        - [`ob()`][ts_stat_tests.normality.algorithms.ob]
        - [`sw()`][ts_stat_tests.normality.algorithms.sw]
        - [`dp()`][ts_stat_tests.normality.algorithms.dp]
    """
    return _ad(x=x, dist=dist)
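
Since `ad()` wraps `scipy.stats.anderson`, and the Anderson-Darling result carries critical values rather than a p-value, the usual reading is to compare the statistic against each critical value. A sketch using `scipy` directly (the synthetic sample is an illustrative assumption):

```python
import numpy as np
from scipy.stats import anderson  # the function ad() wraps

rng = np.random.default_rng(7)
sample = rng.exponential(scale=1.0, size=300)  # clearly non-normal data

result = anderson(sample, dist="norm")
# Reject normality at a given significance level whenever the statistic
# exceeds the critical value for that level.
for cv, sl in zip(result.critical_values, result.significance_level):
    verdict = "reject" if result.statistic > cv else "fail to reject"
    print(f"A^2={result.statistic:.3f} vs CV={cv:.3f} at {sl}% -> {verdict}")
```

For strongly non-normal data like this, the statistic exceeds even the strictest (1%) critical value, so normality is rejected at every tabulated level.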