Coverage for src/ts_stat_tests/regularity/algorithms.py: 100% (23 statements)
# ============================================================================ #
#                                                                              #
#     Title: Regularity Algorithms                                             #
#     Purpose: Functions to compute regularity measures for time series data.  #
#                                                                              #
# ============================================================================ #


# ---------------------------------------------------------------------------- #
#                                                                              #
#     Overview                                                              ####
#                                                                              #
# ---------------------------------------------------------------------------- #


# ---------------------------------------------------------------------------- #
#  Description                                                              ####
# ---------------------------------------------------------------------------- #
"""
!!! note "Summary"
    This module contains algorithms to compute regularity measures for time series data, including approximate entropy, sample entropy, spectral entropy, permutation entropy, and SVD entropy.
"""

# ---------------------------------------------------------------------------- #
#                                                                              #
#     Setup                                                                 ####
#                                                                              #
# ---------------------------------------------------------------------------- #


# ---------------------------------------------------------------------------- #
#  Imports                                                                  ####
# ---------------------------------------------------------------------------- #


# ## Python StdLib Imports ----
from typing import Literal, Optional, Union

# ## Python Third Party Imports ----
import numpy as np
from antropy import (
    app_entropy as a_app_entropy,
    perm_entropy as a_perm_entropy,
    sample_entropy as a_sample_entropy,
    spectral_entropy as a_spectral_entropy,
    svd_entropy as a_svd_entropy,
)
from numpy.typing import ArrayLike, NDArray
from typeguard import typechecked


# ---------------------------------------------------------------------------- #
#  Exports                                                                  ####
# ---------------------------------------------------------------------------- #


__all__: list[str] = [
    "approx_entropy",
    "sample_entropy",
    "spectral_entropy",
    "permutation_entropy",
    "svd_entropy",
]


## --------------------------------------------------------------------------- #
##  Constants                                                               ####
## --------------------------------------------------------------------------- #


VALID_KDTREE_METRIC_OPTIONS = Literal[
    "euclidean", "l2", "minkowski", "p", "manhattan", "cityblock", "l1", "chebyshev", "infinity"
]


VALID_SPECTRAL_ENTROPY_METHOD_OPTIONS = Literal["fft", "welch"]


# ---------------------------------------------------------------------------- #
#                                                                              #
#     Algorithms                                                            ####
#                                                                              #
# ---------------------------------------------------------------------------- #

@typechecked
def approx_entropy(
    x: ArrayLike,
    order: int = 2,
    tolerance: Optional[float] = None,
    metric: VALID_KDTREE_METRIC_OPTIONS = "chebyshev",
) -> float:
    r"""
    !!! note "Summary"
        Approximate entropy is a measure of the amount of regularity or predictability in a time series. It is used to quantify the degree of self-similarity of a signal over different time scales, and can be useful for detecting underlying patterns or trends in data.

        This function implements the [`app_entropy()`](https://raphaelvallat.com/antropy/build/html/generated/antropy.app_entropy.html) function from the [`AntroPy`](https://raphaelvallat.com/antropy/) library.

    ???+ abstract "Details"
        Approximate entropy is a technique used to quantify the amount of regularity and the unpredictability of fluctuations over time-series data. Smaller values indicate that the data is more regular and predictable.

        To calculate approximate entropy, we first need to define a window size or scale factor, which determines the length of the subsequences that are used to compare the similarity of the time series. We then compare all possible pairs of subsequences within the time series and calculate the probability that two subsequences are within a certain tolerance level of each other, where the tolerance level is usually expressed as a percentage of the standard deviation of the time series.

        The approximate entropy is then defined as the negative natural logarithm of the average probability of similarity across all possible pairs of subsequences, normalized by the length of the time series and the scale factor.

        The approximate entropy measure is useful in a variety of applications, such as the analysis of physiological signals, financial time series, and climate data. It can be used to detect changes in the regularity or predictability of a time series over time, and can provide insights into the underlying dynamics or mechanisms that generate the signal.

    Params:
        x (ArrayLike):
            One-dimensional time series of shape `(n_times,)`.
        order (int, optional):
            Embedding dimension.<br>
            Defaults to `2`.
        tolerance (Optional[float], optional):
            Tolerance level or similarity criterion. If `None` (default), it is set to $0.2 \times \text{std}(x)$.<br>
            Defaults to `None`.
        metric (VALID_KDTREE_METRIC_OPTIONS, optional):
            Name of the distance metric function used with [`sklearn.neighbors.KDTree`](https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KDTree.html#sklearn.neighbors.KDTree). Default is to use the [Chebyshev distance](https://en.wikipedia.org/wiki/Chebyshev_distance). For a full list of all available metrics, see [`sklearn.metrics.pairwise.distance_metrics`](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.pairwise_distances.html) and [`scipy.spatial.distance`](https://docs.scipy.org/doc/scipy/reference/spatial.distance.html)<br>
            Defaults to `"chebyshev"`.

    Returns:
        (float):
            The approximate entropy score.

    ???+ example "Examples"

        ```pycon {.py .python linenums="1" title="Setup"}
        >>> from ts_stat_tests.regularity.algorithms import approx_entropy
        >>> from ts_stat_tests.utils.data import data_airline, data_random
        >>> airline = data_airline.values
        >>> random = data_random

        ```

        ```pycon {.py .python linenums="1" title="Example 1: Airline Passengers Data"}
        >>> print(f"{approx_entropy(x=airline):.4f}")
        0.6451

        ```

        ```pycon {.py .python linenums="1" title="Example 2: Random Data"}
        >>> print(f"{approx_entropy(x=random):.4f}")
        1.8177

        ```

    ??? equation "Calculation"
        The equation for ApEn is:

        $$
        \text{ApEn}(m, r, N) = \phi_m(r) - \phi_{m+1}(r)
        $$

        where:

        - $m$ is the embedding dimension,
        - $r$ is the tolerance or similarity criterion,
        - $N$ is the length of the time series, and
        - $\phi_m(r)$ and $\phi_{m+1}(r)$ are the logarithms of the probabilities that two sequences of $m$ data points in the time series that are similar to each other within a tolerance $r$ remain similar for the next data point, for $m$ and $m+1$, respectively.
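
        The following is a minimal, NumPy-only sketch of this recipe. It illustrates the definition only and is not the `antropy` implementation (which is optimised); the names `_phi` and `approx_entropy_sketch` are hypothetical. Distances use the Chebyshev (maximum absolute difference) metric and the default tolerance $0.2 \times \text{std}(x)$.

        ```python
        import numpy as np
        from typing import Optional

        def _phi(x: np.ndarray, m: int, r: float) -> float:
            # Build all length-m subsequences (templates)
            templates = np.array([x[i : i + m] for i in range(len(x) - m + 1)])
            # For each template, the fraction of templates within tolerance r (Chebyshev distance)
            counts = [np.mean(np.max(np.abs(templates - t), axis=1) <= r) for t in templates]
            # Average of the natural logarithms of those fractions
            return float(np.mean(np.log(counts)))

        def approx_entropy_sketch(x: np.ndarray, m: int = 2, r: Optional[float] = None) -> float:
            r = 0.2 * np.std(x) if r is None else r
            return _phi(x, m, r) - _phi(x, m + 1, r)
        ```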
    ??? note "Notes"
        - **Inputs**: `x` is a 1-dimensional array. It represents time-series data, ideally with each element in the array being a measurement or value taken at regular time intervals.
        - **Settings**: `order` is the embedding dimension, which determines the number of consecutive values used to form each compared subsequence. If the embedding dimension is too small, we may miss important patterns. If it is too large, we may overfit noise.
        - **Metric**: The Chebyshev metric is often used because it is a robust and computationally efficient way to measure the distance between two time series.

    ??? success "Credit"
        All credit goes to the [`AntroPy`](https://raphaelvallat.com/antropy/) library.

    ??? question "References"
        - [Richman, J. S. et al. (2000). Physiological time-series analysis using approximate entropy and sample entropy. American Journal of Physiology-Heart and Circulatory Physiology, 278(6), H2039-H2049](https://journals.physiology.org/doi/epdf/10.1152/ajpheart.2000.278.6.H2039)
        - [SK-Learn: Pairwise metrics, Affinities and Kernels](https://scikit-learn.org/stable/modules/metrics.html#metrics)
        - [Spatial data structures and algorithms](https://docs.scipy.org/doc/scipy/tutorial/spatial.html)

    ??? tip "See Also"
        - [`antropy.app_entropy`](https://raphaelvallat.com/antropy/build/html/generated/antropy.app_entropy.html)
        - [`antropy.sample_entropy`](https://raphaelvallat.com/antropy/build/html/generated/antropy.sample_entropy.html)
        - [`antropy.perm_entropy`](https://raphaelvallat.com/antropy/build/html/generated/antropy.perm_entropy.html)
        - [`antropy.spectral_entropy`](https://raphaelvallat.com/antropy/build/html/generated/antropy.spectral_entropy.html)
    """
    return a_app_entropy(
        x=x,
        order=order,
        tolerance=tolerance,
        metric=metric,
    )

@typechecked
def sample_entropy(
    x: ArrayLike,
    order: int = 2,
    tolerance: Optional[float] = None,
    metric: VALID_KDTREE_METRIC_OPTIONS = "chebyshev",
) -> float:
    r"""
    !!! note "Summary"
        Sample entropy is a measure of the amount of regularity or predictability in a time series. It is used to quantify the degree of self-similarity of a signal over different time scales, and can be useful for detecting underlying patterns or trends in data.

        This function implements the [`sample_entropy()`](https://raphaelvallat.com/antropy/build/html/generated/antropy.sample_entropy.html) function from the [`AntroPy`](https://raphaelvallat.com/antropy/) library.

    ???+ abstract "Details"
        Sample entropy is a modification of approximate entropy, used for assessing the complexity of physiological time-series signals. It has two advantages over approximate entropy: data length independence and a relatively trouble-free implementation. Large values indicate high complexity whereas smaller values characterize more self-similar and regular signals.

        The value of $\text{SampEn}$ ranges from zero ($0$) to infinity ($\infty$), with lower values indicating higher regularity or predictability in the time series. A time series with high $\text{SampEn}$ is more unpredictable or irregular, whereas a time series with low $\text{SampEn}$ is more regular or predictable.

        Sample entropy is often used in time series forecasting to assess the complexity of the data and to determine whether a time series is suitable for modeling with a particular forecasting method, such as ARIMA or neural networks.
        Choosing an appropriate embedding dimension is crucial in ensuring that the sample entropy calculation is robust and reliable, and captures the essential features of the time series in a meaningful way. This allows us to make more accurate and informative inferences about the behavior of the system that generated the data, and can be useful in a wide range of applications, from signal processing to data analysis and beyond.
    Params:
        x (ArrayLike):
            One-dimensional time series of shape `(n_times,)`.
        order (int, optional):
            Embedding dimension.<br>
            Defaults to `2`.
        tolerance (Optional[float], optional):
            Tolerance level or similarity criterion. If `None` (default), it is set to $0.2 \times \text{std}(x)$.<br>
            Defaults to `None`.
        metric (VALID_KDTREE_METRIC_OPTIONS, optional):
            Name of the distance metric function used with [`sklearn.neighbors.KDTree`](https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KDTree.html#sklearn.neighbors.KDTree). Default is to use the [Chebyshev distance](https://en.wikipedia.org/wiki/Chebyshev_distance). For a full list of all available metrics, see [`sklearn.metrics.pairwise.distance_metrics`](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.pairwise_distances.html) and [`scipy.spatial.distance`](https://docs.scipy.org/doc/scipy/reference/spatial.distance.html)<br>
            Defaults to `"chebyshev"`.

    Returns:
        (float):
            The sample entropy score.

    ???+ example "Examples"

        ```pycon {.py .python linenums="1" title="Setup"}
        >>> from ts_stat_tests.regularity.algorithms import sample_entropy
        >>> from ts_stat_tests.utils.data import data_airline, data_random
        >>> airline = data_airline.values
        >>> random = data_random

        ```

        ```pycon {.py .python linenums="1" title="Example 1: Airline Passengers Data"}
        >>> print(f"{sample_entropy(x=airline):.4f}")
        0.6177

        ```

        ```pycon {.py .python linenums="1" title="Example 2: Random Data"}
        >>> print(f"{sample_entropy(x=random):.4f}")
        2.2017

        ```
    ??? equation "Calculation"
        The equation for sample entropy (SampEn) is as follows:

        $$
        \text{SampEn}(m, r, N) = - \log \left( \frac {C_{m+1}(r)} {C_m(r)} \right)
        $$

        where:

        - $m$ is the embedding dimension,
        - $r$ is the tolerance or similarity criterion,
        - $N$ is the length of the time series, and
        - $C_m(r)$ and $C_{m+1}(r)$ are the number of $m$-tuples (vectors of $m$ consecutive data points) that are within a distance $r$ of each other, and the number of $(m+1)$-tuples with the same property, respectively.
        The calculation of sample entropy involves the following steps:

        1. Choose the values of $m$ and $r$.
        2. Construct $m$-tuples from the time series data.
        3. Compute the number of $m$-tuples that are within a distance $r$ of each other ($C_m(r)$).
        4. Compute the number of $(m+1)$-tuples that are within a distance $r$ of each other ($C_{m+1}(r)$).
        5. Compute the value of $\text{SampEn}$ using the formula above.
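
        The following is a minimal sketch of these steps. It illustrates the definition only and is not the `antropy` implementation; the names `_pairs_within` and `sample_entropy_sketch` are hypothetical. Distances use the Chebyshev metric, self-matches are excluded, and the default tolerance is $0.2 \times \text{std}(x)$.

        ```python
        import numpy as np
        from typing import Optional

        def _pairs_within(templates: np.ndarray, r: float) -> int:
            # Count pairs of templates whose Chebyshev distance is <= r (self-matches excluded)
            count = 0
            for i in range(len(templates) - 1):
                d = np.max(np.abs(templates[i + 1 :] - templates[i]), axis=1)
                count += int(np.sum(d <= r))
            return count

        def sample_entropy_sketch(x: np.ndarray, m: int = 2, r: Optional[float] = None) -> float:
            x = np.asarray(x, dtype=float)
            r = 0.2 * x.std() if r is None else r
            n = len(x)
            # Both template sets use the same n - m starting points
            tuples_m = np.array([x[i : i + m] for i in range(n - m)])
            tuples_m1 = np.array([x[i : i + m + 1] for i in range(n - m)])
            c_m = _pairs_within(tuples_m, r)    # C_m(r)
            c_m1 = _pairs_within(tuples_m1, r)  # C_{m+1}(r)
            return -float(np.log(c_m1 / c_m))
        ```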
    ??? note "Notes"
        - Note that if `metric == 'chebyshev'` and `len(x) < 5000` points, then the sample entropy is computed using a fast custom Numba script. For other distance metrics or longer time-series, the sample entropy is computed using code from the [`mne-features`](https://mne.tools/mne-features/) package by Jean-Baptiste Schiratti and Alexandre Gramfort (requires sklearn).
        - The embedding dimension is important in the calculation of sample entropy because it affects the sensitivity of the measure to different patterns in the data. If the embedding dimension is too small, we may miss important patterns or variations. If it is too large, we may overfit the data.

    ??? success "Credit"
        All credit goes to the [`AntroPy`](https://raphaelvallat.com/antropy/) library.

    ??? question "References"
        - [Richman, J. S. et al. (2000). Physiological time-series analysis using approximate entropy and sample entropy. American Journal of Physiology-Heart and Circulatory Physiology, 278(6), H2039-H2049](https://journals.physiology.org/doi/epdf/10.1152/ajpheart.2000.278.6.H2039)
        - [SK-Learn: Pairwise metrics, Affinities and Kernels](https://scikit-learn.org/stable/modules/metrics.html#metrics)
        - [Spatial data structures and algorithms](https://docs.scipy.org/doc/scipy/tutorial/spatial.html)

    ??? tip "See Also"
        - [`antropy.app_entropy`](https://raphaelvallat.com/antropy/build/html/generated/antropy.app_entropy.html)
        - [`antropy.sample_entropy`](https://raphaelvallat.com/antropy/build/html/generated/antropy.sample_entropy.html)
        - [`antropy.perm_entropy`](https://raphaelvallat.com/antropy/build/html/generated/antropy.perm_entropy.html)
        - [`antropy.spectral_entropy`](https://raphaelvallat.com/antropy/build/html/generated/antropy.spectral_entropy.html)
        - [`sklearn.neighbors.KDTree`](https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KDTree.html)
        - [`sklearn.metrics.pairwise_distances`](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.pairwise_distances.html)
        - [`scipy.spatial.distance`](https://docs.scipy.org/doc/scipy/reference/spatial.distance.html)
    """
    return a_sample_entropy(
        x=x,
        order=order,
        tolerance=tolerance,
        metric=metric,
    )

@typechecked
def permutation_entropy(
    x: ArrayLike,
    order: int = 3,
    delay: Union[int, list, NDArray[np.int64]] = 1,
    normalize: bool = False,
) -> float:
    r"""
    !!! note "Summary"
        Permutation entropy is a measure of the complexity or randomness of a time series. It is based on counting the ordinal (permutation) patterns formed by consecutive values of the time series and calculating the entropy of their distribution.

        This function implements the [`perm_entropy()`](https://raphaelvallat.com/antropy/build/html/generated/antropy.perm_entropy.html) function from the [`AntroPy`](https://raphaelvallat.com/antropy/) library.

    ???+ abstract "Details"
        The permutation entropy is a complexity measure for time-series first introduced by Bandt and Pompe in 2002.

        It is particularly useful for detecting nonlinear dynamics and nonstationarity in the data. The value of permutation entropy ranges from $0$ to $\log_2(\text{order}!)$, where the lower bound is attained for an increasing or decreasing sequence of values, and the upper bound for a completely random system where all possible permutations appear with the same probability.

        Choosing an appropriate embedding dimension is crucial in ensuring that the permutation entropy calculation is robust and reliable, and captures the essential features of the time series in a meaningful way.

    Params:
        x (ArrayLike):
            One-dimensional time series of shape `(n_times,)`.
        order (int, optional):
            Order of permutation entropy.<br>
            Defaults to `3`.
        delay (Union[int, list, NDArray[np.int64]], optional):
            Time delay (lag). If multiple values are passed, the average permutation entropy across all these delays is calculated.<br>
            Defaults to `1`.
        normalize (bool, optional):
            If `True`, divide by $\log_2(\text{order}!)$ to normalize the entropy between $0$ and $1$. Otherwise, return the permutation entropy in bits.<br>
            Defaults to `False`.

    Returns:
        (float):
            The permutation entropy of the data set. If multiple delays are passed, the average permutation entropy across those delays is returned.

    ???+ example "Examples"

        ```pycon {.py .python linenums="1" title="Setup"}
        >>> from ts_stat_tests.regularity.algorithms import permutation_entropy
        >>> from ts_stat_tests.utils.data import data_airline, data_random
        >>> airline = data_airline.values
        >>> random = data_random

        ```

        ```pycon {.py .python linenums="1" title="Example 1: Airline Passengers Data"}
        >>> print(f"{permutation_entropy(x=airline):.4f}")
        2.3601

        ```

        ```pycon {.py .python linenums="1" title="Example 2: Random Data (Normalized)"}
        >>> print(f"{permutation_entropy(x=random, normalize=True):.4f}")
        0.9997

        ```

    ??? equation "Calculation"
        The formula for permutation entropy ($PE$) is as follows:

        $$
        PE(n) = - \sum_{i=1}^{n!} p(i) \times \log_2(p(i))
        $$

        where:

        - $n$ is the embedding dimension (`order`),
        - $p(i)$ is the probability of the $i$-th ordinal pattern.

        The embedded matrix $Y$ is created by:

        $$
        \begin{align}
        y(i) &= [x_i, x_{i+\text{delay}}, \dots, x_{i+(\text{order}-1) \times \text{delay}}] \\
        Y &= [y(1), y(2), \dots, y(N-(\text{order}-1) \times \text{delay})]^T
        \end{align}
        $$
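
        The following is a minimal sketch of this procedure. It illustrates the definition only and is not the `antropy` implementation; the function name `perm_entropy_sketch` is hypothetical, and ordinal patterns are encoded with `numpy.argsort`.

        ```python
        import numpy as np
        from math import factorial

        def perm_entropy_sketch(x: np.ndarray, order: int = 3, delay: int = 1, normalize: bool = False) -> float:
            x = np.asarray(x, dtype=float)
            n = len(x) - (order - 1) * delay
            # Embedded matrix Y: each row is one delay vector y(i)
            emb = np.array([x[i : i + (order - 1) * delay + 1 : delay] for i in range(n)])
            # Ordinal pattern of each row, encoded as the permutation that sorts it
            patterns = np.argsort(emb, axis=1)
            # Relative frequency p(i) of each distinct ordinal pattern
            _, counts = np.unique(patterns, axis=0, return_counts=True)
            p = counts / counts.sum()
            pe = -np.sum(p * np.log2(p))
            return float(pe / np.log2(factorial(order))) if normalize else float(pe)
        ```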

    ??? note "Notes"
        - The embedding dimension (`order`) determines the number of values used to construct each permutation pattern. If too small, patterns may be missed. If too large, overfitting to noise may occur.

    ??? success "Credit"
        All credit goes to the [`AntroPy`](https://raphaelvallat.com/antropy/) library.

    ??? question "References"
        - [Bandt, Christoph, and Bernd Pompe. "Permutation entropy: a natural complexity measure for time series." Physical review letters 88.17 (2002): 174102](http://materias.df.uba.ar/dnla2019c1/files/2019/03/permutation_entropy.pdf)

    ??? tip "See Also"
        - [`antropy.perm_entropy`](https://raphaelvallat.com/antropy/build/html/generated/antropy.perm_entropy.html)
        - [`antropy.app_entropy`](https://raphaelvallat.com/antropy/build/html/generated/antropy.app_entropy.html)
        - [`antropy.sample_entropy`](https://raphaelvallat.com/antropy/build/html/generated/antropy.sample_entropy.html)
        - [`antropy.spectral_entropy`](https://raphaelvallat.com/antropy/build/html/generated/antropy.spectral_entropy.html)
    """
    return a_perm_entropy(
        x=x,
        order=order,
        delay=delay,  # type: ignore[arg-type] # antropy function can handle Union[int, list[int], NDArray[np.int64]], however the function signature is not annotated as such
        normalize=normalize,
    )

@typechecked
def spectral_entropy(
    x: ArrayLike,
    sf: float = 1,
    method: VALID_SPECTRAL_ENTROPY_METHOD_OPTIONS = "fft",
    nperseg: Optional[int] = None,
    normalize: bool = False,
    axis: int = -1,
) -> Union[float, NDArray[np.float64]]:
    r"""
    !!! note "Summary"
        Spectral entropy is a measure of the amount of complexity or unpredictability in a signal's frequency domain representation. It is used to quantify the degree of randomness or regularity in the power spectrum of a signal.

        This function implements the [`spectral_entropy()`](https://raphaelvallat.com/antropy/build/html/generated/antropy.spectral_entropy.html) function from the [`AntroPy`](https://raphaelvallat.com/antropy/) library.

    ???+ abstract "Details"
        Spectral entropy is defined to be the Shannon entropy of the power spectral density (PSD) of the data. It is based on the Shannon entropy, which is a measure of the uncertainty or information content of a probability distribution.

        The value of spectral entropy ranges from $0$ to $\log_2(N)$, where $N$ is the number of frequency bands. Lower values indicate a more concentrated or regular distribution of power, while higher values indicate a more spread-out or irregular distribution.

        Spectral entropy is particularly useful for detecting periodicity and cyclical patterns, as well as changes in the frequency distribution over time.

    Params:
        x (ArrayLike):
            One-dimensional or N-dimensional data array.
        sf (float, optional):
            Sampling frequency, in Hz.<br>
            Defaults to `1`.
        method (VALID_SPECTRAL_ENTROPY_METHOD_OPTIONS, optional):
            Spectral estimation method: `'fft'` or `'welch'`.<br>
            - `'fft'`: Fourier Transformation ([`scipy.signal.periodogram()`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.periodogram.html#scipy.signal.periodogram))<br>
            - `'welch'`: Welch periodogram ([`scipy.signal.welch()`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.welch.html#scipy.signal.welch))<br>
            Defaults to `"fft"`.
        nperseg (Optional[int], optional):
            Length of each FFT segment for Welch method. If `None`, uses `scipy`'s default of 256 samples.<br>
            Defaults to `None`.
        normalize (bool, optional):
            If `True`, divide by $\log_2(\text{psd.size})$ to normalize the spectral entropy to be between $0$ and $1$. Otherwise, return the spectral entropy in bits.<br>
            Defaults to `False`.
        axis (int, optional):
            The axis along which the entropy is calculated. Default is the last axis.<br>
            Defaults to `-1`.

    Returns:
        (Union[float, NDArray[np.float64]]):
            The spectral entropy score. Returned as a float for 1D input, or a numpy array for N-dimensional input.

    ???+ example "Examples"

        ```pycon {.py .python linenums="1" title="Setup"}
        >>> import numpy as np
        >>> from ts_stat_tests.regularity.algorithms import spectral_entropy
        >>> from ts_stat_tests.utils.data import data_airline
        >>> airline = data_airline.values

        ```

        ```pycon {.py .python linenums="1" title="Example 1: Airline Passengers Data"}
        >>> print(f"{spectral_entropy(x=airline, sf=12):.4f}")
        2.6538

        ```

        ```pycon {.py .python linenums="1" title="Example 2: Welch method for spectral entropy"}
        >>> data_sine = np.sin(2 * np.pi * 1 * np.arange(400) / 100)
        >>> print(f"{spectral_entropy(x=data_sine, sf=100, method='welch'):.4f}")
        1.2938

        ```

    ??? equation "Calculation"
        The spectral entropy ($SE$) is defined as:

        $$
        H(x, f_s) = - \sum_{i=0}^{f_s/2} P(i) \times \log_2(P(i))
        $$

        where:

        - $P(i)$ is the normalized power spectral density (PSD) at the $i$-th frequency band,
        - $f_s$ is the sampling frequency.
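
        The following is a minimal FFT-based sketch of this formula. It illustrates the definition only and is not the `antropy` implementation; the function name `spectral_entropy_sketch` is hypothetical, with the PSD estimated by `scipy.signal.periodogram` and normalised so that it sums to one.

        ```python
        import numpy as np
        from scipy.signal import periodogram

        def spectral_entropy_sketch(x: np.ndarray, sf: float = 1.0, normalize: bool = False) -> float:
            _, psd = periodogram(x, fs=sf)      # power spectral density estimate
            psd_norm = psd / psd.sum()          # normalise to a probability distribution
            nonzero = psd_norm[psd_norm > 0]    # drop empty bands to avoid log2(0)
            se = -np.sum(nonzero * np.log2(nonzero))
            return float(se / np.log2(psd.size)) if normalize else float(se)
        ```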

    ??? note "Notes"
        - The power spectrum represents the energy of the signal at different frequencies. High spectral entropy indicates multiple sources or processes with different frequencies, while low spectral entropy suggests a dominant frequency or periodicity.

    ??? success "Credit"
        All credit goes to the [`AntroPy`](https://raphaelvallat.com/antropy/) library.

    ??? question "References"
        - [Inouye, T. et al. (1991). Quantification of EEG irregularity by use of the entropy of the power spectrum. Electroencephalography and clinical neurophysiology, 79(3), 204-210.](https://pubmed.ncbi.nlm.nih.gov/1714811/)
        - [Wikipedia: Spectral density](https://en.wikipedia.org/wiki/Spectral_density)
        - [Wikipedia: Welch's method](https://en.wikipedia.org/wiki/Welch%27s_method)

    ??? tip "See Also"
        - [`antropy.spectral_entropy`](https://raphaelvallat.com/antropy/build/html/generated/antropy.spectral_entropy.html)
        - [`antropy.app_entropy`](https://raphaelvallat.com/antropy/build/html/generated/antropy.app_entropy.html)
        - [`antropy.sample_entropy`](https://raphaelvallat.com/antropy/build/html/generated/antropy.sample_entropy.html)
        - [`antropy.perm_entropy`](https://raphaelvallat.com/antropy/build/html/generated/antropy.perm_entropy.html)
    """
    return a_spectral_entropy(
        x=x,
        sf=sf,
        method=method,
        nperseg=nperseg,
        normalize=normalize,
        axis=axis,
    )

@typechecked
def svd_entropy(
    x: ArrayLike,
    order: int = 3,
    delay: int = 1,
    normalize: bool = False,
) -> float:
    r"""
    !!! note "Summary"
        SVD entropy is a measure of the complexity or randomness of a time series based on Singular Value Decomposition (SVD).

        This function implements the [`svd_entropy()`](https://raphaelvallat.com/antropy/build/html/generated/antropy.svd_entropy.html) function from the [`AntroPy`](https://raphaelvallat.com/antropy/) library.

    ???+ abstract "Details"
        SVD entropy is calculated by first embedding the time series into a matrix, then performing SVD on that matrix to obtain the singular values. The entropy is then calculated from the normalized singular values.

    Params:
        x (ArrayLike):
            One-dimensional time series of shape `(n_times,)`.
        order (int, optional):
            Order of the SVD entropy (embedding dimension).<br>
            Defaults to `3`.
        delay (int, optional):
            Time delay (lag).<br>
            Defaults to `1`.
        normalize (bool, optional):
            If `True`, divide by $\log_2(\text{order})$ to normalize the entropy between $0$ and $1$. Otherwise, return the SVD entropy in bits.<br>
            Defaults to `False`.

    Returns:
        (float):
            The SVD entropy of the data set.

    ???+ example "Examples"

        ```pycon {.py .python linenums="1" title="Setup"}
        >>> from ts_stat_tests.regularity.algorithms import svd_entropy
        >>> from ts_stat_tests.utils.data import data_random
        >>> random = data_random

        ```

        ```pycon {.py .python linenums="1" title="Example 1: Basic SVD entropy"}
        >>> print(f"{svd_entropy(random):.4f}")
        1.3514

        ```

    ??? equation "Calculation"
        The SVD entropy is calculated as the Shannon entropy of the singular values of the embedded matrix.
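
        The following is a minimal sketch of that calculation. It illustrates the definition only and is not the `antropy` implementation; the function name `svd_entropy_sketch` is hypothetical, with the singular values normalised to sum to one before taking the Shannon entropy.

        ```python
        import numpy as np

        def svd_entropy_sketch(x: np.ndarray, order: int = 3, delay: int = 1, normalize: bool = False) -> float:
            x = np.asarray(x, dtype=float)
            n = len(x) - (order - 1) * delay
            # Embedded matrix: each row is one delay vector of length `order`
            emb = np.array([x[i : i + (order - 1) * delay + 1 : delay] for i in range(n)])
            s = np.linalg.svd(emb, compute_uv=False)  # singular values
            s = s / s.sum()                           # normalise to sum to 1
            s = s[s > 0]                              # drop zero singular values to avoid log2(0)
            h = -np.sum(s * np.log2(s))
            return float(h / np.log2(order)) if normalize else float(h)
        ```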

    ??? note "Notes"
        - Singular Value Decomposition (SVD) is a factorization of a real or complex matrix. It is the generalization of the eigendecomposition of a positive semidefinite normal matrix.

    ??? success "Credit"
        All credit goes to the [`AntroPy`](https://raphaelvallat.com/antropy/) library.

    ??? tip "See Also"
        - [`antropy.svd_entropy`](https://raphaelvallat.com/antropy/build/html/generated/antropy.svd_entropy.html)
        - [`ts_stat_tests.regularity.algorithms.approx_entropy`][ts_stat_tests.regularity.algorithms.approx_entropy]
        - [`ts_stat_tests.regularity.algorithms.sample_entropy`][ts_stat_tests.regularity.algorithms.sample_entropy]
        - [`ts_stat_tests.regularity.algorithms.permutation_entropy`][ts_stat_tests.regularity.algorithms.permutation_entropy]
        - [`ts_stat_tests.regularity.algorithms.spectral_entropy`][ts_stat_tests.regularity.algorithms.spectral_entropy]
    """
    return a_svd_entropy(
        x=x,
        order=order,
        delay=delay,
        normalize=normalize,
    )