diff --git a/.github/workflows/test.yml b/.github/workflows/test.yml index a5d95b3d..4dee0d47 100644 --- a/.github/workflows/test.yml +++ b/.github/workflows/test.yml @@ -5,9 +5,9 @@ name: Run Tests on: push: - branches: [ "master" ] + branches: [ "main", "master" ] pull_request: - branches: [ "master" ] + branches: [ "main", "master" ] jobs: build: diff --git a/CHANGES.rst b/CHANGES.rst index d40aa093..49bc9948 100644 --- a/CHANGES.rst +++ b/CHANGES.rst @@ -1,6 +1,46 @@ Change Log =================== +Unreleased 4.x +-------------- + +Changes: + +- Includes the `main` branch in continuous integration automation. +- [**BREAKING**] Previously when tallying the uncertainty for a `UFloat` object, the + contribution from other `UFloat` objects with `std_dev == 0` were excluded. Now this + special casing for `std_dev == 0` has been removed so that the contribution from all + contributing `UFloat` objects is included. This changes the behavior in certain + corner cases where `UFloat` `f` is derived from `UFloat` `x`, `x` has `x.s == 0`, but + the derivative of `f` with respect to `x` is `NaN`. For example, previously + `(-1)**ufloat(1, 0)` gave `-1.0+/-0`. The justification for this was that the second + `UFloat` with `std_dev` of `0` should be treated like a regular float. Now the same + calculation returns `-1.0+/-nan`. In this case the `UFloat` in the second argument + of the power operator is treated as a degenerate `UFloat`. + +Removes: + +- [**BREAKING**] Removes certain deprecated `umath` functions and + `AffineScalarFunc`/`UFloat` methods. The following `umath` functions are removed: + `ceil`, `copysign`, `fabs`, `factorial`, `floor`, `fmod`, `frexp`, `ldexp`, `modf`, + `trunc`. The following `AffineScalarFunc`/`UFloat` methods are removed: + `__floordiv__`, `__mod__`, `__abs__`, `__trunc__`, `__lt__`, `__le__`, `__gt__`, + `__ge__`, `__bool__`. +- [**BREAKING**] Removes the `uncertainties.unumpy.matrix` class and the corresponding + `umatrix` constructor function. The `unumpy_to_numpy_matrix` function is also + removed. Various `unumpy` functions have dropped support for matrix compatibility. +- [**BREAKING**] Previously it was possible for a `UFloat` object to compare equal to a + `float` object if the `UFloat` `standard_deviation` was zero and the `UFloat` + `nominal_value` was equal to the `float`. Now, when an equality comparison is made + between a `UFloat` object and another object, if the object is not a `UFloat` then + the equality comparison is deferred to this other object. For the specific case of + `float` this means that the equality comparison always returns `False`. +- [**BREAKING**] The `uncertainties` package is generally dropping formal support for + edge cases involving `UFloat` objects with `std_dev == 0`. +- [**BREAKING**] Previously if a negative `std_dev` was used to construct a `UFloat` + object a custom `NegativeStdDev` exception was raised. Now a standard `ValueError` + exception is raised. + Unreleased ---------- diff --git a/doc/numpy_guide.rst b/doc/numpy_guide.rst index 32e064f8..3ff9f1a8 100644 --- a/doc/numpy_guide.rst +++ b/doc/numpy_guide.rst @@ -76,41 +76,6 @@ through NumPy, thanks to NumPy's support of arrays of arbitrary objects: >>> arr = np.array([ufloat(1, 0.1), ufloat(2, 0.002)]) -.. index:: - single: matrices; creation and manipulation - single: creation; matrices - -Matrices -^^^^^^^^ - -.. warning:: - ``unumpy.umatrix`` is deprecated and will be removed in Uncertainties 4.0. 
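A minimal sketch (not part of the patch) illustrating the changelog entries above, assuming the post-change 4.x behavior; the printed power result is taken from the changelog itself:

from uncertainties import ufloat

x = ufloat(1, 0)               # zero std_dev: construction may emit a UserWarning
print((-1) ** x)               # expected: -1.0+/-nan (previously -1.0+/-0)

print(ufloat(3, 0) == 3.0)     # expected: False -- equality with a float now
                               # defers to float.__eq__ and never succeeds

try:
    ufloat(1, -0.1)
except ValueError:             # previously a custom NegativeStdDev exception
    print("negative std_dev now raises ValueError")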
- -Matrices of numbers with uncertainties are best created in one of -two ways. The first way is similar to using :func:`uarray`: - ->>> mat = unumpy.umatrix([1, 2], [0.01, 0.002]) - -Matrices can also be built by converting arrays of numbers with -uncertainties into matrices through the :class:`unumpy.matrix` class: - ->>> mat = unumpy.matrix(arr) - -:class:`unumpy.matrix` objects behave like :class:`numpy.matrix` -objects of numbers with uncertainties, but with better support for -some operations (such as matrix inversion). For instance, regular -NumPy matrices cannot be inverted, if they contain numbers with -uncertainties (i.e., ``numpy.matrix([[ufloat(…), …]]).I`` does not -work). This is why the :class:`unumpy.matrix` class is provided: both -the inverse and the pseudo-inverse of a matrix can be calculated in -the usual way: if :data:`mat` is a :class:`unumpy.matrix`, - ->>> print(mat.I) -[[0.19999999999999996+/-0.012004265908417718] - [0.3999999999999999+/-0.01600179989876138]] - -does calculate the inverse or pseudo-inverse of :data:`mat` with -uncertainties. .. index:: pair: nominal value; uniform access (array) @@ -120,14 +85,11 @@ uncertainties. Uncertainties and nominal values ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -Nominal values and uncertainties in arrays (and matrices) can be -directly accessed (through functions that work on pure float arrays -too): +Nominal values and uncertainties in arrays can be directly accessed (through functions +that work on pure float arrays too): >>> unumpy.nominal_values(arr) array([1., 2.]) ->>> unumpy.std_devs(mat) -matrix([[0.1 , 0.002]]) .. index:: mathematical operation; on an array of numbers @@ -263,6 +225,7 @@ numbers with uncertainties, the **matrix inverse and pseudo-inverse**: >>> print(unumpy.ulinalg.inv([[ufloat(2, 0.1)]])) [[0.5+/-0.025]] +>>> mat = np.array([[ufloat(1, 0.1), ufloat(2, 0.002)]]) >>> print(unumpy.ulinalg.pinv(mat)) [[0.19999999999999996+/-0.012004265908417718] [0.3999999999999999+/-0.01600179989876138]] diff --git a/doc/tech_guide.rst b/doc/tech_guide.rst index 326e0ad5..0ff7a653 100644 --- a/doc/tech_guide.rst +++ b/doc/tech_guide.rst @@ -93,107 +93,47 @@ are completely uncorrelated. .. _comparison_operators: -Comparison operators --------------------- +Equality Comparison +------------------- + +Numbers with uncertainty, :class:`UFloat` objects, model random variables. +There are a number of senses of equality for two random variables. +The stronger senses of equality define two random variables to be equal if the two +random variables always produce the same result given a random sample from the sample +space. +For :class:`UFloat`, this is the case if two :class:`UFloat` objects have equal +nominal values and standard deviations and are perfectly correlated. +We can test for these conditions by taking the difference of two :class:`UFloat` objects +and looking at the nominal value and standard deviation of the result. +If both the nominal value and standard deviation of the difference are zero, then the +two :class:`UFloat` objects have the same nominal value, standard deviation, and are +perfectly correlated. +In this case we say the two :class:`UFloat` are equal. -.. warning:: - Support for comparing variables with uncertainties is deprecated and will be - removed in Uncertainties 4.0. The behavior of ``bool`` will also be changed - to always return ``True`` for ``UFloat`` objects. - -Comparison operations (>, ==, etc.) 
on numbers with uncertainties have -a **pragmatic semantics**, in this package: numbers with uncertainties -can be used wherever Python numbers are used, most of the time with a -result identical to the one that would be obtained with their nominal -value only. This allows code that runs with pure numbers to also work -with numbers with uncertainties. - -.. index:: boolean value - -The **boolean value** (``bool(x)``, ``if x …``) of a number with -uncertainty :data:`x` is defined as the result of ``x != 0``, as usual. - -However, since the objects defined in this module represent -probability distributions and not pure numbers, comparison operators -are interpreted in a specific way. - -The result of a comparison operation is defined so as to be -essentially consistent with the requirement that uncertainties be -small: the **value of a comparison operation** is True only if the -operation yields True for all *infinitesimal* variations of its random -variables around their nominal values, *except*, possibly, for an -*infinitely small number* of cases. - -Example: - ->>> x = ufloat(3.14, 0.01) ->>> x == x +>>> x = ufloat(1, 0.1) +>>> a = 2 * x +>>> b = x + x +>>> print(a - b) +0.0+/-0 +>>> print(a == b) True -because a sample from the probability distribution of :data:`x` is always -equal to itself. However: +It might be the case that two random variables have the same marginal probability +distribution but are uncorrelated. +A weaker sense of equality between random variables may consider two such random +variables to be equal. +This is equivalent to two :class:`UFloat` objects have equal nominal values and +standard deviations, but, are uncorrelated. +The :mod:`uncertainties` package, however, keeps to the stronger sense of random +variable equality such that two such :class:`UFloat` objects do not compare as equal. ->>> y = ufloat(3.14, 0.01) ->>> x == y +>>> x = ufloat(1, 0.1) +>>> y = ufloat(1, 0.1) +>>> print(x - y) +0.00+/-0.14 +>>> print(x == y) False -since :data:`x` and :data:`y` are independent random variables that -*almost* always give a different value (put differently, -:data:`x`-:data:`y` is not equal to 0, as it can take many different -values). Note that this is different -from the result of ``z = 3.14; t = 3.14; print(z == t)``, because -:data:`x` and :data:`y` are *random variables*, not pure numbers. - -Similarly, - ->>> x = ufloat(3.14, 0.01) ->>> y = ufloat(3.00, 0.01) ->>> x > y -True - -because :data:`x` is supposed to have a probability distribution largely -contained in the 3.14±~0.01 interval, while :data:`y` is supposed to be -well in the 3.00±~0.01 one: random samples of :data:`x` and :data:`y` will -most of the time be such that the sample from :data:`x` is larger than the -sample from :data:`y`. Therefore, it is natural to consider that for all -practical purposes, ``x > y``. - -Since comparison operations are subject to the same constraints as -other operations, as required by the :ref:`linear approximation -` method, their result should be essentially *constant* -over the regions of highest probability of their variables (this is -the equivalent of the linearity of a real function, for boolean -values). 
Thus, it is not meaningful to compare the following two -independent variables, whose probability distributions overlap: - ->>> x = ufloat(3, 0.01) ->>> y = ufloat(3.0001, 0.01) - -In fact the function (x, y) → (x > y) is not even continuous over the -region where x and y are concentrated, which violates the assumption -of approximate linearity made in this package on operations involving -numbers with uncertainties. Comparing such numbers therefore returns -a boolean result whose meaning is undefined. - -However, values with largely overlapping probability distributions can -sometimes be compared unambiguously: - ->>> x = ufloat(3, 1) ->>> x -3.0+/-1.0 ->>> y = x + 0.0002 ->>> y -3.0002+/-1.0 ->>> y > x -True - -In fact, correlations guarantee that :data:`y` is always larger than -:data:`x`: ``y-x`` correctly satisfies the assumption of linearity, -since it is a constant "random" function (with value 0.0002, even -though :data:`y` and :data:`x` are random). Thus, it is indeed true -that :data:`y` > :data:`x`. - - .. index:: linear propagation of uncertainties .. _linear_method: @@ -257,16 +197,6 @@ This indicates that **the derivative required by linear error propagation theory is not defined** (a Monte-Carlo calculation of the resulting random variable is more adapted to this specific case). -However, even in this case where the derivative at the nominal value -is infinite, the :mod:`uncertainties` package **correctly handles -perfectly precise numbers**: - ->>> umath.sqrt(ufloat(0, 0)) -0.0+/-0 - -is thus the correct result, despite the fact that the derivative of -the square root is not defined in zero. - .. _math_def_num_uncert: Mathematical definition of numbers with uncertainties diff --git a/doc/user_guide.rst b/doc/user_guide.rst index 0c555a23..7bed6224 100644 --- a/doc/user_guide.rst +++ b/doc/user_guide.rst @@ -216,40 +216,6 @@ uncertainty of 0. >>> (x -y) 0.0+/-0.7071067811865476 - -Comparisons of magnitude ------------------------------------- - -The concept of comparing the magnitude of values with uncertainties is a bit -complicated. That is, a Variable with a value of 25 +/- 10 might be greater -than a Variable with a value of 24 +/- 8 most of the time, but *sometimes* it -might be less than it. The :mod:`uncertainties` package takes the simple -approach of comparing nominal values. That is - ->>> a = ufloat(25, 10) ->>> b = ufloat(24, 8) ->>> a > b -True - -Note that combining this comparison and the above discussion of `==` and `!=` -can lead to a result that maybe somewhat surprising: - - ->>> a = ufloat(25, 10) ->>> b = ufloat(25, 8) ->>> a >= b -False ->>> a > b -False ->>> a == b -False ->>> a.nominal_value >= b.nominal_value -True - -That is, since `a` is neither greater than `b` (nominal value only) nor equal to -`b`, it cannot be greater than or equal to `b`. - - .. 
index:: pair: testing (scalar); NaN diff --git a/tests/test_power.py b/tests/test_power.py index 82d00e96..90790f55 100644 --- a/tests/test_power.py +++ b/tests/test_power.py @@ -86,19 +86,12 @@ def test_power_derivatives(first_ufloat, second_ufloat, first_der, second_der): one = ufloat(1, 0) p = ufloat(0.3, 0.01) -power_float_result_cases = [ +power_zero_std_dev_result_cases = [ (0, p, 0), - (zero, p, 0), - (float("nan"), zero, 1), - (one, float("nan"), 1), (p, 0, 1), (zero, 0, 1), (-p, 0, 1), - (-10.3, zero, 1), - (0, zero, 1), (0.3, zero, 1), - (-p, zero, 1), - (zero, zero, 1), (p, zero, 1), (one, -3, 1), (one, -3.1, 1), @@ -116,11 +109,13 @@ def test_power_derivatives(first_ufloat, second_ufloat, first_der, second_der): @pytest.mark.parametrize( "first_ufloat, second_ufloat, result_float", - power_float_result_cases, + power_zero_std_dev_result_cases, ) -def test_power_float_result_cases(first_ufloat, second_ufloat, result_float): +def test_power_zero_std_dev_result_cases(first_ufloat, second_ufloat, result_float): for op in [pow, umath_pow]: - assert op(first_ufloat, second_ufloat) == result_float + result = op(first_ufloat, second_ufloat) + assert result.n == result_float + assert result.s == 0 power_reference_cases = [ diff --git a/tests/test_ulinalg.py b/tests/test_ulinalg.py index 570dc452..8c031d64 100644 --- a/tests/test_ulinalg.py +++ b/tests/test_ulinalg.py @@ -33,35 +33,21 @@ def test_list_inverse(): mat_list_inv_numpy = numpy.linalg.inv(mat_list) assert type(mat_list_inv) == type(mat_list_inv_numpy) - # The resulting matrix does not have to be a matrix that can - # handle uncertainties, because the input matrix does not have - # uncertainties: - assert not isinstance(mat_list_inv, unumpy.matrix) - # Individual element check: assert isinstance(mat_list_inv[1, 1], float) assert mat_list_inv[1, 1] == -1 - x = ufloat(1, 0.1) - y = ufloat(2, 0.1) - mat = unumpy.matrix([[x, x], [y, 0]]) - - # Internal consistency: ulinalg.inv() must coincide with the - # unumpy.matrix inverse, for square matrices (.I is the - # pseudo-inverse, for non-square matrices, but inv() is not). 
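# Hedged sketch, not part of the patch: with unumpy.matrix removed, the
# consistency check above can be written against a plain NumPy object array,
# as the updated test_list_pseudo_inverse below does.
import numpy
from uncertainties import ufloat, unumpy

x = ufloat(1, 0.1)
y = ufloat(2, 0.1)
mat = numpy.array([[x, x], [y, 0]])
inv_mat = unumpy.ulinalg.inv(mat)     # replaces the old mat.I for square matrices
pinv_mat = unumpy.ulinalg.pinv(mat)   # pseudo-inverse; coincides with inv() here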
- assert uarrays_close(unumpy.ulinalg.inv(mat), mat.I).all() - def test_list_pseudo_inverse(): "Test of the pseudo-inverse" x = ufloat(1, 0.1) y = ufloat(2, 0.1) - mat = unumpy.matrix([[x, x], [y, 0]]) + mat = numpy.array([[x, x], [y, 0]]) # Internal consistency: the inverse and the pseudo-inverse yield # the same result on square matrices: - assert uarrays_close(mat.I, unumpy.ulinalg.pinv(mat), 1e-4).all() + assert uarrays_close(unumpy.ulinalg.inv(mat), unumpy.ulinalg.pinv(mat), 1e-4).all() assert uarrays_close( unumpy.ulinalg.inv(mat), # Support for the optional pinv argument is @@ -69,13 +55,3 @@ def test_list_pseudo_inverse(): unumpy.ulinalg.pinv(mat, 1e-15), 1e-4, ).all() - - # Non-square matrices: - x = ufloat(1, 0.1) - y = ufloat(2, 0.1) - mat1 = unumpy.matrix([[x, y]]) # "Long" matrix - mat2 = unumpy.matrix([[x, y], [1, 3 + x], [y, 2 * x]]) # "Tall" matrix - - # Internal consistency: - assert uarrays_close(mat1.I, unumpy.ulinalg.pinv(mat1, 1e-10)).all() - assert uarrays_close(mat2.I, unumpy.ulinalg.pinv(mat2, 1e-8)).all() diff --git a/tests/test_umath.py b/tests/test_umath.py index 521c105a..310fbc22 100644 --- a/tests/test_umath.py +++ b/tests/test_umath.py @@ -1,5 +1,4 @@ import json -import inspect import math from math import isnan from pathlib import Path @@ -200,13 +199,6 @@ def test_math_module(): # Regular operations are chosen to be unchanged: assert isinstance(umath_core.sin(3), float) - # factorial() must not be "damaged" by the umath_core module, so as - # to help make it a drop-in replacement for math (even though - # factorial() does not work on numbers with uncertainties - # because it is restricted to integers, as for - # math.factorial()): - assert umath_core.factorial(4) == 24 - # fsum is special because it does not take a fixed number of # variables: assert umath_core.fsum([x, x]).nominal_value == -3 @@ -269,19 +261,3 @@ def test_hypot(): result = umath_core.hypot(x, y) assert isnan(result.derivatives[x]) assert isnan(result.derivatives[y]) - - -@pytest.mark.parametrize("function_name", umath_core.deprecated_functions) -def test_deprecated_function(function_name): - num_args = len(inspect.signature(getattr(math, function_name)).parameters) - args = [ufloat(1, 0.1)] - if num_args == 1: - if function_name == "factorial": - args[0] = 6 - else: - if function_name == "ldexp": - args.append(3) - else: - args.append(ufloat(-12, 2.4)) - with pytest.warns(FutureWarning, match="will be removed"): - getattr(umath_core, function_name)(*args) diff --git a/tests/test_uncertainties.py b/tests/test_uncertainties.py index a4cc3276..299deab3 100644 --- a/tests/test_uncertainties.py +++ b/tests/test_uncertainties.py @@ -1,6 +1,5 @@ import copy import json -import inspect import math from pathlib import Path import random # noqa @@ -12,7 +11,6 @@ ufloat, AffineScalarFunc, ufloat_fromstr, - deprecated_methods, ) from uncertainties import ( umath, @@ -57,7 +55,7 @@ def test_ufloat_function_construction(): assert x.std_dev == 0.14 assert x.tag == "pi" - with pytest.raises(uncert_core.NegativeStdDev): + with pytest.raises(ValueError): _ = ufloat(3, -0.1) with pytest.raises(TypeError): @@ -276,7 +274,7 @@ def test_pickling(): assert isinstance(f, AffineScalarFunc) (f_unpickled, x_unpickled2) = pickle.loads(pickle.dumps((f, x))) # Correlations must be preserved: - assert f_unpickled - x_unpickled2 - x_unpickled2 == 0 + assert f_unpickled == x_unpickled2 + x_unpickled2 ## Tests with subclasses: @@ -328,45 +326,17 @@ def test_pickling(): assert pickle.loads(pickle.dumps(x)).linear_combo == 
{} -def test_int_div(): - "Integer division" - # We perform all operations on floats, because derivatives can - # otherwise be meaningless: - x = ufloat(3.9, 2) // 2 - assert x.nominal_value == 1.0 - # All errors are supposed to be small, so the ufloat() - # in x violates the assumption. Therefore, the following is - # correct: - assert x.std_dev == 0.0 - - def test_comparison_ops(): "Test of comparison operators" # Operations on quantities equivalent to Python numbers must still # be correct: - a = ufloat(-3, 0) b = ufloat(10, 0) c = ufloat(10, 0) - assert a < b - assert a < 3 - assert 3 < b # This is first given to int.__lt__() assert b == c x = ufloat(3, 0.1) - # One constraint is that usual Python code for inequality testing - # still work in a reasonable way (for instance, it is generally - # desirable that functions defined by different formulas on - # different intervals can still do "if 0 < x < 1:...". This - # supposes again that errors are "small" (as for the estimate of - # the standard error). - assert x > 1 - - # The limit case is not obvious: - assert not (x >= 3) - assert not (x < 3) - assert x == x # Comparaison between Variable and AffineScalarFunc: assert x == x + 0 @@ -384,7 +354,7 @@ def test_comparison_ops(): # Comparison to other types should work: assert x is not None # Not comparable - assert x - x == 0 # Comparable, even though the types are different + assert x - x != 0 # Equality comparison with float is always False assert x != [1, 2] #################### @@ -417,7 +387,7 @@ def random_float(var): return (random.random() - 0.5) * min(var.std_dev, 1e-5) + var.nominal_value # All operations are tested: - for op in ["__%s__" % name for name in ("ne", "eq", "lt", "le", "gt", "ge")]: + for op in ["__%s__" % name for name in ("ne", "eq")]: try: float_func = getattr(float, op) except AttributeError: # Python 2.3's floats don't have __ne__ @@ -474,17 +444,16 @@ def random_float(var): def test_logic(): - "Boolean logic: __nonzero__, bool." - + "bool defers to object.__bool__ and always returns True." x = ufloat(3, 0) y = ufloat(0, 0) z = ufloat(0, 0.1) t = ufloat(-1, 2) assert bool(x) - assert not bool(y) + assert bool(y) assert bool(z) - assert bool(t) # Only infinitseimal neighborhood are used + assert bool(t) def test_basic_access_to_data(): @@ -1106,20 +1075,6 @@ def test_numpy_comparison(): assert len(numpy.array([x, x, x]) == x) == 3 assert numpy.all(x == numpy.array([x, x, x])) - # Inequalities: - assert len(x < numpy.arange(10)) == 10 - assert len(numpy.arange(10) > x) == 10 - assert len(x <= numpy.arange(10)) == 10 - assert len(numpy.arange(10) >= x) == 10 - assert len(x > numpy.arange(10)) == 10 - assert len(numpy.arange(10) < x) == 10 - assert len(x >= numpy.arange(10)) == 10 - assert len(numpy.arange(10) <= x) == 10 - - # More detailed test, that shows that the comparisons are - # meaningful (x >= 0, but not x <= 1): - assert numpy.all((x >= numpy.arange(3)) == [True, False, False]) - def test_correlated_values(): """ Correlated variables. 
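# Hedged illustration, not part of the patch, of the equality semantics the
# updated tests above rely on: two UFloat objects compare equal only when their
# difference has zero nominal value and zero standard deviation, i.e. when they
# are perfectly correlated.
from uncertainties import ufloat

x = ufloat(1, 0.1)
assert 2 * x == x + x         # same underlying random variable: equal
assert x != ufloat(1, 0.1)    # same distribution but uncorrelated: not equal
assert x != 1.0               # equality with a float is now always False
assert bool(ufloat(0, 0))     # bool() defers to object.__bool__ and is always
                              # True (constructing with std_dev == 0 may warn)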
@@ -1336,23 +1291,6 @@ def test_no_numpy(): _ = correlation_matrix([x, y, z]) -@pytest.mark.parametrize("method_name", deprecated_methods) -def test_deprecated_method(method_name): - x = ufloat(1, 0.1) - y = ufloat(-12, 2.4) - num_args = len(inspect.signature(getattr(float, method_name)).parameters) - with pytest.warns(FutureWarning, match="will be removed"): - if num_args == 1: - getattr(x, method_name)() - else: - getattr(x, method_name)(y) - - -def test_deprecated_bool(): - with pytest.warns(FutureWarning, match="is deprecated.*will defer to"): - bool(ufloat(0, 0)) - - def test_zero_std_dev_warn(): with pytest.warns(UserWarning, match="std_dev==0.*unexpected results"): ufloat(1, 0) diff --git a/tests/test_unumpy.py b/tests/test_unumpy.py index 701137ff..22d9aa3d 100644 --- a/tests/test_unumpy.py +++ b/tests/test_unumpy.py @@ -5,7 +5,7 @@ sys.exit() # There is no reason to test the interface to NumPy -import uncertainties +import uncertainties.umath import uncertainties.core as uncert_core from uncertainties import ufloat, unumpy from uncertainties.unumpy import core @@ -31,19 +31,6 @@ def test_numpy(): # Operations with arrays work (they are first handled by NumPy, # then by this module): prod1 * prod2 # This should be calculable - assert not (prod1 - prod2).any() # All elements must be 0 - - # Comparisons work too: - - # Usual behavior: - assert len(arr[arr > 1.5]) == 1 - # Comparisons with Variable objects: - assert len(arr[arr > ufloat(1.5, 0.1)]) == 1 - - assert len(prod1[prod1 < prod1 * prod2]) == 2 - - # The following can be calculated (special NumPy abs() function): - numpy.abs(arr + ufloat(-1, 0.1)) # The following does not completely work, because NumPy does not # implement numpy.exp on an array of general objects, apparently: @@ -68,26 +55,6 @@ def test_numpy(): arr.mean() # Global mean -def test_matrix(): - "Matrices of numbers with uncertainties" - # Matrix inversion: - - # Matrix with a mix of Variable objects and regular - # Python numbers: - - m = unumpy.matrix([[ufloat(10, 1), -3.1], [0, ufloat(3, 0)]]) - m_nominal_values = unumpy.nominal_values(m) - - # Test of the nominal_value attribute: - assert numpy.all(m_nominal_values == m.nominal_values) - - assert type(m[0, 0]) == uncert_core.Variable - - # Test of scalar multiplication, both sides: - 3 * m - m * 3 - - def derivatives_close(x, y): """ Returns True iff the AffineScalarFunc objects x and y have @@ -107,48 +74,48 @@ def derivatives_close(x, y): def test_inverse(): "Tests of the matrix inverse" - m = unumpy.matrix([[ufloat(10, 1), -3.1], [0, ufloat(3, 0)]]) + m = numpy.array([[ufloat(10, 1), -3.1], [0, ufloat(3, 0)]]) m_nominal_values = unumpy.nominal_values(m) # "Regular" inverse matrix, when uncertainties are not taken # into account: - m_no_uncert_inv = m_nominal_values.I + m_no_uncert_inv = numpy.linalg.inv(m_nominal_values) # The matrix inversion should not yield numbers with uncertainties: assert m_no_uncert_inv.dtype == numpy.dtype(float) # Inverse with uncertainties: - m_inv_uncert = m.I # AffineScalarFunc elements + m_inv_uncert = core.inv(m) # AffineScalarFunc elements # The inverse contains uncertainties: it must support custom # operations on matrices with uncertainties: - assert isinstance(m_inv_uncert, unumpy.matrix) + assert isinstance(m_inv_uncert, numpy.ndarray) assert type(m_inv_uncert[0, 0]) == uncert_core.AffineScalarFunc # Checks of the numerical values: the diagonal elements of the # inverse should be the inverses of the diagonal elements of # m (because we started with a triangular matrix): 
assert nan_close( - 1 / m_nominal_values[0, 0], m_inv_uncert[0, 0].nominal_value + 1 / m_nominal_values[0, 0], core.nominal_values(m_inv_uncert[0, 0]) ), "Wrong value" assert nan_close( - 1 / m_nominal_values[1, 1], m_inv_uncert[1, 1].nominal_value + 1 / m_nominal_values[1, 1], core.nominal_values(m_inv_uncert[1, 1]) ), "Wrong value" #################### # Checks of the covariances between elements: x = ufloat(10, 1) - m = unumpy.matrix([[x, x], [0, 3 + 2 * x]]) + m = numpy.array([[x, x], [0, 3 + 2 * x]]) - m_inverse = m.I + m_inverse = core.inv(m) # Check of the properties of the inverse: - m_double_inverse = m_inverse.I + m_double_inverse = core.inv(m_inverse) # The initial matrix should be recovered, including its # derivatives, which define covariances: - assert nan_close(m_double_inverse[0, 0].nominal_value, m[0, 0].nominal_value) - assert nan_close(m_double_inverse[0, 0].std_dev, m[0, 0].std_dev) + assert nan_close(m_double_inverse[0, 0].nominal_value, core.nominal_values(m[0, 0])) + assert nan_close(m_double_inverse[0, 0].std_dev, core.std_devs(m[0, 0])) assert uarrays_close(m_double_inverse, m).all() @@ -167,7 +134,7 @@ def test_inverse(): # Correlations between m and m_inverse should create a perfect # inversion: - assert uarrays_close(m * m_inverse, numpy.eye(m.shape[0])).all() + assert uarrays_close(m @ m_inverse, numpy.eye(m.shape[0])).all() def test_wrap_array_func(): @@ -179,7 +146,7 @@ def test_wrap_array_func(): # Function that works with numbers with uncertainties in mat (if # mat is an uncertainties.unumpy.matrix): def f_unc(mat, *args, **kwargs): - return mat.I + args[0] * kwargs["factor"] + return core.pinv(mat) + args[0] * kwargs["factor"] # Test with optional arguments and keyword arguments: def f(mat, *args, **kwargs): @@ -193,7 +160,7 @@ def f(mat, *args, **kwargs): ########## # Full rank rectangular matrix: - m = unumpy.matrix([[ufloat(10, 1), -3.1], [0, ufloat(3, 0)], [1, -3.1]]) + m = numpy.array([[ufloat(10, 1), -3.1], [0, ufloat(3, 0)], [1, -3.1]]) # Numerical and package (analytical) pseudo-inverses: they must be # the same: @@ -211,7 +178,7 @@ def test_pseudo_inverse(): ########## # Full rank rectangular matrix: - m = unumpy.matrix([[ufloat(10, 1), -3.1], [0, ufloat(3, 0)], [1, -3.1]]) + m = numpy.array([[ufloat(10, 1), -3.1], [0, ufloat(3, 0)], [1, -3.1]]) # Numerical and package (analytical) pseudo-inverses: they must be # the same: @@ -223,14 +190,14 @@ def test_pseudo_inverse(): ########## # Example with a non-full rank rectangular matrix: vector = [ufloat(10, 1), -3.1, 11] - m = unumpy.matrix([vector, vector]) + m = numpy.array([vector, vector]) m_pinv_num = pinv_num(m, rcond) m_pinv_package = core.pinv(m, rcond) assert uarrays_close(m_pinv_num, m_pinv_package).all() ########## # Example with a non-full-rank square matrix: - m = unumpy.matrix([[ufloat(10, 1), 0], [3, 0]]) + m = numpy.array([[ufloat(10, 1), 0], [3, 0]]) m_pinv_num = pinv_num(m, rcond) m_pinv_package = core.pinv(m, rcond) assert uarrays_close(m_pinv_num, m_pinv_package).all() @@ -260,18 +227,13 @@ def test_broadcast_funcs(): assert "acos" not in unumpy.__all__ -def test_array_and_matrix_creation(): +def test_array_creation(): "Test of custom array creation" arr = unumpy.uarray([1, 2], [0.1, 0.2]) - assert arr[1].nominal_value == 2 - assert arr[1].std_dev == 0.2 - - # Same thing for matrices: - mat = unumpy.umatrix([1, 2], [0.1, 0.2]) - assert mat[0, 1].nominal_value == 2 - assert mat[0, 1].std_dev == 0.2 + assert core.nominal_values(arr)[1] == 2 + assert core.std_devs(arr)[1] == 0.2 def 
test_component_extraction(): @@ -282,21 +244,9 @@ def test_component_extraction(): assert numpy.all(unumpy.nominal_values(arr) == [1, 2]) assert numpy.all(unumpy.std_devs(arr) == [0.1, 0.2]) - # unumpy matrices, in addition, should have nominal_values that - # are simply numpy matrices (not unumpy ones, because they have no - # uncertainties): - mat = unumpy.matrix(arr) - assert numpy.all(unumpy.nominal_values(mat) == [1, 2]) - assert numpy.all(unumpy.std_devs(mat) == [0.1, 0.2]) - assert type(unumpy.nominal_values(mat)) == numpy.matrix - def test_array_comparisons(): "Test of array and matrix comparisons" arr = unumpy.uarray([1, 2], [1, 4]) assert numpy.all((arr == [arr[0], 4]) == [True, False]) - - # For matrices, 1D arrays are converted to 2D arrays: - mat = unumpy.umatrix([1, 2], [1, 4]) - assert numpy.all((mat == [mat[0, 0], 4]) == [True, False]) diff --git a/uncertainties/core.py b/uncertainties/core.py index 0b2f4018..5200bfda 100644 --- a/uncertainties/core.py +++ b/uncertainties/core.py @@ -469,26 +469,10 @@ def error_components(self): object take scalar values (and are not a tuple, like what math.frexp() returns, for instance). """ - - # Calculation of the variance: - error_components = {} - - for variable, derivative in self.derivatives.items(): - # print "TYPE", type(variable), type(derivative) - - # Individual standard error due to variable: - - # 0 is returned even for a NaN derivative (in this case no - # multiplication by the derivative is performed): an exact - # variable obviously leads to no uncertainty in the - # functions that depend on it. - if variable._std_dev == 0: - # !!! Shouldn't the errors always be floats, as a - # convention of this module? - error_components[variable] = 0 - else: - error_components[variable] = abs(derivative * variable._std_dev) - + error_components = { + variable: abs(derivative * variable._std_dev) + for variable, derivative in self.derivatives.items() + } return error_components @property @@ -513,6 +497,12 @@ def std_dev(self): # Abbreviation (for formulas, etc.): s = std_dev + def __eq__(self, other): + if not isinstance(other, type(self)): + return NotImplemented + diff = self - other + return diff.n == 0 and diff.s == 0 + def __repr__(self): # Not putting spaces around "+/-" helps with arrays of # Variable, as each value with an uncertainty is a @@ -656,8 +646,6 @@ def __setstate__(self, data_dict): ops.add_arithmetic_ops(AffineScalarFunc) -ops.add_comparative_ops(AffineScalarFunc) -to_affine_scalar = AffineScalarFunc._to_affine_scalar # Nicer name, for users: isinstance(ufloat(...), UFloat) is # True. Also: isinstance(..., UFloat) is the test for "is this a @@ -717,12 +705,6 @@ def wrap(f, derivatives_args=None, derivatives_kwargs=None): ############################################################################### -class NegativeStdDev(Exception): - """Raise for a negative standard deviation""" - - pass - - class Variable(AffineScalarFunc): """ Representation of a float-like scalar Variable with its uncertainty. @@ -792,7 +774,7 @@ def std_dev(self, std_dev): # separately for NaN. But this is not guaranteed, even if it # should work on most platforms.) 
if std_dev < 0 and isfinite(std_dev): - raise NegativeStdDev("The standard deviation cannot be negative") + raise ValueError("The standard deviation cannot be negative") self._std_dev = float(std_dev) @@ -1038,26 +1020,3 @@ def wrapped(*args, **kwargs): return func(*args, **kwargs) return wrapped - - -deprecated_methods = [ - "__floordiv__", - "__mod__", - "__abs__", - "__trunc__", - "__lt__", - "__gt__", - "__le__", - "__ge__", -] - -for method_name in deprecated_methods: - message = ( - f"AffineScalarFunc.{method_name}() is deprecated. It will be removed in a future " - f"release." - ) - setattr( - AffineScalarFunc, - method_name, - deprecation_wrapper(getattr(AffineScalarFunc, method_name), message), - ) diff --git a/uncertainties/ops.py b/uncertainties/ops.py index 5df483a5..88019fcd 100644 --- a/uncertainties/ops.py +++ b/uncertainties/ops.py @@ -5,7 +5,6 @@ import itertools from inspect import getfullargspec import numbers -from warnings import warn # Some types known to not depend on Variable objects are put in # CONSTANT_TYPES. The most common types can be put in front, as this @@ -122,10 +121,8 @@ def get_ops_with_reflection(): # AffineScalarFunc._nominal_value numbers, it is applied on # floats, and is therefore the "usual" mathematical division. "div": ("1/y", "-x/y**2"), - "floordiv": ("0.", "0."), # Non exact: there is a discontinuity # The derivative wrt the 2nd arguments is something like (..., x//y), # but it is calculated numerically, for convenience: - "mod": ("1.", "partial_derivative(float.__mod__, 1)(x, y)"), "mul": ("y", "x"), "sub": ("1.", "-1."), "truediv": ("1/y", "-x/y**2"), @@ -227,10 +224,8 @@ def _simple_add_deriv(x): # Single-argument operators that should be adapted from floats to # AffineScalarFunc objects, associated to their derivative: simple_numerical_operators_derivatives = { - "abs": _simple_add_deriv, "neg": lambda x: -1.0, "pos": lambda x: 1.0, - "trunc": lambda x: 0.0, } for op, derivative in iter(simple_numerical_operators_derivatives.items()): @@ -651,218 +646,3 @@ def partial_derivative_of_f(*args, **kwargs): return (shifted_f_plus - shifted_f_minus) / 2 / step return partial_derivative_of_f - - -######################################## - -# Definition of boolean operators, that assume that self and -# y_with_uncert are AffineScalarFunc. - -# The fact that uncertainties must be small is used, here: the -# comparison functions are supposed to be constant for most values of -# the random variables. - -# Even though uncertainties are supposed to be small, comparisons -# between 3+/-0.1 and 3.0 are handled correctly (even though x == 3.0 is -# not a constant function in the 3+/-0.1 interval). The comparison -# between x and x is handled too, when x has an uncertainty. In fact, -# as explained in the main documentation, it is possible to give a -# useful meaning to the comparison operators, in these cases. - - -def eq_on_aff_funcs(self, y_with_uncert): - """ - __eq__ operator, assuming that both self and y_with_uncert are - AffineScalarFunc objects. - """ - difference = self - y_with_uncert - # Only an exact zero difference means that self and y are - # equal numerically: - return not (difference._nominal_value or difference.std_dev) - - -def ne_on_aff_funcs(self, y_with_uncert): - """ - __ne__ operator, assuming that both self and y_with_uncert are - AffineScalarFunc objects. 
- """ - - return not eq_on_aff_funcs(self, y_with_uncert) - - -def gt_on_aff_funcs(self, y_with_uncert): - """ - __gt__ operator, assuming that both self and y_with_uncert are - AffineScalarFunc objects. - """ - return self._nominal_value > y_with_uncert._nominal_value - - -def ge_on_aff_funcs(self, y_with_uncert): - """ - __ge__ operator, assuming that both self and y_with_uncert are - AffineScalarFunc objects. - """ - - return gt_on_aff_funcs(self, y_with_uncert) or eq_on_aff_funcs(self, y_with_uncert) - - -def lt_on_aff_funcs(self, y_with_uncert): - """ - __lt__ operator, assuming that both self and y_with_uncert are - AffineScalarFunc objects. - """ - return self._nominal_value < y_with_uncert._nominal_value - - -def le_on_aff_funcs(self, y_with_uncert): - """ - __le__ operator, assuming that both self and y_with_uncert are - AffineScalarFunc objects. - """ - - return lt_on_aff_funcs(self, y_with_uncert) or eq_on_aff_funcs(self, y_with_uncert) - - -def add_comparative_ops(cls): - def to_affine_scalar(x): - """ - Transforms x into a constant affine scalar function - (AffineScalarFunc), unless it is already an AffineScalarFunc (in - which case x is returned unchanged). - - Raises an exception unless x belongs to some specific classes of - objects that are known not to depend on AffineScalarFunc objects - (which then cannot be considered as constants). - """ - - if isinstance(x, cls): - return x - - if isinstance(x, CONSTANT_TYPES): - # No variable => no derivative: - return cls(x, {}) - - # Case of lists, etc. - raise NotUpcast( - "%s cannot be converted to a number with" " uncertainty" % type(x) - ) - - cls._to_affine_scalar = to_affine_scalar - - def force_aff_func_args(func): - """ - Takes an operator op(x, y) and wraps it. - - The constructed operator returns func(x, to_affine_scalar(y)) if y - can be upcast with to_affine_scalar(); otherwise, it returns - NotImplemented. - - Thus, func() is only called on two AffineScalarFunc objects, if - its first argument is an AffineScalarFunc. - """ - - def op_on_upcast_args(x, y): - """ - Return %s(self, to_affine_scalar(y)) if y can be upcast - through to_affine_scalar. Otherwise returns NotImplemented. - """ % func.__name__ - - try: - y_with_uncert = to_affine_scalar(y) - except NotUpcast: - # This module does not know how to handle the comparison: - # (example: y is a NumPy array, in which case the NumPy - # array will decide that func() should be applied - # element-wise between x and all the elements of y): - return NotImplemented - else: - return func(x, y_with_uncert) - - return op_on_upcast_args - - ### Operators: operators applied to AffineScalarFunc and/or - ### float-like objects only are supported. This is why methods - ### from float are used for implementing these operators. - - # Operators with no reflection: - - ######################################## - - # __nonzero__() is supposed to return a boolean value (it is used - # by bool()). It is for instance used for converting the result - # of comparison operators to a boolean, in sorted(). If we want - # to be able to sort AffineScalarFunc objects, __nonzero__ cannot - # return a AffineScalarFunc object. Since boolean results (such - # as the result of bool()) don't have a very meaningful - # uncertainty unless it is zero, this behavior is fine. - - def __bool__(self): - """ - Equivalent to self != 0. - """ - #! 
This might not be relevant for AffineScalarFunc objects - # that contain values in a linear space which does not convert - # the float 0 into the null vector (see the __eq__ function: - # __nonzero__ works fine if subtracting the 0 float from a - # vector of the linear space works as if 0 were the null - # vector of that space): - msg = ( - f"{self.__class__.__name__}.__bool__() is deprecated. In future releases " - f"it will defer to object.__bool__() and always return True." - ) - warn(msg, FutureWarning, stacklevel=2) - return self != 0.0 # Uses the AffineScalarFunc.__ne__ function - - cls.__bool__ = __bool__ - ######################################## - - ## Logical operators: warning: the resulting value cannot always - ## be differentiated. - - # The boolean operations are not differentiable everywhere, but - # almost... - - # (1) I can rely on the assumption that the user only has "small" - # errors on variables, as this is used in the calculation of the - # standard deviation (which performs linear approximations): - - # (2) However, this assumption is not relevant for some - # operations, and does not have to hold, in some cases. This - # comes from the fact that logical operations (e.g. __eq__(x,y)) - # are not differentiable for many usual cases. For instance, it - # is desirable to have x == x for x = n+/-e, whatever the size of e. - # Furthermore, n+/-e != n+/-e', if e != e', whatever the size of e or - # e'. - - # (3) The result of logical operators does not have to be a - # function with derivatives, as these derivatives are either 0 or - # don't exist (i.e., the user should probably not rely on - # derivatives for his code). - - # !! In Python 2.7+, it may be possible to use functools.total_ordering. - - # __eq__ is used in "if data in [None, ()]", for instance. It is - # therefore important to be able to handle this case too, which is - # taken care of when force_aff_func_args(eq_on_aff_funcs) - # returns NotImplemented. - cls.__eq__ = force_aff_func_args(eq_on_aff_funcs) - - cls.__ne__ = force_aff_func_args(ne_on_aff_funcs) - cls.__gt__ = force_aff_func_args(gt_on_aff_funcs) - - # __ge__ is not the opposite of __lt__ because these operators do - # not always yield a boolean (for instance, 0 <= numpy.arange(10) - # yields an array). - cls.__ge__ = force_aff_func_args(ge_on_aff_funcs) - - cls.__lt__ = force_aff_func_args(lt_on_aff_funcs) - cls.__le__ = force_aff_func_args(le_on_aff_funcs) - - -# Mathematical operations with local approximations (affine scalar -# functions) - - -class NotUpcast(Exception): - "Raised when an object cannot be converted to a number with uncertainty" diff --git a/uncertainties/umath_core.py b/uncertainties/umath_core.py index f07a0b60..2328e174 100644 --- a/uncertainties/umath_core.py +++ b/uncertainties/umath_core.py @@ -19,7 +19,6 @@ # Local modules import uncertainties.core as uncert_core -from uncertainties.core import to_affine_scalar, AffineScalarFunc, LinearCombination ############################################################################### @@ -58,7 +57,7 @@ # Functions with numerical derivatives: # # !! Python2.7+: {..., ...} -num_deriv_funcs = set(["fmod", "gamma", "lgamma"]) +num_deriv_funcs = set(["gamma", "lgamma"]) # Functions are by definition locally constant (on real # numbers): their value does not depend on the uncertainty (because @@ -70,7 +69,7 @@ # comparisons (==, >, etc.). # # !! 
Python 2.7+: {..., ...} -locally_cst_funcs = set(["ceil", "floor", "isinf", "isnan", "trunc"]) +locally_cst_funcs = set(["isinf", "isnan"]) # Functions that do not belong in many_scalars_to_scalar_funcs, but # that have a version that handles uncertainties. These functions are @@ -192,7 +191,6 @@ def _deriv_pow_1(x, y): lambda y, x: -y / (x**2 + y**2), ], # Correct for x == 0 "atanh": [lambda x: 1 / (1 - x**2)], - "copysign": [_deriv_copysign, lambda x, y: 0], "cos": [lambda x: -math.sin(x)], "cosh": [math.sinh], "degrees": [lambda x: math.degrees(1)], @@ -200,7 +198,6 @@ def _deriv_pow_1(x, y): "erfc": [lambda x: -math.exp(-(x**2)) * erf_coef], "exp": [math.exp], "expm1": [math.exp], - "fabs": [_deriv_fabs], "hypot": [lambda x, y: x / math.hypot(x, y), lambda x, y: y / math.hypot(x, y)], "log": [log_der0, lambda x, y: -math.log(x, y) / y / math.log(y)], "log10": [lambda x: 1 / x / math.log(10)], @@ -291,10 +288,6 @@ def wrapped_func(*args, **kwargs): # Only for Python 2.6+: -# For drop-in compatibility with the math module: -factorial = math.factorial -non_std_wrapped_funcs.append("factorial") - # We wrap math.fsum @@ -323,136 +316,6 @@ def wrapped_fsum(): fsum = wrapped_fsum() non_std_wrapped_funcs.append("fsum") -########## - -# Some functions that either return multiple arguments (modf, frexp) -# or take some non-float arguments (which should not be converted to -# numbers with uncertainty). - -# ! The arguments have the same names as in the math module -# documentation, so that the docstrings are consistent with them. - - -@uncert_core.set_doc(math.modf.__doc__) -def modf(x): - """ - Version of modf that works for numbers with uncertainty, and also - for regular numbers. - """ - - # The code below is inspired by uncert_core.wrap(). It is - # simpler because only 1 argument is given, and there is no - # delegation to other functions involved (as for __mul__, etc.). - - aff_func = to_affine_scalar(x) # Uniform treatment of all numbers - - (frac_part, int_part) = math.modf(aff_func.nominal_value) - - if aff_func._linear_part: # If not a constant - # The derivative of the fractional part is simply 1: the - # linear part of modf(x)[0] is the linear part of x: - return (AffineScalarFunc(frac_part, aff_func._linear_part), int_part) - else: - # This function was not called with an AffineScalarFunc - # argument: there is no need to return numbers with uncertainties: - return (frac_part, int_part) - - -many_scalars_to_scalar_funcs.append("modf") - - -@uncert_core.set_doc(math.ldexp.__doc__) -def ldexp(x, i): - # Another approach would be to add an additional argument to - # uncert_core.wrap() so that some arguments are automatically - # considered as constants. 
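# Hedged migration sketch, not part of the patch: the locally constant helpers
# removed from umath (ceil, copysign, fabs, factorial, floor, fmod, frexp,
# ldexp, modf, trunc) are no longer available; one option is to apply the math
# versions to the nominal value directly.
import math
from uncertainties import ufloat

x = ufloat(3.7, 0.2)
print(math.floor(x.nominal_value))   # integer part of the nominal value
print(math.frexp(x.nominal_value))   # (mantissa, exponent) of the nominal value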
- - aff_func = to_affine_scalar(x) # y must be an integer, for math.ldexp - - if aff_func._linear_part: - return AffineScalarFunc( - math.ldexp(aff_func.nominal_value, i), - LinearCombination([(2**i, aff_func._linear_part)]), - ) - else: - # This function was not called with an AffineScalarFunc - # argument: there is no need to return numbers with uncertainties: - - # aff_func.nominal_value is not passed instead of x, because - # we do not have to care about the type of the return value of - # math.ldexp, this way (aff_func.nominal_value might be the - # value of x coerced to a difference type [int->float, for - # instance]): - return math.ldexp(x, i) - - -many_scalars_to_scalar_funcs.append("ldexp") - - -@uncert_core.set_doc(math.frexp.__doc__) -def frexp(x): - """ - Version of frexp that works for numbers with uncertainty, and also - for regular numbers. - """ - - # The code below is inspired by uncert_core.wrap(). It is - # simpler because only 1 argument is given, and there is no - # delegation to other functions involved (as for __mul__, etc.). - - aff_func = to_affine_scalar(x) - - if aff_func._linear_part: - (mantissa, exponent) = math.frexp(aff_func.nominal_value) - return ( - AffineScalarFunc( - mantissa, - # With frexp(x) = (m, e), x = m*2**e, so m = x*2**-e - # and therefore dm/dx = 2**-e (as e in an integer that - # does not vary when x changes): - LinearCombination([2**-exponent, aff_func._linear_part]), - ), - # The exponent is an integer and is supposed to be - # continuous (errors must be small): - exponent, - ) - else: - # This function was not called with an AffineScalarFunc - # argument: there is no need to return numbers with uncertainties: - return math.frexp(x) - - -non_std_wrapped_funcs.append("frexp") - -# Deprecated functions - -deprecated_functions = [ - "ceil", - "copysign", - "fabs", - "factorial", - "floor", - "fmod", - "frexp", - "ldexp", - "modf", - "trunc", -] - -for function_name in deprecated_functions: - message = ( - f"umath.{function_name}() is deprecated. It will be removed in a future " - f"release." - ) - setattr( - this_module, - function_name, - uncert_core.deprecation_wrapper( - getattr(this_module, function_name), - message, - ), - ) - ############################################################################### # Exported functions: diff --git a/uncertainties/unumpy/core.py b/uncertainties/unumpy/core.py index 714f1729..72a65328 100644 --- a/uncertainties/unumpy/core.py +++ b/uncertainties/unumpy/core.py @@ -15,7 +15,6 @@ from builtins import range import sys import inspect -from warnings import warn # 3rd-party modules: import numpy @@ -27,12 +26,9 @@ __all__ = [ # Factory functions: "uarray", - "umatrix", # Utilities: "nominal_values", "std_devs", - # Classes: - "matrix", ] ############################################################################### @@ -72,22 +68,6 @@ ) -def unumpy_to_numpy_matrix(arr): - """ - If arr in a unumpy.matrix, it is converted to a numpy.matrix. - Otherwise, it is returned unchanged. - """ - msg = ( - "the uncertainties.unumpy.unumpy_to_numpy_matrix function is deprecated. It " - "will be removed in a future release." - ) - warn(msg, FutureWarning) - if isinstance(arr, matrix): - return arr.view(numpy.matrix) - else: - return arr - - def nominal_values(arr): """ Return the nominal values of the numbers in NumPy array arr. @@ -96,13 +76,9 @@ def nominal_values(arr): class from this module) are passed through untouched (because a numpy.array can contain numbers with uncertainties and pure floats simultaneously). 
- - If arr is of type unumpy.matrix, the returned array is a - numpy.matrix, because the resulting matrix does not contain - numbers with uncertainties. """ - return unumpy_to_numpy_matrix(to_nominal_values(arr)) + return to_nominal_values(arr) def std_devs(arr): @@ -113,13 +89,9 @@ def std_devs(arr): class from this module) are passed through untouched (because a numpy.array can contain numbers with uncertainties and pure floats simultaneously). - - If arr is of type unumpy.matrix, the returned array is a - numpy.matrix, because the resulting matrix does not contain - numbers with uncertainties. """ - return unumpy_to_numpy_matrix(to_std_devs(arr)) + return to_std_devs(arr) ############################################################################### @@ -466,11 +438,6 @@ def wrapped_func(array_like, *args, **kwargs): numpy.vectorize(uncert_core.LinearCombination)(derivatives), ) - # NumPy matrices that contain numbers with uncertainties are - # better as unumpy matrices: - if isinstance(result, numpy.matrix): - result = result.view(matrix) - return result return wrapped_func @@ -600,84 +567,6 @@ def pinv(array_like, rcond=pinv_default): """ )(pinv) -########## Matrix class - - -class matrix(numpy.matrix): - # The name of this class is the same as NumPy's, which is why it - # does not follow PEP 8. - """ - Class equivalent to numpy.matrix, but that behaves better when the - matrix contains numbers with uncertainties. - """ - - def __init__(self, *args, **kwargs): - warn( - "the uncertainties.unumpy.matrix() class is deprecated. It will be " - "removed in a future release.", - FutureWarning, - ) - super().__init__() - - def __rmul__(self, other): - # ! NumPy's matrix __rmul__ uses an apparently restrictive - # dot() function that cannot handle the multiplication of a - # scalar and of a matrix containing objects (when the - # arguments are given in this order). We go around this - # limitation: - if numpy.isscalar(other): - return numpy.dot(self, other) - else: - return numpy.dot(other, self) # The order is important - - def getI(self): - """Matrix inverse or pseudo-inverse.""" - m, n = self.shape - return (inv if m == n else pinv)(self) - - I = numpy.matrix.I.getter(getI) # noqa - - # !!! The following function is not in the official documentation - # of the module. Maybe this is because arrays with uncertainties - # do not have any equivalent in this module, and they should be - # the first ones to have such methods? - @property - def nominal_values(self): - """ - Nominal value of all the elements of the matrix. - """ - return nominal_values(self) - - # !!! The following function is not in the official documentation - # of the module. Maybe this is because arrays with uncertainties - # do not have any equivalent in this module, and they should be - # the first ones to have such methods? - @property - def std_devs(self): - return numpy.matrix(std_devs(self)) - - -def umatrix(nominal_values, std_devs=None): - """ - Constructs a matrix that contains numbers with uncertainties. - - The arguments are the same as for uarray(...): nominal values, and - standard deviations. - - The returned matrix can be inverted, thanks to the fact that it is - a unumpy.matrix object instead of a numpy.matrix one. - """ - msg = ( - "the uncertainties.unumpy.umatrix function is deprecated. It will be removed in " - "a future release." 
- ) - warn(msg, FutureWarning) - - if std_devs is None: # Obsolete, single tuple argument call - raise TypeError("umatrix() should be called with two arguments.") - - return uarray(nominal_values, std_devs).view(matrix) - ###############################################################################