Merge pull request #26 from p2p-ld/tests-linkml

[tests] Test against linkml numpydantic generator
Jonny Saunders 2024-09-25 17:46:21 -07:00 committed by GitHub
commit b7f7140ec8
GPG key ID: B5690EEEBB952194
8 changed files with 597 additions and 425 deletions

.github/workflows/tests-linkml.yml (vendored, new file, 67 lines)

@@ -0,0 +1,67 @@
name: LinkML Tests

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

jobs:
  test-linkml:
    strategy:
      matrix:
        platform: ["ubuntu-latest", "macos-latest", "windows-latest"]
        python-version: ["3.9", "3.12"]
    runs-on: ${{ matrix.platform }}
    steps:
      - name: Checkout LinkML
        uses: actions/checkout@v4
        with:
          repository: linkml/linkml
          path: linkml
          ref: main
          fetch-depth: 0
      - name: Checkout numpydantic
        uses: actions/checkout@v4
        with:
          path: numpydantic
      - name: Install poetry
        run: pipx install poetry
      - name: Install dynamic versioning plugin
        run: poetry self add "poetry-dynamic-versioning[plugin]"
      - name: Set up python
        uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
          cache: poetry
          cache-dependency-path: |
            linkml/poetry.lock
      - name: Add checked out numpydantic to poetry deps
        working-directory: linkml
        run: poetry add '../numpydantic' --python='>=3.9' --editable
      - name: Install dependencies
        working-directory: linkml
        run: poetry install --no-interaction -E tests
      - name: Force uninstall and reinstall
        working-directory: linkml
        run: |
          poetry run pip uninstall -y numpydantic
          poetry run pip install -e ../numpydantic
      - name: Print numpydantic version and path
        working-directory: linkml
        run: poetry run python -c 'import numpydantic; from importlib.metadata import version; print(numpydantic.__file__); print(version("numpydantic"))'
      - name: Run LinkML numpydantic tests
        working-directory: linkml
        run: poetry run python -m pytest -m pydanticgen_npd


@@ -11,6 +11,7 @@ on:
 jobs:
   test:
     strategy:
+      fail-fast: false
       matrix:
         platform: ["ubuntu-latest", "macos-latest", "windows-latest"]
         numpy-version: ["<2.0.0", ">=2.0.0"]


@@ -4,6 +4,32 @@
 ### 1.6.*

+#### 1.6.2 - 24-09-25
+
+Very minor bugfix and CI release
+
+PR: https://github.com/p2p-ld/numpydantic/pull/26
+
+**Bugfix**
+- h5py v3.12.0 broke file locking, so a temporary maximum version cap was added
+  until that is resolved. See [`h5py/h5py#2506`](https://github.com/h5py/h5py/issues/2506)
+  and [`#27`](https://github.com/p2p-ld/numpydantic/issues/27)
+- The `_relativize_paths` function used in roundtrip dumping was incorrectly
+  relativizing paths that are intended to refer to locations within a dataset,
+  rather than to a file. This, along with some Windows-specific bugs, was fixed
+  so that directories that exist but sit directly below the filesystem root
+  (like `/data`) are excluded. If this becomes a problem, we will have to make
+  the relativization system a bit more robust by specifically enumerating which
+  path-like things are *not* intended to be paths.
+
+**CI**
+- `numpydantic` was added as an array range generator in `linkml`
+  ([`linkml/linkml#2178`](https://github.com/linkml/linkml/pull/2178)),
+  so tests were added to ensure that changes to `numpydantic` don't break
+  linkml array range generation. `numpydantic`'s tests are naturally a
+  superset of the behavior tested in `linkml`, but this is a good
+  paranoia check in case we drift substantially (which shouldn't happen).
+
 #### 1.6.1 - 24-09-23 - Support Union Dtypes
 It's now possible to do this, like it always should have been


@@ -62,6 +62,31 @@
 each interface, and that work is also ongoing. Once the test suite reaches
 maturity, it should be possible for any downstream interfaces to simply use those to
 ensure they are compatible with the latest version.

+## Platform/Dependency Versions
+
+### Python Version Support
+
+Numpydantic will support all versions of Python that have not reached end-of-life
+status according to the official Python release cycle schedule:
+https://devguide.python.org/versions/
+
+Support for a Python version is dropped, without a deprecation warning, in the
+first release after that version reaches end-of-life status.
+Support for new Python versions will be targeted as soon as possible once they
+leave the pre-release stage.
+
+Tests are run on Linux (Ubuntu) against the latest patch release of every active
+Python minor version, and on macOS and Windows against the latest patch release
+of the oldest and newest active minor versions (i.e. if Python 3.9, 3.10, 3.11,
+and 3.12 are active, the macOS and Windows tests run against 3.9 and 3.12).
+
+### Numpy Version Support
+
+Numpydantic is currently tested against:
+
+- the latest numpy version
+- the last numpy version <2.0.0
+
 ## Release Schedule
 There is no release schedule. Versions are released according to need and available labor.

pdm.lock (864 lines changed; diff suppressed because it is too large)


@@ -1,6 +1,6 @@
 [project]
 name = "numpydantic"
-version = "1.6.1"
+version = "1.6.2"
 description = "Type and shape validation and serialization for arbitrary array types in pydantic models"
 authors = [
     {name = "sneakers-the-rat", email = "sneakers-the-rat@protonmail.com"},
@@ -53,7 +53,8 @@ dask = [
     "dask>=2024.4.0",
 ]
 hdf5 = [
-    "h5py>=3.10.0"
+    "h5py>=3.10.0,<3.12.0;sys_platform!='darwin'",
+    "h5py>=3.10.0;sys_platform=='darwin'",
 ]
 video = [
     "opencv-python>=4.9.0.80",
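The split `h5py` requirement above uses PEP 508 environment markers: everywhere except macOS, the version is capped below 3.12.0 (the file-locking regression), while macOS keeps only the floor. A minimal sketch of the resulting per-platform constraint, using a hypothetical helper name:

```python
def h5py_requirement(platform: str) -> str:
    """Hypothetical helper mirroring the pyproject markers: which h5py
    constraint applies for a given sys.platform value."""
    if platform != "darwin":
        # file locking broke in h5py 3.12.0 (h5py/h5py#2506), so cap it
        return "h5py>=3.10.0,<3.12.0"
    return "h5py>=3.10.0"
```

At install time pip evaluates the markers itself; this function only illustrates which constraint each platform ends up with.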


@@ -64,14 +64,20 @@ def _relativize_paths(
     ``relative_to`` directory, if provided in the context
     """
     relative_to = Path(relative_to).resolve()
-    # pdb.set_trace()

     def _r_path(v: Any) -> Any:
         if not isinstance(v, (str, Path)):
             return v
         try:
             path = Path(v)
-            if not path.exists():
+            resolved = path.resolve()
+            # skip things that are pathlike but either don't exist
+            # or that are at the filesystem root (eg like /data)
+            if (
+                not path.exists()
+                or (resolved.is_dir() and str(resolved.parent) == resolved.anchor)
+                or relative_to.anchor != resolved.anchor
+            ):
                 return v
             return str(relative_path(path, relative_to))
         except (TypeError, ValueError):
@@ -82,6 +88,8 @@ def _relativize_paths(

 def _absolutize_paths(value: dict, skip: Iterable = tuple()) -> dict:
     def _a_path(v: Any) -> Any:
+        if not isinstance(v, (str, Path)):
+            return v
         try:
             path = Path(v)
             if not path.exists():
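For context on what the happy path produces for values that survive the exclusion check, the relativization is essentially `os.path.relpath`-style resolution. A sketch, not numpydantic's own `relative_path` implementation:

```python
import os

# An absolute target becomes relative to a base directory, walking up with
# ".." as needed; dataset-internal strings like "/data" never reach this step
# because the exclusion check returns them unchanged first.
base = "/home/user/project/out"
target = "/home/user/project/data/file.h5"
print(os.path.relpath(target, base))  # -> ../data/file.h5 (on POSIX)
```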


@@ -10,7 +10,7 @@ from typing import Callable

 import numpy as np
 import json

-from numpydantic.serialization import _walk_and_apply
+from numpydantic.serialization import _walk_and_apply, _relativize_paths, relative_path

 pytestmark = pytest.mark.serialization

@@ -56,7 +56,7 @@ def test_relative_to_path(hdf5_at_path, tmp_output_dir, model_blank):
     """
     out_path = tmp_output_dir / "relative.h5"
     relative_to_path = Path(__file__) / "fake_dir" / "sub_fake_dir"
-    expected_path = "../../../__tmp__/relative.h5"
+    expected_path = Path("../../../__tmp__/relative.h5")

     hdf5_at_path(out_path)
     model = model_blank(array=(out_path, "/data"))

@@ -69,13 +69,29 @@ def test_relative_to_path(hdf5_at_path, tmp_output_dir, model_blank):
     # should not be absolute
     assert not Path(file).is_absolute()
     # should be expected path and reach the file
-    assert file == expected_path
+    assert Path(file) == expected_path
     assert (relative_to_path / file).resolve() == out_path.resolve()

     # we shouldn't have touched `/data` even though it is pathlike
     assert data["path"] == "/data"


+def test_relative_to_root_dir():
+    """
+    The relativize function should ignore paths that are directories
+    beneath the root directory (eg `/data`) even if they exist
+    """
+    # python 3.9 compat, which can't use negative indices
+    test_path = [p for p in Path(__file__).resolve().parents][-2]
+    test_data = {"some_field": str(test_path)}
+
+    walked = _relativize_paths(test_data, relative_to=".")
+
+    assert str(relative_path(test_path, Path(".").resolve())) != str(test_path)
+    assert walked["some_field"] == str(test_path)
+
+
 def test_absolute_path(hdf5_at_path, tmp_output_dir, model_blank):
     """
     When told, we make paths absolute