User3574/C_T6 (forked from interTwin-eu/itwinai)

WP6 task concerning the development of AI and ML workflows to support data-driven digital twins
PoC for AI-centric digital twin workflows

GitHub Super-Linter

See the latest version of our docs for a quick overview of this platform for advanced AI/ML workflows in digital twin applications.

If you want to integrate a new use case, you can follow this step-by-step guide.

Installation

Requirements:

  • Linux or macOS environment (Windows has never been tested).

Micromamba installation

To manage Conda environments we use micromamba, a lightweight version of Conda.

We suggest following the manual installation guide.

Note that Micromamba can consume a lot of disk space when building environments, because downloaded packages are cached on the local filesystem. To clear the cache, run micromamba clean -a. By default, Micromamba keeps its data under $HOME. On some systems, however, $HOME has limited storage space, and it is wiser to install Micromamba in a location with more room by changing the $MAMBA_ROOT_PREFIX variable. The complete Linux installation example below overrides the default $MAMBA_ROOT_PREFIX:

cd $HOME

# Download micromamba (this command targets Linux x86_64 systems;
# pick the right build for your platform!)
curl -Ls https://micro.mamba.pm/api/micromamba/linux-64/latest | tar -xvj bin/micromamba

# Install micromamba in a custom directory
MAMBA_ROOT_PREFIX='my-mamba-root'
./bin/micromamba shell init -s bash -p "$MAMBA_ROOT_PREFIX"

# To invoke micromamba from a Makefile, you need to add it explicitly to $PATH
echo 'PATH="$(dirname $MAMBA_EXE):$PATH"' >> ~/.bashrc

Reference: Micromamba installation guide.
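As a minimal sketch of the relocation described above (the my-mamba-root path is a placeholder and should match whatever prefix you chose), you can export the custom root for the current session and later reclaim cache space:

```shell
# Point micromamba at the roomier root for this session
# (the path below is a placeholder; use the prefix you chose)
export MAMBA_ROOT_PREFIX="$HOME/my-mamba-root"
mkdir -p "$MAMBA_ROOT_PREFIX"
echo "Micromamba root: $MAMBA_ROOT_PREFIX"

# Reclaim disk space later by clearing the package cache:
# micromamba clean -a
```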

Workflow orchestrator

Install the (custom) virtual environment for the workflow orchestrator:

source ~/.bashrc
# Create local env
make

# Activate env
micromamba activate ./.venv

To run tests on workflows, use:

# Activate env
micromamba activate ./.venv

pytest tests/

Documentation folder

Documentation for this repository is maintained under the ./docs folder. If you are using code from a previous release, you can build the docs webpage locally by following these instructions.

Development env setup

Requirements:

  • Linux or macOS environment (Windows has never been tested).
  • Micromamba: see the installation instructions above.
  • VS Code, for development.

Installation:

make dev-env

# Activate env
micromamba activate ./.venv-dev

To run tests on the itwinai package:

# Activate env
micromamba activate ./.venv-dev

pytest tests/ai/

AI environment setup

Requirements:

  • Linux or macOS environment (Windows has never been tested).
  • Micromamba: see the installation instructions above.
  • VS Code, for development.

NOTE: this environment is set up automatically when a workflow is executed!

However, you can also set it up explicitly with:

make ai-env

# Activate env
micromamba activate ./ai/.venv-pytorch

Updating the environment files

The files under ai/env-files/ fall into two categories:

  • Simple environment definitions, such as pytorch-env.yml and pytorch-env-gpu.yml.
  • Lock files, such as pytorch-lock.yml and pytorch-gpu-lock.yml, generated by conda-lock.
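For orientation, a "simple" environment definition looks roughly like the sketch below. This is a hypothetical reconstruction: the package names, versions, and channels are illustrative, not the repository's actual pins, so check ai/env-files/pytorch-env.yml for the real contents.

```yaml
# Hypothetical sketch of ai/env-files/pytorch-env.yml -- illustrative only
name: pytorch-env
channels:
  - conda-forge
dependencies:
  - python=3.10
  - pytorch
  - pip
```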

When you install the AI environment, always install it from the lock file!

When the "simple" environment file (e.g., pytorch-env.yml) changes, lock it with conda-lock:

micromamba activate ./.venv

make lock-ai
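Under the hood, the lock target presumably invokes conda-lock along these lines. This is a sketch: the file names and the platform list are assumptions, so check the Makefile for the real recipe.

```shell
# Hypothetical expansion of `make lock-ai` -- paths and platforms are guesses;
# see the Makefile for the actual command.
if command -v conda-lock >/dev/null 2>&1; then
    # Re-solve the environment and write an updated lock file
    conda-lock -f ai/env-files/pytorch-env.yml -p linux-64 \
        --lockfile ai/env-files/pytorch-lock.yml
else
    echo "conda-lock not on PATH; activate the ./.venv environment first"
fi
```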
