Set up a dev environment, run the app, and run the tests

This is a step-by-step guide to get started with NOMAD development. You will clone all sources, set up a Python and node environment, install all necessary dependencies, run the infrastructure in development mode, learn to run our test suites, and set up Visual Studio Code for NOMAD development.

This is not about working with the NOMAD Python package. You can find the nomad-lab documentation here.

Clone the sources

If not already done, you should clone nomad. If you have an account at the MPCDF GitLab, you can clone with the git URL:

git clone git@gitlab.mpcdf.mpg.de:nomad-lab/nomad-FAIR.git nomad

Otherwise, clone using HTTPS URL:

git clone https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR.git nomad

Then change directory to nomad:

cd nomad

There are several branches in the repository. The master branch contains the latest released version, but there is also a develop branch (new features) and a release branch (hotfixes). There are also tags for each version called vX.X.X. Check out the branch you want to work on.

git checkout develop
The development branches are protected, so you should create a new branch for your changes.
git checkout -b <my-branch-name>
This branch can be pushed to the repo and later merged into the relevant branch.

Install sub-modules

NOMAD is based on Python modules from the NOMAD-coe project, including the parsers, python-common, and the meta-info. These modules are maintained in their own GitLab/git repositories. To clone and initialize them, run:

git submodule update --init

Installation

Setup a Python environment

You should work in a Python virtual environment.

pyenv

The NOMAD code currently targets Python 3.9. If your host machine has an older version installed, you can use pyenv to run Python 3.9 in parallel with your system's Python.

virtualenv

Create a virtual environment. It allows you to keep nomad and its dependencies separate from your system's Python installation. Make sure that the virtual environment is based on Python 3.9 or higher. Use the built-in venv or, alternatively, [virtualenv](https://pypi.org/project/virtualenv/).

python3 -m venv .pyenv
source .pyenv/bin/activate
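
After activating the environment, a quick stdlib check confirms that you are inside a virtual environment rather than the system interpreter (a sketch for illustration, not part of the NOMAD tooling):

```python
import sys

# In an active venv/virtualenv, sys.prefix points at the environment,
# while sys.base_prefix still points at the base interpreter.
in_venv = sys.prefix != sys.base_prefix
print("virtual environment active:", in_venv)
print("python version:", sys.version_info[:3])
```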

conda

If you are a conda user, there is an equivalent setup, but you have to install pip and the right Python version while creating the environment.

conda create --name nomad_env pip python=3.9
conda activate nomad_env

To install libmagic for conda, you can use (other channels might also work):

conda install -c conda-forge --name nomad_env libmagic

Upgrade pip

Make sure you have the most recent version of pip:

pip install --upgrade pip

Install missing system libraries (e.g. on macOS)

Even though the NOMAD infrastructure is written in Python, there is a C library required by one of our Python dependencies: libmagic, which is missing on some systems. Libmagic is used to determine the MIME type of files. It is installed on most Unix/Linux systems by default. On macOS it can be installed with Homebrew:

brew install libmagic
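
To check that libmagic can actually be loaded, you can try it from Python (a sketch; it assumes the python-magic package, one of NOMAD's dependencies, is installed in your environment):

```python
try:
    import magic  # python-magic, a thin wrapper around the libmagic C library
except ImportError:
    magic = None  # python-magic not installed in this environment

if magic is not None and hasattr(magic, "from_buffer"):
    # Determine the MIME type of an in-memory buffer via libmagic.
    print(magic.from_buffer(b"hello world", mime=True))
else:
    print("python-magic is not installed")
```

If the import succeeds but the call fails with a loader error, libmagic itself is still missing from the system.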

If you are using a Mac with Apple Silicon, we recommend that you use Rosetta and Homebrew for Intel, and install and use an Intel-based Python. The second answer in this Stack Overflow post describes how to use both the Apple and Intel Homebrew installations simultaneously.

Install nomad

The following command can be used to install all dependencies of all submodules and nomad itself.

./scripts/setup_dev_env.sh

Installation details

Here is a more detailed rundown of the installation steps.

First we ensure that all submodules are up-to-date:

git submodule update --init --recursive

Any previous build is cleaned:

rm -rf nomad/app/static/docs
rm -rf nomad/app/static/gui
rm -rf site

All the requirements needed for development (including submodule requirements) are installed:

pip install --prefer-binary -r requirements-dev.txt

Next we install the nomad package itself (including all extras). The -e option will install NOMAD with symbolic links that allow you to change the code without having to reinstall after each change.

pip install -e .[parsing,infrastructure,dev]

If pip tries to compile sources and runs into errors, it can be told to prefer binary versions:

```sh
pip install -e .[parsing,infrastructure,dev] --prefer-binary
```

The NOMAD GUI requires a static .env file that can be generated with:

```sh
python -m nomad.cli dev gui-env > gui/.env.development
```

This file includes some of the server details that are needed so that the
GUI can make the initial connection properly. If, for example, you change the server
address in your NOMAD configuration file, you will need to regenerate
this .env file. In production this file will be overridden.
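
The generated file is a plain KEY=VALUE list. If you ever need to inspect it programmatically, a minimal parser could look like this (a sketch; the example key is hypothetical, the real content is produced by `nomad dev gui-env`):

```python
def parse_env(text):
    """Parse simple KEY=VALUE lines, ignoring blanks and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

# Hypothetical content for illustration only:
sample = "REACT_APP_BACKEND_URL=http://localhost:8000\n# a comment\n"
print(parse_env(sample))
```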

In addition, you have to do some more steps to prepare your working copy to run all the tests. See below.

Run the infrastructure

Install docker

You need to install docker. Docker nowadays comes with docker compose built in. Previously, you needed to install the standalone docker-compose.

Run required 3rd party services

To run NOMAD, some 3rd party services are needed:

  • Elasticsearch: NOMAD's search and analytics engine
  • MongoDB: used to store processing state
  • RabbitMQ: a task queue used to distribute work in a cluster

All 3rd party services should be run via docker-compose (see below). Keep in mind that docker-compose configures all services in a way that mirrors the configuration of the python code in nomad/config.py and the gui config in gui/.env.development.

The default virtual memory for Elasticsearch is likely to be too low. On Linux, you can run the following command as root:

sysctl -w vm.max_map_count=262144
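
To check the current value before changing it, you can read it from procfs (Linux-specific; a sketch for illustration, not part of the NOMAD tooling):

```python
from pathlib import Path

# Elasticsearch needs vm.max_map_count to be at least 262144.
path = Path("/proc/sys/vm/max_map_count")
if path.exists():
    current = int(path.read_text())
    print(current, "(ok)" if current >= 262144 else "(too low)")
else:
    print("/proc/sys/vm/max_map_count not found (not a Linux host?)")
```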

To set this value permanently, see here. Then, you can run all services with:

cd ops/docker-compose/infrastructure
docker compose up -d mongo elastic rabbitmq
cd ../../..

If your system almost ran out of disk space, Elasticsearch enforces a read-only index block (read more). After clearing up disk space, you need to reset the block manually using the following command:

curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": false}'
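
The same reset can be scripted, e.g. as part of a maintenance script. The sketch below only builds the request using the Python stdlib; it assumes Elasticsearch listens on localhost:9200 as in the docker-compose setup:

```python
import json
import urllib.request

def build_reset_request(host="http://localhost:9200"):
    """Build the PUT request that clears the read-only block on all indices."""
    body = json.dumps({"index.blocks.read_only_allow_delete": False}).encode()
    return urllib.request.Request(
        f"{host}/_all/_settings",
        data=body,
        headers={"Content-Type": "application/json"},
        method="PUT",
    )

# To actually send it against a running cluster:
# urllib.request.urlopen(build_reset_request())
```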

Note that the Elasticsearch service has a known problem of quickly hitting the virtual memory limits of your OS. If the Elasticsearch container does not run correctly or keeps crashing, try increasing the virtual memory limits as shown above.

To shut down everything, just ctrl-c the running output. If you started everything in daemon mode (-d), use:

docker compose down

Usually these services are used only by NOMAD, but sometimes you also need to check something or do some manual steps. You can access MongoDB and Elasticsearch via your preferred tools; just make sure to use the right ports.

Run NOMAD

nomad.yaml

Before you run NOMAD for development purposes, you should configure it to use the test realm of our user management system. By default, NOMAD will use the fairdi_nomad_prod realm. Create a nomad.yaml file in the root folder:

keycloak:
  realm_name: fairdi_nomad_test

You might also want to exclude some of the default plugins, or only include the plugins you need. Especially plugins with slow start-up and import times due to the instantiation of large schemas (e.g. nexus creates a couple thousand definitions for 70+ applications) can often be excluded.

plugins:
  exclude:
    - parsers/nexus

App and Worker

NOMAD consists of the NOMAD app/API, a worker, and the GUI. You can run the app and the worker with the NOMAD CLI. These commands will run the services and display their log output. You should open them in separate shells as they run continuously. They do not watch for code changes, so you have to restart them manually.

nomad admin run app
nomad admin run worker

Or both together in one process:

nomad admin run appworker

On macOS you might run into multiprocessing errors. They can be solved as described here.

The app will run at port 8000 by default.

To run the worker directly with celery, do (from the root):

celery -A nomad.processing worker -l info

If you run the GUI on its own (e.g. with the React dev server below), you also need to start the app manually. The GUI and its dependencies run on node and the yarn dependency manager. Read their documentation on how to install them for your platform.

cd gui
yarn
yarn start

JupyterHub

NOMAD also has a built-in JupyterHub that is used to launch remote tools (e.g. Jupyter notebooks).

To run the JupyterHub, some additional configuration might be necessary:

north:
    hub_connect_ip: 'host.docker.internal'
    jupyterhub_crypt_key: '<crypt key>'

On Windows systems, you might have to activate further specific functionality:

north:
    hub_connect_ip: 'host.docker.internal'
    hub_connect_url: 'http://host.docker.internal:8081'
    windows: true
    jupyterhub_crypt_key: '<crypt key>'

  • If you are not on Linux, you need to configure how JupyterHub can reach your host network from docker containers. For Windows and macOS you need to set hub_connect_ip to host.docker.internal. For Linux you can leave it out and use the default 172.17.0.1, unless you have changed your docker configuration.
  • You have to generate a crypt key with openssl rand -hex 32.
  • You might need to install configurable-http-proxy.
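
If openssl is not at hand, an equivalent key can be generated with Python's stdlib (a sketch; the documented way is the openssl command above):

```python
import secrets

# Like `openssl rand -hex 32`: 32 random bytes encoded as 64 hex characters.
crypt_key = secrets.token_hex(32)
print(crypt_key)
```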

The configurable-http-proxy comes as a node package. See node for how to install npm. The proxy can be installed globally with:

npm install -g configurable-http-proxy

The JupyterHub is a separate application. You can run it similarly to the other services:

nomad admin run hub

To run the JupyterHub directly, do (from the root):

jupyterhub -f nomad/jupyterhub_config.py --port 9000

Running tests

Backend tests

To run the tests some additional settings and files are necessary that are not part of the code base.

You have to provide static files to serve the docs and NOMAD distribution:

./scripts/generate_docs_artifacts.sh
rm -rf site && mkdocs build && mv site nomad/app/static/docs

You need to have the infrastructure partially running: elastic, rabbitmq. The rest should be mocked or provided by the tests. Make sure that you do not run any worker, as they will fight for tasks in the queue.

cd ops/docker-compose/infrastructure
docker compose up -d elastic rabbitmq
cd ../../..
pytest -svx tests

We use pylint, pycodestyle, and mypy to ensure code quality. To run those:

nomad dev qa --skip-tests

To run all tests and code qa:

nomad dev qa

This mimics the tests and checks that the GitLab CI/CD will perform.

Frontend tests

We use testing-library to implement our GUI tests and testing-library itself uses jest to run the tests. Tests are written in *.spec.js files that accompany the implementation. Tests should focus on functionality, not on implementation details: testing-library is designed to enforce this kind of testing.

Note

When testing HTML output, the elements are rendered using jsdom. This is not completely identical to using an actual browser (it does not support e.g. WebGL), but in practice it is realistic enough for the majority of tests.

Test structure

We have adopted a pytest-like structure for organizing the test utilities: each source code folder may contain a conftest.js file that contains utilities that are relevant for testing the code in that particular folder. These utilities can usually be placed into the following categories:

  • Custom renders: When testing React components, the render-function is used to display them on the test DOM. Typically your components require some parts of the infrastructure to work properly, which is achieved by wrapping your component with other components that provide a context. Custom render functions can do this automatically for you. E.g. the default render as exported from src/components/conftest.js wraps your components with an infrastructure that is very similar to the production app. See here for more information.
  • Custom queries: See here for more information.
  • Custom expects: These are reusable functions that perform actual tests using the expect-function. Whenever the same tests are performed by several *.spec.js files, you should formalize these common tests into an expect*-function and place it in a relevant conftest.js file.

Often your components will need to communicate with the API during tests. One should generally avoid manually created mocks for the API traffic, and instead prefer API responses that originate from an actual API call during testing. Manually created mocks require a lot of work to create and keep up to date, and true integration tests are impossible without live communication with an API. To simplify the API communication during testing, you can use the startAPI+closeAPI functions, which will prepare the API traffic for you. A simple example could look like this:

import React from 'react'
import { waitFor } from '@testing-library/dom'
import { startAPI, closeAPI, screen } from '../../conftest'
import { renderSearchEntry, expectInputHeader } from '../conftest'

test('periodic table shows the elements retrieved through the API', async () => {
  startAPI('<state_name>', '<snapshot_name>')
  renderSearchEntry(...)
  expect(...)
  closeAPI()
})

Here the important parameters are:

  • <state_name>: Specifies an initial backend configuration for this test. These states are defined as Python functions stored in nomad-FAIR/tests/states; an example is given below. These functions may e.g. prepare several uploads, entries, datasets, etc. for the test.
  • <snapshot_name>: Specifies a filepath for reading/recording pre-recorded API traffic.

An example of a simple test state could look like this:

from nomad import infrastructure
from nomad.utils import create_uuid
from nomad.utils.exampledata import ExampleData

def search():
    infrastructure.setup()
    main_author = infrastructure.user_management.get_user(username="test")
    data = ExampleData(main_author=main_author)
    upload_id = create_uuid()
    data.create_upload(upload_id=upload_id, published=True, embargo_length=0)
    data.create_entry(
        upload_id=upload_id,
        entry_id=create_uuid(),
        mainfile="test_content/test_entry/mainfile.json",
        results={
            "material": {"elements": ["C", "H"]},
            "method": {},
            "properties": {}
        }
    )
    data.save()

When running in online mode (see below), this function will be executed to prepare the application backend. The closeAPI function handles cleaning the test state between successive startAPI calls: it completely wipes MongoDB, Elasticsearch, and the upload files.

Running tests

The tests can be run in two different modes. Offline testing uses pre-recorded files to mock the API traffic during testing. This allows one to run tests more quickly without a server. During online testing, the tests perform calls to a running server where a test state has been prepared. This mode can be used to perform integration tests, but also to record the snapshot files needed by the offline testing.

Offline testing

This is the way our CI pipeline runs the tests and should be used locally whenever you wish to e.g. reproduce pipeline errors or when your tests do not involve any API traffic.

  1. Ensure that the gui artifacts are up-to-date:

    ./scripts/generate_gui_test_artifacts.sh
    
    As snapshot tests do not connect to the server, the artifacts cannot be fetched dynamically from the server and static files need to be used instead.

  2. Run yarn test to run the whole suite or yarn test [<filename>] to run a specific test.

Online testing

When you wish to record API traffic for offline testing, or to perform integration tests, you will need to have a server running with the correct configuration. To do this, follow these steps:

  1. Have the docker infrastructure running: docker-compose up

  2. Have the nomad appworker running with the config found in gui/tests/nomad.yaml. This can be achieved e.g. with the command: export NOMAD_CONFIG=gui/tests/nomad.yaml; nomad admin run appworker

  3. Activate the correct python virtual environment before running the tests with yarn (yarn will run the python functions that prepare the state).

  4. Run the tests with yarn test-record [<filename>] if you wish to record a snapshot file, or yarn test-integration [<filename>] if you want to perform the test without any recording.

Build the docker image

Normally the docker image is built via a CI/CD pipeline that runs when pushing commits to NOMAD's GitLab at MPCDF. These images are distributed via NOMAD's GitLab container registry. For most purposes you would use these automatically built images.

If you want to build a custom image, e.g. to be used in your Oasis, you can run the NOMAD docker build manually. From the cloned project root, run:

docker build -t <image-name>:<image-tag> .

This will build the normal image intended for production use. There are other build targets: dev_python and dev_node. Especially dev_python might be interesting for debugging purposes as it contains all sources and dev dependencies. You can build specific targets with:

docker build --target dev_python -t <image-name>:<image-tag> .

If you want to build an image directly from a remote git repository (e.g. for a specific branch), run

DOCKER_BUILDKIT=1 docker build --build-arg BUILDKIT_CONTEXT_KEEP_GIT_DIR=1 --pull -t <image-name>:<image-tag> https://github.com/nomad-coe/nomad.git#<branch>

The buildkit parametrization ensures that the .git directory is available in the docker build context. NOMAD's build process requires the .git folder to determine the package version from version tags in the repository.

The build process installs a substantial amount of dependencies and requires multiple docker images for various build stages. Make sure that docker has at least 20 GB of storage available.

Setup your IDE

The documentation section for development guidelines (see below) provides details on how the code is organized, tested, formatted, and documented. To help you meet these guidelines, we recommend using a proper IDE for development and ditching any VIM/Emacs (mal-)practices.

We strongly recommend that all developers use Visual Studio Code, or vscode for short (this is a completely different product than Visual Studio). It is available for free for all major platforms here.

You should launch and run vscode directly from the project's root directory. The source code already contains settings for vscode in the .vscode directory. The settings contain the same setup for style checks, linters, etc. that is also used in our CI/CD pipelines. In order to actually use these features you have to make sure that they are enabled in your own User settings:

    "python.linting.pycodestyleEnabled": true,
    "python.linting.pylintEnabled": true,
    "python.linting.mypyEnabled": true,
    "python.testing.pytestEnabled": true,

The settings also include a few launch configurations for vscode's debugger. You can create your own launch configs in .vscode/launch.json (also in .gitignore).

The settings expect that you have installed a python environment at .pyenv as described in this tutorial (see above).

We also provide developers with a vscode extension designed to support the NOMAD schema language. You can generate the extension with the following command after installing nomad:

  nomad dev vscode-extension -o <path to output>

This command generates an up-to-date extension folder named nomad-vscode. You can either copy this folder into the vscode extensions folder ~/.vscode/extensions/ or create an installable package as follows:

  sudo npm install -g vsce # if vsce is not installed
  cd ./nomad-vscode
  vsce package

Then install the extension by dragging the file nomad-0.0.x.vsix and dropping it into the extensions panel of vscode.