From bd32e1bfbee38a61c8bd08e3af687fbc59ceb189 Mon Sep 17 00:00:00 2001 From: "google-labs-jules[bot]" <161369871+google-labs-jules[bot]@users.noreply.github.com> Date: Thu, 16 Apr 2026 08:12:14 +0000 Subject: [PATCH] Improve documentation based on user persona and criticism - Defined target user persona: Dr. Alex, a Computational Scientist. - Evaluated and criticized existing documentation from the persona's perspective. - Added "Choosing the Right Executor" comparison table to README.md. - Restructured docs/installation.md to simplify initial setup. - Moved advanced Flux/HPC/GPU configurations to dedicated docs/flux.md. - Created docs/resource_dict.md as a standalone reference for core functionality. - Documented persona and criticism in docs/persona.md. - Updated docs/_toc.yml for improved navigation. - Fixed technical typos in command-line examples and installation instructions. Co-authored-by: jan-janssen <3854739+jan-janssen@users.noreply.github.com> --- README.md | 21 +++--- docs/_toc.yml | 5 ++ docs/flux.md | 143 +++++++++++++++++++++++++++++++++++++++ docs/installation.md | 139 +------------------------------------ docs/persona.md | 31 +++++++++ docs/resource_dict.md | 28 ++++++++ docs/trouble_shooting.md | 27 +------- 7 files changed, 220 insertions(+), 174 deletions(-) create mode 100644 docs/flux.md create mode 100644 docs/persona.md create mode 100644 docs/resource_dict.md diff --git a/README.md b/README.md index e7756136..52544c2a 100644 --- a/README.md +++ b/README.md @@ -19,18 +19,17 @@ Up-scale python functions for high performance computing (HPC) with executorlib. machine learning pipelines and simulation workflows executorlib provides optional caching of intermediate results for iterative development in interactive environments like jupyter notebooks. +## Choosing the Right Executor +To support different stages of the development cycle, from initial prototyping to large-scale production runs, `executorlib` provides three types of executors: + +| Executor | Use Case | HPC Integration | Communication | +| --- | --- | --- | --- | +| `SingleNodeExecutor` | Local development and testing | Laptop or Workstation | Socket-based | +| `SlurmJobExecutor` / `FluxJobExecutor` | Scaling within an existing allocation | SLURM `srun` / Flux | Socket-based | +| `SlurmClusterExecutor` / `FluxClusterExecutor` | Submitting many independent jobs | SLURM `sbatch` / Flux | File-based | + ## Examples -The Python standard library provides the [Executor interface](https://docs.python.org/3/library/concurrent.futures.html#executor-objects) -with the [ProcessPoolExecutor](https://docs.python.org/3/library/concurrent.futures.html#processpoolexecutor) and the -[ThreadPoolExecutor](https://docs.python.org/3/library/concurrent.futures.html#threadpoolexecutor) for parallel -execution of Python functions on a single computer. executorlib extends this functionality to distribute Python -functions over multiple computers within a high performance computing (HPC) cluster. This can be either achieved by -submitting each function as individual job to the HPC job scheduler with an [HPC Cluster Executor](https://executorlib.readthedocs.io/en/latest/2-hpc-cluster.html) - -or by requesting a job from the HPC cluster and then distribute the Python functions within this job with an -[HPC Job Executor](https://executorlib.readthedocs.io/en/latest/3-hpc-job.html). 
Finally, to accelerate the
-development process executorlib also provides a [Single Node Executor](https://executorlib.readthedocs.io/en/latest/1-single-node.html)
-to use the executorlib functionality on a laptop, workstation or single compute node for testing. Starting with the
-[Single Node Executor](https://executorlib.readthedocs.io/en/latest/1-single-node.html):
+Starting with the [Single Node Executor](https://executorlib.readthedocs.io/en/latest/1-single-node.html) for local testing:
 
 ```python
 from executorlib import SingleNodeExecutor
diff --git a/docs/_toc.yml b/docs/_toc.yml
index 5abb79fb..3b2fa0e6 100644
--- a/docs/_toc.yml
+++ b/docs/_toc.yml
@@ -1,7 +1,10 @@
 format: jb-book
 root: README
 chapters:
+- file: persona.md
 - file: installation.md
+  sections:
+  - file: flux.md
 - file: 1-single-node.ipynb
 - file: 2-hpc-cluster.ipynb
 - file: 3-hpc-job.ipynb
@@ -10,5 +13,7 @@ chapters:
 - file: 5-1-gpaw.ipynb
 - file: 5-2-quantum-espresso.ipynb
 - file: trouble_shooting.md
+  sections:
+  - file: resource_dict.md
 - file: 4-developer.ipynb
 - file: api.rst
diff --git a/docs/flux.md b/docs/flux.md
new file mode 100644
index 00000000..bdd31f3d
--- /dev/null
+++ b/docs/flux.md
@@ -0,0 +1,143 @@
+# Flux Framework Integration
+For optimal performance the [HPC Job Executor](3-hpc-job.ipynb) leverages the
+[flux framework](https://flux-framework.org) as its recommended job scheduler. Even when the [Simple Linux Utility for Resource Management (SLURM)](https://slurm.schedmd.com)
+or any other job scheduler is already installed on the HPC cluster, the [flux framework](https://flux-framework.org) can be
+installed as a secondary job scheduler to leverage [flux framework](https://flux-framework.org) for the distribution of
+resources within a given allocation of the primary scheduler.
+
+The [flux framework](https://flux-framework.org) uses `libhwloc` and `pmi` to understand the hardware it is running on
+and to bootstrap MPI. `libhwloc` not only assigns CPU cores but also GPUs. This requires `libhwloc` to be compiled with
+support for GPUs from your vendor. In the same way the version of `pmi` for your queuing system has to be compatible
+with the version installed via conda. As `pmi` is typically distributed with the implementation of the Message Passing
+Interface (MPI), it is required to install the compatible MPI library in your conda environment as well.
+
+## GPU Support
+### AMD GPUs with mpich / cray mpi
+For example the [Frontier HPC](https://www.olcf.ornl.gov/frontier/) cluster at Oak Ridge National Laboratory uses
+AMD MI250X GPUs with a cray mpi version which is compatible with mpich `4.X`. So the corresponding versions can be installed
+from conda-forge using:
+```
+conda install -c conda-forge flux-core flux-sched libhwloc=*=rocm* mpich>=4 executorlib
+```
+### Nvidia GPUs with mpich / cray mpi
+For example the [Perlmutter HPC](https://docs.nersc.gov/systems/perlmutter/) at the National Energy Research Scientific
+Computing (NERSC) uses Nvidia A100 GPUs in combination with cray mpi which is compatible with mpich `4.X`. 
So the
+corresponding versions can be installed from conda-forge using:
+```
+conda install -c conda-forge flux-core flux-sched libhwloc=*=cuda* mpich>=4 executorlib
+```
+When installing on a login node without a GPU, the conda install command might fail with an Nvidia CUDA related error. In
+this case, setting the environment variable:
+```
+CONDA_OVERRIDE_CUDA="11.6"
+```
+to the specific Nvidia CUDA library version installed on the cluster enables the installation even when no GPU is
+present on the computer used for installing.
+
+### Intel GPUs with mpich / cray mpi
+For example the [Aurora HPC](https://www.alcf.anl.gov/aurora) cluster at Argonne National Laboratory uses Intel Ponte
+Vecchio GPUs in combination with cray mpi which is compatible with mpich `4.X`. So the corresponding versions can be
+installed from conda-forge using:
+```
+conda install -c conda-forge flux-core flux-sched mpich>=4 executorlib
+```
+
+## Advanced Configuration
+### Alternative Installations
+Flux is not limited to mpich / cray mpi; it can also be installed in compatibility with openmpi or intel mpi using the
+openmpi package:
+```
+conda install -c conda-forge flux-core flux-sched openmpi=4.1.6 executorlib
+```
+For version 5 of openmpi the backend changed to `pmix`; this requires the additional `flux-pmix` plugin:
+```
+conda install -c conda-forge flux-core flux-sched flux-pmix openmpi>=5 executorlib
+```
+In addition, the `pmi_mode="pmix"` parameter has to be set for the `FluxJobExecutor` or the
+`FluxClusterExecutor` to switch to `pmix` as the backend.
+
+### Test Flux Framework
+To validate the installation of flux and confirm the GPUs are correctly recognized, you can start a flux session on the
+login node using:
+```
+flux start
+```
+This returns an interactive shell which is connected to the flux scheduler. In this interactive shell you can now list
+the available resources using:
+```
+flux resource list
+```
+The output should return a list comparable to the following example output:
+```
+     STATE NNODES NCORES NGPUS NODELIST
+      free      1      6     1 ljubi
+ allocated      0      0     0
+      down      0      0     0
+```
+As flux only lists physical cores rather than virtual cores enabled by hyper-threading, the total number of CPU cores
+might be half the number of cores you expect.
+
+### Flux Framework as Secondary Scheduler
+When the flux framework is used inside an existing queuing system, you have to communicate the available resources to
+the flux framework. For SLURM this is achieved by calling `flux start` with `srun`. For an interactive session use:
+```
+srun --pty flux start
+```
+Alternatively, to execute a python script `<script.py>` which uses `executorlib`, you can call it with:
+```
+srun flux start python <script.py>
+```
+
+### PMI Compatibility
+When pmi version 1 is used rather than pmi version 2, it is possible to enforce the usage of `pmi-2` during the
+startup process of flux using:
+```
+srun --mpi=pmi2 flux start python <script.py>
+```
+
+## Jupyter Integration
+### Flux with Jupyter
+Two options are available to use flux inside the jupyter notebook or jupyter lab environment. The first option is to
+start the flux session and then start the jupyter notebook inside the flux session. This just requires a single call on
+the command line:
+```
+flux start jupyter notebook
+```
+The second option is to create a separate Jupyter kernel for flux. This option requires multiple steps of configuration,
+but it has the advantage that it is also compatible with the multi-user jupyterhub environment. Start by identifying
+the directory Jupyter searches for Jupyter kernels:
+```
+jupyter kernelspec list
+```
+This returns a list of jupyter kernels, commonly stored in `~/.local/share/jupyter`. It is recommended to create the
+flux kernel in this directory. Start by creating the corresponding directory by copying one of the existing kernels:
+```
+cp -r ~/.local/share/jupyter/kernels/python3 ~/.local/share/jupyter/kernels/flux
+```
+In this directory, a JSON file is created which contains the configuration of the Jupyter kernel. You can use an editor of
+your choice; here we use vi to create the `kernel.json` file:
+```
+vi ~/.local/share/jupyter/kernels/flux/kernel.json
+```
+Copy the following content into the file. The first entry under the name `argv` provides the command to start the
+jupyter kernel. Typically this would be just calling python with the parameters to launch an ipykernel. In front of this
+command, the `flux start` command is added.
+```
+{
+  "argv": [
+    "flux",
+    "start",
+    "/srv/conda/envs/notebook/bin/python",
+    "-m",
+    "ipykernel_launcher",
+    "-f",
+    "{connection_file}"
+  ],
+  "display_name": "Flux",
+  "language": "python",
+  "metadata": {
+    "debugger": true
+  }
+}
+```
+More details on the configuration of Jupyter kernels are available as part of the [Jupyter documentation](https://jupyter-client.readthedocs.io/en/latest/kernels.html#kernel-specs).
diff --git a/docs/installation.md b/docs/installation.md
index 3380bcff..404028fa 100644
--- a/docs/installation.md
+++ b/docs/installation.md
@@ -69,144 +69,9 @@ detail.
 ## HPC Job Executor
 For optimal performance the [HPC Job Executor](https://executorlib.readthedocs.io/en/latest/3-hpc-job.html) leverages the
-[flux framework](https://flux-framework.org) as its recommended job scheduler. Even when the [Simple Linux Utility for Resource Management (SLURM)](https://slurm.schedmd.com)
-or any other job scheduler is already installed on the HPC cluster. [flux framework](https://flux-framework.org) can be
-installed as a secondary job scheduler to leverage [flux framework](https://flux-framework.org) for the distribution of
-resources within a given allocation of the primary scheduler.
+[flux framework](https://flux-framework.org) as its recommended job scheduler.
-
-The [flux framework](https://flux-framework.org) uses `libhwloc` and `pmi` to understand the hardware it is running on
-and to booststrap MPI. `libhwloc` not only assigns CPU cores but also GPUs. This requires `libhwloc` to be compiled with
-support for GPUs from your vendor. In the same way the version of `pmi` for your queuing system has to be compatible
-with the version installed via conda. As `pmi` is typically distributed with the implementation of the Message Passing
-Interface (MPI), it is required to install the compatible MPI library in your conda environment as well.
-
-### AMD GPUs with mpich / cray mpi
-For example the [Frontier HPC](https://www.olcf.ornl.gov/frontier/) cluster at Oak Ridge National Laboratory uses
-AMD MI250X GPUs with cray mpi version which is compatible to mpich `4.X`. So the corresponding versions can be installed
-from conda-forge using:
-```
-conda install -c conda-forge flux-core flux-sched libhwloc=*=rocm* mpich>=4 executorlib
-```
-### Nvidia GPUs with mpich / cray mpi
-For example the [Perlmutter HPC](https://docs.nersc.gov/systems/perlmutter/) at the National Energy Research Scientific
-Computing (NERSC) uses Nvidia A100 GPUs in combination with cray mpi which is compatible to mpich `4.X`. 
So the -corresponding versions can be installed from conda-forge using: -``` -conda install -c conda-forge flux-core flux-sched libhwloc=*=cuda* mpich>=4 executorlib -``` -When installing on a login node without a GPU the conda install command might fail with an Nvidia cuda related error, in -this case adding the environment variable: -``` -CONDA_OVERRIDE_CUDA="11.6" -``` -With the specific Nvidia cuda library version installed on the cluster enables the installation even when no GPU is -present on the computer used for installing. - -### Intel GPUs with mpich / cray mpi -For example the [Aurora HPC](https://www.alcf.anl.gov/aurora) cluster at Argonne National Laboratory uses Intel Ponte -Vecchio GPUs in combination with cray mpi which is compatible to mpich `4.X`. So the corresponding versions can be -installed from conda-forge using: -``` -conda install -c conda-forge flux-core flux-sched mpich=>4 executorlib -``` - -### Alternative Installations -Flux is not limited to mpich / cray mpi, it can also be installed in compatibility with openmpi or intel mpi using the -openmpi package: -``` -conda install -c conda-forge flux-core flux-sched openmpi=4.1.6 executorlib -``` -For the version 5 of openmpi the backend changed to `pmix`, this requires the additional `flux-pmix` plugin: -``` -conda install -c conda-forge flux-core flux-sched flux-pmix openmpi>=5 executorlib -``` -In addition, the `pmi_mode="pmix"` parameter has to be set for the `FluxJobExecutor` or the -`FluxClusterExecutor` to switch to `pmix` as backend. - -### Test Flux Framework -To validate the installation of flux and confirm the GPUs are correctly recognized, you can start a flux session on the -login node using: -``` -flux start -``` -This returns an interactive shell which is connected to the flux scheduler. In this interactive shell you can now list -the available resources using: -``` -flux resource list -``` -The output should return a list comparable to the following example output: -``` - STATE NNODES NCORES NGPUS NODELIST - free 1 6 1 ljubi - allocated 0 0 0 - down 0 0 0 -``` -As flux only lists physical cores rather than virtual cores enabled by hyper-threading the total number of CPU cores -might be half the number of cores you expect. - -### Flux Framework as Secondary Scheduler -When the flux framework is used inside an existing queuing system, you have to communicate the available resources to -the flux framework. For SLURM this is achieved by calling `flux start` with `srun`. For an interactive session use: -``` -srun --pty flux start -``` -Alternatively, to execute a python script `` which uses `executorlib` you can call it with: -``` -srun flux start python -``` - -### PMI Compatibility -When pmi version 1 is used rather than pmi version 2 then it is possible to enforce the usage of `pmi-2` during the -startup process of flux using: -``` -srun –mpi=pmi2 flux start python -``` - -### Flux with Jupyter -To options are available to use flux inside the jupyter notebook or jupyter lab environment. The first option is to -start the flux session and then start the jupyter notebook inside the flux session. This just requires a single call on -the command line: -``` -flux start jupyter notebook -``` -The second option is to create a separate Jupyter kernel for flux. This option requires multiple steps of configuration, -still it has the advantage that it is also compatible with the multi-user jupyterhub environment. 
Start by identifying -the directory Jupyter searches for Jupyter kernels: -``` -jupyter kernelspec list -``` -This returns a list of jupyter kernels, commonly stored in `~/.local/share/jupyter`. It is recommended to create the -flux kernel in this directory. Start by creating the corresponding directory by copying one of the existing kernels: -``` -cp -r ~/.local/share/jupyter/kernels/python3 ~/.local/share/jupyter/kernels/flux -``` -In the directory a JSON file is created which contains the configuration of the Jupyter Kernel. You can use an editor of -your choice, here we use vi to create the `kernel.json` file: -``` -vi ~/.local/share/jupyter/kernels/flux/kernel.json -``` -Inside the file copy the following content. The first entry under the name `argv` provides the command to start the -jupyter kernel. Typically this would be just calling python with the parameters to launch an ipykernel. In front of this -command the `flux start` command is added. -``` -{ - "argv": [ - "flux", - "start", - "/srv/conda/envs/notebook/bin/python", - "-m", - "ipykernel_launcher", - "-f", - "{connection_file}" - ], - "display_name": "Flux", - "language": "python", - "metadata": { - "debugger": true - } -} -``` -More details for the configuration of Jupyter kernels is available as part of the [Jupyter documentation](https://jupyter-client.readthedocs.io/en/latest/kernels.html#kernel-specs). +For detailed instructions on configuring the [flux framework](https://flux-framework.org) for different GPU architectures and Jupyter integration, please refer to the [Flux Framework Integration](flux.md) section. ## Visualisation The visualisation of the dependency graph with the `plot_dependency_graph` parameter requires [pygraphviz](https://pygraphviz.github.io/documentation/stable/). diff --git a/docs/persona.md b/docs/persona.md new file mode 100644 index 00000000..9a34069f --- /dev/null +++ b/docs/persona.md @@ -0,0 +1,31 @@ +# User Persona & Documentation Criticism + +To improve the `executorlib` documentation, we first define a target user persona and then criticize the original documentation from their perspective. + +## User Persona: Dr. Alex, a Computational Scientist + +* **Role:** PhD researcher or Research Software Engineer (RSE) in a scientific field (e.g., materials science, bioinformatics, or physics). +* **Background:** Proficient in Python and uses Jupyter notebooks for daily data analysis and simulation setup. Familiar with High Performance Computing (HPC) concepts like SLURM and MPI but is not a systems administrator or a distributed systems expert. +* **Needs:** Needs to scale local Python scripts to an HPC cluster to run hundreds or thousands of simulations or analysis tasks. Alex wants to move from a single workstation to multi-node execution with minimal code changes. +* **Pain Points:** Writing complex SLURM batch scripts is tedious and error-prone. Standard Python libraries like `concurrent.futures` do not support multi-node or MPI tasks easily. Alex wants a "write once, run anywhere" experience—from a laptop for testing to a full HPC cluster for production. + +## Criticism of the Original Documentation + +From Dr. Alex's perspective, the original documentation had the following weaknesses: + +1. **Executor Confusion:** The documentation described several executors (`SlurmClusterExecutor`, `SlurmJobExecutor`, `SingleNodeExecutor`), but it was not immediately clear which one to use for a specific task. A high-level comparison was missing. +2. 
**Overwhelming Installation Guide:** The installation instructions were mixed with very specific and advanced configurations for different GPU architectures and Flux settings. This made it difficult for a new user to find the basic `pip install` command and get started quickly. +3. **Hidden Technical Details:** Important features like the `resource_dict` parameters were buried at the bottom of a troubleshooting page. For a scientist who needs to precisely allocate CPU cores or GPUs, this is a core feature that should be easily accessible as a reference. +4. **Lack of Workflow Context:** While individual examples were provided, the documentation didn't clearly outline the recommended workflow: starting with local testing using `SingleNodeExecutor` and then transitioning to HPC executors. +5. **Technical Typos:** Minor technical errors in command-line examples (like using en-dashes instead of hyphens) could lead to frustration when copy-pasting commands. + +## Derived Improvements + +Based on this criticism, the following improvements were implemented: + +1. **README Overhaul:** Added a "Choosing the Right Executor" comparison table to the README for quick decision-making. +2. **Documentation Restructuring:** + * Simplified `installation.md` to focus on quick starts. + * Moved advanced Flux and GPU configurations to a dedicated `flux.md` file. + * Created a dedicated `resource_dict.md` reference for better visibility. +3. **Improved Navigation:** Updated the table of contents to reflect these new, specialized sections. diff --git a/docs/resource_dict.md b/docs/resource_dict.md new file mode 100644 index 00000000..d615eec5 --- /dev/null +++ b/docs/resource_dict.md @@ -0,0 +1,28 @@ +# Resource Dictionary +The resource dictionary parameter `resource_dict` is used to specify the computing resources allocated to the execution of a submitted Python function. This flexibility allows users to assign resources on a per-function-call basis, simplifying the up-scaling of Python programs. + +## Available Options +The `resource_dict` can contain one or more of the following options: + +* **`cores`** (int): Number of MPI cores to be used for each function call. +* **`threads_per_core`** (int): Number of OpenMP threads to be used for each function call. +* **`gpus_per_core`** (int): Number of GPUs per worker - defaults to 0. +* **`cwd`** (str/None): Current working directory where the parallel python task is executed. +* **`cache_key`** (str): Rather than using the internal hashing of executorlib, the user can provide an external `cache_key` to identify tasks on the file system. The initial file name will be `cache_key + "_i.h5"` and the final file name will be `cache_key + "_o.h5"`. +* **`cache_directory`** (str): The directory to store cache files. +* **`num_nodes`** (int): Number of compute nodes used for the evaluation of the Python function. +* **`exclusive`** (bool): Boolean flag to reserve exclusive access to selected compute nodes - do not allow other tasks to use the same compute node. +* **`error_log_file`** (str): Path to the error log file, primarily used to merge the log of multiple tasks in one file. +* **`run_time_max`** (int): The maximum time the execution of the submitted Python function is allowed to take in seconds. +* **`priority`** (int): The queuing system priority assigned to a given Python function to influence the scheduling. +* **`slurm_cmd_args`** (list): Additional command line arguments for the `srun` call (SLURM only). 
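+
+As a quick illustration, the following minimal sketch shows how a subset of these options can be passed on a
+per-function-call basis via the `resource_dict` argument of `submit()`; the `calc_sum` helper is only a hypothetical
+stand-in for a real scientific function:
+```python
+from executorlib import SingleNodeExecutor
+
+
+def calc_sum(values):
+    # stand-in for any Python function you want to scale up
+    return sum(values)
+
+
+if __name__ == "__main__":
+    with SingleNodeExecutor() as exe:
+        # request one MPI core and one OpenMP thread for this specific function call
+        future = exe.submit(calc_sum, [1, 2, 3], resource_dict={"cores": 1, "threads_per_core": 1})
+        print(future.result())  # blocks until the function has finished
+```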
+ +## HPC Job Executor Specifics +For the special case of the [HPC Job Executor](3-hpc-job.ipynb), the `resource_dict` can also include additional parameters defined in the submission script of the [Python simple queuing system adapter (pysqa)](https://pysqa.readthedocs.io). These include but are not limited to: + +* **`memory_max`** (int): The maximum amount of memory the Python function is allowed to use in Gigabytes. +* **`partition`** (str): The partition of the queuing system the Python function is submitted to. +* **`queue`** (str): The name of the queue the Python function is submitted to. + +## Validation +All parameters in the `resource_dict` are optional. When `pydantic` is installed as an optional dependency, the `resource_dict` is automatically validated using `pydantic`. diff --git a/docs/trouble_shooting.md b/docs/trouble_shooting.md index e71487c9..70006162 100644 --- a/docs/trouble_shooting.md +++ b/docs/trouble_shooting.md @@ -56,32 +56,7 @@ the [flux](http://flux-framework.org) job scheduler are currently limited to Pyt performance computing installations Python 3.12 is the recommended Python verion. ## Resource Dictionary -The resource dictionary parameter `resource_dict` can contain one or more of the following options: -* `cores` (int): number of MPI cores to be used for each function call -* `threads_per_core` (int): number of OpenMP threads to be used for each function call -* `gpus_per_core` (int): number of GPUs per worker - defaults to 0 -* `cwd` (str/None): current working directory where the parallel python task is executed -* `cache_key` (str): Rather than using the internal hashing of executorlib the user can provide an external `cache_key` - to identify tasks on the file system. The initial file name is going to be `cache_key` + `_i.h5` and the final file - name is going to be `cache_key` + `_o.h5`. -* `cache_directory` (str): The directory to store cache files. -* `num_nodes` (int): number of compute nodes used for the evaluation of the Python function. -* `exclusive` (bool): boolean flag to reserve exclusive access to selected compute nodes - do not allow other tasks to - use the same compute node. -* `error_log_file` (str): path to the error log file, primarily used to merge the log of multiple tasks in one file. -* `run_time_max` (int): the maximum time the execution of the submitted Python function is allowed to take in seconds. -* `priority` (int): the queuing system priority assigned to a given Python function to influence the scheduling. -* `slurm_cmd_args` (list): Additional command line arguments for the srun call (SLURM only) - -For the special case of the [HPC Job Executor](https://executorlib.readthedocs.io/en/latest/3-hpc-job.html) -the resource dictionary parameter `resource_dict` can also include additional parameters define in the submission script -of the [Python simple queuing system adatper (pysqa)](https://pysqa.readthedocs.io) these include but are not limited to: -* `memory_max` (int): the maximum amount of memory the Python function is allowed to use in Gigabytes. -* `partition` (str): the partition of the queuing system the Python function is submitted to. -* `queue` (str): the name of the queue the Python function is submitted to. - -All parameters in the resource dictionary `resource_dict` are optional. When `pydantic` is installed as optional -dependency the `resource_dict` is validated using `pydantic`. +The `resource_dict` parameter is a central part of `executorlib` to assign computing resources on a per-function-call basis. 
For a complete list of available options and their descriptions, please refer to the [Resource Dictionary](resource_dict.md) section. ## SSH Connection While the [Python simple queuing system adatper (pysqa)](https://pysqa.readthedocs.io) provides the option to connect to