Conversation

emarti

@emarti emarti commented Jun 2, 2023

Two changes: First, I added jupyterlab.sbatch so we can use the more modern Jupyter Lab notebooks instead of the classic ones. Second, I added a CPUS_PER_TASK parameter in param.sh so we can change the number of CPUs requested.
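For context, a minimal sketch of how the new parameter flows to Slurm (the variable name CPUS_PER_TASK is from this PR; the surrounding script structure and the exact invocation are assumptions):

```bash
#!/bin/bash
# sketch: read the user's parameters, then pass the CPU request to Slurm
source param.sh   # defines CPUS_PER_TASK (added in this PR)

sbatch --cpus-per-task="$CPUS_PER_TASK" sbatches/sherlock/jupyterlab.sbatch
```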

emarti added 2 commits June 2, 2023 15:59
New parameter CPUS_PER_TASK specifies the number of CPUs requested.
File sbatches/sherlock/jupyterlab.sbatch enables the more modern Jupyter Lab notebooks.
@vsoch
Owner

vsoch commented Jun 2, 2023

This looks great! Two quick checks:

  • the default of --cpus-per-task on the cluster is set to 1 (so adding it doesn't change current workflows)?
  • you've tested this and are happy with how it works?

@emarti
Author

emarti commented Jun 4, 2023

  1. When the requested memory is large (on our nodes, larger than 16 GB), Slurm automatically assigns more CPUs, so the effective default is not always 1. This change addresses that.
  2. Yes, I have tested it and it works. For the jupyter lab sbatch, jupyterlab must be installed on Sherlock (see the sketch below). Perhaps further instructions are needed?
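For reference, one way a user could make it available is a user-level install (an assumption; a module or virtual environment on Sherlock would work just as well):

```bash
# install jupyterlab into the user's home directory on the login node
# (one possible approach, not an official instruction for Sherlock)
pip3 install --user jupyterlab
```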

@vsoch
Owner

vsoch commented Jun 4, 2023

> When the requested memory is large (on our nodes, larger than 16 GB), Slurm automatically assigns more CPUs, so the effective default is not always 1. This change addresses that.

Gotcha. In that case we don't want to set the default to 1, because it would break the current behavior. Let's keep the variable unset, and only add it to the command if it's defined.
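Something like this minimal sketch (the surrounding submission code is an assumption; only the pattern matters):

```bash
# only pass --cpus-per-task when the user has set CPUS_PER_TASK,
# so Slurm's memory-based CPU assignment remains the default
SBATCH_OPTS=""
if [ -n "${CPUS_PER_TASK:-}" ]; then
    SBATCH_OPTS="--cpus-per-task=${CPUS_PER_TASK}"
fi

# SBATCH_OPTS is intentionally unquoted so an empty value adds no argument
sbatch $SBATCH_OPTS sbatches/sherlock/jupyterlab.sbatch
```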
