| # | Endpoint | Method |
|---|---|---|
| 12 | /{version}/scheduler/{execution}/metrics/task/{id} | POST |
SWAGGER: /swagger-ui.html <br>
API-DOCS: /v3/api-docs
claimName: api-exp-data
```
#### Profiles
This is a Spring Boot application that can be run with profiles. If no profile is configured, the "default" profile is used. The "dev" profile can be enabled by setting the JVM system property
```
-Dspring.profiles.active=dev
```
or the environment variable
```
export spring_profiles_active=dev
```
or via the corresponding setting in your development environment or in the pod definition.
The "dev" profile is useful for debugging and for reporting problems because it raises the log level.
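For Kubernetes deployments, the environment-variable approach can be placed directly in the pod definition. A minimal sketch (pod name, container name, and image are placeholders, not part of this project's manifests):

```yaml
# Hypothetical pod fragment: activate the "dev" profile via an
# environment variable. Name and image below are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: cws-scheduler
spec:
  containers:
    - name: scheduler          # placeholder container name
      image: scheduler:latest  # placeholder image
      env:
        - name: SPRING_PROFILES_ACTIVE
          value: "dev"
```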
---
#### Memory Prediction and Task Scaling
- Kubernetes Feature InPlacePodVerticalScaling must be enabled. This is available starting from Kubernetes v1.27. See [KEP 1287](https://github.com/kubernetes/enhancements/issues/1287) for the current status.
- Supported if used together with [nf-cws](https://github.com/CommonWorkflowScheduler/nf-cws) version 1.0.5 or newer.
The memory predictor to use for task scaling is set via the configuration. If none is set, task scaling is disabled.
The memory predictor is provided as a string following the pattern `<memory predictor>-[<additional>=<parameter>]`, e.g., `linear-offset=std`.

| Predictor | Description |
|---|---|
| linear/lr | The Linear predictor tries to predict a memory usage that is linear in the task's input size. |
| linear2/lr2 | The Linear predictor with an unequal loss function. The loss penalizes underprediction more than overprediction. |
| mean | The Mean predictor predicts the mean memory usage seen so far. The prediction is independent of the input size. |
| ponder | The Ponder predictor is an advanced memory prediction strategy that ponders between linear regression with unequal loss and historic values. Details are provided in our paper [tbd](). |
| constX | Predicts a constant value (X); if no X is given, it predicts 0. |
| polyX | The prediction is based on a polynomial function of degree X of a task's input size. If no X is provided, X=2 is used. |
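The predictor string can be decomposed mechanically. A minimal, hypothetical Java sketch of parsing the `<memory predictor>-[<additional>=<parameter>]` pattern (class and field names are illustrative, not the scheduler's actual API):

```java
// Hypothetical helper: split a predictor spec such as "linear-offset=std"
// into predictor name, additional key, and parameter. Illustrative only.
public class PredictorSpec {

    final String predictor;   // e.g. "linear"
    final String additional;  // e.g. "offset", or null if absent
    final String parameter;   // e.g. "std", or null if absent

    PredictorSpec(String spec) {
        String[] parts = spec.split("-", 2);
        this.predictor = parts[0];
        if (parts.length == 2 && parts[1].contains("=")) {
            String[] kv = parts[1].split("=", 2);
            this.additional = kv[0];
            this.parameter = kv[1];
        } else {
            this.additional = null;
            this.parameter = null;
        }
    }

    public static void main(String[] args) {
        PredictorSpec p = new PredictorSpec("linear-offset=std");
        System.out.println(p.predictor + " " + p.additional + " " + p.parameter);
        // prints "linear offset std"
    }
}
```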
The offset uses the current prediction model to predict the memory for all finished tasks.
It then calculates the difference between the observed and the predicted memory.
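The residual computation described above can be sketched in a few lines. Interpreting `std` as the standard deviation of these differences is an assumption based on the `offset=std` example, not confirmed by the source; the `OffsetSketch` class below is illustrative only:

```java
import java.util.function.DoubleUnaryOperator;

// Hypothetical sketch of the offset computation: apply the current model
// to the input sizes of finished tasks and collect observed-minus-predicted
// differences. Treating "std" as the standard deviation of those
// differences is an ASSUMPTION for illustration.
public class OffsetSketch {

    static double[] residuals(double[] inputSizes, double[] observedMem,
                              DoubleUnaryOperator model) {
        double[] r = new double[inputSizes.length];
        for (int i = 0; i < r.length; i++) {
            r[i] = observedMem[i] - model.applyAsDouble(inputSizes[i]);
        }
        return r;
    }

    static double std(double[] r) {
        double mean = 0;
        for (double v : r) mean += v;
        mean /= r.length;
        double var = 0;
        for (double v : r) var += (v - mean) * (v - mean);
        return Math.sqrt(var / r.length);
    }

    public static void main(String[] args) {
        // toy linear model: memory = 2 * inputSize + 100
        DoubleUnaryOperator model = s -> 2 * s + 100;
        double[] sizes = {10, 20, 30};
        double[] observed = {125, 150, 155};
        double[] r = residuals(sizes, observed, model);  // {5, 10, -5}
        System.out.println(std(r));
    }
}
```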
| Strategy | Description |
|---|---|
| fifo | Tasks that were submitted earlier are scheduled earlier. |
| rank | Tasks are prioritized based on their rank in the DAG. |
| rank_min | Rank (min): same as rank, but breaks ties such that tasks with a smaller input size are preferred. |
| rank_max | Rank (max): same as rank, but breaks ties such that tasks with a larger input size are preferred. |
| lff_min | Least finished first (min): prioritizes abstract tasks with fewer finished instances; breaks ties with rank_min. |
| lff_max | Least finished first (max): prioritizes abstract tasks with fewer finished instances; breaks ties with rank_max. |
| gs_min | Generate Samples (min): hybrid of LFF (min) and Rank (max); prioritizes abstract tasks with fewer than five finished instances, afterwards uses Rank (max). |
| gs_max | Generate Samples (max): hybrid of LFF (max) and Rank (max); prioritizes abstract tasks with fewer than five finished instances, afterwards uses Rank (max). |
| random | Randomly prioritizes tasks. |
| max | Prioritizes tasks with a larger input size. |
| min | Prioritizes tasks with a smaller input size. |
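To illustrate the tie-breaking idea behind rank_min, here is a hypothetical comparator sketch; the `Task` record and the assumption that a higher DAG rank means higher priority are illustrative, not taken from the scheduler's code:

```java
import java.util.Arrays;
import java.util.Comparator;

// Hypothetical illustration of rank_min-style ordering: higher DAG rank
// first (an assumption), ties broken by smaller input size. The Task
// type here is a placeholder, not the scheduler's actual class.
public class RankMinExample {

    record Task(String name, int rank, long inputSize) {}

    static final Comparator<Task> RANK_MIN =
            Comparator.comparingInt(Task::rank).reversed()  // higher rank first
                      .thenComparingLong(Task::inputSize);  // smaller input breaks ties

    public static void main(String[] args) {
        Task[] tasks = {
                new Task("a", 1, 500),
                new Task("b", 2, 900),
                new Task("c", 2, 100),
        };
        Arrays.sort(tasks, RANK_MIN);
        for (Task t : tasks) System.out.println(t.name());
        // prints c, b, a: rank 2 before rank 1, smaller input first within rank 2
    }
}
```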
```java
// this typically happens when the feature gate InPlacePodVerticalScaling was not enabled
if (e.toString().contains("Forbidden: pod updates may not change fields other than")) {
    log.error("Could not patch task. Please make sure that the feature gate 'InPlacePodVerticalScaling' is enabled in Kubernetes. See https://github.com/kubernetes/enhancements/issues/1287 for details. Task scaling will now be disabled for the rest of this workflow execution.");
} else {
    log.error("Could not patch task: {}", t.getConfig().getName(), e);
}
```