Commit 0f2ac12

MTN Synchronized the quizzes for module 1 and 7
1 parent 9e49083 commit 0f2ac12

File tree

2 files changed: +13, -13 lines


jupyter-book/evaluation/evaluation_wrap_up_quiz.md

Lines changed: 11 additions & 9 deletions
````diff
@@ -55,13 +55,13 @@ This model is closer to what we saw previously: it is a linear model trained on
 a non-linear feature transformation. We will build, train and evaluate such a
 model as part of this exercise. Thus, you need to:
 
-- create a new data matrix containing the cube of the speed, the speed, the
-  speed multiplied by the sine of the angle of the slope, and the speed
-  multiplied by the acceleration. To compute the angle of the slope, you need to
-  take the arc tangent of the slope (`alpha = np.arctan(slope)`). In addition,
-  we can limit ourself to positive acceleration only by clipping to 0 the
-  negative acceleration values (they would correspond to some power created by
-  the braking that we are not modeling here).
+- create a new data matrix `data_linear_model` containing the cube of the speed,
+  the speed, the speed multiplied by the sine of the angle of the slope, and the
+  speed multiplied by the acceleration. To compute the angle of the slope, you
+  need to take the arc tangent of the slope (`alpha = np.arctan(slope)`). In
+  addition, we can limit ourself to positive acceleration only by clipping to 0
+  the negative acceleration values (they would correspond to some power created
+  by the braking that we are not modeling here).
 - using the new data matrix, create a linear predictive model based on a
   [`sklearn.preprocessing.StandardScaler`](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html)
   and a
````
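The feature engineering described in the updated bullet can be sketched as follows. This is a minimal sketch, not the quiz's reference solution: the column names `speed`, `slope` and `acceleration`, the toy values, and the choice of `Ridge` as the final linear model are all assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Toy ride records; the column names and values are assumptions.
rides = pd.DataFrame({
    "speed": [5.0, 7.5, 10.0, 12.5],
    "slope": [0.02, -0.01, 0.05, 0.00],
    "acceleration": [0.3, -0.2, 0.1, 0.4],
})

alpha = np.arctan(rides["slope"])                   # angle of the slope
acceleration = rides["acceleration"].clip(lower=0)  # drop braking power

data_linear_model = pd.DataFrame({
    "speed_cubed": rides["speed"] ** 3,
    "speed": rides["speed"],
    "speed_sin_alpha": rides["speed"] * np.sin(alpha),
    "speed_acceleration": rides["speed"] * acceleration,
})

# Linear model on the engineered features, with standardized inputs.
model = make_pipeline(StandardScaler(), Ridge())
```

Clipping before the multiplication guarantees that the `speed_acceleration` feature is zero for every braking sample.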
````diff
@@ -298,7 +298,9 @@ _Select a single answer_
 
 Now, we will go more into details by picking a single ride for the testing and
 analyse the predictions of the models for this test ride. To do so, we can reuse
-the `LeaveOneGroupOut` cross-validation object in the following manner:
+the `LeaveOneGroupOut` cross-validation object in the following manner, where
+`data_linear_model` is the matrix defined in question 1 with the augmented data
+features:
 
 ```python
 cv = LeaveOneGroupOut()
````
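As a self-contained illustration of how `LeaveOneGroupOut` holds out one ride at a time, consider the toy arrays below (the data, targets, and group labels are assumptions, not the quiz data):

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

# Toy data: "groups" stands in for the ride identifier of each sample.
data = np.arange(12).reshape(6, 2)
target = np.arange(6)
groups = np.array([0, 0, 1, 1, 2, 2])

cv = LeaveOneGroupOut()
for train_indices, test_indices in cv.split(data, target, groups=groups):
    # Each iteration keeps exactly one full ride as the test set.
    held_out_rides = np.unique(groups[test_indices])
    assert held_out_rides.size == 1
```

With three distinct group labels, the splitter yields three train/test splits, one per held-out ride.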
````diff
@@ -349,7 +351,7 @@ data_test_subset = data_test[time_slice]
 target_test_subset = target_test[time_slice]
 ```
 
-It allows to select data from 5.00 pm until 5.05 pm. Used the previous fitted
+It allows to select data from 5.00 pm until 5.05 pm. Use the previous fitted
 models (linear and gradient-boosting regressor) to predict on this portion of
 the test data. Draw on the same plot the true targets and the predictions of
 each model.
````
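The `time_slice` selection in the hunk above relies on label-based slicing of a pandas `DatetimeIndex`, which includes both endpoints. A minimal sketch, where the timestamps and target values are assumptions standing in for the test-ride data:

```python
import pandas as pd

# Toy target series indexed by timestamps; dates and values are assumptions.
index = pd.date_range("2020-01-01 16:50", "2020-01-01 17:10", freq="1min")
target_test = pd.Series(range(len(index)), index=index)

# Label-based slicing on a DatetimeIndex is inclusive of both endpoints,
# so this keeps every sample from 5.00 pm until 5.05 pm.
time_slice = slice("2020-01-01 17:00", "2020-01-01 17:05")
target_test_subset = target_test[time_slice]
```

At one sample per minute, the subset contains six samples: 17:00 through 17:05 inclusive.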

jupyter-book/predictive_modeling_pipeline/wrap_up_quiz.md

Lines changed: 2 additions & 4 deletions
````diff
@@ -155,10 +155,8 @@ Let's visualize the second approach.
 ![Fold-to-fold comparison](../../figures/numerical_pipeline_wrap_up_quiz_comparison.png)
 
 ```{admonition} Question
-Select the true statement.
-
-The number of folds where the model using all features perform better than the
-model using only numerical features lies in the range:
+Compare both models by counting on how many folds the model using all features
+has a better test score than the other. Select the correct statement:
+
 
 - a) [0, 3]: the model using all features is consistently worse
 - b) [4, 6]: both models are almost equivalent
````
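The reworded question asks for a fold-by-fold comparison rather than an eyeballed range. That count can be computed with a single vectorized comparison; the per-fold scores below are made-up placeholders (the quiz's real scores come from cross-validation), and ten folds are assumed:

```python
import numpy as np

# Hypothetical per-fold test scores for the two models (10 folds assumed).
scores_all_features = np.array(
    [0.81, 0.79, 0.83, 0.80, 0.82, 0.78, 0.84, 0.80, 0.81, 0.79]
)
scores_numerical_only = np.array(
    [0.75, 0.80, 0.78, 0.76, 0.79, 0.77, 0.80, 0.78, 0.76, 0.78]
)

# Number of folds on which the model using all features wins.
n_folds_better = int((scores_all_features > scores_numerical_only).sum())
```

The elementwise `>` produces a boolean array aligned fold by fold, so its sum is exactly the count the question asks for.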
