- attributes_00.csv (attributes CSV for section 00)
- /<machine_type1_of_additional_dataset> (The other machine types have the same directory structure as <machine_type0_of_additional_dataset>/.)
### 4. Change parameters
After the evaluation dataset for the test is launched, download and unzip it.

```
$ 02a_test_2024t2.sh -e
```
Anomaly scores are calculated using the evaluation dataset, i.e., `data/dcase2024t2/eval_data/raw/<machine_type>/test/`. The anomaly scores are stored as CSV files in the directory `results/`. ~~You can submit the CSV files for the challenge. From the submitted CSV files, we will calculate AUC, pAUC, and your ranking.~~
If you use the [rename script](./tools/rename_eval_wav.py) to generate the `test_rename` directory, AUC and pAUC are also calculated.
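For reference, here is a minimal, self-contained sketch of how AUC and pAUC can be computed from anomaly scores and binary ground-truth labels (1 = anomalous). This is not the repository's implementation: the challenge's pAUC uses a maximum false-positive rate of p = 0.1, and the official convention (e.g., standardization) may differ from this sketch.

```python
# Illustrative only: AUC / pAUC from scores and labels (1 = anomalous).
# The repository's scripts compute these internally; conventions may differ.

def roc_curve(labels, scores):
    """Return ROC points (FPR, TPR), sweeping the threshold from high to low."""
    pairs = sorted(zip(scores, labels), key=lambda p: -p[0])
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    points = [(0.0, 0.0)]
    for _, label in pairs:
        if label:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    return points

def auc(points, max_fpr=1.0):
    """Trapezoidal area under the ROC curve up to max_fpr, normalized by max_fpr."""
    area = 0.0
    prev_fpr, prev_tpr = points[0]
    for fpr, tpr in points[1:]:
        hi = min(fpr, max_fpr)
        if hi > prev_fpr:
            # Interpolate TPR at the clipped FPR for partial segments.
            if fpr > prev_fpr:
                tpr_hi = prev_tpr + (tpr - prev_tpr) * (hi - prev_fpr) / (fpr - prev_fpr)
            else:
                tpr_hi = tpr
            area += (hi - prev_fpr) * (prev_tpr + tpr_hi) / 2.0
        prev_fpr, prev_tpr = fpr, tpr
        if prev_fpr >= max_fpr:
            break
    return area / max_fpr
```

With a perfect separation of normal and anomalous scores, both `auc(points)` and `auc(points, max_fpr=0.1)` evaluate to 1.0.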
### 9.2. Testing with the Selective Mahalanobis mode
After the evaluation dataset for the test is launched, download and unzip it.

```
$ 02b_test_2024t2.sh -e
```
Anomaly scores are calculated using the evaluation dataset, i.e., `data/dcase2024t2/eval_data/raw/<machine_type>/test/`. The anomaly scores are stored as CSV files in the directory `results/`. ~~You can submit the CSV files for the challenge. From the submitted CSV files, we will calculate AUC, pAUC, and your ranking.~~
If you use the [rename script](./tools/rename_eval_wav.py) to generate the `test_rename` directory, AUC and pAUC are also calculated.
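For intuition, the core computation behind a Mahalanobis-based anomaly score is the distance of a feature vector from a Gaussian fitted to normal training features. The Selective Mahalanobis mode involves more than this (e.g., selecting among fitted covariances); the sketch below shows only the distance itself, with hypothetical inputs.

```python
import numpy as np

def mahalanobis_score(x, mean, cov):
    # Distance of feature vector x from the normal-data distribution
    # described by (mean, cov); larger values indicate more anomalous input.
    diff = np.asarray(x, dtype=float) - np.asarray(mean, dtype=float)
    inv_cov = np.linalg.inv(np.asarray(cov, dtype=float))
    return float(np.sqrt(diff @ inv_cov @ diff))
```

With an identity covariance this reduces to the Euclidean norm, e.g. `mahalanobis_score([3, 4], [0, 0], [[1, 0], [0, 1]])` is 5.0.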
### 10. Summarize results
If you want to change the summarized-results directory or the export directory, edit `03_summarize_results.sh`.
## Legacy support
This version takes the legacy datasets provided in DCASE2020 task2, DCASE2021 task2, DCASE2022 task2, DCASE2023 task2, and DCASE2024 task2 as inputs.
The legacy-support scripts are similar to the main scripts and are located in the `tools` directory.
[Learn more](README_legacy.md)
We developed and tested the source code on Ubuntu 20.04.4 LTS.
### Ground truth for evaluation datasets in this repository
This repository includes the ground-truth CSV files for the evaluation data. These CSV files are used to rename the evaluation datasets.
You can calculate AUC and other scores once the ground truth has been added to the evaluation dataset file names. *Usually, the rename function is executed as part of the [download script](#description) and the [auto-download function](#41-enable-auto-download-dataset).
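Conceptually, the rename step maps each anonymized evaluation file name to a name that embeds the ground truth. A hypothetical sketch follows; the real tool is `tools/rename_eval_wav.py`, and the CSV layout assumed here (two columns: old name, new name) is an illustration, not the actual format.

```python
import csv
import os

def rename_with_ground_truth(test_dir, gt_csv):
    # Each row is assumed to be (anonymized_name, labeled_name); the labeled
    # name carries the normal/anomaly ground truth in the file name.
    with open(gt_csv, newline="") as f:
        for old_name, new_name in csv.reader(f):
            src = os.path.join(test_dir, old_name)
            if os.path.exists(src):
                os.rename(src, os.path.join(test_dir, new_name))
```

After renaming, evaluation scripts can parse the normal/anomaly label directly from each file name.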
Attribute information is hidden by default for the following machine types:
# Legacy support
This version supports reading the datasets from DCASE2020 task2, DCASE2021 task2, DCASE2022 task2, DCASE2023 task2, and DCASE2024 task2 as inputs.
## Description
Legacy-support scripts are similar to the main scripts. These are in the `tools` directory.
- tools/data_download_2023.sh
  - This script downloads development data and evaluation data files and puts them into `data/dcase2023t2/dev_data/raw/` and `data/dcase2023t2/eval_data/raw/`.
  - It renames the evaluation data after downloading the dataset so that it can be evaluated and the AUC score calculated. The renamed data is stored in `data/dcase2023t2/eval_data/raw/test_rename`.
- tools/data_download_2024.sh
  - This script downloads development data and evaluation data files and puts them into `data/dcase2024t2/dev_data/raw/` and `data/dcase2024t2/eval_data/raw/`.
  - It renames the evaluation data after downloading the dataset so that it can be evaluated and the AUC score calculated. The renamed data is stored in `data/dcase2024t2/eval_data/raw/test_rename`.
- tools/01_train_legacy.sh
  - DCASE2023 task2 mode:
    - "Development" mode:
      - This script trains a model for each machine type for each section ID by using the directory `data/dcase2023t2/dev_data/raw/<machine_type>/train/<section_id>`.
    - "Evaluation" mode:
      - This script trains a model for each machine type for each section ID by using the directory `data/dcase2023t2/eval_data/raw/<machine_type>/train/<section_id>`.
  - DCASE2024 task2 mode:
    - "Development" mode:
      - This script trains a model for each machine type for each section ID by using the directory `data/dcase2024t2/dev_data/raw/<machine_type>/train/<section_id>`.
    - "Evaluation" mode:
      - This script trains a model for each machine type for each section ID by using the directory `data/dcase2024t2/eval_data/raw/<machine_type>/train/<section_id>`.
- tools/02a_test_legacy.sh (Use MSE as a score function for the Simple Autoencoder mode)
  - DCASE2023 task2 mode:
    - "Evaluation" mode:
      - This script generates a CSV file for each section, including the anomaly scores for each wav file in the directories `data/dcase2023t2/eval_data/raw/<machine_type>/test/`. (These directories will be made available with the "evaluation dataset".)
      - The generated CSV files are stored in the directory `results/`.
      - If the `test_rename` directory is available, this script generates a CSV file including AUC, pAUC, precision, recall, and F1-score for each section.
  - DCASE2024 task2 mode:
    - "Development" mode:
      - This script generates a CSV file for each section, including the anomaly scores for each wav file in the directories `data/dcase2024t2/dev_data/raw/<machine_type>/test/`.
      - The generated CSV files will be stored in the directory `results/`.
      - It also generates a CSV file including AUC, pAUC, precision, recall, and F1-score for each section.
    - "Evaluation" mode:
      - This script generates a CSV file for each section, including the anomaly scores for each wav file in the directories `data/dcase2024t2/eval_data/raw/<machine_type>/test/`. (These directories will be made available with the "evaluation dataset".)
      - The generated CSV files are stored in the directory `results/`.
      - If the `test_rename` directory is available, this script generates a CSV file including AUC, pAUC, precision, recall, and F1-score for each section.
- tools/02b_test_legacy.sh (Use Mahalanobis distance as a score function for the Selective Mahalanobis mode)
  - "Development" mode:
  - DCASE2023 task2 mode:
    - "Evaluation" mode:
      - This script generates a CSV file for each section, including the anomaly scores for each wav file in the directories `data/dcase2023t2/eval_data/raw/<machine_type>/test/`. (These directories will be made available with the "evaluation dataset".)
      - The generated CSV files are stored in the directory `results/`.
      - This script also generates a CSV file containing AUC, pAUC, precision, recall, and F1-score for each section.
  - DCASE2024 task2 mode:
    - "Development" mode:
      - This script generates a CSV file for each section, including the anomaly scores for each wav file in the directories `data/dcase2024t2/dev_data/raw/<machine_type>/test/`.
      - The CSV files will be stored in the directory `results/`.
      - It also generates a CSV file including AUC, pAUC, precision, recall, and F1-score for each section.
    - "Evaluation" mode:
      - This script generates a CSV file for each section, including the anomaly scores for each wav file in the directories `data/dcase2024t2/eval_data/raw/<machine_type>/test/`. (These directories will be made available with the "evaluation dataset".)
      - The generated CSV files are stored in the directory `results/`.
      - This script also generates a CSV file containing AUC, pAUC, precision, recall, and F1-score for each section.
- 03_summarize_results.sh
  - This script summarizes the results into a CSV file.
  - Use it in the same way as when summarizing DCASE2023T2 and DCASE2024T2 results.
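As background for the summary step: DCASE task 2 totals are typically aggregated with a harmonic mean over the per-machine, per-section AUC/pAUC values, which is dominated by the worst-performing cases. Below is a sketch under that assumption; the exact aggregation in `03_summarize_results.sh` may differ.

```python
from statistics import harmonic_mean

def total_score(per_machine_scores):
    # per_machine_scores: dict mapping machine type -> list of AUC/pAUC
    # values over its sections; returns the overall harmonic mean.
    values = [v for scores in per_machine_scores.values() for v in scores]
    return harmonic_mean(values)
```

Note that a single near-zero score drags the harmonic mean far below the arithmetic mean, which is why this aggregation rewards consistent performance across machine types.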
Legacy scripts in the `tools` directory can be executed regardless of the current directory.

+ DCASE2023T2
  + "Additional Training Dataset", i.e., the evaluation dataset for training
    + Download "eval_data_<machine_type>_train.zip" from [https://zenodo.org/record/7830345](https://zenodo.org/record/7830345).
  + "Evaluation Dataset", i.e., the evaluation dataset for test
    + Download "eval_data_<machine_type>_test.zip" from [https://zenodo.org/record/7860847](https://zenodo.org/record/7860847).
+ DCASE2024T2
  + "Development Dataset"
    + Download "dev_data_<machine_type>.zip" from [https://zenodo.org/records/10902294](https://zenodo.org/records/10902294).
  + "Additional Training Dataset", i.e., the evaluation dataset for training
    + Download "eval_data_<machine_type>_train.zip" from [https://zenodo.org/records/11259435](https://zenodo.org/records/11259435).
  + "Evaluation Dataset", i.e., the evaluation dataset for test
    + Download "eval_data_<machine_type>_test.zip" from [https://zenodo.org/records/11363076](https://zenodo.org/records/11363076).
### 3. Unzip the downloaded files and make the directory structure as follows:
### Ground truth for evaluation datasets in this repository
This repository includes the ground-truth CSV files for the evaluation data. These CSV files are used to rename the evaluation datasets.
You can calculate AUC and other scores once the ground truth has been added to the evaluation dataset file names. *Usually, the rename function is executed as part of the [download script](#description) and the [auto-download function](#41-enable-auto-download-dataset).
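The thresholded metrics mentioned throughout (precision, recall, F1-score) can be sketched as follows, assuming binary labels with 1 = anomaly and a decision threshold supplied by the caller; the actual scripts choose the threshold internally.

```python
def precision_recall_f1(labels, scores, threshold):
    # Classify a clip as anomalous when its anomaly score exceeds the threshold.
    preds = [1 if s > threshold else 0 for s in scores]
    tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

Unlike AUC and pAUC, these metrics depend on the chosen threshold, which is why the renamed (ground-truth-bearing) file names are needed before any of them can be computed.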