README.md
This repository contains my paper reading notes on deep learning and machine learning.
The sections below record paper reading activity in chronological order. See notes organized according to subfields [here](organized.md) (up to 06-2019).

## 2019-11 (15)

- [Vehicle Detection With Automotive Radar Using Deep Learning on Range-Azimuth-Doppler Tensors](http://openaccess.thecvf.com/content_ICCVW_2019/papers/CVRSUAD/Major_Vehicle_Detection_With_Automotive_Radar_Using_Deep_Learning_on_Range-Azimuth-Doppler_ICCVW_2019_paper.pdf) [[Notes](paper_notes/radar_iccv.md)] <kbd>ICCV 2019</kbd>
- [GPP: Ground Plane Polling for 6DoF Pose Estimation of Objects on the Road](https://arxiv.org/abs/1811.06666) [[Notes](paper_notes/gpp.md)] (UCSD, mono 3DOD)
- [MVRA: Multi-View Reprojection Architecture for Orientation Estimation](http://openaccess.thecvf.com/content_ICCVW_2019/papers/ADW/Choi_Multi-View_Reprojection_Architecture_for_Orientation_Estimation_ICCVW_2019_paper.pdf) [[Notes](paper_notes/mvra.md)] <kbd>ICCV 2019</kbd>
- [YOLOv3: An Incremental Improvement](https://pjreddie.com/media/files/papers/YOLOv3.pdf)
- [Can We Trust You? On Calibration of a Probabilistic Object Detector for Autonomous Driving](https://arxiv.org/abs/1909.12358) [[Notes](paper_notes/towards_safe_ad_calib.md)] <kbd>IROS 2019</kbd> (DriveU)
- [LaserNet: An Efficient Probabilistic 3D Object Detector for Autonomous Driving](https://arxiv.org/abs/1903.08701) [[Notes](paper_notes/lasernet.md)] <kbd>CVPR 2019</kbd> (uncertainty)
- [LaserNet KL: Learning an Uncertainty-Aware Object Detector for Autonomous Driving](https://arxiv.org/abs/1910.11375) [[Notes](paper_notes/lasernet_kl.md)] (LaserNet with KL divergence)
- [IoUNet: Acquisition of Localization Confidence for Accurate Object Detection](https://arxiv.org/abs/1807.11590) [[Notes](paper_notes/iou_net.md)] <kbd>ECCV 2018</kbd>
- [gIoU: Generalized Intersection over Union: A Metric and A Loss for Bounding Box Regression](https://arxiv.org/abs/1902.09630) [[Notes](paper_notes/giou.md)] <kbd>CVPR 2019</kbd>
- [KL Loss: Bounding Box Regression with Uncertainty for Accurate Object Detection](https://arxiv.org/abs/1809.08545) [[Notes](paper_notes/kl_loss.md)] <kbd>CVPR 2019</kbd>
- [CAM-Convs: Camera-Aware Multi-Scale Convolutions for Single-View Depth](https://arxiv.org/abs/1904.02028) [[Notes](paper_notes/cam_conv.md)] <kbd>CVPR 2019</kbd>
- [BayesOD: A Bayesian Approach for Uncertainty Estimation in Deep Object Detectors](https://arxiv.org/abs/1903.03838) [[Notes](paper_notes/bayes_od.md)]
- [TW-SMNet: Deep Multitask Learning of Tele-Wide Stereo Matching](https://arxiv.org/abs/1906.04463) [[Notes](paper_notes/twsm_net.md)] <kbd>ICIP 2019</kbd>
- [Accurate Uncertainties for Deep Learning Using Calibrated Regression](https://arxiv.org/abs/1807.00263) [[Notes](paper_notes/dl_regression_calib.md)] <kbd>ICML 2018</kbd>
- [Calibrating Uncertainties in Object Localization Task](https://arxiv.org/abs/1811.11210) [[Notes](paper_notes/2dod_calib.md)] <kbd>NIPS 2018</kbd>
- [Classification of Objects in Polarimetric Radar Images Using CNNs at 77 GHz](http://sci-hub.tw/10.1109/APMC.2017.8251453) (Radar, polar) <-- todo
- [Gated2Depth: Real-time Dense Lidar from Gated Images](https://arxiv.org/abs/1902.04997) <kbd>ICCV 2019 oral</kbd>
- [PifPaf: Composite Fields for Human Pose Estimation](https://arxiv.org/abs/1903.06593) <kbd>CVPR 2019</kbd>
- [Eliminating the Blind Spot: Adapting 3D Object Detection and Monocular Depth Estimation to 360° Panoramic Imagery](https://arxiv.org/abs/1808.06253) <kbd>ECCV 2018</kbd> (Monocular 3D object detection and depth estimation)
- [On Calibration of Modern Neural Networks](https://arxiv.org/abs/1706.04599) <kbd>ICML 2017</kbd> (Weinberger)
- [Measuring Calibration in Deep Learning](https://arxiv.org/abs/1904.01685) <kbd>CVPR 2019</kbd>
- [Probabilistic Object Detection: Definition and Evaluation](https://arxiv.org/abs/1811.10800)
- [Deep Learning Based 3D Object Detection for Automotive Radar and Camera](https://www.astyx.com/fileadmin/redakteur/dokumente/Deep_Learning_Based_3D_Object_Detection_for_Automotive_Radar_and_Camera.PDF) (Astyx)
- [Automotive Radar Dataset for Deep Learning Based 3D Object Detection](https://www.astyx.com/fileadmin/redakteur/dokumente/Automotive_Radar_Dataset_for_Deep_learning_Based_3D_Object_Detection.PDF) (Astyx)
- [End-to-end Lane Detection through Differentiable Least-Squares Fitting](https://arxiv.org/abs/1902.00293) <kbd>ICCV 2019</kbd>
- [Momentum Contrast for Unsupervised Visual Representation Learning](https://arxiv.org/abs/1911.05722) (Kaiming He)
- [Frustum ConvNet: Sliding Frustums to Aggregate Local Point-Wise Features for Amodal 3D Object Detection](https://arxiv.org/abs/1903.01864) <kbd>IROS 2019</kbd>
- [Dropout Sampling for Robust Object Detection in Open-Set Conditions](https://arxiv.org/abs/1710.06677) <kbd>ICRA 2018</kbd> (Niko Sünderhauf)
- [Evaluating Merging Strategies for Sampling-based Uncertainty Techniques in Object Detection](https://arxiv.org/abs/1809.06006) <kbd>ICRA 2019</kbd> (Niko Sünderhauf)
- [Sparse-to-Dense: Depth Prediction from Sparse Depth Samples and a Single Image](https://arxiv.org/abs/1709.07492) <kbd>ICRA 2018</kbd> (depth completion)
- [Self-supervised Sparse-to-Dense: Self-supervised Depth Completion from LiDAR and Monocular Camera](https://arxiv.org/abs/1807.00275) <kbd>ICRA 2019</kbd> (depth completion)
- [Enhancing self-supervised monocular depth estimation with traditional visual odometry](https://arxiv.org/abs/1908.03127) <kbd>3DV 2019</kbd> (sparse to dense)

## 2019-10 (18)

- [Review of monocular object detection](paper_notes/review_mono_3dod.md)

paper_notes/dl_regression_calib.md

# [Accurate Uncertainties for Deep Learning Using Calibrated Regression](https://arxiv.org/abs/1807.00263)

_November 2019_

tl;dr: Extends NN calibration from classification to regression.

#### Overall impression

The paper has a great introduction to the background of model calibration, and also summarizes classification calibration really well.

The method can give calibrated credible intervals given a sufficient amount of iid data.

For application of this in object detection, see [calibrating uncertainties in object detection](2dod_calib.md) and [can we trust you](towards_safe_ad_calib.md).

#### Key ideas

- For regression, the regressor $H$ outputs at each step $t$ a CDF $F_t$ targeting $y_t$.
- A calibrated regressor $H$ satisfies
$$\frac{1}{T}\sum_{t=1}^T\mathbb{I}\{y_t \le F_t^{-1}(p)\} = p$$ for all $p \in (0, 1)$. This notion of calibration also extends to general confidence intervals.
- The calibration is usually measured with a calibration plot (aka reliability plot).
  - For classification, divide the predictions $p_t$ into intervals $I_t$; then plot the predicted average $x = mean(p_t)$ vs. the empirical average $y = mean(y_t)$ for $p_t \in I_t$.
  - For regression, as an approximation, divide into bins $I_t$; for $p_t \in I_t$, plot the predicted average $x = mean(p_t)$ vs. the empirical average $y = \frac{1}{T}\sum_{\tau=1}^T\mathbb{I}\{F_\tau(y_\tau) \le p_t\}$. Then fit a model (e.g., isotonic regression) on this dataset.
- For example, for $p = 0.95$, if only 80/100 observed $y_t$ fall below the 95% quantile of $F_t$, then adjust the 95% to 80%.
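
A minimal sketch of this calibration check on synthetic data (the Gaussian regressor, the mis-specified scale, and all variable names are illustrative assumptions, not from the paper):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
T = 1000
mu = rng.normal(size=T)                 # predicted means
sigma = 0.5                             # predicted scale (deliberately too small)
y = mu + rng.normal(scale=1.0, size=T)  # true targets are noisier than predicted

# F_t(y_t): where each observed target falls in its predicted CDF.
p_pred = norm.cdf(y, loc=mu, scale=sigma)

# Empirical frequency at each confidence level p:
# (1/T) * sum_t I{F_t(y_t) <= p}, which should be close to p for a calibrated model.
levels = np.linspace(0.05, 0.95, 19)
p_emp = np.array([(p_pred <= p).mean() for p in levels])
print(np.round(p_emp - levels, 2))  # large gaps reveal miscalibration
```

The (levels, p_emp) pairs form the recalibration dataset used by the code snippet at the end of these notes.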

#### Technical details

- Evaluation: calibration error
$$CalErr = \sum_j w_j (p_j - \hat{p}_j)^2$$
- cf. ECE (expected calibration error) from [can we trust you](towards_safe_ad_calib.md)

#### Notes

- [Model calibration in the sense of classification](https://pyvideo.org/pycon-israel-2018/model-calibration-is-your-model-ready-for-the-real-world.html)
- Platt scaling just uses a logistic regression on the output of the model. See [this video](https://pyvideo.org/pycon-israel-2018/model-calibration-is-your-model-ready-for-the-real-world.html) for details. It recalibrates the predictions of a pre-trained classifier in a post-processing step. Thus it is classifier agnostic.
- [Isotonic regression (保序回归)](https://scikit-learn.org/stable/auto_examples/plot_isotonic_regression.html) is a piece-wise constant function that finds a non-decreasing approximation of any function.

```python
from sklearn.isotonic import IsotonicRegression

ir = IsotonicRegression()  # or LogisticRegression() for Platt scaling
ir.fit(levels, p_emp)  # e.g., the (levels, p_emp) recalibration pairs from the sketch above
```

paper_notes/towards_safe_ad_calib.md

tl;dr: Calibration of the network for a probabilistic object detector.

#### Overall impression

The paper extends previous work in the [probabilistic lidar detector](towards_safe_ad.md) and its [successor](towards_safe_ad2.md). It is based on the work of Pixor.

Calibration: a probabilistic object detector should predict uncertainties that match the natural frequency of correct predictions: 90% of the predictions with a 0.9 score from a calibrated detector should be correct. Humans have an intuitive notion of probability in a frequentist sense. --> cf. [accurate uncertainty via calibrated regression](dl_regression_calib.md).

A calibrated regression is a bit harder to interpret: $P(y_{gt} \le F^{-1}(p)) = p$, where $F^{-1} = F_q$ is the inverse of the CDF, i.e., the quantile function.

The paper also has a very good way to visualize uncertainty in 2D object detectors.

$$ECE = \sum_{m=1}^M \frac{N_m}{N}|p^m - \hat{p}^m|$$

- Isotonic regression (保序回归)
- At test time, the object detector produces an uncalibrated uncertainty, which is then corrected by the recalibration model $g()$. In practice, we build a recalibration dataset from validation data.
- Post-processing does not guarantee recalibration of individual predictions (only by bins).
- It changes the probability distribution (Gaussian --> non-Gaussian).
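
A minimal sketch of the binned ECE above (the binning scheme and all names are illustrative assumptions, not the paper's code):

```python
import numpy as np

def expected_calibration_error(scores, correct, n_bins=10):
    """scores: predicted confidences in [0, 1]; correct: 1 for a correct prediction, else 0."""
    scores, correct = np.asarray(scores), np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (scores > lo) & (scores <= hi)
        n_m = in_bin.sum()
        if n_m == 0:
            continue
        p_m = scores[in_bin].mean()       # p^m: mean predicted confidence in bin m
        p_hat_m = correct[in_bin].mean()  # \hat{p}^m: empirical frequency of correct predictions
        ece += (n_m / len(scores)) * abs(p_m - p_hat_m)
    return ece
```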

paper_notes/twsm_net.md

The paper is among the first to fuse stereo pairs with different focal lengths.

- Single image depth estimation on the wide FoV has better performance on the periphery, but not so much in the overlapped FoV.
- TW-SMNet merges the depth of the two. The single image depth estimation branch forces the network to learn semantics. Actually only the stereo matching prediction is used during inference. **The single image depth estimation is used as an auxiliary training branch**.
- Proper fusion of the two predictions can also improve performance (the paper has a long discussion on how to fuse them):
  - Input fusion: the authors fuse the initial results (absolute metric values) from stereo matching in the tele FoV with the wide FoV raw image as input. This idea is similar to [sparse to dense](sparse_to_dense.md).
  - Output fusion: pixel-wise selection between the two predictions. --> This leads to abrupt changes in depth, so a global smoother such as FGS (fast global smoother) is needed.
  - Deep fusion of depth uses robust regression as a second-stage refinement.

#### Technical details

- **Classification-based robust regression** loss: classify the regression target range into bins, then predict. Note that no cross-entropy loss is added; the loss is on the soft prediction (the weighted average of bin centers, weighted by the post-softmax scores). --> This is very similar to the multi-bin loss proposed by [deep3dbox](deep3dbox.md). See the sketch below.
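
A minimal sketch of such a soft bin-based prediction (the bin layout and names are assumptions, not the authors' code):

```python
import torch
import torch.nn.functional as F

def soft_bin_regression(logits, bin_centers):
    """logits: (N, K) scores over K bins; bin_centers: (K,) metric values, e.g. disparities."""
    weights = torch.softmax(logits, dim=-1)     # per-bin probabilities
    return (weights * bin_centers).sum(dim=-1)  # expected bin center = soft prediction

# The regression loss is applied directly to the soft prediction; no cross-entropy term:
# loss = F.smooth_l1_loss(soft_bin_regression(logits, bin_centers), target)
```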

#### Notes

- KITTI's stereo pairs have a baseline of 54 cm, while humans have a baseline of about 6 cm. Most trifocal lens systems on the market have baselines of a couple of cm, smaller than the human eye, and thus yield little disparity.
- Note on the results: merging the two actually finds a middle ground between the TW-SMNet models T and W. With stereo info, the intersected FoV has much better depth estimation than a single-image based model.
- Companion publication: [Multi-Task Learning of Depth from Tele and Wide Stereo Image Pairs](https://ieeexplore.ieee.org/abstract/document/8803566) <kbd>ICIP 2019</kbd>