Hello and thank you for your great work!
I encountered an issue when predicting depth from 360-degree ERP images. Specifically, when using a standard 2048x1024 equirectangular image (ERP), the predicted depth only appears in the center, while the left and right sides are completely black.
I tested both with:
My own ERP input image (2048x1024)
The official demo example: venice.jpg in assets/demo/
But the issue persists: valid depth appears only in the middle third of the image.
How I set up:
ERP Camera configuration (Spherical) is used.
Inference script is based on gradio_demo.py and your official gradio logic for Spherical camera rays.
I also tried running infer.py with JSON config like:
```json
{
  "name": "Spherical",
  "params": [500.0, 500.0, 320.0, 240.0, 2048.0, 1024.0, 3.14159, 1.57079]
}
```
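For reference, here is a rough sketch of how I understand the spherical ray sampling from these params (my own reconstruction, assuming the order is fx, fy, cx, cy, width, height, hfov, vfov with hfov/vfov as half-angles in radians; this is not the repository's actual implementation):

```python
import torch

# My own reconstruction of equirectangular (Spherical) ray sampling,
# NOT the repository's code. Assumed params: width, height and the
# half field of view hfov/vfov in radians (pi and pi/2 give a full
# 360 x 180 degree panorama).
width, height = 2048, 1024
hfov, vfov = 3.14159, 1.57079

# Longitude in [-hfov, +hfov], latitude in [-vfov, +vfov].
lon = torch.linspace(-hfov, hfov, width)
lat = torch.linspace(-vfov, vfov, height)
lat, lon = torch.meshgrid(lat, lon, indexing="ij")

# Unit ray directions on the sphere, with z as the forward axis.
x = torch.cos(lat) * torch.sin(lon)
y = torch.sin(lat)
z = torch.cos(lat) * torch.cos(lon)

# z < 0 wherever |lon| > pi/2, i.e. towards the left/right edges of the
# panorama: a depth defined as the z coordinate is negative there.
print("fraction of rays behind the camera plane:", (z < 0).float().mean().item())
```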
I expected valid depth predictions across the entire ERP image, not just the center. Since ERP represents a full 360-degree view, there should ideally also be valid depth in the left and right edge regions.
Questions:
Is this black-border behavior expected for Spherical camera predictions?
Are there any known limitations in ray sampling or ERP ray projection that restrict full horizontal coverage?
Should we adjust something (e.g., spherical rays’ angular sampling) to fix this issue?
Thanks again and looking forward to your suggestions!
I found this issue too. I printed out the values of the equirectangular depth, and it looks like there are negative depth values close to the left and right borders. I wonder if there are any extra post-processing steps for the equirectangular image format.
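For reference, I checked it with something along these lines (assuming the predicted depth is a torch tensor of shape [B, 1, H, W]):

```python
import torch

def negative_fraction_per_third(depth: torch.Tensor) -> dict:
    """Fraction of negative values in the left / middle / right thirds
    of an ERP depth map of shape [..., H, W]."""
    w = depth.shape[-1]
    thirds = {
        "left": depth[..., : w // 3],
        "middle": depth[..., w // 3 : 2 * w // 3],
        "right": depth[..., 2 * w // 3 :],
    }
    return {k: (v < 0).float().mean().item() for k, v in thirds.items()}
```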
Hey, thank you for the questions! Your setup looks correct. However, the depth is in fact negative outside the central 180° of the panorama, since those points lie behind the camera: we use as depth the z-axis coordinate measured from the camera plane, and the colorize function maps any negative value to black. If you would rather obtain the distance from the camera center (which is always positive), you would need to compute the norm of the 3D points, i.e. distance = torch.norm(pts_3d, dim=1, keepdim=True).
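As a rough sketch (the exact output key for the 3D points may differ depending on the version, so treat "points" below as a placeholder):

```python
import torch

def radial_distance(pts_3d: torch.Tensor) -> torch.Tensor:
    """Turn a per-pixel 3D point map [B, 3, H, W] into the radial distance
    [B, 1, H, W] from the camera center. Unlike the signed z-depth, this is
    always non-negative, so the left/right parts of the panorama are no
    longer mapped to black by the colorizer."""
    return torch.norm(pts_3d, dim=1, keepdim=True)

# Hypothetical usage ("points" is an assumed key, adjust to the actual output):
# distance = radial_distance(predictions["points"])
```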
Hope it helps, let me know if you have any further questions!