ERP Depth Prediction Missing on Panorama Edges #10

Open
Super-lyh-neversleep opened this issue Apr 21, 2025 · 2 comments

@Super-lyh-neversleep

Hello and thank you for your great work!

I encountered an issue when predicting depth from 360-degree ERP images. Specifically, with a standard 2048x1024 equirectangular (ERP) image, the predicted depth appears only in the center, while the left and right sides are completely black.

I tested with both:
- My own ERP input image (2048x1024)
- The official demo example, venice.jpg in assets/demo/

But the issue persists: valid depth appears only in the middle 1/3 of the image.

How I set it up:
- The ERP camera configuration (Spherical) is used.
- My inference script is based on gradio_demo.py and your official Gradio logic for Spherical camera rays.
- I also tried running infer.py with a JSON config like the following:
```json
{
  "name": "Spherical",
  "params": [500.0, 500.0, 320.0, 240.0, 2048.0, 1024.0, 3.14159, 1.57079]
}
```
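For reference, here is a minimal sketch of the ERP ray convention I have in mind; this is a generic equirectangular parameterization in PyTorch, not necessarily the repository's exact implementation:

```python
import math
import torch

def erp_rays(height: int, width: int) -> torch.Tensor:
    """Unit ray directions for an equirectangular grid, shape (3, H, W).

    Longitude spans (-pi, pi) across the width and latitude spans
    (pi/2, -pi/2) down the height, sampled at pixel centers.
    """
    lon = (torch.arange(width) + 0.5) / width * 2 * math.pi - math.pi
    lat = math.pi / 2 - (torch.arange(height) + 0.5) / height * math.pi
    lat, lon = torch.meshgrid(lat, lon, indexing="ij")
    # z-forward camera frame: z = cos(lat) * cos(lon) becomes negative
    # for |lon| > pi/2, i.e. for pixels looking behind the camera plane.
    x = torch.cos(lat) * torch.sin(lon)
    y = -torch.sin(lat)
    z = torch.cos(lat) * torch.cos(lon)
    return torch.stack([x, y, z], dim=0)

rays = erp_rays(1024, 2048)  # for a 2048x1024 ERP image
```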

I expected valid depth predictions across the entire ERP image, not just the center. Since ERP represents a full 360-degree view, there should ideally be depth predictions for the left and right edge areas as well.

Questions:
- Is this black-border behavior expected for Spherical camera predictions?
- Are there any known limitations in ray sampling or ERP ray projection that restrict full horizontal coverage?
- Should we adjust something (e.g., the spherical rays' angular sampling) to fix this issue?
Thanks again and looking forward to your suggestions!

@yuyanli0831

I found this issue too. I printed out the values of the equi depth, and it looks like there are negative depth values close to the left and right borders. I wonder if there are any extra post-processing steps for the equi image format.

@lpiccinelli-eth
Owner

Hey, thank you for the questions! Your setting looks correct. However, the depth is in fact negative for angles > 180°, since those points lie behind the camera position: we use the z-axis coordinate from the camera plane as depth, and the colorize function sets any negative value to black. If you would rather obtain the distance from the camera center (which is always positive), you would need to compute the norm of the 3D points, i.e. distance = torch.norm(pts_3d, dim=1, keepdim=True).
Hope it helps, let me know if you have any further questions!
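For example, a minimal sketch of that conversion (here pts_3d stands for the camera-space 3D points as a (B, 3, H, W) tensor; the exact name and shape in the codebase may differ):

```python
import torch

def points_to_distance(pts_3d: torch.Tensor) -> torch.Tensor:
    """Euclidean distance from the camera center, shape (B, 1, H, W).

    Unlike the z-axis depth, the norm is non-negative everywhere, so
    the ERP regions behind the camera plane no longer colorize to black.
    """
    return torch.norm(pts_3d, dim=1, keepdim=True)
```

The resulting distance map can then be passed to the same colorize step as before.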
