[FEATURE] Add public API to Scene for drawing cameras frustum and trajectories.#2593
Conversation
duburcqa
left a comment
All PRs adding features or fixing bugs must come with accompanying unit tests. In this case, you have no other choice than resorting to screenshot-based validation.

Hey @duburcqa, I have addressed your feedback and added screenshot-based unit tests for draw_debug_frustum and draw_debug_trajectory. Could you take a look?
Look at:

```python
pyrender_viewer = scene.visualizer.viewer._pyrender_viewer
assert pyrender_viewer.is_active
rgb_arr, *_ = pyrender_viewer.render_offscreen(
    pyrender_viewer._camera_node,
    pyrender_viewer._renderer,
    rgb=True,
    depth=False,
    seg=False,
    normal=False,
)
if sys.platform == "darwin":
    glinfo = pyrender_viewer.context.get_info()
    renderer = glinfo.get_renderer()
    if renderer == "Apple Software Renderer":
        pytest.xfail("Tile ground colors are altered on Apple Software Renderer.")
assert rgb_array_to_png_bytes(rgb_arr) == png_snapshot
```
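The `rgb_array_to_png_bytes` helper used above is not shown in this thread. A minimal self-contained sketch of what such a helper could look like, assuming an HxWx3 uint8 image given here as nested lists (a real implementation would more likely use numpy together with PIL or imageio):

```python
import struct
import zlib


def rgb_array_to_png_bytes(rgb_rows):
    """Encode an HxWx3 RGB image (nested lists of 0-255 ints) as PNG bytes."""
    height = len(rgb_rows)
    width = len(rgb_rows[0])

    def chunk(tag, payload):
        # PNG chunk layout: 4-byte length, tag, payload, CRC32 over tag+payload.
        data = tag + payload
        return struct.pack(">I", len(payload)) + data + struct.pack(">I", zlib.crc32(data))

    # IHDR: 8-bit depth, color type 2 (truecolor RGB), no interlace.
    ihdr = struct.pack(">IIBBBBB", width, height, 8, 2, 0, 0, 0)
    # Each scanline is prefixed with filter type 0 (None), then zlib-compressed.
    raw = b"".join(
        b"\x00" + bytes(channel for pixel in row for channel in pixel)
        for row in rgb_rows
    )
    return (
        b"\x89PNG\r\n\x1a\n"
        + chunk(b"IHDR", ihdr)
        + chunk(b"IDAT", zlib.compress(raw))
        + chunk(b"IEND", b"")
    )
```

Because zlib compression is deterministic for identical input, the resulting bytes are stable across runs, which is what makes an exact byte comparison against a stored snapshot meaningful.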
The procedure is quite painful unfortunately, but there is no way around it if you are planning to work on rendering features for quite some time. It is based on a thin custom wrapper around pytest-snapshot to support pixel-matching validation.
By the way, the title of a PR must focus on what the end user should be aware of, not implementation details. In this case: add scene helper methods for drawing camera frustums and trajectories.
🔴 Benchmark Regression Detected ➡️ Report
Ignore the performance regression report. It is a bit unstable at the moment, unfortunately.
…PI with documentation
Force-pushed 06689c2 to 8ca3cf5
Hi @duburcqa, I've uploaded the snapshot to my HuggingFace fork. Here's the commit: https://huggingface.co/datasets/Mehak261124/snapshots/commit/c947da0356cb88838242a484d8763086bf50a35c
Please review and merge into Genesis-Intelligence/snapshots when ready.
This snapshot is not right. The camera frustum is not visible because the camera is misplaced. It may be necessary to run at least one simulation step to avoid camera artefacts. You should also update the PR with the corresponding png_snapshot.
Hey @duburcqa, I've addressed all your feedback. Could you please review and merge the HuggingFace snapshot PR?
By the way, do not forget to remove the doc update.
…_frustum_and_trajectory
Based on the latest CI diff, it looks like the issue is due to llvmpipe rendering on Linux (the scene appears mostly black), so it's not something the blur change can fix. Since this seems platform-specific, I'm planning to mark this test as xfail on llvmpipe (similar to the Apple Software Renderer case) so CI can pass for now. Let me know if that works.
This is the image diff. Black means there is no difference. |
No, this is not OK. llvmpipe is reliable on Linux; I don't think it is the issue. Moreover, it is failing on production CI, which is using an Nvidia RTX6000 Blackwell Datacenter Edition.
Thanks for clarifying! I understand now: the issue is that the stored PNG snapshot doesn't match what's actually rendered on Linux/production CI. The snapshot was generated on a different platform, and the visual difference is too large to be covered by blurring alone.
I don't think it will help. We are using one single snapshot for all platforms. If I generate it on Linux, it will pass on Linux but presumably fail on Mac.
Could you first try to disable run_in_thread? Just to make sure it is a pure rendering issue? |
…bug_frustum_and_trajectory
Hey @duburcqa, I've pushed the change with run_in_thread disabled.
Working on this issue. It is due to a bug in Genesis.
Fixed! The CI should pass now \o/ |
Updated the unit test. Hopefully it is finally ready now. |
Ok it is still failing... I will have a look tomorrow. |
Found yet another bug... I will fix it. |
Force-pushed 8c606ca to 939f0b7
It should pass this time! |
Force-pushed 2987ca4 to 46978fb
Thank you so much @duburcqa!

Description
Added two new debug visualization methods to `scene.py`:

- `draw_debug_frustum(camera)`: Visualizes the frustum of any camera, including sensor cameras, by reusing the existing `create_camera_frustum()` from `utils/mesh.py`. Previously this was only available for the interactive viewer camera via `show_cameras=True` in `vis_options`.
- `draw_debug_trajectory(positions)`: Draws a trajectory as a series of connected lines through a list of 3D positions. Useful for visualizing robot paths and motion history.

Both methods follow the same pattern as the existing `draw_debug_*` methods and render as markers: visible in the interactive viewer but invisible to robot cameras (depth, segmentation, and RGB renders are unaffected).

Related Issue
Resolves #1049
Motivation and Context
`draw_debug_frustum` reuses the existing `create_camera_frustum()` from `utils/mesh.py`, which was previously only used internally for the viewer camera. This change makes frustum visualization available for any camera, including sensor cameras, without duplicating code.

`draw_debug_trajectory` was missing entirely from the debug visualization API and is a commonly needed tool for visualizing robot paths during development and debugging.

How Has This Been / Can This Be Tested?
Tested manually on macOS Apple Silicon (M2), Python 3.11.
Test draw_debug_frustum:
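One platform-independent complement to screenshot comparison is checking the frustum geometry numerically. A minimal sketch assuming a standard perspective camera looking down -Z (the helper name is illustrative, not the Genesis API):

```python
import math


def frustum_corners(fov_y_deg, aspect, near, far):
    """Eight corner points of a perspective frustum in camera space.

    The camera looks down -Z; fov_y_deg is the full vertical field of view.
    """
    corners = []
    for z in (near, far):
        half_h = z * math.tan(math.radians(fov_y_deg) / 2.0)  # half-height at depth z
        half_w = half_h * aspect
        for sx in (-1.0, 1.0):
            for sy in (-1.0, 1.0):
                corners.append((sx * half_w, sy * half_h, -z))
    return corners
```

A test along these lines would not catch shading or color regressions, but it pins down the frustum shape without depending on the renderer backend.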
Test draw_debug_trajectory:
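A rendering-free invariant such a test could also assert is the segment pairing that "a series of connected lines" implies: N positions yield N-1 segments, with consecutive segments sharing an endpoint. A sketch (illustrative helper, not Genesis internals):

```python
def trajectory_segments(positions):
    """Pair consecutive 3D positions into line segments.

    [p0, p1, p2] -> [(p0, p1), (p1, p2)]
    """
    return list(zip(positions, positions[1:]))
```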
Screenshots:
Checklist: