GSoC 2025 - Week 8
I added rendering support for annotations and completed the remaining annotation features. Here are the details:
- PR #909 (open)
  - Rendering of annotations to a PIL (Python Imaging Library) image, ready for compositing as an alpha over, complete with text depth support etc. as seen in the 3D viewport
  - A pre-render handler to ensure the annotations image is generated during the rendering pass
- PR #910 (open)
  - A trajectory universe info annotation that shows universe details like the frame number, topology and trajectory file names when present, etc.
  - Generic 2D and 3D label annotations that are common to all entities (`Molecule` and `Trajectory`). These allow simple labels (in both 3D and 2D space) without needing a full custom annotation for simple cases
  - Support for correctly scaled and positioned display of annotations in the 3D viewport camera view
Here is a video that shows the progress from this week:
Here are some learnings from this week:
- Blender’s default font is Inter. PIL needs access to the font file to display text; we currently use the same font to match what is seen in the 3D viewport
- PIL images can be drawn at a larger scale and then scaled back down to avoid anti-aliasing issues with lines, since PIL has no direct support for anti-aliased lines
- Blender uses a bit of magic math to display the camera view outline in the 3D viewport. The actual camera width, height and position shown can be determined from the `view_camera_zoom` and `view_camera_offset` params of the region’s 3D data. Some reverse engineering (by reading the Blender source) was needed to figure this out, and there is also a dependency on the viewport aspect ratio
- PIL’s `textbbox` can be used to get the bounding box of a string, to determine the text width / height
- PIL’s coordinate system has its origin at the top left, whereas Blender’s 3D viewport origin is at the bottom left
- Blender’s `object_utils.world_to_camera_view` (from `bpy_extras`) can be used directly to get the corresponding 2D coordinates of a 3D point in camera space (for rendering)
- Python annotations for a `tuple` can specify the types of the individual elements, e.g. `tuple[float, float]`. We use this to support 2D and 3D vectors as annotation inputs (which correspond to Blender’s `FloatVectorProperty`)
- In render mode, the inverted value of the camera’s world matrix (`camera.matrix_world.inverted()`) can be used as the view matrix for any computations
- `pytest` tests don’t seem to work well with Blender’s render operator call (`bpy.ops.render.render`) - there are several issues, as outlined in a PR #909 comment
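As a minimal sketch of the PIL side of this (not MN’s actual code - the sizes and text are made up, and the Inter font path is omitted since it depends on the Blender installation), drawing at a larger scale, measuring text with `textbbox`, and downscaling for anti-aliasing looks roughly like:

```python
from PIL import Image, ImageDraw, ImageFont

SCALE = 4  # supersampling factor to work around PIL's non-anti-aliased lines

# In MN this would be ImageFont.truetype() pointed at Blender's bundled
# Inter font file; the default font keeps this sketch self-contained.
font = ImageFont.load_default()

# Draw everything at SCALE times the target size...
img = Image.new("RGBA", (200 * SCALE, 100 * SCALE), (0, 0, 0, 0))
draw = ImageDraw.Draw(img)
draw.line([(10 * SCALE, 10 * SCALE), (190 * SCALE, 90 * SCALE)],
          fill=(255, 255, 255, 255), width=2 * SCALE)
draw.text((20 * SCALE, 20 * SCALE), "frame 42", font=font,
          fill=(255, 255, 255, 255))

# textbbox returns (left, top, right, bottom), giving text width / height
left, top, right, bottom = draw.textbbox((0, 0), "frame 42", font=font)
text_width, text_height = right - left, bottom - top

# ...then resize back down with a smoothing filter; the downscale
# averages pixels, which anti-aliases the lines
img = img.resize((200, 100), Image.LANCZOS)
```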
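For the camera-outline math, my reading of the Blender source suggests the zoom-to-scale conversion below; treat the exact formula and the fitting logic as assumptions to verify against the Blender version in use:

```python
import math

def camera_zoom_to_fac(view_camera_zoom: float) -> float:
    # Reverse engineered from Blender's BKE_screen_view3d_zoom_to_fac;
    # an assumption - check against the source for your Blender version
    return ((math.sqrt(2.0) + view_camera_zoom / 50.0) / 2.0) ** 2

def camera_view_size(region_w: float, region_h: float, cam_aspect: float,
                     view_camera_zoom: float) -> tuple[float, float]:
    """Hypothetical helper: width/height of the camera outline in region
    pixels, fitting the camera aspect ratio into the region. Ignores
    view_camera_offset, which additionally shifts the outline's centre."""
    fac = camera_zoom_to_fac(view_camera_zoom)
    if cam_aspect >= region_w / region_h:
        w = region_w * fac       # width-limited: fit to region width
        h = w / cam_aspect
    else:
        h = region_h * fac       # height-limited: fit to region height
        w = h * cam_aspect
    return w, h

# At view_camera_zoom == 0 the factor is 0.5, i.e. the outline spans
# half of the limiting region dimension
```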
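The top-left vs bottom-left origin difference reduces to a one-line flip of the y coordinate (a sketch; the helper name is mine, not MN’s):

```python
def flip_y(y: float, height: float) -> float:
    """Convert a y coordinate between PIL's top-left origin and Blender's
    bottom-left viewport origin. The flip is its own inverse, so the same
    function works in both directions."""
    return height - y
```

For example, a point 10 px from the top of a 100 px tall PIL image sits 90 px from the bottom in viewport coordinates.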
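The tuple-annotation idea can be sketched with the standard `typing` helpers (the `vector_size` function here is hypothetical, not MN’s API):

```python
from typing import get_args, get_origin

def vector_size(annotation) -> int | None:
    """Return the vector size implied by an annotation like
    tuple[float, float] (2D) or tuple[float, float, float] (3D),
    or None if the annotation isn't a float tuple."""
    if get_origin(annotation) is tuple:
        args = get_args(annotation)
        if args and all(a is float for a in args):
            return len(args)
    return None

# A 2-element float tuple would map to FloatVectorProperty(size=2),
# a 3-element one to FloatVectorProperty(size=3)
```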
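The view-matrix learning can be checked with plain NumPy standing in for `mathutils` (a sketch with a toy camera transform, not Blender data):

```python
import numpy as np

# A toy camera world matrix: a 90-degree rotation about Z plus a
# translation to (1, 2, 3). In Blender this would be camera.matrix_world.
world = np.array([
    [0.0, -1.0, 0.0, 1.0],
    [1.0,  0.0, 0.0, 2.0],
    [0.0,  0.0, 1.0, 3.0],
    [0.0,  0.0, 0.0, 1.0],
])

# The view matrix is the inverse of the camera's world matrix - it maps
# world-space points into camera space (the mathutils equivalent is
# camera.matrix_world.inverted()).
view = np.linalg.inv(world)

# Sanity check: a point at the camera's own location lands at the
# camera-space origin
cam_location = world[:3, 3]
p = np.append(cam_location, 1.0)
assert np.allclose(view @ p, [0.0, 0.0, 0.0, 1.0])
```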
Next week, I plan to work on density component support. MN currently supports `.map` files as read by `mrcfile`, and doesn’t seem to directly support the `.dx` and `.vdb` files generated by MDAnalysis. I will outline the approach in a discussion before implementing the details.
This post is licensed under CC BY 4.0 by the author.