Hello
TLDR: Are there any confirmed "per-pulse capture" solutions for the IMX477? We are trying to push the RPi HQ camera as close to a true hardware-triggered camera as possible.
Software-alignment works / is solved
I've been working on a multi-Raspberry-Pi HQ camera synchronization setup for about 18 months. I've posted several times on here, including for example this chain:
viewtopic.php?p=2297222#p2297222
Initially my approach was to do "software" alignment: collect video data from multiple cameras, then use the per-frame metadata function that was added to reconstruct synchronized videos.
Code:

frame_wall_clock = metadata.get('FrameWallClock')
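For context, that lookup sits in a per-frame capture loop on each Pi. A minimal Picamera2 sketch (the configuration and frame count here are placeholders, not our exact pipeline):
Code:

# Sketch: collect per-frame wall-clock timestamps during a recording.
from picamera2 import Picamera2

picam2 = Picamera2()
picam2.configure(picam2.create_video_configuration())
picam2.start()

wall_clocks = []
for _ in range(2400):                     # e.g. 60 s @ 40 FPS
    metadata = picam2.capture_metadata()  # blocks until the next frame
    wall_clocks.append(metadata.get('FrameWallClock'))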
And this more or less worked! Thank you and congrats again to the amazing engineers working on this.
However, the problem is that this software alignment is extremely computationally expensive. I have to decompress streams from 10 or more cameras, do timestamp alignment, and then recompress them: at least 10x and as much as 50x realtime worth of work.
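To be clear, the matching step itself is cheap; it's the decode/re-encode around it that costs. For reference, a rough sketch of the alignment (assuming sorted per-camera lists of FrameWallClock values, like wall_clocks above):
Code:

# Sketch: match each reference-camera frame to the nearest frame (by wall
# clock) on every other camera, within half a frame period.
import bisect

def align_frames(timestamps, ref_cam, tolerance_ns=12_500_000):  # ~12.5 ms = half a frame @ 40 FPS
    aligned = {cam: [] for cam in timestamps}
    for t_ref in timestamps[ref_cam]:
        for cam, ts in timestamps.items():
            i = bisect.bisect_left(ts, t_ref)
            # nearest of the two neighbours around the insertion point
            best = min((j for j in (i - 1, i) if 0 <= j < len(ts)),
                       key=lambda j: abs(ts[j] - t_ref), default=None)
            if best is not None and abs(ts[best] - t_ref) <= tolerance_ns:
                aligned[cam].append(best)
            else:
                aligned[cam].append(None)  # no match: a dropped frame on this camera
    return aligned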
XVS - drops frames
I've worked on the above approach for over a year and I'm now ready for the hardware version using XVS. So we started using a master/slave setup, with a pin soldered to the master's XVS line. It seems that about 50% of the time all cameras return the same number of frames, but the rest of the time we're off by 1-5 frames over a 60-second period.
We thought for a while that the narrow 100 ns XVS pulse was what caused the drops, or that the line was noisy, so we should switch to an external TTL pulse.
Timing of the arriving edge is stochastic - and cannot guarantee a frame is saved
However, it seems that whether the camera releases a frame depends on the relative timing of the arriving XVS/TTL edge. From researching online (and ChatGPT), our couple of extra/missing frames might be due to the triggering pulse not being latched/buffered by the RPi camera, so periodically it lands in a "dead zone" where no frame is returned.
This frame drop may also arise from a weird idiosyncrasy of the master camera, which seems to sometimes drop the first/last frames (though this can perhaps be bypassed by running all RPis in slave mode with an external trigger).
We are wondering whether sending many pulses per frame would help with this stochastic problem (e.g. 100 x 100 ns pulses, still much less than the frame period, which for us @ 40 FPS is about 25 ms). But this seems hacky, and it also depends on the RPi OS, which isn't real-time.
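One way to take the OS out of the loop, whether it's one pulse or a burst per frame: pigpio waves are DMA-timed, so once the wave is started the pulse timing no longer depends on the Linux scheduler. A sketch, with GPIO 18 and the 10 us width as placeholder assumptions (pigpio's wave resolution is 1 us, so a 100 ns pulse isn't possible this way; and if driving XVS directly, mind the pad's 1.8 V logic level and level-shift as needed):
Code:

# Sketch: DMA-timed 40 Hz trigger train; timing is independent of the
# (non-realtime) OS once the wave is running.
import pigpio

TRIGGER_GPIO = 18        # placeholder pin driving the trigger line
PULSE_US = 10            # pulse width (1 us is pigpio's wave resolution)
PERIOD_US = 25_000       # 40 FPS -> 25 ms frame period

pi = pigpio.pi()
pi.set_mode(TRIGGER_GPIO, pigpio.OUTPUT)
pi.wave_add_generic([
    pigpio.pulse(1 << TRIGGER_GPIO, 0, PULSE_US),              # high for PULSE_US
    pigpio.pulse(0, 1 << TRIGGER_GPIO, PERIOD_US - PULSE_US),  # low for the rest
])
wave_id = pi.wave_create()
pi.wave_send_repeat(wave_id)  # repeats until wave_tx_stop()

# ... record for 60 s, then:
# pi.wave_tx_stop(); pi.wave_delete(wave_id); pi.stop()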
Our constraints are somewhat relaxed
Our must-haves: we cannot lose frames randomly during this 60-second recording, or we can't reconstruct the fast-moving objects in the videos.
We don't care if the cameras skip a few frames or hang, or if the inter-frame interval is a bit off, as long as all the cameras behave the same and grab the same number of frames from (approximately) the same window of time.
We are basically trying to push the RPi HQ camera to be as close to a true hardware-triggered camera as possible.
If anyone can propose a solution to this, we'd much appreciate it. This can include additional hardware or camera modules, though the IMX477 is quite ideal for our use case.
Statistics: Posted by catubc — Tue Dec 23, 2025 3:58 pm