Hi, sounds like an interesting problem.
My first comment is that rpicam-apps are really only one-camera-at-a-time. So as you've discovered, there's no particularly obvious way to include a stream from another camera, or something like that. One approach might be to re-engineer the whole rpicam-apps and post-processing framework to be multi-camera, but that's sounding like rather too much effort!
If you wanted to use the existing post-processing, I'd be tempted to add a second event loop for the second camera, much like the existing event loop, and some (global) API function for "give me the most recent frame from the 2nd camera". Then in the post-processing for the main camera, I'd call that function to get a 2nd camera frame. Not exactly lovely, but I guess it would work.
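To make that concrete, here's a minimal sketch of the "give me the most recent frame" idea. All the names here are mine, and in rpicam-apps itself this would be C++ inside the existing event-loop code, but the pattern (a mutex-protected holder that the second camera's loop writes to and the main camera's post-processing reads from) looks like this:

```python
import threading
import time

class LatestFrame:
    """Holds only the most recent frame delivered by the second camera's loop."""

    def __init__(self):
        self._lock = threading.Lock()
        self._frame = None

    def update(self, frame):
        # Called from the second camera's event loop on every new frame.
        with self._lock:
            self._frame = frame

    def latest(self):
        # Called from the main camera's post-processing stage.
        with self._lock:
            return self._frame

latest = LatestFrame()

def second_camera_loop(num_frames):
    # Stand-in for the second camera's event loop; each "frame" is just a number.
    for i in range(num_frames):
        latest.update(i)
        time.sleep(0.001)

t = threading.Thread(target=second_camera_loop, args=(10,))
t.start()
t.join()
print(latest.latest())  # the most recently delivered frame
```

The holder deliberately keeps only one frame, so the main camera never blocks waiting for the second one and stale frames are simply overwritten.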
If that's too ugly, maybe it would be tidier not to use the existing post-processing, but to do that as part of your application, where you might have tidier access to the most recent frame from each camera. You could still copy code from the post-processing, of course!
Remember that on a Pi 5 you can have two streams from each camera, each one in either an RGB or YUV format, scaled as you like, so you should certainly try to use this to your advantage to avoid software scaling and format conversions. On a Pi 5 you can even set different crop regions for the two streams, though I don't know if that helps.
Finally, the suggestion of doing the processing in the application and not as "post-processing" sounds to me like it would be fairly easy to prototype in Python, if you like Python programming, of course!
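For a Python prototype, something along these lines with Picamera2 might be a starting point. The stream sizes, formats, and the `process` function are just placeholder assumptions; `main()` needs an actual Pi 5 with two cameras attached, so it's only defined here, not run:

```python
# Hypothetical stream configurations: a full-size main stream plus a
# scaled-down lores stream, so no software resizing is needed.
MAIN_STREAM = {"size": (1280, 720), "format": "RGB888"}
LORES_STREAM = {"size": (320, 240), "format": "YUV420"}

def process(frame0, frame1):
    # Placeholder for your own analysis combining the two frames.
    pass

def main():
    # Requires a Pi 5 with two cameras and the Picamera2 library installed.
    from picamera2 import Picamera2

    cam0 = Picamera2(0)
    cam1 = Picamera2(1)
    cam0.configure(cam0.create_video_configuration(main=MAIN_STREAM, lores=LORES_STREAM))
    cam1.configure(cam1.create_video_configuration(main=MAIN_STREAM, lores=LORES_STREAM))
    cam0.start()
    cam1.start()
    try:
        while True:
            frame0 = cam0.capture_array("main")   # most recent frame, camera 0
            frame1 = cam1.capture_array("lores")  # scaled-down frame, camera 1
            process(frame0, frame1)
    finally:
        cam0.stop()
        cam1.stop()
```

Because both cameras are owned by one application, you get direct access to the most recent frame from each without any of the global-API plumbing described above.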
Statistics: Posted by therealdavidp — Thu May 29, 2025 8:15 am