Handle overflows with two synchronized BladeRFs
Posted: Mon Jan 13, 2025 3:43 am
Hi,
Long story short: I built a 4-channel RX recorder with two bladeRF 2.0 micro xA9 boards. Their trigger pins are connected to each other, and the master device's clk_out is connected to the slave's clk_in.
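For reference, the wiring corresponds roughly to the following libbladeRF setup. This is a minimal sketch, not my exact code: the setup_sync name and device handles are mine, I am assuming the trigger line is shared over J51-1 (mini_exp-1), and error handling is abbreviated.
'''
#include <libbladeRF.h>

/* Minimal sketch of the clock + trigger wiring described above.
 * Assumes both devices are already opened. */
int setup_sync(struct bladerf *master, struct bladerf *slave)
{
    struct bladerf_trigger m_trig, s_trig;
    int status;

    /* Master exports its reference clock; slave runs from it */
    status = bladerf_set_clock_output(master, true);
    if (status != 0) return status;
    status = bladerf_set_clock_select(slave, CLOCK_SELECT_EXTERNAL);
    if (status != 0) return status;

    /* Master drives the shared trigger line, slave waits on it */
    status = bladerf_trigger_init(master, BLADERF_CHANNEL_RX(0),
                                  BLADERF_TRIGGER_J51_1, &m_trig);
    if (status != 0) return status;
    m_trig.role = BLADERF_TRIGGER_ROLE_MASTER;

    status = bladerf_trigger_init(slave, BLADERF_CHANNEL_RX(0),
                                  BLADERF_TRIGGER_J51_1, &s_trig);
    if (status != 0) return status;
    s_trig.role = BLADERF_TRIGGER_ROLE_SLAVE;

    /* Arm the slave first so it is already waiting when the master fires */
    status = bladerf_trigger_arm(slave, &s_trig, true, 0, 0);
    if (status != 0) return status;
    status = bladerf_trigger_arm(master, &m_trig, true, 0, 0);
    if (status != 0) return status;

    /* ... start the RX streams on both devices here, then: ... */
    return bladerf_trigger_fire(master, &m_trig);
}
'''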
I'm using asynchronous mode and saving samples based on their metadata (timestamps). When no overflow occurs, there is no problem. When an overflow occurs, some samples are "lost", and that's okay; that's exactly why I want to use the timestamps. But here is the problem:
Let's say the master device overflows. The difference between the timestamp of the first buffer after the overflow and that of the buffer before it is, for example, 71744, while the normal difference in my setup is 8192, hence 71744 - 8192 = 63552 lost samples. But when I correlate the recorded MasterRX1 and SlaveRX1 channels starting from a sample where the master's overflow occurred, I get a shift of 61568 (see screenshot: File1 = MasterRX1, File2 = SlaveRX1). So the actual shift does not match the timestamp shift.
Link to image:
https://imgur.com/a/2tz01bi
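For reference, the gap-fill logic I described above looks roughly like this for a single channel. This is a simplified sketch: on_rx_buffer, next_ts, BUF_SAMPLES, and the plain fwrite output are placeholders for my actual recorder code, and I assume the timestamp has already been parsed out of the buffer metadata.
'''
#include <stdint.h>
#include <stdio.h>

#define BUF_SAMPLES 8192        /* samples per buffer in this setup */

static uint64_t next_ts = 0;    /* timestamp the next buffer should carry */

/* Called once per received buffer with its metadata timestamp already
 * extracted. On a timestamp gap, zero-fill so the file stays aligned. */
void on_rx_buffer(uint64_t ts, const int16_t *iq, FILE *out)
{
    if (next_ts != 0 && ts > next_ts) {
        uint64_t lost = ts - next_ts;       /* 63552 in the example above */
        const int16_t zero[2] = { 0, 0 };   /* one zeroed I/Q pair */
        for (uint64_t i = 0; i < lost; i++) {
            fwrite(zero, sizeof(zero), 1, out);
        }
    }
    fwrite(iq, 2 * sizeof(int16_t), BUF_SAMPLES, out);  /* I and Q per sample */
    next_ts = ts + BUF_SAMPLES;
}
'''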
I did the same overflow handling on a USRP 2955 (X310) with two TwinRX daughterboards (4 RX channels), and everything worked there.
Versions I use:
'''
bladeRF> version
bladeRF-cli version: 1.9.0-git-fe3304d7
libbladeRF version: 2.5.1-git-fe3304d7
Firmware version: 2.4.0-git-a3d5c55f
FPGA version: 0.15.3 (configured from SPI flash)
'''
Is there any chance of getting some help with this?
Is there another way to handle overflows? I know how to lower the chance of an overflow occurring, but I want to be sure the algorithm keeps running correctly even when one does.
Best regards