scancapecod wrote:
What bothers me is if there is an earlier version of firmware/FPGA that this did not occur with, and it is occurring now, are there any steps being taken to eliminate the problem? Is it even perceived by others (including Nuand folks) as a problem?
I agree that it's strange that a previous bundle didn't exhibit this behavior.
Sorry for the lack of response - I've been trying to reproduce this and have been asking people on IRC to take a look at this thread and report back on whether they're able to reproduce it. I've been using GQRX and don't think I'm seeing it, but am not confident that I'm fully reproducing your setup.
This weekend I'm looking to meet up with another dev and go over a few items, including this one; I'll try to bring up SDR-Radio on my machine and get a couple pairs of eyes looking at it closely.
I believe the following information will be helpful in understanding this. If anyone has time to take a look and report back, it'd be appreciated (and will likely speed things up):
- What firmware + FPGA + libbladeRF versions are not exhibiting this? What version of SDR-Radio is being used? (I assume for the new stuff, we're talking about FW 1.6.1 and FPGA v0.0.3.)
- How is the device configured when this is observed? As drmpeg noted, some information can be gleaned from the bladeRF-cli program:
- frequency ('print frequency')
- bandwidth ('print bandwidth')
- sample rate ('print samplerate')
- LMS register values ('peek lms 0 256')
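If it's easier, those can also be run non-interactively; a quick sketch, assuming the bladeRF-cli `-e` option for one-shot commands and that the device isn't currently held open by SDR-Radio:

```
bladeRF-cli -e "print frequency"
bladeRF-cli -e "print bandwidth"
bladeRF-cli -e "print samplerate"
bladeRF-cli -e "peek lms 0 256"
```

Pasting that output here would let us compare the register state against a known-good configuration.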
- Is SDR-Radio shipping with and using its own libbladeRF DLL, or does it use the one you've built/installed?
- Can this be reproduced outside of SDR-Radio? Can you reproduce this with GQRX? Just want to be thorough and rule out app-side issues...
- Is there any way to look at the raw IQ samples and look for anything strange? (After setting up the device in SDR-Radio, you could use bladeRF-cli to save off some raw samples to a CSV). A short capture may be very helpful.
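For the capture itself, something along these lines should work from an interactive bladeRF-cli session (a sketch from memory -- check `help rx` if `rx config` complains about the parameters; the filename and sample count are just placeholders). First use `set frequency` / `set samplerate` / `set bandwidth` to match whatever SDR-Radio was configured with, then:

```
bladeRF> rx config file=samples.csv format=csv n=8192
bladeRF> rx start
bladeRF> rx wait
```

Even a short CSV like that, attached here, would let us inspect the raw I/Q values directly.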
A while back, the FPGA began sign-extending the int16_t I/Q values to the full 16-bit range; previously, bits [15:12] were markers used to differentiate between I and Q values. This removed the need for host-side software to mask off those bits and sign-extend. I don't think there should be an issue if the FPGA is doing this while the host-side SW remains unchanged... it might just be doing unnecessary work. I'll look into this further and review the git logs.