I am using the V5 board at a fixed sampling frequency. With the `drsosc` app I have executed the time calibration at 5 GS/s (actually 5.12 GS/s). This is what my setup looks like in the app:
Now I want to replicate this using the C++ API (only the signal sampling, not the positive-width measurement shown). I am setting the sampling frequency to 5 GS/s, as I do in the `drsosc` app. Then I get the time information using the `DRSBoard::GetTime(unsigned int chipIndex, int channelIndex, int tc, float *time)` function (which I cannot find defined in either `DRS.h` or `DRS.cpp`, but it somehow works). How can I tell whether the times I get here are already corrected with the time calibration? If so, should I expect the time resolution to be < 3 ps? Are these 3 ps cumulative, so that in the end the contribution from the evaluation board is 3 ps × 5 GS/s × 100 ns ≈ 1.5 ns, where 100 ns is the time difference between my two pulses? (This does not seem to be the case, because then I would expect the jitter to be ~1.5 ns, whereas the "Pos Width" measurement shows a standard deviation of only ~0.1 ns.)
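For reference, this is roughly how I do the readout, following the pattern of the `drs_exam.cpp` example that ships with the evaluation software (the trigger settings below are just placeholders for my actual configuration):

```cpp
/* Minimal sketch of the acquisition, following the drs_exam.cpp example from
   the DRS4 evaluation software; trigger settings are placeholders. */
#include <cstdio>
#include "DRS.h"

int main()
{
   float time_array[2][1024];   // cell times in ns for inputs #1 and #2
   float wave_array[2][1024];   // amplitudes in mV for inputs #1 and #2

   DRS *drs = new DRS();
   if (drs->GetNumberOfBoards() == 0) {
      printf("No DRS4 evaluation board found\n");
      return 1;
   }

   DRSBoard *b = drs->GetBoard(0);
   b->Init();
   b->SetFrequency(5, true);         // 5 GS/s nominal (actual ~5.12 GS/s)
   b->SetTranspMode(1);              // transparent mode, needed for analog trigger
   b->SetInputRange(0);              // -0.5 V ... +0.5 V

   b->EnableTrigger(1, 0);           // hardware trigger (placeholder settings)
   b->SetTriggerSource(1 << 0);      // trigger on input #1
   b->SetTriggerLevel(0.05);         // 50 mV
   b->SetTriggerPolarity(false);     // positive edge
   b->SetTriggerDelayNs(0);

   b->StartDomino();                 // start acquisition
   while (b->IsBusy());              // wait for trigger
   b->TransferWaves(0, 8);           // read all waveforms from the board

   /* Inputs #1 and #2 map to DRS channels 0 and 2. The question is whether
      the times returned here already include the timing calibration. */
   b->GetTime(0, 0, b->GetTriggerCell(0), time_array[0]);
   b->GetWave(0, 0, wave_array[0]);
   b->GetTime(0, 2, b->GetTriggerCell(0), time_array[1]);
   b->GetWave(0, 2, wave_array[1]);

   printf("First two cell times on input #1: %.3f ns, %.3f ns\n",
          time_array[0][0], time_array[0][1]);

   delete drs;
   return 0;
}
```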
Why am I asking? I want to measure the jitter between two falling edges. This cannot easily be done with the `drsosc` app, I think, so I am acquiring the data and doing the analysis offline. I have done this measurement in the past with a LeCroy WaveRunner oscilloscope (20 GS/s, 4 GHz bandwidth), using the same offline code, and I have seen the jitter vary from ~5 ps to ~30 ps as I vary a voltage that I can control. When I compute the same time fluctuation from the data acquired with the V5 evaluation board, I get a value of ~100 ps that is independent of this voltage, which leads me to conclude that the limiting factor is the evaluation board itself. So now I am wondering whether I have reached its limit, or whether there is some setting that can still improve this result.
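In case it matters, the offline edge timing is conceptually just a linear interpolation at a fixed threshold on each falling edge, and the jitter is the standard deviation of the edge-to-edge time difference over many events. A rough sketch of what my analysis does (the function names and the threshold are only illustrative):

```cpp
/* Illustrative offline edge-timing sketch, not the exact analysis code. */
#include <cmath>
#include <cstddef>
#include <vector>

// Time (ns) at which the waveform crosses threshold_mV on a falling edge,
// linearly interpolated between the two neighbouring samples; NAN if no crossing.
double falling_edge_time(const std::vector<float>& t,
                         const std::vector<float>& v,
                         float threshold_mV)
{
   for (std::size_t i = 1; i < v.size(); ++i) {
      if (v[i-1] >= threshold_mV && v[i] < threshold_mV) {
         double frac = (v[i-1] - threshold_mV) / (v[i-1] - v[i]);
         return t[i-1] + frac * (t[i] - t[i-1]);
      }
   }
   return NAN;
}

// Jitter = standard deviation of (t_edge2 - t_edge1) collected over many events.
double jitter_std(const std::vector<double>& dt)
{
   if (dt.size() < 2)
      return 0.0;
   double mean = 0.0, var = 0.0;
   for (double x : dt) mean += x;
   mean /= dt.size();
   for (double x : dt) var += (x - mean) * (x - mean);
   return std::sqrt(var / (dt.size() - 1));
}
```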