DRSBoard::GetTime is declared in DRS.h line 720.
If you want to measure timing down to the ps level, you need some basic knowledge, especially about signal-to-noise ratio and rise time. This cannot be taught in a few sentences; it needs a full lecture. As a starting point please read this paper:
https://arxiv.org/abs/1405.4975
then you will understand why you measure different resolutions with different peak heights (and different rise times).
Concerning the DRS4 measurement, please be aware that the sampling points are not equidistant, i.e. not every 200 ps at 5 GSPS. They vary significantly from bin to bin, from 50 ps to 300 ps. So you always have to analyse the X/Y points as an array, not just the Y values assuming a deltaX of 200 ps. Probably you forgot that. Then you have to interpolate between bins to find the crossing of your threshold. Linear interpolation is already good, spline interpolation even better. Deep inside Measurement.cpp of the drsosc program you find in the source code:
t1 = (thr*(x1[i]-x1[i-1])+x1[i-1]*y1[i]-x1[i]*y1[i-1])/(y1[i]-y1[i-1]);
which is the linear interpolation (thr is the threshold). You have to use (and understand!) similar code.
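As a minimal sketch (the function name and the negative "no crossing" return value are just illustrative), assuming the time and voltage arrays have already been filled per event with DRSBoard::GetTime() and DRSBoard::GetWave(), a falling-edge threshold crossing could be located like this:

```cpp
// Find the time at which the waveform crosses 'thr' on a falling edge,
// using linear interpolation between the (non-equidistant) sample points.
// time[] and wave[] are the per-event arrays filled by DRSBoard::GetTime()
// and DRSBoard::GetWave(); n is the number of samples (1024 for the DRS4).
// Returns a negative value if no crossing is found.
double FindFallingCrossing(const float *time, const float *wave, int n, double thr)
{
   for (int i = 1; i < n; i++) {
      // falling edge: previous sample at or above threshold, current one below
      if (wave[i-1] >= thr && wave[i] < thr) {
         // same linear interpolation as in Measurement.cpp:
         // t1 = (thr*(x1[i]-x1[i-1]) + x1[i-1]*y1[i] - x1[i]*y1[i-1]) / (y1[i]-y1[i-1])
         return (thr * (time[i] - time[i-1])
                 + time[i-1] * wave[i]
                 - time[i]   * wave[i-1]) / (wave[i] - wave[i-1]);
      }
   }
   return -1.0;  // threshold never crossed
}
```

Doing this on the calibrated time array (rather than assuming a fixed 200 ps bin width) is what makes the timing calibration effective in the offline analysis.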
Best,
Stefan
Matias Senger wrote:
I am using the V5 board at a fixed sampling frequency. With the `drsosc` app I have executed the time calibration at 5 GS/s (actually 5.12 GS/s). This is how my setup looks in the app:
Now I want to replicate this using the C++ API (not the positive width measurement shown, only the signal sampling). I am setting the sampling frequency to 5 GS/s, as I do in the `drsosc` app. Then I get the time information using the `DRSBoard::GetTime(unsigned int chipIndex, int channelIndex, int tc, float *time)` function (which I don't find defined in either `DRS.h` or `DRS.cpp`, but somehow it works). How can I know whether the times that I get here are being corrected with the time calibration? If so, should I expect the time resolution to be < 3 ps? Are these 3 ps cumulative, such that in the end I have a contribution from the evaluation board of 3 ps × 5 GS/s × 100 ns, where 100 ns is the time difference between my two pulses? (This does not seem to be the case, because then I would expect the jitter to be ~1 ns, and we see that the "Pos Width" measurement has a std of only ~0.1 ns.)
Why am I asking? I want to measure the jitter between the two falling edges. This cannot easily be done with the `drsosc` app, I think, so I am acquiring the data and doing the analysis offline. I have done this measurement in the past using a LeCroy WaveRunner oscilloscope with 20 GS/s and 4 GHz bandwidth (offline, same code) and I have seen the jitter vary from ~5 ps to ~30 ps as I vary a voltage that I can control. If I now calculate this time fluctuation using the data acquired with the V5 evaluation board, I get a value of ~100 ps that is independent of this voltage, which leads me to conclude that the limiting factor is the evaluation board itself. So now I am wondering whether I have reached its limit, or whether there is some setting that can still improve this result.
Thanks!