Sorry, got the wrong link. Here is the right one: https://www.dropbox.com/sh/clqo7ekr0ysbrip/AACoWJzrQAbf3WiBJHG89bGGa?dl=0
If you untar the archive, you will find a "firmware" subdirectory with all VHDL code.
I checked the link you provided, but it seems that the link doesn't exist. Please send me a valid one.
You can find each software version at the usual download location at
The one you need is probably drs-2.1.3.tar.gz, which was the last version for the 2.0 board; that board is now more than 10 years old.
Hi, I am working on an old DRS4 board, version "2.0", with firmware revision "13191". I was unable to find the source files ("VHDL source code") for this specific firmware. Please help me find them, or send me what I need.
What are the default register values after the DRS4 is reset?
Look at the DRS4 data sheet, Figure 12. You see there the rising SRCLK pulse which outputs the next analog value. You also see tSAMP, which describes the sampling point (the strobe or clock sent to your ADC). The value of tSAMP must be chosen such that the value is sampled at the point where it flattens out, just 2-3 ns BEFORE the next analog sample is clocked out, as written in the text. So you have to phase-shift the clock going to SRCLK and the one going to your ADC against each other. This needs adjustment at the ns level, so you need a PLL with fine-grained taps, so you can shift the clock in fractions of a ns. What you see now is that you sample at the BEGINNING of a new value being output by the chip. Please also note that most ADCs have an internal delay of their clock (usually called 'aperture delay') which has to be taken into account. So if your SRCLK and your ADC clock arrive at the same time (not phase-shifted), the ADC's internal aperture delay can cause it to sample the analog signal at the BEGINNING of the new value.
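To make the required shift concrete, here is a minimal sketch of the arithmetic; the SRCLK frequency, margin and aperture delay below are made-up example numbers, not data-sheet values:

```cpp
#include <cstdio>

int main()
{
   // All numbers here are illustrative assumptions; take the real aperture
   // delay from your ADC data sheet and the 2-3 ns margin from the DRS4
   // data sheet text around Figure 12.
   const double f_srclk_mhz = 16.0;                  // SRCLK frequency
   const double period_ns   = 1000.0 / f_srclk_mhz;  // 62.5 ns per sample
   const double margin_ns   = 2.5;                   // sample 2-3 ns before next edge
   const double aperture_ns = 1.0;                   // ADC internal aperture delay

   // Delay the ADC clock relative to SRCLK so that the effective sampling
   // point lands just before the next analog value is clocked out:
   const double phase_shift_ns = period_ns - margin_ns - aperture_ns;

   printf("shift the ADC clock by %.1f ns relative to SRCLK\n", phase_shift_ns);
   return 0;
}
```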
Hope this is clearer now.
The firmware from the website always reads 1024 bins. You have to modify it to stop before that, e.g. to read only 128 samples or so. Compiling under macOS should work, since I do it myself.
We are trying to increase the event rate of the DRS4. We looked into the ROI but couldn’t figure out how to run in ROI mode. We are wondering if there is pre-existing firmware for this? We also tried to download and build the software from source on MacOS 12.4 but we were not successful. Can you kindly help us with these?
Hello, I now have periodic spikes in the CH0 and CH1 outputs. How can I eliminate these spikes? I'm sorry, I didn't understand your elimination method; please explain it in detail. Thank you very much.
Dear DRS4 team,
I'm trying to troubleshoot some odd spike behavior. If I run the ADC and SRCLK at 16 MHz (the behavior is also seen at 33 MHz), we get very noisy data (post-calibration) with periodic spikes, as shown in the plot below.
[plot: noisy post-calibration data with periodic spikes]
After I modify some clock settings, things seem to improve dramatically, and the spike behavior changes:
The artifacts seem related to the clock configuration, but I am somewhat in the dark about what might be happening from a first-principles point of view. Any tips?
Thank you very much for your help!
A3-A0 = 1001 should be all you need to activate OUT0-OUT7. It works in our designs. Maybe double-check the address lines with an oscilloscope.
Hello, I am Lynsey. I have now set A3-A0 to 1001 in ROI mode, but only OUT0 has output; the other seven channels (OUT1-OUT7) do not output corresponding waveforms.
In ROI mode, can OUT0-OUT7 output sampled waveforms at the same time?
Thank you very much.
Thanks for your help. If I look into the app, the behavior of the 4 channels is exactly as you show:
Now, when I sample with my code, something strange happens: two of the channels are fine and the other two are wrong:
This is a surprise to me because I acquire the 4 channels in the same way within a `for` loop. To get the time data I use `DRSBoard::GetTime` with the `tcalibrated` argument set to `true`. Is there any additional step needed to use the calibration?
Looks like you have some time calibration, but I'm not sure it's the correct one. Sample the sine wave from the calibration clock, once with and once without the timing calibration; then you will see whether all points lie on a smooth line. Left: without timing calibration; right: with proper timing calibration:
If your points do not lie on a smooth line, you might have a problem such as the wrong channel for calibration, an off-by-one error in the index of the time array, or some other software bug. Measure the same signal with the DRSOsc application and then with your code. If the results differ, you have a software problem on your side.
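If it helps, here is a minimal sketch of that check, following the acquisition pattern of drs_exam.cpp; the extra boolean argument of `GetTime` is the `tcalibrated` flag you mentioned, but please verify the exact signature against your copy of `DRS.h`:

```cpp
#include "DRS.h"

int main()
{
   DRS *drs = new DRS();
   if (drs->GetNumberOfBoards() == 0)
      return 1;

   DRSBoard *b = drs->GetBoard(0);
   b->Init();
   b->SetFrequency(5, true);            // 5 GS/s, as in your setup
   b->SetTranspMode(1);

   float wave[1024], t_cal[1024], t_raw[1024];

   // Assumes the calibration-clock sine wave is fed into CH1.
   b->StartDomino();
   b->SoftTrigger();                    // no hardware trigger needed here
   while (b->IsBusy());
   b->TransferWaves(0, 8);

   int tc = b->GetTriggerCell(0);
   b->GetWave(0, 0, wave);              // CH1 amplitudes in mV
   b->GetTime(0, 0, tc, t_cal, true);   // with timing calibration
   b->GetTime(0, 0, tc, t_raw, false);  // without timing calibration

   // Plot (t_cal, wave) and (t_raw, wave): only the calibrated points
   // should lie on a smooth sine.
   delete drs;
   return 0;
}
```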
For the time of each bin I am using the values returned by `GetTime`, without any assumptions on my side. I had not noticed before that the sampling time is not uniform, but I see that this is indeed happening. This is an example plot from one of the signals I processed:
The bin at 65.5 ns and the next one are closer together than their neighbors. This seems to indicate that the time calibration is being taken into account when I acquire the time bins using `GetTime`. Is this correct?
To obtain the final time resolution I am using the constant fraction discriminator method, and the signals are linearly interpolated to obtain the time at each percentage value, as seen in the plot. I have already measured time resolutions in the 5-100 ps range with exactly the same setup, but using a LeCroy oscilloscope (which I use just for data acquisition) and my own software for offline analysis, as shown in the plot above. What I am trying to do now is basically replace the LeCroy with the DRS4 Evaluation Board, so I can use the oscilloscope in a different setup.
DRSBoard::GetTime is declared in DRS.h line 720.
If you want to measure timing down to the ps level, you need some basic knowledge, especially about signal-to-noise ratio and rise time. This cannot be taught in a few sentences; it needs a full lecture. As a starting point, please read this paper:
Then you will understand why you measure different resolutions with different peak heights (and different rise times).
Concerning the DRS4 measurement, please be aware that the sampling points are not equidistant, i.e. not exactly every 200 ps at 5 GSPS. They vary significantly bin by bin, from 50 ps to 300 ps. So you always have to analyse the X/Y points as an array, not just the Y values assuming a deltaX of 200 ps. Probably you forgot that. Then you have to interpolate between bins to find the crossing of your threshold. Linear interpolation is already good, spline interpolation even better. Deep inside Measurement.cpp of the drsosc program you find in the source code:
t1 = (thr*(x1[i]-x1[i-1])+x1[i-1]*y1[i]-x1[i]*y1[i-1])/(y1[i]-y1[i-1]);
which is the linear interpolation (thr is the threshold). You have to use (and understand!) similar code.
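As an illustration (my own sketch, not code from Measurement.cpp), a threshold-crossing search over the X/Y arrays from `GetTime`/`GetWave` would look like this:

```cpp
#include <cstdio>

// Find the first upward crossing of thr, using the non-equidistant times
// x[] and amplitudes y[], with the same linear interpolation as above.
double CrossingTime(const float *x, const float *y, int n, double thr)
{
   for (int i = 1; i < n; i++)
      if (y[i-1] < thr && y[i] >= thr)
         return (thr*(x[i]-x[i-1]) + x[i-1]*y[i] - x[i]*y[i-1]) / (y[i]-y[i-1]);
   return -1;   // no crossing found
}

int main()
{
   float x[] = {0.0f, 0.21f, 0.35f, 0.62f};   // bin times in ns (not equidistant!)
   float y[] = {0.0f, 10.0f, 40.0f, 90.0f};   // amplitudes in mV
   printf("crossing at %.3f ns\n", CrossingTime(x, y, 4, 25.0));   // -> 0.280 ns
   return 0;
}
```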
I am using the V5 board at a fixed sampling frequency. With the `drsosc` app I have executed the time calibration at 5 GS/s (actually 5.12 GS/s). This is how my setup looks in the app:
Now I want to replicate this using the C++ API (not the positive-width measurement shown, just the signal sampling). I am setting the sampling frequency to 5 GS/s, as I do in the `drsosc` app. Then I get the time information using the `DRSBoard::GetTime(unsigned int chipIndex, int channelIndex, int tc, float *time)` function (which I don't find defined either in `DRS.h` or `DRS.cpp`, but somehow it works). How can I know whether the times I get here are being corrected with the time calibration? If so, should I expect the time resolution to be < 3 ps? Are these 3 ps cumulative, such that in the end I have a contribution from the evaluation board of 3 ps × 5 GS/s × 100 ns, where 100 ns is the time difference between my two pulses? (This does not seem to be the case, because then I would expect the jitter to be ~1 ns, and we see that the "Pos Width" measurement has a std of ~0.1 ns.)
Why am I asking? I want to measure the jitter between the two falling edges. This cannot easily be done with the `drsosc` app, I think, so I am acquiring the data and doing the analysis offline. I have done this measurement in the past using a LeCroy WaveRunner oscilloscope with 20 GS/s and 4 GHz bandwidth (offline, same code), and I have seen it vary from ~5 ps to ~30 ps as I vary a voltage that I can control. If I now calculate this time fluctuation using the data acquired with the V5 evaluation board, I get a value of ~100 ps that is independent of this voltage, which leads me to conclude that the limiting factor is the evaluation board itself. So now I am wondering whether I have reached its limit, or whether there is some setting that can still improve this result.
Sorry for the spam. Just wanted to let you know that I was able to solve the problem: it was all due to a `float` being cast to an `int` in the Python binding. Now it works like a charm.
I have seen in the app that the trigger source buttons do something different from the "or" and "transparent trigger" buttons:
If I enable the setup on the right, i.e. OR on CH4 and "Enable Transparent Trigger", the app stops triggering. This is the configuration that seems to be applied in the `drs_exam.cpp` code, if I am not mistaken. For some reason that code still triggers (I have modified it to trigger on CH4 instead of CH1, and adjusted the trigger level, polarity, etc.).
What does the button on the left actually do? I mean the circular checkbox with the "4". This is the trigger configuration I want to reproduce in the C++ code.
I also don't know what the function `DRSBoard::EnableTrigger` does; what is the meaning of `flag1` and `flag2`? My code contains a call to this function which I copied from the example.
Unfortunately I have no idea what the problem could be. In principle the trigger should be independent of the sampling speed, since the trigger is only made with a discriminator and a flip-flop. The hardware must be OK, since you see the trigger with the oscilloscope app. All you can do is go through the source code of the oscilloscope app, especially drsosc/Osci.cpp::ScanBoards(), SetTriggerLevel(), SetTriggerPolarity() etc., to make sure you issue the same calls as the oscilloscope app.
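For comparison, the trigger-related calls in drs_exam.cpp for the V4/V5 boards look like the following; the comments are mine, and the exact bit layout of `SetTriggerSource()` (especially for the OR combinations) is an assumption you should cross-check in DRS.cpp:

```cpp
b->EnableTrigger(1, 0);        // enable hardware trigger
b->SetTriggerSource(1<<3);     // bit 0 = CH1 ... bit 3 = CH4 (here: CH4)
b->SetTriggerLevel(0.05);      // trigger level in volts
b->SetTriggerPolarity(false);  // false = positive edge
b->SetTriggerDelayNs(0);       // no trigger delay
```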
I tried your measurement with the DRSOscilloscope app (see below), and I measure a constant difference of 10 Hz over the whole range of 100 Hz to 3 kHz.
So I don't know what's wrong on your side. Did you try with the oscilloscope app as well? Have you checked your pulse generator? The evaluation board's time reference is a quartz with an accuracy of 10^-5, so there is no way it can produce the difference you see.
Thank you very much for your explanation.
I would like to show you an example pulse (the black line is the threshold).
Still, the pulse generator rate and the DRS4 rate differ by more than 10 Hz.
The scalers are read out 10x per second, so they have an accuracy of 10 Hz. I tried a 50 Hz pulser and measured 40 Hz; I tried 52 Hz and measured 50 Hz. This is about what you can expect.
The scaler rate is measured after the discriminator of the trigger, so the trigger level also affects the scaler reading. If you have a 100 mV pulse and your threshold is 200 mV, your scaler rate drops to zero. This can best be seen with DRSOsc while sliding the trigger level. If you have a 50 Hz pulse with narrow (< 1 µs) pulses, things are fine. But if you use a 50 Hz square wave, you get distorted signals due to the AC coupling, which can also be confusing. See for example here: https://www.daqarta.com/dw_gg0o.htm
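If the 10 Hz granularity is a problem, you can average several readings yourself; a minimal sketch, assuming a configured `DRSBoard *b` and that `GetScaler()` returns the current rate in Hz as in your post:

```cpp
#include <unistd.h>

// Average n scaler readings, spaced wider than the 100 ms readout period,
// to smooth out the ~10 Hz granularity of a single reading.
double AverageRate(DRSBoard *b, int channel, int n)
{
   double sum = 0;
   for (int i = 0; i < n; i++) {
      sum += b->GetScaler(channel);   // channel 4 = trigger counter
      usleep(200000);                 // 200 ms between readings
   }
   return sum / n;
}
```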
Hi. I'm trying to evaluate livetime of the evaluation board with the hardware scaler. I'm facing a strange issue.
I read out the rate with the function `DRS->GetScaler(int channel)`.
I guess that channels 0-3 give the rate for each input channel, and channel 4 gives the trigger counter.
I recorded 1,000 pulses generated by a pulse generator at 50 Hz.
The scaler values are ~ 39.83, not 50.
The timestamp difference between the initial event and the final event is 19.98 seconds.
1000/19.98 ~ 50; thus, the evaluation board recorded the pulses with sufficient livetime.
Can we trust the scaler value for the livetime evaluation?