Guideline to Measurements of SVX3 Chip Analog Performance
---------------------------------------------------------

1) Expectations

In papers describing readout electronics it is standard to have a plot showing ENC (equivalent noise charge) versus load capacitance. The plot is usually a straight line (I have not found any paper where this is not the case), but the linearity of a noise vs. input capacitance plot is not automatic. To get a straight line one must have an amplifier whose bandwidth is not significantly affected by the load capacitance. Usually amplifiers are followed by shapers with a long shaping time (compared to the amplifier rise-times), and in that case it is the bandwidth of the shaper that matters, which by construction is not affected by the load capacitance.

In our case we do not have a shaper, and the load very much affects the preamp bandwidth. However, we also have a bandwidth "setting" that permits one to slow down the preamp independently of the load capacitance. We can therefore artificially slow down the preamp for small loads, in order to preserve an approximately constant bandwidth vs. load capacitance. This should result in a linear ENC vs. C plot. For completeness, however, we should also make a plot of MEASURED bandwidth vs. load capacitance.

Tom Zimmerman has already made such measurements for the HP and Honeywell prototypes of the SVX3B, as well as for the SVX3C chips. These are his results for SVX3C (105ns integration time):

  Ext. Capacitance   BW setting   0-90% Rise-time   ENC (e)
  10pF               4            70ns              1070
  20pF               2            70ns              1730
  30pF               1            65ns              2460

The points lie on a straight line because the bandwidth (inverse of rise-time) remains constant as the load capacitance is changed (the load capacitance is the external capacitance plus internal and stray capacitances, which total about 3pF). Note that there is no point for 0pF external capacitance. We would expect that point to have 400e ENC, but in fact one cannot slow the preamp enough without external capacitance to see that low a noise.

It should also be noted that these were analog measurements done by looking directly at the voltage at the pipeline output, whereas most people will be measuring noise using the BE chip ADC. The ADC can potentially add a small contribution (independent of load) in quadrature that will increase low ENC values.

2) Integration time.

Since our integration time will be imposed by the accelerator there is not much sense in arguing about it. However, it may occasionally be interesting to make plots of noise and gain as a function of integration time, all other things constant; this kind of plot is also a useful diagnostic. For actual numbers that we quote, though, I suggest we stick to two integration times: 100ns (representative of 132ns bunch spacing) and 360ns (representative of 396ns spacing). The integration time is the amount of time the FE clock is low.
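As a quick cross-check of the linearity claim in (1), Tom's SVX3C numbers can be fit to a straight line and extrapolated to zero external load. A minimal sketch in Python, using only the table values above:

    import numpy as np

    # Tom Zimmerman's SVX3C points (105ns integration time)
    c_ext = np.array([10.0, 20.0, 30.0])     # external capacitance (pF)
    enc   = np.array([1070., 1730., 2460.])  # measured ENC (electrons)

    # Straight-line fit: ENC = slope * C_ext + intercept
    slope, intercept = np.polyfit(c_ext, enc, 1)
    print(f"noise slope          : {slope:.1f} e/pF")
    print(f"extrapolated 0pF ENC : {intercept:.0f} e")

The fit gives a slope of about 70 e/pF and an extrapolated ENC of roughly 360e at zero external load, in the neighborhood of the ~400e expectation quoted above (which, as noted, cannot be measured directly).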
3) Bandwidth setting.

First one must note that the bandwidth setting is not the same as the preamp bandwidth. The bandwidth setting simply selects discrete capacitors to be added to the dominant pole of the preamp circuit. The physical preamp bandwidth is a function of this capacitance as well as the load capacitance, the input transistor current (AVDD2, which is adjustable via the ISET1 resistor and some shift register bits), the operating voltage, and possibly other variables in second order.

Tom's suggested setting for SVX3C and D is to use a single resistor for all the ISET pins in parallel, with a value of 14K to ground, so let's use this value for numbers that we quote. As for the SR bits, Tom suggests input transistor bias current = 101 and cascode current = 0; that will give a current of 200 microamps in each input transistor. For the supply voltages use 5V for both AVDD and AVDD2. Varying these values is nevertheless always interesting as a diagnostic.

For each measurement, one must choose a bandwidth setting AFTER the integration time and all other variables in the above paragraph have been fixed. There are many possible criteria for the BW selection: one could try to maximize signal to noise, one could try to minimize the product of sparse noise occupancy times sparse signal inefficiency, etc. For the purpose of performance bench testing, however, we would like a choice that is unambiguous, easily reproducible, does not require a "signal", and leads to a straight-line ENC vs. C plot. I suggest using the HIGHEST BW setting that results in a 0-90% risetime SMALLER than or EQUAL to 0.8 integration times. Thus for a 360ns integration time the 0-90% risetime should be 288ns or less. Before making a noise measurement, one must therefore measure the risetime. The BW setting used and the measured risetime should always be quoted along with an ENC value.

An alternative to making a bandwidth measurement, for a system that does not have that capability, is to measure the gain (slope of the ADC vs. Vcal plot) at each bandwidth setting (BW=0-7) for the loaded channel of interest. The gain should be constant for low BW values and start to fall off at higher values. Pick the highest BW value that is on the plateau part of the curve (to within 5%), as in the sketch below. If there is any question as to whether a point is on or off the plateau, pick the next lower BW value.
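The plateau criterion translates directly into code. A minimal sketch, assuming the gain has already been measured at each of the eight BW settings (the gain values below are hypothetical):

    import numpy as np

    def pick_bw_setting(gains, tolerance=0.05):
        """Highest BW setting whose gain is within `tolerance` (5% per
        the criterion above) of the low-BW plateau.  The plateau is
        estimated here from the three lowest BW settings, which is an
        assumption about where the curve is flat."""
        gains = np.asarray(gains, dtype=float)
        plateau = gains[:3].mean()
        on_plateau = np.nonzero(gains >= (1.0 - tolerance) * plateau)[0]
        return int(on_plateau.max())

    # Hypothetical gains (ADC counts per volt) for BW = 0..7
    gains = [152.0, 151.0, 150.0, 149.0, 147.0, 139.0, 120.0, 95.0]
    print("chosen BW setting:", pick_bw_setting(gains))   # -> 4

If a point is borderline, the rule above applies: drop to the next lower BW value.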
4) Using Internal Calibration Injection

In order to measure gain or risetime it is necessary to inject charge into the preamp. The internal calibration feature of SVX3 is convenient, and in some cases the only option available. However, one should be aware that there are large systematic effects associated with using this feature.

First, one should put a 10K resistor in series with VCAL before the chip, and a capacitor to ground (.01-.1uF) near the chip. This will filter out common mode noise, and the cap will also keep the voltage constant during injection, thus giving a sharp charge pulse as desired. When the time comes to measure noise, not only should the calibration injection mask for the channel in question be turned OFF, but there should also NOT be a transition of the CAL_INJECT (a.k.a. CAL/SR) signal in or near the pipeline bucket being selected by the L1A trigger in question.

Actually using the internal calibration injection is tricky because it causes effective charge injection in two independent ways: (1) by actually injecting charge through a small coupling capacitor, and (2) by giving the front end a kick (detailed mechanism unknown) any time the CAL_INJECT signal switches. A prescription for measuring gain in a way insensitive to (2) is as follows (a code sketch is given at the end of this section):

* If only (1) existed, to measure gain one would set the calibration voltage to some value, record some events (say 10), then change the calibration voltage by some step (say 10mV), and repeat. A fit to the mean ADC of each set of 10 events vs. calibration voltage then gives the gain.

* Because of (2), instead of taking just 10 events at each voltage point, take two sets of 10 events: one set with the calibration polarity and preamp polarity initialization bits as originally set, and another set with both bits flipped.

* Now use the average ADC from all 20 events taken at each voltage point in the ADC vs. voltage plot.

This works because the polarity of mechanism (2) is given by the sense of the transition of the CAL_INJECT signal, so by switching the preamp polarity it will appear inverted, whereas mechanism (1) will have been inverted twice (i.e. not inverted), because the calibration polarity bit affects mechanism (1) only.

Still, this prescription is not fool-proof. One should also try to have a clean CAL_INJECT signal in order to minimize the effect of mechanism (2) in the first place. The signal can be cleaned up, for example, by using short cables, or by adding a termination near the chip (as long as the return current does not flow in the ground plane under the chip), etc.

The same prescription can be used to measure risetime. That is, whenever one would normally record one set of events in the absence of (2), record two sets with opposite preamp and calibration polarity bits.

It is possible to measure bandwidth using the back end ADC to measure the preamp output (rather than looking at the output directly on a scope). To do this one needs to "walk" the edge of the CAL_INJECT signal through the integration bucket being selected by the L1A trigger, and find the average ADC at each step. Graphically, stepping the CAL_INJECT pulse looks like:

1) FE clock: ____XX_______________XX____XX____XX____XX____XX____
   CAL_INJ:  ____________________XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
   L1A:      XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXXXXXX

2) FE clock: ____XX_______________XX____XX____XX____XX____XX____
   CAL_INJ:  ___________________XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
   L1A:      XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXXXXXX

3) FE clock: ____XX_______________XX____XX____XX____XX____XX____
   CAL_INJ:  __________________XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
   L1A:      XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXXXXXX

4) FE clock: ____XX_______________XX____XX____XX____XX____XX____
   CAL_INJ:  _________________XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
   L1A:      XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXXXXXX

5) etc.

where "X" is the logic high state, "_" is logic low, and the pipeline depth in this example should be set to 4.
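The polarity-flip prescription amounts to averaging the two sets at each voltage point before the straight-line fit. A minimal sketch, using toy numbers (the gain of 150 ADC/V, the pedestal, and the kick size below are made up for illustration):

    import numpy as np

    def gain_from_polarity_pairs(vcal_volts, mean_adc_normal, mean_adc_flipped):
        """Gain (ADC counts/volt) insensitive to the CAL_INJECT kick,
        mechanism (2).  At each Vcal point one set of events is taken
        with the polarity bits as set up and one with both the
        calibration and preamp polarity bits flipped; the kick cancels
        in the average while the injected charge, mechanism (1), does not."""
        avg = 0.5 * (np.asarray(mean_adc_normal) + np.asarray(mean_adc_flipped))
        gain, offset = np.polyfit(vcal_volts, avg, 1)
        return gain

    # Toy data: 10mV steps, true gain 150 ADC/V, pedestal 20 counts,
    # and a +2-count kick that changes sign when the bits are flipped.
    v = np.arange(0.0, 0.101, 0.010)
    normal  = 150.0 * v + 20.0 + 2.0
    flipped = 150.0 * v + 20.0 - 2.0
    print(f"gain = {gain_from_polarity_pairs(v, normal, flipped):.1f} ADC/V")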
5) External Load Capacitance.

SVX3 chip versions B and earlier have a PMOS input transistor that is, as we know, very sensitive to power supply noise. On these chips exactly how a capacitor is attached to the input can make a big difference in behavior. At LBL we ended up always connecting surface mount capacitors between gold pads right at the chip input (wirebonded directly from chip to pad) and a metal bar connected to the AVDD2 tantalum capacitor, also right at the chip. Versions C and newer have an NMOS input transistor, and it seems OK to connect load capacitors to ground with traces or leads of order 1cm. It is OK to wirebond the chip pad to a short trace and then put the cap TO GROUND at the other end of this trace.

6) Presentation of Results.

In order to compare different noise measurements one needs to specify the conditions under which they were obtained. We would like a minimum set of things to keep track of that is at the same time "fool-proof", i.e. that does not have the potential to mean different things for different setups. Initialization bit settings fail the latter requirement. The ideal parameter is the measured rise-time of the preamp (as in Tom's results). Given that, almost everything else (including the BW setting) is superfluous; even the integration time will not matter as long as it is longer than the rise-time. An additional parameter that Tom specified is the current in the input transistor, which affects both the bandwidth and the intrinsic noise itself. But it enters the noise only as a square root, so as long as we stick to the 14K ISET resistor and the 101 initialization bits it will be close enough to the same for everybody.

Unfortunately many setups will not be able to make an accurate rise-time measurement. However, with the gain procedure outlined in (3) one should end up with roughly the same rise-time for every external load, and the way to verify this, if one did not measure the risetime, is to quote the measured gain for each load value (which should stay roughly constant). When using the ADC it will also be useful to quote the noise as an RMS in ADC counts. This gives an indication of the possible systematic from binning, and together with the gain it automatically tells everyone what value of the internal calibration capacitor was assumed (the conversion from ADC rms and gain to ENC is sketched after section 7). Thus, results:

  External cap.   BW bits   Risetime*   ADC rms   Gain (ADC/Volt)   ENC (e)
  c1              b1        r1 ns       s1        g1                n1
  c2              b2        r2 ns       s2        g2                n2
  ...

  * if measured.

And of course the integration time used (100ns or 360ns) should be given.

7) Unloaded channels

It is often important to look at the performance of unloaded channels, for example when testing hybrids before they are put on ladders. In this case one must be careful because the preamp will be very fast, and at low BW settings the "rise-time curve" may have a significant overshoot. This will lead to an increase in the ADC rms and strange behavior of the gain at low BW settings. On the other hand, if one blindly chooses the maximum BW setting, the expected noise is so low that the ADC binning will probably be inadequate, and noise sources other than the preamp input transistor may be important or even dominate the measurement.

For this special case of unloaded channels one therefore needs some kind of trick. A common thing to do so far has been to choose the LOWEST BW setting that is on a flat part of the gain vs. BW curve (this curve may look strange near BW=0). In this case the noise will be significantly larger than the 400e one gets by extrapolating from loaded channels, and slightly less than the 800e simulated noise of an unloaded channel with BW=0 (in the simulation the preamp is perfect and has no overshoot).

Another alternative would be to significantly reduce the current in the input transistor. This will slow down the risetime, and it will also increase the intrinsic noise. Thus one would be able to keep the risetime the same as in the case of loaded channels, but have a noise that is not too small to measure. This noise would then be scaled by the square root of the ratio of before/after transistor currents to get to the expected 400e at zero external load. This method has not been explored much at this time, and it has the disadvantage of requiring one to change the shift register bits controlling the transistor current, which we said before ought to be fixed and left alone.
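For the results table in (6), the ENC column follows from the ADC rms and the gain once a value is assumed for the internal calibration capacitor. A minimal sketch of the conversion; note that the 25fF capacitor value used here is a placeholder assumption, not a number taken from this note:

    E_CHARGE = 1.602e-19   # electron charge in coulombs

    def enc_electrons(adc_rms, gain_adc_per_volt, c_cal=25e-15):
        """ENC in electrons from the measured ADC rms and gain.
        adc_rms / gain is the equivalent Vcal step in volts; the
        assumed calibration capacitor c_cal (farads) converts that
        voltage to an injected charge.  c_cal = 25fF is a placeholder."""
        noise_volts = adc_rms / gain_adc_per_volt
        return noise_volts * c_cal / E_CHARGE

    # Hypothetical table row: rms = 1.6 counts, gain = 150 ADC/V
    print(f"ENC = {enc_electrons(1.6, 150.0):.0f} e")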
8) Robustness.

To get the right answer the chip and setup need to be working together properly. One way to investigate this is to verify that results do not change with such things as slight changes in power supply voltage, and that they are repeatable from one day to the next. There should also be low common mode noise (significantly lower than the noise one is trying to measure); a large common mode noise is an indication that something is not right. The differential noise (dnoise) is useful in evaluating the level of common mode noise.

9) Dnoise.

Dnoise stands for differential noise: the rms/sqrt(2) of the ADC of channel i minus the ADC of channel i+1 (see the sketch at the end of this note). While dnoise is a useful diagnostic to gauge the amount of system noise present, it is not appropriate as a measure of performance. The reason is that in the experiment the important quantity is the total noise that the signal must exceed, and it is possible for adjacent channels to have some correlation in their noise (power supplies, ground, and the ADC ramp are all shared) that is not due to external common mode noise, which may or may not be there in the detector. Also, unloaded channels that overshoot may have extra coupling, and thus larger noise but lower dnoise near BW=0.

10) Simulations.

In principle simulations should tell us what noise to expect, and simulations to that effect have been done. However, one should note that simulations are notoriously bad at accurately predicting bandwidths. This will be especially true for Honeywell chips. Typically, then, some bandwidth-limiting component is added to the simulation and tuned according to some criterion, for example to make the simulation normalization agree with data. As an example, CDF note 3618 presents a simulation that applies to SVX2 and SVX3B and compares it with some data. Note that in this simulation the BW settings range from 0-63 and (except for 0) do not correspond to our present 0-7 settings. The numbers given for unloaded channels are for BW=0.
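Finally, the dnoise of (9) in code form. A minimal sketch, assuming an events-by-channels array of ADC values:

    import numpy as np

    def dnoise(adc):
        """Differential noise: rms/sqrt(2) of the event-by-event
        difference between channel i and channel i+1.
        `adc` is an (n_events, n_channels) array of ADC values."""
        diff = adc[:, :-1] - adc[:, 1:]
        return diff.std(axis=0) / np.sqrt(2.0)

    # Toy example: common mode noise cancels in dnoise.
    rng = np.random.default_rng(0)
    common = rng.normal(0.0, 3.0, size=(1000, 1))   # shared by all channels
    single = rng.normal(0.0, 1.0, size=(1000, 8))   # independent per channel
    adc = 20.0 + common + single
    print("per-channel rms:", adc.std(axis=0).round(2))  # ~3.2 counts
    print("dnoise:        ", dnoise(adc).round(2))       # ~1.0 counts

A per-channel rms much larger than the dnoise, as in this toy example, is the signature of common mode noise.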