Electronics-Lab.com Community

Posts posted by Cabwood

  1. An inductor, such as a relay coil, opposes changes in current. That means if an inductor is conducting one amp, and the source of that current is suddenly removed (like a switch opening or transistor switching off), the inductor will still conduct one amp. The question is "through what?"

    If there's nothing for the coil to conduct its current through except air, then air will do just fine. In order to pass current through air, the coil will develop hundreds or thousands of volts so that the air ionises and provides a conductive path, causing a spark.

    Usually the mechanical switch that opens to cut off the current presents a smaller air gap than anywhere else in the circuit, so the coil's energy is dumped into that gap. In the case of a transistor switching off, as the transistor's conductance drops, the coil responds by increasing its voltage in an attempt to maintain current flow - several hundred volts is not uncommon. Being a lower impedance than any local air gap, that poor transistor suffers the energy dump instead of the surrounding air. Few transistors will tolerate this abuse even once.

    As Audioguru states, the coil's resistance has little influence. The rule is simple: whatever current the coil was conducting at the moment it is switched off, it will continue to conduct immediately afterwards, and any conduction path, however resistive, will do.
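
    A rough back-of-envelope sketch of the effect, using v = L x di/dt. The component values below (a 100mH coil, 1A interrupted over 10us) are my assumptions for illustration only, not figures from the post:

    ```python
    # Rough estimate of the flyback voltage an inductor develops when its
    # current is interrupted (v = L * di/dt).  The values are assumed
    # purely for illustration.
    L = 0.1          # coil inductance in henries (100 mH relay coil, assumed)
    delta_i = 1.0    # current being interrupted, in amps
    delta_t = 10e-6  # how quickly the switch/transistor cuts off, assumed 10 us

    v_peak = L * delta_i / delta_t
    print(f"Ideal flyback voltage: {v_peak:.0f} V")  # ~10000 V

    # In practice, stray capacitance and arcing clamp the spike well below
    # this ideal figure, but it easily reaches hundreds of volts - enough
    # to destroy an unprotected transistor.
    ```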

  2. Daisy, I believe you are connecting the capacitor across the whole gauge (resistor and galvanometer together), effectively putting it in parallel with the power supply, which is why it doesn't slow the rise to 12, but does slow the drop to zero.

    What Audioguru correctly suggested was to connect the capacitor (100uF should make a significant difference) across the galvanometer itself. Only the galvanometer part. Inside the gauge's case there should be a resistor in series with the galvanometer's coil. Do not connect the capacitor across this resistor. Do not connect it across the galvanometer/resistor pair. Connect it only across the galvanometer's coil connections.

  3. Zeppelin pointed out to me in a nice and discreet PM:

    A is a common-collector (emitter follower) & B is a common-emitter with emitter resistance.


    That's a good point, and he asked me to clarify it. In my explanation I suggested that circuit B is a small modification to circuit A, implying that B is also a common collector setup with a slight change. But, as Zeppelin says, the modification isn't really that trivial. It's quite a leap from A to B, with the two circuits operating in fundamentally different manners.

    Given that the output is derived from the collector (in contrast to circuit A), circuit B is technically a common emitter configuration with a resistance present in the emitter for feedback. From this point of view, its behaviour can be described starting with a common emitter setup, introducing a resistance in the emitter to diminish and linearise the response to base voltage fluctuations.

    I agree with Zeppelin in this sense, that classical transistor theory teaches this, and it is not wrong. I teach this too. But here (on this forum) I enjoy some license for deviation from the classical approach. Since the subject is inherent negative feedback in transistor amplifiers, I found it helpful to treat circuit B as a common collector system (an emitter follower), thus:

    For circuit B to work, the collector voltage must always be greater than the emitter voltage, and the emitter voltage still follows the base. This configuration is therefore still an emitter follower at heart. The introduction of a collector resistance capitalises on this behaviour: it merely develops the emitter current (resulting from the buffered base voltage appearing at the emitter) into a voltage which is proportionally larger (in most cases) than the emitter voltage.

    Which approach is best? I think flexibility is more useful than any rigid textbook approach, but as Zeppelin notes, sometimes flexibility comes at the cost of technical correctness.

    He is right. Circuit B is a common emitter configuration.

    I would ask this though - take a phase splitter, where RE and RC are identical, and outputs are derived from both emitter and collector. Is this a common emitter or common collector circuit?
  4. The negative feedback in many transistor amplifier circuits is not always apparent. Take circuit A, for instance. It's an emitter follower configuration, which has a voltage gain of +1. There is 100% negative feedback present in this amplifier, as I will explain.

    The transistor will conduct from collector to emitter when the base voltage is 0.7V or so above the emitter voltage. Increasing this base-emitter difference even slightly will cause the transistor to conduct heavily, and decreasing it only a tiny amount will cause the transistor to completely block current. Therefore, this transistor operates with a fairly constant voltage between base and emitter - about 0.7V.

    If the base input voltage rises, then the voltage difference between base and emitter increases. The transistor becomes more conductive, increasing current flow from the collector down through the emitter and resistor. More current through the resistor means a greater voltage across it, meaning the emitter voltage also rises.

    When the base voltage falls, the base-emitter voltage difference is reduced, causing a drop in the transistor's conduction. So emitter current drops, and therefore also the voltage across the resistor.

    In this way, the emitter voltage (output) rises and falls with the base voltage (input) in order to maintain a constant 0.7V between base and emitter - hence the name 'emitter follower'.

    Even though we have placed no explicit feedback loop in this circuit, the very nature of the transistor and its configuration in this circuit exhibit negative feedback, ensuring that the emitter voltage follows the base. This is called "inherent negative feedback".

    If we modify the emitter follower slightly by adding a resistor at the collector (circuit B), we can reduce the amount of feedback inherent in the circuit to something less than 100%. The voltage gain is thus controllable (-R1/R2), but still there is no visible negative feedback path! The feedback present in this circuit is still inherent, and due to the emitter resistor.

    In the absence of an emitter resistor, there is no inherent negative feedback present, and so we must provide it explicitly (circuit C). Resistor R1 does this. The voltage gain is then controllable (-R1/R2), and the feedback greatly linearises the amplifier's response.

    Sometimes we do not need any feedback at all, especially if we are switching something on or off. In switching applications we are not concerned with linearity, and we want as much gain as possible, and so we omit the emitter resistor, and provide no explicit feedback path. That's what's happening in circuit D.

    Operational amplifiers (except in switching applications) are always connected with external components to provide feedback that determines the response of the system. Transistors, though, can be connected in such a way that negative feedback occurs even though no feedback path is provided explicitly.
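
    As a small-signal sketch of the point about circuit B, here is how an emitter resistor tames the gain towards -R1/R2. I'm assuming R1 is the collector resistor and R2 the emitter resistor; the resistor values and bias current below are illustrative guesses, not values from the attached schematic:

    ```python
    # Small-signal sketch: emitter degeneration reduces and linearises gain.
    # All component values are assumptions chosen for illustration.
    VT = 0.025          # thermal voltage, ~25 mV at room temperature
    IC = 1e-3           # assumed quiescent collector current, 1 mA
    gm = IC / VT        # transconductance of the transistor

    R1 = 10_000         # collector resistor (assumed)
    R2 = 1_000          # emitter resistor (assumed)

    gain_no_degeneration = -gm * R1               # circuit D style: no emitter resistor
    gain_with_degeneration = -gm * R1 / (1 + gm * R2)

    print(f"Gain without emitter resistor: {gain_no_degeneration:.0f}")   # about -400
    print(f"Gain with emitter resistor:    {gain_with_degeneration:.1f}") # about -9.8
    print(f"Approximation -R1/R2:          {-R1 / R2:.1f}")               # -10
    ```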

    post-20531-14279143259028_thumb.png

  5. You'll need to wire those capacitors in parallel. Two 1000uF caps in parallel have a combined capacitance of 2000uF.

    A 1000uF cap's voltage will drop 3V (from 12V to 9V, or 15 to 12, for example) in 1 millisecond, if it is supplying 3A. Is this long enough to ignite the rocket?

    A 2000uF cap (or two 1000uF caps in parallel) supplying 3A will drop by 3V after 2 milliseconds (twice as long), and so on.
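
    A quick arithmetic check of those droop figures, using the ideal relation delta_V = I x t / C (ESR and leakage ignored):

    ```python
    # Ideal capacitor droop: delta_V = I * t / C
    I = 3.0          # discharge current in amps
    t = 1e-3         # time in seconds (1 ms)

    for C in (1000e-6, 2000e-6):   # one 1000uF cap, then two in parallel
        dV = I * t / C
        print(f"C = {C*1e6:.0f} uF: drops {dV:.1f} V in {t*1e3:.0f} ms at {I:.0f} A")
    # 1000 uF -> 3.0 V per millisecond, 2000 uF -> 1.5 V per millisecond
    ```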

    Use 'low ESR' caps to maximise the current available in these short bursts.

    Capacitors discharge on their own, even with no load, so you'll need to charge them immediately prior to launch - otherwise by the time the second stage is ready to fire, the caps will already have lost a lot of charge.

  6. Audioguru,

    I considered the pot's effect, and decided (subjectively) that the equaliser wouldn't suffer much from the 5k output impedance of the volume control. You are right though, there would be attenuation, and a shift of band frequencies. I haven't figured out how much.

    I made an assumption - that the supply was a single 9V battery. The original design would be OK if supplied from dual rails. I assume that there is only a single supply because of the 0V centre point being derived from 2 resistors. The ground symbol is visible in all three circuits, and my guess is that Jesus's circuit doesn't work because he's connected them all together. That is sure to break the preamp because of the fake ground.

    My design is to address the single supply issue, remove any ambiguity regarding ground points, and provide 1M input impedance, all with a single op-amp. Perfection, as you state, would require another op-amp to buffer the output from the volume pot.

  7. In the preamp section, 4.5V is derived from the potential divider of two 4k7 resistors. On the diagram, this is connected to ground. You must not connect that 4.5V point to the ground connectors of the other sections, because for those other sections ground is the negative supply rail. In other words, the ground symbol in your preamp schematic shouldn't be there.

    The preamp in this design has an output offset of 4.5V (assuming a supply of 9V). That will play merry hell with the 386, so you must couple with a capacitor from the preamp to the next stage.

    The specs for your preamp should be: 1) Input impedance 1M, 2) Ground should be common to all sections. In my schematic I've shifted the volume control to after the op-amp, and I've decoupled the 4.5V mid-point with a capacitor in the feedback loop. This frees me to offset the input by +4.5V (so the amp can operate from a single supply) and create the 1M input impedance with only two resistors. Put this preamp as the first stage, then the equaliser.
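
    One way to read "1M input impedance with only two resistors" is a pair of equal resistors from the supply rails to the op-amp's non-inverting input. The 2M values below are my assumption for illustration, not necessarily what is in the attached schematic:

    ```python
    # Single-supply bias sketch: two equal resistors from +9V and 0V to the
    # op-amp input set the 4.5V offset; the signal sees them in parallel.
    V_supply = 9.0
    R_top = 2e6       # from +9V to the input node (assumed)
    R_bottom = 2e6    # from the input node to 0V (assumed)

    V_bias = V_supply * R_bottom / (R_top + R_bottom)
    Z_in = R_top * R_bottom / (R_top + R_bottom)

    print(f"Bias point: {V_bias:.1f} V")           # 4.5 V
    print(f"Input impedance: {Z_in/1e6:.0f} M")    # 1 M
    ```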

    post-20531-14279143227297_thumb.png

  8. For a resistor the relationship between voltage across it and the current through it is very simple - the current is always proportional to the voltage (V = IR). We say that an alternating current through a resistance will develop an alternating voltage across it which is in phase with the current, or that there is zero phase difference.

    For a capacitor or inductor though, the relationship is more 'complex' (see what I did there?). For example, the current through a capacitor is proportional to the rate of change of the voltage across it. This means that a sinusoidal alternating current through the capacitor will be 90 degrees out of phase with the resulting sinusoidal voltage developed across it.

    This phase shift means that a simple value of resistance is not enough to completely describe the behaviour of a capacitor. Instead we use a complex number, which is really a vector encapsulating both 'phase angle' and 'magnitude' information.

    This complex number is called impedance. It is analogous to resistance, in that it describes the relationship between the current through and voltage across some device, but it includes phase information as well as magnitude. You can think of resistance R as being a complex number (R + 0j) with an angle of zero and magnitude of R. In other words, a resistance is an impedance with no imaginary part. Thus a resistance is an impedance through which current remains in phase with and in direct proportion to the voltage across it.

    Ohm's law can be applied using impedances instead of resistance. If impedance is represented by Z, then V = IZ.

    Just like resistances, impedances connected in series have a combined impedance of Z = Z1 + Z2 + ... + Zn

    Parallel impedances are treated as you would treat parallel resistances:
    1 / Z = 1 / Z1 + 1 / Z2 + ... + 1 / Zn

    One more complication: the impedance of a reactive component (like a capacitor) varies with frequency. For a given frequency f (Hz), the impedance of a capacitance C (Farads) is Zc = -j / (2 Pi f C). Note that this is a purely imaginary value! That describes the fact that the current in a capacitor is always 90 degrees out of phase with the voltage across it.
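
    Here is a tiny sketch of the idea using Python's built-in complex numbers. The frequency and component values are arbitrary examples:

    ```python
    # Impedance as a complex number: magnitude and phase in one value.
    import math, cmath

    f = 1000.0         # frequency in Hz (example)
    C = 100e-9         # capacitance in farads (100 nF, example)
    R = 1590.0         # resistor in ohms (example)

    Zc = -1j / (2 * math.pi * f * C)    # capacitor impedance, purely imaginary
    Zr = complex(R, 0)                  # resistor impedance, purely real

    Z_series = Zr + Zc                  # series impedances simply add
    Z_parallel = 1 / (1 / Zr + 1 / Zc)  # parallel combination, as with resistors

    print(f"|Zc| = {abs(Zc):.0f} ohms, phase {math.degrees(cmath.phase(Zc)):.0f} deg")  # ~1592 ohms, -90 deg
    print(f"Series:   {abs(Z_series):.0f} ohms at {math.degrees(cmath.phase(Z_series)):.0f} deg")
    print(f"Parallel: {abs(Z_parallel):.0f} ohms at {math.degrees(cmath.phase(Z_parallel)):.0f} deg")
    ```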

    Better brush up on your complex arithmetic. It will be worth it in the end.

  9. I think Kevin's question is expressed in terms of time, and should thus be replied to in terms of time. I reckon the trouble understanding reactive components (like capacitors) for most people lies in the difference between understanding their behaviour in terms of frequency and understanding them in terms of time. The two approaches are so far removed from each other that you might think we are talking about different components altogether.

    In the time domain, a capacitor's voltage changes at a rate proportional to the current flowing through it. Thus:

    Firstly, the capacitor's voltage cannot change instantly. It can only gradually approach some value, as it charges or discharges. The higher the time constant (R x C) of an RC circuit, the slower the rate of voltage change across the capacitor. If an input signal changes slowly enough, the capacitor in an RC circuit is able to charge and discharge quickly enough to keep up with input changes. If the input changes too quickly though, the capacitor cannot charge or discharge fast enough to follow the input. This inability of the capacitor's voltage to swing quickly enough results in its voltage being an attenuation of the input.

    Secondly, intuitively, it can be seen that if the input changes significantly faster than the capacitor can follow, fluctuations of voltage across the capacitor will be negligible compared to fluctuations in input voltage. Thus it is the resistor that is dominant in determining the current through the network, and so the current is roughly proportional to the input voltage. This means that the rate of change of the capacitor voltage is proportional to the instantaneous input voltage. This is the cause of waveform distortion (not harmonic distortion).

    So, in a low pass RC circuit, the output is the capacitor's voltage, whose rate of change is (nearly) proportional to the instantaneous input voltage, and the circuit is said to integrate. Conversely, with the resistor and capacitor swapped to form a high-pass filter, the output is the resistor's voltage, and the circuit differentiates, so that the output at any instant is (nearly) proportional to the rate of change of the input. Read that again.

    The upshot of all this is that not only does the RC network attenuate, but it also distorts, by either integrating or differentiating. So, for a low-pass RC circuit, a square wave input (whose period is well below the R x C time constant of the circuit) will appear heavily attenuated at the output. It will also be distorted into a triangle wave, because the alternating high and low input voltages are causing the capacitor to charge and discharge (nearly) linearly - otherwise known as integration.
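
    A tiny numerical sketch of that square-wave case. The R and C values and drive frequency are arbitrary examples chosen so the period is much shorter than R x C:

    ```python
    # Euler simulation of a low-pass RC network driven by a fast square wave.
    # R*C = 10 ms, square wave period 1 ms: output is attenuated and integrated.
    R, C = 10_000.0, 1e-6        # example values, time constant R*C = 10 ms
    period = 1e-3                # 1 ms square wave, well below the time constant
    dt = period / 1000
    v_cap = 0.0
    trace = []

    for step in range(100_000):  # simulate 100 ms, plenty of time to settle
        t = step * dt
        v_in = 1.0 if (t % period) < period / 2 else 0.0
        i = (v_in - v_cap) / R   # current set almost entirely by the resistor
        v_cap += i * dt / C      # the capacitor integrates that current
        trace.append(v_cap)

    ripple = max(trace[-1000:]) - min(trace[-1000:])
    average = sum(trace[-1000:]) / 1000
    print(f"Output ripple: {ripple*1000:.1f} mV around a {average:.2f} V average")
    # ~25 mV of triangle-shaped ripple riding on ~0.5 V: attenuated and integrated.
    ```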

    For that same circuit, fed with a nice sinusoid of period significantly lower than R x C, the sinusoid is integrated. The integral of sin(x) is another sinusoid, phase shifted by -90 degrees (in other words, an upside down cosine, or -cos(x)). A high-pass RC filter would phase shift by +90 degrees, to yield a cosine. Try plotting a sinusoid, and then the rate of change of that sinusoid, and you'll see this effect clearly - two sinusoids out of phase with each other by 90 degrees.

    An interesting experiment to demonstrate this is to connect an oscilloscope in X-Y mode to the circuit - channel 1 to the input, and channel 2 to the output. This yields a wonderful ellipse, or nearly a circle if you choose your input signal frequency and channels' vertical scales properly.

    In summary, when feeding a simple RC network with a sinusoid, you get another phase-shifted sinusoid across the capacitor. With non-sinusoidal input waveforms, the capacitor voltage is always a distortion of the input. The amount of distortion depends upon how far the capacitor's rate of charge/discharge is exceeded by rate of change of input. Periodic signals whose periods are significantly smaller than the time constant R x C will appear in attenuated and integrated form across the capacitor, and in attenuated and differentiated form across the resistor.

    Understanding an RC network in these terms (the time domain) permits an understanding of how timing circuits (like the 555 IC) work, but is not really appropriate for understanding filter applications. Those are best described in the frequency domain, because the input waveform is rarely some nice square, triangular or sinusoidal form.

  10. Although not fully convinced, the most interesting is Cabwood's post. Mathematical Convenience

    LOL! Not entirely convinced! Interesting!

    For a simple low-pass RC network, the cut-off frequency is 1/(2.Pi.R.C). The voltage gain for a simple sinusoidal input at that frequency is

    Vout/Vin = 1 / Sqrt(2) = 0.707 = 71%

    The power gain is the square of this (power is proportional to square of voltage):

    Pout/Pin = 1 / 2 = one half. Very convenient.

    Expressed as decibels:

    10 log (1/2) = -3.01dB
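
    A two-line check of those figures, showing that the voltage-ratio and power-ratio forms give the same decibel value:

    ```python
    # At the cut-off frequency the voltage gain is 1/sqrt(2); power goes as
    # voltage squared, so the power gain is 1/2.  Both give -3.01 dB.
    import math

    voltage_gain = 1 / math.sqrt(2)
    power_gain = voltage_gain ** 2

    print(f"20*log10(Vout/Vin) = {20 * math.log10(voltage_gain):.2f} dB")  # -3.01 dB
    print(f"10*log10(Pout/Pin) = {10 * math.log10(power_gain):.2f} dB")    # -3.01 dB
    ```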

    Not exactly 3, as I said. But conventionally engineers have decided that it's close enough to 3 to be able to call the cut-off frequency the '3dB point'. It could be argued that '10 log(Pout / Pin)' stinks of arbitrariness, but Mr. Bell found that log-base-ten simplified the maths quite a lot where power (not amplitude) is concerned, and engineers subsequently noticed that multiplying the logarithm of gain by 10 made 10dB equal to a power gain of 10. All for convenience. If Mr. Bell had settled upon the natural logarithm as the way to go, then the numbers would be a nightmare.

    I can imagine the meeting: 'Hey, 3 is nice round number, and half is a fantastically convenient gain, AND it happens to be roughly the frequency which is the inverse of the time constant of the system. Let's use it to define where a system stops or starts passing signals, and we can call it the 3dB point. Waddya reckon?" Surely it must have gone down to a vote.

    It didn't escape these guys that if you created a second order filter by cascading two identical first order RC filters, at frequency 1/(2.Pi.R.C) the response would be -6dB (1/4), which wasn't so convenient. They still wanted to define the 'half power gain' point of their systems, and so Butterworth and Chebyshev and all the other second order filter folk, for convenience, worked out the frequency where their own designs had a power gain of a half (the 3dB point), and used those instead of the 6dB points to define the passbands of their designs. Just for consistency, you understand.

    For 3rd order filters, which increased attenuation to 9dB at that f=1/(2.Pi.T) frequency, they did the same. "Let's not use the 9dB point to sell our designs", they said. "Stick with the conventional 3dBs" was their choice.

    So, to eat my words, 3dB was chosen to describe higher-than-one order active designs, but only because nature had a convenient "nearly 3" response in its own passive systems.
  11. That value of 3dB is not "chosen".

    And it's not exactly 3dB. Just like pi is not exactly 22/7. It's nature, and the maths we use to model it, that chose something close enough to 3dB that today we use that figure conventionally, as we use Pi.

    Mathematically we say that a first order filter's response to a step input is related to e^(-t/T). T is said to be the time constant (in seconds) and t is the time elapsed after the step. Euler's number e is not chosen (unless you argue divine design), it's natural. The value of T, if anything, is the only arbitrary choice here. But for mathematical convenience we've decided that it's the time at which the expression is equal to e^(-1), in the mathematical model of the system.

    Now, it just so happens that this filter will attenuate high (or low) frequencies by 3dB at the frequency 1/(2.Pi.T). Nobody said 'let's use 3dB as the convention'. Rather, nature says "in first order filters I will attenuate signals by 3dB at the frequency which is 1 divided by (2 Pi times the time constant of the filter), and that time constant shall be the time it takes for blah blah blah".
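
    A short sketch tying the time constant to that frequency, using the standard first-order magnitude response. The value of T below is just an example:

    ```python
    # First-order low-pass: |H(f)| = 1 / sqrt(1 + (f/fc)^2), with fc = 1/(2*pi*T).
    import math

    T = 1e-3                      # time constant, 1 ms (example)
    fc = 1 / (2 * math.pi * T)    # ~159 Hz

    def gain_db(f):
        magnitude = 1 / math.sqrt(1 + (f / fc) ** 2)
        return 20 * math.log10(magnitude)

    print(f"Cut-off frequency: {fc:.1f} Hz")
    print(f"Attenuation at fc: {gain_db(fc):.2f} dB")   # -3.01 dB, not exactly 3
    ```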

  12. I'll suggest a simple bistable using two transistors. It'll work with any voltage up to the limit of the transistors, but only with a DC supply and load.

    You could switch a relay as the load (don't forget the protective diode). The circuit shown, with two BC108 transistors, will switch up to 200mA. If you want more, you'll need to replace TR2 with a power darlington.

    post-20531-14279143174662_thumb.png

  13. Yes, as I said, capacitive coupling. I'm pretty sure there is almost no electromagnetic induction involved in 50Hz mains pickup.

    The body's capacitance to ambient electric fields is a few picofarads. When you touch the scope probe, you become a 10pF capacitor in a series circuit - a 50Hz 110V/230V AC signal generator, in series with a 10pF capacitor, in series with a 1MOhm load (the probe) to ground.

    That's a cut-off frequency of roughly 16kHz, far above 50Hz, meaning the 50Hz 'signal' is hugely attenuated.
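
    The rough numbers behind that claim, treating the body-plus-probe as a high-pass divider (230V mains assumed for the example):

    ```python
    # 10pF body coupling in series with the scope's 1M input: a high-pass
    # divider for the 50Hz hum.
    import math

    C = 10e-12       # body-to-mains coupling capacitance (the few-pF figure above)
    R = 1e6          # scope/probe input resistance
    V_mains = 230.0  # mains voltage (assumed 230V here)
    f = 50.0         # mains frequency

    fc = 1 / (2 * math.pi * R * C)                   # ~15.9 kHz cut-off
    fraction = (f / fc) / math.sqrt(1 + (f / fc) ** 2)

    print(f"Cut-off frequency: {fc/1e3:.1f} kHz")
    print(f"Fraction of mains reaching the probe: {fraction*100:.2f}%")
    print(f"Hum seen on the scope: about {V_mains * fraction:.1f} V")  # under a volt
    ```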

    This is what I believe to be the dominant effect here.

  14. In the first picture I've drawn a classical representation of the voltages across the elements in a simple battery/load circuit. I am careful to observe that the voltage arrows always point to the higher potential end of the component. When this is so, the voltage is always marked against the arrow as positive. We have thus eliminated any ambiguity regarding polarity, prior to applying Kirchhoff's laws.

    To apply Kirchhoff's voltage law we must add all the voltages in a loop, without forgetting to account for polarities. In the left example, adding +10V to +10V is 20V, not zero, giving the impression that something is wrong. Nothing is really wrong here, of course, except that clockwise arrows must all be added, and anti-clockwise all subtracted (or vice versa if you wish) in order to correctly account for the various polarities. So in this example, the battery voltage (clockwise arrow) is positive, and the load R's arrow is anti-clockwise, and thus negative. Add those two: (+10) + (-10) = 0.

    In the right version I made a visual tweak to the load's arrow - I changed its direction, and negated the voltage (to maintain polarity consistency). This visual trick now means all the arrows point clockwise, and can all be added as-is. The sum is zero, as you can see.

    For your own circuit, I worked out the layout and values, and drew in arrows and values paying careful attention to polarity. The arrow "VR2" points to the right-hand end of R2, and is marked "-1V". Read this to mean "The right-hand end of R2 is minus 1V higher than the left-hand end". In other words, the right end is one volt lower.

    You could change the arrow direction, and label it +1V, and this would say exactly the same thing.

    If you observe this consistency in your labelling, Kirchhoff's voltage law is always applicable. Using the existing labels, as I have drawn them, we see that VR3 and VR2 point clockwise, and so are added, but E2 is anti-clockwise, so is subtracted. So:

    VR3 + VR2 - E2 = (+8)+(-1)-(+7) = 0
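
    The same signed sum, written out mechanically (the +8V, -1V and +7V values are the ones quoted above for your circuit):

    ```python
    # Signed KVL sum around the loop: clockwise arrows added, anti-clockwise subtracted.
    loop = [
        ("VR3", +8.0, "clockwise"),
        ("VR2", -1.0, "clockwise"),
        ("E2",  +7.0, "anticlockwise"),
    ]

    total = sum(v if direction == "clockwise" else -v for _, v, direction in loop)
    print(f"Sum around the loop: {total} V")   # 0.0, as Kirchhoff's voltage law demands
    ```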

    There is another interesting observation to make regarding your circuit, and that is the direction of current through E2. Take another look at my simple battery/load circuit, and note how all the labelled values are positive. In this condition, all arrows are correctly depicting direction of current flow and polarity of potential difference, and can be used to visually spot odd circumstances.

    See how the current flow through the resistive load and voltage drop across it point in opposite directions? This is normal for a resistive load, and indicates that the load is absorbing energy. In other words, the resistor is getting warmer.

    What if (given all-positive current and voltage labels) the current direction and potential difference are in the same direction? That means that the component in question is delivering energy! In my picture that is exactly the case for the battery, and is perfectly normal considering batteries are power sources.

    But in your circuit there is an unusual situation for the battery E2. The current flowing through it is I2, which is labelled (at the moment) as -1A. First, normalise that arrow by changing its direction and sign, so that we are using only positive values. Now you can see that for the battery E2 the two arrows (I2 and E2) point in opposite directions, telling you that this battery is absorbing energy. In other words it is being charged up! This battery would therefore heat up and possibly explode if it was not rechargeable!

    post-20531-14279143085625_thumb.png

    post-20531-14279143085753_thumb.png
