
Confusion about oscilloscope setting

electronicsLearner77
Joined: Jul 2, 2015
Messages: 306
I am very confused by the oscilloscope time base setting. If I set 1 ms as the time base, it shows 10 ms of signal. Does it also mean that the scope takes a sample every 1 ms of the input signal? If I want to see a 10 kHz signal (0.1 ms period), do I need to change the time base to 0.1 ms?
 

(*steve*)
¡sǝpodᴉʇuɐ ǝɥʇ ɹɐǝɥd
Moderator
Joined: Jan 21, 2010
Messages: 25,510
In direct answer to your questions: no, and no. But the information answering the former question should be available, and preferably easy to obtain, on the screen of your scope. In answer to the second question, you should set the timebase so that you see a useful amount of signal. Your scope may even have a button to do it (and also set the vertical amplifier settings) with one touch!

However, there is a longer answer...

Let's step back in time to the days of analog scopes.

There are 2 independent axes, X and Y. The X axis is controlled by the timebase oscillator, the Y axis by the signal you want to display.

The timebase oscillator has several functions:
  1. To move the electron beam across the display at an adjustable (and known) rate.
  2. To move the electron beam across the display at a constant speed (i.e. not faster at some positions on the screen than at others).
  3. To be able to be triggered to start the sweep.
Essentially, you set the speed of the timebase so that you can see a useful amount of the signal. For example, if you want to see 2 complete cycles of a signal that repeats 10,000 times per second, the timebase should sweep across the screen in 1/5000th of a second (not to be confused with 5000 times per second!)
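
If it helps, here is that arithmetic as a quick Python sketch (the numbers are just the ones from the example; nothing here comes from any particular scope):

    # How long should one sweep take to show a given number of cycles?
    signal_freq_hz = 10_000      # signal repeats 10,000 times per second
    cycles_to_show = 2

    period_s = 1 / signal_freq_hz             # 100 us per cycle
    sweep_time_s = cycles_to_show * period_s  # 200 us, i.e. 1/5000 of a second

    print(f"sweep time: {sweep_time_s * 1e6:.0f} us")  # -> sweep time: 200 us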

To allow you to make measurements, the display has a graticule which divides the screen up into small squares (rather like graph paper). To make things easier, the controls are calibrated to the graticule. In the vertical direction the calibration is in volts per division; in the horizontal direction it is in seconds (or fractions of a second) per division.

In the case above, if our scope has 11 graticule marks (meaning 10 divisions) in the horizontal direction, then we know that we want the timebase to sweep across those 10 divisions in 1/5000 of a second. So each division must be swept in 1/50,000 of a second. 1/50,000 of a second is 20us, so the timebase setting is 20us. Most oscilloscopes have (or had) a 1, 2, 5 arrangement of values on their controls, so 20us would be a value you would find on the control. You could also adjust the timebase manually (but would probably not know the exact value once you did). More advanced scopes allow arbitrary settings of the exact timebase, down to ps per division.
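
Continuing the sketch: divide the sweep time by the number of horizontal divisions, then snap to the nearest knob value. The snapping helper below is purely my own illustration of the 1, 2, 5 arrangement, not a feature of any real scope:

    import math

    def snap_1_2_5_down(value):
        # Round down to the nearest 1/2/5 x 10^n step, like a classic
        # timebase knob with a 1, 2, 5 arrangement of values.
        exponent = math.floor(math.log10(value))
        mantissa = value / 10 ** exponent
        for step in (5, 2, 1):
            # small tolerance guards against floating-point error
            if mantissa >= step - 1e-9:
                return step * 10 ** exponent

    sweep_time_s = 1 / 5000                  # from the example above
    divisions = 10                           # 11 graticule marks
    per_div_s = sweep_time_s / divisions     # 1/50,000 s = 20 us
    print(snap_1_2_5_down(per_div_s))        # -> 2e-05, i.e. 20 us/div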

In the analog oscilloscope, only a single value on the Y axis was ever known. The historical values were displayed by virtue of both the persistence of vision of the observer and the persistence of the image on a phosphor screen. In the past you could select an oscilloscope on the basis of the persistence of the phosphor, and some had options to extend the persistence. In an analog environment the resolution in the X direction is largely limited by the vision of the observer and the width of the electron beam.

In the digital environment, most things stay conceptually the same. However, instead of the timebase controlling how fast the electron beam sweeps the screen, it now controls (in a fairly complex manner) how the historical values of the signal are scanned for display.

It is the complexity of this process that you are getting your head around, and that is not too surprising. It is often easier to ignore the how, and just accept the fact that the scope will try its best and typically come up with a useful display. The words "typically" and "useful" should be red flags that there may be cases where it fails. Whilst those failures are, generally speaking, pretty rare, they can result in you misreading something on your scope, and doing so in a manner that would not happen in an analog world. So... it's best to have an understanding so you know when your instrument is trying to mislead you.

Your digital scope will have some method of sampling the signal, and some amount of storage to hold these samples prior to display. Your scope will have a maximum rate at which it can take samples (let's say it's 100 million times per second).

If you are displaying a signal where the horizontal timebase is VERY fast, it is possible that the spacing between samples, even at the fastest possible sample rate, exceeds the width of a pixel on the screen. In this case the resolution on the screen will suffer. It is also possible to get into this state by zooming in on a stored signal -- much the same way as zooming into a digital image eventually shows pixelation.
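
As a rough feel for when this bites, compare the sample spacing with the pixel spacing (the figures below are invented for illustration, not taken from any real scope):

    # Do we get at least one sample per screen pixel?
    max_sample_rate = 100e6        # 100 MSa/s, our example scope
    timebase_s_per_div = 10e-9     # a VERY fast timebase: 10 ns/div
    pixels_per_div = 50            # assumed screen resolution

    samples_per_div = max_sample_rate * timebase_s_per_div   # = 1
    print(samples_per_div >= pixels_per_div)   # False: only 1 sample per
                                               # division, so resolution suffers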

Another possible problem is that where you are using a very slow timebase, the scope may not have the memory to hold an entire screen worth of signal.

The second problem is easier to solve: the sampling frequency is simply reduced.

A digital scope will give you some indication of the speed at which it is sampling the signal, and also the horizontal timebase (which may vary as you zoom in on a signal). As long as you've got many more samples than pixels between the graticules, you're generally in a safe space, but there are still drawbacks.

Going back to an analog scope: even when the horizontal timebase is very slow, the vertical bandwidth remains at the scope's maximum bandwidth. Thus, even if you've got 1 second per division, a signal at the scope's bandwidth (let's say 50MHz) will still cause deflection of the trace. Our mythical 100 megasample per second scope at 1 second per division can only sample at a rate that will prevent it from filling its memory in 10 seconds. If the memory depth is 1,000,000 samples then the scope cannot take more than 100,000 samples per second. Anything that occurs in 1/100,000th of a second or less may not even be recorded. Effectively the scope has a bandwidth of 50kHz.
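
Here's that memory-depth arithmetic spelled out (same figures as above):

    # Memory-limited sampling at a slow timebase.
    memory_depth = 1_000_000       # samples
    timebase_s_per_div = 1.0       # 1 second per division
    divisions = 10

    screen_time_s = timebase_s_per_div * divisions   # 10 seconds per screen
    max_rate = memory_depth / screen_time_s          # 100,000 samples/second
    effective_bw_hz = max_rate / 2                   # Nyquist limit: 50 kHz

    print(f"{max_rate:.0f} Sa/s -> ~{effective_bw_hz / 1e3:.0f} kHz bandwidth")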

This change in bandwidth of the scope with sampling frequency is obvious when you think about it, but can be deceiving in practice. An example is the ringing of a fast rising edge. An analog scope may not show the full height and may have a fuzzy looking trace following the peak, but the rough outline of the "shape" of the ringing can be observed. On a digital scope, if the ringing is fast compared with the timebase you may see a totally different signal caused by aliasing. For example, if the signal oscillates several times between each sample, the obtained samples will not form an outline of the signal, but may show a far lower frequency.
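
You can work out the apparent (aliased) frequency without drawing anything; the folding formula below is standard sampling theory, and the numbers are just an illustration:

    def alias_freq_hz(f_signal, f_sample):
        # Apparent frequency of a sine after sampling,
        # folded into the range 0..f_sample/2.
        f = f_signal % f_sample
        return min(f, f_sample - f)

    # 1.02 MHz ringing sampled at only 100 kSa/s (the slow-timebase case above):
    print(alias_freq_hz(1.02e6, 100e3))   # -> 20000.0: it looks like 20 kHz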

The advantage of the digital scope in the instance above (as long as your sampling frequency is high enough) is that the signal will not be faint or fuzzy. However this turns into a disadvantage when there is noise on the signal. The same level of noise may appear to have a higher amplitude on a digital scope.

Going a little further in this direction, a 50MHz analog scope will typically display a 50MHz signal with reduced amplitude (possibly as much as halving the amplitude), but will continue to display a signal (at further reduced amplitude) at even higher frequencies. A 100MS/s digital scope will struggle to display a 50MHz signal (having only 2 samples per cycle), and a slight frequency variation can appear as a significantly changing amplitude as the position along the trace where each sample is taken shifts (there are ways of overcoming this though...).
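
To see that last effect, sample a 50MHz sine at 100MSa/s and vary only the phase at which the samples land (a small thought-experiment in code, not a measurement):

    import math

    f_hz, fs_hz = 50e6, 100e6    # exactly 2 samples per cycle
    for phase in (0.0, math.pi / 4, math.pi / 2):
        samples = [math.sin(2 * math.pi * f_hz * n / fs_hz + phase)
                   for n in range(4)]
        print([f"{s:+.2f}" for s in samples])

    # phase 0.0:   every sample is ~0.00 -- the signal vanishes entirely
    # phase pi/4:  +/-0.71 -- apparent amplitude is 71% of the real one
    # phase pi/2:  +/-1.00 -- full amplitude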

A practical modern digital scope (even one rated for 20MHz bandwidth) may be capable of 1GS/s. It also may have storage for tens of millions of samples. In this case you will typically not run into many of the problems I've discussed above unless you zoom in a long way or operate the scope near its limits. On more expensive scopes there are even methods of making brief transitions look fainter so the trace looks more like one you would see on an analog scope (it's actually great for visualizing some signals).

If you know what your scope's sample rate is when viewing a display, and understand the bandwidth issues that result when this is reduced, you should have enough information to interpret what is shown on the screen.
 

electronicsLearner77
Joined: Jul 2, 2015
Messages: 306
Thank you for the detailed reply. This is what I have understood from your calculations; can I proceed like this?
If I want to see 4 cycles of 5kHz then it is 50000/4 = 12500. So 1/12500 times per second, which is 4us. Approximately, can I proceed with this calculation for setting the time base? If I move my timebase randomly I get the waveform, but I want to know exactly what I am doing.
 

(*steve*)
¡sǝpodᴉʇuɐ ǝɥʇ ɹɐǝɥd
Moderator
Joined: Jan 21, 2010
Messages: 25,510
At 5kHz, each cycle takes 200us. If you want 4 of them that's 800us, so 80us per division.
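
Here's the same arithmetic as the earlier sketch, if you want to check it:

    f_hz = 5_000                 # 5 kHz
    cycles = 4
    divisions = 10

    screen_time_s = cycles / f_hz             # 4 x 200 us = 800 us
    per_div_s = screen_time_s / divisions     # 80 us per division
    print(f"{per_div_s * 1e6:.0f} us/div")    # -> 80 us/div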

If you can't choose this, either 50us or 100us might be the place to start.

Edit: you need a slower timebase to see more cycles, so your error was to divide rather than multiply. Then I think you made a math error.
 