Solutions, Solutions, Solutions!

It seems like all we hear about today in the test and measurement industry is “solutions.” Why is “solutions” such a popular buzzword? Well, it has a great double meaning. The first (and most obvious) definition of solution is an answer or resolution to a problem or situation. The second meaning of “solution” dates as far back as the year 1590, and means a liquid mixture in which a solute is completely dissolved in a solvent. When we talk about solutions, we really imply both of these definitions: one of them literally, and one of them metaphorically.

Let’s start with the literal meaning. When I say that our DDR solution kicks booty (metaphorically, not literally), what I mean is that we have a robust, industry-proven oscilloscope that will simplify the complicated task of triggering, analyzing and debugging parallel buses. If you work with DDR, you’re probably now thinking, “tell me more about this solution.” Ok, here goes:

“The Keysight Infiniium V-Series oscilloscopes also have the world’s fastest digital channels, which means you can probe at the various command signals to easily trigger on the different DDR commands such as read, write, activate, precharge and more. DDR triggering makes read and write separation easy, providing fast electrical characterization, real-time eye analysis and timing measurements. The DDR protocol decoder can decipher the DDR packets and provide a time-aligned listing window to search for specific packet information.”

So I may have copied that from our DDR webpage, but it doesn’t count as plagiarizing if it’s from my own company. And, it’s one heckuva solution because it does the job you’ll need it to do, and it does it well. If it didn’t get the job done, it wouldn’t be a solution. I also can’t just call it an oscilloscope, because it’s more than that. It’s a combination of hardware, software, and probing – it’s a whole solution.

But here’s what it comes down to: we call it a solution because it’s the answer to a lot of your DDR problems, and it’s so much more than just a piece of hardware for your bench.

Ok, I hinted above about how this could get metaphorical. The literal, chemistry definition of solution is a liquid mixture that has a fully dissolved (same root word as solution!) solute in a solvent. I don’t literally work with solutions. But when we at Keysight are combining and integrating software and hardware, we’re creating a metaphorical solution. For us to do the job 100% of the way, our software has to fully integrate (or dissolve) with our hardware. That’s what makes it a solution. It flows, it integrates, it works as one!

Ok, now let me get (possibly too) philosophical. I’ll go so far as to say that we can’t really call it a solution until it’s in the hands of an engineer and being used to find solutions to bugs in their design. A solution can only be a solution when the test equipment is fully integrated (dissolved) into an engineer’s workflow and design process. The solution consists of the test tools (the solute) and the engineer’s skill and wit (the solvent). In chemistry, the solute is considered the “minor component” and the solvent is considered the “major component.” This holds true for our metaphorical solution. We can only do so much to provide the solute; the real quality of the solution is dictated by you, the solvent!

So, there are really two main reasons we talk about solutions. First, we want to convey that we can help solve your problems with a combination of tools. Second, we want to partner with you to create and find the real solution: a combination of quality equipment and quality engineering.

In closing, a haiku:

Solutions, complex
Combine wit and expertise
to solve tough problems

Or more traditional English:

Roses are red
Violets are blue
I’m an engineer not a poet
solutions.

Author’s note: there may or may not have been a challenge to see how many times I could use the word “solutions” in a blog post. The answer is 32. Solutions. 33.

Inventing the MSO

A look into the history of the mixed-signal oscilloscope

1996 was a year to remember: it brought us the Macarena, the Nintendo 64, and the first Motorola flip phone. But also making its debut that year was the HP 54645D mixed-signal oscilloscope. Today mixed-signal oscilloscopes (MSOs) are an industry standard, but this was new and exciting technology 20 years ago. Here’s an excerpt from the HP Journal from April 1997:

“This entirely new product category combines elements of oscilloscopes and logic analyzers, but unlike previous combination products, these are “oscilloscope first” and logic analysis is the add-on.”

At this point in the tech industry, microcontrollers dominated the landscape. Gone were the 1980s, with their microprocessors and dozens of parallel signal lines; in were the 8-bit and 16-bit microcontrollers. As the need to test dozens (or hundreds) of channels decreased, the thriving logic analyzer industry began to shift in favor of oscilloscopes. As a result, Hewlett Packard released the 54620A, a 16-channel timing-only logic analyzer built into a 54645A oscilloscope frame. This was a big hit with engineers who only needed simple timing analysis from a logic analyzer and liked the simplicity and responsiveness of oscilloscopes.

These tools were all coming out of Hewlett Packard’s famed “Colorado Springs” division, which focused heavily on logic and protocol products. In hindsight it’s clear that the shift from a logic analyzer-focused landscape to an oscilloscope-focused landscape was inevitable.  But, when the project funding decisions had to be made the logic analyzer was king.

A few R&D engineers, however, saw it coming. They strategized amongst themselves to get a new oscilloscope project underway, knowing it was going to be a hard-fought battle. Following the old adage “if you can’t beat them, join them,” the engineers proposed a new project combining the oscilloscope and the logic analyzer into one frame. The thought was that if an oscilloscope project wouldn’t get funding, then surely integrating a logic analyzer into the scope would do the trick. Below is a picture of Bob Witte’s (RW) original notes from the 1993 meeting in which the MSO was conceived. (Follow Bob Witte on Twitter: @BobWEngr) This product was internally code named the “Logic Badger,” stemming from the 54620A logic analyzer’s “Badger” code name and the 54645A oscilloscope’s “Logic Bud” code name.

Keysight MSO original notes

One thing led to another, and the 54620A and the 54645A were combined into the paradigm-shifting 54645D. A new class of instrument was introduced into the world: the mixed-signal oscilloscope. For the first time ever, engineers could view their system’s timing logic and a signal’s parametric characteristics in a single acquisition using the two analog oscilloscope channels and eight logic channels.

Hewlett Packard 54645D, the original HP (now Keysight) MSO

From its somewhat humble beginnings, the MSO has become an industry standard tool globally, with some estimating that up to 30% of new oscilloscopes worldwide are MSOs. Logic analyzers are also still sold today and are an invaluable tool for electrical engineers thanks to their advanced triggering capabilities, deep protocol analysis engines, and state mode analysis. If you’re debugging FPGAs, DDR memory systems, or other high-channel-count projects you’ll want to consider using a logic analyzer. However, mixed signal oscilloscopes dominate today’s bench for their ability to quickly and easily trigger and decode serial protocols.

Finally, it’s worth noting that the Hewlett Packard division is still alive and strong in its current form here at Keysight Colorado Springs. In fact, many of the same engineers from the very first MSO project are still here working on today’s (and tomorrow’s) MSOs.

To learn more about how the digital channels on an oscilloscope work, check out this 2-Minute Guru video on the Keysight Oscilloscopes YouTube channel.

Learn more about MSOs or view the mixed signal oscilloscopes available today from Keysight Technologies at Keysight.com.

Getting Started with Jitter: The Best Jitter Glossary I’ve Ever Written

(Also the only Jitter Glossary I’ve ever written)

Does jitter have you all shook up? This quick overview should help ease those jitters (puns intended, sorry). Learning this list of key terms will give you the confidence you need to start tackling the jitter bugs in your design.

Buckle up! Here’s the exhaustive list of terms you need to know:

Jitter: Essentially a measurement of where your signal’s edges actually are compared to where you want them to be. If your edges are too far off, bad things happen. Really, really bad things. Or sometimes just marginally bad things. Bit errors, timing errors, the works. You can hope it’s just marginally bad, or you can use the right equipment and know for sure.

Jitterbug: An old-school dance.  Note: you don’t actually need to learn this to talk intelligently about jitter. It will probably just have the opposite effect.

Probability Distribution Function (PDF): Remember your statistics class in college? Me neither. But you’ll probably remember the term “bell curve,” because that affected your grades. A bell curve is just one type of probability distribution function and is simply another way to describe a “normal” or “Gaussian” distribution. A PDF is simply a chart of possible values based on their likelihood of occurring. The x-axis represents a possible value (sometimes marked in standard deviations away from zero) and the y-axis represents the probability of that value occurring. We use PDFs to visualize and interpret jitter measurements.

Gaussian Distribution: or “normal distribution,” it’s unbounded and continuous. That’s a fancy way of saying that basically any value is possible. But the farther away from the middle of the PDF you go, the less likely it is that that value will occur.

Random Noise: Also “random jitter,” is 100% random and has a Gaussian distribution. It’s caused by physics (yay science!) and has three components: thermal noise, shot noise (or Poisson noise if you’re a math major), and pink noise. If you want to geek out more on this, just look it up on Wikipedia. So, you expected your clock to have a 60 ns period? Well, because you can’t get rid of random noise (earplugs don’t help), you could end up with a rogue 500 ns period every once in a while. But you probably won’t, unless you have a few years to run the test. But you could. This is why we like to measure and analyze jitter! You can analyze jitter on your oscilloscope using histograms.

Histograms: A tool that visually describes how a signal varies over time.  Figure 1 shows a jitter histogram on the Keysight InfiniiVision 6000 X-Series oscilloscope.  Because it looks like a bell, you can say “That’s Gaussian!” (and get smarty-pants points from your cubicle-mate). Because there’s only one peak on the histogram, you can say “Psh, it’s only random jitter so there’s nothing we can do about it!” (and get double bonus smarty-pants points from your cubicle-mate). But, look at Figure 2.  That looks a little bit scarier. Because the histogram has two peaks it means that there’s “deterministic” jitter.

Figure 1: A histogram of a signal that just has random noise

 

Figure 2: A bimodal histogram shows that there’s deterministic jitter
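Before moving on, here’s a way to play with the unimodal-vs-bimodal idea away from the scope. This is a small Python sketch (all numbers are hypothetical) that simulates period measurements with random jitter alone and with an added deterministic component, then counts the peaks in the histogram, the same judgment you’d make by eye in Figures 1 and 2:

```python
import random

random.seed(0)
IDEAL_PERIOD_NS = 10.0  # hypothetical 100 MHz clock

# Random jitter only: Gaussian spread around the ideal period (one peak).
rj_only = [random.gauss(IDEAL_PERIOD_NS, 0.05) for _ in range(20_000)]

# Random + deterministic jitter: periods alternate between two offsets
# (duty-cycle-distortion-like), which produces a bimodal histogram.
rj_plus_dj = [random.gauss(IDEAL_PERIOD_NS + (0.3 if i % 2 else -0.3), 0.05)
              for i in range(20_000)]

def histogram(samples, bins=60):
    """Bin the samples between their min and max, like a scope's histogram."""
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / bins
    counts = [0] * bins
    for s in samples:
        counts[min(int((s - lo) / width), bins - 1)] += 1
    return counts

def modes(counts, floor=0.5):
    """Count contiguous runs of bins taller than floor * the tallest bin."""
    thresh = floor * max(counts)
    runs, in_run = 0, False
    for c in counts:
        if c > thresh and not in_run:
            runs += 1
        in_run = c > thresh
    return runs

print(modes(histogram(rj_only)))     # one peak: random jitter only
print(modes(histogram(rj_plus_dj)))  # two peaks: deterministic jitter too
```

A real scope does something fancier than counting tall bins, of course, but the takeaway is the same: one peak means random jitter, more than one means something deterministic is in play.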

Deterministic Jitter (DJ): It’s not random.  It’s usually bounded, so it can’t go off to infinity even if it wants to. This is when it starts to get scary, because deterministic jitter is caused by system phenomena. Notice that there are two peaks with a random distribution around each of those peaks. Random and deterministic jitter are both in play here.  Deterministic jitter can be broken down into a few sub-categories:

Bounded Uncorrelated Jitter (BUJ): Gives engineers night terrors. It’s bounded but isn’t really correlated with anything else in the same system. It could be something like crosstalk or just interference from the wall. (The wall? Yeah, there’s noise everywhere. Check out this awesome video: https://youtu.be/SJefUNAJZNA)

Data Dependent Jitter (DDJ): Can be one of two things. The first is “duty cycle distortion” (DCD). This is when one bit value tends to have a longer period than the other (like when you can get one kid out of bed way easier than the other). The second is “inter-symbol interference” (ISI). This is caused by long strings of a single bit value. This is sort of like when you’ve been sitting too long in a weird position and one leg doesn’t work right when you get up and try to walk.

Periodic Jitter: Can be correlated or uncorrelated, but is always periodic. This means it’s pretty easy to identify, like we’ve done in Figure 2. Take your jitter measurement and plot a trend of the measurement. Then measure the frequency of the trend, and that will point you directly to the culprit (probably Professor Plum in the library with the candlestick).
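That recipe, trend the measurement, then find its frequency, can be sketched in a few lines of Python. Everything here is hypothetical (a 1 MHz clock with a 50 kHz wobble injected on purpose); the point is that a DFT of the period trend points straight at the offending frequency:

```python
import cmath
import math
import random

random.seed(1)
F_CLOCK = 1e6     # hypothetical 1 MHz clock under test
F_SPUR = 50e3     # hypothetical 50 kHz periodic jitter source
N = 1024          # number of period measurements in the trend

# Trend of period measurements: one point per clock cycle, so the trend is
# effectively sampled at F_CLOCK. Each point is the ideal period plus a
# small sinusoidal wobble at F_SPUR plus random jitter.
ideal = 1 / F_CLOCK
trend = [ideal
         + 2e-9 * math.sin(2 * math.pi * F_SPUR * n * ideal)
         + random.gauss(0, 0.2e-9)
         for n in range(N)]

# DFT of the trend; the tallest non-DC bin reveals the jitter frequency.
spectrum = [abs(sum(x * cmath.exp(-2j * math.pi * k * n / N)
                    for n, x in enumerate(trend)))
            for k in range(N // 2)]
peak_bin = max(range(1, N // 2), key=lambda k: spectrum[k])
print(peak_bin * F_CLOCK / N)   # close to 50 kHz, within one DFT bin
```

This is exactly what a scope’s “measurement trend plus FFT” feature does for you, just without the for-loops.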

“Whoa Daniel, that was too much at once. Remind me again how they all relate to each other?” I’m glad you asked; here’s a nice family tree (Figure 3).

Jitter components
Figure 3: Jitter and its components

Jitter Measurements: This probably doesn’t need defining; I just needed a segue. Ok, fine. Jitter measurements are measurements you make to get a better understanding of the jitter you’re dealing with. Here are a few jitter measurements you might care to make:

Time Interval Error: The mother of all jitter measurements. It’s usually reported as an RMS value and describes the difference between each edge’s actual position and its ideal position. Like I said, it’s the mother of all jitter measurements. You might think this is all you need to measure, but there are some other helpful measurements out there.

Period Jitter: Is usually measured as a peak-to-peak value, and yields the difference between the longest and shortest clock periods over a specified amount of time.

Cycle-to-Cycle Jitter: Is also usually measured as a peak-to-peak value, and is the maximum difference between adjacent clock periods. The longer you measure this, the larger it’ll get, so if you want to characterize this for posterity, use a set number of cycles that you measure. Basically, period jitter tells you how bad it is in the long run, and cycle-to-cycle jitter tells you how fast you are going to get there.
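A quick way to internalize those three definitions is to compute them from a list of edge timestamps. Here’s a Python sketch using simulated edges on a hypothetical 10 ns clock (the sigma and counts are made up; real scopes do this on measured edge times):

```python
import math
import random

random.seed(2)
IDEAL_PERIOD = 10e-9   # hypothetical 100 MHz clock
N_EDGES = 10_001

# Simulated rising-edge timestamps: edge n lands near n * IDEAL_PERIOD,
# displaced by Gaussian random jitter (sigma = 20 ps).
edges = [n * IDEAL_PERIOD + random.gauss(0, 20e-12) for n in range(N_EDGES)]

# Time Interval Error: actual edge time minus ideal edge time, reported RMS.
tie = [t - n * IDEAL_PERIOD for n, t in enumerate(edges)]
tie_rms = math.sqrt(sum(e * e for e in tie) / len(tie))

# Period jitter (peak-to-peak): longest minus shortest measured period.
periods = [b - a for a, b in zip(edges, edges[1:])]
period_jitter_pp = max(periods) - min(periods)

# Cycle-to-cycle jitter: largest difference between adjacent periods.
c2c_jitter = max(abs(b - a) for a, b in zip(periods, periods[1:]))

print(tie_rms, period_jitter_pp, c2c_jitter)
```

Notice that the peak-to-peak numbers depend on how many edges you look at (they only grow as you measure longer), which is why specifying the number of cycles matters.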

All of this should be enough to get you started if you want to measure (or just discuss) jitter. If your interest was piqued, or you felt cheated because I didn’t talk about eye diagrams, clock recovery, or phase lock loops, check out this app note on Jitter Analysis written by Johnnie Hancock. It’s really good, but doesn’t have as many jokes. Although fewer jokes are probably a welcome relief by this point. You can also learn more about jitter and jitter measurement tools at keysight.com.

Thanks for reading! If I didn’t coax you into clicking that link (who reads app notes, right?), check out our YouTube channel.

Also, check out some of our other posts! We’ve talked about probing techniques with Kenny Johnson (Splurge, get an active probe; Measure ripple and noise on power supply voltage rails); confusion in Australia and normal triggering with Johnnie Hancock; signal modulation and DIY oscilloscope Bode Plots with Mike Hoffman; and measuring system bandwidth and measuring oscilloscope and probe bandwidth with Taku Furuta.

And of course, Melissa Spencer’s oscilloscope zombie apocalypse survival guide.

This Quick Trick Makes Your Oscilloscope Measurement 1,000 Times Better

Do you want to make sure your oscilloscope measurements are the best they can possibly be?  Don’t settle for just an average measurement; simply scaling your signal properly can dramatically improve measurement quality.  Why? Because both sample rate and the bits of resolution of your oscilloscope play a part in your scope’s measurements.

Sample rate is affected by the oscilloscope’s horizontal scaling.  The equation to remember is:

Sample Rate = Memory Depth/Acquisition Length

Memory depth is a constant value, and the acquisition length (or trace length) is a variable that depends on your time-per-division setting. As the time/division setting increases, the acquisition length increases. Since all of this must fit into the scope’s memory, at a certain point the oscilloscope will have to decrease its sample rate. What does this mean practically? Let’s look at a frequency measurement on a 100 kHz square wave. We know the frequency is precisely 100 kHz and very stable, so we can use the standard deviation of our measurement to judge the quality of the measurement. Figure 1 shows our 100 kHz square wave scaled to be viewed over 20 ms of time. The scope’s sample rate has been automatically decreased from 5 GSa/s down to 100 MSa/s in order to fit the entire trace into the oscilloscope’s memory, and the standard deviation of our measurement is 1.49 kHz (about 1.5%) after around 1,500 measurements.

Figure 1: Frequency measurement on a 100 kHz square wave
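The trade-off above is just that one equation at work. Here’s a minimal sketch of it in Python; the 2 Mpts memory depth and 5 GSa/s cap are assumed values chosen so the numbers line up with this example, and your scope’s datasheet will differ:

```python
# Sample Rate = Memory Depth / Acquisition Length, capped by the ADC's
# maximum rate. Memory depth and max rate below are assumed values.
MEMORY_DEPTH = 2_000_000      # points (assumed)
MAX_SAMPLE_RATE = 5e9         # samples/second (assumed)

def effective_sample_rate(acquisition_length_s: float) -> float:
    """Sample rate the scope can sustain for a given capture window."""
    return min(MAX_SAMPLE_RATE, MEMORY_DEPTH / acquisition_length_s)

print(effective_sample_rate(20e-3))   # 20 ms capture: memory-limited, ~100 MSa/s
print(effective_sample_rate(12e-6))   # 12 us capture: ADC-limited, 5 GSa/s
```

Shrink the capture window and the memory constraint relaxes, which is exactly why the smaller time/div setting below restores the full sample rate.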

But look at what happens when we choose a much smaller time/div setting, effectively shortening the acquisition length and increasing the sample rate. Figure 2 shows the same signal, but horizontally scaled to 1.2 us/div. The standard deviation is now 1.5 Hz, one thousand times smaller than our previous measurement.

Figure 2: The same signal horizontally scaled to 1.2 us/div

All that changed was the horizontal scaling of the signal, and in turn the sample rate of the oscilloscope. So, proper horizontal scaling of your oscilloscope can have a dramatic effect on the quality of your time dependent measurement.

Just as horizontal scaling affects your time-dependent measurements, vertical scaling affects your vertically dependent measurements (peak-to-peak voltage, RMS, etc.). Again, let’s take the same 100 kHz square wave, but instead look at peak-to-peak voltage. Figure 3 shows the signal scaled to 770 mV/div, where the standard deviation of the peak-to-peak measurement is 18 mV. By decreasing the V/div setting on the scope to 66 mV/div, the measurement’s standard deviation becomes 1.22 mV. This is almost a 15x improvement!

Figure 3: By decreasing the V/div setting on the scope, the measurement’s standard deviation becomes 1.22 mV, almost a 15x improvement

Why does the vertical scaling make a difference? By scaling the signal to fill as much of the screen as possible, we are able to take advantage of the oscilloscope’s full bits of resolution. Bits of resolution describes how finely an ADC can quantize its input: the more bits, the more vertical levels the ADC is able to distinguish. For example, the image below shows a two-bit ADC. The red sine wave is the analog input to the ADC, and the blue waveform is the digitized version. As you can see, there are four possible quantization levels.

This image shows a three-bit ADC digitizing the same analog waveform. With more quantization levels, the ADC’s digital output is able to more closely approximate the analog input.

When you vertically scale a signal to fill only a portion of the oscilloscope screen, you are not using the ADC’s bits to their full potential. For example, if you scaled a signal to half of the three-bit ADC’s screen, you would leave two quantization levels unused above your signal and two levels below. Your three-bit ADC could then only use four quantization levels, making it just as precise as a fully utilized two-bit ADC.
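That two-bit-equivalence argument can be checked numerically. Below is a rough mid-rise quantizer model in Python (a simplification for illustration, not how any particular scope’s ADC is built); it shows that a three-bit ADC with the signal filling half its range has the same quantization error, relative to the signal’s amplitude, as a two-bit ADC used full-range:

```python
import math

def quantize(x, bits, full_scale=1.0):
    """Mid-rise uniform quantizer: map x in [-full_scale/2, +full_scale/2]
    to one of 2**bits levels and return the reconstructed voltage."""
    levels = 2 ** bits
    lsb = full_scale / levels
    code = min(levels - 1, max(0, int((x + full_scale / 2) / lsb)))
    return (code + 0.5) * lsb - full_scale / 2

def relative_rms_error(bits, screen_fraction, n=10_000):
    """RMS quantization error, relative to amplitude, for a sine wave
    filling `screen_fraction` of the ADC's input range."""
    amp = screen_fraction / 2  # full scale is 1.0
    err2 = 0.0
    for i in range(n):
        x = amp * math.sin(2 * math.pi * i / n)
        err2 += (quantize(x, bits) - x) ** 2
    return math.sqrt(err2 / n) / amp

print(relative_rms_error(3, 0.5))  # 3-bit ADC, signal fills half the screen
print(relative_rms_error(2, 1.0))  # 2-bit ADC, signal fills the whole screen
```

The two printed values match, and using the three-bit ADC full-range cuts the relative error roughly in half, which is the whole argument for scaling your signal to fill the screen.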

Knowing how to properly scale signals on your oscilloscope can make a dramatic difference in the quality of your measurements. Proper horizontal scaling significantly affects your time-dependent measurements, and proper vertical scaling affects your vertically dependent measurements. Next time you are in front of your scope, remember: good signal scaling makes great measurements!