Modulation is the process of imparting a signal, usually audio, video, or data, onto a high-frequency carrier, for the purpose of transmitting that signal over a distance.
Let’s take a carrier signal, cos(ω_c t), and a modulating signal, cos(ω_m t), where ω = 2 \pi f , and f is the signal frequency.
Amplitude modulation is simply the product of the carrier signal and (1 + modulating signal):

AM(t) = [1 + cos(ω_m t)] \times cos(ω_c t)

which multiplies out as: AM(t) = cos(ω_c t) + cos(ω_c t) \times cos(ω_m t). A carrier modulated by a sine wave is shown in the following example.
Note that such a signal is relatively easy to demodulate: a simple rectifier and low-pass filter will recover the modulation from this signal, as you can visualize by “erasing” the negative portion of the signal and averaging over the remaining waveform. Such a process is called envelope detection.
To analyze the composition of this signal, we take the trig product identity, cos(x)\,cos(y) = \frac{1}{2} [ cos(x-y)+cos(x+y) ], and apply it to the product term in AM(t), producing the following:
\displaystyle AM(t) = cos(ω_c t) + \frac{1}{2} cos(ω_c t - ω_m t) + \frac{1}{2} cos(ω_c t + ω_m t) .
From this, we observe an important aspect of the process: amplitude modulation results in a signal composed of the following three components:
the carrier signal, cos(ω_c t),
a lower sideband signal, \frac{1}{2} cos(ω_c t - ω_m t),
and an upper sideband signal, \frac{1}{2} cos(ω_c t + ω_m t).
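We can check this decomposition numerically. The following sketch (the carrier and modulating frequencies are arbitrary choices for illustration) confirms that the product form and the three-component sum are the same signal:

```python
import numpy as np

# Example frequencies, chosen only for illustration
f_c, f_m = 10.0, 1.0
w_c, w_m = 2 * np.pi * f_c, 2 * np.pi * f_m
t = np.linspace(0, 1, 1000)

# AM as defined: carrier times (1 + modulating signal)
am = np.cos(w_c * t) * (1 + np.cos(w_m * t))

# The three components predicted by the trig identity
carrier = np.cos(w_c * t)
lower_sideband = 0.5 * np.cos((w_c - w_m) * t)
upper_sideband = 0.5 * np.cos((w_c + w_m) * t)

# The two forms are identical, sample for sample
assert np.allclose(am, carrier + lower_sideband + upper_sideband)
```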
By the way, the reason for the “1 + ” term in the modulation equation above is that it specifically generates the carrier component in the modulated signal. Without it, we would have the following Double-Sideband-Suppressed Carrier signal, which should make it apparent that we can’t use a simple envelope detector to demodulate; note how the envelope “crosses over” itself:
An analysis of modulation is aided by using a more complex modulating signal. A ramp signal consists of a fundamental sinusoid and integer harmonics of that fundamental. For illustration purposes, we will use an approximation consisting of the fundamental and the next 8 harmonics. This modulating signal is shown below, as a function of time.
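As a sketch of how such an approximation is built (the fundamental frequency and unit peak amplitude are arbitrary choices; a sawtooth series has the classic 1/n amplitude falloff):

```python
import numpy as np

t = np.linspace(0, 1, 1000, endpoint=False)
f0 = 1.0  # fundamental frequency, an arbitrary choice for illustration

# Ramp approximation: the fundamental plus the next 8 harmonics,
# with the 1/n amplitude falloff characteristic of a sawtooth series
ramp = sum(np.sin(2 * np.pi * n * f0 * t) / n for n in range(1, 10))

# The spectrum contains exactly nine components, with levels falling as 1/n
levels = np.abs(np.fft.rfft(ramp)) / (len(t) / 2)
```

Plotting `levels` reproduces the decreasing-harmonic spectrum described below.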
The spectrum of this signal, i.e., a plot of the frequency components versus level, is shown next; it consists of a fundamental (at “1”), followed by a series of harmonics with decreasing levels.
If we amplitude modulate a carrier with this ramp signal, we get the following time-varying signal; note again that this signal can be demodulated by an envelope detector:
The spectrum of the modulated ramp signal follows; note that there is a carrier at “0” and sidebands extending in both the positive and negative frequency directions. (In practice, this zero point would actually be at some high frequency, such as at 7 MHz for example. The spacing of the individual components in this example would be exactly that of the frequency of the fundamental component of the ramp signal.)
Recall from our earlier discussion that amplitude modulation results in a signal composed of three components: the carrier signal, a lower sideband signal, and an upper sideband signal. Note the following as well: because the lower sideband component has a negative modulating-frequency term (cos(ω_c t - ω_m t), for a sine wave), the spectrum of the lower sideband is reversed compared with that of the upper sideband (and with that of the baseband modulating signal).
We can also see from this example that amplitude modulation is rather wasteful of spectrum space, if our goal is to take up as little bandwidth as possible. For one, the two sidebands are merely reflections of each other, i.e., each carries the same information content. For another, the carrier itself is unnecessary for communicating the modulating signal, and it wastes power on the transmitting side.
Taking that into account, we can choose to transmit only one sideband, resulting in a Single Sideband (SSB) Transmission. If we transmit only the lower sideband, its spectrum will look like this (note that the carrier is also absent):
SSB modulation can be implemented using a variety of methods, including an analog filter, or phase-shift network (PSN) quadrature modulation. (For a clue as to how PSN works, look up and calculate the result of adding cos(x) cos(y) + sin(x) sin(y).)
The challenge in receiving this signal is how to demodulate it, as we can see from its time-domain plot:
As compared with amplitude modulation, an SSB signal cannot be demodulated with an envelope detector, because the envelope is no longer a faithful representation of the original signal. One way to demodulate it is to frequency-shift the signal down to its original range of baseband frequencies, using a product detector that mixes it with the output of a beat-frequency oscillator (BFO).
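Here is a minimal numerical sketch of product detection, using an idealized brick-wall low-pass filter and assumed example frequencies; a real receiver would use an analog or FIR filter instead:

```python
import numpy as np

fs = 10_000                     # sample rate (Hz), chosen for illustration
t = np.arange(fs) / fs          # one second of samples
f_c, f_m = 1000.0, 50.0         # assumed carrier and audio-tone frequencies

# An upper-sideband SSB signal for a single audio tone is just
# a sinusoid at the sum frequency
ssb = np.cos(2 * np.pi * (f_c + f_m) * t)

# Product detector: mix with the BFO (here exactly at the carrier frequency)
mixed = ssb * 2 * np.cos(2 * np.pi * f_c * t)

# Low-pass filter: an ideal brick-wall cut at 500 Hz, done via the FFT
spectrum = np.fft.rfft(mixed)
spectrum[500:] = 0
audio = np.fft.irfft(spectrum)

# The recovered audio is the original 50 Hz tone
assert np.allclose(audio, np.cos(2 * np.pi * f_m * t), atol=1e-9)
```

Shifting the BFO slightly off the carrier frequency in this sketch shifts the recovered tone by the same amount, which is exactly the tuning-error effect described below.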
One can appreciate that, if the demodulator BFO is not exactly at the original carrier frequency, the resulting demodulated signal will be frequency-shifted up or down by the amount of the error, resulting in a kind of “Donald Duck”-sounding voice signal. While this was often an issue with analog transmitters and receivers, whose carrier frequencies were imprecise and would drift over time, modern digital equipment is so accurate that near-perfect re-synchronization is not difficult to achieve.
Learn the basic recording features of the app. Please use the following settings:
mp3 file type, which could involve a Conversion step, as offered within the app;
44 kHz sample rate;
mono;
128kbps bit rate.
A separate microphone on a stand will greatly improve the sound. You’ll need an appropriate adapter to connect the mic to your phone or PC.
Microphone (or phone) placement will greatly affect the quality of the recording. Here is a good non-technical article on How to Mic Woodwinds and Brass.
To get the best tempo synchronization, you’ll need to listen to a reference recording we’ll send you, while recording your part. That means you’ll need two devices: one to play back the reference recording (using headphones), and one to record your part. (There are ways to do this with just one device, but that’s a rather advanced technique that should be left to the more tech-savvy.)
If any part of this is too complicated, let us know, and we’ll either solve the issue, or figure out a simpler (but lesser) alternative. Because of the inherent limitations of the process, the better we make each step, the better the overall result will be.
The advent of low-cost antenna analyzers and vector network analyzers (VNAs) has resulted in a renewed interest in an 80-year-old tool called the Smith chart, a graphical device used to view the characteristics of RF transmission lines, antennas, and matching circuits, and to aid in the design of those systems. But although questions regarding the chart appear in the Amateur Extra License exam, the current Question Pool has a mere 11 questions on the topic, leaving a full appreciation of the tool to the more inquisitive student. Here, then, is a brief introduction to the Smith chart.
Motivation
Why use a seemingly archaic tool such as the Smith chart, when calculators and software are readily available? For one, the chart remains useful as a visualization tool, representing a wealth of information in a very precise and intuitive manner. In addition, the knowledge gained in its use adds to our general knowledge of radio systems. And that’s what ham radio’s all about, right? Otherwise, we may as well use a telephone or the Internet to communicate over large distances! And don’t worry; we’ll keep the equations to a minimum (or in footnotes).
Phillip Smith, 1ANB (SK), was an electrical engineer who graduated from Tufts College in 1928, and then went to work in the radio research department at Bell Telephone Laboratories. Fascinated by the repetitive nature of the impedance variation along a transmission line and its relation to the standing-wave amplitude ratio and wave position, Smith devised an impedance chart in which standing-wave ratios were represented by circles. In 1939, he had an article published in Electronics Magazine describing what later came to be known as the Smith chart.
Derivation
The Smith chart provides a concise way to view a number of different characteristics of RF systems, such as impedance and voltage standing-wave ratio (VSWR). We’ll use an example to show how the Smith chart was developed.
Radio operators are inherently familiar with the concept of standing-wave ratio, i.e., a measure of how well a load is matched to a transmission line. A slightly more obscure concept is that of the reflection coefficient. Notated using the Greek letter gamma (Γ), the reflection coefficient describes how much of an electromagnetic wave is reflected by an impedance discontinuity, such as a mismatched load. (For a perfect termination, Γ = 0; an open termination yields Γ = 1; and a short results in Γ = −1.) Being a complex quantity, Γ can convey both the magnitude and phase of the reflected wave at a particular frequency.^{[1]}^{, [2]}
We can measure Γ by connecting a VNA to a device under test (DUT), “sweeping” a band of interest, and downloading the data. Our DUT for this example is a 50-ohm coaxial transmission line terminated by a 40m OCF dipole.
First, let’s plot the real and imaginary components of Γ for this DUT on a rectilinear graph. See Figure 1: the green marker is at 14.175 MHz.
Reading off the chart, we see the following values:

frequency     Γ
14.000 MHz    -0.12 + j 0.26
14.175 MHz     0.05 + j 0.20
14.350 MHz     0.08 + j 0.07
Because complex numbers can also be represented in polar form, we can also plot Γ on a polar plot, as in Figure 2. On this plot, the same green marker is now at a magnitude (denoted as |Γ|) of about 0.2 (i.e., its distance from the center, or origin), and a phase angle of about 75°. (Note that the curve did not change shape; only the axes of the graph did.) It’s useful to note that the magnitude of the reflection coefficient can never exceed 1.0, as this would imply that power is somehow “created” by the load. This means that Γ is constrained to lie within this circular space, such that |Γ| ≤ 1.
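For example, converting the green-marker value from the table to polar form:

```python
import cmath
import math

# The measured reflection coefficient at the green marker (14.175 MHz)
gamma = complex(0.05, 0.20)

magnitude, phase_rad = cmath.polar(gamma)
print(round(magnitude, 3), round(math.degrees(phase_rad), 1))   # → 0.206 76.0
```

This matches the “about 0.2 at about 75°” read off the polar plot.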
Smith was no doubt very familiar with the polar type of data presentation, for a practical reason. Back in the day, it would have been rather difficult to measure Γ directly as a complex quantity, due to the limitations of test equipment. But it would have been much easier to measure the polar representation: all that was required was an oscilloscope and directional couplers. With these, one could measure the magnitude and phase angle of a wave reflected from a termination, and plot them on a polar graph of the complex reflection coefficient. A simple trigonometric transformation converts these to the rectangular format.^{[1]}
There are several quantities that can be derived from the reflection coefficient. For one, the VSWR is easily calculated from the magnitude of Γ, as VSWR = (1 + |Γ|)/(1 − |Γ|), and is shown in Figure 3.^{[2]}
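As a quick sketch, using the standard relation VSWR = (1 + |Γ|)/(1 − |Γ|) and the green-marker value from the table:

```python
def vswr(gamma):
    """VSWR from a complex reflection coefficient."""
    mag = abs(gamma)
    return (1 + mag) / (1 - mag)

# Green-marker point (14.175 MHz) from the earlier table
print(round(vswr(complex(0.05, 0.20)), 2))   # → 1.52
```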
Another is the complex impedance, which is also easily calculated from the reflection coefficient at that same “reference plane,” i.e., the point at which the measurement is taken. Because this transformation is so elegant (and important), we present it here as Equation 1:

\displaystyle Z_{L} = Z_{0}\cdot \frac{1+\Gamma}{1-\Gamma}
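Applied to the green-marker measurement (a sketch assuming the standard form Z_L = Z_0·(1 + Γ)/(1 − Γ) and the 50-ohm system of our example):

```python
def z_load(gamma, z0=50.0):
    """Load impedance from the reflection coefficient (Equation 1)."""
    return z0 * (1 + gamma) / (1 - gamma)

z = z_load(complex(0.05, 0.20))      # green-marker point at 14.175 MHz
print(f"Z_L = {z.real:.1f} + j{z.imag:.1f} ohms")   # → Z_L = 50.8 + j21.2 ohms
```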
^{[1]} Because RF components behave differently at different frequencies, engineers use complex numbers to describe impedances and related concepts. The impedance measured at the input to an antenna, at a particular frequency, is expressed as a complex number in the form Z = R + jX, where R represents the real value of the impedance (in ohms), X represents the imaginary value (also in ohms), and j is the imaginary constant \sqrt{-1}.
^{[2]} The quantity Γ is also denoted by the scattering parameter s_{11}. S-parameters generally describe the input-output relationship of RF networks; another such s-parameter is s_{21}, which represents the power transferred to a “Port 2” from a “Port 1” of a network.
Applying this transformation to our example data, we can produce a graph of the complex load impedance, as in Figure 4, with some interesting observations. First, although the curve looks similar to that of the reflection coefficient, the shape cannot be the same, for the following reason: as we learned before, the reflection coefficient is bounded to a magnitude of 1, while the impedance can take on values from zero to infinity. Indeed, if Γ is close to 1, the load impedance can be very high, on the order of thousands of ohms.
The other useful observation is that, while Γ can have both positive and negative real and imaginary components, the real (resistive) part of the complex impedance can only be positive,^{[4]} and can range from 0 to +∞, while the imaginary part can range anywhere from −∞ to +∞.
From a practical standpoint, this means that conventional (linear) representations of both reflection coefficient and impedance suffer from a number of shortcomings: impedance plots that cover a large extent (e.g., traversing a region close to Î = 1) would be unwieldy and lack precision; simultaneous plots of different systems would be hard (or impossible) to compare; multiple graphs would be needed to convey information that is intrinsically related; and the conventional representations would be limited to some applications of analysis, while impeding the practicality of system design.
^{[1]} If we denote the phase of Γ by θ, then \Gamma=\left |\Gamma \right |\cdot (\cos \theta +j\cdot \sin \theta) .
^{[3]} Keep in mind that Z_{L} and Γ are complex quantities, so complex arithmetic must be used!
^{[4]} There is no such thing as negative resistance (!), except in tunnel diodes, but that’s another story.
The Smith chart
Phillip Smith understood these limitations, and set out to overcome them. He brilliantly realized that the various concepts of impedance, reflection, and VSWR could become graphically interrelated by means of the right coordinate transformation, which would solve the “infinite extent” problem as well.
Smith’s insight was to start with a rectangular grid of complex impedance, and then warp the axes according to the transformation given in Equation 1. This has the effect of throwing away the negative real (resistance) part, and wrapping the imaginary (reactance) axis so that the points at −∞ and +∞ meet each other. Thus was born his eponymous chart, shown in Figure 5.
The chart consists of a number of internally-tangent circles, and arcs of circles; the number of circles and arcs is simply a function of the desired precision, and charts can also be constructed that “zoom in” on a particular region of interest. Looking first at the circles, we see they are all internally tangent at a point on the right-hand side, as seen in Figure 6. Each of these circles represents a constant real (resistive) component, with the circles ranging from 0 ohms (as indicated at the extreme left of the horizontal line, the real, or resistance, axis) to +∞ ohms (as indicated at the extreme right of the horizontal line).
Intersecting these circles are arcs of circles whose centers lie on a line (not shown) perpendicular to the horizontal line, and which all pass through the point at +∞ ohms. (See Figure 7.) Each of these arcs represents a constant imaginary (reactive) component. The outermost full circle is the imaginary, or reactance, axis.
A few definitions and we’re there. You’ll recall from Equation 1 that the load impedance Z_{L} can be calculated by knowing Γ and the characteristic impedance Z_{0}. We can rewrite this equation in a form that normalizes the impedances, by dividing both sides of the equation by Z_{0}, which produces Equation 2:

\displaystyle z = \frac{Z_{L}}{Z_{0}} = \frac{1+\Gamma}{1-\Gamma}
With this normalization, we can use the Smith chart for any characteristic impedance, as long as we remember that the point at the center represents that characteristic impedance. Inspecting the chart, and from the above definitions, we see that the point at the chart center has a value of 1 + j·0, i.e., 1.
The last component to observe is the outermost scale, which is calibrated in degrees and fractions of a wavelength. Among the useful properties of the Smith chart is the fact that adding a length of transmission line has the effect of rotating the complex reflection coefficient clockwise around the chart center, by an angular amount that is proportional to the added electrical length divided by the operating wavelength.
For example, if we add 4 meters of RG-8X transmission line (with velocity factor 0.82) to a system operating at a wavelength of 21.2 meters, the curve will rotate^{[1]} by the following angle: 720° × 4 / (0.82 × 21.2) ≈ 166°.
^{[1]} One wavelength corresponds to two complete rotations around the Smith chart, hence the 720° factor. Note that adding multiples of ½ wavelength simply brings us back to the same point!
[1] It’s important to realize that the scales on the chart are actually performing the calculation for us; we started with a curve of Γ on a conventional polar graph, and copied that same curve onto the Smith chart, where it now represents Z. It’s only the axes that have changed.
[2] To get the phase of a point, draw a line from the center through the point, and extend it to the outer scales, where you can read it off the 0–180° scale.
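The rotation for the transmission-line example above can be sketched as follows (one electrical wavelength corresponds to 720° of rotation, per the footnote):

```python
def smith_rotation_deg(line_length_m, velocity_factor, wavelength_m):
    """Degrees of clockwise rotation around the Smith chart center
    caused by adding a length of transmission line.
    One electrical wavelength corresponds to 720° (two full turns)."""
    electrical_wavelengths = line_length_m / (velocity_factor * wavelength_m)
    return 720.0 * electrical_wavelengths

# 4 m of RG-8X (velocity factor 0.82) at a 21.2 m operating wavelength
print(round(smith_rotation_deg(4.0, 0.82, 21.2), 1))   # → 165.7
```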
You may have noticed that the VSWR formula given previously looks curiously similar to the normalized impedance formula given at Equation 2, which leads to another wonderful property of the Smith chart: if you take any point on an impedance plot, and rotate it, around the center, to land on the right-hand half of the resistance axis, the value at that axis is the VSWR at that frequency! (See magnified center at Figure 9.) Note how rotating the green dot would intersect the horizontal axis at the value 1.5, which is what we calculated earlier. In fact, any point on the red circle has a VSWR of that same value. So, when analyzing (or designing) a system with a target VSWR, make sure that the normalized impedance values over the entire frequency of interest remain inside such a circle.
We specifically chose a limited range of frequencies to keep the visuals simple; in practice, extending the frequency over a wider range leads to some very interesting results! Note too, that while many of these measurements were originally done using paper-and-pencil (and drafting tools), we can easily do the equivalent by using graphical software that directly interfaces to a modern VNA.
Further Reading
Unfortunately, Smith’s own treatise, Electronic Applications of the Smith Chart in Waveguide Circuit and Component Analysis, published in hardcover in 1969, has been out of print for several years, and used copies fetch a premium price. Readers interested in an in-depth elaboration of the chart, as well as examples of using the chart to design transmission elements like matching networks, are directed to the ARRL Handbook Supplemental Articles and The ARRL Antenna Book. Many online resources are available as well, including an Oral History of Phillip Smith, and the Wikipedia page, Smith Chart.
Afterword
As of this writing, the Smith Chart on the Wikipedia page has a small typo; can you find it?
The author has an original copy of Smith’s book, complete with intact transparent overlays — one of his cherished antiques.
There will be a rare astronomical event this Monday, November 11, 2019, when Mercury passes in front of the sun. We’ll be streaming it live from NJ using a telescope. The event occurs only about 13 times a century.
Click on the player to see the live stream during the event! We’ll also stream the end, at 1:00pm EST, and on the quarter-hour in between, as well. Weather permitting!
Technical information
Questar 3.5 Cassegrain-Maksutov telescope with chromium solar filter
Lumix GH2 camera, 1080p24 source video
KanexPro HDMI/3G-SDI converter
Haivision Makito X video encoder/streamer, down-converted to 720p
HLS playback on HTML5 with Flash fallback for older browsers
Many consumers have long admired the Android operating system as a more “open” alternative to the closed iOS ecosystem. But more and more, Google seems intent on ruining this past advantage. Already, highly admired features, like running apps from an SD card (and getting add-on storage), have been dropped from most phones. And while Google has claimed that change was for “better security,” many observers felt it was simply to upsell more storage.
Now, with its latest rev — Android 10 — Google adds more than 60 new features to the OS. And while some of these again are “security updates,” you can find gobs of information elsewhere that runs them down. BTW, rumor has it they didn’t give this “Q” release a food name, for lack of a good candidate. Really? What’s wrong with “quince,” or “quinoa”?
The rev does come with a host of new issues, however, such as the following:
Dark Mode. Introduced in Android 9 (Pie), Dark Mode provides a sexier theme based on blacks and dark colors. But amazingly, it doesn’t work with Google’s own Gmail or Maps apps. Duh.
Non-erasable Location Cards. This is a really annoying problem: visited-location cards can’t be stopped from re-appearing on the Android Auto startup screen, even if you have no interest in going back to that location. One user complained about the dreadful situation where she had been at a funeral for her mother, and now the damn phone keeps reminding her about that visit ad nauseam. While it may be possible to delete this by blocking all location history, that’s somewhat akin to using a sledgehammer to hang a picture.
Pixel Sensor Broken Issue — Many users are finding that the sensors on their Pixel phones stopped working after the update.
A Memory Leak has been reported that allows a closed app to remain resident in RAM.
Of course, any software update will have its growing pains, along with some workarounds. But when you build a feature like Location Cards and don’t think through something that would be obvious to any user, you’ve got a serious problem somewhere in the product development process. And what’s really inexcusable is that, judging from the posts on Google Help, issues like the Location Card problem have been known to Google since Android Pie — and their staff responds by saying the behavior is “subjective.”
You’d think Google would have the resources to develop features that are useful and not annoying.Â Perhaps 10.1?
Math can be truly awe-inspiring, as in this example of the unexpected places that π can show up. The proof is nothing short of elegant; be sure to watch parts 2 and 3. Astonishing!
The short answer is, no and yes. Some analysts will have you believe that “8K TV blows 4K away,” and that might suggest that you at least want a 4K TV. The reality, as it comes to electronics and perception, is more complicated.
One might assume that higher resolution always makes a picture better, because the pixels get smaller and smaller, to the point where you don’t see them anymore.Â But the human visual system — your eyes — has a finite capacity, and once you exceed this, any other “improvement” is wasted, because it just won’t be seen.
Here’s why (warning, geometry involved):
The term “20/20 vision” is defined as the ability to just distinguish features that subtend one arc-minute of angle (one-sixtieth of a degree). In other words, two objects at a given viewing distance can only be resolved as separate if they are spaced sufficiently far apart.
Using trigonometry, this works out to be about 1/32″ as the smallest separation a person with 20/20 vision can see at a distance of ten feet. We can use the same math to show that the “optimum” distance from which to observe an HD (1080-line) display (i.e., where a 20/20 observer can just resolve the pixels) is about 3 times the picture height.
On a 1080-line monitor with a 15″ diagonal, this works out to an optimum viewing distance of just under two feet; with a 42″ display, it’s about five-and-a-half feet. Sitting closer than this means the pixels will become visible; sitting farther means the resolution is “wasted.” Keep in mind, also, that most people sit about 9 feet away from the TV, what is sometimes called the “Lechner distance,” after a well-known TV systems researcher.
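The arithmetic behind these numbers can be sketched as follows, assuming a 20/20 viewer who resolves one arc-minute:

```python
import math

ARC_MINUTE = math.radians(1 / 60)   # the 20/20 acuity limit

def optimum_distance(lines):
    """Viewing distance (in picture heights) at which a 20/20 viewer
    can just resolve the pixel rows of an n-line display."""
    return 1 / (lines * math.tan(ARC_MINUTE))

print(round(optimum_distance(1080), 1))   # → 3.2
print(round(optimum_distance(2160), 1))   # → 1.6
```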
Of course, these numbers (and others produced by various respectable organizations) are based on subjective evaluation of the human visual system, and different observers will show different results, especially when the target applications vary. Nonetheless, the “three picture heights” rule has survived critical scrutiny for several decades, and we haven’t seen a significant deviation in practice.
At 4K, the optimum distance becomes 1.6 picture heights: at the same 1080-display viewing distance of 5.5 feet, one needs an 84″-diagonal display (7 feet), which is already available. For these reasons, some broadcasters believe that 4K is not a practical viewing format, since displaying 4K images would require viewing at 2.5 picture-heights to match normal human visual acuity.
At 8K, the numbers become absurd for the typical viewer: 0.7 picture heights, or a 195″ diagonal (16 feet) at a 5.5-foot distance. With a smaller display, or at a larger distance, the increased resolution is completely invisible to the viewer: that means wasted pixels (and money). Because such a display is very large (and thus very expensive), the 105-degree viewing angle it would subtend at the above viewing distance approaches a truly immersive and lifelike experience for a viewer — but how many people would put such a beast in their home?
From a production perspective, 4K does make some sense, because an environment that captures all content in 4K, and then processes this content in a 1080p workflow for eventual distribution, will produce archived material at a very high intrinsic quality.Â Of course, there’s a cost associated with that, too.
But there are two other reasons why one might be persuaded to upgrade their HDTV: HDR (High Dynamic Range) and HFR (High Frame Rate). Briefly, HDR increases the dynamic range of video from about 6 stops (64:1) to more than 200,000:1 (17.6 stops), making the detail and contrast appear closer to that of reality. HFR increases the frame rate from the currently-typical 24, 30, or 60 fps to 120 fps. These features make a much more recognizable improvement in pictures — at almost any level of eyesight. But that’s another story.
This is one of my engineering pet peeves — I keep running into students and (false) advertisements that describe a power output in “RMS watts.” The fact is, such a construct, while mathematically possible, has no meaning or relevance in engineering. Power is measured in watts, and while the concepts of average and peak watts are tenable, “RMS power” is a fallacy. Here’s why.
The power dissipated by a resistive load is equal to the square of the voltage across the load, divided by the resistance of the load.Â Mathematically, this is expressed as [Eq.1]:
\large P=\frac{V^{2}}{R}
where P is the power in watts, V is the voltage in volts, and R is the resistance in ohms. When we have a DC signal, calculating the power in the load is straightforward. The complication arises when we have a time-varying signal, such as an alternating current (AC), e.g., an audio signal or an RF signal. In that case, the most elementary time-varying function involved is the sine function.
In Figure 1, the dotted line (green) trace is our 1-volt (peak) sinusoid. (The horizontal axis is in degrees.) The square of this function (the power as a function of time) is the dark blue trace, which is essentially a “raised cosine” function.Â Since the square is always a positive number, we see that the power as a function of time rises and falls as a sinusoid, at twice the frequency of the original voltage.Â This function itself has relatively little use in most applications.
Another quantity is the peak power, which is simply Equation 1 above, where V is taken to be the peak value of the sinusoid, in this case, 1. This is also known as peak instantaneous power (not to be confused with peak envelope power, or PEP). The peak instantaneous power is useful to understand certain limitations of electronic devices, and is expressed as follows:
\large P_{pk}=\frac{V^{2}_{pk}}{R}
A more useful quantity is the average power, which will provide the equivalent heating factor in a resistive device. This is calculated by taking the mean of the square of the voltage signal, divided by the resistance. Since the sinusoidal power function is symmetric about its vertical midpoint, simple inspection (see Figure 1 again) tells us that the mean value is equal to one-half of the peak power [Eq.2]:

\large P_{avg}=\frac{V^{2}_{pk}}{2R}
which in this case is equal to 0.5. We can see this in Figure 1, where the average of the blue trace is the dashed red trace. Thus, our example of a one-volt-peak sinusoid across a one-ohm resistor will result in an average power of 0.5 watts.
Now the concept of “RMS” comes in, which stands for “root-mean-square,” i.e., the square root of the mean of the square of a function. (The “mean” is simply the average.) The purpose of RMS is to present a particular statistical property of that function. In our case, we want to associate a “constant” value with a time-varying function, one that provides a way of describing the “DC-equivalent heating factor” of a sinusoidal signal.
Taking the square root of V^{2}_{pk}/2 therefore provides us with the root-mean-square voltage (not power) across the resistor; in this example, that means that the 1-volt (peak) sinusoid has an RMS voltage of 1/\sqrt{2} \approx 0.7071 volts.

Note that the RMS voltage is what is used to calculate the average power. As a rule, then, we can calculate the RMS voltage of a sinusoid this way:
\large V_{rms} \approx 0.7071 \cdot V_{pk}
Graphically, we can see this in Figure 2:
The astute observer will note that 0.7071 is the value of sin(45°) to four places. This is not a coincidence, but we leave it to the reader to figure out why. Note that for more complex signals, the 0.7071 factor no longer holds. A triangle wave, for example, yields V_{rms} ≈ 0.5774 · V_{pk}, where 0.5774 is the value of tan(30°) to four places.
For those familiar with calculus, the root-mean-square of an arbitrary function f(t) over an interval [0, T] is defined as:

\displaystyle f_{rms} = \sqrt{\frac{1}{T}\int_{0}^{T} f(t)^{2}\, dt}
Replacing f(t) with sin(t) (or an appropriate function for a triangle wave) will produce the numerical results we derived above.
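A quick numerical check of the factors derived above, using one full period of each waveform at 1 volt peak:

```python
import numpy as np

t = np.linspace(0, 1, 1_000_000, endpoint=False)

def rms(x):
    """Root-mean-square: the square root of the mean of the square."""
    return np.sqrt(np.mean(x ** 2))

sine = np.sin(2 * np.pi * t)            # 1 V peak sinusoid
triangle = 2 * np.abs(2 * t - 1) - 1    # 1 V peak triangle wave

print(round(rms(sine), 4))       # → 0.7071
print(round(rms(triangle), 4))   # → 0.5774
```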
Additional thoughts on root-mean-square
Because of the squaring function, one may get the sense that RMS is only relevant for functions that go positive and negative, but this is not true.
RMS can be applied to any set of distributed values, including only-positive ones. Take, for example, the RMS of a rectified (absolute value of a) sine wave. As before, V_{rms} ≈ 0.7071 · V_{pk}, i.e., the RMS is the same as for the full-wave case. However, V_{avg} ≈ 0.6366 · V_{pk} for the rectified wave (but equals zero for the full wave, of course; 0.6366 is the value of 2/π to four places). So, we can take the RMS of a positive-only function, and it can be different from the average of that function.
The general purpose of the RMS function is to calculate a statistical property of a set of data (such as a time-varying signal). So the application is not just to positive-going data, but to any data that varies over the set.
AGC Systems has provided keen and valuable industry analysis for both private and journalism outlets. These include the most respected media that professionals turn to for technical, business, and investment information.