
Given a Continuous Time Signal Determine the Minimum Sampling Rate for a Signal Defined by

Converting analog to digital signals and vice versa

Edmund Lai PhD, BEng, in Practical Digital Signal Processing, 2003

2.2.1 Sampling theorem

The sampling theorem specifies the minimum sampling rate at which a continuous-time signal must be uniformly sampled so that the original signal can be completely recovered, or reconstructed, from these samples alone. This is usually referred to as Shannon's sampling theorem in the literature.

Sampling theorem:

If a continuous-time signal contains no frequency components higher than W Hz, then it can be completely determined by uniform samples taken at a rate of fs samples per second, where

$f_s \geq 2W$

or, in terms of the sampling period T,

$T \leq \frac{1}{2W}$

A signal with no frequency component above a certain maximum frequency is known as a bandlimited signal. Figure 2.4 shows two typical bandlimited signal spectra: one low-pass and one band-pass.

Figure 2.4. Two bandlimited spectra

The minimum sampling rate allowed by the sampling theorem (fs = 2W) is called the Nyquist rate.

It is interesting to note that even though this theorem is usually called Shannon's sampling theorem, it originated with E.T. Whittaker, J.M. Whittaker, and Ferrar, all British mathematicians. In the Russian literature this theorem was introduced to communications theory by Kotel'nikov and took its name from him. C.E. Shannon used it to study what is now known as information theory in the 1940s. Therefore, in the mathematics and engineering literature it is sometimes also called the WKS sampling theorem, after Whittaker, Kotel'nikov, and Shannon.

Source: https://www.sciencedirect.com/science/article/pii/B9780750657983500023

Information Theory and Coding

Richard E. Blahut, in Reference Data for Engineers (Ninth Edition), 2002

The Sampling Theorem

The sampling theorem is an important aid in the design and analysis of communication systems involving the use of continuous-time functions of finite bandwidth. The theorem states that, if a function of time, f(t), contains no frequencies of W hertz or higher, then it is completely determined by giving the value of the function at a series of points spaced 1/(2W) seconds apart. The sampling rate of 2W samples per second is called the Nyquist rate.

If f(t) contains no frequencies of W hertz or higher, then it can be recovered from its samples by the Nyquist-Shannon interpolation formula:

$f(t) = \sum_{n=-\infty}^{+\infty} f\!\left(\frac{n}{2W}\right) \frac{\sin \pi(2Wt - n)}{\pi(2Wt - n)}$
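As an illustration of this interpolation formula, here is a small Python sketch (NumPy assumed; the bandwidth, test tone, and truncation window are arbitrary illustrative choices, since the infinite sum must be truncated in practice):

```python
import numpy as np

# Whittaker-Shannon interpolation: f(t) = sum_n f(n/2W) * sinc(2W*t - n),
# where np.sinc(x) = sin(pi*x)/(pi*x) matches the bracketed term in the formula.
W = 100.0                                    # assumed bandwidth in Hz (illustrative)
f = lambda t: np.cos(2 * np.pi * 60.0 * t)   # test signal, bandlimited well below W

n = np.arange(-400, 401)                     # truncation of the infinite sum
samples = f(n / (2 * W))                     # f(n/2W), taken at the Nyquist rate 2W

t = np.linspace(-0.1, 0.1, 1001)             # reconstruction instants
recon = np.array([np.sum(samples * np.sinc(2 * W * tk - n)) for tk in t])

print("max reconstruction error:", np.max(np.abs(recon - f(t))))
```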

The sampling theorem makes no mention of the time origin of the samples; it is only the spacing of the samples that matters.

If the function f(t) is negligible in magnitude outside a time interval T and has negligible energy at frequencies higher than W hertz, it can be specified by 2TW ordinates. If a Gaussian noise process with rectangular spectrum is sampled at the Nyquist rate, the samples are independent.

Source: https://www.sciencedirect.com/science/article/pii/B9780750672917500273

Digitizing and Sampling Circuits

Dennis L. Feucht, in Handbook of Analog Circuit Design, 1990

12.10 The Sampling Theorem (Nyquist Criterion)

The sampling theorem gives a criterion for recovery of v(t) from v*(t). If ωs is not larger than twice the highest frequency in V(ω), then the frequency-shifted bands of V(ω) overlap (Fig. 12.42) and cannot be separated by filtering. The Nyquist criterion for recoverability of the original continuous signal is

FIG. 12.42. Overlapping spectra of V(ω) due to undersampling. An alias frequency from V1(ω), ωa, at ω1 of V(ω) is indicated.

(12.102) $\omega_s > 2\omega_h$

where ωh is the highest frequency component of V(ω). The original signal is recoverable from its sampled form when the highest frequency component is less than the Nyquist frequency, ωs/2. In Fig. 12.42, the band V1(ω) is a replica of V(ω) centered at ωs. It has frequency components below ωs that overlap with the positive frequency components of V(ω). These are negative frequencies in V(ω) shifted up in frequency by ωs.

The significance of negative frequency components in V(ω) is that they are inverted (180° phase-shifted) from their corresponding positive counterparts. Since the frequency spectrum of V(ω) is symmetric around ω = 0, it is an even function and V(−ω) = V(ω). The phase, however, is an odd function and is negative for ω < 0; for negative n, the angle of cn, from (12.89), is −nωst, so the angle for −n is the negative of the angle for n.

In Fig. 12.42, V(ω) and V1(ω) are symmetrical around the Nyquist frequency. In effect, V has been folded over at ωs/2. The larger ωh is, the further back toward lower frequencies the folding extends. These folded frequency components from V1 are alias frequencies in v*(t) and have a frequency of ωa relative to ωs.

In Fig. 12.43, V(ω) has one frequency component at ω = (3/4)ωs. The samples also fit a sinusoid of ω = −(1/4)ωs, an alias frequency within the band of V(ω). The alias sinusoid is inverted relative to that of V1 because its frequency is negative.

FIG. 12.43. Aliasing in the time domain. The discrete samples fit sinusoids of two frequencies. The alias is inverted, being a negative frequency.

More generally, if ω1 of V(ω) is sampled at ωs, then from Fig. 12.42,

(12.103) $\omega_1 = \omega_s - (-\omega_a) = \omega_s + \omega_a$

and

(12.104) $\text{alias frequency} = \omega_a = \omega_1 - \omega_s$

In Fig. 12.43, sinusoids of both ω1 of V and ωa of V1 fit the sample points. The discrete samples of v(t) are too few per cycle to eliminate ωa; v(t) is undersampled. The sampling theorem requires more than two samples per cycle for recovery of v(t). Such a v(t) is oversampled.
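A minimal numerical check of this relationship (Python with NumPy; the sampling rate is an arbitrary illustrative choice, and ordinary frequencies in Hz are used rather than radian frequencies): a sinusoid at (3/4)fs produces exactly the same sample values as one at the alias frequency (3/4)fs − fs = −(1/4)fs.

```python
import numpy as np

fs = 1000.0                 # assumed sampling rate in Hz (illustrative)
f1 = 0.75 * fs              # signal frequency above fs/2: undersampled
fa = f1 - fs                # alias frequency, here -0.25*fs (negative -> inverted sinusoid)

n = np.arange(32)
t = n / fs                  # sample instants

x1 = np.sin(2 * np.pi * f1 * t)     # samples of the original sinusoid
xa = np.sin(2 * np.pi * fa * t)     # samples of the (negative-frequency) alias

# Both frequency interpretations fit the same discrete samples.
print(np.allclose(x1, xa))          # True
```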

Recovery of V(ω) from V*(ω) for oversampled signals is achieved by a low-pass filter (LPF) that passes only V(ω). The ideal filter, H(ω), is shown in Fig. 12.44a. It has an immediate cutoff just above ωh. The ideal maximum-bandwidth filter has a cutoff at the Nyquist frequency.

FIG. 12.44. The ideal low-pass antialiasing filter H(ω) is a spectral "pulse" (a) corresponding to a sinc-function convolver, or interpolator, in t (b), where * is the convolution operator.

In the time domain, this filter function transforms into a sinc function (Fig. 12.44b). Since nonzero sinc values extend to t = −∞, it is noncausal and can only be approximated by realizable circuits. The pulse shape of the ideal LPF transforms into a sinc function in t, just as a pulse in the time domain does in ω. H(ω) is multiplied by V*(ω) in ω to recover V(ω); in t, h(t) is convolved with v*(t) to produce v(t). For bandlimited v(t),

(12.105) $v(t) = \sum_{k=-\infty}^{\infty} v(kT_s)\, \mathrm{sinc}\!\left(\frac{\omega_s}{2}(t - kT_s)\right), \qquad -\frac{\omega_s}{2} < \omega < \frac{\omega_s}{2}$

The sinc function acts as an interpolator, filling in the missing values of v(t).

Our final derivation is the spectrum of a zero-order hold (ZOH). This is the frequency response of an S/H. In the s-domain, a ZOH can be regarded as an integrator of weighted impulses, producing v̂(t) in Fig. 12.45a. This is the typical waveform from an S/H or DAC. This integrator signal is periodic at the sampling rate. An integrator in s is 1/s. A periodic integrator is constructed by integrating for Ts, or

FIG. 12.45. The DAC output (or ADC input) v̂(t) in (a) is the zero-order-hold response to v(t). The zero-order-hold frequency-response magnitude H0(ω) is the |sinc| function shown in (b).

(12.106) $\mathrm{ZOH:}\quad H_0(s) = \frac{1}{s} - \frac{1}{s}\, e^{-sT_s} = \frac{1 - e^{-sT_s}}{s}$

In the time domain, this is a unit step turned off Ts later, or

(12.107) $\mathrm{ZOH:}\quad u(t) - u(t - T_s)$

The Laplace transform of (12.107) is (12.106). The frequency response of H0 is found by letting s = jω. Then

(12.108) $H_0(j\omega) = \frac{1 - e^{-j\omega T_s}}{j\omega} = T_s\, \mathrm{sinc}\!\left(\frac{\omega T_s}{2}\right) e^{-j\omega T_s / 2}$

Then

(12.109) $|H_0(j\omega)| = T_s \left|\mathrm{sinc}\!\left(\frac{\omega T_s}{2}\right)\right|, \qquad \angle H_0(j\omega) = -\frac{\omega T_s}{2}$

Once again, the sinc function appears. The magnitude plot of the frequency response is shown in Fig. 12.45b. The phase response is linear and only time-shifts the output. The phase delay can be seen in Fig. 12.45a by noting that a best fit of v(t) to v̂(t) requires v(t) to be shifted to the right (delayed in time) by half a step, or by Ts/2, as (12.109) predicts. Ideal recovery of v(t) from v̂(t) requires an inverse sinc filter, or sinc compensator. This compensator can be implemented in either digital or analog form. It is digital if it precedes a DAC or follows an ADC and analog if it follows a DAC or precedes an ADC.
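The zero-order-hold magnitude response in (12.109) is easy to evaluate numerically; the short sketch below (Python/NumPy; the sampling rate is an arbitrary illustrative choice) computes Ts|sinc(ωTs/2)| and the droop at the Nyquist frequency that a sinc compensator would have to correct.

```python
import numpy as np

fs = 48e3                      # assumed sampling rate (illustrative)
Ts = 1.0 / fs

def zoh_mag(w):
    """|H0(jw)| = Ts * |sinc(w*Ts/2)| with the unnormalized sinc(x) = sin(x)/x."""
    return Ts * np.abs(np.sinc((w * Ts / 2.0) / np.pi))   # np.sinc is normalized by pi

w_nyq = np.pi * fs             # the Nyquist frequency, omega_s / 2, in rad/s
droop_db = 20 * np.log10(zoh_mag(w_nyq) / zoh_mag(0.0))
print(f"ZOH droop at the Nyquist frequency: {droop_db:.2f} dB")   # roughly -3.92 dB
```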

Source: https://www.sciencedirect.com/science/article/pii/B9780122542404500175

Data Transmission Media

John S. Sobolewski, in Encyclopedia of Physical Science and Technology (Third Edition), 2003

VI.E Pulse Code Modulation

In pulse amplitude modulation, the amplitude of the pulse can assume any value between zero and some maximum value. Pulse code modulation (PCM) is derived from PAM but is distinguished from the latter by two additional signal-processing steps, called quantizing and encoding, that take place before the signal is transmitted. Quantizing replaces the exact amplitude of the samples with the nearest value from a limited set of specific amplitudes. The sample amplitude is then encoded, and the codes are transmitted typically as binary codes. This means that, unlike other modulation techniques described so far, in PCM both the sampling time and the amplitude are in discrete form.

Representing an exact sample amplitude by one of 2^n predetermined and discrete amplitudes introduces an error called quantization noise, which can be made negligible by using a sufficiently large number of quantizing levels. Studies have shown that using 8 bits per sample to represent one of 256 quantizing levels provides a satisfactory signal-to-noise ratio for speech signals (see Rey, 1983). The sampling rate is usually determined from the sampling theorem, which states that a baseband (information) signal of finite energy with no frequency components higher than W Hz is completely specified by the amplitudes of its samples taken at a rate of 2W samples per second. The corollary of the sampling theorem states that a baseband analog channel can be used to transmit a train of independent pulses at a maximum rate that is twice the channel bandwidth W. These results are important in determining appropriate sample rates and bandwidths for conversion between analog and digital signals.

Applying the sampling theorem to speech signals that are limited to 4000 Hz, we find that they need to be sampled 8000 times/sec to be completely specified. Using PCM with 8 bits to represent one of 256 discrete amplitude levels, 8 × 8000 or 64,000 bits/sec are required to transmit the 4000-Hz voice signal. If we now use the corollary to the sampling theorem, we find that a channel with a bandwidth of 32,000 Hz is required to transmit the 64,000 bits/sec needed to specify the 4000-Hz voice signal (this arithmetic is sketched in code after the list below). Although it is true that PCM requires more bandwidth than the baseband analog signal (32,000 Hz bandwidth for the 4000-Hz voice signal in the above example), this is more than offset by the following:

1. PCM has very high immunity to noise.

2. PCM repeater design is relatively simple.

3. The PCM signal can be completely reconstructed at each repeater location by a process called regeneration.

4. PCM provides a uniform modulation technique suitable for other signals on many different types of media, including wire, coaxial cable, free space, and optical fibers.

5. PCM is compatible with time division multiplexing.
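The arithmetic in the preceding example can be summarized in a few lines of Python (the numbers are exactly those quoted above for a 4000-Hz voice channel):

```python
# PCM rate and bandwidth for a voice signal, following the text's example.
W = 4000                      # highest signal frequency in Hz
bits_per_sample = 8           # 2**8 = 256 quantizing levels

sample_rate = 2 * W                       # Nyquist rate: 8000 samples/sec
bit_rate = bits_per_sample * sample_rate  # 64,000 bits/sec
# Corollary: a baseband channel of bandwidth B carries at most 2B independent
# pulses per second, so the required channel bandwidth is bit_rate / 2.
channel_bw = bit_rate / 2                 # 32,000 Hz

print(sample_rate, bit_rate, channel_bw)  # 8000 64000 32000.0
```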

Source: https://www.sciencedirect.com/science/article/pii/B0122274105001654

Image Acquisition

E.R. Davies, in Computer and Machine Vision (Fourth Edition), 2012

25.4 The Sampling Theorem

The Nyquist sampling theorem underlies all situations where continuous signals are sampled and is especially important where patterns are to be digitized and analyzed by computers. This makes it highly relevant both with visual patterns and with acoustic waveforms, hence it is described briefly in this section.

Consider the sampling theorem first in respect of a 1-D time-varying waveform. The theorem states that a sequence of samples (Fig. 25.9) of such a waveform contains all the original information and can be used to regenerate the original waveform exactly, but only if (a) the bandwidth W of the original waveform is restricted and (b) the rate of sampling f is at least twice the bandwidth of the original waveform—i.e., f≥2W. Assuming that samples are taken every T seconds, this means that 1/T≥2W.

Figure 25.9. The process of sampling a time-varying signal: a continuous time-varying 1-D signal is sampled by narrow sampling pulses at a regular rate fr = 1/T, which must be at least twice the bandwidth of the signal.

At first it may be somewhat surprising that the original waveform can be reconstructed exactly from a set of discrete samples. However, the two conditions for achieving this are very stringent. What they are demanding in effect is that the signal must not be permitted to change unpredictably (i.e., at too fast a rate), else accurate interpolation between the samples will not prove possible (the errors that arise from this source are called "aliasing" errors).

Unfortunately, the first condition is virtually unrealizable, since it is close to impossible to devise a low-pass filter with a perfect cut-off. Recall from Chapter 3 that a low-pass filter with a perfect cut-off will have infinite extent in the time domain, so any attempt at achieving the same effect by time domain operations must be doomed to failure. However, acceptable approximations can be achieved by allowing a "guard band" between the desired and actual cut-off frequencies. This means that the sampling rate must be higher than the Nyquist rate (in telecommunications, satisfactory operation can generally be achieved at sampling rates around 20% above the Nyquist rate—see Brown and Glazier, 1974).
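As one possible illustration of such a guard band (not from the text; Python with NumPy and SciPy assumed, and the filter order, test signal, and frequencies are arbitrary choices), a realizable low-pass presampling filter can be applied before sampling at a rate roughly 20% above the Nyquist rate:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

W = 3400.0                 # desired signal bandwidth in Hz (illustrative)
fs = 1.2 * (2 * W)         # sample ~20% above the Nyquist rate, leaving a guard band

# Dense grid standing in for the continuous-time signal.
fs_analog = 50 * fs
t = np.arange(0, 0.05, 1 / fs_analog)
x = np.sin(2 * np.pi * 1000 * t) + 0.5 * np.sin(2 * np.pi * 6000 * t)  # 6 kHz is out of band

# Realizable presampling filter: cutoff at W, transition band falls inside the guard band.
sos = butter(8, W, btype="low", fs=fs_analog, output="sos")
x_filtered = sosfiltfilt(sos, x)

# Sample the filtered signal at fs.
step = int(round(fs_analog / fs))
samples = x_filtered[::step]
print(len(samples), "samples at", fs, "Hz")
```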

One way of recovering the original waveform is by applying a low-pass filter. This approach is intuitively correct, since it acts in such a way as to broaden the narrow discrete samples until they coalesce and sum to give a continuous waveform. Indeed, this method acts in such a way as to eliminate the "repeated" spectra in the transform of the original sampled waveform (Fig. 25.10). This in itself shows why the original waveform has to be narrow-banded before sampling, so that the repeated and basic spectra of the waveform do not cross over each other and become impossible to separate with a low-pass filter. The idea may be taken further because the Fourier transform of a square cut-off filter is the sinc (sin u/u) function (Fig. 25.11). Hence, the original waveform may be recovered by convolving the samples with the sinc function (which in this case means replacing them by sinc functions of corresponding amplitudes). This has the effect of broadening out the samples as required, until the original waveform is recovered.

Figure 25.10. Effect of low-pass filtering to eliminate repeated spectra in the frequency domain (fr, sampling rate; L, low-pass filter characteristic). This diagram shows the repeated spectra of the frequency transform F(f) of the original sampled waveform. It also demonstrates how a low-pass filter can be expected to eliminate the repeated spectra to recover the original waveform.

Figure 25.11. The sinc (sin u/u) function shown in (b) is the Fourier transform of a square pulse (a) corresponding to an ideal low-pass filter. In this case, u = 2πfct, fc being the cut-off frequency.

So far we have considered the situation only for 1-D time-varying signals. However, recalling that there is an exact mathematical correspondence between time and frequency domain signals on the one hand and spatial and spatial frequency signals on the other, the above ideas may all be applied immediately to each dimension of an image (although the condition for accurate sampling now becomes 1/X ≥ 2WX, where X is the spatial sampling period and WX is the spatial bandwidth). Here we accept this correspondence without further discussion and proceed to apply the sampling theorem to image acquisition.

Consider next how the signal from a camera may be sampled rigorously according to the sampling theorem. First, note that this has to be achieved both horizontally and vertically. Perhaps the most obvious solution to this problem is to perform the process optically, perhaps by defocusing the lens; however, the optical transform function for this case is frequently (i.e., for extreme cases of defocusing) very odd, going negative for some spatial frequencies and causing contrast reversals; hence, this solution is far from ideal (Pratt, 2001). Alternatively, we could use a diffraction-limited optical system or perhaps pass the focussed beam through some sort of patterned or frosted glass to reduce the spatial bandwidth artificially. None of these techniques will be particularly easy to apply, nor (apart possibly from the second) will it give accurate solutions. However, this problem is not as serious as might be imagined. If the sensing region of the camera (per pixel) is reasonably large, and close to the size of a pixel, then the averaging inherent in obtaining the pixel intensities will in fact perform the necessary narrow-banding (Fig. 25.12). To analyze the situation in more detail, note that a pixel is essentially square with a sharp cut-off at its borders. Thus its spatial frequency pattern is a 2-D sinc function, which (taking the central positive peak) approximates to a low-pass spatial frequency filter. This approximation improves somewhat as the border between pixels becomes more fuzzy.

Figure 25.12. Low-pass filtering carried out by averaging over the pixel region: an image with local high-frequency banding is to be averaged over the whole pixel region by the action of the sensing device.
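A small sketch of this averaging argument (Python/NumPy; the synthetic scene, pixel size, and banding period are arbitrary illustrative choices): integrating over the pixel area before subsampling acts as a crude spatial low-pass filter, whereas point sampling lets the fine banding alias into a coarse pattern.

```python
import numpy as np

# Synthetic scene with fine vertical banding (high spatial frequency).
x = np.arange(256)
scene = 0.5 + 0.5 * np.sin(2 * np.pi * x / 3.0)     # banding with a period of 3 scene units
scene = np.tile(scene, (256, 1))

pixel = 8   # each camera pixel integrates an 8x8 region of the scene

# Point sampling (worst case: infinitely narrow samples) -> banding aliases to a coarse pattern.
point_sampled = scene[::pixel, ::pixel]

# Pixel-area averaging -> the fine banding is largely averaged out (narrow-banding).
averaged = scene.reshape(256 // pixel, pixel, 256 // pixel, pixel).mean(axis=(1, 3))

print("std of point-sampled image:", point_sampled.std())   # large: aliased high-contrast pattern
print("std of area-averaged image:", averaged.std())        # much smaller: banding mostly removed
```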

The point here is that the worst case from the point of view of the sampling theorem is that of extremely narrow discrete samples, but clearly this worst case is most unlikely to occur with most cameras. However, this does not mean that sampling is automatically ideal—and indeed it is not, since the spatial frequency pattern for a sharply defined pixel shape has (in principle) infinite extent in the spatial frequency domain. The review by Pratt (2001) clarifies the situation and shows that there is a tradeoff between aliasing and resolution error. Overall, quality of sampling will be one of the limiting factors if the greatest precision in image measurement is aimed for: if the bandwidth of the presampling filter is too low, resolution will be lost; if it is too high, aliasing distortions will creep in; and if its spatial frequency response curve is not suitably smooth, a guard band will have to be included and performance will again suffer.

Source: https://www.sciencedirect.com/science/article/pii/B9780123869081000252

Image Acquisition

E.R. Davies, in Machine Vision (Third Edition), 2005

27.4 The Sampling Theorem

The Nyquist sampling theorem underlies all situations where continuous signals are sampled and is especially important where patterns are to be digitized and analyzed by computers. This makes it highly relevant both with visual patterns and with acoustic waveforms. Hence, it is described briefly in this section.

Consider the sampling theorem first in respect of a 1-D time-varying waveform. The theorem states that a sequence of samples (Fig. 27.9) of such a waveform contains all the original information and can be used to regenerate the original waveform exactly, but only if (1) the bandwidth W of the original waveform is restricted and (2) the rate of sampling f is at least twice the bandwidth of the original waveform—that is, f ≥ 2W. Assuming that samples are taken every T seconds, this means that 1/T ≥ 2W.

Figure 27.9. The process of sampling a time-varying signal. A continuous time-varying 1-D signal is sampled by narrow sampling pulses at a regular rate fr = 1/T, which must be at least twice the bandwidth of the signal.

At first, it may be somewhat surprising that the original waveform can be reconstructed exactly from a set of discrete samples. However, the two conditions for achieving perfect reconstruction are very stringent. What they are demanding in effect is that the signal must not be permitted to change unpredictably (i.e., at too fast a rate) or else accurate interpolation between the samples will not prove possible (the errors that arise from this source are called "aliasing" errors).

Unfortunately, the first condition is virtually unrealizable, since it is nearly impossible to devise a low-pass filter with a perfect cutoff. Recall from Chapter 3 that a low-pass filter with a perfect cutoff will have infinite extent in the time domain, so any attempt at achieving the same effect by time domain operations must be doomed to failure. However, acceptable approximations can be achieved by allowing a "guard-band" between the desired and actual cutoff frequencies. This means that the sampling rate must therefore be higher than the Nyquist rate. (In telecommunications, satisfactory operation can generally be achieved at sampling rates around 20% above the Nyquist rate—see Brown and Glazier, 1974.)

One way to recover the original waveform is to apply a low-pass filter. This approach is intuitively correct because it acts in such a way as to broaden the narrow discrete samples until they coalesce and sum to give a continuous waveform. This method acts in such a way as to eliminate the "repeated" spectra in the transform of the original sampled waveform (Fig. 27.10). This in itself shows why the original waveform has to be narrow-banded before sampling—so that the repeated and basic spectra of the waveform do not cross over each other and become impossible to separate with a low-pass filter. The idea may be taken further because the Fourier transform of a square cutoff filter is the sinc (sin u/u) function (Fig. 27.11). Hence, the original waveform may be recovered by convolving the samples with the sinc function (which in this case means replacing them by sinc functions of corresponding amplitudes). This broadens out the samples as required, until the original waveform is recovered.

Figure 27.10. Effect of low-pass filtering to eliminate repeated spectra in the frequency domain (fr, sampling rate; L, low-pass filter characteristic). This diagram shows the repeated spectra of the frequency transform F(f) of the original sampled waveform. It also demonstrates how a low-pass filter can be expected to eliminate the repeated spectra to recover the original waveform.

Figure 27.11. The sinc (sin u/u) function shown in (b) is the Fourier transform of a square pulse (a) corresponding to an ideal low-pass filter. In this case, u = 2πfct, fc being the cutoff frequency.

So far we have considered the situation only for 1-D time-varying signals. However, recalling that an exact mathematical correspondence exists between time and frequency domain signals on the one hand and spatial and spatial frequency signals on the other, the above ideas may all be applied immediately to each dimension of an image (although the condition for accurate sampling now becomes 1/X ≥ 2WX, where X is the spatial sampling period and WX is the spatial bandwidth). Here we accept this correspondence without further discussion and proceed to apply the sampling theorem to image acquisition.

Consider next how the signal from a TV camera may be sampled rigorously according to the sampling theorem. First, it is plain that the analog voltage comprising the time-varying line signals must be narrow-banded, for example, by a conventional electronic low-pass filter. However, how are the images to be narrow-banded in the vertical direction? The same question clearly applies for both directions with a solid-state area camera. Initially, the most obvious solution to this problem is to perform the process optically, perhaps by defocussing the lens. However, the optical transform function for this case is frequently (i.e., for extreme cases of defocusing) very odd, going negative for some spatial frequencies and causing contrast reversals; hence, this solution is far from ideal (Pratt, 2001). Alternatively, we could use a diffraction-limited optical system or perhaps pass the focused beam through some sort of patterned or frosted glass to reduce the spatial bandwidth artificially. None of these techniques will be particularly easy to apply nor will accurate solutions be likely to result. However, this problem is not as serious as might be imagined. If the sensing region of the camera (per pixel) is reasonably large, and close to the size of a pixel, then the averaging inherent in obtaining the pixel intensities will in fact perform the necessary narrow-banding (Fig. 27.12). To analyze the situation in more detail, note that a pixel is essentially square with a sharp cutoff at its borders. Thus, its spatial frequency pattern is a 2-D sinc function, which (taking the central positive peak) approximates to a low-pass spatial frequency filter. This approximation improves somewhat as the border between pixels becomes fuzzier.

Figure 27.12. Low-pass filtering carried out by averaging over the pixel region. An image with local high-frequency banding is to be averaged over the whole pixel region by the action of the sensing device.

The point here is that the worst case from the point of view of the sampling theorem is that of extremely narrow discrete samples, but this worst case is unlikely to occur with most cameras. However, this does not mean that sampling is automatically ideal—and indeed it is not, since the spatial frequency pattern for a sharply defined pixel shape has (in principle) infinite extent in the spatial frequency domain. The review by Pratt (2001) clarifies the situation and shows that there is a tradeoff between aliasing and resolution error. Overall, it is underlined here that quality of sampling will be one of the limiting factors if one aims for the greatest precision in image measurement. If the bandwidth of the presampling filter is too low, resolution will be lost; if it is too high, aliasing distortions will creep in; and if its spatial frequency response curve is not suitably smooth, a guard band will have to be included and performance will again suffer.

Source: https://www.sciencedirect.com/science/article/pii/B9780122060939500307

Digital Communication System Concepts

Vijay K. Garg, Yih-Chen Wang, in The Electrical Engineering Handbook, 2005

2.3 Sampling Process

Analog information must be transformed into a digital format. The process starts with sampling the waveform to produce a discrete pulse-amplitude-modulated waveform (see Figure 2.3). The sampling process is usually described in the time domain. This is an operation that is basic to digital signal processing and digital communication. Using the sampling process, we convert the analog signal into a corresponding sequence of samples that are usually spaced uniformly in time. The sampling process can be implemented in several ways, the most popular being the sample-and-hold operation. In this operation, a switch and storage mechanism (such as a transistor and a capacitor, or a shutter and a film strip) form a sequence of samples of the continuous input waveform. The output of the sampling process is called pulse amplitude modulation (PAM) because the successive output intervals can be described as a sequence of pulses with amplitudes derived from the input waveform samples. The analog waveform can be approximately retrieved from a PAM waveform by simple low-pass filtering, provided we choose the sampling rate properly. The ideal form of sampling is called instantaneous sampling.

FIGURE 2.3. Sampling Process

We sample the signal g(t) instantaneously at a uniform rate of fs, once every Ts seconds. Thus, we can write:

(2.1) $g_\delta(t) = \sum_{n=-\infty}^{\infty} g(nT_s)\,\delta(t - nT_s),$

where gδ(t) is the ideal sampled signal and δ(t − nTs) is the delta function positioned at time t = nTs.

A delta function is closely approximated by a rectangular pulse of duration Δt and amplitude g(nTs)/Δt; the smaller we make Δt, the better will be the approximation. In the frequency domain, the spectrum of the ideally sampled signal is:

(2.2) $G_\delta(f) = f_s \sum_{m=-\infty}^{\infty} G(f - m f_s),$

where G(f) is the Fourier transform of the original signal g(t) and fs is the sampling rate.

Equation 2.2 states that the process of uniformly sampling a continuous-time signal of finite energy results in a periodic spectrum with a period equal to the sampling rate.

Taking the Fourier transform of both sides of Equation 2.1, and noting that the Fourier transform of the delta function δ(t − nTs) is $e^{-j 2\pi n f T_s}$, gives:

(2.3) $G_\delta(f) = \sum_{n=-\infty}^{\infty} g(nT_s)\, e^{-j 2\pi n f T_s}.$

Equation 2.3 is called the discrete-time Fourier transform. It is the complex Fourier series representation of the periodic frequency function Gδ(f), with the sequence of samples g(nTs) defining the coefficients of the expansion.
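A short numerical sketch of Equation 2.3 (Python/NumPy; the test signal, sampling rate, and truncation are illustrative assumptions): the discrete-time Fourier transform sum can be evaluated directly from the samples g(nTs), and its value repeats with period fs, as Equation 2.2 asserts.

```python
import numpy as np

fs_rate = 1000.0                    # assumed sampling rate (Hz)
Ts = 1.0 / fs_rate
n = np.arange(-500, 500)            # finite block of samples approximating the infinite sum

g = lambda t: np.exp(-(t / 0.05) ** 2) * np.cos(2 * np.pi * 50 * t)   # finite-energy test signal
samples = g(n * Ts)

def G_delta(f):
    """Discrete-time Fourier transform of Eq. (2.3): sum_n g(nTs) exp(-j 2 pi n f Ts)."""
    return np.sum(samples * np.exp(-2j * np.pi * n * f * Ts))

# Periodicity with period fs: G_delta(f) == G_delta(f + fs), consistent with Eq. (2.2).
print(np.isclose(G_delta(120.0), G_delta(120.0 + fs_rate)))    # True
```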

We consider any continuous-time signal g(t) of finite energy and infinite duration. The signal is strictly band-limited, with no frequency component higher than W Hz. This implies that the Fourier transform G(f) of the signal g(t) has the property that G(f) is zero for |f| ≥ W. If we choose the sampling period Ts = 1/(2W), then the corresponding spectrum is given as:

(2.4) $G_\delta(f) = \sum_{n=-\infty}^{\infty} g\!\left(\frac{n}{2W}\right) e^{-j\pi n f / W} = f_s G(f) + f_s \sum_{m=-\infty,\, m \neq 0}^{\infty} G(f - m f_s)$

Consider the following two conditions:

(1) G(f) = 0 for |f| ≥ W.

(2) fs = 2W.

Applying these conditions to Equation 2.4, we find:

(2.5) $G(f) = \frac{1}{2W} G_\delta(f) = \frac{1}{2W} \sum_{n=-\infty}^{\infty} g\!\left(\frac{n}{2W}\right) e^{-j\pi n f / W}, \qquad -W < f < W.$

Thus, if the sample values g(n/(2W)) of a signal g(t) are specified for all n, then the Fourier transform G(f) of the signal is uniquely determined by using the discrete-time Fourier transform of Equation 2.5. Because g(t) is related to G(f) by the inverse Fourier transform, it follows that the signal g(t) is itself uniquely determined by the sample values g(n/(2W)) for −∞ < n < ∞. In other words, the sequence {g(n/(2W))} has all the information contained in g(t).

We state the sampling theorem for band-limited signals of finite energy in two parts that apply to the transmitter and receiver of a pulse modulation system, respectively.

(1) A band-limited signal of finite energy with no frequency components higher than W Hz is completely described by specifying the values of the signal at instants of time separated by 1/(2W) sec.

(2) A band-limited signal of finite energy with no frequency components higher than W Hz may be completely recovered from a knowledge of its samples taken at the rate of 2W samples per second.

This is also known as the uniform sampling theorem. The sampling rate of 2W samples per second for a signal of bandwidth W Hz is called the Nyquist rate, and 1/(2W) sec is called the Nyquist interval.

We discuss the sampling theorem by assuming that the signal g(t) is strictly band-limited. In practice, however, an information-bearing signal is not strictly band-limited, with the result that some degree of undersampling is encountered. Consequently, some aliasing is produced by the sampling process. Aliasing refers to the phenomenon of a high-frequency component in the spectrum of the signal seemingly taking on the identity of a lower frequency in the spectrum of its sampled version.

Source: https://www.sciencedirect.com/science/article/pii/B9780121709600500700

Discrete Revolution

Stéphane Mallat, in A Wavelet Tour of Signal Processing (Third Edition), 2009

Removal of Aliasing

To apply the sampling theorem, f is approximated by the closest signal $\tilde f$, the Fourier transform of which has support in [−π/s, π/s]. The Plancherel formula (2.26) proves that

$\|f - \tilde f\|^2 = \frac{1}{2\pi} \int_{-\infty}^{+\infty} |\hat f(\omega) - \hat{\tilde f}(\omega)|^2\, d\omega = \frac{1}{2\pi} \int_{|\omega| > \pi/s} |\hat f(\omega)|^2\, d\omega + \frac{1}{2\pi} \int_{|\omega| \leq \pi/s} |\hat f(\omega) - \hat{\tilde f}(\omega)|^2\, d\omega.$

This distance is minimum when the second integral is zero and therefore

(3.12) $\hat{\tilde f}(\omega) = \hat f(\omega)\, \mathbf{1}_{[-\pi/s,\, \pi/s]}(\omega) = \frac{1}{s}\, \hat\varphi_s(\omega)\, \hat f(\omega).$

It corresponds to $\tilde f = \frac{1}{s}\, f \star \varphi_s$.

The filtering of f by φs avoids aliasing by removing any frequency larger than π/s. Since $\hat{\tilde f}$ has support in [−π/s, π/s], the sampling theorem proves that $\tilde f(t)$ can be recovered from the samples $\tilde f(ns)$. An analog-to-digital converter is therefore composed of a filter that limits the frequency band to [−π/s, π/s], followed by a uniform sampling at interval s.
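A minimal FFT-based sketch of this prefiltering step (Python/NumPy; the signal, grid, and sampling interval are illustrative assumptions, and the ideal filter is applied on a finite grid rather than in continuous time):

```python
import numpy as np

# Fine grid standing in for continuous time.
N, dt = 4096, 1e-3
t = np.arange(N) * dt
f = np.sin(2 * np.pi * 30 * t) + 0.4 * np.sin(2 * np.pi * 220 * t)   # 220 Hz would alias

s = 1 / 250.0                  # sampling interval; only |freq| <= 1/(2s) = 125 Hz survives
cutoff_hz = 1 / (2 * s)        # pi/s in rad/s, expressed as an ordinary frequency

# Projection onto signals bandlimited to [-pi/s, pi/s]: zero all higher Fourier components.
F = np.fft.rfft(f)
freqs = np.fft.rfftfreq(N, dt)
F[freqs > cutoff_hz] = 0.0
f_tilde = np.fft.irfft(F, n=N)

# Uniform sampling of the prefiltered signal at interval s (every s/dt grid points here).
step = int(round(s / dt))      # = 4 on this grid
samples = f_tilde[::step]
print(len(samples), "samples taken at interval s with aliasing removed")
```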

Source: https://www.sciencedirect.com/science/article/pii/B9780123743701000070

Basics of Imaging Theory and Statistics

Chien-Min Kao, ... Xiaochuan Pan, in Emission Tomography, 2004

a Ideal Bandlimited Interpolation

According to the sampling theorem, exact recovery of a bandlimited image from its samples can be achieved by use of the sinc interpolator (see Eq. [36])

(62) $I_1(x) = \mathrm{sinc}(x) \equiv \frac{\sin(\pi x)}{\pi x},$

provided that the sampling conditions are satisfied. Unfortunately, this interpolator has an infinite length and decays slowly. Therefore, a long summation is needed in order to obtain a good approximation of the infinite summation in Eq. (59), making this interpolation computationally unfavorable. In addition, real-world images are not bandlimited, and using this interpolator generates aliasing errors that often appear as oscillatory artifacts. As demonstrated by Figure 7b, such aliasing errors are highly unfavorable for image perception. For these reasons, this ideal interpolator for bandlimited images is seldom used in practice. However, for special cases this interpolation can be achieved by DFT zero-padding. This implementation, to be discussed later, is computationally efficient and has been often used in practice.

FIGURE 7. Comparison of interpolation methods. The original image is a 256 x 256 MRI brain image shown in (a). This image is subsampled to generate a 64 x 64 image, which is then upsampled by five interpolation methods to reproduce the original image size, including (b) sinc interpolation implemented by DFT zero-padding, (c) nearest-neighbor interpolation, (d) linear interpolation, (e) cubic convolution interpolation, and (f) cubic B-spline interpolation.
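A brief sketch of the DFT zero-padding implementation mentioned above (Python/NumPy; the signal length and upsampling factor are illustrative, and the handling of the Nyquist bin is simplified): inserting zeros into the middle of the DFT and inverse-transforming performs periodic sinc (bandlimited) interpolation.

```python
import numpy as np

def upsample_dft(x, factor):
    """Periodic sinc interpolation of a 1-D signal by zero-padding its DFT."""
    N = len(x)
    X = np.fft.fft(x)
    M = N * factor
    Y = np.zeros(M, dtype=complex)
    half = N // 2
    Y[:half] = X[:half]                  # non-negative frequencies
    Y[-(N - half):] = X[half:]           # negative frequencies moved to the end
    # Scale so sample amplitudes are preserved after the longer inverse transform.
    return np.fft.ifft(Y).real * factor

x = np.cos(2 * np.pi * 3 * np.arange(64) / 64)   # bandlimited test signal, 3 cycles in 64 samples
y = upsample_dft(x, 4)                           # 256-point interpolated version
print(np.allclose(y[::4], x))                    # original samples are reproduced: True
```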

Source: https://www.sciencedirect.com/science/article/pii/B9780127444826500090

Stochastic Processes

Yûichirô Kakihara, in Encyclopedia of Physical Science and Technology (Third Edition), 2003

VII.B Sampling Theorem

Shannon's (1949) sampling theorem was obtained for deterministic functions on R (signal functions). This was extended to be valid for weakly stationary stochastic processes by Balakrishnan (1957).

First we consider bandlimited and finite-energy signal functions. A signal function X(t) is said to be of finite energy if X ∈ L²(ℝ) with the Lebesgue measure. Then its Fourier transform FX is defined in the mean-square sense by

$(\mathcal{F}X)(u) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} X(t)\, e^{-iut}\, dt = \underset{T \to \infty}{\mathrm{l.i.m.}}\ \frac{1}{\sqrt{2\pi}} \int_{-T}^{T} X(t)\, e^{-iut}\, dt,$

where l.i.m. means "limit in the mean" and $\mathcal{F}: L^2(\mathbb{R}) \to L^2(\mathbb{R})$ turns out to be a unitary operator. A signal function X(t) is said to be bandlimited if there exists some constant W > 0 such that

$(\mathcal{F}X)(u) = 0 \quad \text{for almost every } u \text{ with } |u| > W.$

Here, W is called a bandwidth and [−W, W] a frequency interval.

Let W > 0 and define a function SW on ℝ by

$S_W(t) = \begin{cases} \dfrac{W}{\pi}\, \dfrac{\sin Wt}{Wt}, & t \neq 0, \\[1ex] \dfrac{W}{\pi}, & t = 0. \end{cases}$

Then SW is a typical example of a bandlimited function and is called a sample function. In fact, one can verify that

$(\mathcal{F}S_W)(u) = \frac{1}{\sqrt{2\pi}}\, 1_W(u), \quad u \in \mathbb{R},$

where 1W = 1[−W, W], the indicator function of [−W, W]. Denote by BLW the set of all bandlimited signal functions with bandwidth W > 0. Then it is easily seen that BLW is a closed subspace of L²(ℝ). Now the sampling theorem for a function in BLW is stated as follows: any X ∈ BLW has a sampling expansion, in the L²- and L∞-sense, given by

(20) $X(t) = \sum_{n=-\infty}^{\infty} X\!\left(\frac{n\pi}{W}\right) \sqrt{\frac{\pi}{W}}\; \phi_n(t),$

where the φn are defined by

$\phi_n(t) = \sqrt{\frac{\pi}{W}}\; S_W\!\left(t - \frac{n\pi}{W}\right), \quad t \in \mathbb{R},\ n \in \mathbb{Z}.$

The family {φn : n ∈ ℤ} forms a complete orthonormal system in BLW, and it is called a system of sampling functions. We can say that a sampling theorem is a Fourier expansion of an L²-function with respect to this system of sampling functions.

A sampling theorem holds for some stochastic processes. Let {X(t)} be an $L_0^2(\Omega)$-valued weakly harmonizable process with representing measure ξ, i.e.,

$X(t) = \int_{\mathbb{R}} e^{itu}\, \xi(du), \quad t \in \mathbb{R}.$

We say that {X(t)} is bandlimited if there exists a W > 0 such that the support of ξ is contained in [−W, W], i.e., ξ(A) = 0 if A ∩ [−W, W] = ∅. If this is the case, the sampling theorem holds:

$X(t, \omega) = \sum_{n=-\infty}^{\infty} X\!\left(\frac{n\pi}{W}, \omega\right) \frac{\sin(W(t - n\pi/W))}{W(t - n\pi/W)}, \quad t \in \mathbb{R},$

where the convergence is in ∥·∥₂ for each t ∈ ℝ.

Source: https://www.sciencedirect.com/science/article/pii/B0122274105007390


Source: https://www.sciencedirect.com/topics/engineering/sampling-theorem
