# Correlation Function


## Latest revision as of 00:23, 15 December 2011

Entry by Robin Kirkpatrick, AP 2011

## Introduction

The autocorrelation function of a random process X(t) is defined as <math>R_X(t_1,t_2) = E[X(t_1)X(t_2)]</math>, where <math>E[\cdot]</math> denotes expectation. Formally, <math>R_X(t_1,t_2) = E[X(t_1)X(t_2)] = \int \int xy f_{X(t_1), X(t_2)}(x,y)\,dx\,dy</math>, where <math>f_{X(t_1), X(t_2)}(x,y)</math> denotes the joint probability density function of X(t_1) and X(t_2). Similarly, we define the autocovariance function as <math>C_X(t_1,t_2) = E[(X(t_1) - E[X(t_1)])(X(t_2)-E[X(t_2)])] = R_X(t_1,t_2) - E[X(t_1)]E[X(t_2)]</math>
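As a numerical illustration (not part of the original entry), the ensemble definitions above can be checked by Monte Carlo. The process below, X(t) = A cos(t) + B sin(t) with A, B independent standard normals, is a standard textbook example chosen here for convenience; for it, R_X(t_1, t_2) = cos(t_1 - t_2), and since the mean is zero, C_X = R_X.

```python
import numpy as np

rng = np.random.default_rng(0)

# Example process: X(t) = A*cos(t) + B*sin(t), A, B ~ N(0, 1) iid.
# For this process R_X(t1, t2) = cos(t1 - t2), and C_X = R_X (zero mean).
n = 200_000
A = rng.standard_normal(n)
B = rng.standard_normal(n)

t1, t2 = 0.3, 1.1
X1 = A * np.cos(t1) + B * np.sin(t1)  # n draws of X(t1) across the ensemble
X2 = A * np.cos(t2) + B * np.sin(t2)  # n draws of X(t2) across the ensemble

# Ensemble (not temporal) estimates of the two functions:
R_hat = np.mean(X1 * X2)                   # E[X(t1) X(t2)]
C_hat = R_hat - np.mean(X1) * np.mean(X2)  # R_X(t1,t2) - E[X(t1)]E[X(t2)]

print(R_hat, np.cos(t1 - t2))
```

The averages are taken over realizations at fixed times, matching the ensemble-average definition used above.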

A process is said to be wide sense stationary (WSS) if it is mean stationary (1) and covariance stationary (2). Formally, we require (1) <math>E[X(t)] = m</math> (a constant) for all t; in other words, the mean is constant over time. Note that the mean used here is the ensemble average, not a temporal average. (2) <math>C_X(t_1, t_2) = C_X(t_1 - t_2)</math> for all t_1 and t_2; in other words, the autocovariance sequence depends only on the lag between the two times.

If the process is WSS, then the autocovariance function can be written as a function of the lag alone: <math>C_X(t_1, t_2) = C_X(t_1 - t_2)</math>
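For a WSS process, a single realization can be used to estimate the lag-dependent autocovariance by time averaging. A minimal sketch, using an AR(1) process X[n] = a·X[n-1] + W[n] (chosen here as an illustrative example; its stationary autocovariance is C_X(k) = a^|k| / (1 - a^2) for unit-variance white noise W):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative AR(1) process: X[n] = a*X[n-1] + W[n], W ~ N(0, 1) white.
# Its stationary autocovariance is C_X(k) = a**|k| / (1 - a**2).
a, n = 0.5, 200_000
w = rng.standard_normal(n)
x = np.empty(n)
x[0] = w[0]
for i in range(1, n):
    x[i] = a * x[i - 1] + w[i]

def sample_autocov(x, k):
    """Biased sample autocovariance at lag k (time average over one realization)."""
    xc = x - x.mean()
    return np.dot(xc[:len(x) - k], xc[k:]) / len(x)

for k in range(4):
    print(k, sample_autocov(x, k), a**k / (1 - a**2))
```

The estimate depends only on the lag k, consistent with condition (2) above.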

## The Spectral Density Function, Estimation and Application

Understanding an autocovariance function directly can be rather non-intuitive. It is often more practical to use the spectral density function, which is related (for continuous-time processes) by <math>S_X(f) = \int R_X(\tau)e^{-j 2 \pi f \tau} \, d\tau</math>, i.e., the power spectral density function and the autocovariance sequence form a Fourier transform pair. We can also write this explicitly for discrete-time processes: <math>S_X(f) = \sum_{k = -\infty}^{\infty} R_X(k) e^{-j 2 \pi f k}</math>
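The discrete-time transform pair can be verified numerically. Assuming (for illustration only) the geometric autocovariance sequence R_X(k) = a^|k|, the infinite sum has the well-known closed form S_X(f) = (1 - a^2) / (1 - 2a cos(2πf) + a^2), and a truncated version of the sum matches it closely:

```python
import numpy as np

# Numeric check of the transform pair for the illustrative sequence R_X(k) = a**|k|.
a = 0.6
f = np.linspace(-0.5, 0.5, 101)

# Truncated sum S_X(f) = sum_k R_X(k) e^{-j 2 pi f k}, |k| <= K:
K = 200
k = np.arange(-K, K + 1)
S_num = (a ** np.abs(k) * np.exp(-2j * np.pi * np.outer(f, k))).sum(axis=1).real

# Closed form of the same (geometric) sum:
S_closed = (1 - a**2) / (1 - 2 * a * np.cos(2 * np.pi * f) + a**2)

print(np.max(np.abs(S_num - S_closed)))  # truncation error, negligible here
```

The imaginary parts cancel because R_X(k) is symmetric in k, so S_X(f) is real, as a power spectral density must be.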

The power spectral density arises from taking the Fourier transform of realizations of a stochastic process. We thus define the periodogram estimate for a discrete-time WSS process with k observations as
<math>p(f) = \frac{1}{k}\left|\sum_{m = 0}^{k-1}X_m e^{-j 2 \pi f m}\right|^2</math>
It is straightforward to show that the expected value of the periodogram is not equal to the true spectral density, but approaches it as the number of samples tends to infinity. Formally,
<math>E[p(f)] = \sum_{m = -(k-1)}^{k-1}\left(1 - \frac{|m|}{k}\right)R_X(m)e^{-j 2 \pi f m}</math>

and

<math>E[p(f)] \to S_X(f)</math> as <math>k \to \infty</math>.
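A short sketch of the periodogram definition above, using white noise as a test case (its true spectral density is flat, S_X(f) = 1). A single periodogram is a noisy estimate, but averaging over independent realizations approximates E[p(f)]:

```python
import numpy as np

rng = np.random.default_rng(2)

def periodogram(x, f):
    """Periodogram p(f) = |sum_m x[m] e^{-j 2 pi f m}|^2 / k for each f."""
    k = len(x)
    m = np.arange(k)
    return np.abs(np.exp(-2j * np.pi * np.outer(f, m)) @ x) ** 2 / k

# White noise: S_X(f) = 1 for all f. Averaging many independent periodograms
# approximates E[p(f)], which should be close to 1 everywhere.
f = np.linspace(0, 0.5, 6)
k, trials = 256, 2000
p_avg = np.mean(
    [periodogram(rng.standard_normal(k), f) for _ in range(trials)], axis=0
)
print(p_avg)
```

Note that averaging over realizations reduces the variance of the estimate; a single periodogram does not become less noisy as k grows, which is why windowed or averaged estimators are used in practice.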
