Random variables. Discrete random variable. Mathematical expectation

Ministry of Education and Science of the Russian Federation

Cherepovets State University

Institute of Engineering and Economics

The concept of a random process in mathematics

Performed by a student

Group 5 GMU-21

Ivanova Yulia

Cherepovets


Introduction

Main part

· Definition of a random process and its characteristics

· Markov random processes with discrete states

· Stationary random processes

· Ergodic property of stationary random processes

Literature


Introduction

The concept of a random process was introduced in the 20th century and is associated with the names of A.N. Kolmogorov (1903-1987), A.Ya. Khinchin (1894-1959), E.E. Slutsky (1880-1948), N. Wiener (1894-1965).

Today this concept is one of the central ones not only in probability theory but also in natural science, engineering, economics, production organization, and communication theory. The theory of random processes belongs to the fastest-growing mathematical disciplines, and there is no doubt that this is largely determined by its deep connections with practice. The 20th century could not be satisfied with the ideological heritage received from the past: while the physicist, the biologist, and the engineer were interested in a process, i.e. the change in time of the phenomenon under study, probability theory offered them, as a mathematical apparatus, only tools for studying stationary states.

To study changes over time, the probability theory of the late 19th - early 20th centuries did not have developed specific schemes, much less general techniques. And the need to create them literally knocked on the windows and doors of mathematical science. The study of Brownian motion in physics brought mathematics to the threshold of creating a theory of random processes.

I consider it necessary to mention two more important groups of studies, begun at different times and for different reasons.

Firstly, the work of A.A. Markov (1856-1922) on the study of chain dependencies; secondly, the works of E.E. Slutsky (1880-1948) on the theory of random functions.

Both of these directions played a very significant role in the formation of the general theory of random processes.

By that time, significant initial material had already been accumulated, and the need to build a theory seemed to be in the air.

It remained to carry out a deep analysis of the existing works, the ideas and results expressed in them, and on its basis to carry out the necessary synthesis.


Definition of a random process and its characteristics

Definition: A random process X(t) is a process whose value, for any value of the argument t, is a random variable.

In other words, a random process is a function that, as a result of a trial, can take one or another specific form, unknown in advance. For a fixed t = t_0, X(t_0) is an ordinary random variable, i.e. a section of the random process at time t_0.

Examples of random processes:

1. population of the region over time;

2. the number of requests received by the company’s repair service over time.

A random process can be written as a function of two variables X(t, ω), where ω ∈ Ω, t ∈ T, X(t, ω) ∈ Ξ; here ω is an elementary event, Ω is the space of elementary events, T is the set of values of the argument t, and Ξ is the set of possible values of the random process X(t, ω).

A realization of the random process X(t, ω) is the non-random function x(t) into which the random process X(t) turns as a result of a trial (for fixed ω), i.e. the specific form taken by the random process X(t), its trajectory.

Thus, a random process X(t, ω) combines the features of a random variable and a function. If we fix the value of the argument t, the random process turns into an ordinary random variable; if we fix ω, then as a result of each trial it turns into an ordinary non-random function. In what follows we omit the argument ω, but it is assumed by default.

Figure 1 shows several realizations of a random process. Let the section of this process for a given t be a continuous random variable. Then the random process X(t) for the given t is determined entirely by the probability density φ(x, t). Obviously, the density φ(x, t) is not an exhaustive description of the random process X(t), because it does not express the dependence between its sections at different moments of time.

The random process X(t) is the collection of all sections for all possible values of t; therefore, to describe it, it is necessary to consider a multidimensional random variable (X(t_1), X(t_2), ..., X(t_n)) consisting of all sections of this process. In principle, there are infinitely many such sections, but to describe a random process one can often get by with a relatively small number of them.

A random process is said to have order n if it is completely determined by the joint distribution density φ(x_1, x_2, …, x_n; t_1, t_2, …, t_n) of n arbitrary sections of the process, i.e. by the density of the n-dimensional random variable (X(t_1), X(t_2), ..., X(t_n)), where X(t_i) is the section of the random process X(t) at time t_i, i = 1, 2, …, n.

Like a random variable, a random process can be described by numerical characteristics. But while for a random variable these characteristics are constant numbers, for a random process they are non-random functions.

The mathematical expectation of a random process X(t) is the non-random function a_x(t) which, for any value of the variable t, is equal to the mathematical expectation of the corresponding section of the random process X(t), i.e. a_x(t) = M[X(t)].

The variance of a random process X(t) is the non-random function D_x(t) which, for any value of the variable t, is equal to the variance of the corresponding section of the random process X(t), i.e. D_x(t) = D[X(t)].

The standard deviation σ_x(t) of a random process X(t) is the arithmetic value of the square root of its variance, i.e. σ_x(t) = √(D_x(t)).

The mathematical expectation of a random process characterizes the average trajectory of all its possible realizations, while its variance or standard deviation characterizes the spread of realizations relative to the average trajectory.

The characteristics of a random process introduced above turn out to be insufficient, since they are determined only by the one-dimensional distribution law. Suppose the random process X_1(t) is characterized by a slow change in the values of its realizations as t changes, while for the random process X_2(t) this change occurs much faster. In other words, the random process X_1(t) is characterized by a close probabilistic dependence between its two sections X_1(t_1) and X_1(t_2), while for the random process X_2(t) this dependence between the sections X_2(t_1) and X_2(t_2) is practically absent. The indicated dependence between sections is characterized by the correlation function.

Definition: The correlation function of a random process X(t) is the non-random function

K_x(t_1, t_2) = M[(X(t_1) − a_x(t_1))(X(t_2) − a_x(t_2))]   (1)

of two variables t_1 and t_2, which for each pair of values t_1, t_2 is equal to the covariance of the corresponding sections X(t_1) and X(t_2) of the random process.

Obviously, for the random process X_1(t) the correlation function K_x1(t_1, t_2) decreases much more slowly as the difference t_2 − t_1 increases than K_x2(t_1, t_2) does for the random process X_2(t).

The correlation function K_x(t_1, t_2) characterizes not only the degree of closeness of the linear relationship between two sections, but also the spread of these sections relative to the mathematical expectation a_x(t). Therefore the normalized correlation function of the random process is also considered.

The normalized correlation function of a random process X(t) is the function

ρ_x(t_1, t_2) = K_x(t_1, t_2) / (σ_x(t_1) σ_x(t_2)).   (2)

Example #1

A random process is defined by the formula X(t) = X cos ωt, where X is a random variable. Find the main characteristics of this process if M(X) = a, D(X) = σ².

SOLUTION:

Based on the properties of mathematical expectation and variance, we have:

a_x(t) = M(X cos ωt) = cos ωt · M(X) = a cos ωt,

D_x(t) = D(X cos ωt) = cos² ωt · D(X) = σ² cos² ωt.

We find the correlation function using formula (1):

K_x(t_1, t_2) = M[(X cos ωt_1 − a cos ωt_1)(X cos ωt_2 − a cos ωt_2)] =
= cos ωt_1 cos ωt_2 · M[(X − a)²] = cos ωt_1 cos ωt_2 · D(X) = σ² cos ωt_1 cos ωt_2.

We find the normalized correlation function using formula (2):

ρ_x(t_1, t_2) = σ² cos ωt_1 cos ωt_2 / (σ cos ωt_1 · σ cos ωt_2) ≡ 1.
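These characteristics are easy to check numerically. Below is a minimal Monte Carlo sketch; the values of a, σ, ω and the normal distribution of X are illustrative assumptions, not part of the example.

```python
import numpy as np

# Monte Carlo check of Example #1: X(t) = X*cos(omega*t), M(X)=a, D(X)=sigma^2.
rng = np.random.default_rng(0)
a, sigma, omega = 2.0, 0.5, 1.0
X = rng.normal(a, sigma, size=100_000)   # many realizations of the random variable X

t1, t2 = 0.3, 1.1
x1, x2 = X * np.cos(omega * t1), X * np.cos(omega * t2)   # two sections of the process

print(np.mean(x1), a * np.cos(omega * t1))                # a_x(t) = a*cos(wt)
print(np.var(x1), sigma**2 * np.cos(omega * t1)**2)       # D_x(t) = sigma^2*cos^2(wt)
K = np.mean((x1 - x1.mean()) * (x2 - x2.mean()))
print(K, sigma**2 * np.cos(omega*t1) * np.cos(omega*t2))  # K_x(t1, t2)
print(K / (x1.std() * x2.std()))                          # normalized: identically 1
```

The sample mean, variance and covariance agree with the closed-form answers to within sampling error, and the normalized correlation function comes out as 1 regardless of the chosen t_1, t_2.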

Random processes can be classified depending on whether the states of the system in which they occur change smoothly or abruptly, whether the set of these states is finite (countable) or infinite, etc. Among random processes, a special place belongs to the Markov random process.

Theorem. A random process X(t) is Hilbert if and only if its covariance function R(t, t′) exists for all (t, t′) ∈ T×T.

The theory of Hilbert random processes is called correlation theory.

Note that the set T can be discrete or continuous. In the first case the random process X(t) is called a process with discrete time, in the second a process with continuous time.

Accordingly, the sections of X(t) can be discrete or continuous random variables.

A random process X(t) is called sample continuous, sample differentiable, or sample integrable at a point ω ∈ Ω if its realization x(t) = x(t, ω) is respectively continuous, differentiable, or integrable.

The random process X(t) is called continuous:

almost surely, if

P(A) = 1, A = {ω ∈ Ω : lim x(t_n) = x(t)};

in mean square, if

lim M[(X(t_n) − X(t))²] = 0;

in probability, if for any δ > 0:

lim P[|X(t_n) − X(t)| > δ] = 0.

Mean square convergence is also denoted by

X(t) = l.i.m. X(t_n).

It turns out that sample continuity implies continuity almost surely, while continuity almost surely and continuity in mean square each imply continuity in probability.

Theorem. If X(t) is a Hilbert random process, continuous in mean square, then m_x(t) is a continuous function and the relation

lim M[X(t_n)] = M[X(t)] = M[l.i.m. X(t_n)]

holds.

Theorem. A Hilbert random process X(t) is mean square continuous if and only if its covariance function R(t, t′) is continuous at the point (t, t).

A Hilbert random process X(t) is called mean square differentiable if there exists a random function Ẋ(t) = dX(t)/dt such that

Ẋ(t) = l.i.m. [X(t + Δt) − X(t)] / Δt   (t ∈ T, t + Δt ∈ T),

i.e. when

lim M[((X(t + Δt) − X(t))/Δt − Ẋ(t))²] = 0   as Δt → 0.

The random function Ẋ(t) is called the mean square derivative of the random process X(t) at the point t or on T, respectively.

Theorem. A Hilbert random process X(t) is differentiable in mean square at the point t if and only if there exists

∂²R(t, t′)/∂t∂t′ at the point (t, t). In this case

R_ẋ(t, t′) = M[Ẋ(t)Ẋ(t′)] = ∂²R(t, t′)/∂t∂t′.

If a Hilbert random process is differentiable on T, then its mean square derivative is also a Hilbert random process; if sample trajectories of a process are differentiable on T with probability 1, then with probability 1 their derivatives coincide with the mean square derivatives on T.

Theorem. If X(t) is a mean square differentiable Hilbert random process, then

M[Ẋ(t)] = (d/dt) M[X(t)] = dm_x(t)/dt.

Let (0, t) be a finite interval, 0 = t_0 < t_1 < … < t_n = t a partition of it, and X(t) a Hilbert random process. Set

Y_n = Σ_i X(t_i)(t_i − t_{i-1})   (n = 1, 2, …).

Then the random variable

Y(t) = l.i.m. Y_n   as max(t_i − t_{i-1}) → 0

is called the mean square integral of the process X(t) on (0, t) and is denoted by

Y(t) = ∫_0^t X(τ)dτ.

Theorem. The mean square integral Y(t) exists if and only if the covariance function R(t, t′) of the Hilbert process X(t) is continuous on T×T and there exists the integral

R_Y(t, t′) = ∫_0^t ∫_0^{t′} R(τ, τ′) dτ dτ′.

If the mean square integral of the function X(t) exists, then

M[Y(t)] = ∫_0^t M[X(τ)] dτ,

R_Y(t, t′) = ∫_0^t ∫_0^{t′} R(τ, τ′) dτ dτ′,

K_Y(t, t′) = ∫_0^t ∫_0^{t′} K(τ, τ′) dτ dτ′.

Here R_Y(t, t′) = M[Y(t)Y(t′)] and K_Y(t, t′) = M[(Y(t) − m_Y(t))(Y(t′) − m_Y(t′))] are the covariance and correlation functions of the random process Y(t).

Theorem. Let X(t) be a Hilbert random process with covariance function R(t, t′), let φ(t) be a real function, and let there exist the integral

∫ ∫ φ(t)φ(t′)R(t, t′) dt dt′.

Then there exists the mean square integral

∫ φ(t)X(t) dt.

Random processes

X_i(t) = V_i φ_i(t)   (i = 1, …, n),

where φ_i(t) are given real functions and V_i are random variables with the characteristics

M(V_i) = 0, D(V_i) = D_i, M(V_i V_j) = 0 (i ≠ j),

are called elementary.

The canonical expansion of a random process X(t) is its representation in the form

X(t) = m_x(t) + Σ_i V_i φ_i(t)   (t ∈ T),

where V_i are the coefficients and φ_i(t) are the coordinate functions of the canonical expansion of the process X(t).

From the relations

M(V_i) = 0, D(V_i) = D_i, M(V_i V_j) = 0 (i ≠ j),

X(t) = m_x(t) + Σ_i V_i φ_i(t)   (t ∈ T)

it follows that

K(t, t′) = Σ_i D_i φ_i(t) φ_i(t′).

This formula is called the canonical expansion of the correlation function of the random process.

In the case of the expansion

X(t) = m_x(t) + Σ_i V_i φ_i(t)   (t ∈ T)

the following formulas hold:

dX(t)/dt = dm_x(t)/dt + Σ_i V_i dφ_i(t)/dt,

∫_0^t X(τ)dτ = ∫_0^t m_x(τ)dτ + Σ_i V_i ∫_0^t φ_i(τ)dτ.

Thus, if a process X(t) is represented by its canonical expansion, then its derivative and integral can also be represented as canonical expansions.

Markov random processes with discrete states

A random process occurring in a certain system S with possible states S_1, S_2, S_3, … is called a Markov process (or a random process without aftereffect) if for any moment t_0 the probabilistic characteristics of the process in the future (at t > t_0) depend only on its state at the given moment t_0 and do not depend on when and how the system came to this state, i.e. do not depend on its behavior in the past (at t < t_0).

An example of a Markov process: system S is a taxi meter. The state of the system at moment t is characterized by the number of kilometers (tenths of kilometers) traveled by the car up to this moment. Let the counter show S_0 at the moment t_0. The probability that at the moment t > t_0 the counter will show this or that number of kilometers (more precisely, the corresponding number of rubles) S_1 depends on S_0, but does not depend on at what moments of time the meter readings changed before the moment t_0.

Many processes can be approximately considered Markovian. For example, the process of playing chess: system S is a group of chess pieces. The state of the system is characterized by the number of enemy pieces remaining on the board at time t_0. The probability that at the moment t > t_0 the material advantage will be on the side of one of the opponents depends primarily on the state of the system at the moment t_0, and not on when and in what sequence pieces left the board up to time t_0.

In some cases, the prehistory of the processes under consideration can simply be neglected and Markov models can be used to study them.

A Markov random process with discrete states and discrete time (or Markov chain) is a Markov process in which its possible states S_1, S_2, S_3, … can be listed in advance, and transitions from state to state occur instantly (in a jump), but only at certain moments t_0, t_1, t_2, …, called the steps of the process.

Let p_ij denote the transition probability of the random process (system S) from state i to state j. If these probabilities do not depend on the number of the step of the process, then such a Markov chain is called homogeneous.

Let the number of states of the system be finite and equal to m. Then the chain can be characterized by the transition matrix P_1, which contains all the transition probabilities:

p_11 p_12 … p_1m
p_21 p_22 … p_2m
…
p_m1 p_m2 … p_mm

Naturally, for each row Σ_j p_ij = 1, i = 1, 2, …, m.

Let p_ij(n) denote the probability that, as a result of n steps, the system moves from state i to state j. In this case, for n = 1 we have the transition probabilities that form the matrix P_1, i.e. p_ij(1) = p_ij.

It is necessary, knowing the transition probabilities p_ij, to find p_ij(n) – the probabilities of the system's transition from state i to state j in n steps. To this end we consider an intermediate (between i and j) state r: we assume that from the initial state i the system moves in k steps to the intermediate state r with probability p_ir(k), after which in the remaining n − k steps it moves from the intermediate state r to the final state j with probability p_rj(n − k). Then, by the total probability formula,

p_ij(n) = Σ_r p_ir(k) p_rj(n − k) – the Markov equality.

Let us make sure that, knowing all the transition probabilities p_ij = p_ij(1), i.e. the matrix P_1 of transitions from state to state in one step, one can find the probabilities p_ij(2), i.e. the matrix P_2 of transitions from state to state in two steps; and knowing the matrix P_2, find the matrix P_3 of transitions in three steps, etc.

Indeed, putting n = 2 in the Markov equality, i.e. k = 1 (an intermediate state between the steps), we get

p_ij(2) = Σ_r p_ir(1) p_rj(2 − 1) = Σ_r p_ir p_rj.

The resulting equality means that P_2 = P_1 P_1 = P_1².

Assuming n = 3, k = 2, we similarly obtain P_3 = P_1 P_2 = P_1 P_1² = P_1³, and in the general case P_n = P_1^n.

Example

The totality of families in a certain region can be divided into three groups:

1. families who do not have a car and do not intend to buy one;

2. families who do not have a car, but intend to purchase one;

3. families with a car.

The statistical survey carried out showed that the transition matrix for an interval of one year has the form

P_1 = | 0.8  0.1  0.1 |
      | 0    0.7  0.3 |
      | 0    0    1   |

(In the matrix P_1 the element p_33 = 1 means the probability that a family that has a car will still have one a year later, and, for example, the element p_23 = 0.3 is the probability that a family that does not have a car but intends to buy one will fulfil its intention in the following year, etc.)

Find the probability that:

1. a family that did not have a car and was not planning to buy one will be in the same situation in two years;

2. a family that did not have a car, but intends to buy one, will have a car in two years.

SOLUTION: Let us find the transition matrix P_2 after two years:

P_2 = P_1 · P_1 =

| 0.8  0.1  0.1 |   | 0.8  0.1  0.1 |   | 0.64  0.15  0.21 |
| 0    0.7  0.3 | · | 0    0.7  0.3 | = | 0     0.49  0.51 |
| 0    0    1   |   | 0    0    1   |   | 0     0     1    |

That is, the probabilities sought in items 1) and 2) are equal, respectively, to

p_11(2) = 0.64, p_23(2) = 0.51.
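As a quick numerical check, the matrix power P_2 = P_1² from the solution can be reproduced with NumPy; this sketch just re-derives the two answers above.

```python
import numpy as np

# Check of the example: P_n = P_1^n for a homogeneous Markov chain.
P1 = np.array([[0.8, 0.1, 0.1],
               [0.0, 0.7, 0.3],
               [0.0, 0.0, 1.0]])

P2 = np.linalg.matrix_power(P1, 2)   # transition matrix after two steps (years)
print(P2)
print(P2[0, 0])  # 0.64 - family with no car and no intention stays put
print(P2[1, 2])  # 0.51 - family intending to buy has a car in two years
```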

Next we will consider a Markov random process with discrete states and continuous time, in which, unlike the Markov chain discussed above, the moments of possible transitions of the system from state to state are not fixed in advance, but are random.

When analyzing random processes with discrete states, it is convenient to use a geometric scheme – the so-called state graph. Typically, system states are depicted by rectangles (or circles), and possible transitions from state to state are depicted by arrows (oriented arcs) connecting the states.

Example. Construct a state graph of the following random process: device S consists of two nodes, each of which can fail at a random moment in time, after which the repair of the node immediately begins, continuing for a previously unknown random time.

SOLUTION. Possible system states: S 0 – both nodes are operational; S 1 – the first unit is being repaired, the second is operational; S 2 – the second unit is being repaired, the first one is operational; S 3 – both units are being repaired.

An arrow in the direction, for example, from S 0 to S 1, means a transition of the system at the moment of failure of the first node, from S 1 to S 0 - a transition at the moment of completion of repair of this node.

There are no arrows from S_0 to S_3 or from S_1 to S_2 on the graph. This is explained by the fact that the failures of the nodes are assumed to be independent of each other, and, for example, the probability of simultaneous failure of both nodes (transition from S_0 to S_3) or of simultaneous completion of the repairs of both nodes (transition from S_3 to S_0) can be neglected.
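For illustration, the state graph from this example can be written down as a plain adjacency structure; the dictionary below is our own representation, with informal comments for the transition labels.

```python
# A minimal sketch of the state graph of the two-node repair example.
graph = {
    "S0": ["S1", "S2"],        # failure of node 1 or node 2
    "S1": ["S0", "S3"],        # repair of node 1 done, or node 2 also fails
    "S2": ["S0", "S3"],        # repair of node 2 done, or node 1 also fails
    "S3": ["S1", "S2"],        # one of the two repairs finishes
}
# Note the absent arcs S0->S3 and S3->S0: simultaneous failure (or repair
# completion) of both nodes is neglected, as explained above.
for state, successors in graph.items():
    print(state, "->", ", ".join(successors))
```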

Stationary random processes

A random process X(t) is called stationary in the narrow sense if

F(x_1, …, x_n; t_1, …, t_n) = F(x_1, …, x_n; t_1 + Δ, …, t_n + Δ)

for arbitrary

n ≥ 1, x_1, …, x_n, t_1, …, t_n, Δ; t_i ∈ T, t_i + Δ ∈ T.

Here F(x_1, …, x_n; t_1, …, t_n) is the n-dimensional distribution function of the random process X(t).

A random process X(t) is called stationary in the broad sense if

m(t) = m(t + Δ), K(t, t′) = K(t + Δ, t′ + Δ)

(t ∈ T, t′ ∈ T, t + Δ ∈ T, t′ + Δ ∈ T).

It is obvious that stationarity in the narrow sense implies stationarity in the broad sense.

It follows that for a process that is stationary in the broad sense, we can write

m_x(t) = m_x(0) = const;

D_x(t) = K(t, t) = K(0, 0) = const;

K(t, t′) = K(t − t′, 0) = K(0, t′ − t).

Thus, for a process that is stationary in the broad sense, the mathematical expectation and variance do not depend on time, and K(t, t′) is a function of the form

K(t, t′) = k(τ) = k(−τ), τ = t′ − t.

It can be seen that k(τ) is an even function, and the following system of relations holds:

k(0) = D = σ²;  |k(τ)| ≤ k(0);  Σ_i Σ_j α_i α_j k(t_i − t_j) ≥ 0,

where D is the variance of the stationary process X(t) and α_i (i = 1, …, n) are arbitrary numbers.

The first equality of the system follows from K(t, t′) = k(τ) = k(−τ), τ = t′ − t. The second relation is a simple consequence of the Schwarz inequality for the sections X(t), X(t′) of the stationary random process X(t). The last inequality is obtained as follows:

Σ_i Σ_j α_i α_j k(t_i − t_j) = Σ_i Σ_j K(t_i, t_j) α_i α_j = Σ_i Σ_j M[(α_i X̊_i)(α_j X̊_j)] = M[(Σ_i α_i X̊_i)²] ≥ 0,

where X̊_i = X(t_i) − m_x denotes the centered section.

Taking into account the formula for the correlation function of the derivative dX(t)/dt of a random process, for a stationary random function X(t) we obtain

K_1(t, t′) = M[(dX̊(t)/dt)(dX̊(t′)/dt′)] = ∂²K(t, t′)/∂t∂t′ = ∂²k(t′ − t)/∂t∂t′.

Since

∂k(t′ − t)/∂t = (dk(τ)/dτ)(∂τ/∂t) = −dk(τ)/dτ,

∂²k(t′ − t)/∂t∂t′ = −(d²k(τ)/dτ²)(∂τ/∂t′) = −d²k(τ)/dτ²,

it follows that K_1(t, t′) = k_1(τ) = −d²k(τ)/dτ², τ = t′ − t.

Here K_1(t, t′) and k_1(τ) denote the correlation function of the first derivative of the stationary random process X(t).

For the n-th derivative of a stationary random process, the formula of the correlation function has the form

k_n(τ) = (−1)^n d^{2n}k(τ)/dτ^{2n}.
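The sign rule k_1(τ) = −d²k(τ)/dτ² is easy to verify symbolically. The sketch below uses SymPy with an illustrative correlation function k(τ) = σ²e^{−ατ²} of our own choosing (not one from the text).

```python
import sympy as sp

# Symbolic check that d^2 K(t, t')/dt dt' = -k''(tau) with tau = t' - t.
t, tp, sigma, alpha = sp.symbols("t t_prime sigma alpha", real=True, positive=True)
tau = sp.symbols("tau", real=True)

k = sigma**2 * sp.exp(-alpha * tau**2)          # assumed stationary k(tau)
K = k.subs(tau, tp - t)                          # K(t, t') = k(t' - t)

lhs = sp.diff(K, t, tp)                          # mixed second derivative
rhs = (-sp.diff(k, tau, 2)).subs(tau, tp - t)    # -k''(tau) at tau = t' - t
print(sp.simplify(lhs - rhs))                    # prints 0
```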

Theorem. A stationary random process X(t) with correlation function k(τ) is mean square continuous at a point t ∈ T if and only if

lim_{τ→0} k(τ) = k(0).

To prove this, let us write down an obvious chain of equalities (assuming m_x = 0):

M[|X(t + τ) − X(t)|²] = M[X²(t + τ)] − 2M[X(t + τ)X(t)] + M[X²(t)] = 2D − 2k(τ) = 2[k(0) − k(τ)].

Hence it is obvious that the condition of mean square continuity of the process X(t) at the point t ∈ T,

lim_{τ→0} M[|X(t + τ) − X(t)|²] = 0,

holds if and only if lim_{τ→0} k(τ) = k(0).

Theorem. If the correlation function k(τ) of a stationary random process X(t) is continuous at the point τ = 0, then it is continuous at any point τ ∈ R¹.

To prove this, let us write down the obvious equalities:

k(τ + Δτ) − k(τ) = M[X(t + τ + Δτ)X(t)] − M[X(t + τ)X(t)] = M[X(t){X(t + τ + Δτ) − X(t + τ)}].

Then, applying the Schwarz inequality to the factors in the curly brackets and considering the relations

K(t, t′) = k(τ) = k(−τ), τ = t′ − t;  k(0) = D = σ²;  |k(τ)| ≤ k(0),

we obtain

0 ≤ [k(τ + Δτ) − k(τ)]² ≤ M[X²(t)] M[{X(t + τ + Δτ) − X(t + τ)}²] = 2σ²[k(0) − k(Δτ)].

Passing to the limit as Δτ → 0 and taking into account the condition of the theorem on the continuity of k(τ) at the point τ = 0, as well as the first equality of the system, k(0) = D = σ², we find

lim_{Δτ→0} k(τ + Δτ) = k(τ).

Since here τ is an arbitrary number, the theorem should be considered proven.

Ergodic property of stationary random processes

Let X(t) be a stationary random process on the time interval [0, T] with the characteristics

M[X(t)] = 0, K(t, t′) = M[X(t)X(t′)] = k(τ), τ = t′ − t, (t, t′) ∈ T×T.

The ergodic property of a stationary random process is that based on a sufficiently long implementation of the process, one can judge its mathematical expectation, dispersion, and correlation function.

More strictly, we will call a stationary random process X(t) ergodic in mathematical expectation if

lim_{T→∞} M{|(1/T) ∫_0^T X(t)dt|²} = 0.

Theorem

A stationary random process X(t) with the characteristics

M[X(t)] = 0, K(t, t′) = M[X(t)X(t′)] = k(τ), τ = t′ − t, (t, t′) ∈ T×T

is ergodic in mathematical expectation if and only if

lim_{T→∞} (2/T) ∫_0^T k(τ)(1 − τ/T)dτ = 0.

To prove this, it is obviously enough to verify the equality

M{|(1/T) ∫_0^T X(t)dt|²} = (2/T) ∫_0^T k(τ)(1 − τ/T)dτ.

Let us write down the obvious relations

C = M{|(1/T) ∫_0^T X(t)dt|²} = (1/T²) ∫_0^T ∫_0^T k(t′ − t)dt′dt = (1/T²) ∫_0^T dt ∫_0^T k(t′ − t)dt′.

Setting here τ = t′ − t, dτ = dt′ and taking into account the conditions (t′ = T) → (τ = T − t),

(t′ = 0) → (τ = −t), we get

C = (1/T²) ∫_0^T dt ∫_{−t}^{T−t} k(τ)dτ = (1/T²) ∫_0^T dt ∫_{−t}^0 k(τ)dτ + (1/T²) ∫_0^T dt ∫_0^{T−t} k(τ)dτ.

Putting in the first and second terms of the right-hand side, respectively, τ = −τ′, dτ = −dτ′ and τ = T − τ′, dτ = −dτ′, and using the evenness of k(τ), we find

C = (1/T²) ∫_0^T dt ∫_0^t k(τ)dτ + (1/T²) ∫_0^T dt ∫_t^T k(T − τ)dτ.

Applying the Dirichlet formula for double integrals (interchanging the order of integration), we write

C = (1/T²) ∫_0^T (T − τ)k(τ)dτ + (1/T²) ∫_0^T τ k(T − τ)dτ.

In the second term on the right-hand side we can put τ′ = T − τ, dτ = −dτ′, after which we will have

C = (1/T²) ∫_0^T (T − τ)k(τ)dτ + (1/T²) ∫_0^T (T − τ′)k(τ′)dτ′ = (2/T) ∫_0^T (1 − τ/T)k(τ)dτ.

From this and from the definition of the constant C it is clear that the required equality

M{|(1/T) ∫_0^T X(t)dt|²} = (2/T) ∫_0^T k(τ)(1 − τ/T)dτ

holds.

Theorem

If the correlation function k(τ) of a stationary random process X(t) satisfies the condition

lim_{T→∞} (1/T) ∫_0^T |k(τ)|dτ = 0,

then X(t) is ergodic in mathematical expectation.

Indeed, given the relation

M{|(1/T) ∫_0^T X(t)dt|²} = (2/T) ∫_0^T k(τ)(1 − τ/T)dτ,

we can write

0 ≤ (2/T) ∫_0^T (1 − τ/T)k(τ)dτ ≤ (2/T) ∫_0^T (1 − τ/T)|k(τ)|dτ ≤ (2/T) ∫_0^T |k(τ)|dτ.

From this it is clear that if the condition of the theorem is satisfied, then

lim_{T→∞} (2/T) ∫_0^T (1 − τ/T)k(τ)dτ = 0.

Now, taking into account the equality

C = (2/T) ∫_0^T (1 − τ/T)k(τ)dτ

and the definition lim_{T→∞} M{|(1/T) ∫_0^T X(t)dt|²} = 0 of ergodicity in mathematical expectation of the stationary random process X(t), we find that the required is proven.

Theorem.

If the correlation function k(τ) of a stationary random process X(t) is integrable and decreases indefinitely as τ → ∞, i.e. for arbitrary ε > 0 there exists T_1 = T_1(ε) such that |k(τ)| < ε for all τ > T_1, then X(t) is a stationary random process ergodic in mathematical expectation.

Indeed, for T ≥ T_1 we have

(1/T) ∫_0^T |k(τ)|dτ = (1/T)[ ∫_0^{T_1} |k(τ)|dτ + ∫_{T_1}^T |k(τ)|dτ ] ≤ (1/T) ∫_0^{T_1} |k(τ)|dτ + ε(1 − T_1/T).

Passing to the limit as T → ∞, we find

0 ≤ lim_{T→∞} (1/T) ∫_0^T |k(τ)|dτ ≤ ε.

Since here ε > 0 is an arbitrary, arbitrarily small value, the condition of ergodicity in mathematical expectation is satisfied, since it follows from the condition of unlimited decrease of k(τ). The theorem should be considered proven.

The proven theorems establish constructive criteria for the ergodicity of stationary random processes.

Let now X(t) = m + X̊(t), m = const, where X̊(t) is the centered stationary random process.

Then M[X(t)] = m, and if X̊(t) is an ergodic stationary random process, the ergodicity condition lim_{T→∞} M{|(1/T) ∫_0^T X̊(t)dt|²} = 0 after simple transformations can be represented as

lim_{T→∞} M{[(1/T) ∫_0^T X(t)dt − m]²} = 0.

It follows that if X(t) is a stationary random process ergodic in mathematical expectation, then its mathematical expectation can be approximately calculated using the formula

m ≈ (1/T) ∫_0^T x(t)dt.

Here T is a sufficiently long period of time and x(t) is a realization of the process X(t) on the interval [0, T].
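A short simulation illustrates this estimate. The AR(1) recursion below is our own illustrative choice of an ergodic stationary process; its correlation function decays, so the time average over a single long realization approaches m.

```python
import numpy as np

# Sketch: estimating m = M[X(t)] from one long realization of an ergodic process.
rng = np.random.default_rng(1)
m, rho, n = 5.0, 0.9, 200_000
x = np.empty(n)
x[0] = m
for i in range(1, n):                  # X(t) = m + decaying AR(1) fluctuation
    x[i] = m + rho * (x[i - 1] - m) + rng.normal(0.0, 1.0)

print(x.mean())                        # time average (1/T)*integral of x(t)dt ~ 5.0
```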

We can consider the ergodicity of a stationary random process X(t) with respect to the correlation function.

A stationary random process X(t) is called ergodic in the correlation function if

lim_{T→∞} M{[(1/T) ∫_0^T X(t)X(t + τ)dt − k(τ)]²} = 0.

It follows that for a stationary random process X(t) that is ergodic in the correlation function we can set

k(τ) ≈ (1/T) ∫_0^T x(t)x(t + τ)dt

for sufficiently large T.

It turns out that a boundedness condition on the integral of |k(τ)| is sufficient for a stationary normally distributed process X(t) to be ergodic in the correlation function.

Note that a random process is called normally distributed if all of its finite-dimensional distribution functions are normal.

A necessary and sufficient condition for the ergodicity in the correlation function of a stationary normally distributed random process is the relation: for every τ_0,

lim_{T→∞} (1/T) ∫_0^T (1 − τ/T)[k²(τ) + k(τ + τ_0)k(τ − τ_0)]dτ = 0.


Literature

1. N.Sh. Kremer. Probability Theory and Mathematical Statistics. Moscow: UNITY, 2007.

2. Yu.V. Kozhevnikov. Probability Theory and Mathematical Statistics. Moscow: Mashinostroenie, 2002.

3. B.V. Gnedenko. A Course in Probability Theory. Moscow: Main Editorial Office of Physical and Mathematical Literature, 1988.

Interference in communication systems is described by methods of the theory of random processes.

A function is called random if, as a result of an experiment, it takes one form or another, and it is not known in advance which one. A random process is a random function of time. The specific form that a random process takes as a result of an experiment is called the implementation of a random process.

Fig. 1.19 shows a set of several (three) realizations of a random process. Such a collection is called an ensemble of realizations. For a fixed value of the moment of time, in the first experiment we obtain one specific value of the process, in the second another, and in the third yet another.

The random process is dual in nature. On the one hand, in each specific experiment it is represented by its implementation - a non-random function of time. On the other hand, a random process is described by a set of random variables.

Indeed, let us consider a random process at a fixed moment of time. Then in each experiment it takes one value, and it is not known in advance which one. Thus, a random process considered at a fixed moment of time is a random variable. If two moments of time t_1 and t_2 are fixed, then in each experiment we obtain two values, X(t_1) and X(t_2). In this case, joint consideration of these values leads to a system of two random variables. When analyzing random processes at N moments of time, we arrive at a set, or system, of N random variables X(t_1), …, X(t_N).

Mathematical expectation, dispersion and correlation function of a random process. Since a random process considered at a fixed moment of time is a random variable, we can talk about the mathematical expectation and dispersion of a random process:

m_x(t) = M[X(t)],  D_x(t) = M[(X(t) − m_x(t))²].

Just as for a random variable, the dispersion characterizes the spread of values of the random process relative to the average value. The larger D_x(t), the greater the likelihood of very large positive and negative process values. A more convenient characteristic is the standard deviation σ_x(t) = √(D_x(t)), which has the same dimension as the random process itself.

If a random process describes, for example, a change in the distance to an object, then the mathematical expectation is the average range in meters; the dispersion is measured in square meters, while the standard deviation is measured in meters and characterizes the spread of possible range values relative to the average.

The mean and variance are very important characteristics that allow us to judge the behavior of a random process at a fixed moment of time. However, if it is necessary to estimate the "rate" of change of a process, then observations at one moment of time are not enough. For this purpose, two random variables X(t_1) and X(t_2), considered jointly, are used. Just as for random variables, a characteristic of the connection or dependence between X(t_1) and X(t_2) is introduced. For a random process, this characteristic depends on two moments of time t_1 and t_2 and is called the correlation function:

R_x(t_1, t_2) = M[(X(t_1) − m_x(t_1))(X(t_2) − m_x(t_2))].

Stationary random processes. Many processes in control systems occur uniformly over time, and their basic characteristics do not change. Such processes are called stationary. The exact definition can be given as follows: a random process is called stationary if any of its probabilistic characteristics do not depend on a shift in the origin of time. For a stationary random process, the mathematical expectation, variance and standard deviation are constant: m_x(t) = m_x, D_x(t) = D_x, σ_x(t) = σ_x.

The correlation function of a stationary process does not depend on the origin of time t, i.e. it depends only on the time difference τ = t_2 − t_1:

R_x(t_1, t_2) = R_x(τ).

The correlation function of a stationary random process has the following properties:

1) R_x(0) = D_x;  2) R_x(−τ) = R_x(τ);  3) |R_x(τ)| ≤ R_x(0).

Often the correlation functions of processes in communication systems have the form shown in Fig. 1.20.

Fig. 1.20. Correlation functions of processes

The time interval over which the correlation function, i.e. the magnitude of the connection between the values of a random process, decreases by a factor of M is called the interval, or time, of correlation of the random process. We can say that values of a random process separated in time by more than the correlation interval are weakly related to each other.

Thus, knowledge of the correlation function allows one to judge the rate of change of a random process.

Another important characteristic is the energy spectrum of a random process. It is defined as the Fourier transform of the correlation function:

S_x(ω) = ∫_{−∞}^{∞} R_x(τ) e^{−jωτ} dτ.

Obviously, the inverse transformation is also valid:

R_x(τ) = (1/2π) ∫_{−∞}^{∞} S_x(ω) e^{jωτ} dω.

The energy spectrum shows the power distribution of a random process, such as interference, on the frequency axis.
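The correlation-function/energy-spectrum pair can be illustrated numerically. The exponential correlation function below and its closed-form spectrum are a standard textbook pair, used here as an assumed example rather than one from this text.

```python
import numpy as np

# Sketch of the Wiener-Khinchin pair: k(tau) = D*exp(-alpha*|tau|)
# has the spectrum S(w) = 2*D*alpha/(alpha^2 + w^2).
D, alpha = 1.0, 2.0
dt, n = 0.01, 2**14
tau = (np.arange(n) - n // 2) * dt
k = D * np.exp(-alpha * np.abs(tau))

S = np.abs(np.fft.fftshift(np.fft.fft(np.fft.ifftshift(k))) * dt)  # numeric FT
w = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(n, dt))
S_exact = 2 * D * alpha / (alpha**2 + w**2)
print(np.max(np.abs(S - S_exact)))   # small residual, up to discretization error
```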

When analyzing an automatic control system (ACS), it is very important to determine the characteristics of the random process at the output of a linear system when the characteristics of the process at the input are known. Let us assume that the linear system is given by its impulse response h(t). Then the output signal at the moment of time t is determined by the Duhamel (convolution) integral

y(t) = ∫_0^t h(s) x(t − s) ds,

where x(t) is the process at the system input. To find the correlation function of the output, we write the product y(t)y(t′) as a double integral and, after multiplication, take the mathematical expectation.

1) A discrete random variable takes separate, isolated values. Example:

– the number of boys among 10 newborns.

It is absolutely clear that this number is not known in advance, and the next ten children born may include:

either 0, or 1, or 2, …, or 10 boys – one and only one of the listed options.

And, in order to keep in shape, a little physical education:

– long jump distance (in some units).

Even a master of sports cannot predict it :)

However, your hypotheses?

2) A continuous random variable takes ALL numerical values from some finite or infinite interval.

Note: the abbreviations DSV and NSV (for discrete and continuous random variables) are popular in the educational literature.

First, let's analyze the discrete random variable, then - continuous.

Distribution law of a discrete random variable

This is the correspondence between the possible values of this quantity and their probabilities. Most often, the law is written in a table:

x_i | x_1 | x_2 | … | x_n
p_i | p_1 | p_2 | … | p_n

The term distribution series also appears quite often, but in some situations it sounds ambiguous, so I will stick to the "law".

And now a very important point: since a random variable will NECESSARILY take one of the values, the corresponding events form a full group, and the sum of the probabilities of their occurrence equals one:

p_1 + p_2 + … + p_n = 1,

or, written condensed:

Σ p_i = 1.

So, for example, the law of distribution of the points rolled on a die has the following form:

x_i | 1 | 2 | 3 | 4 | 5 | 6
p_i | 1/6 | 1/6 | 1/6 | 1/6 | 1/6 | 1/6

No comments.

You may be under the impression that a discrete random variable can only take on “good” integer values. Let's dispel the illusion - they can be anything:

Example 1

Some game has the following winning distribution law:

...you've probably dreamed of such tasks for a long time :) I'll tell you a secret - me too. Especially after I finished working on field theory.

Solution: since the random variable can take only one of three values, the corresponding events form a full group, which means the sum of their probabilities equals one. Exposing the "partisan" (solving for the unknown probability):

– thus, the probability of winning the conventional units is 0.4.

Control: the probabilities sum to one – and that is exactly what we needed to make sure of.

Answer: the desired distribution law, with the found probability 0.4 filled in.

It is not uncommon that you need to draw up a distribution law yourself. For this one uses the classical definition of probability, the multiplication/addition theorems for event probabilities, and other tricks of probability theory:

Example 2

The box contains 50 lottery tickets, among which 12 are winning, and 2 of them win 1000 rubles each, and the rest - 100 rubles each. Draw up a law for the distribution of a random variable - the size of the winnings, if one ticket is drawn at random from the box.

Solution: as you noticed, the values of a random variable are usually placed in ascending order. Therefore, we start with the smallest winnings, namely 0 rubles.

There are 50 − 12 = 38 such tickets in total, and according to the classical definition:
P(X = 0) = 38/50 = 0.76 – the probability that a randomly drawn ticket will be a loser.

In the other cases everything is simple. The probability of winning 100 rubles is P(X = 100) = 10/50 = 0.2, and of winning 1000 rubles P(X = 1000) = 2/50 = 0.04.

Check: 0.76 + 0.2 + 0.04 = 1 – and this is a particularly pleasant moment of such tasks!

Answer: the desired law of distribution of winnings:

x_i | 0 | 100 | 1000
p_i | 0.76 | 0.2 | 0.04
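The same law can be produced programmatically; the sketch below just re-derives the table with exact fractions.

```python
from fractions import Fraction

# Example 2: 50 tickets, 12 winning (2 pay 1000 rub., 10 pay 100 rub.).
total = 50
law = {0: Fraction(total - 12, total),    # 38/50 = 0.76
       100: Fraction(10, total),         # 10/50 = 0.20
       1000: Fraction(2, total)}         # 2/50  = 0.04

assert sum(law.values()) == 1            # the events form a full group
for value, p in law.items():
    print(value, float(p))
```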

The following task is for you to solve on your own:

Example 3

The probability that the shooter will hit the target is a given value p. Draw up a distribution law for a random variable – the number of hits after 2 shots.

...I knew that you missed him :) Let's remember multiplication and addition theorems. The solution and answer are at the end of the lesson.
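If you want to check your answer afterwards, here is a brute-force sketch; the hit probability p = 0.7 is a placeholder, since the original value did not survive in this copy.

```python
from itertools import product

p = 0.7   # assumed hit probability, for illustration only
law = {0: 0.0, 1: 0.0, 2: 0.0}
for shot1, shot2 in product([0, 1], repeat=2):          # all outcomes of 2 shots
    prob = (p if shot1 else 1 - p) * (p if shot2 else 1 - p)
    law[shot1 + shot2] += prob                          # X = number of hits

print(law)                      # {0: (1-p)^2, 1: 2p(1-p), 2: p^2}
print(sum(law.values()))        # 1.0 - full group check
```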

The distribution law completely describes a random variable, but in practice it can be useful (and sometimes even more useful) to know only some of its numerical characteristics.

Expectation of a discrete random variable

In simple terms, this is the average expected value when testing is repeated many times. Let the random variable take values x_1, x_2, …, x_n with probabilities p_1, p_2, …, p_n respectively. Then the mathematical expectation of this random variable is equal to the sum of the products of all its values by the corresponding probabilities:

M(X) = x_1 p_1 + x_2 p_2 + … + x_n p_n,

or, collapsed:

M(X) = Σ x_i p_i.

Let us calculate, for example, the mathematical expectation of a random variable – the number of points rolled on a die:

M(X) = 1·(1/6) + 2·(1/6) + 3·(1/6) + 4·(1/6) + 5·(1/6) + 6·(1/6) = 3.5

Now let's remember our hypothetical game:

The question arises: is it profitable to play this game at all? ...who has any impressions? You can't say it "offhand"! But this question can be easily answered by calculating the mathematical expectation, which is essentially the probability-weighted average of the winnings.

Thus, the mathematical expectation of this game is negative: the game is losing for the player.

Don't trust your impressions - trust the numbers!

Yes, here you can win 10 and even 20-30 times in a row, but in the long run we will face inevitable ruin. And I wouldn't advise you to play such games :) Well, maybe only for fun.

From all of the above it follows that the mathematical expectation is no longer a RANDOM value.

Creative task for independent research:

Example 4

Mr. X plays European roulette using the following system: he constantly bets 100 rubles on "red". Draw up the law of distribution of a random variable – his winnings. Calculate the mathematical expectation of the winnings and round it to the nearest kopeck. How much on average does the player lose for every hundred he bets?

Reference: European roulette contains 18 red, 18 black and 1 green sector ("zero"). If "red" comes up, the player is paid double the bet; otherwise the bet goes to the casino's income.

There are many other roulette systems for which you can create your own probability tables. But this is the case when we do not need any distribution laws or tables, because it has been established for certain that the player's mathematical expectation will be exactly the same. The only thing that changes from system to system is the spread of the winnings.
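For the flat-bet system of Example 4 the expectation is easy to compute directly; a small sketch, using the payout convention from the reference above:

```python
from fractions import Fraction

# European roulette, flat 100-ruble bet on red: 18 red, 18 black, 1 zero.
# Winnings per spin: +100 with probability 18/37, otherwise -100.
p_red = Fraction(18, 37)
expectation = 100 * p_red + (-100) * (1 - p_red)

print(float(expectation))            # ~ -2.70 rubles
print(round(float(expectation), 2))  # the player loses about 2.70 rub. per 100 bet
```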

Here, we will briefly consider the main issues of systematization (classification) of random processes.

A random process occurring (taking place) in any physical system represents random transitions of the system from one state to another. Depending on the set of these states and on the set of values of the argument t, all random processes are divided into the following classes (groups):

1. Discrete process ( discrete state) with discrete time.

2. Discrete process with continuous time.

3. Continuous process (continuous state) with discrete time.

4. Continuous process with continuous time.

In the 1st and 3rd cases the set T is discrete, i.e. the argument t takes discrete values, usually t = 0, 1, 2, … In the 1st case the set of values of the random function is a discrete set (finite or countable).

In the 3rd case the set of values is uncountable, i.e. the section of the random process at any moment of time is a continuous random variable.

In the 2nd and 4th cases the set T is continuous; in the 2nd case the set of states of the system is finite or countable, while in the 4th case the set of states is uncountable.

Let us give some examples of random processes of classes 1-4, respectively:

1. A hockey player may or may not score one or more goals into the opponent's goal during matches played at certain (scheduled) moments of time. The random process X(t) here is the number of goals scored up to the moment t.

2. The random process X(t) is the number of films shown at the Zvezda cinema from the opening of the cinema to the moment of time t.

3. At certain moments of time, the temperature of a patient is measured in some treatment center. This X(t) is a random process of continuous type with discrete time.

4. Indicator of air humidity level during the day in city A.

Other more complex classes of random processes can also be considered. For each class of random processes, appropriate methods for studying them are developed.

A number of varied and interesting examples of random processes can be found in the textbook by W. Feller [parts 1, 2] and in monographs on the subject. Here we will limit ourselves to the above.

For random processes, simple functional characteristics are also introduced, depending on the parameter , similar to the basic numerical characteristics of random variables.

Knowledge of these characteristics is sufficient for solving many problems (recall that a complete characterization of a random process is given by its multidimensional (finite-dimensional) distribution laws).

In contrast to the numerical characteristics of random variables, in the general case the functional characteristics are specific functions.

4. Mathematical expectation and variance of a random process

The mathematical expectation of a random process X(t), defined for any fixed value of the argument t, is equal to the mathematical expectation of the corresponding section of the random process:

m_x(t) = M[X(t)].   (12)

To denote the mathematical expectation of an s.p. briefly, the designation m_x(t) is also used.

The function m_x(t) characterizes the behavior of the random process on average. Geometrically, the mathematical expectation is interpreted as the "average curve" around which the realization curves are located (see Fig. 60).

Based on the properties of the mathematical expectation of a random variable, and taking into account that X(t) is a random process and φ(t) a non-random function, we obtain the properties of the mathematical expectation of a random process:

1. The mathematical expectation of a non-random function is equal to the function itself:
M[φ(t)] = φ(t).

2. A non-random multiplier (non-random function) can be taken out of the sign of the mathematical expectation of a random process, i.e.
M[φ(t)X(t)] = φ(t)M[X(t)] = φ(t)m_x(t).

3. The mathematical expectation of the sum (difference) of two random processes is equal to the sum (difference) of the mathematical expectations of the terms, i.e.
M[X(t) ± Y(t)] = m_x(t) ± m_y(t).

Note that if we fix the argument (parameter) t, then we pass from a random process to a random variable (i.e. to the section of the random process), and we can find the m.e. of the process at this fixed t.

Indeed, if the section of the s.p. X(t) for a given t is a continuous r.v. with density φ(x, t), then its mathematical expectation can be calculated using the formula

m_x(t) = ∫_{−∞}^{∞} x φ(x, t) dx.   (13)

Example 2. Let the s.p. X(t) be given by a formula of the form X(t) = φ(t)V, where φ(t) is a non-random function and V is an r.v. with known mathematical expectation M(V).

Find the mathematical expectation of the random process X(t).

Solution. By property 2 we have

m_x(t) = M[φ(t)V] = φ(t)M(V),

because V is a random variable and φ(t) is a non-random multiplier.

Exercise. Use these equalities to calculate the mathematical expectation directly, and then, based on formula (13), calculate the integral and make sure that the result is the same.

Note. Take advantage of the equality M(V) = ∫ v f(v) dv, where f(v) is the density of V.

Variance of a random process.

The variance of a random process X(t) is the non-random function

D_x(t) = M[(X(t) − m_x(t))²].

The dispersion of the s.p., as for random variables, characterizes the spread (scattering) of the possible values of the s.p. relative to its mathematical expectation.

Along with the dispersion of the s.p., the standard deviation (s.d. for short) is also considered, which is determined by the equality

σ_x(t) = √(D_x(t)).   (15)

The dimension of the function σ_x(t) is equal to the dimension of the s.p. X(t).

The values of the realizations of the s.p. at every t deviate from the mathematical expectation m_x(t) by an amount of the order of σ_x(t) (see Fig. 60).

Let us note the simplest properties of the dispersion of random processes.

1. The variance of a non-random function φ(t) is equal to zero, i.e.
D[φ(t)] = 0.

2. The variance of a random process is non-negative, i.e.
D_x(t) ≥ 0.

3. The variance of the product of a non-random function φ(t) by a random function X(t) is equal to the product of the square of the non-random function and the variance of the random function, i.e.
D[φ(t)X(t)] = φ²(t)D_x(t).

4. The variance of the sum of an s.p. X(t) and a non-random function φ(t) is equal to the variance of the s.p., i.e.
D[X(t) + φ(t)] = D_x(t).

Example 3. Let the s.p. X(t) be given by a formula of the form X(t) = φ(t)V, where V is an r.v.

distributed according to the normal law with D(V) = σ².

Find the variance and standard deviation of the s.p. X(t).

Solution. Let us calculate the variance based on the formula from property 3. We have

D_x(t) = D[φ(t)V] = φ²(t)D(V).

But D(V) = σ², therefore, by the definition of the dispersion of the r.v.,

D_x(t) = φ²(t)σ², i.e. σ_x(t) = |φ(t)|σ.
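Properties 3 and 4 are easy to illustrate numerically. In the sketch below, φ(t) = cos t and a normal V are our own assumptions, standing in for the example's lost formula.

```python
import numpy as np

# Numerical illustration of dispersion properties 3 and 4 for X(t) = phi(t)*V.
rng = np.random.default_rng(2)
sigma = 2.0
V = rng.normal(0.0, sigma, size=200_000)

for t in (0.0, 0.5, 1.0):
    phi = np.cos(t)
    samples = phi * V                           # section of the s.p. at time t
    print(np.var(samples), phi**2 * sigma**2)   # property 3: D = phi^2(t)*D(V)
    print(np.var(samples + 10.0))               # property 4: shift leaves D unchanged
```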

Even when a random process is considered as a system of only three or four random variables, difficulties arise in the analytical expression of its distribution laws. Therefore, in a number of cases one is limited to characteristics of the random process similar to the numerical characteristics of random variables.

The characteristics of a random process, in contrast to the numerical characteristics of random variables, are non-random functions. Among them, the functions of mathematical expectation and dispersion of a random process, as well as the correlation function of a random process, are widely used to evaluate a random process.

The mathematical expectation of the random process X(t) is the non-random function m_x(t) which, for each value of the argument t, is equal to the mathematical expectation of the corresponding section of the random process:

m_x(t) = M[X(t)].

From the definition of the mathematical expectation of a random process it follows that if the one-dimensional probability density f(x, t) is known, then

m_x(t) = ∫_{−∞}^{∞} x f(x, t) dx.   (6.3)

A random process X(t) can always be represented as a sum of elementary random functions,

X(t) = Σ_k V_k φ_k(t),

where V_k φ_k(t) is an elementary random function. Then

m_x(t) = Σ_k φ_k(t) M(V_k).   (6.4)

If a set of realizations of a random process X(t) is given, then for a graphical representation of the mathematical expectation a series of sections is drawn, in each of them the corresponding mathematical expectation (mean value) is found, and a curve is then drawn through these points (Fig. 6.3).

Figure 6.3 – Graph of the mathematical expectation function

The more sections are made, the more accurately the curve will be constructed.

The expected value of a random process is a non-random function around which the realizations of the random process are grouped.

If the implementations of a random process are current or voltage, then the mathematical expectation is interpreted as the average value of the current or voltage.

The variance of the random process X(t) is the non-random function D_x(t) which, for each value of the argument t, is equal to the dispersion of the corresponding section of the random process:

D_x(t) = M[(X(t) − m_x(t))²].

From the definition of the variance of a random process it follows that if the one-dimensional probability density f(x, t) is known, then

D_x(t) = ∫_{−∞}^{∞} (x − m_x(t))² f(x, t) dx, or D_x(t) = ∫_{−∞}^{∞} x² f(x, t) dx − m_x²(t).   (6.5)

If the random process is represented in the form X(t) = Σ_k V_k φ_k(t), then

D_x(t) = Σ_k φ_k²(t) D(V_k).   (6.6)

The dispersion of a random process characterizes the spread or dispersion of implementations relative to the mathematical expectation function.

If the realizations of a random process are a current or a voltage, then the variance is interpreted as the difference between the power of the entire process and the power of its average component in a given section, i.e.

D_x(t) = M[X²(t)] − m_x²(t).   (6.7)

In some cases, instead of the variance of a random process, the standard deviation of the random process is used:

σ_x(t) = √(D_x(t)).

The mathematical expectation and dispersion of a random process make it possible to identify the form of the average function around which the realizations of the random process are grouped and to estimate their spread relative to this function. However, the internal structure of the random process, i.e. the nature and degree of dependence (connection) between various sections of the process, remains unknown (Fig. 6.4).

Figure 6.4 – Realizations of random processes X(t) and Y(t)

To characterize the connection between the sections of a random process, the concept of a mixed second-order moment function is introduced – the correlation function.

The correlation function of a random process X(t) is the non-random function R_x(t_1, t_2) which, for each pair of values t_1, t_2, is equal to the correlation moment of the corresponding sections of the random process:

R_x(t_1, t_2) = M[X̊(t_1)X̊(t_2)], where X̊(t) = X(t) − m_x(t).

The connection (see Fig. 6.4) between the sections of the random process X(t) is stronger than between the sections of the random process Y(t), i.e.

R_x(t_1, t_2) > R_y(t_1, t_2).

From the definition it follows that if the two-dimensional probability density f(x_1, x_2; t_1, t_2) of the random process X(t) is given, then

R_x(t_1, t_2) = ∫∫ (x_1 − m_x(t_1))(x_2 − m_x(t_2)) f(x_1, x_2; t_1, t_2) dx_1 dx_2.

The correlation function is the set of correlation moments of two random variables taken at moments t_1 and t_2, where both moments are considered for any combination of possible values of the argument t of the random process. Thus, the correlation function characterizes the statistical relationship between the instantaneous values of the process at different moments of time.

Properties of the correlation function.

1) If t_1 = t_2 = t, then R_x(t, t) = D_x(t). Consequently, the variance of a random process is a special case of the correlation function.
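In practice the correlation function is estimated from an ensemble of realizations, section by section. The sketch below does this for an illustrative process of our own choosing; the diagonal of the estimated matrix reproduces the variance, in line with property 1.

```python
import numpy as np

# Sketch: estimating R_x(t_i, t_j) from an ensemble of realizations.
rng = np.random.default_rng(3)
n_real, n_t = 5_000, 100
t = np.linspace(0.0, 1.0, n_t)
# realizations: random amplitude times cos(2*pi*t) plus small noise
X = rng.normal(1, 0.3, (n_real, 1)) * np.cos(2 * np.pi * t) \
    + 0.1 * rng.normal(size=(n_real, n_t))

Xc = X - X.mean(axis=0)                 # centered sections
R = Xc.T @ Xc / (n_real - 1)            # R[i, j] ~ M[X°(t_i) X°(t_j)]
print(R[0, 0])                          # diagonal gives the variance D_x(t_0)
```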