Uniform continuous distribution. Uniform probability distribution

As an example of a continuous random variable, consider a random variable X uniformly distributed over the interval (a; b). The random variable X is said to be uniformly distributed on the interval (a; b) if its distribution density is constant on this interval and equal to zero outside it:

$f(x)=\begin{cases}c, & x\in (a;b)\\ 0, & x\notin (a;b)\end{cases}$
From the normalization condition we determine the value of the constant c: the area under the distribution density curve must equal one, and in our case this is the area of a rectangle with base (b − a) and height c (Fig. 1).

Fig. 1. Uniform distribution density
From here we find the value of the constant c:

$c=\frac{1}{b-a}$
So, the density of a uniformly distributed random variable is equal to

$f(x)=\begin{cases}\frac{1}{b-a}, & x\in (a;b)\\ 0, & x\notin (a;b)\end{cases}$
Let us now find the distribution function using the formula $F(x)=\int\limits_{-\infty }^{x}f(t)\,dt$:
1) for $x\le a$: $F(x)=0$;
2) for $a<x\le b$: $F(x)=\frac{x-a}{b-a}$;
3) for $x>b$: $F(x)=0+1+0=1$.
Thus,

$F(x)=\begin{cases}0, & x\le a\\ \frac{x-a}{b-a}, & a<x\le b\\ 1, & x>b\end{cases}$
The distribution function is continuous and non-decreasing (Fig. 2).

Fig. 2. Distribution function of a uniformly distributed random variable

We now find the mathematical expectation of a uniformly distributed random variable using the formula:

$M(X)=\int\limits_{a}^{b}\frac{x}{b-a}\,dx=\frac{a+b}{2}$
The variance of the uniform distribution is calculated by the formula and equals

$D(X)=\int\limits_{a}^{b}\frac{x^{2}}{b-a}\,dx-\left(\frac{a+b}{2}\right)^{2}=\frac{(b-a)^{2}}{12}$
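As an illustrative check (Python is assumed here, with arbitrary example limits a = 2 and b = 10), a large uniform sample can be compared against these formulas:

```python
import random

# Parameters of the uniform distribution (arbitrary example values)
a, b = 2.0, 10.0

# Theoretical characteristics from the formulas above
m_theor = (a + b) / 2            # M(X) = (a + b) / 2
d_theor = (b - a) ** 2 / 12      # D(X) = (b - a)^2 / 12

# Monte Carlo check
n = 100_000
xs = [a + (b - a) * random.random() for _ in range(n)]
m_emp = sum(xs) / n
d_emp = sum((x - m_emp) ** 2 for x in xs) / n

print(f"M(X): theory {m_theor:.3f}, simulation {m_emp:.3f}")
print(f"D(X): theory {d_theor:.3f}, simulation {d_emp:.3f}")
```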

Example No. 1. The scale division of a measuring instrument is 0.2. Instrument readings are rounded to the nearest whole division. Find the probability that the rounding error is: a) less than 0.04; b) greater than 0.02.
Solution. The rounding error is a random variable uniformly distributed over the interval between adjacent whole divisions. Let us take the interval (0; 0.2) as such a division (Fig. a). Rounding can be carried out both towards the left boundary, 0, and towards the right boundary, 0.2, so an error less than or equal to 0.04 can occur on either side, which must be taken into account when calculating the probability:

$P=P(X<0.04)+P(X>0.16)=\frac{0.04}{0.2}+\frac{0.04}{0.2}=0.2+0.2=0.4$

For the second case, the error exceeds 0.02 when the reading lies more than 0.02 away from both division boundaries, that is, when it is greater than 0.02 and less than 0.18.

Then the probability of such an error is:

$P=P(0.02<X<0.18)=\frac{0.18-0.02}{0.2}=0.8$

Example No. 2. It was assumed that the stability of the economic situation in the country (absence of wars, natural disasters, etc.) over the past 50 years can be judged by the nature of the population's age distribution: in a calm environment it should be uniform. As a result of the study, the following data were obtained for one of the countries.

Is there any reason to believe that there was instability in the country?

We carry out the solution using a calculator. Testing hypotheses. Table for calculating indicators.

| Groups | Interval midpoint, x_i | Quantity, f_i | x_i·f_i | Accumulated frequency, S | \|x − x̄\|·f | (x − x̄)²·f | Frequency, f_i/n |
|---|---|---|---|---|---|---|---|
| 0 – 10 | 5 | 0.14 | 0.7 | 0.14 | 5.32 | 202.16 | 0.14 |
| 10 – 20 | 15 | 0.09 | 1.35 | 0.23 | 2.52 | 70.56 | 0.09 |
| 20 – 30 | 25 | 0.1 | 2.5 | 0.33 | 1.8 | 32.4 | 0.1 |
| 30 – 40 | 35 | 0.08 | 2.8 | 0.41 | 0.64 | 5.12 | 0.08 |
| 40 – 50 | 45 | 0.16 | 7.2 | 0.57 | 0.32 | 0.64 | 0.16 |
| 50 – 60 | 55 | 0.13 | 7.15 | 0.7 | 1.56 | 18.72 | 0.13 |
| 60 – 70 | 65 | 0.12 | 7.8 | 0.82 | 2.64 | 58.08 | 0.12 |
| 70 – 80 | 75 | 0.18 | 13.5 | 1 | 5.76 | 184.32 | 0.18 |
| Total |  | 1 | 43 |  | 20.56 | 572 | 1 |
Distribution center indicators.
Weighted average:

$\bar{x}=\frac{\sum x_{i}f_{i}}{\sum f_{i}}=\frac{43}{1}=43$
Variation indicators.
Absolute variations.
The range of variation is the difference between the maximum and minimum values of the characteristic of the primary series.
R = X max - X min
R = 70 - 0 = 70
Variance characterizes the measure of spread of the data around the average value (i.e., deviation from the average):

$D=\frac{\sum (x_{i}-\bar{x})^{2}f_{i}}{\sum f_{i}}=\frac{572}{1}=572$
Standard deviation:

$\sigma =\sqrt{D}=\sqrt{572}\approx 23.92$
On average, each value of the series differs from the mean value of 43 by 23.92.
Testing hypotheses about the type of distribution.
Testing the hypothesis of a uniform distribution of the general population.
In order to test the hypothesis that X is uniformly distributed, i.e., according to the law f(x) = 1/(b − a) on the interval (a, b), it is necessary to:

1. Estimate the parameters a and b — the ends of the interval in which the possible values of X were observed — using the formulas (the * sign denotes parameter estimates):

$a^{*}=\bar{x}-\sqrt{3}\,\sigma ,\qquad b^{*}=\bar{x}+\sqrt{3}\,\sigma $
2. Find the probability density of the assumed distribution f(x) = 1/(b* − a*).
3. Find the theoretical frequencies:
$n_{1}=nP_{1}=n\cdot \frac{1}{b^{*}-a^{*}}(x_{1}-a^{*})$
$n_{2}=n_{3}=\dots =n_{s-1}=n\cdot \frac{1}{b^{*}-a^{*}}(x_{i}-x_{i-1})$
$n_{s}=n\cdot \frac{1}{b^{*}-a^{*}}(b^{*}-x_{s-1})$
4. Compare the empirical and theoretical frequencies using the Pearson criterion, taking the number of degrees of freedom k = s − 3, where s is the number of initial sampling intervals; if small frequencies (and hence the corresponding intervals) were combined, then s is the number of intervals remaining after the combination.

Solution:
1. Find estimates of the parameters a* and b* of the uniform distribution using the formulas:

$a^{*}=\bar{x}-\sqrt{3}\,\sigma =43-\sqrt{3}\cdot 23.92\approx 1.58$
$b^{*}=\bar{x}+\sqrt{3}\,\sigma =43+\sqrt{3}\cdot 23.92\approx 84.42$
2. Find the density of the assumed uniform distribution:
f(x) = 1/(b* − a*) = 1/(84.42 − 1.58) = 0.0121
3. Let's find the theoretical frequencies:
$n_{1}=n\cdot f(x)\cdot (x_{1}-a^{*})=1\cdot 0.0121\cdot (10-1.58)=0.1$
$n_{8}=n\cdot f(x)\cdot (b^{*}-x_{7})=1\cdot 0.0121\cdot (84.42-70)=0.17$
The remaining theoretical frequencies are equal to:
$n_{i}=n\cdot f(x)\cdot (x_{i}-x_{i-1})=1\cdot 0.0121\cdot 10=0.12$

| i | n_i | n*_i | n_i − n*_i | (n_i − n*_i)² | (n_i − n*_i)²/n*_i |
|---|---|---|---|---|---|
| 1 | 0.14 | 0.1 | 0.0383 | 0.00147 | 0.0144 |
| 2 | 0.09 | 0.12 | -0.0307 | 0.000943 | 0.00781 |
| 3 | 0.1 | 0.12 | -0.0207 | 0.000429 | 0.00355 |
| 4 | 0.08 | 0.12 | -0.0407 | 0.00166 | 0.0137 |
| 5 | 0.16 | 0.12 | 0.0393 | 0.00154 | 0.0128 |
| 6 | 0.13 | 0.12 | 0.0093 | 8.6E-5 | 0.000716 |
| 7 | 0.12 | 0.12 | -0.000701 | 0 | 4.0E-6 |
| 8 | 0.18 | 0.17 | 0.00589 | 3.5E-5 | 0.000199 |
| Total | 1 |  |  |  | 0.0532 |
Let us determine the boundary of the critical region. Since the Pearson statistic measures the difference between the empirical and theoretical distributions, the larger its observed value K_obs, the stronger the argument against the main hypothesis; therefore, the critical region for this statistic is always right-sided. Here K_obs = 0.0532, while for k = s − 3 = 5 degrees of freedom and the usual significance level 0.05 the critical value is about 11.07, so the hypothesis of a uniform distribution is not rejected: there is no reason to believe that the situation in the country was unstable.
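The chi-square calculation above can be reproduced with a short script. The sketch below assumes Python and uses the rounded values x̄ = 43 and σ = 23.92, so it matches the table only up to rounding:

```python
import math

# Empirical relative frequencies by age group (from the table above)
emp = [0.14, 0.09, 0.10, 0.08, 0.16, 0.13, 0.12, 0.18]
edges = [0, 10, 20, 30, 40, 50, 60, 70, 80]   # group boundaries

# Parameter estimates of the assumed uniform distribution
x_mean, sigma = 43.0, 23.92
a_star = x_mean - math.sqrt(3) * sigma
b_star = x_mean + math.sqrt(3) * sigma
f = 1.0 / (b_star - a_star)

# Theoretical frequencies: the first interval starts at a*, the last ends at b*
theor = [f * (edges[1] - a_star)]                                # n_1
theor += [f * (edges[i + 1] - edges[i]) for i in range(1, 7)]    # n_2 .. n_7
theor.append(f * (b_star - edges[7]))                            # n_8

chi2 = sum((e - t) ** 2 / t for e, t in zip(emp, theor))
print(f"a* = {a_star:.2f}, b* = {b_star:.2f}, f(x) = {f:.4f}")
print(f"Pearson statistic (computed with n = 1): {chi2:.4f}")
```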

Thus, the uniform distribution density function has the form:

$\varphi \left(x\right)=\begin{cases} 0, & x\le a \\ \frac{1}{b-a}, & a<x\le b \\ 0, & x>b \end{cases}$

The graph looks like this (Figure 3):

Figure 3. Uniform probability distribution density

Uniform probability distribution function

Let us now find the distribution function for uniform distribution.

To do this, we will use the following formula: $F\left(x\right)=\int\limits^{x}_{-\infty }\varphi (x)\,dx$

  1. For $x\le a$, according to the formula, we get: $F\left(x\right)=0$.
  2. For $a<x\le b$, according to the formula, we get: $F\left(x\right)=\frac{x-a}{b-a}$.
  3. For $x>b$, according to the formula, we get: $F\left(x\right)=1$.

Thus, the distribution function looks like:

$F\left(x\right)=\begin{cases} 0, & x\le a \\ \frac{x-a}{b-a}, & a<x\le b \\ 1, & x>b \end{cases}$

The graph looks like this (Figure 5):

Figure 5. Uniform probability distribution function.

Probability of a random variable falling into the interval $(\alpha ,\beta )$ with a uniform probability distribution

To find the probability that a random variable with a uniform probability distribution falls into the interval $(\alpha ,\beta )$, we will use the following formula:

$P\left(\alpha <X<\beta \right)=\frac{\beta -\alpha }{b-a}$

Mathematical expectation:

$M\left(X\right)=\frac{a+b}{2}$

Standard deviation:

$\sigma \left(X\right)=\frac{b-a}{2\sqrt{3}}$
Examples of solving the problem of uniform probability distribution

Example 1

The interval between trolleybuses is 9 minutes.

    Compose the distribution function and the distribution density of the random variable $X$, the waiting time of a trolleybus passenger.

    Find the probability that a passenger will wait less than three minutes for a trolleybus.

    Find the probability that a passenger will wait at least 4 minutes for a trolleybus.

    Find the expected value, variance and standard deviation.

  1. Since the continuous random variable $X$, the waiting time for a trolleybus, is uniformly distributed, we have $a=0,\ b=9$.

Thus, the distribution density, according to the formula of the uniform probability distribution density function, has the form:

$\varphi \left(x\right)=\begin{cases} 0, & x\le 0 \\ \frac{1}{9}, & 0<x\le 9 \\ 0, & x>9 \end{cases}$

According to the formula of the uniform probability distribution function, in our case the distribution function has the form:

$F\left(x\right)=\begin{cases} 0, & x\le 0 \\ \frac{x}{9}, & 0<x\le 9 \\ 1, & x>9 \end{cases}$

  2. This question can be reformulated as follows: find the probability that the uniformly distributed random variable falls into the interval $\left(6,9\right)$.

We get:

$P\left(6<X<9\right)=\frac{9-6}{9}=\frac{1}{3}$
A random variable X is said to be uniformly distributed on the segment (a, b) if on this segment its distribution density is constant, and outside it it is equal to 0.

The uniform distribution curve is shown in Fig. 3.13.

Fig. 3.13.

The values of f(x) at the end points a and b of the interval (a, b) are not indicated, since the probability that the continuous random variable X hits either of these particular points is 0.

The mathematical expectation of a random variable X having a uniform distribution on [a, b] is m = (a + b)/2. The variance is calculated using the formula D = (b − a)²/12, hence σ = (b − a)/√12 ≈ (b − a)/3.464.

Modeling of random variables. To model a random variable, you need to know its distribution law. The most general way to obtain a sequence of random numbers distributed according to an arbitrary law is a method based on their formation from an initial sequence of random numbers distributed in the interval (0; 1) according to a uniform law.

Sequences of random numbers uniformly distributed in the interval (0; 1) can be obtained in three ways:

  • using specially prepared tables of random numbers;
  • using physical random number generators (for example, tossing a coin);
  • algorithmic method.

For such numbers, the mathematical expectation should equal 0.5 and the variance 1/12. If a random number x is needed in an interval (a; b) other than (0; 1), the formula x = a + (b − a)·r should be used, where r is a random number from the interval (0; 1).
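A minimal sketch of this rescaling, assuming Python and arbitrary interval limits:

```python
import random

def uniform_ab(a: float, b: float) -> float:
    """Map a base random number r from (0; 1) to the interval (a; b)."""
    r = random.random()          # r ~ U(0; 1), E[r] = 0.5, D[r] = 1/12
    return a + (b - a) * r       # x = a + (b - a) * r

# Example: numbers uniformly distributed between 3 and 9
sample = [uniform_ab(3, 9) for _ in range(5)]
print(sample)
```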

Due to the fact that almost all models are implemented on a computer, an algorithmic generator (RNG) built into the computer is almost always used to obtain random numbers, although it is not a problem to use tables that have previously been converted into electronic form. It should be taken into account that using the algorithmic method we always obtain pseudo-random numbers, since each subsequent generated number depends on the previous one.

In practice it is always necessary to obtain random numbers distributed according to a given distribution law. A variety of methods are used for this purpose. If the analytical expression for the distribution function F is known, the inverse function method can be used.

It is enough to draw a random number r uniformly distributed in the range from 0 to 1. Since the function F also varies in this interval, the random number x can be determined by taking the inverse function, graphically or analytically: x = F⁻¹(r). Here r is the number generated by the RNG in the range from 0 to 1, and x is the resulting random variable. Graphically, the essence of the method is shown in Fig. 3.14.


Fig. 3.14. Illustration of the inverse function method for generating random variables X whose values are distributed continuously. The figure shows the graphs of the probability density and the cumulative distribution function of X

Let us consider the exponential distribution law as an example. The distribution function of this law has the form F(x) = 1 − exp(−λx). Since r and F in this method are equated and lie in the same interval, replacing F with the random number r we have r = 1 − exp(−λx). Expressing the required quantity x from this expression (i.e., inverting the exponential), we obtain x = −(1/λ)·ln(1 − r). Since in the statistical sense (1 − r) and r are the same thing, x = −(1/λ)·ln(r).
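A sketch of the inverse function method for the exponential law, assuming Python and an arbitrary value of the parameter λ:

```python
import math
import random

def exponential_inverse(lam: float) -> float:
    """Draw an exponentially distributed value by the inverse function method."""
    r = 1.0 - random.random()        # r in (0; 1], avoids log(0)
    return -math.log(r) / lam        # x = -(1/lambda) * ln(r)

lam = 2.0                            # arbitrary example value of the parameter
sample = [exponential_inverse(lam) for _ in range(100_000)]
print("sample mean:", sum(sample) / len(sample), "theoretical mean:", 1 / lam)
```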

Algorithms for modeling some common distribution laws of continuous random variables are given in Table 3.10.

For example, suppose it is necessary to model a loading time that is distributed according to the normal law. It is known that the average loading duration is 35 minutes and the standard deviation of the real time from the average value is 10 minutes; that is, according to the conditions of the problem, m_x = 35, σ_x = 10. Then the quantity R = Σr_i is calculated, where r_i are random numbers from the RNG in the range (0; 1) and n = 12. The number 12 was chosen as large enough on the basis of the central limit theorem of probability theory (Lyapunov's theorem): for a large number n of random variables with any distribution law, their sum is a random number with a normal distribution law. Then the random value is x = σ_x(R − n/2) + m_x = 10(R − 6) + 35.
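A sketch of this procedure, assuming Python and the loading-time parameters of the example:

```python
import random

def normal_clt(m_x: float, sigma_x: float, n: int = 12) -> float:
    """Approximate a normal variable by summing n uniform random numbers (CLT)."""
    big_r = sum(random.random() for _ in range(n))   # R = sum of r_i, E[R] = n/2
    return sigma_x * (big_r - n / 2) + m_x           # x = sigma_x*(R - n/2) + m_x

# Loading time: mean 35 min, standard deviation 10 min
times = [normal_clt(35, 10) for _ in range(100_000)]
mean = sum(times) / len(times)
var = sum((t - mean) ** 2 for t in times) / len(times)
print(f"mean ≈ {mean:.2f}, std ≈ {var ** 0.5:.2f}")
```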

Table 3.10

Algorithms for modeling random variables

Simulation of a random event. A random event implies that an event has several outcomes, and which particular outcome occurs is determined only by its probability; that is, the outcome is chosen at random with allowance for its probability. For example, suppose the probability of producing a defective product is p = 0.1. The occurrence of this event can be simulated by drawing a uniformly distributed random number from the range 0 to 1 and determining which of the two intervals (from 0 to 0.1 or from 0.1 to 1) it falls into (Fig. 3.15). If the number falls within the interval (0; 0.1), a defective product was produced, i.e., the event occurred; otherwise the event did not occur (a standard product was produced). With a large number of experiments, the frequency of numbers falling in the interval from 0 to 0.1 will approach the probability P = 0.1, and the frequency of numbers falling in the interval from 0.1 to 1 will approach P = 0.9.
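A sketch of this simulation, assuming Python:

```python
import random

P_DEFECT = 0.1    # probability of producing a defective product

def produce_one() -> bool:
    """Simulate one production act: True means a defective product."""
    return random.random() < P_DEFECT   # the number fell into the interval (0; 0.1)

trials = 100_000
defects = sum(produce_one() for _ in range(trials))
print(f"observed defect frequency: {defects / trials:.3f} (expected about {P_DEFECT})")
```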


Fig. 3.15.

Events are called incompatible if the probability of their simultaneous occurrence is 0. It follows that the total probability of a group of incompatible events is equal to 1. Let us denote the events by a1, a2, …, an, and their probabilities of occurrence by P1, P2, …, Pn. Since the events are incompatible, the sum of the probabilities of their occurrence is equal to 1: P1 + P2 + … + Pn = 1. To simulate the occurrence of one of the events, we again use a random number generator, whose value also always lies in the range from 0 to 1. Let us lay off segments P1, P2, …, Pn on the unit interval. Clearly, the sum of the segments forms exactly the unit interval. The point corresponding to the number drawn from the random number generator on this interval will point to one of the segments. Accordingly, random numbers will fall into larger segments more often (the probability of these events is greater!) and into smaller segments less often (Fig. 3.16).

If it is necessary to model joint events, they must first be made incompatible. For example, to simulate the occurrence of events with the given probabilities P(a1) = 0.7, P(a2) = 0.5 and P(a1, a2) = 0.4, we determine all possible incompatible outcomes of the occurrence of events a1, a2 and their simultaneous appearance:

  • 1. Simultaneous occurrence of both events: P(b1) = P(a1, a2) = 0.4.
  • 2. Occurrence of event a1 only: P(b2) = P(a1) − P(a1, a2) = 0.7 − 0.4 = 0.3.
  • 3. Occurrence of event a2 only: P(b3) = P(a2) − P(a1, a2) = 0.5 − 0.4 = 0.1.
  • 4. Occurrence of neither event: P(b4) = 1 − (P(b1) + P(b2) + P(b3)) = 0.2.

Now the probabilities of occurrence of the incompatible events b1, …, b4 must be represented on the number axis in the form of segments. By drawing numbers with the RNG, we determine which interval each of them belongs to and thus obtain a realization of the joint events a1, a2.
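A sketch of this procedure for the four incompatible outcomes obtained above, assuming Python:

```python
import random

# Incompatible outcomes b1..b4 obtained above and their probabilities
outcomes = {
    "a1 and a2": 0.4,   # both events occur
    "a1 only":   0.3,
    "a2 only":   0.1,
    "neither":   0.2,
}

def draw_outcome() -> str:
    """Pick an outcome by locating a U(0;1) number on the segmented unit interval."""
    r = random.random()
    acc = 0.0
    for name, p in outcomes.items():
        acc += p                 # right boundary of the current segment
        if r < acc:
            return name
    return name                  # guard against rounding when r is close to 1

counts = {name: 0 for name in outcomes}
for _ in range(100_000):
    counts[draw_outcome()] += 1
print({name: c / 100_000 for name, c in counts.items()})
```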

Fig. 3.16.

Systems of random variables are often encountered in practice, i.e., two (or more) different random variables X, Y (and others) that depend on each other. For example, if the event X has occurred and taken some random value, then the event Y also occurs, although randomly, but with allowance for the fact that X has already taken some value.

For example, if a large number comes up as X, then a sufficiently large number should also come up as Y (if the correlation is positive, and vice versa if it is negative). In transport, such dependencies occur quite often: longer delays are more likely on routes of significant length, and so on.

If the random variables are dependent, then

$f(\mathbf{x})=f(x_{1})\,f(x_{2}|x_{1})\,f(x_{3}|x_{2},x_{1})\cdot \dots \cdot f(x_{n}|x_{n-1},\dots ,x_{2},x_{1}),$

where $f(x_{i}|x_{i-1},\dots ,x_{1})$ is the conditional density of the occurrence of $x_{i}$ given that $x_{i-1},\dots ,x_{1}$ have already occurred, and $f(\mathbf{x})$ is the probability density of the vector $\mathbf{x}$ of dependent random variables.

The correlation coefficient q shows how closely the events X and Y are related. If the correlation coefficient equals one, the dependence of X and Y is one-to-one: one value of X corresponds to one value of Y (Fig. 3.17, a). For q close to unity, the picture shown in Fig. 3.17, b arises: one value of X may already correspond to several values of Y (more precisely, to one of several values of Y, determined randomly); i.e., in this case the events X and Y are less correlated, less dependent on each other.


Fig. 3.17. Form of the dependence of two random variables with a positive correlation coefficient: a – at q = 1; b – at q close to 1; c – at q close to 0

And finally, when the correlation coefficient tends to zero, a situation arises in which any value of X can correspond to any value of Y, i.e., the events X and Y are independent or almost independent of each other and do not correlate with each other (Fig. 3.17, c).

For example, let us take the normal distribution as the most common one. The mathematical expectation indicates the most probable events; here the number of events is greater and the graph of events is denser. A positive correlation indicates that large random values of X cause the generation of large values of Y. Zero and near-zero correlation shows that the value of the random variable X is in no way related to a specific value of the random variable Y. It is easy to understand what has been said if we first imagine the distributions f(X) and f(Y) separately and then link them into a system, as shown in Fig. 3.18.

In the example under consideration, X and Y are distributed according to the normal law with the corresponding values m_x, σ_x and m_y, σ_y. The correlation coefficient q of the two random events is given, i.e., the random variables X and Y depend on each other: Y is not entirely random.

Then a possible algorithm for implementing the model will be as follows:

1. Six random numbers uniformly distributed over the interval (0; 1) are drawn: b1, b2, b3, b4, b5, b6; their sum S = Σb_i is found. A normally distributed random number x is found using the formula x = σ_x(S − 6) + m_x.

  • 2. Using the formula m_y/x = m_y + q·(σ_y/σ_x)·(x − m_x), the mathematical expectation m_y/x is found (the notation y/x means that y will take random values subject to the condition that x has already taken a specific value).
  • 3. Using the formula σ_y/x = σ_y·√(1 − q²), the standard deviation σ_y/x is found.

4. Twelve random numbers r uniformly distributed over the interval (0; 1) are drawn; their sum k = Σr is found. A normally distributed random number y is found using the formula y = σ_y/x·(k − 6) + m_y/x.
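A sketch of the whole algorithm, assuming Python, using the twelve-number sum (as in the loading-time example above) for both normal draws and arbitrary illustrative parameter values:

```python
import math
import random

def normal_clt(m: float, sigma: float) -> float:
    """Normally distributed number from the sum of 12 uniform numbers."""
    k = sum(random.random() for _ in range(12))
    return sigma * (k - 6) + m

def correlated_pair(m_x, s_x, m_y, s_y, q):
    """Generate a pair (x, y) of dependent normal variables with correlation q."""
    x = normal_clt(m_x, s_x)                       # step 1
    m_yx = m_y + q * (s_y / s_x) * (x - m_x)       # step 2: conditional expectation
    s_yx = s_y * math.sqrt(1 - q ** 2)             # step 3: conditional std deviation
    y = normal_clt(m_yx, s_yx)                     # step 4
    return x, y

pairs = [correlated_pair(35, 10, 20, 5, 0.8) for _ in range(50_000)]

# Rough check of the resulting correlation coefficient
xs, ys = zip(*pairs)
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
cov = sum((x - mx) * (y - my) for x, y in pairs) / len(pairs)
sx = (sum((x - mx) ** 2 for x in xs) / len(xs)) ** 0.5
sy = (sum((y - my) ** 2 for y in ys) / len(ys)) ** 0.5
print("correlation ≈", round(cov / (sx * sy), 3))
```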


Fig. 3.18.

Event flow modeling. When there are many events and they follow one another, they form a flow. Note that the events must be homogeneous, that is, somewhat similar to each other: for example, the arrival of drivers at a filling station wanting to refuel their car. Homogeneous events thus form a certain series, and the statistical characteristic of this phenomenon (the intensity of the flow of events) is assumed to be given. The intensity of the event flow indicates how many such events occur on average per unit of time. But exactly when each specific event will occur must be determined by modeling methods. What matters is that when we generate, for example, 1000 events in 200 hours, their number corresponds approximately to the average intensity of occurrence of events, 1000/200 = 5 events per hour; this is a statistical value characterizing the flow as a whole.

The flow intensity in a sense is the mathematical expectation of the number of events per unit time. But in reality it may turn out that 4 events appear in one hour, 6 in another, although on average there are 5 events per hour, so one value is not enough to characterize the flow. The second quantity characterizing how large the spread of events is relative to the mathematical expectation is, as before, dispersion. It is this value that determines the randomness of the occurrence of an event, the weak predictability of the moment of its occurrence.

There are random streams:

  • ordinary - the probability of the simultaneous occurrence of two or more events is zero;
  • stationary - the intensity λ of occurrence of events is constant;
  • without aftereffect - the probability of a random event occurring does not depend on the moment of occurrence of previous events.

When modeling a QS, in the overwhelming majority of cases the Poisson (simplest) flow is considered: an ordinary flow without aftereffect, in which the probability that exactly m requests arrive during a time interval t is given by the Poisson formula:

$P_{m}(t)=\frac{(\lambda t)^{m}}{m!}e^{-\lambda t}$

A Poisson flow can be stationary if λ(t) = const, or non-stationary otherwise.

In a Poisson flow, the probability that no event occurs is

$P_{0}(t)=e^{-\lambda t}$

Fig. 3.19 shows the dependence of P₀ on time. Obviously, the longer the observation time, the less likely it is that no event will occur. Moreover, the higher the value of λ, the steeper the graph falls, i.e., the faster the probability decreases. This corresponds to the fact that if the rate of occurrence of events is high, the probability that no event occurs decreases rapidly with observation time.

Fig. 3.19.

The probability that at least one event occurs is P = 1 − exp(−λt), since P₀ + P = 1. It is obvious that the probability of at least one event occurring tends to unity with time, i.e., with a sufficiently long observation the event will definitely occur sooner or later. In meaning, P is equal to r; therefore, expressing t from the formula for P, we finally obtain, for determining the interval between two random events,

$t=-\frac{1}{\lambda }\ln (r)$

where r is a random number uniformly distributed from 0 to 1, obtained with the RNG, and t is the interval between random events (a random variable).
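A sketch of generating such a flow, assuming Python and arbitrary values λ = 5 events per hour and T = 200 hours:

```python
import math
import random

lam = 5.0            # flow intensity: 5 events per hour (example value)
t_total = 200.0      # total simulated time, hours

t, arrivals = 0.0, []
while True:
    r = 1.0 - random.random()            # r in (0; 1]
    t += -math.log(r) / lam              # interval to the next event
    if t > t_total:
        break
    arrivals.append(t)

print(f"{len(arrivals)} events in {t_total} h, "
      f"average intensity {len(arrivals) / t_total:.2f} events/h")
```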

As an example, consider the flow of cars arriving at a terminal. Cars arrive randomly, on average 8 per day (flow intensity λ = 8/24 cars/h). It is necessary to simulate this process over T = 100 hours. The average time interval between cars is t = 1/λ = 24/8 = 3 hours.

Fig. 3.20 shows the result of the simulation: the moments in time when cars arrived at the terminal. As can be seen, over the period T = 100 hours the terminal processed N = 33 cars. If the simulation is run again, N may turn out to be, for example, 34, 35 or 32, but on average over many runs of the algorithm N will be equal to 33.333.

Fig. 3.20.

If it is known that the flow is not ordinary, then in addition to the moment of occurrence of an event it is also necessary to model the number of events that could occur at that moment. For example, cars arrive at the terminal at random moments of time (an ordinary flow of cars), but at the same time the cars can carry different (random) amounts of cargo. In this case the flow of cargo is spoken of as a flow of non-ordinary events.

Let us consider a problem. It is necessary to determine the downtime of the loading equipment at a terminal to which AUK-1.25 containers are delivered by road. The flow of cars obeys Poisson's law; the average interval between cars is 0.5 h, i.e., λ = 1/0.5 = 2 cars/hour. The number of containers in a car varies according to the normal law with mean m = 6 and σ = 2; in this case the minimum can be 2 and the maximum 10 containers. The unloading time of one container is 4 minutes, and 6 minutes are required for technological operations. The algorithm for solving this problem, built on the principle of sequential posting of each request, is shown in Fig. 3.21.

After entering the initial data, the simulation cycle starts until the specified simulation time is reached. Using the RNG, we obtain a random number, then determine the time interval before the car arrives. We mark the resulting interval on the time axis and simulate the number of containers in the back of the arriving vehicle.

We check the resulting number for an acceptable interval. Next, the unloading time is calculated and summed up in the counter of the total operating time of the loading equipment. The condition is checked: if the vehicle arrival interval is greater than the unloading time, then the difference between them is summed up in the equipment downtime counter.

Fig. 3.21.
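A simplified sketch of this algorithm, assuming Python; it follows the verbal description above rather than the exact flowchart of Fig. 3.21:

```python
import math
import random

LAMBDA = 2.0                 # flow intensity, cars per hour
T_SIM = 100.0                # simulation time, hours
T_BOX = 4 / 60               # unloading time per container, hours
T_TECH = 6 / 60              # technological operations per car, hours

def normal_clt(m, sigma):
    """Normal random number via the sum of 12 uniform numbers."""
    return sigma * (sum(random.random() for _ in range(12)) - 6) + m

t = 0.0
busy = idle = 0.0
while True:
    interval = -math.log(1 - random.random()) / LAMBDA   # time to the next car
    t += interval
    if t > T_SIM:
        break
    # number of containers: normal with m = 6, sigma = 2, limited to 2..10
    boxes = round(normal_clt(6, 2))
    boxes = min(10, max(2, boxes))
    unload = boxes * T_BOX + T_TECH
    busy += unload
    if interval > unload:
        idle += interval - unload     # equipment waits for the next car

print(f"equipment busy {busy:.1f} h, idle {idle:.1f} h over {T_SIM} h")
```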

A typical example of a QS is the operation of a loading point with several posts, as shown in Fig. 3.22.


Fig. 3.22.

For clarity of the modeling process, we will construct a time diagram of the operation of the QS, reflecting on each line (time axis t) the state of an individual element of the system (Fig. 3.23). There are as many time lines as there are different objects in the QS (flows). In our example there are 7 of them: the flow of requests, the flow of waiting in the first place in the queue, the flow of waiting in the second place in the queue, the service flow in the first channel, the service flow in the second channel, the flow of requests served by the system, and the flow of rejected requests. To demonstrate the process of denial of service, we agree that only two cars can be in the queue for loading; if there are more, they are sent to another loading point.

The simulated random moments of receipt of requests for car servicing are displayed on the first line. The first request is taken and, since the channels are free at this moment, it is sent for service to the first channel: request 1 is transferred to the line of the first channel. The service time in the channel is also random. We find on the diagram the moment of the end of service by laying off the generated service time from the moment service begins, and lower the request to the "Served" line. The request has passed all the way through the QS. Now, following the principle of sequential posting of requests, we can similarly model the path of the second request.


Fig. 3.23.

If at some point it turns out that both channels are busy, then the request should be placed in a queue. In Fig. 3.23 this is an application 3. Note that according to the conditions of the task, unlike channels, requests are not in the queue for a random time, but are waiting for one of the channels to become free. After the channel is released, the request is raised to the line of the corresponding channel and its servicing is organized there.

If all places in the queue are occupied at the moment the next request arrives, the request must be sent to the "Refused" line. In Fig. 3.23 this is request 6.

The procedure of simulating the servicing of requests continues for some time T. The longer this time, the more accurate the simulation results will be. In practice, for simple systems T is chosen equal to 50-100 hours or more, although it is sometimes better to measure this value by the number of requests processed.

We will analyze the QS using the example already discussed.

First you need to wait for a steady state. We discard the first four requests as uncharacteristic, occurring during the process of establishing the operation of the system ("model warm-up time"). We measure the observation time; suppose that in our example T = 5 hours. From the diagram we calculate the number of serviced requests N_served, the idle time and other values. As a result, we can calculate indicators characterizing the quality of operation of the QS:

  • 1. Probability of service P_serv = N_served/N = 5/7 = 0.714. To calculate the probability of servicing a request in the system, it is enough to divide the number of requests that were served during the time T (see the "Served" line), N_served, by the number of requests N that arrived during the same time.
  • 2. System throughput A = N_served/T = 7/5 = 1.4 cars/hour. To calculate the throughput of the system, it is enough to divide the number of served requests N_served by the time T during which this service took place.
  • 3. Probability of refusal P_ref = N_ref/N = 3/7 = 0.43. To calculate the probability that a request is refused service, it is enough to divide the number of requests N_ref that were refused during the time T (see the "Refused" line) by the number of requests N that wanted to be served during the same time, i.e., entered the system. Note that the sum P_serv + P_ref should in theory equal 1. In fact, it turned out experimentally that P_serv + P_ref = 0.714 + 0.43 = 1.144. This inaccuracy is explained by the fact that over the observation time T insufficient statistics have accumulated to obtain an accurate answer. The error of this indicator is now 14%.
  • 4. Probability of occupancy of one channel P_1 = T_1/T = 0.05/5 = 0.01, where T_1 is the time during which only one channel (the first or the second) is busy. The time intervals during which certain events occur are measured; for example, the diagram is searched for segments during which either the first or the second channel is busy. In this example there is one such segment at the end of the diagram, 0.05 hours long.
  • 5. Probability of occupancy of two channels P_2 = T_2/T = 4.95/5 = 0.99. The diagram is searched for segments during which both the first and the second channel are busy at the same time. In this example there are four such segments, their sum is 4.95 hours.
  • 6. Average number of occupied channels: N_ch = 0·P_0 + 1·P_1 + 2·P_2 = 0.01 + 2·0.99 = 1.99. To calculate how many channels are occupied in the system on average, it is enough to know the share (the probability of occupancy of one channel) and multiply it by the weight of this share (one channel), know the share (the probability of occupancy of two channels) and multiply it by the weight of this share (two channels), and so on. The resulting figure of 1.99 indicates that out of two possible channels, on average 1.99 channels are loaded. This is a high utilization rate, 99.5%; the system makes good use of its resources.
  • 7. Probability of downtime of at least one channel P_idle1 = T_idle1/T = 0.05/5 = 0.01.
  • 8. Probability of downtime of two channels simultaneously: P_idle2 = T_idle2/T = 0.
  • 9. Probability of downtime of the entire system P_idle = T_idle/T = 0.
  • 10. Average number of requests in the queue: N_q = 0·P_0q + 1·P_1q + 2·P_2q = 0.34 + 2·0.64 = 1.62 cars. To determine the average number of requests in the queue, it is necessary to determine separately the probability P_1q that there is one request in the queue, the probability P_2q that there are two requests in the queue, and so on, and add them with the appropriate weights.
  • 11. The probability that there is one request in the queue is P_1q = T_1q/T = 1.7/5 = 0.34 (there are four such segments in the diagram, giving a total of 1.7 hours).
  • 12. The probability that two requests are in the queue at the same time is P_2q = T_2q/T = 3.2/5 = 0.64 (there are three such segments in the diagram, giving a total of 3.25 hours).
  • 13. The average waiting time of a request in the queue is T_wait_avg = 1.7/4 = 0.425 hours. It is necessary to add up all the time intervals during which any request was in the queue and divide by the number of such requests; there are 4 of them on the time diagram.
  • 14. The average time of servicing a request is T_serv_avg = 8/5 = 1.6 hours. Add up all the time intervals during which any request was being served in any channel and divide by the number of requests.
  • 15. The average time a request remains in the system: T_sys_avg = T_wait_avg + T_serv_avg.
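Indicators of this kind can also be collected directly from a simulation run. The sketch below, assuming Python, arbitrary arrival and service intensities and, for simplicity, exponential service times, models a two-channel QS with two places in the queue and prints the probability of service, the probability of refusal and the throughput:

```python
import math
import random

LAM, MU = 1.5, 0.7       # assumed arrival and per-channel service rates, 1/hour
T_SIM, QUEUE_CAP = 1000.0, 2

def expo(rate):
    """Exponentially distributed interval with the given rate."""
    return -math.log(1.0 - random.random()) / rate

free = [0.0, 0.0]        # moments when each of the two channels becomes free
pending = []             # service start times of accepted requests still waiting
arrived = served = refused = 0
t = 0.0
while True:
    t += expo(LAM)                            # next arrival moment
    if t > T_SIM:
        break
    arrived += 1
    pending = [s for s in pending if s > t]   # requests still waiting in the queue
    if min(free) > t and len(pending) >= QUEUE_CAP:
        refused += 1                          # both channels busy, queue full
        continue
    ch = free.index(min(free))                # channel that becomes free first
    start = max(t, free[ch])                  # service starts on arrival or when freed
    free[ch] = start + expo(MU)
    if start > t:
        pending.append(start)                 # the request waits in the queue until start
    served += 1

print(f"P_serv = {served / arrived:.3f}, P_ref = {refused / arrived:.3f}, "
      f"A = {served / T_SIM:.2f} requests/h")
```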

If the accuracy is not satisfactory, the experiment time should be increased, thereby improving the statistics. Alternatively, the experiment can be run several times for a time T and the values obtained in these experiments averaged, after which the results are again checked against the accuracy criterion. This procedure is repeated until the required accuracy is achieved.

Analysis of simulation results

Table 3.11

| Indicator | Value of the indicator | Interests of the QS owner | Interests of the client |
|---|---|---|---|
| Probability of service |  | The probability of service is low, many clients leave the system without service. Recommendation: increase the probability of service | The probability of service is low, every third client wants to be served but cannot. Recommendation: increase the probability of service |
| Average number of requests in the queue |  | Almost always a car waits in the queue before being served. Recommendation: increase the number of places in the queue, increase the throughput | Increase the throughput; increase the number of places in the queue so as not to lose potential customers |

Customers are interested in a significant increase in throughput in order to reduce waiting time and the number of refusals.

To decide on the implementation of specific measures, it is necessary to carry out a sensitivity analysis of the model. The purpose of model sensitivity analysis is to determine the possible deviations of the output characteristics due to changes in the input parameters.

Methods for assessing the sensitivity of a simulation model are similar to methods for determining the sensitivity of any system. If the output characteristic of the model R depends on parameters associated with variable quantities, R = f(p₁, p₂, …, p_n), then changes Δp_i (i = 1, …, n) of these parameters cause a change ΔR.

In this case, the sensitivity analysis of the model comes down to studying the sensitivity functions ∂R/∂p_i.
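A sketch of estimating such sensitivity functions by finite differences, assuming Python and a purely hypothetical model function:

```python
def sensitivity(model, params, index, delta=0.01):
    """Finite-difference estimate of dR/dp_i for one parameter of the model."""
    base = model(params)
    perturbed = list(params)
    perturbed[index] += delta * params[index]        # small relative perturbation
    return (model(perturbed) - base) / (delta * params[index])

# Hypothetical model: output characteristic R as a function of two parameters
def model(p):
    p1, p2 = p
    return 100.0 / p1 + 3.0 * p2

params = [2.0, 5.0]
for i in range(len(params)):
    print(f"dR/dp{i + 1} ≈ {sensitivity(model, params, i):.2f}")
```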

As an example of sensitivity analysis of a simulation model, let us consider the impact of changes in the variable vehicle reliability parameters on operating efficiency. As the objective function we use the indicator of reduced costs Z_pr. For the sensitivity analysis, we use data on the operation of the KamAZ-5410 road train in urban conditions. The limits of variation of the parameters p_i for determining the sensitivity of the model are sufficient to establish by expert judgement (Table 3.12).

To carry out calculations with the model, a base point was selected at which the varied parameters have values corresponding to the standards. The parameter "duration of idle time in maintenance and repair in days" is replaced by a specific indicator: idle time in days per thousand kilometers, N.

The calculation results are shown in Fig. 3.24; the base point lies at the intersection of all the curves. The dependencies shown in Fig. 3.24 make it possible to establish the degree of influence of each of the parameters under consideration on the magnitude of the change in Z_pr. At the same time, the use of natural values of the analyzed quantities does not allow us to establish the comparative degree of influence of each parameter on Z_pr, since these parameters have different units of measurement. To overcome this, we choose a form of interpreting the calculation results in relative units. To do this, the base point must be moved to the origin of coordinates, and the values of the varied parameters and the relative change in the output characteristic of the model must be expressed as a percentage. The results of these transformations are presented in Fig. 3.25.

Table 3.12

Values of the varied parameters

Fig. 3.24.


Fig. 3.25. The influence of the relative change in the varied parameters on the degree of change in Z_pr

The change in the varied parameters relative to the base value is plotted along one axis. As can be seen from Fig. 3.25, an increase in the value of each parameter near the base point by 50% leads to an increase in Z_pr by 9% for the increase in T_a, by more than 1.5% for C_p, by less than 0.5% for N, and to a decrease in Z_pr of almost 4% for the increase in L. A 25% decrease in L_cr and D_rg leads to an increase in Z_pr by more than 6%, respectively. A decrease of the parameters N_t0, P and a_e by the same amount leads to a decrease in Z_pr by 0.2, 0.8 and 4.5%, respectively.

The given dependencies give an idea of the influence of each individual parameter and can be used when planning the operation of the transport system. In terms of the intensity of their influence on Z_pr, the considered parameters can be arranged in the following order: D_a, P, L_cr, C_tr, N_to.

During operation, a change in the value of one indicator entails a change in the values of other indicators, and a relative change of each of the varied parameters by the same amount generally has an unequal physical basis. It is necessary to replace the relative change of the varied parameters in percent along the abscissa axis with a parameter that can serve as a single measure for assessing the degree of change of each parameter. It can be assumed that at each moment of the vehicle's operation the value of each parameter has the same economic weight in relation to the values of the other varied parameters, i.e., from the economic point of view, the reliability of the vehicle at each moment of time has an equilibrium effect on all the parameters associated with it. Then the required economic equivalent will be time or, more conveniently, a year of operation.

Fig. 3.26 shows dependencies built in accordance with the above requirements. The base value of Z_pr is taken to be its value in the first year of operation of the vehicle. The values of the varied parameters for each year of operation were determined from the results of observations.


Fig. 3.26.

During operation, the increase in Z_pr during the first three years is primarily due to the increase in the values of N_to; then, under the operating conditions considered, the main role in reducing the efficiency of vehicle use is played by the increase in the values of C_tr. To identify the influence of the quantity L_cr, its value in the calculations was equated to the total mileage of the vehicle since the start of operation. The form of the function Z_pr = f(L_cr) shows that the intensity of the decrease in Z_pr with increasing L_cr is significantly reduced.

As a result of the sensitivity analysis of the model, it is possible to understand which factors need to be influenced in order to change the objective function. Changing the factors requires control efforts, which entail corresponding costs. The amount of costs cannot be infinite; like any resources, these costs are in reality limited. Therefore, it is necessary to understand to what extent the allocation of funds will be effective. While in most cases costs grow linearly with increasing control action, the efficiency of the system grows rapidly only up to a certain limit, beyond which even significant costs no longer provide the same return. For example, it is impossible to increase the capacity of service devices without limit because of space limitations or the limited number of vehicles to be serviced, etc.

If we compare the increase in costs and the system efficiency indicator in the same units, then, as a rule, graphically it will look like that shown in Fig. 3.27.


Fig. 3.27.

From Fig. 3.27 it is clear that when a price C_Z per unit of cost Z and a price C_P per unit of the indicator P are assigned, these curves can be added. The curves are added if they are to be simultaneously minimized or maximized. If one curve is to be maximized and the other minimized, then their difference should be found, for example, pointwise. Then the resulting curve (Fig. 3.28), which takes into account both the effect of management and the costs of achieving it, will have an extremum. The value of the parameter R that provides the extremum of the function is the solution of the synthesis problem.


Fig. 3.28.
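A sketch of finding such an extremum numerically, assuming Python and hypothetical effect and cost curves (the effect saturates while the costs grow linearly):

```python
import math

C_P, C_Z = 10.0, 1.0             # assumed prices per unit of effect and per unit of cost

def effect(r):                   # saturating effect of the control action R
    return 1.0 - math.exp(-0.5 * r)

def cost(r):                     # costs grow linearly with the control action R
    return 0.3 * r

best_r, best_value = max(
    ((r / 100.0, C_P * effect(r / 100.0) - C_Z * cost(r / 100.0)) for r in range(0, 2001)),
    key=lambda pair: pair[1],
)
print(f"optimal control R ≈ {best_r:.2f}, net effect ≈ {best_value:.2f}")
```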


Besides the control R and the indicator P, there is also disturbance in systems. A disturbance D = (d₁, d₂, …) is an input influence which, unlike the control parameter, does not depend on the will of the owner of the system (Fig. 3.29). For example, low outside temperatures and competition, unfortunately, reduce the flow of customers; equipment failures reduce the system's performance. The owner of the system cannot control these quantities directly. Usually the disturbance acts "in spite of" the owner, reducing the effect P of the control efforts R. This happens because, in general, a system is created to achieve goals that are unattainable by themselves in nature. A person, organizing a system, always hopes to achieve some goal P through it and puts effort R into this. In this context, we can say that a system is an organization of natural components, accessible to and studied by man, for achieving some new goal that was previously unattainable by other means.

Fig. 3.29.

If we again take the dependence of the indicator P on the control R, but now under the conditions of the disturbance D that has arisen, the character of the curve will probably change: most likely the indicator will be lower for the same control values, since the disturbance is negative and reduces the system's performance. A system left to itself, without control efforts, ceases to achieve the goal for which it was created. If, as before, we construct the dependence of costs and correlate it with the dependence of the indicator on the control parameter, then the extremum point found will shift (Fig. 3.30) compared to the case "disturbance = 0" (see Fig. 3.28). If the disturbance increases again, the curves will change and, as a consequence, the position of the extremum point will change again.

The graph in Fig. 3.30 links the indicator P, the control (resource) R and the disturbance D in complex systems, indicating how best to act for the manager (organization) making decisions in the system. If the control action is less than optimal, the total effect will decrease and a situation of lost profit will arise. If the control action is greater than optimal, the effect will also decrease, since the payment for a further increase in control efforts will be greater than what is obtained as a result of using the system.


Fig. 3.30.

A simulation model of a system intended for real use must be implemented on a computer. This can be done using the following tools:

  • a universal user program, such as a mathematical package (MATLAB), a spreadsheet processor (Excel) or a DBMS (Access, FoxPro), which allows creating only relatively simple models and requires at least basic programming skills;
  • a universal programming language (C++, Java, Basic, etc.), which allows creating a model of any complexity, but this is a very labor-intensive process that requires writing a large amount of program code and lengthy debugging;
  • a specialized simulation language, which has ready-made templates and visual programming tools designed to quickly create the basis of a model; one of the best known is UML (Unified Modeling Language);
  • simulation programs, which are the most popular means of creating simulation models. They allow a model to be created visually, resorting to manual writing of program code for procedures and functions only in the most complex cases.

Simulation programs are divided into two types:

  • Universal simulation packages are designed for creating a variety of models and contain a set of functions that can be used to simulate typical processes in systems of various purposes. Popular packages of this type are Arena (developed by Rockwell Automation, USA), Extendsim (developed by Imagine That Inc., USA), AnyLogic (developed by XJ Technologies, Russia) and many others. Almost all universal packages have specialized versions for modeling specific classes of objects.
  • Domain-specific simulation packages serve for modeling specific types of objects and have specialized tools for this purpose in the form of templates, wizards for visually assembling a model from ready-made modules, etc.
  • Of course, two random numbers cannot depend on each other in a strictly one-to-one way; Fig. 3.17 is given for clarity of the concept of correlation.
  • Technical and economic analysis in the study of the reliability of KamAZ-5410 vehicles /Yu. G. Kotikov, I. M. Blankinshtein, A. E. Gorev, A. N. Borisenko; LISI. L.:, 1983. 12 p.-Dep. in CBNTI of the Ministry of Autotransport of the RSFSR, No. 135at-D83.
  • http://www.rockwellautomation.com.
  • http://www.extendsim.com.
  • http://www.xjtek.com.