Uniform Distribution on the Segment [a, b]. Uniform Distribution Characteristics

A continuous random variable X has a uniform distribution on the segment [a, b] if its distribution density is constant on this segment and equal to 0 outside it.

The uniform distribution curve is shown in Fig. 3.13.

Fig. 3.13.

The values of f(x) at the end points a and b of the interval (a, b) are not indicated, since the probability of hitting either of these points for a continuous random variable X equals 0.

The mathematical expectation of a random variable X that has a uniform distribution on the segment [a, b] is m_x = (a + b)/2. The variance is calculated by the formula D = (b - a)²/12, hence the standard deviation σ = (b - a)/√12 ≈ (b - a)/3.464.

Modeling of random variables. To model a random variable, one must know its distribution law. The most common way to obtain a sequence of random numbers distributed according to an arbitrary law is to form them from an initial sequence of random numbers distributed uniformly in the interval (0; 1).

Sequences of random numbers uniformly distributed in the interval (0; 1) can be obtained in three ways:

  • according to specially prepared tables of random numbers;
  • using physical random number generators (for example, tossing a coin);
  • algorithmic method.

For such numbers, the mathematical expectation should equal 0.5 and the variance 1/12. If the random number X must lie in an interval (a; b) different from (0; 1), the formula X = a + (b - a)·r is used, where r is a random number from the interval (0; 1).
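As a quick illustration, here is a minimal Python sketch (assuming the standard library's random module as the RNG) that checks these properties empirically and rescales the numbers to an arbitrary interval; the interval (3; 8) is an illustrative choice:

```python
import random

n = 100_000
rs = [random.random() for _ in range(n)]  # uniform numbers on (0; 1)

mean = sum(rs) / n
var = sum((r - mean) ** 2 for r in rs) / n
print(f"mean = {mean:.4f} (expected 0.5)")
print(f"var  = {var:.4f} (expected {1 / 12:.4f})")

# Rescaling to an arbitrary interval (a; b): X = a + (b - a) * r
a, b = 3.0, 8.0
xs = [a + (b - a) * r for r in rs]
print(f"rescaled mean = {sum(xs) / n:.3f} (expected {(a + b) / 2})")
```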

Since almost all models are implemented on a computer, an algorithmic generator (RNG) built into the computer is almost always used to obtain random numbers, although it is also not a problem to use tables that have been converted into electronic form beforehand. It should be borne in mind that the algorithmic method always yields pseudo-random numbers, since each subsequent generated number depends on the previous one.

In practice, it is always necessary to obtain random numbers distributed according to a given distribution law. A variety of methods are used for this. If the analytical expression for the distribution function F is known, the inverse function method can be used.

It is enough to draw a random number uniformly distributed in the interval from 0 to 1. Since the function F also varies in this interval, the random number x can be determined by taking the inverse function, graphically or analytically: x = F⁻¹(r). Here r is the number generated by the RNG in the range from 0 to 1, and x is the resulting random variable. Graphically, the essence of the method is shown in Fig. 3.14.


Fig. 3.14. Illustration of the inverse function method for generating random variables X whose values are distributed continuously. The figure shows the graphs of the probability density and the cumulative distribution function of X

Consider, as an example, the exponential distribution law. The distribution function of this law has the form F(x) = 1 - exp(-λx). Since r and F in this method are assumed to be equivalent and located in the same interval, replacing F by the random number r gives r = 1 - exp(-λx). Expressing the desired value x from this expression (i.e., inverting the exp() function), we get x = -(1/λ)·ln(1 - r). Since in the statistical sense (1 - r) and r are the same thing, x = -(1/λ)·ln(r).
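A minimal sketch of this inverse function method for the exponential law might look as follows (the rate λ = 2 and the sample size are illustrative values):

```python
import math
import random

def exponential_variate(lam: float) -> float:
    """Inverse function method for F(x) = 1 - exp(-lam * x)."""
    r = 1.0 - random.random()  # shift to (0; 1] so log() is safe
    return -math.log(r) / lam  # x = -(1/lam) * ln(r)

lam = 2.0
sample = [exponential_variate(lam) for _ in range(100_000)]
print(f"sample mean = {sum(sample) / len(sample):.3f} (theory 1/lam = {1 / lam})")
```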

Algorithms for modeling some common distribution laws of continuous random variables are given in Table 3.10.

For example, suppose it is necessary to simulate the loading time, which is distributed according to the normal law. It is known that the average duration of loading is 35 minutes and the standard deviation of the actual time from the average value is 10 minutes. That is, according to the conditions of the task, m_x = 35 and σ_x = 10. Then the value R = Σr_i is calculated, where r_i are random numbers from the RNG in the range (0; 1) and n = 12. The number 12 is chosen as large enough on the basis of the central limit theorem of probability theory (Lyapunov's theorem): for a large number n of random variables with any distribution law, their sum is a random number with the normal distribution law. Then the random value x = σ_x·(R - n/2) + m_x = 10·(R - 6) + 35.
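A sketch of this 12-sum method with the loading-time parameters from the example (m_x = 35 min, σ_x = 10 min); the helper name normal_12sum is ours, not from the text:

```python
import random

def normal_12sum(m_x: float, sigma: float) -> float:
    """Sum of 12 uniform numbers has mean 6 and variance 1 (CLT)."""
    R = sum(random.random() for _ in range(12))
    return sigma * (R - 6.0) + m_x

times = [normal_12sum(35.0, 10.0) for _ in range(100_000)]
mean = sum(times) / len(times)
std = (sum((t - mean) ** 2 for t in times) / len(times)) ** 0.5
print(f"mean = {mean:.2f} min, std = {std:.2f} min")  # ~35 and ~10
```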

Table 3.10

Algorithms for modeling random variables

Simulation of a random event. A random event implies that some occurrence has several outcomes, and which outcome happens is determined only by its probability; that is, the outcome is chosen randomly, taking its probability into account. For example, suppose the probability of producing a defective product is known: P = 0.1. The occurrence of this event can be simulated by drawing a uniformly distributed random number from the range 0 to 1 and establishing into which of the two intervals (from 0 to 0.1 or from 0.1 to 1) it falls (Fig. 3.15). If the number falls within the range (0; 0.1), a defect was produced, i.e. the event occurred; otherwise the event did not occur (a good product was produced). With a significant number of experiments, the frequency with which numbers fall into the interval from 0 to 0.1 will approach the probability P = 0.1, and the frequency of falling into the interval from 0.1 to 1 will approach P = 0.9.
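A minimal sketch of playing out such a single random event (P = 0.1 for a defect):

```python
import random

P_DEFECT = 0.1
n = 100_000
# The event occurs when the drawn number falls into (0; 0.1)
defects = sum(1 for _ in range(n) if random.random() < P_DEFECT)
print(f"defect frequency = {defects / n:.4f} (approaches P = {P_DEFECT})")
```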


Fig. 3.15.

Events are called incompatible if the probability of their simultaneous occurrence is 0. It follows that the total probability of a group of incompatible events equals 1. Denote the events by a1, a2, ..., an, and by P1, P2, ..., Pn the probabilities of occurrence of the individual events. Since the events are incompatible, the sum of the probabilities of their occurrence equals 1: P1 + P2 + ... + Pn = 1. Again we use a random number generator, whose value also always lies in the range from 0 to 1, to simulate the occurrence of one of the events. Let us lay off segments P1, P2, ..., Pn on the unit interval. Clearly, the sum of the segments makes up exactly the unit interval. The point corresponding to the number drawn from the random number generator will point to one of the segments. Accordingly, random numbers fall into larger segments more often (the probability of these events is greater!) and into smaller segments less often (Fig. 3.16).

If it is necessary to simulate joint events, they must first be made incompatible. For example, to simulate the occurrence of events with given probabilities P(a1) = 0.7, P(a2) = 0.5 and P(a1, a2) = 0.4, we define all possible incompatible outcomes of the occurrence of events a1, a2 and of their simultaneous appearance:

  • 1. Simultaneous occurrence of the two events: P(b1) = P(a1, a2) = 0.4.
  • 2. Occurrence of event a1 alone: P(b2) = P(a1) - P(a1, a2) = 0.7 - 0.4 = 0.3.
  • 3. Occurrence of event a2 alone: P(b3) = P(a2) - P(a1, a2) = 0.5 - 0.4 = 0.1.
  • 4. Non-occurrence of either event: P(b4) = 1 - (P(b1) + P(b2) + P(b3)) = 0.2.

Now the probabilities of occurrence of the incompatible events b_i must be represented on the numerical axis as segments. Drawing numbers with the RNG, we determine which interval each belongs to and thus obtain realizations of the joint events a_i.
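A sketch of this segment-selection procedure, using the probabilities of the incompatible events b1-b4 from the example above (the helper name play_event is illustrative):

```python
import random

def play_event(probs):
    """Return the index of the event whose segment the drawn number hits."""
    r = random.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1  # guard against rounding when r is close to 1

probs = [0.4, 0.3, 0.1, 0.2]  # P(b1)..P(b4) from the example, summing to 1
counts = [0, 0, 0, 0]
trials = 100_000
for _ in range(trials):
    counts[play_event(probs)] += 1
print([c / trials for c in counts])  # frequencies approach the probabilities
```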

Fig. 3.16.

Often encountered in practice are systems of random variables, i.e. two (or more) different random variables X, Y (and others) that depend on each other. For example, if an event X has occurred and taken some random value, then the event Y also happens, albeit randomly, but taking into account the fact that X has already taken a certain value.

For example, if X came out as a large number, then Y should also come out sufficiently large (if the correlation is positive, and vice versa if it is negative). In transport, such dependencies are quite common: longer delays are more likely on longer routes, and so on.

If random variables are dependent, then

f(x) = f(x1)·f(x2 | x1)·f(x3 | x2, x1)· ... ·f(xn | xn-1, ..., x2, x1),

where f(x_i | x_(i-1), ..., x1) is the conditional density of the occurrence of x_i given that x_(i-1), ..., x1 have already occurred, and f(x) is the probability density of the vector x of dependent random variables.

The correlation coefficient q shows how closely the events X and Y are related. If the correlation coefficient equals one, the dependence of X and Y is one-to-one: one value of X corresponds to one value of Y (Fig. 3.17, a). For q close to unity, the picture shown in Fig. 3.17, b arises, i.e. one value of X may already correspond to several values of Y (more precisely, to one of several values of Y, determined randomly); in this case the events X and Y are less correlated, less dependent on each other.


Fig. 3.17. Form of the dependence of two random variables with a positive correlation coefficient: a) at q = 1; b) at 0 < q < 1; c) at q close to 0

And, finally, when the correlation coefficient tends to zero, a situation arises in which any value of X can correspond to any value of Y, i.e. the events X and Y do not depend, or almost do not depend, on each other and do not correlate with each other (Fig. 3.17, c).

As an example, let us take the normal distribution as the most common one. The mathematical expectation indicates the most probable events: here the number of events is larger and the event plot is denser. A positive correlation indicates that large random values of X tend to generate large values of Y. Zero and near-zero correlation shows that the value of the random variable X has nothing to do with any particular value of the random variable Y. It is easy to understand what has been said if we first imagine the distributions f(x) and f(y) separately and then link them into a system, as shown in Fig. 3.18.

In this example X and Y are distributed according to the normal law with the corresponding values m_x, σ_x and m_y, σ_y. The correlation coefficient q of the two random events is given, i.e. the random variables X and Y depend on each other: Y is not entirely random.

Then a possible algorithm for implementing the model will be as follows:

1. Twelve random numbers uniformly distributed on the interval (0; 1) are drawn: b1, b2, ..., b12; find their sum S = Σb_i. A normally distributed random number x is found by the formula x = σ_x·(S - 6) + m_x.

2. By the formula m_y|x = m_y + q·(σ_y/σ_x)·(x - m_x) the conditional mathematical expectation m_y|x is found (the sign y|x means that y will take random values given the condition that x has already taken certain values).

3. By the formula σ_y|x = σ_y·√(1 - q²) the standard deviation σ_y|x is found.

4. Twelve random numbers r_i uniformly distributed on the interval (0; 1) are drawn; find their sum k = Σr_i. A normally distributed random number y is found by the formula y = σ_y|x·(k - 6) + m_y|x.
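A sketch of this four-step algorithm; the parameter values (m_x = 10, σ_x = 2, m_y = 50, σ_y = 5, q = 0.8) are illustrative, not from the text:

```python
import random

def sum12():
    return sum(random.random() for _ in range(12))

def correlated_pair(m_x, s_x, m_y, s_y, q):
    x = s_x * (sum12() - 6.0) + m_x            # step 1: normal x
    m_y_x = m_y + q * (s_y / s_x) * (x - m_x)  # step 2: conditional mean
    s_y_x = s_y * (1.0 - q * q) ** 0.5         # step 3: conditional st. dev.
    y = s_y_x * (sum12() - 6.0) + m_y_x        # step 4: normal y given x
    return x, y

pairs = [correlated_pair(10.0, 2.0, 50.0, 5.0, 0.8) for _ in range(100_000)]
n = len(pairs)
mx = sum(x for x, _ in pairs) / n
my = sum(y for _, y in pairs) / n
sx = (sum((x - mx) ** 2 for x, _ in pairs) / n) ** 0.5
sy = (sum((y - my) ** 2 for _, y in pairs) / n) ** 0.5
cov = sum((x - mx) * (y - my) for x, y in pairs) / n
print(f"sample correlation = {cov / (sx * sy):.3f}")  # should be near q = 0.8
```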


Fig. 3.18.

Modeling a flow of events. When there are many events and they follow one another, they form a flow. Note that the events in this case must be homogeneous, i.e. similar to each other in some way, for example, the appearance of drivers at a gas station who want to refuel their cars. Thus homogeneous events form a series. It is assumed that the statistical characteristic of this phenomenon (the intensity of the flow of events) is given. The intensity of the flow of events indicates how many such events occur on average per unit of time. When exactly each specific event will occur must be determined by modeling methods. What matters is that when we generate, for example, 1000 events in 200 hours, their number corresponds to an average intensity of occurrence of events of 1000/200 = 5 events per hour. This is a statistical value that characterizes the flow as a whole.

The intensity of the flow is, in a sense, the mathematical expectation of the number of events per unit of time. In reality, it may turn out that 4 events appear in one hour and 6 in another, although the average is 5 events per hour, so one value is not enough to characterize the flow. The second value, which characterizes how large the spread of events is relative to the mathematical expectation, is, as before, the variance. It is this value that determines the randomness of the occurrence of an event, the weak predictability of the moment of its occurrence.

Random flows can be:

  • ordinary - the probability of the simultaneous occurrence of two or more events is zero;
  • stationary - the frequency of occurrence of events λ is constant;
  • without aftereffect - the probability of occurrence of a random event does not depend on the moment of the previous events.

When modeling a QS, in the vast majority of cases the Poisson (simplest) flow is considered: an ordinary flow without aftereffect, in which the probability of arrival of exactly m requests in the time interval t is given by the Poisson formula: P_m(t) = ((λt)^m / m!)·exp(-λt).

A Poisson flow can be stationary, if λ(t) = const, or non-stationary otherwise.

In a Poisson flow, the probability that no event will occur is P0(t) = exp(-λt).

Fig. 3.19 shows the dependence of P0 on time. Obviously, the longer the observation time, the less likely it is that no event will occur. Moreover, the higher the intensity λ, the steeper the graph, i.e. the faster the probability decreases. This corresponds to the fact that if the intensity of occurrence of events is high, the probability that no event occurs decreases rapidly with observation time.

Fig. 3.19.

The probability that at least one event occurs is P = 1 - exp(-λt), since P0 + P = 1. Obviously, the probability of at least one event occurring tends to unity with time, i.e. with a sufficiently long observation the event will necessarily occur sooner or later. In its meaning P is equal to a random number r; therefore, expressing t from the formula for P, we finally obtain, for determining the intervals between two random events:

t = -(1/λ)·ln(r),

where r is a random number uniformly distributed from 0 to 1, obtained from the RNG, and t is the interval between random events (a random variable).

As an example, consider the flow of cars arriving at a terminal. Cars arrive randomly, on average 8 per day (flow intensity λ = 8/24 cars/h). It is necessary to simulate this process over T = 100 hours. The average time interval between cars is t = 1/λ = 24/8 = 3 hours.

Fig. 3.20 shows the result of the simulation: the moments in time when cars arrived at the terminal. As can be seen, over the period T = 100 h the terminal handled N = 33 cars. If the simulation is run again, N may be equal to, for example, 34, 35 or 32, but on average, over K runs of the algorithm, N will be equal to 33.333.

Fig. 3.20.
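A sketch of this simulation, generating Poisson arrivals by the formula t = -(1/λ)·ln(r) with λ = 8/24 cars/h and T = 100 h:

```python
import math
import random

lam = 8 / 24  # flow intensity, cars per hour
T = 100.0     # simulation horizon, hours

arrivals = []
t = 0.0
while True:
    r = 1.0 - random.random()  # uniform number on (0; 1]
    t += -math.log(r) / lam    # next interarrival interval
    if t > T:
        break
    arrivals.append(t)

print(f"N = {len(arrivals)} cars in {T:.0f} h (expected about {lam * T:.1f})")
```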

If it is known that the flow is not ordinary, then in addition to the moment of occurrence of an event it is also necessary to model the number of events that could appear at that moment. For example, cars arrive at the terminal at random moments (an ordinary flow of cars), but the cars may carry a different (random) amount of cargo. In this case the flow of cargo is said to be a flow of non-ordinary events.

Let us consider a task. It is required to determine the idle time of the loading equipment at a terminal if AUK-1.25 containers are delivered to the terminal by trucks. The flow of cars obeys Poisson's law; the average interval between cars is 0.5 h, so λ = 1/0.5 = 2 cars/hour. The number of containers in a car varies according to the normal law with mean m = 6 and σ = 2; the minimum is 2 and the maximum is 10 containers. The unloading time of one container is 4 minutes, and 6 minutes are needed for technological operations. The algorithm for solving this problem, built on the principle of sequential posting of each request, is shown in Fig. 3.21.

After entering the initial data, the simulation cycle is started until the specified simulation time is reached. Using the RNG, we get a random number, then we determine the time interval before the arrival of the car. We mark the resulting interval on the time axis and simulate the number of containers in the body of the arrived car.

We check the resulting number against the admissible interval. Next, the unloading time is calculated and added to the counter of the total operating time of the loading equipment. Then the condition is checked: if the interval before the arrival of the next car is greater than the unloading time, the difference between them is added to the equipment idle-time counter.

Fig. 3.21.
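A simplified sketch of the algorithm of Fig. 3.21 (sequential posting of each request, ignoring queue build-up; the variable names and the 1000-hour horizon are our illustrative choices):

```python
import math
import random

LAM = 2.0            # cars per hour (average interval 0.5 h)
T_SIM = 1000.0       # simulated hours (illustrative horizon)
M, SIGMA = 6.0, 2.0  # container count: mean and standard deviation

def containers():
    n = round(random.gauss(M, SIGMA))
    return max(2, min(10, n))  # keep within the admissible range [2, 10]

t = busy = idle = 0.0
while t < T_SIM:
    interval = -math.log(1.0 - random.random()) / LAM  # time to next car, h
    unload = (containers() * 4 + 6) / 60.0             # unloading time, h
    busy += unload
    if interval > unload:
        idle += interval - unload  # equipment stands idle until the next car
    t += interval

print(f"busy {busy:.0f} h, idle {idle:.0f} h over {T_SIM:.0f} h")
```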

A typical example of a QS is a loading point with multiple posts, as shown in Fig. 3.22.


Fig. 3.22.

For clarity of the modeling process, we construct a time diagram of the QS operation, reflecting on each ruler (time axis t) the state of a separate element of the system (Fig. 3.23). There are as many timelines as there are different objects in the QS (flows). In our example there are 7: the flow of requests, the flow of waiting in first place in the queue, the flow of waiting in second place in the queue, the flow of service in the first channel, the flow of service in the second channel, the flow of requests served by the system, and the flow of refused requests. To demonstrate the denial-of-service process, let us assume that only two cars can wait in the queue for loading; if there are more, they are sent to another loading point.

The simulated random moments of arrival of requests for car service are displayed on the first line. The first request is taken and, since the channels are free at this moment, it is set for service in the first channel: request 1 is transferred to the line of the first channel. The service time in the channel is also random. We find on the diagram the moment service ends by laying off the generated service time from the moment service began, and drop the request down to the 'Served' line. The request has passed all the way through the QS. Now, following the principle of sequential posting of requests, the path of the second request can also be simulated.


Fig. 3.23.

If at some point both channels turn out to be busy, the request must be placed in the queue. In Fig. 3.23 this is request 3. Note that, according to the conditions of the task, requests in the queue, unlike in the channels, do not stay for a random time but wait until one of the channels becomes free. When a channel is released, the request moves to the line of the corresponding channel and its service is arranged there.

If all places in the queue are occupied at the moment the next request arrives, the request must be sent to the 'Refused' line. In Fig. 3.23 this is request 6.

The procedure of simulating the service of requests continues for some time T. The longer this time, the more accurate the simulation results will be. In practice, for simple systems T is chosen equal to 50-100 hours or more, although it is sometimes better to measure this value by the number of requests considered.

We will analyze the operation of the QS using the example already considered.

First, you need to wait for the steady state. We discard the first four requests as uncharacteristic, since they occur while the system's operation is being established ("model warm-up time"). We measure the observation time; let us say that in our example T = 5 hours. From the diagram we calculate the number of served requests N_served, the idle times and the other values. As a result, we can calculate indicators characterizing the quality of the QS operation:

  • 1. Probability of service P_served = N_served/N = 5/7 = 0.714. To calculate the probability of a request being served by the system, divide the number of requests N_served that were served during the time T (see the "Served" line) by the number of requests N that arrived during the same time.
  • 2. System throughput A = N_served/T = 7/5 = 1.4 cars/h. To calculate the throughput of the system, divide the number of served requests by the time T during which this service took place.
  • 3. Probability of refusal P_ref = N_ref/N = 3/7 = 0.43. To calculate the probability of a request being denied service, divide the number of requests N_ref that were refused during the time T (see the "Refused" line) by the number of requests N that wanted to be served during the same time, i.e. that entered the system. Note that the sum P_served + P_ref should in theory equal 1. In fact, it experimentally turned out that P_served + P_ref = 0.714 + 0.43 = 1.144. This inaccuracy is explained by the fact that during the observation time T insufficient statistics were accumulated to obtain an accurate answer. The error of this indicator is now 14%.
  • 4. Probability of one channel being busy P1 = T1/T = 0.05/5 = 0.01, where T1 is the time during which only one channel (the first or the second) is busy. Measurements apply to the time intervals in which certain events occur: on the diagram we look for segments during which either the first or the second channel is busy. In this example there is one such segment, at the end of the diagram, of length 0.05 hours.
  • 5. Probability of two channels being busy P2 = T2/T = 4.95/5 = 0.99. On the diagram we look for segments during which both the first and the second channel are busy simultaneously; in this example there are four such segments, with a sum of 4.95 hours.
  • 6. Average number of busy channels: N_ch = 0·P0 + 1·P1 + 2·P2 = 0.01 + 2·0.99 = 1.99. To calculate how many channels are busy in the system on average, it is enough to know the share (probability) of one channel being busy and multiply it by its weight (one channel), know the share of two channels being busy and multiply it by its weight (two channels), and so on. The resulting figure of 1.99 indicates that, of the two possible channels, 1.99 channels are loaded on average. This is a high utilization rate, 99.5%; the system makes good use of its resources.
  • 7. Probability of at least one channel being idle P = T_idle1/T = 0.05/5 = 0.01.
  • 8. Probability of both channels being idle simultaneously: P = T_idle2/T = 0.
  • 9. Probability of the entire system being idle P = T_idle/T = 0.
  • 10. Average number of requests in the queue N_q = 0·P0q + 1·P1q + 2·P2q = 0.34 + 2·0.64 = 1.62 cars. To determine the average number of requests in the queue, determine separately the probability P1q that there will be one request in the queue, the probability P2q that there will be two, and so on, and add them with the appropriate weights.
  • 11. The probability that there will be one request in the queue is P1q = T1q/T = 1.7/5 = 0.34 (there are four such segments in the diagram, totaling 1.7 hours).
  • 12. The probability that two requests will be in the queue simultaneously is P2q = T2q/T = 3.2/5 = 0.64 (there are three such segments in the diagram, totaling 3.2 hours).
  • 13. The average waiting time of a request in the queue is T_wait = 1.7/4 = 0.425 hours: add up all the time intervals during which any request was in the queue and divide by the number of requests. There are 4 such requests on the timeline.
  • 14. The average service time of a request is T_serv = 8/5 = 1.6 hours: add up all the time intervals during which any request was serviced in any channel and divide by the number of requests.
  • 15. The average time a request spends in the system: T_sys = T_wait + T_serv.

If the accuracy is not satisfactory, the experiment time should be increased, thereby improving the statistics. Alternatively, the experiment can be run several times for the time T and the values obtained in these experiments averaged, after which the results are again checked against the accuracy criterion. This procedure is repeated until the desired accuracy is achieved.

Table 3.11

Analysis of simulation results

Indicator: Probability of service
Interests of the QS owner: the probability of service is low, many customers leave the system without service. Recommendation: increase the probability of service.
Interests of the client: the chance of being served is low; every third client wants to be served but cannot be. Recommendation: increase the probability of service.

Indicator: Average number of requests in the queue
Interests of the QS owner: a car is almost always waiting in the queue before being served. Recommendation: increase the number of places in the queue, increase the throughput so as not to lose potential customers.
Interests of the client: clients are interested in a significant increase in throughput to reduce waiting times and refusals.

To make a decision on implementing specific measures, it is necessary to conduct a sensitivity analysis of the model. The goal of model sensitivity analysis is to determine the possible deviations of the output characteristics due to changes in the input parameters.

Methods for assessing the sensitivity of a simulation model are similar to methods for determining the sensitivity of any system. If the output characteristic of the model R depends on the parameters, R = f(p1, p2, ..., pn), then changes of these parameters Δp_i (i = 1, ..., n) cause a change ΔR. In this case, the sensitivity analysis of the model reduces to studying the sensitivity functions ΔR/Δp_i.
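A minimal sketch of such an analysis by finite differences; the model function reduced_costs here is a purely illustrative stand-in, not the Z_pr model from the text, and its coefficients are assumptions:

```python
def sensitivity(model, params, rel_step=0.01):
    """Finite-difference estimates of dR/dp_i around the base point."""
    base = model(params)
    sens = []
    for i, p in enumerate(params):
        perturbed = list(params)
        perturbed[i] = p * (1 + rel_step)  # nudge one parameter at a time
        delta_r = model(perturbed) - base
        sens.append(delta_r / (p * rel_step))
    return sens

def reduced_costs(p):
    """Illustrative stand-in for the objective function."""
    price, repair, downtime = p
    return 0.09 * price + 0.015 * repair + 0.005 * downtime

print(sensitivity(reduced_costs, [100.0, 50.0, 10.0]))
```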

As an example of sensitivity analysis of a simulation model, let us consider the effect of changing the variable parameters of vehicle reliability on operational efficiency. As the objective function we use the reduced-costs indicator Z_pr. For the sensitivity analysis we use data on the operation of the KamAZ-5410 road train in urban conditions. The limits of variation of the parameters p_i for determining the sensitivity of the model can be set by expert judgment (Table 3.12).

To carry out calculations with the model, a base point was chosen at which the variable parameters have values corresponding to the standards. The parameter of the duration of idle time in maintenance and repair in days is replaced by a specific indicator: downtime in days per thousand kilometers, N_to.

The calculation results are shown in Fig. 3.24. The base point is at the intersection of all the curves. The dependencies shown in Fig. 3.24 make it possible to establish the degree of influence of each of the parameters under consideration on the magnitude of the change in Z_pr. At the same time, using the natural values of the analyzed quantities does not allow the comparative degree of influence of each parameter on Z_pr to be established, since these parameters have different units of measurement. To overcome this, we choose a form of interpretation of the calculation results in relative units: the base point is moved to the origin of coordinates, and the values of the variable parameters and the relative change of the output characteristic of the model are expressed as percentages. The results of these transformations are presented in Fig. 3.25.

Table 3.12

Values of the variable parameters

Fig. 3.24.


Fig. 3.25. Influence of the relative change of the variable parameters on the degree of change of Z_pr

The change of the variable parameters relative to the base value is plotted on one axis. As can be seen from Fig. 3.25, an increase in the value of each parameter near the base point by 50% leads to an increase in Z_pr of 9% for the growth of Ts_a, of more than 1.5% for S_tr and of less than 0.5% for N_to, and to a decrease in Z_pr of almost 4% for the increase in L_kr. A decrease of 25% in L_kr and D_rg leads to an increase in Z_pr, in each case by more than 6%. A decrease by the same amount in the parameters N_to, S_tr and Ts_a leads to a decrease in Z_pr by 0.2, 0.8 and 4.5%, respectively.

The given dependencies give an idea of the influence of an individual parameter and can be used when planning the operation of the transport system. By the intensity of their influence on Z_pr, the considered parameters can be arranged in the following order: Ts_a, L_kr, S_tr, N_to.

During operation, a change in the value of one indicator entails a change in the values of other indicators, and a relative change of each of the variable parameters by the same value has, in the general case, a different physical basis. It is necessary to replace the relative percentage change of the variable parameters along the abscissa with a parameter that can serve as a single measure for assessing the degree of change of each parameter. It can be assumed that at every moment of vehicle operation the value of each parameter has the same economic weight in relation to the values of the other variable parameters, i.e. from an economic point of view the reliability of the vehicle at every moment of time has an equal effect on all the parameters associated with it. Then the required economic equivalent will be the time, or more conveniently the year, of operation.

Fig. 3.26 shows dependencies built in accordance with the above requirements. The value in the first year of vehicle operation is taken as the base value of Z_pr. The values of the variable parameters for each year of operation were determined from the results of observations.


Fig. 3.26.

In the course of operation, the increase in Z_pr during the first three years is primarily due to an increase in the values of N_to; afterwards, under the operating conditions considered, the main role in reducing the efficiency of vehicle use is played by the increase in S_tr. To identify the influence of the value L_kr, in the calculations its value was equated to the total mileage of the vehicle from the start of operation. The form of the function Z_pr = f(L_kr) shows that the intensity of the decrease in Z_pr with increasing L_kr decreases significantly.

As a result of the sensitivity analysis of the model it is possible to understand which factors need to be influenced to change the objective function. To change the factors, control efforts must be applied, which involves corresponding costs. The amount of costs cannot be infinite; like any resources, these costs are in reality limited. Therefore, it is necessary to understand to what extent the allocation of funds will be effective. While in most cases the costs increase linearly with increasing control action, the efficiency of the system grows rapidly only up to a certain limit, after which even significant costs no longer give the same return. For example, it is impossible to increase the capacity of service devices without limit because of space limitations or the limited number of cars to be served, etc.

If we compare the increase in costs and the system efficiency indicator in the same units, then, as a rule, the result will look graphically as shown in Fig. 3.27.


Fig. 3.27.

From Fig. 3.27 it can be seen that, by assigning a price C_z per unit of cost Z and a price C_p per unit of the indicator P, these curves can be added. Curves are added if they need to be minimized or maximized simultaneously; if one curve is to be maximized and the other minimized, their difference should be found, for example, point by point. Then the resulting curve (Fig. 3.28), which takes into account both the effect of management and the costs of achieving it, will have an extremum. The value of the parameter R that delivers the extremum of the function is the solution of the synthesis problem.
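A sketch of this synthesis problem with illustrative curve shapes (a saturating effect curve and linear costs; the prices and coefficients are assumptions, not values from the text):

```python
import math

C_p, C_z = 10.0, 1.0  # prices per unit of the indicator and per unit of cost

def effect(R):
    """Saturating return on the control effort (illustrative shape)."""
    return 1.0 - math.exp(-0.5 * R)

def cost(R):
    """Costs grow linearly with the control effort (illustrative shape)."""
    return 0.8 * R

# Scan the control parameter and take the extremum of the combined curve.
grid = [i * 0.01 for i in range(1, 2001)]
best_R = max(grid, key=lambda R: C_p * effect(R) - C_z * cost(R))
print(f"optimal control effort R* = {best_R:.2f}")
```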


Fig. 3.28.


Besides the control R and the indicator P, systems are subject to disturbances. A disturbance D = (d1, d2, ...) is an input action that, unlike the control parameter, does not depend on the will of the system owner (Fig. 3.29). For example, low outside temperatures and competition, unfortunately, reduce the flow of customers; hardware failures reduce system performance. The owner of the system cannot manage these values directly. Usually the disturbance acts "in spite of" the owner, reducing the effect P obtained from the management efforts R. This is because, in the general case, the system is created to achieve goals that are unattainable by themselves in nature. A person, organizing a system, always hopes to achieve some goal P through it, and toward this he directs his efforts R. In this context, we can say that a system is an organization of natural components available to a person, studied by him, in order to achieve some new goal previously unattainable in other ways.

Fig. 3.29.

If we plot the dependence of the indicator P on the control R once again, but now under the conditions of the disturbance D, the character of the curve will probably change. Most likely the indicator will be lower for the same control values, since the disturbance is negative and reduces the system's performance. A system left to itself, without control efforts, ceases to deliver the goal for which it was created. If, as before, we plot the dependence of the costs and correlate it with the dependence of the indicator on the control parameter, then the extremum point found will shift (Fig. 3.30) compared to the case of zero disturbance (see Fig. 3.28). If the disturbance is increased again, the curves will change and, as a result, the position of the extremum point will change again.

The graph in Fig. 3.30 relates the indicator P, the control (resource) R and the disturbance D in complex systems, indicating how the manager (organization) making decisions in the system should best act. If the control action is less than the optimal one, the total effect decreases and a situation of lost profit arises. If the control action is greater than the optimal one, the effect also decreases, since the payment for each further increase in the control effort would be greater than the return obtained from using the system.


Fig. 3.30.

For real use, a simulation model of the system must be implemented on a computer. It can be created using the following tools:

  • a universal user application such as a mathematical package (MATLAB), a spreadsheet processor (Excel) or a DBMS (Access, FoxPro), which allows creating only relatively simple models and requires at least basic programming skills;
  • a universal programming language (C++, Java, Basic, etc.), which allows creating a model of any complexity; but this is a very time-consuming process that requires writing a large amount of program code and lengthy debugging;
  • a specialized simulation modeling language, which has ready-made templates and visual programming tools designed to quickly create the core of a model. One of the most famous is UML (Unified Modeling Language);
  • simulation programs, which are the most popular means of creating simulation models. They allow a model to be created visually, resorting to manual writing of program code for procedures and functions only in the most difficult cases.

Simulation programs are divided into two types:

  • Universal simulation packages are designed to create a variety of models and contain a set of functions with which typical processes in systems of various purposes can be simulated. Popular packages of this type are Arena (developed by Rockwell Automation, USA), ExtendSim (developed by Imagine That Inc., USA), AnyLogic (developed by XJ Technologies, Russia) and many others. Almost all universal packages have specialized versions for modeling specific classes of objects.
  • Domain-specific simulation packages serve to model specific types of objects and have specialized tools for this in the form of templates, wizards for visually designing a model from ready-made modules, etc.
  • Of course, two random numbers cannot depend on each other uniquely; Fig. 3.17, a is given for clarity of the concept of correlation.
  • Technical and economic analysis in the study of the reliability of KamAZ-5410 / Yu. G. Kotikov, I. M. Blyankinshtein, A. E. Gorev, A. N. Borisenko; LISI. L., 1983. 12 p. Dep. in TsBNTI Minavtotrans RSFSR, No. 135at-D83.
  • http://www.rockwellautomation.com.
  • http://www.extendsim.com.
  • http://www.xjtek.com.

As an example of a continuous random variable, consider a random variable X uniformly distributed over the interval (a; b). We say that a random variable X is uniformly distributed on the interval (a; b) if its distribution density is constant on this interval and equal to zero outside it: f(x) = c for x ∈ (a; b), and f(x) = 0 otherwise.

From the normalization condition we determine the value of the constant c. The area under the distribution density curve must equal one, and in our case it is the area of a rectangle with base (b - a) and height c (Fig. 1).

Fig. 1. Uniform distribution density
From here we find the value of the constant c: c·(b - a) = 1, whence c = 1/(b - a).

So, the density of a uniformly distributed random variable is f(x) = 1/(b - a) for x ∈ (a; b) and f(x) = 0 outside this interval.

Let us now find the distribution function by integrating the density:
1) for x ≤ a: F(x) = 0;
2) for a < x ≤ b: F(x) = (x - a)/(b - a);
3) for x > b: F(x) = 1.
Thus, the distribution function is zero to the left of a, rises linearly on (a; b), and equals unity to the right of b.

The distribution function is continuous and non-decreasing (Fig. 2).

Fig. 2. Distribution function of a uniformly distributed random variable

Let us find the mathematical expectation of a uniformly distributed random variable:

M(X) = ∫ x·(1/(b - a)) dx over [a, b] = (a + b)/2.

The variance of the uniform distribution is

D(X) = ∫ (x - M(X))²·(1/(b - a)) dx over [a, b] = (b - a)²/12.

Example #1. The scale division value of a measuring device equals 0.2. Instrument readings are rounded to the nearest whole division. Find the probability that an error will be made in the reading: a) less than 0.04; b) greater than 0.02.
Solution. The rounding error is a random variable uniformly distributed over the interval between adjacent whole divisions. Consider the interval (0; 0.2) as such a division (Fig. a). Rounding can be carried out both toward the left boundary, 0, and toward the right one, 0.2, so an error less than or equal to 0.04 can be made twice, which must be taken into account when calculating the probability:



P = 0.04/0.2 + 0.04/0.2 = 0.2 + 0.2 = 0.4

For the second case, the error can exceed 0.02 at both division boundaries, that is, the rounded value must be greater than 0.02 and less than 0.18.


Then the probability of such an error is P = (0.18 - 0.02)/0.2 = 0.8.
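A Monte Carlo check of this example (a sketch; the error is taken as the distance from a uniformly drawn position to the nearest division boundary):

```python
import random

n = 1_000_000
lt_004 = gt_002 = 0
for _ in range(n):
    x = random.uniform(0.0, 0.2)  # position within one scale division
    err = min(x, 0.2 - x)         # distance to the nearest whole division
    if err < 0.04:
        lt_004 += 1
    if err > 0.02:
        gt_002 += 1
print(f"P(err < 0.04) ~ {lt_004 / n:.3f} (theory 0.4)")
print(f"P(err > 0.02) ~ {gt_002 / n:.3f} (theory 0.8)")
```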

Example #2. It was assumed that the stability of the economic situation in a country (the absence of wars, natural disasters, etc.) over the past 50 years can be judged by the character of the distribution of the population by age: in a calm situation it should be uniform. As a result of a study, the following data were obtained for one country.

Is there any reason to believe that there was an unstable situation in the country?

We carry out the solution using the hypothesis-testing calculator. Table for calculating the indicators:

Groups | Interval middle, x_i | Quantity, f_i | x_i·f_i | Cumulative frequency, S | |x - x_avg|·f_i | (x - x_avg)²·f_i | Relative frequency, f_i/n
0-10 | 5 | 0.14 | 0.7 | 0.14 | 5.32 | 202.16 | 0.14
10-20 | 15 | 0.09 | 1.35 | 0.23 | 2.52 | 70.56 | 0.09
20-30 | 25 | 0.1 | 2.5 | 0.33 | 1.8 | 32.4 | 0.1
30-40 | 35 | 0.08 | 2.8 | 0.41 | 0.64 | 5.12 | 0.08
40-50 | 45 | 0.16 | 7.2 | 0.57 | 0.32 | 0.64 | 0.16
50-60 | 55 | 0.13 | 7.15 | 0.7 | 1.56 | 18.72 | 0.13
60-70 | 65 | 0.12 | 7.8 | 0.82 | 2.64 | 58.08 | 0.12
70-80 | 75 | 0.18 | 13.5 | 1 | 5.76 | 184.32 | 0.18
Total | | 1 | 43 | | 20.56 | 572 | 1
Indicators of the center of the distribution.
Weighted average:
x_avg = Σx_i·f_i / Σf_i = 43/1 = 43.
Variation indicators.
Absolute indicators of variation.
The range of variation is the difference between the maximum and minimum values of the attribute of the primary series:
R = X_max - X_min = 70 - 0 = 70.
Variance characterizes the measure of spread around the mean value (a measure of dispersion, i.e. deviation from the mean):
D = Σ(x_i - x_avg)²·f_i / Σf_i = 572/1 = 572.
Standard deviation:
σ = √D = √572 ≈ 23.92.
Each value of the series differs from the average value of 43 by no more than 23.92
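These grouped-data indicators can be re-derived with a short sketch (values taken from the table above):

```python
mids = [5, 15, 25, 35, 45, 55, 65, 75]                    # interval middles x_i
freqs = [0.14, 0.09, 0.10, 0.08, 0.16, 0.13, 0.12, 0.18]  # frequencies f_i

total = sum(freqs)  # equals 1 here, since the frequencies are relative
mean = sum(x * f for x, f in zip(mids, freqs)) / total
var = sum((x - mean) ** 2 * f for x, f in zip(mids, freqs)) / total
print(f"mean = {mean}, D = {var:.0f}, sigma = {var ** 0.5:.2f}")
# -> mean = 43.0, D = 572, sigma = 23.92
```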
Testing hypotheses about the type of distribution.
Testing the hypothesis of a uniform distribution of the general population.
In order to test the hypothesis that the general population X is uniformly distributed, i.e. by the law f(x) = 1/(b - a) in the interval (a, b), it is necessary:
1. Estimate the parameters a and b, the ends of the interval in which possible values of X were observed, by the formulas (the * sign denotes parameter estimates):
a* = x_avg - √3·σ, b* = x_avg + √3·σ.
2. Find the probability density of the assumed distribution: f(x) = 1/(b* - a*).
3. Find the theoretical frequencies:
n1 = n·P1 = n·[1/(b* - a*)]·(x1 - a*),
n2 = n3 = ... = n_(s-1) = n·[1/(b* - a*)]·(x_i - x_(i-1)),
n_s = n·[1/(b* - a*)]·(b* - x_(s-1)).
4. Compare the empirical and theoretical frequencies using the Pearson test, taking the number of degrees of freedom k = s - 3, where s is the number of initial sampling intervals; if a combination of small frequencies (and hence of the intervals themselves) was made, then s is the number of intervals remaining after the combination.

Solution:
1. Find the estimates of the parameters a* and b* of the uniform distribution:
a* = x_avg - √3·σ = 43 - √3·23.92 = 1.58, b* = x_avg + √3·σ = 43 + √3·23.92 = 84.42.
2. Find the density of the assumed uniform distribution:
f(x) = 1/(b * - a *) = 1/(84.42 - 1.58) = 0.0121
3. Find the theoretical frequencies:
n1 = n·f(x)·(x1 - a*) = 1·0.0121·(10 - 1.58) = 0.1
n8 = n·f(x)·(b* - x7) = 1·0.0121·(84.42 - 70) = 0.17
The remaining n_i are equal:
n_i = n·f(x)·(x_i - x_(i-1)) = 1·0.0121·10 = 0.12

i | n_i | n*_i | n_i - n*_i | (n_i - n*_i)² | (n_i - n*_i)²/n*_i
1 | 0.14 | 0.1 | 0.0383 | 0.00147 | 0.0144
2 | 0.09 | 0.12 | -0.0307 | 0.000943 | 0.00781
3 | 0.1 | 0.12 | -0.0207 | 0.000429 | 0.00355
4 | 0.08 | 0.12 | -0.0407 | 0.00166 | 0.0137
5 | 0.16 | 0.12 | 0.0393 | 0.00154 | 0.0128
6 | 0.13 | 0.12 | 0.0093 | 8.6E-5 | 0.000716
7 | 0.12 | 0.12 | -0.000701 | 0 | 4.0E-6
8 | 0.18 | 0.17 | 0.00589 | 3.5E-5 | 0.000199
Total | 1 | | | | 0.0532
Let us define the boundary of the critical region. Since the Pearson statistic measures the difference between the empirical and theoretical distributions, the larger its observed value χ²_obs, the stronger the argument against the main hypothesis. Therefore, the critical region for this statistic is always right-sided.
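A sketch completing the comparison numerically (assuming SciPy is available; k = s - 3 = 5 per the rule above, α = 0.05). Note that the table above works with relative frequencies (n = 1), so with real counts the statistic would scale with the sample size:

```python
from scipy.stats import chi2

chi2_obs = 0.0532              # Pearson statistic from the table above
k = 8 - 3                      # degrees of freedom, k = s - 3
chi2_crit = chi2.ppf(0.95, k)  # right-sided critical boundary at alpha = 0.05
print(f"chi2_obs = {chi2_obs}, chi2_crit = {chi2_crit:.2f}")
if chi2_obs < chi2_crit:
    print("No reason to reject the uniform-distribution hypothesis.")
else:
    print("Reject the uniform-distribution hypothesis.")
```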