Principles of detailed correlation of well sections. Correlation analysis with a worked example. Examples illustrating Cuvier's principle of correlation

The purpose of correlation analysis is to identify an estimate of the strength of the connection between random variables (features) that characterize some real process.
Problems of correlation analysis:
a) Measuring the degree of coherence (closeness, strength, severity, intensity) of two or more phenomena.
b) Selection of factors that have the most significant impact on the resulting attribute, based on measuring the degree of connectivity between phenomena. Factors that are significant in this aspect are used further in regression analysis.
c) Detection of unknown causal relationships.

The forms of manifestation of relationships are very diverse. The most common types are functional (complete) and correlation (incomplete) connection.
Correlation manifests itself on average, over mass observations, when a given value of one variable corresponds to a whole range of probable values of the other. By contrast, a relationship is called functional if each value of the factor characteristic corresponds to a single, well-defined, non-random value of the resultant characteristic; a correlation relationship is incomplete in this sense.
A visual representation of a correlation table is the correlation field. It is a graph where X values ​​are plotted on the abscissa axis, Y values ​​are plotted on the ordinate axis, and combinations of X and Y are shown by dots. By the location of the dots, one can judge the presence of a connection.
Indicators of connection closeness make it possible to characterize the dependence of the variation of the resulting trait on the variation of the factor trait.
A more refined indicator of the closeness of a correlation connection is the linear correlation coefficient. When calculating this indicator, not only the deviations of individual values of a characteristic from the average are taken into account, but also the magnitudes of these deviations.

The key questions of this topic are the equations of the regression relationship between the effective characteristic and the explanatory variable, the least squares method for estimating the parameters of the regression model, analyzing the quality of the resulting regression equation, constructing confidence intervals for predicting the values ​​of the effective characteristic using the regression equation.

Example 2


System of normal equations:
a·n + b·∑x = ∑y
a·∑x + b·∑x² = ∑xy
For our data, the system of equations has the form
30a + 5763b = 21460
5763a + 1200261b = 3800360
From the first equation we express a and substitute it into the second, obtaining b = -3.46, a = 1379.33.
Regression equation:
y = -3.46x + 1379.33
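As a sketch, the system can be solved in closed form (Cramer's rule) using the totals quoted above; the variable names are my own:

```python
# Solving the system of normal equations by Cramer's rule, using the
# totals quoted in the example (n = 30, Σx = 5763, Σx² = 1200261,
# Σy = 21460, Σxy = 3800360).
n, sx, sxx, sy, sxy = 30, 5763, 1200261, 21460, 3800360

b = (n * sxy - sx * sy) / (n * sxx - sx ** 2)   # slope
a = (sy - b * sx) / n                           # intercept

print(round(b, 2), round(a, 2))   # -3.46 1379.33
```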

2. Calculation of regression equation parameters.
Sample means.



Sample variances:


Standard deviation


1.1. Correlation coefficient
Covariance.

We calculate the indicator of connection closeness. This indicator is the sample linear correlation coefficient, which is calculated by the formula:

The linear correlation coefficient takes values ​​from –1 to +1.
Connections between characteristics can be weak or strong (close). Their strength is assessed on the Chaddock scale:
0.1 < |r xy| < 0.3: weak;
0.3 < |r xy| < 0.5: moderate;
0.5 < |r xy| < 0.7: noticeable;
0.7 < |r xy| < 0.9: high;
0.9 < |r xy| < 1: very high.
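As a small illustration, the scale can be encoded as a lookup function; the boundaries follow the list above, while treating |r| below 0.1 as "no appreciable connection" is my own convention for completeness:

```python
# The Chaddock scale as a lookup function over |r|.
def chaddock(r: float) -> str:
    strength = abs(r)
    if strength >= 0.9:
        return "very high"
    if strength >= 0.7:
        return "high"
    if strength >= 0.5:
        return "noticeable"
    if strength >= 0.3:
        return "moderate"
    if strength >= 0.1:
        return "weak"
    return "none"

print(chaddock(-0.74))   # the example's coefficient falls in "high"
```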
In our example, the relationship between trait Y and factor X is high and inverse.
In addition, the linear pair correlation coefficient can be determined through the regression coefficient b:

1.2. Regression equation (estimation of the regression equation).

The linear regression equation is y = -3.46 x + 1379.33

Coefficient b = -3.46 shows the average change in the resultant indicator (in units of measurement of y) per unit increase or decrease of factor x. In this example, an increase in x of 1 unit lowers y by 3.46 on average.
The coefficient a = 1379.33 formally shows the predicted level of y at x = 0, but this is meaningful only if x = 0 lies close to the sample values.
If x = 0 is far from the sample values of x, a literal interpretation may give incorrect results: even if the regression line describes the observed sample values fairly accurately, there is no guarantee that this will also hold when extrapolating to the left or right.
By substituting the appropriate values of x into the regression equation, we can determine the fitted (predicted) values of the resultant indicator y(x) for each observation.
The sign of the regression coefficient b determines the direction of the relationship between y and x (b > 0: direct; b < 0: inverse). In our example, the connection is inverse.
1.3. Elasticity coefficient.
It is not advisable to use regression coefficients (here, b) to directly assess the influence of factors on the resultant characteristic when the units of measurement of the resultant indicator y and the factor characteristic x differ.
For these purposes, elasticity coefficients and beta coefficients are calculated.
The average elasticity coefficient E shows by what percentage, on average, the result y changes from its average value when the factor x changes by 1% of its average value.
The elasticity coefficient is found by the formula:

E = b · x̄ / ȳ

Here the elasticity coefficient is less than 1 in absolute value. Therefore, if X changes by 1%, Y will change by less than 1%; in other words, the influence of X on Y is modest.
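A sketch of the computation, assuming the standard average-elasticity formula E = b·x̄/ȳ for a linear model and recovering the means from the totals quoted earlier in the example:

```python
# Average elasticity E = b · x̄ / ȳ, with the means recovered from the
# example's totals (n = 30, Σx = 5763, Σy = 21460) and the slope
# b = -3.46 found above.
n, sum_x, sum_y = 30, 5763, 21460
b = -3.46

x_mean = sum_x / n            # 192.1
y_mean = sum_y / n            # ≈ 715.33
E = b * x_mean / y_mean
print(round(E, 2))            # ≈ -0.93, i.e. |E| < 1
```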
The beta coefficient shows by what fraction of its standard deviation the average value of the resultant characteristic changes when the factor characteristic changes by one standard deviation, with the values of the remaining independent variables held constant:

That is, an increase in x by one standard deviation S x leads, on average, to a decrease in Y by 0.74 of its standard deviation S y.
1.4. Approximation error.
Let us evaluate the quality of the regression equation using the mean approximation error, the average relative deviation of the calculated values from the actual ones:


Since the error is less than 15%, this equation can be used as a regression model.
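A minimal sketch of the mean approximation error on hypothetical data (the example's own table of y-values is not reproduced in this excerpt):

```python
# Mean approximation error: A = (1/n) · Σ |(y − ŷ)/y| · 100%.
def mape(y, y_hat):
    return 100 / len(y) * sum(abs((yi - fi) / yi) for yi, fi in zip(y, y_hat))

# Hypothetical actual and fitted values, for illustration only.
y     = [100, 120, 90, 110]
y_hat = [ 95, 125, 92, 105]
print(round(mape(y, y_hat), 2))   # ≈ 3.98, well under the 15% threshold
```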
Analysis of variance.
The purpose of analysis of variance is to analyze the variance of the dependent variable:
∑(yᵢ − ȳ)² = ∑(y(x) − ȳ)² + ∑(y − y(x))²
where
∑(yᵢ − ȳ)² is the total sum of squared deviations;
∑(y(x) − ȳ)² is the sum of squared deviations due to regression ("explained" or "factor");
∑(y − y(x))² is the residual sum of squared deviations.
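As a quick numerical check, any ordinary least-squares line satisfies this decomposition exactly; the data below are illustrative, not from the example:

```python
# Verify SST = SSR + SSE for an OLS fit on small illustrative data.
xs = [1, 2, 3, 4, 5]
ys = [2, 2, 4, 5, 7]

n = len(xs)
x_mean = sum(xs) / n
y_mean = sum(ys) / n

# Least-squares slope and intercept.
b = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / \
    sum((x - x_mean) ** 2 for x in xs)
a = y_mean - b * x_mean
fit = [a + b * x for x in xs]

sst = sum((y - y_mean) ** 2 for y in ys)                 # total
ssr = sum((f - y_mean) ** 2 for f in fit)                # explained
sse = sum((y - f) ** 2 for y, f in zip(ys, fit))         # residual
print(round(sst, 6), round(ssr + sse, 6))                # equal
```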
For a linear relationship, the theoretical correlation ratio is equal to the correlation coefficient r xy.
For any form of dependence, the closeness of the connection is determined using the multiple correlation coefficient:

This coefficient is universal, as it reflects the closeness of the connection and the accuracy of the model, and can also be used for any form of connection between variables. When constructing a one-factor correlation model, the multiple correlation coefficient is equal to the pair correlation coefficient r xy.
1.6. Determination coefficient.
The square of the (multiple) correlation coefficient is called the coefficient of determination, which shows the proportion of variation in the resultant attribute explained by the variation in the factor attribute.
Most often, when interpreting the coefficient of determination, it is expressed as a percentage.
R² = (−0.74)² = 0.5413
i.e., 54.13% of the variation in y is explained by the variation in x; the accuracy of the fitted regression equation is average. The remaining 45.87% of the variation in Y is explained by factors not included in the model.


Page 17. Remember

Jean Baptiste Lamarck mistakenly believed that all organisms strive for perfection; by way of example, a cat would, on this view, be striving to become a human. Another mistake was that he considered the external environment the only factor of evolution.

2. What biological discoveries were made by the middle of the 19th century?

The most significant events of the first half of the 19th century were the formation of paleontology and the biological foundations of stratigraphy, the emergence of cell theory, the formation of comparative anatomy and comparative embryology, the development of biogeography, and the widespread dissemination of transformist ideas. The central events of the second half of the 19th century were the publication of "The Origin of Species" by Charles Darwin and the spread of the evolutionary approach in many biological disciplines (paleontology, systematics, comparative anatomy and comparative embryology), the formation of phylogenetics, the development of cytology and microscopic anatomy, of experimental physiology and experimental embryology, the formation of the concept of specific pathogens of infectious diseases, and the proof of the impossibility of spontaneous generation of life under modern natural conditions.

Page 21. Questions for review and assignments.

1. What geological data served as a prerequisite for Charles Darwin’s evolutionary theory?

The English geologist C. Lyell proved the inconsistency of J. Cuvier's ideas about sudden catastrophes changing the surface of the Earth, and substantiated the opposite point of view: the surface of the planet changes gradually, continuously under the influence of ordinary everyday factors.

2. Name the discoveries in biology that contributed to the formation of Charles Darwin’s evolutionary views.

The following biological discoveries contributed to the formation of Charles Darwin's views: T. Schwann created the cell theory, which postulated that living organisms consist of cells whose general features are the same in all plants and animals, which served as strong evidence of the unity of origin of the living world; K. M. Baer showed that the development of all organisms begins with the egg, and that at the early stages of embryonic development a clear similarity is revealed among the embryos of vertebrates belonging to different classes; studying the structure of vertebrates, J. Cuvier established that all the organs of an animal are parts of one integral system, that the structure of each organ corresponds to the plan of structure of the whole organism, and that a change in one part of the body must cause changes in other parts.

3. Characterize the natural scientific prerequisites for the formation of Charles Darwin’s evolutionary views.

1. Heliocentric system.

2. Kant-Laplace theory.

3. Law of conservation of matter.

4. Achievements of descriptive botany and zoology.

5. Great geographical discoveries.

6. Discovery of the law of germinal similarity by K. Baer: “Embryos exhibit a certain similarity within the type.”

7. Achievements in the field of chemistry: Wöhler synthesized urea, Butlerov synthesized carbohydrates, Mendeleev created the periodic table.

8. Cell theory of T. Schwann.

9. A large number of paleontological finds.

10. Expedition material of Charles Darwin.

Thus, scientific facts collected in various fields of natural science contradicted previously existing theories of the origin and development of life on Earth. The English scientist Charles Darwin was able to correctly explain and generalize them, creating the theory of evolution.

4. What is the essence of J. Cuvier’s correlation principle? Give examples.

This is the law of the correlation of the parts of a living organism: all parts of the body are naturally interconnected, and if any part of the body changes, changes will follow in other parts (organs or organ systems). Cuvier is the founder of comparative anatomy and paleontology. He reasoned, for example, that if an animal has a large head, it should have horns to defend itself from enemies; if it has horns, it has no fangs and is therefore a herbivore; if it is a herbivore, it has a complex multi-chambered stomach; and if it has a complex stomach and feeds on plant food, it also has a very long intestine, since plant food has little energy value; and so on.

5. What role did the development of agriculture play in the formation of evolutionary theory?

In agriculture, various methods of improving old breeds and introducing new, more productive breeds of animals and high-yielding varieties of plants came into ever wider use, which undermined belief in the immutability of living nature. These advances strengthened Charles Darwin's evolutionary views and helped him establish the principles of selection that underlie his theory.

1) correlation analysis as a means of obtaining information;

2) features of the procedures for determining linear and rank correlation coefficients.

Correlation analysis (from the Latin correlatio, "connection") is used to test hypotheses about the statistical dependence of the values of two or more variables when the researcher can record (measure) them but not control (manipulate) them.

When an increase in the level of one variable is accompanied by an increase in the level of another, we speak of a positive correlation. If an increase in one variable occurs while the level of another decreases, we speak of a negative correlation. In the absence of a connection between the variables, we are dealing with a zero correlation.

The variables here can be data from tests, observations, or experiments, socio-demographic characteristics, physiological parameters, behavioral characteristics, and so on. For example, the method allows a quantitative assessment of the relationship between such characteristics as: success at university and the degree of professional achievement after graduation; level of aspiration and stress; number of children in a family and their intelligence; personality traits and professional orientation; duration of loneliness and the dynamics of self-esteem; anxiety and intragroup status; social adaptation and aggressiveness in conflict.

As auxiliary tools, correlation procedures are indispensable in the construction of tests (to determine the validity and reliability of the measurement), as well as as pilot actions to test the suitability of experimental hypotheses (the fact of the absence of correlation allows us to reject the assumption of a cause-and-effect relationship between variables).

The growing interest of psychological science in the potential of correlation analysis is due to a number of reasons. First, it becomes possible to study a wide range of variables whose experimental verification is difficult or impossible; for ethical reasons, for example, one cannot conduct experimental studies of suicide, drug addiction, destructive parental influences, or the influence of authoritarian sects. Second, valuable generalizations can be obtained from data on large numbers of individuals in a short time. Third, many phenomena are known to change their specificity during rigorous laboratory experiments, whereas correlation analysis lets the researcher operate with information obtained under conditions as close as possible to real ones. Fourth, a statistical study of the dynamics of a particular dependence often creates the prerequisites for reliable prediction of psychological processes and phenomena.

However, it should be borne in mind that the use of the correlation method is also associated with very significant fundamental limitations.

Thus, it is known that variables may well correlate even in the absence of a cause-and-effect relationship with each other.

This can happen for random reasons, because of sample heterogeneity, or because the research tools are inadequate to the tasks set. Such a spurious correlation can become, say, "proof" that women are more disciplined than men, that teenagers from single-parent families are more prone to delinquency, that extroverts are more aggressive than introverts, and so on. Indeed, if we select men working in higher education into one group and women, say, from the service sector into another, and then test both groups on knowledge of scientific methodology, we will obtain a noticeable apparent dependence of the quality of knowledge on gender. Can such a correlation be trusted?

Even more often, perhaps, in research practice there are cases when both variables change under the influence of some third or even several hidden determinants.

If we denote the variables with numbers and the directions from causes to effects with arrows, we will see a number of possible options:

[Arrow diagrams over variables 1–4, showing the alternative causal structures, are not reproduced in this copy.]

Inattention to the influence of real factors not taken into account by researchers has made it possible to offer justifications that intelligence is a purely inherited formation (the psychogenetic approach) or, on the contrary, that it is due only to the influence of social components of development (the sociogenetic approach). It should be noted that in psychology phenomena with a single unambiguous root cause are uncommon.

In addition, the fact that variables are interconnected does not make it possible to identify cause and effect based on the results of a correlation study, even in cases where there are no intermediate variables.

For example, when studying the aggressiveness of children, it was found that children prone to cruelty are more likely than their peers to watch films with scenes of violence. Does this mean that such scenes develop aggressive reactions or, on the contrary, such films attract the most aggressive children? It is impossible to give a legitimate answer to this question within the framework of a correlation study.

It is necessary to remember: the presence of correlations is not an indicator of the severity and direction of cause-and-effect relationships.

In other words, having established the correlation of variables, we can judge not about determinants and derivatives, but only about how closely interrelated changes in variables are and how one of them reacts to the dynamics of the other.

When using this method, one operates with one or another type of correlation coefficient. Its numerical value usually varies from -1 (inverse dependence of variables) to +1 (direct dependence). A zero value of the coefficient corresponds to a complete absence of interrelation between the dynamics of the variables.

For example, a correlation coefficient of +0.80 reflects the presence of a more pronounced relationship between variables than a coefficient of +0.25. Likewise, the relationship between variables characterized by a coefficient of -0.95 is much closer than that where the coefficients have values ​​of +0.80 or + 0.25 (“minus” only tells us that an increase in one variable is accompanied by a decrease in another) .

In the practice of psychological research, correlation coefficients usually do not reach +1 or -1. We can only talk about one degree or another of approximation to a given value. Often a correlation is considered strong if its coefficient is greater than 0.60. In this case, insufficient correlation, as a rule, is considered to be indicators located in the range from -0.30 to +0.30.

However, it should immediately be stipulated that interpreting the presence of a correlation always involves determining the critical values of the corresponding coefficient. Let us consider this point in more detail.

It may well turn out that a correlation coefficient of +0.50 in some cases will not be considered reliable, and a coefficient of +0.30 will, under certain conditions, be a characteristic of an undoubted correlation. Much here depends on the length of the series of variables (i.e., on the number of compared indicators), as well as on the given value of the significance level (or on the accepted probability of error in the calculations).

After all, on the one hand, the larger the sample, the quantitatively smaller the coefficient will be considered reliable evidence of correlation relationships. On the other hand, if we are willing to accept a significant probability of error, we can consider a sufficiently small value for the correlation coefficient.

There are standard tables with critical values ​​of correlation coefficients. If the coefficient we obtain is lower than that indicated in the table for a given sample at the established significance level, then it is considered statistically unreliable.

When working with such a table, you should know that the threshold value for the level of significance in psychological research is usually considered to be 0.05 (or five percent). Of course, the risk of making a mistake will be even less if this probability is 1 in 100 or, even better, 1 in 1000.

So, it is not the value of the calculated correlation coefficient itself that serves as the basis for assessing the quality of the relationship between variables, but a statistical decision about whether the calculated coefficient indicator can be considered reliable.

Knowing this, let us turn to studying specific methods for determining correlation coefficients.

A significant contribution to the development of the statistical apparatus of correlation studies was made by the English mathematician and biologist Karl Pearson (1857-1936), who at one time was engaged in testing the evolutionary theory of Charles Darwin.

The designation of the Pearson correlation coefficient (r) comes from the concept of regression: an operation reducing a set of particular dependencies between individual values of variables to their continuous (linear) averaged dependence.

The formula for calculating the Pearson coefficient is as follows:

r = ∑(x − x̄)(y − ȳ) / √(∑(x − x̄)² · ∑(y − ȳ)²)

where x, y are the individual values of the variables, ∑ (sigma) denotes summation, and x̄, ȳ are the mean values of the same variables. Let us consider how to use the table of critical values of Pearson coefficients. The number of degrees of freedom is indicated in its left column. When determining the row we need, we proceed from the fact that the required number of degrees of freedom equals n − 2, where n is the amount of data in each of the correlated series. The columns to the right give specific values of the coefficient moduli.
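Assuming no statistics library is at hand, the definition above can be computed directly; the data and the function name are illustrative, not from the example:

```python
import math

# Pearson's r computed directly from its definition:
# r = Σ(x − x̄)(y − ȳ) / sqrt(Σ(x − x̄)² · Σ(y − ȳ)²)
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - mx) ** 2 for x in xs) *
                    sum((y - my) ** 2 for y in ys))
    return num / den

# Illustrative paired scores.
xs = [1, 2, 3, 4, 5]
ys = [2, 1, 4, 3, 5]
r = pearson(xs, ys)
df = len(xs) - 2     # degrees of freedom for the critical-value table
print(round(r, 3), df)
```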

In the table of critical values, the left column lists the number of degrees of freedom and the remaining columns correspond to significance levels. The further to the right a column is located, the higher the reliability of the correlation and the more confident the statistical decision about its significance.

If, for example, we have two rows of numbers correlated with 10 units in each of them and a coefficient equal to +0.65 is obtained using the Pearson formula, then it will be considered significant at the level of 0.05 (since it is greater than the critical value of 0.632 for the probability 0.05 and less than the critical value of 0.715 for a probability of 0.02). This level of significance indicates a significant likelihood of repeating this correlation in similar studies.

Now let us give an example of calculating the Pearson correlation coefficient. Suppose we need to determine the nature of the connection between the performance of two tests by the same persons. Data for the first test are designated as x, and for the second as y.

To simplify the calculations, some identities are introduced. Namely:

In this case, we have the following results of the subjects (in test scores):

[The table of the subjects' scores on tests x and y, together with the intermediate sums and the computed coefficient, is not reproduced in this copy.]
Note that the number of degrees of freedom in our case is 10. Referring to the table of critical values of Pearson coefficients, we find that for this number of degrees of freedom, at the 0.999 confidence level, any correlation indicator higher than 0.823 is considered reliable. This gives us the right to consider the obtained coefficient as evidence of an undoubted correlation between the series x and y.

The use of the linear correlation coefficient becomes unlawful when calculations are made on an ordinal rather than an interval measurement scale. In that case rank correlation coefficients are used. The results are, of course, less accurate, since it is not the quantitative characteristics themselves that are compared but only the order in which they follow one another.

Among the rank correlation coefficients in the practice of psychological research, the one proposed by the English scientist Charles Spearman (1863-1945), the famous developer of the two-factor theory of intelligence, is often used.

Using an appropriate example, let's look at the steps required to determine Spearman's rank correlation coefficient.

The formula for calculating it is as follows:

r s = 1 − 6·∑d² / (n·(n² − 1))

where d is the difference between the ranks of each pair of values from the series x and y, and n is the number of compared pairs.

Let x and y be indicators of the subjects' success in performing certain types of activity (assessments of individual achievements). We have the following data:

[The table of the subjects' scores and their ranks is not reproduced in this copy.]

Note that at first the indicators are ranked separately in the series x And y. If several equal variables are encountered, then they are assigned the same average rank.

Then a pairwise determination of the difference in ranks is carried out. The sign of the difference is not significant, since according to the formula it is squared.

In our example, the sum of the squared rank differences is equal to 178. Substituting this number into the formula:
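The substitution can be sketched as follows. The text's ∑d² = 178 is used, but the number of pairs n is not preserved in this copy, so n = 12 here is purely an assumption for illustration; a larger n would give a smaller coefficient:

```python
# Spearman's r_s = 1 − 6·Σd² / (n·(n² − 1)), applied to the example's
# Σd² = 178 with an assumed n = 12 pairs.
def spearman_from_d2(d2_sum: int, n: int) -> float:
    return 1 - 6 * d2_sum / (n * (n ** 2 - 1))

print(round(spearman_from_d2(178, 12), 3))
```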

As we can see, the correlation coefficient in this case is negligibly small. However, let's compare it with the critical values ​​of the Spearman coefficient from the standard table.

Conclusion: there is no significant correlation between the indicated series of variables x and y.

It should be noted that the use of rank correlation procedures provides the researcher with the opportunity to determine the relationships of not only quantitative, but also qualitative characteristics, in the event, of course, that the latter can be ordered in increasing severity (ranked).

We examined the most common, perhaps, practical methods for determining correlation coefficients. Other, more complex or less commonly used versions of this method, if necessary, can be found in manuals devoted to measurements in scientific research.

BASIC CONCEPTS: correlation; correlation analysis; Pearson linear correlation coefficient; Spearman's rank correlation coefficient; critical values ​​of correlation coefficients.

Questions for discussion:

1. What are the possibilities of correlation analysis in psychological research? What can and cannot be detected using this method?

2. What is the sequence of actions when determining the Pearson linear correlation coefficients and Spearman rank correlation coefficients?

Exercise 1:

Determine whether the following indicators of correlation between variables are statistically significant:

a) Pearson coefficient +0.445 for data from two tests in a group of 20 subjects;

b) Pearson coefficient -0.810 with the number of degrees of freedom equal to 4;

c) Spearman coefficient +0.415 for a group of 26 people;

d) Spearman coefficient +0.318 with the number of degrees of freedom equal to 38.

Exercise 2:

Determine the linear correlation coefficient between two series of indicators.

Row 1: 2, 4, 5, 5, 3, 6, 6, 7, 8, 9

Row 2: 2, 3, 3, 4, 5, 6, 3, 6, 7, 7

Exercise 3:

Draw conclusions about the statistical reliability and degree of expression of the correlation relationships with the number of degrees of freedom equal to 25, if it is known that it equals: a) 1200; b) 1555; c) 2300.

Exercise 4:

Perform the entire sequence of actions necessary to determine the rank correlation coefficient between extremely general indicators of schoolchildren’s performance (“excellent student,” “good student,” etc.) and the characteristics of their performance on the mental development test (MDT). Make an interpretation of the obtained indicators.

Exercise 5:

Using the linear correlation coefficient, calculate the test-retest reliability of the intelligence test at your disposal. Perform a study in a student group with a time interval between tests of 7-10 days. Formulate your conclusions.

Principle 1 – identifying and accounting for the sequence of rock bedding, i.e., determining conformable or unconformable occurrence of layers. Conformable – each overlying layer was deposited directly on the underlying one. Unconformable – the section contains breaks, unconformities, or tectonic disturbances, reflected in the absence of some deposits or the repetition of underlying or overlying strata.

Principle 2 – the mutual position of the boundaries of coeval layers, i.e., with small changes in thickness, the top and bottom of a formation are approximately parallel, as are the boundaries of adjacent layers.

Principle 3 – tracing by benchmarks (marker horizons) and reference boundaries.

Principle 4 – rhythmicity of sedimentation, i.e., the sequential change of rocks of different lithological composition depending on the sign of oscillatory movements. Submergence of land means advance of the sea (transgression); the reverse, regression, is a retreat of the coastline. During a transgressive cycle the grain size of the rocks coarsens up the section, and during a regressive cycle it becomes finer.

In this work, it is proposed to perform a detailed correlation for the productive part of the Yasnaya Polyana deposits in the section of the Gondyrevskoye field. The rocks in the field's section occur in a certain sequence, namely an alternation of layers of different lithological composition, reservoir properties, and so on. Identifying the same horizons and layers in the section and tracing them over the area, and clarifying their continuity along strike, their conditions of occurrence, and the constancy of their composition and thickness, is carried out by means of detailed correlation. It is performed for the productive part of the section at the stage of preparing the field for development or during development, and it solves the problem of constructing a primary static model of the deposit: identifying the boundaries of productive formations and geophysical benchmarks in well sections, determining the nature of reservoir variability over the area (the presence of pinch-out and replacement zones), and determining the subdivision of the horizon into individual layers and interlayers. The comparison of well sections in this work is carried out on the basis of lithogenetic characteristics, which include the material composition of the rocks: sandstones, siltstones, limestones, etc.

The methodology for constructing a correlation scheme is as follows:

1. Selecting the datum line – in this work, the first reference boundary in each well (the interval OG II k) is taken as the line of correlation. From the correlation line, which is taken as zero, a scale is drawn at intervals of 4 meters.

2. Location of well sections on the diagram – to build a correlation scheme, it is necessary to select a reference section. The reference is the most representative, clearly subdivided well section, in which all reference layers are clearly distinguished, a sufficient thickness of the section is present, and a full suite of well logs has been run. All other wells are placed in an arbitrary order, taking into account the position of benchmarks in the nearest neighboring wells. The selected reference well, together with the stratigraphic column, is placed on the left side of the sheet. A horizontal reference line is drawn on a sheet of Whatman paper, along which the axes of the correlated well sections are plotted at arbitrary distances.


3. Tracing the tops and bases of benchmarks, coeval strata, and interlayers. The results of the lithological subdivision of the section are then plotted along the axis of each well in the following order: the intervals of occurrence of the benchmarks, then the positions of the tops and bases of the permeable layers. Next, each well section is compared in turn with the reference section, i.e. corresponding boundaries are traced and connected with straight lines. When connecting layer boundaries, the following conditions must be met:

The lines connecting the tops and bases of the layers should be approximately parallel to the previously drawn lines connecting the boundaries of the benchmarks;

These lines must not intersect one another or differ significantly in slope;

If in one well the formation is composed of reservoir rock that in the next well is replaced by a non-reservoir, then at half the distance between these wells a vertical zigzag line marks the conditional boundary of facies replacement (Fig. 14, a); the same is done if only part of the formation is replaced (Fig. 14, b);

If the reservoir is identified in only one well and is absent in the neighboring wells, then a pinch-out of the formation is drawn (Fig. 14, c). In this case the formation is drawn only to half the distance between the wells.
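The geometric bookkeeping behind the steps above can be sketched in a few lines of code. This is a minimal illustrative sketch, not part of the original methodology: the well names, depths, and helper functions are assumptions, and the numbers are invented, not field data from the Gondyrevskoye field.

```python
# Hypothetical sketch of the correlation-scheme geometry described above.
# All names and numbers are illustrative assumptions.

def to_scheme_depth(measured_depth, datum_depth):
    """Position of a boundary relative to the datum (correlation) line,
    which is taken as zero on the scheme."""
    return measured_depth - datum_depth

def replacement_boundary_x(x_well_a, x_well_b):
    """The conditional facies-replacement (or pinch-out) boundary is drawn
    at half the distance between the axes of the two wells."""
    return (x_well_a + x_well_b) / 2

# Example: the top of a permeable layer at 1254 m measured depth in a well
# whose datum boundary lies at 1240 m measured depth:
top_on_scheme = to_scheme_depth(1254.0, 1240.0)   # 14.0 m below the datum line

# Two neighboring well axes plotted at x = 0 and x = 120 mm on the sheet:
x_boundary = replacement_boundary_x(0.0, 120.0)   # boundary drawn at x = 60.0 mm
```

The same half-distance rule covers both cases in the text: full or partial facies replacement between two wells, and pinch-out of a formation found in only one well.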

Question 1. What is a practical system for classifying living organisms?
Even in ancient times, there was a need to organize the rapidly accumulating knowledge in the field of zoology and botany, which led to their systematization. Practical classification systems were created in which animals and plants were grouped depending on the benefit or harm they brought to humans.

For example: medicinal plants, garden plants, ornamental plants, poisonous animals, livestock. These classifications united organisms that were completely different in structure and origin. However, because they are easy to use, such classifications are still employed in popular and applied literature.

Question 2. What contribution did C. Linnaeus make to biology?
C. Linnaeus described more than 8 thousand species of plants and 4 thousand species of animals, and established a uniform terminology and procedure for describing species. He united similar species into genera, genera into orders, and orders into classes, thus basing his classification on the principle of hierarchy (subordination) of taxa. The scientist introduced into science the use of binary (binomial) nomenclature, in which each species is designated by two words: the first word denotes the genus and is common to all species included in it, while the second is the species name proper. The names of all species are given in Latin, which makes it possible for scientists of all countries to understand which plant or animal is being discussed, for example Rosa canina (dog rose). Linnaeus created the most advanced system of the organic world of his time, including in it all species of animals and plants then known.

Question 3. Why is Linnaeus’ system called artificial?
K. Linnaeus created the most advanced system of the organic world of his time, including in it all species of animals and plants then known. Being a great scientist, in many cases he correctly grouped species of organisms by structural similarity. However, the arbitrariness of the characters chosen for classification (in plants, the structure of the stamens and pistils; in birds, the structure of the beak; in mammals, the structure of the teeth) led Linnaeus to a number of mistakes. He was aware of the artificiality of his system and pointed out the need to develop a natural system of nature. Linnaeus wrote: "An artificial system serves only until a natural one is found." As is now known, a natural system reflects the origin of animals and plants and is based on their kinship and their similarity in a set of essential structural features.

Question 4. State the main provisions of Lamarck’s evolutionary theory.
J. B. Lamarck set out the main provisions of his theory in the book "Philosophy of Zoology," published in 1809. He proposed two principal propositions of the doctrine of evolution. The evolutionary process is presented as a series of gradations, i.e. transitions from one stage of development to another, as a result of which the level of organization gradually rises and more perfect forms arise from less perfect ones. Hence the first proposition of Lamarck's theory is called the "rule of gradation."
Lamarck believed that species do not really exist in nature and that the elementary unit of evolution is the individual. The diversity of forms arose as a result of the influence of external conditions, in response to which organisms develop adaptive characteristics, or adaptations; in this view, the influence of the environment is direct and adequate. The scientist also believed that every organism has an inherent striving for perfection. Under the influence of environmental factors, organisms respond in a definite way: by exercising or not exercising their organs. As a result, new combinations of characters, and new characters themselves, arise and are transmitted over a number of generations (i.e. "inheritance of acquired characteristics" occurs). This second proposition of Lamarck's theory is called the "rule of adequacy."

Question 5. What questions were not answered in Lamarck’s evolutionary theory?
J. B. Lamarck could not explain the origin of adaptations associated with non-living structures. For example, the coloration of birds' eggshells is clearly adaptive, but this fact cannot be explained from the standpoint of his theory. Lamarck's theory also proceeded from the idea of blending inheritance, characteristic of the whole organism and each of its parts. The later discovery of the substance of heredity, DNA, and of the genetic code finally refuted Lamarck's ideas.

Question 6. What is the essence of Cuvier’s correlation principle? Give examples.
J. Cuvier pointed out the mutual correspondence of the structure of the various organs of an animal, which he called the principle of correlation (correlation of parts).
For example, if an animal has hooves, its entire organization reflects a herbivorous way of life: its teeth are adapted to grinding coarse plant food, its jaws have a corresponding structure, its stomach is multi-chambered, its intestine is very long, and so on. If an animal has a stomach adapted for digesting meat, its other organs are formed accordingly: sharp teeth, jaws adapted for tearing and seizing prey, claws for holding it, and a flexible spine for maneuvering and jumping.

Question 7. What are the differences between transformism and evolutionary theory?
Among the philosophers and natural scientists of the 18th and 19th centuries (G. L. Buffon, É. Geoffroy Saint-Hilaire, and others), the idea of the variability of organisms, going back to the views of some ancient scholars, was widespread. This school of thought was called transformism. The transformists assumed that organisms respond to changes in external conditions by changing their structure, but they did not prove that organisms undergo evolutionary transformation.