Linear dependence of a system of vectors. Collinear vectors. Criteria for linear dependence and independence of systems of vectors.

The linear operations on vectors introduced above make it possible to form various expressions for vector quantities and to transform them using the properties established for these operations.

Based on a given set of vectors a 1, ..., a n, one can form an expression of the form

α 1 a 1 + ... + α n a n, (2.1)

where α 1, ..., α n are arbitrary real numbers. This expression is called a linear combination of the vectors a 1, ..., a n. The numbers α i, i = 1, ..., n, are the coefficients of the linear combination. A set of vectors is also called a system of vectors.
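As a quick numerical illustration (a minimal sketch in Python with NumPy, not part of the original text), a linear combination is just a weighted sum of the vectors of the system; the names `vectors` and `coeffs` below are illustrative only.

```python
import numpy as np

# A system of vectors a_1, a_2, a_3 in R^2, stored as the rows of a matrix.
vectors = np.array([[1.0, 0.0],
                    [0.0, 1.0],
                    [1.0, 1.0]])

# Coefficients alpha_1, alpha_2, alpha_3 of the linear combination.
coeffs = np.array([2.0, -1.0, 0.5])

# alpha_1*a_1 + alpha_2*a_2 + alpha_3*a_3 as a single matrix product.
combination = coeffs @ vectors
print(combination)  # [ 2.5 -0.5]
```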

The concept of a linear combination raises the problem of describing the set of vectors that can be written as a linear combination of a given system a 1, ..., a n. There are also natural questions about the conditions under which a vector can be represented as a linear combination, and about the uniqueness of such a representation.

Definition 2.1. Vectors a 1, ..., a n are called linearly dependent if there is a set of coefficients α 1, ..., α n such that

α 1 a 1 + ... + α n a n = 0 (2.2)

and at least one of these coefficients is non-zero. If the specified set of coefficients does not exist, then the vectors are called linearly independent.

If α 1 = ... = α n = 0, then, obviously, α 1 a 1 + ... + α n a n = 0. With this in mind, we can say: vectors a 1, ..., a n are linearly independent if equality (2.2) implies that all the coefficients α 1, ..., α n are equal to zero.

The following theorem explains why the new concept is described by the term "dependence" (or "independence"), and provides a simple criterion for linear dependence.

Theorem 2.1. For the vectors a 1, ..., a n, n > 1, to be linearly dependent, it is necessary and sufficient that one of them be a linear combination of the others.

◄ Necessity. Suppose the vectors a 1, ..., a n are linearly dependent. By Definition 2.1 of linear dependence, at least one coefficient on the left of equality (2.2) is non-zero, say α 1. Leaving the first term on the left side of the equality, we move the rest to the right side, changing their signs as usual. Dividing the resulting equality by α 1, we get

a 1 = -α 2 /α 1 ⋅ a 2 - ... - α n /α 1 ⋅ a n,

i.e. a representation of the vector a 1 as a linear combination of the remaining vectors a 2, ..., a n.

Sufficiency. Let, for example, the first vector a 1 be representable as a linear combination of the remaining vectors: a 1 = β 2 a 2 + ... + β n a n. Moving all terms from the right side to the left, we obtain a 1 - β 2 a 2 - ... - β n a n = 0, i.e. a linear combination of the vectors a 1, ..., a n with coefficients α 1 = 1, α 2 = -β 2, ..., α n = -β n, equal to the zero vector. In this linear combination not all coefficients are zero. By Definition 2.1, the vectors a 1, ..., a n are linearly dependent.

The definition and the criterion of linear dependence are formulated assuming two or more vectors. However, one can also speak of the linear dependence of a single vector. To cover this case, instead of "the vectors are linearly dependent" one should say "the system of vectors is linearly dependent." It is easy to see that the phrase "a system of one vector is linearly dependent" means that this single vector is zero (in such a linear combination there is only one coefficient, and it must be non-zero; then αa = 0 implies a = 0).

The concept of linear dependence has a simple geometric interpretation. The following three statements clarify this interpretation.

Theorem 2.2. Two vectors are linearly dependent if and only if they are collinear.

◄ If vectors a and b are linearly dependent, then one of them, say a, is expressed through the other, i.e. a = λb for some real number λ. By Definition 1.7 of the product of a vector by a number, the vectors a and b are collinear.

Now let the vectors a and b be collinear. If both are zero, they are obviously linearly dependent, since any linear combination of them equals the zero vector. So let one of these vectors be non-zero, say the vector b. Denote by λ the ratio of the vector lengths: λ = |a|/|b|. Collinear vectors are either unidirectional or oppositely directed; in the latter case we change the sign of λ. Then, checking Definition 1.7, we see that a = λb. By Theorem 2.1, the vectors a and b are linearly dependent.

Remark 2.1. In the case of two vectors, taking into account the criterion of linear dependence, the proven theorem can be reformulated as follows: two vectors are collinear if and only if one of them is represented as the product of the other by a number. This is a convenient criterion for the collinearity of two vectors.
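Remark 2.1 translates directly into a numerical test. Below is a hedged sketch (Python/NumPy; the function name `collinear` and the tolerance are our own choices, not the book's) that checks whether one vector is a scalar multiple of the other.

```python
import numpy as np

def collinear(a: np.ndarray, b: np.ndarray, tol: float = 1e-12) -> bool:
    """Test the criterion of Remark 2.1: a = lambda*b or b = mu*a."""
    if np.allclose(b, 0.0, atol=tol):
        return True  # b = 0*a, so the pair is collinear
    lam = (a @ b) / (b @ b)      # the only candidate for lambda
    return np.allclose(a, lam * b, atol=tol)

print(collinear(np.array([2.0, 4.0]), np.array([1.0, 2.0])))  # True:  a = 2b
print(collinear(np.array([1.0, 3.0]), np.array([2.0, 1.0])))  # False
```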

Theorem 2.3. Three vectors are linearly dependent if and only if they are coplanar.

◄ If three vectors a, b, c are linearly dependent, then, by Theorem 2.1, one of them, say a, is a linear combination of the others: a = βb + γc. Place the origins of the vectors b and c at a point A. Then the vectors βb and γc also have their common origin at A, and by the parallelogram rule their sum, i.e. the vector a, is the vector with origin A whose end is the opposite vertex of the parallelogram built on the component vectors. Thus all the vectors lie in the same plane, i.e. they are coplanar.

Now let the vectors a, b, c be coplanar. If one of these vectors is zero, then it is obviously a linear combination of the others: it is enough to take all coefficients of the linear combination equal to zero. Therefore, we may assume that all three vectors are non-zero. Place the origins of these vectors at a common point O, and let their ends be the points A, B, C, respectively (Fig. 2.1). Through the point C we draw lines parallel to the lines passing through the pairs of points O, A and O, B. Denoting the points of intersection A′ and B′, we obtain a parallelogram OA′CB′, hence OC = OA′ + OB′. The vector OA′ and the non-zero vector a = OA are collinear, and therefore the first can be obtained by multiplying the second by a real number α: OA′ = αOA. Similarly, OB′ = βOB, β ∈ ℝ. As a result we obtain OC = αOA + βOB, i.e. the vector c is a linear combination of the vectors a and b. By Theorem 2.1, the vectors a, b, c are linearly dependent.

Theorem 2.4. Any four vectors are linearly dependent.

◄ We carry out the proof following the same scheme as in Theorem 2.3. Consider four arbitrary vectors a, b, c and d. If one of the four vectors is zero, or among them there are two collinear vectors, or three of the four vectors are coplanar, then these four vectors are linearly dependent. For example, if the vectors a and b are collinear, then we can form a linear combination αa + βb = 0 with non-zero coefficients and then add the remaining two vectors to this combination, taking zeros as their coefficients. We obtain a linear combination of the four vectors equal to 0 in which not all coefficients are zero.

Thus, we may assume that among the chosen four vectors none is zero, no two are collinear, and no three are coplanar. Choose the point O as their common origin; then the ends of the vectors a, b, c, d are some points A, B, C, D (Fig. 2.2). Through the point D we draw three planes parallel to the planes OBC, OCA, OAB, and let A′, B′, C′ be the points where these planes intersect the lines OA, OB, OC, respectively. We obtain a parallelepiped with vertex O whose edges emanating from O lie along the vectors a, b, c, and whose diagonal from O is OD. Applying the parallelogram rule twice, we get OD = OA′ + OB′ + OC′.

It remains to note that the pairs of vectors OA ≠ 0 and OA′, OB ≠ 0 and OB′, OC ≠ 0 and OC′ are collinear, and therefore we can choose coefficients α, β, γ so that OA′ = αOA, OB′ = βOB and OC′ = γOC. Finally we get OD = αOA + βOB + γOC. Consequently, the vector OD is expressed through the other three vectors, and all four vectors, by Theorem 2.1, are linearly dependent.
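Theorem 2.4 is easy to confirm numerically: a 3 × 4 matrix has rank at most 3, so its four columns cannot be independent. A small sketch of ours (Python/NumPy, not part of the original text):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 4))   # four random vectors in R^3 as columns

print(np.linalg.matrix_rank(M))   # at most 3 < 4 -> columns are dependent

# For generic vectors the first three are independent, so d is expressible
# through a, b, c, as in the proof of Theorem 2.4:
coeffs, *_ = np.linalg.lstsq(M[:, :3], M[:, 3], rcond=None)
print(np.allclose(M[:, :3] @ coeffs, M[:, 3]))  # True
```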

Let the functions y 1 (x), ..., y n (x) have derivatives up to order n - 1 on the interval (a, b). Consider the determinant of order n whose first row is y 1 , ..., y n , whose second row consists of the first derivatives y 1 ', ..., y n ', and so on, down to the last row of the (n-1)-th derivatives y 1 ^(n-1) , ..., y n ^(n-1) :

W(x) = det[ y j ^(i-1) (x) ], i, j = 1, ..., n. (1)

W(x) is called the Wronskian (Wronski determinant) of the functions y 1 , ..., y n .
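Determinant (1) can be computed symbolically; SymPy ships a `wronskian` helper that builds exactly this matrix of derivatives. A short sketch of ours, assuming SymPy is available:

```python
import sympy as sp

x = sp.symbols('x')

# Independent functions: the Wronskian is nonzero.
print(sp.simplify(sp.wronskian([sp.exp(x), sp.exp(2*x), sp.exp(3*x)], x)))
# -> 2*exp(6*x)

# A dependent triple (2x = 2 * x) gives an identically zero Wronskian,
# in agreement with Theorem 1 below.
print(sp.simplify(sp.wronskian([x, 2*x, x**2], x)))  # -> 0
```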

Theorem 1. If functions are linearly dependent in the interval (a, b), then their Wronskian W(x) is identically equal to zero in this interval.

Proof. By the hypothesis of the theorem, the relation

α 1 y 1 (x) + α 2 y 2 (x) + ... + α n y n (x) ≡ 0, x ∈ (a, b), (2)

holds, where not all α i are equal to zero. Let α n ≠ 0. Then

y n (x) = -(α 1 /α n ) y 1 (x) - ... - (α n-1 /α n ) y n-1 (x). (3)

We differentiate this identity n - 1 times and substitute the values obtained for y n , y n ', ..., y n ^(n-1) into the last column of the Wronskian. (4)

Then in the Wronskian the last column is a linear combination of the previous n - 1 columns, and the determinant is therefore equal to zero at all points of the interval (a, b).

Theorem 2. If the functions y1,…, yn are linearly independent solutions of the equation L[y] = 0, all of whose coefficients are continuous in the interval (a, b), then the Wronskian of these solutions is nonzero at each point of the interval (a, b).

Proof. Suppose the contrary: there is a point x 0 ∈ (a, b) where W(x 0 ) = 0. Consider the system of n equations in the unknowns c 1 , ..., c n :

c 1 y 1 (x 0 ) + ... + c n y n (x 0 ) = 0
c 1 y 1 '(x 0 ) + ... + c n y n '(x 0 ) = 0
...
c 1 y 1 ^(n-1) (x 0 ) + ... + c n y n ^(n-1) (x 0 ) = 0 (5)

The determinant of this homogeneous system is W(x 0 ) = 0, so system (5) has a non-zero solution c 1 , ..., c n . (6)

Let us form the corresponding linear combination of the solutions y 1 ,…, y n :

Y(x) = c 1 y 1 (x) + ... + c n y n (x).

Y(x) is a solution of the equation L[y] = 0. In addition, by (5), Y(x 0 ) = Y'(x 0 ) = ... = Y^(n-1) (x 0 ) = 0. By virtue of the uniqueness theorem, the solution of the equation L[y] = 0 with zero initial conditions can only be identically zero, i.e. Y(x) ≡ 0.

We get the identity c 1 y 1 (x) + ... + c n y n (x) ≡ 0, where not all c i are equal to zero, which means that y 1 ,..., y n are linearly dependent, contradicting the hypothesis of the theorem. Consequently, there is no point where W(x 0 ) = 0.

Based on Theorem 1 and Theorem 2, the following statement can be formulated. In order for n solutions of the equation L[y] = 0 to be linearly independent in the interval (a, b), it is necessary and sufficient that their Wronskian does not vanish at any point in this interval.

The following obvious properties of the Wronskian also follow from the proven theorems.

  1. If the Wronskian of n solutions to the equation L[y] = 0 is equal to zero at one point x = x0 from the interval (a, b), in which all coefficients pi(x) are continuous, then it is equal to zero at all points of this interval.
  2. If the Wronskian of n solutions to the equation L[y] = 0 is nonzero at one point x = x0 from the interval (a, b), then it is nonzero at all points of this interval.

Thus, for n solutions of the equation L[y] = 0 to be linearly independent in the interval (a, b), in which the coefficients p i (x) of the equation are continuous, it is necessary and sufficient that their Wronskian be nonzero at least at one point of this interval.
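For a concrete check of this criterion (a sketch of ours in Python/SymPy, not part of the original text): y1 = cos x and y2 = sin x are linearly independent solutions of y'' + y = 0, whose coefficients are continuous on the whole line, and their Wronskian is indeed nonzero everywhere.

```python
import sympy as sp

x = sp.symbols('x')
y1, y2 = sp.cos(x), sp.sin(x)   # solutions of y'' + y = 0

W = sp.wronskian([y1, y2], x)   # = cos(x)**2 + sin(x)**2
print(sp.simplify(W))           # 1 -- nonzero at every point, as predicted
```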

Def. A system of elements x 1 ,…, x m of a linear space V is called linearly dependent if ∃ λ 1 ,…, λ m ∈ ℝ (|λ 1 |+…+|λ m | ≠ 0) such that λ 1 x 1 +…+ λ m x m = θ.

Def. A system of elements x 1 ,…, x m ∈ V is called linearly independent if the equality λ 1 x 1 +…+ λ m x m = θ implies λ 1 =…= λ m = 0.

Def. An element x ∈ V is called a linear combination of elements x 1 ,…,x m ∈ V if ∃ λ 1 ,…, λ m ∈ ℝ such that x= λ 1 x 1 +…+ λ m x m .

Theorem (linear dependence criterion): A system of vectors x 1 ,…,x m ∈ V is linearly dependent if and only if at least one vector of the system is linearly expressed in terms of the others.

Proof. Necessity: Let x 1 ,…, x m be linearly dependent ⟹ ∃ λ 1 ,…, λ m ∈ ℝ (|λ 1 |+…+|λ m | ≠ 0) such that λ 1 x 1 +…+ λ m-1 x m-1 + λ m x m = θ. Say λ m ≠ 0; then

x m = (-λ 1 /λ m ) x 1 +…+ (-λ m-1 /λ m ) x m-1 .

Sufficiency: Let at least one of the vectors be linearly expressed through the remaining vectors: x m = λ 1 x 1 +…+ λ m-1 x m-1 (λ 1 ,…, λ m-1 ∈ ℝ). Then λ 1 x 1 +…+ λ m-1 x m-1 + (-1) x m = θ, where the coefficient of x m is -1 ≠ 0 ⟹ x 1 ,…, x m are linearly dependent.

Sufficient condition for linear dependence:

If a system contains a zero element or a linearly dependent subsystem, then it is linearly dependent.

Proof. Consider the equality λ 1 x 1 +…+ λ m x m = θ.

1) Let x 1 = θ. Then this equality holds for λ 1 = 1 and λ 2 =…= λ m = 0, i.e. with not all coefficients zero, so the system is linearly dependent.

2) Let x 1 ,…, x k (k < m) be a linearly dependent subsystem ⟹ ∃ λ 1 ,…, λ k with |λ 1 |+…+|λ k | ≠ 0 and λ 1 x 1 +…+ λ k x k = θ. Then, taking λ k+1 =…= λ m = 0, we still have |λ 1 |+…+|λ m | ≠ 0 and λ 1 x 1 +…+ λ m x m = θ ⟹ the whole system is linearly dependent.

Basis of a linear space. Coordinates of a vector in a given basis. Coordinates of the sum of vectors and of the product of a vector by a number. A necessary and sufficient condition for the linear dependence of a system of vectors.

Definition: An ordered system of elements e 1, ..., e n of a linear space V is called a basis of this space if:

A) e 1 ... e n are linearly independent

B) ∀ x ∈ V ∃ α 1 ,…, α n such that x = α 1 e 1 +…+ α n e n

x= α 1 e 1 +…+ α n e n – expansion of the element x in the basis e 1, …, e n

α 1 … α n ∈ ℝ – coordinates of element x in the basis e 1, …, e n

Theorem: If a basis e 1, …, e n is given in a linear space V, then ∀ x ∈ V the column of coordinates of x in the basis e 1, …, e n is uniquely determined (the coordinates are uniquely determined).

Proof: Let x = α 1 e 1 +…+ α n e n and x = β 1 e 1 +…+ β n e n .

Subtracting the two expansions, we get (α 1 - β 1 ) e 1 +…+ (α n - β n ) e n = θ. Since e 1, …, e n are linearly independent, α i - β i = 0 ∀ i = 1, …, n ⇔ α i = β i ∀ i = 1, …, n, q.e.d.
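Computationally, the coordinates of x in a basis e 1, …, e n are found by solving one linear system; uniqueness corresponds to the invertibility of the matrix of basis columns. A hedged sketch of ours (Python/NumPy; the basis below is an illustrative choice):

```python
import numpy as np

# Basis vectors e_1, e_2, e_3 of R^3 as the columns of E (invertible).
E = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
x = np.array([3.0, 2.0, 1.0])

alpha = np.linalg.solve(E, x)       # the unique coordinate column of x
print(alpha)                        # [2. 1. 1.]
print(np.allclose(E @ alpha, x))    # True: x = 2*e_1 + 1*e_2 + 1*e_3
```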

Theorem: Let e 1, …, e n be a basis of the linear space V; x, y arbitrary elements of the space V, λ ∈ ℝ an arbitrary number. When x and y are added, their coordinates are added; when x is multiplied by λ, the coordinates of x are also multiplied by λ.

Proof: Let x = ξ 1 e 1 +…+ ξ n e n and y = η 1 e 1 +…+ η n e n . Then

x + y = (ξ 1 + η 1 ) e 1 +…+ (ξ n + η n ) e n ,

λx = (λξ 1 ) e 1 +…+ (λξ n ) e n ,

i.e. the coordinates of x + y are the sums of the corresponding coordinates of x and y, and the coordinates of λx are the coordinates of x multiplied by λ.

Lemma1: (necessary and sufficient condition for the linear dependence of a system of vectors)

Let e 1, …, e n be a basis of the space V. A system of elements f 1 , …, f k ∈ V is linearly dependent if and only if the coordinate columns of these elements in the basis e 1, …, e n are linearly dependent.

Proof: Expand f 1 , …, f k in the basis e 1, …, e n :

f m = φ 1m e 1 +…+ φ nm e n , m = 1, …, k,

and let F m denote the coordinate column of f m . By the previous theorem, the coordinate column of λ 1 f 1 +…+ λ k f k is λ 1 F 1 +…+ λ k F k ; hence, by the uniqueness of coordinates,

λ 1 f 1 +…+ λ k f k = θ ⇔ λ 1 F 1 +…+ λ k F k = 0, which is what needed to be proven.
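Lemma 1 reduces dependence of abstract elements to dependence of columns, which a computer can test by rank or nullspace. A sketch of ours (Python/SymPy; the columns are illustrative data):

```python
import sympy as sp

# Coordinate columns F_1, F_2, F_3 of f_1, f_2, f_3 in some basis;
# here the third column equals the sum of the first two.
F = sp.Matrix([[1, 2, 3],
               [0, 1, 1],
               [1, 3, 4]])

print(F.rank())       # 2 < 3 -> the columns, hence f_1, f_2, f_3, are dependent
print(F.nullspace())  # [Matrix([[-1], [-1], [1]])]: F_3 = F_1 + F_2
```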

13. Dimension of linear space. Theorem on the connection between dimension and basis.
Definition: A linear space V is called an n-dimensional space if there are n linearly independent elements in V, and a system of any n+1 elements of the space V is linearly dependent. In this case, n is called the dimension of the linear space V and is denoted by dimV=n.

A linear space is called infinite-dimensional if ∀N ∈ ℕ in the space V there is a linearly independent system containing N elements.

Theorem: 1) If V is an n-dimensional linear space, then any ordered system of n linearly independent elements of this space forms a basis. 2) If in a linear space V there is a basis consisting of n elements, then the dimension of V is equal to n (dimV=n).

Proof: 1) Let dimV = n ⇒ in V there exist n linearly independent elements e 1, …, e n . We will prove that these elements form a basis, that is, that ∀ x ∈ V can be expanded in e 1, …, e n . Add x to them: the system e 1, …, e n , x contains n+1 vectors, and hence is linearly dependent. Since e 1, …, e n are linearly independent, by Theorem 2 x is linearly expressed through e 1, …, e n , i.e. ∃ α 1 ,…, α n such that x = α 1 e 1 +…+ α n e n . So e 1, …, e n is a basis of the space V. 2) Let e 1, …, e n be a basis of V; then there are n linearly independent elements in V. Take arbitrary f 1 ,…, f n , f n+1 ∈ V – n+1 elements. Let us show their linear dependence. Expand them in the basis:

f m = φ 1m e 1 +…+ φ nm e n , where m = 1,…, n+1. Form the matrix A whose columns are the coordinate columns of f 1 ,…, f n , f n+1 . The matrix has n rows ⇒ RgA ≤ n. The number of columns is n+1 > n ≥ RgA ⇒ the columns of the matrix A (i.e., the coordinate columns of f 1 ,…, f n , f n+1 ) are linearly dependent. By Lemma 1, f 1 ,…, f n , f n+1 are linearly dependent ⇒ dimV = n.

Corollary: If some basis contains n elements, then any other basis of this space also contains n elements.

Theorem 2: If the system of vectors x 1 ,…, x m-1 , x m is linearly dependent, and its subsystem x 1 ,…, x m-1 is linearly independent, then x m is linearly expressed through x 1 ,…, x m-1 .

Proof: Because x 1 ,…, x m-1 , x m is linearly dependent, ∃ λ 1 , …, λ m-1 , λ m , not all zero, such that λ 1 x 1 +…+ λ m-1 x m-1 + λ m x m = θ. If λ m = 0, then λ 1 x 1 +…+ λ m-1 x m-1 = θ with not all of λ 1 , …, λ m-1 zero ⟹ x 1 ,…, x m-1 are linearly dependent, which cannot be. This means λ m ≠ 0 and x m = (-λ 1 /λ m ) x 1 +…+ (-λ m-1 /λ m ) x m-1 .

The following are several criteria for linear dependence and, accordingly, linear independence of systems of vectors.

Theorem. (Necessary and sufficient condition for linear dependence of vectors.)

A system of vectors is linearly dependent if and only if one of the vectors of the system is linearly expressed through the other vectors of this system.

Proof. Necessity. Let the system v 1 , …, v n be linearly dependent. Then, by definition, it represents the zero vector non-trivially, i.e. there is a non-trivial linear combination of this system of vectors equal to the zero vector:

α 1 v 1 + α 2 v 2 + … + α n v n = 0,

where at least one of the coefficients of this linear combination is not equal to zero. Let α k ≠ 0, k ∈ {1, 2, …, n}.

Divide both sides of the previous equality by this non-zero coefficient (i.e. multiply by α k ⁻¹):

(α k ⁻¹α 1 ) v 1 + … + v k + … + (α k ⁻¹α n ) v n = 0.

Denote β m = -α k ⁻¹α m for m ≠ k; then

v k = β 1 v 1 + … + β k-1 v k-1 + β k+1 v k+1 + … + β n v n ,

i.e. one of the vectors of the system is linearly expressed through the other vectors of this system, q.e.d.

Sufficiency. Let one of the vectors of the system be linearly expressed through the other vectors of this system:

v k = β 1 v 1 + … + β k-1 v k-1 + β k+1 v k+1 + … + β n v n .

Move the vector v k to the right-hand side of this equality:

β 1 v 1 + … + β k-1 v k-1 - v k + β k+1 v k+1 + … + β n v n = 0.

Since the coefficient of the vector v k is equal to -1 ≠ 0, we have a non-trivial representation of zero by the system of vectors, which means that this system of vectors is linearly dependent, q.e.d.

The theorem has been proven.

Corollary.

1. A system of vectors in a vector space is linearly independent if and only if none of the vectors of the system is linearly expressed in terms of other vectors of this system.

2. A system of vectors containing a zero vector or two equal vectors is linearly dependent.

Proof.

1) Necessity. Let the system be linearly independent. Suppose the contrary: there is a vector of the system that is linearly expressed through the other vectors of this system. Then, by the theorem, the system is linearly dependent, and we arrive at a contradiction.

Sufficiency. Let none of the vectors of the system be expressed through the others. Suppose the contrary: let the system be linearly dependent. Then it follows from the theorem that there is a vector of the system that is linearly expressed through the other vectors of this system, and we again arrive at a contradiction.

2a) Let the system contain a zero vector. Assume for definiteness that v 1 = 0. Then the equality

v 1 = 0 · v 2 + … + 0 · v n

is obvious, i.e. one of the vectors of the system is linearly expressed through the other vectors of this system. It follows from the theorem that such a system of vectors is linearly dependent, q.e.d.

Note that this fact can also be proven directly from the definition of a linearly dependent system of vectors. Since v 1 = 0, the equality

1 · v 1 + 0 · v 2 + … + 0 · v n = 0

is obvious. This is a non-trivial representation of the zero vector, which means the system is linearly dependent.

2b) Let the system contain two equal vectors: v i = v j for i ≠ j. Then the equality

v i = 1 · v j (all other coefficients being zero)

is obvious, i.e. one of the vectors is linearly expressed through the remaining vectors of the same system. It follows from the theorem that this system is linearly dependent, q.e.d.

As before, this statement can also be proven directly from the definition of a linearly dependent system: v i - v j = 0 is a non-trivial representation of the zero vector.

In this article we will cover:

  • what are collinear vectors;
  • what are the conditions for collinearity of vectors;
  • what properties of collinear vectors exist;
  • what is the linear dependence of collinear vectors.
Definition 1

Collinear vectors are vectors that are parallel to one line or lie on one line.


Conditions for collinearity of vectors

Two vectors are collinear if any of the following conditions are true:

  • condition 1 . Vectors a and b are collinear if there is a number λ such that a = λ b;
  • condition 2 . Vectors a and b are collinear if their coordinates are proportional:

a = (a 1 ; a 2 ), b = (b 1 ; b 2 ) ⇒ a ∥ b ⇔ a 1 /b 1 = a 2 /b 2

  • condition 3 . Vectors a and b are collinear provided that their cross product equals the zero vector:

a ∥ b ⇔ [a, b] = 0

Note 1

Condition 2 is not applicable if one of the vector coordinates is zero.

Note 2

Condition 3 applies only to vectors given in three-dimensional space.
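These conditions are straightforward to code. A hedged sketch of ours (Python/NumPy; the function names are our own): for plane vectors we use the cross-multiplied form a1·b2 - a2·b1 = 0 of condition 2, which also sidesteps Note 1, and for space vectors condition 3.

```python
import numpy as np

def collinear_2d(a, b) -> bool:
    """Condition 2 in cross-multiplied form: a1*b2 - a2*b1 = 0.

    Unlike the ratio form a1/b1 = a2/b2, this works for zero coordinates.
    """
    return np.isclose(a[0] * b[1] - a[1] * b[0], 0.0)

def collinear_3d(a, b) -> bool:
    """Condition 3: the cross product [a, b] must be the zero vector."""
    return np.allclose(np.cross(a, b), 0.0)

print(collinear_2d((1.0, 3.0), (2.0, 1.0)))            # False (cf. Example 1)
print(collinear_3d((1.0, 2.0, 3.0), (2.0, 4.0, 6.0)))  # True: b = 2a
```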

Examples of problems to study the collinearity of vectors

Example 1

We examine the vectors a = (1; 3) and b = (2; 1) for collinearity.

How to solve?

In this case we use the 2nd collinearity condition. For the given vectors it takes the form:

1/2 = 3/1

The equality is false (1/2 ≠ 3/1). From this we can conclude that the vectors a and b are not collinear.

Answer: a ∦ b

Example 2

For what value of m are the vectors a = (1; 2) and b = (-1; m) collinear?

How to solve?

By the second collinearity condition, the vectors are collinear if their coordinates are proportional:

1/(-1) = 2/m

This shows that m = -2.

Answer: m = -2.
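The same answer can be obtained mechanically (a sketch of ours in Python/SymPy) by solving the cross-multiplied collinearity condition a1·b2 - a2·b1 = 0 for m:

```python
import sympy as sp

m = sp.symbols('m')

# a = (1; 2), b = (-1; m):  a1*b2 - a2*b1 = 1*m - 2*(-1) = m + 2
print(sp.solve(sp.Eq(1 * m - 2 * (-1), 0), m))  # [-2]
```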

Criteria for linear dependence and linear independence of vector systems

Theorem

A system of vectors in a vector space is linearly dependent if and only if one of the vectors of the system can be expressed in terms of the remaining vectors of this system.

Proof

Let the system e 1 , e 2 , . . . , e n be linearly dependent. Let us write a linear combination of this system equal to the zero vector:

a 1 e 1 + a 2 e 2 + . . . + a n e n = 0

in which at least one of the combination coefficients is not equal to zero.

Let a k ≠ 0, k ∈ {1, 2, . . . , n}.

We divide both sides of the equality by the non-zero coefficient a k (i.e. multiply by a k ⁻¹):

(a k ⁻¹ a 1 ) e 1 + . . . + (a k ⁻¹ a k ) e k + . . . + (a k ⁻¹ a n ) e n = 0

Note that a k ⁻¹ a k = 1. Let's denote:

β m = a k ⁻¹ a m , where m ∈ {1, 2, . . . , k - 1, k + 1, . . . , n}

In this case:

β 1 e 1 + . . . + β k - 1 e k - 1 + e k + β k + 1 e k + 1 + . . . + β n e n = 0

or e k = (- β 1) e 1 + . . . + (- β k - 1) e k - 1 + (- β k + 1) e k + 1 + . . . + (- β n) e n

It follows that one of the vectors of the system is expressed through all the other vectors of the system. Which is what needed to be proven (q.e.d.).

Sufficiency

Let one of the vectors be linearly expressed through all other vectors of the system:

e k = γ 1 e 1 + . . . + γ k - 1 e k - 1 + γ k + 1 e k + 1 + . . . + γ n e n

We move the vector e k to the right side of this equality:

0 = γ 1 e 1 + . . . + γ k - 1 e k - 1 - e k + γ k + 1 e k + 1 + . . . + γ n e n

Since the coefficient of the vector e k is equal to -1 ≠ 0, we get a non-trivial representation of zero by the system of vectors e 1 , e 2 , . . . , e n , and this, in turn, means that the system of vectors is linearly dependent. Which is what needed to be proven (q.e.d.).

Corollary:

  • A system of vectors is linearly independent if and only if none of its vectors can be expressed in terms of the other vectors of the system.
  • A system of vectors that contains a zero vector or two equal vectors is linearly dependent.

Properties of linearly dependent vectors

  1. For 2- and 3-dimensional vectors, the following condition is met: two linearly dependent vectors are collinear. Two collinear vectors are linearly dependent.
  2. For 3-dimensional vectors, the following condition is satisfied: three linearly dependent vectors are coplanar. (3 coplanar vectors are linearly dependent).
  3. For n-dimensional vectors, the following condition is satisfied: n + 1 vectors are always linearly dependent.

Examples of solving problems involving linear dependence or linear independence of vectors

Example 3

Let's check the vectors a = (3, 4, 5), b = (-3, 0, 5), c = (4, 4, 4), d = (3, 4, 0) for linear independence.

Solution. The vectors are linearly dependent, because the dimension of the vectors (3) is less than the number of vectors (4).
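The same conclusion can be confirmed by a rank computation (our sketch in Python/NumPy): the rank of the matrix of these vectors cannot exceed 3, the dimension of the space.

```python
import numpy as np

A = np.array([[ 3, 4, 5],    # a
              [-3, 0, 5],    # b
              [ 4, 4, 4],    # c
              [ 3, 4, 0]])   # d

# rank <= 3 while there are 4 vectors -> the system is linearly dependent.
print(np.linalg.matrix_rank(A))  # 3
```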

Example 4

Let's check the vectors a = (1, 1, 1), b = (1, 2, 0), c = (0, -1, 1) for linear independence.

Solution. We find the values of the coefficients at which the linear combination equals the zero vector:

x 1 a + x 2 b + x 3 c = 0

We write the vector equation component-wise as a linear system:

x 1 + x 2 = 0
x 1 + 2x 2 - x 3 = 0
x 1 + x 3 = 0

We solve this system using the Gaussian method. The augmented matrix is:

1  1   0 | 0
1  2  -1 | 0
1  0   1 | 0

From the 2nd row we subtract the 1st, and from the 3rd row the 1st:

1  1   0 | 0
0  1  -1 | 0
0 -1   1 | 0

From the 1st row we subtract the 2nd, and to the 3rd row we add the 2nd:

1  0   1 | 0
0  1  -1 | 0
0  0   0 | 0

From the solution it follows that the system has infinitely many solutions. This means that there is a non-zero set of numbers x 1 , x 2 , x 3 for which the linear combination of a, b, c equals the zero vector. Therefore, the vectors a, b, c are linearly dependent.
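The Gaussian elimination above can be double-checked with a nullspace computation (our sketch in Python/SymPy): any non-zero nullspace vector supplies coefficients x 1 , x 2 , x 3 of a vanishing non-trivial combination.

```python
import sympy as sp

# Columns of M are the vectors a, b, c from Example 4.
M = sp.Matrix([[1, 1,  0],
               [1, 2, -1],
               [1, 0,  1]])

print(M.nullspace())  # [Matrix([[-1], [1], [1]])]: -a + b + c = 0
```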
