Linear dependence of geometric vectors. Necessary condition for linear dependence of n functions. Linear dependence and linear independence of vectors. Basis of vectors. Affine coordinate system

A necessary and sufficient condition for the linear dependence of two vectors is their collinearity.

2. The scalar product (dot product) is an operation on two vectors whose result is a scalar (a number) that does not depend on the coordinate system and characterizes the lengths of the factor vectors and the angle between them. The operation amounts to multiplying the length of a given vector x by the projection of another vector y onto x. The operation is commutative and linear in each factor.

Dot product properties:
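For arbitrary vectors a, b, c and any real number λ the standard properties are (listed here in their usual form):

1) (a, b) = (b, a) (commutativity);

2) (λa, b) = λ(a, b) and (a + b, c) = (a, c) + (b, c) (linearity in each factor);

3) (a, a) = |a|² ≥ 0, with (a, a) = 0 only for a = 0;

4) (a, b) = |a| |b| cos φ, where φ is the angle between a and b.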

3. Three vectors (or more) are called coplanar if they, being reduced to a common origin, lie in the same plane.

A necessary and sufficient condition for the linear dependence of three vectors is their coplanarity. Any four vectors are linearly dependent. Any ordered triple of non-coplanar vectors is called a basis in space. A basis in space makes it possible to associate with each vector, unambiguously, an ordered triple of numbers: the coefficients of the representation of this vector as a linear combination of the basis vectors. Conversely, with the help of a basis we associate a vector with each ordered triple of numbers by forming the corresponding linear combination.

An orthogonal basis is called orthonormal if its vectors have length one. For an orthonormal basis in space the notation i, j, k is often used. Theorem: in an orthonormal basis, the coordinates of a vector are the corresponding orthogonal projections of this vector onto the directions of the coordinate vectors.

A triple of non-coplanar vectors a, b, c is called right if, to an observer at their common origin, passing around the ends of the vectors a, b, c in that order appears to proceed clockwise; otherwise a, b, c form a left triple. All right (or all left) triples of vectors are called equally oriented.

A rectangular coordinate system in the plane is formed by two mutually perpendicular coordinate axes OX and OY. The coordinate axes intersect at the point O, called the origin, and each axis has a positive direction. In a right-handed coordinate system the positive directions of the axes are chosen so that, with the axis OY pointing up, the axis OX points to the right.

Four angles (I, II, III, IV) formed by the coordinate axes X′X and Y′Y are called coordinate angles or quadrants (see Fig. 1).

If vectors a and b have, with respect to an orthonormal basis in the plane, coordinates (a₁, a₂) and (b₁, b₂) respectively, then the scalar product of these vectors is calculated by the formula (a, b) = a₁b₁ + a₂b₂.
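For example (vectors chosen here for illustration), if a = (1, 2) and b = (3, −1), then (a, b) = 1·3 + 2·(−1) = 1, |a| = √5, |b| = √10, and cos φ = 1/√50 = 1/(5√2).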

4. The vector product (cross product) of two vectors a and b is an operation on them, defined only in three-dimensional space, whose result is a vector c = a × b with the following properties:

1) |a × b| = |a| |b| sin φ, where φ is the angle between a and b;

2) a × b is perpendicular to both a and b;

3) the triple a, b, a × b is right.

The geometric meaning of the cross product: |a × b| is the area of the parallelogram built on the vectors a and b. A necessary and sufficient condition for the collinearity of a nonzero vector a and a vector b is the existence of a number λ satisfying the equality b = λa.

If two vectors a and b are given by their rectangular Cartesian coordinates, or more precisely, are represented in an orthonormal basis,

a = (a₁, a₂, a₃), b = (b₁, b₂, b₃),

and the coordinate system is right, then their vector product has the form

a × b = (a₂b₃ − a₃b₂, a₃b₁ − a₁b₃, a₁b₂ − a₂b₁).

To remember this formula, it is convenient to use the determinant:

a × b = | i   j   k  |
        | a₁  a₂  a₃ |
        | b₁  b₂  b₃ |
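For example (vectors chosen for illustration), for a = (1, 2, 0) and b = (0, 1, 3):

a × b = (2·3 − 0·1, 0·0 − 1·3, 1·1 − 2·0) = (6, −3, 1),

and the area of the parallelogram built on a and b is |a × b| = √(36 + 9 + 1) = √46.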

5. The mixed product of vectors a, b, c is the scalar product of the vector a with the cross product of the vectors b and c:

(a, b, c) = (a, [b, c]) = a · (b × c).

It is sometimes called the triple scalar product of vectors, apparently because the result is a scalar (more precisely, a pseudoscalar).

Geometric meaning: the modulus of the mixed product is numerically equal to the volume of the parallelepiped formed by the vectors a, b, c.

Interchanging any two factors reverses the sign of the mixed product:

(a, b, c) = −(b, a, c) = −(a, c, b) = −(c, b, a).

Under a cyclic (circular) permutation of the factors the mixed product does not change:

(a, b, c) = (b, c, a) = (c, a, b).

The mixed product is linear in any factor.

The mixed product is zero if and only if the vectors are coplanar.

1. Coplanarity condition for vectors: three vectors are coplanar if and only if their mixed product is zero.

§ A triple of vectors containing a pair of collinear vectors is coplanar.

§ The mixed product of coplanar vectors is equal to zero. This is a criterion for the coplanarity of three vectors.

§ Coplanar vectors are linearly dependent. This is also a criterion for coplanarity.

§ There exist real numbers α and β such that c = αa + βb for coplanar vectors a, b, c, except when a and b are collinear. This is a reformulation of the previous property and is also a criterion for coplanarity.

§ In 3-dimensional space, 3 non-coplanar vectors a, b, c form a basis. That is, any vector d can be represented in the form d = αa + βb + γc. Then (α, β, γ) will be the coordinates of d in the given basis.

The mixed product in a right Cartesian coordinate system (in an orthonormal basis) is equal to the determinant of the matrix composed of the coordinates of the vectors a, b and c:

(a, b, c) = | a₁  a₂  a₃ |
            | b₁  b₂  b₃ |
            | c₁  c₂  c₃ |
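For example (vectors chosen for illustration), for a = (1, 0, 0), b = (1, 2, 0), c = (1, 1, 3) the determinant is triangular, so (a, b, c) = 1·2·3 = 6, and the volume of the parallelepiped formed by a, b, c equals 6.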



§6. General equation (complete) of the plane

Ax + By + Cz + D = 0,

where A, B, C, D are constants, and A, B, C are not simultaneously equal to zero; in vector form:

(r, n) + D = 0,

where r is the radius vector of the point M(x, y, z), and the vector n = (A, B, C) is perpendicular to the plane (the normal vector). The direction cosines of the vector n:

cos α = A/√(A² + B² + C²), cos β = B/√(A² + B² + C²), cos γ = C/√(A² + B² + C²).

If one of the coefficients in the plane equation is zero, the equation is called incomplete. When D = 0 the plane passes through the origin of coordinates; when A = 0 (or B = 0, or C = 0) the plane is parallel to the axis OX (respectively OY or OZ). When A = B = 0 (A = C = 0, or B = C = 0) the plane is parallel to the coordinate plane XOY (respectively XOZ or YOZ).

§ Equation of a plane in segments (intercept form):

x/a + y/b + z/c = 1,

where a, b, c are the segments cut off by the plane on the axes OX, OY and OZ.
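For example, the plane 2x + 3y + 6z − 6 = 0 can be rewritten, after dividing by 6, as x/3 + y/2 + z/1 = 1, so it cuts off the segments a = 3, b = 2, c = 1 on the axes.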

§ Equation of a plane passing through the point M₀(x₀, y₀, z₀) perpendicular to the normal vector n = (A, B, C):

A(x − x₀) + B(y − y₀) + C(z − z₀) = 0;

in vector form:

(r − r₀, n) = 0.

If the plane is given instead by the point M₀ and two non-collinear vectors p and q parallel to it, the vector condition is (r − r₀, p, q) = 0 (a mixed product of vectors), i.e. in coordinates a determinant equated to zero.

§ Normal (normalized) equation of the plane:

x cos α + y cos β + z cos γ − p = 0,

where cos α, cos β, cos γ are the direction cosines of the normal vector and p ≥ 0 is the distance from the origin to the plane.

§ Angle between two planes. If the plane equations are given in the form (1), then

cos φ = (A₁A₂ + B₁B₂ + C₁C₂) / (√(A₁² + B₁² + C₁²) · √(A₂² + B₂² + C₂²)).

If in vector form, then

cos φ = (n₁, n₂) / (|n₁| |n₂|).

§ The planes are parallel if

A₁/A₂ = B₁/B₂ = C₁/C₂,

or [n₁, n₂] = 0 (vector product).

§ The planes are perpendicular if

A₁A₂ + B₁B₂ + C₁C₂ = 0,

or (n₁, n₂) = 0 (scalar product).

7. Equation of a plane passing through three given points M₁(x₁, y₁, z₁), M₂(x₂, y₂, z₂), M₃(x₃, y₃, z₃) not lying on one line:

| x − x₁    y − y₁    z − z₁  |
| x₂ − x₁   y₂ − y₁   z₂ − z₁ | = 0.
| x₃ − x₁   y₃ − y₁   z₃ − z₁ |
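For example (points chosen for illustration), for M₁(1, 0, 0), M₂(0, 1, 0), M₃(0, 0, 1) the determinant expands to (x − 1) + y + z = 0, i.e. the plane x + y + z = 1.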

8. The distance from a point to a plane is the smallest of the distances between this point and the points of the plane. It is known that the distance from a point to a plane is equal to the length of the perpendicular dropped from this point to the plane.

§ The deviation δ of a point M₀(x₀, y₀, z₀) from the plane given by the normalized equation:

δ = x₀ cos α + y₀ cos β + z₀ cos γ − p.

Here δ > 0 if M₀ and the origin lie on opposite sides of the plane, and δ < 0 otherwise. The distance from the point to the plane is d = |δ|.

§ The distance from the point M₀(x₀, y₀, z₀) to the plane given by the equation Ax + By + Cz + D = 0 is calculated by the formula:

d = |Ax₀ + By₀ + Cz₀ + D| / √(A² + B² + C²).
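For example, for the point M₀(1, 2, 3) and the plane 2x + y − 2z + 5 = 0:

d = |2·1 + 1·2 − 2·3 + 5| / √(2² + 1² + (−2)²) = |3| / 3 = 1.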

9. A pencil (bundle) of planes is described by the equation of an arbitrary plane passing through the line of intersection of two planes:

α(A₁x + B₁y + C₁z + D₁) + β(A₂x + B₂y + C₂z + D₂) = 0,

where α and β are any numbers not simultaneously equal to zero.

In order for three planes, given relative to a rectangular Cartesian coordinate system (PDSC) by their general equations A₁x + B₁y + C₁z + D₁ = 0, A₂x + B₂y + C₂z + D₂ = 0, A₃x + B₃y + C₃z + D₃ = 0, to belong to the same pencil, proper or improper, it is necessary and sufficient that the rank of the matrix composed of the coefficients and free terms of these equations be equal to two or one.
Theorem 2. Let two planes π₁ and π₂ be given relative to a PDSC by their general equations A₁x + B₁y + C₁z + D₁ = 0 and A₂x + B₂y + C₂z + D₂ = 0. In order for the plane π₃, given relative to the PDSC by its general equation A₃x + B₃y + C₃z + D₃ = 0, to belong to the pencil formed by the planes π₁ and π₂, it is necessary and sufficient that the left-hand side of the equation of π₃ be representable as a linear combination of the left-hand sides of the equations of the planes π₁ and π₂.

10. Vector parametric equation of a straight line in space:

r = r₀ + t·a, t ∈ ℝ,

where r₀ is the radius vector of some fixed point M₀ lying on the line, a is a non-zero vector collinear to the line (its direction vector), and r is the radius vector of an arbitrary point of the line.

Parametric equations of a straight line in space:

x = x₀ + lt, y = y₀ + mt, z = z₀ + nt.

Canonical equations of a straight line in space:

(x − x₀)/l = (y − y₀)/m = (z − z₀)/n,

where x₀, y₀, z₀ are the coordinates of some fixed point M₀ lying on the line, and l, m, n are the coordinates of a vector collinear to this line (the direction vector).
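For example, the line through the point M₀(1, −1, 2) with direction vector (l, m, n) = (2, 3, −1) has the parametric equations x = 1 + 2t, y = −1 + 3t, z = 2 − t and the canonical equations (x − 1)/2 = (y + 1)/3 = (z − 2)/(−1).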

General equation of a straight line in space: since a line is the intersection of two distinct non-parallel planes, given respectively by the general equations

A₁x + B₁y + C₁z + D₁ = 0, A₂x + B₂y + C₂z + D₂ = 0,

the equation of the line can be given by the system of these equations:

A₁x + B₁y + C₁z + D₁ = 0,
A₂x + B₂y + C₂z + D₂ = 0.

The angle between the direction vectors a and b of two lines equals the angle between the lines. The angle between vectors is found using the scalar product:

cos φ = (a, b) / (|a| · |b|).
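For example, for direction vectors a = (1, 1, 0) and b = (1, 0, 1):

cos φ = (1·1 + 1·0 + 0·1) / (√2 · √2) = 1/2, so φ = 60°.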

The angle between a straight line and a plane is found by the formula:

sin φ = |Al + Bm + Cn| / (√(A² + B² + C²) · √(l² + m² + n²)),

where (A; B; C) are the coordinates of the normal vector of the plane
and (l; m; n) are the coordinates of the direction vector of the line.

Conditions for parallelism of two lines:

a) If the lines are given by equations (4) with a slope, then the necessary and sufficient condition for their parallelism is the equality of their slopes:

k₁ = k₂. (8)

b) If the lines are given by equations in general form (6), the necessary and sufficient condition for their parallelism is that the coefficients of the corresponding current coordinates in their equations be proportional, i.e.

A₁/A₂ = B₁/B₂.

Conditions for perpendicularity of two lines:

a) If the lines are given by equations (4) with a slope, the necessary and sufficient condition for their perpendicularity is that their slopes be inverse in magnitude and opposite in sign, i.e.

k₁·k₂ = −1 (k₂ = −1/k₁).

b) If the equations of the straight lines are given in general form (6), then the condition for their perpendicularity (necessary and sufficient) is the fulfillment of the equality

A₁A₂ + B₁B₂ = 0. (12)

A line is called perpendicular to a plane if it is perpendicular to every line in that plane. If a line is perpendicular to each of two intersecting lines of a plane, then it is perpendicular to that plane. For a line and a plane to be parallel, it is necessary and sufficient that the normal vector of the plane and the direction vector of the line be perpendicular, i.e. that their scalar product equal zero.

For a line and a plane to be perpendicular, it is necessary and sufficient that the normal vector of the plane and the direction vector of the line be collinear. This condition is satisfied if and only if the cross product of these vectors equals the zero vector.

12. In space, the distance from a point M₁ (with radius vector r₁) to a straight line given by the parametric equation

r = r₀ + t·a

can be found as the minimum of the distances from the given point to an arbitrary point of the line. The value of the parameter t at the nearest point is given by the formula

t = (a, r₁ − r₀) / (a, a),

and the distance itself by d = |[a, r₁ − r₀]| / |a|.
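For example (data chosen for illustration), for the point M₁(1, 1, 1) and the line r = t·(1, 0, 0) (the axis OX): t = (a, r₁ − r₀)/(a, a) = 1, the nearest point of the line is (1, 0, 0), and d = |[a, r₁ − r₀]|/|a| = |(0, −1, 1)| = √2.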

The distance between skew lines is the length of their common perpendicular. It is equal to the distance between the parallel planes passing through these lines.

The following are several criteria for linear dependence and, accordingly, linear independence of systems of vectors.

Theorem. (A necessary and sufficient condition for the linear dependence of vectors.)

A system of vectors is linearly dependent if and only if one of the vectors of the system is linearly expressed in terms of the other vectors of this system.

Proof. Necessity. Let the system a₁, …, aₙ be linearly dependent. Then, by definition, it represents the null vector in a non-trivial way, i.e. there is a non-trivial linear combination of this system of vectors equal to the zero vector:

α₁a₁ + … + αₙaₙ = 0,

where at least one of the coefficients of this linear combination is not equal to zero. Let, for definiteness, αₙ ≠ 0.

Divide both sides of the previous equality by this non-zero coefficient (i.e. multiply by 1/αₙ):

(α₁/αₙ)a₁ + … + (αₙ₋₁/αₙ)aₙ₋₁ + aₙ = 0.

Denote βᵢ = −αᵢ/αₙ, where i = 1, …, n − 1. Then

aₙ = β₁a₁ + … + βₙ₋₁aₙ₋₁,

i.e. one of the vectors of the system is linearly expressed in terms of the other vectors of this system, q.e.d.

Sufficiency. Let one of the vectors of the system be linearly expressed in terms of the other vectors of this system, say

aₙ = β₁a₁ + … + βₙ₋₁aₙ₋₁.

Move all terms of this equality to one side:

β₁a₁ + … + βₙ₋₁aₙ₋₁ + (−1)aₙ = 0.

Since the coefficient of the vector aₙ is −1 ≠ 0, we have a non-trivial representation of zero by the system of vectors, which means that this system of vectors is linearly dependent, q.e.d.

The theorem has been proven.

Corollaries.

1. A system of vectors in a vector space is linearly independent if and only if none of the vectors of the system is linearly expressed in terms of other vectors of this system.

2. A system of vectors containing a zero vector or two equal vectors is linearly dependent.

Proof.

1) Necessity. Let the system be linearly independent. Assume the opposite: there is a vector of the system that is linearly expressed in terms of the other vectors of this system. Then, by the theorem, the system is linearly dependent, and we arrive at a contradiction.

Sufficiency. Let none of the vectors of the system be expressed in terms of the others. Assume the opposite: let the system be linearly dependent. Then it follows from the theorem that there is a vector of the system that is linearly expressed in terms of the other vectors of this system, and we again arrive at a contradiction.

2a) Let the system contain a zero vector. Assume for definiteness that a₁ = 0. Then the equality

a₁ = 0·a₂ + … + 0·aₙ

obviously holds, i.e. one of the vectors of the system is linearly expressed in terms of the other vectors of this system. It follows from the theorem that such a system of vectors is linearly dependent.

Note that this fact can also be proved directly from the definition of a linearly dependent system of vectors.

Since a₁ = 0, the following equality is obvious:

1·a₁ + 0·a₂ + … + 0·aₙ = 0.

This is a non-trivial representation of the zero vector, which means that the system is linearly dependent.

2b) Let the system contain two equal vectors, say a₁ = a₂. Then the equality

a₁ = 1·a₂ + 0·a₃ + … + 0·aₙ

holds, i.e. the first vector is linearly expressed in terms of the other vectors of the same system. It follows from the theorem that the given system is linearly dependent.

Similarly to the previous one, this assertion can also be proved directly from the definition of a linearly dependent system.

Def. A system of elements x₁, …, xₘ of a linear space V is called linearly dependent if ∃ λ₁, …, λₘ ∈ ℝ (|λ₁| + … + |λₘ| ≠ 0) such that λ₁x₁ + … + λₘxₘ = θ.

Def. A system of elements x₁, …, xₘ ∈ V is called linearly independent if the equality λ₁x₁ + … + λₘxₘ = θ implies λ₁ = … = λₘ = 0.

Def. An element x ∈ V is called a linear combination of elements x₁, …, xₘ ∈ V if ∃ λ₁, …, λₘ ∈ ℝ such that x = λ₁x₁ + … + λₘxₘ.

Theorem (criterion of linear dependence): A system of vectors x₁, …, xₘ ∈ V is linearly dependent if and only if at least one vector of the system is linearly expressed in terms of the others.

Proof. Necessity: Let x₁, …, xₘ be linearly dependent ⟹ ∃ λ₁, …, λₘ ∈ ℝ (|λ₁| + … + |λₘ| ≠ 0) such that λ₁x₁ + … + λₘ₋₁xₘ₋₁ + λₘxₘ = θ. Suppose λₘ ≠ 0; then

xₘ = (−λ₁/λₘ)x₁ + … + (−λₘ₋₁/λₘ)xₘ₋₁.

Sufficiency: Let at least one of the vectors be linearly expressed in terms of the others: xₘ = λ₁x₁ + … + λₘ₋₁xₘ₋₁ (λ₁, …, λₘ₋₁ ∈ ℝ). Then λ₁x₁ + … + λₘ₋₁xₘ₋₁ + (−1)xₘ = θ, and since the coefficient of xₘ is −1 ≠ 0, the system x₁, …, xₘ is linearly dependent.

Sufficient condition for linear dependence:

If a system contains a zero element or a linearly dependent subsystem, then it is linearly dependent.

In both cases we exhibit a non-trivial combination λ₁x₁ + … + λₘxₘ = θ.

1) Let x₁ = θ. Then this equality holds for λ₁ = 1 and λ₂ = … = λₘ = 0.

2) Let x₁, …, xₖ (k < m) be a linearly dependent subsystem ⟹ ∃ λ₁, …, λₖ with |λ₁| + … + |λₖ| ≠ 0 such that λ₁x₁ + … + λₖxₖ = θ. Setting λₖ₊₁ = … = λₘ = 0, we still have |λ₁| + … + |λₘ| ≠ 0 and λ₁x₁ + … + λₘxₘ = θ, i.e. the whole system is linearly dependent.

Basis of a linear space. Vector coordinates in the given basis. The coordinates of the sums of vectors and the product of a vector by a number. Necessary and sufficient condition for linear dependence of a system of vectors.

Definition: An ordered system of elements e₁, …, eₙ of a linear space V is called a basis of this space if:

A) e₁, …, eₙ are linearly independent;

B) ∀ x ∈ V ∃ α₁, …, αₙ such that x = α₁e₁ + … + αₙeₙ.

x = α₁e₁ + … + αₙeₙ is the expansion of the element x in the basis e₁, …, eₙ;

α₁, …, αₙ ∈ ℝ are the coordinates of the element x in the basis e₁, …, eₙ.

Theorem: If a basis e₁, …, eₙ is given in the linear space V, then ∀ x ∈ V the column of coordinates of x in the basis e₁, …, eₙ is uniquely determined (the coordinates are uniquely determined).

Proof: Let x = α₁e₁ + … + αₙeₙ and x = β₁e₁ + … + βₙeₙ. Subtracting,

(α₁ − β₁)e₁ + … + (αₙ − βₙ)eₙ = θ.

Since e₁, …, eₙ are linearly independent, αᵢ − βᵢ = 0 ∀ i = 1, …, n ⇔ αᵢ = βᵢ ∀ i = 1, …, n, q.e.d.

Theorem: let e₁, …, eₙ be a basis of the linear space V; x, y arbitrary elements of the space V, λ ∈ ℝ an arbitrary number. When x and y are added, their coordinates are added; when x is multiplied by λ, the coordinates of x are also multiplied by λ.

Proof: Let x = ξ₁e₁ + … + ξₙeₙ and y = η₁e₁ + … + ηₙeₙ. Then

x + y = (ξ₁ + η₁)e₁ + … + (ξₙ + ηₙ)eₙ,

λx = (λξ₁)e₁ + … + (λξₙ)eₙ.

Lemma 1 (necessary and sufficient condition for the linear dependence of a system of vectors):

Let e₁, …, eₙ be a basis of the space V. A system of elements f₁, …, fₖ ∈ V is linearly dependent if and only if the coordinate columns of these elements in the basis e₁, …, eₙ are linearly dependent.

Proof: Expand f₁, …, fₖ in the basis e₁, …, eₙ and let Fₘ denote the coordinate column of fₘ, m = 1, …, k. Then

λ₁f₁ + … + λₖfₖ = (e₁, …, eₙ)(λ₁F₁ + … + λₖFₖ),

i.e. λ₁f₁ + … + λₖfₖ = θ ⇔ λ₁F₁ + … + λₖFₖ = 0 (the zero column), as required.
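For example (in an arbitrary basis e₁, e₂, e₃ of a three-dimensional V), if f₁ and f₂ have the coordinate columns (1, 2, 3)ᵀ and (2, 4, 6)ᵀ, then F₂ − 2F₁ = 0 is a non-trivial dependence of the columns, and correspondingly f₂ − 2f₁ = θ, so the system f₁, f₂ is linearly dependent.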

13. Dimension of a linear space. Theorem on the relationship between dimension and basis.
Definition: A linear space V is called an n-dimensional space if there are n linearly independent elements in V, and a system of any n + 1 elements of the space V is linearly dependent. In this case, n is called the dimension of the linear space V and is denoted dimV=n.

A linear space is called infinite-dimensional if ∀N ∈ ℕ in the space V there exists a linearly independent system containing N elements.

Theorem: 1) If V is an n-dimensional linear space, then any ordered system of n linearly independent elements of this space forms a basis. 2) If in the linear space V there is a basis consisting of n elements, then the dimension of V is equal to n (dimV=n).

Proof: 1) Let dimV = n ⇒ in V there exist n linearly independent elements e₁, …, eₙ. We prove that these elements form a basis, i.e. that every x ∈ V can be expanded in terms of e₁, …, eₙ. Add x to them: the system e₁, …, eₙ, x contains n + 1 vectors, hence it is linearly dependent. Since e₁, …, eₙ are linearly independent, by Theorem 2 x is linearly expressed through e₁, …, eₙ, i.e. ∃ α₁, …, αₙ such that x = α₁e₁ + … + αₙeₙ. So e₁, …, eₙ is a basis of the space V. 2) Let e₁, …, eₙ be a basis of V; then V contains n linearly independent elements. Take arbitrary f₁, …, fₙ, fₙ₊₁ ∈ V (n + 1 elements) and show their linear dependence. Expand them in the basis:

fₘ = α₁ₘe₁ + … + αₙₘeₙ, m = 1, …, n + 1. Form the matrix A whose columns are the coordinate columns of f₁, …, fₙ, fₙ₊₁. The matrix contains n rows ⇒ Rg A ≤ n. The number of columns is n + 1 > n ≥ Rg A ⇒ the columns of matrix A (i.e. the coordinate columns of f₁, …, fₙ, fₙ₊₁) are linearly dependent. By Lemma 1 ⇒ f₁, …, fₙ, fₙ₊₁ are linearly dependent ⇒ dimV = n.

Corollary: If some basis contains n elements, then any other basis of this space also contains n elements.

Theorem 2: If the system of vectors x₁, …, xₘ₋₁, xₘ is linearly dependent, and its subsystem x₁, …, xₘ₋₁ is linearly independent, then xₘ is linearly expressed through x₁, …, xₘ₋₁.

Proof: Since x₁, …, xₘ₋₁, xₘ are linearly dependent, ∃ λ₁, …, λₘ₋₁, λₘ with |λ₁| + … + |λₘ| ≠ 0 such that λ₁x₁ + … + λₘ₋₁xₘ₋₁ + λₘxₘ = θ. If λₘ = 0, then |λ₁| + … + |λₘ₋₁| ≠ 0, and x₁, …, xₘ₋₁ would be linearly dependent, which cannot be. So λₘ ≠ 0 and

xₘ = (−λ₁/λₘ)x₁ + … + (−λₘ₋₁/λₘ)xₘ₋₁.

The linear operations on vectors introduced above make it possible to form various expressions for vector quantities and to transform them using the properties established for these operations.

Given a set of vectors a₁, …, aₙ, one can form an expression of the form

α₁a₁ + … + αₙaₙ,

where α₁, …, αₙ are arbitrary real numbers. This expression is called a linear combination of the vectors a₁, …, aₙ. The numbers αᵢ, i = 1, …, n, are the coefficients of the linear combination. A set of vectors is also called a system of vectors.

In connection with the introduced concept of a linear combination of vectors, the problem arises of describing the set of vectors that can be written as a linear combination of a given system a₁, …, aₙ. Natural questions also arise about the conditions under which a vector admits a representation as a linear combination, and about the uniqueness of such a representation.

Definition 2.1. Vectors a₁, …, aₙ are called linearly dependent if there is a set of coefficients α₁, …, αₙ such that

α₁a₁ + … + αₙaₙ = 0 (2.2)

and at least one of these coefficients is non-zero. If no such set of coefficients exists, then the vectors are called linearly independent.

If α₁ = … = αₙ = 0, then obviously α₁a₁ + … + αₙaₙ = 0. With this in mind, we can say: the vectors a₁, …, aₙ are linearly independent if it follows from equality (2.2) that all the coefficients α₁, …, αₙ are equal to zero.

The following theorem explains why the new concept is described by the term "dependence" (or "independence") and gives a simple criterion for linear dependence.

Theorem 2.1. For the vectors a₁, …, aₙ, n > 1, to be linearly dependent, it is necessary and sufficient that one of them be a linear combination of the others.

◄ Necessity. Suppose the vectors a₁, …, aₙ are linearly dependent. By Definition 2.1 of linear dependence, in equality (2.2) at least one coefficient on the left is non-zero, say α₁. Leaving the first term on the left-hand side, we move the rest to the right-hand side, changing their signs as usual. Dividing the resulting equality by α₁, we get

a₁ = (−α₂/α₁)a₂ + … + (−αₙ/α₁)aₙ,

i.e. a representation of the vector a₁ as a linear combination of the remaining vectors a₂, …, aₙ.

Sufficiency. Let, say, the first vector a₁ be representable as a linear combination of the remaining vectors: a₁ = β₂a₂ + … + βₙaₙ. Transferring all terms from the right-hand side to the left, we get a₁ − β₂a₂ − … − βₙaₙ = 0, i.e. a linear combination of the vectors a₁, …, aₙ with coefficients α₁ = 1, α₂ = −β₂, …, αₙ = −βₙ equal to the zero vector. In this linear combination not all coefficients are zero. By Definition 2.1, the vectors a₁, …, aₙ are linearly dependent.

The definition and criterion of linear dependence are formulated so as to presuppose two or more vectors. However, one can also speak of the linear dependence of a single vector. To cover this case, instead of "the vectors are linearly dependent" one should say "the system of vectors is linearly dependent". It is easy to verify that the expression "a system of one vector is linearly dependent" means that this single vector is the zero vector (there is only one coefficient in the linear combination, and it must be non-zero).

The concept of linear dependence has a simple geometric interpretation. This interpretation is clarified by the following three statements.

Theorem 2.2. Two vectors are linearly dependent if and only if they are collinear.

◄ If the vectors a and b are linearly dependent, then one of them, say a, is expressed through the other, i.e. a = λb for some real number λ. By Definition 1.7 of the product of a vector by a number, the vectors a and b are collinear.

Now let the vectors a and b be collinear. If both are zero, then they are obviously linearly dependent, since any linear combination of them equals the zero vector. So let one of these vectors be non-zero, say the vector b. Denote by λ the ratio of the lengths of the vectors: λ = |a|/|b|. Collinear vectors are either codirectional or oppositely directed; in the latter case we change the sign of λ. Then, checking Definition 1.7, we see that a = λb. By Theorem 2.1, the vectors a and b are linearly dependent.
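For example, the vectors with coordinates a = (2, −4) and b = (−1, 2) satisfy a = (−2)b, so they are collinear and, by Theorem 2.1, linearly dependent.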

Remark 2.1. In the case of two vectors, taking into account the criterion of linear dependence, the proved theorem can be reformulated as follows: two vectors are collinear if and only if one of them is represented as the product of the other by a number. This is a convenient criterion for the collinearity of two vectors.

Theorem 2.3. Three vectors are linearly dependent if and only if they are coplanar.

◄ If three vectors a, b, c are linearly dependent, then by Theorem 2.1 one of them, say a, is a linear combination of the others: a = βb + γc. Place the origins of the vectors b and c at a point A. Then the vectors βb, γc have a common origin at A and, by the parallelogram rule, their sum, i.e. the vector a, is a vector with origin A whose end is the vertex of the parallelogram built on the summand vectors. Thus all the vectors lie in one plane, that is, they are coplanar.

Let the vectors a, b, c be coplanar. If one of these vectors is zero, then it is obviously a linear combination of the others: it suffices to take all the coefficients of the linear combination equal to zero. Therefore we may assume that all three vectors are non-zero. Place the origins of these vectors at a common point O, and let their ends be the points A, B, C respectively (Fig. 2.1). Through the point C draw lines parallel to the lines passing through the pairs of points O, A and O, B. Denoting the points of intersection by A′ and B′, we obtain a parallelogram OA′CB′; consequently, OC = OA′ + OB′. The vector OA′ and the non-zero vector a = OA are collinear, and therefore the first of them can be obtained by multiplying the second by a real number α: OA′ = αOA. Similarly, OB′ = βOB, β ∈ ℝ. As a result, OC = αOA + βOB, i.e. the vector c is a linear combination of the vectors a and b. By Theorem 2.1, the vectors a, b, c are linearly dependent.

Theorem 2.4. Any four vectors are linearly dependent.

◄ The proof follows the same scheme as in Theorem 2.3. Consider four arbitrary vectors a, b, c and d. If one of the four vectors is zero, or among them there are two collinear vectors, or three of the four vectors are coplanar, then these four vectors are linearly dependent. For example, if the vectors a and b are collinear, then we can compose a linear combination αa + βb = 0 with non-zero coefficients, and then add the remaining two vectors to this combination, taking zeros as coefficients. We get a linear combination of four vectors equal to 0 in which there are non-zero coefficients.

Thus, we may assume that among the chosen four vectors none is zero, no two are collinear, and no three are coplanar. Choose the point O as their common origin; then the ends of the vectors a, b, c, d are certain points A, B, C, D (Fig. 2.2). Through the point D draw three planes parallel to the planes OBC, OCA, OAB, and let A′, B′, C′ be the points of intersection of these planes with the lines OA, OB, OC respectively. We obtain a parallelepiped OA′C₁B′C′B₁DA₁, and the vectors a, b, c lie on its edges emanating from the vertex O. Since the quadrilateral OC₁DC′ is a parallelogram, OD = OC₁ + OC′. In turn, the segment OC₁ is a diagonal of the parallelogram OA′C₁B′, so that OC₁ = OA′ + OB′, and OD = OA′ + OB′ + OC′.

It remains to note that the pairs of vectors OA ≠ 0 and OA′, OB ≠ 0 and OB′, OC ≠ 0 and OC′ are collinear, and therefore we can choose coefficients α, β, γ so that OA′ = αOA, OB′ = βOB and OC′ = γOC. Finally, we get OD = αOA + βOB + γOC. Consequently, the vector OD is expressed in terms of the remaining three vectors, and all four vectors, by Theorem 2.1, are linearly dependent.

Definition 18.2. A system of functions f₁(t), …, fₙ(t) is called linearly dependent on an interval (a, β) if some nontrivial⁵ linear combination of these functions is identically equal to zero on this interval:

C₁f₁(t) + … + Cₙfₙ(t) ≡ 0, t ∈ (a, β).

Definition 18.3. A system of vectors x¹, …, xⁿ is called linearly dependent if some nontrivial linear combination of these vectors equals the zero vector:

C₁x¹ + … + Cₙxⁿ = 0.

In order to avoid confusion, we will denote the number of a component of a vector (or vector function) by a lower index, and the number of the vector itself (if there are several such vectors) by an upper index.

"We remind you that a linear combination is called non-trivial if not all coefficients in it are zero.

Definition 18.4. A system of vector functions x¹(t), …, xⁿ(t) is called linearly dependent on an interval (a, β) if some nontrivial linear combination of these vector functions is identically equal to the zero vector on this interval:

C₁x¹(t) + … + Cₙxⁿ(t) ≡ 0, t ∈ (a, β).

It is important to understand the connection of these three concepts (linear dependence of functions, vectors and vector functions) with each other.

First of all, if we write formula (18.6) in expanded form (remembering that each xⁱ(t) is a vector), then it is equivalent to the system of equalities

C₁x¹ⱼ(t) + … + Cₙxⁿⱼ(t) ≡ 0, j = 1, …, m,

meaning linear dependence of the components in the sense of the first definition (as functions). One says that the linear dependence of vector functions implies their componentwise linear dependence.

The converse is generally not true: it suffices to consider a pair of vector functions such as

x¹(t) = (1, t)ᵀ, x²(t) = (1, 2t)ᵀ

(the specific pair here is chosen for illustration). The first components of these vector functions simply coincide, hence they are linearly dependent. The second components are proportional, so they too are linearly dependent. However, if we try to build a linear combination of them that is identically zero, then from the relation

C₁x¹(t) + C₂x²(t) ≡ 0

we immediately get the system

C₁ + C₂ = 0, C₁ + 2C₂ = 0,

which has the only solution C₁ = C₂ = 0. Thus, these vector functions are linearly independent.

What is the reason for such a strange property? What is the trick that allows one to build linearly independent vector functions out of component functions that are certainly dependent?

It turns out that the whole point is not so much in the linear dependence of the components as in the proportion of coefficients needed to obtain zero. In the case of linear dependence of vector functions, the same set of coefficients serves all the components, regardless of their number. But in our example, one component required one proportion of coefficients, and the other component another. So the trick is really simple: to obtain linear dependence of the whole vector functions from "component-by-component" linear dependence, it is necessary that all components be linearly dependent "in the same proportion".

Let us now turn to the relationship between the linear dependence of vector functions and of vectors. Here it is almost obvious that linear dependence of the vector functions implies that for each fixed t = t* the vectors

x¹(t*), …, xⁿ(t*)

will be linearly dependent.

The converse, generally speaking, does not hold: linear dependence of the vectors for each t does not imply linear dependence of the vector functions. This is easy to see on an example of two vector functions such as

x¹(t) = (1, t)ᵀ, x²(t) = (t, t²)ᵀ

(again chosen for illustration). At t = 1, t = 2 and t = 3 we get the pairs of vectors

(1, 1)ᵀ, (1, 1)ᵀ;  (1, 2)ᵀ, (2, 4)ᵀ;  (1, 3)ᵀ, (3, 9)ᵀ

respectively. Each pair of vectors is proportional (with coefficients 1, 2 and 3 respectively). It is easy to see that for any fixed t* our pair of vectors is proportional with the coefficient t*.

If we try to construct a linear combination of these vector functions that is identically zero, then the first components alone already give the relation

C₁·1 + C₂·t ≡ 0,

which is possible only if C₁ = C₂ = 0. Thus, our vector functions turn out to be linearly independent. Again, the explanation of this effect is that in the case of linear dependence of vector functions one and the same set of constants Cⱼ serves all values of t, while in our example each value of t required its own proportion between the coefficients.