Vector space

A vector (or linear) space is a mathematical structure consisting of a set of elements, called vectors, for which the operations of addition and of multiplication by a number (a scalar) are defined. These operations are subject to eight axioms. The scalars may be elements of the real, complex, or any other number field. A special case of such a space is ordinary three-dimensional Euclidean space, whose vectors are used, for example, to represent physical forces. Note that a vector, as an element of a vector space, need not be given as a directed segment. Generalizing the notion of "vector" to an element of a vector space of any nature not only avoids confusion of terms, but also makes it possible to understand, or even anticipate, a number of results valid for spaces of arbitrary nature.

Vector spaces are the subject of study of linear algebra. One of the principal characteristics of a vector space is its dimension: the maximum number of linearly independent elements of the space, that is, in a rough geometric interpretation, the number of directions that cannot be expressed through one another using only addition and multiplication by a scalar. A vector space can be endowed with additional structures, such as a norm or an inner product. Such spaces appear naturally in calculus, predominantly as infinite-dimensional function spaces, where the vectors are functions. Many problems in analysis require determining whether a sequence of vectors converges to a given vector. Such questions can be considered in vector spaces with additional structure, in most cases a suitable topology, which allows the concepts of proximity and continuity to be defined. Such topological vector spaces, in particular Banach and Hilbert spaces, admit a deeper study.

The first works anticipating the introduction of the concept of a vector space date back to the 17th century, when analytic geometry, the theory of matrices, systems of linear equations, and Euclidean vectors began to develop.

Definition

A linear, or vector, space V(F) over a field F is an ordered quadruple (V, F, +, ⋅), where

  • V is a non-empty set of elements of arbitrary nature, which are called vectors;
  • F is a field, whose elements are called scalars;
  • an operation of addition of vectors V × V → V is defined, which assigns to each pair of elements x, y of the set V a unique element of V, called their sum and denoted x + y;
  • an operation of multiplication of vectors by scalars F × V → V is defined, which assigns to each element λ of the field F and each element x of the set V a unique element of V, denoted λ ⋅ x or λx.

Vector spaces defined on the same set of elements but over different fields are different vector spaces (for example, the set of pairs of real numbers ℝ² can be a two-dimensional vector space over the field of real numbers or a one-dimensional one over the field of complex numbers).

The simplest properties

  1. A vector space is an abelian group under addition.
  2. The neutral element 0 ∈ V is unique, which follows from the group properties.
  3. 0 ⋅ x = 0 for any x ∈ V.
  4. For any x ∈ V the opposite element −x ∈ V is unique, which follows from the group properties.
  5. 1 ⋅ x = x for any x ∈ V.
  6. (−α) ⋅ x = α ⋅ (−x) = −(αx) for any α ∈ F and x ∈ V.
  7. α ⋅ 0 = 0 for any α ∈ F.
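These properties can be checked directly in a concrete space such as ℝ². A minimal sketch in Python (illustrative only; vectors are modelled as tuples and scalars as floats):

```python
# Illustrative check of properties 3, 5, 6 and 7 in the space R^2,
# with vectors modelled as tuples and scalars as floats.

def add(x, y):
    """Componentwise vector addition."""
    return tuple(a + b for a, b in zip(x, y))

def scale(alpha, x):
    """Multiplication of a vector by a scalar."""
    return tuple(alpha * c for c in x)

zero = (0.0, 0.0)
x = (3.0, -2.0)
alpha = 5.0

assert scale(0.0, x) == zero                              # property 3: 0*x = 0
assert scale(1.0, x) == x                                 # property 5: 1*x = x
assert scale(-alpha, x) == scale(-1.0, scale(alpha, x))   # property 6
assert scale(alpha, zero) == zero                         # property 7
assert add(x, scale(-1.0, x)) == zero                     # -x is the opposite element
```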

Related definitions and properties

Subspace

Algebraic definition: a linear subspace or vector subspace is a non-empty subset K of a linear space V such that K is itself a linear space with respect to the operations of addition and multiplication by a scalar defined in V. The set of all subspaces of V is usually denoted Lat(V). For a subset K to be a subspace it is necessary and sufficient that

  • the zero vector belong to K;
  • for any vectors x, y ∈ K, the vector x + y also belong to K;
  • for any vector x ∈ K and any scalar α ∈ F, the vector αx also belong to K.

The last two statements are equivalent to the following:

For any vectors x, y ∈ K, the vector αx + βy also belongs to K for any α, β ∈ F.

In particular, a vector space consisting of the zero vector alone is a subspace of any space, and any space is a subspace of itself. Subspaces that do not coincide with these two are called proper, or non-trivial.
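The criterion can be tested mechanically on candidate subsets of ℝ². A small illustrative Python sketch (the line y = 2x and its shifted copy are hypothetical examples chosen here; they are not from the text):

```python
# Randomized check of the subspace criterion for K = {(t, 2t)} in R^2:
# alpha*x + beta*y must again lie in K. Illustrative sketch only.
import random

def in_line(v):
    """Membership test for the line y = 2x through the origin."""
    return abs(v[1] - 2 * v[0]) < 1e-9

def in_shifted(v):
    """Membership test for the shifted line y = 2x + 1 (not through the origin)."""
    return abs(v[1] - (2 * v[0] + 1)) < 1e-9

random.seed(0)
for _ in range(200):
    t1, t2 = random.uniform(-5, 5), random.uniform(-5, 5)
    x, y = (t1, 2 * t1), (t2, 2 * t2)          # two vectors from K
    a, b = random.uniform(-3, 3), random.uniform(-3, 3)
    z = (a * x[0] + b * y[0], a * x[1] + b * y[1])
    assert in_line(z)                          # K is closed: a subspace

# By contrast, the shifted line contains no zero vector, so it is not a subspace.
assert not in_shifted((0.0, 0.0))
```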

Subspace properties

Linear combinations

A finite sum of the form

α₁x₁ + α₂x₂ + … + αₙxₙ

is called a linear combination of the vectors x₁, x₂, …, xₙ with coefficients α₁, α₂, …, αₙ.

A linear combination is called:

  • non-trivial, if at least one of its coefficients is non-zero;
  • trivial, if all of its coefficients are zero.

Basis. Dimension

Vectors x₁, x₂, …, xₙ are called linearly dependent if there exists a non-trivial linear combination of them equal to zero:

α₁x₁ + α₂x₂ + … + αₙxₙ = 0,  |α₁| + |α₂| + … + |αₙ| ≠ 0.

Otherwise, these vectors are called linearly independent.

This definition admits the following generalization: an infinite set of vectors from V is called linearly dependent if some finite subset of it is linearly dependent, and linearly independent if any finite subset of it is linearly independent.
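In a coordinate space, linear dependence of finitely many vectors can be tested by Gaussian elimination: the vectors are independent exactly when the rank of the matrix whose rows they form equals their number. A minimal self-contained sketch in Python (the `rank` helper and the sample vectors are illustrative):

```python
from fractions import Fraction

def rank(rows):
    """Rank of a matrix over Q, by Gauss-Jordan elimination with exact fractions."""
    m = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def linearly_independent(vectors):
    """Vectors are independent iff the rank equals the number of vectors."""
    return rank(vectors) == len(vectors)

assert linearly_independent([(1, 0, 0), (0, 1, 0), (0, 0, 1)])
assert not linearly_independent([(1, 2, 3), (2, 4, 6)])       # x2 = 2*x1
assert not linearly_independent([(1, 1), (1, -1), (2, 0)])    # 3 vectors in R^2
```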

Basis properties:

  • Any vector x ∈ V can be represented uniquely as a linear combination of the basis vectors x₁, x₂, …, xₙ:

x = α₁x₁ + α₂x₂ + … + αₙxₙ.

The coefficients α₁, α₂, …, αₙ are called the coordinates of the vector x in this basis.
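For instance, in ℝ² the coordinates α₁, α₂ of a vector in a basis {x₁, x₂} are found by solving a 2×2 linear system. A sketch via Cramer's rule, in exact rational arithmetic (the function name and sample data are illustrative):

```python
from fractions import Fraction

def coords_in_basis(b1, b2, x):
    """Coordinates (a1, a2) with x = a1*b1 + a2*b2, by Cramer's rule in R^2."""
    det = b1[0] * b2[1] - b1[1] * b2[0]
    assert det != 0, "b1, b2 must be linearly independent"
    a1 = Fraction(x[0] * b2[1] - x[1] * b2[0], det)
    a2 = Fraction(b1[0] * x[1] - b1[1] * x[0], det)
    return a1, a2

b1, b2, x = (1, 1), (1, -1), (3, 1)
a1, a2 = coords_in_basis(b1, b2, x)
assert (a1, a2) == (2, 1)
# the expansion x = a1*b1 + a2*b2 indeed reproduces x
assert (a1 * b1[0] + a2 * b2[0], a1 * b1[1] + a2 * b2[1]) == x
```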

Linear span

The linear span of a subset X of a linear space V is the intersection of all subspaces of V that contain X.

The linear span is a subspace of V.

The linear span is also called the subspace generated by X. It is also said that the linear span V(X) is the space spanned by the set X.

The linear span V(X) consists of all possible linear combinations of various finite subsystems of elements of X. In particular, if X is a finite set, then V(X) consists of all linear combinations of the elements of X. Thus, the zero vector always belongs to the linear span.
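Membership in a linear span can be decided by a rank test: v belongs to the span of X exactly when adjoining v to X does not increase the rank. A self-contained illustrative sketch (the `rank` and `in_span` helpers are written here for demonstration):

```python
from fractions import Fraction

def rank(rows):
    """Rank of a matrix over Q, by Gauss-Jordan elimination with exact fractions."""
    m = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def in_span(vectors, v):
    """v lies in the span of `vectors` iff adding it does not raise the rank."""
    return rank(list(vectors) + [v]) == rank(list(vectors))

X = [(1, 0, 1), (0, 1, 1)]
assert in_span(X, (2, 3, 5))        # 2*(1,0,1) + 3*(0,1,1)
assert in_span(X, (0, 0, 0))        # the zero vector is always in the span
assert not in_span(X, (0, 0, 1))
```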

If X is a linearly independent set, then it is a basis of V(X).

A non-empty subset L of a linear space V is called a linear subspace of the space V if

1) \mathbf{u}+\mathbf{v}\in L~~\forall \mathbf{u},\mathbf{v}\in L (the subspace is closed under the operation of addition);

2) \lambda \mathbf{v}\in L~~\forall \mathbf{v}\in L and any number \lambda (the subspace is closed under the operation of multiplying a vector by a number).

To indicate a linear subspace we will use the notation L\triangleleft V, omitting the word "linear" for brevity.

Remarks 8.7

1. Conditions 1 and 2 in the definition can be replaced by the single condition \lambda \mathbf{u}+\mu \mathbf{v}\in L~~\forall \mathbf{u},\mathbf{v}\in L and any numbers \lambda and \mu. Here, as in the definition, the numbers are arbitrary elements of the number field over which the space V is defined.

2. In any linear space V there are two linear subspaces:

a) the space V itself, i.e. V\triangleleft V ;

b) the zero subspace \{\mathbf{o}\}, consisting of the single zero vector of the space V, i.e. \{\mathbf{o}\}\triangleleft V. These subspaces are called improper, and all the rest are called proper.

3. Any subspace L of a linear space V is a subset of it: L\triangleleft V~\Rightarrow~L\subset V, but not every subset M\subset V is a linear subspace, since it may fail to be closed under the linear operations.

4. The subspace L of the linear space V is itself a linear space with the same operations of adding vectors and multiplying a vector by a number as in the space V , since axioms 1-8 hold for them. Therefore, we can talk about the dimension of a subspace, its basis, and so on.

5. The dimension of any subspace L of the linear space V does not exceed the dimension of the space V\colon\,\dim(L)\leqslant \dim(V). If the dimension of the subspace L\triangleleft V is equal to the dimension of the finite-dimensional space V (\dim(L)=\dim(V)) , then the subspace coincides with the space itself: L=V .

This follows from Theorem 8.2 (on completing a system of vectors to a basis). Indeed, take a basis of the subspace L and complete it to a basis of the space V. If this is possible, then \dim{L}<\dim{V}. If it cannot be completed, i.e. the basis of the subspace L is a basis of the space V, then \dim{L}=\dim{V}. Since a space is the linear span of its basis (see Corollary 1 of Theorem 8.1), we obtain L=V.

6. For any subset M of a linear space V, the linear span is a subspace of V and M\subset \operatorname{Lin}(M)\triangleleft V.

Indeed, if M=\varnothing (the empty set), then by definition \operatorname{Lin}(M)=\{\mathbf{o}\}, i.e. it is the zero subspace, and \varnothing\subset\{\mathbf{o}\}\triangleleft V. Let M\ne\varnothing. We need to prove that the set \operatorname{Lin}(M) is closed under addition of its elements and multiplication of its elements by a number. Recall that the elements of the linear span \operatorname{Lin}(M) are the linear combinations of vectors from M. Since a linear combination of linear combinations of vectors is again a linear combination of those vectors, taking into account point 1 we conclude that \operatorname{Lin}(M) is a subspace of V, i.e. \operatorname{Lin}(M)\triangleleft V. The inclusion M\subset\operatorname{Lin}(M) is obvious, since any vector \mathbf{v}\in M can be represented as the linear combination 1\cdot\mathbf{v}, i.e. as an element of the set \operatorname{Lin}(M).

7. The linear span \operatorname{Lin}(L) of a subspace L\triangleleft V coincides with the subspace L, i.e. \operatorname{Lin}(L)=L.

Indeed, since the linear subspace L contains all possible linear combinations of its vectors, \operatorname{Lin}(L)\subset L. The opposite inclusion L\subset\operatorname{Lin}(L) follows from point 6. Hence \operatorname{Lin}(L)=L.

Examples of linear subspaces

We indicate some subspaces of the linear spaces whose examples were considered earlier. It is impossible to enumerate all subspaces of a linear space, except in trivial cases.

1. The space \{\mathbf{o}\}, consisting of the single zero vector of the space V, is a subspace, i.e. \{\mathbf{o}\}\triangleleft V.

2. Let, as before, V_1,\,V_2,\,V_3 be the sets of vectors (directed segments) on a line, in a plane, and in space, respectively. If the line lies in the plane, then V_1\triangleleft V_2\triangleleft V_3. By contrast, the set of unit vectors is not a linear subspace, since multiplying a vector by a number not equal to one gives a vector that does not belong to the set.

3. In the n-dimensional arithmetic space \mathbb{R}^n, consider the set L of "semi-zero" columns of the form x=\begin{pmatrix} x_1&\cdots& x_m&0&\cdots&0\end{pmatrix}^T with the last (n-m) elements equal to zero. The sum of "semi-zero" columns is a column of the same kind, i.e. the operation of addition is closed in L. Multiplying a "semi-zero" column by a number gives a "semi-zero" column, i.e. the operation of multiplication by a number is closed in L. Therefore L\triangleleft \mathbb{R}^n and \dim(L)=m. By contrast, the subset of non-zero columns of \mathbb{R}^n is not a linear subspace, since multiplication by zero gives the zero column, which does not belong to the set under consideration. Examples of other subspaces of \mathbb{R}^n are given in the next subsection.

4. The set \{Ax=o\} of solutions of a homogeneous system of equations with n unknowns is a subspace of the n-dimensional arithmetic space \mathbb{R}^n. The dimension of this subspace is determined by the matrix of the system: \dim\{Ax=o\}=n-\operatorname{rg}A.

The set \{Ax=b\} of solutions of an inhomogeneous system (for b\ne o) is not a subspace of \mathbb{R}^n, since the sum of two solutions of the inhomogeneous system is not a solution of the same system.
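The dimension formula \dim\{Ax=o\}=n-\operatorname{rg}A is easy to check numerically. A sketch with an exact-arithmetic rank helper (the matrix A and the two solutions below are hypothetical examples, not from the text):

```python
from fractions import Fraction

def rank(rows):
    """Rank of a matrix over Q, by Gauss-Jordan elimination with exact fractions."""
    m = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# A hypothetical homogeneous system Ax = o with n = 4 unknowns
A = [(1, 2, 0, 1),
     (2, 4, 1, 3),
     (3, 6, 1, 4)]      # row3 = row1 + row2, so rg A = 2
n = 4
assert rank(A) == 2
assert n - rank(A) == 2          # the solution space is two-dimensional

# two linearly independent solutions confirming the dimension
for x in [(-2, 1, 0, 0), (-1, 0, -1, 1)]:
    assert all(sum(a * b for a, b in zip(row, x)) == 0 for row in A)
```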

5. In the space M_{n\times n} of square matrices of order n, consider two subsets: the set M_{n\times n}^{\text{sim}} of symmetric matrices and the set M_{n\times n}^{\text{kos}} of skew-symmetric matrices. The sum of symmetric matrices is a symmetric matrix, i.e. the addition operation is closed in M_{n\times n}^{\text{sim}}. Multiplying a symmetric matrix by a number also does not break the symmetry, i.e. the operation of multiplying a matrix by a number is closed in M_{n\times n}^{\text{sim}}. Therefore the set of symmetric matrices is a subspace of the space of square matrices, i.e. M_{n\times n}^{\text{sim}}\triangleleft M_{n\times n}. It is easy to find the dimension of this subspace. The standard basis is formed by the n matrices with a single non-zero element (equal to one) on the main diagonal, a_{ii}=1,~i=1,\ldots,n, together with the matrices with two non-zero elements (equal to one) symmetric about the main diagonal, a_{ij}=a_{ji}=1, i=1,\ldots,n, j=i+1,\ldots,n. In total the basis contains n+(n-1)+\ldots+2+1= \frac{n(n+1)}{2} matrices. Consequently, \dim M_{n\times n}^{\text{sim}}= \frac{n(n+1)}{2}. Similarly, M_{n\times n}^{\text{kos}}\triangleleft M_{n\times n} and \dim M_{n\times n}^{\text{kos}}= \frac{n(n-1)}{2}.

The set of degenerate (singular) square matrices of order n is not a subspace of M_{n\times n}, since the sum of two degenerate matrices may turn out to be a non-degenerate matrix, for example, in the space M_{2\times2}:

\begin{pmatrix}1&0\\0&0\end{pmatrix}+ \begin{pmatrix}0&0\\0&1\end{pmatrix}= \begin{pmatrix}1&0\\0&1\end{pmatrix}.
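The dimension counts can be confirmed by listing the standard basis matrices (note that the skew-symmetric count is n(n−1)/2, since a skew-symmetric matrix has zero diagonal). A short illustrative sketch:

```python
def sym_basis(n):
    """Standard basis of the symmetric matrices: E_ii and E_ij + E_ji, i < j."""
    basis = []
    for i in range(n):
        for j in range(i, n):
            m = [[0] * n for _ in range(n)]
            m[i][j] = m[j][i] = 1
            basis.append(m)
    return basis

def skew_basis(n):
    """Standard basis of the skew-symmetric matrices: E_ij - E_ji, i < j."""
    basis = []
    for i in range(n):
        for j in range(i + 1, n):
            m = [[0] * n for _ in range(n)]
            m[i][j], m[j][i] = 1, -1
            basis.append(m)
    return basis

n = 4
assert len(sym_basis(n)) == n * (n + 1) // 2      # 10 for n = 4
assert len(skew_basis(n)) == n * (n - 1) // 2     # 6 for n = 4

# closure: the sum of two symmetric basis matrices is symmetric
a, b = sym_basis(n)[0], sym_basis(n)[5]
s = [[a[i][j] + b[i][j] for j in range(n)] for i in range(n)]
assert all(s[i][j] == s[j][i] for i in range(n) for j in range(n))
```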

6. In the space of polynomials P(\mathbb{R}) with real coefficients, one can specify a natural chain of subspaces

P_0(\mathbb{R})\triangleleft P_1(\mathbb{R})\triangleleft P_2(\mathbb{R})\triangleleft \ldots \triangleleft P_n(\mathbb{R})\triangleleft \ldots \triangleleft P(\mathbb{R}).

The set of even polynomials (p(-x)=p(x)) is a linear subspace of P(\mathbb{R}), since the sum of even polynomials and the product of an even polynomial by a number are again even polynomials. The set of odd polynomials (p(-x)=-p(x)) is also a linear subspace. The set of polynomials with real roots is not a linear subspace, since the sum of two such polynomials can be a polynomial without real roots, for example, (x^2-x)+(x+1)=x^2+1.

7. In the space C(\mathbb{R}) one can specify a natural chain of subspaces:

C(\mathbb{R})\triangleright C^1(\mathbb{R})\triangleright C^2(\mathbb{R}) \triangleright \ldots\triangleright C^m(\mathbb{R})\triangleright \ldots

Polynomials in P(\mathbb{R}) can be viewed as functions defined on \mathbb{R}. Since a polynomial is continuous together with its derivatives of any order, we can write P(\mathbb{R})\triangleleft C(\mathbb{R}) and P_n(\mathbb{R})\triangleleft C^m(\mathbb{R})~\forall m,n\in\mathbb{N}. The space of trigonometric binomials T_{\omega}(\mathbb{R}) is a subspace of C^m(\mathbb{R}), since the derivatives of any order of the function f(t)=a\sin\omega t+b\cos\omega t are continuous, i.e. T_{\omega}(\mathbb{R})\triangleleft C^m(\mathbb{R})~\forall m\in\mathbb{N}. The set of continuous periodic functions is not a subspace of C(\mathbb{R}), since the sum of two periodic functions may turn out to be non-periodic, for example, \sin t+\sin(\pi t).

Definition 6.1. A subspace L of an n-dimensional space R is a set of vectors that itself forms a linear space with respect to the operations defined in R.

In other words, L is a subspace of the space R if x, y ∈ L implies x + y ∈ L, and if x ∈ L, then λx ∈ L, where λ is any real number.

The simplest example of a subspace is the zero subspace, i.e. the subset of the space R consisting of the zero element alone. The entire space R can also serve as a subspace. These subspaces are called trivial, or improper.

A subspace of an n-dimensional space is finite-dimensional and its dimension does not exceed n: dim L ≤ dim R.

Sum and intersection of subspaces

Let L and M be two subspaces of the space R.

The sum L + M is the set of vectors x + y, where x ∈ L and y ∈ M. Clearly, any linear combination of vectors from L + M belongs to L + M; consequently, L + M is a subspace of the space R (it may coincide with the space R itself).

The intersection L ∩ M of the subspaces L and M is the set of vectors that belong to both subspaces L and M simultaneously (it may consist of the zero vector only).

Theorem 6.1. The sum of the dimensions of arbitrary subspaces L and M of a finite-dimensional linear space R equals the dimension of the sum of these subspaces plus the dimension of their intersection:

dim L + dim M = dim(L + M) + dim(L ∩ M).

Proof. Denote F = L + M and G = L ∩ M, and let G be a g-dimensional subspace. Choose a basis in it. Since G ⊂ L and G ⊂ M, the basis of G can be completed to a basis of L and to a basis of M. Let us show that the vectors

belong to the subspace G = L ∩ M. On the other hand, the vector v can be represented as a linear combination of the basis vectors of the subspace G:

(6.5)

From equations (6.4) and (6.5) we have:

Due to the linear independence of the basis of the subspace L we have:

are linearly independent. But any vector z from F (by definition of the sum of subspaces) can be represented as a sum x + y, where x ∈ L, y ∈ M. In turn, x is represented as a linear combination of basis vectors of L, and y as a linear combination of basis vectors of M. Hence the vectors (6.10) generate the subspace F. We have found that the vectors (6.10) form a basis of F = L + M.

Comparing the bases of the subspaces L and M with the basis (6.10) of the subspace F = L + M, we have: dim L = g + l, dim M = g + m, dim(L + M) = g + l + m. Consequently:

dim L + dim M − dim(L ∩ M) = dim(L + M).
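Theorem 6.1 can be illustrated numerically. Take L = span{e₁, e₂} and M = span{e₂, e₃} in ℝ³, whose intersection is span{e₂} by construction; a sketch with an exact-arithmetic rank helper (the helper and the example are illustrative):

```python
from fractions import Fraction

def rank(rows):
    """Rank of a matrix over Q, by Gauss-Jordan elimination with exact fractions."""
    m = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

L = [(1, 0, 0), (0, 1, 0)]          # dim L = 2
M = [(0, 1, 0), (0, 0, 1)]          # dim M = 2
dim_sum = rank(L + M)               # L + M is spanned by the union of the bases
dim_int = 1                         # L ∩ M = span{e2}, known by construction
assert rank(L) + rank(M) == dim_sum + dim_int    # 2 + 2 == 3 + 1
```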

Direct sum of subspaces

Definition 6.2. A space F is the direct sum of subspaces L and M if each vector x of the space F can be represented in exactly one way as a sum x = y + z, where y ∈ L and z ∈ M.

The direct sum is denoted L ⊕ M. If F = L ⊕ M, one says that F decomposes into the direct sum of its subspaces L and M.

Theorem 6.2. For an n-dimensional space R to be the direct sum of subspaces L and M, it is sufficient that the intersection of L and M contain only the zero element and that the dimension of R equal the sum of the dimensions of the subspaces L and M.

Proof. Let us choose some basis in the subspace L and some basis in the subspace M. Let us prove that

(6.13)

Since the left side of (6.13) is a vector of the subspace L, the right side is a vector of the subspace M, and L ∩ M = {0}, then
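The sufficiency condition of Theorem 6.2 can be checked by a rank computation: if the union of a basis of L and a basis of M consists of dim L + dim M = n linearly independent vectors, then L ∩ M = {0} and R = L ⊕ M. An illustrative sketch (helper and example are hypothetical):

```python
from fractions import Fraction

def rank(rows):
    """Rank of a matrix over Q, by Gauss-Jordan elimination with exact fractions."""
    m = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

n = 3
L = [(1, 0, 0), (0, 1, 0)]          # dim L = 2
M = [(1, 1, 1)]                     # dim M = 1
# the union of the bases is linearly independent and spans R^3,
# hence R^3 = L ⊕ M
assert rank(L + M) == rank(L) + rank(M) == n
```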

In any linear space one can select a subset of vectors which, under the operations of that space, is itself a linear space. This can be done in a variety of ways, and the structure of such subsets carries important information about the linear space itself.

Definition 2.1. A subset H of a linear space is called a linear subspace if the following two conditions are met:

1) the sum of any two vectors from H belongs to H;

2) the product of any vector from H by any scalar belongs to H.

Definition 2.1 in effect says that a linear subspace is any subset of the given linear space that is closed under the linear operations, i.e. applying the linear operations to vectors belonging to this subset does not take the result outside the subset. Let us show that a linear subspace H, as an independent object, is a linear space with respect to the operations given in the ambient linear space. Indeed, these operations are defined for all elements of the ambient space, and hence for the elements of the subset H. Definition 2.1 in effect requires that for elements of H the result of the operations also belong to H. Therefore the operations given in the ambient space can be regarded as operations on the narrower set H. For these operations on the set H the linear space axioms a)-b) and e)-h) are satisfied because they hold in the ambient space. In addition, the remaining two axioms are also satisfied, since, according to Definition 2.1, if x ∈ H then:

1) 0 = 0 ⋅ x ∈ H, and 0 is the zero vector in H;

2) (−1) ⋅ x = −x ∈ H.

In any linear space there are always two linear subspaces: the linear space itself and the zero subspace {0}, consisting of the single element 0. These linear subspaces are called improper, while all other linear subspaces are called proper. Let us give examples of proper linear subspaces.

Example 2.1. In the linear space of free vectors of three-dimensional space, a linear subspace is formed by:

a) all vectors parallel to the given plane;

b) all vectors parallel to the given line.

This follows from the following considerations. By the definition of the sum of free vectors, two vectors and their sum are coplanar (Fig. 2.1, a). Therefore, if both vectors are parallel to a given plane, their sum is parallel to the same plane. This establishes that in case a) condition 1) of Definition 2.1 is satisfied. Multiplying a vector by a number yields a vector collinear to the original one (Fig. 2.1, b). This proves that condition 2) of Definition 2.1 holds. Case b) is justified similarly.

The linear space of free vectors gives a visual representation of what a linear subspace is. Indeed, fix some point in space. Then the various planes and various lines passing through this point correspond to different linear subspaces (Fig. 2.2).

It is less obvious that there are no other proper subspaces. If a linear subspace H contains no non-zero vectors, then H is the zero linear subspace, which is improper. If H contains a non-zero vector and any two vectors from H are collinear, then all vectors of this linear subspace are parallel to some line passing through the fixed point; consequently, H coincides with one of the linear subspaces described in case b). If H contains two non-collinear vectors and any three of its vectors are coplanar, then all vectors of such a linear subspace are parallel to some plane passing through the fixed point; this is case a). Finally, let a linear subspace H contain three non-coplanar vectors. Then they form a basis of the whole space, and any free vector can be represented as a linear combination of them. Hence all free vectors fall into the linear subspace H, so it coincides with the whole space, and we get an improper linear subspace. Thus, all proper subspaces can be represented as planes or lines passing through a fixed point.

Example 2.2. Any solution of a homogeneous system of linear algebraic equations (SLAE) in n variables can be viewed as a vector in the arithmetic linear space. The set of all such vectors is a linear subspace of it. Indeed, solutions of a homogeneous SLAE can be added componentwise and multiplied by real numbers, i.e. according to the rules for adding vectors. The result of the operation is again a solution of the homogeneous SLAE. Hence both conditions in the definition of a linear subspace are satisfied.
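The closure argument in Example 2.2 can be verified directly: if Ax = o and Ay = o, then A(x + y) = o and A(λx) = o. A small sketch (the matrix and the solutions below are hypothetical illustrations):

```python
def apply(A, x):
    """Matrix-vector product A x, computed row by row."""
    return tuple(sum(a * c for a, c in zip(row, x)) for row in A)

A = [(1, -1, 0),
     (0, 1, -1)]                     # a hypothetical homogeneous SLAE Ax = o
x = (1, 1, 1)                        # a solution
y = (2, 2, 2)                        # another solution
lam = 5

assert apply(A, x) == (0, 0) and apply(A, y) == (0, 0)
s = tuple(a + b for a, b in zip(x, y))                  # componentwise sum
assert apply(A, s) == (0, 0)                            # the sum is again a solution
assert apply(A, tuple(lam * c for c in x)) == (0, 0)    # and so is lam*x
```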

The equation has a set of solutions that is a linear subspace. But the same equation can be considered as the equation of a plane in some rectangular coordinate system. The plane passes through the origin, and the radius vectors of all its points form a two-dimensional subspace of the linear space.

The set of solutions of a homogeneous SLAE

also forms a linear subspace. At the same time, this system can be considered as the general equations of a line in space, given in some rectangular coordinate system. This line passes through the origin, and the set of radius vectors of all its points forms a one-dimensional subspace.

Example 2.3. In the linear space of square matrices of order n, a linear subspace is formed by:

a) all symmetric matrices;

b) all skew-symmetric matrices;

c) all upper (lower) triangular matrices.

When adding such matrices or multiplying them by a number, we obtain a matrix of the same kind. In contrast, the subset of degenerate (singular) matrices is not a linear subspace, since the sum of two degenerate matrices can be a non-degenerate matrix:

Example 2.4. In the linear space of functions continuous on a segment, the following linear subspaces can be distinguished:

a) the set of functions that are continuous on the segment and continuously differentiable in the interval (0,1) (this statement is based on the properties of differentiable functions: the sum of differentiable functions is a differentiable function, and the product of a differentiable function by a number is a differentiable function);

b) the set of all polynomials;

c) the set of all polynomials of degree at most n.
