In mathematics, a linear combination is an expression constructed from a set of terms by multiplying each term by a constant and adding the results (e.g. a linear combination of x and y would be any expression of the form ax + by, where a and b are constants).^{[1]}^{[2]}^{[3]} The concept of linear combinations is central to linear algebra and related fields of mathematics. Most of this article deals with linear combinations in the context of a vector space over a field, with some generalizations given at the end of the article.
Definition
Suppose that K is a field (for example, the real numbers) and V is a vector space over K. As usual, we call elements of V vectors and call elements of K scalars. If v_{1},...,v_{n} are vectors and a_{1},...,a_{n} are scalars, then the linear combination of those vectors with those scalars as coefficients is

a_1 v_1 + a_2 v_2 + a_3 v_3 + \cdots + a_n v_n. \,
There is some ambiguity in the use of the term "linear combination" as to whether it refers to the expression or to its value. In most cases the value is emphasized, as in the assertion "the set of all linear combinations of v_{1},...,v_{n} always forms a subspace". However, one could also say "two different linear combinations can have the same value", in which case the expression must have been meant. The subtle difference between these uses is the essence of the notion of linear dependence: a family F of vectors is linearly independent precisely if any linear combination of the vectors in F (as value) is uniquely so (as expression). In any case, even when viewed as expressions, all that matters about a linear combination is the coefficient of each v_{i}; trivial modifications such as permuting the terms or adding terms with zero coefficient do not give distinct linear combinations.
In a given situation, K and V may be specified explicitly, or they may be obvious from context. In that case, we often speak of a linear combination of the vectors v_{1},...,v_{n}, with the coefficients unspecified (except that they must belong to K). Or, if S is a subset of V, we may speak of a linear combination of vectors in S, where both the coefficients and the vectors are unspecified, except that the vectors must belong to the set S (and the coefficients must belong to K). Finally, we may speak simply of a linear combination, where nothing is specified (except that the vectors must belong to V and the coefficients must belong to K); in this case one is probably referring to the expression, since every vector in V is certainly the value of some linear combination.
Note that by definition, a linear combination involves only finitely many vectors (except as described in Generalizations below). However, the set S that the vectors are taken from (if one is mentioned) can still be infinite; each individual linear combination will only involve finitely many vectors. Also, there is no reason that n cannot be zero; in that case, we declare by convention that the result of the linear combination is the zero vector in V.
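The definition above translates directly into code. The following is a minimal sketch in plain Python (the function name and tuple representation are illustrative, not from the article); it also implements the n = 0 convention, returning the zero vector for the empty combination:

```python
def linear_combination(coefficients, vectors, dim):
    """Evaluate a_1 v_1 + ... + a_n v_n for vectors given as tuples of length dim.

    The empty combination (n = 0) gives the zero vector, by convention.
    """
    result = [0] * dim
    for a, v in zip(coefficients, vectors):
        for i in range(dim):
            result[i] += a * v[i]
    return result

print(linear_combination([2, -1], [(1, 0, 0), (0, 1, 0)], dim=3))  # [2, -1, 0]
print(linear_combination([], [], dim=3))                           # [0, 0, 0]
```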
Examples and counterexamples
Let the field K be the set R of real numbers, and let the vector space V be the Euclidean space R^{3}. Consider the vectors e_{1} = (1,0,0), e_{2} = (0,1,0) and e_{3} = (0,0,1). Then any vector in R^{3} is a linear combination of e_{1}, e_{2} and e_{3}.
To see that this is so, take an arbitrary vector (a_{1},a_{2},a_{3}) in R^{3}, and write:

( a_1 , a_2 , a_3) = ( a_1 ,0,0) + (0, a_2 ,0) + (0,0, a_3) \,


= a_1 (1,0,0) + a_2 (0,1,0) + a_3 (0,0,1) \,

= a_1 e_1 + a_2 e_2 + a_3 e_3. \,
Let K be the set C of all complex numbers, and let V be the set C_{C}(R) of all continuous functions from the real line R to the complex plane C. Consider the vectors (functions) f and g defined by f(t) := e^{it} and g(t) := e^{−it}. (Here, e is the base of the natural logarithm, about 2.71828..., and i is the imaginary unit, a square root of −1.) Some linear combinations of f and g are:

\cos t = \tfrac12 e^{i t} + \tfrac12 e^{-i t} \,

2 \sin t = (-i) e^{i t} + (i) e^{-i t}. \,
On the other hand, the constant function 3 is not a linear combination of f and g. To see this, suppose that 3 could be written as a linear combination of e^{it} and e^{−it}. This means that there would exist complex scalars a and b such that ae^{it} + be^{−it} = 3 for all real numbers t. Setting t = 0 gives a + b = 3, while setting t = π gives −a − b = 3 (since e^{iπ} = e^{−iπ} = −1, by Euler's identity), that is, a + b = −3; clearly both cannot hold.
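The two identities above can be checked numerically with Python's standard cmath module (the variable names here are illustrative):

```python
import cmath
import math

t = 0.7  # an arbitrary real number
f = cmath.exp(1j * t)   # f(t) = e^{it}
g = cmath.exp(-1j * t)  # g(t) = e^{-it}

# cos t = (1/2) f + (1/2) g   and   2 sin t = (-i) f + (i) g
cos_combo = 0.5 * f + 0.5 * g
sin_combo = -1j * f + 1j * g

print(abs(cos_combo - math.cos(t)))      # ~0 (rounding error only)
print(abs(sin_combo - 2 * math.sin(t)))  # ~0
```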
Let K be R, C, or any field, and let V be the set P of all polynomials with coefficients taken from the field K. Consider the vectors (polynomials) p_{1} := 1, p_{2} := x + 1, and p_{3} := x^{2} + x + 1.
Is the polynomial x^{2} − 1 a linear combination of p_{1}, p_{2}, and p_{3}? To find out, consider an arbitrary linear combination of these vectors and try to see when it equals the desired vector x^{2} − 1. Picking arbitrary coefficients a_{1}, a_{2}, and a_{3}, we want

a_1 (1) + a_2 ( x + 1) + a_3 ( x^2 + x + 1) = x^2 - 1. \,
Multiplying the polynomials out, this means

( a_1 ) + ( a_2 x + a_2) + ( a_3 x^2 + a_3 x + a_3) = x^2 - 1 \,
and collecting like powers of x, we get

a_3 x^2 + ( a_2 + a_3 ) x + ( a_1 + a_2 + a_3 ) = 1 x^2 + 0 x + (-1). \,
Two polynomials are equal if and only if their corresponding coefficients are equal, so we can conclude

a_3 = 1, \quad a_2 + a_3 = 0, \quad a_1 + a_2 + a_3 = -1. \,
This system of linear equations is easily solved. The first equation says that a_{3} is 1. Substituting this into the second equation gives a_{2} = −1, and the third equation then gives a_{1} = −1. Therefore, the only possible way to get a linear combination is with these coefficients. Indeed,

x^2 - 1 = -1 - ( x + 1) + ( x^2 + x + 1) = - p_1 - p_2 + p_3 \,
so x^{2} − 1 is a linear combination of p_{1}, p_{2}, and p_{3}.
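Assuming the NumPy library is available, the same system can also be solved numerically; the matrix rows below correspond to matching the x^{2}, x, and constant coefficients in turn:

```python
import numpy as np

# Rows encode: a_3 = 1,  a_2 + a_3 = 0,  a_1 + a_2 + a_3 = -1
A = np.array([[0.0, 0.0, 1.0],   # x^2 coefficient: a_3
              [0.0, 1.0, 1.0],   # x^1 coefficient: a_2 + a_3
              [1.0, 1.0, 1.0]])  # constant term:   a_1 + a_2 + a_3
b = np.array([1.0, 0.0, -1.0])

a1, a2, a3 = np.linalg.solve(A, b)
print(a1, a2, a3)  # -1.0 -1.0 1.0
```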
On the other hand, what about the polynomial x^{3} − 1? If we try to make this vector a linear combination of p_{1}, p_{2}, and p_{3}, then following the same process as before, we’ll get the equation

0 x^3 + a_3 x^2 + ( a_2 + a_3 ) x + ( a_1 + a_2 + a_3 ) \,

= 1 x^3 + 0 x^2 + 0 x + (-1). \,
However, when we set corresponding coefficients equal in this case, the equation for x^{3} is

0 = 1 \,
which is always false. Therefore, there is no way for this to work, and x^{3} − 1 is not a linear combination of p_{1}, p_{2}, and p_{3}.
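This conclusion can also be reached computationally. Writing p_{1}, p_{2}, p_{3} as coefficient vectors in the basis (1, x, x^{2}, x^{3}), the target x^{3} − 1 lies in their span exactly when appending it as a column does not raise the rank of the coefficient matrix (a sketch assuming NumPy):

```python
import numpy as np

# Columns are p_1 = 1, p_2 = x + 1, p_3 = x^2 + x + 1,
# written in the basis (1, x, x^2, x^3), one basis monomial per row.
P = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
target = np.array([-1.0, 0.0, 0.0, 1.0])  # x^3 - 1

# The linear system P a = target is consistent iff the augmented
# matrix has the same rank as P (Rouché-Capelli criterion).
consistent = (np.linalg.matrix_rank(np.column_stack([P, target]))
              == np.linalg.matrix_rank(P))
print(consistent)  # False
```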
The linear span
Main article: linear span
Take an arbitrary field K, an arbitrary vector space V, and let v_{1},...,v_{n} be vectors in V. It is interesting to consider the set of all linear combinations of these vectors. This set is called the linear span (or just span) of the vectors, say S = {v_{1},...,v_{n}}. We write the span of S as span(S):

\mathrm{span}( v_1 ,\ldots, v_n) := \{ a_1 v_1 + \cdots + a_n v_n : a_1 ,\ldots, a_n \in K \}. \,
Linear independence
For some sets of vectors v_{1},...,v_{n}, a single vector can be written in two different ways as a linear combination of them:

v = \sum a_i v_i = \sum b_i v_i\text{ where } (a_i) \neq (b_i).
Equivalently, by subtracting these (c_i := a_i - b_i), a nontrivial combination is zero:

0 = \sum c_i v_i.
If that is possible, then v_{1},...,v_{n} are called linearly dependent; otherwise, they are linearly independent. Similarly, we can speak of linear dependence or independence of an arbitrary set S of vectors.
If S is linearly independent and the span of S equals V, then S is a basis for V.
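A common computational test for independence (a sketch assuming NumPy; the helper name is illustrative): a finite list of vectors is linearly independent exactly when the matrix having them as rows has rank equal to the number of vectors.

```python
import numpy as np

def linearly_independent(vectors):
    """Vectors are independent iff their matrix has full row rank."""
    M = np.array(vectors, dtype=float)
    return np.linalg.matrix_rank(M) == len(vectors)

print(linearly_independent([(1, 0, 0), (0, 1, 0), (0, 0, 1)]))  # True
print(linearly_independent([(1, 2), (2, 4)]))                   # False: (2,4) = 2*(1,2)
```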
Affine, conical, and convex combinations
By restricting the coefficients used in linear combinations, one can define the related concepts of affine combination, conical combination, and convex combination, and the associated notions of sets closed under these operations.
Because these are more restricted operations, more subsets will be closed under them, so affine subsets, convex cones, and convex sets are generalizations of vector subspaces: a vector subspace is also an affine subspace, a convex cone, and a convex set, but a convex set need not be a vector subspace, an affine subspace, or a convex cone.
These concepts often arise when one can take certain linear combinations of objects, but not any: for example, probability distributions are closed under convex combination (they form a convex set), but not conical or affine combinations (or linear), and positive measures are closed under conical combination but not affine or linear – hence one defines signed measures as the linear closure.
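As a small illustration of that closure property (plain Python, with illustrative names): a convex combination of two probability distributions is again a probability distribution, since the entries stay nonnegative and still sum to 1.

```python
# Two probability distributions on a two-point space, and a convex weight.
p = [0.2, 0.8]
q = [0.5, 0.5]
lam = 0.3  # coefficient in [0, 1]; the weights lam and 1 - lam sum to 1

mix = [lam * pi + (1 - lam) * qi for pi, qi in zip(p, q)]
print(mix)  # entries are nonnegative and sum to 1 (up to rounding)
```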
Linear and affine combinations can be defined over any field (or ring), but conical and convex combination require a notion of "positive", and hence can only be defined over an ordered field (or ordered ring), generally the real numbers.
If one allows only scalar multiplication, not addition, one obtains a (not necessarily convex) cone; one often restricts the definition to only allowing multiplication by positive scalars.
All of these concepts are usually defined as subsets of an ambient vector space (except for affine spaces, which are also considered as "vector spaces forgetting the origin"), rather than being axiomatized independently.
Operad theory
More abstractly, in the language of operad theory, one can consider vector spaces to be algebras over the operad \mathbf{R}^\infty (the infinite direct sum, so only finitely many terms are nonzero; this corresponds to only taking finite sums), which parametrizes linear combinations: the vector (2,3,5,0,\dots), for instance, corresponds to the linear combination 2v_1 + 3v_2 + 5v_3 + 0v_4 + \cdots. Similarly, one can consider affine combinations, conical combinations, and convex combinations to correspond to the suboperads where the terms sum to 1, the terms are all nonnegative, or both, respectively. Graphically, these are the infinite affine hyperplane, the infinite hyperoctant, and the infinite simplex. This formalizes what is meant by \mathbf{R}^n or the standard simplex being model spaces, and such observations as that every bounded convex polytope is the image of a simplex. Here suboperads correspond to more restricted operations and thus more general theories.
From this point of view, we can think of linear combinations as the most general sort of operation on a vector space – saying that a vector space is an algebra over the operad of linear combinations is precisely the statement that all possible algebraic operations in a vector space are linear combinations.
The basic operations of addition and scalar multiplication, together with the existence of an additive identity and additive inverses, cannot be combined in any more complicated way than the generic linear combination: the basic operations are a generating set for the operad of all linear combinations.
Ultimately, this fact lies at the heart of the usefulness of linear combinations in the study of vector spaces.
Generalizations
If V is a topological vector space, then there may be a way to make sense of certain infinite linear combinations, using the topology of V. For example, we might be able to speak of a_{1}v_{1} + a_{2}v_{2} + a_{3}v_{3} + ..., going on forever. Such infinite linear combinations do not always make sense; we call them convergent when they do. Allowing more linear combinations in this case can also lead to a different concept of span, linear independence, and basis. The articles on the various flavours of topological vector spaces go into more detail about these.
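For example, in a suitable topological vector space of functions, the exponential function is the value of a convergent infinite linear combination of the monomials 1, x, x^{2}, ...:

e^x = \sum_{n=0}^{\infty} \frac{1}{n!} x^n. \,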
If K is a commutative ring instead of a field, then everything that has been said above about linear combinations generalizes to this case without change. The only difference is that we call spaces like this V modules instead of vector spaces. If K is a noncommutative ring, then the concept still generalizes, with one caveat: Since modules over noncommutative rings come in left and right versions, our linear combinations may also come in either of these versions, whatever is appropriate for the given module. This is simply a matter of doing scalar multiplication on the correct side.
A more complicated twist comes when V is a bimodule over two rings, K_{L} and K_{R}. In that case, the most general linear combination looks like

a_1 v_1 b_1 + \cdots + a_n v_n b_n \,
where a_{1},...,a_{n} belong to K_{L}, b_{1},...,b_{n} belong to K_{R}, and v_{1},...,v_{n} belong to V.
References

^ Lay, David C. (2006). Linear Algebra and Its Applications (3rd ed.).
^
^ Axler, Sheldon (2002). Linear Algebra Done Right (2nd ed.).
External links

Linear Combinations and Span: Understanding linear combinations and spans of vectors, khanacademy.org.