In mathematics, the dot product or scalar product^{[note 1]} is an algebraic operation that takes two equal-length sequences of numbers (usually coordinate vectors) and returns a single number. In Euclidean geometry, the dot product of the Cartesian coordinates of two vectors is widely used. It is often called the inner product (or rarely the projection product) of Euclidean space, even though it is not the only inner product that can be defined on Euclidean space (see Inner product space for more).
Algebraically, the dot product is the sum of the products of the corresponding entries of the two sequences of numbers. Geometrically, it is the product of the Euclidean magnitudes of the two vectors and the cosine of the angle between them. These definitions are equivalent when using Cartesian coordinates. In modern geometry, Euclidean spaces are often defined by using vector spaces. In this case, the dot product is used for defining lengths (the length of a vector is the square root of the dot product of the vector by itself) and angles (the cosine of the angle between two vectors is the quotient of their dot product by the product of their lengths).
The name "dot product" is derived from the centered dot " · " that is often used to designate this operation;^{[1]} the alternative name "scalar product" emphasizes that the result is a scalar, rather than a vector (as with the vector product in three-dimensional space).
Definition
The dot product may be defined algebraically or geometrically. The geometric definition is based on the notions of angle and distance (magnitude) of vectors. The equivalence of these two definitions relies on having a Cartesian coordinate system for Euclidean space.
In modern presentations of Euclidean geometry, the points of space are defined in terms of their Cartesian coordinates, and Euclidean space itself is commonly identified with the real coordinate space $\mathbf {R} ^{n}$. In such a presentation, the notions of length and angle are defined by means of the dot product. The length of a vector is defined as the square root of the dot product of the vector by itself, and the cosine of the (non-oriented) angle between two vectors of length one is defined as their dot product. So the equivalence of the two definitions of the dot product is a part of the equivalence of the classical and the modern formulations of Euclidean geometry.
Coordinate definition
The dot product of two vectors $\mathbf {a} =[a_{1},a_{2},\cdots ,a_{n}]$ and $\mathbf {b} =[b_{1},b_{2},\cdots ,b_{n}]$, specified with respect to an orthonormal basis, is defined as:^{[2]}
$\mathbf {a} \cdot \mathbf {b} =\sum _{i=1}^{n}a_{i}b_{i}=a_{1}b_{1}+a_{2}b_{2}+\cdots +a_{n}b_{n}.$
If vectors are identified with column vectors, the dot product can also be written as the matrix product
$\mathbf {a} \cdot \mathbf {b} =\mathbf {a} ^{\mathsf {T}}\mathbf {b} ,$
where $\mathbf {a} ^{\mathsf {T}}$ denotes the transpose of $\mathbf {a}$.
Expressed in this way, a 1 × 3 matrix (row vector) is multiplied by a 3 × 1 matrix (column vector) to give a 1 × 1 matrix, which is identified with its unique entry.
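A minimal sketch of the coordinate definition in Python; the helper name `dot` and the example vectors are our own illustrations, not taken from the article:

```python
def dot(a, b):
    """Dot product: the sum of products of corresponding entries."""
    if len(a) != len(b):
        raise ValueError("vectors must have equal length")
    return sum(x * y for x, y in zip(a, b))

# For 1-D sequences, the row-vector-times-column-vector matrix product
# reduces to this same sum of products.
print(dot([1, 3, -5], [4, -2, -1]))  # 1*4 + 3*(-2) + (-5)*(-1) = 3
```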
Geometric definition
In Euclidean space, a Euclidean vector is a geometric object that possesses both a magnitude and a direction. A vector can be pictured as an arrow. Its magnitude is its length, and its direction is the direction to which the arrow points. The magnitude of a vector $\mathbf {a}$ is denoted by $\left\|\mathbf {a} \right\|$. The dot product of two Euclidean vectors $\mathbf {a}$ and $\mathbf {b}$ is defined by^{[3]}^{[4]}^{[1]}
$\mathbf {a} \cdot \mathbf {b} =\left\|\mathbf {a} \right\|\left\|\mathbf {b} \right\|\cos \theta ,$
where $\theta$ is the angle between $\mathbf {a}$ and $\mathbf {b}$.
In particular, if the vectors $\mathbf {a}$ and $\mathbf {b}$ are orthogonal (i.e., their angle is ${\frac {\pi }{2}}$ or $90^{\circ }$), then $\cos {\frac {\pi }{2}}=0$, which implies that
$\mathbf {a} \cdot \mathbf {b} =0.$
At the other extreme, if they are codirectional, then the angle between them is zero with $\cos 0=1$ and
$\mathbf {a} \cdot \mathbf {b} =\left\|\mathbf {a} \right\|\left\|\mathbf {b} \right\|.$
These properties may be summarized by saying that the dot product is a bilinear form. Moreover, this bilinear form is positive definite, which means that $\mathbf {a} \cdot \mathbf {a}$ is never negative, and is zero if and only if $\mathbf {a} =\mathbf {0}$, the zero vector.
Equivalence of the definitions
If $\mathbf {e} _{1},\cdots ,\mathbf {e} _{n}$ are the standard basis vectors in $\mathbf {R} ^{n}$, then we may write
$\mathbf {a} =\sum _{i=1}^{n}a_{i}\mathbf {e} _{i},\qquad \mathbf {b} =\sum _{i=1}^{n}b_{i}\mathbf {e} _{i}.$
The vectors $\mathbf {e} _{i}$ are an orthonormal basis, which means that they have unit length and are at right angles to each other. Since these vectors have unit length,
$\mathbf {e} _{i}\cdot \mathbf {e} _{i}=1,$
and since they form right angles with each other, if $i\neq j$,
$\mathbf {e} _{i}\cdot \mathbf {e} _{j}=0.$
Also, by the geometric definition, for any basis vector $\mathbf {e} _{i}$ and a vector $\mathbf {a}$,
$\mathbf {a} \cdot \mathbf {e} _{i}=\left\|\mathbf {a} \right\|\left\|\mathbf {e} _{i}\right\|\cos \theta _{i}=\left\|\mathbf {a} \right\|\cos \theta _{i}=a_{i},$
where $a_{i}$ is the component of vector $\mathbf {a}$ in the direction of $\mathbf {e} _{i}$. The last step in the equality can be seen from the figure.
Now applying the distributivity of the geometric version of the dot product gives
$\mathbf {a} \cdot \mathbf {b} =\mathbf {a} \cdot \sum _{i=1}^{n}b_{i}\mathbf {e} _{i}=\sum _{i=1}^{n}b_{i}\left(\mathbf {a} \cdot \mathbf {e} _{i}\right)=\sum _{i=1}^{n}b_{i}a_{i}=\sum _{i=1}^{n}a_{i}b_{i},$
which is precisely the algebraic definition of the dot product. So the geometric dot product equals the algebraic dot product.
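The equivalence can be checked numerically. A sketch (with our own illustrative lengths and directions) that builds two planar vectors from polar data and compares the sum-of-products definition against $\left\|\mathbf {a} \right\|\left\|\mathbf {b} \right\|\cos \theta$:

```python
import math

# Two planar vectors specified by length and direction (illustrative values).
len_a, ang_a = 2.0, math.radians(30)
len_b, ang_b = 3.0, math.radians(75)

a = [len_a * math.cos(ang_a), len_a * math.sin(ang_a)]
b = [len_b * math.cos(ang_b), len_b * math.sin(ang_b)]

algebraic = sum(x * y for x, y in zip(a, b))          # sum of products
geometric = len_a * len_b * math.cos(ang_b - ang_a)   # |a| |b| cos(theta)

assert math.isclose(algebraic, geometric)
```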
Properties
The dot product fulfills the following properties if $\mathbf {a}$, $\mathbf {b}$, and $\mathbf {c}$ are real vectors and $r$, $c_{1}$ and $c_{2}$ are scalars.^{[2]}^{[3]}
Commutative: $\mathbf {a} \cdot \mathbf {b} =\mathbf {b} \cdot \mathbf {a} .$
Distributive over vector addition: $\mathbf {a} \cdot (\mathbf {b} +\mathbf {c} )=\mathbf {a} \cdot \mathbf {b} +\mathbf {a} \cdot \mathbf {c} .$
Bilinear: $\mathbf {a} \cdot (r\mathbf {b} +\mathbf {c} )=r(\mathbf {a} \cdot \mathbf {b} )+(\mathbf {a} \cdot \mathbf {c} ).$
Scalar multiplication: $(c_{1}\mathbf {a} )\cdot (c_{2}\mathbf {b} )=c_{1}c_{2}(\mathbf {a} \cdot \mathbf {b} ).$
Not associative, because the dot product between a scalar $\mathbf {a} \cdot \mathbf {b}$ and a vector $\mathbf {c}$ is not defined, which means that the expressions involved in the associative property, $(\mathbf {a} \cdot \mathbf {b} )\cdot \mathbf {c}$ and $\mathbf {a} \cdot (\mathbf {b} \cdot \mathbf {c} )$, are both ill-defined.^{[7]} Note however that the previously mentioned scalar multiplication property is sometimes called the "associative law for scalar and dot product",^{[8]} or one can say that "the dot product is associative with respect to scalar multiplication" because $c(\mathbf {a} \cdot \mathbf {b} )=(c\mathbf {a} )\cdot \mathbf {b} =\mathbf {a} \cdot (c\mathbf {b} )$.^{[9]}
Unlike multiplication of ordinary numbers, where if $ab=ac$ then $b$ always equals $c$ unless $a$ is zero, the dot product does not obey the cancellation law: if $\mathbf {a} \cdot \mathbf {b} =\mathbf {a} \cdot \mathbf {c}$ and $\mathbf {a} \neq \mathbf {0}$, then by the distributive law we can write $\mathbf {a} \cdot (\mathbf {b} -\mathbf {c} )=0$; the result above says this just means that $\mathbf {a}$ is perpendicular to $(\mathbf {b} -\mathbf {c} )$, which still allows $(\mathbf {b} -\mathbf {c} )\neq \mathbf {0}$, and therefore allows $\mathbf {b} \neq \mathbf {c}$.
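A small sketch of this failure of cancellation, with hand-picked example vectors of our own: $\mathbf {b}$ and $\mathbf {c}$ differ, yet their dot products with $\mathbf {a}$ agree, because $\mathbf {b} -\mathbf {c}$ is perpendicular to $\mathbf {a}$.

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

a = [1, 0]
b = [2, 3]
c = [2, 7]  # differs from b only in the direction perpendicular to a

assert dot(a, b) == dot(a, c)  # equal dot products...
assert b != c                  # ...yet b != c: no cancellation law
assert dot(a, [x - y for x, y in zip(b, c)]) == 0  # a is perpendicular to b - c
```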
If $\mathbf {a}$ and $\mathbf {b}$ are vector-valued differentiable functions, then the derivative (denoted by a prime ${}'$) of $\mathbf {a} \cdot \mathbf {b}$ is given by the rule
$(\mathbf {a} \cdot \mathbf {b} )'=\mathbf {a} '\cdot \mathbf {b} +\mathbf {a} \cdot \mathbf {b} '.$
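This product rule, $(\mathbf {a} \cdot \mathbf {b} )'=\mathbf {a} '\cdot \mathbf {b} +\mathbf {a} \cdot \mathbf {b} '$, can be sanity-checked with a central finite difference; the curves $\mathbf {a} (t)$ and $\mathbf {b} (t)$ below are our own toy example:

```python
import math

def a(t):  return [math.cos(t), math.sin(t)]
def b(t):  return [t, t * t]

def da(t): return [-math.sin(t), math.cos(t)]  # a'(t)
def db(t): return [1.0, 2.0 * t]               # b'(t)

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

t, h = 0.7, 1e-6
# Central difference approximation of d/dt (a . b) ...
numeric = (dot(a(t + h), b(t + h)) - dot(a(t - h), b(t - h))) / (2 * h)
# ... versus the product rule a'.b + a.b'
exact = dot(da(t), b(t)) + dot(a(t), db(t))
assert math.isclose(numeric, exact, rel_tol=1e-6)
```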
Given two vectors $\mathbf {a}$ and $\mathbf {b}$ separated by angle $\theta$ (see image right), they form a triangle with a third side $\mathbf {c} =\mathbf {a} -\mathbf {b}$. Let $a$, $b$ and $c$ denote the lengths of $\mathbf {a}$, $\mathbf {b}$, and $\mathbf {c}$, respectively. The dot product of this with itself is:
$\mathbf {c} \cdot \mathbf {c} =(\mathbf {a} -\mathbf {b} )\cdot (\mathbf {a} -\mathbf {b} )=\mathbf {a} \cdot \mathbf {a} -2(\mathbf {a} \cdot \mathbf {b} )+\mathbf {b} \cdot \mathbf {b} =a^{2}-2ab\cos \theta +b^{2},$
which is the law of cosines.
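A numerical spot-check of the resulting identity $\mathbf {c} \cdot \mathbf {c} =\mathbf {a} \cdot \mathbf {a} -2(\mathbf {a} \cdot \mathbf {b} )+\mathbf {b} \cdot \mathbf {b}$ (the law of cosines), using our own example vectors:

```python
import math

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

a = [3.0, 1.0]
b = [1.0, 2.0]
c = [ai - bi for ai, bi in zip(a, b)]  # third side of the triangle

lhs = dot(c, c)
rhs = dot(a, a) - 2 * dot(a, b) + dot(b, b)
assert math.isclose(lhs, rhs)
```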
The scalar triple product of three vectors is defined as
$\mathbf {a} \cdot (\mathbf {b} \times \mathbf {c} ).$
Its value is the determinant of the matrix whose columns are the Cartesian coordinates of the three vectors. It is the signed volume of the parallelepiped defined by the three vectors, and is isomorphic to the three-dimensional special case of the exterior product of three vectors.
The vector triple product is defined by^{[2]}^{[3]}
$\mathbf {a} \times (\mathbf {b} \times \mathbf {c} )=(\mathbf {a} \cdot \mathbf {c} )\,\mathbf {b} -(\mathbf {a} \cdot \mathbf {b} )\,\mathbf {c} .$
This identity, also known as Lagrange's formula, may be remembered as "ACB minus ABC", keeping in mind which vectors are dotted together. This formula has applications in simplifying vector calculations in physics.
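The identity can be verified on integer vectors, where both sides agree exactly; the helpers and test vectors below are our own:

```python
def cross(u, v):
    """Cross product of two 3-D vectors."""
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def dot(u, v):   return sum(x * y for x, y in zip(u, v))
def scale(s, u): return [s * x for x in u]
def sub(u, v):   return [x - y for x, y in zip(u, v)]

a, b, c = [1, 2, 3], [4, 5, 6], [7, 8, 10]

lhs = cross(a, cross(b, c))
rhs = sub(scale(dot(a, c), b), scale(dot(a, b), c))  # (a.c)b - (a.b)c
assert lhs == rhs
```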
Physics
In physics, vector magnitude is a scalar in the physical sense (i.e., a physical quantity independent of the coordinate system), expressed as the product of a numerical value and a physical unit, not just a number. The dot product is also a scalar in this sense, given by the formula, independent of the coordinate system. For example:^{[10]}^{[11]}
Mechanical work is the dot product of the force and displacement vectors.
Power is the dot product of force and velocity.
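For instance, mechanical work for a constant force is the dot product of force and displacement, $W=\mathbf {F} \cdot \mathbf {s}$. A sketch with invented values (the units in the comments are assumptions for the illustration):

```python
# Work W = F . s for a constant force along a displacement
# (illustrative values: force in newtons, displacement in metres).
force = [10.0, 0.0, 5.0]         # N
displacement = [2.0, 1.0, 0.0]   # m

work = sum(f, s) if False else sum(f * s for f, s in zip(force, displacement))
print(work)  # 20.0 (joules)
```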
Complex vectors
For vectors with complex entries, using the given definition of the dot product would lead to quite different properties. For instance, the dot product of a vector with itself could be zero without the vector being the zero vector (e.g. this would happen with the vector $\mathbf {a} =[1\ i]$). This in turn would have consequences for notions like length and angle. Properties such as the positive-definite norm can be salvaged at the cost of giving up the symmetric and bilinear properties of the dot product, through the alternative definition^{[12]}^{[2]}
$\mathbf {a} \cdot \mathbf {b} =\sum _{i}{\overline {a_{i}}}\,b_{i},$
where ${\overline {a_{i}}}$ denotes the complex conjugate of $a_{i}$.
In the case of vectors with real components, this definition is the same as in the real case. The dot product of any vector with itself is a non-negative real number, and it is nonzero except for the zero vector. However, the complex dot product is sesquilinear rather than bilinear, as it is conjugate linear and not linear in $\mathbf {a}$. The dot product is not symmetric, since
$\mathbf {a} \cdot \mathbf {b} ={\overline {\mathbf {b} \cdot \mathbf {a} }}.$
The self dot product of a complex vector $\mathbf {a} \cdot \mathbf {a} =\mathbf {a} ^{\mathsf {H}}\mathbf {a}$, involving the conjugate transpose of a row vector, is also known as the norm squared, ${\textstyle \mathbf {a} \cdot \mathbf {a} =\|\mathbf {a} \|^{2}}$, after the Euclidean norm; it is a vector generalization of the absolute square of a complex scalar (see also: squared Euclidean distance).
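A sketch of the complex definition in Python, with a conjugate-linear first argument as above (`cdot` is our own helper name):

```python
def cdot(a, b):
    """Complex dot product: sum of conj(a_i) * b_i (conjugate-linear in a)."""
    return sum(x.conjugate() * y for x, y in zip(a, b))

a = [1, 1j]

# The naive (unconjugated) sum of products vanishes for this nonzero vector...
assert sum(x * y for x, y in zip(a, a)) == 0
# ...while the conjugated definition gives a positive real self-product:
assert cdot(a, a) == 2
# Hermitian symmetry rather than plain symmetry:
b = [2, 3 + 1j]
assert cdot(a, b) == cdot(b, a).conjugate()
```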
The inner product of two vectors over the field of complex numbers is, in general, a complex number, and is sesquilinear instead of bilinear. An inner product space is a normed vector space, and the inner product of a vector with itself is real and positive-definite.
Functions
The dot product is defined for vectors that have a finite number of entries. Thus these vectors can be regarded as discrete functions: a length-$n$ vector $u$ is, then, a function with domain $\{k\in \mathbb {N} :1\leq k\leq n\}$, and $u_{i}$ is a notation for the image of $i$ by the function/vector $u$.
This notion can be generalized to continuous functions: just as the inner product on vectors uses a sum over corresponding components, the inner product on functions is defined as an integral over some interval $[a,b]$:^{[2]}
$\left\langle u,v\right\rangle =\int _{a}^{b}u(x)\,v(x)\,dx.$
Inner products can have a weight function (i.e., a function which weights each term of the inner product with a value). Explicitly, the inner product of functions $u(x)$ and $v(x)$ with respect to the weight function $r(x)>0$ is
$\left\langle u,v\right\rangle =\int _{a}^{b}r(x)\,u(x)\,v(x)\,dx.$
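A sketch that approximates such inner products with a midpoint-rule integral; the helper name `inner`, the weight, and the tolerances are our own choices:

```python
import math

def inner(u, v, a, b, r=lambda x: 1.0, n=100_000):
    """<u, v> = integral over [a, b] of r(x) u(x) v(x) dx, midpoint rule."""
    h = (b - a) / n
    return sum(r(x) * u(x) * v(x) * h
               for k in range(n)
               for x in [a + (k + 0.5) * h])

# sin and cos are orthogonal on [0, 2*pi] with unit weight:
assert abs(inner(math.sin, math.cos, 0.0, 2 * math.pi)) < 1e-9

# A weight r(x) = x changes the value: here <sin, sin> over [0, pi]
# becomes pi^2 / 4 instead of pi / 2.
val = inner(math.sin, math.sin, 0.0, math.pi, r=lambda x: x)
assert math.isclose(val, math.pi ** 2 / 4, rel_tol=1e-6)
```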
A double-dot product for matrices is the Frobenius inner product, which is analogous to the dot product on vectors. It is defined as the sum of the products of the corresponding components of two matrices $\mathbf {A}$ and $\mathbf {B}$ of the same size:
$\mathbf {A} :\mathbf {B} =\sum _{i}\sum _{j}A_{ij}B_{ij}=\operatorname {tr} \left(\mathbf {A} ^{\mathsf {T}}\mathbf {B} \right).$
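A minimal sketch of the Frobenius inner product for plain nested-list matrices (helper name and example matrices are ours):

```python
def frobenius(A, B):
    """Sum of products of corresponding entries of same-size matrices."""
    return sum(x * y
               for row_a, row_b in zip(A, B)
               for x, y in zip(row_a, row_b))

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
assert frobenius(A, B) == 1*5 + 2*6 + 3*7 + 4*8  # 70
```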
The inner product between a tensor of order $n$ and a tensor of order $m$ is a tensor of order $n+m-2$; see Tensor contraction for details.
Computation
Algorithms
The straightforward algorithm for calculating a floating-point dot product of vectors can suffer from catastrophic cancellation. To avoid this, approaches such as the Kahan summation algorithm are used.
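A sketch of a compensated dot product in the spirit of Kahan summation; the example data are chosen so that naive accumulation loses the small terms entirely:

```python
def dot_kahan(a, b):
    """Dot product accumulated with Kahan (compensated) summation."""
    total = 0.0
    comp = 0.0  # running compensation for lost low-order bits
    for x, y in zip(a, b):
        term = x * y - comp
        t = total + term
        comp = (t - total) - term
        total = t
    return total

# Terms of wildly different magnitude: naive summation loses the 1.0 terms.
a = [1e16, 1.0, 1.0, -1e16]
b = [1.0, 1.0, 1.0, 1.0]
naive = sum(x * y for x, y in zip(a, b))  # 0.0: the two 1.0s vanish
assert dot_kahan(a, b) == 2.0
```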
Libraries
A dot product function is included in:
BLAS level 1: real SDOT, DDOT; complex CDOTU, ZDOTU (= X^T * Y) and CDOTC, ZDOTC (= X^H * Y)