A family of cross products of vectors.
Version of Sunday 3 July 2016.
Dave Barber's other pages.

§1a. Extensively used in physics is the cross product of precisely two vectors in precisely three dimensions. In this report, we denote this operation as "the 2-in-3".

Often overlooked is that this cross product can be generalized to other numbers of factors or dimensions. This report gives definitions of, and lists algebraic properties of, the largest family of extensions, which lie in the following sequence:

 "the 2-in-3" ⇔ 2 factors in 3 dimensions
 "the 3-in-4" ⇔ 3 factors in 4 dimensions
 "the 4-in-5" ⇔ 4 factors in 5 dimensions
 ...
 "the n-in-(n+1)" ⇔ n factors in n+1 dimensions

We call these the plus-ones. Outside this pattern are two isolated cross products, the 2-in-7 and the 3-in-8. Although they bear some resemblance to the plus-ones, these plus-fives are not covered here.

Later we will justify terming the inputs to these operations factors, and their outputs products.

§1b. Plus-ones are most conveniently explained using matrices, which here are represented simply (and somewhat primitively) by way of HTML tables. For example, a matrix with 3 rows and 4 columns, whose components are real numbers, might look like this:

M =
 [ +3.2  −4.7    0.0   +7.3 ]
 [ −0.6  −4.7   +8.9  +13.4 ]
 [ +2.9  +5.4  +11.6   −8.1 ]

Components of the matrix are denoted by subscripts beginning with zero (as with array indexing in the C family of computer programming languages) rather than one (as is the usual practice among mathematicians). Thus:

M =
 [ M00  M01  M02  M03 ]
 [ M10  M11  M12  M13 ]
 [ M20  M21  M22  M23 ]

The components of the matrix are regarded as scalars, in other words neither vectors nor matrices.

A row vector is regarded merely as a matrix with exactly one row:

R =
 [ +2.6  −9.0  0.0  +4.4  −0.7  −1.2  +7.3 ]

which is more compactly written:

R = [ +2.6, −9.0, 0.0, +4.4, −0.7, −1.2, +7.3 ]

Along the same lines, a column vector is regarded as a matrix with exactly one column:

C =
 [ +4.1 ]
 [ −2.2 ]
 [ +5.4 ]
 [ −1.8 ]
 [ +1.2 ]

also written:

C = [ +4.1, −2.2, +5.4, −1.8, +1.2 ]T

where the superscript T refers to the matrix transpose operation.

We employ the usual definitions of addition, subtraction, and multiplication of two matrices, as well as of multiplication of a matrix by a scalar; therefore we do not detail these operations here.

§1c. In much of the discussion below, it is not necessary to distinguish sharply between row vectors and column vectors, with implicit transposition performed when necessary. Indeed, we often write components of row or column vectors with only one subscript if no confusion will ensue. Thus these:

 R = [ R00, R01, R02, R03, R04, R05, R06 ]
 C = [ C00, C10, C20, C30, C40 ]T

become these:

 R = [ R0, R1, R2, R3, R4, R5, R6 ] C = [ C0, C1, C2, C3, C4 ]

Note that the superscript T is even omitted from the last.

In contrast, the supplied C++11 program consistently recognizes the difference, with an explicit transposition operation available when needed. The rigor of the program in this matter is an outgrowth of our decision to use very strong data typing in writing it.

§1d. What follows is a well-known rationale for establishing the plus-one sequence. First we define the 2-in-3 in a particular way.

Start with a three-dimensional vector space named B whose basis is { B0, B1, B2 }, where:

B0 = [ 1, 0, 0 ]
B1 = [ 0, 1, 0 ]
B2 = [ 0, 0, 1 ]

Now choose any two vectors U and V in B — it does not particularly matter whether we classify them as row or column vectors. Now the cross product Z = U × V can be defined by means of this determinant:

the 2-in-3: Z = U × V = det
 [ U0  U1  U2 ]
 [ V0  V1  V2 ]
 [ B0  B1  B2 ]

which expands to:

Z = (U1V2 − U2V1) B0 + (U2V0 − U0V2) B1 + (U0V1 − U1V0) B2

which could be written:

Z0 = U1V2 − U2V1
Z1 = U2V0 − U0V2
Z2 = U0V1 − U1V0

which corresponds to the component-by-component definition of the 2-in-3 given in many sources.
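The component formulas above translate directly into code. Here is a minimal sketch in C++ (cross3 is a hypothetical name for illustration; the supplied program's interfaces, described in §4, differ):

```cpp
#include <array>
#include <cassert>

// The 2-in-3 computed componentwise, exactly as in the expansion above.
std::array<double,3> cross3(const std::array<double,3>& u,
                            const std::array<double,3>& v) {
    return {{ u[1]*v[2] - u[2]*v[1],
              u[2]*v[0] - u[0]*v[2],
              u[0]*v[1] - u[1]*v[0] }};
}
```

For example, applying cross3 to the basis vectors B0 and B1 yields B2.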

§1e. The determinant can be defined for a square matrix of any size. With vectors and basis extended in the obvious manner to four dimensions, the 4-dimensional cross product of 3 vectors U × V × W becomes:

the 3-in-4: U × V × W = det
 [ U0  U1  U2  U3 ]
 [ V0  V1  V2  V3 ]
 [ W0  W1  W2  W3 ]
 [ B0  B1  B2  B3 ]

and the version of next-higher order is:

the 4-in-5: U × V × W × X = det
 [ U0  U1  U2  U3  U4 ]
 [ V0  V1  V2  V3  V4 ]
 [ W0  W1  W2  W3  W4 ]
 [ X0  X1  X2  X3  X4 ]
 [ B0  B1  B2  B3  B4 ]

All the inputs and the sole output of a plus-one operation are necessarily vectors with the same number of components.

Mathematicians usually write a cross product with infix notation, as we have done so far. However, prefix notation is equally valid, and is easier to use in many computer programming languages. These are equal:

 infix notation:  U × V × W
 prefix notation: cross (U, V, W)

Comment. Determinants provide a convenient notation for defining plus-ones, and there is little practical alternative for the higher orders. However, determinants have drawbacks for calculating plus-ones, in the higher orders requiring numerous invocations of arithmetical operations. Further, if non-exact arithmetic is used (as with floating point), there is risk of numerical instability.
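To make the determinant definition concrete, here is a sketch of a general plus-one computed by expanding the defining determinant along its basis row. The names det and cross are hypothetical, the recursion pays no attention to efficiency or numerical stability, and the supplied program's interfaces differ:

```cpp
#include <cstddef>
#include <vector>
#include <cassert>

// Recursive Laplace expansion along the first row; adequate for small sizes.
double det(const std::vector<std::vector<double>>& m) {
    std::size_t n = m.size();
    if (n == 1) return m[0][0];
    double sum = 0.0, sign = 1.0;
    for (std::size_t c = 0; c < n; ++c) {
        std::vector<std::vector<double>> sub;      // minor: drop row 0, column c
        for (std::size_t r = 1; r < n; ++r) {
            std::vector<double> row;
            for (std::size_t k = 0; k < n; ++k)
                if (k != c) row.push_back(m[r][k]);
            sub.push_back(row);
        }
        sum += sign * m[0][c] * det(sub);
        sign = -sign;
    }
    return sum;
}

// The n-in-(n+1): expand the defining determinant along the basis row.
// Component k of the product is the cofactor of B_k.
std::vector<double> cross(const std::vector<std::vector<double>>& factors) {
    std::size_t n = factors.size();   // number of factors
    std::size_t d = n + 1;            // dimensionality
    std::vector<double> z(d);
    for (std::size_t k = 0; k < d; ++k) {
        std::vector<std::vector<double>> sub;      // drop basis row, column k
        for (std::size_t r = 0; r < n; ++r) {
            std::vector<double> row;
            for (std::size_t c = 0; c < d; ++c)
                if (c != k) row.push_back(factors[r][c]);
            sub.push_back(row);
        }
        // cofactor sign for the basis row (row index n, column k)
        z[k] = (((n + k) % 2 == 0) ? 1.0 : -1.0) * det(sub);
    }
    return z;
}
```

The same function then handles the 2-in-3, the 3-in-4, and so on, with the number of factors determining the dimensionality.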

Plus-one cross products of smaller order than the 2-in-3 are of limited usefulness, but they do exist.

For the 1-in-2, infix notation does not work, and we write the definition using prefix notation:

the 1-in-2: cross (U) = det
 [ U0  U1 ]
 [ B0  B1 ]

The 1-in-2 has the effect of rotating the vector through an angle of 90 degrees.
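Expanding that determinant gives cross (U) = [ −U1, +U0 ], a quarter turn counterclockwise. A one-line sketch (cross1 is a hypothetical name):

```cpp
#include <array>
#include <cassert>

// The 1-in-2: det [[U0, U1], [B0, B1]] = -U1 B0 + U0 B1,
// i.e. a 90-degree counterclockwise rotation of U.
std::array<double,2> cross1(const std::array<double,2>& u) {
    return {{ -u[1], u[0] }};
}
```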

The 0-in-1 is a constant:

the 0-in-1: cross ( ) = det
 [ B0 ]

§2a. Related to the cross product is the box product. Here, the number of input vectors is equal to the number of dimensions, and the output is a scalar. Like the cross product, the box product is most easily defined with a matrix. Here is a four-dimensional example:

[ U, V, W, X ] = det
 [ U0  U1  U2  U3 ]
 [ V0  V1  V2  V3 ]
 [ W0  W1  W2  W3 ]
 [ X0  X1  X2  X3 ]

Other dimensionalities are analogous.
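As a concrete instance, the three-dimensional box product is the familiar scalar triple product. A sketch with the 3×3 determinant written out explicitly (box3 is a hypothetical name):

```cpp
#include <array>
#include <cassert>

// Three-dimensional box product [U, V, W]: the determinant of the 3x3
// matrix whose rows are U, V, W; equivalently U . (V x W).
double box3(const std::array<double,3>& u,
            const std::array<double,3>& v,
            const std::array<double,3>& w) {
    return u[0] * (v[1]*w[2] - v[2]*w[1])
         - u[1] * (v[0]*w[2] - v[2]*w[0])
         + u[2] * (v[0]*w[1] - v[1]*w[0]);
}
```

Exchanging two factors negates the result, as expected of a determinant.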

§2b. An important vector operation is the dot product of two vectors that have the same number of components. This operation is sometimes called the inner product, but researchers in inner product spaces might term it the standard inner product.

The dot product is easily introduced with a five-dimensional example. Given these:

U = [ U0, U1, U2, U3, U4 ]
V = [ V0, V1, V2, V3, V4 ]

then

U · V = U0V0 + U1V1 + U2V2 + U3V3 + U4V4

For other dimensionalities the procedure is completely analogous.
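A dimension-independent sketch (the name dot is hypothetical; the supplied program's interface differs):

```cpp
#include <cstddef>
#include <vector>
#include <cassert>

// Dot product of two vectors with the same number of components:
// the sum of the componentwise products.
double dot(const std::vector<double>& u, const std::vector<double>& v) {
    double sum = 0.0;
    for (std::size_t i = 0; i < u.size(); ++i)
        sum += u[i] * v[i];
    return sum;
}
```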

The dot product is not a vector, but rather is of the same data type as the components of the vectors; in other words, a scalar. For instance, if the components of the vectors are real numbers, so will be the dot product. For many purposes, it makes little difference whether the factors are row vectors, column vectors, or one of each.

The vector components can be complex numbers. In this case, researchers in many disciplines automatically conjugate one or the other of the inputs to the dot product; but there is not a consensus on whether it will be the first or second input. For that reason and others, our computer program does not automatically conjugate anything.

One might wonder about a dot product of three or more vectors, such as:

U · V · W = U0V0W0 + U1V1W1 + U2V2W2 + U3V3W3 + U4V4W4

Although this is possible, it has no obvious practical value.

§2c. Now we explain why it is appropriate to call the inputs to these operations factors and the output a product: they resemble conventional multiplication in two ways:

• The output is proportional to each input. Here are a few examples, with s as a scalar:

(sU) × V × W = s(U × V × W)

[sU, V, W] = s[U, V, W]

(sU) · V = s(U · V)

• They are distributive over addition:

U × V × (W + X) = (U × V × W) + (U × V × X)

[U, V, W + X] = [U, V, W] + [U, V, X]

U · (V + W) = (U · V) + (U · W)

Although the dot product is commutative (at least when invoked on real numbers), the cross and box products are anticommutative, meaning that if exactly two factors are exchanged, the result will be negated. Examples:

U × V × W = − U × W × V

[U, V, W] = − [U, W, V]

Anticommutativity arises because the cross and box products are defined by way of a determinant.

None of these three products is associative; none has an identity element; and vectors do not have inverse elements. Still, there is a zero vector for each of the products; every component is of course a zero.

Even if no factor is zero, the cross product and the box product will be zero if one factor is a linear combination of the others. A notable example of this is when two factors are equal.

The dot product can be zero when neither factor is zero; in this case we say that the vectors are perpendicular. Whether or not the factors are zero, when the dot product equals zero we say that the vectors are orthogonal.

§3a. Here are some properties of the plus-ones, given by example:

• The product is orthogonal to each factor:

the 2-in-3: (U × V) · U = (U × V) · V = 0
the 3-in-4: (U × V × W) · U = (U × V × W) · V = (U × V × W) · W = 0

• The dot square is equal to the Gram determinant of the factors:

the 3-in-4: (A × B × C) · (A × B × C) = det
 [ A·A  A·B  A·C ]
 [ B·A  B·B  B·C ]
 [ C·A  C·B  C·C ]

the 4-in-5: (A × B × C × D) · (A × B × C × D) = det
 [ A·A  A·B  A·C  A·D ]
 [ B·A  B·B  B·C  B·D ]
 [ C·A  C·B  C·C  C·D ]
 [ D·A  D·B  D·C  D·D ]

• Indeed, the plus-ones satisfy a more general Gram determinant:

the 4-in-5: (A × B × C × D) · (E × F × G × H) = det
 [ A·E  A·F  A·G  A·H ]
 [ B·E  B·F  B·G  B·H ]
 [ C·E  C·F  C·G  C·H ]
 [ D·E  D·F  D·G  D·H ]

• Dot and cross products together form a cyclic pattern:

the 2-in-3:
 U · (V × W) = V · (W × U) = W · (U × V)
the 3-in-4:
 + U · (V × W × X) = − V · (W × X × U) = + W · (X × U × V) = − X · (U × V × W)
the 4-in-5:
 U · (V × W × X × Y) = V · (W × X × Y × U) = W · (X × Y × U × V) = X · (Y × U × V × W) = Y · (U × V × W × X)
the 5-in-6:
 + U · (V × W × X × Y × Z) = − V · (W × X × Y × Z × U) = + W · (X × Y × Z × U × V) = − X · (Y × Z × U × V × W) = + Y · (Z × U × V × W × X) = − Z · (U × V × W × X × Y)

• Here is a more complicated set of identities:

the 2-in-3:
 given these vectors: T, U, V, W
 define these vectors:
  A = ( U × V ) ( T · W )
  B = ( V × T ) ( U · W )
  C = ( T × U ) ( V · W )
 then these vectors are equal:
  [ T, U, V ] W = ( + A + B + C )

the 3-in-4:
 given these vectors: T, U, V, W, X
 define these vectors:
  A = ( U × V × W ) ( T · X )
  B = ( V × W × T ) ( U · X )
  C = ( W × T × U ) ( V · X )
  D = ( T × U × V ) ( W · X )
 then these vectors are equal:
  [ T, U, V, W ] X = ( + A − B + C − D )

the 4-in-5:
 given these vectors: T, U, V, W, X, Y
 define these vectors:
  A = ( U × V × W × X ) ( T · Y )
  B = ( V × W × X × T ) ( U · Y )
  C = ( W × X × T × U ) ( V · Y )
  D = ( X × T × U × V ) ( W · Y )
  E = ( T × U × V × W ) ( X · Y )
 then these vectors are equal:
  [ T, U, V, W, X ] Y = ( + A + B + C + D + E )

the 5-in-6:
 given these vectors: T, U, V, W, X, Y, Z
 define these vectors:
  A = ( U × V × W × X × Y ) ( T · Z )
  B = ( V × W × X × Y × T ) ( U · Z )
  C = ( W × X × Y × T × U ) ( V · Z )
  D = ( X × Y × T × U × V ) ( W · Z )
  E = ( Y × T × U × V × W ) ( X · Z )
  F = ( T × U × V × W × X ) ( Y · Z )
 then these vectors are equal:
  [ T, U, V, W, X, Y ] Z = ( + A − B + C − D + E − F )
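The properties of §3a are easy to spot-check numerically. Here is a sketch for the 2-in-3, verifying orthogonality to each factor and the Gram-determinant identity; cross3, dot3, and check2in3 are hypothetical helpers, not functions of the supplied program:

```cpp
#include <array>
#include <cmath>
#include <cassert>

std::array<double,3> cross3(const std::array<double,3>& u,
                            const std::array<double,3>& v) {
    return {{ u[1]*v[2] - u[2]*v[1],
              u[2]*v[0] - u[0]*v[2],
              u[0]*v[1] - u[1]*v[0] }};
}

double dot3(const std::array<double,3>& u, const std::array<double,3>& v) {
    return u[0]*v[0] + u[1]*v[1] + u[2]*v[2];
}

// Spot-check: the product is orthogonal to each factor, and its dot
// square equals the Gram determinant (U.U)(V.V) - (U.V)^2.
bool check2in3(const std::array<double,3>& u, const std::array<double,3>& v) {
    std::array<double,3> z = cross3(u, v);
    double gram = dot3(u,u) * dot3(v,v) - dot3(u,v) * dot3(u,v);
    return std::fabs(dot3(z, u)) < 1e-9
        && std::fabs(dot3(z, v)) < 1e-9
        && std::fabs(dot3(z, z) - gram) < 1e-9;
}
```

Tolerance-based comparison is used because, as noted earlier, floating-point evaluation of determinants is not exact.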

§4a. Here is a summary of the provided C++11 program.

Class number is a wrapper for the built-in type double. It provides:

• guaranteed initialization to zero unless otherwise specified;
• pseudorandom numbers for testing;
• tolerance-based comparison operations;
• uniform output formatting.
Otherwise it works much the same as double.

Class complex is a simple complex-number type whose components are of type number. The Standard Library's std::complex was not used because it is not guaranteed to work with any component types except float, double, and long double.

Class matrix is the principal class of the program. The component type, number of rows, and number of columns are specified at compile time. If a matrix operation is attempted on inputs with inappropriate numbers of rows or columns, an error is reported at compile time. The minimum number of rows or columns is one.

A row_vector is merely a matrix constrained to have exactly one row; similarly for a col_vector. The program never implicitly converts from one to the other, but offers a transpose operation for explicit conversion.

§4b. This function:

```    template <typename compo, int cols, typename input_iter>
row_vector<compo,cols> cross_prod_row (input_iter begin, input_iter end);
```
takes two input iterators as parameters. Parameter begin initializes a local input iterator which must yield a row_vector each time it is dereferenced, until it becomes equal to end. A run-time check ensures that begin and end deliver the correct number of row_vectors.

Iterators were chosen to give the programmer maximum flexibility in choosing the source of the input row_vectors. A run-time (rather than compile-time) check is necessary because many sources cannot provide, at compile time, the number of row_vectors they hold.

Programmers might find convenient the next two functions, which extract iterators from standard containers and pass them to the function above:

```    template <typename compo, int cols>
row_vector<compo,cols> cross_prod_row (
std::initializer_list<row_vector<compo,cols>> ilm
);

template <typename compo, int cols>
row_vector<compo,cols> cross_prod_row (
std::vector<row_vector<compo,cols>> vec
);
```
Similar functions can be written for std::list, std::set and so forth. The functions for col_vectors work the same way.

The functions for the box products are used in the same way as those for the cross products:

```    template <typename compo, int dimen, typename input_iter>
compo box_prod_row (input_iter begin, input_iter end);

template <typename compo, int dimen, typename input_iter>
compo box_prod_col (input_iter begin, input_iter end);
```

Convenience functions for standard containers have not been provided, but the programmer can easily copy those for the cross product and make a few changes.

§4c. There are four functions for finding Gram matrices. The determinant, if desired, can be obtained in one additional step. The code supports the generalized Gram matrix, where the total number of input vectors is two less than twice their dimensionality, as with:

the 3-in-4: (A × B × C) · (D × E × F) = det
 [ A·D  A·E  A·F ]
 [ B·D  B·E  B·F ]
 [ C·D  C·E  C·F ]

A, B, and C must all be row_vectors or col_vectors; similarly for D, E, and F. However, it is permitted for A, B, and C to be row_vectors while D, E, and F are col_vectors, or vice versa.

```    template <typename compo, int dimen,
typename input_iter_row>
matrix<compo,dimen-1,dimen-1> gram_row_row (
input_iter_row begin_1, input_iter_row end_1,
input_iter_row begin_2, input_iter_row end_2
);

template <typename compo, int dimen,
typename input_iter_row, typename input_iter_col>
matrix<compo,dimen-1,dimen-1> gram_row_col (
input_iter_row begin_1, input_iter_row end_1,
input_iter_col begin_2, input_iter_col end_2
);

template <typename compo, int dimen,
typename input_iter_col, typename input_iter_row>
matrix<compo,dimen-1,dimen-1> gram_col_row (
input_iter_col begin_1, input_iter_col end_1,
input_iter_row begin_2, input_iter_row end_2
);

template <typename compo, int dimen,
typename input_iter_col>
matrix<compo,dimen-1,dimen-1> gram_col_col (
input_iter_col begin_1, input_iter_col end_1,
input_iter_col begin_2, input_iter_col end_2
);
```

The first pair of iterators produces the factors of the first cross product; the second pair produces the factors of the second.

§4d. Test routines are provided for dimensionalities two through six.