Using the orthonormal basis V = { V0, V1, V2, V3, V4, V5, V6 }, here is the baseline 2-in-7 cross product, chosen arbitrarily from 480 possibilities:
| … × V0 | … × V1 | … × V2 | … × V3 | … × V4 | … × V5 | … × V6 |
---|---|---|---|---|---|---|---|
V0 × … | 0 | +V2 | −V1 | +V4 | −V3 | +V6 | −V5 |
V1 × … | −V2 | 0 | +V0 | +V5 | −V6 | −V3 | +V4 |
V2 × … | +V1 | −V0 | 0 | −V6 | −V5 | +V4 | +V3 |
V3 × … | −V4 | −V5 | +V6 | 0 | +V0 | +V1 | −V2 |
V4 × … | +V3 | +V6 | +V5 | −V0 | 0 | −V2 | −V1 |
V5 × … | −V6 | +V3 | −V4 | −V1 | +V2 | 0 | +V0 |
V6 × … | +V5 | −V4 | −V3 | +V2 | +V1 | −V0 | 0 |
The other 479 differ only by permutation of basis elements and changes of sign.
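The whole table can be compressed into its seven positively oriented index triples (i, j, k), each meaning Vi × Vj = +Vk and behaving cyclically, much like the quaternion rule i·j = k. The following Python sketch (the function names are mine, purely illustrative) rebuilds the product from those triples and checks two defining properties of a genuine cross product: orthogonality to both factors, and the norm identity |A × B|² = |A|²|B|² − (A·B)².

```python
# The seven positively oriented triples (i, j, k), meaning Vi x Vj = +Vk,
# read directly off the rows of the table; each behaves cyclically.
TRIPLES = [(0, 1, 2), (0, 3, 4), (0, 5, 6),
           (1, 3, 5), (1, 6, 4), (2, 6, 3), (2, 5, 4)]

def cross7(a, b):
    """Bilinear extension of the 2-in-7 table to arbitrary 7-component vectors."""
    c = [0] * 7
    for t in TRIPLES:
        for x, y, z in (t, t[1:] + t[:1], t[2:] + t[:2]):  # cyclic rotations
            c[z] += a[x] * b[y] - a[y] * b[x]
    return c

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# V0 x V1 = +V2, the first nonzero entry of the table
assert cross7([1, 0, 0, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0, 0]) == [0, 0, 1, 0, 0, 0, 0]

a = [1, 2, 0, -1, 3, 0, 2]
b = [2, 0, 1, 1, 0, -2, 1]
c = cross7(a, b)
assert dot(a, c) == 0 and dot(b, c) == 0                    # orthogonality
assert dot(c, c) == dot(a, a) * dot(b, b) - dot(a, b) ** 2  # norm identity
```

With integer components the arithmetic is exact, so the assertions test the identities literally rather than to within rounding error.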
With Vn corresponding to Un+1, there is a clear resemblance to this excerpt from the baseline 3-in-8 table:
| … × U1 | … × U2 | … × U3 | … × U4 | … × U5 | … × U6 | … × U7 |
---|---|---|---|---|---|---|---|
U0 × U1 × … | 0 | +U3 | −U2 | +U5 | −U4 | +U7 | −U6 |
U0 × U2 × … | −U3 | 0 | +U1 | +U6 | −U7 | −U4 | +U5 |
U0 × U3 × … | +U2 | −U1 | 0 | −U7 | −U6 | +U5 | +U4 |
U0 × U4 × … | −U5 | −U6 | +U7 | 0 | +U1 | +U2 | −U3 |
U0 × U5 × … | +U4 | +U7 | +U6 | −U1 | 0 | −U3 | −U2 |
U0 × U6 × … | −U7 | +U4 | −U5 | −U2 | +U3 | 0 | +U1 |
U0 × U7 × … | +U6 | −U5 | −U4 | +U3 | +U2 | −U1 | 0 |
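The resemblance can be checked mechanically. Both tables below are transcribed verbatim from the text; substituting Un+1 for Vn in the 2-in-7 table reproduces the 3-in-8 excerpt entry for entry:

```python
import re

# The 2-in-7 table, rows V0 x ... through V6 x ...
V_TABLE = """\
0   +V2 -V1 +V4 -V3 +V6 -V5
-V2 0   +V0 +V5 -V6 -V3 +V4
+V1 -V0 0   -V6 -V5 +V4 +V3
-V4 -V5 +V6 0   +V0 +V1 -V2
+V3 +V6 +V5 -V0 0   -V2 -V1
-V6 +V3 -V4 -V1 +V2 0   +V0
+V5 -V4 -V3 +V2 +V1 -V0 0"""

# The 3-in-8 excerpt, rows U0 x U1 x ... through U0 x U7 x ...
U_TABLE = """\
0   +U3 -U2 +U5 -U4 +U7 -U6
-U3 0   +U1 +U6 -U7 -U4 +U5
+U2 -U1 0   -U7 -U6 +U5 +U4
-U5 -U6 +U7 0   +U1 +U2 -U3
+U4 +U7 +U6 -U1 0   -U3 -U2
-U7 +U4 -U5 -U2 +U3 0   +U1
+U6 -U5 -U4 +U3 +U2 -U1 0"""

# Rename Vn to Un+1 and compare token by token.
shifted = re.sub(r"V(\d)", lambda m: "U" + str(int(m.group(1)) + 1), V_TABLE)
assert shifted.split() == U_TABLE.split()
```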
Let W = { W0, W1, W2 … Wn−1 } be an orthonormal basis of an n-dimensional vector space over the real numbers. Then there is a widely recognized definition of the cross product of n − 1 factors, and it is most succinctly defined as the determinant of a formal matrix.
Here it is for the case of three vectors in four dimensions (the "3-in-4"):
A × B × C = det | W0 W1 W2 W3 |
                | A0 A1 A2 A3 |
                | B0 B1 B2 B3 |
                | C0 C1 C2 C3 |
where A0 through A3 are the components of A in the basis W, and likewise for B and C.
The extension to other numbers of dimensions is obvious enough that there is little need to write out the general formula. The two-factor three-dimensional cross product, indispensable to physicists, looks like this:
A × B = det | W0 W1 W2 |
            | A0 A1 A2 |
            | B0 B1 B2 |
With four factors in five dimensions, the following is obtained:
A × B × C × D = det | W0 W1 W2 W3 W4 |
                    | A0 A1 A2 A3 A4 |
                    | B0 B1 B2 B3 B4 |
                    | C0 C1 C2 C3 C4 |
                    | D0 D1 D2 D3 D4 |
Aside from the 2-in-7 and 3-in-8, mathematicians rarely define any cross product that does not fall into this determinant-based pattern. However, the next section describes a different multiplication, using multiple vector spaces, that resembles the cross product of n − 1 factors in n dimensions.
In all the discussions so far, the assumption has been that the output of cross multiplication resides in the same vector space as the inputs. If this constraint is lifted, another kind of generalization of the cross product becomes possible, namely the wedge product. Although the wedge product is often defined abstractly through its algebraic properties, in this report we give a more concrete presentation by way of example.
Start with a vector space G1 that has orthonormal basis { e0, e1, e2, e3 }; it helps to call vectors within G1 univectors. For any two vectors A and B in G1, the wedge product is symbolized A ∧ B. The wedge product, like most multiplications, is distributive over addition:
A ∧ (B + C) = (A ∧ B) + (A ∧ C)
Like cross products, the wedge product of two vectors from G1 is anticommutative:
A ∧ B = − B ∧ A
A ∧ A = 0
and it is linear in each factor:
(xA) ∧ (yB) = xy(A ∧ B)
Unlike the cross products, the wedge product is associative:
(A ∧ B) ∧ C = A ∧ (B ∧ C)
Thus with basis vectors:
ei ∧ ej = − ej ∧ ei
ei ∧ ei = 0
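These rules suffice for mechanical computation. In the sketch below (the representation and names are my own choices, not a standard API), a multivector is a dict from a sorted tuple of basis indices to a coefficient; multiplying concatenates the index tuples, then sorts them while counting transpositions to obtain the sign, with any repeated index annihilating the term.

```python
def sort_with_sign(indices):
    """Sort basis indices, tracking the sign of the permutation.

    Returns (sorted tuple, +1 or -1), or (None, 0) if an index repeats,
    since ei ^ ei = 0.
    """
    idx, sign = list(indices), 1
    for i in range(1, len(idx)):            # insertion sort, counting swaps
        j = i
        while j > 0 and idx[j] < idx[j - 1]:
            idx[j], idx[j - 1] = idx[j - 1], idx[j]
            sign, j = -sign, j - 1
    if len(set(idx)) != len(idx):
        return None, 0
    return tuple(idx), sign

def wedge(a, b):
    """Wedge product of multivectors: dicts {sorted index tuple: coefficient}."""
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            key, s = sort_with_sign(ia + ib)
            if s:
                out[key] = out.get(key, 0) + s * ca * cb
    return {k: v for k, v in out.items() if v}

e0, e1 = {(0,): 1}, {(1,): 1}
assert wedge(e0, e1) == {(0, 1): 1}      # e0 ^ e1 = e01
assert wedge(e1, e0) == {(0, 1): -1}     # anticommutative
assert wedge(e0, e0) == {}               # e0 ^ e0 = 0

# associativity, on sums rather than bare basis vectors
A, B, C = {(0,): 1, (1,): 2}, {(1,): 1, (2,): 3}, {(3,): 1}
assert wedge(wedge(A, B), C) == wedge(A, wedge(B, C))
```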
A key feature of the wedge product is that although ei ∧ ej is a valid expression, it cannot be simplified. In this regard, it resembles an expression like 3 + 4i in complex numbers. Moreover, ei ∧ ej is a vector that resides in the six-dimensional space G2:
an orthonormal basis for G2 = { e0 ∧ e1, e0 ∧ e2, e0 ∧ e3, e1 ∧ e2, e1 ∧ e3, e2 ∧ e3 }
In building the basis, it matters little whether we choose e0 ∧ e1 or its negative e1 ∧ e0; the same vector space results either way. A condensed notation for the products of G1's basis vectors is convenient: eij = ei ∧ ej. Of course, eij = − eji. Thus:
the same basis for G2 = { e01, e02, e03, e12, e13, e23 }
An element of G2 is called a bivector.
Multiplication is not limited to two vectors. Consider that ei ∧ ej ∧ ek (more briefly eijk) is nonzero precisely when i, j and k are distinct; a repeated factor forces the product to zero. Such a product of three univectors resides in the four-dimensional space G3:
an orthonormal basis for G3 = { e012, e013, e023, e123 }
Because of associativity, there is little substantive difference between the product of three univectors and the product of one univector and one bivector. Continuing the pattern of names, elements of G3 are trivectors.
Last in the sequence is the one-dimensional G4:
a basis for G4 = { e0123 }
While vectors in this space might be called quadrivectors, they are more likely termed pseudoscalars, because G4 is isomorphic to scalar space, which itself could be labeled G0 and which would have the basis { 1 }.
In total, there are five disjoint vector spaces here: G0, G1, G2, G3, and G4. With vector A in Ga and vector B in Gb, a general result is that A ∧ B, if not zero, is in Ga+b.
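The dimensions fit a binomial pattern: Gk has dimension C(4, k), giving 1, 4, 6, 4, 1 across the five spaces, 16 in total. A quick enumeration (labels follow the eij… convention above; "1" stands in for the scalar basis) reproduces the bases listed earlier:

```python
from itertools import combinations
from math import comb

# An increasing k-tuple of indices from {0, 1, 2, 3} names one basis vector of Gk.
bases = {}
for k in range(5):
    bases[k] = ["1" if not c else "e" + "".join(map(str, c))
                for c in combinations(range(4), k)]
    assert len(bases[k]) == comb(4, k)   # dimensions 1, 4, 6, 4, 1

assert bases[2] == ["e01", "e02", "e03", "e12", "e13", "e23"]
assert bases[3] == ["e012", "e013", "e023", "e123"]
assert bases[4] == ["e0123"]
```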
In the three-dimensional case, a likely choice for the vector spaces would be:
space | an orthonormal basis | containing |
---|---|---|
F0 | { 1 } | scalars |
F1 | { e0, e1, e2 } | univectors |
F2 | { e12, e20, e01 } | bivectors |
F3 | { e012 } | trivectors or pseudoscalars |
Using the Hodge dual, we can consolidate four vector spaces into two by declaring that each Fk is identified with F3−k: F3 with F0, and F2 with F1.
In a consolidation of the earlier example, G4 would be identified with G0, and G3 would be identified with G1, but G2 would stand alone.
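In three dimensions the consolidation has a familiar payoff. Identifying F2 with F1 by matching the ordered bases above (e12 ↔ e0, e20 ↔ e1, e01 ↔ e2), the wedge product of two univectors becomes the classical cross product. A sketch (function name mine) writes out the components:

```python
def wedge_uni(a, b):
    """A ^ B for univectors of F1, on the ordered basis (e12, e20, e01) of F2.

    The coefficient of e20 is a2*b0 - a0*b2 because e20 = -e02.
    """
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

# Under e12 <-> e0, e20 <-> e1, e01 <-> e2, these are exactly the
# components of the classical cross product A x B.
assert wedge_uni((1, 0, 0), (0, 1, 0)) == (0, 0, 1)   # e0 ^ e1 = e01 <-> e2
assert wedge_uni((1, 2, 3), (4, 5, 6)) == (-3, 6, -3)
```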
This consolidation is not always appropriate. Suppose the components of an F1 vector are not merely numbers, but numbers carrying a physical unit, say meters. Then the components of an F2 vector would be numbers carrying the physical unit square meters, and it is very difficult to establish a plausible identity between meters and square meters.