§ 1. Widely studied, and extensively used, is the matrix multiplication of elementary linear algebra. This operation takes two inputs that are two-dimensional (hereafter "2D") matrices; the output is also a 2D matrix.
Later we will define more precisely what a matrix is, but for now note that it houses components (which are often real numbers) in a rectangular grid. When 2D, the components are said to be organized into rows and columns.
Because matrices often contain many components, they are frequently manipulated by computer programs, often of a numerical nature. A programmer would use some sort of data structure, probably an array, to store the component information.
Can matrix multiplication be extended to matrices of three or more dimensions? Of course it can be, and it certainly has been. However, such operations are, relative to the 2D version, infrequently seen. Moreover, there are various ways that multiplication for nD matrices might be defined, and no one of them has risen to prominence. In this report, we offer a very general approach subsuming some of the definitions that already exist.
This report is an outgrowth of another project, the present author's mat_gen_dim, which developed an nD array storage method for the C++ programming language. The manner of implementing therein the outer product, and of implementing nD generalizations of contraction and the inner product, led to a broad definition of nD matrix multiplication which deserved a separate, mathematically oriented description. The mat_gen_dim pages discuss the operations in a computer programming parlance, which reads altogether differently from this report.
As "generalized multidimensional matrix multiplication" is an unwieldy phrase, we will often use the symbol ⊕⊗ for the operation being defined. The rationale for this notation will become clear later.
Sections 2 through 5 of this report deal with preliminaries; ⊕⊗ itself is covered beginning in section 6. Note that we never denote any kind of multiplication by typographical juxtaposition, always preferring to use an explicit symbol of some sort.
§ 2. What is a matrix? A collection of components, typically numbers, which can be individually accessed by use of an index. "Individually accessed" means that reading or changing one component does not affect the others. Also, the value of one component does not constrain the set of possible values for other components.
An index is an ordered n-tuple of integers; without loss of generality we choose in this report to limit them to positive values. Every matrix has a fixed dimensionality, which is a nonnegative integer. If a matrix is n-dimensional, any index to be used with it must have exactly n elements. Note our terminology: matrices have components while indices have elements.
Many writers use the word subscript where we are using index, from the typographical custom of using subscripts for indices. We avoid that here because subscripts become difficult to read when indices are complicated, especially when subscripts themselves have subscripts. Another reason is that tensor algebra partitions an index's elements into opposing categories denoted by superscripts or subscripts, and we do not want to appear to suggest any tensorial interpretation. However, our approach to matrices can be used to carry out many operations of tensor algebra.
In this report, and in most applications elsewhere, matrices have finitely many components. Index elements reflect this, being restricted to certain values. When a matrix is created, not only is its dimensionality established, but also the ranges of its index elements; we use the double-dot notation to denote an inclusive range of integers. For instance, this ordered n-tuple of index ranges:
(1 .. 3, 1 .. 5, 1 .. 4)
describes a 3D matrix, the index for any component of which must have three elements. The first element can be 1, 2, or 3; the second 1, 2, 3, 4, or 5; and the third 1, 2, 3, or 4. There are 60 = 3 × 5 × 4 possible combinations, all of which are valid, and each of which designates a different component of the matrix. Hence the matrix has 60 components, and this number will not change. We follow the popular mathematical convention that index ranges begin at 1; on the other hand are systems that always start with 0. Further, environments such as the Ada programming language and mat_gen_dim employ no universal base.
If matrix A has the index ranges above, a notation for a particular component is A[2, 1, 4], and for a general component is A[i_{1}, i_{2}, i_{3}]. Note the square brackets for component selection. Meanwhile, A[2, 1, 4, 7] is wrong for having too many elements, and A[2, 9, 4] is wrong for being out of range in the middle element. If the dimensionality of matrix B is denoted as b, a general component of B can be written as B[i_{1}, i_{2} … i_{b}].
If matrix C has zero dimensions, its sole component is written C[ ]. Why one component and not zero? Because the number of components in a matrix is the product of the sizes of its index ranges, and the empty product is the multiplicative identity for integers, which is one, not zero.
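This counting rule, including the zero-dimensional boundary case, can be checked with a short sketch in Python using NumPy (an illustration of my own; note that NumPy numbers index elements from 0 rather than from 1 as in this report):

```python
import numpy as np

# Index ranges (1 .. 3, 1 .. 5, 1 .. 4) correspond to a NumPy shape of (3, 5, 4).
sizes = (3, 5, 4)
print(int(np.prod(sizes)))      # 60 components, as in the example above

# For a 0D matrix the tuple of range sizes is empty, and the empty product is 1.
print(int(np.prod(())))         # 1

C = np.array(7.5)               # NumPy's 0-dimensional array
print(C.ndim, C.size)           # 0 dimensions, 1 component
print(C[()])                    # the sole component, the analogue of C[ ]
```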
When all of a matrix's index ranges are equal, for instance (1 .. 7, 1 .. 7, 1 .. 7, 1 .. 7), we say that the matrix is equilateral. Any matrix of 0 or 1 dimensions is equilateral by convention. Although equilateral matrices are favored in tensor algebra, the matrix multiplication to be defined in this report has little need for that characteristic. To emphasize that point, example matrices will be equilateral as little as possible.
§ 3. Two matrices are comorphic if they have the same dimensionality and the same ranges for respective index elements. Between two comorphic matrices, a component of the first corresponds to a component of the second if the two components have the same index.
Two comorphic matrices are equal if every component of the first equals the corresponding component of the second. Symbolically, A = B means that A[i_{1}, i_{2} … i_{d}] = B[i_{1}, i_{2} … i_{d}]. The mutual dimensionality of A and B is represented by d, and all i_{n} must (of course) be within their respective index ranges.
Two comorphic matrices can be added, giving an output comorphic with the inputs. Corresponding components of the inputs are added, and the sum placed in the corresponding location of the output. More briefly, components are added in parallel. Here is an example with 2D matrices whose index ranges are (1 .. 2, 1 .. 2):
A[1, 1] = 1  A[1, 2] = 2
A[2, 1] = 3  A[2, 2] = 4
B[1, 1] = 5  B[1, 2] = 6
B[2, 1] = 7  B[2, 2] = 8
(A + B)[1, 1] = 6  (A + B)[1, 2] = 8
(A + B)[2, 1] = 10  (A + B)[2, 2] = 12
Symbolically, A + B = C means that A[i_{1}, i_{2} … i_{d}] + B[i_{1}, i_{2} … i_{d}] = C[i_{1}, i_{2} … i_{d}].
For many operations, the notion of conformability can be defined. Broadly, it means that input matrices (and other items) have compatible dimensionalities, index ranges, et cetera. (Operations differ in their requirements, so "compatible" might not mean "equal".) In the case of addition, conformability is the same as comorphism.
Subtraction is completely analogous to addition, and frequently useful, but multiplication in parallel is almost never encountered. Valuable instead is multiplication of a matrix by a scalar, for example:
A[1, 1] = 2  A[1, 2] = −1
A[2, 1] = 0  A[2, 2] = 5
(A × 12)[1, 1] = 24  (A × 12)[1, 2] = −12
(A × 12)[2, 1] = 0  (A × 12)[2, 2] = 60
If y is a scalar, A × y = B has the component-by-component meaning A[i_{1}, i_{2} … i_{d}] × y = B[i_{1}, i_{2} … i_{d}]. Meanwhile, division of a matrix by a scalar amounts to multiplying the matrix by the scalar's reciprocal. Any attempt to define the opposite operation, dividing a scalar by a matrix, poses many questions not easily answered.
The multiplication of a matrix by a scalar is always conformable. Beyond that, the operation is distributive over addition; hence with matrices A and B, and with scalars y and z:
A × (y + z) = (A × y) + (A × z)
(A + B) × y = (A × y) + (B × y)
Negation is no problem: −A = A × −1.
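These laws are easy to confirm numerically. Here is a quick sketch in Python with NumPy (my own illustration; the report itself prescribes no programming language), whose elementwise arithmetic matches the parallel operations of this section:

```python
import numpy as np

A = np.array([[2.0, -1.0, 3.0],
              [0.0, 5.0, -4.0]])
B = np.array([[1.0, 1.0, -2.0],
              [6.0, 0.0, 2.0]])          # comorphic with A
y, z = 12.0, -3.0

assert np.array_equal(A + B, B + A)                 # addition in parallel
assert np.array_equal(A * (y + z), A * y + A * z)   # distributivity over scalar sums
assert np.array_equal((A + B) * y, A * y + B * y)   # distributivity over matrix sums
assert np.array_equal(-A, A * -1)                   # negation
print("all identities hold")
```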
§ 4. The outer product takes two matrices as input and delivers one as output. The input matrices need not be of the same dimensionality, nor need their index ranges match in any way. Indeed, the outer product exists for any two matrices, and, by induction, for any finite number of matrices; hence conformability is assured. We use the circletimes symbol as a prefix notation for this operation; thus the outer product of A and B is written ⊗(A, B).
In ⊗(A, B) = C, each component of A is multiplied by each component of B, and that product will be a component of C. If A has 15 components and B has 63, then C must have 15 × 63 = 945 components; the outer product can be quite a large matrix. The dimensionality of C is the sum of the dimensionalities of A and B. More specifically, the index of each component of C is the catenation of the indices of the contributing components of A and B:
A[i_{1}, i_{2} … i_{a}] × B[j_{1}, j_{2} … j_{b}] = C[i_{1}, i_{2} … i_{a}, j_{1}, j_{2} … j_{b}].
Similarly, the index ranges of C are the catenation of the index ranges of A and B. A lengthy example of the outer product is on a separate page.
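NumPy provides this operation for arrays of any dimensionality as np.multiply.outer, whose behavior matches the definition above (a sketch of my own, with 0-based indices):

```python
import numpy as np

A = np.arange(1.0, 16.0).reshape(3, 5)   # 2D, 15 components
B = np.arange(1.0, 64.0).reshape(7, 9)   # 2D, 63 components

C = np.multiply.outer(A, B)              # ⊗(A, B)
print(C.ndim)    # 4, the sum of the dimensionalities
print(C.shape)   # (3, 5, 7, 9): A's index ranges catenated with B's
print(C.size)    # 945 = 15 × 63

# The defining identity, componentwise:
assert C[1, 2, 3, 4] == A[1, 2] * B[3, 4]
```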
The outer product is associative, meaning this:
⊗(⊗(A, B), C) = ⊗(A, ⊗(B, C))
Thus it is unambiguous to write ⊗(A, B, C). Observe by contrast that commutativity fails; in other words ⊗(A, B) generally does not equal ⊗(B, A). They will have the same dimensionality, but they might not be comorphic. Still, the outer product does distribute over addition whenever the matrices are conformable for addition. This means:
⊗(A, B + C) = ⊗(A, B) + ⊗(A, C)
An identity for outer multiplication is the 0D matrix whose sole component is the number 1. Denoting that as U (for unit), this means:
⊗(A, U) = A = ⊗(U, A)
The outer product has never gained much currency in the field of linear algebra. A likely reason is that although the outer product is often huge, it contains no more information than the factors from which it is built. Yet for us, it is a valuable stepping-stone toward defining ⊕⊗.
§ 5. Besides the outer product, we need a contraction operation in order to establish ⊕⊗. As a preliminary we define the bunting, which is an ordered n-tuple of boolean values; we use Greek uppercase letters to represent buntings. Each boolean within the bunting is called a flag.
Contraction, which is tricky, involves three items: an input matrix X, a bunting Φ, and an output matrix Y.
Individual flags can be addressed with square brackets: Φ[1], Φ[2], et cetera. The number of true flags is termed the dimensionality of the contraction. Our notation for contraction uses the circleplus character in prefix position: ⊕(X, Φ) = Y.
Among the flags of Φ, the first is associated with the first of the X's index ranges, the second with the second, et cetera. Here is an example with 8D input and 5D output:
X's index ranges: (1 .. 5, 1 .. 4, 1 .. 5, 1 .. 6, 1 .. 5, 1 .. 8, 1 .. 4, 1 .. 5)
Φ: (true, false, true, false, true, false, false, false)
Y's index ranges: (1 .. 4, 1 .. 6, 1 .. 8, 1 .. 4, 1 .. 5)
Conformability here has two criteria. The first is that the number of flags in Φ must equal the dimensionality of X. The second is that all the index ranges corresponding to true flags must be equal. By contrast, the index ranges corresponding to the false flags need not equal anything in particular.
Y is a matrix whose index ranges are collected from those of X corresponding to the false flags, in this example (1 .. 4, 1 .. 6, 1 .. 8, 1 .. 4, 1 .. 5), so Y is a 5D matrix. On the other hand, X's index ranges corresponding to trues are absorbed by the operation. Useful fact:
The dimensionality of the input matrix
minus
the dimensionality of the contraction
equals
the dimensionality of the output matrix.
Each component of Y is a summation of certain components of X. Specifically, every component of X satisfying two properties is an addend: its index elements in the false-flag positions equal the respective index elements of the Y component, and its index elements in the true-flag positions are all equal to one another. In the example above:
Y[i_{2}, i_{4}, i_{6}, i_{7}, i_{8}] = X[1, i_{2}, 1, i_{4}, 1, i_{6}, i_{7}, i_{8}]
 + X[2, i_{2}, 2, i_{4}, 2, i_{6}, i_{7}, i_{8}]
 + X[3, i_{2}, 3, i_{4}, 3, i_{6}, i_{7}, i_{8}]
 + X[4, i_{2}, 4, i_{4}, 4, i_{6}, i_{7}, i_{8}]
 + X[5, i_{2}, 5, i_{4}, 5, i_{6}, i_{7}, i_{8}]
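A direct implementation of contraction can be sketched in Python with NumPy (the function name contract is mine, and this illustrates the definition rather than mat_gen_dim's actual code); note that a single summation variable runs through all the true-flag positions at once:

```python
import numpy as np

def contract(X, flags):
    """⊕(X, Φ): sum components of X whose true-flag index elements all agree."""
    assert len(flags) == X.ndim                      # first conformability criterion
    trues = [p for p, f in enumerate(flags) if f]
    falses = [p for p, f in enumerate(flags) if not f]
    assert len({X.shape[p] for p in trues}) <= 1     # second criterion: equal ranges
    out = np.zeros([X.shape[p] for p in falses])
    n = X.shape[trues[0]] if trues else 1
    for idx in np.ndindex(*out.shape):
        full = [0] * X.ndim
        for p, i in zip(falses, idx):
            full[p] = i
        s = 0.0
        for k in range(n):                           # the single summation variable
            for p in trues:
                full[p] = k
            s += X[tuple(full)]
        out[idx] = s
    return out

# The 8D example from the text:
shape = (5, 4, 5, 6, 5, 8, 4, 5)
X = np.arange(np.prod(shape), dtype=float).reshape(shape)
flags = (True, False, True, False, True, False, False, False)
Y = contract(X, flags)
print(Y.shape)   # (4, 6, 8, 4, 5), a 5D output from an 8D input
```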
§ 6. Now the background is laid for the definition of how to perform our generalized multidimensional matrix multiplication, which is a simple two-step procedure: first form the outer product of the input matrices; then contract that outer product with a suitable bunting.
A notation for the inner product is thus:
⊕(⊗(A, B), Φ)
but fewer parentheses will suffice:
⊕⊗(A, B, Φ)
and with more matrices, it becomes:
⊕⊗(A, B, C, Φ) = ⊕(⊗(A, B, C), Φ)
⊕⊗(A, B, C, D, Φ) = ⊕(⊗(A, B, C, D), Φ)
et cetera
In fact, a contraction by itself can be regarded as a boundary case of ⊕⊗, because the unit U can be introduced as a second factor:
⊕(A, Φ) = ⊕(⊗(A, U), Φ) = ⊕⊗(A, U, Φ)
Note that the ⊗ symbol was chosen for the outer product because its internal operation is multiplication, and ⊕ was chosen for contraction because its internal operation is addition. We suggest retaining the ⊕⊗ symbol sequence for this generalization of the multidimensional matrix product: the characters are distinctive, they emphasize the bipartite nature of the operation, and they stand apart from the notations of all sorts given to the many kinds of matrix multiplication defined elsewhere.
Because ⊗ and ⊕ are individually distributive over addition, so is ⊕⊗:
⊕⊗(A, B + C, Φ) = ⊕⊗(A, B, Φ) + ⊕⊗(A, C, Φ)
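The two-step definition translates directly into NumPy, where einsum can perform the contraction by giving every true-flag position the same subscript letter (the function name oplus_otimes is mine; this is a sketch of the definition, not mat_gen_dim's code):

```python
import numpy as np

def oplus_otimes(A, B, flags):
    """⊕⊗(A, B, Φ): the outer product of A and B, then contraction by Φ."""
    X = np.multiply.outer(A, B)                  # step 1: ⊗(A, B)
    assert len(flags) == X.ndim
    letters = iter('abcdefghijklmnop')
    subs = ['z' if f else next(letters) for f in flags]   # 'z' marks true flags
    out = ''.join(c for c in subs if c != 'z')
    return np.einsum(''.join(subs) + '->' + out, X)       # step 2: ⊕(X, Φ)

# Distributivity over addition, inherited from ⊗ and ⊕ separately:
rng = np.random.default_rng(1)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 4))
C = rng.standard_normal((3, 4))
phi = (False, True, True, False)
lhs = oplus_otimes(A, B + C, phi)
rhs = oplus_otimes(A, B, phi) + oplus_otimes(A, C, phi)
assert np.allclose(lhs, rhs)
print(lhs.shape)   # (2, 4)
```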
The author is open to suggestions about how to pronounce ⊕⊗. Incidentally, the HTML notation for it is:
&oplus;&otimes;
§ 7. Any two or more nD matrices have an outer product, and for any matrix there exists a bunting enabling a contraction. Therefore ⊕⊗ exists for any two or more matrices; depending on how many of their index ranges match, there may be several (but finitely many) valid ⊕⊗s, each with a different bunting. The choice of buntings is governed by the rules for conformability.
This guarantee of existence, and potential for multiple values, is one justification for characterizing ⊕⊗ as generalized multidimensional matrix multiplication. Another justification is that ⊕⊗ subsumes ordinary matrix multiplication. Here is an example of that:
Let Φ = (false, true, true, false).
Input matrix A:
A[1, 1] = 2  A[1, 2] = 3  A[1, 3] = −1 
A[2, 1] = 4  A[2, 2] = −2  A[2, 3] = 5 
Input matrix B:
B[1, 1] = 2  B[1, 2] = −1  B[1, 3] = 0  B[1, 4] = 6 
B[2, 1] = 1  B[2, 2] = 3  B[2, 3] = −5  B[2, 4] = 1 
B[3, 1] = 4  B[3, 2] = 1  B[3, 3] = −2  B[3, 4] = 2 
Intermediate matrix C = ⊗(A, B):
C[1, 1, 1, 1] = 4  C[1, 1, 1, 2] = −2  C[1, 1, 1, 3] = 0  C[1, 1, 1, 4] = 12 
C[1, 1, 2, 1] = 2  C[1, 1, 2, 2] = 6  C[1, 1, 2, 3] = −10  C[1, 1, 2, 4] = 2 
C[1, 1, 3, 1] = 8  C[1, 1, 3, 2] = 2  C[1, 1, 3, 3] = −4  C[1, 1, 3, 4] = 4 
C[1, 2, 1, 1] = 6  C[1, 2, 1, 2] = −3  C[1, 2, 1, 3] = 0  C[1, 2, 1, 4] = 18 
C[1, 2, 2, 1] = 3  C[1, 2, 2, 2] = 9  C[1, 2, 2, 3] = −15  C[1, 2, 2, 4] = 3 
C[1, 2, 3, 1] = 12  C[1, 2, 3, 2] = 3  C[1, 2, 3, 3] = −6  C[1, 2, 3, 4] = 6 
C[1, 3, 1, 1] = −2  C[1, 3, 1, 2] = 1  C[1, 3, 1, 3] = 0  C[1, 3, 1, 4] = −6 
C[1, 3, 2, 1] = −1  C[1, 3, 2, 2] = −3  C[1, 3, 2, 3] = 5  C[1, 3, 2, 4] = −1 
C[1, 3, 3, 1] = −4  C[1, 3, 3, 2] = −1  C[1, 3, 3, 3] = 2  C[1, 3, 3, 4] = −2 
C[2, 1, 1, 1] = 8  C[2, 1, 1, 2] = −4  C[2, 1, 1, 3] = 0  C[2, 1, 1, 4] = 24 
C[2, 1, 2, 1] = 4  C[2, 1, 2, 2] = 12  C[2, 1, 2, 3] = −20  C[2, 1, 2, 4] = 4 
C[2, 1, 3, 1] = 16  C[2, 1, 3, 2] = 4  C[2, 1, 3, 3] = −8  C[2, 1, 3, 4] = 8 
C[2, 2, 1, 1] = −4  C[2, 2, 1, 2] = 2  C[2, 2, 1, 3] = 0  C[2, 2, 1, 4] = −12 
C[2, 2, 2, 1] = −2  C[2, 2, 2, 2] = −6  C[2, 2, 2, 3] = 10  C[2, 2, 2, 4] = −2 
C[2, 2, 3, 1] = −8  C[2, 2, 3, 2] = −2  C[2, 2, 3, 3] = 4  C[2, 2, 3, 4] = −4 
C[2, 3, 1, 1] = 10  C[2, 3, 1, 2] = −5  C[2, 3, 1, 3] = 0  C[2, 3, 1, 4] = 30 
C[2, 3, 2, 1] = 5  C[2, 3, 2, 2] = 15  C[2, 3, 2, 3] = −25  C[2, 3, 2, 4] = 5 
C[2, 3, 3, 1] = 20  C[2, 3, 3, 2] = 5  C[2, 3, 3, 3] = −10  C[2, 3, 3, 4] = 10 
Output matrix D = ⊕(C, Φ) = ⊕(⊗(A, B), Φ) = ⊕⊗(A, B, Φ) = traditional matrix product of A and B:
D[1, 1] = 3  D[1, 2] = 6  D[1, 3] = −13  D[1, 4] = 13 
D[2, 1] = 26  D[2, 2] = −5  D[2, 3] = 0  D[2, 4] = 32 
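The whole worked example can be replayed in a few lines of Python with NumPy (my own cross-check of the tables above):

```python
import numpy as np

A = np.array([[2.0, 3.0, -1.0],
              [4.0, -2.0, 5.0]])
B = np.array([[2.0, -1.0, 0.0, 6.0],
              [1.0, 3.0, -5.0, 1.0],
              [4.0, 1.0, -2.0, 2.0]])

C = np.multiply.outer(A, B)          # the intermediate 2×3×3×4 matrix
# Φ = (false, true, true, false): sum C[i, k, k, j] over the shared k.
D = np.einsum('ikkj->ij', C)

expected = np.array([[3.0, 6.0, -13.0, 13.0],
                     [26.0, -5.0, 0.0, 32.0]])
assert np.array_equal(D, expected)   # matches the output matrix above
assert np.array_equal(D, A @ B)      # and the traditional matrix product
print(D)
```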
Only a small fraction of the components of C survive into the contraction; the rest, those whose second and third index elements differ, are completely ignored. This highlights the fact that, although the most convenient way to define ⊕⊗ is by way of an outer product followed by a contraction, that is far from the most efficient way to implement ⊕⊗. In matrices of practical size, the proportion of wasted calculation can easily exceed 99 percent. The mat_gen_dim software uses a direct approach to figure ⊕⊗ for two matrix inputs, the key function there being called inner_product; there, unused products are never calculated.
Each component of D is the dot product of one row of A and one column of B. Nomenclature varies, but the dot product is often termed "the" inner product or the "standard" inner product. Thus ordinary matrix multiplication, which yields a matrix full of dot products, might be regarded as a multidimensional generalization of the inner product. That explains why the mat_gen_dim software refers to ⊕⊗ by the name inner_product, although another reason is that the C++ language in which the software is written does not allow characters such as ⊕ and ⊗ in the names of functions. Our avenue of generalizing the inner product, by increasing the dimensionality, is separate from, but not in conflict with, the approach employed in the study of inner product spaces.
§ 8. Matrices can be associative under ⊕⊗ with suitable choices of flags. Although a general theory has yet to emerge, examples are plentiful. With ordinary matrices A, B, and C, the question is when buntings exist satisfying:
⊕⊗(⊕⊗(A, B, Ψ_{1}), C, Ψ_{2}) = ⊕⊗(A, ⊕⊗(B, C, Θ_{1}), Θ_{2})
A simple computer search turned up dozens of quadruples of buntings that result in associativity, including five examples yielding, respectively, 1D, 2D, 3D, 4D, and 5D output matrices.
Aside from the trivial case of making all flags false, it is unknown whether there exist five buntings satisfying this three-member equation:
⊕(⊗(A, B, C), Ω) = ⊕⊗(⊕⊗(A, B, Ψ_{1}), C, Ψ_{2}) = ⊕⊗(A, ⊕⊗(B, C, Θ_{1}), Θ_{2})
There is a family of associative extensions to the ordinary 2D matrix multiplication that appears in section 7 above. Let A, B, and C be matrices all of the same dimensionality d. Let Φ be a bunting with 2 × d elements, half of them true and half false; with all the true flags occupying consecutive positions. Then
⊕⊗(⊕⊗(A, B, Φ), C, Φ) = ⊕⊗(A, ⊕⊗(B, C, Φ), Φ)
whenever the individual ⊕⊗ operations are conformable; in other words, whenever the left member and right member exist. To illustrate, here is a table of the five configurations from the 4D case, where p, q, r and s are positive integers:
If these matrices have these index ranges, with these buntings, then associativity holds:

A_{0}(1 .. q, 1 .. q, 1 .. q, 1 .. q), B_{0}(1 .. r, 1 .. r, 1 .. r, 1 .. r), C_{0}(1 .. s, 1 .. s, 1 .. s, 1 .. s);
Φ_{0} = (true, true, true, true, false, false, false, false);
⊕⊗(⊕⊗(A_{0}, B_{0}, Φ_{0}), C_{0}, Φ_{0}) = ⊕⊗(A_{0}, ⊕⊗(B_{0}, C_{0}, Φ_{0}), Φ_{0})

A_{1}(1 .. p, 1 .. q, 1 .. q, 1 .. q), B_{1}(1 .. q, 1 .. r, 1 .. r, 1 .. r), C_{1}(1 .. r, 1 .. s, 1 .. s, 1 .. s);
Φ_{1} = (false, true, true, true, true, false, false, false);
⊕⊗(⊕⊗(A_{1}, B_{1}, Φ_{1}), C_{1}, Φ_{1}) = ⊕⊗(A_{1}, ⊕⊗(B_{1}, C_{1}, Φ_{1}), Φ_{1})

A_{2}(1 .. p, 1 .. p, 1 .. q, 1 .. q), B_{2}(1 .. q, 1 .. q, 1 .. r, 1 .. r), C_{2}(1 .. r, 1 .. r, 1 .. s, 1 .. s);
Φ_{2} = (false, false, true, true, true, true, false, false);
⊕⊗(⊕⊗(A_{2}, B_{2}, Φ_{2}), C_{2}, Φ_{2}) = ⊕⊗(A_{2}, ⊕⊗(B_{2}, C_{2}, Φ_{2}), Φ_{2})

A_{3}(1 .. p, 1 .. p, 1 .. p, 1 .. q), B_{3}(1 .. q, 1 .. q, 1 .. q, 1 .. r), C_{3}(1 .. r, 1 .. r, 1 .. r, 1 .. s);
Φ_{3} = (false, false, false, true, true, true, true, false);
⊕⊗(⊕⊗(A_{3}, B_{3}, Φ_{3}), C_{3}, Φ_{3}) = ⊕⊗(A_{3}, ⊕⊗(B_{3}, C_{3}, Φ_{3}), Φ_{3})

A_{4}(1 .. p, 1 .. p, 1 .. p, 1 .. p), B_{4}(1 .. q, 1 .. q, 1 .. q, 1 .. q), C_{4}(1 .. r, 1 .. r, 1 .. r, 1 .. r);
Φ_{4} = (false, false, false, false, true, true, true, true);
⊕⊗(⊕⊗(A_{4}, B_{4}, Φ_{4}), C_{4}, Φ_{4}) = ⊕⊗(A_{4}, ⊕⊗(B_{4}, C_{4}, Φ_{4}), Φ_{4})
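One configuration of the table (the Φ_{2} row) can be spot-checked numerically. This sketch re-implements ⊕⊗ as an outer product followed by an einsum contraction, as in section 6 (the names are mine):

```python
import numpy as np

def oplus_otimes(A, B, flags):
    # ⊕⊗ as outer product followed by contraction; 'z' marks the true flags,
    # which all share one summation subscript.
    X = np.multiply.outer(A, B)
    letters = iter('abcdefghijklmnop')
    subs = ['z' if f else next(letters) for f in flags]
    out = ''.join(c for c in subs if c != 'z')
    return np.einsum(''.join(subs) + '->' + out, X)

rng = np.random.default_rng(2)
p, q, r, s = 2, 3, 4, 2
A2 = rng.standard_normal((p, p, q, q))
B2 = rng.standard_normal((q, q, r, r))
C2 = rng.standard_normal((r, r, s, s))
phi2 = (False, False, True, True, True, True, False, False)

left = oplus_otimes(oplus_otimes(A2, B2, phi2), C2, phi2)
right = oplus_otimes(A2, oplus_otimes(B2, C2, phi2), phi2)
assert np.allclose(left, right)      # associativity holds for this configuration
print(left.shape)                    # index ranges (1 .. p, 1 .. p, 1 .. s, 1 .. s)
```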
The expression defining associativity:
⊕⊗(⊕⊗(A, B, Φ), C, Φ) = ⊕⊗(A, ⊕⊗(B, C, Φ), Φ)
is complicated, and a simpler notation comes to mind:
⊕⊗(A, B, C, Φ)
However, there is a danger that this latter expression would be confused with
⊕(⊗(A, B, C), Φ)
which is one contraction of the outer product of three matrices, as opposed to two contractions each being of an outer product of two matrices.
It follows from the table above that when an nD matrix A is equilateral, there are at least n + 1 choices for a bunting Φ that will make the following true:
⊕⊗(⊕⊗(A, A, Φ), A, Φ) = ⊕⊗(A, ⊕⊗(A, A, Φ), Φ)
Thus equilateral matrices exhibit power associativity. Once a particular bunting is chosen, it makes sense to talk about raising A to a positive integer power. A notation for this is:
⊕⊗(A^{1}, Φ) = A
⊕⊗(A^{2}, Φ) = ⊕⊗(A, A, Φ)
⊕⊗(A^{m+1}, Φ) = ⊕⊗(A^{m}, A, Φ)
At this point, we can employ a Taylor series to define a sine function:
sin (A, Φ)  = ⊕⊗(A^{1}, Φ) ÷ 1! 
− ⊕⊗(A^{3}, Φ) ÷ 3!  
+ ⊕⊗(A^{5}, Φ) ÷ 5!  
− ⊕⊗(A^{7}, Φ) ÷ 7!  
et cetera 
If we regard the components of A as a collection of variables, then ⊕⊗(A^{n}, Φ) becomes an nth-degree polynomial in those variables. Within the terms of the series, the numerators thus grow exponentially while the denominators grow factorially, so each component of the output ultimately converges.
The cosine is more difficult than the sine, because the usual expansion requires adding a (two-sided) multiplicative identity value, which we have not yet established because among matrices such an identity usually fails to exist. To see more of what is going on, we need to introduce the one-sided identity.
For an example of what can happen, define the matrix M as what one might suppose an identity matrix to be when its index ranges are (1 .. n, 1 .. n, 1 .. n): a component equals 1 when all three of its index elements are equal, and 0 otherwise.
This can be generalized to d dimensions, with the obvious modification to M, and with superscripts written to indicate repeated flags in the buntings.
Beyond these, it is difficult to find any useful or interesting identity values in the equilateral associative environment.
This is obvious:
⊕⊗(⊕⊗(A, B, Φ), C, Φ) = ⊕⊗(A, ⊕⊗(B, C, Φ), Φ)
implies
⊕⊗(⊕⊗(A, A, Φ), A, Φ) = ⊕⊗(A, ⊕⊗(A, A, Φ), Φ)
Surprisingly, numerical experimentation suggests the converse:
⊕⊗(⊕⊗(A, A, Φ), A, Φ) = ⊕⊗(A, ⊕⊗(A, A, Φ), Φ)
is conjectured to imply
⊕⊗(⊕⊗(A, B, Φ), C, Φ) = ⊕⊗(A, ⊕⊗(B, C, Φ), Φ)
§ 9. When every flag of Φ associated with one input's index ranges is false, the contraction can be moved inside the outer product. Consider for example:
⊕⊗(A, B, Φ) = ⊕(⊗(A, B), Φ) = ⊗(⊕(A, Φ_{A}), B)
In the left and middle members ⊗ is performed before ⊕, but in the right member ⊕ occurs before ⊗. Here Φ_{A} denotes the flags of Φ associated with A's index ranges; the flags associated with B's ranges are assumed all false, so B no longer contributes to the contraction.
Naturally, the interchange of subordinate operations still works if it is the left operand of ⊕⊗ whose flags are all false ("trueless"); with Ψ_{A} defined analogously:
⊕⊗(B, A, Ψ) = ⊗(B, ⊕(A, Ψ_{A}))
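The interchange is easy to confirm numerically (a sketch of my own in Python with NumPy, contracting only over the first operand's index ranges):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4, 3))
B = rng.standard_normal((2, 5))

# Φ = (true, true, false, false, false): true only within A's positions.
X = np.multiply.outer(A, B)                            # ⊗(A, B), a 5D matrix
middle = np.einsum('kkabc->abc', X)                    # ⊕(⊗(A, B), Φ)

# Φ_A = (true, true, false): the flags of Φ that belong to A.
right = np.multiply.outer(np.einsum('kka->a', A), B)   # ⊗(⊕(A, Φ_A), B)

assert np.allclose(middle, right)
print(middle.shape)   # (3, 2, 5)
```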
§ 10. The reader may feel that notation such as this:
⊕⊗(⊕⊗(A, B, Ψ_{1}), C, Ψ_{2}) = ⊕⊗(A, ⊕⊗(B, C, Θ_{1}), Θ_{2})
is cumbersome. In response, observe that all three of the following would almost have to mean the same thing:
⊕⊗(A, B, Φ)
⊕⊗(A, Φ, B)
⊕⊗(Φ, A, B)
This prompts the following condensations:
⊕⊗(⊕⊗(A, B, Ψ_{1}), C, Ψ_{2})
⇒ ⊕⊗(A, B, Ψ_{1}, C, Ψ_{2})
⊕⊗(A, ⊕⊗(B, C, Θ_{1}), Θ_{2})
⇒ ⊕⊗(Θ_{2}, A, Θ_{1}, B, C)
If in some context the inputs of ⊕⊗ will always be exactly one bunting and two matrices, then the bunting could serve as an infix operator symbol:
⊕⊗(A, B, Φ) ⇒ A Φ B
⊕⊗(⊕⊗(A, B, Ψ_{1}), C, Ψ_{2})
⇒ (A Ψ_{1} B) Ψ_{2} C
⊕⊗(A, ⊕⊗(B, C, Θ_{1}), Θ_{2})
⇒ A Θ_{2} (B Θ_{1} C)
In these briefer notations, care must be exercised if A, B, or C is a 1D matrix whose components happen to be booleans instead of numbers; then the matrix could be mistaken for a bunting.