Notes on quaternion differentiation.
Version of Sunday 9 November 2014.
Dave Barber's e-mail, quaternion calculator, and other topics.

First we review important properties of quaternions, and later we try to differentiate them.

A quaternion is an ordered quadruple, whose components are typically real numbers, interpreted and manipulated according to specific rules. As with ordered n-tuples generally, two quaternions are equal if and only if their respective components are equal.

Notation varies from one author to the next, but we write the four components of a quaternion as a comma-separated list between two shallow angle brackets. Examples are ⟨ +1.3, −2.8, +3.14159, 0 ⟩ and ⟨ 0, 0, 0, 0 ⟩. The components of quaternion q can be distinguished by subscripts: q = ⟨ qh, qi, qj, qk ⟩. We generally use names like p, q and r for quaternionic variables.

Bucking mathematical tradition, we never use typographical juxtaposition to indicate multiplication; instead, a centered dot · is employed.

Addition is componentwise:

p + q = ⟨ ph + qh, pi + qi, pj + qj, pk + qk ⟩

which could have been written:

(p + q)h = ph + qh
(p + q)i = pi + qi
(p + q)j = pj + qj
(p + q)k = pk + qk

Addition has the familiar properties of associativity and commutativity. The identity is z = ⟨ 0, 0, 0, 0 ⟩.

Subtraction is also componentwise:

p − q = ⟨ ph − qh, pi − qi, pj − qj, pk − qk ⟩

Negation is as expected:

−q = z − q = ⟨ −qh, −qi, −qj, −qk ⟩

Multiplication by a real number x is straightforward:

q · x = x · q = ⟨ qh · x, qi · x, qj · x, qk · x ⟩

Division of a quaternion by a real number immediately follows: q ÷ x = q · x−1.

Define four particles:

h = ⟨ 1, 0, 0, 0 ⟩
i = ⟨ 0, 1, 0, 0 ⟩
j = ⟨ 0, 0, 1, 0 ⟩
k = ⟨ 0, 0, 0, 1 ⟩

Then:

q = qh · h + qi · i + qj · j + qk · k

The essence of quaternions is captured in multiplying two of them:

(p · q)h = ph · qh − pi · qi − pj · qj − pk · qk
(p · q)i = ph · qi + pi · qh + pj · qk − pk · qj
(p · q)j = ph · qj − pi · qk + pj · qh + pk · qi
(p · q)k = ph · qk + pi · qj − pj · qi + pk · qh
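As a quick sanity check, the four component formulas transcribe directly into a short function. The name qmul and the 4-tuple representation are our own choices for this sketch, not part of the text:

```python
def qmul(p, q):
    # (p · q)h, (p · q)i, (p · q)j, (p · q)k per the component formulas
    ph, pi, pj, pk = p
    qh, qi, qj, qk = q
    return (ph*qh - pi*qi - pj*qj - pk*qk,
            ph*qi + pi*qh + pj*qk - pk*qj,
            ph*qj - pi*qk + pj*qh + pk*qi,
            ph*qk + pi*qj - pj*qi + pk*qh)

h, i, j, k = (1,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,0,1)
assert qmul(i, j) == (0, 0, 0, 1)              # i · j = +k
assert qmul(j, i) == (0, 0, 0, -1)             # j · i = −k
assert qmul(qmul(i, j), k) == (-1, 0, 0, 0)    # i · j · k = −h
```

The same helper reappears in the later sketches so that each stays self-contained.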

Although associative, multiplication is not commutative, as illustrated by i · j = +k while j · i = −k. Single-sided multiplication does distribute over addition; in other words,

p · r + q · r = (p + q) · r
r · s + r · t = r · (s + t)

Note that p · r · s + q · r · t does not simplify, and this fact has enormous influence on how a polynomial in r would be constructed.

Commutativity is assured when two quaternions are proportional in the imaginaries. This means p · q = q · p when there exist real numbers a, b, c, d and e such that:

p = ⟨ ph, a · c, a · d, a · e ⟩
q = ⟨ qh, b · c, b · d, b · e ⟩

Proportionality in the imaginaries is not a transitive relation. For instance, i · h = h · i and h · j = j · h, but i · j ≠ j · i.

Because it serves as the multiplicative identity, h is identified with the real number 1, but we continue to write h when it effects a symmetry of notation. More broadly, ⟨ qh, 0, 0, 0 ⟩ equals the real number qh, and multiplication is commutative if either factor happens to be a real number.

If p · q = h, we say that each of p and q is a multiplicative inverse of the other. This might be written p = q−1 or q = p−1. Only z fails to have an inverse. Reverse commutativity applies: (p · q)−1 = q−1 · p−1.

Integral real powers of the same quaternion commute: pn · pm = pm · pn. Also, (pm)n = p(m · n) = p(n · m) = (pn)m. Non-integral powers generally induce multiple values, much as in the case of complex numbers, so they must be handled with special care.

Quaternions of the form ⟨ qh, qi, 0, 0 ⟩ (among many other possibilities) are isomorphic to the complex numbers. By analogy thereto, qh is termed the real part of q, while qi, qj and qk are the imaginary parts. A celebrated result is:

−h = i · j · k = i2 = j2 = k2 = −1

Because −1 has at least three distinct square roots, quaternions are nontrivial in how they extend the complex numbers.

The selective negation ("sene") operations negate particular components of a quaternion:

seneh (q) = ⟨ −qh, qi, qj, qk ⟩
senei (q) = ⟨ qh, −qi, qj, qk ⟩
senej (q) = ⟨ qh, qi, −qj, qk ⟩
senek (q) = ⟨ qh, qi, qj, −qk ⟩

There is a connection between multiplication and selective negation:

− seneh (p · q) = seneh (q) · seneh (p)
+ senei (p · q) = senei (q) · senei (p)
+ senej (p · q) = senej (q) · senej (p)
+ senek (p · q) = senek (q) · senek (p)
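The sign pattern can be confirmed numerically. The helper sene below is our own small utility, not the author's notation:

```python
def qmul(p, q):
    ph, pi, pj, pk = p
    qh, qi, qj, qk = q
    return (ph*qh - pi*qi - pj*qj - pk*qk,
            ph*qi + pi*qh + pj*qk - pk*qj,
            ph*qj - pi*qk + pj*qh + pk*qi,
            ph*qk + pi*qj - pj*qi + pk*qh)

def sene(q, parts):
    # negate the components named in parts, e.g. sene(q, 'hj')
    idx = {'h': 0, 'i': 1, 'j': 2, 'k': 3}
    chosen = {idx[c] for c in parts}
    return tuple(-x if n in chosen else x for n, x in enumerate(q))

p, q = (1, 2, 3, 4), (5, -6, 7, -8)
pq = qmul(p, q)
assert tuple(-x for x in sene(pq, 'h')) == qmul(sene(q, 'h'), sene(p, 'h'))
assert sene(pq, 'i') == qmul(sene(q, 'i'), sene(p, 'i'))
assert sene(pq, 'ij') == qmul(sene(p, 'ij'), sene(q, 'ij'))   # two subscripts: no reversal
```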

Multiple subscripts combine the effects. For instance:

senehj (q) = senejh (q) = seneh (senej (q)) = ⟨ −qh, qi, −qj, qk ⟩

Thus:

seneij (p · q) = seneij (p) · seneij (q)

The conjugate of ⟨ qh, qi, qj, qk ⟩ negates all the imaginary parts:

q* = seneijk (q) = ⟨ qh, −qi, −qj, −qk ⟩

Consistent with its selective negation origin, conjugation engenders reverse commutativity: (p · q)* = q* · p*. Conveniently, q* · q = q · q* = (qh)2 + (qi)2 + (qj)2 + (qk)2, which is a real number. Note the difference between (qh)2 and (q2)h.

For use as an absolute value, the Euclidean norm is quite satisfactory: | q | = √ (q* · q). The norm is maintained under multiplication: | p · q | = | p | · | q |. Hence if p ≠ 0 and q ≠ 0, then p · q ≠ 0: there are no divisors of zero.

If qh = 0 and | q | = 1, then q2 = −1, meaning that the square roots of negative one form the surface of a sphere. We can write q−1 = q* ÷ | q |2 when q ≠ 0. Dividing a real number by a quaternion is not a problem: x ÷ q = x · q−1.
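The conjugate, norm, and inverse are straightforward to code; the helper names below (qconj, qnorm, qinv) are ours:

```python
def qmul(p, q):
    ph, pi, pj, pk = p
    qh, qi, qj, qk = q
    return (ph*qh - pi*qi - pj*qj - pk*qk,
            ph*qi + pi*qh + pj*qk - pk*qj,
            ph*qj - pi*qk + pj*qh + pk*qi,
            ph*qk + pi*qj - pj*qi + pk*qh)

def qconj(q): return (q[0], -q[1], -q[2], -q[3])
def qnorm(q): return sum(c*c for c in q) ** 0.5
def qinv(q):
    n2 = sum(c*c for c in q)          # | q |², a real number
    return tuple(c / n2 for c in qconj(q))

p, q = (1.0, 2.0, 3.0, 4.0), (0.5, -1.5, 2.5, 0.25)
assert abs(qnorm(qmul(p, q)) - qnorm(p) * qnorm(q)) < 1e-9    # | p·q | = | p |·| q |
unit = qmul(q, qinv(q))                                       # should be h
assert abs(unit[0] - 1) < 1e-12 and all(abs(c) < 1e-12 for c in unit[1:])
r = (0.0, 0.6, 0.0, 0.8)              # qh = 0 and | q | = 1
assert abs(qmul(r, r)[0] + 1) < 1e-12                         # r² = −1
```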

A subscript of l excludes the real part of a quaternion: ql = ⟨ 0, qi, qj, qk ⟩ = (q − q*) ÷ 2.

Division of one quaternion by another is of limited usefulness. The usual implementation of the operation is to multiply the dividend by the inverse of the divisor, so that p ÷ q might equal p · q−1 (dexterior division). On the other hand, q−1 · p (sinisterior division), which is usually a different value, is just as valid. Moreover, when q = r · s, a mixture of sinisterior and dexterior is possible: r−1 · p · s−1 or s−1 · p · r−1. All four of these will produce answers with the same norm.
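A small numeric illustration of the two one-sided quotients (helper names are ours): with p = ⟨ 1, 2, 3, 4 ⟩ and q = i, the dexterior and sinisterior quotients differ, yet share a norm:

```python
def qmul(p, q):
    ph, pi, pj, pk = p
    qh, qi, qj, qk = q
    return (ph*qh - pi*qi - pj*qj - pk*qk,
            ph*qi + pi*qh + pj*qk - pk*qj,
            ph*qj - pi*qk + pj*qh + pk*qi,
            ph*qk + pi*qj - pj*qi + pk*qh)

def qinv(q):
    n2 = sum(c*c for c in q)
    return (q[0]/n2, -q[1]/n2, -q[2]/n2, -q[3]/n2)

def qnorm(q): return sum(c*c for c in q) ** 0.5

p, q = (1, 2, 3, 4), (0, 1, 0, 0)
dext = qmul(p, qinv(q))               # dexterior: p · q⁻¹
sini = qmul(qinv(q), p)               # sinisterior: q⁻¹ · p
assert dext == (2.0, -1.0, -4.0, 3.0)
assert sini == (2.0, -1.0, 4.0, -3.0)
assert abs(qnorm(dext) - qnorm(sini)) < 1e-12
```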

A divisor of zero inevitably fails.

The following extractive formulas, which have no counterpart in the complex numbers, are of considerable importance:

⟨ qh, 0, 0, 0 ⟩ = (h · q · h − i · q · i − j · q · j − k · q · k) ÷ 4
⟨ 0, qi, 0, 0 ⟩ = (h · q · h − i · q · i + j · q · j + k · q · k) ÷ 4
⟨ 0, 0, qj, 0 ⟩ = (h · q · h + i · q · i − j · q · j + k · q · k) ÷ 4
⟨ 0, 0, 0, qk ⟩ = (h · q · h + i · q · i + j · q · j − k · q · k) ÷ 4
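The extractive formulas are easy to confirm numerically; the combo helper and its sign tuples are ours:

```python
def qmul(p, q):
    ph, pi, pj, pk = p
    qh, qi, qj, qk = q
    return (ph*qh - pi*qi - pj*qj - pk*qk,
            ph*qi + pi*qh + pj*qk - pk*qj,
            ph*qj - pi*qk + pj*qh + pk*qi,
            ph*qk + pi*qj - pj*qi + pk*qh)

h, i, j, k = (1,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,0,1)

def combo(q, signs):
    # (±h·q·h ± i·q·i ± j·q·j ± k·q·k) ÷ 4
    terms = [qmul(qmul(u, q), u) for u in (h, i, j, k)]
    return tuple(sum(s * t[n] for s, t in zip(signs, terms)) / 4
                 for n in range(4))

q = (1, 2, 3, 4)
assert combo(q, (+1, -1, -1, -1)) == (1, 0, 0, 0)   # ⟨ qh, 0, 0, 0 ⟩
assert combo(q, (+1, -1, +1, +1)) == (0, 2, 0, 0)   # ⟨ 0, qi, 0, 0 ⟩
assert combo(q, (+1, +1, -1, +1)) == (0, 0, 3, 0)   # ⟨ 0, 0, qj, 0 ⟩
assert combo(q, (+1, +1, +1, -1)) == (0, 0, 0, 4)   # ⟨ 0, 0, 0, qk ⟩
```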

Some linear algebra is helpful. There is an obvious way to embed a quaternion space in a real 4-dimensional space; here are the correspondences between the particles and column vectors (written here as transposes of row vectors):

h ⇔ [ 1, 0, 0, 0 ]T
i ⇔ [ 0, 1, 0, 0 ]T
j ⇔ [ 0, 0, 1, 0 ]T
k ⇔ [ 0, 0, 0, 1 ]T

We say that two quaternions are linearly independent if the corresponding vectors in that real 4-space are themselves linearly independent; same for orthogonal.

Call a 4-by-4 orthonormal matrix of real numbers preservative if its determinant equals +1, and certain of its entries are fixed as below:

 1    0    0    0
 0   m22  m23  m24
 0   m32  m33  m34
 0   m42  m43  m44

Regard the quaternion q as a column vector [ qh, qi, qj, qk ]T or equivalently:

 qh
 qi
 qj
 qk

We can calculate the product M ·· q, where the double dot stands for matrix multiplication. As applied here, the effect is to rotate the basis of the imaginary parts; the real part is untouched. From linearity, we obtain this distributive law:

M ·· p + M ·· q = M ·· (p + q)

Most valuable, but perhaps surprising, is that matrix multiplication also distributes over quaternion multiplication:

(M ·· p) · (M ·· q) = M ·· (p · q)
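A numeric check of this law, using a preservative matrix that rotates the (i, j) plane of the imaginary basis by an arbitrary angle (an assumed example; the helper names are ours):

```python
import math

def qmul(p, q):
    ph, pi, pj, pk = p
    qh, qi, qj, qk = q
    return (ph*qh - pi*qi - pj*qj - pk*qk,
            ph*qi + pi*qh + pj*qk - pk*qj,
            ph*qj - pi*qk + pj*qh + pk*qi,
            ph*qk + pi*qj - pj*qi + pk*qh)

def matvec(M, v):
    return tuple(sum(M[r][n] * v[n] for n in range(4)) for r in range(4))

# preservative: determinant +1, real part untouched, (i, j) plane rotated
t = 0.7
c, s = math.cos(t), math.sin(t)
M = [[1, 0,  0, 0],
     [0, c, -s, 0],
     [0, s,  c, 0],
     [0, 0,  0, 1]]

p, q = (0.5, -1.25, 2.0, 0.75), (3.0, 0.5, -2.5, 1.5)
lhs = qmul(matvec(M, p), matvec(M, q))
rhs = matvec(M, qmul(p, q))
assert max(abs(a - b) for a, b in zip(lhs, rhs)) < 1e-12
```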

The matrix is called preservative because it preserves the rules of quaternion multiplication, such as i · j = k. Hence, when evaluating a quaternionic function, we might choose to rotate everything to a more convenient imaginary basis. For example q = ⟨ qh, qi, qj, qk ⟩ can be turned into q′ = ⟨ qh, | ql |, 0, 0 ⟩. Quaternions in this latter format are conspicuously isomorphic to the complex numbers written in rectangular coördinates: q′ ≈ ( qh, | ql | ). With that, we obtain a rationale for extending complex functions to the quaternions. For example, if M is chosen so that M ·· q = q′, then we could define a sine function:

sinqtrn (q) = M−1 ·· sincmpx (q′).

This is a reasonable approach for any complex function whose power series (in effect a polynomial of unlimited degree) has real coefficients. Note also that because those coefficients are real, (sin q) · q = q · (sin q). Now this commutativity means that sin (q) and q must be proportional in their imaginaries, and it will not be necessary to calculate the full matrix M to transform the complex sine function into its quaternion counterpart. Rather,

if… sincmpx ( qh, | ql | ) = ( x, y )
let… real w = y ÷ | ql |
then… sinqtrn ⟨ qh, qi, qj, qk ⟩ = ⟨ x, qi · w, qj · w, qk · w ⟩
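The recipe translates into a few lines, leaning on the complex sine from Python's cmath. The name qsin is ours, and the ql = 0 case (where w would divide by zero) falls back to the real sine:

```python
import cmath, math

def qsin(q):
    qh, qi, qj, qk = q
    n = math.sqrt(qi*qi + qj*qj + qk*qk)      # | ql |
    z = cmath.sin(complex(qh, n))             # sincmpx ( qh, | ql | ) = ( x, y )
    if n == 0:
        return (z.real, 0.0, 0.0, 0.0)
    w = z.imag / n                            # real w = y ÷ | ql |
    return (z.real, qi*w, qj*w, qk*w)

# on a complex-like quaternion the result matches the complex sine
z = cmath.sin(0.3 + 0.4j)
got = qsin((0.3, 0.4, 0.0, 0.0))
assert abs(got[0] - z.real) < 1e-12 and abs(got[1] - z.imag) < 1e-12
```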

Caution. If the determinant of M is −1, preservation of the quaternion multiplication rules fails. For instance, consider orthonormal matrix N, whose determinant is −1:

 +1   0   0   0
  0  +1   0   0
  0   0  +1   0
  0   0   0  −1

In the matrix-multiplication-over-quaternion-multiplication distributive law, apply N to i and j:

(N ·· i) · (N ·· j) versus N ·· (i · j)
i · j versus N ·· k
+k versus −k

which is not an equality. Generalizing about how such a matrix will change the multiplication rules is difficult, because there are so many 3-spaces of reflection possible.

Quaternions are equivalent to certain 4-by-4 real matrices; this is beneficial because sometimes linear algebra is useful in answering questions about quaternion behavior. Under one of the many possible mappings, quaternion q corresponds to this matrix Q:

 +qh  +qi  +qj  +qk
 −qi  +qh  −qk  +qj
 −qj  +qk  +qh  −qi
 −qk  −qj  +qi  +qh

Matrix addition and multiplication effect quaternion addition and multiplication. Also, transposing Q gives q*.

The product p · q is the sum of a commutative part p ⋈ q:

(p ⋈ q)h = ph · qh − pi · qi − pj · qj − pk · qk
(p ⋈ q)i = ph · qi + pi · qh
(p ⋈ q)j = ph · qj + pj · qh
(p ⋈ q)k = ph · qk + pk · qh

and an anticommutative part p ⋉ q:

(p ⋉ q)h = 0
(p ⋉ q)i = pj · qk − pk · qj
(p ⋉ q)j = pk · qi − pi · qk
(p ⋉ q)k = pi · qj − pj · qi

In other words:

p · q = p ⋈ q + p ⋉ q
p ⋈ q = (p · q + q · p) ÷ 2 = + q ⋈ p
p ⋉ q = (p · q − q · p) ÷ 2 = − q ⋉ p

This allows the amount of noncommutativity to be quantified, perhaps as | p ⋉ q | ÷ | p · q |. Other approaches are to regard pl and ql as vectors in real 4-space, and to find the angle between them, or to calculate the area of the parallelogram they define.

The p ⋉ q operation closely resembles the standard cross product.
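Both parts, and the cross-product resemblance, can be checked directly (helper names are ours):

```python
def qmul(p, q):
    ph, pi, pj, pk = p
    qh, qi, qj, qk = q
    return (ph*qh - pi*qi - pj*qj - pk*qk,
            ph*qi + pi*qh + pj*qk - pk*qj,
            ph*qj - pi*qk + pj*qh + pk*qi,
            ph*qk + pi*qj - pj*qi + pk*qh)

def half_sum(p, q):     # p ⋈ q = (p·q + q·p) ÷ 2
    return tuple((a + b) / 2 for a, b in zip(qmul(p, q), qmul(q, p)))

def half_diff(p, q):    # p ⋉ q = (p·q − q·p) ÷ 2
    return tuple((a - b) / 2 for a, b in zip(qmul(p, q), qmul(q, p)))

def cross(u, v):        # ordinary 3-space cross product
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

p, q = (1, 2, 3, 4), (5, 6, 7, 8)
assert tuple(a + b for a, b in zip(half_sum(p, q), half_diff(p, q))) == qmul(p, q)
assert half_diff(p, q)[0] == 0                      # no real part
assert half_diff(p, q)[1:] == cross(p[1:], q[1:])   # matches the cross product
```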

The review of quaternion fundamentals complete, we turn to the intricacies of differentiation.

Consider the simple functions f (q) = q · i and g (q) = i · q. We might expect the derivatives of f and g, with respect to q, to equal i. Yet it is usual throughout real and complex analysis that, if two functions have the same derivative, they differ by only a constant. Since the difference between f and g is more complicated than a constant, something has to give.

Appealing to the standard definition of the derivative sheds light. It involves division, and we choose the sinisterior version of that operation first; note that dq is a quaternionic variable that happens to have a two-letter name:

 f ′ = lim dq → 0 dq−1 · (f (q + dq) − f (q))

 f ′ = lim dq → 0 dq−1 · ((q + dq) · i − q · i)

 f ′ = lim dq → 0 dq−1 · dq · i = i

Dexterior division, however, will be less successful:

 f ′ = lim dq → 0 (f (q + dq) − f (q)) · dq−1

 f ′ = lim dq → 0 ((q + dq) · i − q · i) · dq−1

 f ′ = lim dq → 0 dq · i · dq−1

Remember that dq is a quaternion, and for the limit to exist, the same limit must be obtained no matter what path dq takes en route to zero. Using a real number x, we can effect a path along the h axis by implementing dq as x · h. Because x is real, it commutes freely among any imaginaries, and disappears when it meets x−1:

 path ⟨ x, 0, 0, 0 ⟩: lim x → 0 (x · h) · i · (x · h)−1 = h · i · h−1 = i

Next we try the i axis, dq becoming x · i:

 path ⟨ 0, x, 0, 0 ⟩: lim x → 0 (x · i) · i · (x · i)−1 = i · i · i−1 = i

Along the j and k axes, however, a different limit results:

 path ⟨ 0, 0, x, 0 ⟩: lim x → 0 (x · j) · i · (x · j)−1 = j · i · j−1 = −i

 path ⟨ 0, 0, 0, x ⟩: lim x → 0 (x · k) · i · (x · k)−1 = k · i · k−1 = −i
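The four axis paths can be evaluated mechanically; the real factor x has already cancelled, so each path reduces to a single conjugation (function names are ours):

```python
def qmul(p, q):
    ph, pi, pj, pk = p
    qh, qi, qj, qk = q
    return (ph*qh - pi*qi - pj*qj - pk*qk,
            ph*qi + pi*qh + pj*qk - pk*qj,
            ph*qj - pi*qk + pj*qh + pk*qi,
            ph*qk + pi*qj - pj*qi + pk*qh)

def qinv(q):
    n2 = sum(c*c for c in q)
    return (q[0]/n2, -q[1]/n2, -q[2]/n2, -q[3]/n2)

i = (0, 1, 0, 0)
def limit_along(u):            # dq · i · dq⁻¹ with dq = x · u
    return qmul(qmul(u, i), qinv(u))

assert limit_along((1, 0, 0, 0)) == (0, 1, 0, 0)    # path along h: +i
assert limit_along((0, 1, 0, 0)) == (0, 1, 0, 0)    # path along i: +i
assert limit_along((0, 0, 1, 0)) == (0, -1, 0, 0)   # path along j: −i
assert limit_along((0, 0, 0, 1)) == (0, -1, 0, 0)   # path along k: −i
```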

Even worse, an approach along the diagonal path ⟨ 0, x, x, 0 ⟩ rather alarmingly gives a derivative of +j, a direction entirely unlike the ±i obtained along the axes. (A derivative of zero is impossible here, since dq · i · dq−1 always has norm 1.)

The situation is precisely reversed when we consider g (q), but neither sinisterior nor dexterior division will give a unique limit on a function as simple as f (g (q)) = i · q · i. With the standard definition of the derivative failing for such an elementary function, we must make a choice:

• Declare that multiplication will not be a differentiable operation.
• Find a better definition of differentiation.

Between these, we opt for the second, on the grounds that multiplication is the characteristic operation of quaternions; if it is not differentiable, there is little point in continuing.

An alluring workaround is to require that the limit variable dq always be real; this will give simple, familiar-looking calculations. However, it will give equal derivatives for f and g even though they do not differ by a constant, and that in our opinion does not capture all the information that a derivative ought to.

Contrast this with physics, where one sometimes encounters a quaternionic function of a real argument, often designated t. In this specialized case, the traditional definition of differentiation succeeds, because t and dt, being real, commute in all multiplication.

 f ′ = lim dt → 0 (f (t + dt) − f (t)) ÷ dt

A real function of a quaternionic argument is also a viable candidate for differentiability by the traditional definition.

Far more productive is the differential, which is a function d (q, dq), real-linear in dq, satisfying this equation:

 lim dq → 0 | (f (q + dq) − f (q) − d (q, dq)) | · | dq−1 | = 0

By real-linear in dq, we mean that for any real x, the following equality is satisfied: x · d (q, dq) = d (q, x · dq). There need not be any particular behavior in q.

Note that many authors involve the derivative in their definition of the differential, but we do not, because in our context of noncommutative multiplication, the usual kind of derivative does not exist.

With fewer absolute value symbols, but nonetheless equivalent, are these:

 lim dq → 0 (f (q + dq) − f (q) − d (q, dq)) · | dq−1 | = 0

 lim dq → 0 | (f (q + dq) − f (q) − d (q, dq)) | · dq−1 = 0

Another equivalent version eliminates the radicals encountered in calculating norms, and thus will ease some calculations:

 lim dq → 0 | (f (q + dq) − f (q) − d (q, dq)) |2 · | dq−1 |2 = 0

For instance, let f (q) = i · q · j:

 lim dq → 0 | (i · (q + dq) · j − i · q · j − d (q, dq)) | · | dq−1 | = 0

 lim dq → 0 | (i · dq · j − d (q, dq)) | · | dq−1 | = 0

An obvious choice is d (q, dq) = i · dq · j. We can summarize this with the notation:

df = d (i · q · j) = i · dq · j

While this is not a derivative, it at least is the outcome of differentiation, and we will investigate constructing a derivative from it later. Strictly speaking, "d (i · q · j)" is the name, albeit a very complicated name, of a variable.

Many results reminiscent of those from algebras with commutative multiplications directly follow:

d (any constant) = 0
d (q) = dq
d (p + q) = dp + dq
d (p · q) = dp · q + p · dq
d (p · q · r) = dp · q · r + p · dq · r + p · q · dr
d (q−1) = − q−1 · dq · q−1 when q ≠ 0
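The last formula can be spot-checked with a small finite step: the residual f (q + dq) − f (q) − d (q, dq) should shrink faster than | dq |. Helper names and the tolerance are our choices:

```python
def qmul(p, q):
    ph, pi, pj, pk = p
    qh, qi, qj, qk = q
    return (ph*qh - pi*qi - pj*qj - pk*qk,
            ph*qi + pi*qh + pj*qk - pk*qj,
            ph*qj - pi*qk + pj*qh + pk*qi,
            ph*qk + pi*qj - pj*qi + pk*qh)

def qinv(q):
    n2 = sum(c*c for c in q)
    return (q[0]/n2, -q[1]/n2, -q[2]/n2, -q[3]/n2)

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def norm(q): return sum(c*c for c in q) ** 0.5

q  = (0.5, -1.0, 2.0, 0.25)
dq = (3e-6, -2e-6, 1e-6, 4e-6)

step = sub(qinv(tuple(a + b for a, b in zip(q, dq))), qinv(q))   # f (q + dq) − f (q)
diff = qmul(qmul(tuple(-c for c in qinv(q)), dq), qinv(q))       # − q⁻¹ · dq · q⁻¹
assert norm(sub(step, diff)) < 1e-5 * norm(dq)    # residual is o(| dq |)
```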

Such a differential is precisely what goes under the integral sign. For example, ∫ (dq · q + q · dq) equals q2 plus an arbitrary constant. A side effect of the noncommutativity of quaternion multiplication is that ∫ dq · q and ∫ q · dq do not exist individually, as the value of either one as a definite integral will depend on the path of integration.

Because multiplication is not commutative, this is FALSE: d (sin (q)) = cos (q) · dq = dq · cos (q). In general, there is no convenient way to write the differential of a power-series function.

Differentials being additive, (dq)h = d(qh), et cetera.

The noncommutativity of multiplication would confound attempts to use the chain rule in finding derivatives, but for differentials the chain rule is merely substitution, and the problem is avoided. For instance, here are two ways to combine formulas from above to find d (q−2) when q ≠ 0:

d (q−2) = d ((q−1)2)
= q−1 · d(q−1) + d(q−1) · q−1
= q−1 · (− q−1 · dq · q−1) + (− q−1 · dq · q−1) · q−1
= − q−2 · dq · q−1 − q−1 · dq · q−2

d (q−2) = d ((q2)−1)
= − (q2)−1 · d (q2) · (q2)−1
= − q−2 · (dq · q + q · dq) · q−2
= − q−2 · dq · q · q−2 − q−2 · q · dq · q−2
= − q−2 · dq · q−1 − q−1 · dq · q−2

Such differential expressions are a key reason that in this report we use an explicit symbol for multiplication.

In manipulating differentials, there is often no need to classify the variables as dependent or independent. But if, for example, we need to declare q independent with f dependent on it, then most likely dq will be independent, with df dependent on both q and dq.

Higher-order differentials follow without ado. Just as d(q) is a mechanism for creating a new variable named dq, d(dq) creates a new variable ddq, which is usually written with one d and a superscript: d 2q. More generally, d(d nq) creates d(d n+1q). Subtly different is to superscript the d operator itself, with the expected meaning: d 2(q) resolves to d 2q. Here is an example:

d 2(p · q) = d(p · dq + dp · q)
= p · ddq + dp · dq + dp · dq + ddp · q
= p · d 2q + 2 · dp · dq + d 2p · q

Some similar-looking symbols must be distinguished; they fall into three categories:

• d 2q = d(dq) = d(d(q)) = ddq = d 2(q), the second-order differential of q
• dq2 = (dq)2 = dq · dq
• d(q2) = q · dq + dq · q

Because i · j equals k, the next three functions bear a superficial resemblance to one another:

f1 (q) = q · k
f2 (q) = k · q
f3 (q) = i · q · j

The differentials are:

df1 = dq · k
df2 = k · dq
df3 = i · dq · j

Were these functions complex, we could safely substitute dq = 1 to obtain the derivatives. However, trying that here yields f1′ = f2′ = f3′ = k, and again, we run into the problem where functions that do not differ by a constant still manage to have equal derivatives.

At this point, we take a broader look at the problem, noting that for a complex function to be differentiable, it must satisfy the famous Cauchy-Riemann equations, which are so restrictive that if we know the real part of such a function, we can reconstruct the imaginary part to within a constant. (By the same token, knowing the imaginary part, we can deduce the real.) An informal interpretation is that a differentiable complex function contains hardly any more functional information than its real part. Still, things work out well, because any complex function that does have a first derivative is guaranteed to also have a second derivative, and there are many other benefits from satisfying Cauchy-Riemann.

There may be a way to similarly constrain quaternion functions, so that given any one of the four parts we can determine the others to within a constant. However, it is difficult to imagine by what non-arbitrary criterion the three imaginary parts would be made significantly different. Moreover, a use for such constraint is not yet apparent.

An alternative approach is to introduce the Q-derivative, which is an ordered quadruple of functions, written in curly brackets (unrelated to the symbol for a set), consisting of a differential evaluated successively at (symbol "@") h, i, j and k. For instance, the Q-derivatives of the fn above are:

f1Q = df1 @ (dq = h, i, j, k) = {h · k, i · k, j · k, k · k} = {+k, −j, +i, −h}
f2Q = df2 @ (dq = h, i, j, k) = {k · h, k · i, k · j, k · k} = {+k, +j, −i, −h}
f3Q = df3 @ (dq = h, i, j, k) = {i · h · j, i · i · j, i · j · j, i · k · j} = {+k, − j, −i, +h}
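Evaluating a differential at the four particles is a one-liner; here it reproduces the entries for f1 and f3 (tuple encoding and helper names are ours):

```python
def qmul(p, q):
    ph, pi, pj, pk = p
    qh, qi, qj, qk = q
    return (ph*qh - pi*qi - pj*qj - pk*qk,
            ph*qi + pi*qh + pj*qk - pk*qj,
            ph*qj - pi*qk + pj*qh + pk*qi,
            ph*qk + pi*qj - pj*qi + pk*qh)

h, i, j, k = (1,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,0,1)

def q_derivative(d):
    # evaluate the differential at dq = h, i, j, k
    return tuple(d(u) for u in (h, i, j, k))

df1 = lambda dq: qmul(dq, k)              # df1 = dq · k
df3 = lambda dq: qmul(qmul(i, dq), j)     # df3 = i · dq · j
assert q_derivative(df1) == (k, (0,0,-1,0), (0,1,0,0), (-1,0,0,0))   # {+k, −j, +i, −h}
assert q_derivative(df3) == (k, (0,0,-1,0), (0,-1,0,0), h)           # {+k, −j, −i, +h}
```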

Here is a concise listing of sixteen linear monomials, all with distinct Q-derivatives:

(h · q · h)Q = {+h, +i, +j, +k}
(h · q · j)Q = {+j, +k, −h, −i}
(i · q · i)Q = {−h, −i, +j, +k}
(i · q · k)Q = {−j, −k, −h, −i}
(j · q · j)Q = {−h, +i, −j, +k}
(j · q · h)Q = {+j, −k, −h, +i}
(k · q · k)Q = {−h, +i, +j, −k}
(k · q · i)Q = {+j, −k, +h, −i}
(h · q · i)Q = {+i, −h, −k, +j}
(h · q · k)Q = {+k, −j, +i, −h}
(i · q · h)Q = {+i, −h, +k, −j}
(i · q · j)Q = {+k, −j, −i, +h}
(j · q · k)Q = {+i, +h, −k, −j}
(j · q · i)Q = {−k, −j, −i, −h}
(k · q · j)Q = {−i, −h, −k, −j}
(k · q · h)Q = {+k, +j, −i, −h}

Various linear combinations of these formulas (resembling the extractive formulas above) may be helpful in a search for antiderivatives, among them:

(h · q · h − i · q · i − j · q · j − k · q · k)Q ÷ 4 = {h, 0, 0, 0}
(j · q · k − k · q · j − h · q · i − i · q · h)Q ÷ 4 = {0, h, 0, 0}
(k · q · i − j · q · h − i · q · k − h · q · j)Q ÷ 4 = {0, 0, h, 0}
(i · q · j − h · q · k − k · q · h − j · q · i)Q ÷ 4 = {0, 0, 0, h}

The four components of a Q-derivative may legitimately be regarded as directional derivatives, leading to the observation that directions other than h, i, j and k are possible. Without loss of functional information, any four linearly independent quaternions (presumably of unit norm) can be used as the directions for successive evaluation in finding a Q-derivative.

With Q-differentiation, the existence of a first derivative is no guarantee that a second derivative exists, but if it does it will have sixteen components:

{ {#, #, #, #}, {#, #, #, #}, {#, #, #, #}, {#, #, #, #} }

This suggests the Jacobian matrix, and is in line with the observation that Q-differentiable quaternion functions correspond precisely to the differentiable functions from real 4-space into itself.

For complex numbers, but not quaternions, analyticity and differentiability are equivalent. As a result, many authors (notably R. Fueter and A. Sudbery) who are not satisfied with how differentiability extends to the quaternions opt to investigate analyticity instead. Influenced by the complex Cauchy-Riemann conditions, they define f (q) to be left-regular if and only if:

h · (∂f / ∂qh) + i · (∂f / ∂qi) + j · (∂f / ∂qj) + k · (∂f / ∂qk) = 0

or right-regular if and only if:

(∂f / ∂qh) · h + (∂f / ∂qi) · i + (∂f / ∂qj) · j + (∂f / ∂qk) · k = 0

These two definitions give analogous, but distinct, structures. However, a weakness of these definitions is that they require a decision about the order of multiplication; this is similar to the choice involved with the sinisterior and dexterior variants of division above. As with division, we could devise a mixture by noting that i = j · k et cetera:

h · (∂f / ∂qh) · h + j · (∂f / ∂qi) · k + k · (∂f / ∂qj) · i + i · (∂f / ∂qk) · j = 0

Perhaps surprising is that the identity function f (q) = qh · h + qi · i + qj · j + qk · k is not left-regular:

h · h + i · i + j · j + k · k = 1 − 1 − 1 − 1 = − 2 ≠ 0

Consider the next two functions, which are left-regular:

g1(q) = ⟨ +qh, −qi, +qj, +qk ⟩
g2(q) = ⟨ +qh, +qi, −qj, +qk ⟩

Although their sum is left-regular, their product and composition are not:

g1(q) · g2(q) = ⟨ qh2 + qi2 + qj2 − qk2, 2 · qj · qk, 2 · qi · qk, 2 · qh · qk ⟩
g1(g2(q)) = ⟨ +qh, −qi, −qj, +qk ⟩
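Left-regularity can be tested numerically with central differences; the operator name lreg, the step size, and the sample point are our choices:

```python
def qmul(p, q):
    ph, pi, pj, pk = p
    qh, qi, qj, qk = q
    return (ph*qh - pi*qi - pj*qj - pk*qk,
            ph*qi + pi*qh + pj*qk - pk*qj,
            ph*qj - pi*qk + pj*qh + pk*qi,
            ph*qk + pi*qj - pj*qi + pk*qh)

h, i, j, k = (1,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,0,1)
EPS = 1e-6

def lreg(f, q):
    # h·(∂f/∂qh) + i·(∂f/∂qi) + j·(∂f/∂qj) + k·(∂f/∂qk), by central differences
    out = (0.0, 0.0, 0.0, 0.0)
    for n, u in enumerate((h, i, j, k)):
        fwd = list(q); fwd[n] += EPS
        bwd = list(q); bwd[n] -= EPS
        part = tuple((a - b) / (2 * EPS)
                     for a, b in zip(f(tuple(fwd)), f(tuple(bwd))))
        out = tuple(a + b for a, b in zip(out, qmul(u, part)))
    return out

g1 = lambda q: (q[0], -q[1], q[2], q[3])
ident = lambda q: q
q0 = (0.3, -0.7, 1.1, 0.4)
assert all(abs(c) < 1e-8 for c in lreg(g1, q0))    # g1 is left-regular
assert abs(lreg(ident, q0)[0] + 2) < 1e-8          # the identity yields −2, not 0
```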

Any functional analysis scheme that rejects the identity function, the multiplication of functions, and the composition of functions is in unusual mathematical territory indeed.

Because complex analytic functions can be expressed as power series, quaternionic polynomials have been examined in hopes of establishing similar behavior. However, there are complications. For instance, the general cubic term is:

a · q · b · q · c · q · d

where a, b, c and d are constants. Moreover, the sum of two such terms:

a1 · q · b1 · q · c1 · q · d1 + a2 · q · b2 · q · c2 · q · d2

does not simplify. Sometimes helpful in manipulations is that a series of such terms can be separated into four real series of four real arguments, although this obfuscates the quaternionic character.

Restricting the coefficients to real numbers is uninteresting, because it gives behavior isomorphic to that of the complex numbers. Somewhat more fruitful has been allowing a quaternionic coefficient to reside in only the trailing position (or equivalently, the leading position), as in this cubic polynomial:

q3 · a + q2 · b + q · c + d

Addition of such functions is unsurprising, but the obvious definition of multiplication is rejected in favor of an artifice which keeps the constants in the trailing position:

(qm · a) · (qn · b) ≠ qm · a · qn · b
(qm · a) · (qn · b) = qm+n · (a · b)

The exchange of a and qn, which values would not ordinarily be commutative, raises the question of whether we might be using two subtly different varieties of quaternion multiplication.

Recent authors (F. Colombo, C. Stoppato, and G. Gentili, among others) have investigated the slice, which is a plane in quaternion space that passes through three points:

• the origin
• a non-zero real number
• a non-real quaternion

A slice contains the real axis, and an imaginary axis perpendicular to the real axis through the origin. Because in complex variables +i is isomorphic to −i, it does not matter which direction on the imaginary axis is labeled positive, and which negative. All the quaternions on a slice are commutative in multiplication, as would be expected on the complex plane.

A function is slice-regular if it is differentiable in every slice of quaternion space.

When f is a trailing-coefficient polynomial in q, and when q and dq are restricted to the same slice, then the following definition of the derivative has meaning:

 f ′ = lim dq → 0 dq−1 · (f (q + dq) − f (q))

Here is an example with f = q3 · a:

 f ′ = lim dq → 0 dq−1 · ((q + dq)3 · a − q3 · a)

 f ′ = lim dq → 0 dq−1 · ((q + dq)3 − q3) · a

 f ′ = lim dq → 0 dq−1 · (q · q · dq + q · dq · q + dq · q · q + dq · dq · q + dq · q · dq + q · dq · dq + dq · dq · dq) · a

Because q and dq are on the same slice, they commute in multiplication, and much simplification ensues:

 f ′ = lim dq → 0 (3 · q2 + 3 · dq · q + dq2) · a

 f ′ = 3 · q2 · a

Because we did not assume any particular slice, we can say that f = q3 · a is indeed slice-regular. More generally,

d (qn · a) ÷ dq = n · qn−1 · a

While the commutativity of q and dq is essential, the constant coefficient a may lie anywhere in quaternion space.
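The slice-derivative calculation can be checked numerically. The slice direction u below is an assumed example (any unit imaginary direction would do), and the helper names are ours:

```python
def qmul(p, q):
    ph, pi, pj, pk = p
    qh, qi, qj, qk = q
    return (ph*qh - pi*qi - pj*qj - pk*qk,
            ph*qi + pi*qh + pj*qk - pk*qj,
            ph*qj - pi*qk + pj*qh + pk*qi,
            ph*qk + pi*qj - pj*qi + pk*qh)

def qinv(q):
    n2 = sum(c*c for c in q)
    return (q[0]/n2, -q[1]/n2, -q[2]/n2, -q[3]/n2)

def add(p, q): return tuple(a + b for a, b in zip(p, q))
def scale(x, q): return tuple(x * c for c in q)

a  = (0.5, -1.0, 0.25, 2.0)               # coefficient, anywhere in quaternion space
u  = (0.0, 1/3, 2/3, 2/3)                 # unit imaginary direction fixing the slice
q  = add((0.8, 0, 0, 0), scale(0.6, u))   # a point on that slice
dq = scale(1e-6, u)                       # a small step within the same slice

def f(q):                                 # f (q) = q3 · a
    return qmul(qmul(qmul(q, q), q), a)

num = tuple(x - y for x, y in zip(f(add(q, dq)), f(q)))
quotient = qmul(qinv(dq), num)            # sinisterior difference quotient
exact = scale(3, qmul(qmul(q, q), a))     # f ′ = 3 · q2 · a
assert max(abs(x - y) for x, y in zip(quotient, exact)) < 1e-4
```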