10

Given a multivector, what is the easiest way to compute its inverse? To take a concrete example, consider the bivector $ B = e_1(e_2 + e_3) $. To compute $ B^{-1} $, I can use the dual of $ B $: $$ B = e_1e_2e_3e_3 + e_1e_2e_2e_3 = I(e_3-e_2) = Ib $$ $$ BB^{-1} = 1 = Ib B^{-1} $$ $$ B^{-1} = -b^{-1}I = -\frac{b}{b^2}I$$ But this won't work for a bivector in 4 dimensions, for example. Is there a more general or easier way?

Raskolnikov
  • 16,108
user997712
  • 195

7 Answers

9

Not sure where you got the idea that inverses should involve duality. Usually this is done merely through reversion. Let $B^\dagger$ denote the reverse of $B$. Then the inverse is

$$B^{-1} = \frac{B^\dagger}{B B^\dagger}$$

For a bivector, $B^\dagger = -B$. I believe this works for any object that can be written as a geometric product of vectors (i.e., one that can be factored, which is why it works for rotors and spinors), but don't quote me on that. Of course, in mixed-signature spaces, anything that has a null factor is not invertible.
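
Applied to the bivector from the question (with the Euclidean signature assumed there), $B = e_1e_2 + e_1e_3$ gives

$$BB^\dagger = -B^2 = -(e_1e_2 + e_1e_3)^2 = 2, \qquad B^{-1} = \frac{B^\dagger}{BB^\dagger} = -\frac{e_1e_2 + e_1e_3}{2},$$

which agrees with the dual-based computation, since $-\frac{b}{b^2}I = -\tfrac{1}{2}(e_3 - e_2)I = -\tfrac{1}{2}I(e_3 - e_2) = -\tfrac{1}{2}B$ (the pseudoscalar is central in three dimensions).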

Muphrid
  • 19,902
  • what does it mean to have a null factor? – user997712 Jul 14 '13 at 21:36
  • That there is a null vector as one of the factors of a geometric product. Consider a mixed signature space with a basis vector $e_0$ such that $e_0 e_0 = -1$. A vector of the form $u = e_0 \pm e_1$ would be null, in the sense that $uu = 0$. Clearly, $u$'s inverse is no longer a multiple of itself. Indeed, it has no inverse. – Muphrid Jul 14 '13 at 21:43
  • Can a vector $ e $ exist for which $ ee = -1 $? Do you mean to say a blade which is in the basis? – user997712 Jul 14 '13 at 21:52
  • In a purely Euclidean space, no, there are no such vectors. Spaces in which these problems can arise are fundamentally different from Euclidean space (see, for example, Minkowski space and the associated GA on that space, which is called the spacetime algebra, or STA). – Muphrid Jul 14 '13 at 22:01
  • Will $B B^\dagger$ always result in a scalar? – HelloGoodbye Feb 25 '24 at 17:01
6

Given a multivector $a$ that you want to invert, the function $x \mapsto a x$ is a linear transformation (of the algebra viewed as a $2^n$ dimensional vector space), right?

So, an uninspired but reliable way to find $a^{-1}$ would be to express $a$ as a $2^n$ by $2^n$ matrix $A$ (that is, the matrix $A$ whose $2^n$ columns are $a$ times each of the $2^n$ basis multivectors), and solve the linear equation $A x = 1$; then the solution $x$ is the desired $a^{-1}$. If there is no solution, then $A$ doesn't have an inverse, which means $a$ doesn't have an inverse.

Note that this always gives the answer if there is one, even in cases where the $a^\dagger/(a a^\dagger)$ method fails due to $a a^\dagger$ not being a scalar (where $a^\dagger$ denotes the reverse of $a$). For example, if $a=2+e_1$, then $a^{-1}=(2-e_1)/3$.
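
For concreteness, here is a minimal NumPy sketch of this idea, assuming a Euclidean $Cl(n)$ (every basis vector squares to $+1$) and representing basis blades as bitmasks; the helper names are ad hoc, not from any particular library:

```python
import numpy as np

def blade_product(a, b):
    """Geometric product of two basis blades (bitmasks) in Euclidean Cl(n).
    Returns (sign, resulting blade)."""
    sign = 1
    t = a >> 1
    while t:                      # count the swaps needed to reorder the factors
        if bin(t & b).count("1") % 2:
            sign = -sign
        t >>= 1
    return sign, a ^ b            # e_i e_i = +1, so repeated vectors cancel

def left_mult_matrix(a, n):
    """Matrix of the linear map x -> a x on the 2^n-dimensional algebra."""
    N = 1 << n
    A = np.zeros((N, N))
    for col in range(N):          # image of each basis blade e_col
        for i, coeff in enumerate(a):
            if coeff:
                sign, res = blade_product(i, col)
                A[res, col] += sign * coeff
    return A

def inverse(a, n):
    """Solve A x = 1 for the coefficient vector of a^{-1} (raises if A is singular)."""
    e0 = np.zeros(1 << n)
    e0[0] = 1.0                   # the unit scalar
    return np.linalg.solve(left_mult_matrix(a, n), e0)

# Example from above: a = 2 + e1 in Cl(3); expect (2 - e1)/3.
n = 3
a = np.zeros(1 << n)
a[0b000], a[0b001] = 2.0, 1.0     # scalar part and e1 part
print(inverse(a, n))              # ≈ [ 0.667 -0.333  0  0  0  0  0  0 ], i.e. (2 - e1)/3
```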

Don Hatch
  • 1,047
5

An algorithm to calculate the inverse of a general multivector:

Start with an invertible general multivector $X$ of Clifford's geometric algebra over a space of orthogonal unit vectors. Post-multiply repeatedly by a "suitable" Clifford number until a scalar $S$ is reached; let the product of the post-multipliers be $R$.

Then we have $XR = S$.

Pre-multiplying both sides by the required inverse $X^{-1}$ and dividing by the scalar $S$ gives:

$X^{-1} = R/S$, which was to be determined.

For a "suitable" general multivector or Clifford number we try the Reverse or the Clifford conjugate. I notice that (X)(Xrev), for instance, results in only grades invariant to reversion; perhaps this would have been obvious beforehand to a mathematician. This elementary process works up to dimension 5, but fails at dimension 6. I have since seen 2 or 3 papers on the web which seem to agree with this result - but no one comments on it. Above dimension 5 it seems something more sophisticated is needed.

An example in dimension 5: $A A_{\mathrm{rev}} = B$ gives only the grades $B_0 + B_1 + B_4 + B_5$.

$B_0$ is the scalar part and $B_1$ a vector; $B_4$ and $B_5$ are the 4-vector and the pseudoscalar.

In dimension 5 the pseudoscalar commutes with all vectors and squares to $+1$; as a result we can use duality to rearrange $B$ as a paravector with coefficients in the duplex numbers (also known as hyperbolic, perplex or Study numbers), that is, as $D_0 + D_1$.

Multiply by $D_0 - D_1$ to reach a duplex number, which is readily reduced to a scalar.
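
Spelling out that last reduction (still assuming the Euclidean signature, so the pseudoscalar $I$ is central and $I^2 = +1$): write $B_4 = Iw$ for a vector $w$ and $B_5 = I\beta$ for a scalar $\beta$, so that
$$ B = \underbrace{(B_0 + I\beta)}_{D_0} + \underbrace{(B_1 + Iw)}_{D_1}. $$
Since $D_0$ is central,
$$ (D_0 + D_1)(D_0 - D_1) = D_0^2 - D_1^2, \qquad D_1^2 = B_1^2 + w^2 + 2(B_1\cdot w)\,I, $$
which is again of the form $x + yI$, a duplex number; when $x^2 \neq y^2$ its inverse is simply $(x - yI)/(x^2 - y^2)$.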

For dimension 6 and above I found the following "in principle" process, but it doesn't look to me like an efficient one:

In dimension 6 (and above), arrange $X$ as $A + Bn$, where $A$ and $B$ lie in the dimension-5 subalgebra and $n$ is one of the unit vectors, $e_6$ for instance. Post-multiply by some $C + Dn$ chosen so as to remove $e_6$ from the result. This can be done by something looking rather like a projection operator, as discussed by Bouma. Repeat the process to step down the dimensions. I don't see why this shouldn't be extended to as high a dimension as required.

  • It doesn't quite work in dimension 4: if you multiply by reverse, it zeros out grades 2,3 so you get scalar plus vector plus antiscalar $s+v+S$; or if you multiply by clifford conjugate, it zeros out grades 1,2 so you get scalar plus antivector plus antiscalar $s+V+S$. Neither of those make any further progress by multiplying by either "suitable" multivector, i.e. its reverse or its clifford conjugate. But you can fix that by adding a third kind of "suitable" multivector, that is, the multivector obtained by negating just the scalar part. That is, $-s+v+S$ or $-s+V+S$ respectively. – Don Hatch Nov 12 '18 at 18:14
1

If your multivector is a product of vectors ($x = x_1x_2\cdots{}x_n$), then the inverse is given by

\begin{align} x^{-1} & = \left(x_1x_2\cdots{}x_n\right)^{-1}\\ & = x_n^{-1}\cdots{}x_1^{-1}. \end{align}

Since non-null vectors are always invertible ($a^{-1} = \frac{a}{a^2}$), you can compute your inverse from that.
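
For the bivector in the question (Euclidean signature), for example,
$$ \bigl(e_1(e_2+e_3)\bigr)^{-1} = (e_2+e_3)^{-1}\,e_1^{-1} = \frac{e_2+e_3}{2}\,e_1 = -\frac{e_1e_2 + e_1e_3}{2}. $$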

1

This question is fully answered by the paper cited below. The authors do not give a uniform, sleek formula that works for all dimensions. Instead, there are separate formulas for each of the small dimensions, and then you can use certain isomorphisms between the Clifford algebras to handle higher dimensions.

Eckhard Hitzer, Stephen Sangwine, Multivector and multivector matrix inverses in real Clifford algebras, Applied Mathematics and Computation, Volume 311, 2017, Pages 375-389, ISSN 0096-3003, https://doi.org/10.1016/j.amc.2017.05.027.

First, let us define several involutions of the $n$-dimensional geometric algebra with signature $(p,q)$, where $p+q=n$. Every multivector $M$ can be written as $$ M=M_0+M_1+\dots+M_n, $$ where each $M_k$ is a linear combination of $k$-blades. Define the hat involution (grade involution) via $$ \widehat M= \sum_{k=0}^n (-1)^k M_k. $$ Then, let $M^\dagger$ denote the reversion of $M$, in which each wedge of vectors comprising $M$ is written in reverse order. Equivalently, $$ M^\dagger =\sum_{k=0}^n (-1)^{k(k-1)/2}M_k. $$ Next, we can save some notation by defining the Clifford conjugate of $M$, written $\overline M$, as the composition of the two previous involutions. Explicitly, $$ \overline M= \widehat{M^\dagger}= (\widehat M)^\dagger =\sum_{k=0}^n (-1)^{k(k+1)/2} M_k. $$ Finally, for any integers $j,k$ with $1\le j<k\le n$, let $m_{j,k}$ denote the involution which negates only the grade-$j$ and grade-$k$ parts of $M$. That is, $$ m_{j,k}(M)=M-2M_j-2M_k. $$

One and two dimensions

For small dimensions, it turns out that $M\overline M$ is always a scalar. This implies that $$ M^{-1}=\frac{\overline M}{M\overline M} $$ In dimension $1$, reversion is trivial, so this is equivalent to $M^{-1}=\widehat M/(M\widehat M)$.
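
As a quick check, take $M = 2 + e_1$ with signature $(1,0)$: then $\overline M = 2 - e_1$, so
$$ M\overline M = (2 + e_1)(2 - e_1) = 4 - e_1^2 = 3, \qquad M^{-1} = \frac{2 - e_1}{3}, $$
which matches the example $a = 2 + e_1$ in the matrix answer above.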

Three dimensions

Here, it turns out $M\overline M\widehat M M^\dagger$ is always a scalar, so $$ M^{-1}=\frac{\overline M\widehat M M^\dagger}{M\overline M\widehat M M^\dagger} $$
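
For example, take $M = 1 + e_{123}$ in the Euclidean algebra, where the pseudoscalar $e_{123}$ is central and squares to $-1$. Then $M\overline M = (1 + e_{123})^2 = 2e_{123}$ is not a scalar, but
$$ M\overline M\widehat M M^\dagger = 2e_{123}\,(1 - e_{123})^2 = 4, \qquad M^{-1} = \frac{\overline M\widehat M M^\dagger}{4} = \frac{2(1 - e_{123})}{4} = \frac{1 - e_{123}}{2}. $$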

Four dimensions

Here is where things start to get less elegant.

$$ M^{-1}=\frac{\overline M m_{3,4}(M\overline M)}{M\overline M m_{3,4}(M\overline M)} $$
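
For instance, take $M = 2 + e_{1234}$ in the Euclidean algebra, where $e_{1234}^2 = +1$. Then $\overline M = 2 + e_{1234}$ and $M\overline M = 5 + 4e_{1234}$ is not a scalar, but
$$ m_{3,4}(M\overline M) = 5 - 4e_{1234}, \qquad M\overline M\, m_{3,4}(M\overline M) = 25 - 16 = 9, $$
so
$$ M^{-1} = \frac{(2 + e_{1234})(5 - 4e_{1234})}{9} = \frac{6 - 3e_{1234}}{9} = \frac{2 - e_{1234}}{3}. $$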

Five dimensions

The inverse in five dimensions is calculated as follows:

$$ M^{-1}=\frac{\overline M\widehat M M^\dagger m_{1,4}(M\overline M\widehat M M^\dagger )}{M\overline M\widehat M M^\dagger m_{1,4}(M\overline M\widehat M M^\dagger )} $$

The paper also explains how to compute the inverse in arbitrarily high dimensions, using certain isomorphisms between the various Clifford algebras. You will need to refer to the paper for the details.

Mike Earnest
  • 75,930
1

With respect to the geometric product, if $B$ is a non-null versor (possibly in a mixed-signature GA), the inverse of $B$ will be:

$$B^{-1} = \frac{B^\dagger}{B B^\dagger}$$

For a null versor, the inverse with respect to the geometric product does not exist.
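
For example, as noted in the comments above, in a mixed-signature algebra with $e_0^2 = -1$ and $e_1^2 = +1$, the vector $u = e_0 + e_1$ is null ($u u^\dagger = u^2 = 0$), so the quotient is undefined and $u$ indeed has no inverse.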

0

The most straightforward answer is that one can just convert the multivector into matrix form and invert the matrix.

A multivector matrix represents the multivector: addition and multiplication still work as usual, and the multivector can be recovered by reading off one of the columns of the matrix.

$$a_1 + a_{e_1}e_1 + a_{e_2}e_2 + a_{e_{12}}e_{12} \rightarrow \begin{pmatrix} a_1 & a_{e_1} & a_{e_2} & -a_{e_{12}}\\ a_{e_1} & a_1 & a_{e_{12}} & -a_{e_2}\\ a_{e_2} & -a_{e_{12}} & a_1 & a_{e_1}\\ a_{e_{12}} & -a_{e_2} & a_{e_1} & a_1 \end{pmatrix}$$

You can see that the first column contains the components of the multivector, and that multiplication with another multivector matrix produces a result equivalent to regular multivector multiplication. Multiplying by $(1, 0, 0, 0)^T$ picks out the first column.

Gauss-Jordan elimination can be used on the following augmented matrix to find the inverse.

$$\begin{pmatrix} a_1 & a_{e_1} & a_{e_2} & -a_{e_{12}} & 1\\ a_{e_1} & a_1 & a_{e_{12}} & -a_{e_2} & 0\\ a_{e_2} & -a_{e_{12}} & a_1 & a_{e_1} & 0\\ a_{e_{12}} & -a_{e_2} & a_{e_1} & a_1 & 0 \end{pmatrix}$$

In this example, the inverse multivector, written in column form in the order $1, e_1, e_2, e_{12}$, is:

$$\frac{1}{-a_1^2+a_{e_1}^2+a_{e_2}^2-a_{e_{12}}^2}\begin{pmatrix} -a_1 \\ a_{e_1}\\ a_{e_2}\\ a_{e_{12}}\\ \end{pmatrix}$$
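
As a quick numerical sanity check of this construction, here is a small NumPy sketch (the variable names are ad hoc) that builds the matrix for a concrete multivector and solves against $(1,0,0,0)^T$:

```python
import numpy as np

# Coefficients of a sample multivector a1 + ae1*e1 + ae2*e2 + ae12*e12
a1, ae1, ae2, ae12 = 1.0, 2.0, 3.0, 4.0

# The multivector matrix from above (left multiplication, basis order 1, e1, e2, e12)
A = np.array([
    [a1,    ae1,   ae2,  -ae12],
    [ae1,   a1,    ae12, -ae2 ],
    [ae2,  -ae12,  a1,    ae1 ],
    [ae12, -ae2,   ae1,   a1  ],
])

# Solving A x = (1,0,0,0)^T gives the column form of the inverse multivector
x = np.linalg.solve(A, np.array([1.0, 0.0, 0.0, 0.0]))
print(x)  # ≈ [ 0.25 -0.5  -0.75 -1.  ]

# Agrees with the closed form above
d = -a1**2 + ae1**2 + ae2**2 - ae12**2
print(np.array([-a1, ae1, ae2, ae12]) / d)
```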