2

I am reading the book 'Perturbation Theory for Linear Operators' by Kato.

He defines in his book (Chapter 1, §5) the resolvent of an operator $T: X \to X$ on a finite-dimensional Banach space $X$ as the operator-valued function $$ R(\zeta) = (T - \zeta )^{-1} . $$ This map is only defined for those $\zeta$ that differ from every eigenvalue of $T$.

He then wants to investigate the Laurent series of $R(\zeta)$. The Laurent series is defined for any function $$ f: D\subset \mathbb{C} \to \mathbb{C} $$ which is holomorphic on the annulus $\{z \in \mathbb{C} \mid r < |z - z_0| < r'\}$ by $$ f(z) = \sum_{n= -\infty}^\infty c_n(z-z_0)^n$$ with coefficients $$ c_n = \frac{1}{2 \pi i } \int_{ |z-z_0| = \rho} \frac{f(z)}{(z-z_0)^{n+1}} \, dz $$ for an arbitrary $r < \rho <r'$.

But how can one investigate the Laurent series of the resolvent $R$? It is not a map $$D \subset \mathbb{C} \to \mathbb{C}$$ but a map $$D \subset \mathbb{C} \to \mathcal{B}(X).$$ How can one see that the equalities above still hold if $f$ is replaced by the operator-valued function $R$?
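To make the question concrete: if one interprets the coefficient integral entrywise (in a fixed basis), it can be evaluated numerically. Below is a minimal sketch assuming a specific $2\times 2$ matrix $T$ with eigenvalues $1$ and $3$; the function name `laurent_coeff` and all parameter choices are illustrative, not from the book.

```python
import numpy as np

# Hypothetical example: T has eigenvalues 1 and 3.
T = np.array([[1.0, 1.0],
              [0.0, 3.0]])
I = np.eye(2)

def laurent_coeff(n, z0=1.0, rho=0.5, samples=2000):
    """c_n = (1/(2*pi*i)) * integral of R(zeta)/(zeta-z0)^(n+1) d(zeta),
    taken entrywise, via the trapezoid rule on the circle |zeta - z0| = rho."""
    c = np.zeros((2, 2), dtype=complex)
    for th in np.linspace(0.0, 2.0*np.pi, samples, endpoint=False):
        zeta = z0 + rho*np.exp(1j*th)
        dzeta = 1j*(zeta - z0)*(2.0*np.pi/samples)   # d(zeta) along the circle
        c += np.linalg.inv(T - zeta*I) / (zeta - z0)**(n + 1) * dzeta
    return c / (2j*np.pi)

# The residue c_{-1} is minus the eigenprojection at the eigenvalue 1,
# so P = -c_{-1} should satisfy P^2 = P and T P = P.
P = -laurent_coeff(-1)
```

Here $P^2 = P$ and $TP = P$ hold up to the quadrature error, which is exactly the kind of statement the answer below justifies by reducing everything to scalar integrals.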

Sahiba Arora
  • 10,847
Adam
  • 3,679
  • I think it's a case of "use of function theory for operator-valued functions", as suggested in Note 1, p. 37 of Kato. – Emilio Novati Feb 11 '15 at 14:31
  • Since a Laurent series is absolutely convergent in its annulus of convergence, I think Theorem 5.3.2 of J. Dieudonné, Foundations of Modern Analysis, is relevant. – Emilio Novati Feb 11 '15 at 14:36

1 Answer

5

For finite-dimensional spaces, you can work in terms of matrices. This makes the transition smooth, because a matrix-valued function is really just an array of scalar functions of a variable, and you are already used to those. When you integrate a matrix $R(\lambda)=[r_{j,k}(\lambda)]_{j,k}$, where $1 \le j,k \le N$, you can just integrate each entry separately. So the integral of a matrix function is the matrix of integrals of its entries: $$ \int R(\lambda)\,d\lambda = \left[\int r_{j,k}(\lambda)\,d\lambda\right]_{j,k}. $$

Matrices are nice because you know things about inversion of matrices. For example, the inverse of an $N\times N$ matrix $A=[a_{j,k}]$ comes out of the adjugate matrix $A_{\mbox{adj}}=[(-1)^{j+k}A_{k,j}]$, where $A_{j,k}$ is the determinant of the $(N-1)\times(N-1)$ minor obtained by deleting the $j$-th row and $k$-th column of $A$ (note that the swap of $j,k$ in the adjugate of $A$ is deliberate). This is because $$ AA_{\mbox{adj}}=A_{\mbox{adj}}A = \mbox{det}(A)I, $$ where $I$ is the $N\times N$ identity matrix. Therefore $A$ is invertible iff $\mbox{det}(A)\ne 0$ and, in that case, the inverse is $$ A^{-1}=\frac{1}{\mbox{det}(A)}A_{\mbox{adj}}. $$ That gives you a way to study the resolvent operator through the characteristic polynomial $p(\lambda)=\mbox{det}(\lambda I- A)=(-1)^{N}\mbox{det}(A-\lambda I)$.
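As a quick sanity check of the adjugate identity, here is a sketch in NumPy; the helper `adjugate` and the $3\times 3$ test matrix are mine, chosen only for illustration.

```python
import numpy as np

def adjugate(A):
    """Adjugate via minors: entry (j,k) is (-1)^(j+k) times the determinant
    of A with row k and column j deleted (note the transposed indices)."""
    N = A.shape[0]
    adj = np.empty_like(A)
    for j in range(N):
        for k in range(N):
            minor = np.delete(np.delete(A, k, axis=0), j, axis=1)
            adj[j, k] = (-1)**(j + k) * np.linalg.det(minor)
    return adj

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 4.0]])
adjA = adjugate(A)
# A @ adj(A) = adj(A) @ A = det(A) * I, so A^{-1} = adj(A)/det(A) when det(A) != 0
```

The two products `A @ adjA` and `adjA @ A` both come out as $\det(A)$ times the identity, which is the identity used above to write down the inverse.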
The resolvent operator for $A$ is $$ (A-\lambda I)^{-1} = \frac{(-1)^{N}}{p(\lambda)}(A-\lambda I)_{\mbox{adj}}. $$ All of the entries of the adjugate $(A-\lambda I)_{\mbox{adj}}$ are determinants formed from the entries of $(A-\lambda I)$, which means it is a matrix whose entries are polynomials $q_{j,k}(\lambda)$ of degree at most $N-1$: $$ (A-\lambda I)_{\mbox{adj}}=[q_{j,k}(\lambda)]. $$ By expanding all of the entries and gathering like powers into separate matrices, you get $$ (A-\lambda I)_{\mbox{adj}}=A_{0}+\lambda A_{1}+\lambda^{2}A_{2}+\cdots+\lambda^{N-1}A_{N-1}, $$ where the $A_{j}$, $0 \le j \le N-1$, are constant matrices. So the resolvent always has the form $$ (A-\lambda I)^{-1} = (-1)^{N}\left[\frac{1}{p}A_{0}+\frac{\lambda}{p}A_{1}+\cdots+\frac{\lambda^{N-1}}{p}A_{N-1}\right]. $$ Now when you integrate, you can treat the matrices $A_{j}$ as constants and only worry about scalar integration of the coefficient functions $\lambda^{n}/p(\lambda)$. Everything reduces to scalar integrals with big fat constants--which happen to be matrices--along for the ride.

You can show that $$ q(A) = \frac{1}{2\pi i}\oint_{C} q(\lambda)(\lambda I-A)^{-1}d\lambda = -\frac{1}{2\pi i}\oint_{C} q(\lambda)(A-\lambda I)^{-1}d\lambda $$ for any scalar polynomial $q$ and any simple closed, positively-oriented contour $C$ that encloses all of the roots of the characteristic polynomial $p$ in its interior. In particular, because $p(\lambda)(A-\lambda I)^{-1}$ has only removable singularities, you get the Cayley–Hamilton theorem right away: $$ p(A) = 0. $$
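The contour formula for $q(A)$ can also be checked numerically, entry by entry. Below is a minimal sketch assuming a specific $2\times 2$ matrix with eigenvalues $2$ and $-1$; the trapezoid-rule discretization and the name `contour_poly` are my own choices, not part of the argument above.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, -1.0]])   # eigenvalues 2 and -1
I = np.eye(2)

def contour_poly(q, radius=3.0, samples=4000):
    """(1/(2*pi*i)) * integral of q(lambda) (lambda I - A)^{-1} d(lambda)
    over the circle |lambda| = radius (enclosing both eigenvalues),
    evaluated entrywise with the trapezoid rule."""
    out = np.zeros((2, 2), dtype=complex)
    for th in np.linspace(0.0, 2.0*np.pi, samples, endpoint=False):
        lam = radius*np.exp(1j*th)
        dlam = 1j*lam*(2.0*np.pi/samples)   # d(lambda) along the circle
        out += q(lam) * np.linalg.inv(lam*I - A) * dlam
    return out / (2j*np.pi)

# q(lambda) = lambda^2 should reproduce A @ A ...
Q = contour_poly(lambda lam: lam**2)

# ... and the characteristic polynomial p(lambda) = (lambda-2)(lambda+1)
# should give the zero matrix: Cayley-Hamilton.
P0 = contour_poly(lambda lam: (lam - 2.0)*(lam + 1.0))
```

Up to quadrature error, `Q` matches `A @ A` and `P0` vanishes, which is exactly $p(A)=0$.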

Disintegrating By Parts
  • 87,459
  • 5
  • 65
  • 149