# Commuting matrices are simultaneously triangularizable

Recently, georgiosl posted a proof that commuting square matrices over the complex field are simultaneously triangularizable in the PM encyclopedia:

http://planetmath.org/encyclopedia/ProofOfCommutingMatricesAreSimultanen...

The result is known to hold with the complex field replaced by an arbitrary algebraically closed field (I saw it on an old qualifying exam). I am wondering how to prove it. Perhaps one can show that if A and B are commuting matrices over an algebraically closed field then there are enough simultaneous pseudo-eigenvectors of A and B to span the whole space. Hence, there is a basis in which both A and B are expressed as upper triangular matrices (in Jordan Canonical form). However, for some reason I am having difficulties writing this out rigorously. I feel like I am missing something very basic. Any suggestions?
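
A common route to a rigorous proof is the lemma that commuting matrices over an algebraically closed field share a common eigenvector: any eigenspace of A is invariant under B (since AB = BA), so B restricted to that eigenspace has an eigenvector of its own, which is then an eigenvector of both. Here is a small numerical sketch of that lemma over C; the matrices A and B are my own toy example, not from the entry:

```python
import numpy as np

# Toy commuting pair: A has a 2-dimensional eigenspace for
# eigenvalue 1, and B acts nontrivially on it.  AB = BA holds.
A = np.diag([1.0, 1.0, 2.0])
B = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 3.0]])
assert np.allclose(A @ B, B @ A)

# Step 1: pick an eigenvalue lam of A and a basis E of its eigenspace.
lam = 1.0
E = np.eye(3)[:, :2]          # eigenspace of A for lam = 1 is span(e1, e2)

# Step 2: since AB = BA, B maps this eigenspace into itself, so B
# restricts to it: B @ E = E @ B_restricted.
B_restricted = np.linalg.lstsq(E, B @ E, rcond=None)[0]

# Step 3: any eigenvector of the restricted B lifts to a common
# eigenvector of A and B.
mu, W = np.linalg.eig(B_restricted)
v = E @ W[:, 0]               # common eigenvector of A and B

assert np.allclose(A @ v, lam * v)
assert np.allclose(B @ v, mu[0] * v)
```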

Thanks.

### Re: Commuting matrices are simultaneously triangularizable

Thank you.

Getting the triangularizations block diagonal is actually exactly what I expected, since I feel like this is basically an exercise in Jordan Canonical form (as I speculated in the original post). But I'm having trouble writing a legitimate proof along those lines.

I'll study the proof you presented more carefully.

### Re: Commuting matrices are simultaneously triangularizable

> Recently, georgiosl posted a proof that commuting square
> matrices over the complex field are simultaneously
> triangularizable in the PM encyclopedia:
> http://planetmath.org/encyclopedia/ProofOfCommutingMatricesAreSimultanen...
>
> The result is known to hold with the complex field replaced
> by an arbitrary algebraically closed field (I saw it on an
> old qualifying exam). I am wondering how to prove it.

Note that the theorem proved by georgiosl is stronger. It says
that these matrices are *unitarily* simultaneously triangularizable.

This would be hard to generalize to an arbitrary algebraically closed field. What is missing is the notion of a Hermitian inner product, or equivalently the notion of a positive definite sesquilinear form.

But if you don't care about unitarity, just follow georgiosl's proof, except that instead of splitting the vector space as span{x} + X with X the orthogonal complement, take X to be any subspace such that the internal direct sum of span{x} and X is the whole space.
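
This modified splitting can be illustrated on a toy computation: given a common eigenvector x, any complement X of span{x} (orthogonal or not) makes both matrices block upper triangular in the basis (x, basis of X), which is the induction step. A sketch with a hypothetical commuting pair of my own choosing:

```python
import numpy as np

# One induction step of the argument above.  A and B are a small
# made-up commuting pair, and x is a common eigenvector of both.
A = np.diag([1.0, 1.0, 2.0])
B = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 3.0]])
assert np.allclose(A @ B, B @ A)

x = np.array([1.0, 1.0, 0.0])   # A x = x and B x = x
assert np.allclose(A @ x, x) and np.allclose(B @ x, x)

# Any complement X of span{x} works -- it need not be the orthogonal
# complement.  Here X = span{e2, e3}, which is NOT orthogonal to x.
P = np.column_stack([x, np.eye(3)[:, 1], np.eye(3)[:, 2]])
Pinv = np.linalg.inv(P)

# In the basis (x, e2, e3) both matrices become block upper
# triangular: the first column is (eigenvalue, 0, 0)^T, and the
# induction continues on the 2x2 lower-right blocks.
A1 = Pinv @ A @ P
B1 = Pinv @ B @ P
assert np.allclose(A1[1:, 0], 0)
assert np.allclose(B1[1:, 0], 0)
```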

Hope this helps.

### Re: Commuting matrices are simultaneously triangularizable

I have a proof of this written down (over an arbitrary field, stating that the triangularisation exists over any field extension in which all the characteristic polynomials split), which I should probably upload to PM anyway, but I'm not sure how to integrate it with the existing entries. Any suggestions?

Perhaps the entry "Commuting matrices are simultaneously triangularizable" should really be renamed "Commuting matrices are simultaneously *unitarily* triangularizable", since that is what it states. Then a theorem without the unitarity would fit in nicely.

Another thing which troubles me slightly is that it is really possible to prove something even stronger: the triangularisation can be made block diagonal, where each block corresponds to a particular eigenvalue for each matrix (i.e., if two diagonal positions belong to the same block, then in each triangularised matrix of the set these two diagonal elements have the same value). I haven't got a proof of that written down yet, though.
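
That stronger, block-diagonal statement hinges on the fact that, when AB = BA, each generalized eigenspace ker((A - lam I)^n) of A is invariant under B, because B commutes with (A - lam I)^n. A quick numerical check of that invariance on a made-up example:

```python
import numpy as np

# Made-up commuting pair: A has a nondiagonal Jordan block for
# eigenvalue 1, plus the separate eigenvalue 2.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 2.0]])
B = np.array([[5.0, 7.0, 0.0],
              [0.0, 5.0, 0.0],
              [0.0, 0.0, 9.0]])
assert np.allclose(A @ B, B @ A)

n = A.shape[0]
lam = 1.0
# Generalized eigenspace of A for lam: ker((A - lam I)^n), computed
# here from the right-singular vectors belonging to the (numerically)
# zero singular values.
M = np.linalg.matrix_power(A - lam * np.eye(n), n)
_, s, Vt = np.linalg.svd(M)
basis = Vt[s < 1e-10].T        # columns span ker((A - lam I)^n)

# B maps this subspace into itself:
# (A - lam I)^n (B v) = B (A - lam I)^n v = 0.
assert np.allclose(M @ B @ basis, 0)
```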

Having written that far, it occurred to me that I could create a
"collaboration" with the material I have so that you can at least view
it. It should be available at the following URL:
http://planetmath.org/?op=getobj&from=collab&id=57

### Not Jordan form

The parenthetical remark (in Jordan Canonical form) in the original question is wrong. Matrices that can be simultaneously triangularised can almost never be put into Jordan Canonical form simultaneously. The notion is too rigid for that; it suffices to observe that a scalar multiple of a (non-diagonal) JCF is not a JCF (and two matrices being scalar multiples of one another is basis-independent).
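
For concreteness, a minimal instance of that observation (my own example):

```latex
% J is in Jordan canonical form, but 2J is not: a JCF can only have
% 0 or 1 on the superdiagonal.  Since J and 2J commute, they are
% simultaneously triangularizable, yet no basis puts both in JCF.
J = \begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix},
\qquad
2J = \begin{pmatrix} 2\lambda & 2 \\ 0 & 2\lambda \end{pmatrix}.
```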

### Non-linear failure functions

All the failure functions given in the examples (see sketch proof) are linear. Let me give an example of a non-linear failure function: let the mother function be the quadratic f(x) = x^2 + 1. When x = 4 we get the linear failure function x = 4 + 17k (k belongs to N). The corresponding non-linear failure function x = 867 + 2312k + 4913k^2 yields values of x such that, when they are substituted in the mother function, we get f(x) congruent to 0 (mod 289).

### Non-linear failure functions

A correction: the non-linear failure function is x = 38 + 17^2*k (k belongs to N). The values of x generated by this failure function are such that f(x) is congruent to 0 (mod 289).
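
The congruences claimed in these two posts can be checked mechanically. A quick verification sketch of my own, for the linear failure function from the earlier post and the corrected function from this one:

```python
# Mother function from the posts above: f(x) = x^2 + 1.
def f(x):
    return x * x + 1

# Linear failure function: x = 4 + 17k gives f(x) divisible by 17
# (f(4) = 17, and f(4 + 17k) = 17 + 136k + 289k^2).
assert all(f(4 + 17 * k) % 17 == 0 for k in range(100))

# Corrected failure function: x = 38 + 17^2 * k gives f(x) divisible
# by 289 = 17^2 (f(38) = 1445 = 5 * 289).
assert all(f(38 + 289 * k) % 289 == 0 for k in range(100))
```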