Primary Tool
Proof Builder
The proof is short but easy to blur when written as one paragraph. In the Proof Builder, the cancellation step and the inductive step stay visually separate, which is the whole argument.
What the theorem says
If $v_1, \dots, v_n$ are eigenvectors of a linear map $T$ and their eigenvalues $\lambda_1, \dots, \lambda_n$ are all distinct, then those eigenvectors are automatically linearly independent.
That fact is one of the main reasons distinct eigenvalues are so useful: once the eigenvalues separate, the eigenvectors stop interfering with one another.
Why distinct eigenvalues matter
Suppose there were a linear relation $a_1 v_1 + \cdots + a_n v_n = 0$. Applying $T$ does not scramble the vectors; it only multiplies each one by its own eigenvalue, giving $a_1 \lambda_1 v_1 + \cdots + a_n \lambda_n v_n = 0$.
That is the opening the proof needs. Once you subtract $\lambda_n$ times the original relation, the $v_n$ term disappears and the relation shrinks to one involving only $v_1, \dots, v_{n-1}$.
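The "scaling, not scrambling" behavior and the subtraction trick can be checked numerically. The matrix, eigenvectors, and coefficients below are an illustrative choice, not taken from the text:

```python
import numpy as np

# A symmetric matrix with distinct eigenvalues 1 and 3 (illustrative choice).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
v1 = np.array([1.0, -1.0])   # eigenvector for eigenvalue 1
v2 = np.array([1.0,  1.0])   # eigenvector for eigenvalue 3

# Applying A does not mix the eigenvectors: it scales each one.
assert np.allclose(A @ v1, 1.0 * v1)
assert np.allclose(A @ v2, 3.0 * v2)

# The cancellation move: take a combination a1*v1 + a2*v2,
# apply A, then subtract lambda2 = 3 times the original combination.
a1, a2 = 0.7, -0.4
relation = a1 * v1 + a2 * v2
after_A = A @ relation              # equals a1*1*v1 + a2*3*v2
shrunk = after_A - 3.0 * relation   # the v2 term cancels

# Only a multiple of v1 survives, with coefficient a1*(1 - 3).
assert np.allclose(shrunk, a1 * (1.0 - 3.0) * v1)
```

The final assertion is exactly the shrunken relation from the proof: every trace of $v_2$ is gone, and the surviving coefficient carries the nonzero factor $\lambda_1 - \lambda_2$.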
The proof strategy
Everything turns on the same move: create two equations from the same relation, then subtract them in a way that kills one eigenvector.
1. Assume there is a linear relation $a_1 v_1 + \cdots + a_n v_n = 0$ among the eigenvectors and apply $T$ to it.
2. Use $T v_i = \lambda_i v_i$ to rewrite the new equation with eigenvalue coefficients attached: $a_1 \lambda_1 v_1 + \cdots + a_n \lambda_n v_n = 0$.
3. Subtract $\lambda_n$ times the original relation so that the $v_n$ term cancels.
4. Notice that the remaining coefficients involve $a_i(\lambda_i - \lambda_n)$, where each factor $\lambda_i - \lambda_n$ is nonzero because the eigenvalues are distinct.
5. Apply the inductive hypothesis to the shortened relation to force $a_1, \dots, a_{n-1}$ to vanish.
6. Return to the original relation to conclude $a_n = 0$ as well, since $v_n \neq 0$.
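The conclusion of the induction, that eigenvectors for distinct eigenvalues are independent, is easy to spot-check: the matrix whose columns are the eigenvectors must have full rank. The triangular matrix below is an arbitrary example chosen so the eigenvalues can be read off the diagonal:

```python
import numpy as np

# Triangular matrix: its eigenvalues 4, 2, 1 sit on the diagonal
# and are distinct (illustrative choice, not from the text).
A = np.array([[4.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 1.0]])
eigvals, eigvecs = np.linalg.eig(A)

# Distinct eigenvalues...
assert len(set(np.round(eigvals, 6))) == 3

# ...so the theorem predicts independent eigenvectors:
# the eigenvector matrix has full rank.
rank = np.linalg.matrix_rank(eigvecs)
assert rank == 3
```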
Common Pitfall
Distinct eigenvalues are a sufficient condition for independence, not a necessary one. Eigenvectors can still be independent even when some eigenvalues repeat, but then you need extra work to prove it.
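The identity matrix is the simplest illustration of "sufficient, not necessary": its only eigenvalue is 1, repeated, yet it still has a full set of independent eigenvectors.

```python
import numpy as np

# The 2x2 identity has eigenvalue 1 with multiplicity 2,
# but every nonzero vector is an eigenvector, so an
# independent eigenbasis exists anyway.
I = np.eye(2)
eigvals, eigvecs = np.linalg.eig(I)

assert np.allclose(eigvals, [1.0, 1.0])      # repeated eigenvalue
assert np.linalg.matrix_rank(eigvecs) == 2   # still independent
```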
Try a Variation
Look at a matrix with a repeated eigenvalue, such as the shear matrix $\begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix}$. Does the repeated eigenvalue still give two independent eigenvectors, or does the argument break?
Related Pages
Keep moving through the cluster
Compute eigenvalues and eigenvectors of a matrix
A first serious 3 by 3 eigenproblem should show new structure without burying the point in arithmetic. This one does that: you see the characteristic polynomial, the eigenspaces, and the diagonalization test in one pass.
Open worked example →
What linear independence means geometrically
Linear independence is the condition that nothing in the list is wasted. Geometrically, each new vector must add a new direction instead of repeating one you already had.
Open explanation →
What eigenvectors and eigenvalues mean geometrically
An eigenvector is a direction that survives a linear transformation without rotating away from its own line. The eigenvalue tells you the scale and possible sign flip along that direction.
Open explanation →
What a basis does in linear algebra
A basis is what turns a vector space into something you can navigate. It reaches every vector, and it does so without redundancy, so coordinates become possible.
Open explanation →