Primary Tool
Math Workspace (CAS)
A CAS is useful here because every projection coefficient has to be exact. The embed shows the vector subtractions and then checks the dot products, so the method is not reduced to memorizing a formula.
Problem
Apply Gram-Schmidt to the three vectors $v_1$, $v_2$, and $v_3$ loaded in the workspace above.
The goal is not to change the subspace. These vectors already form a basis of $\mathbb{R}^3$; the goal is to produce a new basis for the same space whose vectors meet at right angles.
Projection idea
Set $u_1 = v_1$. To build $u_2$, remove from $v_2$ the component that points along $u_1$: $u_2 = v_2 - \frac{v_2 \cdot u_1}{u_1 \cdot u_1}\, u_1$.
For $u_3$, remove from $v_3$ the components pointing along both $u_1$ and $u_2$: $u_3 = v_3 - \frac{v_3 \cdot u_1}{u_1 \cdot u_1}\, u_1 - \frac{v_3 \cdot u_2}{u_2 \cdot u_2}\, u_2$. That second subtraction is the only new ingredient in three dimensions.
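If you want to reproduce the projection step outside the embed, here is a minimal sketch in Python with SymPy (an assumption; the page's workspace is its own CAS). The vectors $v_1, v_2, v_3$ below are hypothetical stand-ins, since the actual problem vectors live in the embed; exact rational arithmetic keeps every coefficient exact, as the formula requires.

```python
from sympy import Matrix

def proj(v, u):
    # Component of v along u: ((v . u) / (u . u)) * u, kept exact by SymPy
    return (v.dot(u) / u.dot(u)) * u

# Hypothetical stand-in vectors (the page's actual vectors live in the embed)
v1, v2, v3 = Matrix([1, 1, 0]), Matrix([1, 0, 1]), Matrix([0, 1, 1])

u1 = v1
u2 = v2 - proj(v2, u1)                 # strip the u1 component from v2
u3 = v3 - proj(v3, u1) - proj(v3, u2)  # strip both chosen directions from v3

print(u2.T, u3.T)                          # exact fractions, no rounding
print(u1.dot(u2), u1.dot(u3), u2.dot(u3))  # all exactly 0
```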
Step-by-step walkthrough
The fractions are not the point. The point is that each new vector is cleaned of every direction already chosen. (A runnable sketch of the whole loop follows the list.)
1. Start with $u_1 = v_1$.
2. Compute $u_2 = v_2 - \frac{v_2 \cdot u_1}{u_1 \cdot u_1}\, u_1$.
3. Check $u_1 \cdot u_2 = 0$, so the first two directions are orthogonal.
4. Compute $u_3 = v_3 - \frac{v_3 \cdot u_1}{u_1 \cdot u_1}\, u_1 - \frac{v_3 \cdot u_2}{u_2 \cdot u_2}\, u_2$.
5. Check $u_1 \cdot u_3 = 0$ and $u_2 \cdot u_3 = 0$.
6. The result is an orthogonal basis. Divide each vector by its length if you need an orthonormal basis.
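The six steps compress into a loop. The sketch below is one way to write it, again with SymPy and the same hypothetical stand-in vectors; the assertions mirror the checks in steps 3 and 5, and the rank check confirms the subspace is unchanged.

```python
from sympy import Matrix

def gram_schmidt(vectors):
    """Orthogonalize in order: each vector loses its components along
    every direction already accepted (steps 1-6 above, as a loop)."""
    basis = []
    for v in vectors:
        w = v
        for u in basis:
            w = w - (w.dot(u) / u.dot(u)) * u  # remove the direction of u
        basis.append(w)
    return basis

vs = [Matrix([1, 1, 0]), Matrix([1, 0, 1]), Matrix([0, 1, 1])]  # stand-ins
u1, u2, u3 = gram_schmidt(vs)

# Steps 3 and 5: every pair of new directions is orthogonal, exactly
assert u1.dot(u2) == 0 and u1.dot(u3) == 0 and u2.dot(u3) == 0

# The subspace did not change: the new vectors still span R^3
assert Matrix.hstack(u1, u2, u3).rank() == 3
```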
Common Pitfall
Gram-Schmidt does not merely normalize vectors. Normalizing changes lengths; Gram-Schmidt first changes directions by removing projections so the vectors become orthogonal.
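A short experiment makes the pitfall concrete. Normalizing rescales each vector to length 1 but leaves the angle between them alone, so their dot product stays nonzero (same hypothetical stand-ins as above):

```python
from sympy import Matrix

v1, v2 = Matrix([1, 1, 0]), Matrix([1, 0, 1])  # hypothetical stand-ins

n1 = v1 / v1.norm()   # unit length now...
n2 = v2 / v2.norm()
print(n1.dot(n2))     # ...but the dot product is 1/2, not 0: still not orthogonal
```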
Try a Variation
Normalize the three vectors from the example. What are their lengths, and what orthonormal basis do you get?
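To check your answer for the stand-in vectors used in the sketches above, you can normalize the orthogonal basis directly; the lengths come out as exact radicals:

```python
from sympy import Matrix, Rational

# Orthogonal basis produced by the loop above (hypothetical stand-in vectors)
u1 = Matrix([1, 1, 0])
u2 = Matrix([Rational(1, 2), Rational(-1, 2), 1])
u3 = Matrix([Rational(-2, 3), Rational(2, 3), Rational(2, 3)])

for u in (u1, u2, u3):
    q = u / u.norm()       # divide by the exact length
    print(u.norm(), q.T)   # length, then the unit vector
    assert q.dot(q) == 1   # orthonormal: each new vector has length 1
```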
Related Pages
Keep moving through the cluster
What a basis does in linear algebra
A basis is what turns a vector space into something you can navigate. It reaches every vector, and it does so without redundancy, so coordinates become possible.
Open explanation →
What linear independence means geometrically
Linear independence is the condition that nothing in the list is wasted. Geometrically, each new vector must add a new direction instead of repeating one you already had.
Open explanation →
Compute eigenvalues and eigenvectors of a matrix
A first serious 3 by 3 eigenproblem should show new structure without burying the point in arithmetic. This one does that: you see the characteristic polynomial, the eigenspaces, and the diagonalization test in one pass.
Open worked example →
Proof that eigenvectors for distinct eigenvalues are linearly independent
Distinct eigenvalues do more than label eigenspaces. They force eigenvectors to be independent, which is exactly why diagonalization becomes possible so often in practice.
Open proof →