I could be misunderstanding, but I think you'd get all the algebraic content out of it by computing the dimensions of the image and kernel, then working with them from there on. Why would you want to have a matrix decomposition that separated them?
Granted, computing the dimension of the kernel is not so easy, especially because a pair of vectors can be arbitrarily close without being linearly dependent. No wonder there is no stable way to do it: it technically exceeds the capability of finite-precision numbers. Multiplying a vector with unequal components by most scalars, especially ones whose base-two representation does not terminate, produces a vector that is linearly independent of the original once rounded back to finite precision.
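To make that concrete, here is a throwaway sketch in Python (the vector and the scalar 0.1 are purely illustrative choices); evaluating the IEEE-754 doubles exactly as rationals shows that the rounded multiple has already left the span of the original:

    # Evaluate the doubles exactly as rationals to expose the rounding error.
    from fractions import Fraction

    v = [1.0, 3.0]
    w = [0.1 * x for x in v]   # "the same direction", after rounding

    # Exact 2x2 determinant of [v; w]: zero iff v and w are linearly dependent.
    det = Fraction(v[0]) * Fraction(w[1]) - Fraction(v[1]) * Fraction(w[0])
    print(det)         # a tiny but nonzero rational (2**-55 here)
    print(float(det))  # ~2.8e-17: the rounded w is exactly independent of v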
Clearly, then, linear independence on computers has to be considered in the continuous sense in which singular values reveal it: a very small singular value marks an "almost-kernel," which is the closest thing to a kernel you are likely to find outside of carefully constructed examples or integer matrices.
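For instance (a sketch assuming NumPy; the matrix is just a random rank-3 product, nothing special):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 8))  # rank 3 by construction

    U, s, Vt = np.linalg.svd(A)
    tol = max(A.shape) * np.finfo(A.dtype).eps * s[0]  # the usual relative cutoff
    rank = int((s > tol).sum())
    almost_kernel = Vt[rank:].T  # right singular vectors of the tiny singular values

    print(rank)                               # 3
    print(np.linalg.norm(A @ almost_kernel))  # ~1e-15: A nearly annihilates this subspace

The trailing singular values land near machine epsilon rather than at zero, and the cutoff is a judgment call; that is exactly the continuous notion of (in)dependence.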
> I could be misunderstanding, but I think you'd get all the algebraic
> content out of it by computing the dimensions of the image and kernel,
> then working with them from there on. Why would you want to have a
> matrix decomposition that separated them?
Well. Sometimes you want to know a solution to a problem and not only the dimension of the solution space : )
Also for composition: if you have "compatible" matrices B, C, how do you compute the restrictions A|_ker(B), A|_im(B), or the co-restrictions (factor projections) A/ker(C), A/im(C), etc.?
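For what it's worth, these are computable with the same SVD machinery: a hedged sketch assuming NumPy/SciPy (the helper names are mine, and "kernel"/"image" here mean the numerical ones, trimmed at a tolerance). Shapes: A is m x n, B is p x n so ker(B) sits in A's domain, C is m x q so im(C) sits in A's codomain.

    import numpy as np
    from scipy.linalg import null_space, orth

    def restrict_to_kernel(A, B):
        """Matrix of A|_ker(B) in an orthonormal basis of the numerical ker(B)."""
        K = null_space(B)  # columns: orthonormal basis of ker(B)
        return A @ K       # m x dim ker(B); A|_im(B) is analogously A @ orth(B)

    def corestrict_by_image(A, C):
        """Matrix of A/im(C), i.e. A followed by the quotient map R^m -> R^m/im(C)."""
        Q = orth(C)                             # orthonormal basis of im(C)
        W = orth(np.eye(C.shape[0]) - Q @ Q.T)  # basis of im(C)'s orthogonal complement
        return W.T @ A                          # (m - dim im(C)) x n

A/ker(C) works the same way with null_space(C) in place of orth(C), with C shaped so its kernel lives in R^m. The constructions themselves are textbook linear algebra; the only numerical content is where you cut the singular values.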
It's highly relevant from an algebraic perspective, hence it's curious that it's not covered (at all) in the numerical literature.