I'm telling you that the decimal proof is invalid.
Luckily for him, nobody is going to believe you unless you can explain why it's invalid.
On the other hand, I can explain why it's valid:
Let dec(f) = sum( f(n) * 10^n, n = -infinity..infinity ), where f is a function mapping integers to the digits { 0, ..., 9 }. In other words, dec is a function that maps a sequence of decimal digits to a real number by assuming that the sequence is the decimal expansion of that number. For example, suppose f(n) = 4 if n = 1, 2 if n = 0, and 0 otherwise: then f can be written out as "...00042.000...", and dec(f) = 42. As long as there exists some N such that f(n) = 0 for all n > N, dec(f) must converge: the positive powers contribute only finitely many nonzero terms, and the sum over the negative powers is dominated by a convergent geometric series.
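As a sketch, dec can be approximated in code by truncating the infinite sum to a finite window of exponents (the function name dec_approx and the window bounds are my own choices, not part of the definition):

```python
def dec_approx(f, lo=-30, hi=30):
    """Approximate dec(f) = sum of f(n) * 10**n by truncating to n in [lo, hi]."""
    return sum(f(n) * 10.0 ** n for n in range(lo, hi + 1))

# The example digit function from the text: 4 in the tens place, 2 in the ones place.
def f(n):
    if n == 1:
        return 4
    if n == 0:
        return 2
    return 0

print(dec_approx(f))  # 42.0
```

Since f is zero outside a finite window, the truncation loses nothing here; for digit functions with infinitely many nonzero negative-index digits it gives an approximation whose error shrinks geometrically as the window grows.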
Let s(n) = 9 if n < 0, and 0 otherwise. EnragedPenguin's proof goes as follows:
Let c = dec(s).
10c = 10 dec(s)
. . . = 10 * sum( s(n) * 10^n, n = infinity...infinity )
. . . = sum( 10 * s(n) * 10^n, n = -infinity...infinity )
. . . = sum( 10 * s(n) * 10^n, n = -infinity...-1 ) (since for n >= 0, s(n) = 0)
. . . = sum( s(n) * 10^(n+1), n = -infinity...-1 )
. . . = s(-1) * 10^(-1+1) + sum(s(n) * 10^(n+1), n = -infinity...-2 )
. . . = s(-1) * 10^(-1+1) + sum(s(n-1) * 10^n, n = -infinity...-1 )
. . . = s(-1) * 10^(-1+1) + sum( s(n) * 10^n, n = -infinity...-1 ) (since for n <= -1, s(n-1) = s(n))
. . . = s(-1) * 10^(-1+1) + dec(s)
. . . = 9 + c
10c - c = (9 + c) - c = 9.
9c = 9 => c = 1.
Therefore dec(s) = 1, so we have proved that the number represented by the infinite decimal string "...000.999..." is the number 1. If you found this proof somewhat clunky, you'll see why we use representations like "0.999..." instead: they don't actually discard any information, since they refer to the same abstract object.
So basically you just have to remember that when you write a number in "decimal form", you are really talking about an infinite series; the decimal notation is simply far more convenient.