I wouldn't say "every computer scientist" needs to know everything discussed in this paper on floating-point arithmetic. I do think they should all know the first third of it, since many new developers get confused or upset the first time they see something like 0.1 printed back as 0.100000000000097.
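The surprise the reviewer describes is easy to reproduce. A minimal sketch in Python (the exact digits printed are specific to IEEE 754 doubles, which Python floats use):

```python
# Decimal 0.1 has no exact binary representation, so the stored
# double is slightly off; printing enough digits reveals the excess.
x = 0.1
print(f"{x:.20f}")          # 0.10000000000000000555

# The representation error also accumulates through arithmetic:
print(0.1 + 0.2 == 0.3)     # False
print(0.1 + 0.2)            # 0.30000000000000004
```

This is exactly the class of confusion the first part of the paper is meant to dispel: the value stored is the nearest representable binary fraction, not the decimal you typed.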
The second third of the paper is interesting: it gets into the details of error bounds and the theory behind handling floating point in both hardware and programming languages.
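Those error bounds are typically stated in terms of machine epsilon, the gap between 1.0 and the next representable value. A quick sketch of estimating it empirically (for IEEE 754 doubles, the value should come out to 2^-52):

```python
import sys

# Halve eps until adding half of it to 1.0 rounds away entirely;
# what remains is the machine epsilon of the float type.
eps = 1.0
while 1.0 + eps / 2 != 1.0:
    eps /= 2

print(eps)                    # 2.220446049250313e-16
print(sys.float_info.epsilon) # same value, straight from the runtime
```

Rounding error in a single operation is bounded by half this value in relative terms, which is the building block for the bounds the paper derives.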
The last third of the paper is all theorems and proofs about the errors and bounds involved. I found it interesting since I love math, but I think it is well beyond what a computer scientist really needs to know. The paper was also written some time ago, so it doesn't address more modern languages and implementations.
So, while I enjoyed it and learned a lot, I'm giving it 3 stars: most people don't need to read past the first part, and even if you do, the paper's age makes it a little less applicable.