The physical speed of the connection represents the best-case data throughput, but it does not represent the actual throughput.
To me, throughput is throughput. What the actual payload is, that's another matter.
Let's say your signal drops out of range; so will your speed/bandwidth/throughput (the physical speed, as you call it).
Anyway, for a consumer, worrying about overhead doesn't really make much of a difference, and it shouldn't. TCP overhead (at usual MTUs) is no more than 3-4 percent (the customer shouldn't have to worry about underlying tech like ATM, so that shouldn't be counted). On a properly maintained network, error overhead shouldn't be more than 1-2 percent either. Add everything else, like peak-time saturation, and your net payload should still be well over 80% at worst, averaging over 90% most of the time.
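The overhead arithmetic above can be sketched in a few lines. The MTU and header sizes below are the standard Ethernet/IPv4/TCP values (no options), and the 2% error figure is the worst-case guess from the paragraph, not a measured number:

```python
# Back-of-the-envelope payload efficiency for a single TCP/IPv4 segment.
MTU = 1500          # typical Ethernet MTU, in bytes
IP_HEADER = 20      # IPv4 header without options
TCP_HEADER = 20     # TCP header without options

payload = MTU - IP_HEADER - TCP_HEADER          # 1460 bytes of actual data
tcp_overhead = 1 - payload / MTU                # header overhead fraction

ERROR_OVERHEAD = 0.02                            # assumed worst-case loss/retransmit
net_payload = (payload / MTU) * (1 - ERROR_OVERHEAD)

print(f"TCP/IP overhead: {tcp_overhead:.1%}")   # ~2.7%
print(f"Net payload:     {net_payload:.1%}")    # ~95.4%
```

So even with a pessimistic error rate, the headline figure of "well over 80%" holds comfortably for plain TCP over Ethernet; link-layer tech like ATM would shave more off, but as argued above that's not the customer's problem.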
Just imagine if a network in a medium or large enterprise performed as badly as some ISPs do. People would get fired in no time...
In the end, a network is a network, no matter how big, no matter the technology: you still have to size it properly, expand it when needed, maintain it, and so on. So why should ISPs be an exception? Oh, maybe because they can do whatever the hell they want in their virtual monopolies.
I'll say it again: blaming the consumer really goes a bit too far. I mean, really, why should grandma need to know all this tech crap just to get what she's paying for?