Your cousin isn't as crazy as you think. The calculation in the linked question assumes no intervening trees, buildings, hills, or mountains, but also assumes no atmospheric distortion and no refraction. I'll ignore those last two factors, for now.
That calculation gives the distance at which one can just see the very top of a very tall cloud under those conditions; the rest of the cloud would be below the horizon. This is how even ancient people knew the Earth is spherical. A ship sailing out of harbor appeared to sink below the horizon until all that was left was the crow's nest at the tip of the mast, and then that too disappeared.
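For concreteness, here's a sketch of that geometry-only calculation. The small-height approximation d ≈ √(2Rh) gives the distance to the horizon for an eye (or cloud top) at height h; the 12 km cloud-top height and 2 m eye height below are illustrative numbers I've picked, not figures from the linked question.

```python
import math

R = 6371.0  # mean Earth radius in km

def horizon_distance(height_km, radius=R):
    """Distance (km) to the geometric horizon for a point at the given
    height, ignoring refraction: d = sqrt(2*R*h), valid for h << R."""
    return math.sqrt(2.0 * radius * height_km)

# Observer with a 2 m eye height, cloud top at 12 km (a tall cumulonimbus):
observer = horizon_distance(0.002)   # about 5 km
cloud_top = horizon_distance(12.0)   # about 391 km

# The cloud top is just barely visible when the observer's horizon
# coincides with the cloud top's horizon, so the distances add:
print(f"max sight distance: {observer + cloud_top:.0f} km")
```

Note that almost all of that distance comes from the cloud's own height; the observer's eye height contributes only a few kilometers.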
When a sailing ship came into harbor, the crow's nest would have been visible first, and it would initially have been indistinguishable from a piece of flotsam. A similar issue arises with the faraway cloud whose top you can barely see: except perhaps for the fuzziness caused by atmospheric distortion, it will be nearly indistinguishable from a much closer but much lower and much smaller cloud.
The one saving grace for your argument is atmospheric refraction. The Sun is still below the horizon when we see it rise, and it appears to remain above the horizon after it has actually set. Atmospheric refraction will typically let you see more of that faraway cloud than a simple straight-line computation suggests. On the other hand, it might instead preclude you from seeing even the very top of that faraway cloud. The refractive index of air is roughly proportional to pressure and inversely proportional to absolute temperature, so a very high lapse rate might well cause light to bend upward rather than downward. (This is why predicting the exact time of sunrise and sunset is rather difficult, if not impossible.)
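One common way to fold typical refraction into the same straight-line formula is to pretend the Earth has a larger "effective" radius, conventionally about 7/6 of the true radius under standard conditions (a surveying rule of thumb, not an exact law). A sketch with the same illustrative heights as before:

```python
import math

R = 6371.0  # mean Earth radius in km

def horizon_distance(height_km, k=7.0 / 6.0):
    """Horizon distance (km) with average refraction modeled as an
    effective Earth radius k*R; k = 7/6 is the common surveying
    convention for standard conditions, k = 1 means no refraction."""
    return math.sqrt(2.0 * k * R * height_km)

# Illustrative 12 km cloud top seen from a 2 m eye height:
no_refraction = horizon_distance(0.002, k=1.0) + horizon_distance(12.0, k=1.0)
standard = horizon_distance(0.002) + horizon_distance(12.0)
print(f"{no_refraction:.0f} km without refraction, "
      f"{standard:.0f} km under standard refraction")
```

Under unusual temperature profiles the effective k can drop below 1 (rays bending upward), which is the case where refraction hides the cloud top rather than revealing more of it.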