I was trying to solve the following exercise problem from Hogg, McKean and Craig's "Introduction to Mathematical Statistics".
3.4.28. For $Z$ distributed $N(0,1)$, it can be shown that $E[\Phi(hZ + k)] = \Phi\left(k/\sqrt{1 + h^2}\right)$ ...
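(As a sanity check — not part of the exercise — the identity can be verified numerically by Monte Carlo, using only Python's standard library; the helper names `Phi` and `mc_lhs` below are my own.)

```python
import math
import random

def Phi(x):
    """Standard normal cdf, expressed via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def mc_lhs(h, k, n=200_000, seed=0):
    """Monte Carlo estimate of E[Phi(hZ + k)] for Z ~ N(0,1)."""
    rng = random.Random(seed)
    return sum(Phi(h * rng.gauss(0.0, 1.0) + k) for _ in range(n)) / n

h, k = 1.5, 0.7
lhs = mc_lhs(h, k)                    # estimate of E[Phi(hZ + k)]
rhs = Phi(k / math.sqrt(1 + h * h))   # claimed closed form
print(lhs, rhs)  # the two values should agree to roughly 3 decimal places
```

With $n = 200{,}000$ samples the Monte Carlo standard error is well under $10^{-3}$, so the two printed values agreeing closely is consistent with the claimed identity.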
Here, $\Phi(x)$ and $\phi(x)$ denote the cdf and pdf of $N(0,1)$, respectively.
That is just part of the exercise. In order to prove it, I tried expanding it as a double integral and ran out of steam. Answers on SE threads seem to make use of the following claim:
$$E[\Phi(hZ + k)] = P(X \leq hZ + k),$$ where $X$ is also $N(0,1)$ and independent of $Z$.
Among the bazillion threads that make use of this claim and then fill in the remaining (trivial) details of why $E[\Phi(hZ + k)] = \Phi\left(k/\sqrt{1 + h^2}\right)$ (or something equivalent) holds, this is one such answer.
Why is this true? Is it a trivial result or is there quite a bit of proving involved to assert that $$E[\Phi(hZ + k)] = P(X \leq (hZ + k))?$$
This is what I know: for any fixed $z$, $\Phi(hz + k) = P(X \leq hz+k)$, so that $\Phi(hZ + k)\big\rvert_{Z=z} = P(X \leq hz + k)$. So is it just a matter of applying the so-called law of total probability (conditioning on $Z$, and using the independence of $X$ and $Z$), as was done here, to conclude that $$\begin{aligned} P(X \leq hZ + k) &= \int_{-\infty}^\infty P(X \leq hZ + k \mid Z = z)\,\phi(z)\,dz \\ &= \int_{-\infty}^\infty \Phi(hz + k)\,\phi(z)\,dz \\ &= E[\Phi(hZ + k)]?\end{aligned}$$
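(To convince myself the integral formulation is consistent with the closed form, I also evaluated $\int_{-\infty}^\infty \Phi(hz+k)\,\phi(z)\,dz$ numerically — a stdlib-only sketch, with `integral` a helper name of my own and the truncation to $[-10, 10]$ an assumption justified by the Gaussian tails:)

```python
import math

def Phi(x):
    """Standard normal cdf via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi(x):
    """Standard normal pdf."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def integral(h, k, lo=-10.0, hi=10.0, n=4000):
    """Trapezoidal approximation of the integral of Phi(h*z + k) * phi(z)
    over z; the Gaussian tails beyond [-10, 10] are negligible."""
    dz = (hi - lo) / n
    s = 0.5 * (Phi(h * lo + k) * phi(lo) + Phi(h * hi + k) * phi(hi))
    for i in range(1, n):
        z = lo + i * dz
        s += Phi(h * z + k) * phi(z)
    return s * dz

h, k = 2.0, -0.5
print(integral(h, k), Phi(k / math.sqrt(1 + h * h)))  # should agree closely
```

Because the integrand is smooth and vanishes (to machine precision) at the endpoints, the trapezoid rule converges very fast here, and the two printed numbers match to many decimal places.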
Please let me know.