The memorylessness of the exponential distribution can be used to produce conceptual proofs that, for every $n\geqslant2$, $G$ is distributed as the maximum of $(n-1)$ i.i.d. random variables, each exponentially distributed with parameter $a$. Since, however, the OP did not explain their background, here is a direct, hands-on approach.
Consider $U=\min\{A_k\,;\,1\leqslant k\leqslant n\}$ and $V=\max\{ A_k\,;\,1\leqslant k\leqslant n\}$. Then $U\lt V$ almost surely and, for every $0\leqslant u\lt v$,
$$
[u\lt U,V\lt v]=\bigcap_{k=1}^n[u\lt A_k\lt v],
$$
hence, by independence of the random variables $(A_k)$,
$$
P(u\lt U,V\lt v)=(\mathrm e^{-au}-\mathrm e^{-av})^n.
$$
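This product formula is easy to check numerically. Here is a minimal Monte Carlo sketch (the values of $n$, $a$, $u$, $v$ below are arbitrary choices, not from the question):

```python
import math
import random

random.seed(0)
n, a = 5, 2.0          # arbitrary: number of variables, exponential rate
u, v = 0.1, 0.8        # arbitrary thresholds with 0 <= u < v
trials = 200_000

# Empirical frequency of the event {u < U, V < v},
# i.e. every A_k falls in the interval (u, v).
hits = 0
for _ in range(trials):
    sample = [random.expovariate(a) for _ in range(n)]
    if u < min(sample) and max(sample) < v:
        hits += 1

empirical = hits / trials
exact = (math.exp(-a * u) - math.exp(-a * v)) ** n
print(empirical, exact)   # the two values should agree to about 2 decimals
```

With $2\cdot10^5$ trials the sampling error is of order $10^{-3}$, so the empirical frequency should match $(\mathrm e^{-au}-\mathrm e^{-av})^n$ closely.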
Differentiating this identity once in $u$ and once in $v$ (with a minus sign, since the probability decreases as $u$ increases) yields the density $f$ of $(U,V)$ as
$$
f(u,v)=n(n-1)a^2\mathrm e^{-au-av}(\mathrm e^{-au}-\mathrm e^{-av})^{n-2}\mathbf 1_{0\lt u\lt v}.
$$
By definition, $G=V-U$, hence, for every $x\gt0$,
$P(G\leqslant x)=(\ast)$ with
$$
(\ast)=\int_0^\infty\!\!\!\int_u^{u+x}f(u,v)\mathrm dv\mathrm du=\int_0^\infty na\mathrm e^{-au}\left[(\mathrm e^{-au}-\mathrm e^{-av})^{n-1}\right]_{v=u}^{v=u+x}\mathrm du,
$$
that is,
$$
(\ast)=\int_0^\infty na\mathrm e^{-au}(\mathrm e^{-au}-\mathrm e^{-au-ax})^{n-1}\mathrm du=(1-\mathrm e^{-ax})^{n-1}\int_0^\infty na\mathrm e^{-nau}\mathrm du,
$$
and finally,
$$
P(G\leqslant x)=(1-\mathrm e^{-ax})^{n-1}.
$$
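This final identity says that $G$ has the same CDF as the maximum of $(n-1)$ i.i.d. exponentials with parameter $a$, which one can also verify by simulation. A small sketch (again, the parameters are arbitrary choices for illustration):

```python
import math
import random

random.seed(1)
n, a = 4, 1.5          # arbitrary: number of variables, exponential rate
x = 0.7                # arbitrary threshold
trials = 200_000

# Compare two empirical CDFs at the point x:
#   (i)  G = max - min of n i.i.d. Exp(a) variables,
#   (ii) the maximum of (n-1) i.i.d. Exp(a) variables,
# both of which should equal (1 - exp(-a*x))**(n-1).
count_g = 0
count_m = 0
for _ in range(trials):
    sample = [random.expovariate(a) for _ in range(n)]
    if max(sample) - min(sample) <= x:
        count_g += 1
    if max(random.expovariate(a) for _ in range(n - 1)) <= x:
        count_m += 1

exact = (1 - math.exp(-a * x)) ** (n - 1)
print(count_g / trials, count_m / trials, exact)
```

Both empirical frequencies should agree with $(1-\mathrm e^{-ax})^{n-1}$ up to sampling noise of order $10^{-3}$.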
To the OP: In the question you assert that $P(V\leqslant U+x)=P(V\leqslant U\mid U\leqslant x)$, explaining that this holds "by the Markov property". This identity is wrong, and the Markov property plays no role here; the trouble is that I cannot even see what argument you have in mind. Please explain.