You're essentially using Vieta's formulas when you relate the sum of the reciprocals of the function's roots to its series coefficients. This relation is sometimes called the Root Linear Coefficient Theorem.
It's quite straightforward to derive that from Vieta's formulas. Take a degree-$n$ polynomial $P(x)=a_nx^n+\ldots+a_1x+a_0$ with $a_0\neq 0$ (so no root is zero) and roots $r_1,r_2,\ldots,r_n$ (counted with multiplicity). The sum of the reciprocals of its roots is then
$$\frac{1}{r_1}+\frac{1}{r_2}+\ldots+\frac{1}{r_n}=\frac{r_2r_3\cdots r_n+r_1r_3\cdots r_n+\ldots+r_1r_2\cdots r_{n-1}}{r_1r_2\cdots r_n},$$
which, by Vieta's formulas for the elementary symmetric polynomials of degrees $n-1$ and $n$, is exactly $\frac{(-1)^{n-1}a_1/a_n}{(-1)^{n}a_0/a_n}=-\frac{a_1}{a_0}$.
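For a concrete check (an illustrative example of mine, not part of your argument): with $P(x)=x^2-3x+2=(x-1)(x-2)$, the sum of the reciprocals of the roots is
$$\frac{1}{1}+\frac{1}{2}=\frac{3}{2}=-\frac{-3}{2}=-\frac{a_1}{a_0},$$
exactly as the formula predicts.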
And this would be fine for a polynomial... but you're not using a polynomial. So no, your proof is not correct. It has a similar spirit to the mathematics of Euler, where results are derived by assuming properties that need not hold (e.g., that a series converges, or that a function behaves like a polynomial, as in his famous treatment of the Basel problem) and then liberally (even illegally) applying theorems to them.
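To make that parallel concrete: Euler's heuristic for the Basel problem applies precisely this root–coefficient relation to a power series treated as if it were a polynomial. Writing
$$\frac{\sin\sqrt{t}}{\sqrt{t}}=1-\frac{t}{6}+\frac{t^2}{120}-\ldots,$$
whose "roots" are $t=\pi^2,\,4\pi^2,\,9\pi^2,\ldots$, the relation $\sum_k 1/r_k=-a_1/a_0$ would give
$$\sum_{n\ge 1}\frac{1}{n^2\pi^2}=\frac{1}{6},\qquad\text{i.e.}\qquad\sum_{n\ge 1}\frac{1}{n^2}=\frac{\pi^2}{6},$$
which happens to be true, even though nothing in the polynomial argument justifies the step.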
The beauty of math is that these results can be derived through false proofs, likely by virtue of the simplicity and naturality of the underlying relations. But hand-waving arguments ultimately fail to answer the real questions: why should the theorem apply there and not elsewhere? Where, and in what sense, is it valid in general? These are the questions considered by analysis.