You can prove the existence of something, but you cannot prove its absence. If it doesn't exist, you cannot find any evidence of it, by the very definition of 'does not exist.'
To be secure, a software application must have no vulnerabilities. To test that it is secure, you would have to test that no vulnerabilities exist in the application. As above, this cannot be done. We need a different goal.
We can take the goal, "have no unacceptable known vulnerabilities." This is the goal chosen by most teams that do security testing. Toward this goal we can use tools like threat modeling, static code analysis, static binary analysis, and dynamic analysis.
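To make the static-analysis idea concrete, here is a minimal sketch of what such a tool does at its core: walk a program's syntax tree and flag known-dangerous constructs. The deny-list and the sample code are my own illustrative assumptions; real analyzers (Bandit, for example) use far richer rule sets.

```python
import ast

# Hypothetical deny-list of call names to flag; a real static
# analyzer would have hundreds of rules, not two names.
RISKY_CALLS = {"eval", "exec"}

def find_risky_calls(source: str):
    """Return (line, name) pairs for calls to deny-listed functions."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in RISKY_CALLS):
            findings.append((node.lineno, node.func.id))
    return findings

# Sample code under analysis: eval() on attacker-controlled input.
sample = "user_input = input()\nresult = eval(user_input)\n"
print(find_risky_calls(sample))  # → [(2, 'eval')]
```

Note that this finds *known* patterns of vulnerability; it says nothing about vulnerabilities outside its rule set, which is exactly why the goal is "no unacceptable known vulnerabilities" rather than "no vulnerabilities."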
We can take the goal, "our application's security controls must work as expected under all foreseeable conditions." This means testing the application's security controls just as QA tests the functional, performance, and other characteristics of the application.
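As a sketch of what QA-style testing of a security control looks like, consider a hypothetical input-validation control (the username policy below is an assumption for illustration). The point is to exercise the control's specified behavior, including hostile inputs, not just the happy path.

```python
import re

# Hypothetical security control: restrict usernames to a safe
# character set. Assumed spec: 3-20 chars, letters/digits/underscore.
USERNAME_RE = re.compile(r"[A-Za-z0-9_]{3,20}")

def is_valid_username(name: str) -> bool:
    return USERNAME_RE.fullmatch(name) is not None

# QA-style tests against the control's specification.
assert is_valid_username("alice_01")                   # in-spec input
assert not is_valid_username("ab")                     # too short
assert not is_valid_username("a" * 21)                 # too long
assert not is_valid_username("alice'; DROP TABLE--")   # injection attempt
assert not is_valid_username("../etc/passwd")          # traversal attempt
print("all control tests passed")
```

The difference from vulnerability hunting: here we have a spec for the control and verify conformance to it, which is ordinary QA applied to a security feature.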
We can take the goal, "our application must be built under a process that minimizes the likelihood of introducing vulnerabilities." In this case, we use a secure software development lifecycle that includes training, cheat sheets, developer checklists, QA checklists, automated tools, expert testers (penetration testing), threat modeling, and other processes designed to minimize the risk of new vulnerabilities.
While these goals all concern building the application, not using it, they can be extended to the user's perspective. You can try to identify vulnerabilities in the application yourself, which is easier if you have the source code. You can try to check the correctness of the security controls with your own QA-style testing, if you can correctly decompose the product and determine the purpose of each control. You can look at the OEM's secure development lifecycle and see how it stacks up against others' (see BSIMM). You can also look at a product's historical public record (see NVD) and try to draw inferences: for example, when the OEM has a vulnerability of one type in a product, does that vulnerability type ever recur in that product (do they seem to be architecturally fixing the issue, or band-aiding it)?
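The last inference can be sketched as a small analysis over a product's vulnerability history. The records below are fabricated for illustration, shaped loosely like NVD entries (CVE id, year, CWE weakness type); a recurring weakness type across years hints at point fixes rather than an architectural remedy.

```python
from collections import defaultdict

# Hypothetical excerpt of one product's vulnerability history:
# (CVE id, year published, CWE weakness type).
history = [
    ("CVE-2019-0001", 2019, "CWE-79"),  # cross-site scripting
    ("CVE-2020-0002", 2020, "CWE-79"),  # XSS again a year later
    ("CVE-2020-0003", 2020, "CWE-89"),  # SQL injection
    ("CVE-2022-0004", 2022, "CWE-79"),  # and XSS yet again
]

# Group the years of disclosure by weakness type.
by_cwe = defaultdict(list)
for cve, year, cwe in history:
    by_cwe[cwe].append(year)

# A weakness seen in multiple years suggests band-aid fixes.
for cwe, years in sorted(by_cwe.items()):
    label = "recurring" if len(set(years)) > 1 else "one-off"
    print(f"{cwe}: {sorted(years)} ({label})")
```

On this sample it reports CWE-79 as recurring and CWE-89 as a one-off; with real NVD data the same grouping gives a rough, outside-in signal about how the OEM responds to a class of vulnerability.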