Before you get discouraged, let me suggest another approach.
As I see it, you have two very different parts: knowing which equalization to apply (personalized to each listener), and applying it to a particular signal (your voice).
1st part: model of the internal human hearing system
There are professionals working to collect data on this, standardize the process, and so on. As far as I know, there are efforts to develop measures and graphs beyond the classic audiogram (which measures air- and bone-conduction thresholds). Some of them are "listening tests" (more subjective, but interesting as well).
Align yourself with these professionals. If you follow their work, you only need their results; let them do the heavy lifting. They know their part, which took decades of research, and they are advancing exactly the knowledge you need: a sort of audiogram that measures how someone hears 'internally'. I suspect they are already graphing that, and you just need that graph.
2nd part: simulation
I've done something similar to what you're trying to do. Given any person's audiogram, you can hear the way he/she hears. This is done with ffmpeg. You can check it out here: comomeoyes.com
Basically, you record your voice, and an algorithm equalizes it according to the personalized audiogram. This way, you can enter the audiogram of a person with hearing loss and hear for yourself how he/she hears you.
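
As a rough illustration of the idea (not the exact filter chain my site uses), ffmpeg's `firequalizer` filter can apply an audiogram as a set of per-frequency gains. The file names and the audiogram values below are invented for the example; a real audiogram gives hearing loss in dB HL at the standard test frequencies, which you negate to attenuate the signal:

```
# Hypothetical audiogram (values invented for this sketch):
#   250 Hz: 10 dB   500 Hz: 15 dB   1 kHz: 30 dB
#   2 kHz: 45 dB    4 kHz: 60 dB    8 kHz: 70 dB
# Each hearing loss becomes a negative gain; firequalizer
# interpolates smoothly between the listed entries.
ffmpeg -i voice.wav -af "firequalizer=gain_entry='entry(250,-10);entry(500,-15);entry(1000,-30);entry(2000,-45);entry(4000,-60);entry(8000,-70)'" simulated_loss.wav
```

To model the internal equalization you describe, you would feed the same filter a different gain curve, one derived from that hypothetical 'internal' audiogram instead of the classic one.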
I understand you would like to do the same, but with a different kind of audiogram: one that models how the internal hearing system equalizes the sound.
I suspect that kind of audiogram may already exist, and that audiologists, otorhinolaryngologists, and other researchers are discussing which acoustic tests to run to gather the data they need to turn the measurements into a useful graph.
Good luck. Your attempt could help others.