I would recommend against using Keplerian elements: the elements in a TLE are SGP4-specific mean elements, not osculating Keplerian elements.
If you do insist on doing so (not recommended), I would first make a 3DOF (or maybe even 6DOF) propagator that advances a Cartesian state: inertial position and velocity for a 3DOF propagator, plus attitude and attitude rate for a 6DOF propagator. To do this you will need:
- A model of Earth gravitation (non-spherical, as that is part of what the SGP4 algorithm tries to model).
- A model of third body perturbations from the Moon and the Sun (the SGP4 algorithm accounts for these as well). This in turn means you will need an ephemeris. JPL's SPICE does this somewhat nicely; a short sketch of querying it follows this list.
- A model of the Earth's atmosphere. Some of the SGP4 terms are ad hoc models of the effects of the Earth's atmosphere on a spacecraft.
- Predictions of the Sun's behavior over the time period of interest. Solar activity has a huge impact on the density of the Earth's upper atmosphere, and hence on drag.
- A model of the spacecraft's interaction with the Earth's atmosphere. This might not be as simple as a TLE's single drag term. For example, the ISS typically reorients its solar arrays to reduce drag during the roughly 40-minute eclipse portion of each orbit, and then reorients them to face the Sun again once back in sunlight.
- Finally, a good propagator. Using Euler's method is not a good idea (and that's an understatement). Even fourth order Runge-Kutta (RK4) might not be a good idea. As TLEs are not good for much more than a week or so, RK4 might be okay, but there are much better techniques, such as high-order adaptive integrators; a minimal propagator sketch also follows this list.
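For the ephemeris item, here is a minimal sketch of pulling Sun and Moon positions through spiceypy, the Python wrapper for SPICE. The kernel filenames are examples from NAIF's generic kernels; substitute whatever kernel set you actually use.

```python
import spiceypy as spice

# Load a leap-seconds kernel and a planetary ephemeris kernel (example
# filenames from NAIF's generic_kernels; use whatever set you need)
spice.furnsh("naif0012.tls")
spice.furnsh("de440.bsp")

et = spice.str2et("2024-01-01T00:00:00")  # UTC string -> ephemeris time
moon, _ = spice.spkpos("MOON", et, "J2000", "NONE", "EARTH")
sun, _ = spice.spkpos("SUN", et, "J2000", "NONE", "EARTH")
# Positions are in km. The third-body acceleration on the spacecraft uses
# the standard direct-minus-indirect form:
#   a = mu_body * ((s - r) / |s - r|^3  -  s / |s|^3)
# where s is the body's position and r is the spacecraft's position.
```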
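And here is a minimal 3DOF propagator sketch tying several of the items above together: point-mass gravity plus the J2 zonal harmonic, a crude cannonball drag model with an exponential atmosphere, and a high-order adaptive integrator (scipy's DOP853) instead of RK4. The density model and the ballistic coefficient `BC` are illustrative placeholders, not something to fly with.

```python
import numpy as np
from scipy.integrate import solve_ivp

MU = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
RE = 6378137.0        # Earth's equatorial radius, m
J2 = 1.08262668e-3    # Earth's J2 zonal harmonic coefficient
BC = 0.01             # assumed ballistic coefficient Cd*A/m, m^2/kg (made up)

def accel(t, state):
    """Point-mass gravity + J2 + crude exponential-atmosphere drag."""
    r, v = state[:3], state[3:]
    rn = np.linalg.norm(r)
    x, y, z = r

    # Point-mass gravity
    a = -MU * r / rn**3

    # J2 perturbation (the first non-spherical term; SGP4 models more)
    k = 1.5 * J2 * MU * RE**2 / rn**5
    zr2 = 5.0 * (z / rn)**2
    a = a + k * np.array([x * (zr2 - 1.0),
                          y * (zr2 - 1.0),
                          z * (zr2 - 3.0)])

    # Cannonball drag with a crude exponential atmosphere; ignores solar
    # activity, the co-rotating atmosphere, and attitude-dependent area,
    # all of which matter in practice
    h = rn - RE
    rho = 4.0e-12 * np.exp(-(h - 400.0e3) / 60.0e3)  # kg/m^3, rough near 400 km
    a = a - 0.5 * rho * BC * np.linalg.norm(v) * v

    return np.concatenate([v, a])

# One day of a ~400 km circular orbit with a high-order adaptive integrator
r0 = np.array([RE + 400.0e3, 0.0, 0.0])
v0 = np.array([0.0, 7668.6, 0.0])
sol = solve_ivp(accel, (0.0, 86400.0), np.concatenate([r0, v0]),
                method="DOP853", rtol=1e-10, atol=1e-6)
print(sol.y[:3, -1])  # final inertial position, m
```

A real implementation would use a higher-degree gravity field, a density model driven by the solar predictions above, and the velocity relative to the rotating atmosphere.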
This by itself is a nontrivial undertaking.
The next issue is how to use these propagated states to produce a "pseudo TLE". This, too, is by no means a trivial undertaking. Your initial state is not perfect, and to make it more interesting, there are significant correlations between elements of the state estimate. Your prediction of the atmosphere's future behavior is not perfect, either. You might want to perform a set of Monte Carlo runs with your propagator to obtain an idea (a guess) of the covariance matrix at various points in time in the future. You can use these growing uncertainties and covariances to weight how future predictions impact changes in your "pseudo TLE".
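A minimal sketch of that Monte Carlo idea, assuming a `propagate(state0, t)` function wrapping your propagator and an initial (correlated!) covariance matrix `P0`, both of which are placeholders you would supply:

```python
import numpy as np

rng = np.random.default_rng(1)

def covariance_history(state0, P0, propagate, times, n_samples=500):
    """Sample the initial state from N(state0, P0), propagate every sample
    to each requested time, and return the sample covariance at each time."""
    samples = rng.multivariate_normal(state0, P0, size=n_samples)
    covs = []
    for t in times:
        finals = np.array([propagate(s, t) for s in samples])
        covs.append(np.cov(finals, rowvar=False))
    return covs
```

The growth of these sample covariances over time gives you the weights mentioned above.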
To generate your "pseudo TLE" I recommend nonlinear analysis of variance (ANOVA) techniques. There are two ways to go about this. One is to start with zero correction terms and then add correction terms to your pseudo TLE one by one, each time choosing whichever remaining term is most statistically significant, stopping when adding yet another correction term falls within the noise. The alternative is to start with all terms being corrected and then remove the least significant term one by one, stopping when removing one more term would have an effect that falls outside the noise. Either way, lather, rinse, and repeat, as this is highly nonlinear. You might want to use something like simulated annealing to guide the "lather, rinse, and repeat" phase.
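Here is a minimal sketch of the additive (forward-selection) variant, assuming `residuals` holds element-space mismatches sampled at times `t` and using a made-up candidate basis (secular drift plus once-per-orbit terms, not SGP4's actual terms). A real implementation would replace the crude stopping rule with a proper significance test such as an F-test, weight by the Monte Carlo covariances above, and iterate as described.

```python
import numpy as np

def rss(A, y):
    """Residual sum of squares of the least-squares fit y ~ A @ coef."""
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return float(resid @ resid)

def forward_select(t, residuals, candidates, noise_var, max_terms=6):
    """Greedily add the correction term that most reduces the residual sum
    of squares; stop when the next improvement falls within the noise."""
    chosen = []
    cols = [np.ones_like(t)]  # always fit a bias term
    current = rss(np.column_stack(cols), residuals)
    for _ in range(max_terms):
        best_name, best_rss = None, current
        for name, f in candidates.items():
            if name in chosen:
                continue
            trial = rss(np.column_stack(cols + [f(t)]), residuals)
            if trial < best_rss:
                best_name, best_rss = name, trial
        # Crude ~2-sigma stopping rule: a noise-only term reduces the RSS
        # by about noise_var on average; a real test would be an F-test
        if best_name is None or current - best_rss < 4.0 * noise_var:
            break
        chosen.append(best_name)
        cols.append(candidates[best_name](t))
        current = best_rss
    return chosen

# Illustrative candidate corrections (a made-up basis, not SGP4's terms)
candidates = {
    "secular":   lambda t: t,
    "quadratic": lambda t: t**2,
    "cos_orbit": lambda t: np.cos(2.0 * np.pi * t / 5400.0),
    "sin_orbit": lambda t: np.sin(2.0 * np.pi * t / 5400.0),
}
```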
This also is a nontrivial undertaking. Creating TLEs is a highly nontrivial undertaking, and will take months (maybe even years) of effort to produce the required algorithms. Once you have those algorithms you will be able to reuse them with much greater ease. Getting to that point is not easy. You might want to rethink what you want to do, and why.
I did something akin to this 40 years ago. My employer at the time promised our customers 1 km accuracy, as the satellites we were working with had a nadir resolution of less than 1 km. My employer did not account for how lousy TLEs were at the time. (In fact, the Air Force intentionally degraded them back then.) We quickly discovered that most people used graphical rubber sheeting techniques to make the image and map line up. While this was very good for one satellite pass, it was completely useless for prediction purposes. In addition, everyone we talked to wanted to charge a large sum of money for the use of their rubber sheeting techniques.
I claimed to my employer that rubber sheeting was not the right approach, that we should instead use mismatches between the map and the image to generate "pseudo TLEs" so as to have some predictive power for future passes. And it worked! I don't remember whether I eventually settled on the additive or subtractive nonlinear ANOVA approach. Modern machine learning (ML) techniques could almost certainly one-up what I did 40 years ago, but that's mostly because they expertly throw multiple statistical tools at a given problem, and many of those statistical tools are well over a century old.