Adding more Q estimators trained on separate data would probably not improve performance, and may even degrade it.
At least there is no theoretical justification for it. Double Q learning addresses a specific problem, maximisation bias: when your estimators are noisy and you select the action with the highest estimate (the greedy action), you will tend to overestimate its value in the update step if the same estimator is used both to select the action and to evaluate it. There is no equivalent bias that would be addressed by moving from 2 to 3 estimators.
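To make the bias concrete, here is a minimal sketch (plain numpy; the number of actions and the noise scale are made up for illustration). All true action values are zero, yet taking the max of a single noisy estimator overestimates, while selecting with one estimator and evaluating with an independent one does not:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up setup: 5 actions whose true values are all exactly 0.
# Each estimate is the true value plus independent zero-mean noise.
n_actions, n_trials = 5, 10_000
single, double = [], []

for _ in range(n_trials):
    q_a = rng.normal(0.0, 1.0, size=n_actions)  # estimator A (noisy)
    q_b = rng.normal(0.0, 1.0, size=n_actions)  # estimator B, independent noise

    # Same estimator selects and evaluates: max over noise, biased upwards.
    single.append(q_a.max())

    # A selects the greedy action, B evaluates it: unbiased in expectation.
    double.append(q_b[q_a.argmax()])

print("true value of every action:  0.000")
print(f"single-estimator estimate: {np.mean(single):+.3f}")  # roughly +1.16
print(f"double-estimator estimate: {np.mean(double):+.3f}")  # roughly  0.00
```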
In addition, double Q learning can make use of both estimators in each update - one to select the greedy action, the other to evaluate its value. With a higher number of estimators, you would need to rotate through them or sample pairs at random, as in the sketch below.
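Here is a sketch of what that rotation might look like for a tabular agent, generalised to a list of N estimators. The N>2 case is my own extrapolation for the experiment, not a published algorithm; with N=2 it reduces to standard double Q learning (terminal-state handling omitted for brevity):

```python
import numpy as np

def multi_q_update(Q, s, a, r, s_next, gamma=0.99, alpha=0.1, rng=None):
    """Tabular update over a list of N estimators Q[i][state, action].

    One estimator is picked at random to be updated; the next one in
    the list evaluates the greedy action it selects.
    """
    rng = rng or np.random.default_rng()
    i = int(rng.integers(len(Q)))        # estimator that gets updated
    j = (i + 1) % len(Q)                 # estimator that evaluates
    a_next = int(np.argmax(Q[i][s_next]))          # select with Q[i]
    td_target = r + gamma * Q[j][s_next, a_next]   # evaluate with Q[j]
    Q[i][s, a] += alpha * (td_target - Q[i][s, a])

# Usage with three estimators (state/action counts are illustrative):
Q = [np.zeros((10, 4)) for _ in range(3)]
multi_q_update(Q, s=0, a=1, r=1.0, s_next=2)
```

Note that each update still only touches one estimator, so each one sees a fraction of the experience - part of why I would expect more estimators to learn more slowly.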
However, it is possible that some other factor would make a 3 or 4 estimator agent effective. I have not experimented with this, and am not aware of anything published. So you could always try the experiment: I suggest picking an environment in which double Q learning has already been shown to perform well, and giving it a go. These kinds of "what if I changed this thing?" experiments usually come to nothing, but they can be fun.
I suspect what you will find is that learning is slower, but a little more robust against some kinds of error. However, in a double Q learner, decreasing the learning rate and/or increasing the interval between copies to the frozen target versions of the estimators should have a very similar effect.