I like to debate these ideas, and will share my thoughts here. But I don't know if this is really appropriate for what this site was intended for, based on the "warnings" I see above. The editors should feel free to delete my comments if they are too speculative for the intent of this site.... This is all speculation, despite my usual writing style of talking as if it were fact....
I approach these questions from the idea that the human brain is a reinforcement learning machine. It's fairly easy to create broad explanations for these sorts of questions by tying everything back to this idea. I'm using "reinforcement learning machine" in the AI/machine-learning sense of the term, which follows fairly directly from the behaviorists' ideas of classical and operant conditioning. I am an engineer and computer scientist, so I talk from that perspective.
Fear, in this sense, is just (at the broadest level) avoidance behavior. We learn to use any behavior that allows us to stay away from the things that punish us - that cause reduced expectations of future rewards. Thoughts are just behaviors of the brain as well, and they are subject to all the same conditioning effects as our external actions. So we will naturally be conditioned to try to avoid thoughts that produce reduced expectations of future rewards.
Let me side-bar for a moment and point out something many people who have not studied reinforcement learning fail to understand. These sorts of machines don't just learn when a "bad" event happens. They don't just learn when they are hit with a stick. There's an indirect learning at work which creates far greater complexity in the process. They are, at their core, reward prediction machines. They constantly predict expected future rewards (basic TD learning). The "stick" events train their prediction system. It accumulates statistics from every time something bad or good happens, and uses that to predict expected future rewards. Behaviors, then, are conditioned not by the sticks and carrots, but by the prediction system. Any behavior which causes a drop in the expected future rewards produced by the internal prediction system is punished (its odds of being repeated in the future drop).
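To make that concrete, here's a minimal tabular TD(0) value-prediction sketch in Python. It's only an illustration of the "reward prediction machine" idea, not a claim about how the brain implements it; the states, rewards, and parameters are all invented for the example.

```python
# Minimal tabular TD(0) value prediction: the "prediction system" learns the
# expected future reward for each state from experience, and the "stick"
# events (negative rewards) shape those predictions over time.
# All states, rewards, and parameters here are invented for illustration.

ALPHA = 0.1   # learning rate
GAMMA = 0.9   # discount factor for future rewards

values = {"safe": 0.0, "near_fire": 0.0, "burned": 0.0}

def td_update(state, reward, next_state):
    """Nudge the predicted value of `state` toward reward + discounted
    prediction for `next_state` (the classic TD error)."""
    td_error = reward + GAMMA * values[next_state] - values[state]
    values[state] += ALPHA * td_error
    return td_error

# A few experiences: wandering near the fire sometimes ends in a burn.
for _ in range(100):
    td_update("safe", 0.0, "near_fire")
    td_update("near_fire", -1.0, "burned")

print(values)  # "near_fire" ends up with a clearly negative predicted value
```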
So the behaviors that emerge from such indirect learning are all about tricking the prediction system into predicting a better future for the agent. The complexity of the behaviors that will emerge, then, is tied directly to how good the prediction system is at predicting the future. AI programs attempting to use reinforcement learning often fail to look very intelligent because they fail to implement a high-quality reward prediction system, which is itself trained by reinforcement.
So, back to fear now. Humans understand and use cause and effect. This falls directly out of having a brain that conditions behaviors based on predictions of future rewards. The behaviors that emerge from such training will use cause and effect to their advantage, controlling our perceived view of the future based on the things we can change now, to affect the future through that causal chain.
We don't like getting hurt. So we learn the signs in the environment that predict something bad will happen, and we change our behavior to prevent it. We see our hand moving towards a fire, and the brain is able to predict that a hand near a fire is a predictor of future pain (from being burned). We move our hand away from the fire, and that makes our "predictor" produce a reduced expectation of a burn, so the action of moving the hand away from the fire is reinforced, even though we did not get burned (this time). We learn behaviors to stay away from fires in this way. Of course, there are other conflicting rewards for being near fires (like the food that results from cooking), so we end up balancing our behaviors and simply become careful around fire, vs. running in fear from it.
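Here's a toy sketch of that fire example, continuing the Python illustration above. It's a rough caricature (only the "actor" half of an actor-critic scheme, with made-up state values and parameters): the point is just that moving the hand away produces a positive prediction error and so gets reinforced even though no burn ever happens on that step.

```python
# Moving the hand away leads to a state with a better predicted value, so the
# positive TD error reinforces the "move away" action with no burn required.
# State values, actions, and parameters are all made up for illustration.

ALPHA = 0.2  # learning rate for action preferences
GAMMA = 1.0  # no discounting, to keep the arithmetic simple

# Assume the prediction system has already learned these state values.
values = {"hand_near_fire": -0.8, "hand_away": 0.0}

# Preferences for the actions available while the hand is near the fire.
preferences = {"move_away": 0.0, "keep_still": 0.0}

def try_action(action):
    """Take one step from "hand_near_fire"; no burn happens on this step."""
    next_state = "hand_away" if action == "move_away" else "hand_near_fire"
    reward = 0.0
    td_error = reward + GAMMA * values[next_state] - values["hand_near_fire"]
    preferences[action] += ALPHA * td_error  # reinforce or weaken the action
    return td_error

try_action("move_away")   # TD error = +0.8: the predicted future improved
try_action("keep_still")  # TD error =  0.0: nothing improved
print(preferences)        # "move_away" is now preferred, with no burn needed
```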
But, in life, we are often caught off guard. We are hurt by something that we had not seen before, or that we simply failed to predict. We learn that we can use cause and effect to avoid most of these bad things in life, like keeping our hand away from fire keeps us from being burned. But when we come across a new situation, with stimulus signals very different from what we have dealt with in the past, we find the odds of being hurt go up. When dealing with a new environment we have not experienced, we do not yet know how to leverage the causality of the environment to prevent the bad things, or to acquire the good things.
Our prediction system knows that if we wander into something we are unfamiliar with, our odds of being hurt rise. It can use this as a cause and effect prediction. That is, the less we know about our current environment, the higher the odds that something will hurt us. Because our reward prediction system can learn this correlation between an unknown environment and a higher risk of harm, the system can learn behaviors to avoid the unknown.
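One way to caricature that in code (my own toy assumption for illustration, not a claim about how this is actually learned) is to blend the value estimate of rarely visited states toward a pessimistic prior, so unfamiliar places look bad by default:

```python
# Toy caricature of "the unknown predicts harm": value estimates for states
# we have little experience with are blended toward a pessimistic prior, so
# an action selector would steer away from unfamiliar states by default.
# The prior, counts, and blending rule are all invented for illustration.

PESSIMISTIC_PRIOR = -0.5  # assumed cost of wandering into the unknown

visit_counts   = {"home": 50, "neighborhood": 10, "dark_forest": 0}
learned_values = {"home": 0.2, "neighborhood": 0.1, "dark_forest": 0.0}

def predicted_value(state):
    """Blend the learned value with the pessimistic prior, weighted by familiarity."""
    n = visit_counts[state]
    familiarity = n / (n + 5.0)  # 0 for never-visited, approaches 1 with experience
    return familiarity * learned_values[state] + (1 - familiarity) * PESSIMISTIC_PRIOR

for s in learned_values:
    print(s, round(predicted_value(s), 3))
# "dark_forest" looks worst purely because it is unfamiliar, not because of
# any remembered punishment there.
```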
We will be conditioned to stay in the "safe and known environment", and stay out of the "new and unknown". This attraction to the familiar on one side simply becomes behavior to avoid the unknown on the other. This translates to a simple "fear" of the unknown.
Now, this all works well, because our internal prediction systems are able to pick up the sensory clues from the environment to guide our actions. But we can just as easily learn to fool our prediction system. If the brain senses something dangerous like a fire, it will raise its predictions of being burnt in the future. But we can also learn to close our eyes, and block the brain from seeing the danger. However, the brain is smart enough not to fall for that trick. Once it sees the fire, it understands the danger. And it understands that when we close our eyes, the fire and the danger are "still there" despite not being able to see them. However, it's not perfect, and this sort of trick does work to some extent. So, when we see something really bad, we have a learned response to just look away, to close our eyes, or turn our head, exactly because this "trick" does work. It takes the stimulus away from our reward prediction hardware, and as such, makes the future look a little better than it did before.
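A tiny sketch of that trick, under the assumption (mine, for illustration only) that the predictor is just a function of the features currently observed: remove a feature from the observation (close your eyes) and the prediction improves, even though the world hasn't changed.

```python
# Fooling the predictor by hiding the stimulus: if the prediction is just a
# function of currently observed features, removing a feature from the
# observation (closing your eyes) improves the prediction without changing
# the actual situation. Features and weights are made up for illustration.

weights = {"fire_visible": -0.7, "indoors": 0.1}

def predicted_reward(observation):
    """Simple linear predictor over whatever features are currently observed."""
    return sum(weights[f] for f in observation if f in weights)

open_eyes   = {"fire_visible", "indoors"}
closed_eyes = {"indoors"}  # the fire is still there; we just can't see it

print(predicted_reward(open_eyes))    # -0.6: the future looks bad
print(predicted_reward(closed_eyes))  #  0.1: the "trick" makes it look better
```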
So the system can learn behaviors to trick its own prediction system, which is only harmful in the long run, but the low-level system is not advanced enough to understand that. It's where our intelligence shows its limits. The reward prediction system is not always smart enough to recognize it's being "tricked". When it can recognize the trick, it will not fall for it. It will likely return higher expectations of future bad things happening, because it's being deceived and so can't do its job as well. But when the trick is good enough that it does not recognize it's a trick, such "tricks" will emerge in our learned behaviors.
All this adds up to the simple result that we learn to fear the unknown, because we have learned that the more unknown there is in the world, the higher the odds of being hurt in the future become.
But then we find the tricks. Language behaviors are a large part of what humans do. We not only learn to deal with the environment by using our hands and body to manipulate it; we also learn to talk about our environment. The better we understand some aspect of our environment, the more words and language we have to talk about it. We use our language to guide our actions, and we know that the more we can talk about our environment, the more likely we are to be able to guide our actions in a "safe" direction.
When faced with an unknown, just talking about it helps us better determine how to act in the situation. But it also triggers our prediction system to reduce the odds of future dangers. Just the act of talking "tricks" our prediction system into making us "feel better" about the future. Our talk might be nonsense (for a given situation), but if the talk is good enough to fool the prediction system, it still makes us feel better.
So we develop these behaviors of rationalizing about an unknown, just because it tricks our internal reward prediction system into making the future look brighter for us. The better the "story" we put together in our random talking, the better the trick works.
Superstitions, mythology, rationalizing, all emerge from such a behavior learning system just because it is able to trick the internal reward prediction system.
The learning system can't tell the difference between a learned behavior that addresses the real danger (like learning to move our hand away from a fire), and the tricks that only subvert the prediction system into predicting a bright future, rather than doing what is actually needed to create a brighter future.
Humans have a very advanced prediction system. It can pick up very subtle and complex clues from the sensory environment, and produce a highly accurate prediction of the rewards, and dangers, we are likely to run into. But it is only a learning machine, with finite limits. It can be tricked. And where it can be tricked, behaviors will automatically emerge to trick the prediction system, instead of actually addressing the truth of why it was predicting something bad to start with.
The mythologies of religions are just learned behaviors that trick our own internal brain into predicting a better future for us. Just making up a name for an unknown cause, like "God", is itself a trick. Having a name for such a thing makes us feel like we have mastered some important aspect of the unknown. That we "know the cause". But just making up a name is nothing more than a language trick to fool our internal reward prediction system.
Our behavior selection system, and our prediction system are one and the same. They work hand in hand, to both predict the future, and decide how to react to it (what behaviors to produce from second to second over our lives). The more advanced our behaviors become, the more advanced our prediction system becomes.
As we learn more advanced language behaviors, we gain an improved prediction system that is harder to "trick" with our own language behaviors. The better we understand that we are "tricking" ourselves into feeling better, the less the tricks work to actually make us feel better. As we learn they are just tricks, our prediction system is learning, at the same time, to lower its predicted odds of a brighter future every time it detects such a trick being used.
So we have a natural and obvious reason to fear the unknown. That which we don't understand is more likely to hurt us. We explore and study and experiment in life so as to reduce the dangers of the unknown (aka increase the rewards). But we also play tricks on ourselves, to make the unknown seem less unknown than it really is. And the better we understand that they are tricks, the less they work, and the less we are inclined to use them.
So why do we "praise" the unknown? Well, we don't praise the unknown. We praise God, not just "the unknown". Again, it's just a trick we use to make ourselves feel better. When we were kids, we learned to trust our parents and care-givers. They were far wiser than we were, and the best way to protect ourselves, was to obey them - to turn ourselves over to their desires. If they tell us not to play with fire, and we ignored them, we got burned so we learned to value of following the desires of a "higher power" - the more experienced adults in our lives.
As we grow up, we gain our own experience, and learn that the other adults are not really a "higher power" anymore. We learn we must face the world on our own, and make our own decisions. But we still yearn for that simpler time, when there was always a far wiser force in our life to tell us what needed to be done. Praising God is all just part of that trick of allowing ourselves to believe we still have wise parents in our lives to protect us from the dangers. It's a trick to make our prediction system feel we are safer, than we really are.
Now, that said, religions have evolved over thousands of years. The customs, rules, and beliefs in a religion have an "intelligence" about them that comes from thousands of years of experience. They carry with them an evolved wisdom of the ages. So when a religion says you should not kill someone, that's a belief that has survived thousands of years of testing. The beliefs that did not work out as well were removed from the religion, and new beliefs were added or modified over time. So the set of beliefs and customs that make up a religion is a real type of higher intelligence. So if, mixed in with all those beliefs, there is a "praising God" behavior, we can also understand that what is really being praised is the tradition of the religion itself. The religion asks people to turn themselves over to its traditions, just like we turned ourselves over to the wisdom of our parents when we were kids. But they make up this mythological figure they call "god" and assign him to be the root cause of everything, just because it's a great trick to make us feel more secure about not knowing the cause of so many things, while at the same time giving this mythical "god" the credit for creating the time-tested customs of the religion itself (the bible is the word of god and all that).
The "God" at work there is just the evolution of religious memes. It's just another process of evolution. The evolution of time tested learned behaviors. But it is a higher power, and there are valid reasons for people to respect (act according to) the traditions of such a time tested set of memes. It is actual in our interest to not kill, or steal, or lie, etc.
So I can't think of any examples where we praise the unknown. There are, however, standard religious memes for praising the time-tested wisdom of the religion, doing it indirectly with a "trick": making people believe there is a "god" that created the rules.
It's all a very complex set of learned behaviors that help people maximize their future rewards. Some of the behaviors actually work to make the future better (by not killing people, we reduce the odds of others killing us), but other parts of it are just tricks we have learned to manipulate our own reward prediction system - what is commonly talked about as "our feelings". They are tricks to make us feel better.
Now, also, don't get me wrong about the importance of feeling better. It's what we are hard-wired to do. It's our sole purpose in life - it's what we are. We are machines built for the purpose of trying to make ourselves feel better. Our low-level innate rewards are wired into us by the process of evolution so that what makes us feel better is likely to help our species survive. So at the higher, more broadly abstract level, we can be seen as a survival machine (though I would argue that exists more at the level of the species than the individual). At the level of the individual, we have in effect just been given the "job" of making ourselves feel better, and not worrying about the bigger picture of whether that helps us survive or not. That is beyond our pay grade. It's the job of the process of evolution to wire us "correctly" so that the things that make us feel better work well to help us survive. And because evolution has, in general, done a good job of that, most human behaviors do tend to lean towards maximizing our survival odds.
But evolution is not perfect, and it's constantly exploring alternatives, which is why we find so many human behaviors that seem anti-survival.