I know that seems like a broad question, so let's restrict the discussion to primitives like ints.
If I have an algorithm like
int val;
for(int i = 0; i < arr.Length; ++i)
{
val = ... // a value I need for this iteration of the loop only, and never outside the loop
}
does that have a performance benefit over
for(int i = 0; i < arr.Length; ++i)
{
int val = ... // a value I need for this iteration of the loop only, and never outside the loop
}
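For reference, here is a minimal compilable version of both variants, with placeholder work (`arr[i] * 2`) standing in for the elided `...` and a second variable name in the second loop, since C# does not allow a nested-scope local to shadow one in the enclosing method scope:

```csharp
using System;

class Scratch
{
    static void Main()
    {
        int[] arr = { 1, 2, 3, 4 };

        // Variant 1: declared once, outside the loop
        int val;
        for (int i = 0; i < arr.Length; ++i)
        {
            val = arr[i] * 2; // placeholder work; value used only within this iteration
            Console.WriteLine(val);
        }

        // Variant 2: declared inside the loop body
        // (named val2 here only to keep both variants in one method;
        // in real code it would also just be called val)
        for (int i = 0; i < arr.Length; ++i)
        {
            int val2 = arr[i] * 2; // same placeholder work
            Console.WriteLine(val2);
        }
    }
}
```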
I know that some people consider the second form more proper, on the grounds that you should declare a variable in the narrowest scope that uses it. But I've seen both in professional codebases. My question is strictly about performance.