Joel David Hamkins gives a good reference, which you should read. I will instead focus on describing the pitfalls of different notions of "infinite processor speed".
The question hinges, as Qiaochu notes in the comments, on what precisely you mean by "infinite clock rate", and also on what it would mean to perform an infinite number of increments, which would clearly occur if the computer performed one increment per clock cycle and each clock cycle were infinitesimally brief. As such things are physically impossible,* it becomes important to describe explicitly (in well-defined terms) what you mean by these ideas.
[ * in accepted models of physics, and in models of physics satisfying common constraints, e.g. of physical systems having a finite amount of energy ]
As for performing an infinite number of increments: I would suggest that what your registers would have to store are ordinals (or some related type of data), so that the operation of incrementing, possibly an infinite number of times, is well defined no matter what. (At the very least, the clock register will have to hold an ordinal, as all it does is increment once every "clock cycle", and we want the clock register to have a well-defined value at all times. For that matter, your register A may as well itself be the clock register.) Given this assumption, let's talk about clock speeds.
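Before turning to clock speeds, here is a minimal sketch (in Python, purely as an illustration and not part of the question itself) of what an ordinal-valued register might look like. I restrict to ordinals below ω², i.e. values of the form ω⋅m + j, which is all the Zeno computer described below will ever need; the class name OrdinalRegister and its methods are my own illustrative choices.

```python
# A toy model of an ordinal-valued register, restricted to ordinals
# below omega^2, i.e. values of the form omega*m + j with m, j >= 0.
# Successor steps increment j; limit steps ("just after" infinitely
# many increments) reset j to 0 and increment m.

class OrdinalRegister:
    def __init__(self, m=0, j=1):
        self.m = m   # coefficient of omega
        self.j = j   # finite part

    def increment(self):
        """One clock tick: pass to the successor ordinal."""
        self.j += 1

    def limit_step(self):
        """Pass a limit point: take the supremum of all earlier values."""
        self.m += 1
        self.j = 0

    def __repr__(self):
        if self.m == 0:
            return str(self.j)
        finite = f" + {self.j}" if self.j else ""
        return f"ω·{self.m}{finite}" if self.m > 1 else f"ω{finite}"

# Example: A starts at 1 and is incremented; at a limit point it jumps
# to the next multiple of omega (the infinitely many intervening
# increments are not shown here).
A = OrdinalRegister()   # A = 1
A.increment()           # A = 2
A.limit_step()          # A = ω
A.increment()           # A = ω + 1
print(A)                # prints: ω + 1
```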
We need, for the sake of concreteness, to determine precisely at which times the clock register increments: that is, those times t such that the value of A at any time t' > t is larger than its value at time t, because the instruction A = A+1 is carried out at time t. We must declare, as part of specifying the processing speed, a subset S ⊆ ℝ representing the set of instants at which the clock register increments.
There is a problem if you want your computer to have a "uniformly fast" infinite processing speed. This means that between any two points in time (modeled here by the real numbers ℝ, as is usual), an infinite number of operations are performed: that is, there is always an element of S, and in fact infinitely many, between any two distinct real numbers.† The problem is the following: for any element of S, there cannot be such a thing as the "next largest" element of S, because (exactly as we noted just now!) for any s ∈ S and any supposedly "next largest" s' ∈ S, there is an element of S which comes strictly between s and s'.
[ † That is to say, S is dense in the real numbers.]
A consequence of this is that if the computer's processing speed is "uniformly" infinite, the register A can never have the value 2! For even though we know that it starts at 1 at time zero, there is no "next" time at which it increments, so there is no time at which A changes from 1 to 2. If A is supposed to take values larger than 2, but we cannot identify any time at which it had the value 2, it seems to me that this model of computer is ill-defined.
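To make the "no next element" problem concrete, here is a small sketch (again in Python; the choice of the rationals in (0, 1) as a stand-in for a dense set S, and the function name refute_next, are mine): whatever element of S you propose as the "next" increment time after s, the midpoint lies strictly between them and refutes the proposal.

```python
from fractions import Fraction

# Take S to be, say, the rationals in (0, 1): a dense set of instants.
# Whatever element s' of S you propose as the "next" increment time
# after s, the midpoint (s + s')/2 also lies in S and comes strictly
# between them, so s' was not the next one after all.

def refute_next(s: Fraction, candidate: Fraction) -> Fraction:
    """Given s < candidate, return an element of S strictly between them."""
    assert s < candidate
    return (s + candidate) / 2

s = Fraction(1, 3)
candidate = Fraction(1, 2)
for _ in range(5):
    candidate = refute_next(s, candidate)
    print(candidate)   # 5/12, 3/8, 17/48, ... always strictly between
```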
We can solve this if we abandon the idea of being uniformly fast over time. Instead, we may consider a Zeno computer: a computer which performs its first instruction in 1/2 second, its second instruction in 1/4 second, its third instruction in 1/8 second, and so on, halving the execution time each step so that it performs a (countably) infinite number of instructions in 1 second. (You can then repeat this scheme every second, or some elaboration of it.) The point is that each instruction takes a finite amount of time, but that finite amount may be arbitrarily small, and there are intervals of time in which an infinite number of instructions are performed. What will then occur in this computer is that the register A is the following function of the time t: it will be equal to
- 1, if 0 < t ≤ 1/2;
- 2, if 1/2 < t ≤ 3/4;
- 3, if 3/4 < t ≤ 7/8;
- ...
- ω (the smallest infinite ordinal), if t=1;
- ω + 1, if 1 < t ≤ 3/2;
- ω + 2, if 3/2 < t ≤ 7/4;
- ...
- ω⋅2, if t = 2;
- ω⋅2 + 1, if 2 < t ≤ 5/2;
- ω⋅2 + 2, if 5/2 < t ≤ 11/4;
etc. Note that at every integral t after zero, there is a value which A holds only for an instant, prior to being incremented again; this is a feature of our choice of how to define "when the clock register increments".
Then, the behaviour of the register A is clear; it increments at all times t ∈ S = { N − 2^(−k) | N, k ∈ ℤ }, and in particular performs a countably infinite number of operations "just prior to" (i.e. in any finite amount of time prior to) every integral value of t. The value of A is then well-defined.
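As a sanity check on this timing, here is a short sketch (Python, using Fraction for exact arithmetic; the function name value_of_A is my own illustrative choice) which computes the value of A at any time t > 0 according to the schedule above, returned as a pair (m, j) standing for the ordinal ω⋅m + j.

```python
from fractions import Fraction

def value_of_A(t: Fraction):
    """Value of the register A at time t > 0 for the Zeno computer,
    returned as a pair (m, j) standing for the ordinal ω·m + j."""
    assert t > 0
    if t.denominator == 1:            # t is a positive integer: a limit point
        return (int(t), 0)            # A = ω·t exactly at that instant
    m = int(t)                        # integer part: how many limit steps so far
    f = t - m                         # fractional part, strictly between 0 and 1
    j = 1
    while f > 1 - Fraction(1, 2**j):  # find j with f in (1 - 2^(1-j), 1 - 2^(-j)]
        j += 1
    return (m, j)

# A few of the values listed above:
print(value_of_A(Fraction(1, 3)))     # (0, 1)  i.e. A = 1
print(value_of_A(Fraction(7, 10)))    # (0, 2)  i.e. A = 2
print(value_of_A(Fraction(1)))        # (1, 0)  i.e. A = ω
print(value_of_A(Fraction(5, 4)))     # (1, 1)  i.e. A = ω + 1
print(value_of_A(Fraction(11, 4)))    # (2, 2)  i.e. A = ω·2 + 2
```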
An important feature of the Zeno computer model, which makes it well defined, is that the limit points of S form a well-ordered set. This is in fact nothing more than saying that the values of the clock register A are ordinals (or always have a meaningful "next largest" value) and that the register is well-defined at all times. In particular, there do not exist any convergent, strictly decreasing sequences in S. This feature is necessary for the clock register to have a well-defined value at all times; and this is precisely the criterion that fails for a "uniformly" infinite clock speed.
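The contrast can be made concrete: for the Zeno schedule, the "next increment time after t" is always attained, whereas for a dense schedule (by the midpoint argument above) it never is. A small sketch under the same conventions as before, taking the relevant part of S to be { N − 2^(−k) : N ≥ 1, k ≥ 0 }; the function name next_increment is again mine.

```python
from fractions import Fraction

# For the Zeno schedule S = { N - 2^(-k) : N >= 1, k >= 0 }, the set of
# increment times strictly after any t >= 0 has a least element, so
# "the next increment" is well defined.  (For a dense S, by the midpoint
# argument above, no such least element exists.)

def next_increment(t: Fraction) -> Fraction:
    """Least element of S strictly greater than t (assuming t >= 0)."""
    N = int(t) + 1                      # t lies in [N-1, N)
    k = 0
    while N - Fraction(1, 2**k) <= t:   # smallest k with N - 2^(-k) > t
        k += 1
    return N - Fraction(1, 2**k)

print(next_increment(Fraction(0)))      # 1/2
print(next_increment(Fraction(7, 8)))   # 15/16
print(next_increment(Fraction(1)))      # 3/2
```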
Infinite clock rate just means that it executes an infinite number of cycles in any finite time. An infinite register is just a Hilbert hotel.
A starts out like ..00000001. Then A = ..00000010. Next A = ..00000011, etc.
What is difficult with this?
The point is that A can NEVER become infinite (not even at an infinite clock rate), since there is no finite A such that A+1 is infinite.
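A rough sketch of that point (Python; the unbounded-integer representation stands in for the "Hilbert hotel" register, and the helper name increment is illustrative): a single increment of a finite binary numeral always yields another finite binary numeral, so no finite number of clock cycles ever produces an infinite value.

```python
# The "Hilbert hotel" register: an unbounded string of bits, almost all zero.
# Incrementing a finite binary numeral always yields another finite numeral,
# so after any finite number of clock cycles A is still finite.

def increment(bits: str) -> str:
    """Add 1 to a binary numeral given as a string, e.g. '1' -> '10'."""
    return bin(int(bits, 2) + 1)[2:]

A = "1"                       # ..00000001
for _ in range(5):
    A = increment(A)
    print(A.rjust(8, "0"))    # 00000010, 00000011, 00000100, ...
```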