41

Suppose I build a machine which will be given Rubik's cubes that have been scrambled to one of the $\sim 2^{65}$ possible positions of the cube, chosen uniformly at random. Is it possible for the machine to solve the cubes without giving off any heat?

Solving the cube might be thought to consist of destroying about 65 bits of information because it takes 65 bits to describe the cube's state before entering the machine, but zero bits to describe it afterwards (since it is known to be solved).

If the information stored on a Rubik's cube is equivalent to any other type of physically-stored information, then by Landauer's principle we might expect the machine to have to give off a heat of $T \mathrm{d}S \sim 65 T k_B \ln(2)$, but is it valid to apply Landauer's principle to the information stored in such a manner? What sort of argument does it take to say that a certain type of information is physically meaningful, such that destroying it necessitates paying an entropy cost somewhere else?
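For scale, a quick back-of-the-envelope check (a sketch in Python; room temperature is an assumed input, and the state count is the standard one for a 3×3×3 cube):

```python
from math import factorial, log, log2

# Reachable positions of a 3x3x3 Rubik's cube:
# (8! corner placements * 3^8 twists * 12! edge placements * 2^12 flips) / 12,
# where the divisor 12 enforces the twist, flip and permutation-parity
# constraints that legal moves cannot violate.
positions = factorial(8) * 3**8 * factorial(12) * 2**12 // 12
bits = log2(positions)                    # ~65.2 bits to specify a state

k_B, T = 1.380649e-23, 300.0              # J/K; room temperature (assumption)
heat_bound = bits * k_B * T * log(2)      # Landauer: >= kT ln 2 per bit erased

print(f"{positions:e} positions, i.e. {bits:.1f} bits")
print(f"Landauer bound at {T:.0f} K: {heat_bound:.2e} J")
```

At 300 K the bound comes out to roughly $2 \times 10^{-19}$ J, so this is a question of principle rather than of practically troublesome waste heat.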

Mark Eichenlaub
  • I'm unconvinced of your claim that a solved cube has less information. You need to identify the position of only a subset of the cubelets to uniquely identify the cube's configuration (because, e.g. one twisted corner forces a second corner to be twisted as well). You don't know a cube is solved until you've verified that every cubelet is in a valid position, so you never change the entropy of the cube, just whether the configuration is in an aesthetically "pleasing" state or not. – Carl Witthoft Jan 21 '15 at 13:01
  • @CarlWitthoft As always, the amount of information assigned to an ensemble is observer-dependent. From the point of view of an observer who knows (and indeed, trusts) that the cube is solved after the machine, the state before it is solved has higher entropy. From the point of view of an observer who knows nothing except the rules of how the Rubik's cube can move, I agree with you that the entropy is the same before or after. – Mark Mitchison Jan 21 '15 at 13:37
  • I also am skeptical of the claim that a solved cube contains less information just because it looks "correct" by convention. Furthermore, the solution is not unique. If you add orienting marks on the faces, you'll see that there is more than one "solved" configuration for a conventional Rubik's cube. – 200_success Jan 22 '15 at 02:12
  • @200_success - Even if you add orienting marks, it's still true that if you count the number of different possible arrangements of colored squares and orienting marks, the total number of arrangements that would qualify as a "solved" cube is much smaller than the number of all possible non-solved arrangements (or all non-solved arrangements that would be reachable by some sequence of moves from a newly-manufactured cube in a solved state, since we can assume an initial "randomized" arrangement was created by randomly shuffling around a newly-manufactured cube). – Hypnosifl Jan 22 '15 at 19:07
  • And of course it is merely a matter of convention which arrangements we label "solved", but the point is just that if we design a machine that takes a cube in an arbitrary initial state and always puts it in one of those states we have arbitrarily labeled "solved", then this machine is taking the cube from a large number of possible initial cube macrostates to a smaller number of final cube macrostates, so in order for phase space volume to be conserved, the environment has to end up in a macrostate with more microstates than were in its initial macrostate (see my comments to Nathaniel). – Hypnosifl Jan 22 '15 at 19:10

6 Answers

36

Let's suppose you have a Rubik's cube that's made of a small number of atoms at a low temperature, so that you can make moves without any frictional dissipation at all, and let's suppose that the cube is initialised to a random one of its $\sim 2^{65}$ possible states. Now if you want to solve this cube you will have to measure its state. In principle you can do this without dissipating any energy. Once you know the moves you need to make to solve the cube, these can also be made without dissipating any energy.

So now you decide to build a machine that will solve the cube without dissipating any energy. First it measures the state and stores it in some digital memory. Then it calculates the moves required to solve the cube from this position. (In principle this needn't generate any heat either.) Then it makes those moves, solving the cube.

None of these steps need to give off any heat in principle, but your machine ends in a different state than the state it starts in. At the end of the process, 65 bits' worth of the machine's state has effectively been randomised, because it still contains the information about the cube's initial state. If you want to reset the machine so that it can solve another cube, you will have to reset those bits of state back to their initial conditions, and that's what has to dissipate energy according to Landauer's principle.
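A toy model may make this concrete (a minimal sketch in Python; the 65-bit register and the swap are illustrative stand-ins for the cube and the machine, not a literal solver):

```python
import random

# Toy model: the "cube" is a 65-bit register, with all-zeros playing
# the role of "solved"; the machine's memory starts in a known state.
n = 65
cube = [random.randint(0, 1) for _ in range(n)]   # uniformly scrambled cube
memory = [0] * n                                  # memory initialised to zeros

# "Solving" by swapping cube and memory is a bijection on the joint
# (cube, memory) state, so it is reversible and in principle heat-free.
cube, memory = memory, cube
assert all(bit == 0 for bit in cube)              # the cube is now "solved"

# Nothing has been destroyed yet: memory still holds the initial state,
# and swapping again would un-solve the cube. The irreversible step is
# resetting the memory to take the next cube:
memory = [0] * n   # erasure: 2^65 possible memory states -> 1
```

Only that last line is a many-to-one map, and it is the step Landauer's principle charges for: at least $65\, k_B T \ln 2$ of heat per reset.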

In the end the answer is just that you have to pay an entropy cost to erase information in all cases where you actually need to erase it. If you only want to solve a finite number of cubes, you can just make the memory big enough to store all the resulting information, so there's no need to erase it and no heat needs to be generated. But if you want to build a finite-sized machine that can keep on solving cubes indefinitely, then eventually dumping entropy into the environment becomes a necessity.

This is the case for Maxwell's demon as well: if the demon is allowed to have an infinite memory, all initialised to a known state, then it need not ever dissipate any energy. But giving it an infinite memory is very much the same thing as giving it an infinite source of energy; it's able to indefinitely reduce the thermodynamic entropy of its surroundings only by indefinitely increasing the information entropy of its own internal state.

N. Virgo
  • How do we know that it is impossible to solve the cube without recording the position in the machine's state? – Mark Eichenlaub Jan 21 '15 at 15:14
  • @MarkEichenlaub: The scrambled cube has about $2^{65}$ possible states; the solved one has only one. On the microscopic level, physics is reversible, so when the cube is solved, the information needed to reverse the solution and reconstruct the original scrambled state from the solved one must go somewhere. That means it either has to be stored in some part of the system, or emitted from it (as heat). – Ilmari Karonen Jan 21 '15 at 15:53
  • @IlmariKaronen Okay, yeah, I think that sounds convincing. Thank you. – Mark Eichenlaub Jan 21 '15 at 16:12
  • That sounds like nonsense to me - the cube has one initial state and one final state. It is not in $2^{65}$ states at one time. Whether it's possible to compute the steps needed to solve the cube is an entirely different question – Jon Story Jan 21 '15 at 17:21
  • @Jon Story - Are you familiar with the idea of phase space in thermodynamics, and of the idea of considering the evolution of an ensemble of different points (each of which is a 'microstate') that occupy some volume of phase space, and of the idea that the volume must be conserved (Liouville's theorem)? If so, just consider a starting ensemble consisting of microstates belonging to all the possible initial macrostates of the cube, the device, and the environment--if they all evolve to the same final macrostate – Hypnosifl Jan 21 '15 at 18:34
  • (continued) of the cube and the device, then the only way phase space volume can remain the same here is if the environment ends up in a macrostate with a larger volume than its initial macrostate (i.e., higher entropy). – Hypnosifl Jan 21 '15 at 18:35
  • @IlmariKaronen It seems this argument is predicated on being able to say that the state of the cube is identifiable with microstates somehow. E.g. if I could write the state of the system as the tensor product $\text{system} = \text{cube state} \otimes \text{everything else}$, then the argument would work b/c reducing entropy in $\text{cube state}$ forces me to increase it in $\text{everything else}$, but how do I know that I can speak about the state of the cube in such a way? – Mark Eichenlaub Jan 21 '15 at 18:54
  • @Mark Eichenlaub - When you say "identifiable with microstates somehow", do you mean identifiable with a single microstate, or do you mean the cube's state (the arrangement of whatever is being used as the squares on each face) can be taken as a type of macrostate consisting of many possible microstates? I think the latter was how Ilmari Karonen meant it. – Hypnosifl Jan 21 '15 at 19:39
  • Yeah, I think that's right. We can look at the set of all microstates of the entire system and partition it based on the state of the cube in each microstate. Then by the cube's approximate symmetry, each state of the cube should have about the same number of microstates of the entire system associated with it. Thus the entropy reduction based on counting microstates is roughly the same as that based on counting the information in the cube. – Mark Eichenlaub Jan 21 '15 at 21:40
  • "giving it an infinite memory is very much the same thing as giving it an infinite source of energy" - that's a rather fascinating statement. Do you have any references for this? – user253751 Jan 22 '15 at 10:25
  • "it's able to indefinitely reduce the thermodynamic entropy of its surroundings only by indefinitely increasing the information entropy of its own internal state." Can you give some reference for this claim? It is interesting but quite implausible. Just correlating state of one system with the state of many particle system does not seem sufficient to change latter's macrostate. – Ján Lalinský Jan 22 '15 at 20:54
  • @JánLalinský this might be down to imprecision of language on my part. I meant to say "it can indefinitely reduce the thermodynamic entropy of its environment, but only through processes that involve indefinitely increasing the information entropy of its own internal state." It's not just the correlation that makes the entropy decrease, it's the fact that it can make measurements and act on them without having to generate heat. I guess the reference would be Landauer's paper, though I have to admit I haven't read the original. (See next comment for link) – N. Virgo Jan 23 '15 at 00:18
  • @immibis I guess the reference would be Landauer's paper (http://www.pitt.edu/~jdnorton/lectures/Rotman_Summer_School_2013/thermo_computing_docs/Landauer_1961.pdf), though I have to admit I haven't read it. Any recent treatment of Maxwell's demon should have a good explanation of how the demon can reduce the entropy of a gas, allowing work to be done, unless it has to erase the information that it inevitably stores about the gas as it operates. – N. Virgo Jan 23 '15 at 00:20
  • @MarkEichenlaub (replying to your first comment) this is just because the machine has to know the state of the cube in order to solve it; once that information's been copied into the machine's state, there's no way to erase it without generating heat. To see it another way, there are $\sim 2^{65}$ possible initial (macro)states of the cube+machine system, so there must be the same number of possible final states unless information has been lost. The cube always ends up in the same state, so the machine must end up in one of $\sim 2^{65}$ states, depending on the initial state of the cube. – N. Virgo Jan 23 '15 at 00:30
  • @MarkEichenlaub it looks like the discussion resolved the rest of your questions - is that right? – N. Virgo Jan 23 '15 at 00:31
  • Intuitively, I think this is right. I think it is also not perfectly rigorous because the information stored in the cube is not obviously the same sort of information stored in microscopic degrees of freedom. Conservation of information is a theorem we can derive from Hamiltonian dynamics. If we want to apply it to the Rubik's cube, we need some justification that our ideas about Hamiltonian systems can be applied. There needs to be some definite link made between states of a Rubik's cube and the microstates for which we have relevant theorems about information. – Mark Eichenlaub Jan 23 '15 at 00:44
  • That was the intent of http://physics.stackexchange.com/questions/160585/is-there-a-thermodynamic-limit-on-how-efficiently-you-can-solve-a-rubiks-cube/160611?noredirect=1#comment337445_160611 – Mark Eichenlaub Jan 23 '15 at 00:44
  • @MarkEichenlaub I think your comment is right; another way to look at it is that the link you seek is exactly Landauer's principle. It says that although information need not be preserved on the macroscopic level, we can't erase macroscopic information without increasing the number of microscopic states. This is essentially because of what you say in your comment: at any given time the total state of the system can be partitioned into $\text{macrostate}\otimes\text{microstate}$, so by Liouville's theorem you can't decrease the first without increasing the second. – N. Virgo Jan 23 '15 at 01:03
  • Would it be possible to have a "dummy" cube where the state is unimportant? Then you could solve $n$ other cubes by storing the inverse of the moves in the dummy cube. You would never need to destroy the information as you simply don't care about it. – CJ Dennis Jan 27 '15 at 03:21
  • @CJDennis that's a good idea, and it took me a while to see why it can't work. Let's say you've solved the cube and now you want to offload the 'waste' information about its initial state into the dummy cube. If the dummy cube is in a known state (e.g. already solved), you can just 'swap' the information in memory with the dummy cube's state, clearing the memory. But if you (the machine's designer) don't know the dummy cube's state then this doesn't work; there's 65 bits of unknown information in memory, and another 65 bits in the dummy cube - it can't all compress into the cube. – N. Virgo Jan 27 '15 at 05:27
  • ...so if you have an inexhaustible supply of initially solved dummy cubes then you can indefinitely solve cubes by transferring their scrambledness into the dummy cubes - but with one dummy cube you can't solve more than one other cube. – N. Virgo Jan 27 '15 at 05:29
  • @Nathaniel If each move twists one face by 90, 180 or 270 degrees, can't you store that information in the dummy cube by twisting it in the opposite direction? If that's the case it never matters what the initial state of the dummy cube is or in what order you make moves on other cubes. You could alternate moves or choose a random cube to move each time. You'd be effectively XORing the state of each cube onto the dummy cube. – CJ Dennis Feb 16 '15 at 09:14
  • @CJDennis XOR is only a reversible operation if you also store one of its inputs. Given only the final state of the dummy cube you can't reconstruct the initial state of more than one input cube, so information has been lost. – N. Virgo Feb 16 '15 at 23:34
  • @Nathaniel Has the information been lost or encrypted? If I XOR 10 8-bit values together you will never get any of them back with any certainty unless you already know at least 9 of the original values. If you know any 9 you can always reconstruct the tenth. – CJ Dennis Feb 21 '15 at 04:48
  • @CJDennis that's the point - the information is not in the result from the XOR, it's in the correlations between the result and the inputs. If you keep all but one of the inputs, you still have the information, but if you erase more than one of the inputs, you don't. Thus, you can't use XOR to store information about the initial state of more than one input cube in a single 'dummy' cube, because solving the input cubes erases the inputs to the XOR. – N. Virgo Feb 22 '15 at 09:39
  • @Nathaniel So the original solution only works because you "know" the starting state of the dummy cube was solved? – CJ Dennis Feb 23 '15 at 01:46
  • @CJDennis I'm not sure what you mean by "the original solution." Do you mean a case where one input cube can be solved by transferring information about its initial state onto a dummy cube? This can indeed only work if the machine is designed on the assumption that the dummy cube has a particular initial state. Otherwise there will be at least two different inputs that go to the same output (where "input" includes the state of the cube to be solved and that of the dummy cube), so it can't be implemented without an information-erasing step that would generate heat. – N. Virgo Feb 23 '15 at 05:54
3

In principle I agree with your analysis, but I don't agree with the conclusion. From an algorithmic point of view, you can solve the cube without giving off any heat, as long as information is not lost. So in principle you can have an extra cube in a known state, which you then transform in tandem with the cube you are trying to solve. The initial state of the first cube is then encoded in the final state of the second cube. In the field of reversible computing, the second cube represents an ancillary variable.
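A minimal sketch of the ancilla trick (Python, treating cube states as permutations of 48 facelets; the random permutation is a stand-in and is not restricted to legal cube states, which doesn't matter for the counting argument):

```python
import random

def compose(p, q):
    """Permutation composition: (p after q)[i] = p[q[i]]."""
    return [p[i] for i in q]

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return inv

identity = list(range(48))                  # identity permutation = solved cube
scrambled = random.sample(range(48), 48)    # the cube to be solved (stand-in)

# Any solver yields a move sequence S with S after scrambled = identity:
solution = inverse(scrambled)
assert compose(solution, scrambled) == identity

# Applying the inverse of the solution to a second, solved cube reproduces
# the original scrambled state, so no information is erased anywhere:
ancilla = compose(inverse(solution), identity)
assert ancilla == scrambled
```

The ancilla ends up holding exactly the information that solving would otherwise have destroyed, so the joint operation is a bijection and Landauer's principle charges nothing until the ancilla itself is reset.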

lionelbrits
  • Sure, but the question is, supposing we are not going to record the information on a second cube (or anywhere else), can we then justify the conclusion that the machine must give off heat? – Mark Eichenlaub Jan 21 '15 at 15:17
  • The point is that it is not reversible. If you want to get rid of friction, etc., at some point your abstract Rubik's cube will just become a bunch of qubits. I don't think Landauer's principle has been proven for general representations of information, however. – lionelbrits Jan 21 '15 at 19:11
1

I actually read the title in a different way, so let me answer a different question: what is the minimum thermodynamic requirement to solve a cube? Now, if you analyze the starting position (which some algebraists have done), then you know how many moves it takes to solve. If you do a weighted sum over all starting states, i.e. weighted by the number of moves to solution from each state, you quickly find the expected energy (in "move units"), the std. dev., etc.
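A sketch of that weighted sum (Python; the depth table is truncated to the first three entries of the known face-turn-metric counts and is only illustrative — the full table, up to God's number of 20 moves in this metric, would need to be filled in from the cube literature):

```python
from math import sqrt

# counts_by_depth[d] = number of positions whose optimal solution takes
# d face turns; truncated, illustrative values (1, 18, 243, ...).
counts_by_depth = {0: 1, 1: 18, 2: 243}

total = sum(counts_by_depth.values())
mean = sum(d * n for d, n in counts_by_depth.items()) / total
var = sum((d - mean) ** 2 * n for d, n in counts_by_depth.items()) / total

print(f"expected moves: {mean:.2f}, std. dev.: {sqrt(var):.2f}")
```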

I guess this is more boring than the question intended :-(.

Carl Witthoft
1

Suppose I build a machine which will be given Rubik's cubes that have been scrambled to one of the $\sim 2^{65}$ possible positions of the cube, chosen uniformly at random. Is it possible for the machine to solve the cubes without giving off any heat?

If by "giving off heat" you mean change of mechanical/electrical energy into internal energy, then in practice no - in real machines there is always some friction and dissipation of mechanical/electrical energy. It is extremely hard to prevent it entirely if there is some motion involved.

In theory, if we could build a machine that transforms the cube without dissipating energy (obeying reversible mechanics, in which heat plays no role, or working with a negligible amount of energy, i.e. slowly), then I think the answer is yes, since there are algorithms for solving the Rubik's cube and I see no reason why these algorithms could not be run by such a machine. I am not sure, though.

Solving the cube might be thought to consist of destroying about 65 bits of information because it takes 65 bits to describe the cube's state before entering the machine, but zero bits to describe it afterwards (since it is known to be solved).

If by "destroy information" you mean "restore the cube into solved state and reset the machine into state ready" then I agree; in the sense that after the cube is solved the information about the initial state of the cube cannot be acquired from it anymore.

However, let me elaborate on one point that often causes confusion: a physical state is not information. Using the phrase "information gets destroyed" muddles the analysis, because the process actually results in an increase of information about the cube; we did not know the initial state, but in the end we know it is solved.

That is why it is important to distinguish between the physical state of the cube and information about the state of the cube. What gets destroyed in the process is not information but the initial physical state; the information actually increases.

Of course, the information about the initial state may still be acquirable from the state of the machine or its environment.
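In entropy-as-ignorance terms the point can be put numerically (a sketch, reusing the state count from the question): the observer's ignorance of the cube drops by about 65 bits, which is precisely an increase in information about the cube.

```python
from math import log2

N = 43252003274489856000   # possible cube positions, ~2^65 (from the question)
H_scrambled = log2(N)      # before: ~65.2 bits of ignorance about the cube
H_solved = log2(1)         # after: 0 bits; the cube is known to be solved

print(f"ignorance about the cube drops by {H_scrambled - H_solved:.1f} bits")
```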

...by Landauer's principle we might expect the machine to have to give off a heat of $T \mathrm{d}S \sim 65 T k_B \ln(2)$, but is it valid to apply Landauer's principle to the information stored in such a manner?

No.

When the machine is reset by the action of the environment, the information entropy of the machine + cube decreases. If information entropy were the same thing as thermodynamic entropy, and if the whole process could reasonably be described as a reversible thermodynamic process, one could think that it must be accompanied by the system giving off heat to the environment, since Clausius showed that in such a case $\Delta S_{\text{thermodynamic}} = \int \mathrm{d}Q/T$.

But this is not the case at all. Even if we assume that the information entropy of the environment increases as a result of the process, this by itself is not sufficient to conclude that the thermodynamic entropy does the same. The concept may not even be applicable to the environment. Even if it is, the whole process may still occur with arbitrarily small energy transfer, so no lower bound on the amount of heat can be inferred.

I do not understand why people put so much belief and enthusiasm in Landauer's principle. The concepts of temperature, heat transfer and thermodynamic entropy are of limited applicability, and their proper area of use is the thermodynamics of macroscopic systems. It makes little sense to complicate the description of computational processes with the limited vocabulary of thermodynamics or statistical physics.

What sort of argument does it take to say that a certain type of information is physically meaningful, such that destroying it necessitates paying an entropy cost somewhere else?

I am not sure why you use the expression "physically meaningful". Information is not a physical property of bodies; it is a non-physical concept. Originally, information resides in the mind. It may then be encoded into the physical state of another body, such as a book, a hard drive or a Rubik's cube, but the mind is still needed to turn that state back into information.

However, the entropic cost is plausible, that is, an information entropy cost. After the environment has interacted with a system of unknown state (the Rubik's cube), the amount of information we have about the environment most probably decreases. This means the information entropy (our ignorance of the environment's state) increases; hence the cost.

However, let me repeat that there is no direct implication for a change in thermodynamic entropy (or generation of heat) in any of these systems.

Information entropy and thermodynamic entropy are very different concepts, and there is no universally valid correlation between their changes. Only in a thermodynamically reversible process do they correspond to each other, and it is not necessary that the environment undergo such a process as the machine solves the Rubik's cube.

  • "The concepts of temperature and entropy are of limited applicability and their proper area of use is (statistical) thermodynamics. The computer does not need to be described in terms of this scheme." It may not need to be, but the question is about what conclusions you reach if you choose to describe it in these terms. If you use the arrangement of squares on the cube and the state of the computer's memory as macro-variables to describe the macrostate (perhaps along with others like temperature), then if the system is guaranteed to end up with the same value of these variables, – Hypnosifl Jan 22 '15 at 00:44
  • (continued) that implies that some combination of the macrostates for the cube, the cube-solving computer, and the environment must have a higher multiplicity (thus higher entropy) than the initial macrostates, presumably due to an increase in temperature. See my comments on Nathaniel's post about conservation of volume in phase space for the reasoning. – Hypnosifl Jan 22 '15 at 00:46
  • This is an obvious point that wasn't the intent of the question. What if the computer needs to solve a very large number of cubes and doesn't have enough memory to store them all? – Mark Eichenlaub Jan 22 '15 at 03:07
  • I've reworded and expanded the answer. – Ján Lalinský Jan 22 '15 at 20:41
  • @MarkEichenlaub : I am not sure it is that obvious, so I think it is a good thing to express it in the answer. If the machine does not have enough memory, it needs to be reset before the next cube is solved. So the environment comes into play - see above. – Ján Lalinský Jan 22 '15 at 20:43
  • @Hypnosifl : I think it makes little sense to use thermodynamics and macrovariables such as temperature and thermodynamic entropy to describe a purely mechanical process, even if information entropy may be applied, so that's why I said that. I see no good in introducing macrovariables when microvariables are necessary to define and discuss the details of the process. True, it can be done for the cube, less easily for the computer, and maybe for some limited model of the environment. I just see no point in doing that. But I could be wrong. – Ján Lalinský Jan 22 '15 at 21:12
  • But the point isn't to come up with a realistic estimate of the heat that would be generated by the mechanical process, but just to use statistical mechanics to show that there's an absolute lower bound on the entropy that could be generated by any process of this type (one where macro-variables associated with both the cube's arrangement and the device's memory were guaranteed to end up with a known set of final values regardless of the initial values). Do you actually disagree with the statistical reasoning for such a lower bound, or are you just saying it's not useful in practice? – Hypnosifl Jan 22 '15 at 21:55
  • Also, I'd say that thermodynamic entropy is a particular type of information entropy--self-information is the -log of the probability of a given result from any sample space you might wish to use, whereas the thermodynamic entropy of a macrostate is k times the -log of the probability of the "result" of the system being in a particular microstate, given a known macrostate (or k x log total number of microstates). But you're free to pick any arbitrary set of macro-variables to define your macrostate. – Hypnosifl Jan 23 '15 at 00:26
  • @Hypnosifl, the information entropy for the system (cube + machine) decreases. It makes little sense to introduce information entropy for the environment, but if we do, and the evolution of the supersystem is Hamiltonian, then the total information entropy remains constant. If the information entropy is defined to be additive, the information entropy of the environment increases. This is not very interesting. It depends on the special assumptions mentioned, and even if it is valid, there is no implication for transfer of heat or thermodynamic entropy. – Ján Lalinský Jan 23 '15 at 21:11
  • @Hypnosifl, k times the log of the number of microstates is not thermodynamic entropy, but a special instance of information entropy for the case where all states are equally probable. By making additional assumptions, its value for a mechanical model of a thermodynamic system (gas in a vessel) can be made proportional to the thermodynamic entropy for states of thermodynamic equilibrium. But the cube, the machine and the environment are not models of any thermodynamic system, and it makes no sense to say they are in a state of thermodynamic equilibrium. – Ján Lalinský Jan 23 '15 at 21:17
  • I referred to the self-information of an individual result rather than the information entropy H (which is really the expectation value of the self-information for a result, before you know what the result is). The self-information is about equal to the length of a message that would be needed to convey a result, using an ideal scheme like Huffman encoding that uses fewer symbols for more probable results. In these terms, the stat. mech. entropy of a macrostate is just k times the length of a message that would be needed to specify the microstate, given that you already know the macrostate. – Hypnosifl Jan 23 '15 at 22:55
  • Also, it's sort of a matter of perspective but I would say the other statistical mechanics definition of entropy, $S = -k \sum_i p_i \log p_i$, is still ultimately based on the idea of treating all microstates as equally likely, since my understanding is that you normally derive the $p_i$ for each microstate i of your system A by assuming it's in contact with a reservoir B, and that the larger combined system (A+B) is equally likely to be in any of its microstates. – Hypnosifl Jan 23 '15 at 22:55
  • Then if you define macrostates for the combined (A+B) system in terms of the microstates of A (each macrostate consists of all microstates of A+B in which A is in one specific microstate), with the macrostates so defined, $S = -k \sum_i p_i \log p_i$ gives k times the expectation value for the length of a message needed to specify the macrostate of A+B (or equivalently, to specify the microstate of A) without specifying the full microstate A+B, using Huffman encoding with the assumption that each full microstate for A+B is equally probable. – Hypnosifl Jan 23 '15 at 22:56
  • Finally, $S = k \log \Omega$ can be taken as a definition of the entropy of a macrostate which still applies even if the system starts from out of equilibrium. The idea is then to use Liouville's theorem, which says that the dynamics must conserve volume in phase space. If you have an ensemble of microstates at time t0 which are all in one of some set of macrostates A1, A2, … , An, they cannot all evolve into microstates which are all members of macrostates B1, B2, … , Bm at time t1 if the sum of entropies of the second set is less than the sum of entropies in the first set, – Hypnosifl Jan 23 '15 at 22:57
  • (cont.) since that would mean the ensemble had reduced its volume in phase space. Do you disagree that this is a logical consequence of Liouville's theorem? This is the type of argument used here to derive Landauer's principle, for example. I believe you can also use a similar sort of argument to derive the conclusion that an isolated system starting in a far-from-equilibrium macrostate is likely to evolve in the direction of higher-entropy macrostates, see section 2 here. – Hypnosifl Jan 23 '15 at 22:58
0

Depending on how broadly you interpret the idea of a Rubik's cube, a quantum mechanical version requires no heat either to randomize or to solve. Suppose we have a virtual cube whose state is represented by 65 qubits. It is desirable that different states of the system have very low coupling, but in practice they must have some, so a system that begins in a basis state with each bit having a definite value will, in the long run, evolve into a superposition. To randomize the system we wait a very long (but random) time and then read the qubits. We then carry out a series of unitary operations to return the qubits to the state $|0,0,0,\ldots,0\rangle$, representing a solved cube. Since in principle neither operation requires energy, there is no thermodynamic cost.
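A two-qubit sketch of the reset-by-unitary step (Python with NumPy; which gates to apply is determined by the measured outcome, so the measurement record is assumed to be available):

```python
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])   # Pauli-X (bit flip)
I2 = np.eye(2)

state = np.zeros(4)
state[2] = 1.0                           # measured outcome: |10>

U = np.kron(X, I2)                       # flip the first qubit only
final = U @ state

assert np.allclose(final, [1, 0, 0, 0])  # register is now |00>, the "solved" state
assert np.allclose(U.T @ U, np.eye(4))   # U is unitary, hence reversible
```

Since the operation is unitary, no information is lost and in principle no energy need be dissipated; note, though, that the gate choice is conditioned on the measurement record, which then exists somewhere else in the apparatus.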

user27118
-1

A Rubik's cube can store information. The information can be changed. A Rubik's cube is a memory device. Changing one bit of information in a Rubik's cube takes at least energy $kT \ln 2$. That's Landauer's principle.

The energy used to change a bit in a Rubik's cube becomes heat energy of the Rubik's cube.

A virtual Rubik's cube in a computer's memory obeys the same law.

stuffu
  • I don't agree with this. The state of the Rubik's cube can be changed (in principle) using a unitary operation that costs no energy. Changing between pure states does not have to cost energy. But randomising the state (deleting information) does cost energy. This is Landauer's principle, see Nathaniel's answer. – Mark Mitchison Jan 21 '15 at 12:49
  • You overwrite old information when you store a bit in a Rubik's cube. A Rubik's cube is an irreversible device. To make a Rubik's cube reversible requires changing it, in the same way a laptop would have to be changed quite a lot if we wanted to make a reversible computer out of it. – stuffu Jan 21 '15 at 13:18
  • A normal physical Rubik's cube is of course irreversible. However, I don't see anything in principle that stops you from making one with very small friction, and changing the configuration quasi-statically, which obviously costs a vanishingly small amount of energy. – Mark Mitchison Jan 21 '15 at 13:27
  • Thermal vibrations would randomize the low-friction Rubik's cube during the slow writing attempt... But see lionelbrits' answer. – stuffu Jan 21 '15 at 14:50
  • Sure, then you have to do the thing at zero temperature. Or if you like, at a temperature such that $\hbar/k_B T$ is much longer than the time it takes to write. – Mark Mitchison Jan 21 '15 at 15:07