
I was analyzing this source code of the Ising model. I found the term "transient state".

I also found the term in this text:

There are two absorbing states in this Markov chain because once either Jane or Eddie wins, the game is over, and the die is not rolled again. That the winner’s side of the die remains up forever is reflected in the value of unity along the diagonal and the value of zero in the nondiagonal elements for states 1 and 2. Also note that one of the 10 sides must be up, and so the sum of all the elements in each row of Mdie must be unity. We multiply the matrix Mdie on the left-hand side by a unit row vector VT with a 1 in the state the die is in before it is rolled. For the game to start the initial vector must be in the transient state, that is, it must be in state 3.

In this text:

*[image of the referenced text — not reproduced here]*

And, in this text:

...
5. Except actually this one just goes steps instead of checking for convergence because all I want to do is make an animation of the transient part.
...
You would normally run this on a 10x10 or maybe 20x20 lattice, and run for many more steps to get better convergence, but blog posts need visuals, and I wanted an animation of the transient state.
...

First, I am not sure whether these texts are related to the C# source code.

Second, I am not sure whether the correct term is "transient state" or "transition state".

  • What is a "transient state" or "transition state" in Ising models?

How do we treat these values during the simulation?

Do we discard these values? If so, why?

1 Answer


The first source, the program, is talking about what I would call "burn-in" or "thermalization". This is best understood through talking about simulation and computational physics.

The second source, about the dice game, is talking about Markov chains, but it complicates the conversation because once Jane wins, she has won forever (so we lose the desirable property that we can keep stepping forward in the Markov chain to collect more statistically independent samples). This is best understood through the language of statistics, with formulas involving $P(X^i \mid X^{i-1})$.
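As a concrete illustration of the dice-game chain, here is the $v^T M^t$ iteration in a short sketch. The per-roll probabilities below (0.4 / 0.4 / 0.2) are made up for illustration; the actual values depend on the rules of the game described in that text.

```python
# Hypothetical transition matrix for the dice game: states 0 and 1 are
# absorbing (Jane wins / Eddie wins) with unity on the diagonal; state 2
# is the transient starting state. The 0.4/0.4/0.2 split is illustrative.
M = [
    [1.0, 0.0, 0.0],   # state 0: Jane has won, stays won forever
    [0.0, 1.0, 0.0],   # state 1: Eddie has won, stays won forever
    [0.4, 0.4, 0.2],   # state 2: transient; may resolve or stay undecided
]

def step(v, M):
    """One Markov-chain step: v^T M (row vector times matrix)."""
    n = len(v)
    return [sum(v[i] * M[i][j] for i in range(n)) for j in range(n)]

v = [0.0, 0.0, 1.0]    # start in the transient state (state 3 in the text)
for _ in range(50):
    v = step(v, M)

# After many rolls, essentially all probability sits in the absorbing states;
# the transient component decays geometrically (here as 0.2**t).
print(v)
```

Note that every row of `M` sums to unity, matching the requirement quoted in the question that one of the sides must be up.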

The third source talks about continuous time statistics, master equations, and Glauber dynamics, and this is best understood in terms of stochastic processes.

The fourth source seems to be talking about what parts of the simulation are the most interesting to plot!

So it's very hard to give an answer that is true in all these different contexts. The term "transient" is just being used in its ordinary English sense. But I can address the first program and what is meant by discarding the "transient" time steps.


  1. In the Ising model and many other statistical models, we care about finding observables by sampling spins from a distribution $P(\{s\})\propto e^{-\beta H(s)}$.
  2. This is complicated, so we proceed by a Markov-chain Monte Carlo method. In the code, the spins are initialized to $s_i^0=1$ for all spins, where $i$ is an index that runs through all spin positions and $0$ means we are at step 0 of the algorithm.
  3. $s_i^0$ is not a typical configuration (it is not what we would expect from sampling from the true $P$ distribution), so we should toss it; it is not good data. So the idea of many of these algorithms around the Ising model is to take many steps in a Markov chain until we are really sampling from the true distribution $P(\{s\})$.
  4. We take a step in our algorithm, and now $s_i^1$ is a bit better, but still not a good sample from the true distribution, so we should discard it too. When do we stop discarding data? Well, in terms of the Markov-chain matrices in the second source, we should take enough time steps that $v^T M^t$ is close enough to the true distribution.*
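The steps above can be sketched as follows, assuming a standard single-spin-flip Metropolis update with $J=1$ and periodic boundaries (the actual C# code may use a different update rule; the burn-in logic is the point here):

```python
import random, math

def metropolis_magnetization(L=8, T=5.0, n_burn=200, n_sample=200, seed=0):
    """Average |magnetization| of a 2D Ising model after discarding burn-in.

    Samples taken during the first n_burn sweeps are still correlated with
    the all-up initial condition s_i^0 = 1, so they are thrown away.
    """
    rng = random.Random(seed)
    s = [[1] * L for _ in range(L)]          # s_i^0 = 1 for all spins
    def sweep():
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            nn = (s[(i+1) % L][j] + s[(i-1) % L][j]
                  + s[i][(j+1) % L] + s[i][(j-1) % L])
            dE = 2 * s[i][j] * nn            # energy change if spin (i,j) flips
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                s[i][j] *= -1
    for _ in range(n_burn):                  # transient / thermalization: discard
        sweep()
    mags = []
    for _ in range(n_sample):                # production run: keep these samples
        sweep()
        mags.append(abs(sum(map(sum, s))) / (L * L))
    return sum(mags) / len(mags)

print(metropolis_magnetization())
```

Only the samples gathered after the burn-in loop contribute to the observable, which is exactly the "discard the transient" prescription in items 3 and 4.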

So, in the program, the "transient" referred to is just when we aren't sampling typical values from the distribution we want to sample from. It's going to be mostly unphysical, correlated with the initial condition, and dependent on the algorithm that we use to go from one state to the next.

*Note that in the Ising model on an $N\times N$ lattice, the transition matrix $M$ is the probability to go from one set of the possible $2^{N^2}$ spins to another, so $M$ is a $2^{N^2}$ by $2^{N^2}$ matrix which would depend on which algorithm we use. It's not something we're going to do computations with directly, except on small lattices.
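To see the scale of the problem, one can build that matrix explicitly for a $2\times 2$ lattice, assuming single-spin-flip Metropolis dynamics (any other local update rule would give a different $M$):

```python
import math
from itertools import product

# Explicit single-spin-flip Metropolis transition matrix for a 2x2 Ising
# lattice (periodic boundaries, J = 1, temperature T). There are
# 2**(N**2) = 16 states, so M is already 16 x 16 -- which is why this is
# only feasible on tiny lattices.
L, T = 2, 2.0
states = list(product([-1, 1], repeat=L * L))

def energy(s):
    e = 0
    for i in range(L):
        for j in range(L):
            e -= s[i*L + j] * (s[((i+1) % L)*L + j] + s[i*L + (j+1) % L])
    return e

n = len(states)
index = {s: k for k, s in enumerate(states)}
M = [[0.0] * n for _ in range(n)]
for k, s in enumerate(states):
    for flip in range(L * L):                 # propose flipping one spin
        t = list(s); t[flip] *= -1; t = tuple(t)
        dE = energy(t) - energy(s)
        acc = min(1.0, math.exp(-dE / T))     # Metropolis acceptance
        M[k][index[t]] += acc / (L * L)       # spin chosen uniformly at random
    M[k][k] = 1.0 - sum(M[k])                 # rejected proposals stay put

# Each row sums to unity, just like the dice-game matrix in the question.
print(all(abs(sum(row) - 1.0) < 1e-9 for row in M))
```

For a modest $10\times 10$ lattice the same construction would need a $2^{100}\times 2^{100}$ matrix, which is why $M$ only appears in the analysis, never in the computation.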

David