
III Simulation Methods and Boundary Conditions

A Multicanonical Monte-Carlo (MCMC) Simulation

[69.2.2.1] Monte Carlo (MC) simulations with simple sampling (SS) probe configurations according to their geometrical multiplicity and re-weight them with their thermodynamic probability \exp(-\beta E_{i}), so for an observable A the average is computed as

\langle A\rangle _{{SS}}=\frac{\sum _{i}A_{i}\exp(-\beta E_{i})}{\sum _{i}\exp(-\beta E_{i})}. (13)
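
As a concrete illustration (a minimal Python sketch for a toy system, not the program used in this work), eq. (13) can be evaluated exactly by enumerating every configuration of a small periodic Ising chain with equal geometric weight and re-weighting by the Boltzmann factor:

```python
import math
from itertools import product

def ss_average(beta, N=4):
    """Eq. (13) for a periodic N-spin Ising chain: every configuration
    enters with equal 'geometric' weight and is re-weighted afterwards
    by its thermodynamic probability exp(-beta*E).  Here A_i = E_i."""
    num = den = 0.0
    for spins in product((-1, 1), repeat=N):
        E = -sum(spins[i] * spins[(i + 1) % N] for i in range(N))
        w = math.exp(-beta * E)
        num += E * w
        den += w
    return num / den

print(ss_average(beta=0.5))
```

In a real simple-sampling run the sums would extend over randomly generated configurations rather than the full enumeration, but the re-weighting step is identical.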

[page 70, §0]    [70.1.0.1] Standard importance sampling (IS) methods like the Metropolis-algorithm accept and reject configurations according to their relative thermodynamic probability, so that the thermodynamic weight is built into the sampling process instead of the re-weighting, and therefore the thermodynamic average reduces to a simple average

\langle A\rangle _{{IS}}=\frac{\sum _{i}A_{i}}{\sum _{i}1}. (14)

[70.1.0.2] In some cases, Metropolis-type sampling can be inefficient because some configurations may be “rare” with respect to their thermodynamic weight, but “important” because their contribution A_{i} is disproportionately large, or because the low-probability range contains a “barrier” that must be crossed to reach other, more “important” configurations. [70.1.0.3] To overcome this problem of sampling “rare events”, Berg and Neuhaus [26] proposed a method that modifies the importance sampling procedure by introducing “artificial” probabilities P_{i} for each A_{i}. Because not a single canonical ensemble is sampled, but each observable is averaged over its “own” canonical ensemble, the method was called “multicanonical Monte Carlo” (MCMC). [70.1.0.4] The Metropolis-type averages of eq. (14) are then modified to

\langle A\rangle _{{MCMC}}=\frac{\sum _{i}A_{i}P_{i}}{\sum _{i}P_{i}}. (15)

[70.1.0.5] The weights P_{i} can be chosen for convenience, e.g. in such a way that all A_{i} are sampled uniformly, or some part of the phase space is sampled with higher frequency than another part [27].
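
The averages (13)-(15) differ only in the weights attached to the sampled values; eq. (15) can be sketched in a few lines of Python (an illustration, not the authors' code), with all P_{i} equal recovering the plain importance-sampling mean of eq. (14):

```python
def mcmc_average(observables, weights):
    """Eq. (15): re-weighted average over sampled values A_i with
    artificial weights P_i.  With all P_i equal this reduces to the
    simple importance-sampling average of eq. (14)."""
    num = sum(a * p for a, p in zip(observables, weights))
    den = sum(weights)
    return num / den

print(mcmc_average([1.0, 2.0, 3.0], [1.0, 1.0, 1.0]))  # plain mean: 2.0
print(mcmc_average([1.0, 2.0, 3.0], [1.0, 1.0, 4.0]))  # tail value up-weighted
```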

B Implementation

[70.1.1.1] We implemented a Monte-Carlo algorithm on a square grid with Glauber dynamics. The grid has an even number of sites in each direction, so that we can use the checker-board update scheme, which has the smallest correlation time [28] among all single-spin update schemes for the straightforward Metropolis algorithm. [70.1.1.2] Our Fortran 90 program uses sub-arrays, which allowed a simple implementation of fixed or periodic boundaries. We chose not to implement bit-coding, as the bulk of the computer time would be spent updating the information of the MCMC procedure rather than running the straightforward algorithm. [70.1.1.3] The implementation using non-overlapping sub-arrays also allows vectorization. [70.1.1.4] In addition we parallelized the algorithm.
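
The checker-board idea can be sketched as follows (a plain-Python illustration of the update scheme, not the vectorized, parallel Fortran 90 program; lattice size and temperature are chosen arbitrarily). Because every nearest neighbour of a site lies on the opposite sub-lattice, all sites of one colour can be updated independently, which is what makes vectorization and parallelization possible:

```python
import math
import random

random.seed(0)
L, beta = 8, 0.3   # even L keeps the two sub-lattices consistent across
                   # the periodic boundary
s = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]

def glauber_checkerboard_sweep(s):
    """One sweep with Glauber dynamics: visit the two sub-lattices in
    turn.  All neighbours of a site lie on the other sub-lattice, so all
    sites of one colour could be updated simultaneously."""
    for colour in (0, 1):
        for i in range(L):
            for j in range(L):
                if (i + j) % 2 != colour:
                    continue
                nn = (s[(i + 1) % L][j] + s[(i - 1) % L][j] +
                      s[i][(j + 1) % L] + s[i][(j - 1) % L])  # periodic b.c.
                dE = 2.0 * s[i][j] * nn            # energy cost of a flip
                if random.random() < 1.0 / (1.0 + math.exp(beta * dE)):
                    s[i][j] = -s[i][j]             # Glauber flip probability

for _ in range(50):
    glauber_checkerboard_sweep(s)
m = sum(map(sum, s)) / L**2
print(m)   # magnetization per spin
```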

[70.1.2.1] To sample the magnetizations evenly, the weights P_{i} in eq. (15) are chosen according to the magnetization, P_{i}=P(M_{i}). The MCMC proceeds in several iterations j, during which the intermediate P_{i}^{{(j)}} are consecutively refined using the previously computed entries, so that P_{i}^{{(j)}}\rightarrow P_{i}. [70.1.2.2] The probabilities P_{i}^{{(j)}} are evaluated from the histogram of the magnetizations visited during each spin update. [70.1.2.3] Details of our algorithm for the magnetization distribution will be published elsewhere.
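
Since the details of the authors' refinement scheme are deferred to a later publication, the following Python toy merely illustrates one common multicanonical recursion, in which the weight of each magnetization bin is divided by its visit count so that the sampled histogram flattens over successive iterations (the bin set and canonical probabilities are invented for the illustration):

```python
import random
random.seed(1)

levels = [-2, -1, 0, 1, 2]                            # toy magnetization bins
p_can = {-2: 0.4, -1: 0.05, 0: 0.1, 1: 0.05, 2: 0.4}  # double-peaked p(M)

def run(weights, n=20000):
    """Sample M with probability proportional to p_can(M)*weights(M)
    and return the visit histogram H(M)."""
    hist = {m: 0 for m in levels}
    probs = [p_can[m] * weights[m] for m in levels]
    z = sum(probs)
    for _ in range(n):
        r, acc = random.random() * z, 0.0
        for m, q in zip(levels, probs):
            acc += q
            if r <= acc:
                hist[m] += 1
                break
    return hist

weights = {m: 1.0 for m in levels}   # P^(0): start from the canonical run
for _ in range(3):                   # a few multicanonical iterations
    hist = run(weights)
    # P^(j+1)(M) = P^(j)(M) / H^(j)(M): over-visited bins are suppressed
    weights = {m: weights[m] / max(hist[m], 1) for m in levels}

final_hist = run(weights)
print(final_hist)   # roughly flat across all bins
```

After a few iterations the rare intermediate magnetizations are visited about as often as the two peaks, which is the precondition for gathering tail statistics.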

C Convergence

[70.2.1.1] The objective of our Monte-Carlo simulations is to obtain information about the equilibrium states (i.e. long time limit) for an infinite system (i.e. large L limit).

[70.2.2.1] We must distinguish between two kinds of convergence:

  • MCS-convergence: By this we mean that, at given L and T, the individual simulation run is converged in the sense that increasing the number of Monte-Carlo steps (MCS) will not change the order parameter distribution. The measured distribution is the “true” distribution for the given system size and temperature.

  • L-convergence: By this we mean the convergence of the distribution with L at given T to its form for the infinite system.

[70.2.3.1] The autocorrelation time needed by the algorithm to go from large negative to large positive magnetizations increases rapidly with system size. [70.2.4.1] Therefore it is difficult to obtain fully MCS-converged results at large system sizes. [70.2.4.2] Because we are interested only in the tail behaviour, we need to exclude all cutoffs not resulting from the system size, and hence need fully MCS-converged results.
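
The severity of this problem can be monitored with an integrated autocorrelation time estimate; the following Python sketch (a naive estimator with a simple truncation rule, not a diagnostic taken from this work) illustrates the idea on a synthetic correlated time series:

```python
import random
random.seed(2)

def integrated_autocorrelation_time(series, window=200):
    """Naive integrated autocorrelation time tau = 1/2 + sum_t rho(t),
    truncated at the first non-positive autocorrelation estimate.
    Large tau signals that many more sweeps are needed before a run
    is MCS-converged."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series) / n
    tau = 0.5
    for t in range(1, min(window, n - 1)):
        c = sum((series[i] - mean) * (series[i + t] - mean)
                for i in range(n - t)) / (n - t)
        rho = c / var
        if rho <= 0.0:     # truncate where the estimate hits noise level
            break
        tau += rho
    return tau

# Synthetic AR(1) series x_{t+1} = a*x_t + noise,
# whose exact value is tau = 1/2 + a/(1 - a) = 4.5 for a = 0.8
a, x, series = 0.8, 0.0, []
for _ in range(5000):
    x = a * x + random.gauss(0.0, 1.0)
    series.append(x)
tau = integrated_autocorrelation_time(series)
print(tau)
```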

[70.2.5.1] Within available resources, simulating 10^{5} Monte Carlo steps per iteration over 50 multicanonical iterations on a Cray-T3E with 128 processors, we could obtain statistics all the way up to the saturation magnetization for system sizes L=16,~32,~64. [70.2.5.2] Simulations for L=128 did not fully MCS-converge within the available computer time. [70.2.5.3] For L=128 the simulation runs do not reach m=1, and achieve statistics only up to m=0.95. [70.2.5.4] Although this is a significant improvement over the tail statistics presented in Ref. [9], it is still not sufficient for our tail analysis. [70.2.5.5] Therefore our results below are limited to system sizes L\leq 64.

D Far tail regime

[70.2.6.1] MCMC simulations of the two-dimensional Ising model provide far better statistics in the tails than the Swendsen-Wang cluster flip algorithm [9]. [70.2.6.2] As discussed in Section II, adequate statistics are required in the “far tail regime” close to, but below, the saturation magnetization. [70.2.6.3] This regime is defined as

m_{{\rm mp}}\ll m\ll 1 (16)

where m is the magnetization per spin and m_{{\rm mp}} is the most probable magnetization. [70.2.6.4] We define the most probable magnetization, denoted m_{{\rm mp}}(T,L), as the position of the (local or global) maximum of p(m). [70.2.6.5] For the scaling variable defined in eq. (9) this implies

x_{{\rm mp}}\ll x\ll L^{{1/8}}. (17)

[page 71, §1]

E Boundary Conditions

[71.1.1.1] Most previous investigations have concentrated on periodic boundary conditions. [71.1.1.2] These boundary conditions have the advantage of preserving the fundamental symmetry. [71.1.1.3] In this paper we also present results for fixed, symmetry-breaking boundary conditions where all boundary spins are fixed to +1.

[71.1.2.1] Our motivation for investigating fixed boundary conditions comes from Ref. [23]. In particular, one expects the order parameter distribution to become asymmetric, which raises the question of whether the left and the right tail behave in the same way.