Welcome to TiddlyWiki created by Jeremy Ruston; Copyright © 2004-2007 Jeremy Ruston, Copyright © 2007-2011 UnaMesa Association
A collection of notes on miscellaneous [[topics|Topics]] of statistical physics. Please refer to the sidebar for titles.
Geckos can stick to almost any surface because of the setae on their feet, and interestingly, the setae are self-cleaning. A tokay gecko can support its whole body weight on a single toe, thanks to the millions of keratinous setae on its toe pad. Each seta further branches into hundreds of 200-nm spatulae, which make very close contact with surfaces, maximizing the van der Waals interaction to generate large adhesive and shear forces. Geckos with dirty feet recover their ability to stick to vertical surfaces in only a few steps, and the adhesion of a single isolated seta is equally effective. The talk can be accessed [[here|http://www.imsc.res.in/~rsingh/discussion/cond-mat/files/slides/gecko.pdf]].
!!References:
*Autumn, K., Sitti, M., Liang, Y. A., Peattie, A. M., Hansen, W. R., Sponberg, S., ..., Full, R. J., //Evidence for van der Waals adhesion in gecko setae//, Proceedings of the National Academy of Sciences, 99(19), 12252-12256, 2002.
*Hansen, W. R., & Autumn, K., //Evidence for self-cleaning in gecko setae//, Proceedings of the National Academy of Sciences, 102(2), 385-389, 2005.
These notes consider the kinetics of phase transitions. We start with the Ising model and ask: if a system is quenched from a disordered state into an ordered phase, what is the ensuing dynamics? The following is an outline of this discussion,
!Outline
*Ising Model
*Kinetic Ising Models
**Non-conserved Order Parameter
**Para-ferro transition
*Nucleation and spinodal growth
*TDGL equation
*Conserved Order Parameter
*The binary (AB) mixture or Lattice Gas
*Cahn Hilliard equation
*Time dependent length scales
!! Ising model
Look [[here|Ising model]].
!!Mean Field Approximation
The mean-field (MF) approximation of the Ising model, due to Bragg and Williams, replaces each spin in the Hamiltonian by a spatially uniform magnetization, $\langle S \rangle= m$. The energy can thus be written as
\begin{equation}
E(m) \simeq -J \sum_{\langle ij \rangle} \langle S_{i} \rangle \langle S_{j} \rangle -h \sum_{i} \langle S_{i} \rangle = -\frac{NqJ}{2} m^{2} - Nhm
\end{equation}
The entropy, $S$, can be calculated exactly,
\begin{equation}
S(m) = k \ln \binom{N}{N_{\uparrow}} = k \ln \binom{N}{N(1+m)/2}
= -N k\left[\frac{1+m}{2} \ln \frac{1+m}{2} + \frac{1-m}{2} \ln \frac{1-m}{2} \right]
\end{equation}
where $N_{\uparrow}$ is number of up spins and $N = N_{\uparrow} + N_{\downarrow}$ is total number of
sites in the lattice.
The complete Bragg-Williams free energy per site is
\begin{equation}
f(T,m) = (E - TS)/N
= -\frac{qJ}{2} m^{2} - hm + k T\left[\frac{1+m}{2} \ln \frac{1+m}{2} + \frac{1-m}{2} \ln \frac{1-m}{2} \right]
\end{equation}
For $h=0$, this expression can be expanded in powers of $m$ to obtain a simplified form of the free energy,
\begin{equation}
f = \frac{k(T-T_{c})}{2}m^{2} + \frac{kT}{12}m^{4}-kT\ln 2+O(m^{6})
\end{equation}
where
\begin{equation}
T_{c} = \frac{qJ}{k}
\end{equation}
For $T>T_{c}$, $f$ has positive curvature at the origin, while for $T<T_{c}$ the curvature at the origin is negative.
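As a quick numerical sanity check of the quartic expansion above, one can compare it against the full Bragg-Williams free energy at small $m$; the parameter values below ($q=4$, $J=k=1$, so $T_c=4$) are arbitrary illustrative choices, not part of the notes.

```python
import numpy as np

# Compare the full Bragg-Williams free energy (h = 0) with its quartic
# Landau expansion; the residual should be O(m^6).
q, J, k = 4, 1.0, 1.0   # illustrative values
Tc = q * J / k

def f_full(T, m):
    entropy_term = ((1 + m) / 2 * np.log((1 + m) / 2)
                    + (1 - m) / 2 * np.log((1 - m) / 2))
    return -q * J / 2 * m**2 + k * T * entropy_term

def f_quartic(T, m):
    return k * (T - Tc) / 2 * m**2 + k * T / 12 * m**4 - k * T * np.log(2)

T, m = 1.1 * Tc, 0.1
print(abs(f_full(T, m) - f_quartic(T, m)))  # ~ kT m^6 / 30, i.e. tiny
```

The leading neglected term is $kT m^{6}/30$, so for $m=0.1$ the two expressions agree to about seven decimal places.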
Also, by minimizing the free energy at fixed $(T,h)$ we arrive at the equilibrium value of the order parameter:
\begin{equation}
m_{0} = \tanh(\beta q J m_{0} + \beta h)
\end{equation}
For h = 0, we can again identify the MF critical temperature
\begin{equation}
T_{c} = \frac{qJ}{k}
\end{equation}
! Ginzburg Landau theory
The MF free energy of the Ising model can be written in the form
\begin{equation}
f(m) = \frac{F(m)}{N} = \frac{1}{2}(kT-qJ)m^{2}- hm + \frac{kT}{12}m^{4}-kT\ln 2+O(m^{6})
\end{equation}
Dropping the constant and the field term, the Landau free energy density near the transition is
\begin{equation}
\mathcal{L} = \frac{a}{2} m^{2} + \frac{u}{4} m^{4}
\end{equation}
with $a = k(T-T_{c})$ and $u = kT/3$.
The Ginzburg-Landau functional includes the spatial variation of the order parameter as well,
\begin{equation}
\mathcal{G} = \frac{a}{2} m^{2} + \frac{u}{4} m^{4} + \frac{K}{2} (\nabla m)^{2} + \frac{K'}{2} (\nabla^{2} m)^{2}
\end{equation}
The theories discussed so far describe the phase transition itself as a variable like temperature is varied, but not how the ordered domains grow. Suppose that at time $t=0$ the system is in the paramagnetic phase and the temperature is changed so that an ordered state becomes more stable. The Ising model by itself does not answer questions about the kinetics of this transition. We now address these questions.
!Kinetic Ising Models
*The Ising model has no intrinsic Hamiltonian dynamics. For kinetics we assume that an associated heat bath generates spin flips $( S_{i} \rightarrow - S_{i})$.
*Purely dissipative and stochastic models of this kind are often referred to as kinetic Ising models.
*The conserved and non-conserved cases can be described as below:
**The spin system: at the microscopic level, the spin-flip Glauber model describes the non-conserved kinetics of the paramagnetic to ferromagnetic transition.
**The binary (AB) mixture or lattice gas: the spin-exchange Kawasaki model describes the conserved kinetics of binary mixtures at the microscopic level.
*Both models must satisfy the detailed-balance condition.
*At the coarse-grained level the dynamics is described by the respective order parameters, $\phi(\vec{r},t)$.
Let us also define the terminology used frequently to describe the phenomenon of domain growth.
!!Nucleation
*First-order phase transitions usually proceed by nucleation and growth, while second-order phase transitions proceed smoothly.
*Nucleation is the process whereby new phases appear at certain sites within a metastable phase.
**Homogeneous nucleation - occurs spontaneously, with no preferred nucleation site, but requires superheating or supercooling of the medium. It may be favourable to make the new phase, but one must pay the cost of creating the surface, so a balance has to be struck. This is captured by the expression for the net change in free energy,
\begin{equation}
\Delta G = 4\pi r^2 \sigma - \frac43 \pi r^3 \Delta G_f
\end{equation}
**Heterogeneous nucleation - occurs at preferential sites such as container surfaces, impurities, grain boundaries, dislocations. The effective surface area is lower here, diminishing the free energy barrier and hence facilitating nucleation.
*Spinodal decomposition is more subtle than nucleation and occurs uniformly throughout.
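The free-energy balance for homogeneous nucleation can be explored numerically: setting $d(\Delta G)/dr = 0$ gives the critical radius $r^{*} = 2\sigma/\Delta G_{f}$ and barrier $\Delta G^{*} = 16\pi\sigma^{3}/3\Delta G_{f}^{2}$. The values of $\sigma$ and $\Delta G_{f}$ below are arbitrary illustrative numbers, not material data.

```python
import numpy as np

# Sketch of classical nucleation: surface cost vs. bulk gain.
sigma, dGf = 1.0, 3.0  # illustrative surface tension and bulk gain per volume

def delta_G(r):
    return 4 * np.pi * r**2 * sigma - 4.0 / 3.0 * np.pi * r**3 * dGf

r = np.linspace(1e-3, 1.5, 10000)
r_star_numeric = r[np.argmax(delta_G(r))]       # location of the barrier top
r_star_analytic = 2 * sigma / dGf               # from d(Delta G)/dr = 0
barrier = 16 * np.pi * sigma**3 / (3 * dGf**2)  # Delta G at r = r*
print(r_star_numeric, r_star_analytic, barrier)
```

Droplets smaller than $r^{*}$ shrink and droplets larger than $r^{*}$ grow, which is why supercooling is needed before homogeneous nucleation proceeds.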
!!Spinodal decomposition
*The mechanism of phase separation in SD differs from nucleation: it happens uniformly throughout the system, not just at nucleation sites.
*In spinodal region $\frac{\partial ^{2}F}{\partial c^{2}}<0$, and hence there is no thermodynamic barrier to the growth of a new phase, i.e., the phase transformation is solely diffusion controlled.
*Phase separation usually occurs by nucleation, and spinodal decomposition is then not observed. To observe SD, a very fast transition, a quench, is required to move from the stable region into the spinodally unstable region.
!Domain Growth with non-conserved kinetics
*At $t = 0,$ a paramagnetic phase is quenched below the critical temperature $T_{c}$.
*The paramagnetic state is no longer the preferred equilibrium state.
*The far-from-equilibrium, homogeneous state evolves towards its new equilibrium state by separating into domains.
*These domains coarsen with time and are characterized by length scale $L(t)$.
* A finite system becomes ordered in either of two equivalent states as $t \rightarrow \infty$.
* The simplest kinetic Ising model for a non-conserved scalar field $\phi(\vec{r})$ is the time-dependent Ginzburg-Landau (TDGL) model.
Using the form of the free energy from Landau theory, the equation of motion for $\phi$ can be written as:
\begin{equation}
\frac{\partial \phi}{\partial t} = -\Gamma \frac{\delta \mathcal{F}}{\delta \phi} +\theta(\vec{r},t)
\end{equation}
where $\frac{\delta \mathcal{F}}{\delta \phi}$ denotes functional derivative of free-energy functional
\begin{equation}
\mathcal{F}[\phi] = \int d^{d}x \left[ F(\phi) + \frac{1}{2}K(\nabla\phi)^{2}\right]
\end{equation}
Typical form of the free energy $F(\phi)$ is given by Landau theory.
The noise term has zero mean and has a white noise spectrum
\begin{equation}
\langle\theta(\vec{r},t)\theta(\vec{r}^{'},t^{'})\rangle = 2T\Gamma\delta(\vec{r}-\vec{r}^{'}) \delta(t-t^{'})
\end{equation}
*Using the $\phi^{4}$-form of the free energy we arrive at the TDGL equation
\begin{equation}
\frac{\partial \phi}{\partial t} = \Gamma\left [a(T_{c}-T)\phi-b\phi^{3}+ k\nabla^{2}\phi \right] + \theta(\vec{r},t)
\end{equation}
*It is evident that $\phi = 0$ is unstable for $T<T_{c}$ and stable for $T>T_{c} $.
*For $T<T_{c}$ we can write TDGL in terms of rescaled variables as:
\begin{equation}
\frac{\partial \phi}{\partial t} = \phi-\phi^{3}+ \nabla^{2}\phi
\end{equation}
Alternatively, Taylor expanding the $\tanh$ in the mean-field (Glauber) dynamics at zero field gives
\begin{equation}
\lambda^{-1} \frac{\partial \phi}{\partial t} = \left(\frac{T_{c}}{T}-1\right)\phi-\frac{1}{3} \left(\frac{T_{c}}{T}\right)^{3}\phi^{3}+ \frac{T_{c}}{qT}a^{2}\nabla^{2}\phi+ ...
\end{equation}
where $a$ is the lattice spacing. This equation is referred to as the time-dependent Ginzburg-Landau (TDGL) equation, and it can be written in dimensionless form as
\begin{equation}
\frac{\partial \phi}{\partial t} = \phi-\phi^{3}+ \nabla^{2}\phi
\end{equation}
Alternatively, we can derive this using the generalized Langevin equation:
\begin{equation}
\frac{\partial \phi}{\partial t} = -\Gamma \frac{\delta \mathcal{H_{T}}}{\delta \phi} +\theta(\vec{r},t)
\end{equation}
Neglecting the higher-order terms is strictly justified only for $T\simeq T_{c}$, but the equation captures the right physics even for deep quenches $(T\ll T_{c})$.
!!Domain Growth
Let us linearize the rescaled TDGL equation about a fixed point $\phi^{*}$, i.e., $\phi = \phi^{*}+\delta \phi$. Plugging this into the TDGL equation and retaining only terms linear in $\delta \phi$, we get
\begin{equation}
\frac{\partial\, \delta \phi}{\partial t} = \phi^{*}+\delta \phi -\phi^{*3} - 3\phi^{*2}\delta \phi + \nabla^{2} \delta \phi
= (1-3\phi^{*2}) \delta \phi + \nabla^{2} \delta \phi
\end{equation}
where we used $\phi^{*}-\phi^{*3}=0$ at the fixed points.
Fourier transforming, we get
\begin{equation}
\frac{\partial\, \delta \phi_{k}}{\partial t} = (1-3\phi^{*2}-k^{2})\, \delta \phi_{k}
\end{equation}
So, for the $\phi^{*}=0$ state, long-wavelength fluctuations ($k^{2}<1$) keep growing, fastest at $k=0$, until the higher-order terms become dominant and stabilize them.
!!!Static Interfaces or Kinks
TDGL equation in dimensionless form is
\begin{equation}
\frac{\partial \phi}{\partial t} = \phi-\phi^{3}+ \nabla^{2}\phi
\end{equation}
A static interface, or kink, is obtained from the steady-state equation
\begin{equation}
\frac{d^{2} \phi}{d z^{2}} = \phi^{3}-\phi
\end{equation}
The kink solution is
\begin{equation}
\phi_{s}(z) = \tanh\left[\pm\frac{(z-z_{0})}{\sqrt{2}}\right]
\end{equation}
where $z_{0}$ is the center of the kink. Thus $\phi = \pm 1$ everywhere except in the interfacial region.
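One can verify by finite differences that the tanh profile really solves the steady-state equation (with the sign convention $\phi'' = \phi^{3}-\phi$, for which the tanh kink is the solution); the grid below is an arbitrary choice.

```python
import numpy as np

# Finite-difference check that phi(z) = tanh(z/sqrt(2)) satisfies
# phi'' = phi^3 - phi (taking z0 = 0).
z = np.linspace(-10, 10, 4001)
dz = z[1] - z[0]
phi = np.tanh(z / np.sqrt(2))
phi_zz = (phi[2:] - 2 * phi[1:-1] + phi[:-2]) / dz**2  # second derivative
residual = phi_zz - (phi[1:-1]**3 - phi[1:-1])
print(np.max(np.abs(residual)))  # ~0 up to O(dz^2) discretization error
```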
!!!Allen Cahn equation of motion for the interfaces
Writing the TDGL equation in terms of interfacial coordinates $(n,\vec{a})$,
\begin{equation}
\nabla \phi = \left.\frac{\partial \phi}{\partial n}\right|_{t} \hat{n}, \qquad
\nabla^{2} \phi = \left.\frac{\partial^{2} \phi}{\partial n^{2}}\right|_{t} + \left.\frac{\partial \phi}{\partial n}\right|_{t} \nabla\cdot\hat{n}
\end{equation}
Finally, we use the identity
\begin{equation}
\left.\frac{\partial\phi}{\partial t} \right|_{ n}\left.\frac{\partial t}{\partial n}\right|_{ \phi}\left.\frac{\partial n}{\partial \phi}\right|_{ t} = -1
\end{equation}
in the TDGL equation,
\begin{equation}
-\left.\frac{\partial n}{\partial t}\right|_{ \phi}\left.\frac{\partial \phi}{\partial n}\right|_{ t} = \phi-\phi^{3}+ \left.\frac{\partial^{2} \phi}{\partial n^{2}}\right|_{t} + \left.\frac{\partial \phi}{\partial n}\right|_{t} \nabla\cdot\hat{n}
\simeq \left.\frac{\partial \phi}{\partial n}\right|_{t} \nabla\cdot\hat{n}
\end{equation}
since near the interface $\phi$ follows the static kink profile, for which $\phi-\phi^{3}+\partial^{2}\phi/\partial n^{2} = 0$.
We identify $\left. \frac{\partial n}{\partial t}\right|_{\phi} = v(\vec{a})$ as the normal interfacial velocity, which yields the Allen-Cahn equation
\begin{equation}
v(\vec{a}) = -\nabla \cdot \hat{n} = -K(\vec{a})
\end{equation}
where the curvature goes as $K\sim 1/L$ and $v\sim dL/dt$, which gives the diffusive growth law for non-conserved scalar fields
\begin{equation}
L(t) \sim t^{1/2}
\end{equation}
Here, $L(t)$ is the typical domain size.
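The non-conserved coarsening described above can be illustrated with a minimal explicit-Euler integration of the dimensionless TDGL equation (noise omitted) on a periodic 2D grid. Grid size, time step, and run length below are arbitrary illustrative choices, not part of the notes.

```python
import numpy as np

# Minimal sketch: phi_t = phi - phi^3 + laplacian(phi), deep quench from
# small random initial conditions; the field orders into phi ~ +/-1 domains.
rng = np.random.default_rng(0)
N, dt, steps = 64, 0.05, 400
phi = 0.01 * rng.standard_normal((N, N))

def laplacian(f):
    # 5-point stencil with periodic boundaries (lattice spacing 1)
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f)

for _ in range(steps):
    phi += dt * (phi - phi**3 + laplacian(phi))

# After the quench the field has locally saturated near +1 or -1,
# with domain walls (kinks) separating the two phases.
print(phi.min(), phi.max(), np.mean(np.abs(phi)))
```

Measuring the typical domain size (e.g. from the structure factor) at successive times would show the $L(t)\sim t^{1/2}$ growth, though a quantitative fit needs larger grids and longer runs than this sketch.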
!!The binary (AB) mixture or Lattice Gas
AB mixtures can be modeled using the Ising model as follows:
*Here $n_{i}^{\alpha}=1$ or 0 is occupation number of species $\alpha$.
*$n_{i}^{A}+n_{i}^{B}=1$ for all the sites. The dynamics is conserved as numbers of A and B species are constant.
*So we can identify these numbers with $S_{i}$ in the Ising Hamiltonian, i.e., $ S_{i} = 2 n_{i}^{A} -1=1-2 n_{i}^{B}$.
*And hence all the analysis of critical temperature goes through.
*Order parameter, $\phi = n^{A}(\vec{r},t)-n^{B}(\vec{r},t)$, is conserved as it satisfies the continuity equation.
!!!Cahn Hilliard equation
The order parameter satisfies the continuity equation
\begin{equation}
\frac{\partial \phi(\vec{r},t)}{\partial t} = - \nabla\cdot \vec{J}(\vec{r},t) \hspace{1cm}\vec{J}\text{ is current} \\
\vec{J} = -D \nabla\mu(\vec{r},t) \hspace{1cm}\mu\text{ is chemical potential}
\end{equation}
The chemical potential is determined as
\begin{equation}
\mu(\vec{r},t) = \frac{\delta \mathcal{F}}{\delta \phi}
\end{equation}
Plugging this back into the continuity equation gives the Cahn-Hilliard (CH) equation for phase separation of a binary mixture,
\begin{equation}
\frac{\partial \phi}{\partial t} = D \nabla^{2}\left(\frac{\delta \mathcal{F}}{\delta \phi}\right)
\end{equation}
!!!Domain Growth
For the $\phi^{4}$-form of the free energy, the CH equation is
\begin{equation}
\frac{\partial \phi}{\partial t} = \nabla \cdot \left\{D\nabla\left[-a(T_{c}-T)\phi + b\phi^{3}-k\nabla^{2}\phi\right]\right\}
\end{equation}
Typical chemical potential of a domain of size $L$ is $\mu \sim \frac{\sigma}{L}$.
The concentration current is $D|\nabla\mu|\sim \frac{D\sigma}{L^{2}}$, where $D$ is the diffusion constant.
So domains grow as
\begin{equation}
\frac{dL}{dt} \sim \frac{D\sigma}{L^{2}} \quad\Longrightarrow\quad L(t) \sim (D\sigma t)^{1/3}
\end{equation}
!Summary
*A system evolves from its unstable or metastable state to its preferred equilibrium state as parameters like temperature, etc. are changed.
* The initially homogeneous phase separates into phases rich in one of the constituents after a quench below $T_{c}$, marked by the emergence and growth of domains.
* The domain growth law depends critically on:
**the conservation law governing the coarsening,
**the nature of defects and the dimensionality ($d$),
**the relevance of hydrodynamic flow fields.
*In the diffusive regime, the domain growth law scales as:
\begin{equation}
L(t)\sim t^{\eta}
\end{equation}
$\eta = 1/2$: for $d\geq 2$ and non-conserved order parameters.
$\eta = 1/3$: for $d\geq 2$ and conserved order parameters.
!!References:
*Bray, A. J., //Theory of phase-ordering kinetics//, Advances in Physics, 43(3), 357-459, 1994.
*Hohenberg, P. C., & Halperin, B. I., //Theory of dynamic critical phenomena//, Reviews of Modern Physics, 49(3), 435, 1977.
*Puri, S., & Wadhawan, V. (eds.), //Kinetics of phase transitions//, CRC Press, 2009.
*Chaikin, P. M., & Lubensky, T. C., //Principles of condensed matter physics//, Cambridge University Press, 2000.
*Halperin, B. I., & Hohenberg, P. C., //Scaling laws for dynamic critical phenomena//, Physical Review, 177(2), 952, 1969.
Collective phenomena occur when a collection of a large number of objects exhibits a property that is totally different from what the individual constituents are capable of. Some relevant material:
* A very nice illustration of a [[collective phenomenon|https://plus.google.com/+YonatanZunger/posts/Q8Hn9HuCiQG]]
* The article //More Is Different// by P. W. Anderson
!! References
*Principles of condensed matter physics: //P. M. Chaikin// and //T. C. Lubensky//
*Statistical mechanics: entropy, order parameters, and complexity: //JP Sethna//
*Stochastic Processes in Physics and Chemistry: //N.G. Van Kampen//
In mean field theory we replace the order parameter $m(r)$ by its average value $m$; there are no fluctuations in MFT, and if fluctuations are somehow suppressed then MFT becomes exact. Since MFT takes the order parameter to be spatially constant, it fails quantitatively at critical points in low dimensions, where spatial and temporal fluctuations are large. In higher dimensions the number of neighbours is large, each spin sees an averaged effect, and MFT becomes reasonable; MFT is also valid when all spins interact with each other with the same weight, i.e., for infinite-range interactions. The assumption of no fluctuations is justified if $\langle \delta m^2\rangle\ll\langle m\rangle^2$. Let us consider the correlation function in the mean field theory of the Ising model.
The most important object is the partition function,
\begin{equation}
Z = \mathrm{Tr}\left(e^{-\beta\mathcal{H} + \beta h M} \right)
\end{equation}
The other thermodynamic quantities of interest can be derived starting from this partition function,
\begin{equation}
\langle M \rangle = \frac{\partial \ln Z}{\partial (\beta h)} = \frac{1}{Z} ~
\mathrm{Tr}\left(M\, e^{-\beta\mathcal{H} + \beta h M} \right)
\end{equation}
\begin{equation}
\chi = \frac{\partial M}{\partial h} = \frac{1}{k_B T}~\left(\langle M^2 \rangle - \langle M \rangle^2\right)
\end{equation}
and so on..
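The fluctuation-dissipation relation $\chi = \beta\left(\langle M^2\rangle - \langle M\rangle^2\right)$ can be verified by exact enumeration of a small Ising ring; the system size, coupling, temperature, and field below are arbitrary small illustrative values ($k_B = 1$).

```python
import itertools
import numpy as np

# Exact enumeration of a 1D periodic Ising chain: compare chi computed as a
# finite-difference derivative dM/dh with the fluctuation formula.
N, J, T, h = 6, 1.0, 2.0, 0.1
beta = 1.0 / T

def averages(h):
    Z = M1 = M2 = 0.0
    for spins in itertools.product([-1, 1], repeat=N):
        E = -J * sum(spins[i] * spins[(i + 1) % N] for i in range(N))
        M = sum(spins)
        w = np.exp(-beta * E + beta * h * M)
        Z += w; M1 += M * w; M2 += M * M * w
    return M1 / Z, M2 / Z

M_avg, M2_avg = averages(h)
chi_fluct = beta * (M2_avg - M_avg**2)
dh = 1e-5
chi_deriv = (averages(h + dh)[0] - averages(h - dh)[0]) / (2 * dh)
print(chi_fluct, chi_deriv)  # the two expressions agree
```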
In principle, if we can calculate this partition function we have all the information about the system. In general it encompasses all microscopic degrees of freedom and is very cumbersome to account for at each step of a calculation. Moreover, not all of the details are important, as we will see in the discussion of the renormalisation group. So we take recourse to coarse-graining schemes and get rid of some of the degrees of freedom. We treat the problem in a field-theoretic way, although the system may be defined on a lattice, by coarse-graining over a length scale $a < L < \xi $. Here, $a$ is the lattice spacing and $\xi$ is the [[correlation length|Mean field theory]]. The overall magnetisation in this scheme is
\begin{equation}
M = \int d^{d}x~ m(x)
\end{equation}
where $m(x)$ is a field defined at each point in space and hence, in general, varies spatially. The susceptibility is then
\begin{equation}
k_B T~\chi = \int d^{d}x~d^{d}x'~\left(\langle m(x) m(x')\rangle - \langle m(x) \rangle \langle m(x')\rangle \right)
\end{equation}
Let us also define the average magnetisation of the system,
\begin{equation}
\langle m(x) \rangle = m
\end{equation}
The full correlation function is
\begin{equation}
G(x-x') = \langle m(x) m(x')\rangle
\end{equation}
Another quantity of interest is the connected correlation, where we subtract the uncorrelated part of the average,
\begin{equation}
\langle m(x) m(x') \rangle_c = \langle [m(x)-\langle m(x)\rangle][m(x')-\langle m(x') \rangle]\rangle
\end{equation}
Equivalently, in terms of the fluctuation $\delta m(x) = m(x)-\langle m(x)\rangle$, the connected correlation function is
\begin{equation}
G_c(x-x') = \langle \delta m(x)\delta m(x')\rangle
\end{equation}
\begin{equation}
G_c(x-x') = G(x-x') - m^2
\end{equation}
For a system with translational invariance the correlation function depends only on the separation, and hence the response function can be written as
\begin{equation}
k_B T~\chi = \int d^{d}x~d^{d}x'~G_c(x-x')
\end{equation}
\begin{equation}
k_B T~\chi = V \int d^{d}x~G_c(x)
\end{equation}
The probability of a configuration is
\begin{equation}
P[m(x)] \propto e^{-\beta \mathcal{H}}=\exp\Big[-\beta\int d^d x \Big(\frac{r}{2} m^2 + u\, m^4 + \frac{k}{2} (\nabla m)^2\Big)\Big]
\end{equation}
So the most probable configuration corresponds to the uniform one, $m(x)=m$:
\[ m = \left\{
\begin{array}{l l}
0 & \quad \text{if $T>T_{c};$}\\
\pm(-r/4u)^{1/2} & \quad \text{if $T<T_{c}.$}
\end{array} \right.\]
Let us introduce fluctuations around this uniform state,
\begin{equation}
m(x) = \underbrace{[m + \phi_l (x)]\,\hat{e}_1}_{\text{magnitude fluctuation}} + \underbrace{\sum_{\alpha =2}^{n} \phi_{t,\alpha}(x)\,\hat{e}_{\alpha}}_{\text{phase fluctuations}}
\end{equation}
This form of the order parameter applies to any general $O(n)$ [[model|Models of spin systems]] in which one direction is special, chosen either spontaneously through spontaneous symmetry breaking or by an applied field along that direction. The other $n-1$ directions are transverse to this longitudinal one. Later in the notes we will dwell more on the case of $O(2)$, also called the xy model, which has a single longitudinal and a single transverse direction. The gradient term of the fluctuations about the mean can be separated into transverse and longitudinal parts,
\begin{equation}
(\nabla \phi)^2 = (\nabla \phi_l)^2 + (\nabla \phi_t)^2
\end{equation}
Considering our Hamiltonian for the system,
\begin{equation}
\mathcal{H} = \int d^d x \Big[\frac{r}{2} m(x)^2 + u\, m(x)^4 + \frac{k}{2} (\nabla m(x))^2\Big]
\end{equation}
and plugging in the expansions of $m(x)^2$, $m(x)^4$ and $(\nabla m(x))^2$, we get
\begin{equation}
\mathcal{H} = \int d^d x \left( \frac{r}{2} m^2 + u {m}^4\right) +
\int d^d x \Big[ \frac{k}{2}(\nabla \phi_l)^2 + \frac{r+12 u m^2}{2} \phi_l^2 \Big]+ \\
\int d^d x \Big[ \frac{k}{2}(\nabla \phi_t)^2 + \frac{r+4 u m^2}{2} \phi_t^2 \Big] + \text{higher order}
\end{equation}
By inspection we can read off the longitudinal correlation length,
\[ \frac{k}{\xi_l^{2}} = r + 12 u m^2 = \left\{
\begin{array}{l l}
r & \quad \text{if $T>T_{c};$}\\
-2r & \quad \text{if $T<T_{c}.$}
\end{array} \right.\]
Similarly for the transverse component,
\[ \frac{k}{\xi_t^{2}} = r + 4 u m^2 = \left\{
\begin{array}{l l}
r & \quad \text{if $T>T_{c};$}\\
0 & \quad \text{if $T<T_{c}.$}
\end{array} \right.\]
These correspond to the coefficients of the quadratic terms and must be thought of as restoring potentials. For the transverse case there is no restoring force for $T<T_c$; these are the Goldstone modes. The probability functional, modulo constants, can then be written in Fourier modes as
\begin{equation}
P[\phi_l,~\phi_t] \propto \prod_q \exp\Big[ -\beta \frac{k}{2}(q^2 +\xi_{l}^{-2}) |\phi_{l,q}|^2 \Big]
\exp\Big[ -\beta \frac{k}{2}(q^2 +\xi_{t}^{-2}) |\phi_{t,q}|^2 \Big]
\end{equation}
We can use the standard results for Gaussian variables to calculate the connected correlation functions of the modes,
\begin{equation}
\langle \phi_{\alpha ,q}\, \phi_{\beta,q'} \rangle = \frac{k_B T\,\delta_{\alpha\beta}\,\delta_{q,-q'}}{k(q^2 + \xi_{\alpha}^{-2})}
\end{equation}
As we have seen earlier, $O(n)$ models have two correlation functions, the longitudinal $G_{||}$ and the transverse $G_{\bot}$. For temperatures below $T_c$ the connected correlation functions in Fourier space go as
\begin{equation}
G_{||}(q) \sim \frac{1}{q^{2}+\xi_{l}^{-2}}, \qquad G_{\bot}(q) \sim \frac{1}{q^{2}}
\end{equation}
In real space this becomes
\begin{equation}
\langle\phi(x)\phi(0)\rangle = \frac{k_{B}T}{k}\int \frac{d^{d}q}{(2\pi)^{d}}\, \frac{e^{i\vec{q}\cdot\vec{x}}}{q^{2}+\xi_{\alpha}^{-2}}
\end{equation}
Let us consider this integral in three-dimensional space. Doing the angular integrals first,
\begin{equation}
\langle\phi(x)\phi(0)\rangle \sim \int \frac{e^{i\vec{q}\cdot \vec{x}}}{q^{2}+\xi_{\alpha}^{-2}}\, q^{2}\,dq\,\sin\theta\, d\theta\, d\varphi \sim \int_{0}^{\infty} \frac{q^{2}}{q^{2}+\xi_{\alpha}^{-2}}\,\frac{2\sin qx}{qx}\, dq
\end{equation}
The integral to be evaluated is
\begin{equation}
\langle\phi(x)\phi(0)\rangle \sim \frac{1}{x}\int_{0}^{\infty} \frac{q \sin qx}{q^{2}+\xi_{\alpha}^{-2}}\, dq = \frac{1}{2x}\, \mathrm{Im}\left( \int_{-\infty}^{\infty} \frac{q\, e^{iqx}}{q^{2}+\xi_{\alpha}^{-2}}\, dq\right)
\end{equation}
The integrand has poles at $q = \pm i/\xi_{\alpha}$. Closing the contour in the upper half plane, we arrive at
\begin{equation}
\langle\phi(x)\phi(0)\rangle \sim \frac{e^{-x/\xi_{\alpha}}}{x}
\end{equation}
This is called the Ornstein-Zernike correlation function.
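The contour-integral result can be sanity-checked numerically: the radial integral $\int_0^\infty q\sin(qx)/(q^2+\xi^{-2})\,dq$ equals $(\pi/2)e^{-x/\xi}$. The values $\xi=1$, $x=2$ below are arbitrary illustrative choices.

```python
import numpy as np

# Numerical check of the 3D Fourier integral behind the Ornstein-Zernike form.
xi, x = 1.0, 2.0
dq = 1e-3
q = np.arange(1e-6, 2000.0, dq)          # truncate the slowly decaying tail
integrand = q * np.sin(q * x) / (q**2 + xi**-2)
I_numeric = np.sum(integrand) * dq       # simple Riemann sum
I_analytic = np.pi / 2 * np.exp(-x / xi) # residue at q = i/xi
print(I_numeric, I_analytic)
```

The truncation of the oscillatory tail limits the accuracy to roughly $1/(q_{max}x)$, which is ample to confirm the exponential decay.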
There is an alternative derivation of the Ornstein-Zernike correlation function. Minimizing the free energy, we get
\begin{equation}
r\phi + 4u\phi^{3} -k \nabla^{2}\phi= 0
\end{equation}
Linearising this equation about $\phi_{0}= \sqrt{-r/4u}$ (recall $r<0$ below $T_{c}$), i.e., writing
\begin{equation}
\phi = \phi_{0} + \delta\phi
\end{equation}
and keeping only terms linear in $\delta\phi$, we get
\begin{equation}
\nabla^{2} \delta\phi + \frac{2r}{k} \delta\phi = 0
\end{equation}
Now perturb the system with a delta-function source at the origin and measure the response at a distance $r$,
\begin{equation}
\nabla^{2} \delta\phi + \frac{2r}{k} \delta\phi = -\delta(r)
\end{equation}
This is solved by,
\begin{equation}
\delta\phi(x) \sim \left(\frac{e^{-x/\xi}}{x} \right)
\end{equation}
This is the Ornstein-Zernike correlation function derived above. What we have done is perturb the system at the origin and observe its influence at any other point, which is, by definition, the correlation between those two points.
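A quick finite-difference check (with illustrative $\xi = 1$) that the profile $e^{-r/\xi}/r$ solves the screened equation away from the origin, using the radial 3D Laplacian $\nabla^{2}f = \frac{1}{r}\frac{d^{2}(rf)}{dr^{2}}$:

```python
import numpy as np

# Verify laplacian(delta_phi) = delta_phi / xi^2 for delta_phi = exp(-r/xi)/r,
# on a radial grid that excludes the origin (where the delta source sits).
xi = 1.0
r = np.linspace(0.5, 10.0, 2001)
dr = r[1] - r[0]
f = np.exp(-r / xi) / r
rf = r * f                                          # r*f = exp(-r/xi)
rf_rr = (rf[2:] - 2 * rf[1:-1] + rf[:-2]) / dr**2   # d^2(rf)/dr^2
lap_f = rf_rr / r[1:-1]
residual = lap_f - f[1:-1] / xi**2
print(np.max(np.abs(residual)))  # ~0 up to discretization error
```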
Let us examine these connected correlation functions further for the $O(2)$, or xy, model. The order parameter in this case, a complex number in general, is defined as
\begin{equation}
\psi(x) = \left|\psi(x)\right|\,e^{i\theta(x)}
\end{equation}
where $\theta(x)$ is the phase. Assuming a uniform magnitude, with fluctuations only in the phase, the probability functional is
\begin{equation}
P[\theta(x)] \propto \exp\Big\{-\beta\frac{k}{2} \int d^d x~\left(\nabla \theta \right)^2 \Big\}
\end{equation}
This decouples into independent modes when transformed to Fourier space, a useful property of quadratic theories!
\begin{equation}
P[\theta(q)] \propto \exp\Big\{-\beta \frac{k}{2} \sum_q q^2 |\theta(q)|^2 \Big\}
\end{equation}
And hence the correlation of the phase turns out to be
\begin{equation}
\langle \theta(q)\theta(q')\rangle = \frac{k_B T\delta_{q,-q'}}{k q^2}
\end{equation}
Since $\theta(x)$ is real, $\theta(-q) = \theta(q)^{*}$, and hence this can be written as
\begin{equation}
\langle |\theta(q)|^2\rangle = \frac{k_B T}{k q^2}
\end{equation}
This result also follows directly from equipartition: the Fourier modes are decoupled, the Hamiltonian is quadratic, and each mode carries $\frac12 k_B T$. Using the above result we can calculate the phase fluctuations in real space. Let us take the continuum limit for ease of calculation,
\begin{equation}
\langle \theta(x)^2\rangle =\frac{k_B T}{k}\int \frac{d^2 q}{(2\pi)^2}\, \frac{1}{q^2}\sim \ln q \,\Big|_{\Lambda_{min}}^{\Lambda_{max}}
\end{equation}
This integral diverges for $q\rightarrow 0$: an infrared divergence. It is circumvented by a lower cutoff, justified by the fact that a coarse-grained model carries no information beyond a particular length scale. The same analysis goes through in any dimension: for $d>2$ the phase fluctuations are finite, while for $d\leq2$ they diverge, so long-range order cannot survive. The longitudinal component takes care of the magnitude of the magnetisation while the transverse component changes its direction. The transverse correlation goes as $1/q^{2}$ and hence diverges at $q = 0$, which is reasonable, as the ground state is infinitely degenerate whenever a continuous symmetry is spontaneously broken (the Mexican hat). The trouble thus arises because the transverse part of the correlation function goes as $1/q^2$ below the critical temperature, and things blow up in and below two dimensions when a continuous symmetry is spontaneously broken: the integral involves $q^{d-1}dq/q^2$, and the phase-space volume available at small $q$ weighs more heavily in lower dimensions, while in higher dimensions most of the phase-space volume sits at large $q$.
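The dimension dependence of the phase fluctuations can be checked directly by evaluating the lattice sum $\langle\theta^2\rangle \sim N^{-d}\sum_{q\neq 0} 1/q^2$ on finite periodic grids; the grid sizes below are arbitrary, and the prefactor $k_B T/k$ is set to 1.

```python
import numpy as np

# Lattice estimate of the infrared behaviour of <theta^2>: in d = 2 the sum
# grows with system size (~ ln L, destroying long-range order), in d = 3 it
# converges to a finite value.
def theta2(L, d):
    n = np.fft.fftfreq(L) * L               # integer mode numbers
    q1 = 2 * np.pi * n / L                  # allowed wavevectors
    grids = np.meshgrid(*([q1] * d), indexing="ij")
    q2 = sum(g**2 for g in grids).ravel()[1:]  # drop the q = 0 mode
    return np.sum(1.0 / q2) / L**d

for L in (8, 16, 32):
    print(L, theta2(L, 2), theta2(L, 3))
# the d = 2 column keeps growing with L; the d = 3 column converges
```

The growth of the $d=2$ column with $L$ is the finite-size signature of the infrared divergence discussed above.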
Let us now calculate the average order parameter, using the standard result for Gaussian variables,
\begin{equation}
\langle\psi(x)\rangle = |\psi|\,\langle e^{i \theta(x)}\rangle = |\psi|\, \exp\Big\{-\frac12 \langle \theta(x)^2\rangle\Big\}
\end{equation}
This calculation also gives an understanding of the Goldstone modes. Since the Hamiltonian is proportional to $q^2$, a uniform rotation of the system costs nothing; such gapless (massless) excitations are the Goldstone modes. Importantly, Goldstone modes arise only when the broken symmetry is continuous, not discrete: the two-dimensional Ising model has an ordered state, but the symmetry of its Hamiltonian is discrete. Evaluating the phase fluctuations with the cutoffs introduced above,
\begin{equation}
\langle\psi(x)\rangle = |\psi|\, \exp\Big[-\frac12 \frac{k_B T}{2\pi k} \ln \frac{\Lambda_{max}}{\Lambda_{min}}\Big]
\end{equation}
Thus the amplitude of the order parameter is suppressed in two dimensions and long-range order is not possible. In one dimension the fluctuations diverge even more strongly, so again there is no order, while in higher dimensions the order is merely reduced from its full amplitude. The factor in the exponent is called the Debye-Waller factor $W$, i.e. $S = S_0 e^{-2W}$; it quantifies the suppression of order by fluctuations. This gives a qualitative argument for the Mermin-Wagner theorem: no spontaneous breaking of a continuous symmetry in $d\leq 2$, since the Fourier transform of the transverse part of the connected correlation function diverges for $d\leq 2$. There is, however, a nontrivial result due to Kosterlitz and Thouless (KT): the xy model does exhibit a phase transition, but one of topological origin, with correlation functions decaying as a power law at low temperature and exponentially at high temperature. Vortex binding and unbinding, together with spin waves, explain this: the spin-wave energy of the $O(2)$ Hamiltonian is close to zero, making a transition of topological origin possible in two dimensions. These ideas are used to explain the transition in superfluid films. Two dimensions is also topologically special, since a circle in the plane cannot be pulled out of it, and so on...
!Ginzburg criterion
Mean field theory neglects fluctuations entirely, and, as discussed above, the approximation is justified only if $\langle \delta m^2\rangle\ll\langle m\rangle^2$. The fluctuations are important only within a correlation volume, a ball of radius $\xi$, so the ratio can be estimated as
\begin{equation}
\frac{\int_{\xi} d^d x~ \langle \delta m(x)^2\rangle}{\int_{\xi} d^d x~ m^2} \sim \frac{\xi^{2-d}}{m^2}\sim |T-T_c|^{(d-4)/2} \sim \frac{1}{|T-T_c|^{(4-d)/2}}
\end{equation}
Fluctuations are important near the critical point, where $|T-T_c|$ is small; the ratio above therefore diverges for $d < 4$, so fluctuations matter, while for $d>4$ they do not. This defines the upper critical dimension for these systems, $d_c = 4$. In some cases mean field theory is nevertheless a very good description, e.g. for superconductors. This can be understood by noting that how close to $T_c$ fluctuations become important depends on the bare correlation length $\xi_0 = \xi(T=0)$, and the correlation length $\xi$ can be arbitrarily larger than $\xi_0$. For a $\xi_0$ that is large compared to the lattice spacing, fluctuations only become important extremely close to $T_c$, in a window generally inaccessible to experiments, and hence mean field theory correctly predicts the experimental behaviour.
We are often interested in the thermodynamic limit, $N\rightarrow\infty$, in condensed matter physics, for which most properties do not depend on the boundary conditions. The simplest and most widely used choice is the periodic boundary condition,
\begin{equation}
f(x) = f(x+L)
\end{equation}
Also, if there is a periodic boundary condition in some space then its dual space is discrete. This can be seen by considering the following Fourier transform
\begin{equation}
f(x) = A\sum_{q}e^{iqx}~f(q)
\end{equation}
where $A$ is a constant to be determined from //normalisation//. To ensure the periodic boundary condition, $q$ is restricted to take only the following discrete values
\begin{equation}
q=\frac{2\pi}{L}n,\qquad n =0,~\pm1,~\pm2, ...
\end{equation}
The discrete and continuous versions are related to each other: for $L\rightarrow\infty$ the dual space also becomes continuous. The discrete values of $q$ were spaced by $2\pi/L$, so the measure goes as $\frac{dq}{\left(\frac{2\pi}{L}\right)}$, i.e.,
\begin{equation}
\sum_{q}\rightarrow \frac{L^{d}}{(2\pi)^{d}}\int d^{d}q
\end{equation}
And hence,
\begin{equation}
f(x) = \frac{AL}{2\pi}\int~dq~e^{iqx}~f(q)\overbrace{\rightarrow}^{LA\equiv 1}\frac{1}{2\pi}\int ~dq~e^{iqx}~f(q)
\end{equation}
So, in the conventions followed here, there is a factor of $\frac{1}{2\pi}$ in each $dq$ integral, while there is no such factor in the real-space integral of the Fourier transform definitions.
\begin{equation}
f(x) = \frac{1}{2\pi}\int dq~e^{iqx}~f(q) \\
\tilde{f}(q)= \int~dx~e^{-iqx}~f(x)
\end{equation}
The Kronecker and Dirac deltas can be related by looking at the following
\begin{equation}
\sum_{q}\delta_{q,0} = 1 = \frac{L}{2\pi}\int~dq~\delta_{q,0}\rightarrow \int~dq~\delta(q)
\end{equation}
\begin{equation}
\lim_{L\rightarrow\infty}L^{d}~\delta_{q,0} = (2\pi)^{d}~\delta(q)
\end{equation}
this readily implies for $L\rightarrow\infty$
\begin{equation}
\int~dx~e^{i(q-q')x} = L\delta_{q-q',0} \rightarrow (2\pi)\delta(q-q')
\end{equation}
So, let us sum up our Fourier transform rules
\begin{equation}
f(x) = \frac{1}{2\pi}\int~dq~e^{iqx}f(q) =\frac{1}{2\pi} \int~dq~dx'~e^{iqx}~e^{-iqx'}f(x')= f(x)
\end{equation}
Readers are invited to verify this for consistency using the Dirac delta
\begin{equation}
\delta(x) = \frac{1}{2\pi}\int~dq~e^{iqx}
\end{equation}
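As a quick sanity check of these conventions, the following numerical sketch (assuming NumPy; the Gaussian test function and grid sizes are arbitrary choices) verifies that the forward transform with no prefactor and the inverse with a $\frac{1}{2\pi}$ recover the original function:

```python
import numpy as np

# Discretized version of the transforms above: f~(q) = int dx e^{-iqx} f(x),
# f(x) = (1/2pi) int dq e^{iqx} f~(q).
x = np.linspace(-20, 20, 4001)
q = np.linspace(-10, 10, 2001)
dx, dq = x[1] - x[0], q[1] - q[0]

f = np.exp(-x**2 / 2)                                   # Gaussian test function
f_q = np.array([np.sum(f * np.exp(-1j * qi * x)) * dx for qi in q])
f_back = np.array([np.sum(f_q * np.exp(1j * q * xi)) * dq for xi in x]) / (2 * np.pi)

assert np.allclose(f_back.real, f, atol=1e-6)           # round trip recovers f(x)
assert np.isclose(f_q[1000].real, np.sqrt(2 * np.pi))   # f~(0) = sqrt(2 pi) for this f
```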
One of the most ubiquitous integrals in physics is the Gaussian integral,
\begin{equation}
I = \int_{-\infty}^{\infty} e^{-\alpha x^2/2}~dx = \sqrt{\frac{2\pi}{\alpha}}
\end{equation}
This result can be derived by considering $I^2$ and carrying out the integration in plane polar coordinates $(r, \theta)$. It is easily generalised to,
\begin{equation}
I = \int_{-\infty}^{\infty} e^{-\alpha x^2/2 + bx}~dx = \sqrt{\frac{2\pi}{\alpha}} ~e^{b^2/2\alpha}
\end{equation}
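A one-line numerical check of this formula (a sketch assuming NumPy; $\alpha$ and $b$ are arbitrary test values):

```python
import numpy as np

# Compare the quadrature of exp(-alpha x^2/2 + b x) against sqrt(2 pi/alpha) e^{b^2/2 alpha}.
alpha, b = 1.7, 0.6
x = np.linspace(-30, 30, 600_001)
dx = x[1] - x[0]

I = np.sum(np.exp(-alpha * x**2 / 2 + b * x)) * dx
exact = np.sqrt(2 * np.pi / alpha) * np.exp(b**2 / (2 * alpha))
assert np.isclose(I, exact, rtol=1e-6)
```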
Let us now consider the general Gaussian integral over a set of variables,
\begin{equation}
I(A,b) = \int_{-\infty}^{\infty} \Big(\prod_{i=1}^{n}~dx_i\Big)~ \exp\left(-\frac12 x_i~A_{ij}x_j + b_jx_j\right) =\left[\prod_{i=1}^n \sqrt{\frac{2\pi}{\lambda_i}}\right] \exp \left(\frac{b_i A_{ij}^{-1} b_j}{2} \right)
\end{equation}
Einstein summation convention is used; repeated indices are summed. Here
*$A$ is a real, symmetric matrix with positive eigenvalues $\lambda_i>0$, and
*$b$ is a column vector.
These expressions are of great importance for physicists, as most of the theories we can solve exactly are Gaussian. We will now look at the calculation of correlation functions in a Gaussian theory. Observe that,
\begin{equation}
x_{k_1} = \frac{\partial}{\partial b_{k_1}} \exp(b_j x_j)\Big|_{\vec{b}=0}
\end{equation}
In general,
\begin{equation}
\langle x_{k_1}x_{k_2}...x_{k_m}\rangle = \mathcal{N} \frac{\partial}{\partial b_{k_1}} \frac{\partial}{\partial b_{k_2}}... \frac{\partial}{\partial b_{k_m}} I(A, b)\Big|_{\vec{b}=0}
\end{equation}
Readers are encouraged to take this forward and derive the two point correlation function of a Gaussian theory. This can also be used to verify Wick's theorem, which states that any $n$ point correlation function of a Gaussian theory can be written as a sum over products of two point functions, taken over all possible pairings of the given set.
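The two point function of this Gaussian weight is $\langle x_i x_j\rangle = (A^{-1})_{ij}$, and Wick's theorem then fixes the four point function. A Monte Carlo sketch of both statements (assuming NumPy; the matrix $A$ is just a convenient positive definite test case):

```python
import numpy as np

# Sample the Gaussian weight exp(-x.A.x/2) (b = 0): the covariance is A^{-1}.
rng = np.random.default_rng(0)
A = np.array([[2.0, 0.5], [0.5, 1.5]])
C = np.linalg.inv(A)
samples = rng.multivariate_normal(np.zeros(2), C, size=2_000_000)

two_pt = samples.T @ samples / len(samples)      # <x_i x_j>
assert np.allclose(two_pt, C, atol=5e-3)

# Wick: <x_0 x_0 x_1 x_1> = C00*C11 + 2*C01^2 (sum over the three pairings)
four_pt = np.mean(samples[:, 0]**2 * samples[:, 1]**2)
assert np.isclose(four_pt, C[0, 0]*C[1, 1] + 2*C[0, 1]**2, atol=5e-3)
```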
Since we are discussing Gaussian integrals, it is worth noting that all cumulants beyond the first two vanish for a Gaussian distribution. This can be seen from the definition of the cumulants,
\begin{equation}
\langle e^{-ikx}\rangle = \exp\Big[ \sum_{n=1}^{\infty}\frac{(-ik)^n}{n!}~\langle x^n \rangle_c \Big] =
\exp\Big[ -ikb - \frac{k^2}{2\alpha} \Big]
\end{equation}
where the last equality holds for a Gaussian of mean $b$ and variance $1/\alpha$: only the first two cumulants survive.
The following result, for a quantity $A$ which is a linear combination of Gaussian distributed variables, is also very useful,
\begin{equation}
\langle e^{A}\rangle = e^{\left(\langle A\rangle_c + \frac12\langle A^2\rangle_c\right)}
\end{equation}
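This identity is easy to check numerically for a scalar Gaussian variable (a sketch assuming NumPy; $\mu$ and $\sigma$ are arbitrary test values):

```python
import numpy as np

# For Gaussian A, <A>_c = mu and <A^2>_c = sigma^2, so <e^A> = exp(mu + sigma^2/2).
rng = np.random.default_rng(1)
mu, sigma = 0.3, 0.8
A = rng.normal(mu, sigma, size=5_000_000)

lhs = np.mean(np.exp(A))
rhs = np.exp(mu + sigma**2 / 2)
assert np.isclose(lhs, rhs, rtol=1e-2)
```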
! Broken symmetry and elasticity
Solids have the property of elasticity, i.e. the tendency to return to their original shape once deformed, while liquids deform continuously under an applied shear. This elasticity arises from the broken symmetry in the solid: liquids have continuous translational and rotational invariance, while in solids the continuous symmetry is broken, which leads to rigidity. In this discussion we will see that whenever a continuous symmetry is broken there is a generalised elasticity. We have seen in our previous talks on [[MFT|Mean field theory]] that lowering the temperature leads to solidification (BCC mostly). The lowest energy states are usually lower in symmetry, and as the temperature is increased the [[fluctuations|Fluctuations]] become important and the symmetry of the system is restored. Landau pointed out that symmetry plays a vital role in phase transitions. He prescribed an order parameter $\phi$ which is zero in the symmetric state and non-zero in the ordered state, such that the order parameter clearly distinguishes the two phases. It is instructive to note that once the symmetry is broken you need another variable to describe the system ([[always account for variable change!|https://en.wikipedia.org/wiki/21_%282008_film%29]]). What the choice of order parameter should be is a complicated question in itself; it took many years to realize that a complex number is a suitable order parameter for superconductors and superfluids. Kosterlitz and Thouless have shown that order parameters can be topological in origin as well.
In cricket, the prowess of a batsman lies in how he controls the handle of the bat; the rest of the bat follows the motion of the handle. The motion of the bat as a whole reflects the fact that the energy is minimized if the phase is uniform everywhere, i.e. the symmetry is broken uniformly throughout. In general, though, the lattice can be deformed non-uniformly, and there is an energy cost of the form $(\nabla u)^2$ to account for it. We are so accustomed to rigidity that we hardly realize it is not built into the theory but is an 'emergent property' of the system. This phenomenon can be mapped to ferromagnetism, superconductivity, etc., and the generalization of this concept to all broken symmetries is Generalised rigidity. In a magnet one cannot flip one spin at a time; it has to be done in a macroscopic way, and hence the magnetization is uniform because of this rigidity.
A three dimensional solid can be described by a periodic array of atoms at $\mathbf{R} = l_1 a_1 + l_2 a_2 + l_3 a_3$, or by a collection of planes orthogonal to the reciprocal lattice vectors $\mathbf{G}$. The average density of the liquid phase is uniform, and hence the liquid-solid order parameter is $\langle \delta n(x)\rangle = \langle n(x)\rangle - n_0 = \sum_G n_G e^{iG\cdot x}$. Thus spatial modulation distinguishes the solid from the liquid. As already argued, a uniform translation of the system costs no energy: it is like an infinite wavelength, $k=0$, deformation. If the energy goes as $K (\nabla u)^2$, then it is easy to show by variation that
\begin{equation}
u(x, t) = u_0~ \cos(kx - \omega t)\qquad \text{where }~ \omega^2 = \frac{K}{\rho} k^2
\end{equation}
These are the phonons. Since the lowest energy state is a broken symmetry state, the lowest lying modes (also called the Goldstone modes) are the first to be excited. This is the origin of stiffness in solids, as large spatial variations have a high energy cost. In magnets the same mechanism leads to generalised rigidity, and spin waves are formed, since large local variations again have a high cost! This idea of rigidity, once a symmetry is broken, is then easily generalised to superfluidity and superconductivity.
! XY model
! Liquid crystals
!!References:
*Principles of condensed matter physics: //P. M. Chaikin// and //T. C. Lubensky//
*Basic notions in condensed matter physics: //P. W. Anderson//
*Statistical mechanics: entropy, order parameters, and complexity: //J. P. Sethna//
The Hubbard model is one of the simplest models of interacting particles on a lattice, used to describe the transition between conducting and insulating systems. But before we go into the details of the model we should understand a few concepts, like second quantization.
!Second quantization
Now the first question is: why do we call it second quantization? We have studied in our quantum mechanics courses that quantization is done by imposing commutation relations between conjugate variables. This is called first quantization. The Schrödinger equation is then solved for an appropriate wavefunction. A similar method is used in quantum chemistry, where we write a many particle wavefunction $\psi(x_1,~x_2,~...,~x_n,~t)$. This can be handled, if the number of particles is not very large, by writing the Hamiltonian as an operator,
\begin{equation}
\hat{H} = \sum_{j} \Big[ \frac{-\hbar^2}{2m} \nabla_j^2 + U(x_j)\Big] + \frac12 \sum_{i\neq j} V(x_i - x_j)
\end{equation}
This becomes very cumbersome once the number of particles is large. Any macroscopic system has very many particles, and our theory should be capable of handling them in a convenient way. For a condensed matter system, where the number of particles is macroscopic, the concept of fields is the way to go. In this approach we promote the fields to operators and quantize the theory by specifying the commutation relation between the field and its conjugate field. This procedure is called second quantization. The second-quantized Hamiltonian is,
\begin{equation}
H = \int d^3 x~ \hat{\psi}^{\dagger}(x) \Big[ \frac{-\hbar^2}{2m} \nabla^2 + U(x)\Big]\hat{\psi}(x) +
\\
\frac12 \int d^3 x~d^3 x'~V(x-x'):\hat{\rho}(x)\hat{\rho}(x'):
\end{equation}
where $V(x-x')$ is the interaction potential between particles,
$\hat{\rho}(x) = \hat{\psi}^{\dagger}(x) \hat{\psi}(x)$ is the density,
and ''::'' denotes normal ordering, such that annihilation operators lie to the right of creation operators. The first term describes the non interacting system, with kinetic energy and the external lattice potential $U(x)$, while the second term describes the interaction between particles. Note that normal ordering of the density product, $:\hat{\rho}(x)\hat{\rho}(x'):$, ensures that there is no self interaction of a single particle: with two annihilation operators on the right, a single-particle state is annihilated twice, so acting on it gives zero!
!The Hubbard model
Consider a lattice of atoms with particles localised in atomic orbitals at each site. We can use this localized set of orbitals to expand our creation operator,
\begin{equation}
\hat{c}^{\dagger}_{j\sigma} = \int d^3 x~ \hat{\psi}^{\dagger}(x) \phi(x-R_j)
\end{equation}
where $\phi(x-R_j)$ is the wavefunction of the particle localized at site $j$, and we have introduced the index $\sigma$ for the spin degrees of freedom of the particles. In this basis the second-quantized Hamiltonian reads,
\begin{equation}
H = \sum_{i,~j} \langle i|H_0|j \rangle \hat{c}^{\dagger}_{i\sigma} \hat{c}_{j\sigma} + \sum_{l,m,n,p}\langle lm| V |pn \rangle \hat{c}^{\dagger}_{l\sigma} \hat{c}^{\dagger}_{m\sigma}\hat{c}_{n\sigma} \hat{c}_{p\sigma}
\end{equation}
where $\langle i|H_0|j \rangle$ is the one particle matrix element between states $i$ and $j$, while $\langle lm| V |pn \rangle$ is the interaction matrix element between the two particle states $|lm\rangle$ and $|pn\rangle$. Let us assume that the energy of an electron on a site is $\epsilon$, and that the amplitude to hop to the next site is $t$. The matrix governing the motion of electrons between sites is then,
\[ \langle j| H^{0}|i\rangle = \left\{
\begin{array}{l l}
\epsilon & \qquad \text{$i=j$} \\
-t & \qquad \text{$i,j$ nearest neighbors}\\
0 & \qquad \text{otherwise}
\end{array} \right.\]
Moreover, if we assume that the states are well localised, then the on-site interaction dominates all others. In this case we can approximate the interaction as,
\[ \langle lm| V |pn\rangle = \left\{
\begin{array}{l l}
U & \qquad \text{$l=m=n=p$} \\
0 & \qquad \text{otherwise}
\end{array} \right.\]
Plugging all this information back in, the Hubbard model can be written as,
\begin{equation}
H = -t\sum_{\langle jj'\rangle,\sigma} \Big[\hat{c}^{\dagger}_{j\sigma} \hat{c}_{j'\sigma} + h.c.\Big] + \epsilon\sum_{j,\sigma} \hat{c}^{\dagger}_{j\sigma} \hat{c}_{j\sigma} + U\sum_{j} n_{j\downarrow}n_{j\uparrow}
\end{equation}
This can be written in momentum space in the following form, using,
\begin{equation}
c^{}_{j\sigma} = \frac{1}{\sqrt{N_s}}\sum_k c^{}_{k\sigma} e^{i k\cdot R_j}
\end{equation}
\begin{equation}
H = \sum_{k,\sigma} \epsilon_k c^{\dagger}_{k\sigma} c^{}_{k\sigma} + \frac{U}{N_s}\sum_{q,k,k'} c^{\dagger}_{k-q\uparrow} c^{\dagger}_{k'+q\downarrow} c_{k'\downarrow}c_{k\uparrow}
\end{equation}
where,
\begin{equation}
\epsilon_k = -2t (\cos k_x + \cos k_y + \cos k_z) + \epsilon
\end{equation}
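The band structure implied by this dispersion is easy to explore numerically (a sketch assuming NumPy; $t$, $\epsilon$ and the $k$-grid are arbitrary test choices):

```python
import numpy as np

# Tight-binding band eps_k = -2t(cos kx + cos ky + cos kz) + eps on a cubic lattice.
t, eps = 1.0, 0.0
k = np.linspace(-np.pi, np.pi, 101)
kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")

band = -2 * t * (np.cos(kx) + np.cos(ky) + np.cos(kz)) + eps

# band bottom at k = 0 and top at k = (pi, pi, pi): total bandwidth 12t
assert np.isclose(band.min(), eps - 6 * t)
assert np.isclose(band.max(), eps + 6 * t)
```

The bandwidth $12t$ is the kinetic scale that the on-site $U$ competes against in the Hubbard model.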
!! References
*Principles of condensed matter physics: //P. M. Chaikin// and //T. C. Lubensky//
*Pattern formation and dynamics in nonequilibrium systems: //Michael Cross// and //Henry Greenside//
The Ising Hamiltonian can be written as:
\begin{equation}
H = -J \sum_{\langle i j \rangle} S_{i} S_{j} -h \sum_{i=1}^{N} S_{i}
\end{equation}
This Hamiltonian is explained as follows:
* The spins $S_{i}$ can take values $\pm 1$,
* $\langle i j \rangle$ implies nearest-neighbor interaction only,
* $J$ is the interaction energy, and is positive if the interaction is ferromagnetic and negative if the interaction is antiferromagnetic, and
* $h$ is the external magnetic field,
* $N$ is the total number of sites.
The Ising Model has the following properties:
* In equilibrium, for temperatures below the critical temperature, the system magnetizes.
* The system undergoes a second-order phase transition at $T_{c}$. There is a line of first order phase transitions at $h=0$ in $h \ \mathrm{vs.} \ T$ plane.
* The average magnetisation, $m$, acts as the order parameter here
\begin{equation}
m =\frac{\langle S \rangle}{N}
\end{equation}
* $m$ is zero in the disordered paramagnetic phase and non-zero in the ordered ferromagnetic phase.
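These properties can be seen in a small simulation. Below is a minimal single-spin-flip Metropolis sketch of the 2-d Ising model at $h=0$ (assuming NumPy; units $J = k = 1$, and the lattice size and sweep counts are modest test values, not production settings):

```python
import numpy as np

# Minimal Metropolis Monte Carlo for the 2-d Ising model at h = 0 (J = k = 1).
rng = np.random.default_rng(2)
L = 16

def sweep(spins, T):
    """One Monte Carlo sweep: L*L attempted single-spin flips."""
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2 * spins[i, j] * nn          # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1

def magnetization(T, n_sweeps=400):
    spins = np.ones((L, L), dtype=int)     # start fully ordered
    for _ in range(n_sweeps):
        sweep(spins, T)
    return abs(spins.mean())

m_cold = magnetization(1.0)   # deep in the ordered phase
m_hot = magnetization(5.0)    # well above the critical temperature
assert m_cold > 0.9
assert m_hot < 0.5
```

The exact result $T_c = 2/\ln(1+\sqrt{2})\approx 2.27$ in these units puts both test temperatures comfortably off-critical.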
!The binary (AB) mixture
The binary (AB) mixture, or lattice gas, is defined by occupation variables on a lattice, and can be modeled using the Ising model as follows:
* Here $n_{i}^{\alpha}=1$ or $0$ is the occupation number of species $\alpha$ at site $i$.
* $n_{i}^{A}+n_{i}^{B}=1$ for all the sites. The dynamics is conserved as numbers of A and B species are constant.
* So we can identify these numbers with $S_{i}$ in the Ising Hamiltonian, i.e., $ S_{i} = 2 n_{i}^{A} -1=1-2 n_{i}^{B}$.
* And hence all the analysis of critical temperature goes through.
* Order parameter, $\phi = n^{A}(\vec{r},t)-n^{B}(\vec{r},t)$, is conserved as it satisfies the continuity equation.
!Peierls argument
The one dimensional Ising model does not exhibit a phase transition, while its higher dimensional versions do. This can be understood through an argument, due to Peierls, based on the net change in free energy, $F = E-TS$, when a defect is introduced into an otherwise ordered system. The ordered state can only be stable if there is a finite, non zero temperature for which this net change in free energy is positive.
If we start with an ordered one dimensional Ising chain and flip one of the spins, the change in free energy is always negative in the thermodynamic limit: the energy increases by $4J$ (two broken bonds of $2J$ each), while the entropy increases by $k\ln N$, since there are $N$ places to flip. So the net change in the free energy is
\begin{equation}
\Delta F = 4J - kT \ln N
\end{equation}
and hence, for $N\rightarrow\infty$, the system prefers the disordered state: in the limit of $N$ going to infinity there is no finite, non zero temperature that avoids the destruction of long range order. So there is no spontaneous symmetry breaking in 1-d. The same estimate can be made for a flipped domain of length $L$. The increase in energy is again $4J$ (two domain walls), while the defect can be placed at any of the sites $i = 1, 2,..., N-L$, so the change in entropy is $k\ln (N-L)$, which again implies that the change in free energy is negative for $N\rightarrow \infty$.
For two and higher dimensions Peierls argued that the number of distinct islands, which cost energy only at their boundaries, grows with the perimeter $L$. In two dimensions the number of islands of perimeter $L$ scales as $3^{L} \approx e^{L\ln 3}$, and the island can be started at any of the $N^{2}$ locations. The energy also scales with $L$: each unit of boundary costs $2J$, so $E = 2JL$, and there is a competition between entropy and energy. The perimeter of an island can itself be of the order of the system size,
\begin{equation}
L = \eta N^{2}
\end{equation}
where $\eta$ is a number between 0 and 1. Using this, the change in free energy can be written as,
\begin{equation}
\Delta F = 2J\eta N^{2} - kT \ln (N^{2}3^{\eta N^{2}})
\end{equation}
which, balancing the terms of order $N^{2}$, gives a rough estimate of $T_{c} \approx \frac{2J}{k\ln 3} \approx \frac{2J}{k}$. This can be easily generalised to higher dimensions.
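The free-energy balance in this estimate can be checked numerically (a sketch assuming NumPy; $J = k = 1$, the domain-wall cost is taken as $2J$ per unit of perimeter, and $\eta$, $N$ are arbitrary test values; the logarithm is expanded analytically, since $3^{\eta N^{2}}$ would overflow a float):

```python
import numpy as np

# Delta F = 2 J eta N^2 - k T ln(N^2 3^{eta N^2}), with the log written out analytically.
J, eta, N = 1.0, 0.5, 100

def delta_F(T):
    return 2 * J * eta * N**2 - T * (2 * np.log(N) + eta * N**2 * np.log(3))

# The terms of order N^2 balance at kT = 2J/ln(3) ~ 1.82
assert delta_F(1.7) > 0    # below the estimate: order is stable
assert delta_F(1.95) < 0   # above it: entropy wins
```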
Any transformation should, in general, conserve phase space volume, which is to say it should preserve the information content: otherwise the inverse of the transformation is ill defined, as you cannot create information out of nowhere. The Legendre transform (LT) from a function $f(u, v)$ to $g(A,v) = f-Au$ is defined via
\begin{equation}
df(u, v) = A du + B dv
\end{equation}
\begin{equation}
dg(A, v) = d[f - Au] = A~du + B~dv - A~du - u~dA = -u~dA + B~dv
\end{equation}
This is the basic idea of Legendre transformation where A and B are identified as
\begin{equation}
df = \frac{\partial f}{\partial u} du + \frac{\partial f}{\partial v} dv = A~du + B~dv
\end{equation}
For example, a parabola can be specified either directly or through its slope,
\begin{equation}
y^{2} = 4ax \rightarrow \left(\frac{\partial y}{\partial x}\right)^{2} \sim \frac{1}{x}
\end{equation}
Geometrically, to define a curve you either specify $(x, y)$ at each point, or you specify the $(\text{slope},~\text{intercept})$ of its tangent at each point. Both carry the same information: with only the slope you cannot write the curve uniquely, and the same is true of the intercept alone, but once you have fixed both the slope and the intercept you can always use $y = mx + c$ and you are done (go home!).
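A tiny numerical illustration of the transform, with the $g = f - Au$ convention used here (a sketch assuming NumPy; $f(u) = u^2/2$ is an arbitrary test function, for which $A = u$ and $g(A) = -A^2/2$):

```python
import numpy as np

# Legendre transform g = f - A u for f(u) = u^2/2: A = df/du = u, so g(A) = -A^2/2.
u = np.linspace(-5, 5, 1001)
f = u**2 / 2
A = np.gradient(f, u)        # A = df/du (computed numerically; equals u here)
g = f - A * u                # g as a function of A

assert np.allclose(g, -A**2 / 2, atol=1e-3)
```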
The concept of the Legendre transform appears routinely in thermodynamics, when you transform one thermodynamic potential into another.
\begin{equation}
F=U-TS
\end{equation}
\begin{equation}
G=U-TS+PV=\mu N
\end{equation}
where the last equality can be derived using extensivity, as follows.
\begin{equation}
U(\lambda S,~\lambda N,~\lambda V) = \lambda~U(S,~N,~V)
\end{equation}
\begin{equation}
\frac{\partial{U}}{\partial{(\lambda S)}} \frac{\partial (\lambda S)}{\partial{\lambda}} + \frac{\partial{U}}{\partial{(\lambda V)}} \frac{\partial (\lambda V)}{\partial{\lambda}} + \frac{\partial{U}}{\partial{(\lambda N)}} \frac{\partial (\lambda N)}{\partial{\lambda}} = U
\end{equation}
Setting $\lambda = 1$ and using $T = \partial U/\partial S$, $-P = \partial U/\partial V$ and $\mu = \partial U/\partial N$, this becomes
\begin{equation}
U - TS + PV = \mu N \label{gmuN}
\end{equation}
\begin{equation}
\implies G = \mu N
\end{equation}
This can be used to derive the Gibbs - Duhem relation.
\begin{equation}
dU~(V, S, N) =TdS -PdV+ \mu dN,~~~~~~dF~(V, T, N)=-PdV- SdT+ \mu dN,~~~~~~dG~(P, T, N)=VdP - S dT+ \mu dN
\end{equation}
\begin{equation}
\implies SdT-VdP + Nd\mu =0~~~~\text{Gibbs-Duhem relation}
\end{equation}
Thus the changes in chemical potential and temperature are not independent of the change in pressure!
The Hamiltonian of classical mechanics is the LT of the Lagrangian, and thus both carry the same information content. The Lagrangian has manifest relativistic invariance while the Hamiltonian does not, because of the way the transformation is defined. This is again similar to choosing a thermodynamic potential based on which variables are held fixed in the system of interest.
The partition function $Z$ in statistical mechanics is related to the free energy by $F=-k_B T \ln Z$. The grand-canonical partition function is like a Legendre transform of the free energy at fixed $\mu$:
\begin{equation}
-k_B T\ln \mathcal{Q}= F-\mu N = U-TS-\mu N = -PV
\end{equation}
Alternatively, we can look at this as follows: starting from $U(S, V, N)$, one LT gives $F(T, V, N) = U-TS$, and trading $N$ for $\mu$ as well gives $\mathcal{F}(T, V, \mu)$. We have to ensure the extensivity of this new potential, so $\mathcal{F}=-PV$: $V$ is the only extensive variable left, and $P$ accompanies it to give the right dimension of energy.
!! Heaviside step function
Let us begin with the Heaviside step function and ask: what is its Fourier transform?
It turns out that the standard definition of the Heaviside function (a sharp step) is difficult to work with, so we regularize it in the following way:
\begin{equation}
\theta(t) = \left\{
\begin{array}{l l}
e^{-\delta t} & \quad \text{if $t>0$}\\
0 & \quad \text{if $t<0$}
\end{array} \right.
\end{equation}
with the limit $\delta \rightarrow 0^{+}$ to be taken at the end of the calculation.
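With this regularization the Fourier transform is finite: $\int_0^\infty e^{i\omega t}e^{-\delta t}\,dt = \frac{1}{\delta - i\omega}$. A quick numerical check (a sketch assuming NumPy; $\delta$, $\omega$ and the truncation of the integral are arbitrary test values):

```python
import numpy as np

# Quadrature of the regularized step's Fourier integral against 1/(delta - i*omega).
delta, omega = 0.5, 2.0
t = np.linspace(0, 60, 600_001)
dt = t[1] - t[0]

ft = np.sum(np.exp((1j * omega - delta) * t)) * dt
exact = 1 / (delta - 1j * omega)
assert np.isclose(ft, exact, atol=1e-3)
```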
!! Fermions confined on a ring
The Hamiltonian is:
\begin{equation}
\mathcal{H} = \int dx \ \frac{\hbar^2}{2m}\left( \partial_x c^\dagger \partial_x c - k_f^2 c^\dagger c\right)
\end{equation}
where we impose periodic boundary conditions on the creation and annihilation operators
\begin{equation}
c(x+L) = c(x)
\end{equation}
and also the equal-time anticommutation relations
\begin{equation}
\{c(x,t),c^\dagger(x',t)\} = \delta(x - x')
\end{equation}
that are typical of fermionic systems. This completely specifies our system. It should be clear that the imposition of the periodic boundary conditions is what puts our fermions on a ring, as opposed to a line.
We can expand $c(x)$ in the Fourier modes:
\begin{equation}
c(x) = \frac{1}{\sqrt{L}}\sum c_k e^{ikx}
\end{equation}
and the analogous inverse Fourier transform is
\begin{equation}
c_k = \frac{1}{\sqrt{L}} \int dx \ e^{-ikx} c(x)
\end{equation}
The Hamiltonian in momentum space is, then
\begin{equation}
\mathcal{H} = \frac{\hbar^2}{2m}\sum_k (k^2 - k_f^2) c^\dagger_k c_k ,
\end{equation}
and the corresponding anticommutation relations for the momentum-space creation and annihilation operators are:
\begin{equation}
\{c_k,c^\dagger_{k'}\} = \delta_{k, k'}
\end{equation}
The nice thing about this Hamiltonian is that the ground state may be straightforwardly defined. What is a ground state? It is the state for which adding a particle increases the net energy. It may be straightforwardly verified that this is the state in which all levels up to the Fermi energy are filled.
Thus, the ground state //is// the Fermi sea.
\begin{equation}
| GS \rangle \equiv | FS \rangle = \prod_{k<k_f} c^\dagger_k |0\rangle
\end{equation}
where $|0\rangle$ is the vacuum of our theory.
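On a finite ring the Fermi sea can be constructed explicitly: the periodic boundary conditions quantize $k = 2\pi n/L$, and the sea fills every mode with $|k| < k_f$. A sketch (assuming NumPy; units $\hbar^2/2m = 1$, with $L$ and $k_f$ arbitrary test values):

```python
import numpy as np

# Explicit Fermi sea on a ring: allowed momenta k = 2*pi*n/L, filled for |k| < k_f.
L, k_f = 50.0, 1.0
n = np.arange(-200, 201)                  # momentum grid wide enough to cover |k| < k_f
k = 2 * np.pi * n / L
filled = k[np.abs(k) < k_f]               # the modes making up |FS>

E_FS = np.sum(filled**2 - k_f**2)         # sum of eps_k = k^2 - k_f^2 over filled modes
assert len(filled) == 2 * int(k_f * L / (2 * np.pi)) + 1
assert E_FS < 0                           # every filled mode lies below the Fermi level
```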
Now we ask the following question: what does a hole in the Fermi sea look like?
The answer to this question is
\begin{equation}
|h\rangle = c_{-k} |FS\rangle \quad \mathrm{for} \quad |k| < |k_f|
\end{equation}
where the abbreviation $h$ stands for "hole". A particle above the Fermi sea is
\begin{equation}
|p\rangle = c^\dagger_k |FS \rangle \quad \mathrm{for} \quad |k| > |k_f|
\end{equation}
where $p$ stands for "particle".
The energy of a hole is:
\begin{equation}
E_h | h \rangle = \mathcal{H}| h \rangle = \mathcal{H} c_{-k} | FS \rangle = (E_{FS} - \epsilon_{-k}) | h \rangle
\end{equation}
while the energy of the particles is
\begin{equation}
E_p | p \rangle = \mathcal{H}| p \rangle = \mathcal{H} c^\dagger_{k} | FS \rangle = (E_{FS} + \epsilon_k)|p \rangle .
\end{equation}
!! Green's functions
These tell you how a system behaves after you kick it.
The retarded Green's function is defined as:
\begin{equation}
G_R(x',t';x,t) = \langle 0 | \left\{ c(x',t'), c^\dagger(x,t) \right\} | 0 \rangle \ \theta(t-t')
\end{equation}
where the Heaviside step function imposes causality.
On Fourier transforming the above retarded Green's function, we have (using the Fourier space anticommutation relations)
\begin{equation}
G_R (x',t';x,t) = \sum_k \ \left\langle 0 \left| \{ c_k(t) , c^\dagger_k(t) \} \right| 0 \right\rangle \ e^{ik(x - x')} \ \theta(t - t')
\end{equation}
Now observe the structure of the expression for $G_R$: we have a summation over $k$, and the summand contains a "kernel" that looks like a phase. Referring to the above expression that defines our Fourier transforms, we can identify everything in the summand except for the phase as $G_R(k,t)$! Thus
\begin{equation}
G_R(k,t) = \left\langle 0 \left| \{ c_k(t) , c^\dagger_k(t) \} \right| 0 \right\rangle \ \theta(t - t')
\end{equation}
Here, time evolution of the creation and annihilation operators is via Heisenberg evolution, which means
\begin{equation}
c_k(t) = e^{iHt} c_k e^{-iHt}
\end{equation}
and so on.
Insert an identity operator and make the time-evolution of the creation and annihilation operators explicit. The $e^{iHt}$ factors, when encountering an energy eigenket, may be replaced by $e^{iE_n t}$. Thus, we get
\begin{equation}
G_R (k;t',t) = \sum_n \left( \left| \langle n | c^\dagger_k | FS \rangle \right|^2 e^{-iE_n (t-t')} + \Big| \langle n | c_k | FS \rangle \Big|^2 e^{iE_n (t-t')} \right) \ \theta(t-t')
\end{equation}
Now, what is relevant is the time interval, so we replace $(t-t')$ by just $t$. Further, perform a Fourier transform of the $t$ variable. Note that we have to use our regularized $\theta$ function to effect this transformation:
\begin{equation}
G_R(k,\omega) = \lim_{\delta \rightarrow 0} \int^{+\infty}_{-\infty} dt~ e^{i(\omega + i \delta)t}~ G_R(k,t) .
\end{equation}
This gives
\begin{equation}
G_R(k,\omega) = \sum_n \left( \frac{\left| \langle n | c^\dagger_k | FS \rangle \right|^2}{i\omega - iE_n - \delta} + \frac{\Big| \langle n | c_k | FS \rangle \Big|^2}{i\omega + iE_n - \delta} \right)
\end{equation}
We are now in position to define a spectral function $A(k,\omega)$ as
\begin{equation}
A(k,\omega) = \left\{
\begin{array}{l l}
\sum_n \delta(\omega - E_n) \left| \langle n | c^\dagger_k | FS \rangle \right|^2 & \quad \text{if $\omega>0$,}\\
\sum_n \delta(\omega + E_n) \left| \langle n | c_k | FS \rangle \right|^2 & \quad \text{if $\omega<0$.}
\end{array} \right.
\end{equation}
Thus,
\begin{equation}
G(k,\omega) = \int^{+\infty}_{-\infty} d\omega' \ \frac{A(k,\omega')}{\omega - \omega' + i\delta} .
\end{equation}
up to a factor of $i$ that the reader is invited to hunt down for their pleasure and intellectual satisfaction.
What is the spectral function? To answer this question it is instructive to try to relate the spectral function back to the retarded Green's function. The following remarkable relationship may be verified
\begin{equation}
A(k,\omega) = \frac{1}{\pi} \ \mathrm{Im} \ G_R(k,\omega)
\end{equation}
by using the Lorentzian definition of the nascent delta function. The relationship is remarkable because it tells us that the spectral function can be used to reproduce the whole retarded Green's function, even though the spectral function is only the imaginary part of $G_R$. This little gem is a manifestation of a more general result called the ~Kramers-Kronig relation.
$A(k,\omega)$ is normalized to unity; this follows from its definition. This suggests that the spectral function is a kind of probability distribution. It can be shown that for a non interacting theory it is a delta function, while interactions spread the weight out.
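The nascent-delta statement can be checked directly: for a Green's function with a single pole, $G(\omega) = 1/(\omega - \omega_0 + i\delta)$, the Lorentzian $\frac{1}{\pi}|\mathrm{Im}\,G|$ integrates to one and collapses onto $\delta(\omega-\omega_0)$ as $\delta\rightarrow 0$. A sketch (assuming NumPy; $\omega_0$ and $\delta$ are arbitrary test values, and the overall sign tracks the stray factor of $i$ mentioned above):

```python
import numpy as np

# Single-pole Green's function: its Lorentzian spectral weight is a nascent delta.
w0, delta = 0.7, 1e-3
w = np.linspace(-50, 50, 2_000_001)
dw = w[1] - w[0]

G = 1 / (w - w0 + 1j * delta)
A = -np.imag(G) / np.pi                   # Lorentzian of width delta, centred at w0

assert np.isclose(np.sum(A) * dw, 1.0, atol=1e-3)   # unit spectral weight
assert abs(w[np.argmax(A)] - w0) < 1e-3             # peaked at the pole
```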
Liquid crystals consist of rod like molecules with a high aspect ratio. In the isotropic fluid phase the molecules have random orientations and positions; that is to say, there is no positional or orientational order. The nematic phase is one in which the system has orientational order while the particles still have random positions, so the system has a lower symmetry with respect to the orientations. Since the symmetry is lower, we need a parameter that describes the ordering of the system. By definition this order parameter should capture the symmetry of the molecules, which is $Z_2$, as a particle looks the same when flipped, like an Ising spin in zero external field. We might want to associate a particular unit vector with the direction of the ordered state, but a vector is not invariant under $Z_2$, so we have to consider a tensor as the order parameter. Hence, for the isotropic to nematic transition, the order parameter is a tensor. For the isotropic case this tensor should average to zero, and hence we consider a symmetric, traceless order parameter constructed from the unit vectors $v^{\alpha}$ along the axes of the molecules located at points $x_{\alpha}$,
\begin{equation}
Q_{ij}(x) = \Big\langle \sum_{\alpha}\Big(v^{\alpha}_i v^{\alpha}_j -\frac13\,\delta_{ij}\Big)\, \delta(x-x_{\alpha})\Big\rangle
\end{equation}
For a uniaxial liquid crystal the only important contribution comes from the principal direction: the unit vector $n$, called the Frank director. The order parameter then reduces to,
\begin{equation}
\langle Q_{ij} \rangle = S\Big(n_i n_j -\frac13\, \delta_{ij}\Big)
\end{equation}
where S is
\begin{equation}
S = \frac12\langle 3\cos^2\theta^{\alpha} -1\rangle
\end{equation}
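The scalar order parameter $S$ can be illustrated by sampling molecular axes (a sketch assuming NumPy; the sample size is an arbitrary test value): for isotropically distributed axes $\cos\theta$ is uniform on $[-1,1]$ and $S$ averages to zero, while perfect alignment along the director gives $S=1$.

```python
import numpy as np

# S = <3 cos^2(theta) - 1>/2 for sampled molecular axes.
rng = np.random.default_rng(3)
cos_theta = rng.uniform(-1, 1, size=1_000_000)   # isotropic: cos(theta) uniform

S_iso = 0.5 * np.mean(3 * cos_theta**2 - 1)
S_aligned = 0.5 * (3 * 1.0**2 - 1)               # cos(theta) = 1 for every molecule

assert abs(S_iso) < 0.01                         # no nematic order
assert S_aligned == 1.0                          # perfect nematic order
```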
[[About]]
[[Topics]]
[[Contact|Rajesh Singh]]
We mostly start in mean field theories by identifying an order parameter and then proceed by minimizing the free energy $(F = E - TS)$ with respect to the order parameter, and so on. MFT can be viewed as a ``zeroth-order" expansion of the Hamiltonian in the fluctuations. This means there are no fluctuations in MFT, and hence if fluctuations are somehow suppressed MFT becomes exact. MFT takes the order parameter to be spatially constant, so mean field theory fails quantitatively in low dimensions at critical points, where spatial and temporal fluctuations are large. In higher dimensions the number of neighbours is large, each spin sees an averaged effect, and MFT becomes reasonable! MFT is also valid when all spins interact with each other with the same weight, i.e. for an infinite range of interactions. We will be considering the [[Ising model|Ising model]] and certain generalizations of it in these notes. Let us start the discussion with the mean field theory of the Ising model due to Bragg and Williams.
!Bragg-Williams MFT
The Bragg-Williams mean field theory of the Ising model replaces the spins in the Hamiltonian by a spatially uniform magnetization, $m$,
\begin{equation}
m = \frac{\langle S \rangle}{N} =\frac{N_{\uparrow} - N_{\downarrow}}{N}
\end{equation}
which takes values between $-1$ and $1$. The energy can thus be written as
\begin{equation}
E(m) \simeq -J \sum_{\langle ij \rangle} \langle S_{i} \rangle \langle S_{j} \rangle -h \sum_{i} \langle S_{i} \rangle = -\frac{NqJ}{2} m^{2} - Nhm
\end{equation}
where $q=2d$ is the coordination number and depends on the dimensionality $(d)$. Before we proceed with the calculation it is useful to note that the ordering energy per spin goes as $\frac{qJ m^{2}}{2}$ while the thermal energy per spin is of order $\frac{kT}{2}$. So it becomes natural to identify $T_{c} = \frac{qJ}{k}$, such that above this temperature the thermal fluctuations dominate over the Hamiltonian ordering and vice-versa. The entropy, S, can be calculated exactly
\begin{equation}
S(m) = k \ln \binom{N}{N_{\uparrow}} = k \ln \binom{N}{N(1+m)/2}
\simeq -N k\left[\frac{1+m}{2} \ln \frac{1+m}{2} + \frac{1-m}{2} \ln \frac{1-m}{2} \right]
\end{equation}
where $N_{\uparrow}$ is the number of up spins and $N = N_{\uparrow} + N_{\downarrow}$ is the total number of sites in the lattice. The complete Bragg-Williams free energy is
\begin{equation}
F(T,m) = -\frac{NqJ}{2} m^{2} +N k T\left[\frac{1+m}{2} \ln \frac{1+m}{2} + \frac{1-m}{2} \ln \frac{1-m}{2} \right]
\end{equation}
This expression can be expanded in powers of $m$ to obtain a simplified form of the free energy per site, $f$.
\begin{equation}
f = \frac{k(T-T_{c})}{2}m^{2} + \frac{kT}{12}m^{4}-kT\ln 2+O(m^{6})
\end{equation}
where
\begin{equation}
T_{c} = \frac{qJ}{k}
\end{equation}
For $T>T_{c}$, $f$ has positive curvature at the origin, while for $T<T_{c}$ the curvature at the origin is negative.
Also, by minimizing the free energy at fixed $T$ and $h$ we arrive at the equilibrium value of the order parameter:
\begin{equation}
m_{0} = \tanh(\beta q J m_{0} + \beta h)
\end{equation}
where the quantity $h_{m}= h + qJm_{0}$ is the local molecular field at any given site. For $h = 0$, we can again identify the MF critical temperature.
\begin{equation}
T_{c} = \frac{qJ}{k}
\end{equation}
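The transcendental self-consistency equation is easily solved by fixed-point iteration. A minimal numerical sketch, with illustrative values $k=J=1$ and $q=6$ (cubic lattice):

```python
import math

# Fixed-point solution of m = tanh((q*J*m + h)/T); units with k_B = 1,
# so T_c = q*J. The values q = 6, J = 1 (cubic lattice) are illustrative.
def bragg_williams_m(T, q=6, J=1.0, h=0.0, tol=1e-12):
    """Iterate m -> tanh((q*J*m + h)/T), starting from m = 1 so that
    below T_c the iteration lands on the nonzero (ordered) root."""
    m = 1.0
    for _ in range(100000):
        m_new = math.tanh((q * J * m + h) / T)
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m

Tc = 6.0                                  # q*J
m_low = bragg_williams_m(0.5 * Tc)        # ordered phase: m close to 1
m_high = bragg_williams_m(2.0 * Tc)       # disordered phase: m -> 0
```

Well below $T_c$ the iteration converges to the nonzero ordered root; above $T_c$ it flows to $m=0$.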
The MF free energy of the Ising model can thus be written in the form
\begin{equation}
f(m) = \frac{F(m)}{N} = \frac{1}{2}(kT-qJ)m^{2}- hm + \frac{kT}{12}m^{4}-kT\ln 2+O(m^{6})
\end{equation}
This form of free energy makes contact with the Landau functional (to be discussed in later sections)
\begin{equation}
\mathcal{L} = \frac{r}{2} m^{2} + u m^{4}
\end{equation}
Thus this mean field theory predicts a phase transition in every dimension. But we know the one-dimensional Ising model does not exhibit a phase transition, while in higher dimensions the MF prediction is qualitatively correct. This can be understood from the fact that in higher dimensions the number of neighbours is large, so fluctuations average out, which is exactly the approximation made in MFT. Hence a theory in high dimensions, or one with long-range interactions rather than just nearest neighbours, will be described properly by MFT.
!Landau theory
Landau theory is a phenomenological theory to describe phase transitions and the physics near a critical point. Analyticity of the free energy functional in the neighbourhood of a critical point is important as one wants to do maths with it. At the critical point itself, however, the free energy is not analytic, so Landau theory is not valid exactly at the critical point.
We will motivate Landau theory using the Ising model, but as such the theory is very general and can be applied to any system by respecting the symmetries involved. The Ising Hamiltonian can be written as
\begin{equation}
H = -J \sum_{\langle i j \rangle} S_{i} S_{j} -h \sum_{i=1} S_{i}
\end{equation}
* The spins $S_{i}$ can take values $\pm 1$,
* $\langle i j \rangle$ implies nearest-neighbor interaction only,
* $J$ is the interaction energy, and is positive if the interaction is ferromagnetic and negative if the interaction is antiferromagnetic, and
* $h$ is the external magnetic field.
The Ising Model has the following properties:
* In equilibrium, for temperatures below the critical temperature, the system magnetizes.
* The system undergoes a second-order phase transition at $T_{c}$. There is a line of first-order phase transitions at $h=0$, for $T<T_{c}$, in the $h \ \mathrm{vs.} \ T$ plane.
* The average magnetisation, $m$, acts as the order parameter here
\begin{equation}
m =\frac{\langle S \rangle}{N}
\end{equation}
* $m$ is zero in the disordered paramagnetic phase and non-zero in the ordered ferromagnetic phase.
Let us consider the case of zero external field, i.e., $h=0$, so the Hamiltonian is symmetric under $m\rightarrow-m$. The first term of the Landau functional which respects the $m\rightarrow-m$ symmetry is:
\begin{equation}
\mathcal{L} = \frac{r}{2} m^{2}
\end{equation}
This term can also be motivated using the [[central limit theorem|http://www.imsc.res.in/~rsingh/discussion/cond-mat/files/slides/clt.pdf]] since our order parameter is the sum of the spins at each site. Here $r(T)$ has to be strictly positive for the energy functional to have a well-defined lower bound, so stability enforces $r$ to be strictly positive. This form of free energy has only one minimum, at $m=0$, which corresponds to the paramagnetic phase. We have to consider higher-order terms if we want to capture the other interesting phenomena the Ising model can exhibit. The next term for a spatially uniform order parameter respecting the symmetry of the Hamiltonian is
\begin{equation}
\mathcal{L} = \frac{r}{2} m^{2} + u~m^{4}
\end{equation}
This is also called the $\phi^{4}$ theory. The functional should have the solution $m=0$ at high temperature and a non-zero $m$ at lower temperature. This is arranged by letting $r=a(T-T_{c})$ change sign: for $T<T_{c}$ there are two minima placed symmetrically about the origin. Thus we have mimicked a system which undergoes a second-order phase transition at $T_c$. If cubic terms are also allowed in the Landau functional, then we will have first-order phase transitions as well. So if we include the external magnetic field in the Ising Hamiltonian, $h\neq 0$, the up-down symmetry is lost and we expect a first-order transition. Similarly, for the isotropic-to-nematic transition the order parameter is a tensor which is itself invariant under the $Z_2$ symmetry; odd powers of the order parameter are then allowed in the free energy expansion, and that transition is first order.
If we also include a gradient term, accounting for the spatial variation of the order parameter, in the free energy, we obtain the Ginzburg-Landau functional. Let us consider the Ginzburg-Landau functional and take $\phi(x)$, the local magnetisation, as the order parameter: a field with spatial variations. We are looking at the problem in a field-theoretic way, although our system might be defined on a lattice; we are coarse-graining over a length scale $a \ll x \ll \xi$. Here, $a$ is the lattice spacing and $\xi$, as we will see, is the correlation length.
\begin{equation}
\mathcal{L} = \frac{r}{2} \phi^{2} + u \phi^{4} + \frac{c}{2} (\nabla \phi)^{2}
\end{equation}
We can make some quick estimates from this form of the free energy, given assumptions on the behaviour of the coefficients. Simply by dimensional analysis, there is an inherent length scale $\xi \sim \sqrt{c/|r|}$ in the system. We will see that this turns out to be the correlation length for our system, and that it diverges at the critical temperature. We can argue that $r = a(T-T_{c})$ changes sign near the critical temperature $T_{c}$, as it has to produce spontaneous symmetry breaking. We assume that $u$ is temperature independent and has to be positive for stability: the energy has to be bounded below, so the coefficient of the highest power in the energy expression has to be positive. The total free energy is thus,
\begin{equation}
F = \int~d^{d}r~\mathcal{L} = \int d^{d}r~\left[\frac{r}{2} \phi^{2} + u \phi^{4} + \frac{c}{2} (\nabla \phi)^{2}\right]
\end{equation}
[img[http://www.imsc.res.in/~rsingh/discussion/cond-mat/files/images/phi4.png]]
;Spontaneous symmetry breaking as the parameter r is varied.
For $T >T_{c}$ there is a single minimum of the free energy, at $\phi = 0$. For $T <T_{c}$ there are two minima, placed symmetrically about the origin, as shown in the figure. Minimizing the free energy, we get
\begin{equation}
r\phi + 4u\phi^{3} -c \nabla^{2}\phi= 0
\end{equation}
The solutions for a spatially uniform order parameter, reducing back to Landau theory, are,
\[ \phi = \left\{
\begin{array}{l l}
0 & \quad \text{if $T>T_{c};$}\\
\pm(-r/4u)^{1/2} & \quad \text{if $T<T_{c}.$}
\end{array} \right.\]
Thus mean field theory predicts a second-order phase transition, with the critical exponent $\beta$ of the magnetization equal to one half.
\begin{equation}
\phi \sim (T_{c}-T)^{\beta}, \qquad \beta = 1/2.
\end{equation}
Mean field theory ignores fluctuations and is hence exact above the upper critical dimension ($d=4$ for the Ising universality class), but it fails in low dimensions with nearest-neighbour interactions. In $d=3$ the exponent is $\beta \approx 1/3$.
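The mean field exponent $\beta=1/2$ can be checked by brute force: minimize the uniform Landau free energy on a grid and compare with $\phi_{0}=\sqrt{-r/4u}$. The coefficient values ($a$, $u$, $T_c$) below are illustrative:

```python
import numpy as np

# Grid minimization of f(phi) = (r/2) phi^2 + u phi^4, with
# r = a*(T - Tc). The coefficients a, u, Tc are illustrative.
a, u, Tc = 1.0, 1.0, 1.0
PHIS = np.linspace(-2.0, 2.0, 400001)     # grid step 1e-5

def phi_min(T):
    """Positive location of the global minimum of f on the grid."""
    r = a * (T - Tc)
    f = 0.5 * r * PHIS**2 + u * PHIS**4
    return abs(PHIS[np.argmin(f)])

T = 0.5
phi0_num = phi_min(T)                            # numerical minimizer
phi0_exact = np.sqrt(a * (Tc - T) / (4.0 * u))   # sqrt(-r/(4u))
```

Below $T_c$ the numerical minimizer agrees with $\sqrt{-r/4u}\propto(T_c-T)^{1/2}$; above $T_c$ it sits at the origin.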
How does the Landau functional show a phase transition in spite of being a well-behaved analytic function? Well, it is the minimum of the Landau functional which really determines the equilibrium value, and the location of the minimum can be non-analytic; nothing stops you! The minimum of the functional is non-analytic here: for $T<T_{c}$ there are two possibilities, $\phi=\pm\phi_{0}$.
!!Domain Walls
The equation obtained by minimizing free energy reads as
\begin{equation}
r\phi + 4u\phi^{3} -c \nabla^{2}\phi= 0
\end{equation}
We have seen that this has two solutions, $\phi = \pm\phi_{0}$, below the critical temperature $T_{c}$. At a non-zero temperature below $T_{c}$ there can be regions of up spins and regions of down spins, corresponding to the two solutions. These regions are called domains, and the surfaces where they meet are called domain walls.
To understand the structure of a domain wall we go back to the previous equation and consider variation along one direction only, so that it reduces to
\begin{equation}
\nabla^{2}\phi= \frac{r\phi}{c} + \frac{4u\phi^{3}}{c}
\end{equation}
The kink solution is
\begin{equation}
\phi= \phi_{0}\tanh \left(\frac{x}{\sqrt{2}\,\xi}\right), \qquad \xi = \sqrt{\frac{c}{|r|}}
\end{equation}
So the solution interpolates between the two regions, and the width of the domain wall is of order the correlation length $\xi$. Plugging this solution back into the expression for the free energy, one finds a wall energy per unit area $\sim r^{2}\xi/u$; this form can also be argued from the functional simply by dimensional estimates. As the critical point is approached the domain wall widens while the energy cost of a wall decreases, since $\xi$ becomes large as we approach the critical temperature.
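The standard kink profile $\phi_0\tanh(x/w)$, with $\phi_0=\sqrt{-r/4u}$ and $w=\sqrt{-2c/r}$, can be verified numerically by a finite-difference residual check of the stationarity equation; the coefficients below are illustrative:

```python
import numpy as np

# Residual check: phi(x) = phi0*tanh(x/w) with phi0 = sqrt(-r/(4u))
# and w = sqrt(-2c/r) should satisfy c*phi'' = r*phi + 4*u*phi^3.
# Coefficient values are illustrative.
r, u, c = -1.0, 1.0, 1.0
phi0 = np.sqrt(-r / (4.0 * u))
w = np.sqrt(-2.0 * c / r)

x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
phi = phi0 * np.tanh(x / w)

# central-difference second derivative on interior points only
phi_xx = (phi[2:] - 2.0 * phi[1:-1] + phi[:-2]) / dx**2
residual = c * phi_xx - (r * phi[1:-1] + 4.0 * u * phi[1:-1] ** 3)
max_residual = float(np.abs(residual).max())
```

The residual vanishes to discretization accuracy, confirming the tanh profile solves the equation.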
!$\phi^3$ and first order transition between nematic and isotropic
[img[http://www.imsc.res.in/~rsingh/discussion/cond-mat/files/images/phi3.png]]
;Landau functional w.r.t order parameter depicting a first order phase transition due to the cubic term in the free energy.
Liquid crystals consist of rod-like molecules with a high aspect ratio. In the isotropic fluid phase the molecules have random orientations and positions; that is to say, there is neither positional nor orientational order in the isotropic case. The nematic phase is the one in which the system has orientational order while the particles still have random positions. So we are in a system of lower symmetry with respect to the orientations, and we need a parameter which tells about the ordering in the system. By definition, this order parameter should capture the symmetry of the system, which is $Z_2$: the particles look the same once flipped, like in the Ising model with zero external field. Thus we might want to associate a particular unit vector with the direction of the ordered state, but this will not be invariant under $Z_2$, and we have to consider a tensor as the order parameter. Hence, for the isotropic-to-nematic transition the order parameter is a tensor. In the isotropic case this tensor should average to zero, so we consider a symmetric and traceless order parameter constructed from the orientation unit vectors $v$ of particles located at points $x_{\alpha}$
\begin{equation}
Q_{ij}(x) = \left\langle \sum_{\alpha}\left(v^{\alpha}_i v^{\alpha}_j -\frac13 \delta_{ij}\right) \delta(x-x_{\alpha})\right\rangle
\end{equation}
For a uniaxial liquid crystal the only important contribution comes from the principal direction, the unit vector $n$ called the Frank director, and the order parameter reduces to,
\begin{equation}
\langle Q_{ij} \rangle = S(n_i n_j -\frac13 \delta_{ij})
\end{equation}
where S is
\begin{equation}
S = \frac12\langle 3\cos^2\theta^{\alpha} -1\rangle
\end{equation}
Again, this system has the $Z_2$ symmetry of the Ising model, but since the order parameter is itself invariant under it, odd powers of the order parameter are allowed in the free energy expansion; hence this system undergoes a first-order phase transition. Having specified the order parameter, the next step is to write the free energy and then self-consistently solve for the order parameter.
\begin{equation}
f = \frac12 rS^2 - w S^3 + u S^4
\end{equation}
Notice that there is no linear term: the free energy has to be invariant under rotations, so it is constructed out of traces of powers of the order parameter, and the trace of the order parameter is by definition zero. Again $r = a(T-T^{*})$. Let us define these temperatures by looking at the plot of the Landau free energy against the order parameter. $T^{**}$ is the temperature where the functional first develops a local minimum away from the origin; this minimum comes down to touch zero of the free energy at $T_c$, resulting in a first-order phase transition with a discontinuous change in the order parameter. For $T^{*}<T<T_{c}$ the origin is still a local minimum, though no longer the global one. $T^{*}$ is the temperature where the origin develops negative curvature, and hence becomes unstable. So the origin is a metastable state which eventually becomes unstable. To calculate quantities at the transition we minimize the free energy and simultaneously set it equal to zero.
\begin{equation}
\frac{\partial f}{\partial S} = 0\\
f=0
\end{equation}
Using these we can calculate latent heat, etc.
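Solving the two conditions for $f = \frac12 rS^2 - wS^3 + uS^4$ gives $S_c = w/2u$ and $r_c = w^2/2u$; a grid minimization also shows the discontinuous jump of the minimizer across $r_c$. A sketch with illustrative coefficients:

```python
import numpy as np

# First-order transition of f(S) = (r/2)S^2 - w*S^3 + u*S^4:
# f = 0 and df/dS = 0 at S != 0 give S_c = w/(2u), r_c = w^2/(2u).
# The coefficients w, u are illustrative.
w, u = 1.0, 1.0
S_c = w / (2.0 * u)
r_c = w**2 / (2.0 * u)

f_at_Sc = 0.5 * r_c * S_c**2 - w * S_c**3 + u * S_c**4
df_at_Sc = r_c * S_c - 3.0 * w * S_c**2 + 4.0 * u * S_c**3

S_GRID = np.linspace(0.0, 1.5, 150001)

def S_min(r):
    """Global minimizer of f on the grid: jumps discontinuously at r_c."""
    f = 0.5 * r * S_GRID**2 - w * S_GRID**3 + u * S_GRID**4
    return S_GRID[np.argmin(f)]
```

Just below $r_c$ the global minimum sits near $S_c$; just above, it is at $S=0$: the order parameter jumps discontinuously.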
!!Tricritical points
The $\phi^3$ term in the Landau functional explains first-order phase transitions, but we can also have a first-order transition with only even powers of the order parameter in the free energy. Consider the following functional, which is also capable of exhibiting a first-order phase transition.
\begin{equation}
\mathcal{L} = \frac12 r~\phi^2 + u_4~\phi^4 + u_6~\phi^6
\end{equation}
If $u_4$ is negative then this model can exhibit a first-order phase transition, when the free energy develops a secondary minimum at $\phi\neq 0$. As before, $T^{**}$ is the temperature where the functional first develops a local minimum away from the origin, which comes down to touch zero of the free energy at $T_c$, resulting in a first-order transition with a discontinuous change in the order parameter; $T^{*}$ ($<T_{c}$) is the temperature where the origin develops negative curvature and becomes unstable. In between, the origin is a metastable state which eventually becomes unstable. Recall hot ice, or heating a clean glass of water in a microwave oven: even if another phase is more favourable, a system can be stuck in a metastable state if nucleation is not favoured. The phenomena of phase growth and the explanations of nucleation, spinodal decomposition, etc. have been described in another section, [[coarsening|Coarsening]].
[img[http://www.imsc.res.in/~rsingh/discussion/cond-mat/files/images/phi6.png]]
;Free energy w.r.t order parameter showing first order and second order phase transition as parameters vary.
\[ r_c = a(T_c-T^*)= \left\{
\begin{array}{l l}
0 & \quad \text{if $u_4>0$}\\
\frac12 |u_4|^2/u_6 & \quad \text{if $u_4<0$}
\end{array} \right.\]
The line of second-order phase transitions for $u_4 > 0$ is called the lambda line. It meets the line of first-order phase transitions (for $u_4<0$) at the //tricritical point// $(r,u_4) =(0,0)$. So a tricritical point is a point connecting lines of first- and second-order phase transitions.
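The quoted first-order line for $u_4<0$ can be checked directly: demanding $\mathcal{L}=0$ and $\partial\mathcal{L}/\partial\phi=0$ simultaneously at $\phi\neq0$, and writing $x=\phi^2$, gives $x=|u_4|/2u_6$ and $r_c=\frac12|u_4|^2/u_6$. A sketch with sample coefficients:

```python
# Check of the first-order line of L(phi) = (r/2)phi^2 + u4*phi^4 + u6*phi^6
# with u4 < 0: L = 0 and dL/dphi = 0 at phi != 0, with x = phi^2, give
# x = |u4|/(2*u6) and r_c = |u4|^2/(2*u6). Sample values below.
u4, u6 = -1.0, 1.0

x = abs(u4) / (2.0 * u6)             # phi^2 at the transition
r_c = abs(u4) ** 2 / (2.0 * u6)      # = (1/2)|u4|^2/u6

L_over_x = 0.5 * r_c + u4 * x + u6 * x**2             # L / phi^2
dL_over_phi = r_c + 4.0 * u4 * x + 6.0 * u6 * x**2    # (dL/dphi) / phi
```

Both conditions are satisfied simultaneously, confirming the piecewise expression above.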
An antiferromagnet can be divided into two sublattices, since its order parameter is of the form $\sum_i (-1)^i S_i$, and we can define an order parameter on each of the two sublattices. Let us label the sublattices A and B. The staggered magnetisation is then given by,
\begin{equation}
m_s = \frac{m_A - m_B}{2}
\end{equation}
while the usual magnetisation, as in the ferromagnetic case, is,
\begin{equation}
m = \frac{m_A + m_B}{2}
\end{equation}
!!The liquid solid transition
The order parameters for the liquid-solid transition are the Fourier components of the density at the reciprocal lattice vectors, so there are many of them. The density of a fluid is the same at all points in space, while in a crystal it is periodic and can hence be written in terms of reciprocal lattice vectors. The relevant quantity to be determined here is the density-density correlation function. Let us first write the average density modulation,
\begin{equation}
\langle \delta n(x)\rangle = \langle n(x)\rangle -n_0 = \sum_G n_G e^{iG\cdot x}
\end{equation}
Using this we can define the static structure factor, $S_{nn}$ (see [[here|Scattering]] for details)
\begin{equation}
S_{nn} = \langle \delta n(x)\delta n(x')\rangle
\end{equation}
For fluids the static structure factor peaks at a radius $k_0 = \frac{2\pi}{l}$, where $l$ is the inter-atomic distance. Lowering the temperature we approach the solid phase, so to lowest approximation we expect the structure factor to look the same,
\begin{equation}
S_{nn}(k) = \frac{T}{r+c(k^2-k_0^2)^2} = \frac{T}{r_G}
\end{equation}
The required form of the structure factor can be obtained from the following free energy
\begin{equation}
F_{SL} = \int d^dx~d^dx' \langle \delta n(x)\rangle\chi^{-1}(x,x')\langle \delta n(x')\rangle\\
-w \int d^dx \langle \delta n(x)\rangle^3
+u \int d^dx \langle \delta n(x)\rangle^4
\end{equation}
This can be written as,
\begin{equation}
f_{SL} = \frac{F}{V} = \sum_G \frac12 r_G |n_G|^2 -
w \sum_{G_1,G_2, G_3} n_{G_1}n_{G_2}n_{G_3} \delta_{G_1+G_2+ G_3, 0} \\
+ u \sum_{G_1,G_2, G_3, G_4} n_{G_1}n_{G_2}n_{G_3}n_{G_4} \delta_{G_1+G_2+ G_3+ G_4, 0}
\end{equation}
The cubic term, as we discussed for nematic liquid crystal, will lead to first-order phase transition.
Consider $r_G = r + c(k^2-k_0^2)^2$ in the limit $c\rightarrow\infty$. This constrains $|G|=k_0$, and the cubic term then couples triplets of vectors of equal length summing to zero, i.e. forming equilateral triangles. This very fact allows us to make fairly general statements about the transition, since not many sets of reciprocal lattice vectors add to zero in triangles. It turns out that the favoured reciprocal lattice is FCC, which is the reciprocal lattice of BCC. Also, since this is a first-order transition, a technique similar to that used for the isotropic-to-nematic transition can be used to calculate the transition temperature, etc. Doing the analysis one finds that BCC has the highest transition temperature, which is what we expected. This is consistent with the observation that most of the metals on the left-hand side of the periodic table form BCC structures near the melting line.
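The equilateral-triangle statement is elementary to verify: if $G_1+G_2+G_3=0$ with $|G_i|=k_0$, then $|G_3|^2=|G_1+G_2|^2$ forces $G_i\cdot G_j=-k_0^2/2$ for $i\neq j$, i.e. $120^\circ$ between any pair. A numerical sketch ($k_0$ illustrative):

```python
import numpy as np

# Three reciprocal-lattice vectors of equal length k0 summing to zero
# necessarily make 120-degree angles: |G3|^2 = |G1+G2|^2 forces
# G1.G2 = -k0^2/2, i.e. an equilateral triangle. k0 is illustrative.
k0 = 1.0
theta = 2.0 * np.pi / 3.0
G1 = k0 * np.array([1.0, 0.0])
G2 = k0 * np.array([np.cos(theta), np.sin(theta)])
G3 = -(G1 + G2)                      # closure: G1 + G2 + G3 = 0

norms = [float(np.linalg.norm(G)) for G in (G1, G2, G3)]
dots = [float(G1 @ G2), float(G2 @ G3), float(G3 @ G1)]
```

All three vectors have length $k_0$ and every pairwise dot product equals $-k_0^2/2$.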
!! Summary of Landau Theory:
* Choice of the [[Order parameter]]
* Construction of Landau functional using the order parameter which respects all the symmetries of the underlying Hamiltonian
* Analyticity of the Landau Functional
* Further calculation of relevant quantities using Landau functional
!!!Bogoliubov inequality
Exact solutions of realistic problems are usually very hard to arrive at; even when an exact solution exists in a particular case, mean field theory can be used to understand the details of the system by relatively simple arguments and calculations. Invariably we have to take recourse to variational approaches. Apart from being mathematically simpler, variational approaches provide very good insight into a given problem. One important ingredient of self-consistent mean field theories is the Bogoliubov inequality. This inequality states that the free energy of a system with Hamiltonian
\begin{equation}
H= H_{0}+\Delta H
\end{equation}
has the following upper bound:
\begin{equation}
F \leq F_{0} \ \stackrel{\mathrm{def}}{=}\ \langle H \rangle_{0} -T S_{0}
\end{equation}
//Proof:// Considering the partition function,
\begin{equation}
Z = Tr\left( e^{-\beta H} \right)
\end{equation}
Multiplying and dividing by $Z_0 = Tr\left( e^{-\beta H_0} \right)$, and noting that classically the exponentials factorize, this is the same as
\begin{equation}
Z = Tr\left( e^{-\beta H_0} \right) \frac{Tr\left[ e^{-\beta H_0} e^{-\beta (H-H_0)} \right]}{Tr\left( e^{-\beta H_0} \right)} = Z_0~\langle~e^{-\beta (H-H_0)}\rangle_{0}
\end{equation}
where $\langle \cdot \rangle_{0}$ denotes an average in the ensemble of $H_0$.
Also, we use Jensen's inequality for the (convex) exponential function,
\begin{equation}
\langle~e^{x}\rangle \geq e^{\langle x\rangle}
\end{equation}
Applied to $x = -\beta(H-H_0)$, this implies,
\begin{equation}
\langle~e^{-\beta (H-H_0)}\rangle_{0} \geq e^{-\beta\langle H-H_0\rangle_{0}}
\end{equation}
Plugging this back in the expression of the partition function, we get,
\begin{equation}
Z \geq Tr\left(e^{-\beta H_0}\right) e^{-\beta\langle(H-H_0)\rangle}
\end{equation}
Taking log,
\begin{equation}
\ln Z \geq \ln Tr\left(e^{-\beta H_0}\right) + \ln e^{-\beta\langle(H-H_0)\rangle}
\end{equation}
$\implies$
\begin{equation}
F \leq F_0 + \langle H-H_0 \rangle
\end{equation}
\begin{equation}
F_0 = \langle H_0 \rangle_{0} - TS_0 \implies F \leq \langle H \rangle_{0} - TS_0
\end{equation}
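As a concrete check of the inequality, one can take a two-spin Ising Hamiltonian $H=-Js_1s_2$ and a non-interacting trial Hamiltonian $H_0=-\lambda(s_1+s_2)$; the variational free energy $F_0+\langle H-H_0\rangle_0$ must then lie above the exact $F$ for every $\lambda$. The parameter values and the $\lambda$ grid below are illustrative:

```python
import itertools
import math

# Bogoliubov bound on a two-spin Ising Hamiltonian H = -J s1 s2,
# with trial Hamiltonian H0 = -lam*(s1 + s2) (independent spins).
# J, T and the lam grid are illustrative choices.
J, T = 1.0, 1.0
beta = 1.0 / T

spins = list(itertools.product([-1, 1], repeat=2))

def exact_F():
    """Exact free energy from the four-configuration partition sum."""
    Z = sum(math.exp(beta * J * s1 * s2) for s1, s2 in spins)
    return -T * math.log(Z)

def F_var(lam):
    """Variational free energy F0 + <H - H0>_0 for the product ansatz."""
    z0 = 2.0 * math.cosh(beta * lam)       # single-spin partition function
    F0 = -2.0 * T * math.log(z0)           # two independent spins
    m = math.tanh(beta * lam)              # <s_i>_0
    # <H - H0>_0 = -J <s1>_0 <s2>_0 + lam*(<s1>_0 + <s2>_0)
    return F0 - J * m * m + 2.0 * lam * m

F_exact = exact_F()
gaps = [F_var(l / 10.0) - F_exact for l in range(-30, 31)]
min_gap = min(gaps)
```

The bound holds for every trial $\lambda$; the gap never closes here because the product ansatz cannot capture the correlations of the coupled pair.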
Models can be used to understand phase transitions and many other physical phenomena; moreover, they are a very nice way of classifying physical problems. It is always a good idea to use group-theoretic concepts in physics, especially when you want to exploit the symmetries and conservation laws of your theory. But before we go to ordered systems, which have less symmetry than a homogeneous and isotropic fluid, let us revise some basics of thermodynamics.
!Thermodynamics
//Thermodynamics is the study of the restrictions on the possible properties of matter that follow from the symmetry properties of the fundamental laws of physics.
-Herbert B. Callen//
For example, the basic laws of physics are invariant under time reversal, yet at the macroscopic scale there is an arrow of time; this can be motivated from the second law of thermodynamics. Moreover, if you remove one of the constraints on a system otherwise in equilibrium, thermodynamics will tell you about the new equilibrium state.
!!Zeroth law of thermodynamics
It is characterized by equilibrium. Consider a system $A$ in thermal equilibrium with $B$, and $A$ in equilibrium with $C$; then $B$ and $C$ are also in equilibrium. This law defines the notion of temperature, i.e. the systems $A$, $B$ and $C$ are all at the same temperature.
!!First law of thermodynamics
This is essentially the conservation of energy.
\begin{equation}
\delta Q = pd V + d U
\end{equation}
!!Second law of thermodynamics
The second law of thermodynamics is contained in the fact that the net entropy of an isolated //macroscopic// system increases
\begin{equation}
\delta S \geq 0
\end{equation}
This is essentially ''the law'', and all its other variants can be derived from the condition that a process occurs spontaneously only if the net change in entropy is positive. The statement that not all energy can be converted into work can thus be understood as follows: some energy has to be spent in increasing the entropy of the final state, hence some heat is inevitably wasted in performing work. It also defines an arrow of time: heat cannot flow on its own from a colder to a hotter body, as that is not favourable entropically. Thus, if two systems are allowed to exchange heat by relaxing the constraint of no heat transfer, the second law ensures that they settle into a state in which both have the same temperature. Similarly, if you also remove the constraints on volume and particle number, then in the new equilibrium state the pressures and chemical potentials of the two coexisting systems should also be the same, i.e. $T_1 = T_2$, $p_1=p_2$ and $\mu_1=\mu_2$.
!!Third law of thermodynamics
The entropy of a system approaches a constant value as the temperature approaches zero.
The thermodynamic description of state is valid for an isotropic and homogeneous fluid. But if the system has less symmetry than a fluid, then we have to invoke things like the [[order parameter|Order parameter]], etc., to describe the different phases exhibited by the system. We will worry about the other details later. Let us start with a discussion of the different models used in physics to study spin systems.
!Ising Model
Look [[here|Ising model]].
!$Z_N$ Model
$Z_{N}$ Model, sometimes also called the ''clock model'', is a generalization of the Ising Model. This is a class of models with $Z_{N}$ symmetry, defined by associating with each site of the lattice a spin variable $S_{i}$ of unit magnitude constrained to point in one of $N$ equally spaced directions on the unit circle:
\begin{equation}
S_{i} = \left(\cos \frac{2 \pi n_{i}}{N},\sin \frac{2 \pi n_{i}}{N}\right) ,
\end{equation}
where $n_{i} = 0,1,\cdots,N-1$.
The Hamiltonians of both the $Z_{N}$ and $O(N)$ models have the same form as that of the classical Heisenberg model.
\begin{equation}
H = -J \sum_{i=1} S_{i}\cdot S_{i+1}
\end{equation}
The Hamiltonian can be equivalently written as
\begin{equation}
H = -J \sum_{i=1} \cos \left(2\pi(n_{i+1} - n_{i})/N\right) .
\end{equation}
It is therefore straightforward to see that the Ising Model is identical to the $Z_{2}$ clock model.
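This equivalence can be checked directly: with $s_i=\cos(\pi n_i)=\pm1$ for $n_i\in\{0,1\}$, the clock bond energy equals the Ising bond energy for every pair of states. A quick sketch:

```python
import math
from itertools import product

# N = 2 clock model vs Ising: with s_i = cos(pi*n_i) = +/-1 for
# n_i in {0, 1}, the clock bond energy -J*cos(2*pi*(n_j - n_i)/N)
# equals the Ising bond energy -J*s_i*s_j for every pair of states.
J, N = 1.0, 2

def clock_bond(ni, nj):
    return -J * math.cos(2.0 * math.pi * (nj - ni) / N)

def ising_bond(ni, nj):
    si, sj = math.cos(math.pi * ni), math.cos(math.pi * nj)
    return -J * si * sj

max_diff = max(abs(clock_bond(ni, nj) - ising_bond(ni, nj))
               for ni, nj in product([0, 1], repeat=2))
```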
! $O(N)$ Models
This is a class of models with $O(N)$ symmetry, defined by associating an $N$-dimensional spin, $S_{i}$, with each lattice point. This spin can be thought of as a unit vector living on the $(N-1)$-dimensional sphere. As already pointed out, the Hamiltonian of this model is of the same form as that of the $Z_{N}$ model and the classical Heisenberg model.
\begin{equation}
H = -J \sum_{i=1} S_{i}\cdot S_{i+1}
\end{equation}
The Hamiltonian can be equivalently written as
\begin{equation}
H = -J \sum_{i=1} \cos \left(\theta_{i+1} - \theta_{i}\right)
\end{equation}
This model is generalization of different models, as outlined below:
* $N = 1$: Ising Model,
* $N = 2$: XY Model,
* $N = 3$: Heisenberg Model,
* $N = 0$: Self-Avoiding Random Walk!
! $N$-State Potts Model
Here the spin $S_{i}$ can take any of $N$ discrete values, and the model is symmetric under all permutations of these values (a symmetry which contains $Z_N$). The Hamiltonian assigns one energy when nearest neighbours are in the same state and a second energy when they are in different states. The Hamiltonian is
\begin{equation}
H = -J \sum_{i=1} [N\delta_{S_{i+1},S_{i}}-1]
\end{equation}
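For $N=2$ the Potts bond energy reduces to the Ising one: $N\delta_{S_i,S_j}-1 = s_is_j$ under the mapping $s=2n-1$ with $n\in\{0,1\}$. A quick check:

```python
from itertools import product

# N = 2 Potts model vs Ising: the Potts bond energy
# -J*(N*delta(a, b) - 1) equals the Ising bond energy -J*s_a*s_b
# under the mapping s = 2n - 1, n in {0, 1}.
J, N = 1.0, 2

def potts_bond(a, b):
    return -J * (N * (a == b) - 1)

def ising_bond(sa, sb):
    return -J * sa * sb

max_diff = max(abs(potts_bond(a, b) - ising_bond(2 * a - 1, 2 * b - 1))
               for a, b in product([0, 1], repeat=2))
```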
Order parameter is used to distinguish between the different phases of a system. Defining an order parameter for a given system can be quite a headache for theoretical physicists. Usually one writes the order parameter based on the symmetry and conservation laws of the given system such that it can be used to clearly distinguish between the different phases that the system is capable of exhibiting.
Order parameter for the [[Ising model]] is magnetisation $m$. Order parameter is usually zero in the disordered phase and non-zero in the ordered phase. For example, the magnetisation $m$ is zero in the disordered paramagnetic phase while non-zero in the ordered ferromagnetic phase. Thus, the order parameter clearly distinguishes between the two phases of the system. In liquid crystals, order parameter for isotropic to nematic transition is $\left\langle \cos^2 \theta -1/3\right\rangle$. This is zero in the isotropic phase as all the angles are possible while in a nematic phase it is non zero as a particular angle is chosen.
Similarly, one can define the order parameter for the liquid-gas transition as the difference in the densities of the two phases, $\left|~\rho_l -\rho_g~\right|$. Both liquid and gas, unlike solids, are invariant under translations and rotations, hence they are distinguished by the difference in their densities. The order parameter for the liquid-solid transition is the Fourier transform of the density, $n_G$, since this is zero in the fluid phase (except at $q=0$) while for solids it is non-zero at the reciprocal lattice vectors $G$.
Another very important class of order parameters, which are complex numbers in general, are defined as,
\begin{equation}
\psi~(x) = \left|~\psi~(x)\right|~e^{i\phi}
\end{equation}
Here, $\left|~\psi~(x)\right|$ is usually the density, while the phase $\phi$ tells about the modulation of the density.
//{{{
if(!version.extensions.PluginMathJax) {
version.extensions.PluginMathJax = { installed: true };
config.extensions.PluginMathJax = {
install: function() {
var script = document.createElement("script");
script.type = "text/javascript";
// *** Use the location of your MathJax! *** :
/*
* Gareth: The following line assumes you have
* MathJax installed on your server in a sensible
* location. I've commented this out.
*/
//script.src = "js/MathJax/MathJax.js";
/*
* Because this tiddlywiki is currently hosted on
* tiddlyspace.com I've had to point to the 'MathJax
* Content Delivery Network' instead.
*/
script.src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML"
// EndGareth
/*
* Gareth: Richard's local definition ~MathJax
* extension (implementation of TeXs \let command)
* has been added to the list of extensions along
* with the newcommand extension upon which it
* depends. Also, the scale option for HTML-CSS was
* changed from 115 to 100.
*/
var mjconfig = 'MathJax.Hub.Config({' +
'jax: ["input/TeX","output/HTML-CSS"],' +
'extensions: ["TeX/AMSmath.js", "TeX/AMSsymbols.js", "TeX/newcommand.js", "http://oxkunengroup.tiddlyspace.com/localTeX.js"],' +
'"HTML-CSS": {' +
'scale: 100' +
'}' +
'});' +
'MathJax.Hub.Startup.onload();';
var ie9RegExp = /^9\./;
var UseInnerHTML = (config.browser.isOpera || config.browser.isIE && ie9RegExp.test(config.browser.ieVersion[1]));
if (UseInnerHTML) {script.innerHTML = mjconfig;}
else {script.text = mjconfig;}
document.getElementsByTagName("head")[0].appendChild(script);
// Define wikifers for latex
config.formatterHelpers.mathFormatHelper = function(w) {
var e = document.createElement(this.element);
e.type = this.type;
var endRegExp = new RegExp(this.terminator, "mg");
endRegExp.lastIndex = w.matchStart+w.matchLength;
var matched = endRegExp.exec(w.source);
if(matched) {
var txt = w.source.substr(w.matchStart+w.matchLength,
matched.index-w.matchStart-w.matchLength);
if(this.keepdelim) {
txt = w.source.substr(w.matchStart, matched.index+matched[0].length-w.matchStart);
}
if (UseInnerHTML) {
e.innerHTML = txt;
} else {
e.text = txt;
}
w.output.appendChild(e);
w.nextMatch = endRegExp.lastIndex;
}
}
config.formatters.push({
name: "displayMath1",
match: "\\\$\\\$",
terminator: "\\\$\\\$\\n?",
termRegExp: "\\\$\\\$\\n?",
element: "script",
type: "math/tex; mode=display",
handler: config.formatterHelpers.mathFormatHelper
});
config.formatters.push({
name: "inlineMath1",
match: "\\\$",
terminator: "\\\$",
termRegExp: "\\\$",
element: "script",
type: "math/tex",
handler: config.formatterHelpers.mathFormatHelper
});
var backslashformatters = new Array(0);
backslashformatters.push({
name: "inlineMath2",
match: "\\\\\\\(",
terminator: "\\\\\\\)",
termRegExp: "\\\\\\\)",
element: "script",
type: "math/tex",
handler: config.formatterHelpers.mathFormatHelper
});
backslashformatters.push({
name: "displayMath2",
match: "\\\\\\\[",
terminator: "\\\\\\\]\\n?",
termRegExp: "\\\\\\\]\\n?",
element: "script",
type: "math/tex; mode=display",
handler: config.formatterHelpers.mathFormatHelper
});
backslashformatters.push({
name: "displayMath3",
match: "\\\\begin\\{equation\\}",
terminator: "\\\\end\\{equation\\}\\n?",
termRegExp: "\\\\end\\{equation\\}\\n?",
element: "script",
type: "math/tex; mode=display",
handler: config.formatterHelpers.mathFormatHelper
});
// These can be nested. e.g. \begin{equation} \begin{array}{ccc} \begin{array}{ccc} ...
backslashformatters.push({
name: "displayMath4",
match: "\\\\begin\\{eqnarray\\}",
terminator: "\\\\end\\{eqnarray\\}\\n?",
termRegExp: "\\\\end\\{eqnarray\\}\\n?",
element: "script",
type: "math/tex; mode=display",
keepdelim: true,
handler: config.formatterHelpers.mathFormatHelper
});
// The escape must come between backslash formatters and regular ones.
// So any latex-like \commands must be added to the beginning of
// backslashformatters here.
backslashformatters.push({
name: "escape",
match: "\\\\.",
handler: function(w) {
w.output.appendChild(document.createTextNode(w.source.substr(w.matchStart+1,1)));
w.nextMatch = w.matchStart+2;
}
});
config.formatters=backslashformatters.concat(config.formatters);
old_wikify = wikify;
wikify = function(source,output,highlightRegExp,tiddler)
{
old_wikify.apply(this,arguments);
if (window.MathJax) {MathJax.Hub.Queue(["Typeset",MathJax.Hub,output])}
};
}
};
config.extensions.PluginMathJax.install();
}
//}}}
A regular crystal has both long range order and translational symmetry. Some structures have long range order but no translational symmetry; instead, they exhibit a modulation in space whose period is irrational relative to the underlying lattice. These quasi-periodic structures result from competition between different length scales in the system. They manifest themselves as groups of peaks in the diffraction pattern, usually weak and arranged about the main Bragg peaks, called satellite peaks. These satellite reflections can be explained by a periodic distortion of the basic structure. The period of such a modulated structure may be commensurate (a rational multiple) or incommensurate (an irrational multiple) with the lattice of the basic structure, which is manifested by peaks at commensurate or incommensurate multiples of the reciprocal lattice vector. Periodic distortions can lead to interesting phenomena like the Peierls transition, a metal-insulator transition at low temperatures for a one dimensional chain with one electron per site. The word incommensurate comes from the irrational numbers we encounter in the system. One such number of great importance is the golden ratio, $\phi$,
\begin{equation}
\phi =\frac{1+\sqrt{5}}{2}
\end{equation}
The satellite reflections are at $G \pm mq$, and are much weaker than the main Bragg reflection at $G$. Thus the signature of an incommensurate structure is the appearance of satellite peaks at irrational multiples of the reciprocal lattice vector.
A quasicrystal is an ordered structure that exhibits long range order but not periodicity. We call the ordering non-periodic if it lacks translational symmetry; that is, a shifted copy never matches its original exactly. Quasicrystal structures have non-crystallographic point group symmetries and thus differ from incommensurate structures. Space groups with fivefold, sevenfold and higher order rotation operations are excluded from the classical definition of crystalline solids. So if only periodic solids could give rise to Bragg peaks, we should never see a peak corresponding to a fivefold symmetry, yet quasicrystals show exactly such non-crystallographic peaks. This is explained by the fact that Bragg diffraction only requires long range positional order and not long range periodic translational invariance. Quasicrystals can be modelled using the Penrose tiling which, interestingly, has fivefold symmetry in two dimensions. The Penrose tiling is not periodic and is self similar. In a Penrose tiling, the entire plane is filled with two types of tiles, skinny and fat, whose proportions are related to the golden ratio.
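The competition between two incommensurate length scales can be illustrated with the one dimensional Fibonacci chain, a standard toy model of a quasicrystal (this sketch is an illustration, not taken from the references below): the substitution rule $L \to LS$, $S \to L$ generates a non-periodic tiling in which the ratio of long to short tiles approaches the golden ratio.

```python
# Build a 1D Fibonacci chain (a toy quasicrystal) by the substitution
# L -> LS, S -> L and check that the ratio of tile counts approaches
# the golden ratio phi = (1 + sqrt(5))/2.
from math import sqrt

PHI = (1 + sqrt(5)) / 2

def substitute(word):
    # apply the substitution to every tile simultaneously
    return "".join("LS" if tile == "L" else "L" for tile in word)

word = "L"
for _ in range(20):
    word = substitute(word)

ratio = word.count("L") / word.count("S")
print(ratio)  # converges to PHI; also, two short tiles never occur adjacently
```

The chain is self similar under the substitution, mirroring the self similarity of the Penrose tiling mentioned above.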
!!References:
*Bak, P. (1982). Commensurate phases, incommensurate phases and the devil's staircase. //Reports on Progress in Physics//, 45(6), 587-629.
*Fisher, M. E., & Selke, W. (1980). Infinitely many commensurate phases in a simple Ising model. //Physical Review Letters//, 44(23), 1502.
*[[Quasicrystals|http://www.sciencedaily.com/releases/2011/10/111005080232.htm]]
*Goldman, A. I., & Kelton, R. F. (1993). Quasicrystals and crystalline approximants. //Reviews of Modern Physics//, 65, 213-230.
>Email: rajeshrinet |at| gmail |dot| com
More details can be found on my [[homepage|http://rajeshrinet.github.io]].
!Critical exponents, scaling and universality
!The Kadanoff block spin
!The Migdal Kadanoff procedure
!The Gaussian model
!!References:
*Lectures on phase transitions and the renormalization group: //Nigel Goldenfeld//
*Principles of condensed matter physics: //P. M. Chaikin// and //T. C. Lubensky//
*Statistical physics of fields: //Mehran Kardar//
*Kadanoff, Leo P., //Scaling laws for Ising models near Tc.// Physics 2.6 (1966): 263-272.
*Wilson, Kenneth G., //Problems in Physics with Many Scales of Length.// Scient. Am 241 (1979): 140-157.
!Introduction
Scattering is the general process by which a stream of particles or a beam of rays, say X-rays, is spread over a range of directions as a result of collisions with localized particles or other non-uniformities in the medium it passes through. Scattering measurements can be used to probe fluctuations at length scales of the order of the wavelength $\lambda$ of the probe used for the measurements. A beam of wave vector $k$ is incident upon the sample and is scattered into wave vector $k' = k+q$. For elastic scattering, $|k|=|k'|$. Considering elastic scattering from lattice planes, it is easy to derive Bragg's law,
\begin{equation}
2d~\sin\theta = n \lambda
\end{equation}
where $n$ is an integer and $2\theta$ is the angle between incident and the scattered wave. A more sophisticated analysis, discussed below, gives the same result.
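As a quick numerical sanity check, Bragg's law can be inverted for the scattering angle. The spacing and wavelength below are illustrative values, not taken from the text.

```python
# Solve 2 d sin(theta) = n * lambda for the Bragg angle theta.
from math import asin, degrees, radians, sin

def bragg_angle(d, lam, n=1):
    """Bragg angle in degrees, or None when n*lam > 2d (no reflection)."""
    x = n * lam / (2 * d)
    return degrees(asin(x)) if x <= 1 else None

d, lam = 3.14, 1.54          # illustrative spacing and wavelength, in angstrom
theta = bragg_angle(d, lam)
# the returned angle satisfies Bragg's law
assert abs(2 * d * sin(radians(theta)) - lam) < 1e-12
```

Note that for $n\lambda > 2d$ no Bragg reflection exists, which is why higher orders disappear for short spacings.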
The scattering cross-section, $\sigma$, is a hypothetical area which describes the likelihood of scattering by a particle. In a sense, it is the effective cross section exposed by a particle to any incoming particle or radiation. Another quantity of interest is the differential cross section $\frac{d\sigma}{d\Omega}$. Classically, this is defined as the ratio of $I_s$ and $I_0$,
\begin{equation}
\frac{d\sigma}{d\Omega} = \frac{I_s}{I_0}
\end{equation}
where $I_0$ is the intensity of the beam (measured in number of particles per area per time) incident on a scattering center while $I_s$ is the number of scattered particles per solid angle per time (the radiant intensity).
To look at it quantum mechanically, let's assume that the wave function of the incident particle is a plane wave $e^{ikr}$. In general, the scattered wave is a spherical wave of the form
\begin{equation}
f\left( \theta,~\phi\right)\frac{e^{ikr}}{r}
\end{equation}
Thus, the differential cross section in this case is simply the probability of finding the scattered wave at a given solid angle.
\begin{equation}
\frac{d\sigma}{d\Omega} =\left|~f\left( \theta,~\phi\right)~\right|^{2}
\end{equation}
The integral cross section is obtained by integrating the differential cross section over a sphere with a total solid angle $4\pi$
\begin{equation}
\sigma = \int \frac{d\sigma}{d\Omega}~ \text{d}\Omega
\end{equation}
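As a consistency check, one can carry out this solid-angle integral numerically; for an isotropic amplitude $f(\theta,\phi) = f_0$ the result must be $\sigma = 4\pi f_0^2$. This is a sketch with an illustrative value of $f_0$, not taken from the text.

```python
# Midpoint-rule integration of |f|^2 over the full solid angle,
# d(Omega) = sin(theta) d(theta) d(phi).
from math import pi, sin

def total_cross_section(f, n_theta=200, n_phi=200):
    dtheta, dphi = pi / n_theta, 2 * pi / n_phi
    sigma = 0.0
    for i in range(n_theta):
        theta = (i + 0.5) * dtheta
        for j in range(n_phi):
            phi = (j + 0.5) * dphi
            sigma += abs(f(theta, phi)) ** 2 * sin(theta) * dtheta * dphi
    return sigma

f0 = 2.0
sigma = total_cross_section(lambda th, ph: f0)  # isotropic scattering
print(sigma)  # approaches 4*pi*f0**2
```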
Fermi's golden rule is a way to calculate the transition rate (probability of transition per unit time), on account of a perturbation, from an energy eigenstate of a quantum system into a continuum of energy eigenstates.
\begin{equation}
T_{i \rightarrow f}= \frac{2 \pi} {\hbar} \left | \langle k'~|~U~|~k \rangle \right |^{2} \rho,
\end{equation}
where $\rho$ is the density of final states (number of states per unit of energy) and
\begin{equation}
\mathcal{M}_{kk'} = \langle k'~|~U~|~k\rangle
\end{equation}
is the matrix element (in bra-ket notation) of the perturbation $ U$ between the final and initial states. The scattering cross section turns out to be,
\begin{equation}
\frac{d\sigma}{d\Omega} \sim \frac{2 \pi} {\hbar} |\mathcal{M}_{kk'}|^{2}
\end{equation}
This is the static cross-section, which can be obtained experimentally by integrating over all possible energies transferred to the medium. Let's consider a multiparticle system with a potential of the form
\begin{equation}
U(x) = \sum_{\alpha} U_{\alpha} (x-x_{\alpha})
\end{equation}
where $x_\alpha$ is the position of atom $\alpha$. This can be used to calculate the matrix element. Let's again consider the matrix element given by
\begin{equation}
\mathcal{M}_{kk'} = \langle k'~|~U~|~k\rangle
\end{equation}
So what we are after is,
\begin{equation}
\langle k'~|~U~|~k\rangle =\int \text{d}^{d}x~ \text{d}^{d}x' \left\langle k'~|~x\rangle ~ \langle x~|~ U~|~x'\rangle ~ \langle x'~|~k~\right\rangle
\end{equation}
This can be simplified using,
\begin{equation}
\langle x~|~k\rangle = e^{ikx}
\end{equation}
\begin{equation}
\langle x~|~U~|~x'\rangle = U(x)~\delta(x-x')
\end{equation}
Substituting these back,
\begin{equation}
\langle k'~|U|~k\rangle =\int \text{d}^{d}x~ e^{-i(k'-k)\cdot x}~U(x)
\end{equation}
We use the multiparticle potential,
\begin{equation}
U(x) = \sum_{\alpha} U_{\alpha} (x-x_{\alpha})
\end{equation}
Plugging this back in the expression for transition matrix element,
\begin{equation}
\langle k'~|U|~k\rangle = \sum_{\alpha} \int \text{d}^{d}x~ e^{-ix\cdot(k'-k)} ~ U_{\alpha} (x-x_{\alpha})
\end{equation}
It's convenient to define,
\begin{equation}
R_{\alpha} = x- x_{\alpha}
\end{equation}
\begin{equation}
q = k'-k
\end{equation}
With these substitutions, the equation takes the form,
\begin{equation}
\langle k'~|~U~|~k\rangle = \sum_{\alpha} \int \text{d}^{d}R_\alpha~ e^{-i q\cdot (R_\alpha+x_\alpha)}~ U_{\alpha} (R_{\alpha})
\end{equation}
Thus the expression for the matrix element reduces to
\begin{equation}
\langle k'~|~U~|~k\rangle = \sum_{\alpha} U_\alpha(q)~e^{-iq x_\alpha}
\end{equation}
where $U_\alpha (q)$, the atomic form factor, is the Fourier transform of the atomic potential,
\begin{equation}
U_{\alpha}(q) =\int \text{d}^{d}R_{\alpha}~ e^{-iq\cdot R_{\alpha}} ~ U_{\alpha}(R_{\alpha})
\end{equation}
The differential cross-section is proportional to the square of this matrix element,
\begin{equation}
|\langle k'~|~U~|~k\rangle|^2 = \sum_{\alpha,\alpha'} U_\alpha(q)~U^{*}_{\alpha'}(q)~e^{-iq x_\alpha}~e^{iq x_{\alpha'}}
\end{equation}
If all the atoms are alike, which is reflected in $U_\alpha$, then $|U_\alpha(q)|^2$ comes out of the summation and we have,
\begin{equation}
\frac{d\sigma}{d\Omega} \sim |U_\alpha(q)|^2 I(q)
\end{equation}
\begin{equation}
I(q) =\left\langle \sum_{\alpha, \alpha'} e^{-iq(x_{\alpha}-x_{\alpha'})} \right\rangle
\end{equation}
$I(q)$ is called the $structure~function$ and depends only on the positions of the atoms and not on the nature of the interaction between the atoms and the scattering probe. The structure function is a sum of $N^2$ complex numbers. If the positions of the atoms are random (ideal gas) then the only terms which do not average to zero are those with $\alpha = \alpha'$, and hence $I(q)$ grows linearly with system size. We can make $I(q)$ intensive by dividing it by $N$. The resulting function,
\begin{equation}
S(q) = \frac{I(q)}{V} \qquad \text{or} \qquad \frac{I(q)}{N}
\end{equation}
is called the $structure~factor$. Thus, the scattering cross-section has information about the spatial structure of a many-particle system, as well as atomic information encoded in the atomic form factor. The structure function has information about the average relative positions of atoms. Now we move on to make the connection between the structure function and the density-density correlation function. Let's first define the number density
\begin{equation}
n(x) = \sum_{\alpha} \delta(x-x_{\alpha})
\end{equation}
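The statement above, that for random (ideal-gas) positions only the $\alpha = \alpha'$ terms survive so that $I(q)$ grows only linearly in $N$, can be checked directly. The sketch below uses illustrative parameters, not values from the text.

```python
# Structure function I(q) = |sum_a exp(-i q x_a)|^2 for random (ideal-gas)
# positions: the cross terms average away and I(q)/N is of order 1,
# while the forward direction gives I(0) = N**2.
import cmath
import random

random.seed(1)
N, L = 2000, 100.0
x = [random.uniform(0, L) for _ in range(N)]

def I(q):
    amp = sum(cmath.exp(-1j * q * xa) for xa in x)
    return abs(amp) ** 2

q_values = [2 * cmath.pi * k / L for k in range(1, 41)]  # q != 0
avg = sum(I(q) for q in q_values) / len(q_values) / N
print(avg)        # of order 1, not of order N
print(I(0.0))     # exactly N**2
```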
We define the density-density correlation function as
\begin{equation}
C_{nn}(x_1,~x_2) =\left\langle n\left(x_1 \right)~n\left(x_2\right)\right\rangle
\end{equation}
For our case, this is,
\begin{equation}
C_{nn}(x_1,~x_2) =\left\langle \sum_{\alpha,\alpha'} \delta(x_1-x_\alpha)~ \delta(x_2-x_{\alpha'}) \right\rangle
\end{equation}
Looking a little more closely, we identify that the structure function is essentially the Fourier transform of the density-density correlation function, $C_{nn}(x_1, x_2)$,
\begin{equation}
I (q) = \left\langle n\left(q \right)~n\left(-q\right)\right\rangle
\end{equation}
where,
\begin{equation}
n(q) = \int \text{d}^{d}x~n(x)~ e^{-iq\cdot x} = \sum_{\alpha} e^{-iq\cdot x_\alpha}
\end{equation}
For large separations $C_{nn}(x_1,~x_2)$ decouples into the product of average densities, $\left\langle n(x_1)\right\rangle \left\langle n(x_2) \right\rangle$. It is then convenient to define the $Ursell~function$
\begin{equation}
S_{nn}(x_1,~x_2) = C_{nn}(x_1,~x_2) - \left\langle n\left(x_1 \right)\right\rangle ~\left\langle n\left(x_2\right)\right\rangle
\end{equation}
\begin{equation}
S_{nn}(x_1,~x_2) = \left\langle n\left(x_1 \right)~n\left(x_2\right)\right\rangle - \left\langle n\left(x_1 \right)\right\rangle ~\left\langle n\left(x_2\right)\right\rangle
\end{equation}
\begin{equation}
S_{nn}(x_1,~x_2) = \left\langle [n\left(x_1 \right)- \left\langle n\left(x_1 \right)\right\rangle]~[n\left(x_2 \right)- \left\langle n\left(x_2 \right)\right\rangle]\right\rangle
\end{equation}
\begin{equation}
S_{nn}(x_1,~x_2) = \left\langle \delta n(x_1)~\delta n(x_2)\right\rangle
\end{equation}
If the correlation is weak then the Ursell function will go to zero once we move to a distance larger than the correlation length $\xi$.
\begin{equation}
S_{nn}(x_1,~x_2) \rightarrow 0 \qquad \text{for} \quad |x_1 - x_2|> \xi
\end{equation}
We are interested in the Fourier transform of the Ursell function. Since its spatial extent is small, the double integral below is proportional to the volume, so we divide by $V$ to keep it intensive.
\begin{equation}
S_{nn}(q) = \frac1V \int \text{d}^{d}x_1~\text{d}^{d}x_2~e^{-iq(x_1-x_2)}~S_{nn}(x_1,~x_2)
\end{equation}
Again, from the definition of Ursell function
\begin{equation}
S_{nn}(x_1,~x_2) = C_{nn}(x_1,~x_2) - \left\langle n\left(x_1 \right)\right\rangle ~\left\langle n\left(x_2\right)\right\rangle
\end{equation}
and the structure function $I(q)$, after Fourier transforming, we get
\begin{equation}
I(q) = \left|\int \text{d}^{d}x~e^{-iq(x)}\langle n(x)\rangle\right|^2 + V~S_{nn}(q)
\end{equation}
So,
\begin{equation}
\langle n(q) \rangle = \int \text{d}^{d}x~ e^{-iq(x)} \langle n(x)\rangle
\end{equation}
is proportional to the volume when non-zero, so the first term above scales as the volume squared.
*Thus, for an isotropic and homogeneous fluid, the structure factor and the Fourier transform of the Ursell function are identical except at $q=0$.
\begin{equation}
S(q) = S_{nn}(q) + \langle n \rangle^2 (2\pi)^3 \delta(q)
\end{equation}
where we have used the connection between the Kronecker and Dirac deltas, $V\delta_{q,0} = (2\pi)^3 \delta(q)$.
*For an ideal gas, $S_{nn}(q) = \langle n \rangle$, independent of $q$.
*In periodic solids, $\langle n(q) \rangle$ is non-zero on the lattice of reciprocal vectors $G$ and hence gives a $V^2$ contribution to the structure function.
The general form of the structure factor is
\begin{equation}
S(q) = \frac1V \left|\int \text{d}^{d}x~e^{-iq(x)}\langle n(x)\rangle\right|^2 +S_{nn}(q)
\end{equation}
Thus this gives $\delta$-function contributions to $S(q)$, i.e., sharp peaks at many scattering angles.
!Pair Distribution Function
The pair distribution function is defined as,
\begin{equation}
\langle n(x_1)\rangle {g}(x_1, x_2)\langle n(x_2)\rangle = \left\langle \sum_{\alpha\neq\alpha'}\delta(x_1-x_\alpha) \delta(x_2-x_{\alpha'}) \right\rangle
\end{equation}
This can be simplified to,
\begin{equation}
\langle n(x_1)\rangle {g}(x_1, x_2)\langle n(x_2)\rangle = \left\langle n(x_1)n(x_2)\right\rangle - \langle n (x_1)\rangle \delta(x_1-x_2)
\end{equation}
One way to look at the pair distribution function: given a particle at $x_1$, $g(x_1, x_2)$ measures the probability of finding another particle at $x_2$. This is a very useful construct for homogeneous systems with translational invariance, in which case $g(x_1, x_2)\rightarrow g(x_1-x_2)$.
\begin{equation}
\langle n\rangle^2 {g}(x_1-x_2) =\frac1V\int \text{d}^{d}x_2 \left\langle \sum_{\alpha\neq\alpha'}\delta(x_1-x_\alpha) \delta(x_2-x_{\alpha'}) \right\rangle
\end{equation}
Let us use $x_1-x_2 = x$ and integrate over one of the delta functions.
\begin{equation}
\langle n\rangle^2 {g}(x_1-x_2) =\left\langle \sum_{\alpha\neq\alpha'}\frac1V \int \text{d}^{d}x_2~ \delta(x
+x_2-x_\alpha)~ \delta(x_2-x_{\alpha'}) \right\rangle
\end{equation}
\begin{equation}
\langle n\rangle^2 {g}(x_1-x_2) = \frac1V \left\langle \sum_{\alpha\neq\alpha'}\delta(x-x_\alpha+x_{\alpha'}) \right\rangle
\end{equation}
This can be conveniently written as
\begin{equation}
{g}(x_1-x_2) = \frac{1}{\langle n\rangle} \left\langle \sum_{\alpha\neq0}\delta(x-x_\alpha+x_0) \right\rangle
\end{equation}
For an ideal gas, $g(x)$ is independent of $x$, and hence
\begin{equation}
\int \text{d}^{d}x ~ g(x)~\langle n\rangle = N-1\implies g(x) = 1-N^{-1} \approx 1
\end{equation}
If the system is isotropic then $g(x)$ becomes $g(r)$ and is called the $radial~distribution~function$. The pair distribution function for an ideal gas is uniform with unit magnitude, while for fluids it has a peak near the Lennard-Jones radius and decays away from it to unit magnitude, which is indicative of the short range correlations in fluids. In ideal crystals it is a periodic array of delta functions, showing the infinite-range correlations in them.
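The flat, unit-magnitude $g$ of an ideal gas can be seen in a minimal numerical sketch (illustrative parameters, not from the text): for points placed uniformly at random on a ring, the estimated $g$ stays close to 1 at every separation.

```python
# Estimate the pair distribution function g for an ideal gas of N points
# on a ring of circumference L (periodic boundary conditions).
import random

random.seed(2)
N, L, nbins = 400, 100.0, 20
pos = [random.uniform(0, L) for _ in range(N)]

rmax = L / 2
dr = rmax / nbins
hist = [0] * nbins
for i in range(N):
    for j in range(N):
        if i != j:
            d = abs(pos[i] - pos[j])
            d = min(d, L - d)          # minimum-image distance on the ring
            if d < rmax:
                hist[int(d / dr)] += 1

dens = N / L
# ideal-gas normalisation: each bin of width dr (on both sides of a particle)
# holds on average N * dens * 2 * dr pairs
g = [h / (N * dens * 2 * dr) for h in hist]
print(min(g), max(g))  # both close to 1
```

For an interacting fluid the same estimator would instead show the short-range peak discussed above.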
!Crystalline Solids
A perfect crystal consists of atoms, molecules, etc., arranged in a space-filling array of periodically repeated identical copies of a single structural unit called the $unit~cell$. So there is an underlying lattice structure. Any lattice point can be specified by the independent primitive translation vectors $a_1,~a_2,..$
\begin{equation}
R_l = l_1 a_1 + l_2 a_2 + ... +... + l_d a_d
\end{equation}
The set of vectors $a_1,~a_2,.., a_d$ completely specifies the mathematical lattice.
The translation vector connecting any two points in the lattice is given by,
\begin{equation}
T = R_l-R_{l'}
\end{equation}
A $Bravais~lattice$ is the collection of all points in space which can be reached from the origin with position vectors $R_l$. The Bravais lattice looks the same when viewed from any of its lattice points. A $crystal$ is a periodic arrangement of one or more atoms ($the~basis$) repeated at each lattice point. In each of 0-dimensional and 1-dimensional space there is just one type of Bravais lattice. In two dimensions there are five Bravais lattices: oblique, rectangular, centred rectangular (rhombic), hexagonal, and square. There are 14 Bravais lattices in 3 dimensions.
The density of the crystal is written as,
\begin{equation}
n(x) = \sum_l \delta(x-R_l)
\end{equation}
If the lattice has a basis with atoms of mass $m_\alpha$ at positions $c_\alpha$, then the mass density is
\begin{equation}
\rho(x) = \sum_{l,\alpha} m_\alpha~\delta(x-R_l-c_\alpha)
\end{equation}
For a perfect crystal the density is translationally invariant,
\begin{equation}
\rho(x) =\rho(x+T)
\end{equation}
But, there are no perfect crystals! Nevertheless, the average density has the periodicity of the perfect crystal,
\begin{equation}
\langle\rho(x)\rangle =\langle\rho(x+T)\rangle
\end{equation}
!The reciprocal lattice
The reciprocal lattice vectors, $G$, satisfy the following property with any translation vector $T$.
\begin{equation}
e^{i(G\cdot T)} = 1
\end{equation}
This is saying that the plane wave $e^{iG\cdot r}$ is invariant under translation by a lattice translation vector $T$. Every periodic lattice has sets of equidistant parallel planes. Each such set can be defined by a corresponding normal vector $G$. These reciprocal lattice vectors $G$ satisfy the following condition with the translation vectors,
\begin{equation}
G\cdot T = 2\pi n
\end{equation}
Corresponding to the direct lattice primitive vectors, there are reciprocal primitive vectors defined by,
\begin{equation}
b_1 = 2\pi \frac{a_2\times a_3}{a_1\cdot(a_2\times a_3)}
\end{equation}
with $b_2$ and $b_3$ given by cyclic permutations of $a_1,~a_2,~a_3$.
By definition, they satisfy,
\begin{equation}
a_i\cdot b_j = 2\pi~\delta_{ij}
\end{equation}
The reciprocal lattice vector is in general written as,
\begin{equation}
G_n = n_1 b_1 + n_2 b_2 + ... +... + n_d b_d
\end{equation}
These reciprocal lattice vectors form the reciprocal lattice. The Wigner-Seitz cell of the reciprocal lattice is called the first $Brillouin~zone$. The reciprocal lattice is itself a Bravais lattice, and the reciprocal of the reciprocal lattice is the original lattice. The diffraction pattern of a crystal formed by a parallel set of planes in real space corresponds to lattice points in the reciprocal lattice. Thus from diffraction we have information about the reciprocal lattice vectors, which can be used to infer the atomic arrangement of a crystal.
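The construction of the $b_i$ and the duality relation $a_i\cdot b_j = 2\pi\delta_{ij}$ are easy to verify numerically. The sketch below uses FCC primitive vectors as an illustration (not an example from the text); the resulting $b_i$ are, up to scale, BCC primitive vectors, reflecting the well-known FCC/BCC reciprocity.

```python
# Compute reciprocal primitive vectors b_i from direct primitive vectors a_i
# and verify the duality relation a_i . b_j = 2*pi*delta_ij.
import numpy as np

a = 1.0
# FCC primitive vectors (conventional cube edge a)
a1 = np.array([0.0, 1.0, 1.0]) * a / 2
a2 = np.array([1.0, 0.0, 1.0]) * a / 2
a3 = np.array([1.0, 1.0, 0.0]) * a / 2

vol = np.dot(a1, np.cross(a2, a3))        # primitive-cell volume
b1 = 2 * np.pi * np.cross(a2, a3) / vol
b2 = 2 * np.pi * np.cross(a3, a1) / vol
b3 = 2 * np.pi * np.cross(a1, a2) / vol   # cyclic permutations

A = np.array([a1, a2, a3])
B = np.array([b1, b2, b3])
print(A @ B.T / (2 * np.pi))  # identity matrix: a_i . b_j = 2*pi*delta_ij
```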
!Bragg Scattering
A periodic function can be expanded in a Fourier series,
\begin{equation}
f(x) = \sum_{G}e^{iGx}~f_G
\end{equation}
Periodicity implies,
\begin{equation}
f(x+T) = f(x)\implies G\cdot T = 2\pi n
\end{equation}
This can be understood by looking at
\begin{equation}
f(q) = \int d^{d}x~e^{-iqx}~f(x) = \sum_T e^{-iq\cdot T}\int_0 d^{d}x~e^{-iqx}~f(x)
\end{equation}
where $\int_0$ denotes an integral over a unit cell. For a crystal, this is non-zero only if $q$ is a reciprocal lattice vector. So, this turns out to be,
\begin{equation}
f(q) = N_c v_0\sum_G\delta_{q,G}~f_G
\end{equation}
where, $v_0$ is the volume of the unit cell and,
\begin{equation}
f_G = \frac{1}{v_0}\int d^{d}x~e^{-iGx}~f(x)
\end{equation}
So, for density,
\begin{equation}
\langle n(x)\rangle = \sum_{G}e^{iGx}~\langle n_G\rangle
\end{equation}
Thus the average number density in a periodic solid is completely specified by the $\langle n_G\rangle$ at the reciprocal lattice vectors.
The differential cross section for crystals, as noted earlier for the structure factor as well, turns out to be
\begin{equation}
\frac{d\sigma}{d\Omega} = V^2 \sum_G |U_G|^2 ~ \delta_{q,G}
\end{equation}
Thus there will be peaks in the scattering pattern at every reciprocal lattice vector with intensity proportional to the square of the volume. These are called the $Bragg$ scattering peaks of the solid. Also, we are interested in elastic scattering, which leads to the $Laue~condition$. From the analysis, the condition for a Bragg peak is $q=G$. Also, $q=k'-k$ and $|k| = |k'|$. Writing $k' = k+G$,
\begin{equation}
k^{'2} = k^2+G^2+2k \cdot G
\end{equation}
\begin{equation}
G^2 = -2~k\cdot G
\end{equation}
Since $-G$ is also a reciprocal lattice vector, we can write $G^2 = 2k \cdot G$. This boils down to the Bragg condition $2d~\sin\theta = \lambda$ for $G = 2\pi/d$ and $k=2\pi/\lambda$. Thus even this sophisticated machinery yields the same Bragg condition as obtained from the simple arguments of path difference.
Let's look at the structure factor again for crystals,
\begin{equation}
S(q) = \sum_G|\left\langle n_G \right\rangle|^2(2\pi)^d ~\delta(q-G) + S_{nn}(q)
\end{equation}
The intensity is thus proportional to $|\left\langle n_G \right\rangle|^2$ whenever $q=G$. There are also contributions from $S_{nn}(q)$. For the simplest case of an atom fixed at each lattice point, $|\left\langle n_G \right\rangle|^2= v_{0}^{-2}$ for each $G$. Thus there will be peaks at each reciprocal lattice vector. However, since the atomic form factor becomes small for large $q$, even our idealized model will decay at large $q$. For a more realistic system we can have vanishing contributions to the intensity, even at reciprocal lattice vectors. Also, we have assumed the atoms are rigidly fixed, but in general there are fluctuations which cause the intensity to decay exponentially at large $G$ through the Debye-Waller factor.
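The suppression of Bragg peaks by fluctuations can be seen in a one dimensional toy calculation (a sketch with illustrative parameters, not from the text): a perfect chain gives $S(G) = N$ at the reciprocal lattice vectors and essentially zero off them, while random displacements of the atoms reduce the peak, mimicking the Debye-Waller factor.

```python
# S(q) = |sum_n exp(-i q x_n)|^2 / N for a chain of N atoms with spacing a.
# Peaks of height N occur at G = 2*pi*m/a; Gaussian displacements of the
# atoms suppress them, a Debye-Waller-like reduction.
import cmath
import random
from math import pi

random.seed(3)
N, a = 400, 1.0
perfect = [n * a for n in range(N)]
u = 0.1                                       # displacement scale
displaced = [xn + random.gauss(0.0, u) for xn in perfect]

def S(q, xs):
    amp = sum(cmath.exp(-1j * q * xn) for xn in xs)
    return abs(amp) ** 2 / len(xs)

G = 2 * pi / a                                # first reciprocal lattice vector
print(S(G, perfect))      # = N: a Bragg peak
print(S(G / 2, perfect))  # ~ 0: no peak off the reciprocal lattice
print(S(G, displaced))    # reduced peak, of order N * exp(-G**2 * u**2)
```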
.viewer {
line-height: 125%;
font-size: 12pt;
}
.viewer pre {
font-size: 10pt;
}
#mainMenu {
width: 11em;
}
*[[Quasiperiodic structures]]
*[[Models of spin systems]]
*[[Order parameter]]
*[[Ising model]]
*[[Mean field theory]]
*[[Fluctuations]]
*[[Coarsening]]
*[[Linear response theory]]
*[[Adhesion and self cleaning in gecko setae]]
*[[Generalised elasticity]]
*[[Computational physics|https://github.com/rajeshrinet/compPhy]]