Statistical thermodynamics is a field of physics which bridges the gap between the microscopic world of quantum mechanics and the macroscopic world of our everyday experience. Using mathematics, it explains how the quantum mechanical properties of atoms and molecules are consistent with the thermodynamics of macroscopic quantities of these particles.

**Two Size Regimes**

The microscopic world refers to the physics of atoms, molecules, or small collections of these. The macroscopic world refers to much larger quantities of these particles. For example, a microscopic property of water is how much energy it takes to pull an electron off a water molecule. A corresponding example of a macroscopic property of water is its conductivity^{1}.

**How It Works**

In order to determine a macroscopic property, one calculates this property for **all** the individual atoms^{2} in a substance, and then takes the average^{3}. This is where the 'statistical' part of the name comes from - we are applying the mathematics of statistics to our particular problem, a large group of atoms.

An example of a statistical measurement would be to take the average height of a group of people. First, the height of each individual would have to be measured. Then, all of these heights would be added together, and this total would be divided by the number of people in the group. This would give us the average height of members of the group.
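The averaging procedure above can be sketched in a few lines of Python. The heights here are made-up values, purely for illustration:

```python
# Hypothetical data: heights (in cm) of a small group of people.
heights = [172.0, 165.5, 180.2, 158.7, 175.1]

# The average is the sum of all the measurements divided by their count.
average_height = sum(heights) / len(heights)
print(f"Average height: {average_height:.1f} cm")
```

Statistical thermodynamics applies this same sum-then-divide recipe, just with vastly more "individuals" and with quantum mechanics dictating which values can occur.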

In statistical thermodynamics, we apply the exact same procedure, but instead of working with people, we work with atoms. The key difference here is that quantum mechanics puts restrictions on what values we can measure for the properties of atoms.

There is one major problem with this method as described. A typical amount of a substance we are interested in would be about one gram. We can't actually measure the properties of each atom in one gram, because there are around 100,000,000,000,000,000,000,000 (10^{23}) atoms in that one gram of substance^{4}! Instead of measuring the properties of each individual atom, we pick a certain value of the property we are calculating, and then calculate how many atoms in the sample have this value of the property. We can imagine that in a uniform sample, this procedure will greatly reduce the amount of work we have to do.

**Boltzmann Equation**

The Boltzmann equation^{5} allows us to calculate the probability of finding an atom in one of its allowed states. This then allows us to calculate the average. The Boltzmann equation is:

P = A*exp[-E/(k_{B}*T)]

**P** is the probability of an atom being in a state which has an energy **E**.

**A** is a factor of proportionality - it ensures that the total probability never exceeds a value of one^{6}.

**exp** refers to the exponential function - it means raise the number **e** to the power given in brackets^{7}.

**k**_{B} is the Boltzmann constant, which relates the temperature of a substance to the energy scale of its atoms. Together with the temperature, it determines how likely it is for an atom to be in any one of its allowed states.

**T** is the temperature of the substance that we wish to do our calculation for.

We can now use this equation to calculate our average. For example, let's calculate the average energy of our sample. First, by plugging in the lowest possible energy, we would determine the relative number of atoms which have this value of energy (such atoms are said to be in their 'ground state'). We'll call this number P_{0} since it represents the probability of finding an atom in the zeroth energy state, E_{0}. We would then repeat this for the next highest allowed energy state E_{1} (the first 'excited state'), calling the result P_{1}. We could continue this process for any number of allowed energy states, but for simplicity we'll assume we only have two allowed states.
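Here is a minimal sketch of that calculation in Python. The two state energies are illustrative assumptions (a ground state at zero and an excited state 10^{-21} joules higher, at room temperature); only the Boltzmann constant is a real physical value:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, in joules per kelvin

def boltzmann_probabilities(energies, temperature):
    """Return the normalized Boltzmann probabilities for a list of state energies."""
    factors = [math.exp(-e / (K_B * temperature)) for e in energies]
    normalization = sum(factors)  # this plays the role of the factor A:
    return [f / normalization for f in factors]  # the probabilities sum to one

# Illustrative two-state system: ground state at 0 J, excited state 1e-21 J higher.
energies = [0.0, 1.0e-21]
p0, p1 = boltzmann_probabilities(energies, temperature=300.0)
print(p0, p1)  # p0 > p1: the lower-energy state is always the more populated one
```

Note how the normalization step fixes **A** automatically, so the probabilities add up to exactly one.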

Now, if we have a sample which contains N atoms, the number of atoms which are in the zeroth energy state is given by multiplying the probability of finding an atom in this state by the total number of atoms:

N_{0} = N*P_{0}

In our sample above, N was roughly equal to 10^{23}. If P_{0} is substantial, we've done the work of calculating the energy for a large portion of our sample!

The same equation applies for the next energy state (P_{1}). We now know how many atoms are in each of the two possible energy states we are considering. This allows us to go back and use the formula for calculating the average energy. We know we have N_{0} atoms with energy E_{0}, and N_{1} atoms with energy E_{1}, so our average energy is given by:

Average Energy = (N_{0}*E_{0} + N_{1}*E_{1})/N
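Putting the whole two-state calculation together in Python (the energies, temperature, and atom count are illustrative assumptions, as before):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, in joules per kelvin

# Illustrative two-state sample (energies in joules; values are assumptions).
E0, E1 = 0.0, 1.0e-21
T = 300.0
N = 1e23  # roughly the number of atoms in one gram of a light substance

# Boltzmann probabilities, normalized so that P0 + P1 = 1.
f0 = math.exp(-E0 / (K_B * T))
f1 = math.exp(-E1 / (K_B * T))
P0, P1 = f0 / (f0 + f1), f1 / (f0 + f1)

# Populations of the two states: N0 = N*P0 and N1 = N*P1.
N0, N1 = N * P0, N * P1

# Average energy per atom, exactly as in the formula above.
average_energy = (N0 * E0 + N1 * E1) / N
print(average_energy)
```

Notice that N cancels out: the average energy equals P_{0}*E_{0} + P_{1}*E_{1}, so the probabilities alone are enough.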

We now know how much energy a macroscopic sample would have, based purely on the determined properties of its component atoms! This is an extremely powerful tool, as it allows us to make theoretical predictions about the macroscopic properties of materials that have not yet been made.

**Accuracy of the Calculations**

By doing a full treatment of a given problem or system, a high degree of accuracy can be obtained. As an example of the accuracy, one calculation predicts the volume of sodium chloride (salt) to be 38.14 cm^{3} per mole^{8}. The experimentally determined value is 37.74 cm^{3} per mole. There is only slightly over 1% difference between these values! This type of accuracy has been routinely achieved with simple substances.
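The "slightly over 1%" figure is easy to verify from the two quoted values:

```python
predicted = 38.14   # cm^3 per mole, from the statistical calculation
measured = 37.74    # cm^3 per mole, from experiment

# Percent difference relative to the measured value.
percent_difference = 100.0 * (predicted - measured) / measured
print(f"{percent_difference:.2f}%")
```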

**Conclusion**

In the above example we have calculated the average energy, but we didn't have to stop there. We could have used any property which we know how to calculate on the atomic scale. We would still use the energy to calculate the probabilities of the states, but when we take our average in the last step we would have multiplied by the value of the property associated with each state, instead of the energy of each state.
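This generalization is a one-line change to the earlier sketch: the energies still set the probabilities, but we average whatever per-state values we like. The magnetic moments below are hypothetical numbers chosen purely for illustration:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, in joules per kelvin

def thermal_average(energies, values, temperature):
    """Average of an arbitrary per-state property, weighted by Boltzmann probabilities.

    The energies determine the probabilities; `values` holds the property
    associated with each state. To average the energy itself, pass
    values=energies.
    """
    factors = [math.exp(-e / (K_B * temperature)) for e in energies]
    z = sum(factors)
    return sum((f / z) * v for f, v in zip(factors, values))

# Hypothetical example: magnetic moments (arbitrary units) of two states.
energies = [0.0, 1.0e-21]
moments = [+1.0, -1.0]
print(thermal_average(energies, moments, 300.0))
```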

One of the first things that students studying this field often learn is how to derive the gas state equations from first principles. This is an especially satisfying calculation, as many of these equations have been given to and used by the student without any justification. Statistical thermodynamics provides an explanation for the macroscopically observed gas state equations, based on the known microscopic properties of atoms and molecules.

In order for this calculation to be accurate, the system being studied must be in *equilibrium*. It is only under equilibrium conditions that the Boltzmann factor is accurate. Being in equilibrium has many conditions attached to it; perhaps the most obvious is that the temperature of the sample should be uniform. The temperature is allowed to be changing, but the rate of change must be slow enough that the sample retains its uniformity. *Statistical mechanics* is a more general field of study, which includes systems which are not in equilibrium. It is, of course, much more complicated than statistical thermodynamics.

This type of calculation is performed repeatedly in physics, chemistry and engineering. One specific example is that of semiconductors. Statistical mechanics allows scientists and engineers to 'experiment' with exotic new materials, calculating how they should work, without incurring the actual expense or time required to make the full 'real world' experiments. In this way, the search for new materials can be directed to those with the greatest chances of success, thus speeding up the research and development cycle.

^{1} Conductivity is a measure of how well a substance conducts electricity

^{2} Or molecules, or whatever the smallest realistic structure is

^{3} Other statistical properties can be calculated in the same way, such as standard deviation

^{4} This is dependent upon what the substance actually is. The example given is the number of atoms in one gram of water. The number of atoms in one gram of lead would be smaller by a factor of about 34, since lead atoms are far heavier - but it would still be a fantastically large number.

^{5} This equation is named after the Austrian physicist who discovered it, Ludwig Boltzmann (1844 - 1906)

^{6} The rigorous condition is that the sum of the probabilities has to be exactly equal to one. This is where the math can get especially tricky.

^{7} For example, exp[3.1] = e^{3.1} = 22.198

^{8} A mole of sodium chloride is equal to 58.5 grams