Callen:
I. There
exist particular states (called equilibrium states) of simple systems that,
macroscopically, are characterized completely by the internal energy U,
the volume V, and the mole numbers N1, N2,
…, Nr of the chemical components.
II. There
exists a function (called the entropy S) of the extensive
parameters of any composite system, defined for all equilibrium states and
having the following property: The values assumed by the extensive parameters
in the absence of an internal constraint are those that maximize the entropy
over the manifold of constrained equilibrium states.
III. The
entropy of a composite system is additive over the constituent subsystems. The
entropy is continuous and differentiable and is a monotonically increasing
function of the energy. [Callen then adds: Several mathematical consequences
follow immediately.]
IV. The entropy of any system vanishes in the state for which (∂U/∂S) = 0, with volume and all mole numbers held constant. That is, the entropy goes to zero at the zero of absolute temperature.
Robertson:
1. The
macroscopic thermodynamic equilibrium states of simple systems are completely
specified in terms of the extensive parameters (U, V, Ni in
the present context), where the Ni are the
component populations, with all the information contained in the entropy
function S(U,V,Ni), called the fundamental
relation.
2. The
entropy of a simple system is a continuous, differentiable,
homogeneous-first-order function of the extensive parameters, monotone
increasing in U.
3. The entropy of a composite system, S({Ui, Vi, {Ni}}), is the sum of the entropies of the constituent subsystems:
S({Ui, Vi, {Ni}}) = S1({U1, V1, {N1i}}) + S2({U2, V2, {N2i}}) + … .
4. The
equilibrium state of a composite system when a constraint is removed maximizes
the total entropy over the set of all possible constrained equilibrium states
compatible with the constraint and the permitted range of extensive parameters.
5. The entropy of a simple system approaches zero when (∂U/∂S)V,{N} → 0.
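In symbols (my notation, not Robertson's), postulates 3 and 4 together say that the total entropy of a composite system is the sum of the subsystem entropies, and that removing an internal constraint selects the values of the now-unconstrained extensive parameters that maximize this sum subject to whatever conservation conditions remain. For a composite system whose subsystems can exchange only energy internally,

$$
S_{\mathrm{tot}} = \sum_{j} S_j\!\left(U_j, V_j, \{N_{ji}\}\right),
\qquad
S_{\mathrm{eq}} = \max_{\{U_j\}} S_{\mathrm{tot}}
\quad \text{subject to} \quad \sum_{j} U_j = U.
$$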
Like
Callen, Robertson uses the idea of equilibrium states of simple macroscopic
systems as a starting point, with equilibrium sort of implicitly taken to be
any state that can be characterized completely by a constant internal (average)
energy, constant volume, and constant “component populations” (Robertson) or
“mole numbers of the chemical components” (Callen).
Notice
that C and R both use a completeness specification. The word “completeness”
resonates a little bit here with the “complete set of commuting observables” in
quantum mechanics. We could say an equilibrium state in thermodynamics is
characterized by—or exists because of—the existence of stationary values of a
complete set of extensive parameters, which are U, V and {Ni}
in the entropy representation.
Also
notice that a fundamental relation such as the monotonically increasing entropy
function S(U,V,Ni) in thermodynamics is described
similarly to the wave function in quantum mechanics, in that both are said to
contain all the information about the system.
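As a concrete illustration of "contains all the information" (standard entropy-representation relations, not a quotation from either author): once the fundamental relation S(U, V, {Ni}) is known, the intensive parameters, and from them the equations of state, follow by differentiation,

$$
\frac{1}{T}=\left(\frac{\partial S}{\partial U}\right)_{V,\{N\}},
\qquad
\frac{P}{T}=\left(\frac{\partial S}{\partial V}\right)_{U,\{N\}},
\qquad
-\frac{\mu_i}{T}=\left(\frac{\partial S}{\partial N_i}\right)_{U,V,\{N_{j\neq i}\}}.
$$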
An aside related to vocabulary
The
word “system” should be used with some humility and caution, rather like the
word “universe”. An ideal isolated system in thermodynamics is a universe unto itself
(if you don’t tamper with it), while, conversely, the universe is a system unto
itself. What are they really? Models. Mainly,
“system” is a very broadly used word in science and engineering and it can
close off creative thinking rather than promoting it. Some people—Darwin and Fowler in their 1922 and 1923 papers, and Schrödinger in his little Statistical Thermodynamics book—have chosen to use the word “assembly” instead of
“system” when discussing Boltzmann’s ideal gas and Planck’s ideal
electromagnetic resonators. These authors also use the word “system,” but they
refer to the individual molecules or Planck resonators/vibrators/oscillators as
the systems that make up the assembly under consideration. Thus, in their view, an assembly is
macroscopic and must be assembled, and its “component population” is made of N
identical (sub)microscopic systems that each possess
mechanical and maybe electromagnetic energy (KE, PE). The assembly itself then has
some overall thermal energy distribution. A more complicated assembly would
be made up of a set {Ni} of different types of systems.
Now back to (thermodynamic) systems
analysis
But
I will continue talking about thermodynamic “systems” and their constituents
since this is the usual terminology.
Before
he provides the above postulates (Chapter 2, p. 66), Robertson describes a simple
system (his bold emphasis) as “a bounded region of space that is
macroscopically homogeneous.” He goes on to say: “That we regard a system as
simple may imply nothing more than that we have not examined it on a fine
enough scale. The failure to do so may be one of choice, or lack of it. We may
usually choose to regard an obviously complex system as simple if its internal
behavior is not involved in the problem at hand … the simple systems that are
treated in terms of statistical thermophysics are made up of atoms or
molecules, the spatial distribution of which is described by probability
densities that are constant (or periodic on the scale of a crystal lattice)
over the volume of the system.” Robertson then discusses the nature of possible
boundaries of simple systems, such as their being either material or “described
by a set of mathematical surfaces in space,” or diathermal (allowing thermal contact)
or adiabatic (preventing thermal contact), or restrictive to matter flow in
various degrees (semipermeable, open, closed), and whether they allow transfer
of energy via a work process (such as a movable piston).
I’ve discussed Robertson’s and Callen’s statements of the postulates of thermodynamics in this post in order to prepare for my next post, where I’ll compare these postulates with those of quantum mechanics and also, mainly, try to figure out why we don’t normally see the square of the wavefunction or the squares of the complex quantum mechanical superposition coefficients used as probabilities in the Shannon expression for entropy. Meanwhile, here’s a blog post on that subject: Wavefunction entropy. [The comparison of thermo and quantum postulates wasn't my next post. As of December 16, 2023, I still haven't managed to get to it. Later!]
Problem 1.10-3: "The fundamental equation of system A is

S = (R^2/v0θ)^(1/3) (NVU)^(1/3)

and similarly for System B. The two systems are separated by a rigid, impermeable, adiabatic wall. System A has a volume of 9×10^-6 m^3 and a mole number of 3 moles. System B has a volume of 4×10^-6 m^3 and a mole number of 2 moles. The total energy of the composite system is 80 J. Calculate and plot the entropy as a function of UA/(UA + UB). If the internal wall is now made diathermal and the system is allowed to come to equilibrium, what are the internal energies of the individual systems? (R^2, v0, and θ are constants.)"
Post-postscript, March 25: (The red text above is what I left out or wrote wrongly in my initial post. The red text below is what I re-wrote on March 29.) Non-numerical solution to our Problem 1.10-3: The given constraint U = UA + UB applies to the composite system with either the adiabatic wall or the diathermal wall. The composite system entropy sum S = SA + SB applies when the adiabatic wall is in place and subsystems A and B are energetically distinct, AND when the diathermal wall is in place with the particular values of UA and UB found from maximizing S = SA + SB. These are the thermal equilibrium values with the diathermal wall in place.
We have a continuum of different values for SA and SB that satisfy the sum S = SA + SB with the adiabatic wall in place, and these are Callen’s and Robertson’s “constrained equilibrium states” over which we want to maximize S. Using the energy constraint to write total entropy in terms of system A’s energy, and using constants kA and kB as stand-ins for all the alphabetic and numerical constants given in the problem,
S = SA + SB = kA UA^(1/3) + kB UB^(1/3) = kA UA^(1/3) + kB (U – UA)^(1/3)

dS/dUA = (kA/3) UA^(-2/3) – (kB/3)(U – UA)^(-2/3) = 0,

(checked just below to be a maximum rather than a minimum) resulting in

UA = U/[1 + (kB/kA)^(3/2)]

and

UB = U/[1 + (kA/kB)^(3/2)].
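A quick check that this stationary point is a maximum rather than a minimum: since kA and kB are positive, both terms of the second derivative are negative everywhere in the physical range 0 < UA < U,

$$
\frac{d^{2}S}{dU_A^{2}} \;=\; -\frac{2k_A}{9}\,U_A^{-5/3} \;-\; \frac{2k_B}{9}\,(U-U_A)^{-5/3} \;<\; 0,
$$

so S is concave in UA and the stationary point is the maximum we want.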
The ratios are easy to calculate, with the alphabetic constants canceling: kA/kB = [(NA VA)/(NB VB)]^(1/3) = (27/8)^(1/3) = 3/2, so (kB/kA)^(3/2) = (2/3)^(3/2) ≈ 0.544 and (kA/kB)^(3/2) = (3/2)^(3/2) ≈ 1.84. With U = 80 J, this gives UA ≈ 51.8 J and UB ≈ 28.2 J. Plotting the normalized relation "entropy as a function of UA/(UA + UB)" is left to the intrepid reader for the moment, though a quick numerical check follows below.
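Here is a minimal Python sketch (my own, not from Callen; the overall prefactor (R^2/v0θ)^(1/3) is set to 1, which rescales S uniformly and does not move the location of the maximum) that plots the total entropy against UA/(UA + UB) and confirms the equilibrium split:

```python
# Callen Problem 1.10-3: total entropy of the composite system vs. the
# fraction of the total energy held by system A.  The common prefactor
# (R^2/(v0*theta))^(1/3) is set to 1 (entropy in arbitrary units); this
# is an assumption for plotting only and does not shift the maximum.
import numpy as np
import matplotlib.pyplot as plt

U_total = 80.0                 # J, total energy of the composite system
NA, VA = 3.0, 9e-6             # moles, m^3 (system A)
NB, VB = 2.0, 4e-6             # moles, m^3 (system B)

kA = (NA * VA) ** (1.0 / 3.0)  # stand-in constants from S = k * U^(1/3)
kB = (NB * VB) ** (1.0 / 3.0)

# Total entropy as a function of x = UA / (UA + UB)
x = np.linspace(1e-6, 1.0 - 1e-6, 1000)
UA = x * U_total
S = kA * UA ** (1.0 / 3.0) + kB * (U_total - UA) ** (1.0 / 3.0)

# Closed-form maximum from dS/dUA = 0:  UA = U / [1 + (kB/kA)^(3/2)]
UA_eq = U_total / (1.0 + (kB / kA) ** 1.5)
UB_eq = U_total - UA_eq
print(f"kA/kB = {kA / kB:.3f}   (= (27/8)^(1/3) = 3/2)")
print(f"UA at equilibrium ≈ {UA_eq:.1f} J,  UB ≈ {UB_eq:.1f} J")

plt.plot(x, S)
plt.axvline(UA_eq / U_total, linestyle="--")
plt.xlabel("UA / (UA + UB)")
plt.ylabel("S (arbitrary units)")
plt.title("Total entropy vs. energy fraction in system A")
plt.show()
```

The dashed line marks the maximum at UA/(UA + UB) ≈ 0.65, i.e., UA ≈ 51.8 J with the diathermal wall in place.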