Fundamentals of Irreversible Thermodynamics for Coupled Transport

Engineering phenomena occur in open systems undergoing irreversible, non-equilibrium processes of coupled mass, energy, and momentum transport. Momentum transport often acts as a primary or background process, in which driving forces arising from physical gradients govern mass and heat transfer rates. Although in the steady state no physical variable varies explicitly with time, entropy increases with time as long as the system is open. The degree of irreversibility can be measured by the entropy production rate, first proposed by L. Onsager. This book conceptually reorganizes entropy and its production rate in broader contexts. Diffusion is described in full as an irreversible, i.e., entropy-increasing, phenomenon using four different physical pictures. Finally, an irreversible thermodynamic formalism using effective driving forces is established as an extension of Onsager's reciprocal theorem and applied to core engineering phenomena of fundamental importance: solute diffusion and thermal flux. In addition, the osmotic and thermal fluxes are explained within a unified theoretical framework.


Introduction
This chapter provides a comprehensive explanation of the steady-state thermodynamics of irreversible processes, with detailed theoretical derivations and examples. The origin and definitions of entropy are described, irreversible thermodynamics for a steady state is revisited based on Onsager's reciprocal theorem, and thermal and solute diffusion phenomena are recapitulated as examples of single-component irreversible thermodynamic processes.

Thermodynamic states
In fundamental and applied sciences, thermodynamics (or statistical mechanics) plays an important role in understanding macroscopic behaviors of a thermodynamic system using microscopic properties of the system. Thermodynamic systems have three classifications based on their respective transport conditions at interfaces.
An open system allows energy and mass transfer across its interface; a closed system allows transfer of energy only, preventing mass transfer; and, finally, an isolated system allows no transport across its interface.
Transfer phenomena of mass and energy are represented using the concept of flux, defined as the rate at which a physical variable of interest passes across a unit cross-sectional area per unit time. If the flux is constant, the input and output rates of a physical quantity within a finite volume are equal, and the density remains constant because the net accumulation within the system is zero. If the flux varies spatially, specifically J = J(x, y, z), then its density within the specified volume changes with time, i.e., ρ = ρ(t). This balance is expressed as the equation of continuity:

∂ρ/∂t + ∇·J = 0

Many engineering processes occur in an open environment, with specific mass and energy transfer phenomena as practical goals. An exception is a batch reaction, where interfacial transport is blocked and the transient variation of the internal system is of concern. If the internal characteristics of the open system change with time, the system moves toward a transient, non-equilibrium state. However, transiency is subject to the human perception of the respective time scale. If engineering system performance is averaged over a macroscopic time scale, such as hours, days, or weeks, the time-averaged performance is the primary concern, as those quantities can be compared with experimental data. Instead of transiency, the time to reach a steady state becomes more important in operating engineering processes because steady-state operation is usually sought. Usually, the time to reach a steady state is much shorter than the standard operation time in the steady state.
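The balance stated by the equation of continuity can be illustrated numerically. The sketch below (not from the book; grid size, speed, and the upwind flux model are assumed) updates a 1-D density field by the net face flux of each cell and confirms that the total mass, and hence the net accumulation, stays fixed when the fluxes balance.

```python
# A minimal sketch of the continuity equation d(rho)/dt + dJ/dx = 0 on a
# 1-D grid: each cell's density changes only by the difference of the
# fluxes through its two faces, so total mass is conserved exactly.
n, dx, dt = 50, 1.0, 0.1
rho = [1.0 if 20 <= i < 30 else 0.0 for i in range(n)]  # initial pulse
u = 0.5  # assumed constant advection speed; upwind flux J = u * rho

def step(rho):
    # face fluxes J[i] between cell i-1 and cell i (zero flux at walls)
    J = [0.0] + [u * rho[i - 1] for i in range(1, n)] + [0.0]
    # continuity update: rho_i -= dt/dx * (J_{i+1/2} - J_{i-1/2})
    return [rho[i] - dt / dx * (J[i + 1] - J[i]) for i in range(n)]

mass0 = sum(rho) * dx
for _ in range(40):
    rho = step(rho)
print(abs(sum(rho) * dx - mass0))  # net accumulation is ~0
```

The face fluxes telescope in the sum over cells, which is the discrete analogue of integrating ∇·J over the volume.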

Time scale and transiency
In theoretical physics, statistical mechanics and fluid dynamics are not fully unified, and non-equilibrium thermodynamics remains an open problem. It is often assumed that the fluid flow is not highly turbulent and that a steady state is reached with a fully developed flow field. The thermodynamic characteristics are maintained within the steady flow, and static equilibrium is assumed to be valid within small moving fluid elements. In such a situation, each fluid element can be qualitatively analogous to a microstate of a thermodynamic ensemble.
Nevertheless, a conflict between thermodynamics and fluid dynamics stems from the absence of a clear boundary between the static equilibrium of isolated systems and the steady state of open systems. In principle, the steady state belongs to the non-equilibrium states, although the partial time derivatives of all physical quantities are assumed to be zero (i.e., ∂/∂t = 0). The density does not change with time, but the flux exists, finite and constant in time and space. The time scale of particle motion can be expressed using the particle relaxation time, defined as τ_p = m/β, where m and β are the particle mass and Stokes' drag coefficient, respectively. The time scale for the fluid flow can be evaluated as the characteristic length divided by the mean flow speed, and the particle relaxation time scale is much shorter than the flow time scale. Therefore, local equilibrium may be applied without significant deviation from the real thermodynamic state.
In engineering, various dimensionless numbers are often used to characterize a system of interest. The Reynolds (Re) and Péclet (Pe) numbers indicate ratios of the convective transport to viscous momentum and diffusive heat/mass transport in a fluid, respectively. The Nusselt and Sherwood numbers represent ratios of the diffusion length scale as compared to the boundary layer thickness of the thermal and mass diffusion phenomenon, respectively. The Prandtl and Schmidt numbers represent ratios of momentum as compared to thermal and mass diffusivities, respectively. Other dimensionless numbers include the Biot number (Bi) (for both heat and mass transfer), the Knudsen number (Kn) (molecular mean free path to system length scale), the Grashof (Gr) number (natural buoyancy to viscous forces), and the Rayleigh number (natural convective to diffusive heat/mass transfer).
Note that all the dimensionless numbers described here implicitly assume the presence of fluid flow in open systems, because they quantify the relative significance of energy, momentum, and mass transport. The static equilibrium approximation (SEA) should be appropriate if the viscous force is dominant within a fluid region, preventing transient system fluctuations; this matters because non-equilibrium thermodynamics is not fully established in theoretical physics, and steady-state thermodynamics requires experimental observations to determine the thermodynamic coefficients between driving forces and generated fluxes.

Statistical ensembles
Thermodynamics often deals with macroscopic, measurable phenomena of systems of interest, consisting of objects (e.g., molecules or particles) within a volume. Statistical mechanics is considered as a probabilistic approach to study the microscopic aspects of thermodynamic systems using microstates and ensembles and to explain the macroscopic behavior of the respective systems.
Seven variables exist within statistical mechanics: temperature T, pressure P, and particle number N, which are conjugate to entropy S, volume V, and chemical potential μ, respectively, plus energy E in its various forms. A thermodynamic ensemble uses the first and second laws of thermodynamics and imposes the constraint that three out of the six variables (excluding E) remain constant. The other three conjugate variables are theoretically calculated or experimentally measured. Statistical ensembles are either isothermal (constant temperature) or adiabatic (zero heat exchanged at interfaces). The adiabatic category includes the NVE (microcanonical), NPH, μVL, and μPR ensembles, and the isothermal category includes the NVT (canonical), NPT (isobaric-isothermal or Gibbs), and μVT (grand canonical) ensembles. Here, the NVE and NPH ensembles are called microcanonical and isenthalpic, and the NVT, μVT, and NPT ensembles are called canonical, grand canonical, and isothermal-isobaric, respectively. Within statistical mechanical theories and simulations, canonical ensembles are most widely used, followed by grand canonical and isothermal-isobaric ensembles. The adiabatic ensembles are equivalent to isentropic ensembles (of constant entropy) and are represented as NVS, NPS, μVS, and μPS instead of NVE, NPH, μVL, and μPR, respectively. Non-isothermal ensembles often represent entropy S as a function of a specific energy form, details of which can be found elsewhere [1].

Entropy revisited

Thermodynamic laws
Thermodynamic laws can be summarized as follows:
• The zeroth law: For thermodynamic systems A, B, and C, if A = C and B = C, then A = B.
• The first law: The internal energy change ΔU is equal to the energy added to the system, Q, minus the work done by the system, W (i.e., ΔU = Q − W).
• The second law: An element of irreversible heat transferred, δQ, is the product of the temperature T and the increment of its conjugate variable S (i.e., δQ = TdS).
• The third law: As T → 0, S → constant, and S = k_B ln Ω, where Ω is the number of microstates.
The entropy S is defined in the second thermodynamic law, and its fundamental property is described in the third law, linking the macroscopic element of irreversible heat transferred (i.e., δQ) to the microstates of the system. Suppose you have N objects (e.g., people) and need to position them in a straight line with the same number of seats. The first and second objects have N and N − 1 choices, respectively; similarly, the third one has N − 2 choices, the fourth one has N − 3, and so on. The total number of ways to arrange them is

N! = N · (N − 1) · (N − 2) ⋯ 2 · 1

Example 1: In a car, there are four seats including the driver's. Three guests will occupy the same number of seats. How many different configurations are available? There are three people, A, B, and C, and three seats, S_1, S_2, and S_3. If A can choose a seat first, then A has three choices. Then B and C have, in sequence, two choices and one choice. The total number of possible configurations is therefore 3 · 2 · 1 = 3! = 6.
Next, the N objects are divided into two groups, with groups 1 and 2 containing N_1 and N_2 objects, respectively. The total number of possible ways to place the N objects into the two groups is

N!/(N_1! N_2!)

which is equal to the number of combinations of N objects taken N_1 at a time. For example, consider the binomial expansion

(x + y)³ = Σ_{n=0}^{3} a_n xⁿ y^{3−n}

where a_0 = a_3 = 1 and a_1 = a_2 = 3. For the power of N, the expansion is

(x + y)^N = Σ_{k=0}^{N} [N!/(k!(N − k)!)] x^k y^{N−k}

If we add a third variable x_3 with the constraint condition N = Σ_{k=1}^{3} N_k, then

(x_1 + x_2 + x_3)^N = Σ W x_1^{N_1} x_2^{N_2} x_3^{N_3}

where the coefficient of the polynomial expansion can be written, using product notation, as

W = N!/(N_1! N_2! N_3!) = N!/∏_{k=1}^{3} N_k!

Example 2: Imagine that we have three containers and ten balls. Each container has enough room to hold all ten balls. Let N_i (for i = 1–3) be the number of balls in the i-th container. How many different configurations are available to put the ten balls into the three containers? If N_1 = 2, N_2 = 3, and N_3 = 5, then W = 10!/(2! 3! 5!), with the answer being 2520.
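The counting formulas above are easy to verify numerically. A minimal sketch (the `multinomial` helper is ours, not the book's) checks Example 1 and Example 2:

```python
from math import factorial, comb

# Numerical check of the permutation and multinomial counting formulas.
def multinomial(*groups):
    """N! / (N_1! N_2! ... N_m!) for group sizes summing to N."""
    n = sum(groups)
    w = factorial(n)
    for g in groups:
        w //= factorial(g)
    return w

print(factorial(3))              # Example 1: 3! = 6 seat configurations
print(multinomial(2, 3, 5))      # Example 2: 10!/(2!3!5!) = 2520
print(comb(10, 2) * comb(8, 3))  # same count built stepwise: 2520
```

Building the count stepwise with `comb` (choose 2 of 10 balls for the first container, then 3 of the remaining 8) reproduces the multinomial coefficient.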

Boltzmann's entropy
A thermodynamic system is assumed to consist of a number of small micro-systems. Say there are N micro-systems and m (≤ N) thermodynamic states. This situation is similar to the N (= 10) balls in m (= 3) containers. The number of balls in containers 1, 2, and 3 is N_1, N_2, and N_3, respectively. The total number of different configurations of micro-systems in the m micro-states is then

Ω = N!/∏_{k=1}^{m} N_k!

Boltzmann proposed a representation of the entropy of the entire ensemble as

S = k_B ln Ω

Gibbs entropy
The Gibbs entropy can be written using Ω as

S = k_B ln Ω = k_B (ln N! − Σ_{k=1}^{m} ln N_k!)

and, using Stirling's formula ln N! ≈ N ln N − N, as

S = −k_B N Σ_{k=1}^{m} p_k ln p_k

where p_k = N_k/N is the probability of finding the system in thermodynamic state k. Gibbs introduced a form of entropy as

S_G = −k_B Σ_{k=1}^{m} p_k ln p_k

which is equal to the system entropy per object or particle, denoted as S_G = S/N.

Shannon's entropy
In information theory, Shannon's entropy is defined as [2]

H = −Σ_i p_i log_b p_i

As the digital representation of integers is binary, the base b is often set to two. Note that Shannon's entropy is identical to the Gibbs entropy if Boltzmann's constant k_B is discarded and the natural logarithm ln = log_e is replaced by log_2. Entropy only takes into account the probability of observing a specific event, so the information it encapsulates is information about the underlying probability distribution, not the meaning of the events themselves. Example 3 deals with tossing a coin or a die and shows how the entropy S increases with the number of available outcomes.
Example 3: Let's consider two conventional examples, i.e., a coin and a die. Their Gibbs entropy values (i.e., entropy per object, in units of k_B) are

s_coin = ln 2 ≈ 0.693 and s_die = ln 6 ≈ 1.792

The system entropies of the coin and the die are

S_coin = 2 ln 2 and S_die = 6 ln 6

and their ratio is

S_die/S_coin = (6 ln 6)/(2 ln 2) = 3 × (ln 6/ln 2) ≈ 7.754

where the factor of three is the ratio of the number of available cases of a die (6) to that of a coin (2). The entropy ratio, 7.754, is higher than the ratio of available states, 3.
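The numbers in Example 3 can be checked directly. A minimal sketch, assuming (consistent with the quoted ratio 7.754) that the system entropy is the per-object entropy multiplied by the number of outcomes:

```python
from math import log

# Per-object (Gibbs) entropy is -sum p ln p = ln m for m equally likely
# outcomes; the system entropy is taken here as m times that value.
def gibbs_entropy(probs):
    """-sum p ln p, in units of k_B."""
    return -sum(p * log(p) for p in probs)

s_coin = gibbs_entropy([0.5] * 2)    # ln 2
s_die = gibbs_entropy([1 / 6] * 6)   # ln 6
S_coin, S_die = 2 * s_coin, 6 * s_die
print(round(S_die / S_coin, 3))      # ~7.75, the 7.754 quoted in the text
```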

Diffusion: an irreversible phenomenon
Diffusion refers to a net flow of matter from a region of high concentration to a region of low concentration; it is an entropy-increasing process, from a more ordered to a less ordered state of molecular locations. For example, when a lump of sugar is added to a cup of coffee for a sweeter taste, the solid cube of sugar dissolves, and the molecules spread out until evenly distributed. This change from a localized to a more even distribution is a spontaneous and, more importantly, irreversible process. In other words, diffusion occurs by itself, without external driving forces, and once diffusion occurs, the molecular distribution cannot return to its original undiffused state. If diffusion did not occur spontaneously, there would be no natural mixing, and one would taste bitter coffee and sweet sugar in an unmixed liquid phase. In general, diffusion is closely related to mixing and demixing (separation) in a plethora of engineering applications. Why does diffusion occur, and how do we understand such spontaneous phenomena? A key is the entropy-changing rate from one static equilibrium to another. Before discussing diffusion as an irreversible phenomenon, however, the following sections present several pictures to build a better understanding of the diffusion phenomenon as one of the irreversible thermodynamic processes.

Mutual diffusion
Diffusion is often driven by the concentration gradient, denoted ∇c, typically at finite volume, temperature, and pressure. As temperature increases, molecules gain kinetic energy and diffuse more actively, tending to distribute evenly within the volume. A general driving force for isothermal diffusion is the gradient of the chemical potential, ∇μ, between regions of higher and lower concentration.
As shown in Figure 1, diffusion of solute molecules after removing the mid-wall is spontaneous. Initially, two equal-sized rectangular chambers A and B are separated by an impermeable wall between them. The thickness of the mid-wall is negligible in comparison to the box length, and each chamber contains the same amount of water. Chamber A contains seawater of salt concentration 35,000 ppm, and chamber B contains fresh water of zero salt concentration. If the separating wall is removed slowly enough not to disturb the stationary solvent medium but fast enough to establish a sharp concentration boundary between the two concentration regions, then the concentration in B increases as much as that in A decreases, because mass is neither created nor annihilated inside the container. This spontaneous mixing continues until both concentrations become equal and, hence, reach a thermodynamic equilibrium of half-seawater/half-fresh-water concentration throughout the entire box. Diffusion occurs wherever and whenever a concentration gradient exists, and the diffusive solute flux is represented by Fick's law as follows [3,4]:

J = −D∇c

or, in one dimension,

J = −D ∂c/∂x

where D is the diffusion coefficient (also often called diffusivity), with units of m²/s. A length scale of diffusion can be estimated as √(Dδt), where δt is a representative time interval. In molecular motion, δt can be interpreted as the time required for a molecule to move a mean free path (i.e., the statistically averaged distance between two consecutive collisions).
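The diffusion length scale √(Dδt) is easy to evaluate. A back-of-the-envelope sketch, using an assumed, textbook-order diffusivity for a small ion in water (not a value taken from this chapter):

```python
from math import sqrt

# Diffusion length sqrt(D * dt) for a few representative time intervals.
D = 1.5e-9  # m^2/s, assumed salt-in-water diffusivity
for dt in (1.0, 60.0, 3600.0):  # 1 s, 1 min, 1 h
    length = sqrt(D * dt)
    print(f"dt = {dt:8.0f} s -> diffusion length ~ {length * 1e6:8.1f} um")
```

The square-root scaling means that purely diffusive transport covers only tens of micrometers per second but millimeters per hour, which is why stirring (convection) dominates mixing at macroscopic scales.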

Stokes-Einstein diffusivity
When the solute concentration is low enough that interactions between solutes are negligible, the diffusion coefficient, known as the Stokes-Einstein diffusivity, may be given by

D = k_B T/(6πηa)   (16)

where k_B is the Boltzmann constant, η is the solvent viscosity¹, and a is the (hydrodynamic) radius of the solute particles. Stokes derived the hydrodynamic force that a stationary sphere experiences when positioned in an ambient flow [5]:

F_H = 6πηa v   (17)

where v represents a uniform fluid velocity, which can be interpreted as the velocity of a particle relative to that of the ambient fluid. F_H is linearly proportional to v, and its proportionality constant 6πηa is the denominator of the right-hand side of Eq. (16). Einstein used the transition probability of molecules from one site to another, and Langevin considered the molecular collisions as random forces acting on a solute (see Section 3.3 for details). Einstein and Langevin independently derived the same equation as (16), whose general form can be rewritten as

⟨r²⟩ = 2dDt

where d is the spatial dimension of the diffusive system (i.e., d = 1, 2, and 3 for 1D, 2D, and 3D spaces). ¹ The Greek symbol μ is also often used for viscosity in the fluid mechanics literature. In this book, the chemical potential is denoted as μ.
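Eq. (16) can be evaluated for representative numbers. A sketch with assumed, illustrative parameters (room-temperature water and a 1 nm solute; only the Boltzmann constant is a fixed physical value):

```python
from math import pi

# Stokes-Einstein diffusivity D = k_B T / (6 pi eta a).
k_B = 1.380649e-23  # J/K, Boltzmann constant
T = 298.0           # K, assumed ambient temperature
eta = 8.9e-4        # Pa*s, viscosity of water near 25 C (assumed)
a = 1.0e-9          # m, assumed hydrodynamic radius of the solute

D = k_B * T / (6 * pi * eta * a)
print(f"D ~ {D:.2e} m^2/s")  # on the order of 1e-10 m^2/s
```

The inverse dependence on η and a means larger particles in more viscous solvents diffuse more slowly, consistent with the drag law of Eq. (17).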

Diffusion pictures
Several pictures of diffusion phenomena are discussed in the following sections, giving probabilistic and deterministic viewpoints. Consider an ideal situation in which there exists only one salt molecule in a box filled with solvent (e.g., water) at finite T, P, and V. Since only the sole molecule exists, there is no concentration gradient. Mathematically, the concentration is infinite at the location of the molecule and exactly zero everywhere else: c = V⁻¹ δ(r − r₀), where r₀ is the initial position of the solute and r is an arbitrary location within the volume. However, the following question arises: why does a single molecule diffuse in the absence of any other solute molecules to collide with? The answer is that the solvent medium consists of a large number of (water) molecules of size on the order of O(10⁻¹⁰) m. The salt molecule suffers a tremendous number of collisions with solvent molecules of a certain kinetic energy at temperature T. Since each of these collisions can be thought of as producing a jump of the molecule, the molecule must be found at some distance from the initial position where the diffusion started. In this case, the molecule undergoes Brownian motion. Note that the single molecule collides only with solvent molecules while diffusing; this type of diffusion is called self-diffusion.

Self-diffusion and random walk
A particle initially located at r₀ has equal probabilities of 1/6 to move in the (±x, ±y, ±z) directions. For mathematical simplicity, we restrict ourselves to the 1D random walk of a dizzy individual, who moves to the right or to the left with a 50:50 chance. Initially (at time t = 0), the individual is located at x₀ = 0 and starts moving in a direction represented by Δx = ±l, where +l and −l indicate the right and left distances that the individual travels with equal probability, respectively. At the next step, t₁ = t₀ + Δt = Δt, the individual's location is found at

x₁ = x₀ + Δx₁

where Δx₁ can be +l or −l. At the time of the second step, t₂ = t₁ + Δt = 2Δt, the position is

x₂ = x₁ + Δx₂

where Δx₂ = ±l. At t_n = nΔt (n ≫ 1), the position may be expressed as

x_n = Σ_{i=1}^{n} Δx_i

If there are a number of dizzy individuals and we average their seemingly random movements, then

⟨x_n⟩ = Σ_{i=1}^{n} ⟨Δx_i⟩ = 0

because Δx has a 50:50 chance of +l and −l. Now let us calculate the mean of x²:

⟨x_n²⟩ = ⟨(Σ_{i=1}^{n} Δx_i)²⟩ = Σ_{i=1}^{n} ⟨Δx_i²⟩ + Σ_{i≠j} ⟨Δx_i Δx_j⟩

and, in concise form, ⟨x_n²⟩ = n l². In the calculation of the off-diagonal terms, Δx_i · Δx_j can take four sign combinations with equal chance: (+, +), (+, −), (−, +), and (−, −). The products of the two elements in the parentheses are +, −, −, and +, each with probability 25%; therefore, their sum is zero. Because n is the number of time steps, it can be replaced by t/Δt, where t is the total elapsed time. The diffusion coefficient in one-dimensional space was derived in the previous section as D = l²/2Δt. Then, the mean squared distance at time t is

⟨x²⟩ = l² (t/Δt) = 2Dt

and the root-mean-square distance is

x_rms = √⟨x²⟩ = √(2Dt)

Note that x_rms is proportional to t^{1/2} in the random walk, as compared to the constant-velocity case x = vt ∝ t¹. The diffusivity for 1D is thus, explicitly,

D = l²/(2Δt)
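The result ⟨x²⟩ = n l² = 2Dt can be confirmed by a direct Monte Carlo experiment. A minimal sketch, assuming unit step length and time increment (not the book's code):

```python
import random

# 1-D random walk: average x^2 over many walkers after n steps and
# compare with the theoretical value n * l^2 = 2 * D * t.
random.seed(0)
l, dt = 1.0, 1.0
D = l**2 / (2 * dt)             # 1-D random-walk diffusivity
n_steps, n_walkers = 1000, 5000
ms = 0.0
for _ in range(n_walkers):
    x = sum(random.choice((-l, l)) for _ in range(n_steps))
    ms += x * x
ms /= n_walkers
print(ms, 2 * D * n_steps * dt)  # sampled <x^2> vs 2Dt, both ~1000
```

The sampled mean-square displacement fluctuates around 1000 with a statistical error of a few percent, shrinking as the number of walkers grows.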

Einstein's picture
The concentration C(x, t + δt) after an infinitesimal time duration δt, within a range dx between x and x + dx, is calculated as [6]

C(x, t + δt) dx = dx ∫_{−∞}^{+∞} C(x + y, t) Φ(y) dy   (29)

where Φ is the transition probability for a linear displacement y, and the right-hand side indicates the amount of adjacent solute that moves into the small region dx. The probability distribution satisfies

∫_{−∞}^{+∞} Φ(y) dy = 1

and we assume that Φ is a short-ranged, even function, meaning that it is nonzero only for small |y| and symmetric, Φ(−y) = Φ(y). In this case, we approximate the integrand of Eq. (29) as

C(x + y, t) ≈ C(x, t) + y ∂C/∂x + (y²/2) ∂²C/∂x²   (31)

and substitute Eq. (31) into Eq. (29). We finally derive the so-called diffusion equation:

∂C/∂t = D ∂²C/∂x²   (32)

where the diffusivity is defined as

D = ⟨y²⟩/(2δt)   (33)

and ⟨y²⟩ is the mean value of y², calculated as

⟨y²⟩ = ∫_{−∞}^{+∞} y² Φ(y) dy

Within this calculation, we used

C(x, t + δt) ≈ C(x, t) + δt ∂C/∂t

and

∫_{−∞}^{+∞} y Φ(y) dy = 0

because yΦ is an odd function. Mathematically, Einstein's picture uses a short-ranged transition probability function, which does not need to be specifically known, and Taylor expansions for a small time interval and a short displacement. The conditions required for Eq. (32) are as follows: (i) the transition distance is longer than the size of a molecule, dx ≥ O(a), and (ii) the time interval δt is long enough to measure dx after a tremendous number of collisions with solvent molecules, satisfying δt ≫ τ_p, where τ_p is the particle relaxation time (see Langevin's picture).
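The diffusion equation (32) can also be checked numerically. A minimal sketch (not the book's code; grid spacing and time step are assumed) integrates it with an explicit finite-difference scheme and verifies that the second moment of an initially localized pulse grows as ⟨x²⟩ = 2Dt:

```python
# Explicit FTCS scheme for dC/dt = D d2C/dx2 with a unit pulse at the
# center; the variance of C should grow linearly as 2*D*t.
n = 121
D, dx, dt = 0.25, 1.0, 1.0       # r = D*dt/dx^2 = 0.25 <= 0.5 (stable)
r = D * dt / dx**2
C = [0.0] * n
C[n // 2] = 1.0                  # unit mass at the center cell

steps = 200
for _ in range(steps):
    C = [C[i] + r * (C[i + 1] - 2 * C[i] + C[i - 1])
         if 0 < i < n - 1 else C[i]
         for i in range(n)]

x0 = n // 2
var = sum((i - x0)**2 * C[i] for i in range(n))
print(var, 2 * D * steps * dt)   # both ~100
```

For this linear scheme the second moment grows by exactly 2Dδt per step as long as the pulse stays away from the boundaries, so the agreement is essentially exact.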

Langevin's picture
Let us consider a particle of mass m, located at x(t) with velocity v = dx/dt at time t. For simplicity, we shall treat the problem of diffusion in one dimension. It would be hopeless to deterministically trace all the collisions of this particle with the numerous solvent molecules in series. However, these collisions can be regarded as a net force A(t) that determines the time dependence of the molecule's position x(t). Newton's second law of motion can then be written in the following form [7,8]:

m dv/dt = −βv + A(t)   (37)

which is called Langevin's equation. In Eq. (37), A(t) is assumed to be randomly and rapidly fluctuating. We multiply both sides of Eq. (37) by x to give

m x d²x/dt² = −β x dx/dt + x A(t)

and take a time average of both sides over an interval τ much longer than the particle relaxation time. Because the random fluctuating force A(t) is independent of the particle position x(t), we calculate

⟨x A⟩ = ⟨x⟩⟨A⟩ = 0

For the further derivation, we use the following identities:

x dx/dt = (1/2) d(x²)/dt and x d²x/dt² = (1/2) d²(x²)/dt² − (dx/dt)²

We let z = d⟨x²⟩/dt and rewrite the averaged equation as

(m/2) dz/dt + (β/2) z = m⟨v²⟩ = k_B T

because the kinetic energy of this particle is equal to the thermal energy:

(1/2) m ⟨v²⟩ = (1/2) k_B T

where k_B is the Boltzmann constant. Note that the origin of the particle motion is its collisions with solvent molecules at temperature T. If we take the initial condition z = 0, indicating that either the position or the velocity is initially zero, then we obtain

z = d⟨x²⟩/dt = (2k_B T/β)(1 − e^{−t/τ_p})

where τ_p = m/β is the particle relaxation time. One more integration with respect to time yields

⟨x²⟩ = (2k_B T/β)[t − τ_p(1 − e^{−t/τ_p})]   (48)

If t ≫ τ_p, then t/τ_p in the square brackets is dominant:

⟨x²⟩ ≈ (2k_B T/β) t = 2 D_B t

Stokes' law of Eq. (17) indicates β = 6πηa, and, therefore, the diffusion coefficient of Brownian motion, or Stokes-Einstein diffusivity, D_B = k_B T/β, is identical to Eq. (16). The root-mean-square distance is

x_rms = √(2 D_B t)

which is proportional to √t. Note that ⟨x⟩ = 0.
From an arbitrary time t, the particle drifts for an interval Δt, where Δt ≫ τ_p, and then Δx_rms = √(2 D_B Δt). The time step Δt is of a macroscopic scale in the sense that one can appreciate a movement of the particle on the order of the particle radius. For a short time t ≪ τ_p, the mean-square distance of Eq. (48) is approximated as x_rms = v_rms t, indicating constant-velocity motion.
Einstein's and Langevin's pictures provide identical results for x_rms and D_B as related to Stokes' law. On one hand, if a particle translates with a constant velocity, its distance from the initial location is linearly proportional to the elapsed time; on the other hand, if the particle is diffusing, its root-mean-square distance is proportional to √t.
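The long-time limit of the Langevin result can be reproduced by a direct simulation. A sketch in assumed nondimensional units (m = β = k_B T = 1, so τ_p = 1 and D_B = 1; not the book's code) integrates Eq. (37) with a simple Euler scheme:

```python
import random, math

# Langevin dynamics: m dv = -beta*v dt + sqrt(2 beta kT dt) * N(0,1).
# The ensemble mean-square displacement should approach
# 2 D_B (t - tau_p) for t >> tau_p.
random.seed(1)
m = beta = kT = 1.0
tau_p, D_B = m / beta, kT / beta
dt, steps, walkers = 0.02, 1000, 500   # total time t = 20 >> tau_p
msd = 0.0
for _ in range(walkers):
    x, v = 0.0, 0.0
    for _ in range(steps):
        noise = math.sqrt(2 * beta * kT * dt) * random.gauss(0, 1)
        v += (-beta * v * dt + noise) / m
        x += v * dt
    msd += x * x
msd /= walkers
t = steps * dt
print(msd, 2 * D_B * (t - tau_p))  # both ~38
```

At times short compared to τ_p the same trajectories show ballistic (∝ t²) growth, illustrating the crossover discussed above.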

Gardiner's picture
In Langevin's Eq. (37), the randomly fluctuating force can be written as

A(t) = α f(t)

where f satisfies

⟨f(t)⟩ = 0

and

⟨f(t) f(t′)⟩ = δ(t − t′)

The relationships between the parameters are

α² = 2 β k_B T = 2 β² D_B

(See the next section for the Brownian diffusivity D_B.) As such, we assume that

f(t) dt = dW

where dW is the Ito-Wiener process [9,10], satisfying

⟨dW⟩ = 0 and ⟨dW²⟩ = dt   (60)

Then, we can obtain the stochastic differential equations (SDEs)

dx = v dt   (63)

m dv = −βv dt + α dW   (64)

from which the relationship between x, v, and t can be obtained as follows [11]. Note that Eq. (63) is free from the fundamental restriction of Langevin's equation (i.e., τ_p ≪ dt) through the introduction of the Ito-Wiener process in Eq. (64). The time interval dt can be arbitrarily chosen to improve calculation speed and/or numerical accuracy.
Eq. (63) uses the basic definition of velocity as the time derivative of position in classical mechanics, and Eq. (64) represents the randomly fluctuating force using the Ito-Wiener process, dW. If we keep Langevin's picture, these two equations take the forms

dx = v dt + √(2 D_B) dW   (65)

m dv = −βv dt   (66)

where the random fluctuation disappears from the force balance and appears as a drift displacement, √(2 D_B) dW. Let C(x) be the concentration of particles near the position x of a specific particle. Note that x is not a fixed point in Eulerian space but the moving coordinate of a particle being tracked. An infinitesimal change of C is

dC = (∂C/∂x) dx + (1/2)(∂²C/∂x²) dx²   (67)

where dx is given by Eq. (65). The first term of Eq. (67), using the time-average properties of Eq. (60), becomes

⟨(∂C/∂x) dx⟩ = v (∂C/∂x) dt   (70)

which implies that the diffusion time scale already satisfies the restricted condition dt ≫ τ_p. The second term of Eq. (67) becomes

⟨(1/2)(∂²C/∂x²) dx²⟩ = D_B (∂²C/∂x²) dt   (71)

after dropping the second-order term in dt and the first-order term in dW. Substitution of Eqs. (70) and (71) into Eq. (67) gives

dC = [v ∂C/∂x + D_B ∂²C/∂x²] dt

and therefore

∂C/∂t = v ∂C/∂x + D_B ∂²C/∂x²   (73)

which looks similar to the conventional convective diffusion equation with the sign of v reversed. Eq. (73) indicates that a group of identical particles of mass m undergoes convective and diffusive transport in Eulerian space. A particle in the group is located at position x at time t, moving with velocity v. This specific particle observes the concentration C of the other particles near its position x. Therefore, Eq. (73) is the convective diffusion equation in the Lagrangian picture. If the particle moves with velocity v in a stationary fluid, then the motion is equivalent to particles performing only diffusive motion within a fluid moving with −v. To emphasize the fluid velocity, we replace v with −u; then the Lagrangian convective diffusion Eq. (73) becomes the original (Eulerian) convection-diffusion equation:

∂C/∂t + u ∂C/∂x = D_B ∂²C/∂x²   (74)

which can be directly obtained by replacing Eq. (65) by

dx = −u dt + √(2 D_B) dW   (75)
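The overdamped form of Eq. (65) (with v = 0 in a stationary fluid) is exactly what an Euler-Maruyama integrator solves. A sketch with assumed parameters (not the book's code) draws dW as √(dt)·N(0, 1) and checks the diffusive scaling:

```python
import random, math

# Euler-Maruyama integration of dx = sqrt(2 D_B) dW, followed by a check
# of the ensemble statistics <x^2> = 2 D_B t.
random.seed(2)
D_B = 1.0e-10              # m^2/s, assumed Brownian diffusivity
dt, steps = 1.0e-3, 1000   # total time t = 1 s; dt is freely chosen
walkers = 2000
msd = 0.0
for _ in range(walkers):
    x = 0.0
    for _ in range(steps):
        dW = math.sqrt(dt) * random.gauss(0, 1)  # Ito-Wiener increment
        x += math.sqrt(2 * D_B) * dW
    msd += x * x
msd /= walkers
print(msd, 2 * D_B * steps * dt)  # both ~2e-10
```

Because the Wiener increment carries the τ_p ≪ dt physics implicitly, dt here is purely a numerical choice, as noted in the text.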

Dissipation rates

Energy consumption per time
In classical mechanics, the work done by an infinitesimal displacement dr of a particle under the influence of a force field F is

dW = F · dr   (76)

The time differentiation of Eq. (76) provides the energy consumption rate (i.e., power, represented by P) as a dot product of the particle velocity v and the applied force F:

P = dW/dt = v · F   (77)

For an arbitrary physical quantity Q, the variation rate of its density can be represented as

dq/dt = (v · ∇) q   (78)

where V is the constant system volume and q = Q/V is the volumetric density of Q, also named specific Q. Eq. (78) indicates that the density changing rate of Q is equal to q operated on by v · ∇. If we replace Q by the internal energy of the system, then the specific energy consumption rate is expressed as

dw/dt = (v/A_c) · ∇w′

where w and w′ are the specific work done and the work done per length, respectively, and A_c is the cross-sectional area normal to ∇w′. For a continuous medium, ∇w′ causes transport phenomena in a non-equilibrium state, and v/A_c is generated in proportion to a flux. A changing rate can thus be quantified as a product of a driving force and a flux, as implied by Eq. (77).
Let us consider a closed system possessing ξ₁ and ξ₂ as thermodynamic quantities characterizing the system state. The values of ξ_i at equilibrium are denoted ξ₁⁰ and ξ₂⁰, and the values away from equilibrium are ξ₁ and ξ₂. At static equilibrium, the entropy S of the system is maintained at its maximum. For a system away from the static equilibrium, the generalized driving force is defined as

X_k = ∂s/∂ξ_k

which is obviously zero for all k at the static equilibrium. A flux J_j of ξ_j is defined as

J_j = Σ_k L_jk X_k

which assumes that J_j is a linear combination of all the existing driving forces X_k. We adopt Onsager's symmetry principle [12,13], which states that the kinetic coefficients L_jk are symmetric for all j and k:

L_jk = L_kj

The entropy production rate per unit volume, or the specific entropy production rate, is defined as

σ = ds/dt

where s = S/V. We expand the specific entropy s with respect to infinitesimal changes of the independent variables ξ_k:

σ = ds/dt = Σ_k (∂s/∂ξ_k)(dξ_k/dt) = Σ_k X_k J_k = J · X   (84)

which represents the changing rate of the specific entropy as a dot product of the flux J and the driving force X. The subscript k in Eq. (84) labels the physical quantities on which the entropy depends. For mathematical simplicity, a new quantity is defined as Y_k = TX_k, where T is the absolute temperature in Kelvin, to have

X_q = ∇(1/T)   (92)

X_v = ∇(P/T)   (93)

X_s = −∇(μ_s/T)   (94)

where the subscripts q, v, and s of X indicate heat, volume of solvent, and solute, respectively. In Eq. (92), the entropy S is differentiated with respect to the energy E, keeping V and N_s invariant, and the analogous derivatives are applied to Eqs. (93) and (94). Eq. (94) indicates that the driving force is the negative gradient of the chemical potential divided by the ambient temperature. Within the isothermal-isobaric ensemble, the Gibbs free energy is defined as

G = H − TS

where H (= E + PV) is the enthalpy. If the solute concentration is dilute (i.e., N_w ≫ N_s), the solution is referred to as a weak solution. As such, the overall chemical potential can be approximated as

μ ≈ H̄ − T S̄

where H̄ and S̄ represent the molar enthalpy and entropy, respectively.
An infinitesimal change of the Gibbs free energy is, in particular, written as

dG = −S dT + V dP + μ_w dN_w + μ_s dN_s

which is equivalent, per mole of solvent, to

dμ_w = −S̄ dT + V̄ dP − c dμ_s

where V̄ is the molar volume of the system, μ_s is the solute chemical potential, and c = N_s/N_w is the molar fraction of solute molecules. The gradient of the solvent chemical potential can then be rewritten as a linear combination of the gradients of temperature, pressure, and molar solute fraction:

∇μ_w = −S̄ ∇T + V̄ ∇P − RT ∇c

where the following mathematical identity was used:

∇(ln c) = ∇c/c

In general, the fluxes of heat, solvent volume, and solute molecules are intrinsically coupled to their driving forces, such that

J_q = L_qq X_q + L_qv X_v + L_qs X_s
J_v = L_vq X_q + L_vv X_v + L_vs X_s   (101)
J_s = L_sq X_q + L_sv X_v + L_ss X_s

where Onsager's reciprocal relationship, L_ij = L_ji, is employed.
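The structure of the coupled equations can be illustrated with a toy numerical sketch (the coefficient values are illustrative assumptions, not physical data): a symmetric, positive-definite matrix L maps the driving forces to fluxes, and the resulting entropy production σ = J · X is non-negative.

```python
# Onsager coupling J = L X with a symmetric coefficient matrix (L_ij = L_ji)
# and the quadratic entropy production sigma = J . X = X^T L X >= 0.
L = [[2.0, 0.5, 0.1],
     [0.5, 1.0, 0.2],
     [0.1, 0.2, 0.8]]   # assumed, positive-definite values

def fluxes(X):
    return [sum(L[j][k] * X[k] for k in range(3)) for j in range(3)]

X = [0.3, -1.2, 0.7]    # arbitrary driving forces X_q, X_v, X_s
J = fluxes(X)
sigma = sum(Jj * Xj for Jj, Xj in zip(J, X))
print(sigma > 0)  # entropy production rate is positive
```

Positive-definiteness of L is exactly the thermodynamic requirement that every admissible set of driving forces dissipates, i.e., produces entropy.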

Solute diffusion
The primary driving force for solute transport is X_s = −∇(μ_s/T), if the temperature and pressure gradients are not significant for the solute transport. We consider the diffusive flux of solute only, in an isothermal and isobaric process, and neglect the terms with L_qs and L_vs:

J_s = L_ss X_s = −(L_ss/T) ∇μ_s

Then, the entropy-changing rate based on the solute transport is calculated as

σ_s = J_s X_s = J_s²/L_ss

Next, we consider the Stokes-Einstein diffusivity:

D_SE = k_B T/(3πη d_p)

where k_B is the Boltzmann constant, η is the solvent viscosity, and d_p is the diameter of a particle diffusing within the solvent medium. For weakly interacting solutes, the solute chemical potential is

μ_s = μ₀ + RT ln a

where μ₀ is generally a function of T and P, which are constant here, R is the gas constant, and a is the solute activity. For a dilute solution, the activity a is often approximated by the concentration c (i.e., a ≃ c). The proportionality between L_ss and D_SE follows from equating J_s = −(L_ss R/c)∇c with Fick's law, which leads to

L_ss = D_SE c/R = D_SE c/(k_B N_A)

where N_A is the Avogadro constant. For a dilute isothermal solution, we represent the entropy-changing rate as

σ_s = J_s²/L_ss = R J_s²/(D_SE c)   (112)

for an isothermal and isobaric process. Assuming that D is not a strong function of c, Eq. (112) indicates that the diffusive entropy rate σ_s is unconditionally positive (as expected), increases with the diffusive flux, and decreases with the concentration c. Within this analysis, c is defined as the molar or number fraction of solute molecules relative to the solvent. For a dilute solution, conversion of c to a solute mass or mole number per unit volume is straightforward.
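Eq. (112) can be evaluated for representative numbers. A sketch with assumed, illustrative values (the units here are schematic; only R is a fixed physical constant) also confirms that the flux form R J_s²/(D c) agrees with the gradient form D R (∇c)²/c obtained via Fick's law:

```python
# Dissipation rate of isothermal solute diffusion,
# sigma_s = R * J_s^2 / (D * c) = D * R * (grad c)^2 / c.
R = 8.314            # J/(mol K), gas constant
D = 1.5e-9           # m^2/s, assumed diffusivity
c = 1.0e-3           # assumed molar fraction of solute
grad_c = -10.0       # 1/m, assumed molar-fraction gradient

J_s = -D * grad_c                  # Fick's law
sigma_s = R * J_s**2 / (D * c)     # entropy production per volume
print(sigma_s, D * R * grad_c**2 / c)   # the two forms agree
```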

Thermal flux
The thermal flux consists of conductive and convective transport, proportional to ∇T and ∇P, respectively. Neglecting the solute diffusion in Eq. (101), the coupled equations of heat and fluid flow simplify to

J_q = L_qq X_q + L_qv X_v
J_v = L_vq X_q + L_vv X_v   (113)

using Onsager's reciprocal relationship, in which the off-diagonal coefficients are symmetric (L_qv = L_vq). The driving forces are

X_q = ∇(1/T) = −∇T/T² and X_v = ∇(P/T) = ∇P/T − (P/T²)∇T   (114)

Substituting Eq. (114) into Eq. (113) expresses the two fluxes as linear combinations of ∇T and ∇P. Eliminating ∇P between the two rows of the resulting system yields J_q in terms of J_v and ∇T, with intermediate coefficients β and γ. Through physical interpretation, one can conclude that

β/γ = h̄   (119)

where h̄ represents the system enthalpy as a function of temperature. Finally, the coupled heat transfer equation is

J_q = h̄ J_v − k ∇T   (120)

where k, a combination of the phenomenological coefficients L_ij, is the thermal conductivity.

Concluding remarks
In this chapter, we investigated the diffusion phenomenon as an irreversible process. By the thermodynamic laws, entropy always increases as a system of interest evolves in a non-equilibrium state. The entropy-increasing rate per unit volume is a measure of how fast the system changes from its current state to a more disordered one. The entropy concept was explained from basic mathematics using several examples. The diffusion phenomenon was explained using (phenomenological) Fick's law, and more fundamental theories were summarized, which theoretically derive the diffusion coefficient and the convection-diffusion equation. Finally, the dissipation rate, i.e., the entropy-changing rate per volume, was revisited and obtained in detail. The coupled, irreversible transport equations in the steady state were applied to solute diffusion in an isothermal-isobaric process and to heat transfer consisting of conductive and convective transport due to the temperature gradient and fluid flow, respectively. As engineering processes mostly operate as open systems in the steady state, the theoretical approaches discussed in this chapter may be a starting point for future developments in irreversible thermodynamics and statistical mechanics.