
The Role and the Status of Thermodynamics in Quantum Chemistry Calculations

Written By

Jean-Pierre Llored

Submitted: 26 November 2010 Published: 02 November 2011

DOI: 10.5772/23465

From the Edited Volume

Thermodynamics - Interaction Studies - Solids, Liquids and Gases

Edited by Juan Carlos Moreno-Pirajan


1. Introduction

This chapter aims at understanding what is at stake when thermodynamics is used in quantum chemical calculations. Quantum chemistry and thermodynamics seem to be two incommensurable scientific worlds whose assumptions and statements are thoroughly different. So my questions are: How did thermodynamics become integrated into the quantum chemical background at the very beginning of quantum chemistry? What are its role and status in current Density Functional Theory and in the other quantum methods used in chemistry?

I refer both to history and epistemology to grasp this entanglement of scientific approaches. First of all, I propose to analyze how thermodynamics became involved in chemistry. In this respect, I will point out how the concept of energy provided the old chemical affinity with a quantitative tool for understanding chemical transformations. The birth of thermochemistry revived the opposition between two old rival conceptions of matter that framed the history of chemistry, that is to say the aggregate and the ‘mixt’ stances.

A ‘mixt’ is a chemical combination composed of elements but not bearing the same properties as its constitutive elements. Conversely, an aggregate is a mere additive combination of elements and their properties.

I will then highlight that this duality of conceptions was still at stake when Mulliken and Pauling created two different quantum chemical approaches. In this context, thermodynamics was not used as a mere tool to calibrate methods; it also guided the contrivance of new quantum concepts and parameters from the outset. Following this line of reasoning, I will query how the concept of ‘state’, be it electronic or thermodynamic, allows us to bridge thermodynamics and quantum chemistry in a different way. I will indicate why and how the second law of thermodynamics is reflexively important for understanding molecular calculations and for better grasping the relation between a molecular ‘whole’ and its respective parts.

These investigations are widened by a global overview of the ways thermodynamic parameters are currently involved in workaday quantum methods in order to describe molecular reactivity.

To conclude, the chapter will query the status of thermodynamics in predictive quantum methods. I will insist on the status of the concept of energy and on the heuristic power of the second law of thermodynamics on quantum grounds.


2. The integration of thermodynamics into chemical grounds: From a qualitative to a quantitative affinity

The new rules of the French Royal Academy of Sciences (1699), Wilhelm Homberg’s work on the interchangeability of ‘average’ – now called ‘neutral’ – salts, the influence of mechanist philosophy at the end of the seventeenth century, as well as the Paracelsian and alchemical traditions, paved the way for the empirical production of affinity tables during the 18th century. From Etienne-François Geoffroy (1718) to Bergman (1775), these tables multiplied; some chemists, such as Guyton de Morveau (1773), developed the first experimental devices to quantify these affinities (Mi Gyung, 2003; Partington, 1962).

The explanatory function gradually shifted from the principles – Aristotelian, Paracelsian, or other – which previously accounted for qualities and chemical transmutations, towards the state of union between two chemical substances and the concept of process, which implies union and disunion (Bensaude-Vincent & Stengers, 1996). This major epistemological upheaval led to the attraction between chemical bodies being operationally redefined within the context of the chemistry of salts. The key question of the force or power which governed chemical combinations remained rather unclear and mysterious, according to Henri Sainte-Claire Deville (Deville, 1864), until chemists integrated the science of heat and thermodynamics into their own practices.

Using a new mercury calorimeter, J.T. Silbermann and P.A. Favre showed for the first time in 1852 that a chemical decomposition could involve a release of heat. At the same time, Julius Thomsen published a paper entitled Les bases d’un système thermochimique – ‘The foundations of a thermochemical system’, my translation – in Poggendorff’s Annalen which upset the generally accepted ideas. The differentiation between combination and decomposition defended by Claude-Louis Berthollet could no longer be maintained. A chemical act which produced heat was said to occur spontaneously. The concept of chemical reaction, understood as an observable and measurable phenomenon, was thus worked out by means of mathematical equations and new experimental practices related to an innovative thermal instrumentation. Pierre Duhem reported a sentence of Thomsen according to whom: ‘When the chemical combination occurs, it releases a quantity of heat proportional to the affinity of the two chemical bodies’ (Duhem, 1893; the French original reads: ‘Lorsque la combinaison se produit, il se dégage une quantité de chaleur proportionnelle à l’affinité des deux corps.’). Thomsen originally argued that the heat of a reaction was the true measure of affinity (Kragh, 1984). The chemical act became a ‘work’, to borrow the physicists’ vocabulary, but a work reinterpreted from within the current framework of chemical knowledge and laboratory practices. In 1873, Marcellin Berthelot applied the Principle of maximum work precisely to the chemical reaction (Médard & Tachoire, 1994). He stated that, in the absence of external energy, every chemical change tends towards the production of the greatest quantity of heat (Nye, 1993).

As thermochemistry began to develop, chemists paid attention to other facts which at first appeared foreign to each other. In 1852, Edmond Fremy and Henri Becquerel showed that the production of ozone was an incomplete reaction, a conclusion that Berthelot and Péan de Saint-Gilles also reached for the esterification reaction ten years later. The chemical reaction appeared limited and dependent on the time factor; Sainte-Claire Deville and his collaborators widened and strengthened those findings thanks to many experiments (Daumas, 1946). After many attempts, Cato Maximilian Guldberg and Peter Waage asserted in 1861 that they were able to ‘find for each element and each chemical combination, numbers which express their relative affinity’ (Guldberg & Waage, 1867; the French original reads: ‘(…) trouver pour chaque élément et pour chaque combinaison chimique, des nombres qui expriment leur affinité relative’). Guldberg and Waage quickly connected the emerging concept of chemical equilibrium with the notion of affinity, so as to designate the chemical force which was supposed to lead to the equilibrium. They then established the crucial chemical law of mass action while studying reaction rates and the effects of time, temperature and mass factors.

The development of the energy approach in chemistry was the result of a fortuitous combination of independent works proposed by August Horstmann in Germany, by Josiah Willard Gibbs in America, and by Bakhuis Roozeboom and J.H. Van’t Hoff in Holland. Horstmann integrated Rudolf Clausius’ considerations on isolated systems into chemistry. In so doing, he rediscovered the law of mass action in 1873 by means of calculation, without having any idea that it had already been found on other grounds. The same year, Gibbs published a paper entitled ‘On the equilibrium of heterogeneous substances’, in which he proposed a mathematical description of chemical equilibrium. This work remained mostly unknown to chemists because they lacked the basic mathematical knowledge necessary to grasp it. In 1882, Hermann von Helmholtz rediscovered Gibbs’ results – of which he was totally unaware – using the theory of heat published by J. Clerk Maxwell in 1871. All these publications gave rise to new chemical concepts dealing with the energy changes of a chemical system submitted to the action of the various forces that led to an equilibrium. One had to distinguish, according to Helmholtz, between the part of energy which appeared only as heat and the part which could be freely converted into other kinds of work, i.e. the ‘free energy’. The decrease in free energy subsequently enabled chemists to explain chemical stability (Kondepudi & Prigogine, 1998). In 1884, Pierre Duhem introduced the notion of internal thermodynamic potential by analogy with classical mechanics (Duhem, 1902).

Applications to experimental chemistry came from the Dutch school. Roozeboom, for example, had to cope with difficulties in interpreting hydrobromic acid decomposition in the presence of water in the gas phase. His colleague, the physicist J.D. van der Waals, suggested that he use Gibbs’s work and helped him put forward the so-called phase rule. Van’t Hoff established the law of the variation of equilibrium with temperature and took as the measure of affinity the maximum work that the system must be able to provide under defined conditions. According to Van’t Hoff, affinity was the leading force which produced chemical transformation. A change in the sign of the affinity accompanied the change in the direction of the reaction which occurred at the transition point (Kragh & Weininger, 1996). From that time onwards, researchers gradually moved their attention to other factors of equilibrium. In 1888, Henry Le Chatelier proposed a way to predict how a chemical equilibrium shifts according to the variation of the factors on which it depends. Chemical affinity thereby became one of the many aspects of the chemical act, allowing improved forecasts and performances.

At the beginning of the twentieth century, chemists attempted to know no longer why, but how matter is transformed. Chemical kinetics studied the process of the transformation of matter. Svante Arrhenius introduced the concept of activation energy, and research gradually turned to the questions of energy transfer and the direction of collisions between chemical bodies. Wilhelm Ostwald succeeded in describing chemical equilibrium without making any reference to atoms (Ostwald, 1919). Two antagonistic approaches to matter were at stake. Thermochemistry revolved around energy and denied any reality to atoms, whereas chemical kinetics was based on the atomic assumption. Thomsen, for instance, used structural theory to assign heats of formation to specific bond types found in organic molecules. In this respect, he tried to reduce chemical properties to a mere juxtaposition of atomic properties. Others, like F.W. Clarke, tried to connect the heat of formation with the number of atomic linkages within the molecule alone. By doing so, he tried to connect valence with affinity (Weininger, 2001). All the attempts to understand affinity by means of additive and reductive descriptions failed.

To sum up this first part, I would like to emphasize that the integration of thermodynamics within the frameworks of chemistry was made possible because chemists were looking for a quantitative measure of affinity. The way thermodynamics became thermochemistry depended on the instrumentation and the practices that chemists contrived to tackle the challenge of affinity. As the philosopher Joseph Rouse points out: ‘Practices are not just patterns of action, but the meaningful configurations of the world within which actions can take place intelligibly, and thus practices incorporate the objects that they are enacted with and on and the settings in which they are enacted’ (Rouse, 1996, p. 135). Thermodynamics was thus integrated into chemical projects and then transformed by this integration, because it made chemists’ goals achievable and intelligible within such new practical backgrounds.

I suggest we now take more distance and consider the whole history of chemistry to analyze the way this integration actually took place. Let us widen the circle to grasp what is at stake behind this integration and how the duel between different conceptions of matter would remain active at the very beginning of quantum chemistry. This study will enable us to understand the role of thermodynamics in the first quantum chemical calculations.


3. The integration of thermodynamics into the first quantum methods: The reviving of the aggregate/‘mixt’ duel

3.1. Two conceptions of matter and the embodiment of thermodynamics within chemical practices

First and foremost, I would like to develop the opposition between conceptions of matter stressed above. Duhem’s claim for an energy description of molecules that did not rely on any atomic assumption reminds us of other historical oppositions.

In the seventeenth century, for instance, Nicolas Lemery, in his famous Cours de Chymie, tried to account for chemical transformations by means of a multitude of corpuscles with different forms. Gabriel-François Venel argued that this reductive approach was unable to explain and predict chemical properties. Venel asserted that chemists studied ‘mixts’ whereas mere ‘aggregates’ came under mechanics. Venel used Georg Ernst Stahl’s distinction between an aggregate, defined as a mere sum of various substances that continued to exist in the whole compound, and a ‘mixt’, within which reactants disappeared to form an emergent new whole with specific properties. Two conceptions of matter were at odds in this context and became progressively more important within the debate. On the one hand, mechanics considered matter to be homogeneous, without qualities, and necessarily informed by something from outside. According to Venel, this kind of matter representation, solely described by its form and motion, could not account for the world of chemical activities and diversity. On the other hand, most chemists considered matter to be heterogeneous and able to act and react (Bensaude-Vincent & Simon, 2008). More often than not, chemists pragmatically used one description or the other according to their laboratory goals. As Bensaude-Vincent and Simon write: ‘We prefer to see this duel between the two approaches as a characteristic feature of the history of chemistry. Chemists have always been confronted with this interpretative dichotomy, and, depending on the period, they have opted for a version of atomism or an elementary approach, or else have tried to reconcile the two.’ (Bensaude-Vincent & Simon, 2008, p. 128).

Not only did thermodynamics enable chemists to construe a quantitative version of affinity, but it also fitted very well into the cultural background that had been framing chemists’ activities for a long time. The embodiment of thermodynamics within chemical practices was thus at least twofold: it provided chemists with quantitative tools for understanding chemical reactions while recasting old oppositions between representations of matter. From this perspective, thermodynamics could easily be integrated into the usual chemical way of thinking about matter while reconfiguring it. As Rouse claims (1996, p. 157): ‘In order to understand how scientific knowledge is situated within practices, we need to take account of how practices are connected to one another, for knowledge will be established only through these interconnections. Scientific knowing is not located in some privileged type of practice, whether it be experimental manipulation, theoretical modeling, or reasoning from evidence, but in the ways these practices and others become intelligible together.’

Duhem focused his work on the dichotomy between the ‘mixt’ and the aggregate, referring to Aristotle’s philosophy (Needham, 1996). Like Sainte-Claire Deville and Berthelot, though not for the same positivist reasons, he rejected the atomism then deeply rooted in structural organic chemistry. According to the structural molecular paradigm, the physical arrangement of the constituent elements accounted for the properties of the whole compound. Since Lavoisier, chemists had been explaining the properties of compounds by reference to the nature, the proportion and, more recently, the bonds of their constitutive parts, be they atoms or elements: a logic that runs from the simple to the complex in post-Lavoisian chemistry (Bensaude-Vincent & Simon, 2008). Conversely, the holistic energy approach used compounds to explain the properties of the elements. In this respect, atomism had a weak explanatory power because it could not completely illuminate chemical processes. According to Duhem, chemical formulas could make chemists believe that substances remained unchanged when they entered into combinations, whereas they only existed potentially within them (Duhem, 1902). Joseph Earley has recently proposed an argument along the same lines. He uses the example of sea water, in which salt and water cease to exist in their actual states – because of solvation, for instance – but can be reproduced by distillation (Earley, 2007). When the ‘mixt’ ceases to exist, it is made to reproduce its separate constituents, as Venel might have asserted. In this respect, water and salt potentially exist in sea water but do not actually exist within it. Duhem then undertook to retranslate Aristotle’s concept of power into that of the thermodynamic potential (Duhem, 1902). Measurable properties and mathematics allowed him to describe chemical reactions within the context of thermochemistry.

Duhem rejected both the idea of valence taken as an intrinsic atomic property and the concept of atomicity. According to him, only the whole compound could give rise to valence information, not the contrary. The opposition between a holistic approach to chemical bodies on the one hand and the aggregative atomic description on the other would prove of primary importance at the very beginning of quantum chemistry. I propose to study how Linus Pauling and Robert Sanderson Mulliken created the first quantum chemical approaches in the context described above, and how they integrated thermodynamics and quantum mechanics into chemistry.

3.2. The ‘mixt’ and the aggregate: A framework for the embodiment of thermodynamics into quantum chemistry?

Both standardization and precision were required if thermodynamic bond measurements were to play a significant role in calibrating innovative methods and stabilizing new theories about affinity as well as about valence or the chemical bond (Servos, 1990). The Russian-Polish chemist Wojciech Swietoslawski played a leading role in this challenge (Médard & Tachoire, 1994). His work provided chemists with more accurate average bond energies that legitimized calculations of heats of reaction. Weininger clearly shows how those thermodynamic data made researchers come to grips with valence within the atomist conception. He points out, for instance, how Morris Kharasch used Niels Bohr’s orbit model to propose a physical picture of thermodynamic quantities. This heuristic approach, validated by Swietoslawski’s data, enabled him to derive heats of combustion for hydrocarbons in quite good agreement with experiment (Weininger, 2001). But it was Linus Pauling who succeeded in bridging valence, atomic theory and thermochemistry.

Pauling’s work constitutively entangled thermodynamics with the Pauli Exclusion Principle, Heisenberg’s and Dirac’s approach to resonance, structural chemistry, and Born’s probabilistic description (Pauling, 1928). We should bear in mind that he was first trained as a crystallographer in order to understand the way he shaped the crowded experimental and theoretical network that was the Valence Bond Theory. The use of both accurate thermodynamic and crystallographic data enabled Pauling to notice that the sum of the covalent radii of the bonded atoms approximated bond lengths very well. He then linked bond energies with experimental heats of formation of gaseous molecules (Pauling, 1932). The key step was to choose a set of molecules that could supply the data necessary for extracting those bond energies (Weininger, 2001). This approach allowed him to express the total energy of formation of a molecule as a mere sum of energy terms characteristic of the different bonds, assuming that the molecule was obtained from separate atoms (Pauling, 1932). The referent molecules only had to have a single Lewis electronic structure (Pauling & Sherman, 1933a, 1933b). Atoms are the basic units of Pauling’s system, and this atomic standpoint shaped the way he used thermodynamic data.
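To make this additivity scheme concrete, here is a minimal sketch, in Python, of the kind of bookkeeping it implies. The bond-energy values are rounded textbook-style averages quoted for illustration only; they are not Pauling’s original numbers, and the helper function is hypothetical.

```python
# A sketch of the additive bond-energy scheme described above (in the
# spirit of Pauling & Sherman): the energy of formation of a molecule
# from its separate atoms is taken as a mere sum of characteristic
# bond energies. Values are illustrative averages in kcal/mol.
BOND_ENERGY = {"C-H": 99.0, "C-C": 83.0, "O-H": 111.0}

def formation_energy(bonds):
    """Total energy of formation from separate atoms, as a sum of
    bond-energy terms (the 'aggregate' picture of the molecule)."""
    return sum(BOND_ENERGY[b] * n for b, n in bonds.items())

print("CH4 :", formation_energy({"C-H": 4}), "kcal/mol")            # 396
print("C2H6:", formation_energy({"C-H": 6, "C-C": 1}), "kcal/mol")  # 677

# For molecules with a single Lewis structure such sums come close to
# experimental atomization energies; the systematic deviation found for
# conjugated molecules is what Pauling interpreted as resonance energy.
```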

To understand Pauling’s molecular description, one needs: (1) to connect the molecular structure to its constitutive atoms; (2) to study how those atoms interact from within the molecule. This model retains the integrity of the atoms inside the molecule: a molecule is considered as an aggregate of atoms. Each atom has stable atomic orbits – 2s, 2p for instance – that will be used to form stable bonds inside a molecule or to induce ad hoc directed valence (Pauling, 1931; Slater, 1931). Pauling stated that bonds resulted from the overlapping of two atomic eigenfunctions: the larger the overlap, the stronger the bond.

The study of diatomic molecules enabled Pauling to propose the concept of the ‘normal covalent bond’ and to express what he called the ‘normal’ covalent molecular wave function as a mere sum of covalent and ionic terms, so as to provide his electronegativity concept with a quantum counterpart (Pauling, 1932). Thermochemistry was once again a touchstone for the validity of this quantum mechanical treatment of chemical bonding; it was not just a mere tool to calibrate methods. Empirical data really aroused Pauling’s creativity and guided him in adapting his quantum work. By applying the rules for the electron-pair bond, Pauling removed the apparent incompatibility between chemistry and quantum theory (Gavroglu & Simões, 1994). Pauling answered the concerns of chemists more directly by stressing the three-dimensional structure of molecules, the electrons being the bonding officers of the atoms. The valence bond approach which he developed with Slater was more quickly acknowledged by chemists because resonance corresponded to their usual representations and structural formulas (Llored & Bitbol, 2010).

Mulliken proposed a very different quantum approach based on molecular spectroscopy. Against the concept of valence considered as an intrinsic property of the atom, Mulliken set the notion of an ‘energy state’ deduced from molecular spectra on the basis of an electronic configuration, i.e., of a distribution of the molecular electrons in different orbits. In this description, each orbit is delocalized over all the nuclei and can contribute, depending on each specific case, a stabilizing or destabilizing energy contribution to the total energy of the molecule (Llored, 2010). The sum of the energy contributions of each electron in its orbit determined whether the electronic configuration allowed for the existence of a stable molecule, i.e., whether its energy was stabilizing overall. For Mulliken, the atom did not exist as a component in a molecule. His concept of molecular state suggested a molecular variability of energy and geometry that could not even be considered within the approaches of Lewis and Irving Langmuir. Mulliken proved that the spectral states of molecules could be obtained from those of their molecular ions by the mere addition of an electron without changing the quantum numbers, and thus worked out his molecular Aufbauprinzip (Llored, 2010). This close connection between quantum theory and spectral studies gave birth to the correlation diagrams of 1932 (Mulliken, 1932b). Those diagrams made it possible to consider the degree of likeness between a molecule and its separated atoms or its united atom – a fictitious atom obtained by the coalescence of the two atoms, such as helium He for two hydrogen H atoms – thanks, in particular, to empirical knowledge of the internuclear distances, dissociation energies, and the charges of the nuclei. The molecule from then on was considered as a composite, i.e., a new entity rather than a mere aggregate of individualized atoms. He wrote: ‘In the “molecular” point of view advanced here, the existence of the molecule as a distinct individual built up of nuclei and electrons is emphasized, whereas according to the usual atomic point of view the molecule is regarded as composed of atoms or of ions held together by valence bonds. From the molecular point of view, it is a matter of secondary importance to determine through what intermediate mechanism (union of atoms or ions) the finished molecule is most conveniently reached. It is really not necessary to think of valence bonds as existing in the molecule.’ (Mulliken, 1931). Despite their irreducible differences, Duhem’s thermodynamic potential echoed the electronic states developed by Mulliken insofar as both considered a molecule, from an energy standpoint, as a ‘mixt’ and not as an ‘aggregate’. The ‘electronic state’, the ‘binding capacity’, the ‘promotion’ of an electron, the ‘energy-bonding-power’: these are among the many concepts Mulliken built to explain the capacity of electrons to be linked to nuclei to form a molecule seen as a whole (Harré & Llored, 2011).

The semantic shift from the concept of molecular orbit to that of the molecular orbital – MO – occurred in 1932. The concept of orbital took all its significance from Max Born’s probabilistic interpretation, according to which the square of a molecular orbital corresponds to the probability density of finding an electron at a certain location in space. Mulliken wrote: ‘By an atomic orbital is meant an orbital corresponding to the motion of an electron in the field of a single nucleus plus other electrons, while a molecular orbital corresponds to the motion of an electron in the field of two or more nuclei plus other electrons. Both atomic and molecular orbitals may be thought of as defined in accordance with the Hartree method of the self-consistent field, in order to allow so far as possible for the effects of other electrons than the one whose orbital is under consideration.’ (Mulliken, 1932a).

At the very beginning of his investigations, Mulliken mainly used molecular spectroscopy data. He seldom referred to thermochemistry except for necessary calibration requirements. It is nevertheless important to notice that thermodynamics was influential when he envisaged the study of larger molecules by using group theory. I think it is important not only to check whether his holistic molecular conception changed the way thermodynamics became involved in quantum chemical works, but also to compare it to Pauling’s own use of thermal data.

Mulliken’s studies of hyperconjugation are a relevant case study for grasping the role and status of thermodynamics in such a quantum chemical background (Mulliken et al., 1941). Mulliken’s calculations, taken in connection with thermal and bond-distance data, indicated the conjugating power of chemical groups such as the landmark methyl group. With respect to strength and stability, he could then label the single and multiple bonds of a conjugated system as acceptor and donor bonds, respectively. The thermal data allowed him to postulate that the hyperconjugation energy of saturated hydrocarbons was, to a good approximation, a function only of the numbers of different types of bonds. Using localized and non-localized molecular orbitals, he described the conjugation or resonance energy as the energy of delocalization. In order to carry out approximate quantitative calculations, he wrote the molecular orbital as a Linear Combination of Atomic Orbitals – LCAO – within the Hartree-Fock self-consistent field approach – labelled LCAO MO SCF.

Unlike Pauling, he systematically used heats of combustion rather than bond energies, referring to Kharasch and W.G. Brown’s corrected tables, mainly constructed by using hydrogenation heat data. Mulliken et al. wrote: ‘Our procedure for deriving conjugation energy from thermal data is similar to that of Pauling and Sherman who, assuming additivity of bond energies (with corrections for special groups), compute energies of formation and interpret deviations therefrom as resonance energies. However, we shall work with heats of combustion.’ (Mulliken et al., 1941).

Heats of combustion enabled Mulliken to put forward formulas for calculating conjugation energies that fitted the available consistent data for gaseous saturated hydrocarbons – except methane – with considerable accuracy, mostly better than 1 kcal. The current practice of research then involved a rich set of corrections within which quantum formalism, approximations, chemical knowledge and thermochemistry were deeply intertwined in order to create a stabilized composite knowledge of conjugation energy for particular types of molecules. For instance, Mulliken tailored Lennard-Jones’s curves to make them fit the empirical data; he then determined wave function coefficients by defining and substituting new parameters in the secular determinant, and finally extracted from the computed conjugation energies some energy quantities – the third-order conjugation energy – to make a direct comparison with the observed conjugation energy. By trial and error, a host of other corrections and readjustments enabled him to determine the total conjugation energy and to compare it to thermodynamic outcomes. Mulliken et al. wrote (p. 56): ‘Perhaps the most uncertain feature of our analysis is the derivation from thermal data. (...). Our empirical parameters, our bond order curve, and our numerical conclusions would then be so strongly altered, since they are decidedly sensitive to variations in the empirical conjugation energies to which they are fitted. Nevertheless, their self-consistency gives a distinct support to our numerical results, since we have found that such self-consistency is not easy to attain.’ (Mulliken et al., 1941). The authors called for more accurate thermal and bond-distance data; this research entered an open-ended circle of refinements that linked calculations with empirical data. It is important to notice that this work led the authors to provide Hückel’s resonance parameter ‘β’ with a new interpretation that allowed a more satisfactory understanding of energy interactions within unsaturated molecules. This theoretical accommodation was then confirmed by spectroscopic data. Thermodynamics not only took part in a motley complex of scientific practices that made it possible for a quantum chemist to calculate molecular properties and to predict chemical reactivity, but it also partly altered the meaning of the theoretical quantum background. I wish to emphasize that thermodynamics was not a mere tool for calibrating a semi-empirical method but a constitutive, active part of a technoscientific network that Mulliken and others shaped to study a molecule understood as a ‘mixt’.

In addition to this conclusion, there are other interesting facts we should take a look at. Mulliken and Parr studied the decrease in ‘π’ electron energy for the change from a Kekulé to a proper benzene structure by using a completely theoretical method (Mulliken & Parr, 1951). In order to make a comparison with the ordinary empirical resonance energy, they had to make several corrections that involved: (1) the ‘compressive energy’ needed to adjust the lengths of the single and double Kekulé bonds to those of the proper benzene; (2) hyperconjugation and related effects. They discussed the corrections and estimated their magnitudes before concluding that a reliable value could only be obtained for the compression energy. Following this line of reasoning, they determined that the computed net resonance energy was 36.5 kcal. This outcome agreed, within the uncertainties due to the omitted correction terms, with the value of 41.8 kcal for the empirical resonance energy ‘Δ’ based on thermodynamic data. They then used ‘Δ’ as the point of departure for the calculation of the actual heat of formation ‘ΔHf’ of benzene from the value given by a standard formula for nonresonating hydrocarbons. They proposed a new standard formula containing corrections for the mutual effects of neighboring carbon-carbon bonds while discussing its significance. This analysis allowed them to clarify what was meant by ‘resonance energy’ and to query the significance of ‘nonresonating’ structures and repulsion terms in their own theory. They always sought to identify the conditions that made it possible for a chemist to make a clean-cut comparison between theory and experiment. In that light, thermodynamic data guided the way they wrote equations relating theoretical energy quantities to a sum of empirically based terms. This work allowed them to define new useful concepts, such as the ‘standard hydrocarbon’ – held with Δ = 0 kcal – that fostered calculations and comparisons. To sum up, they continually queried their model and its meaning. Thermochemistry, quantum chemical methods, chemical practices and culture, computers, and instruments were constitutively intertwined, and they were interactively stabilized. Modelling is an open-ended process that includes thermochemistry as a foundation for creating a new quantum account of a molecular ‘mixt’. As Andrew Pickering asserts: ‘Existing culture constitutes the surface of emergence for the intentional structure of scientific practice, and such practices consists in the reciprocal tuning of human and material, tuning that can itself reconfigure human intentions. The upshot is, on occasion, the reconfiguration and extension of scientific culture.’ (Pickering, 1995). The dialectics of resistances and accommodations between thermochemistry and the quantum chemical model made Mulliken continuously recast his approach so as to stabilize a whole set of tables and concepts about molecular properties. He produced a great number of tables throughout his academic life. From spectroscopic to conjugation-energy tables, as well as from correlation diagrams to Mulliken-Walsh ones, he knitted a network of data thanks to a constitutive interaction between theory and experiment.
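The secular-equation arithmetic behind such resonance-energy discussions can be made concrete with a small sketch. The following Python fragment solves the simple Hückel model of benzene, a textbook reconstruction rather than Mulliken and Parr’s actual procedure, and recovers the classic delocalization energy of 2|β|.

```python
import numpy as np

# Simple Hückel model of benzene: six 2p orbitals on a ring,
# H[i][j] = alpha on the diagonal and beta between bonded neighbours.
# Energies are measured relative to alpha in units of |beta|
# (alpha = 0, beta = -1 below).
n = 6
H = np.zeros((n, n))
for i in range(n):
    H[i, (i + 1) % n] = H[(i + 1) % n, i] = -1.0   # beta

levels = np.sort(np.linalg.eigvalsh(H))            # orbital energies
pi_energy = 2.0 * levels[: n // 2].sum()           # 6 pi electrons, paired

# Kekulé reference: three isolated double bonds, each holding two
# electrons at energy alpha + beta.
kekule = 3 * 2 * (-1.0)

print("pi energy of benzene   :", pi_energy)             # -8.0 (6*alpha + 8*beta)
print("Kekulé reference energy:", kekule)                # -6.0 (6*alpha + 6*beta)
print("delocalisation energy  :", pi_energy - kekule)    # -2.0, i.e. 2*beta
```

Equating this theoretical 2|β| with an empirical resonance energy of a few tens of kcal/mol is precisely the kind of step through which thermal data gave the parameter β its numerical content in such semi-empirical schemes.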

I claim that this difference of practice between Pauling and Mulliken was in a way a consequence of the two conceptual schemes at stake. On the one hand, Pauling’s aggregative approach focused on a reified chemical bond resulting from the sharing of valence electrons; Pauling was indeed interested in the energy of formation of a molecule from its parts. On the other hand, Mulliken used combustion data because he considered the way the ‘whole’ molecule reacted and released energy by thermal transfer in the presence of other chemical reactants and their surroundings. Pauling’s bottom-up analysis stood in sharp contrast to Mulliken’s holistic way of thinking. This statement must nevertheless be qualified, insofar as we should wonder whether pragmatic reasons were also at stake in this choice of data: the corrected heats-of-combustion tables were probably more useful to Mulliken than other data.

At that time, chemical affinity turned out to play no role in the integration of thermodynamics into quantum methods, simply because researchers no longer considered it a challenge to be faced. On the contrary, the duality of the two conceptions of matter was still at work and underpinned the way Mulliken and Pauling used thermochemistry while doing quantum chemistry. So I emphasize that the way thermodynamics became involved in quantum chemistry partly depended on different human stories and skills – Pauling was first a chemist and crystallographer whereas Mulliken was trained as a chemist and spectroscopist; others were mathematicians, organic chemists, and so on. But it also depended on different representations of matter – the aggregate and the ‘mixt’. Practices of research, human skills and goals, human and non-human agency, time, concepts and representations all interactively took part in the integration of thermodynamics into the early quantum realm.

Before I move on to modern quantum chemistry, I would like to further examine the relation between earlier quantum methods and thermodynamics by querying the concept of ‘state’, be it electronic, quantum or thermodynamic.

3.3. The concept of ‘state’ and the relation between quantum chemical methods and thermodynamics

Quantum chemistry is the result of a deep entanglement of scientific and human practices within which thermodynamics was an active generator of concepts and a tool for method calibration. If we want to query the role and status of thermodynamics in quantum chemistry, it is necessary to consider the practices of research from which quantum chemical methods originate, i.e., the techno-scientific closure which combines quantum mechanics, approximations, instrumental and algorithmic techniques, chemical know-how, and the use of principles which do not belong to quantum theory proper, such as the Pauli Principle. The predictive capacity of these quantum chemical approaches does not rely only on the molecular wave function but also on a host of approximations and compromises that make it possible for numerical properties and molecular landscapes to be calculated (Llored, 2010, 2012).

It is of interest to point out that the quantum formalism gives rise to miscellaneous quantum chemical approaches depending on both chemical cultural resources and practical scientific backgrounds. It is astonishing, however, to notice that an atomic approach such as Pauling’s could have been successfully developed on quantum grounds. The notion of atomic parts within a molecule is indeed deprived of meaning in quantum mechanics. Mulliken’s holistic approach seems much more understandable within a holistic, contextual and non-representationalist quantum theory. The final results reached by those methods are not pure applications of quantum physics. This is a crucial point to bear in mind.

Let us deepen our study of Mulliken’s molecular orbital framework to illuminate its fine-grained relation with thermodynamics. Mulliken first worked on the couplings between orbital angular momenta and spin suggested by Friedrich Hund. In 1927, Hund developed an approach radically different from the work of Walter Heitler and Fritz London and generalized Øyvind Burrau’s study to diatomic molecules. Rather than building a molecular wave function from those describing isolated atoms, he proposed to describe each electron in the total molecular electric field of the nuclei and the other electrons. Hund focused on the evolution of the electronic energy during the transfer from an orbit around the joined nuclei to an orbit around the separate atoms isolated from each other. On the basis of works developed by Erwin Schrödinger, Pascual Jordan and Max Born, Hund was able to describe the exact stationary states of the two subsystems, knowing those of the system, by using linear combination. He wrote: ‘We investigate a system with one degree of freedom as an analogue for a molecule with several atoms, using quantum mechanics. Its potential energy has several minima. We can relate the stationary states of such a system to those of partial systems that result when the separation between the minima becomes infinite or when the potential energy separating them becomes infinite. In agreement with this (and in opposition to the classical theory) we obtain an adiabatic relation between the states of two separated atoms or ions, the states of a two-atomic molecule and the states of the atom that would result when the nuclei are united. This relation allows for a qualitatively valid term system of the molecule and for an explanation of the terms ‘polar molecule’ and ‘ion lattice’.’ (Hund, 1927). The new quantum theory thus allowed him to explain the adiabatic passage between two stationary states of the same system. Hund made this result suitable for the study of molecules and proposed an interpolation between the quantum states of the isolated atoms, the united atom and the molecule. Hund further added: ‘The complete transition from the case of nuclei separated by a large distance to the case of a small separation cannot be done adiabatically in the classical model. If we start in the case of nuclei separated by a large distance with some given quantum numbers, then we first arrive at orbit type II, but for a certain internuclear distance this type is no longer possible. The classical motion becomes a limiting motion. The same occurs when we approach from the other side, with nuclei placed close together; for a certain distance between the nuclei, orbit type I becomes impossible and the motion becomes a limit. An adiabatic transition going over the limiting case is not possible because of the vanishing frequency.’ (Hund, 1927).

Within the framework of thermodynamics, a system undergoes an adiabatic process if it does not exchange any thermal energy – any heat – with the outside; it can exchange only work. In mechanics, an adiabatic process is characterized by the fact that, under infinitely slow changes of the external parameters, the system evolves through successive states of equilibrium. In this kind of process some quantities remain invariant; physicists call them adiabatic invariants. The adiabatic hypothesis, originally developed by Paul Ehrenfest, holds that the quantum conditions must always be such that the adiabatic invariants of classical mechanics are equal to an integer multiple of the quantum of action. One can infer the values of the states of a system from the quantum states of another system that can be reached by an adiabatic transformation. The difficulty related to the conservation of quantities when changing orbits, evoked by Hund, disappears when the problem is studied within the framework of quantum theory. We realize that, beyond the semantic diversity of words such as ‘state’ or ‘adiabatic’, what is at stake is the way quantum physics can encompass classical physics as a limiting case in precise contexts. Researchers were inventing a new quantum chemical scheme while using general scientific and linguistic devices to link it with different previous theories. The notion of ‘state’ related to the ‘equilibrium state’ involved in thermodynamics is not tantamount to that of a ‘quantum state’, which only provides scientists with the calculation of the probability of each set of ‘observables’ within a precise experimental context (Bitbol, 1998). The quantum state is related to a predictive symbolism that enables scientists to study holistic systems constitutively entangled with apparatus, that is to say systems whose study cannot be separated from the context of measurement. Thermodynamics and quantum chemistry are nevertheless both holistic; the former is descriptive at a macroscopic level, the latter predictive at a microscopic one. In this respect, it is not surprising that scientists have tried, and still try, to bridge those approaches across what we call different levels of our universe. What may the link between the two levels be? What are the necessary pre-conditions for tuning them? What may be the link between a quantum energy study of a molecule understood as a ‘whole’ at the microscopic level and the energy of a set of molecules at the level described by thermodynamics?

Dealing with the relations between a molecule and its parts, G.K. Vemulapalli noticed that: ‘While properties of the whole are not the sums or products of the properties of parts, the states of the system can be obtained by adding the states of parts. Because properties in turn can be derived from the states, it appears that we have shown that properties of wholes are completely determined by parts. But there are two problems here. (1) It is true that the states of the system are composed of states of the parts, but there are also weighting factors in the composition. There are the constants λ in the linear combination. What factors determine these constants? (2) Just as in the molecular wave function, an atomic wave function may also be represented by a sum of an arbitrary set of functions. Thus one may claim that an atomic function is a linear combination of molecular functions or atomic states (parts) reduced to molecular states (wholes!).’ (Vemulapalli, 2003). If we set aside the fact that the notion of properties is open to criticism in quantum contexts, together with the linguistic traps related to it, the author’s insight is relevant for querying the interrelation between the levels of description studied by quantum chemistry.

The arbitrary character of the relation between the whole and its parts is thus highlighted. It remains more present than ever in current semi-empirical or ab initio methods of molecular orbital calculation, which depend on the choice of the atomic or molecular orbitals used. Mulliken developed the fragment method in 1933: two fragments could interact provided they had the same kind of symmetry and that the energy gap, measured by spectroscopy, was not too high. For the ethylene molecule ‘C2H4’, Mulliken considered two ‘CH2’ fragments and determined a suitable molecular orbital by using the irreducible representations of ethylene. He could thus propose a representation of the molecular orbitals of ethylene in increasing order of energy, as well as its correlation diagram, thanks to those of the two fragments. In doing so, he included all the characteristics of the molecular orbital diagram of the ethylene molecule and checked it using molecular spectroscopy. Mulliken could just as easily have considered a fragment ‘C2’ and another ‘H4’ of adapted symmetries. The relation of the whole ‘C2H4’ to its parts was of secondary interest; the fundamental choice relates to the nature and the extent of the basis sets of the calculation. Vemulapalli threw light on the role of the weighting coefficients appearing in front of the orbitals of the chosen basis sets. These coefficients, determined by the Variational Principle, are those which minimize the molecular potential energy. How is this minimum of energy justified? What can explain the use of the Variational Principle? A quantum principle?
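Before answering, it is worth seeing concretely what the Variational Principle does with these weighting coefficients. The sketch below, whose matrix elements are illustrative numbers rather than computed integrals, solves the two-function secular problem H c = E S c; the lowest eigenvalue is the variationally optimal energy and the corresponding eigenvector contains Vemulapalli’s constants λ.

```python
import numpy as np
from scipy.linalg import eigh

# Two-basis variational problem: psi = c1*phi1 + c2*phi2. The weighting
# coefficients (Vemulapalli's "constants lambda") are fixed by the
# Variational Principle, i.e. by solving the generalized eigenvalue
# problem  H c = E S c, whose lowest eigenvalue is the best (lowest)
# energy reachable in this basis.
# Matrix elements below are illustrative numbers, not computed integrals.
H = np.array([[-0.60, -0.45],
              [-0.45, -0.60]])    # <phi_i|H|phi_j>, hypothetical values
S = np.array([[1.00, 0.65],
              [0.65, 1.00]])      # overlaps <phi_i|phi_j>, hypothetical

E, C = eigh(H, S)                 # variational energies and coefficients
print("lowest variational energy:", E[0])     # (H11 + H12)/(1 + S12) here
print("weighting coefficients   :", C[:, 0])  # the symmetric 'bonding' mix
```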

Vemulapalli referred to the second law of thermodynamics to explain why the studied molecular system continuously eliminates its excess energy through interactions with its environment. This transformation of energy into local entropy production legitimates the use of the Variational Principle. Vemulapalli added: ‘Thus we are led to conclude that it doesn’t matter what the states of the parts are, but it does matter that the surroundings soak up the excess energy of the molecule, increasing entropy, and make the molecule settle down into the lowest energy state. It is that part of the universe coupled to the system, and the varieties of interactions between the system (molecules) and the surroundings, that determines the structure of the molecule. Holism thus appears as the root of the apparent reduction of properties of a molecule to its parts through coupling states. We are able to follow a reductionist program in calculating molecular properties, but what we are able to do is a gift of holism.’ (Vemulapalli, 2003). A molecule is always in relation with its surroundings; it can at least emit a photon, even in a high vacuum. So the study at the molecular level requires a study of interactions at an upper level, while microscopic descriptions require quantum predictions. Levels of description need one another; they are co-stabilized. The Variational Principle that underpins Mulliken’s work at the molecular level can thus find a justification within the context of thermodynamics. It is an a posteriori analysis that allows us to widen our understanding of the possible links between thermodynamics and quantum chemistry from another point of view, that of inter-level relations.

To sum up, we have focused on the way thermodynamics was used within the early quantum chemical methods. We have shown that the opposition between the ‘aggregate’ and the ‘mixt’ was still at stake when explaining the integration of thermodynamics into quantum chemistry. Taking our distance from linguistic traps concerning words such as ‘state’ or ‘adiabatic’, and reflecting upon the relations between levels of scientific description – a molecule and its alleged constitutive atoms, or the macroscopic and microscopic scales – we confirm that epistemology can provide us with another kind of understanding of the interrelations between thermodynamics and quantum chemistry. I would like to turn now to modern quantum methods and to examine how they involve thermodynamics. I choose to develop the example of density functional theory – DFT – which has been widely used in research laboratories for twenty years.


4. The role and status of thermodynamics in modern quantum chemistry

Kohn–Sham density functional theory has become one of the most popular tools in electronic-structure theory due to its excellent performance-to-cost ratio compared with correlated wave function theory – WFT. Within this theory, the molecular space is divided into grids of cubes, and researchers define an electronic density at each point of this space. It is a holistic approach that enables quantum chemists to calculate the molecular geometry or the total energy exhaustively from the electronic density ‘ρ(r)’, provided that the ground state is not degenerate. The total energy is consequently a functional of the electronic density, that is to say a function whose basic variable is the electronic density function (Kohn et al., 1996). Several authors have applied the Variational Principle to the total energy with the purpose of determining the exact electronic density that minimizes it. Approximations are required because the exact electronic density cannot be reached. The accuracy of a DFT calculation depends upon the quality of the exchange-correlation – XC – functional. This functional accounts for the exchange-correlation energy term – EXC. This energy contains not only the non-classical effects of self-interaction, exchange and correlation, which are contributions to the potential energy of the system, but also a portion belonging to the kinetic energy. The past two decades have seen remarkable progress in the development and validation of XC density functionals.

The first generation of functionals is called the local spin density approximation – LSDA – in which density functionals depend only on the local spin densities. Although LSDA gives accurate predictions in solid-state physics, it is not a useful model for chemistry due to its severe overbinding of chemical bonds and underestimation of barrier heights. The second generation of density functionals is called the generalized gradient approximation – GGA – in which functionals depend both on the electronic density and on its gradient. GGA functionals have been shown to give more accurate predictions for thermochemistry than LSDA ones, but they still underestimate barrier heights (Truhlar & Zhao, 2008a). In third-generation functionals, a term depending on the Laplacian of the density (or on the kinetic-energy density) is added to the functional form; such functionals are called meta-GGAs. LSDAs, GGAs, and meta-GGAs are ‘local’ functionals because the electronic energy density at a single spatial point depends only on the behavior of the electronic density and kinetic energy at and near that point. Local functionals can be mixed with nonlocal Hartree–Fock – HF – exchange, as justified by the adiabatic connection theory (Becke, 1993). Functionals containing HF exchange are usually called hybrid functionals, and they are often more accurate than local functionals for main-group thermochemistry (Truhlar & Zhao, 2008a, 2008b). This field of research aims at creating new density functionals with broader applicability to chemistry by including, for instance, non-covalent interactions. The crucial step is the calibration of new functionals against benchmark databases or best theoretical estimates (Goerigk & Grimme, 2010). Let us consider a case study developed by Truhlar and Zhao in order to understand the role and status of thermochemistry in such a current context.
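To give a concrete flavour of what a ‘functional of the density’ computes, here is a minimal sketch of the simplest first-generation ingredient, the local (Dirac/Slater) exchange term, evaluated on a numerical grid as DFT codes do. The radial grid and the hydrogen 1s test density are choices of convenience, and the spin-unpolarized formula is used for simplicity.

```python
import numpy as np

# Dirac/Slater local exchange, the exchange ingredient of LSDA:
#   Ex[rho] = -Cx * Integral of rho(r)**(4/3) over space,
#   with Cx = (3/4) * (3/pi)**(1/3)  (atomic units).
# Test density: the exact hydrogen 1s density, rho(r) = exp(-2r)/pi.
Cx = 0.75 * (3.0 / np.pi) ** (1.0 / 3.0)

r = np.linspace(1e-6, 20.0, 200_000)               # radial grid (bohr)
rho = np.exp(-2.0 * r) / np.pi
integrand = 4.0 * np.pi * r**2 * rho ** (4.0 / 3.0)

Ex = -Cx * np.trapz(integrand, r)
print("LDA exchange energy of H(1s): %.3f hartree" % Ex)   # about -0.21

# The exact exchange energy of the hydrogen atom is -5/16 = -0.3125
# hartree, the value needed to cancel the Hartree self-repulsion exactly;
# the gap illustrates the self-interaction error of local functionals
# discussed below.
```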

The most popular density functional, ‘B3LYP’, a hybrid GGA, has some serious shortcomings, among which is its underestimation of barrier heights by an average of 4.4 kcal/mol over a database of 76 barrier heights. This underestimation is usually ascribed to the self-interaction error (the unphysical interaction of an electron with itself) in local DFT (Truhlar & Zhao, 2008a). Truhlar and Zhao change parameters and include new ones while shaping a new mathematical functional form that takes physical phenomena into account. In so doing, they design a new functional by trial and error. They then use databases to appraise the reliability of a new functional for a defined purpose. Two databases gather all the thermodynamic quantities: (1) the database ‘TC177’ is a composite database consisting of 177 data for main-group thermochemistry, including atomization energies, ionization potentials, electron affinities, proton affinities of conjugated polyenes, and hydrocarbon thermochemistry, among other data; (2) ‘DBH76’ is a database of 76 diverse barrier heights concerning, for instance, nucleophilic substitution and hydrogen transfer. Truhlar and Zhao then discuss the performance of new functionals on these databases; they conclude that the functionals labeled ‘M06-2X’ and ‘M05-2X’ are the ‘best performers’ for main-group thermochemistry and barrier heights. They propose case studies to exemplify their statement. The isomerization energy of octane involves stereoelectronic effects; none of the previous functionals gives the right sign for the isomerization energy from 2,2,3,3-tetramethylbutane to n-octane. The functional ‘B3LYP’ gives an error of 10 kcal/mol while ‘M05-2X’ predicts the right sign because the latter allows a better description of medium-range XC energies, which are manifested here as attractive components of the non-covalent interaction of geminal methyl and methylene groups (Truhlar & Zhao, 2008a). On the basis of 496 data in 32 databases, they recommend different ‘best functionals’ designed for transition-metal thermochemistry, main-group thermochemistry, kinetics, and non-covalent interactions.

Choosing a functional of the electron density depends upon: (1) the necessary accuracy; (2) the chemical system; (3) the time of calculation. It also requires choosing, for each atom, a set of functions called a basis in order to achieve the calculations. The basis sets change according to the type of atoms and to different effects such as diffusion, polarization, pseudopotentials for core electrons, and the size of the functions – double or triple zeta. The functional and its relative basis set define a level of calculation, the processing of which requires choosing a computer program such as Gaussian or Turbomole. If the calculations do not converge, researchers can change the functional, the size of the grids and the convergence thresholds in order to optimize the geometry or to calculate the molecular energy. Each step reveals know-how, chemical culture and pragmatic compromises. Notwithstanding their basic differences, the ways thermochemistry is involved within the molecular orbital approximation and the DFT approach are quite similar. Modeling includes thermochemistry as a tool for calibration but also as a heuristic guide for adjusting theoretical parameters inside functionals or wavefunctions, or for designing new quantum methods (Grimme et al., 2007). The structure within which calculations are made is well framed by the Variational Principle. We thus realize that thermodynamic quantities partly shape current quantum practices of geometry optimization and calibration. Calculations help researchers to find the energy surface associated with a particular chemical reaction. The knowledge of the minimum points on an energy surface makes it possible for a chemist to interpret thermodynamic data. Besides, thermodynamics can retroactively justify the minimization of energy, as we have already explained. Thermodynamics and energy surfaces are thus interconnected to determine transition structures and reaction pathways. Modelling structural configurations is important in this context, and the quantum calculation of entropy plays a leading role in such descriptions and predictions.

Before I conclude, I would like to focus on a last case study to widen and deepen my enquiry. Let us consider how thermodynamic quantities are used to model solvation effects and to scrutinize a chemical reaction mechanism within a DFT calculation background. I will refer to a study of the reactivity of zinc-thiolate complexes depending on the zinc ligands (Picot et al., 2008). Some calculations are shaped by thermodynamic quantities especially designed for the quantum context, that is to say quantities that do not exist in classical thermodynamics. This is typically the case of the zero-point vibrational energy, labeled ‘ZPVE’. The molecular vibration energy is not equal to zero at absolute zero – 0 K; it is a quantum mechanical effect which is a consequence of the Uncertainty Principle. Once a stationary point is localized, be it an energy minimum or a transition state, its electronic energy turns out to be lower than the experimental energy of the molecule. For comparison with experimentally obtained thermochemical data, the zero-point vibrational energy is required to convert total electronic energies obtained from ab initio quantum mechanical studies into 0 K enthalpies. The currently accepted practice is to employ self-consistent-field harmonic frequencies that have been scaled to reproduce experimentally observed fundamental frequencies (Grev et al., 1991). This procedure introduces systematic errors that result from a recognizable flaw in the method, namely that the correct ZPVE – G(0) – is not one half the sum of the fundamental vibrational frequencies. The use of scaling factors is therefore required (Grev et al., 1991); they depend upon the level of description and its computer data processing. It is then possible to calculate other thermodynamic quantities related to a chemical reaction, such as the gas-phase Gibbs free energy, from the equation:

$$\Delta G_{\mathrm{gas}} = \Delta E_{\mathrm{elec}} + \Delta \mathrm{ZPVE} + \Delta E_{T} - T\,\Delta S \tag{1}$$

ΔE_elec, ΔZPVE, ΔE_T and ΔS are the differences in electronic energy, zero-point vibrational energy, thermal energy, and entropy between the products and the reactants, respectively (Picot et al., 2008).
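A worked numerical sketch may clarify how equation (1) is assembled in practice; the Python function below applies an empirical scaling factor to the harmonic ZPVE difference, in the spirit of the scaled-frequency practice just described, and all numerical inputs are invented for illustration.

```python
# Hedged sketch of equation (1); all input values are invented placeholders,
# not data from any study cited in this chapter.

def delta_g_gas(d_e_elec, d_zpve, d_e_thermal, d_s,
                temperature=298.15, zpve_scale=0.98):
    """Gas-phase reaction free energy (kcal/mol) from equation (1).

    d_e_elec, d_zpve, d_e_thermal: products-minus-reactants differences in
        kcal/mol, d_zpve being the unscaled harmonic ZPVE difference.
    d_s: entropy difference in kcal/(mol*K).
    zpve_scale: empirical scaling factor; its value depends on the level of
        calculation (cf. Grev et al., 1991), 0.98 being a mere placeholder.
    """
    return d_e_elec + zpve_scale * d_zpve + d_e_thermal - temperature * d_s

# Illustrative call: at 298 K the entropic term -T*dS contributes ~+3 kcal/mol.
print(delta_g_gas(d_e_elec=-12.4, d_zpve=1.1, d_e_thermal=0.6, d_s=-0.010))
```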

The solvation free energy of each compound is determined by model-dependent calculations. This quantity is defined as the amount of energy necessary to transfer a molecule of gaseous solute into the solvent. The crucial step is to appraise how the solvent gets involved in a chemical reaction. Its action can be direct, if some molecules of solvent take part in the chemical process, or indirect, if the solvent (then labeled the 'bulk medium') only modifies the reactivity of the reactants compared with that of the same molecules in the gas phase. Whatever the context may be, the solvation free energy is calculated from the equation (Leach, 2001):

$$\Delta G_{\mathrm{solv}} = \Delta G_{\mathrm{elec}} + \Delta G_{\mathrm{vdw}} + \Delta G_{\mathrm{cav}} \tag{2}$$

ΔG_elec quantifies the electrostatic interaction between the solvent and the solute; it is all the greater as the ionicity or polarity of the solute increases. ΔG_vdw takes into account the Van der Waals interactions between the two. Finally, ΔG_cav quantifies the cost of the cavity occupied by the solute, counting both the reorganization of the solvent around the cavity and the work necessary to overcome the solvent pressure when the cavity is created. It is possible to gather the two last terms within the equation:

$$\Delta G_{\mathrm{vdw}} + \Delta G_{\mathrm{cav}} = aS + b \tag{3}$$

a and b are constants, and S is the area of contact between the solute and the solvent. The different models that enable chemists to calculate ΔG_solv differ mostly in the way they appraise ΔG_elec. From the earlier models developed by Born (1920) and Onsager (1936) to the PCM (Polarizable Continuum Model), the form of the cavity and the treatment of the mutual polarization of solvent and solute were continuously modified and improved (Barone et al., 2004; Cossi et al., 2002). The surface of the cavity is divided into fine-grained fragments labeled 'tesserae', and the wavefunction of the solute is determined by Self-Consistent Field iteration. Two other models were developed, the COSMO (Conductor-Like Screening Model) and the C-PCM (Conductor-Like PCM) approaches. Modeling the interactions between the solute and the solvent remains a challenge for current quantum chemists. In this context, thermodynamic quantities are the heuristic framework that shapes quantum investigations towards better models, as the sketch below illustrates. The calculation of such thermodynamic quantities stirs up: (1) new descriptions and understandings of polarization; (2) the creation of new algorithms and topological models of the cavity (Barone et al., 2004); (3) the continuous recasting of levels of description and software to optimize geometry or to calculate energy quantities (Takano & Houk, 2005); (4) the modelling of the electronic density of the solute, especially outside the cavity.
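The decomposition of equations (2) and (3) can be sketched as follows. The snippet assumes PySCF's ddCOSMO implementation for the electrostatic term, that is, a conductor-like screening model in the COSMO family rather than C-PCM proper, and the constants a and b are placeholders rather than fitted values from any cited model.

```python
# Hedged sketch of equations (2) and (3), assuming PySCF's ddCOSMO solvent
# module for the electrostatic term; a, b, and the contact area below are
# placeholders, not fitted parameters from any cited model.
from pyscf import gto, dft, solvent

mol = gto.M(atom="O 0 0 0; H 0 0.757 0.587; H 0 -0.757 0.587",
            basis="def2-svp")

mf_gas = dft.RKS(mol)
mf_gas.xc = "b3lyp"
e_gas = mf_gas.kernel()                 # gas-phase electronic energy

mf_ref = dft.RKS(mol)
mf_ref.xc = "b3lyp"
mf_solv = solvent.ddCOSMO(mf_ref)       # conductor-like continuum solvent
e_solv = mf_solv.kernel()               # energy with solvent polarization

dg_elec = e_solv - e_gas                # electrostatic term of equation (2)

def dg_nonelectrostatic(area, a=0.005, b=0.001):
    """Equation (3): dG_vdw + dG_cav = a*S + b (placeholder a and b)."""
    return a * area + b

dg_solv = dg_elec + dg_nonelectrostatic(area=180.0)  # invented contact area
```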

It is then easy to express the free energy of a chemical reaction in water using the following classic thermodynamic cycle (Picot et al., 2008):

Figure 1.

Thermodynamic cycle used to calculate the free energy of a chemical reaction in water, combining the gas-phase reaction and the solvation of reactants and products (Picot et al., 2008).

This cycle in turn implies the following formula:

$$\Delta G_{\mathrm{water}} = \Delta G_{\mathrm{gas}} + \Delta G_{\mathrm{solv}}(P) - \Delta G_{\mathrm{solv}}(R) \tag{4}$$
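In practice, the cycle reduces to simple arithmetic once each leg is known; the numbers below are invented placeholders in kcal/mol, not results from the cited study.

```python
# Hedged sketch of equation (4); all values are invented placeholders
# (kcal/mol), not results from Picot et al. (2008).
dg_gas = -7.7              # gas-phase reaction free energy, from eq. (1)
dg_solv_products = -55.2   # solvation free energy of the products
dg_solv_reactants = -49.8  # solvation free energy of the reactants

dg_water = dg_gas + dg_solv_products - dg_solv_reactants
print(f"dG(water) = {dg_water:.1f} kcal/mol")   # -> dG(water) = -13.1 kcal/mol
```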

Let us analyze how those thermodynamic quantities guide Picot et al. during their investigation of the alkylation of zinc-thiolate complexes. This short study will allow us to grasp the role and status of thermodynamics in workaday chemical quantum practices of research.

They first need biomimetic models that are appropriate for both structural and mechanistic studies. Based on the experimental data, they search for a consistent series of zinc complexes in which the ligands, the electric charge, and the availability of hydrogen bonding to the sulfur atom can be varied. They choose the Gaussian 03 software and a level of calculation for the geometry optimizations, using basis sets especially designed for each atom and physical effect (contraction, diffuse, or polarization functions). For each possible mechanistic pathway (see Figure 2 below), they scrutinize each stationary point by frequency analysis. Each transition state (labeled TS1 to TS3 in the mechanisms presented below) was verified by stepping along the reaction coordinate and confirming that the expected transformation occurred.
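A frequency analysis of the kind just described can be sketched as follows; the snippet assumes PySCF's analytic Hessian and thermochemistry helpers (the study itself used Gaussian 03, not this code), and it simply counts imaginary frequencies, of which a genuine transition state has exactly one.

```python
# Hedged sketch of a frequency analysis at a stationary point, assuming
# PySCF's Hessian and thermo modules; the cited study used Gaussian 03.
from pyscf import gto, dft
from pyscf.hessian import thermo

mol = gto.M(atom="O 0 0 0; H 0 0.757 0.587; H 0 -0.757 0.587",
            basis="def2-svp")   # placeholder geometry, not a real TS
mf = dft.RKS(mol)
mf.xc = "b3lyp"
mf.kernel()

hess = mf.Hessian().kernel()                    # analytic nuclear Hessian
modes = thermo.harmonic_analysis(mol, hess)     # normal-mode analysis

# Imaginary modes show up as complex wavenumbers (the exact representation
# may vary across PySCF versions).
n_imag = sum(abs(w.imag) > 1e-6 for w in modes["freq_wavenumber"])
if n_imag == 0:
    print("all-real frequencies: a local minimum")
elif n_imag == 1:
    print("one imaginary frequency: a transition state")
else:
    print(f"{n_imag} imaginary frequencies: a higher-order saddle point")
```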

They then calculate the gas-phase Gibbs free energy, and use the C-PCM model to calculate the solvation free energy within a precise set of levels of calculation. They can finally work out the reaction free energy in the aqueous phase. They assess the adequacy of the chemical modeling and of the level of computation against databases of observed zinc complexes. They thus obtain all the thermodynamic quantities necessary to analyze the chemical reaction (Table 1 below).

Those thermodynamic quantities guide the authors along their line of enquiry. They compare the energy barriers required to reach the transition states in order to elucidate all the influencing parameters, such as the global charge of the complex, the hydrogen bond, the role of the zinc ligands, and that of the solvent. In doing so, they confirm that their computational outcomes are in agreement with several experimental studies.

Figure 2.

Possible mechanistic pathways for the alkylation of a zinc-bound thiolate by methyl iodide. (Picot et al., 2008).

Table 1.

Relative ΔG_gas and ΔG_water in kcal·mol⁻¹ (Picot et al., 2008).

They show, for instance, that the net electronic charge of the complex plays a significant role not only in its reactivity, but especially in the mechanism of thiolate alkylation. They finally discuss the nature of the pathways in light of all those energy considerations. Once again, the geometry and molecular configurations of the transition state are modeled and assumed in order to make those predictions achievable. The entropic contribution is thus of primary importance in querying such potential chemical mechanisms.

Thermodynamics is thus a tool for calibrating levels of computation (Curtiss et al., 1997; Zhao & Truhlar, 2008b), but it also shapes solvation modeling and the basic reasoning of mechanistic investigation (Takano & Houk, 2005). In a way, thermodynamics embeds a wide class of quantum activities of seeking and predicting. It provides quantum chemical methods with necessary conditions for reasoning and for inventing new methods of calculation (Grimme et al., 2010).


5. Conclusion

The study of both earlier and recent quantum chemical methods highlights the way that thermodynamics is intertwined with quantum methods within a large network of scientific practices that includes computation, chemistry, spectroscopy, crystallography, physics, and so on. As Rouse claims concerning scientific practices (1996, p. 177): ‘What results is not a systematic unification of the achievements of different scientific disciplines but a complex and partial overlap and interaction among the ways those disciplines develop over time.’ Chemists connect ways of doing science and transform them within ongoing open-ended processes of research. As we have pointed out, thermodynamics was transmuted into thermochemistry through chemical practices, and conversely chemical instrumentation and ways of modeling were transformed by thermochemistry.

The role of thermodynamics is undoubtedly to validate models and methods while stirring up technoscientific creativity. The status of thermodynamics within quantum chemical methods is that of a reference framework that enables chemists to carry out their semi-empirical calculations or to create new ab initio predictions of thermodynamic data. This conclusion can be widened by considering other methods such as metadynamics, AIM (Atoms in Molecules), and so on.

This study also points out that allegedly incommensurable scientific worlds such as thermodynamics and quantum mechanics, the assumptions, the formalisms and the natures (descriptive or predictive) of which are completely different, can constitutively interact to form the composite field of quantum chemistry. Epistemological queries thus arise concerning inter-level descriptions of what we call 'reality' and the ways scientific fields and knowledge can be mutually stabilized. To this extent, this study also stresses the importance of an epistemology that focuses its attention on scientific practices while including historical insights.

It is interesting to notice that chemical affinities reappear in the latest quantum chemical work. Truhlar and Zhao, among others, refer to affinities (electron affinities, proton affinities of different molecules) in their benchmark databases. Thermodynamics was first introduced into chemistry, as we have shown, because it provided chemists with a quantitative notion of affinity. This concept went astray in the earlier quantum chemical works and then reappeared within the databases and concepts that help current quantum chemists to shape their functionals according to thermochemistry and to investigate chemical reactivity. Further epistemological investigations are needed to open up to scrutiny the reviving role of the concept of affinity in modern chemistry.


Acknowledgments

I would like to thank Rom Harré for his second reading of this paper, his advice and his generosity. I would also like to thank Miss Zgela, the Publishing Process Manager in charge of the book, for her help during the different steps of the publishing process.

References

1. Barone, V., Improta, R. & Rega, N. (2004). Computation of protein pK values by an integrated density functional theory/polarizable continuum model approach. Theoretical Chemistry Accounts, 111, 237-245.
2. Becke, A. D. (1993). Density-functional thermochemistry. III. The role of exact exchange. Journal of Chemical Physics, 98, 5648-5652.
3. Bensaude-Vincent, B. & Stengers, I. (1996). A History of Chemistry. Harvard University Press.
4. Bensaude-Vincent, B. & Simon, J. (2008). Chemistry, The Impure Science. Imperial College Press.
5. Bitbol, M. (1998). L'aveuglante proximité du réel. Flammarion, Paris.
6. Cossi, M. et al. (2002). New developments in the polarizable continuum model for quantum mechanical and classical calculations on molecules in solution. Journal of Chemical Physics, 117, 43-54.
7. Curtiss, L. A. et al. (1997). Assessment of Gaussian-2 and density functional theories for the computation of enthalpies of formation. Journal of Chemical Physics, 106.
8. Daumas, M. (1946). L'Acte chimique. Essai sur l'histoire de la philosophie chimique. Editions du Sablon, Bruxelles.
9. Duhem, P. (1902). Le Mixte et la combinaison chimique. Reissued in the Corpus des œuvres de philosophie en langue française, Fayard, Paris, 1985.
10. Earley, J. E. (2005). Why there is no salt in the sea. Foundations of Chemistry, 7, 85-102.
11. Gavroglu, K. & Simões, A. (1994). The Americans, the Germans, and the beginnings of quantum chemistry: The confluence of diverging traditions. Historical Studies in the Physical Sciences, 27(1), 47-110.
12. Goerigk, L. & Grimme, S. (2010). A general database for main group thermochemistry, kinetics, and noncovalent interactions. Assessment of common and reparameterized (meta-)GGA density functionals. Journal of Chemical Theory and Computation, 6, 107-126.
13. Grev, R. S., Janssen, C. L. & Schaefer, H. F. (1991). Concerning zero-point vibrational energy corrections to electronic energies. Journal of Chemical Physics, 95, 5128-5132.
14. Grimme, S. et al. (2010). A consistent and accurate ab initio parametrization of density functional dispersion correction (DFT-D) for the 94 elements H-Pu. The Journal of Chemical Physics, 132, 154104.
15. Guldberg, C. M. & Waage, P. (1867). Etudes sur les Affinités Chimiques. Christiania.
16. Kragh, H. (1984). Julius Thomsen and classical thermochemistry. British Journal for the History of Science, 17, 255-272.
17. Kragh, H. & Weininger, S. J. (1996). Sooner silence than confusion: The tortuous entry of entropy into chemistry. Historical Studies in the Physical and Biological Sciences, 27, 91-130.
18. Kohn, W., Becke, A. D. & Parr, R. G. (1996). Density functional theory of electronic structure. Journal of Physical Chemistry, 100, 12974-12980.
19. Kondepudi, D. & Prigogine, I. (1998). Modern Thermodynamics: From Heat Engines to Dissipative Structures. John Wiley & Sons, Chichester and New York.
20. Harré, R. & Llored, J.-P. (2011). Mereologies as the grammars of chemical discourses. Foundations of Chemistry, 13, 63-76.
21. Hund, F. (1927). On the interpretation of molecular spectra I. Zeitschrift für Physik, 40.
22. Hund, F. (1974). The History of Quantum Theory. Harrap, London.
23. Leach, A. R. (2001). Molecular Modelling: Principles and Applications, 2nd edition. Pearson Education, Prentice Hall.
24. Llored, J.-P. (2010). Mereology and quantum chemistry: The approximation of molecular orbital. Foundations of Chemistry, 12, 203-221.
25. Llored, J.-P. (2012, forthcoming). Towards a practical epistemology for chemistry. In: Philosophy of Chemistry: Practical Roots, Methods and Concepts, Llored, J.-P. (ed.), Cambridge Scholars Publishing, Cambridge.
26. Llored, J.-P. & Bitbol, M. (2010). Molecular orbital: Dispositions or predictive structure? In: Quantum Biochemistry, Matta, C. F. (ed.), Wiley-VCH.
27. Médard, L. A. & Tachoire, H. (1994). Histoire de la thermochimie. Prélude à la thermodynamique chimique. Publications de l'Université de Provence, Aix-en-Provence.
28. Mulliken, R. S. (1931). Bonding power of electrons and theory of valence. Chemical Reviews, 9, 369.
29. Mulliken, R. S. (1932a). Electronic structures of polyatomic molecules and valence. II. General considerations. Physical Review, 41, 50.
30. Mulliken, R. S. (1932b). Interpretation of band spectra. Part III. Electron quantum numbers and states of molecules and their atoms. Reviews of Modern Physics, 4, 46.
31. Mulliken, R. S. (1932c). Electronic structures of polyatomic molecules and valence. III. Quantum theory of the double bond. Physical Review, 41, 754.
32. Mulliken, R. S., Rieke, C. A. & Brown, W. G. (1941). Hyperconjugation. Journal of the American Chemical Society, 63, 41-56.
33. Mulliken, R. S. & Parr, R. G. (1951). LCAO molecular orbital computation of resonance energies of benzene and butadiene, with general analysis of theoretical versus thermodynamical resonance energies. Journal of Chemical Physics, 19(10), 1271-1278.
34. Mulliken, R. S. (1967). Spectroscopy, molecular orbitals, and chemical bonding (Nobel lecture). Science, 157, 13-24.
35. Needham, P. (1996). Aristotelian chemistry: A prelude to Duhemian metaphysics. Studies in History and Philosophy of Science, 27, 251-269.
36. Neese, F., Schwabe, T. & Grimme, S. (2007). Analytic derivatives for perturbatively corrected "double hybrid" density functionals: Theory, implementation, and applications. The Journal of Chemical Physics, 126, 124115.
37. Nye, M. J. (1993). From Chemical Philosophy to Theoretical Chemistry: Dynamics of Matter and Dynamics of Disciplines, 1800-1950 (section: From chemical affinity to chemical thermodynamics, pp. 116-120). University of California Press.
38. Mi Gyung, K. (2003). Affinity, That Elusive Dream. The MIT Press, Cambridge, Massachusetts / London.
39. Ostwald, W. (1919). L'évolution d'une science. La chimie. French translation, Flammarion, Paris (first edition 1909).
40. Partington, J. R. (1962). A History of Chemistry, vol. III. Macmillan, London.
41. Pauling, L. (1928). The shared-electron chemical bond. Proceedings of the National Academy of Sciences, 14, 359-362.
42. Pauling, L. (1931). The nature of the chemical bond. Application of results obtained from the quantum mechanics and from a theory of paramagnetic susceptibility to the structure of molecules. Journal of the American Chemical Society, 53, 1367-1400.
43. Pauling, L. (1932). The nature of the chemical bond. IV. The energy of single bonds and the relative electronegativity of atoms. Journal of the American Chemical Society, 54, 3570-3582.
44. Pauling, L. & Sherman, J. (1933a). The nature of the chemical bond. VI. The calculation from thermodynamical data of the energy of resonance of molecules among several electronic structures. Journal of Chemical Physics, 1, 606-617.
45. Pauling, L. & Sherman, J. (1933b). The nature of the chemical bond. VII. The calculation of resonance energy in conjugated systems. Journal of Chemical Physics, 1, 679-686.
46. Picot, D., Ohanessian, G. & Frison, G. (2008). The alkylation mechanism of zinc-bound thiolates depends upon the zinc ligands. Inorganic Chemistry, 47, 8167-8178.
47. Pickering, A. (1995). The Mangle of Practice: Time, Agency and Science. The University of Chicago Press, Chicago. ISBN 0-226-66802-9.
48. Rouse, J. (1996). Engaging Science: How to Understand Its Practices Philosophically. Cornell University Press, Ithaca and London.
49. Sainte-Claire Deville, H. (1914). Leçons sur la Dissociation, professées devant la Société Chimique de Paris le 18 mars et le 1er avril 1864. Collection Les Classiques de la Science, Paris.
50. Servos, J. W. (1990). Physical Chemistry from Ostwald to Pauling: The Making of a Science in America. Princeton University Press, Princeton.
51. Slater, J. C. (1931). Directed valence in polyatomic molecules. Physical Review, 37, 481-489.
52. Swietoslawski, W. (1920). The thermochemistry of hydrocarbons according to P. W. Zubow's data. Journal of the American Chemical Society, 42, 1312-1321.
53. Takano, Y. & Houk, K. N. (2005). Journal of Chemical Theory and Computation, 1(1), 70-77. doi:10.1021/ct049977a.
54. Vemulapalli, G. K. (2003). Property reduction in chemistry. Some lessons. In: Chemical Explanation: Characteristics, Development, Autonomy, Earley, J. E. (ed.), Annals of the New York Academy of Sciences, 988.
55. Weininger, S. J. (2001). Affinity, additivity and the reification of the bond. In: Tools and Modes of Representation in the Laboratory Sciences, Klein, U. (ed.), Boston Studies in the Philosophy of Science, Kluwer Academic Publishers.
56. Zhao, Y. & Truhlar, D. G. (2008a). Density functionals with broad applicability in chemistry. Accounts of Chemical Research, 41(2), 157-167.
57. Zhao, Y. & Truhlar, D. G. (2008b). Exploring the limit of accuracy of the global hybrid meta density functional for main-group thermochemistry, kinetics, and noncovalent interactions. Journal of Chemical Theory and Computation, 4, 1849-1868.

Notes

  • A ‘mixt’ is a chemical combination composed of elements but not bearing the same properties as the constitutive elements. Conversely, an aggregate is a mere additive combinations of elements and their properties.
  • ‘The foundations of a thermodynamic system’, my translation.
  • The French original sentence is: 'Lorsque la combinaison se produit, il se dégage une quantité de chaleur proportionnelle à l'affinité des deux corps.'
  • The French original sentence is: '(…) trouver pour chaque élément et pour chaque combinaison chimique, des nombres qui expriment leur affinité relative.'
