Philosophical Chemistry: Genealogy of a Scientific Field
There is no such thing as Science. The word "Science" refers to a reified generality that, together with others like Nature and Culture, has been a constant source of false problems: are controversies in Science decided by Nature or Culture? Avoiding badly posed problems requires that we replace Science with a population of individual scientific fields, each with its own concepts, statements, significant problems, and taxonomic and explanatory schemas.
This book is an attempt at creating a model of a scientific field capable of accommodating the variation and differentiation evident in the history of scientific practice. [...] The model is made of three components: a domain of phenomena, a community of practitioners, and a set of instruments and techniques connecting the community to the domain.
Substances considered to be elementary were formally defined in the 1780s as the limit of chemical analysis. Because this limit was relative to the state of analytical instrumentation, chemists would sometimes refer to them as "substances not yet decomposed."
We may want to explain, for example, why a substance has these chemical properties instead of other chemical properties; or why it has these chemical properties instead of these magical properties; or why it has these properties instead of having no properties at all. The different ways of setting these contrasting alternatives yield different explanatory goals. The presupposition and the contrasting alternatives define what is an admissible solution to a problem. In the 1700s, the presupposition was not only that substances had an enduring identity, but also that they had certain properties because of their composition. So the problems posed really had the form: Given that substances derive their physical and chemical properties from their composition, why does this substance have these properties, instead of some other properties?
When a debate is modeled as involving the clash of two monolithic theories, it is easy to conclude that a complete breakdown in communication will occur: if two sub-communities disagreed about the reference of every concept, the truth of every statement, the significance of every problem, or the value of every explanatory or taxonomic schema, then reaching agreement through argumentation would be impossible, and a switch from one rival theory to another would be like religious conversion. But if, on the contrary, each cognitive tool changes at its own rate, and if the changes are distributed differently across a community, then the existence of multiple partial overlaps becomes plausible; some practitioners may disagree on whether a concept has a referent but agree on the truth of statements in which the concept appears; or they may dispute the value of an explanatory schema but accept the significance of the problem the schema is meant to solve. These overlaps can keep communication from breaking down during a controversy...
It may still be reasonable for participants on the losing side to hold on to their views because cognitive gains in some components are compatible with cognitive losses in others: the controversy may have involved several unsolved significant problems, for example, only some of which had been solved at the end, while the remaining ones may have been unfairly discarded.
Is an ontology based on oxygen an improvement over one based on phlogiston?
The answer to this question is not straightforward because the term "phlogiston" had several referents. The original referent, the old Sulphur principle, turned out not to exist by chemistry's own criteria: it never was a product obtained as the limit of a chemical analysis. By 1750, chemists had proposed that its referent was the matter of heat, fire, or light. If this were correct then it can be argued that phlogiston survived until the end of the controversy, but that it had been relabeled "caloric." This is only partly true: the term "caloric" referred to the substance emitted during combustion by all burning substances, not only those believed to possess the inflammable principle. Finally, if we take the referent of the term "phlogiston" to be inflammable air, that is, hydrogen, then whether there was an ontological improvement as a result of the controversy can be settled by using chemical analysis to answer questions like: Is water an elementary substance? Are pure metals compound substances? By 1800, most textbooks were answering these questions in the negative, and this consensus has lasted till the present day. Given this track record, we can state unambiguously that replacing phlogiston by oxygen constituted a great improvement.
From Personal to Consensus Practice 1700-1800
In the seventeenth century the art of distillation, and the use of fire as a solvent, had acquired the status of exemplary achievements, the very standard of what an analytical technique could achieve. To sustain this line of research, furnaces of increasing power were built. Nevertheless, because at this stage in its evolution chemistry needed to legitimize itself by its medical and pharmaceutical applications, dry distillation could be challenged if it interfered with the preparation of remedies. In the early 1600s such a challenge had indeed been issued, and it gathered strength as the century came to a close. Some practitioners began to doubt, for example, whether fire recovered the original components of vegetable raw materials or whether its action was so strong that it produced new ones. Evidence began to accumulate that some substances created using fire as a solvent had lost their medicinal capacity to affect the human body. In response, some chemists proposed switching to wet distillation and the use of water as a solvent. By the early 1700s, solution analysis had joined dry distillation as a laboratory technique, not with the same degree of consensual approval but with an increasing appeal to those engaged in the preparation of pharmaceutical remedies.
Why are calcined metals dusty and matte, rather than ductile and shiny?
The consensus answer in 1750 was: because calcined metals have lost the components that gave them their metallicity. This problem had two presuppositions: that metallic substances were compounds of a calx and phlogiston; and that the application of fire to a piece of metal forced the phlogiston out (due to its greater affinity for the flame), leaving the powdery calx behind. [...] A calx was considered to be an elementary substance, an Earth with the capacity to turn into glass. But if this was so then other vitrifiable earths, such as common sand, should turn metallic when phlogiston from charcoal was added to them. Yet, this had never been observed. Hence the problem was: Why don't all vitrifiable earths become metallic when compounded with phlogiston?
In collaboration with Bergman, [Guyton] set out to reform chemical nomenclature. Prior to this reform the names for chemical substances were coined following a variety of criteria: the sensible properties of a substance (its color, smell, taste, consistency); its method of preparation; the name of its discoverer or the place where it was discovered; or even symbolic associations, like that between metals and the planets. The new approach was to develop a general method for coining names based on a substance's composition. Neutral salts were the part of the domain for which the largest number of compositional problems had been solved, so Guyton used them as his exemplary case: he created a table the first row of which contained all known acids, the first column all known bases, the resulting neutral salt placed at their intersection. Listing 18 acids and 24 bases, he could use his table to derive names for almost 500 salts.
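The combinatorial core of the reform can be sketched in a few lines of code. The acid roots, base names, and suffix rule below are simplified illustrations, not Guyton's actual nomenclature; only the table structure (one systematic name per acid-base intersection) comes from the text.

```python
# One systematic salt name per (acid, base) intersection. Names and the
# "-ate" suffix rule are illustrative placeholders, not Guyton's actual terms.
acids = ["sulphur", "nitr", "muriat"]          # acid roots (illustrative)
bases = ["potash", "soda", "lime", "ammonia"]  # bases (illustrative)

table = {(a, b): f"{a}ate of {b}" for a in acids for b in bases}

print(len(table))                 # 3 acids x 4 bases = 12 systematic names
print(table[("nitr", "potash")])  # -> "nitrate of potash"
print(18 * 24)                    # Guyton's full table: 432 combinations
```

The point of the exercise: once naming is driven by composition, the number of derivable names is simply the product of the rows and columns, which is why 18 acids and 24 bases suffice for over four hundred salts.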
Despite their strange names, we know that the gases isolated by [Priestley and Cavendish] were the same as the substances to which we refer by different terms, because the reaction Cavendish used (dissolving iron in dilute sulphuric acid) does indeed produce hydrogen, and the one used by Priestley (reducing mercury calx without charcoal) produces oxygen. In other words, we know what they were referring to because the referent of their concepts was fixed instrumentally.
The need to use an arbitrary substance as a fixed point of reference for both equivalents and atomic weights meant that, in the absence of a commonly accepted convention, units of combination varied from one chemist to another. Moreover, practitioners had plenty of discretion to round the messy figures from laboratory measurements into integers, providing another source of variation. But underneath the variations there were regularities that the conventionality of the numbers could not hide. In particular, the combining proportions tended to occur in simple multiples: oxygen and nitrogen, for instance, were able to form compounds in proportions 37 to 63, 56 to 44, or 70 to 29, but not in any other proportion. This led to the important empirical generalization that two substances could combine in either one determinate proportion or in several proportions that were integer multiples of one another. In the early 1800s, not everyone accepted this generalization, and the question of whether proportions varied continuously or discontinuously could not be resolved experimentally. But there was an advantage to accepting the validity of the general statement, because discontinuous variation in proportions implied definite compositional identity.
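The reasoning behind this empirical generalization can be sketched with the oxygen-to-nitrogen figures quoted above: fix the amount of nitrogen, compute the relative amount of oxygen in each compound, and express those amounts as multiples of the smallest.

```python
# Law-of-multiple-proportions arithmetic, using the figures from the text.
proportions = [(37, 63), (56, 44), (70, 29)]  # (oxygen, nitrogen) percentages

oxygen_per_nitrogen = [o / n for o, n in proportions]
smallest = min(oxygen_per_nitrogen)
multiples = [x / smallest for x in oxygen_per_nitrogen]

print([round(m, 2) for m in multiples])  # -> [1.0, 2.17, 4.11]
# Rounded to integers -- the "discretion" the text mentions -- the
# proportions stand roughly as 1 : 2 : 4, i.e. integer multiples.
print([round(m) for m in multiples])     # -> [1, 2, 4]
```

The residuals (2.17 rather than 2, 4.11 rather than 4) show exactly why the generalization could be contested: accepting it meant treating the deviations as measurement error rather than as continuous variation.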
Displaying graphically how elements and radicals could be linked to one another fell short of expressing their spatial arrangement, but for a while this was a benefit rather than a shortcoming, since it allowed chemists with different ontological commitments (atomists and anti-atomists) to use the new formulas.
Despite the fact that the type approach to classification prevailed over the radical approach, both contributed to the development of the concept of valency: the type approach supplied the idea that elements or radicals could be polyvalent (a capacity for multiple bonding), while the radical approach contributed the notion that one and the same element could have variable valency (a capacity to form a different number of bonds in different combinations).
By 1927 it was clear that the two rival models of the bond, as well as the two rival versions of the Affinity schema, each captured part of the truth: some bonds were polar (or ionic) while others were non-polar (or covalent). Retrospectively it can be argued that the variability of models and schemas turned out to reflect the variability of the objective phenomena.
From Personal to Consensus Practice 1800-1900
We can get a sense of the new consensus by examining a textbook published a few years after the Karlsruhe conference. If we compare this textbook to the one used to sample the state of consensus practice in 1800, the most striking difference is the approach used to classify and identify organic substances. At the start of the century, gums, sugars, oils, gelatins, as well as blood, milk, saliva and other bodily fluids, were classified on the basis of their plant and animal sources. [...] By 1860, the idea that organic chemistry is the study of carbon compounds, regardless of their source, is well established. [...] The ordering of this realm is performed using a serial taxonomy that disregards any of the criteria used in 1800, one that focuses exclusively on the chemical reactions that produce the series of kindred substances. Of these, the series comprising the substances we know as methane, ethane, propane, butane, pentane, and so on, had been particularly well worked out. By rendering this series with formulas, its internal structure can be clearly displayed: CH4, C2H6, C3H8, C4H10, C5H12, C6H14, C7H16, C8H18, C9H20. Even a cursory examination of this series shows that it increases by a modular amount, CH2, and that its structure is captured by the formula CnH2n+2.
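The internal structure of the series can be rendered directly: each member differs from its predecessor by CH2, and the whole series obeys CnH2n+2.

```python
# Generate the homologous series of alkane formulas from CnH2n+2.
def alkane_formula(n: int) -> str:
    """Molecular formula of the alkane with n carbons."""
    carbons = "C" if n == 1 else f"C{n}"
    return f"{carbons}H{2 * n + 2}"

series = [alkane_formula(n) for n in range(1, 10)]
print(series)
# -> ['CH4', 'C2H6', 'C3H8', 'C4H10', 'C5H12', 'C6H14', 'C7H16', 'C8H18', 'C9H20']

# The modular increment: hydrogen count grows by exactly 2 per added carbon,
# i.e. each step adds one CH2 unit.
hydrogens = [2 * n + 2 for n in range(1, 10)]
assert all(b - a == 2 for a, b in zip(hydrogens, hydrogens[1:]))
```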
By manipulating mechanical models using springs, von Baeyer calculated that the most stable cyclic configuration, the one with the least amount of strain, contained five carbons. Configurations with a greater or lesser number of carbons would become deformed, and would therefore contain energy of deformation, a surplus of energy that could explain why organic substances like diacetylene exploded when heated.
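A modern restatement of the arithmetic behind von Baeyer's mechanical models runs as follows (assuming, as he did, that rings are planar): the strain at each carbon is half the deviation of the ring's interior angle from the tetrahedral angle of roughly 109.47 degrees.

```python
# Baeyer angle-strain sketch, under his planar-ring assumption.
TETRAHEDRAL = 109.47  # tetrahedral angle, degrees (109 deg 28 min)

def baeyer_strain(n: int) -> float:
    """Angular deviation (degrees) per bond for a planar ring of n carbons."""
    interior = (n - 2) * 180 / n  # interior angle of a regular n-gon
    return (TETRAHEDRAL - interior) / 2

for n in range(3, 9):
    print(n, round(baeyer_strain(n), 2))

# The deviation is smallest in magnitude for the five-membered ring,
# matching von Baeyer's conclusion about the most stable configuration.
least = min(range(3, 9), key=lambda n: abs(baeyer_strain(n)))
print(least)  # -> 5
```

Rings smaller or larger than five carbons show a larger angular deviation, which is the "energy of deformation" the text invokes.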
Using more accurate instrumentation, chemists measured many heats of reaction in the following century, leading to the discovery that the same quantity of heat was produced regardless of the path that the reaction followed: a transformation that took two steps to produce a given product yielded the same heat as one that took five steps to yield the same product. This discovery anticipated the physicists' idea that the amount of energy in a system is a function of its state at any one moment, and not of the process that brought it to that state.
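The bookkeeping behind this path independence can be shown with a toy ledger. The step heats below are made-up illustrative figures, not measured values; only the accounting matters.

```python
# Path independence of reaction heat (Hess's law), illustrative numbers only:
# two routes from the same reactants to the same product.
two_step_route = [-120.0, -80.0]                       # heat per step
five_step_route = [-50.0, -30.0, -60.0, -40.0, -20.0]  # heat per step

print(sum(two_step_route))   # -> -200.0
print(sum(five_step_route))  # -> -200.0
# The total depends only on the initial and final states -- the same
# "function of state" idea the physicists later formalized for energy.
```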
The traditional way of linking models and evidence was to discover a real phenomenon, such as a new substance with intriguing properties, and then build a simplified model of its composition to explain its properties. The ideal gas model reversed this familiar order since the model was constructed first, and a laboratory phenomenon was created later to approximate the ideal: an artificially low-density gas.
From Personal to Consensus Practice 1800-1900
Philosophers of science, particularly those who tend to reduce laboratory practice to the testing of predictions deduced from models, have never adequately conceptualized the activity of measurement, an activity that is often a goal in itself.
The most novel addition to the cognitive content of the field is, of course, mathematical models. Among the practical reasons given for this inclusion is this: the velocity of a reaction varies as the concentration of reactants and products changes, so laboratory measurements may yield different results depending on the duration of the measuring operation. Hence, ideally, the measurement should be conducted at extremely short durations, and when this is not technically possible, mathematics should be used to calculate those values indirectly. In particular, one operator from the differential calculus (differentiation) can be used to calculate the instantaneous rate of change of a reaction. Thus, at this point, the manipulation of symbols on paper to learn about processes has become accepted, joining the ranks of Berzelian formulas and other paper tools. In addition, while in the reference textbook from 1850 only a single equation appeared, in 1900 equations are used throughout the text to express numerical relations between measurements, and to study regular dependencies among the properties measured.
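The role of differentiation here can be illustrated with a first-order rate law (an assumption for the sketch, not a law named in the text): concentration decays as c(t) = c0·exp(-kt), so the instantaneous rate of disappearance is k·c(t), while any real measurement averages over its duration.

```python
import math

# Illustrative first-order kinetics: c(t) = c0 * exp(-k * t).
c0, k, t = 1.0, 0.5, 2.0  # arbitrary illustrative values

def c(time: float) -> float:
    return c0 * math.exp(-k * time)

exact_rate = k * c(t)  # what differentiation yields directly: -dc/dt at t

# A laboratory measurement averages the rate over its duration dt;
# shrinking dt drives the measured value toward the instantaneous one.
for dt in (1.0, 0.1, 0.001):
    measured = (c(t) - c(t + dt)) / dt
    print(dt, round(measured, 6), round(exact_rate, 6))
```

This is the practical argument the text reports: when extremely short measurement durations are not technically possible, the calculus recovers the instantaneous value that the apparatus cannot.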
The logical approach to generating an infinite number of alternatives relies for its plausibility on ignoring the costs of change. [...] The historical record does contain many examples of evidence failing to compel assent, but this only proves the existence of local, not global, underdetermination.
It is important to emphasize that not all conventions work the same way. We must distinguish constitutive conventions, which define the very identity of an activity, as exemplified by the rules of chess, from coordinative conventions, which do not. [...] The question now is whether the culture of laboratories is like that of chess, fully constituted by its conventions and generating its own values, in which case the conclusion of constructivist historians that experimental practices are based on arbitrary foundations would be correct.
There is no way to answer this question other than by examining the historical record over periods of time long enough to eliminate the artifacts created by short episodes of underdetermination.
In Germany and Sweden, for example, where chemistry was linked to mineralogy and had proved its utility in many assaying and testing offices, promoters felt the need to stress its value as a university subject. It was in Sweden that the distinction between pure and applied science was introduced to reassure academicians that the subject was distinguished enough for them to teach. In France, chemistry emerged in close association with pharmacy, proving itself as a viable alternative to traditional ways of producing medicaments. Thus, the promotional literature in the early eighteenth century stressed its unique ability to use analysis (distillation) to learn about a substance's composition, and the application of this information to the creation of substances with superior medical virtues.
The discourse produced by scientistic movements neatly isolates the cognitive tools most affected by mythological, propagandistic, and rhetorical uses. From this point of view, no statement has had a greater impact on scientism than the claim that science is characterized by the use of unbiased observation to discover the eternal and immutable laws of nature.
Dissensus over whether a given concept has a referent, whether a particular statement is true, or whether a specific problem is correctly posed can be settled through arguments that invoke the rules of evidence, while disagreement about the rules can be settled through arguments that show that some evidence rules promote the achievement of a research goal better than others.
[Quoting M. Norton Wise:] "England, where trustworthiness was identified with class-extraction, was less receptive to the use of Least Squares than Germany, where trust was linked not to gentlemanly status of witnesses but to the methodical exposure of errors to public scrutiny."