Margaret Morrison, Philosophy Department, University of Toronto
Toy Models: More than Playing Around, but What Exactly?
Toy models are often characterized in terms of the level of simplification with which they represent some particular mechanism or system. The virtue of toy models is that they allegedly illustrate a mechanism or behaviour in a way that is unencumbered by theoretical details, thereby making that mechanism or behaviour easier to understand (e.g. the Ising model as an illustration of ferromagnetism). The physical mechanical models of nineteenth-century British field theorists are an interesting example of the use of toy models that extends well beyond illustration and includes, among other things, the modification of Maxwell’s field equations. I discuss these and some more recent examples of toy models, such as models of quantum magnetism, in an attempt to uncover what makes them valuable aids for theory construction and for knowledge acquisition more generally. I close with a discussion of whether the simplified models currently used in high energy physics can be fruitfully classified as toy models.
Alistair Isaac, University of Edinburgh
Measuring with Models: Realizing the Pragmatic Turn
Recent epistemology of measurement has observed that models play a constitutive role in the calibration of measurement instruments, and thus in the determination of measurement outcomes. This observation upends the naive realist view on which data supports or undermines the truth of a theory insofar as it confirms or contradicts its predictions. Hasok Chang and Eran Tal previously argued that these results motivate a form of iterative epistemic coherentism that says little about any correspondence between measurement and the world. Recently, however, Chang and Tal have turned toward pragmatist arguments for a tempered form of realism about measurement outcomes. In this talk, I will first survey this recent history of the debate, before arguing that pragmatism can support an even more robust form of realism. In particular, I argue that a Peirce-inspired transcendental argument for realism provides a firmer ground for measurement realism than Tal’s recent “inference to the best prediction” account.
Wendy Parker, Durham University
An outline of the adequacy-for-purpose view
In previous work, I have advocated thinking of models as representational tools that should be assessed according to their adequacy for particular purposes. Here I develop that view in more detail. I clarify what counts as a purpose, and then distinguish two types of adequacy-for-purpose. I show that whether a model is adequate for a purpose can depend on more than the properties of the model, and that increased fidelity in the model’s representation of the target is not always desirable. Finally, I characterize some special problems, related to holism and underdetermination, that can arise when evaluating the adequacy of models for particular purposes.
Alisa Bokulich, Boston University
Using Models to Correct Data: Paleodiversity and the Fossil Record
It has long been recognized that models play a crucial role in science, and more specifically in the treatment of data. However, as our philosophical understanding of models has grown, our view of data models has arguably languished. In this talk I use the case of how paleontologists are constructing data-model representations of the history of paleodiversity from the fossil record to show how our views about data models should be updated. In studying the history and evolution of life, the fossil record is a vital source of data. However, as both Lyell and Darwin recognized early on, it is a highly incomplete and biased representation. A central research program to emerge in paleontology is what D. Sepkoski has called the “generalized” (or what I prefer to call “corrected”) reading of the fossil record. Building on this historical work, I examine in detail the ways in which various models and computer simulations are being used to correct the data in paleontology today. On the basis of this research I argue for the following: First, the notion of a data model should be disentangled from the set-theoretic, ‘instantial’ view of models. Data models, like other models in science, should be understood as representations. Second, representation does not mean perfectly accurate depiction. Data models should instead be assessed as adequate-for-a-purpose. Third, the ‘purity’ of a data model is not a measure of its epistemic reliability. I conclude by drawing some parallels between data models in paleontology and data models in climate science.