[Table 4: Selection Constraints & Associated Validities — not reproduced here; only its final entry survives: the validity \(\phi\land\psi \vDash \phi>\psi\), associated with strong centering.]
A few comments are in order here, though. Strong centering is sufficient but not necessary for Modus Ponens: weak centering, \(w\in f(w,p)\) if \(w\in p\), would do. LT and LAS follow from SSE, and allow similarity theorists to say why some instances of Transitivity and Antecedent Strengthening are intuitively compelling.
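A quick check of why weak centering suffices for Modus Ponens: suppose \(\phi\) and \(\phi>\psi\) are true at w, i.e., \(w\in{\llbracket}\phi{\rrbracket}\) and \(f(w,{\llbracket}\phi{\rrbracket})\subseteq{\llbracket}\psi{\rrbracket}\). Weak centering guarantees \(w\in f(w,{\llbracket}\phi{\rrbracket})\), so \(w\in{\llbracket}\psi{\rrbracket}\), which is exactly what Modus Ponens demands.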
The issue of whether a second wave strict analysis (§2.2.1) or a similarity analysis provides a better logic of counterfactuals remains open and subtle. As sections 2.2.1 and 2.3 detailed, both analyses have their own way of capturing the non-monotonic interpretation of antecedents. Both also have their own way of capturing instances of monotonic inferences that do sound good. Perhaps this issue is destined for a stalemate. [31] But before declaring it such, it is important to investigate two patterns that are potentially more decisive: Simplification of Disjunctive Antecedents (SDA), and a pattern not yet discussed called Import-Export.
Both SDA and Import-Export are valid on strict analyses and invalid on standard similarity analyses. Crucially, the counterexamples to them that have been offered by similarity theorists are significantly less compelling than those offered to patterns like Antecedent Strengthening. Import-Export relates counterfactuals like (33a) and (33b).
It is hard to imagine one being true without the other. The basic strict analysis agrees: it renders them equivalent.
But it is not valid on a similarity analysis . [ 32 ] While Import-Export is generally regarded as a plausible principle, some have challenged it. Kaufmann (2005: 213) presents an example involving indicative conditionals which can be adapted to subjunctives. Consider a case where there is a wet match which will light if tossed in the campfire, but not if it is struck. It has not been lit. Consider now:
One might then deny (34a) . This match would not have lit if it had been struck, and if it had lit it would have to have been thrown into the campfire. (34b) , on the other hand, seems like a straightforward logical truth. However, it is worth noting that this intuition about (34a) is very fragile. The slight variation of (34a) in (35) is easy to hear as true.
This subtle issue may be moot, however. Starr (2014) shows that a dynamic semantic implementation of the similarity analysis can validate Import-Export , so it may not be important for settling between strict and similarity analyses.
As for the Simplification of Disjunctive Antecedents (SDA), Fine (1975), Nute (1975b), Loewer (1976), and Warmbrōd (1981a) each object to the similarity analysis predicting that this pattern is invalid. Counterexamples like (29) from McKay & van Inwagen (1977: 354) have a suspicious feature.
Starr (2014: 1049) and Warmbrōd (1981a: 284) observe that (29a) seems to be another way of saying that Spain would never have fought for the Allies. While Warmbrōd (1981a: 284) uses this to pragmatically explain-away this counterexample to his strict analysis, Starr (2014: 1049) makes a further critical point: it sounds inconsistent to say (29a) after asserting that Spain could have fought for the Allies.
Starr (2014: 1049) argues that this makes it inconsistent for a similarity theorist to regard this as a counterexample to SDA . On a similarity analysis of the could claim, it follows that there are no worlds in which Spain fought for the Allies most similar to the actual world: \(f(w_@,{\llbracket}\mathsf{Allies}{\rrbracket})={\emptyset}\). But if that’s the case, then (29b) is vacuously true on a similarity analysis, and so a similarity theorist cannot consistently claim that this is a case where the premise is true and conclusion false. It is, however, too soon for the strict theorist to declare victory. Nute (1980a) , Alonso-Ovalle (2009) , and Starr (2014: 1049) each develop similarity analyses where disjunction is given a non-Boolean interpretation to validate SDA without validating the other antecedent monotonic patterns. But even this is not the end of the SDA debate.
Nute (1980b: 33) considers a similar antecedent simplification pattern involving negated conjunctions, the Simplification of Negated Conjunctive Antecedents (SNCA):
Nute (1980b: 33) presents (37) in favor of SNCA.
Note that \(\mathsf{\neg(N\land A)}\) and \(\mathsf{\neg N\lor\neg A}\) are Boolean equivalents. However, the non-Boolean analyses of disjunction in Nute (1980a), Alonso-Ovalle (2009), and Starr (2014) that are designed to capture SDA break this equivalence, and so fail to predict that SNCA is valid. Willer (2015, 2017) develops a dynamic strict analysis which validates both SDA and SNCA. Fine (2012a,b) advocates a departure from possible worlds semantics altogether in order to capture both SDA and SNCA. However, these accounts also face counterexamples. Fine (2012a,b) and Willer (2015, 2017) render \((\neg\phi_1\lor\neg\phi_2)>\psi\) and \(\neg(\phi_1\land\phi_2)>\psi\) equivalent, and Champollion, Ciardelli, and Zhang (2016) present a powerful counterexample to this equivalence.
Champollion, Ciardelli, and Zhang (2016) consider a light which is on when switches A and B are both up, or both down. Currently, both switches are up, and the light is on. Consider (38a) and (38b), whose antecedents are Boolean equivalents:
While (38a) is intuitively true, (38b) is not. [ 33 ] This is not a counterexample to SNCA , since the premise of that pattern is false. But such a counterexample is not hard to think up. [ 34 ]
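The dialectical force of this example turns on a simple fact about Boolean semantics: the two antecedents express the very same proposition, so any analysis that evaluates a counterfactual by selecting among its antecedent-worlds must assign (38a) and (38b) the same truth value. A minimal sketch (the encoding of worlds as pairs of switch positions is illustrative):

```python
from itertools import product

# Worlds assign truth values to the atoms A ("Switch A is up") and B.
worlds = list(product([True, False], repeat=2))

not_A_or_not_B   = {w for w in worlds if not w[0] or not w[1]}   # ⟦¬A ∨ ¬B⟧
not_both_A_and_B = {w for w in worlds if not (w[0] and w[1])}    # ⟦¬(A ∧ B)⟧

# The antecedents denote the same set of three worlds, so a selection
# function f(w, p) cannot distinguish the two counterfactuals.
assert not_A_or_not_B == not_both_A_and_B
```

Non-Boolean treatments of disjunction escape this argument precisely by refusing to collapse the two antecedents into a single proposition.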
Suppose the baker’s apprentice completely failed at baking our cake. It was burnt to a crisp, and the thin, lumpy frosting came out puke green. The baker planned to redecorate it to make it at least look delicious, but did not have time. We may explain our extreme dissatisfaction by asserting (39a) . But the baker should not infer (39b) and assume that his redecoration plan would have worked.
Willer (2017: §4.2) suggests that such a counterexample trades on interpreting \(\mathsf{\neg(B\land U)>H}\) as \(\mathsf{(\neg B\land\neg U)>H}\), and provides an independent explanation of this on the basis of how negation and conjunction interact. If this is right, then what is needed is an analysis which validates SDA and SNCA without rendering \(\neg(\phi_1\land\phi_2)>\psi\) and \((\neg\phi_1\lor\neg\phi_2)>\psi\) equivalent. Ciardelli, Zhang, and Champollion (forthcoming) develop just such an analysis. As Ciardelli, Zhang, and Champollion (forthcoming: §6.4) explain, SDA and SNCA turn out to be valid for very different reasons. Champollion, Ciardelli, and Zhang (2016) and Ciardelli, Zhang, and Champollion (forthcoming) also argue that the falsity of (38b) cannot be predicted on a similarity analysis. This example joins a long list of examples which have been presented not as counterexamples to the logic of the similarity analysis, but to what it predicts (or fails to predict) about the truth of particular counterfactuals in particular contexts. This will be the topic of §2.5, where it will also be explained why the strict analysis faces similar challenges.
Where does this leave us in the logical debate between strict and similarity analyses of counterfactuals? Even Import-Export and SDA fail to clearly identify one analysis as superior. It is possible to capture SDA on either analysis. Existing similarity analyses that validate SDA, however, also invalidate SNCA (Alonso-Ovalle 2009; Starr 2014). By contrast, existing strict analyses that validate SDA also validate SNCA (Willer 2015, 2017). However, this is far from decisive. The validity of SNCA is still being investigated, and it is far from clear that it is impossible to have a similarity analysis that validates both SDA and SNCA, or a strict analysis that validates only SDA (perhaps using a non-Boolean semantics for disjunction). So even SNCA may fail to be the conclusive pattern needed to separate these analyses.
In their own ways, Stalnaker (1968, 1984) and D. Lewis (1973b) are candid that the similarity analysis is not a complete analysis of counterfactuals. As should be clear from §2.3, the formal constraints they place on similarity are quite minimal and only serve to settle matters of logic. There are, in general, very many possible selection functions—and corresponding conceptions of similarity—for any given counterfactual. To explain how a given counterfactual like (40) expresses a true proposition, a similarity analysis must specify which particular conception of similarity informs it.
Of course, the strict analysis is in the same position. It cannot predict the truth of (40) without specifying a particular accessibility relation. In turn, the same question arises: on what basis do ordinary speakers determine some worlds to be accessible and others not? This section will overview attempts to answer these questions, and the many counterexamples those attempts have invited. These counterexamples have been a central motivation for pursuing alternative semantic analyses, which will be covered in §3 . While this section follows the focus of the literature on the similarity analysis ( §2.5.1 ), §2.5.2 will briefly detail how parallel criticisms apply to strict analyses.
What determines which worlds are counted as most similar when evaluating a counterfactual? Stalnaker (1968) explicitly sets this issue aside, but D. Lewis (1973b: 92) makes a clear proposal:
Just as counterfactuals are context-dependent and vague, so is our intuitive notion of overall similarity. In comparing cost of living, New York and San Francisco may count as similar, but not in comparing topography. And yet, Lewis' (1973b: 92) Proposal has faced a barrage of counterexamples. Lewis and Stalnaker parted ways in their responses to these counterexamples, though both grant that Lewis' (1973b: 92) Proposal was not viable. Stalnaker (1984: Ch.7) proposes the projection strategy: similarity is determined by the way we "project our epistemic policies onto the world". D. Lewis (1979) proposes a new system of weights that amounts to a kind of curve-fitting: we must first look to which counterfactuals are intuitively true, and then find ways of weighting respects of similarity—however complex—that support the truth of those counterfactuals. Since Lewis' (1973b: 92) Proposal and Lewis' (1979) system of weights are more developed, and have received extensive critical attention, they will be the focus of this section. [35] It will begin with the objections to Lewis' (1973b: 92) Proposal that motivated Lewis' (1979) system of weights, and then turn to some objections to that approach.
Fine (1975: 452) presents the future similarity objection to Lewis’ (1973b: 92) Proposal . (41) is plausibly a true statement about world history.
Suppose, optimistically, that there never will be a nuclear holocaust. Then, for every \(\mathsf{B\land H}\)-world, there will be a more similar \(\mathsf{B\land\neg H}\)-world, one where a small difference prevents the holocaust, such as a malfunction in the electrical detonation system. In short, a world where Nixon presses the button and a malfunction prevents a nuclear holocaust is more like our own than one where there is a nuclear holocaust that changes the face of the planet. But then Lewis’ (1973b: 92) Proposal incorrectly predicts that (41) is false.
Tichý (1976: 271) offers a similar counterexample. Given (42a) – (42c) , (42d) sounds false.
Lewis’ (1973b: 92) Proposal does not seem to predict the falsity of (42d) . After all, Jones is wearing his hat in the actual world, so isn’t a world where it’s not raining and he’s wearing his hat more similar to the actual one than one where it’s not raining and he isn’t wearing his hat?
Lewis (1979: 472) responds to these examples by proposing a ranked system of weights that gives what he calls the standard resolution of similarity, which may be further modulated in context:

(1) It is of the first importance to avoid big, widespread, diverse violations of law. (2) It is of the second importance to maximize the spatio-temporal region throughout which perfect match of particular fact prevails. (3) It is of the third importance to avoid even small, localized, simple violations of law. (4) It is of little or no importance to secure approximate similarity of particular fact, even in matters that concern us greatly. (D. Lewis 1979: 472)
While weight 2 gives high importance to keeping particular facts fixed up to the change required by the counterfactual, weight 4 makes clear that particular facts after that point need not be kept fixed. In the case of (42d), the fact that Jones is wearing his hat need not be kept fixed: it was a post-rain fact, so when one counterfactually supposes that it had not been raining, there is no reason to assume that Jones is still wearing his hat. Similarly with example (41): a world where Nixon pushes the button, a small miracle occurs to short-circuit the equipment, and the nuclear holocaust is prevented will count as less similar than one where there is no small miracle and a nuclear holocaust results. A small-miracle and no-holocaust world is similar to our own only in one insignificant respect (particular matters of fact) and dissimilar in one important respect (the small miracle).
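As a toy illustration of how these ranked weights adjudicate the Nixon case, one can compare candidate antecedent-worlds lexicographically: fewest big miracles first, then greatest extent of perfect match of particular fact, then fewest small miracles. The scores below are stipulated purely for illustration:

```python
# Hypothetical lexicographic scores: (big miracles, negated extent of perfect
# match of particular fact, small miracles). Lower tuples = more similar worlds.
candidates = {
    # one small divergence miracle lets Nixon press the button; events then
    # unfold lawfully into a holocaust
    "button pressed, holocaust": (0, -1974, 1),
    # a second small miracle short-circuits the equipment after the press
    "button pressed, malfunction, no holocaust": (0, -1974, 2),
}
most_similar = min(candidates, key=candidates.get)
print(most_similar)   # "button pressed, holocaust", so (41) comes out true
```

Both worlds perfectly match actual history up to the button press (here, until 1974), so the comparison is settled at the third weight: the no-holocaust world needs an extra small miracle.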
It is clear, however, that Lewis’ (1979) System of Weights is insufficiently general. Particular matters of fact often are held fixed.
Example (43) crucially holds fixed the outcome of a highly contingent particular fact: the coin outcome. Cases of this kind are discussed extensively by Edgington (2004). Example (44) shows that a chancy outcome is not an essential feature of these cases. Noting the existence of recalcitrant cases, Lewis (1979: 472) simply says he wishes he knew why they came out differently. Additional counterexamples to Lewis' (1979) System of Weights have been proposed by Bowie (1979), Kment (2006), and Wasserman (2006). [36] Kment (2006: 458) proposes a new similarity metric to handle such examples which is sensitive to the way particular facts are explained, and is integrated into a general account of metaphysical modality in Kment (2014). Ippolito (2016) proposes a new theory of how context determines similarity for counterfactuals which aims to make the correct predictions about many of the above cases.
Another response to these counterexamples has been to develop alternative semantic analyses of counterfactuals such as premise semantics (Kratzer 1989, 2012; Veltman 2005) and causal models (Schulz 2007, 2011; Briggs 2012; Kaufmann 2013). These accounts start from the observation that the counterexamples can be easily explained in a model where matters of fact depend on each other. In (42), when we counterfactually retract the fact that it rained, we do not keep the fact that Jones was wearing his hat, because that fact depended on it raining. Hence, (42d) is false. In (43), when we counterfactually retract that you didn't bet on heads, we keep the fact that the coin came up heads because it is independent of the fact that you didn't bet on heads. These accounts offer models of how laws, and law-like generalizations, make facts dependent on each other, and argue that once this is done, there is no work left for similarity to do in the semantics of counterfactuals. While these accounts are the focus of §3, it is worth presenting one of the additional counterexamples to the similarity analysis that has emerged from this literature.
Recall (38) from §2.4. Champollion, Ciardelli, and Zhang (2016) and Ciardelli, Zhang, and Champollion (forthcoming) argue on the basis of this example that any similarity analysis will make incorrect predictions about the truth-conditions of counterfactuals. In this example, a light is on when Switches A and B are both up, or both down; otherwise the light is off. Suppose both switches are up and the light is on.
Intuitively, (38a) is true, as are \(\mathsf{\neg A >\neg L}\) and \(\mathsf{\neg B >\neg L}\), but (38b) is false. Champollion, Ciardelli, and Zhang (2016: 321) argue that a similarity analysis cannot predict \(\mathsf{\neg A >\neg L}\) and \(\mathsf{\neg B >\neg L}\) to be true, while (38b) is false. In order for \(\mathsf{\neg A >\neg L}\) to be true, the particular fact that Switch B is up must count towards similarity. Similarly, for \(\mathsf{\neg B >\neg L}\) to be true, the particular fact that Switch A is up must count towards similarity. But then it follows that (38b) is true on a similarity analysis: the most similar worlds where A and B are not both up have to either be worlds where Switch B is down but Switch A is still up, or Switch A is down and Switch B is still up. In those worlds, the light would be off, so the similarity analysis incorrectly predicts (38b) to be true. Champollion, Ciardelli, and Zhang (2016) instead pursue a semantics in terms of causal models where counterfactually making \(\neg \mathsf{(A\land B)}\) true and making \(\mathsf{\neg A\lor\neg B}\) true come apart.
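Their argument can also be verified by brute force: no ranking of the three relevant antecedent-worlds makes \(\mathsf{\neg A>\neg L}\) and \(\mathsf{\neg B>\neg L}\) true while making (38b) false. A sketch, assuming selection returns the top-ranked antecedent-worlds:

```python
from itertools import product

UP, DOWN = 1, 0
# Candidate antecedent-worlds as (A, B); the light is on iff the switches match.
worlds = [(DOWN, UP), (UP, DOWN), (DOWN, DOWN)]
light_off = lambda w: w[0] != w[1]

def true_at(antecedent, rank):
    # "ant > ¬L" is true iff every most-similar antecedent-world has the light off.
    best = min(rank[w] for w in antecedent)
    return all(light_off(w) for w in antecedent if rank[w] == best)

not_A  = [w for w in worlds if w[0] == DOWN]    # antecedent of ¬A > ¬L
not_B  = [w for w in worlds if w[1] == DOWN]    # antecedent of ¬B > ¬L
not_AB = worlds                                  # antecedent of ¬(A ∧ B) > ¬L

counterexamples = [r for r in product(range(3), repeat=3)
                   for rank in [dict(zip(worlds, r))]
                   if true_at(not_A, rank) and true_at(not_B, rank)
                   and not true_at(not_AB, rank)]
print(counterexamples)   # []: every ranking verifying the premises verifies (38b)
```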
Do strict analyses avoid the troubles faced by similarity analyses when it comes to truth-conditions? This question is difficult to answer, and has not been explicitly discussed in the literature. Other than the theory of Warmbrōd (1981a,b), strict theorists have not made proposals for the accessibility relation analogous to Lewis' (1973b: 92) Proposal for similarity. And Warmbrōd's proposal about the pragmatics of the accessibility relation is, roughly, that context selects an accessibility relation on which the counterfactual's antecedent comes out possible, so that the conditional is interpreted non-vacuously.
All subsequent second wave strict analyses have ended up in similar territory. The dynamic analyses developed by von Fintel (2001), Gillies (2007), and Willer (2015, 2017, 2018) assign strict truth-conditions to counterfactuals, but have them induce changes in an evolving space of possible worlds. These changes must render the antecedent consistent with an evolving body of discourse. While von Fintel (2001) and Willer (2018) explicitly appeal to a similarity ordering for this purpose, Gillies (2007) and Willer (2017) do not. Nevertheless, the formal structures used by Gillies (2007) and Willer (2017) for this purpose give rise to the same question: which facts stay and which facts go when rendering the counterfactual antecedent consistent? Accordingly, at present, it does not appear that the strict analysis avoids the kinds of concerns raised for the similarity analysis in §2.5.1.
Recall Goodman’s Problem from §1.4: the truth-conditions of counterfactuals intuitively depend on background facts and laws, but it is difficult to specify these facts and laws in a way that does not itself appeal to counterfactuals. Strict and similarity analyses make progress on the logic of conditionals without directly confronting this problem. But the discussion of §2.5 makes salient a related problem. Lewis' (1979) System of Weights amounts to reverse-engineering a similarity relation to fit the intuitive truth-conditions of counterfactuals. While Lewis' (1979) approach avoids characterizing laws and facts in counterfactual terms, Bowie (1979: 496–497) argues that it does not explain why certain counterfactuals are true without appealing to counterfactuals. Suppose one asks why certain counterfactuals are true and the similarity theorist replies with Lewis' (1979) recipe for similarity. If one then asks why those facts about similarity make counterfactuals true, the similarity theorist cannot reply that they are basic, self-evident truths about the similarity of worlds. Instead, they must concede that the similarity facts were selected precisely because they make those counterfactuals true. Bowie's (1979: 496–497) criticism is that this is at best uninformative, and at worst circular.
A related concern is voiced by Horwich (1987: 172), who asks “why we should have evolved such a baroque notion of counterfactual dependence”, namely that captured by Lewis' (1979) System of Weights. The concern has two components: why would humans find such a notion useful, and why would human psychology ground counterfactuals in this concept of similarity rather than our ready-at-hand intuitive concept of overall similarity? These questions carry extra weight given the centrality of counterfactuals to human rationality and scientific explanation outlined in §1. Psychological theories of counterfactual reasoning and representation have found tools other than similarity more fruitful (§1.2). Similarly, work on scientific explanation has not assigned any central role to similarity (§1.3), and as Hájek (2014: 250) puts it:
Science has no truck with a notion of similarity; nor does Lewis’ ( 1979 ) ordering of what matters to similarity have a basis in science.
Morreau (2010) has recently argued on formal grounds that similarity is poorly suited to the task assigned to it by the similarity analysis. The similarity analysis, especially as elaborated by D. Lewis (1979) , tries to weigh some similarities between worlds against their differences to arrive at a notion of overall comparative similarity between those worlds. Morreau (2010: 471) argues that:
[w]e cannot add up similarities or weigh them against differences. Nor can we combine them in any other way… No useful comparisons of overall similarity result. (Morreau 2010: §4)
Morreau (2010) articulates this argument formally via a reinterpretation of Arrow’s Theorem in social choice theory. Arrow’s Theorem shows that it is not possible to aggregate individuals’ preferences regarding some alternative outcomes into a coherent “collective preference” ordering over those outcomes, given minimal assumptions about their rationality and autonomy. As summarized in §6.3 of the entry on Arrow’s theorem, Morreau (2010) argues that the same applies to aggregating respects of similarity and difference: there is no way to add them up into a coherent notion of overall similarity.
Strict and similarity analyses of counterfactuals showed that it was possible to address the semantic puzzles described in §1.4 with formally explicit logical models. This dispelled widespread skepticism of counterfactuals and established a major area of interdisciplinary research. Strict analyses have been revealed to provide a stronger, more classical, logic, but must be integrated with a pragmatic explanation of how counterfactual antecedents are interpreted non-monotonically. Similarity analyses provide a much weaker, more non-classical, logic, but capture the non-monotonic interpretation of counterfactual antecedents within their core semantic model. It is now a highly subtle and intensely debated question which analysis provides a better logic for counterfactuals, and which version of each kind of analysis is best. This intense scrutiny and development has also generated a wave of criticism focused on their treatment of truth-conditions, Goodman’s Problem , and integration with thinking about counterfactuals in psychology and the philosophy of science ( §2.5 , §2.6 ). None of these criticisms are absolutely conclusive, and these two analyses, particularly the similarity analysis, remain standard in philosophy and linguistics. However, the criticisms are serious enough to merit exploring alternative analyses. These alternative accounts take inspiration from a particular diagnosis of the counterexamples discussed in §2.5 : facts depend on each other, so counterfactually assuming p involves not just giving up not-p , but any facts which depended on not-p . The next section will examine analyses of this kind.
Similarity and strict analyses nowhere refer to facts, or propositions, depending on each other. Indeed, D. Lewis (1979) was primarily concerned with explaining which true counterfactuals, given a similarity analysis, manifest a relation of counterfactual dependence. Other analyses have instead started with the idea that facts depend on each other, and then explained how these relations of dependence make counterfactuals true. As will become clear, none of these analyses endorse the naive idea that \(\mathsf{A > B}\) is true only when B counterfactually depends on A. The dependence can be more complex or indirect, or B could just be true and independent of A. Theories in this family differ crucially in how they model counterfactual dependence. In premise semantics (§3.1), dependence is modeled in terms of how facts, which are modeled as parts of worlds, are distributed across a space of worlds that has been constrained by laws, or law-like generalizations. In probabilistic semantics (§3.2), this dependence is modeled as some form of conditional probability. In Bayesian networks, structural equations, and causal models (§3.3), it is modeled in terms of the Bayesian networks discussed at the beginning of §1.2.3. Because theories of these three kinds are very much still in development and often involve even more sophisticated formal models than those covered in §2, this section will have to be more cursory than §2 to ensure breadth and accessibility.
Veltman (1976) and Kratzer (1981b) approached counterfactuals from a perspective closer to Goodman (1947) : counterfactuals involve explicitly adjusting a body of premises, facts or propositions to be consistent with the counterfactual’s antecedent, and checking to see if the consequent follows from the revised premise set—in a sense of “follow” to be articulated carefully. Since facts or premises hang together, changing one requires changing others that depend on it. The function of counterfactuals is to allow us to probe these connections between facts. While D. Lewis (1981) proved that the Kratzer (1981b) analysis was a special case of similarity semantics, subsequent refinements of premise semantics in Kratzer (1989, 1990, 2002, 2012) and Veltman (2005) evidenced important differences. Kratzer (1989: 626) nicely captures the key difference:
[I]t is not that the similarity theory says anything false about [particular] examples… It just doesn’t say enough. It stays vague where our intuitions are relatively sharp. I think we should aim for a theory of counterfactuals that is able to make more concrete predictions with respect to particular examples.
From a logical point of view, premise semantics and similarity semantics do not diverge. They diverge in the concrete predictions made about the truth-conditions of counterfactuals in particular contexts without adding additional constraints to the theory like Lewis’ (1979) System of Weights .
How does premise semantics aim to improve on the predictions of similarity semantics? It re-divides the labor between context and the semantics of counterfactuals to more accurately capture the intuitive truth-conditions of counterfactuals, and intuitive characterizations of how context influences counterfactuals. In premise semantics, context provides facts and law-like relations among them, and the counterfactual semantics exploits this information. By contrast, the similarity analysis assumes that context somehow makes a similarity relation salient, and has to make further stipulations like Lewis’ (1979) System of Weights about how facts and laws enter into the truth-conditions of counterfactuals in particular contexts. This can be illustrated by considering how Tichý’s ( 1976 ) example (42) is analyzed in premise semantics. This illustration will use the Veltman (2005) analysis because it is simpler than Kratzer (1989, 2012) —that is not to say it is preferable. The added complexity in Kratzer (1989, 2012) provides more flexibility and a broader empirical range including quantification and modal expressions other than would -counterfactuals.
Recall Tichý’s ( 1976 ) example, with the intuitively false counterfactual (42d) :
Veltman (2005) models how the sentences leading up to the counterfactual (42d) determine the facts and laws relevant to its interpretation. The law-like generalization in (42a) is treated as a strict conditional which places a hard constraint on the space of worlds relevant to evaluating the counterfactual. [ 37 ] The particular facts introduced by (42c) provide a soft constraint on the worlds relevant to interpreting the counterfactual. Figure 9 illustrates this model of the context and its evolution, including a third atomic sentence \(\mathsf{H}\) for reasons that will become clear shortly.
\(C_0\) | \(\mathsf{R}\) | \(\mathsf{W}\) | \(\mathsf{H}\) |
\(\boldsymbol{w_0}\) | \(\boldsymbol{0}\) | \(\boldsymbol{0}\) | \(\boldsymbol{0}\)
\(\boldsymbol{w_1}\) | \(\boldsymbol{0}\) | \(\boldsymbol{0}\) | \(\boldsymbol{1}\)
\(\boldsymbol{w_2}\) | \(\boldsymbol{0}\) | \(\boldsymbol{1}\) | \(\boldsymbol{0}\)
\(\boldsymbol{w_3}\) | \(\boldsymbol{0}\) | \(\boldsymbol{1}\) | \(\boldsymbol{1}\)
\(\boldsymbol{w_4}\) | \(\boldsymbol{1}\) | \(\boldsymbol{0}\) | \(\boldsymbol{0}\)
\(\boldsymbol{w_5}\) | \(\boldsymbol{1}\) | \(\boldsymbol{0}\) | \(\boldsymbol{1}\)
\(\boldsymbol{w_6}\) | \(\boldsymbol{1}\) | \(\boldsymbol{1}\) | \(\boldsymbol{0}\)
\(\boldsymbol{w_7}\) | \(\boldsymbol{1}\) | \(\boldsymbol{1}\) | \(\boldsymbol{1}\)
\(\hspace{15px}\underrightarrow{\medsquare(\mathsf{R\supset W})}\)
\(C_1\) | \(\mathsf{R}\) | \(\mathsf{W}\) | \(\mathsf{H}\) |
\(\boldsymbol{w_0}\) | \(\boldsymbol{0}\) | \(\boldsymbol{0}\) | \(\boldsymbol{0}\)
\(\boldsymbol{w_1}\) | \(\boldsymbol{0}\) | \(\boldsymbol{0}\) | \(\boldsymbol{1}\)
\(\boldsymbol{w_2}\) | \(\boldsymbol{0}\) | \(\boldsymbol{1}\) | \(\boldsymbol{0}\)
\(\boldsymbol{w_3}\) | \(\boldsymbol{0}\) | \(\boldsymbol{1}\) | \(\boldsymbol{1}\)
\(\xcancel{w_4}\) | \(\xcancel{1}\) | \(\xcancel{0}\) | \(\xcancel{0}\) |
\(\xcancel{w_5}\) | \(\xcancel{1}\) | \(\xcancel{0}\) | \(\xcancel{1}\) |
\(\boldsymbol{w_6}\) | \(\boldsymbol{1}\) | \(\boldsymbol{1}\) | \(\boldsymbol{0}\)
\(\boldsymbol{w_7}\) | \(\boldsymbol{1}\) | \(\boldsymbol{1}\) | \(\boldsymbol{1}\)
\(\quad\underrightarrow{\mathsf{R\land W}}\)
\(C_2\) | \(\mathsf{R}\) | \(\mathsf{W}\) | \(\mathsf{H}\) |
\(w_0\) | 0 | 0 | 0 |
\(w_1\) | 0 | 0 | 1 |
\(w_2\) | 0 | 1 | 0 |
\(w_3\) | 0 | 1 | 1 |
\(\xcancel{w_4}\) | \(\xcancel{1}\) | \(\xcancel{0}\) | \(\xcancel{0}\) |
\(\xcancel{w_5}\) | \(\xcancel{1}\) | \(\xcancel{0}\) | \(\xcancel{1}\) |
\(\boldsymbol{w_6}\) | \(\boldsymbol{1}\) | \(\boldsymbol{1}\) | \(\boldsymbol{0}\)
\(\boldsymbol{w_7}\) | \(\boldsymbol{1}\) | \(\boldsymbol{1}\) | \(\boldsymbol{1}\)
Figure 9: Context for (42) , Facts in Bold, Laws Crossing out Worlds
On this model, a context provides a set of worlds compatible with the facts (in \(C_2\), \(\textit{Facts}_{C_2}={\{w_6,w_7\}}\)) and a set of worlds compatible with the laws (in \(C_2\), \(\textit{Universe}_{C_2}={\{w_0,w_1,w_2,w_3,w_6,w_7\}}\)). This model of context is one essential component of the analysis, but so too is the way Veltman (2005) models worlds, situations, and dependencies between facts. These further components allow Veltman (2005) to offer a procedure for “retracting” the fact that \(\mathsf{R}\) holds from a world.
Veltman’s (2005) analysis of counterfactuals identifies possible worlds with atomic valuations (functions from atomic sentences to truth-values) like those depicted in Figure 9. So \(w_6={\{{\langle \mathsf{R},1\rangle},{\langle \mathsf{W},1\rangle},{\langle \mathsf{H},0\rangle}\}}\). This makes it possible to offer a simple model of situations, which are parts of worlds: any subset of a world. [38] It is now easy to think about one fact (a sentence having a truth-value) as determining another fact. In context \(C_2\), \(\mathsf{R}\) being assigned 1 determines that \(\mathsf{W}\) will be 1: once you know that \(\mathsf{R}\) is assigned 1, you know that \(\mathsf{W}\) is too. Veltman’s (2005) proposal is that speakers evaluate a counterfactual by retracting the fact that the antecedent is false from the worlds in the context, which yields a set of situations, and then considering all the worlds that contain those situations, are compatible with the laws, and make the antecedent true. If the consequent is true in all of those worlds, then the counterfactual is true in (or supported by) the context. So, to evaluate \(\neg \mathsf{R>W}\), one first retracts the fact that \(\mathsf{R}\) is true, i.e., that \(\mathsf{R}\) is assigned 1, and then finds all the worlds consistent with the laws that contain the resulting situations and assign \(\mathsf{R}\) to 0. If all of those worlds are also \(\mathsf{W}\)-worlds, then the counterfactual is true in (or supported by) the context. For Veltman (2005), the characterization of this retraction process relies essentially on the idea of facts determining other facts.
According to Veltman (2005) , when you are “retracting” a fact from the facts in the context, you begin by considering each \(w\in \textit{Facts}_C\) and find the smallest situations in w which contain only undetermined facts—he calls such a situation a basis for w . This is a minimal situation which, given the laws constraining \(\textit{Universe}_C\), determines all the other facts about that world. For example, \(w_6\) has only one basis, namely \(s_0={\{{\langle \mathsf{R},1\rangle},{\langle \mathsf{H},0\rangle}\}}\), and \(w_7\) has only one basis, namely \(s_1={\{{\langle \mathsf{R},1\rangle},{\langle \mathsf{H},1\rangle}\}}\). Once you have the bases for a world, you can retract a fact by finding the smallest change to the basis that no longer forces that fact to be true. So retracting the fact that \(\mathsf{R}\) is true from \(s_0\) produces \(s'_0={\{{\langle \mathsf{H},0\rangle}\}}\), and retracting it from \(s_1\) produces \(s'_1={\{{\langle \mathsf{H},1\rangle}\}}\). The set consisting of these two situations is the premise set .
To evaluate \(\mathsf{\neg R>W}\), one finds the set of worlds from \(\textit{Universe}_{C_2}\) that contain some member of the premise set (\(s'_0\) or \(s'_1\)) and assign \(\mathsf{R}\) to 0: \({\{w_0,w_1,w_2,w_3\}}\). These are the \(\neg\mathsf{R}\)-worlds consistent with the premise set and the laws. Are all of the worlds in \({\{w_0,w_1,w_2,w_3\}}\) also \(\mathsf{W}\)-worlds? No, \(w_0\) and \(w_1\) are not. Thus, \(\neg \mathsf{R>W}\) is not true in (or supported by) the context \(C_2\). This was the intuitively correct prediction about example (42). Of course, the similarity analysis supplemented with Lewis' (1979) System of Weights also makes this prediction.
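The last step can be checked with a small sketch, writing the worlds of Figure 9 as dictionaries; the premise set \({\{s'_0,s'_1\}}\) is taken from the text above rather than computed from Veltman's definitions of bases and retraction:

```python
# Universe_{C_2}: the eight valuations minus those violating the law □(R ⊃ W).
universe = [{"R": r, "W": w, "H": h}
            for r in (0, 1) for w in (0, 1) for h in (0, 1)
            if (not r) or w]                    # w_0, w_1, w_2, w_3, w_6, w_7
premise_set = [{"H": 0}, {"H": 1}]              # s'_0 and s'_1, with R retracted

def contains(world, situation):
    return all(world[atom] == value for atom, value in situation.items())

# Law-abiding worlds that contain a premise situation and make the antecedent true:
antecedent_worlds = [w for w in universe
                     if w["R"] == 0 and any(contains(w, s) for s in premise_set)]
print(all(w["W"] == 1 for w in antecedent_worlds))   # False: w_0 and w_1 lack W
```

But consider again example (43), which is not predicted: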
This example relies seamlessly on three pieces of background knowledge about how betting works:
If you don’t bet, you don’t win: \(\mathsf{\medsquare(\neg B\supset\neg W)}\)
If you bet and it comes up heads, you win: \(\mathsf{\medsquare((B\land H)\supset W)}\)
If you bet and it doesn’t come up heads, you don’t win: \(\mathsf{\medsquare((B\land\neg H)\supset\neg W)}\)
And it specifies facts: \(\mathsf{\neg B\land H}\). The resulting context is detailed in Figure 10 :
\(C_{(43)}\) | \(\mathsf{B}\) | \(\mathsf{H}\) | \(\mathsf{W}\) |
\(w_0\) | 0 | 0 | 0 |
\(\xcancel{w_1}\) | \(\xcancel{0}\) | \(\xcancel{0}\) | \(\xcancel{1}\) |
\(\boldsymbol{w_2}\) | \(\boldsymbol{0}\) | \(\boldsymbol{1}\) | \(\boldsymbol{0}\)
\(\xcancel{w_3}\) | \(\xcancel{0}\) | \(\xcancel{1}\) | \(\xcancel{1}\) |
\(w_4\) | 1 | 0 | 0 |
\(\xcancel{w_5}\) | \(\xcancel{1}\) | \(\xcancel{0}\) | \(\xcancel{1}\) |
\(\xcancel{w_6}\) | \(\xcancel{1}\) | \(\xcancel{1}\) | \(\xcancel{0}\) |
\(w_7\) | 1 | 1 | 1 |
Figure 10: Context for (43)
Now, consider the counterfactual \(\mathsf{B>W}\). The first step is to retract the fact that \(\mathsf{B}\) is false from each world in \(\textit{Facts}_{C_{(43)}}\). That's just \(w_2\). This world has two bases (minimal situations consisting of undetermined facts): \(s_0={\{{\langle \mathsf{B},0\rangle},{\langle \mathsf{H},1\rangle}\}}\) and \(s_1={\{{\langle \mathsf{H},1\rangle},{\langle \mathsf{W},0\rangle}\}}\). [39] The next step is to retract the fact that \(\mathsf{B}\) is false from both bases. For \(s_0\) this yields \(s'_0={\{{\langle \mathsf{H},1\rangle}\}}\), and for \(s_1\) this also yields \(s'_0\), since the fact that you didn't win, together with the fact that the coin came up heads, forces it to be false that you bet. The premise worlds are then the two worlds in \(\textit{Universe}_{C_{(43)}}\) that contain \(s'_0\): \({\{w_2,w_7\}}\). Now, are all of the \(\mathsf{B}\)-worlds in this set also \(\mathsf{W}\)-worlds? Yes: \(w_7\) is the only \(\mathsf{B}\)-world, and it is also a \(\mathsf{W}\)-world. So Veltman (2005) correctly predicts that (43) is true in (supported by) its natural context.
It should now be clearer how premise semantics delivers on its promise to be more predictive than similarity semantics when it comes to counterfactuals in context, and how it affords a more natural characterization of how a context informs the interpretation of counterfactuals. This analysis was crucially based on the idea that some facts determine other facts, and that the process of retracting a fact is constrained by these relations. However, even premise semantics has encountered counterexamples.
Schulz (2007: 101) poses the following counterexample to Veltman (2005) .
Intuitively, (45d) is true in the context. Figure 11 details the context predicted for it by Veltman (2005) .
\(C_2\) | \(\mathsf{A}\) | \(\mathsf{B}\) | \(\mathsf{L}\) |
\(w_0\) | 0 | 0 | 0 |
\(\xcancel{w_1}\) | \(\xcancel{0}\) | \(\xcancel{0}\) | \(\xcancel{1}\) |
\(w_2\) | 0 | 1 | 0 |
\(\xcancel{w_3}\) | \(\xcancel{0}\) | \(\xcancel{1}\) | \(\xcancel{1}\) |
\(\boldsymbol{w_4}\) | \(\boldsymbol{1}\) | \(\boldsymbol{0}\) | \(\boldsymbol{0}\)
\(\xcancel{w_5}\) | \(\xcancel{1}\) | \(\xcancel{0}\) | \(\xcancel{1}\) |
\(\xcancel{w_6}\) | \(\xcancel{1}\) | \(\xcancel{1}\) | \(\xcancel{0}\) |
\(w_7\) | 1 | 1 | 1 |
Figure 11: Context for (45d)
There are two bases for \(w_4\): \(s_0={\{{\langle \mathsf{A},1\rangle},{\langle \mathsf{L},0\rangle}\}}\) (the fact that Switch A is up and the fact that the light is off together determine that Switch B is down) and \(s_1={\{{\langle \mathsf{A},1\rangle},{\langle \mathsf{B},0\rangle}\}}\) (the fact that Switch A is up and the fact that Switch B is down together determine that the light is off). (No smaller situation would determine the facts of \(w_4\).) Retracting \(\mathsf{B}\)'s falsity from \(s_0\) leads to trouble. \(s_0\) forces \(\mathsf{B}\) to be false, but there are two ways of changing this. First, one can remove the fact that the light is off, yielding \(s'_0={\{{\langle \mathsf{A},1\rangle}\}}\). Second, one can eliminate the fact that Switch A is up, yielding \(s''_0={\{{\langle \mathsf{L},0\rangle}\}}\). Because of \(s''_0\), the premise worlds will include \(w_2\): in retracting the fact that Switch B is down, one is allowed to give up the fact that Switch A is up. But then there is a \(\mathsf{B}\)-world among the premise worlds where \(\mathsf{L}\) is false, and \(\mathsf{B>L}\) is incorrectly predicted to be false.
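The failure can be checked in the same style; the premise situations \(s'_0\) and \(s''_0\) are the ones derived in the text:

```python
# Universe for Figure 11: the worlds obeying the laws of (45), i.e., L = 1 iff A ∧ B.
universe = [{"A": a, "B": b, "L": l}
            for a in (0, 1) for b in (0, 1) for l in (0, 1)
            if l == int(a and b)]               # w_0, w_2, w_4, w_7
premise_set = [{"A": 1}, {"L": 0}]              # s'_0 and the problematic s''_0

def contains(world, situation):
    return all(world[atom] == value for atom, value in situation.items())

antecedent_worlds = [w for w in universe
                     if w["B"] == 1 and any(contains(w, s) for s in premise_set)]
print(all(w["L"] == 1 for w in antecedent_worlds))   # False: w_2 slips in via s''_0
```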
Intuitively, the analysis went wrong in allowing the removal of the fact that Switch A is up when retracting the fact that Switch B is down. Schulz (2007: §5.5) provides a more sophisticated version of this diagnosis: although the fact that Switch A is up and the fact that the light is off together determine that Switch B is down, only the fact that the light is off depends on the fact that Switch B is down. If one could articulate this intuitive concept of dependence, and retract only the facts that depend on the fact one is retracting (in this case, the fact that B is down), then the error could be avoided. It is unclear how to implement this kind of dependence in Veltman's (2005) framework. Schulz (2007: §5.5) goes on to show that structural equations and causal models provide the necessary concept of dependence; for more on this approach see §3.3 below. After all, it seems plausible that the light being off causally depends on Switch B being down, but Switch A being up does not causally depend on Switch B being down. It remains to be seen whether the more powerful framework developed by Kratzer (1989, 2012) can predict (45).
While premise semantics has been prominent among linguists, probabilistic theories have been especially prominent among philosophers thinking about knowledge and scientific explanation. [40] Adams (1965, 1975) made a seminal proposal in this literature: an indicative conditional is assertable to the degree that the corresponding conditional probability \(P(\psi\mid\phi)\) is high.
However, Adams (1970) was also aware that indicative/subjunctive pairs like (3)/(4) differ in their assertability. To explain this, he proposed the prior probability analysis of counterfactuals (Adams 1976): a counterfactual \(\mathsf{\phi>\psi}\) is assertable to the degree that \(P_0(\psi\mid\phi)\) is high, where \(P_0\) is the agent's credence just before learning that the antecedent was false.
It would seem that this analysis accurately predicts our intuitions in (45) about \(\mathsf{B>L}\). Let \(P_0\) be an agent's credence before learning that Switch B is down. (45a) requires that \(P_0(\mathsf{L}\mid \mathsf{A\land B})\) is (or is close to) 1, and (45b) requires that \(P_0(\mathsf{\neg L}\mid \mathsf{\neg A\lor \neg B})\) is (or is close to) 1. The agent has also learned that Switch A is up, so \(P_0(\mathsf{A})\) is (or is close to) 1. All of this together seems to guarantee that \(P_0(\mathsf{L\mid B})\) is also very high. However, this is due to an inessential artifact of the example: the agent learned that Switch B was down after learning that Switch A is up. This detail does not matter to the intuition. As was seen with example (43), we often hold fixed facts that happen after the antecedent turns out false. Indeed, Adams' Prior Probability Analysis makes the incorrect prediction that (43) is unassertable in its natural context.
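To spell out the guarantee noted above (a quick sketch, assuming the agent's prior credences treat the switch positions as independent):

\[ P_0(\mathsf{L\mid B}) = P_0(\mathsf{L\mid A\land B})\,P_0(\mathsf{A\mid B}) + P_0(\mathsf{L\mid \neg A\land B})\,P_0(\mathsf{\neg A\mid B}) \approx 1\cdot 1 + P_0(\mathsf{L\mid \neg A\land B})\cdot 0 \approx 1, \]

since \(P_0(\mathsf{L\mid A\land B})\approx 1\) by (45a), and \(P_0(\mathsf{A\mid B})\approx P_0(\mathsf{A})\approx 1\) by independence.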
This problem for Adams' Prior Probability Analysis is addressed by Edgington (2003, 2004: 21), who amends the analysis: \(P_0\) may also reflect any facts the agent learns after learning that the antecedent is false, provided that those facts are causally independent of the antecedent. This parallels the idea pursued by Schulz (2007: Ch.5) to integrate causal dependence into the analysis of counterfactuals. This idea was also pursued in a probabilistic framework by Kvart (1986, 1992). Kvart, however, does not propose a prior probability analysis and does not regard the probabilities as subjective credences: they are instead objective probabilities (propensity or objective chance). Skyrms (1981) also proposes a propensity account, but pursues a prior propensity analysis analogous to the subjective one proposed by Adams (1976).
Objective probability analyses have been popular among philosophers trying to capture the way that counterfactuals feature in physical explanations, and why they are so useful to agents like us in worlds like ours. Loewer (2007) is a good example of such an account: it grounds the truth of certain counterfactuals regarding our decisions, like (46), in statistical mechanical probabilities.
Loewer (2007) proposes that (46) is true just in case the statistical mechanical probability of its consequent, conditional on the macro-state of the universe at the time of the decision together with the decision itself, is close to 1 (where \(P_{\textit{SM}}\) is the statistical mechanical probability distribution and \(M(t)\) is a description of the macro-state of the universe at t): roughly, \(P_{\textit{SM}}(\mathsf{C}\mid M(t)\land \mathsf{D})\approx 1\), where \(\mathsf{D}\) is the decision described by (46)'s antecedent and \(\mathsf{C}\) its consequent.
Loewer (2007) acknowledges that this analysis is limited to counterfactuals like (46) . He argues that it can address the philosophical objections to the similarity analysis discussed in §2.6 , namely why counterfactuals are useful in scientific explanations, and for agents like us in a world like our own.
Conditional probability analyses do not proceed by assigning truth-conditions to (all) counterfactuals. They instead associate them with certain conditional probabilities. [41] This makes it difficult to integrate the theory into a comprehensive compositional semantics and logic for a natural language. Kaufmann (2005, 2008) makes important advances here, but it remains an open issue for conditional probability analyses. Leitgeb (2012a,b) thoroughly develops a new conditional probability analysis which regards \(\mathsf{\phi>\psi}\) as true when the relevant conditional probability is sufficiently high. [42] But conditional probability analyses have other limitations. Without further development, these analyses are limited in their ability to explain how humans judge particular counterfactuals to be true. There is a large literature in psychology, beginning with Kahneman, Slovic, and Tversky 1982, showing that human reasoning diverges in predictable ways from precise probabilistic reasoning. Even if these performance differences didn't turn up in counterfactuals and conditional probabilities, there is an implementation issue. As discussed in §1.2.3, directly implementing probabilistic knowledge makes unreasonable demands on memory. Bayesian Networks are one proposed solution to this implementation issue. They are also used in the analysis of causal dependence (§1.3), which conditional probability analyses must appeal to anyway. Since Bayesian Networks can also be used to directly formulate a semantics of counterfactuals, they provide a worthwhile alternative to conditional probability analyses despite proceeding from similar assumptions.
Recall from §1.2.3 the basic idea of a Bayesian Network: rather than storing probability values for all possible combinations of some set of variables, a Bayesian Network represents only the conditional probabilities of variables whose values depend on each other. This can be illustrated for (45) .
Sentences (45a) - (45c) can be encoded by the Bayesian Network and structural equations in Figure 12 .
Figure 12: Bayesian Network (\(A\rightarrow L\leftarrow B\)) and Structural Equations (\(L\dequal A\land B\), with actual assignments \(A=1\) and \(B=0\)) for (45)
Recall that \(L\dequal A\land B\) means that the value of L equals the value of \(A\land B\), but also asymmetrically depends on the value of \(A\land B\): the value of \(A\land B\) determines the value of L, and not vice versa. How, given the network in Figure 12, does one evaluate the counterfactual \(\mathsf{B>L}\)? Several different answers have been given to this question.
Pearl (1995, 2000, 2009, 2013: Ch.7) proposes an interventionist analysis: \(\mathsf{\phi>\psi}\) is true just in case \(\psi\) is true in the model that results from intervening on the network to make \(\phi\) true, that is, from deleting whatever equations or assignments determined the value of the antecedent variable, setting that variable to the value \(\phi\) requires, and computing the values of the remaining variables with the surviving equations.
On this approach, one simply deletes the assignment \(B=0\), replaces it with \(B=1\), and solves for L using the equation \(L\dequal A\land B\). Since the deletion of \(B=0\) does not affect the assignment \(A=1\), it follows that \(L=1\) and that the counterfactual is true. This simple recipe yields the right result.
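A minimal sketch of this recipe for the network in Figure 12, written out in code (the helper functions and variable encoding are illustrative, not Pearl's own formalism):

```python
def solve(equations, exogenous):
    """Compute every variable from the exogenous settings and the equations."""
    values = dict(exogenous)
    for variable, equation in equations.items():  # assumes parents precede children
        values[variable] = equation(values)
    return values

def intervene(equations, exogenous, variable, value):
    """do(variable := value): delete whatever determined variable, clamp it to
    value, and recompute its descendants with the surviving equations."""
    exo = {k: v for k, v in exogenous.items() if k != variable}
    exo[variable] = value
    eqs = {k: f for k, f in equations.items() if k != variable}
    return solve(eqs, exo)

equations = {"L": lambda v: int(v["A"] and v["B"])}  # L ≔ A ∧ B
facts = {"A": 1, "B": 0}                             # the actual assignments in (45)

print(solve(equations, facts)["L"])              # 0: actually, the light is off
print(intervene(equations, facts, "B", 1)["L"])  # 1: under do(B := 1), B > L is true
```

Pearl nicely sums up the difference between this kind of analysis and a similarity analysis: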
In contrast with Lewis’s theory, counterfactuals are not based on an abstract notion of similarity among hypothetical worlds; instead, they rest directly on the mechanisms (or “laws,” to be fancy) that produce those worlds and on the invariant properties of those mechanisms. Lewis’s elusive “miracles” are replaced by principled [interventions] which represent the minimal change (to a model) necessary for establishing the antecedent… Thus, similarities and priorities—if they are ever needed—may be read into the [interventions] as an afterthought… but they are not basic to the analysis. (Pearl 2009: 239–240)
As interventionism is stated above, it does not apply to conditionals with logically complex antecedents or consequents. This limitation is addressed by Briggs (2012) , who also axiomatizes and compares the resultant logic to D. Lewis (1973b) and Stalnaker (1968) —significantly extending the analysis and results in Pearl (2009: Ch.7) . Integrations of causal models with premise semantics (Schulz 2007, 2011; Kaufmann 2013; Santorio 2014; Champollion, Ciardelli, & Zhang 2016; Ciardelli, Zhang, & Champollion forthcoming) provide another way of incorporating an interventionist analysis into a fully compositional semantics. However, interventionism does face other limitations.
Hiddleston (2005) presents the following example.
(48c) is intuitively true in this context. The network for (48) is given in Figure 13 .
Figure 13: Bayesian Network and Structural Equations for (48)
Hiddleston (2005) observes that interventionism does not predict \(\mathsf{F>B}\) to be true. It tells one to delete the arrow going into F, set its value to 1, and project the consequences of doing so. However, none of the other values depend on F, so they keep their actual values: \(L=0\) and \(B=0\). Accordingly, \(\mathsf{F>B}\) is false, contrary to intuition. Further, because the intervention on F has destroyed its connection to B, it is not even possible to tweak interventionism to allow values to flow backwards (to the left) through the network. [43] Hiddleston's (2005) counterexample highlights the possibility of another kind of counterexample featuring embedded conditionals. Consider again the network in Figure 12. The following counterfactual seems true (Starr 2012: 13).
And, considering a simple match, Fisher (2017b: §1) observes that (50b) is intuitively false.
In both cases, interventionism is destined to make the wrong prediction. With (49), the intervention in the first antecedent removes the connection between Switch A and the light, so when the antecedent of the consequent is made true by intervention, it does not result in L's value becoming 0. And so the whole counterfactual comes out false. Similarly with (50b): when the first antecedent is made true by intervention, it stays true even after the second antecedent is evaluated, so the whole conditional is predicted to be true. Fisher (2017a) further observes that interventionism has no way of treating counterlegal counterfactuals like if Switch A had alone controlled the light, the light would be on.
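The structure of Hiddleston's counterexample can be reproduced in the same style, assuming (as the discussion of (48c) suggests) that \(F\) and \(B\) have the common cause \(L\); the equations are an illustrative reconstruction, not Hiddleston's own:

```python
def solve(equations, exogenous):
    values = dict(exogenous)
    for variable, equation in equations.items():
        values[variable] = equation(values)
    return values

def intervene(equations, exogenous, variable, value):
    # do(variable := value): sever the variable from its parents and clamp it.
    exo = {k: v for k, v in exogenous.items() if k != variable}
    exo[variable] = value
    eqs = {k: f for k, f in equations.items() if k != variable}
    return solve(eqs, exo)

# Hypothetical reconstruction of Figure 13: L is a common cause of F and B.
equations = {"F": lambda v: v["L"], "B": lambda v: v["L"]}
facts = {"L": 0}                                  # actually L = F = B = 0

print(intervene(equations, facts, "F", 1)["B"])   # 0: F > B wrongly comes out false
```

Because the intervention severs \(F\) from \(L\), the information that \(F=1\) never reaches \(B\); that is exactly the backwards flow interventionism forbids.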
These counterexamples to interventionism have stimulated alternative accounts like Hiddleston's (2005) minimal network analysis and further developments of that analysis (Rips 2010; Rips & Edwards 2013; Fisher 2017b). Instead of modifying an existing network to make the antecedent true, this analysis considers alternate networks in which only the parent nodes that directly influence the antecedent are changed to make it come true. However, Pearl's (2009) interventionist analysis has also been incorporated into the extended structural models analysis (Lucas & Kemp 2015). This analysis aims to capture interventions as a special case of a more general proposal about how antecedents are made true. One important aspect of this proposal is that interventions often involve inserting a hidden node that amounts to an unknown cause of the antecedent. The analysis of Snider and Bjorndahl (2015) pursues a third idea: counterfactuals are not interpreted by manipulating a background network, but instead serve to constrain the class of possible networks compatible with the information shared in a conversation, as in Stalnaker's (1978) theory of assertion. [44] Among the dependencies such networks encode can be cause-to-effect relations, as in (45d), but also relations where the antecedent and consequent have a common cause, as in (48c). As should be clear, this is a rapidly developing area of research where it is not possible to identify one analysis as standard or representative. It does bear emphasizing that this literature is driven not only by precise formal models, but also by experimental data which is brought to bear on the predictions of these analyses.
A few final philosophical remarks are in order about the kinds of analyses discussed here. If one follows Woodward (2002) and Hitchcock (2001) in their interpretation of these networks, a structural equation should be viewed as a primitive counterfactual. It follows that this is a non-reductive analysis of counterfactual dependence: it only explains how the truth of arbitrarily complex counterfactual sentences is grounded in basic relations of counterfactual dependence. However, note in the earlier quotation from Pearl (2009: 239–240) that he interprets structural equations as basic mechanisms or laws, so his account arguably counts as an analysis of counterfactuals in terms of laws. These amount to two very different philosophical positions that interact with the philosophical debates surveyed in §1.3.
It is also worth noting that while many working in this framework apply these networks to causal relations, there is no reason to assume that the analysis would not apply to other kinds of dependence relations. For example, constitutional dependence is at the heart of counterfactuals like:
From a Bayesian Network approach to mental representation ( §1.2.3 ), this makes perfect sense: the networks encode probabilistic dependence which can come from causal or constitutional facts.
Finally, it is worth highlighting that the philosophical objections directed at the similarity analysis in §2.6 are addressed, at least to some degree, by structural equation analyses. Because the central constructs of this analysis—structural equations and Bayesian Networks—are also employed in models of mental representation, causation, and scientific explanation, it grounds counterfactuals in a construct already taken to explain how creatures like us cope with a world like the one we live in.
Premise semantics (§3.1), conditional probability analyses (§3.2), and structural equation analyses (§3.3) all aim to analyze counterfactuals by focusing on certain relations between facts, rather than similarities between worlds. These accounts make clearer and more accurate predictions about particular counterfactuals in context than similarity analyses. But, ultimately, both premise semantics and conditional probability analyses had to incorporate causal dependence into their theories. Structural equation analyses do this from the start, and improve further on the predictions of premise semantics and conditional probability analyses. Another strength of this analysis is that it integrates elegantly into the broader applications of counterfactuals in theories of rationality, mental representation, causation, and scientific explanation surveyed in §1.1. There is still rapid development of structural equation analyses, though, so it is too early to say where the analysis will stabilize, or how it will fare under thorough critical examination.
Philosophers, linguists, and psychologists remain fiercely divided on how to best understand counterfactuals. Rightly so. They are at the center of questions of deep human interest ( §1 ). The renaissance on this topic in the 1970s and 1980s focused on addressing certain semantic puzzles and capturing the logic of counterfactuals ( §2 ). From this seminal literature, similarity analyses (D. Lewis 1973b; Stalnaker 1968) have enjoyed the most widespread popularity in philosophy ( §2.3 ). But the logical debate between similarity and strict analyses is still raging, and strict analyses provide a viable logical alternative ( §2.4 ). Criticisms of these logical analyses have focused recent debates on our intuitions about particular utterances of counterfactuals in particular contexts. Structural equation analyses ( §3.3 ) have emerged as a particularly prominent alternative to similarity and strict analyses, which claims to improve on both in significant respects. These analyses are now being actively developed by philosophers, linguists, psychologists, and computer scientists.