Extended Project Description

The question of how knowledge and decision relate to each other is probably as old as the enterprise of epistemology itself. In the Meno, Plato has Socrates raise the question of whether knowledge of the right way to Larisa is any better than merely having true beliefs about it. In other words, does knowledge, as opposed to true belief, have any decision-theoretic advantages? Independently of how one answers Plato's challenge, it is a highly plausible hypothesis that the human search for knowledge is primarily driven by its potential to make for better decisions, for more informed decisions tend to lead to better results. One might therefore expect the concept of knowledge to play a prominent role in contemporary decision theory. But quite the opposite seems to be true: the conceptual frameworks of the dominant trends in decision theory are mostly explicated without any recourse to knowledge. The proposed project sets out to fill this lacuna.

Confidence in letting knowledge play a fundamental role in epistemology declined in the wake of the post-Gettier debate. After Gettier (1963) had shown that knowledge cannot be analysed as justified true belief, as Plato had suggested in the Theaetetus, various attempts were made to amend Plato's analysis, only to run into similar new counterexamples (amended accounts are developed e.g. in Dretske 1981, Goldman 1967, Nozick 1981 and Pritchard 2005; for criticism, see Williamson 2000 and Zagzebski 1994; for general discussion, see e.g. Spohn forthcoming). The proposed analyses became increasingly complex without ever reaching, in a non-circular way, a condition with a good claim to capture the nature of knowledge. One might conclude from this that epistemology is better cast in terms of justification and belief. This move would fit nicely with the theories of probability and confirmation emerging from the work of Hempel (1945), Carnap (1950), de Finetti (1964), Ramsey (1926), Savage (1954) and Jeffrey (1965, 2004) (to name only a few). Perhaps the most influential product of this research programme is what nowadays goes under the heading of Bayesianism (see in addition e.g. Earman 1992, Levi 1980, Skyrms 1980, J. Williamson 2011). Roughly speaking, the basic idea of Bayesian epistemology and decision theory is to represent epistemic agents in terms of a probability function and a utility function. The probability function represents, on one interpretation, the degrees of confidence (commonly called subjective probabilities or credences) the agent assigns to various propositions. The utility function, on the other hand, derives from the agent's preferences. A rational decision is then described as one which maximizes expected utility.

Over the last decade or so, the concept of knowledge has been rehabilitated to a considerable extent. The work of Williamson (2000), which marks one of the most significant turning points in recent epistemology, was probably largely responsible for this. Williamson's central idea was to take knowledge as a primitive notion instead of continuing to seek an analysis. If one takes knowledge to be a fundamental epistemological notion, it should matter for decision-making. However, what a knowledge-based decision theory could look like is still largely under-explored. The proposed project is designed to close this gap. It promises to be an important touchstone for a knowledge-based epistemology and will shed new light on key issues in epistemology and decision theory. Ultimately, it aims at an integrated theory of theoretical and practical rationality.

Here is an overview of the subprojects:

Knowledge. Knowledge is a factive epistemic state: if it is known that p, then p. Various more specific epistemic states such as remembering or perceiving that p imply knowledge. Moreover, knowledge implies belief or has at least a similar subjective phenomenology (but see Myers-Schulz & Schwitzgebel forthcoming and Radford 1966). To a first approximation, knowledge is true belief which could not easily have been false. When a subject s truly believes that she is looking at a zebra, but the zebra is, unbeknownst to her, surrounded by a number of painted mules looking exactly like zebras, then her belief is not knowledge, for she could easily have looked at one of the cleverly painted mules, in which case her belief would have been false. This naturally gives rise to a safety condition on knowledge (e.g. Sosa 1999): a subject knows that p only if she would not easily have acquired a false belief in similar situations. Safety is an instance of the more general idea that knowledge is in some sense stable true belief (Rott 2004; Williamson 2000, 2009). Safety illuminates a structural feature of knowledge: if a subject s knows that p in a world w, then there is a circle of similar worlds around w in which s does not falsely believe that p (or a proposition p* relevantly similar to p). In turn, a safety condition can explain why higher-order knowledge is more demanding than first-order knowledge. In order to know that one knows that p, one must not falsely believe that one knows that p in a circle of similar worlds. But around each such world there must then be another circle of similar worlds in which one does not falsely believe that p. Hence, higher-order knowledge requires a larger safety zone, which is why one is not always in a position to know that one knows that p when one knows that p. To put it in what has become the received terminology, knowledge is not a transparent or luminous state (Williamson 2000).
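
The structural point can be made vivid with a toy model. The following sketch (all details are invented for illustration and are not drawn from the cited accounts) represents worlds as integers and treats a belief as safe at a world just in case it is true throughout a fixed margin around that world; iterating knowledge then demands ever larger margins, so that near the boundary one can know without being in a position to know that one knows:

```python
# A toy margin-for-error model of safety (illustrative sketch only): worlds
# are integers, p is true exactly at worlds <= 100, and a belief that p is
# safe at world w only if p is true throughout a margin of MARGIN worlds.

MARGIN = 10
p = lambda w: w <= 100  # the proposition under consideration

def knows(w, prop):
    """Safe true belief: prop holds throughout the margin around w."""
    return all(prop(v) for v in range(w - MARGIN, w + MARGIN + 1))

def iterated_knowledge(w, prop, n):
    """n-fold iteration of 'knows'; each iteration widens the required
    safety zone by MARGIN."""
    attitude = prop
    for _ in range(n):
        attitude = (lambda a: (lambda v: knows(v, a)))(attitude)
    return attitude(w)

w = 85
print(iterated_knowledge(w, p, 1))  # True: p holds within 10 worlds of 85
print(iterated_knowledge(w, p, 2))  # False: KKp would need a 20-world zone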

The Meno problem is not about how knowledge differs from true belief; it concerns the question of whether knowledge is more useful than true belief (for an overview, see Pritchard and Turri 2012). When trying to find the way to Larisa, does it matter whether one has knowledge or merely true beliefs? This is ultimately a decision-theoretic question: can a subject equipped with knowledge expect to be better off in the long run than a subject with the same goals who has merely true beliefs? Williamson (2000) gives some examples indicating that agents with knowledge are more likely to succeed in achieving their goals (but see Kaplan 2003). What is still missing in the current debate is a general framework in which the relevant features could be modelled, which would in turn allow one to investigate under which conditions knowledge can be shown to be more valuable than true belief. There are some decision-theoretic results which are relevant to the Meno problem (see e.g. Good 1967, Huttegger 2014 and Skyrms 1990). These show that an agent can expect a more informed decision to be no worse, and generally better, than a less informed decision. But two features of these results stand in the way of their constituting a satisfying answer to Plato's challenge. The first issue is that the results are merely internal: they only show that the agent should rationally believe that a more informed decision can be no worse and will often be better. They do not show that this will be objectively so. A second and more severe limitation is that the notion of information appealed to is indiscriminate between knowledge and belief. The results apply equally well to information which happens to be knowledge and information which merely happens to be true belief (see also Spohn forthcoming). For this reason, they do not yet establish that knowledge is more valuable than true belief. What they show is only that true information of any kind can be expected to be valuable.
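
The internal character of these results can be illustrated with a minimal numerical sketch of a Good (1967)-style calculation (the two-state setup and all numbers are invented for illustration): computed by the agent's own probabilities, the expected utility of deciding after a free observation is at least that of deciding without it.

```python
# A numerical sketch of a Good-style value-of-information result, with
# made-up numbers: by the agent's own lights, deciding after a free
# observation is at least as good in expectation as deciding without it.

prior = {'s1': 0.5, 's2': 0.5}                       # credences over states
u = {'a1': {'s1': 10, 's2': 0},                      # utilities u[action][state]
     'a2': {'s1': 0, 's2': 8}}
likelihood = {'e1': {'s1': 0.8, 's2': 0.2},          # P(observation | state)
              'e2': {'s1': 0.2, 's2': 0.8}}

def expected_utility(action, prob):
    return sum(prob[s] * u[action][s] for s in prob)

# Deciding now: pick the action maximizing expected utility under the prior.
eu_now = max(expected_utility(a, prior) for a in u)

# Deciding after observing: average over observations of the best posterior EU.
eu_after = 0.0
for e, like in likelihood.items():
    p_e = sum(prior[s] * like[s] for s in prior)
    posterior = {s: prior[s] * like[s] / p_e for s in prior}
    eu_after += p_e * max(expected_utility(a, posterior) for a in u)

print(eu_now, eu_after)  # roughly 5.0 vs 7.2: eu_after >= eu_now
```

Both quantities are computed from the agent's own probability function, which is exactly why the result does not show that the more informed decision is objectively better.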

A solution to the Meno problem would have important applications. First, it could yield an explanation of how knowledge relates to action. Hawthorne & Stanley (2008) propose that an agent should take a proposition as a reason for acting only if she knows the proposition to be true. If knowledge can be shown to be decision-theoretically superior to rival epistemic attitudes, including true belief, then the Hawthorne/Stanley hypothesis might be explained on the grounds that one should base one's actions on the epistemic attitude which serves one's goals best. Secondly, and relatedly, new light could be shed on Williamson's (1996, 2000) conjecture that knowledge is the norm of assertion. According to this claim, a subject s should assert that p only if s knows that p. Now, there is a plausible link between assertion and action: assertions often, perhaps even fundamentally, serve the purpose of providing information the addressee could act upon. If knowledge is decision-theoretically advantageous, then it is reasonable to require an asserted proposition to be known.

Uncertainty. Decisions are usually based on a limited body of knowledge providing only partial information about the possible consequences of the available actions. As a result, one typically has to fall back on evaluating decision problems probabilistically. According to a prominent line of thought, a rational decision is made by weighing the respective values of an action's possible consequences against the likelihoods with which they are thought to occur. The resulting weighted average is said to be the action's expected utility. A decision is then taken to be rational if the chosen option's expected utility is maximal (cp. e.g. Jeffrey 1965). This sketch of a broadly Bayesian decision theory still leaves many issues open.
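
The following minimal sketch (with invented probabilities and utilities) renders this weighted-average calculation explicit:

```python
# A minimal sketch of expected utility maximization as just described,
# with invented numbers: consequence probabilities weighted by values.

probs = {'umbrella': {'dry': 1.0},                 # P(consequence | action)
         'no_umbrella': {'dry': 0.7, 'wet': 0.3}}
utils = {'dry': 5, 'wet': -10}                     # values of consequences
cost = {'umbrella': 1, 'no_umbrella': 0}           # carrying is inconvenient

def expected_utility(action):
    return sum(p * utils[c] for c, p in probs[action].items()) - cost[action]

print({a: expected_utility(a) for a in probs})  # umbrella: 4.0, no_umbrella: 0.5
print(max(probs, key=expected_utility))         # 'umbrella' maximizes EU
```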

A first question concerns the standard for possibility. What is the appropriate decision-theoretic meaning of a possible consequence of an action? On an extremely liberal view, all logically possible consequences should be taken into consideration (Jeffrey 1965 and Lewis 1981 lean in this direction, with sense data as a possible exception). Although there are reasons for being extremely liberal, the phenomenology of decision-making suggests a more moderate stance: outlandish possibilities are typically not taken into consideration. Ruling certain possibilities out right from the start not only makes for more effective decision-making, it also promises to yield better decisions by approximating more closely those consequences of an action which are actually possible. If knowledge is linked to decision, it is a natural hypothesis that knowledge might be the standard for possibility in decision-making. According to this thought, a consequence C of an action A is possible for an agent iff it is compatible with the agent's knowledge that A will lead to C (Levi 1980; Hawthorne & Stanley 2008; Williamson 2005a).
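
A toy rendering of this hypothesis may help fix ideas (the modelling choices and the deliberately outlandish example are illustrative assumptions, not part of the cited accounts): knowledge is represented as a set of uneliminated worlds, and a consequence counts as decision-relevant only if it obtains at one of them.

```python
# A toy model of knowledge as the standard for possibility: a consequence
# of an action is taken into account iff it obtains at some world
# compatible with what the agent knows. All details are invented.

# Worlds labelled by the consequence the action would have in them:
worlds = {'w1': 'arrive_on_time', 'w2': 'arrive_late', 'w3': 'abducted_by_aliens'}

# The agent's knowledge rules out w3 but leaves w1 and w2 open:
knowledge = {'w1', 'w2'}

def possible_consequences(action_outcomes, known_worlds):
    """Consequences compatible with the agent's knowledge."""
    return {action_outcomes[w] for w in known_worlds}

print(possible_consequences(worlds, knowledge))
# {'arrive_on_time', 'arrive_late'}: the outlandish consequence is ignored
```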

A second question concerns the nature of the epistemic probabilities relevant for decision-making. A prominent approach construes epistemic probability as subjective probability, constrained only by the axioms of probability together with certain constraints on how probabilities are to be updated over time. According to this picture, commonly referred to as (radical) subjective Bayesianism, an epistemic life starts with an initial probability distribution P representing, on one salient interpretation, the a priori commitments of the agent. The probabilities of the agent at a given time are taken to be, on the standard picture, the initial probabilities conditional on the evidence the agent has acquired so far (cp. e.g. Lewis 1980). More objective forms of Bayesianism result from imposing stronger constraints on what an admissible initial probability function is (cp. (Jon) Williamson 2011). At the other end of the spectrum, it is possible to hold that there is only one fully rational initial probability function. This position is commonly referred to as the uniqueness thesis (cp. Kelly 2010). On the uniqueness thesis, the Bayesian picture becomes virtually indistinguishable from the project of characterizing objective evidential relations between a body of evidence (a set of propositions) and a proposition. Williamson (2000: ch. 10) develops an account of evidential probabilities along such lines. He takes the present evidential probabilities of an agent to be captured by an initial evidential probability function conditional on the agent's knowledge. This conception squares well with the idea that knowledge is the standard for possibility in decision-making: it follows that a proposition can have a positive evidential probability relative to an agent and a time only if the agent does not know that the proposition is false.
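
On a discrete toy space (with an invented initial distribution), the account can be sketched as follows: the agent's current evidential probabilities are the initial probabilities conditional on the set of worlds compatible with her knowledge, so that propositions known to be false drop to probability 0.

```python
# A discrete sketch in the style of Williamson (2000: ch. 10), with a
# made-up initial distribution: current evidential probabilities are the
# initial probabilities conditional on the agent's knowledge.

initial = {'w1': 0.25, 'w2': 0.25, 'w3': 0.25, 'w4': 0.25}  # initial function
knowledge = {'w1', 'w2', 'w3'}   # worlds compatible with what the agent knows

def evidential_probability(proposition, prior, k):
    """P(proposition | knowledge); propositions modelled as sets of worlds."""
    p_k = sum(prior[w] for w in k)
    return sum(prior[w] for w in proposition & k) / p_k

print(evidential_probability({'w1', 'w2'}, initial, knowledge))  # 2/3
print(evidential_probability({'w4'}, initial, knowledge))        # 0.0: known false
```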

An immediate consequence of the latter account is that knowledge always receives probability 1. Alternative pictures which still take the current probabilities to result from conditionalizing on one's evidence, where not all knowledge needs to be evidence or not all evidence has to be known, have the similar consequence that evidence always receives probability 1 (the probabilistic fact in the background being that P(E|E) = 1 always). This consequence is decision-theoretically problematic, for assignments of probability 1 force one to ignore the potential falsity of the relevant proposition in decision-making. One would judge favourable a bet which offers a penny if the proposition is true but threatens one's life otherwise. The reason is that such a bet has positive expected utility if money is positively valued, namely the utility of a penny, for the possible negative consequence of taking the bet has probability 0 and therefore cannot make a difference (the general form of the calculation being 1·x + 0·y = x, where x and y are the utilities of a penny and of one's life respectively). However, it seems at least rationally permissible to turn down bets of this kind. If highly valuable goods are at stake, a proposition's being known does not always seem to warrant an assignment of probability 1 for the purposes of decision-making. This problem has recently been dubbed the prodigality problem by Greco (2013), but it was recognized long before (e.g. in Jeffrey 1965).
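
With concrete stand-in numbers (the utilities are, of course, merely illustrative), the prodigality calculation looks as follows:

```python
# The prodigality calculation with invented numbers: a bet that pays a
# penny if p is true and costs one's life otherwise.

u_penny = 0.01
u_lose_life = -10**9  # stand-in utility for the catastrophic outcome

def eu_of_bet(prob_p):
    return prob_p * u_penny + (1 - prob_p) * u_lose_life

print(eu_of_bet(1.0))     # 0.01: with probability 1, the bet looks favourable
print(eu_of_bet(0.9999))  # about -100000: tiny residual doubt reverses the verdict
```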

Partly in response to this problem, partly for independent reasons, an alternative account has been developed. Drawing on Jeffrey (1965), it is possible to hold that almost no proposition should ever be assigned probability 1. The key which opens up this possibility is an alternative epistemic kinematics. The primary way certainties are produced on the standard picture is, as explained above, by learning through conditionalization. In order to prevent this, one can resort to Jeffrey conditionalization, which allows for the acquisition of uncertain evidence (Jeffrey 1965: ch. 11). If one now holds that no evidence is ever learned with absolute certainty, then the way rational agents should update their subjective probabilities over time no longer produces any unwanted certainties. It should be noted, though, that some propositions are still going to be assigned probability 1. These are (i) logical theorems, which are forced to receive probability 1 by the axioms of probability, and (ii) certain non-logical propositions whose complement is a null set; such propositions often exist in infinite probability spaces, for which there is not always a regular probability function (cp. Hájek ms). On these grounds, it has been objected that the present approach may diminish but does not really solve the prodigality problem, for it should still be rationally permissible not to bet one's life on any logical truth or on any proposition whose complement happens to be a null set (Williamson 2005b). In response, one could hold that the status of logical truths in formal epistemology and the problem of regularity constitute deep but independent problems of their own (see also Williamson 2007 and Haverkamp & Schulz 2012).
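
For concreteness, here is a sketch of Jeffrey conditionalization on a three-world space (the numbers are invented): the evidence partition is shifted towards, but not onto, certainty, and the probabilities conditional on each cell are preserved.

```python
# A sketch of Jeffrey conditionalization (Jeffrey 1965: ch. 11) on a
# discrete space: the partition {E, not-E} gets new probabilities short
# of 1, so updating need never manufacture certainty in contingent claims.

prior = {'w1': 0.4, 'w2': 0.2, 'w3': 0.4}
E = {'w1', 'w2'}                  # the evidence proposition
new_prob_E = 0.9                  # uncertain evidence: shifted towards, not to, 1

def jeffrey_update(prior, cell, q):
    """Redistribute mass so the cell gets q and its complement 1 - q,
    preserving the probabilities conditional on each cell."""
    p_cell = sum(p for w, p in prior.items() if w in cell)
    return {w: (p * q / p_cell if w in cell
                else p * (1 - q) / (1 - p_cell))
            for w, p in prior.items()}

posterior = jeffrey_update(prior, E, new_prob_E)
print(posterior)                # approx. {'w1': 0.6, 'w2': 0.3, 'w3': 0.1}
print(sum(posterior.values()))  # 1.0
```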

Another, perhaps even more pressing problem for the Jeffrey-inspired picture concerns the relation between epistemic probability and outright epistemic attitudes like knowledge and belief (cp. Spohn 2012a). Take the case of knowledge. With some necessary provisos in place, it seems to be closed under competent deduction: if a conclusion is competently deduced from a number of known premises, the conclusion becomes knowledge, too. On the other hand, there is no probability t less than 1 (but greater than 0) such that a logical consequence of a set of premises is guaranteed to receive a probability of at least t as long as each premise's probability is no less than t (this is what gives rise to Kyburg's (1961) Lottery Paradox). This makes it impossible to identify any logically closed epistemic attitude with a probability of less than 1 (but greater than 0). Because the present picture reserves probability 1 for a very small number of elite propositions, it faces the problem of seemingly being unable to account for outright epistemic attitudes which are closed under (competent) deduction.
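
The lottery paradox behind this observation can be put in numbers (the threshold and lottery size are arbitrary illustrations):

```python
# The lottery paradox in numbers: for any threshold t < 1 there is a fair
# lottery in which each premise 'ticket i loses' exceeds t, while their
# conjunction 'no ticket wins' has probability 0.

t = 0.99
n = 1000  # enough tickets that (n - 1) / n > t

p_single_loss = (n - 1) / n   # probability that any given ticket loses
p_all_lose = 0.0              # exactly one ticket is certain to win

print(p_single_loss > t)      # True: each premise clears the threshold
print(p_all_lose >= t)        # False: their logical consequence does not
```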

If one is liberal about probability 1, one can accommodate outright epistemic attitudes, but one faces the prodigality problem. If, on the other hand, one only allows probability 1 to be assigned very sparingly, one can solve the prodigality problem to a large extent, but one has trouble accounting for outright epistemic attitudes. This leaves one with a puzzling dilemma. Its solution space is fairly rich, though in some important respects still under-explored. If one embraces the second horn of the dilemma by disallowing assignments of probability 1, one can either hold (i) that there is no systematic connection between epistemic probability and outright epistemic attitudes, (ii) that despite appearances the targeted outright attitudes are not closed under logical consequence, or (iii) that high probability is only a necessary, but not a sufficient, condition for outright epistemic attitudes. Particularly the third option has recently received renewed interest through the work of Leitgeb (forthcoming). On Leitgeb's account of outright belief, context plays the crucial role in deciding which propositions with a certain threshold probability count as outrightly believed. This saves closure, but at a certain price: some propositions are believed while others are not, even though the agent's evidence does not discriminate between them. If, instead, one embraces the first horn of the dilemma and associates outright epistemic attitudes with probability 1, a solution to the prodigality problem has to be found (additionally, a story about how probability 1 can be undone has to be provided; cp. Williamson 2000: ch. 10). One option is to defend contextualism or sensitive invariantism about knowledge (and belief), according to which higher stakes can raise the standards for knowledge (DeRose 1992, 1995; Stanley 2005). When one's life or other highly valuable goods are at stake, one may no longer count as knowing the proposition under consideration (see Greco 2013 for criticism). Another possible line is simply to stick to the original theory and claim that the prodigality problem is merely an application problem arising from a lack of higher-order knowledge: in situations in which one knows but does not know that one knows, one may be unable to decide whether one has correctly applied the decision theory (Williamson 2005a). Without a systematic description of how a lack of higher-order knowledge relates to decision-theoretic caution, the theory seems to fall behind its competitors, as Greco (2013) worries, though Williamson suspects that any theory will have similar application problems. Even if one does not follow Williamson in taking the prodigality problem to be an application problem only, it is a promising option to explore further how uncertainty may derive from a lack of higher-order knowledge; this option will be elaborated on below.

Belief. Standard decision theories are usually cast in terms of belief and subjective probability rather than knowledge and evidential probability. A possible link is the idea that belief should aim at knowledge (Pritchard 2007, Williamson 2000). According to this view, in acquiring (outright) beliefs, a rational agent tries to adopt only those beliefs which constitute knowledge. Evidence for such a norm comes from the observation that rational agents tend to retract their beliefs when they acquire information which suggests that a given belief does not constitute knowledge. Recall the case of the painted mules: if the agent learns that there are a number of painted mules in the field and she has no way of telling them apart from actual zebras, she will typically give up her outright belief that there is a zebra in front of her (though she can still be expected to assign a possibly high subjective probability to this proposition). A structurally similar relation can be assumed to hold between subjective probabilities and evidential probabilities. The idea would be that subjective probabilities aim at the corresponding evidential probabilities (Williamson 1998).

A knowledge-oriented account of belief and subjective probability inherits the prodigality problem if coupled with a Bayesian decision theory. If one's current subjective probabilities result from conditionalizing on one's outright beliefs, many ordinary propositions will frequently be assigned probability 1. There is another reason for thinking that the proposal sketched so far cannot be the whole story about an agent's degrees of confidence. Suppose Sally outrightly believes that there is a $10 note in her pocket. Suppose she also believes that there is a $10 note in her pocket or there is not (an instance of the law of excluded middle). One can expect her to be more confident about the latter than about the former, one reason being that the latter is implied by the former. Yet if both propositions are assigned subjective probability 1, this difference in confidence cannot be marked in terms of her subjective probabilities. To uncover the grounds for Sally's uncertainty, it is instructive to look at the higher-order knowledge she takes herself to possess. She can be expected to possess many more iterations of knowledge for an instance of the law of excluded middle than for a particular claim about the external world. A possible conjecture is therefore that the difference in confidence derives from a corresponding difference in higher-order knowledge.

If this is on the right track, one may ask whether there is anything on the level of belief which parallels higher-order knowledge. Higher-order belief does not seem to be the right candidate: believing that one believes p usually does not increase one's confidence in p. In the context of practical reasoning related to the prodigality problem, Williamson (2005a) has introduced the idea of degrees of outright belief. Although Williamson does not draw this parallel, it seems congenial to his overall picture that an outright belief of degree n is, to a good approximation, a belief for which one takes oneself to possess justification strong enough to license n iterations of knowledge. In terms of safety, it would be a belief which one takes to be safe within a safety zone large enough for the corresponding degree of higher-order knowledge. However, this analogy also has its limitations: whereas higher-order knowledge is in part also knowledge about one's own mind, implying, for instance, that one believes the proposition under consideration, a higher-order degree of outright belief does not involve a belief about what one believes; it is merely concerned with whatever the target proposition is about. Moreover, degrees of outright belief should not be conflated with subjective probabilities (cp. Spohn 2012a and Williamson 2005a): Sally's subjective probability concerning a coin toss may be ½, but her degree of outright belief that the coin will come up heads, say, can be assumed to be 0. Degrees of outright belief are naturally measured in the realm of the natural numbers, just like degrees of higher-order knowledge. They have long been studied in the context of ranking theory (the most extensive treatment can be found in Spohn 2012a), which is primarily a theory of qualitative belief whose hallmark is closure under logical consequence. The proposed research project will develop an integrated picture of ranks and probabilities and apply it to decision theory, in particular with an eye on the prodigality problem as it presents itself in high-stakes decision situations.
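
A minimal computational sketch may convey the flavour of such ranking functions (the ranks are invented; the definitions follow the standard negative-ranking format, cp. Spohn 2012a):

```python
# A minimal sketch of a Spohn-style ranking function: kappa assigns each
# world a grade of disbelief in the natural numbers, 0 for the most
# plausible worlds. The particular ranks here are made up.

kappa = {'w1': 0, 'w2': 1, 'w3': 3}   # grades of disbelief per world

def rank(proposition):
    """Disbelief rank of a proposition (a set of worlds): the minimum
    rank among its worlds; the empty proposition gets rank infinity."""
    return min((kappa[w] for w in proposition), default=float('inf'))

def degree_of_belief(proposition):
    """Degree of outright belief in A = rank of not-A."""
    complement = set(kappa) - set(proposition)
    return rank(complement)

A = {'w1', 'w2'}   # believed to degree 3: every not-A world is ranked >= 3
B = {'w1', 'w3'}   # believed to degree 1
print(degree_of_belief(A), degree_of_belief(B))  # 3 1
print(degree_of_belief(A & B))                   # 1: the conjunction is still believed
# Unlike high probability, positive degree of belief is preserved under
# conjunction: deg(A and B) = min(deg A, deg B) > 0.
```

This closure behaviour is precisely what the probabilistic threshold picture lacks, which is one motivation for integrating ranks with probabilities in the proposed project.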

Preferences. To a first approximation, decisions can be seen as the joint product of one’s desires and one’s knowledge about the world. According to a prominent line of thought, some things are desired merely because they are a means to a desirable end. For instance, Sally may desire that the bus be on time because she does not want to be late for work. Desires of this kind are commonly referred to as instrumental (or extrinsic) desires. In contrast, some desires concern ends in themselves. The pleasure of eating a piece of chocolate or the joy of reading a good book may constitute (at least partly, for some people) ends in themselves. Desires of this second type are fundamental (or intrinsic) desires, for they can be seen to ground all other desires through means-to-end considerations (cp. Broome 1993).

In decision theory, desires are standardly replaced by preferences (see e.g. Skyrms 1990 for discussion). The preferences of an agent are taken to induce an order on the possible outcomes of the available actions. Outcomes can in turn be represented by possible worlds, or by small sets thereof which do not differ in any practically relevant respects (Lewis 1981). The location of an outcome in the agent's preference order can then be seen to indicate how much the agent would like to see the corresponding possible world actualized. In order to be able to calculate expected utilities, preferences are standardly measured by a utility function u from a set of possible worlds into the real numbers. To some extent, the utility function can be seen to represent the agent's fundamental desires. The expected utilities of a possible action, on the other hand, can be seen as being more concerned with the agent's instrumental desires (the rare exception being actions the performance of which already constitutes an end in itself). The expected utility of an action depends not only on the agent's basic utilities but also on her subjective probabilities. A change in beliefs about what leads to what will typically induce a corresponding change in the agent's instrumental desires, but it does not necessarily affect her basic preferences.
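
The belief-dependence of instrumental value can be put in a two-line calculation (numbers invented): the basic utilities stay fixed while revised probabilities about what leads to what change the expected utility of one and the same action.

```python
# A small sketch of the point that instrumental value is belief-dependent:
# the basic utilities stay fixed while a change in the agent's probability
# that the bus gets her to work on time changes the expected utility of
# taking it. All numbers are invented for illustration.

u = {'on_time': 10, 'late': -5}   # basic utilities over outcomes

def expected_utility(prob_on_time):
    return prob_on_time * u['on_time'] + (1 - prob_on_time) * u['late']

print(expected_utility(0.9))  # 8.5: taking the bus looks instrumentally good
print(expected_utility(0.3))  # -0.5: same basic desires, revised beliefs
```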

A deep question, famously raised by Hume (1739-40), concerns the relation between belief and desire. On a common interpretation, Hume argued that belief is primarily linked to desire through means-to-end considerations, but that neither one is reducible to the other. This runs against the anti-Humean thought that desires might be intimately linked to what an agent believes is good for her. This is commonly called the desire-as-belief thesis (cp. Lewis 1988). One probably does not even have to invoke Freudian considerations to reckon with the possibility that agents may be wrong about their own desires. Despite this possibility, it still seems plausible that there could at least be agents whose desires correspond to their beliefs about what is good for them. But this thesis conflicts with formal results due to David Lewis (1988, 1996), who shows that the desire-as-belief thesis is incompatible with standard decision theory. In response to Lewis, various suggestions have been made about how one could rescue a variant of the desire-as-belief thesis (Bradley & List 2009, Hájek & Pettit 2004, Price 1989). Regarding this debate, two things are worth pointing out. First, Lewis did not mean to attack the thesis that non-instrumental desires may correspond to beliefs about what is good for one (cp. Lewis 1988: p. 324). So there is room for restricting the thesis to an intrinsic-desire-as-belief thesis, though following up on this option requires meeting an objection by Broome (1990). A second observation is that, given that instrumental desires in part depend on one's beliefs, a notion of good-for-one which could correspond to one's instrumental desires can be expected to depend in a similar way on one's beliefs. It can therefore serve as a promising starting point to explore whether a balanced desire-as-belief thesis could be upheld if 'good' is taken to be context-dependent on one's beliefs. This has been suggested, but not yet worked out in sufficient detail, by Hájek & Pettit (2004).

The structure of preferences could prove crucial for a knowledge-based solution to the prodigality problem. If different stakes are associated with different levels of knowledge (or different degrees of outright belief), then one must be able to identify stakes of different degrees. A natural thought would be that stakes are as subjective as desires or preferences in general (some may value their lives or their jobs very highly, while others might care more about the well-being of their children). But then one should be able to read them off the agent's bouletic attitudes. One possibility would be to subdivide an agent's preference order into different stakes (or levels), which could in turn be associated with different levels of epistemic care. There is a point of contact with the debate on incommensurability here (cp. Broome 1998, 1999; Joyce 1999: ch. 3), but a possible sensitivity to different stakes does not amount to incomparability: although losing one's life would be in a different category from losing a certain amount of money, say, the two events would still be comparable, the former being categorically worse than the latter.

Representation. Knowledge, belief and desire have content. Ever since Bolzano and Frege, the nature of thought has been one of the central questions in the philosophy of mind and language. Decision theory usually proceeds with a fairly coarse-grained notion of content. Propositions are taken to be sets of possible worlds. Utilities are defined over possible worlds and expected utilities again over sets thereof. It is clear that an account of content in terms of sets of possible worlds is far from unproblematic. Believing one mathematical truth does not imply believing all of them, even though all mathematical truths, being necessary, correspond to the set of all possible worlds (cp. the exchange between Stalnaker 1984 and Field 1986). However, issues of this kind have been of little concern in decision theory, for they do not seem to lead to any genuine decision-theoretic problems. For instance, in response to the question of what would happen if de se contents were integrated into decision theory, Lewis (1979: 149) answers 'Not much'. All decision theory seems to need is a well-behaved algebraic structure over which probabilities and utilities can be defined.

In the light of recent research on the essential indexical, the received notion of content in decision theory may no longer be beyond challenge (this is Elga's (2000) contention). The background for the renewed interest in indexicals is provided by Perry's (1977, 1979) arguments that some thoughts are essentially indexical and play an irreducible role in the explanation of human action. Lewis (1979) joined Perry and suggested that there is a distinctive kind of de se attitude best represented by sets of centred possible worlds containing, in addition to a world, a time, a place and a subject. In recent years, a great amount of research has been devoted to questioning and clarifying the nature of indexical thought (see e.g. the recent monograph by Cappelen & Dever 2013). A central question, which is immediately relevant for the proposed project, concerns the epistemic dynamics of de se attitudes. This issue is mostly addressed in the context of the Sleeping Beauty problem (cp. Elga 2000). Prima facie, the Sleeping Beauty problem suggests that there can be rational belief changes without the acquisition of any new evidence. On closer inspection, it becomes clear that the problem concerns indexical beliefs about, for example, which day it is (which day is today?). How indexical beliefs should be updated over time is a deep question which has no immediate solution in standard frameworks (for an extensive discussion, see the recent monograph by Titelbaum 2013). As I argue in Schulz (2010), part of the problem is that an agent needs constant contextual information in order to retain her indexical beliefs (see also Moss 2012). A similar problem can be expected to arise for updating preferences. If at least some of an agent's preferences are essentially indexical, as is widely assumed, then these preferences will require constant updates through contextual information. How such updates are supposed to work is still under-researched. The proposed project will study the joint dynamics of indexical beliefs and preferences in order to provide a unified picture of how they evolve over time.
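
For concreteness, here is a sketch of the thirder analysis associated with Elga (2000) in terms of centred worlds (the indifference reasoning is one contested analysis among several, not an established result):

```python
# A sketch of the Sleeping Beauty problem in centred worlds: centred
# possibilities are (world, day) pairs, and Beauty's evidence on waking
# does not discriminate between them.

centred_worlds = [('heads', 'monday'), ('tails', 'monday'), ('tails', 'tuesday')]

# Elga-style indifference: equal credence in each uneliminated centred
# possibility compatible with waking up.
credence = {cw: 1 / len(centred_worlds) for cw in centred_worlds}

p_heads = sum(c for (world, day), c in credence.items() if world == 'heads')
print(round(p_heads, 4))  # 0.3333, although the coin was fair (1/2) on Sunday
```

The shift from 1/2 to 1/3 occurs without any new uncentred evidence, which is what makes the case look like a counterexample to standard conditionalization.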

Causation. The role of knowledge or belief in decision-making is to identify possible consequences of available actions. Although most decision theorists can assent to this on some interpretation, there is disagreement about how the pertinent notion of consequence is to be understood. In response to problems with Savage's theory, Jeffrey (1965) proposed to base decision theory on the conditional probability of a possible outcome given the performance of the action under consideration. The resulting theory has been labelled evidential decision theory (EDT) because the decision-theoretically relevant notion of consequence is understood evidentially: that the performance of an action is evidence for the obtaining of a desirable outcome is supposed to capture the idea that an agent should choose the option which she thinks is most likely to lead to a good result. EDT has been called into question by Newcomb's problem and variants of it (Nozick 1969). Newcomb's problem has widely been taken to suggest that EDT yields the wrong verdicts in cases in which the action's evidential impact on a desirable outcome is not underwritten by a direct causal effect (but see Spohn 2012b). In response to this problem, causal decision theory (CDT) has been developed (Gibbard & Harper 1981; Joyce 1999; Lewis 1981). According to CDT, what matter are the unconditional probabilities of the causal dependencies of the possible outcomes on the available actions. The notion of causal dependency is usually spelled out in terms of counterfactuals: in deliberation, an agent should think about what would happen if she were to perform a certain action.
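
The divergence between the two theories can be displayed with the standard Newcomb payoffs (the predictor's reliability is a stand-in value, and the simple renderings of EDT and CDT below are illustrative sketches rather than full formal theories):

```python
# A compact numerical rendering of Newcomb's problem with the standard
# payoffs: a highly reliable predictor has filled the opaque box iff she
# predicted one-boxing. EDT and CDT come apart.

reliability = 0.99                      # assumed P(prediction correct)
payoff = {('one_box', 'filled'): 1_000_000, ('one_box', 'empty'): 0,
          ('two_box', 'filled'): 1_001_000, ('two_box', 'empty'): 1_000}

def edt_value(action):
    """Weight outcomes by probabilities conditional on the action."""
    p_filled = reliability if action == 'one_box' else 1 - reliability
    return (p_filled * payoff[(action, 'filled')]
            + (1 - p_filled) * payoff[(action, 'empty')])

def cdt_value(action, p_filled=0.5):
    """The box is already filled or empty; the action cannot change that,
    so the same unconditional probability is used for both actions."""
    return (p_filled * payoff[(action, 'filled')]
            + (1 - p_filled) * payoff[(action, 'empty')])

print(max(['one_box', 'two_box'], key=edt_value))  # 'one_box'
print(max(['one_box', 'two_box'], key=cdt_value))  # 'two_box': dominance wins
```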

Until recently, a widely shared impression was that CDT is clearly superior to EDT, but this impression has been shaken by the current debate about whether CDT comes with its own set of counterexamples (Egan 2007). Egan's examples, if genuine, suggest that CDT has trouble explaining situations in which the performance of an action is evidence for the obtaining (or non-obtaining) of a causal connection. In other words, it seems that sometimes the conditional probabilities figuring in EDT do matter. But in light of Newcomb's problem, simply going back to EDT does not seem to be an option either. There have been various reactions to Egan's examples, including Ahmed (2012), Arntzenius (2008), Joyce (2012) and Wedgwood (2013), which have, however, not reached a consensus and still fall short of a fully satisfying theory. Briggs (2010) suggests that the contrast between EDT and CDT is structurally similar to the familiar voting paradoxes, which would, by an application of Arrow's theorem (Arrow 1951), indicate that no decision rule can satisfy all desirable constraints on decision-making. In the proposed project, Briggs's paper will be taken as a starting point for a systematic investigation of the general features a decision theory should have. Exploiting the link with social choice theory promises to deepen and generalize the debate considerably.

Rationality. Rationality is a (perhaps even the) key notion in epistemology and decision theory. A common project in both epistemology and decision theory is to postulate norms of rationality, which are then evaluated by how well they agree with basic pre-theoretic intuitions. Although this clearly marks one important aspect, an at least equally prominent project concerns the reduction of certain proposed norms to norms which are more directly accessible to evaluation. The so-called Dutch book arguments, tracing back to a suggestion by Ramsey (1926), can be seen as a salient instance in epistemology (structurally similar are the so-called money pump arguments concerned with coherence constraints on preferences). Synchronic Dutch book arguments are typically used to establish the axioms of probability as norms of rationality for an agent's credences, the result being, roughly speaking, that one is susceptible to accepting a set of bets which lead to a sure loss iff one's credences violate the axioms of probability (see e.g. Hájek 2005 for discussion). Diachronic Dutch books, on the other hand, are used to establish diachronic update rules such as conditionalization as the only rational reaction to certain types of evidence (cp. Teller 1973, crediting Lewis). In this case, the results show that under certain conditions an agent using a different update rule will be susceptible to a series of bets leading to a sure loss. How strong the Dutch book arguments really are is an as yet unresolved controversy (for an overview, see Vineberg 2011). One type of worry concerns the general connection between credences and dispositions to accept bets which look favourable in the light of those credences. Next, there are a number of more concrete problems. One concerns the status of logic: it does not seem irrational to be uncertain about certain theorems of logic, but the axioms of probability require certainty in this regard. This suggests that the synchronic Dutch book arguments show too much, at least as far as real-world agents are concerned, who cannot always be assumed to be logically omniscient. A second problem case concerns the potential loss of evidence through forgetting or through the lack of relevant contextual information, as evidenced by the Sleeping Beauty problem. As such belief changes sometimes involve the loss of certainties, they cannot be modelled in terms of conditionalization and might be open to a diachronic Dutch book argument (see Titelbaum 2013: ch. 12 for discussion). Dutch book arguments also play a crucial role in the context of the prodigality problem: Greco (2013) uses a Dutch book argument to argue against a sensitive invariantist approach to the problem.
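
A synchronic Dutch book can be displayed in miniature (the credences and stakes are invented): an agent whose credences in a proposition and its negation sum to more than 1 will accept two bets which jointly guarantee a loss.

```python
# A synchronic Dutch book in miniature: incoherent credences in A and
# not-A make two individually 'fair-looking' bets jointly a sure loss.

cr_A, cr_notA = 0.7, 0.5   # incoherent: they sum to 1.2
stake = 1.0                # each bet pays the stake if it wins

# The agent regards a price of credence * stake as fair for each bet.
price_A, price_notA = cr_A * stake, cr_notA * stake

for a_is_true in (True, False):
    winnings = (stake if a_is_true else 0) + (0 if a_is_true else stake)
    net = winnings - (price_A + price_notA)
    print(f"A is {a_is_true}: net payoff {net:.1f}")  # -0.2 either way
```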

An under-explored topic in the current debate concerns the dynamics of knowledge and the question of how beliefs can be expected to change if they aim at knowledge. For instance, it is a fairly uncontentious fact that knowledge can be lost through the acquisition of undermining though misleading evidence (for discussion, also in relation to Dutch book arguments, see Williamson 2000: ch. 10). This seems to make rational certain belief changes which share their structure with such losses of knowledge, even though these changes, amounting as they do to the loss of probability 1, would be susceptible to a Dutch book. Taking knowledge and changes in knowledge to be fundamental promises to yield a new perspective on the dynamics of belief and related epistemic attitudes.