• Lucid Dreaming - Dream Views





    Thread: Can they both be rational?

    1. #1
      Abra

      Can they both be rational?

      Ok, this thread is going to be better than my last one, I swear.

      Take two people. Give them the exact same relevant evidence which could influence a set of graded beliefs*. Is it rationally permissible for these two to differ on their set of graded beliefs, once the evidence has been considered?

      *graded beliefs as opposed to binary beliefs. So instead of saying "I believe it's going to rain tomorrow," you say "I am 90% certain it is going to rain tomorrow." If people do not differ on a set of graded beliefs, then the number associated with each belief is the same.

      You can even take the question to mean, given the exact same total evidence (same life experiences), could two people differ on any of their views (set of graded beliefs), while both people are thinking rationally? Or must one (or both) of them have had either an arbitrary means as to how to assign their belief(s), or an irrational means?
      Abraxas

      Quote Originally Posted by OldSparta
      I murdered someone, there was bloody everywhere. On the walls, on my hands. The air smelled metallic, like iron. My mouth... tasted metallic, like iron. The floor was metallic, probably iron

    2. #2
      JoannaB
      Are you assuming that the two people are fully rational? Are you also assuming that they are either identical twins, or that genetics play no role in differences of interpretation? Are you assuming that both people are the same gender?

      Furthermore, wouldn't you need to assume that there are no contradictions requiring an arbitrary decision or coincidence? Normal life is full of contradictions: contradictory evidence, contradictions in personality.

      Of course, all these assumptions are so unrealistic that I think you might as well decide either way: they could or could not come up with different graded beliefs, and either conclusion would be equally unrealistic, because the assumptions are. I think you've got yourself a case of "if 0 = 1, then I am the Pope," since in real life people never have exactly the same experiences, nor are they fully rational. So I think if you craft the unrealistic assumptions in just the right way, you can arrive at either outcome.

    3. #3
      Abra
      Yes, fully rational. If two agents were fully rational, what difference would their gender make? Rationality should be able to overcome thought-bias due to gender (if such a thing exists), since it doesn't make rational sense for two arguments to differ based on the gender of the arguers. Does it really matter whether I'm a boy or a girl when I hold a given opinion on climate change? Is there a "right" answer for boys, and a different "right" answer for girls? No, that's ridiculous.

      Same goes for other genetic differences, unless you point out an example and explain how it's non-trivially different from my gender example. And, yes, being mentally handicapped without the ability to mentally overcome that handicap would obviously not work in this thought experiment.

      Quote Originally Posted by JoannaB View Post
      Furthermore, you would need to assume that there are no contradictions that needed arbitrary decision or coincidence? Normal life is full of contradictions, contradictory evidence, contradictions in personality.
      No, you wouldn't need to assume there are no contradictions. If you find evidence which supports a given hypothesis, but contradicts another that also has evidence to support it, you'd believe more strongly in the hypothesis that had stronger evidence. If the total evidence for each hypothesis is equally strong, then you'd suspend judgement on both hypotheses until further evidence was gained.

      And if you need to make an arbitrary decision, that's fine. We all need to do that. You can behave in multiple ways and still be rational, that is not the question. The question is, would the beliefs necessarily match up, if both agents had the exact same evidence and were both "fully" rational.

      I'm not going to say "fully" before "rational" anymore. That's what I mean whenever I say "rational."
      Last edited by anderj101; 03-20-2013 at 04:05 AM. Reason: Merged

    4. #4
      Alric
      If they were robots, then they would have the same results. Humans might have the same results, but humans are not always rational, and we are very bad at assigning percentages to things. If they were both educated, rational people, then they should come to the same conclusion, but they may still have slightly differing percentages, because humans are just not that accurate.

    5. #5
      Abra
      Quote Originally Posted by Alric View Post
      If they were robots then they would have the same results. Humans might have the same results but humans are not always rationally and we are very bad at assigning percentages to things. If they were both educated and rational type people, then they should come to the same conclusion but may still have slightly differing percentages because humans are just not that accurate.
      Interesting. So you are saying that they would align in their beliefs, if they were truly ideal rational subjects, taken to be really smart, emotionally detached, logical robots.

      But wouldn't that mean, if you fed these robots all the evidence in the universe, they'd have the same, single rational view on all ethical, epistemological, ontological, and metaphysical beliefs? This would mean that there is a single rational credence distribution (percentages you assign to each belief) on the set of all beliefs. That would mean there is one rationally permissible perspective on abortion, animal rights, skepticism, religion, which football team is best, et cetera (and it does not and probably should not be the case that each belief in each proposition is with 100% certainty).

      You're a bold one.

      Quote Originally Posted by Wayfaerer View Post
      It seems to me that two rational people could have the same belief graded differently. I'm finding it difficult to imagine one relatively effective method of grading for all proposals.
      Bayesianism is p. cool from what I've learned so far. But people still argue about which method for grading is "best." Can you give an example where there are two methods of grading which are equally 'logical' but produce different grades on a belief? The only one I can think of is the indifference principle, which says that given N mutually exclusive possibilities and no evidence, you should assign 1/N as the likelihood of each possibility. But I reject this idea, because it can produce contradictory belief distributions. (I say instead that there should be no distribution assigned in that case, but some people disagree with me on this.)
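      How the indifference principle contradicts itself can be sketched with the classic cube-factory example (my numbers, purely illustrative):

```python
from fractions import Fraction

# Cube-factory illustration of how the indifference principle can
# contradict itself. Suppose all we know is that a cube's side length
# lies between 0 and 2 cm.

# Indifference over side length: P(side <= 1 cm) = 1/2.
p_side = Fraction(1, 2)

# The same cubes, described by volume (0 to 8 cm^3). "side <= 1 cm"
# is the very same event as "volume <= 1 cm^3", yet indifference
# over volume assigns it P = 1/8.
p_volume = Fraction(1, 8)

# Same proposition, same ignorance, two different "indifferent" credences.
assert p_side != p_volume
```

      Same evidence, same principle, two incompatible distributions, depending only on how you carve up the possibilities.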

      Another debate deals with how to use correlative evidence when forming beliefs about action; it is at odds with a causal-evidence approach. I could explain what both are, and why I like the causal approach more, if you'd like.

      Point being, I can see how you could have a single best way for assigning credences. I find it possible to imagine multiple best ways, but I find it harder to justify this possibility... I have yet to come across two ways to grade that are equally compelling.

      Each article of evidence could 'reasonably' carry different weights for different people, especially since they only really serve to help you effectively guess in the face of the unknown.
      How?

      Some people might be guided by one possibility over another for aesthetic reasons.
      Inherently, aesthetics don't justify belief in jack, so these people would not be rational.

      A good example might be the proposal of the planet Vulcan to explain Mercury's disagreement with Newton's laws of gravity. Would all rational people have to have been 99% sure that it existed considering all the evidence supporting Newton's laws? After a time of never being able to find Vulcan, would someone who was 80% sure that a big conceptual leap in how we think of gravity was needed have been relatively irrational?
      Your example is not clear. 99% sure that Newton's laws exist? Or that Vulcan/Mercury exists? Do the Vulcaners believe in them, or the Mercurians? What is this "relatively irrational" you speak of? If group A had evidence for the laws that group B didn't, then it's of course permissible that the rational group A-ers differ in their beliefs from the rational group B-ers.

      Nature has presented us with many unexpected observations in the past that it would be hard to say for sure. It could have been invisible matter perturbing Mercury's orbit for all they knew.
      I have no idea what your example is talking about at this point.

      Individually proceeding from this information seems more of an intuitive procedure, guided by aesthetic preferences of different ideas. A similar situation going on now with galaxies showing departures from general relativity could be seen the same way. Some seem 99% sure that "dark matter" exists, some aren't so convinced and give more consideration to the possibility that a reworking of gravity might be needed. Both of these possibilities seem rational, especially considering the previous example. Some of these people might just be guided by hunches that evidence can't support yet.
      Aaaand I have already commented on some things covered in here, and won't argue with your conclusions until you've clarified some things.

      Also, lol @ this thread:

      If you don't like assigning a single degree of certainty to a proposition, pretend you're assigning a range of degrees instead. I have 80%-90% confidence that it will rain tomorrow.

      Reformulated this way, the strong version of "single rational credence distribution" would say that the ranges would have to align. (A weak version would perhaps allow for a mere overlap between two rational people's ranges. But the weak version isn't very informative, if for example one person had a range of 0-100% in the proposition "the sun will explode tomorrow", and the other had a range of .001-.00101%. Would those two people be equally rational?)
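      The weak version's problem can be made concrete with a quick interval check (numbers taken from the sun example above):

```python
def overlaps(a, b):
    """Weak alignment: two credence ranges need only intersect."""
    return a[0] <= b[1] and b[0] <= a[1]

# One agent's range spans nearly everything; the other's is tiny.
wide = (0.0, 1.0)              # 0%-100% that the sun explodes tomorrow
narrow = (0.00001, 0.0000101)  # 0.001%-0.00101%, as probabilities

assert overlaps(wide, narrow)  # the weak version is satisfied...
assert wide != narrow          # ...yet the credences hardly "align"
```

      Mere overlap is trivially easy to satisfy, which is why the weak version says so little.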
      Last edited by anderj101; 03-20-2013 at 04:05 AM. Reason: Merged 3 posts

    6. #6
      Arra
      Not necessarily. Of course, if two people have the same evidence, they should both come to the same conclusion. However, most logical arguments are dependent upon one or more presuppositions, which are often not evident, and are not stated in the given premises. If the people end up with a different conclusion, it's likely that they both assume different premises.

      Ex:
      P1: Carlos is a man.
      P2: All men have 10-inch dicks.
      C: Therefore, Carlos has a 10-inch dick.

      Yeah....

    7. #7
      Abra
      Quote Originally Posted by Dianeva View Post
      Not necessarily. Of course, if two people have the same evidence, they should both come to the same conclusion. However, most logical arguments are dependent upon one or more presuppositions, which are often not evident, and are not stated in the given premises. If the people end up with a different conclusion, it's likely that they both assume different premises.

      Ex:
      P1: Carlos is a man.
      P2: All men have 10-inch dicks.
      C: Therefore, Carlos has a 10-inch dick.

      Yeah....
      A rational person would not be justified to use "evidenceless presuppositions" when forming their beliefs. A rational person could not assume "all men have 10-inch dicks" unless they've seen a lot of (and only) ten inch dicks. A rational person could not assume "Carlos is a man," if they have never heard anything about Carlos. Also, you're taking both premises to be things the rational being was 100% certain of. It doesn't really make sense to be 100% certain of a non-mathematical universally quantified ("All") hypothesis, since there's usually the possibility of a counterexample. Another point is that if the person can fit a man with a smaller dick into his picture of the natural world, then that person can't be 100% certain that all men have 10-inch dicks.

      Is it fair to presuppose something, if you have no data on it? Does it make rational sense to even assign beliefs like that a credence value ("percentage")?

    8. #8
      Photolysis
      Should I be inferring anything from that somewhat unusual choice of an example?
      Last edited by Photolysis; 03-16-2013 at 04:34 PM.

    9. #9
      Wayfaerer
      It seems to me that two rational people could have the same belief graded differently. I'm finding it difficult to imagine one relatively effective method of grading for all proposals. Each article of evidence could 'reasonably' carry different weights for different people, especially since they only really serve to help you effectively guess in the face of the unknown. Some people might be guided by one possibility over another for aesthetic reasons.

      A good example might be the proposal of the planet Vulcan to explain Mercury's disagreement with Newton's laws of gravity. Would all rational people have to have been 99% sure that it existed considering all the evidence supporting Newton's laws? After a time of never being able to find Vulcan, would someone who was 80% sure that a big conceptual leap in how we think of gravity was needed have been relatively irrational? Nature has presented us with many unexpected observations in the past that it would be hard to say for sure. It could have been invisible matter perturbing Mercury's orbit for all they knew. Individually proceeding from this information seems more of an intuitive procedure, guided by aesthetic preferences of different ideas. A similar situation going on now with galaxies showing departures from general relativity could be seen the same way. Some seem 99% sure that there exists matter that neither emits nor absorbs electromagnetic radiation to account for the unexplained gravitational observations, some aren't so convinced and give more consideration to the possibility that a reworking of gravity might be needed. Both of these possibilities seem rational, especially considering the previous example. Some of these people might just be guided by hunches that evidence can't support yet.
      Last edited by Wayfaerer; 03-16-2013 at 09:25 PM.

    10. #10
      Wayfaerer
      Quote Originally Posted by Abra View Post
      Bayesianism is p. cool from what I've learned so far. But people still argue about which method for grading is "best." Can you give an example where there are two methods of grading which are equally 'logical', but produce different grades on a belief? The only ones I can think of are the indifference principle, which says that given N mutually exclusive possibilities with no evidence, you should assign 1/N as the likelihood of each possibility. But I reject this idea, because it can produce contradictory belief distributions. (I say instead that there should be no distribution assigned in that case, but some people disagree with me on this).

      Another debate deals with how to use correlative evidence when forming beliefs about action, and it is at ends with a causal evidence approach. I could explain why I like the causal approach more and what they both are if you'd like.

      Point being, I can see how you could have a single best way for assigning credences. I find it possible to imagine multiple best ways, but I find it harder to justify this possibility... I have yet to come across two ways to grade that are equally compelling.
      That method seems cool, but the point I was making is that since these propositions are ultimately unknown, it really comes down to personal intuition on what to do about the available evidence for them. History and previously successful methods could serve as evidence for a proposition, in predicting the weather for example, but they're only really guesses in the end. Who's to say previously successful methods will always work, or that our limited view of natural history necessarily repeats itself? The weather might be relatively predictable, but how would you grade something as frequently astonishing as progress in science?

      "Each article of evidence could 'reasonably' carry different weights for different people, especially since they only really serve to help you effectively guess in the face of the unknown."
      Quote Originally Posted by Abra View Post
      How?
      Well, 'reasonably' as in two different guesses or intuitive opinions can both be 'reasonable'.

      Quote Originally Posted by Abra View Post
      Inherently, aesthetics don't justify belief in jack, so these people would not be rational.
      I said guided by aesthetic reasons, nothing about belief. I was saying that after all evidence is considered, they can serve as a motivation toward certain routes of action.

      Quote Originally Posted by Abra View Post
      You're example is not clear. 99% sure that Newton's laws exist? Or that Vulcan/Mercury exists? Do the Vulcaners believe in them, or the Mercurians? What is this "relatively irrational" you speak of? If group A had evidence for the laws that group B didn't, then it's of course permissible that the rational group Aers differ in their beliefs from the rational group Bers.
      I was asking if, given all the evidence that Newton's laws were completely correct and using a grading method, that that one little perturbation would have served as enough "reasonable" evidence to conclude the belief that a new conception of gravity was 80% inevitable? How reasonable would it have been to think that Vulcan was invisible? Would there have been a method of sorting the evidence that produced the actual likelihood of these possibilities?
      Last edited by Wayfaerer; 03-16-2013 at 11:14 PM.

    11. #11
      Abra
      Quote Originally Posted by Wayfaerer View Post
      That method seems cool, but the point I was making is that since these propositions are ultimately unknown, it really comes down to personal intuition on what to do about the available evidence for them.
      Is basing your credences on personal intuition rational? Where does "personal intuition" come from?

      History and previously successful methods could serve as evidence for a proposition, in predicting the weather for example, but they're only really guesses in the end. Who's to say previously successful methods will always work, or that our limited view of natural history necessarily repeats itself?
      It's fine that you can be wrong, or inaccurate. You can be wrong about something and rational at the same time. But in the face of new evidence, you learn by a logical means and reformat your credences to compensate. Make an educated guess on some set of evidence, gather more evidence, and form a more accurate guess. It's the means by which you do so which makes you rational.
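      For what it's worth, the "logical means" of reformatting credences that Bayesians have in mind is just Bayes' rule; here's a tiny sketch with made-up numbers:

```python
def update(prior, lik_if_true, lik_if_false):
    """One Bayes-rule update of credence in a single proposition."""
    joint_true = prior * lik_if_true
    joint_false = (1 - prior) * lik_if_false
    return joint_true / (joint_true + joint_false)

# Made-up numbers: start 50% confident it will rain; dark clouds
# appear 80% of the time before rain and 30% of the time otherwise.
posterior = update(0.5, 0.8, 0.3)
print(round(posterior, 4))  # 0.7273
```

      Two agents who start from the same prior and apply this rule to the same evidence necessarily land on the same posterior; the open question in this thread is whether rationality pins down the prior too.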

      "Each article of evidence could 'reasonably' carry different weights for different people, especially since they only really serve to help you effectively guess in the face of the unknown."
      What justification do you have that each article of evidence could 'reasonably' carry different weights for different people?

      Well, 'reasonably' as in two different guesses or intuitive opinions can both be 'reasonable'.
      What do you mean by 'reasonable,' is what I'm getting at. Because if it has enough to do with 'rational,' then you can't respond to "Can two folks have differing credence distributions and the same evidence and still be rational?" with "yes, because two different opinions can be 'reasonable'." That is circular.


      I said guided by aesthetic reasons, nothing about belief. I was saying that after all evidence is considered, they can serve as a motivation toward certain routes of action.
      Ok? I guess I can respond to this with: yes, it is rational to be guided by aesthetic reasons, but only within a context or goal. Such as men seeing pale, fat girls as good potential mates in times of old, because it reflected social status (or, more generally, in even older days, because it reflected health). It is not rational, however, to like pale, fat girls inherently because they are pale and fat (and so you'd weigh evidence that contained pink fat girls differently for some reason? You aren't trying to say that, are you?). Fat is only good because it means something better (health/wealth/luxury/fetish fulfillment/fresh meat).


      I was asking if, given all the evidence that Newton's laws were completely correct and using a grading method, that that one little perturbation would have served as enough "reasonable" evidence to conclude the belief that a new conception of gravity was 80% inevitable?
      At that point in time, it's possible. But I think you mean to say, "to conclude, with 80% confidence, that a new conception of gravity was inevitable".
      Unless you really mean, "to conclude, with 100% certainty that a new conception of gravity is 80% inevitable", which is less plausible.
      But yeah, if that's all the evidence you had, that's a possible rational belief.

      Would there have been a method of sorting the evidence that produced the actual likelihood of these possibilities?
      What method are the human-minds using? What method should they use, if their aim is to be rational? Is there a single rational method? Is there only one possible 'rational' credence distribution to have, given the same set of evidence? That's the point of the thread.

    12. #12
      Xei
      I'm interested in where you're going with this. Why 'graded beliefs' in particular?

      I'm not really sure what distinguishes graded beliefs from binary beliefs. Believing something will happen with 100% or 0% probabilities seem to be graded beliefs which are also binary. Conversely, "given my data, it will rain tomorrow with probability 80%" seems to be a binary belief (it's true or false) which is graded.

      I guess I'd give a tentative "no" for now. If they have the same data, any two 'perfectly rational' entities should come up with the same conclusions.
      sloth and dutchraptor like this.

    13. #13
      Abra
      Quote Originally Posted by Xei View Post
      I'm interested in where you're going with this. Why 'graded beliefs' in particular?

      I'm not really sure what distinguishes graded beliefs from binary beliefs. Believing something will happen with 100% or 0% probabilities seem to be graded beliefs which are also binary. Conversely, "given my data, it will rain tomorrow with probability 80%" seems to be a binary belief (it's true or false) which is graded.

      I guess I'd give a tentative "no" for now. If they have the same data, any two 'perfectly rational' entities should come up with the same conclusions.
      You can convert graded to binary, or binary to graded, and both functionally exist. Graded works in problems that binary doesn't, though. Such as thinking about the probability of winning the lottery, in relation to your beliefs about each ticket winning. You know Pr(some ticket wins) = 1, if it's a fair lottery. But if you had a binary belief system, Pr(this specific ticket will win) = 0, because surely you don't think any specific ticket will win (binary beliefs set probabilities to 0 or 1). But then Pr(ticket 1 will win) + Pr(ticket 2 will win) + ... + Pr(ticket 1000 will win) = 0, when it should be equal to Pr(some ticket wins)! This problem dissolves if we believe each ticket has a 1/1000 chance of winning.
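      The lottery arithmetic above, as a quick sketch (1000 tickets, as in the example):

```python
N = 1000  # tickets in a fair lottery

# Binary beliefs: "this ticket will win" gets probability 0 for
# every single ticket, so the probabilities can never sum to the
# certainty that *some* ticket wins.
binary = [0] * N
assert sum(binary) == 0  # but Pr(some ticket wins) = 1

# Graded beliefs: each ticket gets credence 1/N, and the sum
# matches Pr(some ticket wins) = 1 as it should.
graded = [1 / N] * N
assert abs(sum(graded) - 1) < 1e-9
```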

      You could also make your dual-binary/graded-looking belief into a meta-graded belief. How confident are you that you are 80% confident that it will rain tomorrow? Instead of straight-up believing 100%, that there is an 80% chance of rain tomorrow, as you described, you could perhaps have only 50% confidence in your belief about the chance of rain. This shows that the binaryness describing the graded proposition is talking about something different, something other than the subjective 80% belief in rain, namely, your confidence in that belief.
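      One standard way to make "confidence in your 80% confidence" precise is a second-order distribution over the chance of rain itself; a sketch using a Beta distribution (the parameters are mine, purely illustrative):

```python
import random

random.seed(0)

# Instead of a flat "80% chance of rain", hold a distribution over
# that very number: Beta(8, 2) has mean 0.8 but nonzero spread,
# i.e. you are not fully certain the chance is exactly 80%.
samples = [random.betavariate(8, 2) for _ in range(100_000)]
mean = sum(samples) / len(samples)
spread = sum((x - mean) ** 2 for x in samples) / len(samples)

# mean comes out close to 0.80; spread > 0 is the meta-uncertainty,
# the thing a point-valued 80% credence can't represent.
```

      A narrower Beta (say Beta(80, 20)) would model someone much more sure that 80% is the right number, while keeping the same first-order credence.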

      And if your answer is 'no,' does that also mean there is only one rational way to update your beliefs, given new evidence? Or could there be meaningfully multiple ways, that occur using non-equivalent rules to give you the same credence distribution?
      Last edited by Abra; 03-18-2013 at 06:05 PM.

    14. #14
      Xei
      You can frame those lottery questions in the language of binary beliefs; namely, "the probability of this ticket winning is 1/1000" (which in this case is true).

    15. #15
      Abra
      Quote Originally Posted by Xei View Post
      You can frame those lottery questions in the language of binary beliefs; namely, "the probability of this ticket winning is 1/1000" (which in this case is true).
      But do you agree that the "binary part" of the belief would be talking about something different than what the graded belief is, i.e. the confidence in your graded belief?

      Nevertheless, we used the graded interpretation when calculating the sum of each ticket's probability of winning. We don't somehow add up all the "it is true that I believe" bits to describe our graded beliefs about each ticket winning.

      And wouldn't that mean you are 100% certain that each belief that doesn't have exactly 80% confidence in rain is false? So you're 0% confident that it's 79% likely to rain (describing "it is false that I believe it is 79% likely to rain"), but 100% confident that it's 80% likely to rain? That doesn't seem reasonable.
      Last edited by Abra; 03-18-2013 at 06:48 PM.

    16. #16
      Ripeness is all.
      Quote Originally Posted by Photolysis View Post
      Should I be inferring anything from that somewhat unusual choice of an example?


      Quote Originally Posted by Abra View Post
      You can convert graded to binary, or binary to graded, and both functionally exist. Graded works in problems that binary doesn't, though. Such as thinking about the probability of winning the lottery, in relation to your beliefs about each ticket winning.
      Why even bother with the latter? Any belief you hold about specific tickets winning has zero correlation to the actual probability of winning the lottery.

      You know Pr(some ticket wins) = 1, if it's a fair lottery. But if you had a binary belief system, Pr(this specific ticket will win) = 0, because surely you don't think any specific ticket will win (binary beliefs set probabilities to 0 or 1). But then Pr(ticket 1 will win) + Pr(ticket 2 will win) + ... + Pr(ticket 1000 will win) = 0, when it should be equal to Pr(some ticket wins)!
      Why would you have adopted that pattern of thinking at all?

      As you and Xei mentioned

      This problem dissolves if we believe each ticket has a 1/1000 chance of winning.
      Quote Originally Posted by Abra View Post
      But do you agree that the "binary part" of the belief would be talking about something different than what the graded belief is, ie. the confidence in your graded belief?

      Nevertheless, we used the graded interpretation when calculating the sum of each ticket's probability of winning. We don't somehow add up all the "it is true that I believe" bits to describe our graded beliefs about each ticket winning.
      Wouldn't it go without saying that it's true you believe so?

      You could also make your dual-binary/graded-looking belief into a meta-graded belief.
      For what use?

      How confident are you that you are 80% confident that it will rain tomorrow? Instead of straight-up believing 100%, that there is an 80% chance of rain tomorrow, as you described, you could perhaps have only 50% confidence in your belief about the chance of rain. This shows that the binaryness describing the graded proposition is talking about something different, something other than the subjective 80% belief in rain, namely, your confidence in that belief.
The purpose of the original graded belief was to express my confidence in a prediction. It goes without saying that I'm not 100% confident my prediction will be accurate, because that is the nature of a graded belief and why I chose to use one in the first place. Meta-graded beliefs simply draw the mind further away from the context in which it was required to estimate. So they don't have much utility, besides maybe revealing you might be fond of smoking pot.

      Quote Originally Posted by Abra View Post
      Take two people. Give them the exact same relevant evidence which could influence a set of graded beliefs*. Is it rationally permissible for these two to differ on their set of graded beliefs, once the evidence has been considered?

      *graded beliefs as opposed to binary beliefs. So instead of saying "I believe it's going to rain tomorrow," you say "I am 90% certain it is going to rain tomorrow." If people do not differ on a set of graded beliefs, then the number associated with each belief is the same.

      You can even take the question to mean, given the exact same total evidence (same life experiences), could two people differ on any of their views (set of graded beliefs), while both people are thinking rationally? Or must one (or both) of them have had either an arbitrary means as to how to assign their belief(s), or an irrational means?
      If we think differently, we probably go about "updating our beliefs" differently, too. Since your example suggests these two people have the same life experiences, no, they wouldn't differ on any of their views. The problem with this example is that it isn't applicable to physical reality because two people constitute two separate points in time and space. Through our understanding of relativity, we know they wouldn't experience life in exactly the same way.

      And if your answer is 'no,' does that also mean there is only one rational way to update your beliefs, given new evidence? Or could there be meaningfully multiple ways, that occur using non-equivalent rules to give you the same credence distribution?
The manner in which we update our beliefs is correlated to why we update them. As long as we pursue truth, we share a rational basis for updating them. However, because we think differently, the literal transformation can occur in multiple ways using various rules.

    17. #17
      DuB
      DuB is offline
      Distinct among snowflakes DuB's Avatar
      Join Date
      Sep 2005
      Gender
      Posts
      2,399
      Likes
      362
      Quote Originally Posted by Abra View Post
      Take two people. Give them the exact same relevant evidence which could influence a set of graded beliefs*. Is it rationally permissible for these two to differ on their set of graded beliefs, once the evidence has been considered?
      I guess we have to bring in a little additional information.

      We are assuming that both people are "rational." Presumably by this we mean that they are both perfect Bayesians. You haven't told us anything about each person's prior information, but I think that this matters for answering the question.

      If we assume that each person starts with the same prior information, that each person is exposed to the same evidence, and that each person combines evidence with priors in the same way (i.e., the Bayesian way), then the answer seems obvious. They must have the same posterior belief. I mean... we haven't really given them any means with which to form different beliefs!

      If we assume, on the other hand, that each person starts with different prior information, then of course they will not (in general) have the same posterior belief after viewing the same evidence.

      However, if you continue to expose both people to new pieces of evidence, and let their posterior beliefs continually evolve based on each new piece of incoming evidence, then if they are both rational Bayesians, they should eventually converge to the same beliefs. Of course, this is only something that should happen in the limit as time/evidence approaches infinity, so it still need not be the case that the two Bayesians hold exactly the same belief after any finite amount of time/evidence. But, after some finite amount of time/evidence, the differences in their beliefs should indeed be "negligible."
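
DuB's convergence point can be sketched numerically. In this hedged example (the agents, priors, and coin are my invention, not from the post), two Bayesian agents start with deliberately different Beta priors over a coin's bias, observe the same stream of flips, and perform the standard conjugate update; their posterior means end up nearly identical.

```python
import random

random.seed(0)
true_bias = 0.7  # the coin's actual probability of heads

# Each agent's prior is Beta(alpha, beta); the priors disagree sharply.
agents = {"optimist": [8.0, 2.0], "pessimist": [2.0, 8.0]}

for _ in range(10_000):
    heads = random.random() < true_bias
    for params in agents.values():
        # Bayesian conjugate update: heads increments alpha, tails beta.
        params[0 if heads else 1] += 1

# Posterior mean of Beta(alpha, beta) is alpha / (alpha + beta).
means = {name: a / (a + b) for name, (a, b) in agents.items()}
gap = abs(means["optimist"] - means["pessimist"])
print(means, gap)  # after 10,000 shared observations the gap is tiny
```

The priors never fully wash out after finite evidence (the gap here is about 6/10010, not zero), which matches DuB's caveat that exact agreement only happens in the limit.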

      [An interesting related case is the case covered by Aumann's agreement theorem. The usual summary of this result is that "two rational people can't agree to disagree." Of course, the actual content of the agreement theorem is more precise and less general than this summary appears to make it. But it seems like something interesting and relevant enough to mention here.]

      One thing to keep in mind about all of these different results is that they pretty much uniformly depend upon a very specific and technical definition of what it means to be "rational." Usually all that these kinds of results warrant us to say about a given disagreement is that at least one party is not a perfect Bayesian. But this is not particularly surprising or useful to know from the perspective of human psychology, as we know it is almost certainly the case that no one actually reasons in a perfectly Bayesian way. So it is not obvious whether these kinds of results have much practical bearing on actual disagreements between humans in the real world. But they are fun to think about.

    18. #18
      I am become fish pear Abra's Avatar
      Join Date
      Mar 2007
      Location
      Doncha Know, Murka
      Posts
      3,816
      Likes
      542
      DJ Entries
      17
      Quote Originally Posted by DuB View Post
      One thing to keep in mind about all of these different results is that they pretty much uniformly depend upon a very specific and technical definition of what it means to be "rational." Usually all that these kinds of results warrant us to say about a given disagreement is that at least one party is not a perfect Bayesian. But this is not particularly surprising or useful to know from the perspective of human psychology, as we know it is almost certainly the case that no one actually reasons in a perfectly Bayesian way. So it is not obvious whether these kinds of results have much practical bearing on actual disagreements between humans in the real world. But they are fun to think about.
But if two people disagreed, and they both thought they were perfect Bayesians, wouldn't both of them re-check their evidence and formulations?

      This is a practical question to answer, because people who disagree and claim to be rational will attempt to learn and converge, rather than shrugging it off as having two rational perspectives (and neither coming closer to the truth, or mutual understanding).

      @Parasousia:
      I only mentioned graded vs. binary to show how they can be used, and how you can't reduce one into the other without some loss of meaning.

The manner in which we update our beliefs is correlated to why we update them. As long as we pursue truth, we share a rational basis for updating them.
      What if you think religion is truth?

      However, because we think differently, the literal transformation can occur in multiple ways using various rules.
      Ok, you have an opinion. Now's your chance to justify it, by showing me a few non-equivalent updating rules that either produce the same credence distributions, or give different distributions, but are still reasonable rules to have.

    19. #19
      DuB
      DuB is offline
      Distinct among snowflakes DuB's Avatar
      Join Date
      Sep 2005
      Gender
      Posts
      2,399
      Likes
      362
      Quote Originally Posted by Abra View Post
      But if two people disagreed, and they both thought they were perfect bayesians, wouldn't both of them re-check their evidence and formulations?

      This is a practical question to answer, because people who disagree and claim to be rational will attempt to learn and converge, rather than shrugging it off as having two rational perspectives (and neither coming closer to the truth, or mutual understanding).
      Do you think that anyone really, truly, honestly believes that they are a perfect Bayesian? Undoubtedly most people think that they are basically "rational" in some vague, colloquial sense of the term. But that's not what we're talking about. Do you think that anyone really believes that they personally are strictly rational in the Bayesian sense?

I submit that the prior probability that any given person's reasoning processes perfectly conform to the tenets of Bayesian subjective probability is pretty close to 0. This means you can safely enter any argument or discussion assured in the knowledge that almost certainly neither one of you is a perfect Bayesian. So then observing that, lo and behold, the two of you cannot seem to agree--and therefore that one of you must not be a perfect Bayesian--doesn't really tell you anything that you didn't already know going into the discussion. In other words, these results are largely irrelevant to actual disagreements between humans on Earth, because no one is a Bayesian, and everyone knows it (or should know it).

      I don't even see that results like this give either of the parties any particular additional motivation to reach a stable agreement, either. Since the results clearly only apply to situations involving two perfectly rational agents, it seems exceedingly easy for one or both of the parties to decide something like: "Clearly this person is not a perfectly rational agent, therefore I have no reason to expect that I would or should eventually agree with them given a sufficient length of discussion. Maybe we should talk about the weather instead."

      Although I am not convinced that these questions have a lot of practical importance as they pertain to human decision makers, they are potentially more relevant and important when we start to think about their implications for designing non-human, artificially intelligent agents.

    20. #20
      I am become fish pear Abra's Avatar
      Join Date
      Mar 2007
      Location
      Doncha Know, Murka
      Posts
      3,816
      Likes
      542
      DJ Entries
      17
      Quote Originally Posted by DuB View Post
      I don't even see that results like this give either of the parties any particular additional motivation to reach a stable agreement, either. Since the results clearly only apply to situations involving two perfectly rational agents, it seems exceedingly easy for one or both of the parties to decide something like: "Clearly this person is not a perfectly rational agent, therefore I have no reason to expect that I would or should eventually agree with them given a sufficient length of discussion. Maybe we should talk about the weather instead."
      What I want is for that person to walk out of the conversation with, "Huh, one of us was wrong, and couldn't prove who was right. But there is clearly an answer. I should study this more from her point of view, and I hope she does the same."

      I realized the second after posting that due to the impossibility of anyone becoming perfectly rational, what I was after wasn't going to happen.

    21. #21
      Xei
      UnitedKingdom Xei is offline
      Banned
      Join Date
      Aug 2005
      Posts
      9,984
      Likes
      3084
      How do we rationalise that Bayesianism is perfectly rational?

    22. #22
      DuB
      DuB is offline
      Distinct among snowflakes DuB's Avatar
      Join Date
      Sep 2005
      Gender
      Posts
      2,399
      Likes
      362
      Well, the simplest answer is that "rational ≡ Bayesian" is just a definition, and if you don't want to accept the definition, that's fine, just so long as you know what we mean when we invoke the definition. But this answer is kind of a cop-out to what is actually an interesting question. A full answer to this question would have to be huge, but I can give a brief introduction and then point to some further resources that I think will more than satisfy you.

      "Bayesianism," in this context, is shorthand for Bayesian epistemology. (In other contexts it might refer to Bayesian statistics or something else.) One of the projects of an epistemological theory is to give an account of what kinds of beliefs or ways of knowing are good or justified. These justified beliefs or justifiable ways of knowing are said to be "rational."

      The foundational idea of Bayesian epistemology is that of Bayes' Theorem. It is usually said that Bayesian epistemology holds that an agent's inferences and (degrees of) beliefs are "rational" if and only if they conform to the behavior laid out by Bayes' Theorem. But of course, Bayes' Theorem is just a straightforward and uncontroversial result that follows from the basic axioms of probability theory. So a slightly more appropriate way to state the thesis of Bayesian epistemology is that a rational agent's degrees of belief must obey the probability axioms. The probability axioms impose certain coherence conditions on an agent's beliefs. So, on one way of viewing it, Bayesian rationality imposes rather loose constraints on what is and is not rational, since, for example, it makes no reference at all to any notion of whether beliefs are "correct" or whether the believed propositions are "true." To be rational, they need only satisfy the probability axioms.
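
For concreteness, Bayes' Theorem says P(H|E) = P(E|H)·P(H) / P(E). A tiny numeric sketch of a single Bayesian update (the numbers here are made up for illustration, not drawn from the post):

```python
# One Bayesian update of belief in hypothesis H after seeing evidence E.
p_h = 0.3             # prior P(H)
p_e_given_h = 0.8     # likelihood P(E|H)
p_e_given_not_h = 0.2 # likelihood P(E|~H)

# Law of total probability: P(E) = P(E|H)P(H) + P(E|~H)P(~H)
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

posterior = p_e_given_h * p_h / p_e  # Bayes' Theorem
print(posterior)  # ~0.632: the evidence raised belief in H from 0.3
```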

      The point of all of the above is just to slightly reframe and rephrase the central question as: Why is it desirable for an agent's degrees of belief to obey the probability axioms? (For that matter, what are the probability axioms? This is explained in the links at the bottom.)

      The standard arguments in favor of this position are known as Dutch Book Arguments. The spirit of most (all?) Dutch Book Arguments is to show that if you have two decision making agents, Joe whose beliefs obey all the probability axioms--including Bayesian conditionalization (belief updating)--and Bob whose beliefs systematically violate one or other of the probability axioms--including, but not limited to, failing to update beliefs in a Bayesian way--then it is always possible to construct a situation in which Bob is guaranteed to do worse than Joe.

      Typically the examples used here are gambling examples. So, to be specific, "doing worse" in these cases amounts to showing that we can always construct a set of gambles, or a "book," which will demonstrably lead to a sure loss for anyone who plays the book according to their beliefs, but nevertheless Bob the non-Bayesian would judge this "Dutch book" to be fair (and thus would in principle be willing to play), and Joe the Bayesian would not. Joe will never judge a Dutch book to be fair.

      The point of these Dutch Book Arguments is not to say anything specifically about gambling, but rather to demonstrate the general point that any set of beliefs which does not conform to the probability axioms is ultimately self-defeating if acted upon. We are personally better off if we are rational in the Bayesian sense of the term.
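
A minimal Dutch book can be shown in arithmetic. In this hedged sketch (the scenario and numbers are mine, not DuB's), Bob's beliefs violate additivity: he assigns P(rain) = 0.6 and P(no rain) = 0.6, summing to 1.2 instead of 1. A bookie sells Bob a $1 bet on each outcome at Bob's own "fair" prices; exactly one bet pays out, so Bob is guaranteed a loss.

```python
# Bob's incoherent degrees of belief: the two should sum to 1.
p_rain, p_no_rain = 0.6, 0.6

# Bob judges a $1 bet on each outcome fair at these prices and buys both.
cost = p_rain + p_no_rain  # 1.2 paid up front
payout = 1.0               # exactly one of the two bets pays $1

sure_loss = cost - payout
print(sure_loss)  # ~0.2: Bob loses about $0.20 whatever the weather
```

A coherent believer (P(rain) + P(no rain) = 1) would pay exactly $1 for the pair and break even, so no such book can be constructed against them.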

      So this is the part where I just start pointing to other resources on the topic and hope that my introduction and framing were somewhat useful.

      Dutch Book Arguments (Stanford Encyclopedia of Philosophy)

      Epistemic Utility Arguments for Probabilism (Stanford Encyclopedia of Philosophy)

      Bayesian Epistemology (Stanford Encyclopedia of Philosophy)
      Last edited by DuB; 03-26-2013 at 08:29 PM.
      Abra and Xei like this.

    23. #23
      Ripeness is all.
      Join Date
      Feb 2013
      LD Count
      3
      Location
      Earth
      Posts
      15
      Likes
      1
      Quote Originally Posted by Abra View Post
      I only mentioned graded vs. binary to show how they can be used, and how you can't reduce one into the other without some loss of meaning.
      you can't reduce one into the other without some loss of meaning.
      Are you sure? Whether graded or binary, a belief seems ultimately to be a measure of uncertainty. What is the point of binary or graded beliefs outside the realm of experimentation? If you can put your belief 'to the test', so to speak, then shouldn't the resulting experience stand as a token of either disconfirmation or affirmation as to the rationality of your belief? A conducted experience (yes, experience), whether guided by your burning desire to confirm a belief or merely consequent upon passive curiosity, should suffice as a junction for synthesizing theory and practice. Even when dealing with old experiences and known facts, the occurrence of speculation alone, or, rather, your puzzlement over the reality of something, evinces why you hold a belief or theory to begin with. When we do in fact acquire enough knowledge to draw new conclusions and gain new understandings, we no longer hold beliefs, but truths. And on that note,

      What if you think religion is truth?
      That's a matter of someone's linguistic philosophy, which, especially in the context of this thread, quite tragically amounts to an ironic misuse of language--something incomprehensible--for what is religion (or science, for that matter) but an approach to acquiring truths? What is religion but a means to explaining the universe? What do we express when we say we are familiar with the lessons of a parable? What does it mean to 'have an understanding' of a subject? We are interested in distinguishing the purpose of what surrounds us, from that of people's actions to that of natural disasters. All that exists outside ourselves remains apart and foreign to us as long as we are unfamiliar with its design. And, thus, we are victims... we are slaves to our mysterious surroundings--to unexplainable causes--as long as we remain susceptible to them. "The ignorant man is not free because he is confronted by an alien world."

      Ok, you have an opinion. Now's your chance to justify it, by showing me a few non-equivalent updating rules that either produce the same credence distributions, or give different distributions, but are still reasonable rules to have.
      Coincidentally, your question, "What if you think religion is truth?" conveniently serves as a relevant example supporting my 'opinion'; that which I will now dare to 'justify'. Firstly, I'll point out that it is useless for me to consider the question at face value while pursuing a consensus because doing so would ultimately lead to a stalemate between us rather than an advancement in rationality. Seeing how the claim 'religion is truth' conveys meaning despite its aforementioned incongruity... in light of this absurdity... it would appear the adaptability of language to suit our own visions and explications ratifies that we are truly lawless even under command of words: how 'anything goes'. In this realm of autonomy, of personal sovereignty, truth is religion for the believer as much as it is Bayesianism for the probability philosopher, and neither approach is more appropriate than the other in context of its applicability. In this limbo of subjectivity, we have the liberty to associate with whatever code of conduct satiates our temporal musings and complements our dispositions of the day without regarding that of others. Here, the nihilists have won and ethical conduct amounts to nothing more than bourgeois fantasy. We arrive at an impasse with one another only when subjectivity has a monopoly on rationality.

      If, however, I go deeper than face value by also considering your question within the context or ambiance of this thread, and especially within that of your apparent thoughtfulness, it seems crucial that I resume delving into semantic matters; for the illogical conclusion deriving from your question is of a dialectical nature, after all. The phrasing of the question, "What if you think religion is truth?" dually exposes the vastness of our capacity for interpretation and exhibits the flexibility of language. This slipperiness of tongue often leads us astray from that which we'd intended to communicate, and thus implicates non-equivalent updating rules. In turn, any unanimity resulting between individuals exemplifies the variation characteristic of credence 'distributions', as you phrased it, and is truly a miracle considering our communication handicap.
      Last edited by Parousia; 03-30-2013 at 08:48 AM. Reason: typo
