
Talk:Two envelopes problem/Archive 5

From Wikipedia, the free encyclopedia

Can we all agree on the following?

1) The Two Envelopes Paradox that this article is about is the variant where the envelope is not opened (there is no real discussion of the other variant)

2) The paradox is the contradiction between symmetry and the switching argument - specifically points 1-8, as written in the current article. Step 1 is an assertion. Steps 2,3,4,5,6,7,8 are deduced from the premises and the previous steps.

3) The switching argument is false (because A: it violates symmetry and specifically B: it refutes itself - the same argument can be used to prove the opposite conclusion).

4) The flaw in the argument is that step 2 cannot be deduced from the premises while the statement in step 1 is true.

The last point I guess will be contentious/new. If we are trying to prove that we can switch, and specify the amount in the envelope as A, our subsequent arguments must be true for all values of A. This means they must be true for any value of A, e.g. $2. Since we have not defined the probability distribution the envelopes are chosen from, the arguments must also be true for all distributions, e.g. the distribution "the amounts are $1 and $2 with 100% probability". We can show in this case that the probability that the other envelope contains 2A = $4 is not 1/2 - it is zero. This is actually the same point that is made in Devlin and Storkey.
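Dilaudid's two-value example can be checked by direct enumeration. A minimal sketch (the variable names are mine, not from any of the sources):

```python
from fractions import Fraction

# Toy prior: the amounts are $1 and $2 with probability 1.
# Envelope A is chosen at random, so A = 1 or A = 2, each with probability 1/2.
outcomes = [(1, 2), (2, 1)]  # (amount in A, amount in B), equally likely

# Conditional probability that B = 2A given that A = 2:
matching = [b for a, b in outcomes if a == 2]
p_b_is_double = Fraction(sum(1 for b in matching if b == 4), len(matching))

print(p_b_is_double)  # 0, not 1/2: given A = $2, the other envelope must hold $1
```

With this prior, the "probability 1/2" claim in the switching argument fails for A = $2, which is the point being made.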

They both make a lot of good points but they also make some errors. These are Internet blogs or personal web pages, not peer-reviewed academic publications, and neither author is an expert in probability theory. Storkey says that there are no prior distributions for which it always pays to switch, yet there are. Devlin talks about the "infinite uniform distribution", but if he is looking for a discrete distribution, it is uniform on all powers of 2, not on all integers; if he is looking for a continuous distribution, it has the density 1/x (not uniform at all). Christensen and Utts made the same mistake but later corrected it. Richard Gill (talk) 19:52, 27 July 2011 (UTC)

5) It's not actually relevant to the resolution of the paradox, but much of the literature seems to focus on the consequence of maintaining Steps 1 and 2 as simultaneously true - namely, that we assume an improper probability distribution (not necessarily uniform, just one with p(x) = p(2x) for all x). To make the argument true, we would have to state this explicitly (or take step 2 as an assertion, rather than a deduction - some philosophers may be doing this). If we do so we then make a nonsense of steps 7 and 8, since we are comparing infinities (and infinities of the same type are always equal in size, see Cardinal number). Dilaudid (talk) 09:16, 27 July 2011 (UTC)

Martin's responses

1) I am happy to concentrate on the unopened version if this is the most well known version.

2) Fine, but it is not clear exactly what 1 is intended to mean.

3) A) Yes, this is what creates the paradox. B) This is the same as A

4) We must first decide exactly what A represents before coming to any conclusions as to what it can and cannot be.

5) This is relevant if your claim in 4 does not stand up to scrutiny.

Martin Hogbin (talk) 11:28, 27 July 2011 (UTC)

Deciding what A represents - can we define A as follows: S and L are positive real numbers, where L = 2S. We define a random variable which takes the value L with probability 1/2 and S with probability 1/2. A is the value of this random variable. My probability notation isn't perfect, but I think I've defined this correctly, and that the random variable needs to be defined separately from its value A. Richard Gill - can you comment?
Probabilists describe TEP as follows. Let X be the smaller amount of money, define Y=2X. You may think of X as being fixed if you like, but you can also think of it as being a random variable. A subjectivist would treat X as a random variable, whose probability distribution represents his personal prior beliefs about the smaller of the two amounts. A frequentist might be imagining repetitions of the game where the smaller amount is determined by some physical randomization procedure. Anyway, X and Y go into the two envelopes, and the player then selects one of the closed envelopes at random. Call the amount in the chosen envelope A, call the amount in the other envelope B. So A equals X or Y with probability half, independently of the actual amounts X, Y.

If this is how we want to mathematically model TEP then I can't agree with Dilaudid that the TEP argument goes wrong at step 2. For me it goes wrong at steps 6, 7 and 8, and depending on how I interpret the intentions of the writer, at least three different error diagnoses are possible. My personal preference goes for the following: the writer wants to calculate E(B|A=a). He does this by splitting up according to the events A smaller or bigger than B. The conditional expectations in the two cases are 2a and a/2 respectively; these should be weighted according to the *conditional* probabilities that A is smaller or larger than B *given* that A=a. But the writer uses the *unconditional* probabilities 1/2, 1/2. In other words: he behaves as if knowing the actual value of A would give no information at all as to whether A is larger than B or vice versa. Intuitively, this seems unreasonable. For instance, if X can't be larger than m, then knowing that A is larger than m tells us definitively that A is the larger of the two amounts of money. A careful mathematical analysis supports this intuition. Hopefully some editors here are able to say these things in such plain language that a layman can understand.
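This diagnosis - that the value of A carries information about whether A is the larger amount - can be illustrated with a small simulation. This is only a sketch under an assumed bounded prior (uniform on {1, 2, 4, 8} for the smaller amount), chosen purely for illustration:

```python
import random
from collections import defaultdict

random.seed(1)

# Bounded prior for the smaller amount X: uniform on {1, 2, 4, 8}; Y = 2X.
# The chosen envelope A gets X or Y with probability 1/2 each.
counts = defaultdict(lambda: [0, 0])  # a -> [times A was the smaller, times A seen]
for _ in range(100_000):
    x = random.choice([1, 2, 4, 8])
    a, b = (x, 2 * x) if random.random() < 0.5 else (2 * x, x)
    counts[a][0] += a < b
    counts[a][1] += 1

for a in sorted(counts):
    smaller, total = counts[a]
    print(a, round(smaller / total, 2))
# A=1 is always the smaller amount, A=16 never is; in-between values like
# A=2, 4, 8 come out near 1/2 for this particular prior.
```

The estimated conditional probability Pr(A < B | A = a) clearly depends on a at the extremes, which is exactly what steps 6 and 7 of the argument deny.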

But anyway, though each of our personal favorite solutions is interesting, Wikipedia has to present the solutions in the literature. It is strange that iNic moves everything I write to the Arguments page when I try to give a well-organized overview of what's in the literature, yet doesn't seem to have any objection to Martin and Dilaudid expounding their personal thinking about TEP without reference to the literature! Richard Gill (talk) 19:37, 27 July 2011 (UTC)

I'll try to read more and pontificate less. Thanks :) Dilaudid (talk) 19:46, 27 July 2011 (UTC)
It is not true that I only move Gill's personal opinions to the Arguments page. The last move I made was the Puzzled section, which wasn't about Gill's opinions. But it was directly reverted and Gill supported the revert! So if I don't move sections with others' opinions that is bad, and if I do, that is bad too, according to Gill. Gill, why don't you move the sections you think are displaying opinions to the Arguments page yourself? iNic (talk)
Sorry, we got off to the wrong foot and consistently annoyed one another. Please let's be collegial. My name is Richard, by the way. Richard Gill (talk) 08:55, 29 July 2011 (UTC)
Also can you confirm that you are happy for the statement "The switching argument must be false, since by symmetry, it refutes itself." to be in the article? It's implied in the current draft, but I think it should be explicit. Dilaudid (talk) 16:37, 27 July 2011 (UTC)
I think that is too cryptic. Actually it is wrong. The situation being symmetrical is actually an argument (but a silly one) for endless swapping. If it were asymmetrical then you would logically swap only once. I do not think we need to explain any more why endless swapping is wrong; it is fairly obvious to most people that this is a daft thing to do. Martin Hogbin (talk) 22:12, 27 July 2011 (UTC)
The situation being symmetric, almost endless swapping is harmless. But gives no advantage. Yet the argument purports to show that each time you switch, you are strictly better off than before. Richard Gill (talk) 08:53, 28 July 2011 (UTC)
My response above was a little light-hearted. My real point was that symmetry by itself was not an argument against endless switching, the problem is that the (wrongly) expected gain does not materialise. But this is quite obvious, nobody needs persuading that endless switching is a bad idea. Martin Hogbin (talk) 12:32, 28 July 2011 (UTC)
I agree with Richard above. As a layman, I did not originally think too hard about what kind of object A was, but I intuitively thought of it as a random variable. I therefore see no problem with step 2. Like Richard, I see the problem being in steps 6, 7, or 8. For a finite distribution the problem is the abbreviated calculation of the expectation; this is made particularly clear by considering the case of just two envelopes.
Richard, I do not think iNic has anything against you; however, I do think iNic has too rigid a view about discussion and reliable sources. No source tells us how to write this article, so some kind of general discussion around the subject is necessary to help us decide how to improve it. We are not writing a literature review; we are trying to write an article for a varied audience that explains the TEP and how it is resolved, supported by reliable sources. Martin Hogbin (talk) 22:12, 27 July 2011 (UTC)
I agree. He does manifest himself as an owner of the article, though! And has strong opinions (e.g. it is philosophy not maths) yet does not support these with reliable sources. But, no problem. I think we are making a lot of progress. I am learning lots from all of you, that's how I like it! Richard Gill (talk) 08:51, 28 July 2011 (UTC)

I haven't substantially edited this article for a very long time and yet you see me as the owner of the article? That is very funny. It would be naive to think we could settle meta-discussions regarding what is and what is not philosophy by looking in any of the TEP papers. I have only tried to explain that we should stick to the most common view here or else a lot of readers will get the wrong impression. This has already happened in the past (when the page was written in the way Gill now wants) and then a lot of readers got seriously worried. Since it was changed back to the current wording no one has been worried anymore. IMHO, it would not be good if that happened again. iNic (talk) 01:50, 29 July 2011 (UTC)

I'm sorry if I (and others) got the wrong impression! Let's please let bygones be bygones. Still I am interested in how you come to think that TEP is a problem of philosophy. Earlier you said that this was because it was a problem of decision theory and Bayesian probability. Sure, TEP is about probabilistic reasoning and decision making. Sure, it raises some questions for the foundations of those fields, but this does not make it primarily belong to philosophy rather than mathematics (or perhaps logic). Philosophers write papers about TEP. Mathematicians write papers about TEP. Logicians write papers about TEP. I'm not aware of an authoritative source who declares that TEP belongs to a particular field, *and* gives an argument to support that claim. People who see TEP as an attempt at probabilistic reasoning show that when you take it as a reasoning inside conventional probability theory, it's a botched reasoning. The writer uses marginal probabilities instead of conditional at a crucial place. The solution is: use standard modern elementary probability calculus so that you do not make this kind of mistake. Eskimos have 20 words for different kinds of snow. Probabilists have several words for several different kinds of probabilities. The TEP argument goes wrong because the writer is not capable of making important distinctions. His language is too poor for the task he sets himself. Richard Gill (talk) 08:52, 29 July 2011 (UTC)

You have written a paper about the (philosophical) interpretation of QM. You are a mathematician. Therefore, the philosophical interpretation of QM is from now on mathematics, not philosophy. Einstein wasn't employed as a physicist when he wrote his famous papers in 1905. This means that those papers don't belong to physics. That is, if we are to follow your definitions. We can of course multiply the examples hundreds of times, both from the history of science and from scientific papers written today. If you want to pursue your view here you need to burn a lot of books and articles that don't comply with your definitions. From where did you get the crazy idea that everything a watchmaker touches becomes a watch? iNic (talk) 06:16, 31 July 2011 (UTC)

I do not have the crazy idea which iNic attributes to me. This discussion started with his idea, forgive me if I misunderstood him, that TEP is a problem belonging to philosophy which is still not resolved. To support this point of view he put forward the paper by Schwitzgebel and Dever. Apart from this, he does not offer any support at all, neither by studying what kind of academics write about TEP in what kind of journals, nor by studying the actual content of their publications. So according to him, when two philosophy post-docs write about a topic, and claim that till now no-one has satisfactorily resolved it, but that they will do so in their paper, then that topic henceforward belongs to philosophy?

Seriously, isn't it time to stop all this aggressive behaviour? What about the basic wikipedia policies of assuming good faith, about civility, and so on? Take a look at WP:CIVIL. Richard Gill (talk) 13:11, 31 July 2011 (UTC)

I have never singled out the paper by S&D as the "evidence" that the problem belongs to philosophy. That is a misunderstanding (I'm sorry if I caused it somehow). On the contrary, I have tried to explain over and over that it doesn't matter if the authors of a particular paper are philosophers, mathematicians, watchmakers or whatever. The only thing that matters is what the paper is about, not who wrote it. (The latter is important in religion, not science.) You keep repeating who wrote the different papers all the time as if that would matter. I'm glad if it is correct that you don't believe that matters anymore, because then we can move on. I don't think I'm uncivil or display lack of good faith when showing, by example, what your ideas would lead to if taken seriously. In mathematics, for example, if someone could show that your definitions lead to crazy results, would you not thank her for that rather than accuse her of being uncivil and not trusting you? iNic (talk) 23:22, 31 July 2011 (UTC)

Thank you! I welcome counterexamples to my thinking. As (I think) Niels Bohr said about EPR: "Now we have a contradiction, now we can make some progress!" I observe, in general, a correlation between the academic field of an author of a paper, the field of the journal in which the paper appears, and the field to which the content of the article belongs. I note that a lot of mathematicians and statisticians write mathematical analyses of TEP in mathematical journals. I agree, this does not prove TEP is not a philosophical problem. I notice that many of the papers by philosophers in philosophy journals on TEP are presenting (amateur) mathematical analyses. But anyway, "what's in a name?". I'm the first to oppose artificial barriers between different areas of science. The usual demarcations are meant to be a help. Discard them when they're a hindrance. Richard Gill (talk) 05:35, 1 August 2011 (UTC)

Singular or plural envelopes

I am a native (British) English speaker who has an amateur interest in English grammar and idiom and I find the question of 'envelope' or 'envelopes' quite hard to call. Standard English for an event involving several objects is to use the singular, as in a 'three car crash' (we would never say a 'three cars crash') or 'the three body problem' or Sherlock Holmes who had his famous 'three pipe problems'. On the other hand, all those cases envisage the possibility of one, two, three, or more objects, with a particular case being singled out. The two envelope(s) problem is not quite like that. We would, for example, refer to the 'Three Gables' adventure of Sherlock Holmes. In other words 'The two envelopes' can be regarded more like the name of the problem, as in the 'The Three Musketeers' or even the 'Three Men in a Boat' stories. It could be argued that 'Two envelopes' should, strictly speaking, be in quotes in that case but I suggest that we overlook that point.

Google shows a 5:1 preference for 'envelopes' and it is not our job to invent or change terminology, we just report things as they are. I do not think you could say that either is actually wrong so, taking into account the extra hassle and complication of moving this article along with redirecting others, and the likely follow-on arguments, I think we are best to keep it where it is.

Some of us here are already co-winners of the most lame argument on WP. I suggest we quit while we are ahead. Martin Hogbin (talk) 08:35, 1 August 2011 (UTC)

To which argument do you refer, Martin? Richard Gill (talk) 08:02, 2 August 2011 (UTC)
We got the award for the MHP argument. I am suggesting we drop (do not start) the envelope/envelopes argument. Martin Hogbin (talk) 19:28, 5 August 2011 (UTC)

How is the article now?

I am still a little puzzled as to what the differences of opinion are here. Do we all like the article as it is now or is there still serious disagreement?

My opinion is that we still need more and clearer explanation for the general reader. The given argument is very persuasive for most people and we need to show simply and clearly where the problem lies. Martin Hogbin (talk) 09:30, 29 July 2011 (UTC)

I feel that the article gets off to a reasonable start now (sections 1 through 7). But I think Section 8 "Extensions to the Problem" on the situation where Envelope A is opened is much too long and clumsy. However the earlier sections are probably written in too technical a style for many readers. This means that I am not the best person to try to rewrite them.

Are we agreed that the most favoured resolution of the original paradox (according to reliable sources) is that the writer is trying to compute the expected value of B given A=a? That he tries to do this by weighing the conditional expectation values, 2a and a/2 respectively, which hold in the two cases A<B and B<A respectively? But that he (implicitly) assumes that the probability weights of these two events do not depend on a? Which can't be the case for any reasonable probability distribution of the amounts of money in the envelopes? Richard Gill (talk) 18:33, 30 July 2011 (UTC)

Does anyone disagree with this? Martin Hogbin (talk) 18:40, 30 July 2011 (UTC)
Hi - I can prioritise my views as follows:
Priority 1 - there is wiki policy Wikipedia:REDFLAG which requires exceptional claims to be backed up by exceptional sources. Section 7 - the claim/question that there is an unsolved problem in/with mathematical economics, is not actually directly made by the Fallis source, and I think we would need a mathematical economics source for such a claim. Please can we remove this section altogether? I can't find anything in the article that suggests this is an unsolved problem in probability theory, so there is no problem on that score.
Priority 2 - much lower - I think the article tries to do too much. As Richard says, Section 8/9 is about a different problem where the envelope is opened. It's a shame as someone has obviously worked hard to summarise the paper, but it also duplicates parts of the Exchange paradox article. I would support its removal but I will go with the consensus.


My third and lowest priority is that I would like a single short, concise explanation and refutation of the paradox - addressing your point Richard, I don't know. I don't really think the prior distribution of A should come into this at all. Can we leave this stuff alone for a week or two while 1) I write up what I think is going on on the arguments page and 2) I test my own views against the literature? The other thing here is that we are completely at the mercy of the literature. If we all agree that step 37 is the cause, the wiki policy is that we should ignore our views and report what the sources say. But if that happens, we can always publish :) Dilaudid (talk) 19:46, 30 July 2011 (UTC)
Rome was not built in a day. Take your time!

My nomination for "a single short, concise explanation and refutation of the paradox" is that in step 6, the writer seems to be assuming that whether Envelope A contains the smaller or larger amount is totally unrelated to the amount actually in that envelope. *Whatever* that amount might be! But knowing the amount in Envelope A would in any real world situation, at least sometimes, give a strong clue to whether it's the smaller or the larger. For instance, if the envelopes contain dollar bills then seeing $1 in Envelope A tells us for sure it's the smaller! On the other hand, seeing an enormous amount is a strong suggestion that the other envelope only has half that amount. Remember that we chose our envelope at random. If the guy who prepares the envelopes is a university professor (i.e. not Bill Gates or Steve Jobs) and if Envelope A happened to contain exactly $100, do we *really* believe that it's equally likely the other envelope has $50 or $200? And similarly for any other amount of money which it is conceivable could be in Envelope A?

In the meantime, I am preparing a publication in which I'll expound my theory of how the Anna Karenina principle and the analogy with the Aliens franchise enables one at last to get a comprehensive resolution of TEP; see my talk page, or the TEP Arguments page, for an outline. It will take account of my now extensive correspondence with several philosophers and logicians, and also include a few "missing" mathematical results.

The literature is huge and confused and diffuse (statistics, probability, logic, philosophy, economics, education; everyone seeing a different point worth making within their own field. Quite a few people making mistakes. Many tying their hands behind their backs through fear - or ignorance - of mathematical abstraction).

Dilaudid, if you don't like the "prior distribution" of X (the smaller of the two amounts) to enter into this at all, don't talk about prior distribution, but talk about information/knowledge/beliefs/understanding about X and 2X. For instance if X and 2X are physical amounts of money actually owned by somebody and you actually have the possibility to get X or 2X from them, we can put a finite upper limit and a positive lower limit to X. Especially if the money is in US dollar bills and is actually in the envelopes. Which we can see but not touch. Anyway, if A is below twice the lower limit of X, or above the upper limit of X, we know for sure that it's the smaller or larger of A and B, respectively. So step 6 is wrong. In general, not talking technically in terms of prior distributions, but talking in ordinary language: intuitively, the larger A, the more likely it's the larger of the two; the smaller, the more likely it's the smaller of the two. Yet step 6 assumes that it makes no difference. That's all. Richard Gill (talk) 07:17, 31 July 2011 (UTC)
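The bounds argument in the paragraph above can be written out directly. A sketch, with hypothetical limits low and high on the smaller amount X (the numbers are mine, for illustration only):

```python
# Suppose we only know bounds: the smaller amount X lies in [low, high],
# so the larger amount 2X lies in [2*low, 2*high].
low, high = 5, 100

def verdict(a):
    """What observing A = a tells us, using only the bounds on X (a sketch)."""
    if a < 2 * low:   # too small to be the doubled amount 2X
        return "A is certainly the smaller"
    if a > high:      # too big to be the smaller amount X
        return "A is certainly the larger"
    return "either is possible"

print(verdict(7))    # 7 < 2*5, so A must be the smaller
print(verdict(150))  # 150 > 100, so A must be the larger
print(verdict(60))   # could be X = 60 (smaller) or 2X = 60 with X = 30 (larger)
```

So at the extremes the "equally likely" assumption of step 6 is certainly wrong, without ever mentioning a prior distribution.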

PS The Exchange paradox article should be merged with Two envelopes problem. The two names are pretty much synonymous. Richard Gill (talk) 07:22, 31 July 2011 (UTC)
I agree that we should merge the exchange paradox into this one. Martin Hogbin (talk) 08:35, 31 July 2011 (UTC)
Can someone please just do the merge (a redirect)? I have tried before but the owner of the page reverted. In addition I think this article should change name from "Two envelopes problem" to "Two-envelope problem." I think that would be more correct English. Can please someone that has English as his or her native language confirm this? iNic (talk) 00:09, 1 August 2011 (UTC)
See section above (put there by mistake) on envelope(s). Martin Hogbin (talk)

Well, the only small problem with having the article single out "a single short, concise explanation and refutation of the paradox" is that there is none, not even a long one. IMHO I think that should be taken into consideration. iNic (talk) 00:00, 1 August 2011 (UTC)

What do you think of my nomination: in step 6, the writer seems to be assuming that whether Envelope A contains the smaller or larger amount is unrelated to the amount actually in that envelope, *whatever* that amount might be! But knowing the amount in Envelope A would in any real world situation, at least sometimes, give a strong clue to whether it's the smaller or the larger. As Schwitzgebel and Dever (almost) say, when A is the smaller of the two it's half as large, on average, as when it's the larger of the two. There *is* a relationship between the amount in A and whether or not it's the larger. (Technical explanation: statistical (in)dependence is a symmetric relationship. Which is smaller affects A. Hence A affects which is smaller). Richard Gill (talk) 05:22, 1 August 2011 (UTC)
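The Schwitzgebel and Dever observation can be checked exactly for a small example. A sketch under an assumed prior (the smaller amount X uniform on {1, 2, 4}; the prior is my choice, not from the paper):

```python
from fractions import Fraction

# Prior for the smaller amount X: uniform on {1, 2, 4}; pairs are (X, 2X).
# Envelope A holds X or 2X with probability 1/2, independently of X.
xs = [1, 2, 4]

# Conditional average of A in the cases where A is the smaller / the larger:
avg_when_smaller = Fraction(sum(xs), len(xs))                # A = X in these cases
avg_when_larger = Fraction(sum(2 * x for x in xs), len(xs))  # A = 2X in these cases

print(avg_when_smaller, avg_when_larger)   # 7/3 14/3
print(avg_when_smaller / avg_when_larger)  # 1/2: half as large on average
```

The ratio is exactly 1/2 for any prior, since the "A is larger" cases are just the "A is smaller" cases doubled; so A and the event "A is smaller" cannot be independent.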

I just added a little essay on this topic on the Arguments page. Richard Gill (talk) 06:52, 1 August 2011 (UTC)

Our job as editors is not to single out the best or nicest explanation we can agree upon and promote that on Wikipedia. This is in blatant violation of NPOV. Instead the article should merely, in a neutral way, display the different proposed solutions to explain (or explain away) the paradox. That's it! The discussions among editors here about which solution is the "true" solution is just a waste of time and space. iNic (talk) 14:05, 9 August 2011 (UTC)

One reason these puzzles cause so much argument

Before this section is moved to 'arguments', let me say that I hope it will shed some light here on how to improve the article.

The reason that articles on mathematical puzzles cause argument is, in my opinion, that the exact question is never properly defined, thus there is no correct answer.

In a real world problem there should be a correct answer, even though this is not always the one arrived at. If a consultant were actually approached by a client with a question along the lines of, 'I have these two envelopes and have chosen one...', the first action of any good consultant would be to ask a barrage of questions: 'Where did these envelopes come from?', 'Have you any idea how much might be in them?', 'What are your personal objectives?', and many more. Out of these answers a clear problem would emerge, to which there should be a correct answer.

The musings of mathematicians and philosophers, who usually tacitly answer some of the background questions according to their own whim, on these problems are only an interesting distraction from the essential puzzle, which was probably intended to be either a simple scenario or intentionally ambiguous.

First and foremost therefore, the article should concentrate on the simplest formulation of the problem and its solution or, if there is what appears to be a deliberate ambiguity in the question, the article should first point out that ambiguity and give relevant solutions.

Of course, nothing prevents us from following this up with a scholarly discussion of the academic discussions on the subject.

This is one of the conclusions of the paper by Albers, Kooi and Schaafsma: "We are interested in the two-envelope problem, because there are some consequences for Statistical Science at large. It often happens that the statistician is asked to use data in order to compute some posterior probability, to make a distributional inference, or to suggest an optimal decision. Some, perhaps many, of these situations are such that the lack of relevant information is so large that it is wise not to try to settle the issue. This leaves us with the problem to draw a distinction between those situations where the information is too weak to say something and those where the information is sufficiently overwhelming. The difficulty is, of course, in the area between."

I am busy preparing a scholarly overview of the academic literature on the Two Envelopes Problem, which will fill in a few small gaps, correct some mistakes, and offer some kind of synthesis. I hope it will be useful to editors here.

In my opinion the simplest resolution of TEP is to notice that at step 6 the author is assuming that the probability that Envelope A contains the lowest amount of money does not depend on how much money is actually in it, whatever that might be. Whereas intuitively, the more is in Envelope A, the more likely Envelope A contains the larger amount of money. In particular, if the money consists of real dollar bills then there is a lower and an upper limit to the amounts in the envelopes, and at these limits, the "equally likely" assumption is completely wrong. Richard Gill (talk) 14:41, 6 August 2011 (UTC)

I agree except that I would say the error was in step 7. Step 6 says, 'Thus the other envelope contains 2A with probability 1/2 and A/2 with probability 1/2'. If A (as some intuitive random variable and the probability not being conditional) is the sum in the envelope you hold then it is equally likely that the other envelope will hold A/2 as 2A. The problem is when you try to calculate the expectation without realising that the conditional probability is not independent of the value of A. It is the abbreviated expectation calculation in step 7 that fails. Martin Hogbin (talk) 15:02, 6 August 2011 (UTC)
You're right. If you read Step 6 as talking about the unconditional probability then Step 7 is wrong, since it assumes equality of conditional and unconditional probability, for all possible values of A.

This underlies my point that it is not possible to say *exactly* where the argument goes wrong, without assuming a formal context, or at least, without trying to guess the intention of the writer. What he writes may well all be correct. What is wrong is the stuff he should be saying between each of the steps. Richard Gill (talk) 20:51, 6 August 2011 (UTC)

I think the answer to this is to try and see the problem the way a non-expert reader would. I think most people see A as some form of intuitive random variable, and they see the probability that the other envelope contains A/2 or 2A as being unconditional, or rather they do not see the reason or necessity for a condition to be attached to the probability. On that basis the mistake is that, because the statement that the other envelope contains A/2 or 2A with equal probability is not true for every possible value of A, the simple calculation of the expectation is invalid. This is more or less what I say in my introduction but I am happy to work with anyone interested to make it as clear and technically accurate as possible.
The thing I would add is that I do not think that this answer can be made any simpler. The attempts to do that that I have seen are as vague as the problem itself and are therefore invalid.
Do the others here agree that this is the simplest resolution of the paradox? Martin Hogbin (talk) 21:59, 6 August 2011 (UTC)
In order to explain what goes wrong one needs a carefully chosen language, one which acknowledges the important distinctions that need to be made. The most crucial distinction here is that between a random variable, and a particular value which that random variable can take. In our case, we have to distinguish between the random variables X, Y, A, B (smaller and larger amounts of money; and amounts in first and second envelope) and possible values which these variables could take x, y, a, b. Making this distinction allows us to talk about E(B|A=a). How to compute this? We must weigh the possible values of B according to the probabilities that B takes those values, under the condition that A=a. The values are easy: if it is given that A=a, then we know B=2a or B=a/2. The probability weights which we need are Pr(B=2a|A=a) and Pr(B=a/2|A=a), or equivalently Pr(B>A|A=a) and Pr(B<A|A=a). Yet in steps 6 and 7 the writer uses the unconditional probabilities Pr(B>A) and Pr(B<A). The point is that the value of A does in general give us information as to whether Envelope A contains the smaller or the larger amount of the two. As some simple examples immediately make clear. More need not be said. On the other hand, it is not possible to give this simple explanation without recourse to the notion of conditional and unconditional probability. One can avoid the mathematical formulas by writing out expressions like Pr(B>A|A=a) in words; that might make the explanation a little less scary for some readers. But honestly, it seems to me that TEP is a puzzle which is *only* of academic (specialist) interest. It is *not* a popular brain teaser. People do not tell one another the TEP problem in pubs, unlike MHP. So the article on TEP in wikipedia has to concentrate on the academic aspects of the problem.
After a brief introduction saying essentially what I have just written, most amateur readers won't be interested to read any more at all, and that is how it should be. On the other hand the professional readers (students of philosophy, logic, probability theory, or decision theory) must be assumed capable of dealing with some mathematical formalism and with the notions of expectation value and conditional expectation value. TEP is about these notions. If you are not familiar with them you have got quite some catching up to do, before TEP is of any interest to you. Richard Gill (talk) 13:40, 7 August 2011 (UTC)
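As an aside for editors checking the argument above: the claim that the value a in general carries information about whether Envelope A holds the smaller or larger amount is easy to verify numerically. The following is a minimal simulation sketch of my own (not from any cited source), using the toy prior in which the smaller amount is $1 or $2 with equal probability:

```python
import random
from collections import defaultdict

random.seed(0)
counts = defaultdict(lambda: [0, 0])  # a -> [number of times B > A, total]

for _ in range(100_000):
    x = random.choice([1, 2])   # smaller amount X; toy prior, uniform on {1, 2}
    pair = [x, 2 * x]           # the two envelopes hold X and 2X
    random.shuffle(pair)        # envelope A is equally likely to hold either
    a, b = pair
    counts[a][0] += b > a
    counts[a][1] += 1

for a in sorted(counts):
    n_gt, n = counts[a]
    print(f"Pr(B > A | A = {a}) is approximately {n_gt / n:.3f}")
```

With this prior, Pr(B > A | A = 1) = 1 and Pr(B > A | A = 4) = 0: the conditional probabilities needed in steps 6 and 7 are not 1/2 for every a, which is exactly the point made above.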
I do not disagree with any of that. I am just checking that what I have already written in the 'Introduction to resolutions of the paradox' section, which I think is pretty much what you said, is agreed in principle by all. There was some talk of an even simpler solution but I think the one we have is the simplest possible. Martin Hogbin (talk) 14:35, 7 August 2011 (UTC)
I rearranged the last sentences of the 'intro to resolutions' section (the bit about mathematicians and distributions without upper limit). I think the remark about the necessity of an improper distribution to make everything in the argument (up to step 8) go through exactly needs to be said first.

How to apply this principle here

My initial thoughts on this subject are that a simple puzzle is unlikely to have been intended to be about an infinite distribution. We should therefore start by giving a clear resolution of the paradox for a finite distribution of envelopes. This is, I think, not contentious and, working together, we should be able to come up with something clear and simple. Martin Hogbin (talk) 09:40, 6 August 2011 (UTC)

This also answers the question above, 'Is TEP a problem in Philosophy or Probability or Logic'. It is none of these, it is a simple mathematical puzzle. Of course mathematicians, logicians, and philosophers have chosen to study it, each in their own way. Martin Hogbin (talk) 11:08, 6 August 2011 (UTC)

The Two Envelope Problem was put into the public domain by the famous Martin Gardner (a mathematician, by the way). We should find out what he saw as the solution to the puzzle which he gave the world. Richard Gill (talk) 13:46, 7 August 2011 (UTC)
Amusingly it turns out that this was a problem which actually defeated the great Martin Gardner! (But then, he wasn't a probabilist.) However, I have found an excellent layman's discussion of TEP in the book "Probabilities: The Little Numbers That Rule Our Lives" by Peter Olofsson (Wiley, 2007). I've put pages 130-132 in my drop box. Wikipedia editors shouldn't be working from the primary sources (the research articles) but from the secondary and tertiary sources: the reviews, the text books, the popularizing literature. Well, here at last is a decent one. Of course Peter Olofsson's solution is exactly what we have just been discussing.

The article by Nalebuff (1989) is also an excellent, let us say, secondary source. He discusses a number of solutions with great clarity. It's in the dropbox too, now. Richard Gill (talk) 14:38, 7 August 2011 (UTC)

Thanks for that. I think what I have written agrees pretty well with Nalebuff so I will add it as a reference. I cannot see the Olofsson pages. Martin Hogbin (talk) 15:40, 7 August 2011 (UTC)
Olofsson should be there. But anyway, I copied the pages from Google Books and from Amazon. Search for Gardner two wallets. Richard Gill (talk) 06:07, 8 August 2011 (UTC)
Unfortunately I haven't yet been able to see Martin Gardner's (1982) Aha! Gotcha, which allegedly discusses TEP. However, in his (1989) Penrose Tiles to Trapdoor Ciphers and the Return of Dr Matrix he briefly mentions our present standard TEP (with the computation leading to E(B|A)=5A/4), saying that he hasn't seen it in print yet though recently people have been talking about it. 1989 is the year that Nalebuff's article was published, which refers to correspondence which Nalebuff had with a large number of people, turning up just about all of the "known" resolutions, as well as containing the usual mistakes, e.g. that the amount of money should be uniformly distributed ... it's the logarithm which has to be uniformly distributed to get the equal probabilities that the other envelope is larger or smaller, given what's in yours, whatever is in it.

Gardner (1989) only states the problem (to say what is wrong with the reasoning leading to E(B|A)=5A/4) but does not give any clue to an answer himself. His 1989 discussion does make it clear that his 1982 book discussed essentially the earlier Maurice Kraitchik (Mathematical Recreations, Dover, 1953, pages 133-134) two neckties problem. Gardner (1989) does say that in his opinion, Laurence McGilvery (1987), 'Speaking of Paradoxes . . .' or Are We?, Journal of Recreational Mathematics, 19, 1987, pp. 15-19, gave an adequate solution of the Kraitchik problem; Gardner (1982), as quoted by Nalebuff and others, was still perplexed by it. Actually Kraitchik does give a resolution to his paradox, but Gardner did not find it convincing. I think that the problem is that neither Gardner nor Kraitchik seems able to use modern probability terminology, i.e., to make the distinctions which need to be made in order to figure out what is going wrong. Richard Gill (talk) 10:15, 9 August 2011 (UTC)

I have written a paper entitled Anna Karenina and The Two Envelopes Problem, [1]. It expounds my explanation (the Anna Karenina principle) of why there is so much argument about TEP and contains some modest new results. Comments are welcome. It is not finished yet; when it is I'll submit it to a suitable journal. Richard Gill (talk) 15:06, 9 August 2011 (UTC)

And placed djvu files of "Aha! Gotcha" and "Mathematical Recreations" in the TEP drop-box folder. And a pdf of Littlewood's 1953 book. I would buy Kindle (or other eBook format) versions of these books if they existed. However, as long as publishers are so slow in embracing the digital revolution, I had to resort to other means to obtain these books. Richard Gill (talk) 12:02, 10 August 2011 (UTC)

Is TEP a problem in Philosophy or Probability or Logic (continued)

The opening line of the article currently says "The two envelope problem, also known as the exchange paradox, is a puzzle or paradox in philosophy, especially within decision theory and the Bayesian interpretation of probability theory". iNic says that he wants this formulation because philosophers still consider TEP a problem, while mathematicians do not. He believes, I think, that saying it is a paradox in mathematics would imply that mathematics is in danger, but saying that it is a paradox in philosophy is harmless, because no one is worried that philosophy is in some kind of crisis.

The philosophers' job is to create problems, even if no-one saw a problem before. Mathematicians' job is to solve them. Mathematicians claim that TEP is an example of muddled thinking which is resolved by converting the steps of the argument into formal probability calculus. Then one easily sees one or more places where it goes wrong. Some philosophers think that intuitive probabilistic reasoning can be done without formalizing it as mathematics. Mathematicians formalized it in order to abolish the TEP kind of "paradox" (muddle).

I think that it is better to say that TEP is a puzzle or paradox studied in probability theory, decision theory, logic and philosophy. The consensus among probabilists is that it is "old news", i.e., it is a puzzle which can be resolved by a little bit of careful thought. Which is not to say that inventive people can't come up with new insights, new twists to the story, new versions.

I am not sure if there is a consensus among philosophers, or not. Smullyan's version of the paradox "without probability" seems to be a nice test-case for people in logic with a pet theory how to formalize counterfactual reasoning, so it is still "alive". Other than that, it does not seem to me that philosophers in general find TEP very interesting.

You seem to hold the view that TEP is a different problem depending on who is thinking about it, and furthermore that in some communities it is a solved problem while in others it is not. Can you please give some other example, current or from the history of ideas, where this odd situation has appeared? I can't think of any. In fact I think the whole idea is crazy. This would mean that a probability problem or puzzle is intrinsically subjectivistic, that is, that it doesn't have any objective features at all. The same puzzle would be both solved and unsolved at the same time depending on the sociological environment within which it is placed. If this were true it would be an amazing discovery in itself and a strong argument in favor of a postmodernist world view. iNic (talk) 14:37, 9 August 2011 (UTC)

Conclusion: I don't think we should say that TEP "belongs to" any particular field. All we can say from studying the sources is that academics from various fields find TEP interesting, and that different fields seem to have different consensuses (or none) about whether it is resolved or not. And the main fields in question are probability theory, decision theory, logic and philosophy. Richard Gill (talk) 13:01, 2 August 2011 (UTC)

I agree that 'studied in probability theory, decision theory, logic and philosophy' is better, but let us not lose sight of the fact that this started as a simple puzzle, intentionally devised to confuse people, in which a series of seemingly logical steps leads to an absurd conclusion. Martin Hogbin (talk) 19:24, 5 August 2011 (UTC)

This is not studied in probability theory proper, as defined by the Kolmogorov axioms. To claim that is not correct. However, it is studied in the Bayesian interpretation of probability theory, which is something else. So if the word 'Bayesian' is put in front of 'probability theory' the list becomes correct. But as Bayesian probability theory, decision theory and logic are all part of the broader subject of philosophy, it's a little strange to put 'philosophy' at the end of the list. You seldom see lists of the type "cars, buses, bicycles and vehicles", or do you? It is better to put the most abstract word first, like this: "This road is made for different vehicles, in particular cars, buses and bicycles." iNic (talk) 14:37, 9 August 2011 (UTC)

Huh? Bayesian probability is generally taken to satisfy the Kolmogorov axioms. Those axioms are abstract, structural properties of "probabilities", "expectations", "conditional..." which apply perfectly well, with complete neutrality, both to Bayesian and to frequentist probability. And since when are decision theory and subjective probability part of philosophy, not of mathematics? Please cite your sources, iNic. In TEP, there are hardly any issues concerning the *foundations* of ... Which would anyway be an area between mathematics and philosophy, not strictly in either. Richard Gill (talk) 16:53, 12 August 2011 (UTC)
When there is more than one interpretation of a theory, and the interpretations to some extent compete with each other, we are dealing with philosophy. What is considered philosophy today can be science tomorrow. The atomic theory of matter, for example, was part of philosophy for a very long time (over 2,000 years) until experiments at the end of the nineteenth century showed that this interpretation was indeed the correct philosophy. The very strong competing philosophies of matter died, and the atomic theory of matter went from being part of philosophy to part of science. The competing faulty philosophies went at the same time from being part of philosophy to part of non-science.

In the case of probability theory, Bayesianism and frequency theories are just labels on two broader classes of philosophical interpretations of probability, and there are other interpretations outside both of these groups as well. So we have many competing interpretations here, resembling the situation we have for QM. As long as no one can provide an experiment, thought experiment or something else that conclusively shows which one of the interpretations of probability is correct, we have to live with the fact that we can't tell which is the correct one. In this situation you are in a sense free to pick the interpretation you like the most. But as soon as one of the interpretations is proved correct and thus becomes part of science, that freedom disappears. If you want to be scientific you suddenly have to embrace the interpretation that has scientific support and abandon the other ones that haven't, even if the one that turned out to be correct was the one you disliked the most. For example, you are no longer free to believe that matter is a continuum even if you think that the atomic theory of matter is a very ugly theory. I hope this short explanation has made it clearer to you what philosophy is. iNic (talk)

A simple puzzle deliberately engineered to confuse people. Yes. I added the word "brainteaser" and emphasized the sources in books of Recreational Mathematics. TEP, basic form, basic solutions, is supposed to provide some mental stimulation and distraction to while away a dreary Sunday afternoon for those who prefer a mathematical puzzle to watching a football match on TV. Richard Gill (talk) 16:58, 12 August 2011 (UTC)

Redirect also from Necktie paradox?

I would suggest that the Necktie paradox should also redirect to Two envelopes problem. The historical development was Necktie (Kraitchik) -> Wallets (Gardner) -> Envelopes (Nalebuff). Along the way, the more universal label Exchange paradox came into use. There is also an independent line of descent from a paradox apparently proposed by Erwin Schrödinger, published in a 1953 book on mathematical recreations by the great mathematician Littlewood, which concerned numbers directly, not neckties or amounts of money. Note that Kraitchik published the problem in another 1953 book about recreational mathematics, and Gardner in his Scientific American column on recreational mathematics. Schrödinger is a famous physicist. I think that these origins are good reasons *not* to consider TEP as a topic in philosophy. It's an (un?)popular puzzle in recreational mathematics. I say "unpopular" since most serious writers find it annoying. So they tend not to do it justice, and to make mistakes; they just want to clear it out of the way as quickly as possible. Richard Gill (talk) 11:40, 10 August 2011 (UTC).

Various writers in the Journal of Recreational Mathematics have satisfactorily explained the paradox, according to mathematicians in later articles, and to the mathematical economist Nalebuff, and the mathematician Gardner. Unfortunately the relevant volumes of this journal are not yet online. The philosophers don't understand such explanations because the mathematicians' explanations use notation and language and concepts from probability theory. For the philosophers, this is just formalism, not ideas. Hence they come up with their own explanations which, at best, consist of writing the mathematicians' explanation out in words. That costs a lot of words and involves subtle distinctions, well understood by probabilists but unfamiliar to philosophers. Remember, to the "insider" a probability formula is a picture, and a picture is worth a thousand words. To the philosopher the formula is just abracadabra. A very sad state of affairs. Richard Gill (talk) 12:16, 12 August 2011 (UTC)

I think the two 1953's (Kraitchik, Littlewood) and the two 1989's (Gardner, Nalebuff) are not coincidences. People were talking about the problem, then and then. All of this in the context of Mathematical Recreation (not philosophy). Richard Gill (talk) 17:08, 12 August 2011 (UTC)

We voted for this once before on its talk page and the consensus then was to keep it as a separate article. But you can raise the question again if you want. Do it on the Necktie paradox talk page. iNic (talk) 20:29, 10 August 2011 (UTC)
Thanks. We can also just keep it on our wish-list for the time being. We would have to add some material on the original Kraitchik problem here anyway. I am studying it at the moment. Richard Gill (talk) 06:44, 11 August 2011 (UTC)

I just added to the Argument page here a complete analysis of the necktie paradox, which shows how the same mistake is being made in a very similar chain of reasoning: the expected shop-price of your necktie, given that it is the larger of the two, is higher than given that it is the smaller, if we are in a situation of symmetry regarding subjective beliefs about both prices, and expectation values are finite (which is true if we have 100% certainty that actual values are below some limit). Neither Kraitchik nor Gardner ever explained "what goes wrong". (Kraitchik's style is to leave that to the reader. Gardner 1982 admitted to being perplexed, but Gardner 1989 referred to a paper in the Journal of Recreational Mathematics which had appeared in the meantime, which did explain the problem.) But it is not difficult. Richard Gill (talk) 16:49, 11 August 2011 (UTC)
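For editors who want to check the necktie claim numerically, here is a minimal sketch of my own (the uniform belief about prices is just an assumption chosen for illustration): with symmetric, independent beliefs about the two prices, the expected price of your tie given that it is the larger one exceeds its expected price given that it is the smaller one.

```python
import random

random.seed(1)
larger, smaller = [], []  # prices of *your* tie, split by which tie is dearer

for _ in range(100_000):
    x = random.uniform(0, 100)  # your tie's price (illustrative uniform belief)
    y = random.uniform(0, 100)  # the other guy's tie's price, independent
    if x > y:
        larger.append(x)
    elif x < y:
        smaller.append(x)

print(sum(larger) / len(larger))    # approximately 66.7 = E(x | x > y)
print(sum(smaller) / len(smaller))  # approximately 33.3 = E(x | x < y)
```

So "what you stand to lose if you lose" is evaluated in a different conditional situation from "what you stand to win if you win", which is the mistake the analysis above points to.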

Essay on Probability Notation

I wrote, at the suggestion of a fellow editor, during the Monty Hall Problem wars, a little wikipedia essay on notation in probability theory: [2]. This could be useful for Two Envelope Problem editors, too. Especially philosophers and other non-mathematicians.

The page has its own talk page, comments are welcomed. Richard Gill (talk) 11:15, 11 August 2011 (UTC)

Quantum Two Envelopes Problem

See my talk page for a candidate for the quantum two envelopes problem, Q-TEP. Richard Gill (talk) 10:31, 16 August 2011 (UTC)

See Richard Gill's (talk page) for a candidate for the quantum two envelopes problem, Q-TEP. Gerhardvalentin (talk) 18:02, 16 August 2011 (UTC)

Please do not use Wikipedia pages or talk pages for self-promotion. If this talk page entry is meant to be interpreted as a suggestion to add a new section to the article or something like that, please state that explicitly. Otherwise please remove this new section from the talk page. iNic (talk) 17:08, 16 August 2011 (UTC)

No suspicion of self-promotion, okay? Gerhardvalentin (talk) 18:02, 16 August 2011 (UTC)

iNic, your comments could be interpreted as implying a lack of Good Faith in the motives of your fellow editors. Such an attitude is not conducive to collaborative editing.
It is partly meant as a joke. Remember, TEP is a joke. Designed by professional mathematicians in order to tease amateurs. Kraitchik, Littlewood, Schrödinger, Nalebuff and Gardner are roaring with laughter about these discussions.
The serious part of my joke is to draw attention to the surprising fact that Q-TEP does not yet seem to exist. I offer a candidate for those people seriously interested in this phenomenon, though personally, I do think the similarity of classical TEP with "my" candidate Q-TEP is superficial. Editors working on the TEP page might come up with better (pre-existing) candidates. So a) I wish to amuse, and b) I am seriously contributing to collaborative editing, pointing out a possible "blind spot" in our coverage so far. For instance, the editors on Monty Hall Problem had a collective blind spot to the optimization (decision theory, game theory) literature. Q-MHP does exist, incidentally. Richard Gill (talk) 09:20, 17 August 2011 (UTC)
I love serious jokes. Seriously! So you do indeed suggest that we should add a new section to the article where we talk about Q-TEP? Who else has written about it? iNic (talk) 11:13, 17 August 2011 (UTC)
I suggest we try to find out if anyone has studied a Q-TEP. AFAIK, no-one. But I could be wrong. As long as nothing exists, there is nothing to write about. Except that surprisingly no-one seems to have invented one yet. I am asking my friends in Quantum Information Theory. Richard Gill (talk) 12:28, 17 August 2011 (UTC)

Is TEP philosophy or mathematics?

iNic, will you please provide me with sources which explain why TEP is a philosophical problem. It seems to me TEP is a problem about logic and logic lies at the basis of both philosophy and mathematics.

If TEP would be a problem in logic we would be able to derive the paradox within some specified logical system. But we can't. Same goes for mathematics and probability theory. The collection of all sources in the world support this view because the derivation that would prove this wrong is nowhere to be found, in any source. iNic (talk) 00:12, 14 July 2011 (UTC)
I would say the following: because TEP is a problem in logic, we can see within a decent formal system of logic that the argument is false, i.e., cannot be converted into a formal argument. This is exactly what Byeong-Uk Yi does with regard to Smullyan's version. Yi uses a sensible and uncontroversial formal logic of counterfactual reasoning and he shows that both of Smullyan's conclusions are *untrue* statements in that system (I am not sure if he shows they are untrue because they are unprovable, or because they are meaningless, or because their negations are true; these things are subtly different). He shows where any attempt to translate Smullyan's verbal reasoning into a formal reasoning (within Yi's formal system) fails. But I would say that this is a very formal way of saying that the argument is using the same words to denote different things. "The amount you would win if you would win" gets two different meanings, and similarly for what you would lose if you would lose; hence you get two apparently contradictory statements.
The original (standard) TEP is understood by translating the argument into modern probability theory, which is a formal system built on standard formal logic, and showing that one of the steps fails and indeed must fail, since step 6 actually contradicts the other assumptions.
Then the problem where we don't use step 6 but still get the apparent contradiction of E(B|A=a)>a for all a, with appropriate distributions of X, is resolved by noting that such distributions must have E(X)=infinity; hence the expectation value is not very relevant to what you will see when you only live a finite length of time. Which is one of the reasons why formal decision theory usually insists on bounded utility.
Then we can continue (Syverson) and show that we still can get *approximately* the paradox with everything bounded. However inspection of his argument shows that his probability distributions are so long tailed that expectations are again totally unrepresentative of average values of finitely many repetitions. So actually we did not get further.
Or we can go back a la Smullyan and remove the probability from the paradox. Now we are back with using the same words for two different things and getting a contradiction. And anyway, the probability ingredient (that your envelope A is equally likely the smaller X or the larger Y=2X independently of X) is an essential part of the paradox. Without that ingredient, there is no reason to switch or not to switch (no rational decision making without probabilities or utilities). It is that ingredient precisely which tells us that the player may switch envelopes as many times as he likes, it makes no difference. Without that ingredient there is no way to evaluate strategies. Richard Gill (talk) 09:02, 14 July 2011 (UTC)
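To make the step-6-free variant above concrete: a standard example of such a distribution takes Pr(X = 2^n) = 2^n / 3^(n+1) for n = 0, 1, 2, ..., which sums to 1 but has E(X) = infinity. The exact computation below is my own illustrative sketch; it shows E(B | A = a) = 11a/10 > a for every a = 2^k with k >= 1, even though no step of it violates probability theory.

```python
from fractions import Fraction

def p_x(n):
    """Pr(X = 2**n) under the long-tailed example prior (illustrative)."""
    return Fraction(2**n, 3**(n + 1))

# E(B | A = 2**k) for k >= 1: the event A = 2**k can arise with X = 2**k
# (A holds the smaller amount) or X = 2**(k-1) (A holds the larger amount);
# each assignment of X, 2X to the envelopes has probability 1/2.
for k in range(1, 5):
    a = 2**k
    w_small = p_x(k)       # weight for "A is the smaller amount", so B = 2a
    w_large = p_x(k - 1)   # weight for "A is the larger amount",  so B = a/2
    e_b = (w_small * 2 * a + w_large * Fraction(a, 2)) / (w_small + w_large)
    print(f"E(B | A = {a}) = {e_b} = 11/10 * {a}")
```

Switching looks favourable at every value of a, yet the situation is symmetric; the catch, as noted above, is that E(X) is infinite, so these conditional expectations say little about averages over finitely many repetitions.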
OK, so you call TEP a problem in logic just because someone has proven that TEP cannot be formulated within a specified logical system? Let's say that someone has proved that a specific board game is not a valid game of Chess; would that proof turn the board game into a Chess game? If you answer yes to the first question you have to answer yes to the second too. iNic (talk) 00:56, 16 July 2011 (UTC)
(1) I do not call TEP a problem in logic just because ...

(2) It is not the case that answering yes to your first question would imply I should answer yes to your second question.

Regarding (1), I do not call TEP a problem in logic just because someone has shown that the paradox is resolved by looking at TEP as translated into a specified logical system. Philosophers call TEP a problem in logic. Philosophers agree that TEP (without probability) belongs to counterfactual reasoning. There exist well understood and widely accepted formal systems of counterfactual logic. Yi translates TEP into these systems and shows where the deduction fails. What does this prove? It proves that there exist philosophers (after all, Yi is a faculty member at a philosophy department) who consider that TEP can be usefully studied by using the methodology of logic. People who study games could show that a certain game is not a game of chess. This does not prove that the game in question is therefore a game of chess. It does prove that the "game" is studied by people who study games, which certainly supports the notion that the game in question is indeed a game. It also supports the notion that it is a different game from chess.

Schwitzgebel and Dever are philosophers. They resolve TEP (they claim) by proving a little theorem in probability theory, whose assumptions are not satisfied by TEP, but which is such that if those assumptions had been satisfied, then the conclusion of TEP (switch, whatever) would follow. (1) What does this prove? (2) Does this paper make TEP a problem primarily *in* philosophy, logic or mathematics? My answers are: (1) it proves that the Schwitzgebel and Dever paper is not very interesting, and in particular that their claim is false that they have at last succeeded in showing where TEP goes wrong where all before them had failed; (2) since this particular paper is not one of the most significant papers in TEP studies, it doesn't help answer question (2). But a glance at the literature shows that logicians, philosophers and mathematicians all write interesting papers on TEP and all have different and interesting insights which can be brought to bear on it. Richard Gill (talk) 17:20, 17 July 2011 (UTC)
Well, the only reason why this classification thing matters at all here is that if the Wikipedia article (erroneously) claims that the paradox is in logic or in mathematics or in probability theory, a lot of first-time readers of the article will get worried. They will rightfully think that this page claims that this paradox shows that logic/math/probability theory is inconsistent, i.e., contains internal contradictions. This has already happened in the past, when for a while this page claimed that TEP is a problem within probability theory. Some readers got worried and wondered why their teachers hadn't told them about this major problem with probability theory. I calmed them down and explained that this is a problem within some branches of philosophy only and that the wording in the lead was unfortunate. Since then I have struggled to keep at least the lead in good shape. iNic (talk) 14:03, 18 July 2011 (UTC)
If your world view is that something is mathematics when and only when mathematicians by profession write about it, and likewise for philosophy, logic and probability theory, that is fine with me. I won't try to make you change your world view. But I would appreciate if you realize that this view is not the more common view. In general people don't think that only a watchmaker can fix a watch, or that everything a watchmaker touches becomes a watch. Neither is it the common view that for example the special theory of relativity is a patent or that Gauss only made astronomical discoveries after 1807. What Einstein or Gauss did for a living is one thing, what they did for science is another. This is true in many cases still today and even more so in the past. I do understand now, however, why you stress your own profession so much here at Wikipedia and why you think that everything you say or write is by definition mathematics. iNic (talk) 14:03, 18 July 2011 (UTC)
iNic, neither my world-view nor your world-view is of any relevance. For Wikipedia what counts is what reliable sources say. Philosophers, logicians, and mathematicians all write about TEP, and all these different kinds of people publish articles in the journals of their own field claiming they have resolved the paradox, or added an important new twist, or shed new light, or whatever.
Schwitzgebel and Dever write in a philosophy journal and claim to be the first to have explained what is wrong with the original TEP argument. They also write a "simple solution" on their own website. What they "do" in their paper and website is probability theory. They attempt to say to philosophers some things which probabilists already know and indeed already have written. Their audience is hampered, and they are hampered, by not being familiar with the standard language of probability theory. Actually, their simple solution is quite good, in my opinion. That is to say they succeed in expressing in plain words what a probabilist more easily expresses in a couple of formulas and mention of a couple of standard concepts. But it comes to the same thing.
My profession happens to be mathematics and this profession colours how I write and how I think. But so what. On wikipedia we want to write stuff which people can understand who are not from logic, or from philosophy, or from mathematics. That's a big challenge. Richard Gill (talk) 16:04, 18 July 2011 (UTC)
Exactly, what counts is what the reliable sources say, not who wrote the reliable sources, or to whom the authors of the reliable sources wrote, or what profession the authors of the reliable sources have, or what salary the authors of the reliable sources have, or what color of the skin the authors of the reliable sources have, or what age the authors of the reliable sources have, or what religion the authors of the reliable sources have, and so on. In fact, to judge what people have to say based on prejudices of who they are is unscientific, unwikipedian and in my opinion uncivilized. iNic (talk) 00:51, 20 July 2011 (UTC)
Extraordinary! I didn't talk of age, religion or skin colour of authors or of their readers; I talked only of the academic field of academic writers and the academic field of the journals they write in, for readers from the same field. I would call that "essential context". I would say that you can't write a good encyclopedia article based on reliable sources concerning your subject without understanding what your sources write about your subject. And that to sufficiently understand these writings, you need to understand the context in which your sources write. But you think you can write good Wikipedia articles without any understanding at all of your topic! You just cut and paste, at will. That's an extremely post-modern opinion, I think. No wonder so many editors coming to the TEP page have complained about your behaviour. At the same time you dogmatically claim TEP is a philosophical problem and S&D are the first authors ever to address the essential problem. What source supports this POV, apart from S&D themselves? Or is that as irrelevant as the question whether you're a philosopher or a mathematician? I don't ask who you are, I don't ask for your race or religion. I am intrigued to know your own expertise relative to the topic under discussion, since you don't give reliable sources to back up your opinions. Richard Gill (talk) 19:56, 26 July 2011 (UTC)
I have read all published sources to date about TEP, and most unpublished sources. So far I have understood all ideas presented. But I don't understand from where you got the idea that I don't have "any understanding at all" of this topic. Please explain. There is no WP rule that says that only academics are allowed to edit or contribute to Wikipedia. On the contrary, academics should be aware that they won't be treated in any special way at Wikipedia. Your impression that I'm holding S&D very high is a misunderstanding. I mentioned their paper at 11:45, 10 July 2011 only as an answer to the question "Do you have high quality sources to support this resolution?", not as supporting the view that TEP is a philosophical problem. My arguments for the latter I have explained separately. iNic (talk) 22:07, 1 September 2011 (UTC)

Please will you also tell me if you are a philosopher or a logician yourself. I'm interested to understand why you think that Schwitzgebel and Dever is such a brilliant philosophy paper (written by two philosophy PhD students about a problem outside of their specialist fields), struggling with some mathematics which is quite beyond their capacities.

I never said that the S&D is "a brilliant philosophy paper." I merely said that it is a philosophy paper written by authors that properly understand the question TEP poses. If you are the only one climbing the correct mountain, it doesn't really matter if you climb it in the technically correct way. At least until other mountaineers start to climb the correct mountain as well. (Who I am is of no importance.) iNic (talk) 00:12, 14 July 2011 (UTC)
I think the two authors do not properly understand the question. And their claim that they are the first to answer it correctly is clearly false: their answer is wrong. They show that similar reasoning gives "the right answer" in some other problems. They say that earlier authors had shown that the used reasoning violates standard rules of probability theory (which is true, of course). They then prove a new (true) theorem (within conventional probability theory) which allows them to validate the similar reasoning in the other problems because one could pretend that the reasoning was making an appeal to their theorem, and moreover the assumption in that theorem is correct in those other examples. They show that the assumption in their theorem is not true in TEP. But their theorem is not an "if and only if" theorem. Their whole paper merely shows that if you apply probability theory correctly to the other examples, you get the answer which is correct for those examples; it illustrates what we already know, that you can't translate standard original TEP into probability theory. We already know that no theorem from probability theory can "save" the "wrong reasoning" in TEP, because we know that no example can exist in classical probability theory which has exactly what is needed to produce E(B|A)=5A/4, or (to write out the meaning of that statement explicitly) E(B|A=a) = (5/4) a for all values of a.

So in my opinion S&D's logic is wrong and their paper does not add anything to what we didn't already know. Note that they try to resolve TEP by turning to probability theory!

They are also not the first mountaineers who climb the same mountain. As far as I can see the mountain that they claim to see had already been climbed many times. They climb another mountain by mistake, do not reach the top, but don't see where they have gone wrong, since they are still in the clouds. Richard Gill (talk) 08:17, 14 July 2011 (UTC)

Please will you also tell me if you are aware of any secondary or tertiary sources about TEP (e.g. university undergraduate texts) in philosophy. There do exist such sources in mathematics, e.g. Cox's book on inference (though some would call this a graduate text, not an undergraduate one). Research articles are primary sources according to the Wikipedia rules which you so strongly adhere to. We are not allowed to write articles whose reliable sources are original research articles. This rules out almost all of the literature on TEP. The rest consists of blogs by amateurs. That is also not a reliable source for an academic subject.

I agree that good secondary sources in this area are really scarce, not to say non-existent. But this is merely a result of the fact that the philosophical discussion is still alive and kicking. When TEP is mentioned in university textbooks the author often merely promotes his own idea of the matter anyway (for example Dennis V. Lindley, Understanding Uncertainty, 2006), with no mention at all that other scholars hold other opinions. Textbooks like that will fool the reader into believing that this problem is solved and no controversy exists anymore. The best overviews I have read are in the introductory parts of some of the more recently published papers. iNic (talk) 22:07, 1 September 2011 (UTC)

The main message for the layman should be that original TEP and Smullyan's TEP are examples of faulty logical reasoning caused by using the same symbol (or verbal description) to stand for different things at the same time. This is the executive summary of almost all the articles I have studied so far, by the way.

I agree. This should be the first suggested solution in the article. Many authors support this view. iNic (talk) 00:12, 14 July 2011 (UTC)

TEP with opened envelope belongs to probability and decision theory and is useful in the classroom for showing the strange things that happen with infinite expectation values. You do *not* expect an infinite expectation because you never live long enough. You are always disappointed. In the long run you are dead (Keynes). That's it. That's the executive summary of the decision theoretic / statistical literature on this topic. I will write a survey paper containing no original research and not promoting any personal point of view, and then at last we will have a good secondary source for the wikipedia article.

Yes, all these variants should be mentioned in the second solution in the article. iNic (talk) 00:12, 14 July 2011 (UTC)

I think that Falk's paper does constitute a reliable secondary source. She analyses TEP from the point of view of teaching probability. She is writing for laypersons, for undergraduates. Her executive summary is the same as what I just mentioned: examples of faulty logical reasoning caused by using the same symbol (or verbal description) to stand for different things at the same time. Don't worry, almost all of the philosophers say the same thing, but of course each wants to say it in a subtly different way from earlier authors, since their job is to publish papers in which they nit-pick at previously published papers.

This is again the first solution. Falk should be one of the sources there, for sure. iNic (talk) 00:12, 14 July 2011 (UTC)

I'm participating in a philosophy conference tomorrow at which several authors of TEP papers are present. That will be interesting. Richard Gill (talk) 16:51, 13 July 2011 (UTC)

Littlewood's problem and Schroedinger's problem

Littlewood (1953)

A huge pack of cards contains cards marked on each side with numbers n and n+1, where n = 1, 2, .... A card is picked at random and held up so that players A and B each see only an opposite (random) face of the card. They must each either say "pass" or "play". If both say "play", a prize goes to the person who has the larger of the two numbers.

What is wrong with the following argument: if A has "1" he must pass since B has "2". If A has "2" and B doesn't pass then A must pass since B must have "3". And so on. Therefore, both must pass, whatever. They don't even need to look at their side of the card to make their decision.

Analysis. Denote by A and B the numbers on the card facing each player. If player A is rational he will make his decision according to some prior probability distribution P over the numbers on the cards. By Gill's 2Necktie theorem (below), either E(A) is infinite or A and {A<B} are not statistically independent of one another under P. Therefore, with E(A) finite, there are values a of A such that P(A<B|A=a) is larger than 1/2, and values such that it is smaller. So with realistic expectations about the cards, there certainly are values on the card which he sees such that it would be rational to say "play".
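The boundary effect in this analysis is easy to see in a quick simulation. The prior used below (n uniform on 1..10) is an illustrative assumption, not anything from Littlewood; any prior with finite expectation exhibits the same phenomenon.

```python
import random

# Hypothetical finite prior: the card is (n, n+1) with n uniform on 1..10,
# and player A sees a uniformly random side of the card.
random.seed(1)
counts = {}  # a -> (times A held the smaller number, times A saw a)
for _ in range(200_000):
    n = random.randint(1, 10)
    a, b = (n, n + 1) if random.random() < 0.5 else (n + 1, n)
    lo, tot = counts.get(a, (0, 0))
    counts[a] = (lo + (a < b), tot + 1)

for a in sorted(counts):
    lo, tot = counts[a]
    print(a, round(lo / tot, 3))  # estimated P(A < B | A = a)
```

Seeing "1" means you surely hold the smaller number, seeing "11" the larger; for the interior values the conditional probability is about 1/2. So, as stated, P(A<B|A=a) is not constant in a, and there are values of a for which "play" is rational.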

What is wrong with the argument that both must always say "pass"? Note that the argument for saying pass involves waiting to see what the other player is going to say. Thus it supposes that the choice of the other player is allowed to influence your own. If we demand that both players say "pass" or "play" simultaneously, as in the paper-scissors-stone game, they can't learn from the other learning from them learning from the other learning from...

This is Problem 3 in Chapter 1 of Littlewood's book. He finds the mathematical paradox not terribly interesting, but he does find the problem moderately entertaining, for someone in a good mood.

Schroedinger, reported by Littlewood (1953)

Now there are 10 times as many cards marked with n and n+1 as with n-1 and n. There is a neutral bank or bookie, and the players are allowed to place a bet with the bank on their number being the smaller one, at even odds.

What is wrong with the following argument? If player A sees n, it is ten times more likely (since there are ten times as many cards) that the card is (n, n+1) rather than (n-1, n). So he should be willing to bet (at even odds) any amount of money on his card being the smaller! But the same is true for player B. Both should be willing to bet arbitrarily large amounts on having the smaller card, whatever the amount n actually is. They don't even have to look at their cards to make their wagers!

Analysis. Again, with any realistic picture (or beliefs) held by the first player about the relative chances of the differently numbered cards, it must be the case that A and {A<B} are not statistically independent of one another, so there are numbers n such that P(A<B|A=n) is larger than 1/2 and numbers such that it is smaller.

What went wrong. Realistically, there cannot be 10 times as many cards marked n and n+1 as with n-1 and n, indefinitely.

This is Problem 4 in Chapter 1 of Littlewood's book. He finds it similar in entertainment value and mathematical depth to the previous one. He notes that the hypothesis that there are ten times as many cards each time we go up one step in their values, implies that whatever number N you might imagine in advance, it is infinitely more likely that the numbers on the card you draw are larger than N, than not. He says that the (faulty) reasoning is based on a monstrous hypothesis.
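The "monstrous hypothesis" can be made concrete by truncating it. The sketch below is an illustrative assumption of mine (cards (n, n+1) for n = 1..N only, with 10**(n-1) copies of the n-th card; the truncation point N is arbitrary) and computes the exact conditional probability of holding the smaller number.

```python
from fractions import Fraction

# Truncated Schroedinger pack: cards (n, n+1) for n = 1..N,
# with 10**(n-1) copies of the n-th card.
N = 8
weight = {n: Fraction(10) ** (n - 1) for n in range(1, N + 1)}

def p_smaller(m):
    """P(my number is the smaller one | I see m); each side is shown with prob 1/2."""
    w_larger = weight.get(m - 1, Fraction(0))   # card (m-1, m): m is the larger number
    w_smaller = weight.get(m, Fraction(0))      # card (m, m+1): m is the smaller number
    return w_smaller / (w_smaller + w_larger)

print(p_smaller(1))      # 1: seeing the bottom number, you surely hold the smaller
print(p_smaller(5))      # 10/11: in the interior the even-odds bet looks attractive
print(p_smaller(N + 1))  # 0: at the truncation point the same bet is a sure loss
```

In the interior the 10/11 advantage is real, but it is bought at the price of a catastrophic reversal at the top of the (necessarily finite) pack; extending the pack only postpones, and enlarges, the reversal.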

Conclusion

A solution or explanation of what went wrong in a two envelopes problem ought to work well in similar problems. I think we are doing well.

Note, Littlewood refers to mathematical paradoxes of this kind as jokes. Some jokes are better than others. He finds these ones fairly feeble but at least they have the merit that you can share them with non-mathematically trained friends. Richard Gill (talk) 12:02, 16 August 2011 (UTC)


PS This is Gill's 2Necktie theorem:

Suppose A and B are two random variables with finite expectation values, with positive probability to be different, and such that the pair (A,B) has the same joint probability distribution as (B,A).

Then the random variable A and the event {A < B} are statistically dependent. In particular, P(A < B | A = a) is not constant as a function of a, and E(A | A > B) > E(A | A < B).

Proof: E(A-B|A-B > 0) > 0. Therefore E(A|A-B > 0) > E(B|A-B > 0); here we use finite expectation values and the positive probability that the variables differ to ensure that everything is well-defined and finite. By symmetry, E(B|A-B > 0) = E(A|B-A > 0). Combining and rewriting the condition, E(A|A > B) > E(A |A < B). This inequality proves the statistical dependence.
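The theorem's conclusion is easy to check numerically on an exchangeable pair. The particular pair below (envelopes (x, 2x) in random order, x uniform on {1, 2, 4}) is just an illustrative assumption; any exchangeable pair with finite expectations would do.

```python
import random

# Exchangeable pair (A, B): amounts (x, 2x) in random order, x in {1, 2, 4}.
# The pair (A, B) has the same joint distribution as (B, A), as the theorem requires.
random.seed(2)
hi_sum = hi_n = lo_sum = lo_n = 0
for _ in range(100_000):
    x = random.choice([1, 2, 4])
    a, b = (x, 2 * x) if random.random() < 0.5 else (2 * x, x)
    if a > b:
        hi_sum, hi_n = hi_sum + a, hi_n + 1
    else:
        lo_sum, lo_n = lo_sum + a, lo_n + 1

# Estimates of E(A | A > B) and E(A | A < B); theory gives 14/3 and 7/3 here.
print(hi_sum / hi_n, lo_sum / lo_n)
```

The first conditional mean is visibly larger than the second, exhibiting the statistical dependence of A and {A < B}.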

Remark 1. Though the random variable A is the same random variable whether it is larger than B or smaller than B, its (conditional) probability distribution is different in the two situations. Thus a subjective Bayesian whose beliefs about two numbers are encapsulated in such a distribution is obliged, assuming he has finite expectations, to entertain different beliefs about A when considering the situation that it is the smaller or the larger of the two. This is what the philosophers refer to when they talk about the sin of equivocation: giving different things the same name. But this is a little misleading: the thing we are talking about is the entirety of our subjective beliefs about the value of something else in various different situations. That something else is the same thing in all those situations.

Remark 2. The only way to escape the consequences of the theorem is through infinite expectations or worse. The new level of paradoxes which explicitly depend on this escape route have to be defused through a new level of explanation, for instance, why in this new situation expectation values are useless as guide to decision. Richard Gill (talk) 12:30, 16 August 2011 (UTC)

Remark 3. The assumption of finite expectations can be dropped; the conclusion remains true. Proof: choose a function mapping the real line continuously and 1-1 to a bounded interval, e.g. arc tangent. Apply it to A and to B, then apply the theorem to the transformed variables. The conclusion of the theorem therefore applies to the transformed variables. Now "undo" the transformation. Richard Gill (talk) 16:54, 4 October 2011 (UTC)

Law of Diminishing returns

If it is possible to "give arguments that show that it will be to your advantage to swap envelopes by showing that your expected return on swapping exceeds the sum in your envelope", I dare say that it is possible to give an argument that, if one were to continue swapping, that person's expected return on swapping will diminish over time to a point where it will no longer be desirable to swap anymore, as the desire to see the contents of the envelope will exceed the increase in the expectation. So no, it will not be "beneficial to continue to swap envelopes indefinitely", as the rule of diminishing returns will kick in.

Oh and by the way, there is no such thing as a logical absurdity. Nothing logical can ever *stay* absurd. — Preceding unsigned comment added by 75.185.11.200 (talk) 04:46, 13 October 2011 (UTC)

TEP Sources

There are banners saying that the TEP sources page [3] is to be deleted and merged into the main TEP page. Yet this is an extremely useful bibliography. At the very least, we could move it to a subpage of the TEP talk pages. What do people think? Richard Gill (talk) 16:49, 4 October 2011 (UTC)

PS the discussion on this issue is archived at [4]. There too, the recommendation seems to be to move the list of sources to the talk pages. Richard Gill (talk) 17:00, 4 October 2011 (UTC)

I hope nobody is going to delete the TEP sources page [5] before someone else moves it to TEP talk pages! Richard Gill (talk) 23:13, 16 October 2011 (UTC)

Elementary solution

An editor has recently added an 'elementary' solution, which I have reverted pending discussion. It is my understanding that things are not quite as simple as that and the Falk paper does not actually support the solution given. Martin Hogbin (talk) 09:16, 12 October 2011 (UTC)

I did this. Falk has several papers on this, and I just cited the most extensive one, which addresses multiple scenarios. She has also published a 2008 paper addressing specifically the Wikipedia version, which I should have referenced instead. In fact, this article is already in a footnote on the first paragraph of the "Problem" section, currently number 3. I believe this resolution makes perfect sense, as surely we cannot use A as one and the same variable while assuming A=X and A=2X. Do you disagree with this? If not, I think this resolution should be stated somewhere in this Wikipedia article, though I am not accustomed to editing so was not sure how best to go about this. Also, I wrote her and she noted there's a typo in the 2008 paper. On page 87, 6th line in the paragraph after the numbered steps, instead of 3/2A it should be 3/2S. Jkumph (talk) 18:44, 13 October 2011 (UTC)
There has been considerable discussion on this topic recently on this talk page and, as I understand it, the argument you gave is something of a simplification of Falk's argument. If A is a random variable then it can take on different values. What exactly is X? Martin Hogbin (talk) 15:02, 14 October 2011 (UTC)
The argument given is pretty much exactly what Falk stated in her 2008 paper, just shortened a bit. X is just what we are calling the smaller amount (Falk uses the variable S in her paper). Of course A can take on different values in different scenarios, but if we assume different values for different scenarios, it no longer makes sense to continue to use it in one equation as if we had not assumed different values. That is what happens in step 7, which computes an expected value of B based on two incompatible assumptions about the value of A. This can easily be seen if we stop for a moment and realize that, based on the assumptions behind each term, we could substitute 2X for 2A in the first term (since it is based on the assumption that A=X), while we can substitute X for A/2 in the second term (since it is based on the assumption that A=2X). But this implies that the A in the first term does not actually have the same value as A in the second term, which makes this use of A mathematically invalid. The proper thing would be to have stopped using A after steps 4 and 5 and just compute the expected value of B in terms of X, which would yield 1.5X. It can be easily shown that the expected value of A is also 1.5X, so there is no advantage in switching. I realize of course that there are other versions of the problem where it is more complicated, such that distributions need to be taken into consideration, but in this particular version it does appear that simple. Please give this argument careful consideration, as if Falk and I are mistaken, we would both like to know specifically where. Thank you. 108.192.17.5 (talk) 21:49, 14 October 2011 (UTC)
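The bookkeeping in terms of the smaller amount X can be written out in a few lines. The concrete value X = 10 below is an arbitrary illustrative choice (any positive amount gives the same conclusion):

```python
from fractions import Fraction

# Work in terms of the smaller amount X instead of re-using the symbol A.
X = Fraction(10)  # arbitrary illustrative value for the smaller amount
# Two equally likely cases: (A, B) = (X, 2X) or (A, B) = (2X, X).
E_A = Fraction(1, 2) * X + Fraction(1, 2) * (2 * X)
E_B = Fraction(1, 2) * (2 * X) + Fraction(1, 2) * X
print(E_A, E_B)  # both equal 3X/2: no advantage in switching
```

Both expectations come out to 3X/2, so under this reading of the problem there is indeed no advantage in switching.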
You and Falk jump to the conclusion that we are computing an unconditional expected value. But lots of sources think we are computing a conditional expected value (the expected amount that we would believe is in the second envelope, if we were to peek in the first envelope and see some specified amount there). It then seems to turn out that whatever actual amount we would see if we did look in the first envelope, the expected amount in the second, given that, is larger. Hence there is no need to take a peek: we should switch anyway.

Falk's "solution" does not admit the possibility that the writer was heading that way. Many writers, especially the less mathematically sophisticated (the philosophers) see it Falk's way. Many writers, especially the more mathematically sophisticated (especially the probabilists and statisticians and mathematical economists) see it the other way. You can't understand the whole story if you don't see that there is a choice here; Falk is making an assumption. If you want to present Falk's solution as a simple solution, right up front, you should also say that she is making a particular assumption about the intention of the writer. Personally I would go for the alternative, but then I'm a mathematician. (And TEP was invented by mathematicians.) But what counts is what is out there in the literature, and Falk is certainly an important authority. Richard Gill (talk) 00:14, 17 October 2011 (UTC)

This seems like a fair compromise. Jkumph (talk) 20:34, 17 October 2011 (UTC)
To drive home the point, here's a little reductio ad absurdum of the idea that it is acceptable to employ a random variable in an expected value calculation while assuming two different values:
Let C be a random variable with .5 probability C=1 and .5 probability C=2. Now, of course:
(1)   E(C) = .5(1) + .5(2) = 1.5
Now suppose we find it acceptable to use the same random variable C in this expected value calculation, just as A is used in TEP step (7):
(2)   E(C) = .5(C) + .5(C) = C ?
If this is also true, by transitivity, we may combine (1) and (2) to prove that C=1.5. However, this is absurd for two reasons. First, it allows us to obtain a fixed value for a supposedly random variable. Second, the fixed value obtained is one that, given the possible values of C, is in fact impossible. This shows that (2) is not permissible. If our intention was to represent the general form of (1), then we would be better off writing something with subscripts, such as:
(2)   E(C) = .5(C₁) + .5(C₂)
Then, for example, C₁ represents the value of C only in the outcome that C=1. (Applying this guideline to the TEP line of reasoning, we should have begun using subscripts for A somewhere around steps (4) thru (7).) 108.192.17.5 (talk) 23:42, 14 October 2011 (UTC)
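The arithmetic in this reductio can be sanity-checked in a couple of lines; nothing below goes beyond the two-valued variable C already defined (a sketch for illustration only):

```python
from fractions import Fraction

# The legitimate calculation, with the two outcome values written separately
# (the subscripted form of the argument above).
c1, c2 = 1, 2  # the two possible values of C
E_C = Fraction(1, 2) * c1 + Fraction(1, 2) * c2
print(E_C)  # 3/2

# Re-using the single symbol C for both terms would read E(C) = .5*C + .5*C = C;
# combined with the line above it would force C = 3/2, a value C can never take.
assert E_C not in (c1, c2)
```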
The first thing any teacher of probability has to do is to get the students to use capital letters for random variables, and lower case for possible values thereof. Richard Gill (talk) 00:17, 17 October 2011 (UTC)

As I have pointed out before, this resolution must definitely be mentioned in the article. Not only mentioned, it should be the very first presented solution. It is the simplest and most natural resolution as it doesn't rely on the concept of priors. I thus propose that this resolution is presented as the 'First solution.' What is now presented as the first solution should be renamed the second and so on. I also propose that the "Introduction to resolutions" section be deleted entirely. iNic (talk) 02:05, 15 October 2011 (UTC)

INic, are you saying that X is also a variable? Martin Hogbin (talk) 08:34, 15 October 2011 (UTC)

¿Que? I'm not talking about any "X" at all. I'm talking about the resolution to TEP that is right now omitted from the article. If "X" stands for this resolution I think that "X" should be inserted in the article again. As the first solution. Clear enough? iNic (talk) 15:40, 15 October 2011 (UTC)

I think the 'simple solution' is wrong. Falk does not actually give it as a solution in her paper; she gives something similar as an alternative question that makes a solution easier to see.
The problem lies in step 7 (using Falk's numbering) - the incorrect calculation of the expectation. This is what her paper actually says. Martin Hogbin (talk) 22:38, 15 October 2011 (UTC)
Which paper are you referring to? She has written three on the subject. It sounds like it may be the 2006 paper, in which her explanation indeed does not seem to follow the "simple solution" I first wrote up. I made a mistake by first referencing that paper, thinking it was essentially the same resolution, sorry. Please be sure to look at the 2008 paper or 2009 paper (both in "Teaching Statistics"). My writeup paralleled the 2008 article, but she makes the same point in the 2009 article (which she co-authored with Nickerson). Referring to the expected value calculation, the 2009 paper states "though the arithmetic of (1) is perfectly right, the primary requirement for algebraic consistency is violated. As pointed out by Bruss (1996), Falk (2008), and other authors, the symbol A in the first term denotes the greater of the amounts in the two envelopes (because only then does the other envelope contain A/2), whereas A in the second term denotes the smaller of the two amounts (for a symmetric reason). At the same time, A on the right side of the equality sign in (1) stands for a random variable, namely, the sum in your envelope." I have been communicating with Falk and she told me the 2009 paper is her favorite. I have seen this basic resolution provided in other places as well, so it seems that, even if there is no consensus on it here, it should be included in the article. 108.192.17.5 (talk) 23:50, 15 October 2011 (UTC)
I am referring to the 2008 article from Teaching Statistics.
On page 87, Falk gives what she calls 'Modified Wikipedia steps'. As far as I can see this is not intended to be a resolution of the paradox but a way of restating it which makes certain facts clearer. Underneath this she says, "In Wikipedia's version, adding up the two terms in the formula of step 7 to obtain 5/4 A is illegitimate. It is the erroneous calculation of the expectation that is wrong." My point therefore is that there is not a reliable source that actually gives your simple explanation. Martin Hogbin (talk) 09:09, 16 October 2011 (UTC)
Let me just preface this by stating, again, that I am in active correspondence with Falk herself. I showed her the edit I made to Wikipedia and she approved of it. So to say that my source on this is unreliable is simply incorrect. I can forward the email if proof is required.
Now, to respond to your reading of the 2008 article. You have quoted one part of her paper as if it were meant to stand for the whole thing. But of course, as we all know, the parts of papers are rather meant to support the overall thesis.
In the introduction, that thesis is stated as clearly as possible: "The purpose of this note is to offer a simple formulation of the resolution that relies on elementary probability concepts and employs terms that are accessible to high school and undergraduate students, not requiring advanced technical expertise. The analysis is in keeping with Wikipedia’s first proposed solution and those of others (Bruss 1996; Marinoff 1993; Rawling 1994). I try to dispel the doubts that sometimes linger despite the sound arguments in the above sources."
After referring to this Wikipedia article itself, in a section aptly headlined "Finding the Flaw", she resolves the paradox as follows: "each of the two terms in the formula in no. 7 applies to another value, yet both are denoted by A. In Rawling’s (1994) words, in doing so one commits the ‘cardinal sin of algebraic equivocation’"
Having pointed out the error in the reasoning, she then proceeds to offer an alternate, modified line of reasoning, cleansing it of the error, to verify that the paradox then disappears.
I have been looking at the discussion archives and this simple resolution has been mentioned many times. It is bizarre that it has not yet made it into the main article. It does not take up a lot of space, and it is backed up by numerous highly credible sources. 108.192.17.5 (talk) 10:30, 16 October 2011 (UTC)
I agree it's bizarre that it's currently not present in the article. But it has been there. It was there when Falk wrote her paper, for example. It was the "First solution" for years, before some vandals started to destroy the article a couple of years ago. Some parts have been restored since then, but this obvious "First solution" has still to be restored to the article. iNic (talk) 08:05, 18 October 2011 (UTC)
Firstly, if Ruma Falk would like to contribute directly to this discussion that would be most welcome. I have started a page on what I think Falk is actually saying in her paper. I would welcome any contribution from her either on that page or by private email or indeed by you or any other user here. I do not suggest that you publish a private email here unless the originator agrees.
I have no argument with "each of the two terms in the formula in no. 7 applies to another value, yet both are denoted by A", but the question is exactly what is wrong with this. The term 'algebraic equivocation' does not really explain what the problem is. I believe that the problem is either the improper use of a random variable in an expectation calculation or the use of a numerical value in place of a random variable earlier on in the argument. I believe that most people consider A to be some intuitive form of random variable, thus my first suggestion is the problem. Martin Hogbin (talk) 11:05, 16 October 2011 (UTC)
Once again, and again, and again: Falk is correct, believe it, Martin.
The equation in line 7 is a flaw, since "A" cannot be said to be both the smaller amount (2A=B) and the greater amount (A=2B) at the same time.
The absurdity of the implied "A=2A" is that this could only be "correct" in the cases A=B=±0 and A=B=±infinity.
So line 7 is flawed in saying "So the expected value of the money in the other envelope is":
1/2 (2A) [note: only valid in case "A" is the smaller amount, 2A=B] + 1/2 (A/2) [note: only valid in case "A" is the greater amount, A=2B] = 5/4 "A"
And once again: which "A"? The "one A" or "quite another A"? And only valid in case "A=2A"? Believe it, Martin, she is correct: line 7 is a flaw. As I said on Richard's talk page 12:24, 29 July. Regards, Gerhardvalentin (talk) 14:00, 16 October 2011 (UTC)
There is no argument that point 7 is the problem, but why? That is the question. Also, please note that I have not said Falk is wrong, just that the proposed 'simple solution' does not properly represent Falk's argument. Martin Hogbin (talk) 15:29, 16 October 2011 (UTC)
There is an argument that line 7 is not the problem (though the notation should be improved), but that line 6 is the problem. Read Schwitzgebel and Dever. Or read Gill. There is an argument that neither line 6 nor 7 is the problem (though the notation could and should be improved). It's perfectly possible that the writer is so extraordinarily ignorant of the amounts in the two envelopes that even if he knew what was in one envelope, he would still consider the other equally likely to hold half or twice that amount. However, such profound ignorance would imply that his expectation value of the amount in the envelopes is infinite. Then indeed, if he opened his envelope and saw a in it, he should switch, since he'd then indeed expect 5a/4. On the other hand, if he doesn't look in the envelope there's no point in switching, since without looking in his envelope he already expects an infinite amount in there, and if he switches it's the same.

Remember the Anna Karenina principle. The argument with which we are presented is not presented as belonging to a specific formal mathematical context. It's informal. The writer also doesn't refer to particular theorems to justify each of his steps, so we have no way of knowing what kinds of probability calculus rules he is using. It's clear to me that the writer doesn't distinguish between conditional and unconditional distributions - i.e. expectations with or without further information - and doesn't distinguish between random variables and realisations thereof, and possibly doesn't even distinguish between actual values and expected values.

Secondly, if you want to explain the paradox to philosophers you have to use a lot of long words and avoid any mathematics. The philosophy papers are long and difficult and use specialist philosophy jargon. If on the other hand you want to explain the paradox to mathematicians you can use the notation and calculus of probability theory, and the explanation can be done in a few lines, but only mathematicians who are familiar with probability theory will be able to follow you.

On this wikipedia page we have to survey the literature, not write our own solutions. The literature is a huge mess. There are at least three completely different ways to read the intention of the writer, hence at least three completely different resolutions of the paradox. And each resolution has to be explained in a different way to philosophers, in a different way to mathematicians, and presumably in yet another different way for ordinary people (the main readers of wikipedia!).

Almost every paper giving a solution, whether long or short, mathematical or philosophical, also includes a little numerical example which shows that given some specific beliefs about x, the smaller of the two amounts of money, your beliefs about whether the second envelope contains more or less than the first would change if you peeped in the first and saw some particular amount of money in there. It's a mathematical theorem that this is not just some peculiarity of the particular examples which people have figured out so far: it's universal, it's generic, it has to happen. I think that this is the heart of the matter and you'll find it somewhere in every single paper on the topic. If you bear this in mind you'll easily see where steps 6 and 7 together are too fast, too careless. Richard Gill (talk) 20:26, 16 October 2011 (UTC)
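A numerical example of this kind is easy to generate. The prior below is an illustrative assumption of mine, not taken from any particular paper: the smaller amount is X = 2**k with k uniform on 0..N (N = 20 arbitrary). For interior amounts the conditional expectation of the other envelope really is 5/4 of what you see, but at the two extremes your beliefs change, exactly as described.

```python
from fractions import Fraction

# Assumed proper prior: smaller amount X = 2**k, k uniform on 0..N;
# the envelopes hold (X, 2X) and A is equally likely to be either.
N = 20

def expected_ratio(j):
    """E(B | A = 2**j) / 2**j: the expected gain factor from switching."""
    w_x = Fraction(1) if 0 <= j <= N else Fraction(0)       # case A = X: B = 2A
    w_2x = Fraction(1) if 0 <= j - 1 <= N else Fraction(0)  # case A = 2X: B = A/2
    return (w_x * 2 + w_2x * Fraction(1, 2)) / (w_x + w_2x)

print(expected_ratio(0))      # 2: the smallest amount, switching surely doubles
print(expected_ratio(10))     # 5/4: in the interior the "paradoxical" factor appears
print(expected_ratio(N + 1))  # 1/2: the largest amount, switching surely halves
```

Peeking matters: the factor 5/4 holds for most observed amounts, yet the sure loss at the top exactly compensates, so unconditionally there is no gain from switching.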

The proposed simple solution

This is the proposed simple solution. I have copied it here so that we can discuss it, rather than the article by Falk, which I give my opinion on here

...it should be noted that at least one convincing elementary resolution exists for the above formulation. This follows from the recognition that the symbol A in step 7 is effectively used to denote two different quantities, committing the fallacy of equivocation. This error is brought into relief if we denote by X the smaller amount, making 2X the larger amount, then reconsider what happens in steps 4 and 5:

4. If A=X then the other envelope contains 2A (or 2X).

5. If A=2X then the other envelope contains A/2 (or X).

Each of these steps treats A as a random variable, assigning a different value to it in each possible case. However, step 7 continues to use A as if it were a fixed variable, still equal in every case. That is, in step 7, 2A is supposed to represent the amount in the other envelope if A=X, while A/2 is supposed to represent that value if A=2X. However, we cannot continue using the same symbol A in one equation under these two incompatible assumptions. To do so is equivalent to assuming A=X=2X, which, for nonzero A, implies 1=2.
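The consistent bookkeeping above can be checked numerically (a hypothetical value X = 2 is assumed purely for illustration): written in terms of X, both envelopes have the same expected content and no advantage to switching appears.

```python
from fractions import Fraction

X = Fraction(2)  # hypothetical fixed smaller amount, for illustration only

# Steps 4 and 5 written consistently in terms of X:
# the chosen envelope holds X or 2X with probability 1/2 each,
# and the other envelope then holds 2X or X respectively.
E_chosen = Fraction(1, 2) * X + Fraction(1, 2) * (2 * X)
E_other = Fraction(1, 2) * (2 * X) + Fraction(1, 2) * X

print(E_chosen, E_other)  # both equal 3X/2: no advantage to switching
```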

Discussion

I think this solution tries to make things just a little too simple. It is not clear to me exactly where this solution says the problem in the original line of reasoning lies.

We all agree that the problem lies in line 7 which claims to calculate the expectation value of the unchosen envelope.

This line says that the expectation of the amount in the unchosen envelope is (1/2)(2A) + (1/2)(A/2) = (5/4)A

What exactly is wrong with that calculation according to the proposed solution?

(Note that it is always possible to propose other lines of reasoning that show why you should not swap; there are many of these, all correct, but they do not resolve the paradox. Resolving it requires us to show what is wrong with the given line of reasoning.) Martin Hogbin (talk) 16:35, 16 October 2011 (UTC)

First, are we in agreement that this is not an issue about what Prof. Falk and others accept as a resolution but what Mr. Hogbin fails to accept as a resolution? If that is the case, it should still be included in the article regardless of Mr. Hogbin's personal opinion, since as far as I know Mr. Hogbin is not an expert on the subject and so should not be allowed to override the voicing of credible, informative points of view.
Second, the problem with line 7 is that, in context, the equation is algebraic nonsense. This is my informed opinion from having studied pure mathematics at the University of Chicago. It is also what Falk and Nickerson essentially state, once in the 2008 article, and once in the 2009 article. They note they are not the first to make this observation; in particular, Bruss, in 1996, said the same thing.
In any algebraic equation, variables are not allowed to change their meaning in mid-equation. I cannot write A + A = 2A and then say A in the first case stands for apple, in the second case for orange, and the third case for apple again. Nor can I write any equation with any variable that is allowed to have different values in different parts of the equation. Any mathematician worth his or her salt will agree with this most basic of all logical principles, also known as the Law of identity. 108.192.17.5 (talk) 18:08, 16 October 2011 (UTC)
Whether step 7 is right or wrong depends on what the author of these steps is trying to do. There are at least three different more or less reasonable interpretations of the intention of the writer, and according to these different interpretations, an error is made at different places. Different writers on TEP seem to be choosing one of these interpretations as if it were obvious and necessarily the only interpretation. That is one reason why the literature is so messy.

To explain this, some more careful notation might be useful. Let x denote the smaller of the two amounts, y=2x the larger; let a denote the amount in the first selected envelope, b the amount in the other. In any one instance these are all fixed, unknown quantities. Our player seems to be making a probability argument, so it seems reasonable to suppose that all this talk of expectation values and probabilities is relative to the player's system of prior beliefs. For instance, a priori the player is 50% certain that x is smaller than 100, and 100% certain that it is smaller than 1 000 000. We can now use probability theory notation, and introduce random variables X, Y, A, and B whose joint probability distribution is built up as follows: X has the probability distribution corresponding to the player's prior beliefs about x; next, Y=2X; next, A=X or A=Y each with probability 1/2, independently of the value taken by X; and correspondingly and finally, B=Y or B=X.

Now, with this more subtle notation, let's look at step 6. It's true by construction that B=2A or B=A/2, each with probability 1/2. This is simply saying that our a priori beliefs make it equally likely that we hold the envelope with the smaller or the larger amount. But it's not necessarily true that, conditional on A=a, B=2A or B=A/2 each with probability 1/2. In terms of beliefs: if we imagined that we looked in the envelope and happened to see an amount 80 there, it would no longer necessarily be equally likely to us, when we take account of our prior beliefs, that x is 40 or 160. And similarly for any other value of a.

In fact there is a simple probability theory theorem which says the following: if X, Y, A and B are defined as above, starting with a proper probability distribution for X (representing our initial prior beliefs about x, remember), then it is impossible that whether or not the first envelope contains the smaller amount is independent of the amount actually in it.
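The dependence the theorem guarantees can be exhibited exactly for a prior with unbounded support (a hypothetical geometric prior, x = 2^n with probability (1/2)^(n+1), truncated at n = 30 purely for computation):

```python
from fractions import Fraction

# Hypothetical prior (truncated for computation): x = 2^n with probability (1/2)^(n+1).
N = 30
w = {2**n: Fraction(1, 2**(n + 1)) for n in range(N + 1)}
total = sum(w.values())
prior = {x: p / total for x, p in w.items()}

def p_first_is_smaller(a):
    """P(first envelope holds the smaller amount | its amount is a)."""
    p_small = prior.get(a, Fraction(0)) / 2       # pair (a, 2a), we picked the a envelope
    p_large = prior.get(a // 2, Fraction(0)) / 2  # pair (a/2, a), we picked the a envelope
    return p_small / (p_small + p_large)

print(p_first_is_smaller(1))   # 1: the amount 1 can only be the smaller one
print(p_first_is_smaller(16))  # 1/3, not 1/2
```

Here the conditional probability is 1/3 for every amount above the minimum, so the event "first envelope is smaller" is certainly not independent of the amount seen.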

With this in mind, let's ask ourselves what the writer is doing in steps 6 and 7 together. In step 7 it looks as though he is trying to compute the expected value of the amount in the second envelope given what is in the first. In other words, he imagines that he peeps in his envelope and happens to see 80 in there (for instance). In the language of random variables, he wants to calculate E(B|A=80). Probability calculus tells us that this is equal to 160 Prob(B > A | A=80) + 40 Prob(B < A | A=80). According to line 7 he takes these probabilities equal to 1/2. But I have just told you that it is impossible for these probabilities - representing how his beliefs about x, y, a and b would change if he peeped in his envelope and happened to see any particular amount there - to both equal 1/2 for all possible amounts he can imagine seeing in the first envelope.

OK, so let's suppose this was not the intention. Suppose the writer was just trying to compute the expected amount in the second envelope without knowing what's in the first. Of course, it is perfectly legitimate to calculate this by splitting over the two possibilities that the first is smaller and larger, respectively, each of which has probability 1/2. Then probability calculus tells us E(B) = 0.5 E(B|B > A) + 0.5 E(B|B < A) = 0.5 E(2A|B > A) + 0.5 E(A/2|B < A) = E(A|B > A) + E(A|B < A)/4. Now we have the problem of calculating those two conditional expectation values. I just told you that it is impossible under mutually consistent prior beliefs to have the amount in the first envelope independent of whether it contains the smaller or larger amount. In other words, our beliefs about a, if we were to be informed that it's smaller than b, are not the same as our initial beliefs about a. We can't just replace E(A|B > A) by E(A) and also replace E(A|B < A) by E(A). (Moreover, we can't just drop the E(.) altogether.) Well, this explanation is essentially Falk's simple explanation: it's all about equivocation. Using the same symbol for two different things. The two A's on the right hand side of 7 do indeed, according to this reading of the intention of the writer, refer to two different things, namely to our beliefs about a in two different situations. Not to our prior beliefs, but to what our beliefs would be, were we to imagine being informed that our envelope contained the smaller, or the larger, of the two amounts.
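The illegitimacy of replacing E(A|B > A) by E(A) can be shown exactly with a small hypothetical prior (x uniform on {1, 2, 4}): the conditional expectation differs from the unconditional one, while the correct split still recovers E(B) = E(A), as symmetry demands.

```python
from fractions import Fraction

# Hypothetical prior: the smaller amount x is uniform on {1, 2, 4}.
xs = [1, 2, 4]
p = Fraction(1, 3)

# E(A): average over the pair {x, 2x} and the fair coin choosing the envelope.
E_A = sum(p * Fraction(1, 2) * (x + 2 * x) for x in xs)   # = 7/2

# Given which envelope is larger, A is x or 2x respectively.
E_A_given_smaller = sum(p * x for x in xs)       # E(A | B > A) = 7/3
E_A_given_larger = sum(p * 2 * x for x in xs)    # E(A | B < A) = 14/3

# E(A | B > A) differs from E(A): substituting one for the other is the error.
# The correct split nevertheless gives E(B) = E(A):
E_B = E_A_given_smaller + E_A_given_larger / 4   # = E(A|B>A) + E(A|B<A)/4
print(E_A, E_A_given_smaller, E_B)
```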

Those are two different explanations of what went wrong. As I said, you can find both of them, either written more compactly still in probability notation, or written out in many many words in philosophy papers.

But there is also a third possibility (and who knows, maybe more!). Perhaps we are initially so totally ignorant about the amount x that we don't want to use a proper probability distribution to represent our total lack of knowledge. Many writers say that if we know nothing about a positive number then we know nothing about its logarithm, which is an arbitrary real number (anywhere between minus infinity and plus infinity). Then we should use a uniform probability distribution over all of the real numbers to express our prior beliefs about log(x). This is the same as using the probability density proportional to 1/x to represent our prior knowledge about the unknown positive number x itself. This is a bit tricky: we go outside of conventional probability theory, since the function 1/x integrates to infinity... Still, one can try to turn the handle and calculate anyway. When we do this, it turns out that our beliefs about whether or not a is the smaller of the two amounts are independent of the actual amount a itself! So maybe the writer was so totally ignorant about the amount x that indeed, whatever he happened to see in the first envelope, he would still believe it equally likely to be the smaller or the larger of the two! Or, whether or not he was told that his envelope contained the smaller of the two amounts (or the larger), it wouldn't change his beliefs about what's in there! Then step 6 is correct, and in step 7 he is using somewhat careless notation, but what he means is E(B|A=a) = 2a P(B > A | A=a) + 0.5a P(B < A | A=a) = 2a · 0.5 + 0.5a · 0.5 = 5a/4. In shorthand notation commonly used by probabilists, E(B|A) = 5A/4. So now we still have a paradox. This seems to tell us we should switch, whatever is in the first envelope, and we'll be better off; but that is absurd. Well, it is not a paradox after all, but then we have to work a little further. If E(B|A)=5A/4 then, taking expectations again on both sides, we find E(B)=5E(A)/4.
But we know by symmetry that E(B)=E(A). This seems to be a contradiction.

But it isn't! There is a solution such that both E(B)=5E(A)/4 and E(B)=E(A) are true: E(A) is infinite, and E(B) is infinite too. And that is actually what our prior beliefs are saying. According to such total ignorance of the amount in the first envelope, the average amount in there is infinite. Now we see that indeed we only have a paradox - an apparent contradiction - not a real contradiction. The fact is that if E(A) is infinite, then whatever amount we found in the envelope, we would be disappointed compared to the expectation value. When expectation values are infinite, they are not good guides to decision making. The argument seems to show that exchanging envelopes increases the expected value of what's in them. But it doesn't, and can't, since the expected amount in the first is already infinite.
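A closely related, proper prior with infinite mean, Broome's well-known example P(x = 2^n) = 2^n/3^(n+1), shows the same phenomenon without any improper prior at all: whatever amount the first envelope contains, the other envelope looks better in expectation by a factor 11/10. A sketch (support truncated at n = 40 purely for computation):

```python
from fractions import Fraction

# Broome-style proper prior (truncated at n = 40 for computation):
# P(x = 2^n) = 2^n / 3^(n+1), which sums to 1 and has infinite mean.
N = 40
weight = {2**n: Fraction(2**n, 3**(n + 1)) for n in range(N + 1)}

def E_other_given_a(a):
    """E(amount in the other envelope | first envelope contains a), for a = 2^k, k >= 1."""
    p_small = weight.get(a, Fraction(0)) / 2       # x = a: the other envelope holds 2a
    p_large = weight.get(a // 2, Fraction(0)) / 2  # x = a/2: the other envelope holds a/2
    z = p_small + p_large
    return (p_small / z) * (2 * a) + (p_large / z) * (a // 2)

for k in [1, 5, 10]:
    a = 2**k
    print(a, E_other_given_a(a) / a)  # 11/10 every time: switching always "looks" favourable
```

Since E(A) is infinite here too, the uniform advantage for switching is an apparent, not a real, contradiction, just as in the 1/x case.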

You write "It's true by construction that B=2A or B=A/2 each with probability 1/2". Actually, this is false if A is meant to represent the same value on either side of the "or". There is a difference between saying it is equally likely that we hold the larger or smaller envelope, and saying it is equally likely that the other envelope has either half or twice what is in the one we are holding. 108.192.17.5 (talk) 20:12, 16 October 2011 (UTC)
Sorry, it is true by construction. By the construction I told you about. I am using probability theory notation and I started by telling you my probability theory assumptions. X is the name of a random variable whose probability distribution is identically the same as our prior beliefs about x. For instance, if we were 95% sure that x is smaller than 1000, then we would have Prob( X < 1000)= 0.95.

Next, still in the language of random variables, Y=2X. Next, toss a fair coin, independently of the value of X and use it to define A=X or A=Y and simultaneously B=Y or B=X. Isn't that clear? The joint probability distribution of X, Y, A and B represents the totality of our prior beliefs about x, y, a and b. And if we imagine how our beliefs should be adjusted were we to be informed, for instance, that a < b , then this would translate into the probability theory operation of replacing the original joint distribution by the conditional distribution given A < B.
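This construction can be sketched in a few lines of simulation (a hypothetical uniform prior on {1, 2, 4, 8} stands in for the player's beliefs about x):

```python
import random

random.seed(0)

# Sketch of the construction: X from the prior (hypothetically uniform on {1, 2, 4, 8}),
# Y = 2X, then an independent fair coin decides which of the pair is A and which is B.
def sample():
    X = random.choice([1, 2, 4, 8])
    Y = 2 * X
    if random.random() < 0.5:
        return X, Y   # A = X, B = Y
    return Y, X       # A = Y, B = X

samples = [sample() for _ in range(100_000)]

# By construction, B = 2A or B = A/2 in every instance...
assert all(b == 2 * a or 2 * b == a for a, b in samples)

# ...and unconditionally each case occurs with probability 1/2.
freq = sum(b == 2 * a for a, b in samples) / len(samples)
print(freq)  # close to 0.5
```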

I was doing probability theory, not pure mathematics. The notation is specially designed to give us a mathematical language in which we distinguish between the actual values of things, and our beliefs about them, and also allows us to handle in a very convenient way how our beliefs would logically be adjusted were we to be given further information. Richard Gill (talk) 20:38, 16 October 2011 (UTC)

Fellow editor iNic is fond of emphasizing that the Two Envelope Problem is a problem in subjective Bayesian decision theory. In other words, it is about decision making under uncertainty, when we express uncertainty in terms of probability, and the aim is to make the best decision in terms of expectation value - expectation value with respect to the probability distribution representing our uncertainty. The word "Bayesian" is added to emphasize that on obtaining new information we should update the probability distributions representing our uncertainty by probabilistic conditioning, for instance making use of Bayes' rule if appropriate. The writer of the TEP argument certainly appears to be a subjective Bayesian. So, let's assume this is the case, and find out if we can make sense of his derivation by going carefully through it in such terms. Then we have to decide whether at steps 6 and 7 we are talking about conditional or unconditional probabilities, and conditional or unconditional expectation values, and we have to do it in a mutually consistent way. Some writers, e.g. Schwitzgebel and Dever, and apparently Falk too, seem to think that the writer is going for an unconditional expectation. Others think he is going for a conditional expectation. But in both cases the argument is wrong, since in both cases the author is assuming that information about a wouldn't affect our beliefs about which envelope has the smaller amount, or equivalently, that information about which envelope has the smaller amount wouldn't affect our beliefs about the amount in, say, the first envelope.

Well, there is just the extreme possibility that the writer was a Bayesian using an improper prior with density 1/x to represent the most total ignorance about the amounts in the envelopes you can imagine. So ignorant that his beliefs about x are the same as his beliefs about y. In that case his ignorance about x is so sublime that the expected value of x according to the relevant probability distribution is infinite, and the paradox is only a paradox (and not a contradiction) for slightly different reasons. Richard Gill (talk) 21:00, 16 October 2011 (UTC)

Sorry, but if I follow your construction properly and I am properly understanding the reasoning that is not specifically included, then I believe it commits exactly the same error of equivocation; it is only slightly obfuscated by extra notation. Specifically, it seems you must be following this particular train of thought:
(E1) (probability 1/2): A=X => B=Y (by symmetry) => B=2X (by Y=2X) => B=2A (by A=X)
(E2) (probability 1/2): A=Y => B=X (by symmetry) => B=(1/2)Y (by Y=2X) => B=(1/2)A (by A=Y)
Now, taken separately, these are both true, internally consistent deductions. However, E1 is only true in a world where A=X, while E2 is only true in a world where A=Y, which, by construction, also implies A=2X. I.e. the very logic in E1 only follows if A=X, and that in E2 only if A=2X! Therefore it is mathematically incorrect to use the same symbol A in these statements when taken together, since the A's do not both refer to the same thing. It does not matter if A is a random variable, a fixed quantity, a function, or some kind of cow; we are not allowed to use it to mean different things in different parts of one and the same equation. Never, ever, ever. Not even on holidays.
That said, it is perfectly possible that people are confused by the two envelope problem for other reasons than equivocation, and if the other resolutions given help people think more clearly and avoid errors, that's wonderful. All I ask is that the resolution based on equivocation be included. For many people, it is enlightening. Jkumph (talk) 03:36, 17 October 2011 (UTC)
I don't see what you mean by saying "it is mathematically incorrect to use the same symbol to mean different things at the same time". If I use probability notation and write P(B=2A)+P(B=A/2)=1 I am using the same symbol twice in one mathematical expression whereas the two statements B=2A and B=A/2 are mutually exclusive. I hope you agree that formal probability theory is part of pure mathematics, even if we probabilists use special short-hand notation so as to make it easier (for experts) to rapidly understand what otherwise would be complicated statements.

I also do agree with you that one reading of the paradoxical argument is that the writer was trying to compute E(B) by using the absolutely true fact E(B)=E(B| B > A)P(B > A) + E(B| B < A)P(B < A). And that seems to be what Ruma Falk is assuming. Next, it is perfectly correct, whether we are on holiday or not, to write E(B| B > A) = 2 E(A | B > A). But now it goes wrong. The writer just replaces E(A | B > A) by A which is complete nonsense.

If you want to call this mistake of replacing E(A | B > A) by A an example of the cardinal sin of equivocation, it is fine by me. Indeed the writer doesn't distinguish between (1) his a priori beliefs about a (which are encapsulated, in my notation, by the probability distribution of the carefully defined random variable A), (2) how his beliefs would alter if he were informed that actually b > a (encapsulated by the distribution of A given B>A), (3) the same when a < b, and (4) the actual unknown true value a itself.

So yes, one resolution of the paradox (popular among many philosophers) is that this was the route the writer was taking, and yes, he commits the cardinal sin of equivocation. However it remains a fact that a whole lot of other authorities think that the writer was taking a different route and made a different mistake. And there are also a few authorities who think he was taking yet another completely different route and that his argument goes wrong somewhere later (though still it would have been better if he had been more careful with his notation, in particular by distinguishing between random variables and their values, and by explaining at each step what probability calculus rule he is using and why he thinks it is applicable). Richard Gill (talk) 07:42, 17 October 2011 (UTC)

My initial response follows. But let me also say: we can argue the details until the cows come home---and surely I'd like to; it is fun. However, I'd also like to have the simple resolution in the article. Since it seems in keeping with your most admirable Anna Karenina principle, will you assent to this?
In the context of probability notation as customarily understood, I would say the Bs effectively do not actually mean different things in your example, so there is not a problem. The equation merely uses shorthand notation to refer to the probability of an imaginary event in which B equals 2A and another imaginary event in which B equals A/2. It is not as if the equation ever actually, outside of the imaginary events themselves, assigns different values to B. This comes down to semantics, I suppose. Perhaps one could also say there are two "meanings": the meaning in the context of the event, and the meaning in the context of the equation as a whole. My assertion applies to the latter type of meaning.
My original objection had to do with your assertion that B=2A or B=A/2 with equal probability 1/2. I said this is false if A is understood to be the same value, i.e., that it is not allowed to vary between the two events. You then stated it was true regardless, by construction. I then showed these two statements are true together only if A is allowed to vary between the events, so that A is not the same value. I thus took this as an error of equivocation on A. It is also possible that you simply glossed over my caveat about the statements being false together if A is the same value.
Now, if you are always careful to use A in a context such that it is understood to vary, as in E(A|A>B) versus E(A|B>A), then certainly I cannot protest. My point is just that, in step (7) of the TEP reasoning, A is used without any such context, therefore equivocating, as you have noted, between different possible beliefs about A in different possible situations. Moreover, this possibility, which you call the second, seems like the most common possible error of reasoning. It follows directly from step (6) so long as the two As are not clearly distinguished there, as they aren't.
The first possibility you mention, that in step (7) the writer is attempting to calculate an expected value given some actual A=a, looks like a simple non sequitur on the part of the writer. In particular there is no justification for the supposition that B=a/2 or B=2a with equal probability 1/2. We don't need to discuss priors or anything else in this case---the justification for these probabilities is simply missing.
Finally, maybe I am missing something, but your reasoning in the third possibility seems either incomplete or flawed. You derive what appears to be a paradox, and then resolve it by simply stating "when expectation values are infinite, they are not good guides to decision making." This does not, to my mind, satisfactorily resolve the paradox. It seems more likely that there is an error in the reasoning which produces E(B|A=a)=(5/4)a in the first place. Specifically, I take issue with the statement E(B|A=a)=2a P(B > A | A=a) + 0.5a P(B < A | A=a). Ignoring the a's in the P(.)s, what do each of the a's refer to in their appearances in the right hand part of the equation? It looks like the first a refers to some a in the case that B>A, and the second to some a in the case that B<A. But if B>A, is it not then necessarily true that b>a, and vice versa? If so, then the 'a' in 2a cannot refer to the same value as the 'a' in 0.5a; in one case b>a, in the other b<a. This seems essentially like the same problem dealt with in the second resolution, some kind of equivocation. Jkumph (talk) 19:58, 17 October 2011 (UTC)
When *I* talk about a I am talking about the fixed but unknown amount in the first envelope, in one specific instance. When I talk about v I am talking about an amount of money which we could imagine is in there, in the same instance. When I talk about A I am talking about my uncertainty about a in the same specific instance. Nothing is varying. There are no repetitions. I am trying to carefully distinguish between a number of different things by using an appropriate notation.

The writer of TEP appears to be a careless or amateur mathematician since his notation is ambiguous and/or he makes mistakes. Or he is a very sophisticated mathematician and the "mistake" is somewhere else, but then he is hiding from the reader the fact that he's making assumptions. So sure, this very likely is all about equivocation. Not carefully distinguishing different concepts or things or different levels.

About Falk's solution: my personal opinion is, yes, this is *a* solution, because it's a legitimate way to understand the intention of the writer, and in that case yes, then this is where they go wrong. But it's not the only solution because there are other legitimate ways to understand the intention of the writer and then it was somewhere else where they went wrong. By legitimate I mean both that I find them reasonable and also that it is a fact that many authorities have taken them to be the intention. (It seems I'm the only guy who says that there is not one unique solution because there is not one unique legitimate (reasonable) intention of the writer). Since I come from probability I have a personal bias to a different interpretation than Falk's. And in my favour, the writer is making a smaller number of mistakes according to my reading than he makes according to Falk's.

The philosophers tend to go for Falk's reading: more mistakes but the writer is not using quite such sophisticated probability. Remember, the puzzle was invented by mathematicians experienced in probability theory for the express purpose of teasing amateurs! Sorry; for the express purpose of getting them to think. Which requires distinguishing concepts which ordinary people are not used to distinguishing.

About the third possibility. It's a fact that there exist prior distributions on x such that E(B|A=v) > v for all v. The improper prior 1/x in particular, but also quite a few proper priors. So no, there is not an error in the reasoning which produces - with the rather special prior in question, it's called the Jeffreys prior and everyone used it around the time TEP was invented - E(B|A=v) = 5v/4 for all v. It's straightforward probability calculus, though indeed, we are going slightly outside of conventional probability calculus by using an improper prior for our initial beliefs about x. Richard Gill (talk) 13:39, 18 October 2011 (UTC)

Richard, what is your opinion about the proposed 'elementary solution' to the problem? Martin Hogbin (talk) 22:38, 16 October 2011 (UTC)

Sure, you can say that this is all about the sin of equivocation: using the same symbol to denote different things. But it's too easy to say that. And you cannot say that *this* is where the argument goes wrong, because you can't tell what the writer was trying to do.

The TEP argument is an argument about how the player's subjective beliefs in the amounts of money in the two envelopes would change under hypothetical information. If you want to explain what goes wrong, you had better make use of a language which allows you to make the necessary distinctions. And do the calculus properly. So let's start by distinguishing between the actual unknown amounts a, b, x, y; and your prior or initial beliefs about them, expressed as a joint probability distribution of random variables A, B, X, Y. Now we can talk about how those beliefs would logically have to be modified when you imagine knowing which envelope contains the smaller amount, or if you imagine knowing the amount in the first envelope.

The writer appears to be computing an expectation value. It's not clear if he wants to calculate the unconditional expectation E(B) and compare it to E(A), or if he wants to compute the conditional expectation E(B|A=v) and compare it to v, for arbitrary v (possible values of a). In both cases one can try to compute what you want to know by splitting over the two possibilities A > B and B > A. In the first case you write E(B)=E(B | A > B)P(A > B)+E(B | B > A)P(B >A). The two probabilities are both 1/2. For the first of the two conditional expectations, we can write E(B | A > B)=E(A/2 | A > B)=E(A | A > B)/2. Now it looks as though the writer, if indeed he is following this route, has simply replaced E(A | A > B) by E(A), though he actually only writes A (without any expectation operation) on the right hand side of line 7. So: he is using the same symbol A to stand for the actual unknown amount in the first envelope, what I would like to call a; and for its expectation value according to his prior beliefs about x; and for its expectation value according to his prior beliefs about x but pretending that someone informed him that the first envelope contained the larger amount; and for pretending that someone informed him the first envelope contained the smaller amount. Threefold equivocation, I would say.

Alternatively he was after E(B|A=v), that is to say, what according to his prior beliefs the average amount in the second envelope would have been, if someone had informed him that the amount in the first was actually v. (I want to distinguish any possible imagined value v of a from the actual fixed but unknown amount a itself.) OK, then he is entitled to write E(B|A=v)=E(B|A=v,A < B)P(A < B|A=v)+E(B|A=v,B< A)P(B < A|A=v)= 2v P(A < B|A=v) + v/2 P(B < A|A=v). But now he replaced those two conditional probabilities (of which envelope has the smaller amount, supposing you learnt that the first envelope contained v) each by the unconditional probabilities 1/2. But those two conditional probabilities of necessity must depend on v. I don't think that this is a problem of equivocation. This is just a problem of mixing conditional and unconditional probabilities.

You can't see which route the writer was trying to take, so you can't say which mistake was made. Falk and also quite a few of the philosophers seem to believe he took the first route, but then he is mixing up four things, not two: the unknown value a and our beliefs about it in three different situations - without further information, with the information that the first envelope contains the larger amount, and with the information that it contains the smaller amount.

Most of the mathematicians and probabilists imagine that the writer screwed up taking the second route.

But there's a further possibility which has a long and very respectable history: because he knows *nothing* whatsoever about x, being given information like which envelope has the larger or smaller amount, or how much might be in one of them, doesn't change his initial beliefs at all! He still knows *nothing*. Whatever value he's told, it's still equally likely to be the smaller or the larger. Well, that's quite plausible, and then this is a paradox of infinity.

Sorry I can't give you a short answer. Different people will be happy with different solutions. If you know some basic probability theory, it is all terribly simple. (But still, there is not one solution: there are at least three!) If you don't, it is all terribly confusing. TEP and its friends were invented by mathematicians to tease non-mathematicians. Richard Gill (talk) 23:47, 16 October 2011 (UTC)

Discussion 2

Richard, my question was not about the answer to the problem in general but about the specific 'elementary solution' presented at the top of this section. This is intended to be based on Falk's 2008 article but in my opinion it is too simple and does not properly represent the points that the article is making. I would be interested in hearing your comments on my half-baked thoughts on Falk's article here.

Also, the overall point you have made needs to be made somewhere in the article. Should we add something like this?

The proposed line of reasoning is not rigorously defined by the simple English language statements and thus, before the error in the reasoning can be found, it is first necessary to decide in precise mathematical terms exactly what the proposed reasoning is intended to be. The proposed line of reasoning can be interpreted into precise mathematical statements in several different ways with the erroneous step being at different points in different cases. There are therefore several correct resolutions to this paradox in the literature depending on exactly what the proposed line of reasoning is taken to be. Martin Hogbin (talk) 08:35, 17 October 2011 (UTC)

Yes! Small problem: find a Reliable Source which says just this (Wikipedia editors are not allowed to make their own, novel, synthesis). But I hope something like it can be found in Nickerson and Falk, which is a long paper surveying many different resolutions of the paradox. The short papers which propose one snappy solution (the author's favourite) never do this. It seems that outside of Nickerson & Falk, Nalebuff, and myself, no-one has ever published a synthesis / overview / encyclopedic / review paper on TEP. There are almost solely primary sources (new research contributions). Wikipedia editors are supposed to rely on tertiary sources if possible. (Standard university texts, neutral review articles with no pretense at supplying an original perspective.) The papers of Nickerson and Falk, Nalebuff, and me are somewhere between primary and secondary. Richard Gill (talk) 09:19, 5 December 2011 (UTC)

Why prior beliefs about a can't be independent of whether it's smaller or larger than b

Many articles on TEP present numerical examples illustrating this fact. There is a theorem in Samet, Samet and Schmeidler (2004), "One Observation behind Two Envelope Puzzles", American Mathematical Monthly, which implies that this is always necessarily the case, but the proof is a bit tricky (though not much more than one page), and it concerns a somewhat more general situation, so you have to do a tiny bit more work to turn it into the result you want for TEP and related paradoxes.

Here's a much shorter and much simpler proof.

First let's do it assuming that everything has a finite expectation value.

Start with the obvious fact E(A-B|A-B > 0) > 0. Which is the same as the statement E(A-B|A > B) >0. Which implies E(A|A > B) - E(B|A > B) > 0 (here I use finite expectation values). Or in other words, E(A|A > B) > E(B|A > B). By symmetry, E(B|A > B) = E(A|B > A). So we get E(A|A > B) > E(A|B > A).
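The chain of (in)equalities above can be checked empirically. Here is a minimal sketch in Python; the prior on the smaller amount x (uniform over ten powers of two) is my arbitrary choice for illustration, not anything from the discussion above:

```python
import random

# Empirically check E(A | A > B) > E(A | B > A) for an exchangeable
# envelope pair (A, B): the smaller amount x is drawn from an arbitrary
# illustrative prior, the larger amount is 2x, and A is a fair pick.
random.seed(0)
a_when_bigger, a_when_smaller = [], []
for _ in range(100_000):
    x = 2 ** random.randrange(10)                   # smaller amount
    a, b = random.choice([(x, 2 * x), (2 * x, x)])  # random envelope choice
    (a_when_bigger if a > b else a_when_smaller).append(a)

e_a_given_bigger = sum(a_when_bigger) / len(a_when_bigger)
e_a_given_smaller = sum(a_when_smaller) / len(a_when_smaller)
print(e_a_given_bigger, e_a_given_smaller)
# A is not independent of the event {A > B}:
assert e_a_given_bigger > e_a_given_smaller
```

Whatever proper prior is plugged in, the two conditional means come out different, which is exactly the non-independence claimed in the proof.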

Now, if the expectation of A is different when A > B than when A < B, it must be the case that the distribution of A in those two cases must be different. In other words, the random variable A is not independent of the event A < B . Which also tells us that the event A < B is not independent of the random variable A. In other words again, the conditional probability (given A = a) that A < B depends on a.

If A and B have infinite expectations then first of all, choose some function g which smoothly and strictly monotonically maps the positive real numbers onto some bounded interval of numbers; for instance, the arctan function would do. Apply this function to both A and B, and then apply the previous argument to the transformed variables instead of the originals. The final conclusion is that transformed A is not independent of the event: transformed A smaller than transformed B. But this event is the same as the original event: A smaller than B. And finally, transformed A being not independent of this event implies original A is not independent of the event.

So the conclusions we just saw about non independence (or equivalently, about dependence of conditional probabilities) in the case of finite expectation values are also true in general.

By the way, exactly the same little fact lies at the heart of all the other paradoxes related to Two Envelopes Problem: Kraitchik's two neckties problem; Littlewood's two-sided cards problem; Schroedinger's two-sided cards problem. Richard Gill (talk) 21:22, 16 October 2011 (UTC)

Since I'm doing OR on TEP, and have an own POV on the problem, I was trying to keep away from TEP on wikipedia! But still, in case anyone is interested, I just put a revised version of my manuscript (work-in-progress) on TEP on my homepage, [6]. Richard Gill (talk) 07:51, 17 October 2011 (UTC)

Reactions to Jkumph

You wrote "The first possibility you mention, that in step (7) the writer is attempting to calculate an expected value given some actual A=a, looks like a simple non-sequitur on the part of the writer. In particular there is no justification for the supposition that B=a/2 or B=2a with equal probability 1/2. We don't need to discuss priors or anything else in this case---the justification for these probabilities is simply missing."

I don't understand. In my notation, and by my understanding, it is a given fact that b=a/2 or b=2a with equal probability 1/2. That's just the same as saying that a=x or a=2x with equal probability 1/2. But of course, according to this reading, he should be finding conditional probabilities, not unconditional probabilities. You may find it a non-sequitur but other people find it the obvious diagnosis of the mistake the writer was making. According to one reading he goes astray by four-fold equivocation; according to another reading he goes astray by a small oversight, a very typical student error in fact, which you would rather call a non-sequitur. It's a mistake, yes. It's a natural mistake to make in this context. You can call it a non-sequitur but that is also an error, just as an equivocation is an error. But don't ask me. Read the sources. Lots of writers go for this diagnosis. And if you go back to Schrödinger, Littlewood, Kraitchik, Nalebuff, you'll find them all calculating conditional expectations of the contents of the other envelope given what might be imagined in the first, or equivalent things in their own contexts. And this is the mistake that is built in. Either by using marginal probabilities instead of conditional, or by using the improper prior which makes the marginal probabilities equal the conditional. Richard Gill (talk) 14:21, 18 October 2011 (UTC)

I'm not saying that Falk or the philosophers are wrong. At some point TEP went viral, and the problem is told without its history. And new people see different things in it and write new papers about that, in new fields. Just like the Monty Hall Problem. In fact, it's a wonderful and positive phenomenon. This is living culture. But historically the problem started with my third interpretation (Schrödinger), then went to my second (Kraitchik, Littlewood, Nalebuff), on the way got picked up by Martin Gardner, who never could understand it - he was an amateur pure mathematician, not a probabilist, by the way - and at some point got into the philosophical literature. And into the maths pedagogical literature. From Nalebuff it went to the decision theory literature and the philosophy of economics. In that context it is all about priors and about expectations and about infinite expectations.

At each step in the long story, some of the prehistory is forgotten, the context and common understanding shifts, the problem bifurcates. That's why it's hard to write an article on TEP or MHP. Richard Gill (talk) 14:28, 18 October 2011 (UTC)

Richard, I would be interested in your comments on my solution User:Martin_Hogbin/Falk_2008 Martin Hogbin (talk) 18:06, 18 October 2011 (UTC)
Thanks for your continued dialogue. Perhaps I'm missing something about your description of the first possibility; I suggest shelving it for a moment. What troubles me much more is the third possibility. Before, you wrote:
(1)   E(B|A=a) = (1/2)(2a) + (1/2)(a/2) = 5a/4
Now if I am understanding where this is coming from, it is from the general form:
(2)   E(B|A=a) = b1·P(B>A|A=a) + b2·P(B<A|A=a)
Where b1 is the value of B when B>A and A=a, b2 is the value when B<A and A=a, and P(B>A|A=a), P(B<A|A=a) are the respective conditional probabilities for B>A, B<A when A=a.
If that is correct, then can't we also rely on the fact that b1=2x and b2=x, then substitute back into (2) (including the probabilities, each 0.5), to get:
(3)   E(B|A=a) = (1/2)(2x) + (1/2)(x) = 3x/2
Meanwhile, if (1) and (3) are both valid, since each is E(B|A=a), then we have:
(4)   5a/4 = 3x/2
This however, is a blatant contradiction, since for any a, we know a=x or a=2x. As far as I can tell, I've followed the same steps that you followed when you substituted the values of B in terms of a. Therefore, unless I'm missing something, the contradiction stems from an error in those steps (one which looks like a form of equivocation to me).
Jkumph (talk) 20:16, 18 October 2011 (UTC)
Ha! I think I just answered my own question. The reason the (1) is not equivocation while (3) is equivocation is because in each, A is fixed to a, so it does not differ among the possibilities, but B and X are not fixed, and actually do differ among the possibilities. So we can express the expected value in terms of a, but we cannot express it in terms of x, because no such fixed value is presupposed by the expected value (we would have to write E(B|X=x) for that). So it looks like I misunderstood things pretty badly. Mea culpa. That said, I have to say I still don't feel entirely comfortable with the resolution provided; I'll have to think quite a bit more. It seems utterly bizarre that merely knowing a fixed value for A (doesn't matter what!) should make us prefer the other envelope. But I guess that's the point. Jkumph (talk) 01:53, 19 October 2011 (UTC)
Very good, Jkumph! What one would generally expect is that if we looked in envelope A and saw a small amount a in there, we would tend to switch, and if we saw a big amount there, we should stay. And that is usually true, for realistic prior beliefs about the amount of money the guy has who is offering us this game. However, it is a mathematical fact that if you put strong enough prior beliefs on arbitrarily large values of x, a so-called fat-tailed distribution, then it can indeed be the case that whatever a you actually saw, you would - according to the expectation value of B given A=a - be advised to switch. Since you'd get the same advice *whatever* you saw in the envelope, you might as well not look and just switch anyway. Which is obviously nonsense. So ... where's the mistake? It's in the fact that in this case your expectation value of x is necessarily infinite, and whatever you actually got, in either envelope, you would be disappointed. Of course. Richard Gill (talk) 09:11, 19 October 2011 (UTC)
Here is an example, sort of. A truncated fat-tail distribution. So no actual infinities, but for practical purposes the same! Suppose for some reason you believe that the guy offering you the challenge actually does so by, in advance, doing the following. He chooses completely randomly an amount x from the following list: 1, 2, 4, 8, 16, 32, ..., 2 to the power 63 dollars. (Remember the grains of rice on a chessboard?) He puts x in one envelope and 2x in the other. You pick one envelope at random. Now if you looked in that envelope and saw a=1 you know you should switch; if you saw a=2 to the power 64 you'd hold on to it; but if you saw anything else, it would indeed be equally likely that b is half or twice what you saw. So if you really were interested in maximizing the expectation value of what you get, you would switch in all those cases. In other words, there's only a 1 in 128 chance that you wouldn't want to switch. And anyway, what's the point of getting 2 to the power 64 dollars instead of 2 to the power 63? So the rational guy who uses expectation value to decide whether or not to switch will probably not bother with looking in the envelope (perhaps he has to pay to get a peek, first!) but just switch anyway ... which is obviously stupid!

Put this little example on your computer and do some explicit calculations.

Do you think a real world casino could ever offer this game? What would be the entrance fee to be allowed to play (i.e. pick an envelope)? How much extra would you be prepared to pay in order to take a peek in envelope A before deciding whether or not to switch?
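For anyone taking up the invitation to put the example on a computer, here is one possible exact calculation (a sketch; the variable names are mine, not from the discussion above):

```python
from fractions import Fraction

# Exact analysis of the truncated "chessboard" game: x is uniform on
# 1, 2, 4, ..., 2**63 (64 values); one envelope holds x, the other 2x,
# and envelope A is picked with a fair coin.
N = 64
p_small = {}  # P(A = a and A holds the smaller amount)
p_large = {}  # P(A = a and A holds the larger amount)
for n in range(N):
    x = 2 ** n
    p_small[x] = Fraction(1, 2 * N)
    p_large[2 * x] = Fraction(1, 2 * N)

values = sorted(set(p_small) | set(p_large))   # 65 possible contents of A
no_switch = []
for a in values:
    ps = p_small.get(a, Fraction(0))
    pl = p_large.get(a, Fraction(0))
    e_b = (ps * 2 * a + pl * Fraction(a, 2)) / (ps + pl)  # E(B | A = a)
    if e_b <= a:
        no_switch.append(a)

print(no_switch)           # only at the very top, a = 2**64, is holding better
print(p_large[2 ** 64])    # and the chance of seeing that amount is 1/128
```

In the middle of the range the conditional expectation is the familiar 5a/4, and only the single largest possible amount tells you to hold, matching the 1-in-128 figure above.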

You'll agree I hope that this example is not realistic. Just not possible in view of how much money there is in the world and what would happen if you happened to own all of it. That would immediately make money worthless, in fact.

The improper prior distribution comes in the limit from taking the amount x uniformly distributed over all integer powers of 2. This is the discrete version of the Jeffreys prior and it is very often used, whether deliberately or "by accident", as a prior probability distribution representing complete and utterly total ignorance about a positive number x. If you now get some observations which tell you about x, the improperness quickly vanishes. That's why this mathematical abstraction of "ignorance" is actually useful in Bayesian statistics. But in our case we get no data and stay with our prior (unless we peek). After we have peeked, we will believe it equally likely that the other envelope contains half or twice what we saw, whatever we did actually see. So we'll believe it equally likely that x is a or a/2. This is quite logical. We knew *nothing* about x in advance. When we take a peek, this is what we learn. Full stop. Richard Gill (talk) 09:25, 19 October 2011 (UTC)

Thanks. I am familiar with the properties of these distributions. In the case of a truncated distribution, then, strictly speaking, if the only goal is to maximize the expectation, then peeking is necessary, since there is at least one case in which switching is not beneficial. There it is easy to see how that constitutes extra information. In the case of the unbounded distribution, however, it apparently makes sense that one should switch regardless of the value discovered. In this case, I have trouble seeing what extra information is gained by knowing the amount in an envelope. I am inclined to wonder if the situation described is not just materially impossible, but logically impossible---though it is difficult to see where the impossibility lies, if it does. Jkumph (talk) 18:25, 20 October 2011 (UTC)
It is very easy to construct proper probability distributions for your prior beliefs about x (for a Bayesian story) or for the way x is generated (for a frequentist story) such that E(B|A=a) > a for all possible a. For instance, the amount of money is 2 to the power N where N has the geometric distribution with parameter p=1/3, i.e. you toss a coin with success-chance 1/3 till the first time you have a success. That number n is the realized value of N, the number of "failures" before the first "success". Which could be 0, 1, 2, ... but certainly will be finite. And then you put 2 to the n dollars in one envelope, and double that in the other.
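The geometric example can be worked out exactly; a sketch (the cut-off at k = 11 and the names are mine), which shows E(B|A=a) coming out as 11a/10 for every a > 1:

```python
from fractions import Fraction

# Exact conditional expectations for the geometric-prior example:
# N ~ Geometric(p = 1/3), smaller amount 2**N, larger amount 2**(N+1),
# envelope A picked with a fair coin.
p = Fraction(1, 3)

def prior(n):
    """P(N = n) = (1 - p)**n * p."""
    return (1 - p) ** n * p

for k in range(12):                    # the amount seen is a = 2**k
    a = Fraction(2 ** k)
    if k == 0:
        e_b = 2 * a                    # a = 1 must be the smaller amount
    else:
        w_small = prior(k) / 2         # A = 2**k is the smaller amount
        w_large = prior(k - 1) / 2     # A = 2**k is the larger amount
        e_b = (w_small * 2 * a + w_large * a / 2) / (w_small + w_large)
    assert e_b > a                     # switching always looks favourable
    print(k, e_b / a)                  # 2 for k = 0, then 11/10 for k >= 1
```

So whatever a you see, the conditional expectation of the other envelope is strictly larger, exactly as claimed; yet the unconditional expectation is infinite, which is where the paradox bites.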

The point of the story, in this case, is that you indeed know for sure (i.e. without looking in your envelope) that if you did look in there, and computed the expectation of the amount of money in the other given what you saw in the first, it would be larger than what you saw! However much that happened to be!

So it seems like you should switch anyway. Which is obviously nonsense.

Yes, it is nonsense, and that's because without looking at all, the expected amount in either envelope is infinite, so whatever amount you did actually see, you would be disappointed. Read my draft paper for further analysis of what this all means. You have to understand that expectation values stand for long run averages. But in the case of fat-tailed distributions, the long run takes a very long time to kick in, or even, it never kicks in. As Keynes said: in the long run we are all dead.

Another point of the example I just gave you is that if you try to figure out how to make it work "in real life" - e.g. in a real casino - you have to truncate it very severely indeed, if the amounts of money are to remain within the bounds of reality. And we have to remember that in the real world the value of each dollar you have in your pocket depends a bit on how many dollars people believe are "out there" and how many of them you have. If you had all the dollars in the world, but nothing else, the dollar would become worthless. Real economies are finite. Indefinite growth is a fiction. It's a Ponzi scheme which is bound to collapse. Smart people know this and get rich in the beginning and quit before it's too late. Internet bubble, first banking crisis, present Euro crisis .. it's all the same. Richard Gill (talk) 07:50, 24 October 2011 (UTC)

Another issue that still troubles me is that, even if we don't presume the existence of a specific value of X by writing something like E(B|X=x), it seems something like this is implicitly assumed by the setup of the problem. That is, there is a smaller value, call it s. Whether we know it or not, it is fixed, and in one of its appearances in equation (1) above a=2s (since it follows from B<A), and in another appearance a=s (since it follows from B>A). I'm not disagreeing with your math, but suggesting that, if we add this assumption (that there exists some unknown but fixed smaller value), then we end up with a problem with the reasoning.
So a lot may depend on how we conceive of the problem. If for example, we assume the amounts in the envelopes are determined before the player has a chance to choose between them, then it seems we are stuck with the assumption that such a fixed s exists, and any logic that ignores this is in danger of making a serious error. The only alternative I can think of is to imagine that the exact amounts are determined once an envelope is opened, with the person running the game only determining ahead of time which envelope will contain X and which 2X. (Perhaps upon opening he flips a coin until it lands on tails to compute X.) Now, once the envelope is opened, a fixed known value for A is determined. But also a fixed unknown value for X is determined. So it seems again we run into the same problem. What especially confuses this matter is the subtle differences between "determined" (by who/what?), "fixed", and "known". Jkumph (talk) 18:54, 20 October 2011 (UTC)
Words like "fixed" and "known" are indeed tricky. But the way the story is told it is clear that two amounts of money are fixed in advance by someone, and the player picks one of two envelopes at random. First Anna Karenina bifurcation: is the player - the guy who convinces himself that he should switch indefinitely - using probability in a frequentist sense, or in a Bayesian sense? Do his probability computations refer *only* to the 50/50 chance that he has the smaller or the larger amount; the two amounts - I like to call them x and y - being fixed, but unknown to him. Or do they refer *also* to his initial uncertainty about what x might be? ie to his subjective beliefs about the financial situation and the psychology of the guy who is offering him this game? In the latter case, we get a probability distribution over the amounts of money which we find conceivably might be in the envelope with the smaller amount. We can do calculations "as if" this was actually a frequentist distribution, ie as if we believe that the "host" is actually determining x by choosing it at random according to this distribution. Richard Gill (talk) 08:00, 24 October 2011 (UTC)
In fact just like Martin always says about Monty Hall Problem, no-one who thinks seriously about TEP can avoid pondering on the meaning of probability.

Almost everyone agrees on the calculus of probability! But no-one agrees what it actually means. When you apply it to a real problem you have to consistently fix a meaning for all the different components of the problem. And make that explicit. Other people may like to use other meanings. Their analysis might well be different. I think that this is what is really going wrong: the writer is confusing different kinds of probability. That's the fundamental equivocation here! Richard Gill (talk) 08:03, 24 October 2011 (UTC)

By the way I sent Falk my draft paper asking for comments. No reaction yet. Maybe Jkumph you could ask her "privately" if she got it and what she thought of it?! (Up to you whether or not you tell me what comes out. But it could be interesting!). Richard Gill (talk) 08:09, 24 October 2011 (UTC)

Consensus?

Although Richard Gill and I are still having fun working out the finer points, we both believe it reasonable to include some form of the simple resolution, a la Falk. That is, of course, with the caveat that it only diagnoses one particular error that could be made. Based on this, I propose what iNic suggested earlier:

(1) We add the simple resolution back as the "first solution", with the caveat that this is just one diagnosis.

(2) We remove or modify the "introduction to resolutions" section.

(3) We renumber the currently named "first" and "second" resolutions to "second" and "third".

Jkumph (talk) 19:44, 18 October 2011 (UTC)

The problem is that the proposed simple solution does not reflect what Falk actually says. It is just not that simple. Martin Hogbin (talk) 21:47, 18 October 2011 (UTC)
So we disagree on this matter; Falk told me she was pleased with it, as I mentioned. The simplest way to resolve that is to ask Prof. Falk. Can you please send me your email address, so we can resolve this? — Preceding unsigned comment added by Jkumph (talkcontribs) 23:04, 18 October 2011 (UTC)
Are you sure that Falk actually said that the exact wording quoted above was good? I am not going to say that it is actually wrong, but I think it could be worded better; it is not quite clear exactly what this solution says the error is in the proposed line of reasoning. I will quote it again below with my comments.
I would be happy for you to contact me by email. Just go to my user page and click 'E-mail this user' in the left hand column. I would be delighted to hear Falk's thoughts on this discussion, especially as we have the task of not just finding the flaw in the argument but clearly explaining this to our readers. Martin Hogbin (talk) 08:26, 19 October 2011 (UTC)
Great. She told me she had looked through the modified version, and her only critique was that I had failed to include Nickerson in the 2006 paper citation. That said, it is perfectly possible that she might prefer a different reading. She recently forwarded me her 2009 article and suggested including it---which I take to mean the whole thing. It's short, but probably we'll need to extract some portion of it so as not to overload this article.
I couldn't find your email on the user page, but you may email me at code at cs dot uchicago dot edu. Jkumph (talk) 18:19, 20 October 2011 (UTC)
It's not surprising that Falk would like her solution up front and suggests adding references to her other papers. Anyway, we should not be referring to primary but to secondary if not tertiary literature. Is Falk's solution referred to in standard undergraduate texts? Is it generally accepted and reproduced by other writers? Richard Gill (talk) 18:32, 20 October 2011 (UTC)
I don't know, but that seems like a rather high bar with such a contentious paradox---have all other resolutions provided been shown to be referred to in standard undergraduate texts? It seems like we should rather include all the relevant resolutions by authoritative sources. I thought we had agreed (1) that the simple resolution should be presented and (2) that Falk's articles represented one authoritative presentation of it. Jkumph (talk) 19:19, 20 October 2011 (UTC)
BTW, just so you do not misinterpret Falk's motives. First, she didn't ask me to include another reference, but rather just to correct one where I had left out the co-author. Second, it is unclear what she meant in forwarding the whole paper and saying "please include this". I took it to mean that she just likes this expression of her resolution best, and also, perhaps, that she does not understand how Wikipedia works. There is not enough space to include the whole thing, even if it is short, only a brief summary. Third, it was I who contacted her to suggest that her resolution ought to be included; she is not attempting to use Wikipedia as a venue for self-promotion. Jkumph (talk) 19:41, 20 October 2011 (UTC)
Understood, sorry for any unintended suggestions. But TEP does have a problem according to Wikipedia criteria of "reliable sources". There hardly exist authoritative surveys of the whole story of TEP and all the solutions out there. There only exist accounts of particular solutions, that of each author. Secondly, there is a statistics literature, an economics literature, a philosophy literature, an educationalist literature... And each tends to see different interpretations and hence offers different solutions. If we want to survey all important solutions, we have to stand at a higher level and understand the different interpretations. Do we use the same notation for each interpretation, or different notations? Either choice is possible. But we must also beware of equivocation. So if we keep changing notation, we must every time explain exactly what everything is supposed to mean in the present section. Richard Gill (talk) 06:31, 21 October 2011 (UTC)

Proposed elementary solution

[The error] follows from the recognition that the symbol A in step 7 is effectively used to denote two different quantities, committing the fallacy of equivocation. This error is brought into relief if we denote by X the smaller amount, making 2X the larger amount, then reconsider what happens in steps 4 and 5:

4. If A=X then the other envelope contains 2A (or 2X).

5. If A=2X then the other envelope contains A/2 (or X).

Each of these steps treats A as a random variable, assigning a different value to it in each possible case (1). However, step 7 continues to use A as if it is a fixed variable, still equal in every case (2). That is, in step 7, 2A is supposed to represent the amount in the envelope if A=X, while A/2 is supposed to represent the value if A=2X. However, we cannot continue using the same symbol A in one equation under these two incompatible assumptions (3). To do so is equivalent to assuming A=X=2X, which, for nonzero A, implies 1=2.

... implies 1 = 2, or alternatively, that A is infinite! Richard Gill (talk) 09:13, 19 October 2011 (UTC)
Martin's comments

I have added numbers to the text for my comments below:

1 It is not clear what is meant here by 'assigning a different value to it in each possible case'. In the original line of reasoning there is no suggestion that we should assign values to A. We have chosen to consider here different values that A might take. Of course, what slips by in the original line of reasoning is that we have imposed two different conditions on what the value of A might be.

2 Nowhere in the original line of reasoning does it say that we are using A as a fixed variable (!), neither is it true that we must be doing this. It is perfectly acceptable to use a random variable in an expectation calculation in some cases, but not in this case. We must make clear the reason that we cannot do this.

3) I agree that we cannot use A in an expectation formula under two incompatible assumptions, but I do not agree that there is an assumption in the original line of reasoning that A takes two fixed values. The two incompatible assumptions are two applied conditions: that A is not the highest number and that A is not the lowest number.

An explanation should be as simple as possible but no simpler. I think that by trying to make this explanation really simple you have lost the plot a bit. I would be happy to work with you to try to come up with some better wording; if Falk would contribute, either here or by private email, that would be fantastic. Martin Hogbin (talk) 08:26, 19 October 2011 (UTC)

It is reasonable to think of x and y=2x as two fixed (though unknown) values. Let's not be Bayesian at all. We don't put a probability distribution on them. Actually, this is the first possible bifurcation when we try to reconstruct the intention of the writer in order to see where he goes astray. Different interpretations, different points where the error creeps in.

OK, let's be non-Bayesian. The only randomness is in our choice of first envelope. The amount in the first envelope can be thought of as a random variable, since it can equal either x or y, and each of these possibilities has probability half. PLEASE let's distinguish between random variables and fixed (whether known or not) variables. PLEASE distinguish between conditional and unconditional expectations.

In the present context I will therefore use the notation A for the random variable which takes the values x and y with equal probability 1/2, and B for the corresponding other.

Let's think about calculating the expectation value of B conditionally given A=a. (We can also try to calculate the expectation value of B without conditioning on the value taken by A). We know that a could either equal x or y. There are therefore two cases to consider, for the conditional expectation. If a=x, then conditionally given A=a, we have with probability 1 that B=2a. Similarly if a=y then we have with probability 1 that B=a/2. So the expectation of B given A=a is either a/2 or 2a depending on whether a=y or a=x. Not much use to us since we don't know which is the case, that's the whole issue!

Alternatively, let's compute the unconditional expectation. There are lots of ways to do it correctly (exercise for the dear readers!) and every way gives us E(B)=3x/2=(A+B)/2=(x+y)/2 (the sum of the random variables A and B is constant). And similarly of course E(A)=3x/2=(A+B)/2=(x+y)/2.
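The "exercise for the dear readers" can be spelled out by direct enumeration; a minimal sketch, where x = 7 is an arbitrary stand-in for the fixed, unknown smaller amount:

```python
from fractions import Fraction

# Unconditional expectations by enumerating the two equally likely
# envelope assignments; x = 7 is an arbitrary illustrative value.
x = Fraction(7)
y = 2 * x
outcomes = [(x, y), (y, x)]            # (A, B), each with probability 1/2
e_a = sum(a for a, b in outcomes) / 2
e_b = sum(b for a, b in outcomes) / 2
assert e_a == e_b == 3 * x / 2 == (x + y) / 2
print(e_a, e_b)                        # both 21/2: no reason to switch
```

Whichever x you plug in, E(A) and E(B) agree, which is the symmetry the switching argument violates.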

My point is that if you want to do probability calculations you should start by making absolutely clear what you are taking to be fixed (even if unknown) and what you are taking to be variable (even if only in an imaginary world where you repeatedly pick the true amounts anew from the distribution representing your prior beliefs).

While you're at it, PLEASE fix a notation which allows you to make the distinctions which you are going to need to make. The writer of TEP doesn't do this. He just starts arguing informally and gets mixed up. Who first wrote those lines? What did they have in mind? Does that matter, since we can only guess anyway? It was originally a trick question, invented by mathematicians, designed to get people puzzled and to get them thinking.

There are frequentist and Bayesian and improper Bayesian interpretations, and in all interpretations you could look at the conditional expectation or the unconditional expectation. Depending on what you like, you'll find the mistake in a different place. Richard Gill (talk) 09:52, 19 October 2011 (UTC)

Richard, you are missing the point a bit here. The proposed elementary solution was added to the article by 108.192.17.5, who I think is now Jkumph. It is based on the 2008 article by Falk but my opinion is that it is not very clear or representative of Falk's article, so it should not be added to the WP page in its current form. Do you agree? Martin Hogbin (talk) 11:57, 19 October 2011 (UTC)
My point is that once we figure out Falk's interpretation we'll be able to understand her solution. And that as long as we editors are not careful to say exactly what we mean, and to carefully use an appropriately expressive notation, we'll not make any progress. I think what I said is very pertinent. Next move should be by somebody else. Maybe Ruma Falk. I emailed her my draft paper. No reaction yet. Richard Gill (talk) 18:18, 20 October 2011 (UTC)
BTW I agree with you Martin that Jkumph's contribution is hard for me to decode. And Falk's article too. But I didn't look at it recently. Jkumph: would you like to share a Dropbox folder with just about all the references in pdf? Just email me so I can invite you to join. Richard Gill (talk) 18:24, 20 October 2011 (UTC)
Rereading the simple solution, I think it is OK. Falk is supposing that the only randomness is in the choice of envelope. No subjective Bayes prior. The good notation for this is: x and y=2x are fixed (unknown) amounts. A is a random variable taking the values x and y each with probability 1/2. Use the symbol a to denote a possible value of A. We suppose the writer is trying to compute E(B) by writing E(B)=E(B|A=x)/2+E(B|A=y)/2. Now the writer goes astray through equivocation, confusing A and its two possible values.

The writer should have realised immediately that he had screwed up, because the formula E(B)=5A/4 has a fixed number on the left hand side and a random variable on the right hand side. It's nonsense already. Pretty stupid to then go on and deduce a nonsensical inference from an already clearly invalid intermediate result.

Personally, I think the writer was actually trying to be more sophisticated, and is actually making less obvious mistakes. Unfortunately, the less sophisticated reader cannot even imagine that. Hence the enormous stupid literature, especially by the philosophers who don't know elementary probability calculus. They should keep away from this problem. The ones who do know it clearly have a deeper understanding, but they still have their hands tied behind their backs, because they are writing for other philosophers who know less than they do. So it all has to be done in long words and circumlocutions instead of a beautiful evolved visual language, developed specifically for the purpose of avoiding this kind of mess. Richard Gill (talk) 06:18, 21 October 2011 (UTC)

Richard's comments

Here is my version of this resolution.

According to Steps 1 to 6 we are treating A as a random variable. It can equal either the smaller or the larger of the two amounts of money, each with probability 1/2. Let us denote by x>0 and y=2x these two amounts. Note we now use lower case letters rather than upper case. That is in order to emphasize that x and y are two fixed, even if unknown, positive and different amounts of money, while A and B (the amount in the other envelope) are random through our random choice of envelope.

In step 6 we want to compute the expectation value of B. The random variable B can take on two different values, and it does so each with probability 1/2. The two values are x and y. Thus the correct writing of step 6 is E(B)=y/2+x/2. Comparing with the right hand side of the equation in step 6, we see that the writer is confusing the two specific values which B can assume, depending on which envelope contains the smaller or larger amount, with two other random variables A/2 and 2A, to which B happens to be equal in those two cases. That is to say, when B=y then B=2A, but when B=x, then B=A/2.

Writing out the proper formula for E(B) in full, we have E(B) = E( B | A > B)P(A > B)+E( B | B > A)P(B > A) = E( B | A > B)/2+E( B | B > A)/2. We know that when A > B, then A is certainly equal to y (and B to x), and similarly, when B > A, then A is certainly equal to x (and B to y). So the two conditional expectations are x and y, and E(B) = x/2+y/2 = 3x/2. On the other hand, though it is legitimate to rewrite E( B | A > B) = E( A/2 | A > B) = E(A | A > B)/2, and similarly E( B | B > A) = E( 2A | B > A) = 2 E(A | B > A), it is nonsense to rewrite E(A | A > B) as A. The first of these two expressions, E(A | A > B), is a fixed number (whether we know it or not) - it equals y. The second, A, is a random variable.

Perhaps the writer was confusing random variables and expectations, and imagines that E(A | A > B) = E(A). But this is not true either: given A > B, the expectation value of A is y=2x, while the unconditional expectation of A is the smaller number 3x/2. Many probabilists and statisticians point out that if only the writer had distinguished random variables from their possible values through conventional probability notation (invented precisely in order to avoid such mix-ups), he or she could never have gone astray.

This interpretation does indeed have the writer committing the sin of equivocation: using the same symbol to denote different things. At the same time he or she is confusing random variables with expectation values of random variables. There are alternative interpretations of what the writer was trying to do, which lead to a different diagnosis. Perhaps he or she was more sophisticated than we imagine: trying to do something more difficult, though making less obvious mistakes. But still a mistake just as deadly. We will now turn to an analysis of what went wrong in that case. Richard Gill (talk) 17:32, 21 October 2011 (UTC)
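The computation above is easy to check numerically. A quick sketch (the value x = 2 and the sample size are just for illustration): A is x or y = 2x with probability 1/2, B is the other amount, and the simulation shows E(B) = 3x/2 while E(A | A > B) = y, not E(A).

```python
import random

random.seed(1)
x = 2.0          # fixed smaller amount (any value would do)
y = 2 * x        # fixed larger amount

# A is x or y with probability 1/2; B is whatever is in the other envelope.
samples = [(x, y) if random.random() < 0.5 else (y, x) for _ in range(100_000)]

E_B = sum(b for a, b in samples) / len(samples)
larger = [a for a, b in samples if a > b]
E_A_given_A_larger = sum(larger) / len(larger)

print(E_B)                  # close to 3x/2 = 3.0
print(E_A_given_A_larger)   # equals y = 4.0, not E(A) = 3x/2 = 3.0
```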

Richard, I think it clarifies thinking to consider a simpler version of the problem. I would be interested in your comments on my thoughts on the subject at User:Martin_Hogbin/Two_envelopes Martin Hogbin (talk) 19:29, 21 October 2011 (UTC)
I agree. It's very illuminating to start with the simplest possible platform for the TEP argument. Though even on this platform Anna Karenina applies. I've added my comments to your TEP page.

I would also appreciate your comments on my draft paper, here or there or by email or on my talk page! And anybody else's too!! Richard Gill (talk) 14:29, 22 October 2011 (UTC)

I have started to look at your paper. Perhaps you could help me with something. When you talk about E(B|A) and the probabilist's choice, would this quantity represent the expectation value of B given that the player knows what is in A, that is to say, has opened the envelope? (I do appreciate that this may not be a necessary condition, as you can always consider what the player believes might be in the envelope). Martin Hogbin (talk) 08:30, 24 October 2011 (UTC)
It is the expectation value of B which the player would have if he imagines peeping in the envelope and seeing the amount there. Obviously it would depend on what he saw there. We think of it as a function of that amount.

Suppose you have two random variables X and Y with some joint probability distribution. Now we may define, first of all, E(X|Y=y): the expected value of X calculated according to the conditional distribution of X given Y=y. This quantity is defined quite independently of whether or not we actually observe Y. One can compute the conditional expectation for all possible values y which Y might take. Call the result g(y). For instance, it might be y-squared or 2y-5 or something else. Now we can apply this function to Y itself. By definition, E(X|Y) is the function of Y whose value is the conditional expectation of X given Y equals the value actually taken by Y.

Example. Let X be the number of heads when we toss a fair coin once, let Y be the number of heads when we toss it twice (including the first time). It is easy to calculate that E(X|Y=y)=y/2, where y could equal 0, 1 or 2. We therefore find, by the definition I just explained, E(X|Y)=Y/2. This is a true statement of probability theory which doesn't assume we do observe the value of Y. Richard Gill (talk) 16:54, 25 October 2011 (UTC)
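For anyone who wants to verify this, the coin example is small enough to check by exhaustive enumeration of the four equally likely outcomes:

```python
from itertools import product
from fractions import Fraction

# Two fair coin tosses; X = heads on the first toss, Y = total number of heads.
outcomes = list(product([0, 1], repeat=2))   # four equally likely outcomes

cond_exp = {}
for y in (0, 1, 2):
    matching = [(t1, t2) for t1, t2 in outcomes if t1 + t2 == y]
    cond_exp[y] = Fraction(sum(t1 for t1, t2 in matching), len(matching))

print(cond_exp)   # {0: 0, 1: 1/2, 2: 1}, i.e. E(X|Y=y) = y/2 for every y
```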

Martin, you ask for "E(B|A)". That's the flaw in step 7 of the TEP, omitting the necessary reservations within the theorem, as I said on Richard's talk page 12:24, 29 July ...
No Gerhard, it is not a flaw, as long as you know what you are talking about. Read an introductory text on probability theory. Richard Gill (talk) 16:54, 25 October 2011 (UTC)
The total of (A+B) is, a priori, a  "fixed (although to us still unknown) positive amount  >0  <∞"  –  is that okay? So E(B| A < B) = 2A, whereas E(B| A > B) = A/2.
E(B|A)  – without the necessary reservation –  in step 7 TEP however is  "said to be 5A/4"  
Which would mean that B equals "2A" as well as "A/2" simultaneously, without reservation, meaning that A=B=(A+B)=±0  or  ±∞, and nothing else in step 7.
As (A+B) in any task being a fixed (although to us still unknown) positive sum  >0  <∞, in consequence E(A)  [or call it E(A|B)] is (A+B)/2, and likewise also E(B)  [or call it E(B|A)] = (A+B)/2. Okay? Of course you can show that with any correct theorem, but E(B|A) without the necessary reservations in step 7 TEP is a sin. Gerhardvalentin (talk) 23:54, 24 October 2011 (UTC)
Gerhard, you write things which look like probability theory, but you don't know the definitions, rules, and key results. I start off with prior beliefs about the smaller amount of money. I define in a completely abstract way X to be a random variable having this probability distribution. I define A to be a random variable which equals X or 2X each with probability 1/2, independently of X. Within this notation, A+B is not a fixed amount. It equals 3X, whose probability distribution represents our beliefs about 3x, following from our initial beliefs about x. You are committing the crime of equivocation: using the same symbol for different things at the same time. I am trying my bl**** best to consistently use a notation which allows me to consistently and clearly distinguish between the things which need to be distinguished. If you don't do the same you'll just write nonsense. Richard Gill (talk) 16:54, 25 October 2011 (UTC)
By the way, I do agree with your comment that the TEP is an annoyance rather than fun. With the MHP you have to give the correct answer (and maybe explain why it is correct); with the TEP you have to first explain what the other person is trying to say and then say why it is wrong. That is why I want to stamp on it. Martin Hogbin (talk) 08:33, 24 October 2011 (UTC)
Little busy lately, but just wanted to say I like this version, at least as a first pass. It certainly is helpful to clarify the random variables versus the fixed ones. Jkumph (talk) 03:41, 25 October 2011 (UTC)
Gerhard, I do not disagree. The problem with the proposed line of reasoning in the TEP is that it is a series of vague statements and non-sequiturs. I agree that the problems do concentrate around step 7 which is an attempt to informally calculate an undefined expectation. The TEP is more of a conjuring trick than a mathematical puzzle in that the point of the problem is to dupe the listener into accepting a bogus line of argument.
I think that the key actually is the statistical dependence between the sum in the chosen envelope and the probability that it is the smaller sum, as Richard's paper says. There are many ways of getting to this, depending on what you think the original argument is intended to be.
My aim is to try to come up with a simple way of resolving the paradox that is philosophically and mathematically sound. I have started in my user space at User:Martin Hogbin/Two envelopes but messed it up a bit so far. Your comments there would be welcome. Martin Hogbin (talk) 09:37, 25 October 2011 (UTC)
Gerhard's comments

Thank you, Martin. I hadn't time enough to look at your page. Suggestion:

There are two envelopes, A and B. Let us denote the smaller amount in the two envelopes as "x" and the larger amount as "2x". Envelope A can be x or 2x, and likewise envelope B can be x or 2x, no-one knows.

  • If envelope A contains the smaller amount "x", then the other envelope B contains twice as much as A, it contains 2A or better: B contains "2x".
  • If envelope A contains the larger amount "2x", then the other envelope B contains half as much as A, it contains A/2 or better: B contains "x".
  • Thus the other envelope B for SURE contains 2A only under the condition that A="x", otherwise not, and B for SURE contains A/2 only under the condition that A="2x", otherwise not.
  • Thus the other envelope B contains "2x" with probability 1/2, and it contains "x" with probability 1/2. (On Richards talk page I wrote: Item 6 of the TEP says: Thus the other envelope contains 2A with probability 1/2 and A/2 with probability 1/2  –  and – for my perceptivity – by ignoring that the one is solely possible if A is the smaller amount, whereas the other can only be true if A is the greater amount – accepting herewith as a consequence that the given size ratio is no more 1:2 resp. 2:1, but unfounded is pretended to be 1:4 now resp. 4:1.)
  • So the expected value of money in the other envelope B = 1/2 (2A|A=x) + 1/2(A/2|A=2x). This restriction is essential. In other words:
  • The expected value of money in the other envelope B is 1/2("2x") + 1/2("x") = 3/2"x", and vice-versa (size ratio remains 1:2 resp. 2:1).

Don't let yourself be fooled by a flawed theorem. Regards, Gerhardvalentin (talk) 13:01, 25 October 2011 (UTC)

This is Martin's simple solution which he further specializes by taking x=2. As I pointed out there, we might just as well take x=2; we could define the currency unit (say "doubloon") precisely by defining x dollars (or pounds or Euros or whatever you want) = 2 doubloons. It is a solution which puts probability, randomness, *only* in the choice of first envelope. *Not* in the writer's prior beliefs about x.

This is already making one interpretation, one Anna Karenina bifurcation. The writer is not a subjective Bayesian. Next we have to guess whether the writer was trying to compute E(B) (which of course is 3x/2) or E(B|A=a) (which is 2a or a/2 depending on whether a=x or 2x). Altogether four different interpretations so far, right? You'll find every one of them in the literature. In each of the four cases there is a mistake in the argument, but it is a different mistake each time. And for the Bayesian there is another bifurcation, whether the prior is proper or improper. I believe that gives us so far 5 different interpretations. For each interpretation the argument goes astray at a different point. All five interpretations and accompanying analysis of "what went wrong" can be found in the literature. I do not know of any others, yet. You can't collaboratively edit an article on the Two Envelopes Problem without recognising these five possibilities, since each of them has a big body of authoritative literature behind it. And you had better figure out a good notation which can be used for all five interpretations simultaneously, OR you must carefully and explicitly introduce a new notation for each new solution. Richard Gill (talk) 17:08, 25 October 2011 (UTC)

There are more cases still, and you know that. The prior can be proper with finite or infinite support. And last but not least the problem can be stated without invoking any concept of probability at all. iNic (talk) 06:11, 25 November 2011 (UTC)
Whether or not the prior has bounded or unbounded support is not important. Whether it has infinite mean or not is important. If the mean is finite then we have solutions for both main interpretations of the paradox. If the mean is infinite then the paradox can be resolved by realizing that an infinite mean gives us no information about finite averages, hence gives us no guide for behaviour. Alternatively one might prefer to argue that utility of money is bounded and that decision should be based on expected utility, not expected cash. But if utility is a bounded and monotone function of money then we again have the required solutions for both of the main interpretations of the paradox, since after transformation to the utility scale, we still have the symmetry which drives the results of my "unified solution".
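The infinite-mean case can be made concrete. A sketch using a standard example (often attributed to Broome; the specific prior below is my choice for illustration): the smaller amount X is 2^n with probability (1/3)(2/3)^n. Then E(B|A=a) = 11a/10 > a for every a that could be in either envelope, yet E(X) diverges, so this "always switch" conclusion gives no guide for behaviour.

```python
from fractions import Fraction

def prior(n):
    # Broome-type prior (assumed for illustration): the smaller amount X
    # equals 2**n with probability (1/3)*(2/3)**n, for n = 0, 1, 2, ...
    return Fraction(1, 3) * Fraction(2, 3) ** n

ratios = {}
for n in range(1, 6):
    w_smaller = prior(n)       # A = X = 2**n: we hold the smaller envelope
    w_larger = prior(n - 1)    # A = 2X with X = 2**(n-1): we hold the larger one
    p_smaller = w_smaller / (w_smaller + w_larger)   # works out to 2/5, not 1/2
    a = Fraction(2 ** n)
    e_b = p_smaller * 2 * a + (1 - p_smaller) * (a / 2)
    ratios[n] = e_b / a

print(ratios)   # 11/10 for every n >= 1, although E(X) = sum 2**n * prior(n) diverges
```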

Finally what I would call the prequel, logician Smullyan's TEP without probability, is solved by several people in different ways. Both by amateurs and professionals. At the amateur level this paradox is merely a word-game, letting the same word mean something different in different contexts. What you would win if you win is ambiguous. You might like to think of it as the difference between the two amounts and you might like to think of it as the smaller of the two amounts. Comparing something ambiguous with something else ambiguous, when moreover both are only meaningful in mutually exclusive situations, is at best meaningless and at worst stupid. For professionals in probability the Smullyan paradox is completely empty: by leaving out probability altogether there is no way to choose. Finally, several logician/philosophers who study the logic of counterfactuals come to apparently different solutions at a technical level, because they work in different logical frameworks, but they all have solutions: either the argument is stupid or it is meaningless. Their analyses look different since they are written out with respect to different theoretical frameworks and with different background assumptions (some assume envelope A was chosen at random, others don't).

Anyway, this work of the academic logicians is extremely technical and seems to me not terribly interesting for amateurs. They use the paradox as a test-case to "show off" their pet theories of counterfactual logic. You have a different theory and you find a different solution, and you claim yours is the best one so far. This is Anna Karenina again. The TEP reasoning is incomplete: the assumed context is not spelt out explicitly, and the intention of the writer is not spelt out explicitly (he does not say what he is trying to do from one step to the next, nor what theorem from probability he is using). Since the conclusion is evidently false, indeed self-contradictory, it is clear that whatever context and intention you assume, you will find a mistake. But there is no reason why the mistake should be the same mistake. This is equally true for the original TEP and for TEP without probability (what I would call: TEP, the prequel). Richard Gill (talk) 15:37, 13 December 2011 (UTC)

Interesting that you have dropped the bounded support idea now and call that idea unimportant. Anyway, when taking account of the solutions you add here how many different solutions are there? In what solution category would you place the fish soup situation? iNic (talk) 00:56, 16 December 2011 (UTC)
No, Richard, your comment is an error. That's what I said  (from 29th July to 3rd August) on your talk page, and I got your answer there:   "I don't understand you."
I said on your talk page that B can only be expected to hold 3/2X.

And I said that for E(B)  "2A"  is only correct if A is the envelope with the small amount, while  "A/2"  is only correct if A is the envelope with "twice the small amount". Otherwise not:  For "E(B) = 1/2 (2A|A>B) + 1/2 (A/2|A<B)" ... means invalid priors (or how do you call such invalid nonsense?)

And (not being a mathematical probabilist) I asked you how to express these necessary restraints within the theorem.

I meant something like  "B = 1/2 (2A|A=x) + 1/2(A/2|A=2x)".  And I said that ignoring such restraints is accepting as a consequence that the given size ratio is no more 1:2 resp. 2:1, but unfoundedly is pretended to be 1:4 resp. 4:1. I find this mathematically incorrect. And I guess the whole "paradox" was mainly intended as some kind of josh to lead one onto black ice, like "Is there weather under water if you're there when it rains".  Sorry to disturb your circles. Gerhardvalentin (talk) 18:51, 27 October 2011 (UTC)
I still don't understand you. What is this supposed to mean:  "B = 1/2 (2A|A=x) + 1/2(A/2|A=2x)" ? It looks like mathematics but it is not written in any mathematical language which I know. I wrote out a number of times things which I believe are true and when doing so I made distinctions between random variables and possible values they can take and expectation values and conditional expectation values. I tried to explain what my background assumptions were. I made use of well known and correct probability calculus rules. If you can't make these distinctions and/or don't know these rules you should not be writing mathematical-looking formulas. Richard Gill (talk) 12:40, 29 October 2011 (UTC)
If you are thinking of the smaller amount of money in the two envelopes as being a fixed amount x (even if unknown) so that there is only probability in the choice of first envelope (A=x or A=2x each with probability 1/2) then we can do various correct computations. For instance: E(B)=3x/2; E(B| B > A) = 2 x; E(B| A > B) = x. If you want to use probability calculus also when referring to your uncertainty as to what x might be, then we need to modify our notation to take account of the more complicated set-up. For instance, the previous results become E(B|X=x)=3x/2; E(B|X=x, B > A) = 2 x; E(B|X=x, A > B) = x but there are also other interesting results, for instance E(B)=3E(X)/2 and so on. Richard Gill (talk) 12:48, 29 October 2011 (UTC)
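The second, two-level set-up can be illustrated with a small simulation (the uniform prior on {1, 2, 3} for X is just an assumed example, and the sample size is arbitrary); it confirms E(B) = 3E(X)/2:

```python
import random

random.seed(2)

# X drawn from a proper prior with finite mean (assumption: uniform on {1, 2, 3});
# A = X or 2X each with probability 1/2, and B is the amount in the other envelope.
samples = []
for _ in range(200_000):
    x = random.choice([1, 2, 3])
    if random.random() < 0.5:
        samples.append((x, x, 2 * x))   # (X, A, B): we picked the smaller envelope
    else:
        samples.append((x, 2 * x, x))   # (X, A, B): we picked the larger envelope

E_X = sum(x for x, a, b in samples) / len(samples)
E_B = sum(b for x, a, b in samples) / len(samples)
print(E_X, E_B)   # E(B) comes out close to 3*E(X)/2 (here about 2.0 and 3.0)
```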

PS please read my essay on probability notation [7], written at the request of other wikipedia editors, for wikipedia. Richard Gill (talk) 14:37, 29 October 2011 (UTC)

Any action?

OK, what happened to the elementary solution that everyone agreed should be reinserted in the article after many years of absence? iNic (talk) 12:19, 5 November 2011 (UTC)

I agree that it should come back. Let someone who is good at explaining simple things in simple words put it back. But please let them be careful to note that it's a resolution of one reading of the context and intention of the writer of TEP. Possibly the most simple context and simple intention, but not necessarily the most popular context and intention. Please note that Nickerson and Falk (2009) make this same point themselves.

In fact, Nickerson and Falk (2009) contains two quite different solutions, corresponding (according to them) to whether or not one looks in envelope A before deciding whether or not to switch. They also have a discussion about the impact of a priori beliefs about the amounts, and even about improper priors. Thus this short paper covers all known solutions and recognises the Anna Karenina principle.

The two main cases they consider correspond precisely to whether we assume the expectation being computed in step 7 is an unconditional or a conditional expectation. They forget that in case 2, if we could argue that one should switch whatever amount one saw in envelope A, then we could decide to switch anyway, without looking. So case 2 is not really about the case where we actually look in envelope A: it's the case where we imagine looking in envelope A.

They do emphasize the importance of distinguishing between random variables and possible values thereof. This was also the punch-line of Falk (2008). I will reproduce their two main solutions, using a notation which actually does make that very distinction which they claim is so important.

Nickerson and Falk's resolution of case 1 - the so-called elementary solution: in step 7 the writer wants to compute E(B). The probability that B=2A is 1/2, and the probability that B=A/2 is 1/2. So the writer could, correctly, deduce E(B)=1/2.E(2A|B=2A)+1/2.E(A/2|B=A/2)=E(A|B=2A)+E(A|B=A/2)/4. However, comparing with what the writer did actually write, we see that the writer is confusing the conditionally expected values of A in two completely different situations with one another, and both with the random variable A itself! Equivocation squared! (In fact these two conditional expectation values must be different from one another, and from the unconditional expectation, unless all three are actually infinite.)

Nickerson and Falk's resolution of case 2 (some people find this less elementary, but many people find it more realistic): in step 7 the writer wants to compute E(B|A=a), the conditional expectation of what is in the second envelope, given any particular amount a which we imagine might be in the first envelope. So the writer could, correctly, deduce E(B|A=a)=Prob(B=2A|A=a).2a+Prob(B=A/2|A=a).a/2. However, comparing with what the writer actually did write, we see that the writer is supposing that the conditional probability that the other envelope contains the larger or the smaller amount, given any amount imagined to be in the first envelope, is equal to 1/2, whatever that imagined amount is taken to be. Nickerson and Falk go on to note that this is impossible under any reasonable (realistic) idea of what amounts might be in the two envelopes. In particular we might mention Martin's very basic example: we know in advance that the two amounts are 2 and 4 pounds sterling. When a=2, Prob(B=2A|A=a)=1 and the other probability is zero. When a=4 it's the other way round: the two probabilities are 0 and 1. In neither case are they 50/50.
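Martin's basic example can be checked by direct enumeration of the two equally likely envelope assignments:

```python
from fractions import Fraction

# Martin's basic example: we know in advance the two amounts are 2 and 4 pounds.
# Envelope A receives one of them at random, B gets the other.
pairs = [(2, 4), (4, 2)]   # equally likely (A, B) assignments

p_other_larger = {}
for a in (2, 4):
    cond = [(A, B) for A, B in pairs if A == a]
    p_other_larger[a] = Fraction(sum(1 for A, B in cond if B > A), len(cond))

print(p_other_larger)   # {2: 1, 4: 0}: never the 50/50 assumed in step 7
```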

Nickerson and Falk furthermore point out that the only way these two conditional probabilities could be both 1/2 whatever a might be is when a priori, the amounts ...1/2, 1, 2, 4, ... are all equally likely (actually they say that all amounts should be equally likely, but here they are mistaken, as other authors have pointed out).

This connects to what I called the "unified solution". For case 1 we use Fact 1: the two conditional probability distributions of A given, respectively, it's the smaller or the larger of A and B, are necessarily different from one another and from the unconditional distribution. For case 2 we use Fact 2: the conditional probability that A is the smaller or larger of A and B, given A=a, must always depend on the value of a (more precisely: cannot be the same for all a). What unifies these Facts 1 and 2 is that they are mathematically equivalent. They are two sides of the same coin: their relationship to one another is simply an expression of the symmetry of statistical (in)dependence.
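Fact 2 can be illustrated with any proper prior; a sketch (the uniform prior on {1, 2, 4} for the smaller amount is my assumption, purely for illustration), showing that P(A is the smaller amount | A=a) necessarily varies with a:

```python
from fractions import Fraction

# Proper prior for the smaller amount X (assumption: uniform on {1, 2, 4});
# A = X or 2X each with probability 1/2.  Tabulate P(A=a and A is smaller/larger).
weights = {}
for x in (1, 2, 4):
    weights[(x, 'smaller')] = weights.get((x, 'smaller'), Fraction(0)) + Fraction(1, 6)
    weights[(2 * x, 'larger')] = weights.get((2 * x, 'larger'), Fraction(0)) + Fraction(1, 6)

p_a_smaller = {}
for a in (1, 2, 4, 8):
    w_s = weights.get((a, 'smaller'), Fraction(0))
    w_l = weights.get((a, 'larger'), Fraction(0))
    p_a_smaller[a] = w_s / (w_s + w_l)

print(p_a_smaller)   # {1: 1, 2: 1/2, 4: 1/2, 8: 0}: it cannot be 1/2 for all a
```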

All this holds as long as we are using proper probability calculus, but otherwise it doesn't make a difference whether we are being Bayesians or frequentists, whether we think of the two amounts (the smaller and the larger) as being fixed and known, or as being variable. The only way out is with improper Bayesian priors but then we get infinite expectations and again a resolution of the paradox.

That's why it can be called a unified solution: the same simple mathematical fact underlies all known resolutions of the paradox, as well as the related paradoxes of Schrödinger, Littlewood, and Kraitchik. Many writers give examples and show how Fact 1 or Fact 2 is true in their specific example. No-one seems to have noticed that these facts are universal and complementary. Richard Gill (talk) 19:04, 6 November 2011 (UTC)