Wikipedia:Reference desk/Archives/Mathematics/2007 December 5

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


December 5

Extensions, Polynomials, etc

How do you pronounce things like Z[x] and Q(x,y)? Black Carrot (talk) 21:27, 5 December 2007 (UTC)[reply]

Generally, I use pauses that make it pretty clear where the brackets are. For example (x+y)-z would have (x+y) pronounced as a phrase, with a pause before -z. If there's much chance of it being ambiguous, I'd say the brackets ('open brackets' etc.). Daniel (‽) 21:49, 5 December 2007 (UTC)[reply]
I sometimes do that, and sometimes say things like 'Z adjoin x'. You (might) have to be more careful with the second example, of course, due to the difference between Q(x,y) and Q[x,y]. Algebraist 22:16, 5 December 2007 (UTC)[reply]
In situations where it could be unclear, (in my limited experience) we always said 'Z bracket x' for the first, while reserving adjoin (as in 'Q adjoin x and y') for fields. 134.173.93.150 (talk) 05:34, 6 December 2007 (UTC)[reply]
If by Q(x,y), you mean that Q is a function of x and y, I think it would be said, "Q of x and y". Strad (talk) 00:14, 6 December 2007 (UTC)[reply]
In the context, I think we're talking about the field of rational functions in two indeterminates over Q. That's certainly what I was talking about. Algebraist 00:25, 6 December 2007 (UTC)[reply]

We are. So, they're usually <pause>, "bracket", or "adjoin"? Black Carrot (talk) 14:38, 8 December 2007 (UTC)[reply]

What's wrong with this proof?

Quite some time ago, I found the following proof on the internet:
Proof that 2=1

1) X = Y                  ; Given
2) X^2 = XY               ; Multiply both sides by X
3) X^2 - Y^2 = XY - Y^2   ; Subtract Y^2 from both sides
4) (X+Y)(X-Y) = Y(X-Y)    ; Factor
5) X + Y = Y              ; Cancel out the (X-Y) term
6) 2Y = Y                 ; Substitute Y for X, by equation 1
7) 2 = 1                  ; Divide both sides by Y

Since I'm fairly sure that 2 != 1, this is probably wrong, but I can't figure out where the mistake is made. Any ideas?
Thanks in advance. Horselover Frost (talk) 23:05, 5 December 2007 (UTC)[reply]

Between 4 and 5, you divided both sides by X-Y, which is 0 according to the initial assumption. Dividing by zero makes funny things happen. 69.246.218.176 (talk) 23:10, 5 December 2007 (UTC)[reply]
There's also another error: From 6 to 7, you divide by Y without knowing if it is 0 or not. The solution to 2Y = Y is not 2 = 1 but Y = 0. -- Meni Rosenfeld (talk) 23:13, 5 December 2007 (UTC)[reply]
(ec) Once again we see that Wikipedia has an article on everything. Check out invalid proof for this plus a number of more cunning ones. Btw, this proof works in the trivial ring, in which you can divide by zero. Fortunately, in this case, 1 does equal 2. Algebraist 23:18, 5 December 2007 (UTC)[reply]
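
To make the two errors concrete, here is a small sketch (an editorial addition, not part of the original thread) that plugs X = Y = 3 into each step of the "proof" and checks it numerically. Steps 1-4 hold (both sides of steps 3 and 4 are zero), and the equalities first fail at step 5, where the zero factor (X-Y) was cancelled:

# Sketch: substitute a concrete value, X = Y = 3, into each step of the "proof"
# to see exactly where the equalities stop holding.
X = Y = 3

steps = [
    ("1) X = Y",                X,                 Y),
    ("2) X^2 = XY",             X**2,              X * Y),
    ("3) X^2 - Y^2 = XY - Y^2", X**2 - Y**2,       X * Y - Y**2),
    ("4) (X+Y)(X-Y) = Y(X-Y)",  (X + Y) * (X - Y), Y * (X - Y)),
    ("5) X + Y = Y",            X + Y,             Y),  # after cancelling X - Y, i.e. dividing by 0
    ("6) 2Y = Y",               2 * Y,             Y),
    ("7) 2 = 1",                2,                 1),
]

for label, lhs, rhs in steps:
    print(f"{label:26s} {lhs} = {rhs}?  {'holds' if lhs == rhs else 'FAILS'}")

Step 6 onward also fails, and dividing step 6 by Y only works if Y is nonzero, which is Meni Rosenfeld's point: 2Y = Y forces Y = 0, not 2 = 1.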

Limiting behaviour of Markov Chain

Hi, I have a stochastic matrix which represents a Markov chain. The Markov chain basically describes the probabilities of a simple game. The game involves 2 people, A and B, with 5 counters; at each round there's a probability p that A gains a counter and a probability (1-p) that B wins one of A's. I want to find the limit of the probability matrix. Here is the matrix:

  1    0    0    0    0    0
  1-p  0    p    0    0    0
  0    1-p  0    p    0    0
  0    0    1-p  0    p    0
  0    0    0    1-p  0    p
  0    0    0    0    0    1

(where the column index, from 0, is the number of counters A has) I've found, using eigenvectors, that a solution is 6 rows of (alpha 0 0 0 0 beta), where alpha and beta are chosen arbitrarily, but is there any way of finding their exact values? I've not looked too much at this topic, but I am very interested, so any help is appreciated. Thanks 212.140.139.225 (talk) 23:58, 5 December 2007 (UTC)[reply]
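
As a quick numerical illustration (an editorial sketch, not part of the original question; p = 0.6 is an arbitrary assumed value), one can build the transition matrix described above and raise it to a high power; the powers converge to a matrix whose rows have mass only on states 0 and 5:

import numpy as np

p = 0.6  # assumed value, purely for illustration

# Row-stochastic transition matrix: entry [i, j] is the probability that A
# goes from i counters to j counters in one round; states 0 and 5 are absorbing.
P = np.zeros((6, 6))
P[0, 0] = P[5, 5] = 1.0
for i in range(1, 5):
    P[i, i + 1] = p        # A gains a counter
    P[i, i - 1] = 1 - p    # B wins one of A's counters

# P^(2^20) is, to machine precision, the limiting matrix.
limit = np.linalg.matrix_power(P, 2**20)
print(np.round(limit, 4))

Note that the rows of the limit differ with the starting state, so alpha and beta are not arbitrary once the starting state is fixed; pinning them down is exactly what the recurrence approach in the replies below does.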

This exact question (in slightly greater generality) is answered at Gambler's ruin#Coin flipping. Note that you don't need to think about eigenvectors: it's obvious (I suppose formally you'd appeal to a Borel-Cantelli lemma or something) that eventually all the counters are in either A's hands or B's, so the only question is, given that A starts with n counters, what is the probability of A ending up with everything. If one denotes this P_n, one gets some recurrence relations in the P_n, which are fairly easy to solve, giving the answers in the article I linked. Algebraist 00:17, 6 December 2007 (UTC)[reply]
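
For reference, the result from the linked section (restated here as an editorial note, not quoted from the thread): writing r = (1-p)/p, the probability that A ends up with all 5 counters when starting with n of them is P_n = (1 - r^n)/(1 - r^5) for p other than 1/2, and P_n = n/5 when p = 1/2.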

Thanks for your reply. I'm only in my first year of a degree, so I haven't come across the Borel-Cantelli lemma, and I was told the limiting matrix can be made up of the eigenvector whose eigenvalue is 1. I see it is obvious that eventually one of the players will win, but I thought using eigenvectors might tell me the probability of each player winning. I've had a look at the article you suggested but don't fully understand it; bearing in mind I am only a first year, could you explain the basic idea of recurrence relations? Thanks again 212.140.139.225 (talk) 15:50, 6 December 2007 (UTC)[reply]

The lemma I referred to is just the first thing that came to my head for proving rigorously that the game eventually ends: if you can see that this is obvious, then that's certainly good enough for a first year. The problem with using eigenvectors is that this only tells you the possible limiting distributions, which you knew already. You have to do more work to find out the probability of one result rather than the other. Let P_n, then, be the probability that A wins starting with n counters. We have boundary conditions P_0 = 0 and P_5 = 1, since in these cases the game has ended already. For n strictly between 0 and 5, there is a probability p that A will gain a counter (giving him n+1 in total) and a probability 1-p that he will lose one (giving him n-1). We thus have the recurrence relation P_n = p P_{n+1} + (1-p) P_{n-1}. Standard techniques (given in Recurrence relation#solving generally) allow us to solve this relation with these boundary conditions to obtain P_n for all n. Algebraist 18:12, 6 December 2007 (UTC)[reply]
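
A minimal sketch of that last step (again an editorial addition, with p = 0.6 as an assumed illustrative value): treat the two boundary conditions together with the recurrence for n = 1, ..., 4 as a 6-by-6 linear system, solve it, and compare with the closed form quoted above from the gambler's ruin article:

import numpy as np

p = 0.6  # assumed value for illustration
N = 5    # total number of counters

# Unknowns P_0 .. P_5.  Row 0 encodes P_0 = 0, row N encodes P_5 = 1, and
# rows 1..4 encode the recurrence P_n - p*P_{n+1} - (1-p)*P_{n-1} = 0.
M = np.zeros((N + 1, N + 1))
b = np.zeros(N + 1)
M[0, 0] = 1.0
M[N, N] = 1.0
b[N] = 1.0
for n in range(1, N):
    M[n, n] = 1.0
    M[n, n + 1] = -p
    M[n, n - 1] = -(1 - p)

print(np.round(np.linalg.solve(M, b), 4))

# Closed form for p != 1/2, with r = (1-p)/p:
r = (1 - p) / p
print(np.round([(1 - r**n) / (1 - r**N) for n in range(N + 1)], 4))

The two printed vectors agree, and their entries are the last column of the limiting matrix computed further up.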