
User:NorwegianBlue/refdesk/science

North Star

Can you tell me the simplest way to identify the North Star? I do not know what to look for in order to find it. Please spell it out so clearly that I can follow step-by-step instructions in order to find it. Thank you.

The way I normally find it is to locate the Big Dipper and then follow the line that is created by the two stars that make up the edge of the bowl of the dipper, on the side opposite the handle. By following this line in the direction of the "opening" of the bowl, those two stars point to the North Star, which is also the last star in the handle of the Little Dipper. The rest of the Little Dipper is generally harder for me to see, since the stars in that constellation aren't as bright. Dismas|(talk) 05:41, 30 May 2006 (UTC)[reply]
P.S. After checking the links I supplied, I see that this is spelled out with diagrams on the page for the Big Dipper. Dismas|(talk)
But don't use the official flag of Alaska, which is shown on the Big Dipper page, as a map! The North Star is nowhere nearly as bright as the flag suggests. Use this link instead, and you will find a better map. --NorwegianBlue 19:13, 30 May 2006 (UTC)[reply]
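If anyone wants to put a number on the pointer-star rule described above, here is a minimal Python sketch (it assumes numpy). The star coordinates are approximate J2000 values quoted from memory, so double-check them against a star catalogue before relying on them; the only point is that extending the Merak-to-Dubhe line by roughly five times its own length lands you very close to Polaris.

```python
import numpy as np

def unit_vector(ra_deg, dec_deg):
    """Convert right ascension / declination (degrees) to a unit vector."""
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    return np.array([np.cos(dec) * np.cos(ra),
                     np.cos(dec) * np.sin(ra),
                     np.sin(dec)])

def separation_deg(a, b):
    """Angular separation between two unit vectors, in degrees."""
    return np.degrees(np.arccos(np.clip(np.dot(a, b), -1.0, 1.0)))

# Approximate J2000 coordinates (RA, Dec in degrees) - verify against a catalogue.
merak   = unit_vector(165.46, 56.38)   # beta UMa, outer bottom corner of the bowl
dubhe   = unit_vector(165.93, 61.75)   # alpha UMa, outer top corner of the bowl
polaris = unit_vector( 37.95, 89.26)   # alpha UMi, the North Star

pointer = separation_deg(merak, dubhe)        # ~5.4 degrees
to_polaris = separation_deg(dubhe, polaris)   # ~29 degrees

print(f"Merak-Dubhe separation:   {pointer:.1f} deg")
print(f"Dubhe-Polaris separation: {to_polaris:.1f} deg")
print(f"so extend the pointer line about {to_polaris / pointer:.1f}x to reach Polaris")
```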

Diamond

I know how it crystallizes, but how does that relate to its valence bonding? (No, it's not homework.) Any help would be appreciated. And remember, even though this question may seem ridiculously simple, as I just read above: don't bite the newbies

In diamond, carbon is in the sp3-hybridised state, see orbital hybridisation for a nice image. Each nucleus sits in the middle of a tetrahedron. Thus, it is tetravalent here as elsewhere, and each carbon atom "shares an electron" with each of its four neighbours. --NorwegianBlue 09:50, 4 June 2006 (UTC)[reply]
This brings another question - a cube is related to an octahedron in Plato's system, so how can a self-dual tetrahedron build cubic centered crystals? I'd appreciate any hint. --DLL 20:18, 4 June 2006 (UTC)[reply]
I'm not sure if this answers your question, DLL, but there's a picture of the 3D structure of diamond in the carbon page. --NorwegianBlue 21:53, 4 June 2006 (UTC)[reply]
OK, so the distance between the center of the tetrahedron and its vertices must differ from the distance between vertices linked by a valence bond. --DLL 22:13, 5 June 2006 (UTC)[reply]
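A quick numeric illustration of the tetrahedral geometry described above (a minimal Python/numpy sketch; the cube corners are just a convenient coordinate choice, not crystallographic coordinates): putting a carbon atom at the origin with its four neighbours on alternating corners of a cube reproduces the 109.47° bond angle, and it also confirms DLL's point that the center-to-vertex (bond) distance differs from the vertex-to-vertex distance.

```python
import numpy as np

# Four sp3 bond directions: alternating corners of a cube centred on the carbon atom.
verts = np.array([[ 1,  1,  1],
                  [ 1, -1, -1],
                  [-1,  1, -1],
                  [-1, -1,  1]], dtype=float)

center_to_vertex = np.linalg.norm(verts[0])             # sqrt(3) in these units
vertex_to_vertex = np.linalg.norm(verts[0] - verts[1])  # 2*sqrt(2) in these units

cos_angle = np.dot(verts[0], verts[1]) / center_to_vertex**2
bond_angle = np.degrees(np.arccos(cos_angle))           # arccos(-1/3)

print(f"angle between two C-C bonds: {bond_angle:.2f} degrees")                           # 109.47
print(f"vertex-vertex / center-vertex ratio: {vertex_to_vertex / center_to_vertex:.3f}")  # sqrt(8/3) ~ 1.633
```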

Static electricity

When you get a shock of static electricity, simply stated, the reason is presumably that your body has acquired a surplus or deficit of electrons compared to the environment. How would one go about to estimate the number of electrons that is transferred in such a shock? Does somebody have an idea of the approximate number? When walking on synthetic carpets etc., do we usually acquire a positive or negative charge? --NorwegianBlue 12:24, 21 May 2006 (UTC)[reply]

No, when you get a shock, it's because charge has passed through your body, causing the muscles to contract. The amount of current passing depends upon the voltage, the resistance of the static generator, and the body's internal resistance (which is about 700 Ω).
As to charge build-up, I would try Faraday's ice pail experiment with the victim standing in the pail. The pail needs to be insulated from earth. The victim would not actually get a shock, because current would not pass through the body (hopefully). Then, knowing the capacitance of the pail and its final voltage after charge transfer, one can calculate the amount of charge passed. 8-)--Light current 12:36, 21 May 2006 (UTC)[reply]
There are some calculations done by Mr Static here. The answer in his case was a positive charge of about 3 × 10⁻⁸ coulombs, which is about 2 × 10¹¹ electrons. --Heron 13:39, 21 May 2006 (UTC)[reply]
Thanks a lot, Heron, that was exactly what I was looking for. The value of 3 × 10⁻⁸ coulombs appears to be per step; the graph might suggest a total charge buildup about 20 times larger. --NorwegianBlue 13:58, 21 May 2006 (UTC)[reply]
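For anyone who wants to reproduce the arithmetic, here is a minimal Python sketch. The elementary charge is a physical constant; the body-capacitance and voltage figures are assumptions on my part (a standing person is often quoted at roughly 100-200 pF in ESD literature, and a few kilovolts is a typical "doorknob zap" scale), so treat the second half as an order-of-magnitude check only. Reassuringly, the Q = C·V estimate lands in the same range as the "about 20 times larger" buildup mentioned above.

```python
# Rough sanity check of the figures above (illustrative numbers only).
ELEMENTARY_CHARGE = 1.602e-19    # coulombs per electron

q_per_step = 3e-8                # C, Mr Static's figure quoted by Heron
print(q_per_step / ELEMENTARY_CHARGE)        # ~1.9e11, i.e. about 2 x 10^11 electrons

# Modelling the body as a capacitor, Q = C * V. Both numbers below are assumed.
C_body = 150e-12                 # farads (assumed, ~100-200 pF is a common ballpark)
V_body = 5000.0                  # volts  (assumed)
q_total = C_body * V_body
print(q_total, q_total / ELEMENTARY_CHARGE)  # ~7.5e-7 C, ~5e12 electrons
```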

Is it possible to see clearly underwater by wearing spectacles instead of goggles?

Inspired by the preceding question, as well as this one and this one: The reason we do not see clearly underwater is that the refractive index of the cornea is almost equal to that of the surrounding water, so that we lose the refraction at the interface between air and cornea. Is it possible to correct this by wearing spectacles underwater (i.e. with water between the lenses and the eyes)? It would be kinda nice; goggles tend to get foggy... And if it indeed is possible, what lens strength would you need? --NorwegianBlue 21:07, 23 May 2006 (UTC)[reply]

http://www.liquivision.ca/fluidgogglesfeatures.html --Keenan Pepper 21:12, 23 May 2006 (UTC)[reply]
Thanks, gotta get one of those... --NorwegianBlue 21:49, 23 May 2006 (UTC)[reply]
Thanks a trrrrrrrrrrrrrrrrrrrrrrrrrrrillion, that was the EXACT answer I was looking for 206.172.66.172 00:10, 24 May 2006 (UTC)[reply]
Or you could just get some defog for your SCUBA mask. There are commercial preparations, or you can use a dish soap solution, or your own saliva. They all work. And you can even get a prescription in your mask. --Ginkgo100 23:59, 24 May 2006 (UTC)[reply]
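To put a rough number on the "what lens strength would you need" part of the question, here is a minimal sketch using the single-surface refraction formula P = (n2 - n1)/R. The refractive indices and corneal radius are approximate textbook values recalled from memory, so take the result as an order of magnitude only: the cornea loses roughly 40 dioptres of power underwater, which is why ordinary spectacles (a few dioptres) cannot fix it, and why the fluid goggles linked above use very strong lenses instead.

```python
# Rough estimate of how much refracting power the eye loses underwater.
# Values are approximate textbook figures, quoted from memory.
n_air, n_water, n_cornea = 1.000, 1.333, 1.376
R_cornea = 7.8e-3    # m, anterior corneal radius of curvature (approximate)

P_in_air   = (n_cornea - n_air)   / R_cornea   # ~48 dioptres
P_in_water = (n_cornea - n_water) / R_cornea   # ~5.5 dioptres

print(f"corneal power in air:   {P_in_air:.1f} D")
print(f"corneal power in water: {P_in_water:.1f} D")
print(f"power lost underwater:  {P_in_air - P_in_water:.1f} D")
```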

Balancing a bicycle

Why is it (relatively) easy to 'balance' on a moving bicycle, and yet impossible on a standing cycle?

We all know this... I just want to know the physics behind this everyday action.

How can we easily balance ourselves (stay upright) on a moving bicycle (or bike), when the same thing is impossible if we try it on a stationary two-wheeler?

I did some researching, and it looks like it's not entirely 'angular momentum', as in the case of a top, and not entirely the gyro effect either... http://en.wiki.x.io/wiki/Bicycle_and_motorcycle_dynamics

All the facts are here... I can almost see the answer forming, but I'm not quite there... help me out - somebody please put this phenomenon in nice, understandable sentences.

Thanks a lot

Seethahere 21:21, 22 February 2007 (UTC)[reply]

Contrary to a common belief, angular momentum (the gyroscope effect) has (almost) nothing to do with it. The main point is that you have an effective means of controlling your balance when moving, but not when standing still. It works like this: suppose your bike is starting to lean over slightly to the left (i.e. your center of mass is to the left of the line between the wheels' contact points with the ground, which is the axis you would pivot around if you were to fall over). Thus you are (your center of mass is) starting to fall over towards the left. When you notice this, your learned bicycling skill makes you make a nearly unconscious correction, by turning the handle bar slightly to the left. This causes your front wheel to move to the left under you, until the line between the wheels' contact points is once again vertically under your center of mass. In other words, you have regained balance. This correction doesn't work if you are standing still, because turning the handlebar doesn't then cause the front wheel to move sideways. --mglg(talk) 21:36, 22 February 2007 (UTC)[reply]

Ok, that is one of the most complex articles ever, for something so simple. Bikes have self-stability. That means if you just hold a bike still, you can make it steer left or right by leaning it. You can ride a bike without hands, and steer just by leaning. It's a bit like a skateboard. --Zeizmic 23:27, 22 February 2007 (UTC)[reply]

Sorry, I don't believe it. A well-"ghostied" bicycle will go forward without a rider for a long time without falling over, which obviously has nothing to do with slight, almost unconscious steering inputs from the rider. I've ridden both bicycles and motorcycles a lot in the past, and there is definitely more to it than 'corrections' made by the rider, especially on a motorcycle that's going at speed - so much so, in fact, that when you TRY to turn the motorcycle and then release the pressure, the motorbike will return on its own to travelling in a straight line. Can you source your assumption that gyroscopic effects don't have much to do with it? Vespine 23:48, 22 February 2007 (UTC)[reply]
http://www2.eng.cam.ac.uk/~hemh/gyrobike.htm Skittle 10:43, 23 February 2007 (UTC)[reply]

Ah, ah. No complaining, unless you've at least made a pretence of reading that long, long article! :) (which covers this particular turning thing somewhere..) --Zeizmic 01:01, 23 February 2007 (UTC)[reply]

[Diagram: Head angle rake and trail.svg - Bicycle head angle, rake, and trail]
The principle is correct, but there are no unconscious inputs involved - the stability comes from the bike turning itself. The horizontal "trail" (as pictured) is the key. Explaining the details of it is beyond me, but the gist of it is that when the bike starts to lean, the trail causes the wheel to steer into the turn, straightening it up and preventing the fall. It only works above a certain minimum speed, but it's why you can let go of the handlebars and not fall over. There's no special skill involved; the bike is genuinely driving itself.
Apparently a lot of time and money goes into getting the trail right, to ensure stability. I saw a really good explanation of this very recently, but I can't for the life of me remember where. If it springs to mind, I'll post the link. Spiral Wave 01:08, 23 February 2007 (UTC)[reply]
Might it have been in New Scientist, in the "The Last Word" section? I've linked to a bit of it, although I don't know how well this will work for those without a subscription. It also provides an answer for those who don't believe how little effect the gyroscopic effect has: a bike designed to cancel out all gyroscopic effects which people could ride with ease. Skittle 10:42, 23 February 2007 (UTC)[reply]
Turns out there is also a (very) brief mention of the effect in the trail article linked in the caption. I do remember that it's why bikes are built with trail. If the steering column was vertical, the result would be highly unstable. Spiral Wave 01:31, 23 February 2007 (UTC)[reply]
You can see from the diagram that if the bike is stationary - and leans to one side, there is a gravitational force acting through the frame of the bike and a ground reaction from the point where the front wheel touches the ground. This causes a rotational moment through the steering tube that causes the front wheel to try to turn 'into' the direction the bike is leaning. If you stand next to a bicycle and just lean it to one side, you can see that happening. But when you are moving rapidly, you have forward momentum and it takes a significant amount of force to cause the bike to change direction. The reaction to that force is pushing the wheel back into a straight line - so there is a net upward force countering gravity and making the bike stand up straight. It's hard to explain without a white-board and some force vector diagrams! SteveBaker 13:50, 23 February 2007 (UTC)[reply]
All of this is true, but please realize that the self-correction induced by the rake is only there to help you make roughly the right correction. The rider still does the fine tweaking to perfect the balance (and yes, this tweaking can be done either without hands, by leaning the frame so that the rake induces the front wheel to turn, or more simply by turning the handlebar by hand). The bike is NOT fully self-balanced, as you can establish by letting your rider-less bike roll down a hill by itself and seeing what happens (I don't recommend that test with an expensive bike...). --mglg(talk) 18:31, 23 February 2007 (UTC)[reply]
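Here is a deliberately crude toy model of the "steer into the fall" mechanism described above (whether the correction comes from the rider or from the trail). It is a minimal Python sketch, not the full dynamics from the bicycle and motorcycle dynamics article, and all the numbers (centre-of-mass height, wheelbase, steering gains) are invented for illustration. The one point it makes is that the righting effect of steering scales with v², so the very same correction law that stabilises the lean at riding speed does essentially nothing at walking pace.

```python
G, H, WHEELBASE = 9.81, 1.0, 1.0   # gravity, CoM height, wheelbase (metres) - made up but plausible
K_LEAN, K_RATE = 4.0, 1.0          # "steer toward the side you are falling to" gains (pure guesses)

def final_lean(v, phi0=0.05, dt=0.001, t_end=5.0):
    """Euler-integrate a crude linearised lean model; return the lean angle (rad) after t_end."""
    phi, phi_dot = phi0, 0.0
    for _ in range(int(t_end / dt)):
        steer = K_LEAN * phi + K_RATE * phi_dot      # steering angle, toward the lean
        lat_acc = v ** 2 * steer / WHEELBASE         # lateral acceleration of the contact line
        phi_ddot = (G * phi - lat_acc) / H           # inverted pendulum plus righting term
        phi += phi_dot * dt
        phi_dot += phi_ddot * dt
        phi = max(-1.0, min(1.0, phi))               # clamp the "crash" so numbers stay finite
    return phi

for v in (0.5, 5.0):
    print(f"speed {v:3.1f} m/s -> lean after 5 s: {final_lean(v):+.3f} rad")
# At 0.5 m/s the lean runs away (the bike falls over); at 5 m/s it is damped back to zero.
```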

Distillation of ethanol, temperature

When a mixture of ethanol and water is heated and reaches the boiling point of ethanol, will the temperature stay the same (approx. 78.5°C) until all the ethanol has evaporated, or is it possible to heat such a mixture to, say 85°C, at ambient pressure? --62.16.173.45 14:34, 28 May 2007 (UTC)[reply]

The mixture will stay at approximately the same temperature until the ethanol has evaporated. This is not totally accurate; there are some effects when you go into detail. Distillation talks about the variation of the boiling point of ethanol in a solution. There is also the possibility of superheating.
Thank you! --62.16.173.45 18:15, 28 May 2007 (UTC)[reply]
Actually, the boiling point of the mixture will change as its concentration changes, and approach the boiling point of water (100°C) as the ethanol concentration goes to zero. The boiling point of a mixture of ethanol and water has a curious dependence on concentration: Pure ethanol boils at 78.4°C; with decreasing concentration of ethanol the boiling point first decreases (!) toward a minimum of 78.15°C at 95.6% (by weight) ethanol, and thereafter increases towards 100°C at 0% ethanol. The 95.6% mixture is called an azeotrope. The evaporating gas will not be pure ethanol but a mixture of ethanol and water vapors, with the mixture depending on temperature in its own way. The vapor will in general have a higher concentration of ethanol than the liquid, which is why you can concentrate ethanol by distillation. At the azeotropic temperature, however, the vapor mixture will have the same relative concentrations of ethanol and water as the liquid. The azeotropic concentration of 95.6% ethanol is therefore the most concentrated ethanol that can be produced by direct distillation of an ethanol/water mixture (without addition of other compounds). --mglg(talk) 16:17, 29 May 2007 (UTC)[reply]
Thanks a lot! Do you have any specific info on the boiling points at various ethanol concentrations, and on the corresponding relative concentrations of ethanol and water in the vapour phase? Is there a table somewhere on the net where I can look it up? And also, although I initially restricted the question to what would happen at ambient pressure, I am also interested in learning about the dependency on pressure, if a table that includes pressure is available. --62.16.173.45 17:03, 29 May 2007 (UTC)[reply]

Phase diagrams for mixtures of ethanol and water.

I recently asked a question about the boiling point of a mixture of ethanol and water, and its dependency on pressure. After some googling, I realised that what I actually was asking for is a binary phase diagram for a mixture of ethanol and water. The best I could find on an image search was this one (middle panel), which is what I'm looking for, but with very low resolution in my region of interest. This excellent tutorial explains how to read the diagram.

  • Is anyone able to provide pointers to a similar diagram, with better resolution in the range 78-100°C?
  • Is all the information I need for calculating boiling points and vapour composition at various pressures in the phase diagram, or does the phase diagram itself depend on pressure? The fact that the diagram ends at 100°C for pure water suggests the latter, since water would have boiled at a higher temperature if the pressure were higher than 1 bar. If indeed several phase diagrams are needed, does anyone have information about where to find them? My region of interest for pressure is from atmospheric to approx. 1.5 bar. --62.16.173.45 20:48, 29 May 2007 (UTC)[reply]
Sorry, I don't have a table to point you to. But the following may be conceptually helpful for the pressure question: At any given temperature, a given mixture of ethanol and water has a well-defined vapor pressure (or partial pressure) of ethanol, and a different vapor pressure of water vapor. The vapor pressures are defined as the gas concentrations that the fluid would be in evaporation/condensation equilibrium with. The vapor pressures do not depend on the actual external pressure. The fluid(s) will evaporate or condense until the actual concentration of each gas is equal to the fluid's vapor pressure for that gas (which may itself change during the process because evaporation/condensation can affect the temperature and fluid mixing ratio). At a certain temperature, the total vapor pressure will equal the ambient pressure; this temperature is the boiling point, at (or above) which evaporation can take the form of bubbling. Increasing the pressure by adding a third, more-or-less inert gas (such as air) does not in principle change the vapor pressures, just the boiling point. --mglg(talk) 21:37, 29 May 2007 (UTC)[reply]
[Image: phase diagram for the ethanol-water mixture]

[Image: water vapour pressure as a function of temperature]
Thanks again! Regarding the total pressure and partial pressures, I make the assumption that the only gases present are water and ethanol vapour. I made a phase diagram with better resolution based on the one I linked to. The blue curve is the boiling point of the liquid mixture; the red curve is the composition of the gas mixture. Based on the diagram, I can see that 40 vol% ethanol in water boils at approx. 81°C, and that the mixture in the gas phase then contains about 65% ethanol. This is valid, I presume, only when the sum of the partial pressures of ethanol and water vapour equals 1 bar. Is it possible to determine the corresponding data at 1.1 bar by using this diagram, or by combining it with the graph of water vapour pressure shown below? --62.16.173.45 10:08, 30 May 2007 (UTC)[reply]
Hmm. Did some calculations to check this out, using these calculators: Water vapour pressure, Ethanol vapour pressure, created by Shuzo Ohe. The vapour pressure of ethanol at 81°C is 844.8 mmHg = 1.126 bar, and that of water is 369.8 mmHg = 0.493 bar. I was wondering whether the pressures would add up to 1 bar, and see that they don't. Am I correct in thinking that the exact purpose of the phase diagram is to correct for this discrepancy? And again, is there some way of reading my phase diagram, in order to calculate boiling points for higher pressures than atmospheric? --62.16.173.45 11:33, 30 May 2007 (UTC)[reply]
The values don't add to one atmosphere because they represent the vapor pressures above pure ethanol and over pure water; the partial pressures over a mixture of water and ethanol are different from these. What you need for complete understanding is a set of curves or tables that show the partial pressures of the two gases above a mixture of ethanol and water, as a function of temperature and of the mixing ratio in the liquid. The boiling point at any given pressure is the temperature at which the two partial pressures add to the given pressure. --mglg(talk) 16:14, 31 May 2007 (UTC)[reply]
Thank you again, mglg! I think I understand it now. I'll see if the library can help me with some tables. --62.16.173.45 19:38, 31 May 2007 (UTC)[reply]

(outdent) I found a random old reference (Phys. Rev. 57, 1040–1041 (1940)) that contains partial pressure graphs for low temperatures (20-40°C). They got the data from the International Critical Tables, which might contain data for higher temperatures (more relevant for you) as well. You can find out in the online edition, but you may have to pay to use it. --mglg(talk)

Thank you for the book reference. It probably has the data I'm looking for, but the website wouldn't even let me read the index without paying! However, I'll check it out at a library workstation, they may have a subscription. -62.16.173.45 10:08, 30 May 2007 (UTC)[reply]
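For what it's worth, here is a minimal Python sketch of the calculation mglg describes, so you can see how pressure enters the problem while you hunt for proper tables. It uses the Antoine equation for the pure-component vapour pressures (the handbook constants below reproduce the 844.8 and 369.8 mmHg figures quoted above at 81°C) and then, purely for illustration, combines them with ideal Raoult's law and a bisection for the temperature where the total vapour pressure equals the ambient pressure. Be aware that the ideal-solution step is exactly what fails for ethanol/water: it cannot produce the azeotrope and it overestimates the boiling point of the mixture, so for real numbers you still need activity coefficients or the experimental tables.

```python
# Antoine constants (P in mmHg, T in deg C), common handbook values - verify against
# your own source. Nominal validity ends near 100 C (water) and ~80 C (ethanol),
# so everything below is strictly illustrative.
ANTOINE = {"water":   (8.07131, 1730.63, 233.426),
           "ethanol": (8.20417, 1642.89, 230.300)}
MMHG_PER_BAR = 750.06

def p_sat(species, t_c):
    """Pure-component vapour pressure in bar at t_c (deg C), from the Antoine equation."""
    a, b, c = ANTOINE[species]
    return 10 ** (a - b / (c + t_c)) / MMHG_PER_BAR

def boiling_point(x_ethanol, p_total_bar):
    """Bisect for the T where the *ideal* (Raoult's law) total vapour pressure equals
    the ambient pressure. Real ethanol/water is strongly non-ideal, so this only
    shows the structure of the calculation, not quantitative results."""
    lo, hi = 20.0, 120.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        p = x_ethanol * p_sat("ethanol", mid) + (1 - x_ethanol) * p_sat("water", mid)
        lo, hi = (lo, mid) if p > p_total_bar else (mid, hi)
    return 0.5 * (lo + hi)

for p_bar in (1.0, 1.1, 1.5):
    t = boiling_point(0.25, p_bar)             # 0.25 = *mole* fraction ethanol, not vol%
    y = 0.25 * p_sat("ethanol", t) / p_bar     # ideal vapour-phase mole fraction
    print(f"P = {p_bar:.1f} bar: ideal-model boiling point {t:.1f} C, vapour ~{100 * y:.0f} mol% ethanol")
```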

Global warming and overpopulation

Archived discussion

Which is a bigger problem, global warming or the amazingly fast-growing population?--Sivad4991 (talk) 23:21, 13 December 2007 (UTC)[reply]

Thanks for asking the question. Overpopulation is the direct cause of the changes in climate. I find the trend now to put such great emphasis on climate, and so little emphasis on overpopulation, somewhat depressing. --NorwegianBlue talk 23:28, 13 December 2007 (UTC)[reply]
This article "Is Anyone Listening?" by Isaac Asimov might interest you. --NorwegianBlue talk 23:39, 13 December 2007 (UTC)[reply]
They're related but not exactly the same thing at all. Per capita the most populous countries produce far fewer greenhouse gases, etc., than do Western countries of less population but much higher energy consumption. See, for example, per capita carbon dioxide production; China and India don't even make it on the list, even though in terms of raw numbers they rank much higher (but still below the US). To blame all of climate change on overpopulation neglects the fact that there is not a direct correlation between population size and energy usage, it's a bit more complicated than that. If the US population suddenly jumped by 10% it would use a lot more energy than if the population of India jumped by 10%, even though in terms of raw bodies the US jump would be much smaller. --24.147.86.187 (talk) 23:43, 13 December 2007 (UTC)[reply]
They are both problems - but there isn't much doubt that global warming is the more urgent. If we don't get serious about it, global warming will become totally disastrous within a couple of generations. Population growth is currently running at 10% per 100 years - and that rate of increase is slowing down. What is encouraging is that the more developed countries have decreasing populations - and the population boom in India and China is levelling off. The massive boom is actually in Africa - which is unfortunate because that continent is the most severely lacking in resources to support more people.
Supporting large populations in poorly resourced countries is something you could fix with limitless energy supplies - one hopes that what comes out of solving global warming will be better ways to produce energy cheaply and safely. If you have energy - you can build desalination plants - then you can irrigate fields - then you can use intensive agriculture - and then Africa's population can grow without consequences that are too serious. But the energy problem has to be fixed before that.
But without doubt, if the earth's population was 1% of what it is now, global warming wouldn't be a problem - we could all pollute all we wanted and the planet would hardly notice. But I think it's possible to solve global warming without having to address the population problem - so that should certainly be our priority. There are really no downsides to having a planet with 1% of the present number of humans - the problem is how to get there from here.
SteveBaker (talk) 00:08, 14 December 2007 (UTC)[reply]
Asimov wrote about how he feared that the problem of getting there from here would be solved:
--NorwegianBlue talk 00:52, 14 December 2007 (UTC)[reply]
If that 1% were Americans then it could still easily be an issue in the long run. America only makes up 4% of the current world population and yet out-pollutes in both raw numbers and per capita every other country on the globe. --24.147.86.187 (talk) 00:23, 14 December 2007 (UTC)[reply]
(Edit conflict): I would like to reiterate that IMO the core of the problem is overpopulation, coupled with a very uneven sharing of wealth. The CO2 that contributes to global warming is produced by human activity, and it is a problem because there are so many of us. Sure, much more CO2 is produced per capita in rich countries, which tend to have lower birth rates. In a utopian future society where developing countries had caught up in wealth, their birth rates would probably also have lowered, leading to a stabilization of world population. Their CO2 emissions, of course, would have increased dramatically. And world population would stabilize at a level disastrously high, climate being but one of the victims. I read Asimov's essay "The Power of Progression", on which the article I linked to was based, as a youth, and it made a great impact on me. I fully agree that harsh measures must be taken to limit CO2 emissions. I believe that even harsher measures are necessary to control the population explosion. --NorwegianBlue talk 00:26, 14 December 2007 (UTC)[reply]
Sorry, but you are claiming that population and CO2 emissions correlate and they don't, which is my point. The problem is not that there are "so many of us" but that "we have become incredibly energy dependent—some far more than others—and we derive this energy from really unpleasant sources." Population size is a variable here but not the primary one—many places with very large populations (Africa) don't have correspondingly high emissions, and many places with relatively small populations (the United States) do have high emissions. Appealing to a "utopian future society" doesn't really convince anyone of anything. I don't mind Asimov but come on, the man isn't gospel. The question of the relationship of population to crime, wealth, emissions, etc. is more complicated than anyone can just gesture at and expect to be compelling. I'm not saying that overpopulation isn't an issue—obviously it is—but claiming it is the only issue of note is hyperbole and not well thought-out, Asimov or no Asimov. Overpopulation is not the driving force of all the world's ills, sorry. The world is more complex than that. (Or put another way: If you want me to be compelled, cite some stronger reasoning/evidence than just repeating what Asimov wrote over a decade ago. People have been writing about overpopulation since the 19th century, the sky hasn't fallen in yet.) --24.147.86.187 (talk) 02:24, 14 December 2007 (UTC)[reply]
  • I made no statement about correlation in the previous post, I made a statement about causality. If I were to make a statement about correlation, it would be that CO2 emissions correlate with wealth×population.
  • I'm well aware that people have been writing about overpopulation since the 19th century. Whether global warming should be classified as "the sky falling in" appears to be a matter of debate. --NorwegianBlue talk 09:20, 14 December 2007 (UTC)[reply]
Growing population is arguably not a problem now. I believe the UN predictions are for the world population to peak at 12 billion at 2050, with most growth in third-world countries. Many first-world countries (especially in Europe) already have declining populations, as first-world people tend to have significantly fewer children. What IS the problem is the pollution in general. For example, if everyone alive today used resources and polluted the way the average American does, the resources and the world would be destroyed very quickly. This is why pollution is a concern: if most of the population 'develops' to match America, we will have a huge pollution problem. We already have enough population to cause this problem. Even if the world capped at 7 billion people, the pollution problem from development of most of the world would cause huge environmental problems.--Dacium (talk) 00:41, 14 December 2007 (UTC)[reply]
I fully agree with the statement "Even if the world capped at 7 billion people, the pollution problem from development of most of the world [to match America] would cause huge environmental problems." For the very same reason, I find little comfort in predictions of world population stabilizing at 12 billion in 2050. --NorwegianBlue talk 01:05, 14 December 2007 (UTC)[reply]
If overpopulation was solved thoroughly enough, it would eliminate global warming, in addition to solving the other problems associated with overpopulation. The reverse, however, is not true, in that just solving global warming alone wouldn't help with the other problems associated with overpopulation. So one could argue that overpopulation is the bigger problem, in that if you could ask a magic genie to fix just one problem, it would make more sense to ask for a (deathless, magical) large reduction in the world's population, rather than to ask for greenhouse gas concentrations to return to preindustrial levels.
Another reason that overpopulation could be viewed as the bigger problem is that it's probably harder to solve painlessly than global warming is. Global warming can probably be solved by technological means and political willpower by a small fraction of the world's population working on the problem. Overpopulation could be helped somewhat by improving access to family planning and reproductive health care and information, eliminating incentives to have larger families, public education about the consequences of continued population growth, and improving access of women to education and economic opportunities. But really substantial population reduction in a painless way would involve most people in the world choosing to have fewer than two children. It's easier to get a small fraction of the world's population to work toward a goal than it is to get most people in the world to work toward a goal. Plus, people react a lot more negatively to a suggestion that they consider choosing to have only one child, than they do to a suggestion to replace their lightbulbs and buy a more efficient car. MrRedact (talk) 03:08, 14 December 2007 (UTC)[reply]
I disagree. There is evidence (look at the UN estimates below) that populations are stabilising as countries become more affluent. You can see that aside from Asia and Africa, populations still stabilise and even fall a little by 2150. Asia is levelling out rapidly and only Africa is really growing at alarming rates. It seems likely that - just as with Asia - Africa would self-stabilise eventually. If the UN numbers are to be believed, I think the earth's population will stabilise and then S-L-O-W-L-Y decrease without any drastic measures at all. It's certainly not an urgent panic. The effort to bring populations down would be tremendous - it would require horrible laws that would breach ethical and religious behaviors. It's virtually impossible to get a simple law passed to limit automobile gas consumption in every country around the world...you think you'd get countries to agree to pass laws to halve their birthrate?...I don't think so! We're going to need to sort out global warming LONG before we can attack the population problem. We need to reduce emissions by 40% over 20 years. You'd have to ban ALL human reproduction for an entire generation to achieve a 40% population reduction - and then how would the smaller population of workers support all of those elderly people? It would be an utter nightmare. On the other hand, switching power stations over to wind, solar and (mostly) nuclear, requiring a very do-able 40mpg average for cars and light trucks, requiring industry to cut emissions by similar amounts...it's hard - but it's very definitely do-able. We have to focus on solving the most urgent - and the most solvable - problem first. The evidence is that the population problem may well fix itself. SteveBaker (talk) 03:25, 14 December 2007 (UTC)[reply]

Woahhh there. There is something severely wrong with those numbers. Dacium said: "UN predictions are for the world population to peak at 12 billion at 2050" - there is simply no way that can be true! There are about 6.6 billion of us now - for the population to DOUBLE in 43 years is virtually impossible! This table is from our World population article and is referenced as coming from www.un.org:

World historical and predicted populations, in millions[1]

Region                            1750   1800   1850   1900   1950   1999   2050   2150
World                              791    978  1,262  1,650  2,521  5,978  8,909  9,746
Africa                             106    107    111    133    221    767  1,766  2,308
Asia                               502    635    809    947  1,402  3,634  5,268  5,561
Europe                             163    203    276    408    547    729    628    517
Latin America and the Caribbean     16     24     38     74    167    511    809    912
Northern America                     2      7     26     82    172    307    392    398
Oceania                              2      2      2      6     13     30     46     51

So we'll only hit 8.9 billion by 2050 - and even by 2150 we'll only be at 9.7 billion. SteveBaker (talk) 03:10, 14 December 2007 (UTC)[reply]
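To see what the UN table actually implies in terms of yearly growth, here is a small Python sketch computing compound annual growth rates straight from the figures above (populations in millions). It bears out the points being made: roughly 0.8% per year for the world up to 2050, slowing to under 0.1% per year afterwards, with Africa the only region still growing quickly and Europe already shrinking.

```python
# Implied average annual growth rates from the UN table above (populations in millions).
data = {"World":  (5978, 8909, 9746),
        "Africa": ( 767, 1766, 2308),
        "Asia":   (3634, 5268, 5561),
        "Europe": ( 729,  628,  517)}

def cagr(p0, p1, years):
    """Compound annual growth rate between two populations."""
    return (p1 / p0) ** (1.0 / years) - 1.0

for region, (p1999, p2050, p2150) in data.items():
    g1 = 100 * cagr(p1999, p2050, 51)     # 1999 -> 2050
    g2 = 100 * cagr(p2050, p2150, 100)    # 2050 -> 2150
    print(f"{region:7s} 1999-2050: {g1:+.2f}%/yr   2050-2150: {g2:+.2f}%/yr")
```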

I fully agree that it is urgently important to develop alternative energy sources, wind, solar and nuclear, and to limit CO2 emission from industry and transportation in whatever ways possible. However, when you write "We're going to need to sort out global warming LONG before we can attack the population problem.", I'm puzzled. In my view, much of what we have been discussing in this thread is whether we should treat the symptoms or the cause of the disease. I think we should do both. However, when I read the previous statement, it translates to "First, let's treat the symptoms and ignore the cause". You also write that "The evidence is that the population problem may well fix itself". I agree, one way or another it will. However, it may do so in very unpleasant ways. Therefore, I believe it is important to increase awareness that global warming and overpopulation are tightly interrelated. --NorwegianBlue talk 10:06, 14 December 2007 (UTC)[reply]

This is a reference desk, remember? Not a debate forum. --Anonymous, 16:04 UTC, December 14, 2007.

(You might think that but...)
It's not a matter of 'symptoms' and 'causes'. In the end (and I'm being deliberately vague about units and definitions) we have this "thought equation":
   GlobalWarming = PopulationSize x CarbonFootprintPerPerson
If we need to reduce global warming to (say) one quarter of its present value in the next 20 years, we can either reduce the population to a quarter of its present value (in 20 years) - or we can reduce the carbon footprint of each person by a factor of four over the same period (or some combination of the two). The problem is that if you attempted to fix global warming by reducing population, you simply couldn't do it fast enough without going out there with machine-guns and taking out 75% of the people out there. That would have to be 75% across the board, too; you couldn't just take out 100% of the sick and elderly and all of the prison population and 30% of the other adults and leave the children alone. If you did that, the problem would come back again 20 years later when the kids are fully grown.
Even if you somehow prevented all human reproduction for the next 20 years, something like 70% of the people who are alive today would still be living - and you'd only have reduced the population by a third or so...nowhere near enough to prevent a global warming disaster. Cutting population by humane, acceptable means would take several hundred years - and we just don't have that long.
There simply isn't a way to solve global warming by attacking this particular root cause. We have to look at the other factor on the right side of the equation (which is just as much a 'root cause' as population size). We have to cut per-person CO2 production by a factor of four. This is also exceedingly difficult - but it's certainly not impossible. With care we can halve the amount of energy each person uses - better insulate our homes, have 45mpg cars, transport more goods by rail, waste fewer things that could be recycled, use less packaging, eat local food instead of shipping it, find better industrial processes, build "combined heat and power" schemes in cold parts of the world...you name it. And we can also try to halve the amount of CO2 we produce in generating that energy (carbon sequestration, biomass fuels, nuclear, wind, solar). That's all do-able...although it's not cheap and requires politicians who are not invertebrates.
But killing 75% of the population just isn't going to happen and even the most draconian birth control measures won't make a dent in the problem in a reasonable time-frame.
SteveBaker (talk) 16:38, 14 December 2007 (UTC)[reply]
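To make the arithmetic behind that equation concrete, here is a tiny sketch using numbers already in this thread (the 40% reduction target, and population growth at roughly the rate implied by the UN table above); they are illustrative, not a forecast. The conclusion is the one argued above: on a 20-year horizon the population factor barely moves, so nearly all of the cut has to come from the per-person footprint.

```python
# Plugging illustrative numbers into the "thought equation" above:
#   GlobalWarming ~ PopulationSize x CarbonFootprintPerPerson
total_target = 0.60     # total emissions must fall to 60% of today's (the 40% cut)
pop_growth = 0.008      # ~0.8% per year, roughly what the UN table implies
years = 20

population_factor = (1 + pop_growth) ** years           # ~1.17
per_capita_factor = total_target / population_factor    # ~0.51

print(f"population grows by a factor of {population_factor:.2f}")
print(f"so the per-person footprint must shrink to {per_capita_factor:.0%} of today's")
```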
Taking heed of Anonymous' reminder that this is not a debate forum, it might be appropriate to return to the OP's question: Which problem is bigger, global warming or population growth? Since the two are interrelated, as shown by SteveBaker's equation above, the question needs to be rephrased to get a meaningful answer. --NorwegianBlue talk 12:47, 15 December 2007 (UTC)[reply]


What is the resistance of the human body?

—Preceding unsigned comment added by 59.93.198.63 (talk) 06:08, 3 February 2008 (UTC)[reply]

It's quite variable, depending on where you measure, skin condition, fat/lean content, the connections used, and other factors I can't think of offhand. This article should be illustrative. — Lomn 06:39, 3 February 2008 (UTC)[reply]
It also depends on what you're trying to resist.--Shantavira|feed me 09:43, 3 February 2008 (UTC)[reply]
Presumably electricity (electrical resistance). —Pengo 14:22, 3 February 2008 (UTC)[reply]
Maybe Shantavira meant what type of signal? I doubt skin is an ohmic conductor. Trimethylxanthine (talk) 04:28, 4 February 2008 (UTC)[reply]
1200 Ohm was published on some obscure website.--Stone (talk) 13:02, 4 February 2008 (UTC)[reply]
That is clearly wrong. Although we're not supposed to, here's some original research. I found my ohmmeter and a box of resistors. Measuring from arm to arm, after having licked my fingers to reduce resistance at the surface, gave a reading of slightly over 100 kOhm. To verify this, I checked against various resistors, the closest being 130 kOhm. With dry skin, the resistance is considerably higher, and I was unable to get a consistent reading, but it is certainly higher than 500 kOhm. --NorwegianBlue talk 14:03, 4 February 2008 (UTC)[reply]
I had the same experience when measuring with an ohmmeter, but I think the resistance is lower at higher voltages (if it weren't, it would be safe to touch the household power supply - you need about 50 mA to die, or so I learnt). BTW, I've heard rumors that someone killed himself with a 9 V battery by inserting the contacts into the veins in his right and left hand .... Icek (talk) 02:03, 6 February 2008 (UTC)[reply]
Wow! Do you have a source? Googled it with no luck. Should qualify for a Darwin Award! --NorwegianBlue talk 09:10, 6 February 2008 (UTC)[reply]
Unfortunately not - I read it somewhere on the internet a few years ago. What is the electrical conductivity of blood and lymph? With a salinity of 0.7% it's maybe about 1 S/m (seawater's is 5 S/m according to our article, but other ions like phosphate will probably make blood's conductivity larger). If the current is 50 mA, then the conductance should be 1/180 S. If the distance between the contacts is 1.5 m, and we assume equal thickness along the conductor, its cross section should measure 83 cm², e.g. a cylinder about 5 cm in radius. At least it looks as if it could be true. Icek (talk) 14:41, 6 February 2008 (UTC)[reply]
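Icek's estimate is easy to re-run; here is a minimal Python version using exactly the same assumed numbers (1 S/m for blood/lymph, 50 mA, 9 V, a 1.5 m hand-to-hand path), just to show the algebra G = σA/L rearranged for the cross-section.

```python
from math import pi, sqrt

# Re-doing the estimate above, step by step, with the same assumptions.
current = 0.050     # A, the oft-quoted dangerous current
voltage = 9.0       # V, the battery in the story
sigma = 1.0         # S/m, assumed conductivity of blood/lymph
length = 1.5        # m, hand-to-hand path

conductance = current / voltage          # 1/180 S
area = conductance * length / sigma      # from G = sigma * A / L
radius = sqrt(area / pi)

print(f"conductance: {conductance:.4f} S")
print(f"required cross-section: {area * 1e4:.0f} cm^2 (radius ~{radius * 100:.0f} cm)")
```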

Dimensions in a World (includes string theory explanation)

How many dimensions exist in the real world? And what does this really mean to human beings? Can a specific person exist in a separate world of different dimensions, if that exists? Is it true that Einstein has already affirmed this? Allen Chau, from Hong Kong 210.0.136.138 04:47, 2 October 2007 (UTC)[reply]

There are as many dimensions as we define to exist. See degrees of freedom. One might say that the number of dimensions is equal to the rank of the system matrix. Alternatively, one might choose to describe the spatial extent of an object, which would only include three dimensions. One might also choose to represent system space in terms of phase or velocity - so we could easily have six dimensions. These concepts are quite complicated, but in short summary for layman's purposes, there are as many dimensions as we feel like adding to describe the situation at hand. Most systems can be easily described with three spatial dimensions (and often time as an additional dimension). Nimur 04:53, 2 October 2007 (UTC)[reply]
I think the OP was referring only to the commonsense meaning of "dimension", as in space and time dimensions. We can plainly see three dimensions in space, and only three. As Einstein explained, time can be thought of as a fourth, somewhat weird dimension. This gives our world four dimensions that we can observe. No higher dimensions have ever been observed, ever. Now, if a person were to exist in an "alternate set of dimensions" he'd better damn well be in another universe in the greater multiverse, or one of the many-worlds, because if he isn't, there's pretty much nothing but speculation to explain it (er, those first two were also speculation, but they've been floating around for quite a while). Now, the only remotely close to accepted theory that allows alternate dimensions to exist in our own universe without our observing them is string theory and its variants, but absolutely nothing can occupy these unobservable dimensions (except for strings themselves, which can sort of wiggle around in them). Everything you've seen on Sci-Fi shows about a person entering an "alternate phase" or something like that, and suddenly no one can see him, is entirely bullshit. Someguy1221 05:01, 2 October 2007 (UTC)[reply]
Mathematicians and scientists often deal in higher dimensions and calculate things using them. They can be assumed to exist on a theoretical level, in the same way that the square roots of negative numbers are assumed to exist on a theoretical level. These assumptions are useful in such contexts. But whether any human mind can actually visualise or even comprehend what they mean, outside of such theoretical considerations, is a moot point. -- JackofOz 13:05, 2 October 2007 (UTC)[reply]
Why do you say that? Have you ever read Flatland? The 2 dimensional people would have had 2 dimensional physics and called time the 3rd, and told their ref desk OPs that it's nonsense to think that you can just poof out into the 3rd dimension.. which of course the sphere does in the story, baffling their scientists --frotht 18:05, 2 October 2007 (UTC)[reply]
Erm, what? Flatland is fiction, by the way. In modern physics, if a spatial dimension exists, there is utterly nothing to prevent any particle from moving through it. And so there would be some quite severe consequences. For example, chirality could not exist in three dimensional objects, which would conflict quite severely with many observations in chemistry. That's just the simplest to imagine example (in my opinion) of where the existence of a fourth spatial dimension would alter the laws of physics (er, chemistry, whatever). Now, string theory does allow weirdness like the existence of extra dimensions that are unobservable to only some observers. For example, every particle in the universe could be bound to a "three dimensional surface" of a higher dimensional object. Thus, as if flatland were on the surface of a sphere, we would exist in a higher dimensional universe we could not observe. And this does not necessarily prohibit other objects, universes, whatever, from not being bound and limited by this three dimensional surface we are bound to. The problem is that string theory is presently unverifiable. So it is quite correct to say that there is no accepted theory in physics that would allow the existence of unobservable spatial dimensions. Someguy1221 20:08, 2 October 2007 (UTC)[reply]
You can define any point in space relative to some fixed coordinate system using three distances. This makes it a three-dimensional world. If you follow Einstein and wish to employ the mathematical convenience of talking about 'space-time' then you need to add one time measurement. This makes three or four dimensions depending on what you are trying to measure. Nimur's degrees of freedom argument is wrong because that's an argument about measuring things other than space itself. You can choose to measure space with things other than three distances - but no matter what, you always need just three numbers...so for example, you can measure every point in space using two angles and one distance ('spherical polar coordinates') or one angle and two distances ('cylindrical polar'). In space/time, you always need four numbers. The exact formulation doesn't matter - the dimensionality of space (or space/time) doesn't change depending on how you measure it.
The extra dimensions that string theory predicts are claimed to be 'very small'. Understanding what this means is tricky - we have to take it in small steps (there is a small numeric sketch of the 'wrap-around' idea after this post):
  • Suppose for a moment that we were observing some two-dimensional creatures - living on the surface of a flat piece of paper. In our present world view, the paper is flat and infinitely large. There is no 'up/down' dimension for them because they are 2D creatures - they only have left/right and forwards/backwards.
  • But suppose one of those two spatial dimensions (let's pick the left/right dimension) was 'small' - just 10 miles across, say. The universe can't have 'edges' - it has to 'wrap around'. By this, I mean that moving in the left/right dimension for exactly 10 miles would take you all the way around that dimension and back to where you started - for a 2D creature this would be a bit strange - but for us 3D creatures watching them, it would be like they were living on the surface of an infinitely long cylinder of paper that's just 10 miles in circumference. They could move as far as they wanted along the length of the cylinder - but if they moved a long distance in the other direction, they'd go all around the cylinder and back to the start. Because their 2D light beams are stuck in the 2D surface, if they looked off to the left or right using a pair of decent binoculars, they'd be able to see themselves 10 miles away.
  • In a three dimensional universe like ours, if our up/down dimension was only 10 miles across then you'd be able to travel as far as you wanted left/right or forwards/backwards - but if you moved upwards by 10 miles (or downwards by the same amount), you'd be back where you started. Also, if you were out in space and looked up using a pair of binoculars, you'd be able to see your own feet, just 10 miles away. Looking left or right or forwards or backwards - and everything looks kinda normal.
  • Now - imagine that third dimension isn't 10 miles across - but just one millimeter across. We would be almost like 2D beings - almost all of our existence would be in two dimensions, since nothing in the universe could be more than a millimeter in height - and moving up or down would have almost no effect on your life. That third dimension exists - but it's hardly any use at all. We would have to be almost perfectly flat creatures - it would be ALMOST a 2D world...but not quite.
  • Now imagine that instead of the up/down dimension being a millimeter across, it's much MUCH smaller than the diameter of an atom...in that case we'd have no way to know that there even was a third dimension - it would seem exactly like being in a flat, 2D world, since any motion at all in the 3rd dimension would have no effect and no object could be as tall as even an atom...atoms themselves would have to be almost exactly 2D objects. We wouldn't even know that the up/down direction existed at all. If the third dimension were that small, we might as well be living in a 2D world for all that it would matter to us.
  • OK - so back to a normal 3D world. What would happen if there were a 4th dimension? Well - we can't see it, measure it...it's not in any way detectable...so we might jump to the conclusion that there isn't one. But if the 4th dimension existed but was very small (much less than the diameter of an atom) - then it could very well be there but we'd be totally unaware of it...unable to detect it. It would SEEM like we were living in a 3D world.
The string theorists claim that there are DOZENS of extra dimensions beyond the three we can normally experience - but all but the first three are so small that we can't tell that they are there - even with the most sophisticated equipment we have. I've heard these extra dimensions described as being 'rolled up'. They might very well be correct - but we have no way to know.
SteveBaker 13:16, 2 October 2007 (UTC)[reply]
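As a throwaway numeric illustration of the 'wrap-around' idea in the bullet points above (and nothing more than that; this is plain modular arithmetic, not string theory): a periodic coordinate behaves like an ordinary one when its circumference is large, and becomes effectively invisible once the circumference is far below atomic scale.

```python
# A one-line model of a "rolled up" dimension: a coordinate that wraps around
# with a fixed circumference. Purely illustrative.
def move(x, dx, circumference):
    """Move along a periodic ('rolled up') dimension and wrap around."""
    return (x + dx) % circumference

ten_miles = 16_093.0    # metres
tiny = 1e-35            # metres, far below the size of an atom

print(move(0.0, 20_000.0, ten_miles))  # walk 20 km "up": you lap the dimension and end up ~3.9 km along
print(move(0.0, 20_000.0, tiny))       # the same walk in a tiny dimension: the wrapped position is
                                       # smaller than anything measurable, so the dimension is invisible
```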
Just to note: dimensions aren't like they appear in cartoons. They aren't alternative worlds somehow layered on top of ours where aliens live (though note that in the many-worlds interpretation of quantum mechanics—something entirely distinct from the idea of "dimensions" in science—there can in fact be multiple layered realities). They aren't ways to conduct psychic or supernatural phenomena. They are different ways in which geometry can be expressed in the world in which we live, basically. The dimension of time can be as mundane as noting that things change — the apple disintegrates on your table as it moves through the time dimension. Dimensions are not all that exciting, from a science fiction point of view.
Einstein's work, via Minkowski's interpretations of it, basically reduced discussions of time and space to questions of geometry, and emphasized that time has a geometrical, spatial component to it. This is why he is often credited with introducing the idea of time as a fourth dimension, though he was not really the first person to introduce such an idea, and in fact most of our understanding of "Einstein's work" in this regard is through the filter of Minkowski, who "geometricized" Einstein in really wonderful ways. --24.147.86.187 13:50, 2 October 2007 (UTC)[reply]
I'm not sure that I'd say that extra dimensions are not exciting in a science-fiction kind of way. If there are more than three spatial dimensions and they are 'small' (per string theory) then, indeed, they aren't much fun. But if there were a fourth dimension - but something about our minds/bodies/physics meant that we somehow couldn't perceive it - then indeed there would be sci-fi possibilities. An ability to move in that fourth dimension would allow you to do some pretty incredible tricks. Escaping from a locked (3-dimensional) room might be as simple as taking a step in the 'other' dimension, walking past the room then taking a step back again into our normal world. It would be like trying to imprison a 3D person in a 2D rectangle - they'd just step out of it using the 3rd dimension. You'd be able to tie knots that would be impossible to untie...all sorts of weird stuff. A lot of people worry about what the 4th dimension would look like - but that doesn't bother me at all - we can use computer graphics to simulate exactly how a 4D world would project onto 2D retinas, just as we understand how a 3D world projects onto a 2D retina. The ickier thing to contemplate is that some of the string theorists want more than one time dimension - and that's really hard to get one's head around. We can guess what 4D space would be like to 3D beings by analogy with how 3D space would seem to 2D beings. But we only perceive 1D time - and we can't use analogies to extrapolate out to 2D time...it's a real head-spinner. SteveBaker 15:19, 2 October 2007 (UTC)[reply]
There's a very clever little story along these lines by Heinlein, called ...and He Built a Crooked House. The opening half-page alone is worth the price of the anthology you get the story in. An LA architect builds a house in the shape of a tesseract, but cut open and unfolded into three dimensions, as you might cut a 3-d cube and unfold it into a 2-d shape. Then there's an earthquake....
The story is very carefully constructed to be geometrically accurate and it's an interesting exercise to verify that. A few details, like what happened to certain walls, are glossed over, but after all it's just a story. --Trovatore 17:34, 2 October 2007 (UTC)[reply]
As I linked above, you might too enjoy Flatland. Many people (including myself) report it being much easier to visualize and work in additional spatial dimensions after reading flatland. I disagree with 24.147 and the other guy that extra dimensions aren't like cartoons- stevebaker's got the right idea from a common sense approach, which is what I'm inclined to believe since string theory isn't really demonstrated by anything in our real world -frotht 18:10, 2 October 2007 (UTC)[reply]
Yep - I agree, I'm quite doubtful that String Theory will ever be shown to be correct. It's a shame because it's very elegant - and correct things are usually elegant! But a theory that's unfalsifiable is not acceptable - so unless there is some kind of major new breakthrough, I think we have to put string theory back on the shelf and go back to looking for something else. SteveBaker 18:34, 2 October 2007 (UTC)[reply]
Despite all this talk of "rolling up" and string theory, I stand by my original assessment - there are exactly as many dimensions as we choose to model. I have worked physics problems which are not "wacky" (String Theory), but still imply high dimensionality - for example, a triple-pendulum can be described with six or 12 dimensions (perhaps each joint has a displacement, a momentum, and an acceleration; and maybe we want to throw in a nonlinear potential such as a magnetic attraction at each joint to an external magnet). Each one of these dimensions is a physical parameter where motion, displacement, energy, and other physical quantities can "go." We might start calling the dimensions (θ1, θ2, ...), (p1, p2...) and so forth. Dimensions can interact via the governing equations, derived from fundamental physical laws. We might take care to set up dimensions which are linearly independent and orthogonal, or we might not choose to do so. The system equations would be straightforward, and the dimensions would be quite complex.
I could just as well model the system in three dimensions of an absolute fixed frame, (X, Y, Z) and time (T). These dimensions are very straightforward, but the system equations would become much nastier, since the relationships would become very highly coupled. But, I could never reduce the complexity to fewer than the total number of variables in the system to begin with.
The same can be said of String Theory and any other "magic" theory which introduces a new variable. Decoupling complex interactions into "separate" dimensions is an operation on a mathematical model and does not change the system in any way. Simple transforms are heavily detailed in the linear transform article. More sophisticated decouplings are the crux of a lot of modern research topics. Nimur 17:34, 2 October 2007 (UTC)[reply]
There is a big difference between using multidimensional mathematics to solve a problem and saying that this many dimensions exist in space. It's not at all the same thing. I too have used as many as 14 dimensions to solve work-related problems in computer graphics...but the world still only has 3 dimensions.
Example: Computer graphics hardware really only draws triangles. If you want to draw a quadrilateral, it is usually split into two triangles. If you have two triangles that you think may originally have made up a quadrilateral - but you really wish (for various arcane reasons) that you could have split the quad along the OTHER diagonal, then you need to check that the two triangles lie in the same plane (if they don't then they didn't come from a quad and swapping the diagonal will do weird things to the graphics). This is a simple 3D problem as you might expect. However, if the triangles have (for example) smoothly varying colours that are linearly interpolated between their vertices - then swapping the diagonal can change the look of the final quad (imagine one triangle has three red vertices and the other has two red and one green - as is, the center of the line between the two triangles is red - but if you swap the diagonal, you get an orange colour in the middle - not at all the same thing). To check that it's safe to re-split it, you also need to check for "planarity in colour space" (Red/Green/Blue space) - so now you are doing a six-dimensional check in X/Y/Z/R/G/B space. But there are other parameters of a triangle in a graphics system such as texture coordinates, surface normal, transparency and so on - and to do a proper job, you need to know that ALL of them are 'planar'. I ended up with 14 per-vertex parameters - so I had to check for planarity in 14-dimensional space!
So yeah - it's easy to end up using math in higher dimensions as a convenient way of solving real-world problems - but that doesn't tell you anything about the number of dimensions of 'space'...which is still (seemingly) three. SteveBaker 18:31, 2 October 2007 (UTC)[reply]
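Here is a minimal sketch of the kind of N-dimensional planarity test SteveBaker describes, assuming numpy; the attribute layout (positions plus an RGB colour) and the tolerance are made up for illustration and are not from any particular graphics codebase. The idea is simply to stack all the per-vertex attributes into one long vector per vertex and check that the four vertices of the two triangles span only a 2-D affine subspace.

```python
import numpy as np

def coplanar_in_attribute_space(v0, v1, v2, v3, tol=1e-6):
    """Treat each vertex as one long vector of per-vertex attributes
    (x, y, z, r, g, b, u, v, nx, ny, nz, ...) and test whether all four
    vertices lie in a common 2-D plane of that N-dimensional space."""
    m = np.stack([v1 - v0, v2 - v0, v3 - v0])   # three edge vectors, each N-dimensional
    s = np.linalg.svd(m, compute_uv=False)      # singular values, largest first
    return s[-1] <= tol * max(s[0], 1.0)        # affine rank <= 2 means "planar"

# Hypothetical example: x, y, z plus an RGB colour per vertex (6-D attribute space).
a = np.array([0.0, 0.0, 0.0,  1.0, 0.0, 0.0])
b = np.array([1.0, 0.0, 0.0,  1.0, 0.0, 0.0])
c = np.array([0.0, 1.0, 0.0,  1.0, 0.0, 0.0])
d = np.array([1.0, 1.0, 0.0,  1.0, 0.5, 0.0])   # same geometric plane, different colour

print(coplanar_in_attribute_space(a, b, c, d))   # False: re-splitting would change the shading
print(coplanar_in_attribute_space(a, b, c, np.array([1.0, 1.0, 0.0, 1.0, 0.0, 0.0])))  # True
```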
Note that when I said extra dimensions weren't exciting, all I meant is "the current theories of extra dimensions are not that interesting when compared with the way that the idea of extra dimensions is invoked in popular fiction." You know, dimensional gateways, portals of alien worlds, etc. That's all. Sure, sure, Flatland, but that's not what most people have in mind when they talk about "dimensions". --65.112.10.56 20:41, 2 October 2007 (UTC)[reply]
I think we're largely in agreement, SteveBaker. Whether we are doing graphics or physics or string theory, adding new variables to the mathematics does not actually change the real system's dimensionality. It's only our model that changes. Nimur 16:01, 3 October 2007 (UTC)[reply]
Yeah - largely. I believe the string theorists claim that all of their extra dimensions are real, actual spatial dimensions - but 'curled up'. So small that we can never detect them. But they need the extra dimensions to give their strings the ability to vibrate in enough different modes to fulfill all of the things that are demanded of them in the theory. Super-strings are very tiny indeed - vastly smaller than an atom - so even the very tiny extra dimensions are large enough to let them vibrate in those directions as well as the usual three. SteveBaker 02:28, 4 October 2007 (UTC)[reply]

Reason for shift in apparent solar midday at winter solstice

[edit]

The sun begins to set later about 10 days before winter solstice, and the sun continues to rise later until about 10 days after solstice. In effect, this shifts the apparent solar midday later around the time of the winter solstice. Can anyone explain to me, in layman's terms, why this happens? (I have read the article Equation of time and largely failed to understand it.) Thanks. Marco polo (talk) 02:42, 31 December 2007 (UTC)[reply]

What you have to understand is that the Sun's movement in the sky that you see every day is not only caused by the Earth's rotation. Most of it is, but a small fraction is caused by the Earth moving in its orbit around the Sun. Think of a diagram of the Earth in its orbit. In one day, the Earth has moved 1/365 of the way around its orbit, or a little less than 1°, right? But that means that, over the course of a day, the Sun is now in a different direction, as seen from the Earth, than it was. It's moved by about 1°. So the Earth has to rotate through almost 361°, not 360°, to bring the Sun back to the same place in the sky. (The difference between the two amounts adds up to exactly 360° per year, corresponding to the Earth making one revolution around the Sun. The time to rotate 360° is called a sidereal day and there is one more of those in a year than there are ordinary or "solar" days.)
Okay, now the tricky part is that the extra amount that I called "about 1°" is not the same every day. This is because when the Earth orbits around the Sun, it does not move in an exact circle (the distance to the Sun changes by about 3,000,000 miles from nearest to farthest) and it does not move at a constant speed. So on a certain date the Earth might have to rotate by only 361.1° (say) to bring the Sun to the same place in the sky. That means that instead of 24 hours from one solar midday to the next, it takes 24 x 361.1/361 hours, and the solar midday shifts later by 24/3610 hours or about 24 seconds. On another date at another time of year, the Earth only has to rotate 360.9° from one solar midday to the next, and midday shifts back the other way against the clock. I just used 0.1° and 24 seconds as an example; I don't know what the actual maximum of the daily shift is.
These midday shifts are going on all year (except for the times when the shift happens to be zero), and the cumulative shift can get to about 15 minutes either side of the "middle". But you only notice it near the solstice because it's when the length of the day is nearly constant that you see the sunrise and sunset shifting the same way.
--Anonymous, 05:50 UTC, December 31, 2007.
Thank you: I understood that! Marco polo (talk) 15:53, 31 December 2007 (UTC)[reply]
If you were to take a picture from the same place at 12:00 noon every day (ignoring Daylight Saving Time/Summer Time), the pattern of the sun's locations would be called an analemma. That article may help give a visual interpretation of what's happening. -- Coneslayer (talk) 16:55, 31 December 2007 (UTC)[reply]
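To put rough numbers on the explanation above (and on the east-west width of the analemma), here is a hedged Python sketch using one common textbook approximation of the equation of time; the coefficients 9.87/7.53/1.5 come from that approximation, not from the answers above, and sign conventions vary between sources:

```python
import math

def equation_of_time_minutes(day_of_year):
    """Approximate equation of time (sundial time minus clock time), in minutes.

    Uses the common approximation EoT = 9.87*sin(2B) - 7.53*cos(B) - 1.5*sin(B),
    with B = 360*(N - 81)/365 degrees. Good to roughly half a minute.
    """
    B = math.radians(360.0 * (day_of_year - 81) / 365.0)
    return 9.87 * math.sin(2 * B) - 7.53 * math.cos(B) - 1.5 * math.sin(B)

eot = [equation_of_time_minutes(n) for n in range(1, 366)]
print("cumulative shift: %.1f to %.1f minutes" % (min(eot), max(eot)))

# Largest day-to-day change, i.e. the biggest daily drift of solar midday.
daily = max(abs(eot[n] - eot[n - 1]) for n in range(1, 365))
print("largest daily shift: about %.0f seconds" % (daily * 60))
```

With this approximation the cumulative shift runs from roughly -14 to +16 minutes, and the largest day-to-day drift of solar midday comes out at around half a minute, falling close to the winter solstice, which is part of why the effect is so noticeable then.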

Voice

[edit]

What is it that distinguishes a male voice from a female voice? Why do they sound different? Black Carrot (talk) 19:58, 3 February 2008 (UTC)[reply]

Length of vocal cord is the short answer, see Human voice. SpinningSpark 20:26, 3 February 2008 (UTC)[reply]
You might also find Castrato interesting. SpinningSpark 20:32, 3 February 2008 (UTC)[reply]

I was looking for a longer answer. Black Carrot (talk) 02:14, 4 February 2008 (UTC)[reply]

Is this l-o-n-g enough? --hydnjo talk 08:39, 4 February 2008 (UTC)[reply]
I think Black Carrot's question is a good one, which deserves a far better answer than it has received so far. If it were a mere question of vocal cord length, which basically translates to frequency, it only raises several new questions:
  • Why do children that have the same vocal cord length as women sound different from women, even if they speak in a grown-up way?
  • Why does a song played fast sound like the chipmunks?
  • Why is it usually easy to distinguish an Afro-American male from an American male of European descent, even when they use exactly the same words?
I'll try my best at answering, but this is far from my areas of expertise, so if someone who actually knows something about this comes along, I shall gratefully stand corrected for any mistakes that I might have made. This is off the top of my head.
Anatomical reasons: The vocal cords in themselves produce a reedy sound, rich in overtones, something like a square or triangular wave (not sure which). The sounds produced are shaped by the resonances of the vocal tract. These resonances have fairly equal frequencies in women and men, but vary between the different vowels we produce because we change the shape of the resonant cavity when speaking. These resonant frequencies are called formants. The first three formants are the most important ones. What distinguishes one vowel sound from another is not the ratio of the fundamental frequency to the formants, but the ratio between the formants themselves. Therefore, when a woman speaks, the ratio between the fundamental and the first formant is quite different from the ratio between the fundamental and the first formant in a male voice. When you speed up a song, the ratio between the formants is preserved, but they have the wrong frequency. Therefore, you recognize the words, but it sounds unnatural. Children have smaller heads, and lack fully developed sinuses. I would expect that this results in their formants being located at a higher frequency, but since the ratio is preserved, the vowels are easy to distinguish. This may be one of the reasons for the wonderful timbre of a boy soprano, the elevated frequency of the first formant is well above the frequency of the fundamental, making it easy to distinguish the vowels. (WARNING: WP:OR). In contrast, adult female sopranos have a problem in that their highest notes have a frequency similar to the first formant, making it difficult to distinguish vowel sounds at high frequencies.
Cultural reasons: It is my impression that women choose different words, and also use slightly different intonation. This may vary from culture to culture. The stereotypical gay parody in TV shows comes to mind. I would also suspect that cultural reasons explain the relative ease in distinguishing a male Afro-American from a male American of European descent.
Disclaimer: I am not an expert in this field. The above may contain mistakes. If you know something about this, please correct the mistakes. --NorwegianBlue talk 13:14, 4 February 2008 (UTC)[reply]
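As a rough illustration of the resonance argument above (my numbers, not part of the original answer): if the vocal tract is idealised as a uniform tube closed at the glottis and open at the lips, its resonances fall at odd multiples of c/4L, so a shorter tract pushes every formant upward. The tract lengths below are typical textbook values, for ballpark comparison only:

```python
def formants_uniform_tube(tract_length_m, n_formants=3, speed_of_sound=343.0):
    """Resonances of a uniform tube closed at one end (a crude vocal-tract model).

    f_n = (2n - 1) * c / (4 * L), n = 1, 2, 3, ...
    Real formants move around as the tract changes shape; this only shows
    why a shorter tract pushes all the formants up in frequency.
    """
    return [(2 * n - 1) * speed_of_sound / (4.0 * tract_length_m)
            for n in range(1, n_formants + 1)]

for label, length_m in [("adult male (~17 cm)", 0.17),
                        ("adult female (~14.5 cm)", 0.145),
                        ("child (~11 cm)", 0.11)]:
    freqs = ", ".join("%.0f Hz" % f for f in formants_uniform_tube(length_m))
    print("%-24s %s" % (label, freqs))
```

The crude tube model gives roughly 500, 1500 and 2500 Hz for an adult male, which is in the right neighbourhood, and noticeably higher values for shorter tracts, consistent with children (and, to a lesser extent, women) having higher formants.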
My impression is that "stereotypical" gay and black speaking styles are nothing more or less than accents. --Sean 00:01, 5 February 2008 (UTC)[reply]
Agreed, and that was exactly my point - there may be male and female accents or manners of speech, which may be difficult to separate from the physical features of the male/female voice. --NorwegianBlue talk 00:44, 5 February 2008 (UTC)[reply]
Relevant link (added 2009-02-06): Gender differences in spoken Japanese. --NorwegianBlue talk 20:53, 6 February 2009 (UTC)[reply]

Turning off all electronic equipment during take-off and landing

[edit]

Why are airline passengers instructed to turn off all electronic equipment during take-off and landing, even equipment that does not contain radio transmitters or receivers? I overheard a conversation recently, in which a fellow passenger claimed that it is done to ensure that people pay attention to what is being said over the loudspeakers, in case of emergencies during the most critical parts of a flight. Can anyone confirm this, or suggest other reasons for this requirement? --NorwegianBlue talk 11:07, 10 October 2008 (UTC)[reply]

I've heard the same reason (on numerous occasions) as you suggest. Like you say, it ensures people are not distracted if there is a need to make an announcement or take emergency decisions. I have been told to stop reading my book before, so I would suggest it is more about paying attention than it is about anything else. 194.221.133.226 (talk) 11:19, 10 October 2008 (UTC)[reply]
In the past it could have been to do with interference (even without transmitters any electronic equipment will emit some EM, I believe), but I'm pretty sure all critical systems on planes are shielded these days. As such, it is probably just to make sure people pay attention and, if not, at least don't make too much noise stopping other people from hearing announcements. On a related note, the reason you aren't allowed to use mobile phones in hospitals is simply because it annoys people; it's been a long time since medical equipment was sensitive to such things. --Tango (talk) 11:21, 10 October 2008 (UTC)[reply]
In general it's both. EM interference is a legitimate risk (though a much smaller one than when the rules were written in the 60s and 70s), and it is easier to switch off all electronics than have flight attendants try to figure out which ones actually need to be disabled. At the same time, the FAA also cites the "possibility of missing important safety announcements during these important phases of flight" [1] as an additional reason to turn off electronics during takeoff and landing. Dragons flight (talk) 11:32, 10 October 2008 (UTC)[reply]
Note as well that handheld electronics represent dangerous projectiles in the cabin in the event of a crash. Headphone cables can present a tripping hazard. On takeoff and landing, the cabin crew want you to stow everything securely, not just electronics. TenOfAllTrades(talk) 13:30, 10 October 2008 (UTC)[reply]
The turning off electronics thing is just to "make sure", but realistically there's no point. If turning on an electronic device could really interfere with the cockpit's electronics, then terrorists would have a field day. 98.221.85.188 (talk) 14:41, 10 October 2008 (UTC)[reply]
The initial justification, Crossair Flight 498, was pretty lame since there were other confounding factors involved. That said, I can hear my speakers making odd noises when I point my cell phone at them the right way, and if I were talking to a control tower to avoid smacking into somebody at 400 knots, I think I'd rather the pilot have a clear signal. SDY (talk) 14:51, 10 October 2008 (UTC)[reply]
Your speakers (and the cables attached to them) aren't shielded from EM interference, I would hope the flight deck radio is. --Tango (talk) 15:10, 10 October 2008 (UTC)[reply]
How does that work with wireless communication, though? Then again, I'd imagine that the cell phone bands are all quite separate from the bands that aircraft use. SDY (talk) 15:18, 10 October 2008 (UTC)[reply]
Is anything in planes wireless? The computers they use for duty free transactions might be, but that's hardly a critical system! --Tango (talk) 15:27, 10 October 2008 (UTC)[reply]
Many planes have satellite radios, satellite TV, etc. for the passengers. Not to mention all of their telemetry equipment that is used to monitor where the plane is, how it is flying, etc. by flight control. --98.217.8.46 (talk) 15:49, 10 October 2008 (UTC)[reply]

The thinking is that if some of the electronic equipment onboard had been stripped of shielding (say, by shoddy maintenance) then your electronics could interfere. Of course, the plane has a high-voltage radio of its own, which would produce a thousand times more interference than your iPod. It is a dumb rule, but lots of these FAA rules are. They are rituals meant to make you feel safe, not actual safety measures. The lifejackets are a great example. How long do they spend teaching you how to put on a lifejacket? "Your life jacket is located under your seat, or under the arm rest between the seats. Pull the life jacket over your head and attach the strap. Infant life jackets will be distributed, if required. Do not inflate your jacket until you leave the aircraft. Pull the strap until the jacket is properly adjusted. If the life jacket does not inflate or needs more air, blow through the rubber tube." It's a nice image, you bobbing safely in the water with a bright yellow life jacket on. How many people have they actually saved? Zero. Meanwhile hundreds of people die from smoke inhalation which can be prevented by a lightweight mask. There is no rhyme or reason. Plasticup T/C 16:05, 10 October 2008 (UTC)[reply]

Are you sure of that number? I'm aware of several water landings where there were survivors; are you saying that in none of the cases were life vests used? --Carnildo (talk) 22:32, 10 October 2008 (UTC)[reply]
They shouldn't have been used if the evacuation went as planned since everyone would be in inflatable life rafts. Of course, if you're making a water landing, things aren't exactly going to plan, so... --Tango (talk) 23:14, 10 October 2008 (UTC)[reply]
Carnildo, for my interest, could you point to a water landing where there were survivors? My impression is that no commercial (large) jet passengers have ever survived a water impact. Skidding off runways, yes, but not "crashes". I'd be interested in the details. Franamax (talk) 00:58, 11 October 2008 (UTC)[reply]
See Ditching#Survival Rates of Passenger Plane Water Ditchings. From the article, this crash had 52 survivors. - Akamad (talk) 02:19, 11 October 2008 (UTC)[reply]
And more specifically, Ethiopian Airlines Flight 961, although I'm under the impression that life jackets actually killed more people than they saved in that particular incident. --antilivedT | C | G 05:12, 11 October 2008 (UTC)[reply]
Seen another way, improper use of life jackets caused loss of life, because people inflated them prior to exiting the plane, which is directly contrary to standard instruction. Maybe the relatively protracted training reflects the complexity of using these devices properly. Perhaps they should spend more time on when to inflate than how to inflate. --Scray (talk) 15:02, 12 October 2008 (UTC)[reply]
They usually say the standard, 'pull one just before you leave the plane, pull the second one after you leave' whenever I've been in a plane, that I recall anyway. Also, I think your summation is more accurate. We don't actually know whether it cost more people their lives than it saved. It's possible many of those who survived would have died without lifejackets and many of those who died would have died anyway. Nil Einne (talk) 13:16, 13 October 2008 (UTC)[reply]
coincidentally, yesterday:
Safety investigators will now ask passengers if they were using any electronic equipment at the time of this latest incident. "Certainly in our discussions with passengers that is exactly the sort of question we will be asking - 'Were you using a computer?'," The Courier Mail quoted an Australian Transport Safety Bureau (ATSB) spokesman as saying. The ATSB said the pilots received messages about "some irregularity with the aircraft's elevator control system", before the plane climbed 300 feet and then nosedived. [2] but apparently they've decided laptops were innocent.
that article does contain the following surprising (to me) sentence, though: In July, a passenger clicking on a wireless mouse mid-flight was blamed for causing a Qantas jet to be thrown off course, according to the Australian Transport Safety Bureau's monthly report. Gzuckier (talk) 05:33, 11 October 2008 (UTC)[reply]
Thanks, everyone, for your responses! --NorwegianBlue talk 12:52, 11 October 2008 (UTC)[reply]
This one on the same incident also mentions modems and previous cases [3] Nil Einne (talk) 13:12, 13 October 2008 (UTC)[reply]
Now it seems that in the specific case that brought all this to light, it wasn't interference [4]

Self replicating hardware

[edit]

From a conversation on User_talk:SteveBaker

I believe that Wikipedia is essentially "finished". Sure, lots of articles need work - but there is almost no significant fact that humans care about that isn't somewhere in the 2.8 million pages. There will always be important new work to do - but the encyclopedia is here - it's useful - it's by far the largest accumulation of human knowledge there has ever been. So now, Wikipedia has turned from a vital project to a hobby for me. I want to attack some other areas that matter.

My interest in the Arduino and RepRap projects is a good example of that. RepRap is a little robotic machine that's about the size of a microwave oven that can completely automatically make almost any plastic part up to about 9" x 9" x 9" by 'printing' layers of molten plastic. It can also lay down thin strips of metal to embed wiring and small metal parts. The designs it works from can be downloaded from the Internet in Open file formats - and you can design new objects for it to make using OpenSourced tools like Blender. Most interestingly, RepRap is designed to be able to (ultimately) make almost all of its own parts from cheap, recycled materials. That makes the cost of building these machines about the same as a TV or a microwave oven - aside from some threaded rods, nuts and bolts and some motors, you can build one with no commercial input. People are even working on ways to have it make motors - and finding ways to have it snap together without nuts and bolts.

The plan is that a few people make them by hand - then make parts to give away to someone else - who builds him/herself another RepRap - makes more parts and gives those away. The machines multiply at geometric rates until everyone who wants one can have one.

The possibility here is that you'd have something close to the mythical Santa Claus machine. So suppose you need a new toy for your kid - you pick one you like from the OpenSource database (WikiCommons maybe?) and you find an old toy that the kid has grown out of - you pull out the snap-in motors, computer board and whatever - then you use your home recycling machine (which you built with your RepRap) to chew up the plastic and to feed the granules that result into your RepRap - which makes a new toy overnight. You snap in motors and computer board and download the OpenSourced software into it - and voila! A new toy. Since everything is OpenSourced, you can improve the toy - change the software, upload your new design.

But in a 3rd world setting, a solar powered RepRap can make things like cups and plates - educational toys for kids - perhaps tools, replacement parts for tractors and jeeps...and it does this using the plastic from soda bottles that would otherwise end up in landfill. There are RepRap projects looking into making plastics from materials like corn and milk.

The computer parts you need already exist as OpenHardware - the Arduino project, for example, has a computer design that anyone can make (and a dozen manufacturers make them and compete on price and features). The RepRap could perhaps make the circuit board using a plastic board with thin metal tracks extruded where needed. The computer chip costs $4.90 in one-off quantities and only needs one external component - a 20-cent resistor - and can be powered from rechargeable AA batteries and hooked up to your PC with a USB cable.

There is another project (which I'm not involved with) that is working on using inkjet printer parts to print flexible circuits - including transistors, resistors and capacitors. Think what that does for low-density electronics!

I've already built a CNC milling machine which can cut almost any 3D shape from wood. My machine cost $250 to build - and can make its own wooden parts (the first thing I built with it was a set of spare parts for itself).

If these kinds of machine become ubiquitous, people will improve them - and just as Wikipedia grew to 2.8 million articles in less than 10 years - so RepRap will become a fully automated small-scale manufacturing plant that you'll be able to have for $100 and some 'sweat equity' in your own home. A village in Africa can have one too. This unleashes the prospect of an Internet that has web sites with 2.8 million designs for handy 3D objects, which are somewhat intelligent and infinitely recyclable. We won't need WalMart any more than we now need Britannica or Microsoft.

So - that's where my efforts are heading...and I think it's more important than arguing what the Moon article should be called.

SteveBaker (talk) 02:29, 15 May 2009 (UTC)[reply]

Cold air

[edit]

If I were to put a bottle of ice in front of a fan, would the fan blow air over the bottle, which would cool, and would then cool down the surroundings? Would the air about a foot in front of the fan be colder with the bottle there than without the bottle? Would it make much difference to the temperature of the air around the fan, or would it be negligible? Thanks.—Preceding unsigned comment added by 86.177.122.34 (talk) 02:47, 24 May 2009 (UTC)[reply]

If the ambient air temperature is above the temperature of the ice (I presume it is) - then the fan will gradually warm up the bottle - eventually melting the ice - and the air blown over the bottle will be cooler than ambient. Heat is moved from the air into the bottle - so the bottle warms up and the air cools down.
Important Note: Fans don't make things colder.
They stir up the air and they make lightly clothed humans feel cooler - but they don't reduce the air temperature at all. The reason that standing in front of a fan makes you feel cooler is because your body produces heat which warms up the air next to your skin. That layer of warm air insulates you somewhat from feeling the ambient air temperature directly. The fan moves that layer of warm air away so that you can feel the cooler air that's all around you - which (because the ambient air temperature is lower than body heat) makes you feel cooler. This is what meteorologists mean when they talk about "wind chill factor".
But when the ambient air temperature is above body temperature - fans don't make you feel cooler - they make you feel hotter! It's like opening an oven door! (I speak from experience - it gets HOT in Texas!) SteveBaker (talk) 03:18, 24 May 2009 (UTC)[reply]
This is all true, except that Steve forgets that air circulation also cools human skin by helping sweat evaporate faster. However, most of it has nothing to do with the question, which is about cooling of the air due to ice as affected by the fan.
The bottle of ice will absorb heat and thus cool the air around it, at a rate which depends on the difference in temperature between the bottle's surface and the air nearby. The surroundings will be cooled as the cooled air in turn cools the air around it. If a fan is mixing the air, it will bring warmer air into contact with the bottle. This increases the difference in temperature and therefore the bottle will absorb heat, and cool the air, faster. So, yes, the fan will promote cooling of the surroundings (until all the ice is gone, which will also happen faster). However, I think the difference would be negligible, perhaps not even enough to make up for the heat added by the fan motor. --Anonymous, edited 04:31 UTC and again 07:27 UTC, May 24, 2009.
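A back-of-the-envelope sketch of that last point, using Newton's law of cooling (heat flow = h * A * temperature difference). The convection coefficients and bottle area below are assumed, illustrative values only, not measurements:

```python
def heat_absorption_watts(h_w_per_m2k, area_m2, air_temp_c, surface_temp_c):
    """Rate at which the bottle draws heat out of the air (Newton's law of cooling)."""
    return h_w_per_m2k * area_m2 * (air_temp_c - surface_temp_c)

AREA = 0.05           # m^2, rough surface area of a large bottle (assumed)
AIR, ICE = 25.0, 0.0  # degrees C

still_air = heat_absorption_watts(7.0, AREA, AIR, ICE)   # ~free convection (assumed h)
with_fan = heat_absorption_watts(30.0, AREA, AIR, ICE)   # ~forced convection (assumed h)
print("still air: %.1f W, with fan: %.1f W" % (still_air, with_fan))
# The fan moves heat into the bottle several times faster (and melts the ice
# sooner), while the fan motor itself adds a few tens of watts of heat to the
# room - so the net cooling benefit is small, as noted above.
```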
"But when the ambient air temperature is above body temperature - fans don't make you feel cooler - they make you feel hotter! It's like opening an oven door! (I speak from experience - it gets HOT in Texas!) "
Are you sure that this is true as a general statement? I would only expect it to be true at near 100% humidity. APL (talk) 07:43, 24 May 2009 (UTC)[reply]
Well, I see where you're coming from - removing the humid air from around the body theoretically gives sweat a better chance to evaporate into drier ambient air...but in the height of summer, it's rarely more than 30% humidity where I live - and fans definitely seem to make matters worse when the temperature hits 100F. Fortunately, we have air conditioned houses, cars, offices and shopping malls...so it's rarely a practical problem! SteveBaker (talk) 14:18, 24 May 2009 (UTC)[reply]
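For reference on the "wind chill factor" mentioned a few posts up, here is a small sketch of the North American wind chill index (the 2001 NWS/Environment Canada formula, in Fahrenheit and mph). It is only defined for cold, windy conditions, which fits the point that it describes heat loss from exposed skin, not any change in the actual air temperature:

```python
def wind_chill_f(temp_f, wind_mph):
    """North American wind chill index (2001 formula), in degrees Fahrenheit.

    Only defined for temp_f <= 50 and wind_mph >= 3. It models how quickly
    exposed skin loses heat; the actual air temperature does not change.
    """
    if temp_f > 50 or wind_mph < 3:
        raise ValueError("formula only valid for T <= 50 F and wind >= 3 mph")
    v = wind_mph ** 0.16
    return 35.74 + 0.6215 * temp_f - 35.75 * v + 0.4275 * temp_f * v

print(round(wind_chill_f(30, 20)))  # about 17: a 30 F day in a 20 mph wind feels like 17 F
```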
Thanks for the answer. The air conditioning point raises another question. Are most houses in the USA built with air conditioning pre-installed, or do most people have to have it installed afterwards? Are there many places in America where it isn't hot enough to warrant air conditioning in the summer months? Here in the UK we can have summers that have high enough temperatures, but those summers are the exception rather than the rule. As it is, I'm sure the vast majority of houses are built without air conditioning because it usually isn't hot enough. My question is how much warmer would the average summer have to be in order for most houses to have air conditioning built in, as a general rule?
In my personal experience, the existence of central air conditioning depends on several factors:
  1. The average temperature. When visiting Florida, even the smallest homes had central air conditioning, but around my home in New England, a lot of people don't.
  2. The age of the house. Back in the day, only large buildings had central air, with most having window-mounted air conditioners. Nowadays the technology's gotten a bit cheaper.
  3. The price of the house. Let's face it: if you're rich, and it gets hot sometimes, you're going to buy a house with central air. If you're not rich, installing the window-mounted units, though a hassle, is worth the cut in cost. From my personal experience, the richer my friends' parents, the more likely they were to have central air.
  4. The size of the house. This is slightly related to factor #3, but important: the bigger the house/building, the harder it would be to cool it with window/wall-mounted air conditioners.
Hope I've answered at least part of your question. -RunningOnBrains(talk page) 15:46, 24 May 2009 (UTC)[reply]
In Texas, all houses less than maybe 20 years old will have central air conditioning. Older - low-budget houses might only have one or two window-mounted units - but very, very few people would have none. We get temperatures over 100F every year - 110F is not all that unusual. I doubt you could sell a car here that didn't have A/C. However, it wasn't always this way. Quite a few people who were born here recall living in houses with "swamp coolers" which is basically a fan blowing over a wet surface - the evaporation of the water cools the air - however it also increases the humidity and promotes unhealthy mold growth in the house - so people wouldn't turn them on unless they absolutely had to. Most of the deaths from high temperatures are amongst the poor and elderly who either don't turn on their A/C because of the cost - or can't afford to get their A/C repaired when it breaks. SteveBaker (talk) 16:13, 24 May 2009 (UTC)[reply]

Clockwork

[edit]

How does the gear system in clocks work??? —Preceding unsigned comment added by 117.193.131.226 (talk) 17:51, 28 July 2009 (UTC)[reply]

Well... from seconds to minutes there is a 60:1 step-down gear, and from minutes to hours a further 12:1 reduction; hopefully that doesn't need more explanation. If you are wondering about old wind-up watches, the rate of the whole gear train is regulated by an Escapement.
There's more info at Wheel train (horology), which should give you all the answers you need. 83.100.250.79 (talk) 18:05, 28 July 2009 (UTC)[reply]
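For illustration, a small sketch of how such reductions are built up from compound gear pairs; the tooth counts here are just plausible examples, not taken from any particular clock movement:

```python
from fractions import Fraction

def train_ratio(pairs):
    """Overall reduction of a compound gear train.

    Each (driving_teeth, driven_teeth) pair contributes driven/driving, and
    the stages multiply together.
    """
    ratio = Fraction(1)
    for driving_teeth, driven_teeth in pairs:
        ratio *= Fraction(driven_teeth, driving_teeth)
    return ratio

# Motion work, minute hand -> hour hand (12:1), with illustrative tooth counts:
print(train_ratio([(14, 42), (10, 40)]))   # 12, so the hour hand turns once per 12 hours
# Seconds -> minutes (60:1), again with illustrative tooth counts:
print(train_ratio([(8, 60), (10, 80)]))    # 60
```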
I think the article you are looking for is Movement (clockwork) (which is oddly not linked from the clock article). — Sam 63.138.152.238 (talk) 18:06, 28 July 2009 (UTC)[reply]

Electronic configuration of copper and chromium

[edit]

The electronic configurations of copper, chromium, and some other elements are different from what they should be. Why is it so ? Show your knowledge (talk) 13:49, 9 January 2013 (UTC)[reply]

Because the rubric we learn for predicting the electron configurations is an approximation of reality, and reality is much more complex. --Jayron32 14:08, 9 January 2013 (UTC)[reply]
A bit more: The Aufbau principle (aka Madelung's Rule) is a very rough approximation indeed, as it isn't really based on a rigorous mathematical understanding of the quantum mechanics of how electrons interact with the nucleus and with each other to produce a specific configuration. Instead, it is designed as a very rough "rule of thumb" that will get most people the right answer most of the time. There are other, more accurate, approximations, such as the Hartree–Fock method. So the answer is that the reason why copper, chromium, palladium (and indeed many other elements) don't directly obey the Aufbau principle is that the Aufbau principle is wrong, but it's right enough for first year chemistry students to get most elements correct. Indeed, for anyone that never gets to rigorous computational quantum mechanics, the Aufbau principle + memorizing the exceptions is usually as far as they will ever get. --Jayron32 14:22, 9 January 2013 (UTC)[reply]
You used the word aka, what does it mean ? I still don't understand the reason behind my question. Show your knowledge (talk) 15:16, 9 January 2013 (UTC)[reply]
Sorry. Aka is an abbreviation for "also known as". The answer to your question is that the method you were taught for determining electron configurations is wrong. Better methods exist, but they involve the sorts of mathematics that 99% of people (indeed, that the majority of chemists themselves) never learn. All methods are wrong (as the quote goes "All models are wrong, but some are useful"), but the one taught you in your chemistry class is more wrong than others. It's right enough, however, for the purposes of your chemistry class, and teaching you less wrong models would require taking several years to teach the mathematics first. --Jayron32 15:24, 9 January 2013 (UTC)[reply]

Thank you, Jayron, your second explanation was excellent. According to what has been taught in my chemistry class: in chromium, the second-last shell and the last shell contain 5 and 1 electrons respectively. On the other hand, the second-last shell and the last shell of copper contain 10 and 1 electrons respectively. Is this true in reality ? Show your knowledge (talk) 15:49, 9 January 2013 (UTC)[reply]

Yes, that is really true. The electron configuration of every element can be determined spectroscopically, such that you can experimentally determine the actual configuration of electrons in an element. The "rules" you are taught in chemistry class (the Aufbau principle), whereby you add electrons to each element based on a simple formula, are mostly right, but they get certain elements (like chromium and copper) wrong, insofar as the Aufbau principle predicts a configuration of 4s²3d⁴ for chromium, but actual experiments have determined that the ground state configuration is actually 4s¹3d⁵. There are models better than the Aufbau principle that closer match reality (i.e. models that actually predict the correct configuration of chromium and copper rather than explain them away as "exceptions" to the rule) but those models require a level of mathematics which is well beyond the average first year chemistry student. --Jayron32 16:03, 9 January 2013 (UTC)[reply]
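For what it's worth, here is a small sketch of the rule of thumb under discussion: fill subshells in order of increasing n + l, breaking ties with the lower n (the Madelung ordering). Run for 24 electrons it reproduces the "textbook" prediction of 4s²3d⁴ for chromium, which is exactly where the rule and the measured ground state part company:

```python
def aufbau_configuration(num_electrons):
    """Electron configuration predicted by the Madelung (Aufbau) ordering.

    Subshells are filled in order of increasing n + l, ties broken by the
    smaller n. This reproduces most elements, but not the measured ground
    states of Cr, Cu, Nb, Mo, Pd, Ag and others - which is the point above.
    """
    letters = "spdfghi"
    subshells = sorted(
        ((n, l) for n in range(1, 8) for l in range(n)),
        key=lambda nl: (nl[0] + nl[1], nl[0]),
    )
    config, remaining = [], num_electrons
    for n, l in subshells:
        if remaining <= 0:
            break
        filled = min(2 * (2 * l + 1), remaining)  # subshell capacity is 2(2l+1)
        config.append("%d%s%d" % (n, letters[l], filled))
        remaining -= filled
    return " ".join(config)

print(aufbau_configuration(24))  # ends in "4s2 3d4" - the rule's prediction for Cr
# Spectroscopy instead gives chromium a ground state of [Ar] 4s1 3d5.
```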

I read this in a book: "The deviation (from Aufbau principle) in electron configuration of some elements is because completely filled (d10, f14) or completely half-filled (d5, f7) configurations are more stable. The stability is due to two factors. One, these configurations are more symmetrical which increases their stability. In symmetrical arrangements, the electrons are farthest away from each other, and their mutual shielding is the minimum. The coulombic repulsive forces between them are also weakest. Due to both these reasons, the electrons are attracted more strongly towards nucleus. Two, the electrons in degenerate orbitals can exchange their position. These exchanges also increase the stability. In completely filled and completely half filled orbitals, the number of such possible exchanges are maximum which make such electron configurations more stable." Is this explanation correct ? Show your knowledge (talk) 05:32, 10 January 2013 (UTC)[reply]

Sure. That's pretty much it. There are also probably some small effects from spin-spin coupling of like-spin electrons as well, and larger atoms start to have relativistic effects which affect their configurations. Calculating the exact energy contributions of all of these various effects is quite messy, which is why they don't teach it to you right away; for the most part the Aufbau principle works, except for the "half-filled d" and "all-filled d" exceptions of the copper and chromium groups. There are also other exceptions besides those (there are actually upwards of two dozen of them, at least), and not all of them so easily follow the "half-filled/all-filled" rule of thumb, e.g. Ruthenium, Palladium, Cerium, and several more. --Jayron32 06:11, 10 January 2013 (UTC)[reply]

How to convert an arbitrary monochromatic light (its wavelength) to a displayable trichromatic light (that is, on an sRGB screen)?

[edit]

Are there some linear interpolation formulae which approximate this conversion (as exact conversion is not possible) while complying with en:Wikipedia:No original research? ... — Preceding unsigned comment added by 77.199.96.124 (talk) 22:41, 15 October 2014 (UTC)[reply]

If you read the sRGB article, or any other resource on this topic, you will notice: the outer curved boundary is the monochromatic locus. That means that a wavelength can directly map to the XY colorspace. The formula for this curve is specified by any of several standards; for example, the CIE 1931 standard specifies tristimulus parameters. Then you can use the standard transform to the RGB color space of your choice, i.e. sRGB.
So: if you had pure monochromatic light, you would first compute its corresponding X,Y value by multiplying by the standard tristimulus functions. If you want to over-mathematicalize things, this computation is a weighted integral in which you are premultiplying the standard stimulus functions with a Dirac delta at the monochromatic light wavelength. In other words, three values are obtained by evaluating the three standard stimulus functions at that wavelength. Next you would multiply that 1x3 matrix by one of the standard 3x3 color space conversion matrices to obtain an "R/G/B" triplet.
Here is some reference code, RGB VALUES FOR VISIBLE WAVELENGTHS by Dan Bruton of Texas A&M / Austin State University Observatory. This code is written in the FORTRAN language, and was the standard model for the MATLAB MuPAD toolbox implementation of RGB::fromWaveLength. His model does not incorporate standard tristimulus functions to approximate human perception; in other words, it is a radiometric, rather than photometric, model.
In actual practice, there's a lot more guess-work and standards-fudging than you might expect! Nimur (talk) 02:35, 16 October 2014 (UTC)[reply]
The steps are:
  1. Convert from the wavelength to XYZ using color matching functions (as found here, for example) and then to linear sRGB using the matrix multiplication from sRGB#Specification of the transformation.
  2. Somehow convert those RGB coordinates into RGB coordinates in the range [0.0, 1.0].
  3. Convert that to (nonlinear) sRGB using the formula from the sRGB article.
Steps 1 and 3 are easy. Step 2 is hard because there's no right way to do it. At least one of the three RGB coordinates you get from step 1 will be negative. You can fix that by adding white, i.e. by adding an equal amount to all three coordinates. If you add the minimum amount of white (so that the smallest coordinate is 0.0), then normalize so that the largest coordinate is 1.0, the result will be fine for individual hues, but it will make a weird-looking spectrum with artificial lines and brightness gradients because the amount of white and the normalization factor vary wildly. If you want to display the whole spectrum, you will probably want to add a fixed amount of white across the whole range, and scale by a fixed amount, but this will lead to boringly desaturated colors.
The function that Nimur linked looks very inaccurate and I wouldn't use it if you care about getting the hues right. There's no such thing as a "radiometric" conversion to sRGB. -- BenRG (talk) 17:02, 16 October 2014 (UTC)[reply]
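For concreteness, a minimal Python/NumPy sketch of the three steps listed above, using the per-hue choice in step 2 (add just enough white to remove the negative component, then normalise). The colour-matching-function lookup is deliberately left as a parameter: it is assumed you build it yourself from a CIE 1931 2° table such as the one linked in step 1. The matrix and transfer curve are the ones given in the sRGB article:

```python
import numpy as np

# XYZ (D65) -> linear sRGB, as specified in the sRGB article.
XYZ_TO_LINEAR_SRGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

def srgb_gamma(linear):
    """Linear RGB -> nonlinear sRGB (the piecewise transfer curve from the spec)."""
    linear = np.clip(linear, 0.0, 1.0)
    return np.where(linear <= 0.0031308,
                    12.92 * linear,
                    1.055 * linear ** (1 / 2.4) - 0.055)

def wavelength_to_srgb(wavelength_nm, cmf):
    """Steps 1-3 above. `cmf` maps a wavelength in nm to (xbar, ybar, zbar),
    e.g. a lookup into the CIE 1931 2-degree table; it is not included here."""
    xyz = np.array(cmf(wavelength_nm), dtype=float)  # step 1a: XYZ of the spectral line
    rgb = XYZ_TO_LINEAR_SRGB @ xyz                   # step 1b: linear sRGB (may go negative)
    rgb += max(0.0, -rgb.min())                      # step 2: add just enough white...
    if rgb.max() > 0:
        rgb /= rgb.max()                             # ...and normalise to [0, 1]
    return srgb_gamma(rgb)                           # step 3: apply the sRGB curve
```

As noted above, everything about step 2 is a free choice, so this gives plausible hues for single wavelengths but not a smooth-looking rendering of the whole spectrum.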
Well, BenRG, perhaps you disagree with the utility of that equation; but there does exist such an equation, and it is used by one of the most prominent vendors of image processing software, a package that is used by many researchers across the globe... this specific equation has been published in peer-reviewed journals with applications ranging from color image processing for video compression to hyperspectral imaging research; it has been recognized by the IEEE and the SPIE; it has been adopted by commercial vendors and open-source software...
Presumably, though, you are able to determine that "there's no such thing," and that its accuracy is insufficient for any purpose, so I guess I'll defer to your extensive expertise in the field.
Nimur (talk) 18:37, 16 October 2014 (UTC)[reply]
You can look at Bruton's code and at the definition of sRGB and see that the code is wrong. The mathematics isn't very difficult. Most obviously, it puts the RGB primaries (#F00, #0F0, #00F) at 645nm, 510nm, and 440nm, while the correct locations (for sRGB primaries and D65 white point) are roughly 610nm, 550nm, and 465nm, so the hues are actually very far off. The code doesn't claim to be based on sRGB, and in fact predates sRGB (which was published in late 1996), but I don't see how it could be accurate with any red-green-blue primaries. 510nm is more teal than green, and 440nm is violet.
I do see evidence that this function is very widely used, to the point that it's hard to find spectral images that show the correct hues, but this one seems to. This chromaticity diagram is also accurate. (Many other chromaticity diagrams on Commons and the web are incorrectly green at the top, such as this one, but they're otherwise pretty accurate.)
You wrote "His model does not incorporate standard tristimulus functions to approximate human perception; in other words, it is a radiometric, rather than photometric, model." That doesn't make sense, and I think you made it up. That was what I was trying to say, more politely, when I said that there's no such thing as a "radiometric" conversion to sRGB. -- BenRG (talk) 21:25, 16 October 2014 (UTC)[reply]

Why not simply use Yxy and let the device apply whatever transforms it needs to display the data containing the brightnesses and chromaticities? Count Iblis (talk) 20:10, 16 October 2014 (UTC)[reply]