
Wikipedia:Reference desk/Archives/Computing/2010 June 7

From Wikipedia, the free encyclopedia
Computing desk
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


June 7


Advanced rotation in 3DS Max

[Image: precession on a gyroscope]

Hey, been looking for an answer to this, but it's been tricky. If you were to do a precession motion in 3DS, how would be the easiest way to do it? Similar to the image at the right, which I did in POV-Ray.

In POV-Ray, I can run multiple transform commands on the same object. What I did there was simply rotate N degrees on the Z axis and then, in a different transform, rotate the object 360° (a bit for each frame) only on the Y axis. This gave me the movement you see there, which looks the way I want.

I can't seem to do that in 3DS, which is a problem I've been dealing with in other programs as of late (AfterEffects, Maya...). I'm stuck with a single transform (a single matrix, it seems), so I have X, Y and Z rotations being done at once, and that just doesn't work. I could manually input things through Euler angles, but that's such a pain, since I'd need to compute them back manually. There must be something trivial that can be done in these cases, something like creating a new reference frame or sub-group. I just haven't found it yet. Any ideas?
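For what it's worth, the usual trick in these programs (I believe 3ds Max included) is a hierarchy rather than one transform: apply the fixed tilt to the object itself, link it to a helper/dummy object, and animate the helper's spin. That composes two rotation matrices per frame, just like the two POV-Ray transforms. A sketch of the math in plain Python (function names and angles are mine, purely illustrative):

```python
import math

def rot_z(a):
    # 3x3 rotation matrix about the Z axis
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def rot_y(a):
    # 3x3 rotation matrix about the Y axis
    c, s = math.cos(a), math.sin(a)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(M, v):
    return [sum(M[i][k] * v[k] for k in range(3)) for i in range(3)]

tilt = math.radians(25)            # fixed tilt about Z (the first transform)
for frame in range(8):             # spin about Y, a bit per frame (the second)
    spin = 2 * math.pi * frame / 8
    M = matmul(rot_y(spin), rot_z(tilt))   # spin applied after the tilt
    axis = apply(M, [0.0, 1.0, 0.0])       # where the object's up-axis points
```

Because the spin is applied after the tilt, the object's axis keeps a constant angle to Y while sweeping a cone, which is exactly the precession in the image.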

Convert Raw to PNG


Hey, I've got RAW photo files from my Canon PowerShot SX 110 IS (taken using CHDK; the camera doesn't support it natively). The file extension is CRW. What's the best way to convert them to PNG files? I understand that Photoshop is good for this, but I don't have it and I don't want to pay for it. The Raw image format article gives a bunch of options, but I have no clue what's good. Does anyone know if Raw Therapee is any good? It looks like it ought to be able to save my raw files as PNGs, TIFFs, or JPEGs, which would be good. Or any other, better suggestions? I'm really unfamiliar with this type of thing. Buddy431 (talk) 01:18, 7 June 2010 (UTC)[reply]

http://www.irfanview.com/ is the easiest, cheapest, most fool-proof option. Won't have some of the fancier RAW developing options you'll find in Photoshop, but if you don't need those you're golden. Just open the file and save as a jpg or png. You can even do batch conversion. Riffraffselbow (talk) 05:30, 7 June 2010 (UTC)[reply]
IrfanView can read the files but it has almost no developing options at all, as far as I can tell. There's little point in shooting raw unless you want to tweak the conversion process. I downloaded Raw Therapee (version 2) and it looks pretty good. Try it out and see if you like it. -- BenRG (talk) 05:50, 7 June 2010 (UTC)[reply]
You're getting far less out of what you're doing than if you choose to tweak the options, but I wouldn't say there's little point, since AFAIK many cameras support RAW, which is often uncompressed, and JPEG, which is lossily compressed. If you save as RAW you can then convert it to a lossless compression format, which appears to be what the OP is trying to do, hopefully with the same output you would expect from the camera saving JPEG, except with lossless compression. And while I don't know if there's ever really such a case, it seems theoretically possible that, given the time and hardware constraints, you could generally get better automatic results from a fancy computer program than from the camera's hardware processing (particularly since, if you have a fancy GPU, the hardware capability is vastly superior even if it isn't dedicated, regardless of whether it can actually be used for better results). Edit: Also, if you aren't deleting the RAW images, you can later choose to specially process any specific images that you aren't pleased with or that you decide are very important, something you can't do if your camera is saving the JPGs. Nil Einne (talk) 06:52, 7 June 2010 (UTC)[reply]
Nil Einne's spot on: My camera normally only saves to JPEG files (lossy), and I want a PNG or other lossless file format. Having my camera save as a RAW file and then converting on my computer to a PNG (or other lossless format) is the only way I could see to do it. Buddy431 (talk) 13:41, 7 June 2010 (UTC)[reply]
dcraw (along with something like pnmtopng) may be a good choice, as it's well-suited to non-interactive use. I do question the overall goal here... why do you need PNGs? If your goal is to archive a lossless file, you already have that in the raw file. The conversion from raw to PNG loses data (not in the same way as JPEG compression, but you will lose dynamic range at least). If your goal is to edit the photo, then at least some of those manipulations (exposure, color balance, etc.) should probably be done during raw processing, before outputting to PNG or similar format. -- Coneslayer (talk) 13:52, 7 June 2010 (UTC)[reply]
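The dcraw route mentioned above can be scripted; a minimal sketch in Python, shelling out to the real dcraw and netpbm tools (both assumed to be installed; the filenames and the function name are mine, purely illustrative):

```python
import glob
import shutil
import subprocess

def crw_to_png(crw_path, png_path):
    """Develop a raw CRW file with dcraw and pipe the PPM output to pnmtopng."""
    if not (shutil.which("dcraw") and shutil.which("pnmtopng")):
        return False  # tools not installed
    with open(png_path, "wb") as out:
        # dcraw -c writes the developed image as PPM on stdout;
        # -w uses the white balance recorded by the camera.
        dcraw = subprocess.Popen(["dcraw", "-c", "-w", crw_path],
                                 stdout=subprocess.PIPE)
        subprocess.run(["pnmtopng"], stdin=dcraw.stdout, stdout=out, check=True)
        dcraw.stdout.close()
    return dcraw.wait() == 0

if __name__ == "__main__":
    # Batch-convert every CRW in the current directory
    for name in glob.glob("*.crw"):
        crw_to_png(name, name[:-4] + ".png")
```

The same pipeline as a one-liner in the shell would be `dcraw -c photo.crw | pnmtopng > photo.png`.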
You're quite right of course, but it wouldn't be very nice of me to distribute my pictures to others in a raw format, and they still might appreciate a lossless format (especially at, say, Commons). Buddy431 (talk) 14:42, 7 June 2010 (UTC)[reply]
Fair enough, I don't usually think of full-size PNGs as being "distribution-friendly" (e.g. to friends and family) but for something like the Commons it makes sense. -- Coneslayer (talk) 14:43, 7 June 2010 (UTC)[reply]
I would challenge you to use the SX 110 to save a raw and a high-quality JPEG of the same shot, convert them both to PNG (or a format of your choice), and then tell the difference between the two. I am honestly curious, but I really doubt you will be able to, other than the likelihood that the raw version won't be on the same white balance (if it's balanced at all) as the JPEG, thanks to CHDK and the conversion process. RAW formats work wonders on DSLR cameras, but that's because the sensors are dramatically different. Even point-and-shoot cameras that are factory-equipped to shoot raw don't really score any better in tests when in raw mode. Just thought I would share this, and see if anyone has information on quantitative quality studies. I would be interested to see it! --Jmeden2000 (talk) 17:03, 8 June 2010 (UTC)[reply]
I have an SX100 running CHDK (almost the same setup as the OP) and the CRW as shown by IrfanView looks quite different from the JPEG produced by the camera's internal processing (and rather worse, in my opinion). IrfanView has the option to use Canon DLLs instead of its internal processing (which I presume is based on dcraw), but I haven't managed to get that to work. I wouldn't count on the Canon DLLs working identically to the camera since these processing algorithms are closely guarded secrets and DLLs can be disassembled. If the DLLs did work identically except without the JPEG compression stage, I suspect the result would be indistinguishable from the camera's super-fine JPEG except for file size. The odd texture of the image at high magnifications comes from the denoising algorithm, not from JPEG. The whole idea of "lossless" images of the natural world is nonsensical in the first place. My advice is to stick with JPEG. -- BenRG (talk) 20:04, 8 June 2010 (UTC)[reply]
That's been my experience, too. Even the newer Canon point-and-shoot cameras with genuine raw capture and processing via the DPP suite (Canon's raw postprocessor) tend to look grainier if anything, and basically no additional detail is derived from the RAW information. JPEG is the world's photograph-sharing standard for a reason; use a good tool and a high quality setting and you won't be able to tell the difference except under extremely close inspection, and the file size savings and overall compatibility are worth that tiny bit of loss, IMO (as a photographer). RAW's benefits only come out when you have a DSLR and want to do specific postprocessing (and have a good RAW processor to do it in). --144.191.148.3 (talk) 14:30, 9 June 2010 (UTC)[reply]
In terms of pure opening support, I don't know that JPEG is any more compatible than PNG in the modern world; some very old browsers are about all. However, it does appear there's no reliable, well-supported way to add EXIF data to PNGs. The point about the different output is a good one, though. I did read a few people suggesting that many (most?) automatic raw-processing algorithms aren't as good as the camera's internal ones, which was why I said "and hopefully" above. As a personal opinion, not really having a digital camera: if size didn't matter, which is a big if (although with increasing hard-disk sizes, and memory card sizes apparently starting to get to the point where the average consumer doesn't need anything larger, at least according to some comments I read about SDXC, it's far less of an issue than, say, 3 years ago), and if I could get the same output, which appears to also be a big if, I would take the lossless over the lossy. Although it's true you'd rarely notice the difference, there may be some specific cases where you will (obviously only under high magnification), and ultimately you can produce lossy saves of your lossless images if you want. In terms of the Commons, one advantage of uploading a PNG instead of a JPEG is that people usually keep the same file, and therefore the same file format, when making minor touch-ups. If multiple people make multiple minor touch-ups to a JPEG, you may get generation loss that is noticeable at higher magnifications. Of course you could just save your JPEGs as PNG before you upload, although if you're going to upload as PNG anyway, you might as well go lossless in the first instance if the earlier conditions are met (which, as I said, they may not be). Alternatively, try to encourage people at the Commons to avoid that sort of thing. P.S. Just to emphasise, I do agree that even if you could get the same output, it's likely to be rare, and definitely only under high magnification, that you'll notice the difference between high-quality JPEGs and lossless images, so the actual advantage is going to be small. Nil Einne (talk) 17:35, 9 June 2010 (UTC)[reply]
Not to drag this out much further, but two things to add: 1) You can work losslessly on JPEG if you use an editing program that allows such things; 90-degree operations and localized changes will take place without an overall degradation of the image. It is up to the user to figure this out, though, since it's not readily apparent whether your tool and workflow will end up being lossless. From a purist perspective it's still not ideal, but from a practical one (moving files up and down and around the internet) there is a huge advantage to JPEG. And 2) one correction to your comment from before (in case you hadn't figured it out): RAW is actually losslessly compressed, however the file size is still about 3-4x what a high-detail JPEG would be. --144.191.148.3 (talk) 18:22, 10 June 2010 (UTC)[reply]

Linux clock problem

Resolved

Hi! I'm running Debian Linux (lenny), and have been having some problems with the hardware and software clocks. I believe Linux threw off my BIOS's clock, and when I fixed it and continued to boot to Linux, my software's clock was wrong, AND the hardware's, too. That's to say

# hwclock
# date

return different times that are both wrong. I tried hwclock --localtime and setting the clocks to the right time, but every time I reboot they're off again. It's not the CMOS battery, either, because the BIOS maintains the correct time when I don't boot to Linux. What am I missing? Thank you!--el Aprel (facta-facienda) 04:36, 7 June 2010 (UTC)[reply]

How wrong is wrong (in other words, is it simply a few minutes, or nearly exactly several hours, or something else)? Also, are you sure your timezone is set correctly? Is Linux set to store/read the time in the BIOS as UTC (as it probably would by default, unlike say Windows) or as local time? When you say "when I don't boot to Linux", do you mean that if you go into the BIOS before bootup, the time is set correctly? (This is the best way to tell what the BIOS is doing, as otherwise you need to be sure whatever other OS isn't correcting the time.) Nil Einne (talk) 06:45, 7 June 2010 (UTC)[reply]
Okay, I was looking at it again and I've found some consistency. The BIOS's clock remains on the correct local time no matter how many times I boot to Linux as long as I don't try to change it with hwclock from there. After correctly setting the BIOS's clock to the local time, #hwclock and #date both return the same time that is exactly 4 hours behind the BIOS's (so if the BIOS says it's 15:00, Linux #hwclock and #date say it's 11:00). The minutes and seconds are consistent all around. hwclock is set to --localtime, which is why #hwclock and #date have identical times. Setting the time with #hwclock screws up the BIOS's. It shouldn't make any difference, but /etc/timezone is correctly set to US/Eastern for me. Any suggestions? Thank you!--el Aprel (facta-facienda) 19:45, 7 June 2010 (UTC)[reply]
As Nil Einne suggested, check your time zone settings, especially if time is off exactly by multiples of 30 minutes (yes, there are some silly time zones that are 30, rather than 60 minutes apart). Also, if you have internet connectivity, you could install ntp and/or ntpdate to regularly sync your clock with an official time server for your country. Two of hwclock's parameters might be of interest to you:
hwclock --hctosys 
and
hwclock --systohc
Oh, and if you're running Linux in a virtual machine, that might have an influence on clock speed as well. I remember reading quite a bit about it on VMware's web site, and could imagine other virtualization providers like VirtualBox or qemu have similar issues. -- 109.193.27.65 (talk) 19:48, 7 June 2010 (UTC)[reply]
Thank you for the tips. I was thinking about installing a time-server updater, but I'm not sure that would solve the problem, since it seems to be Linux's tinkering with the BIOS's clock causing it. I'm running Linux all by itself on the computer, with no other operating system to fiddle with the clock.--el Aprel (facta-facienda) 20:14, 7 June 2010 (UTC)[reply]
Well, in that case, ntp seems like the way to go, especially because of the way it keeps the clock in sync (it sloooows dooown graaaduaaalllyyy or spdsupvryfast, but doesn't do "jumps"). Or maybe it's the hwclock --systohc that's run as part of the Linux shutdown sequence that messes with your BIOS settings, so it would be sufficient to disable that particular line? Check /etc/init.d/hwclockfirst.sh and /etc/init.d/hwclock.sh - but be sure to read the inline documentation of these files first. Maybe something is messing with your /etc/adjtime file? -- 109.193.27.65 (talk) 20:27, 7 June 2010 (UTC)[reply]
Thank you, I will check those files. I should have mentioned that I use my BIOS to start up at a specific time in the morning, so I do need it to always have the right time and not let Linux mess with that. Otherwise, I agree ntp would be the easiest solution to my problem.--el Aprel (facta-facienda) 20:36, 7 June 2010 (UTC)[reply]
Well, for that, you could try to suspend your Linux to disk (not to RAM) and see if maybe hwclock --systohc isn't called during a suspend (I assume your boot loader defaults to Linux, so there shouldn't be any issues with one OS booting up while the other one is suspended instead of being properly shut down). Waking up from suspend should be faster than doing a cold boot, anyways. -- 109.193.27.65 (talk) 20:54, 7 June 2010 (UTC)[reply]
If it still doesn't work, please post the output of
grep UTC /etc/default/rcS
and
hwclock --debug
I think these two might not agree on whether your BIOS clock is running on UTC or not, when it is in fact not. -- 109.193.27.65 (talk) 21:04, 7 June 2010 (UTC)[reply]
Thank you! That was exactly it: UTC was set to "yes" in /etc/default/rcS. I changed it to "no" and now the clock is working fine. Thanks again, --el Aprel (facta-facienda) 22:02, 7 June 2010 (UTC)[reply]
You're welcome. :-) Actually, there's a question during installation that asks if your BIOS clock is set to UTC, so I guess you either gave the wrong answer there during install or you picked a setting where the installer asks next to nothing (guessing on that one, as I don't want to look up the priority level of that question right now), skipping over the question and selecting what it considers the smartest choice. That's the problem with the computers of today - any attempt at artificial intelligence will sooner or later turn into a case of genuine stupidity. ;-) -- 109.193.27.65 (talk) 22:16, 7 June 2010 (UTC)[reply]
I'm not that experienced with *nixes, but I'm still a bit confused by this, and since the issue is resolved, hopefully no one minds me taking this OT. I was originally going to suggest the UTC thing more clearly in my first answer, but then got confused, because if I understood the OP correctly, Linux programs were reporting the incorrect time. I would have thought that if you set Linux to use UTC in the BIOS, then once Linux corrects the time in the BIOS to be UTC, and provided the timezone is set correctly, all programs should report the correct time (well, unless you tell them to report UTC). (If you check the BIOS manually, it would appear to be the incorrect time if you weren't aware it was set to UTC, of course.) But from reading the above, it sounds like it was more complicated than that, and the problem appeared to be more than Linux trying to use UTC when the OP didn't want it, with different programs not agreeing on whether or not the BIOS was using UTC. Or to put it a different way: even if the OP had wanted Linux to use UTC and so had correctly answered the installer option, it seems like there would still have been a problem with the OP's config, based on my understanding of the above. Do multiple programs ask you whether your BIOS is set to UTC? Nil Einne (talk) 23:04, 7 June 2010 (UTC)[reply]
IIRC, the installer question isn't simply if you want to use UTC, but rather "Is your hardware clock set to UTC?". So if you want to use UTC, you have to set the clock to UTC in the BIOS before you start your installation, then when the question pops up, answer "yes". So what we were seeing here was probably that if it's not running on UTC and you answer yes, it'll store a wrong time in the hardware clock during the next shutdown/reboot, which will bite your computer in its shiny metal posterior upon next boot. This isn't even WP:OR, though - just wild guessing. If you want to experiment further, try to change the settings here and there and monitor the output of hwclock --debug so you can compare what the system thinks with what the hardware clock thinks. I'd do it myself, but: Hanc marginis exiguitas non caperet ("this margin is too small to contain it"). -- 109.193.27.65 (talk) 23:26, 7 June 2010 (UTC)[reply]
Actually, my original post was incorrect. What had happened was #hwclock and #date were returning the same (incorrect) time until I tried setting the correct time with #hwclock --set, and I think I misinterpreted the result since it must have been UTC time since that was the setting. Interestingly enough, hwclock --localtime did not change the setting in /etc/default/rcS, so maybe the "UTC=yes/no" setting is contained in more than one file and they weren't consistent? Just a guess.--el Aprel (facta-facienda) 03:05, 8 June 2010 (UTC)[reply]

Laserdisc


i read the article but it doesn't say

how many megabytes does a laserdisc hold —Preceding unsigned comment added by Sunnyscraps (talkcontribs) 13:34, 7 June 2010 (UTC)[reply]

Assuming you mean Laserdisc, it appears they can hold up to 60 minutes of video per side in an analog format which, as far as I know, is not easily converted into a clear-cut digital number of MB. On the other hand, assuming that the video is approximately VHS quality, this would suggest that two hours of VHS-quality video is about 1 GB (going on the fact that it says a 1.46 GB DVD can hold about 3 hours of VHS-quality video). So that would be about 500 MB per side of the laserdisc. That's only a rough approximation though, of course. Buddy431 (talk) 14:19, 7 June 2010 (UTC)[reply]
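Spelling out that back-of-envelope arithmetic (figures are the ones quoted above, not measurements):

```python
dvd_gb = 1.46            # DVD capacity quoted above
vhs_hours_per_dvd = 3.0  # hours of VHS-quality video it reportedly holds
gb_per_hour = dvd_gb / vhs_hours_per_dvd   # ~0.49 GB per hour of VHS quality
mb_per_side = gb_per_hour * 1000           # one hour of video per Laserdisc side
print(round(mb_per_side))                  # ~487 MB, i.e. roughly 500 MB a side
```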
The BBC Domesday Project's LaserVision Read Only Memory disks, an adaptation of Laserdisc that did support digital files, apparently stored up to 300 MB per side. -- Finlay McWalterTalk 14:39, 7 June 2010 (UTC)[reply]
However, as noted in Laserdisc#Comparison with VHS, Laserdisc quality was better than VHS (and I can also say from personal experience that it was definitely noticeable). Somewhere in between DVD and VHS is the usual approximation.
Another perhaps interesting comparison is that of the CD Video, basically a combined Laserdisc+CD that was CD size. This could store up to 5 minutes of video plus 20 minutes of audio. I don't however know if the video was stored CLV or CAV (I would guess CLV since it was apparently fairly late and CDs are CLV). But regardless, this would mean you gave up 54 minutes of audio, or perhaps ~475MiB of the disc for 5 minutes of video. You may think then that a full size disc (capable of storing 30 minutes instead of 5 minutes if it's CAV or 60 minutes instead of 5 minutes if it's CLV) could theoretically store 2.8GB - 5.6GB.
(There were also Video Single Disc although it's not clear if these could store more video.)
However, this would potentially require technology that wasn't properly developed until the CD-ROM, hence systems like the one FW mentions above were far more limited (actually, reading the article, it appears the LaserVision also included audio and vision in addition to the digital data). In other words, this may not really be a fair comparison. You could, in the same vein, wonder what you could store with a 30 cm DVD or Blu-ray (I expect this would be highly theoretical, since there are lots of problems you're likely to encounter with such a system).
P.S. Note that in later variants Laserdiscs could store a minimum of one pair of full CD-quality digital audio tracks in addition to the video.
Nil Einne (talk) 22:26, 7 June 2010 (UTC)[reply]
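Spelling out the CD Video scaling above (this assumes a 74-minute, roughly 650 MiB audio CD; all figures are the post's estimates, not specifications):

```python
cd_minutes, cd_mib = 74.0, 650.0   # approximate audio CD capacity (assumed)
audio_given_up = cd_minutes - 20.0 # 54 minutes of audio traded for video
video_minutes = 5.0                # minutes of video a CD Video held

mib_for_video = audio_given_up / cd_minutes * cd_mib   # ~474 MiB for 5 minutes
mib_per_video_minute = mib_for_video / video_minutes   # ~95 MiB per minute

cav_gib = mib_per_video_minute * 30 / 1024   # full-size CAV side (30 min): ~2.8 GiB
clv_gib = mib_per_video_minute * 60 / 1024   # full-size CLV side (60 min): ~5.6 GiB
```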

data to audio?


Can binary data be encoded into audio? Question Factory (talk) 13:36, 7 June 2010 (UTC)[reply]

I'm not really sure I understand the question. What sort of binary data? Why would you want to encode it into sound? Note that digitised sounds (WAV files, MP3s) are binary data and are played as sound. --Phil Holmes (talk) 13:48, 7 June 2010 (UTC)[reply]
Assuming you really mean encoding binary data into sound waves (and not vice versa), then one simple encoding is to use long bleeps for 1s and short bleeps for 0s. This encoding is used in the transmission of Morse code, for example. For more sophisticated methods, see amplitude-shift keying, Kansas City standard and frequency-shift keying. Gandalf61 (talk) 14:46, 7 June 2010 (UTC)[reply]
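As a toy illustration of frequency-shift keying, here is a minimal encoder in Python (the frequencies, sample rate, and bit duration are arbitrary choices of mine, not any real modem standard):

```python
import math

def fsk_encode(bits, f0=1200.0, f1=2400.0, rate=44100, bit_time=0.01):
    """Return raw audio samples (floats in [-1, 1]) encoding the bit list."""
    samples = []
    per_bit = int(rate * bit_time)   # samples per bit: 441 here
    for bit in bits:
        freq = f1 if bit else f0     # one tone frequency per bit value
        for n in range(per_bit):
            samples.append(math.sin(2 * math.pi * freq * n / rate))
    return samples

# Encode one byte, most significant bit first
byte = 0x5A
bits = [(byte >> (7 - i)) & 1 for i in range(8)]
audio = fsk_encode(bits)   # 8 bits x 441 samples each
```

Writing `audio` to a WAV file (e.g. with the standard-library `wave` module) would make it playable; a decoder would recover the bits by measuring the dominant frequency in each 10 ms window.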
It's very easy just to take some binary data and call it PCM audio, but that will generally either sound like noise or like nothing at all. Representing your data in an audio format that actually conveys anything worthwhile (that is either meaningful or musical) is much more of a challenge, and depends entirely on what kind of data it is, what features of it you want to hear, and what synthesis technology you choose to use to bring that about. You might choose to use the data to drive parameters in a software synthesizer or perhaps do more sophisticated work on it with a audio programming language. Doing this is a lot like graphing data - you need to decide what to graph and figure out a way to do it so you get a worthwhile, meaningful result. -- Finlay McWalterTalk 14:49, 7 June 2010 (UTC)[reply]
Cue the old guys...I've spent hours adjusting the read/write head of an ordinary cassette tape recorder to get data off the notoriously finicky Sinclair ZX81 cassette tapes. Luxury was having a tape counter to know where your latest iteration of your program was stored... --Stephan Schulz (talk) 14:58, 7 June 2010 (UTC)[reply]
Even more obvious... how about a phone modem? It encoded binary data as audio that was transmitted over telephone lines. I wonder if kids today would recognize the ooo-weee-ooo-weee-grrrrrr of a modem sync since they've (luckily) had no reason to ever hear it. -- kainaw 16:58, 7 June 2010 (UTC)[reply]
Phone modem??? Get off my lawn, hedonist! In my days, we were happy when we got an acoustic coupler, because before that, we had to whistle into the phone with a boatswain's pipe. Oh, and of course, we had to carry our bit buckets uphill both ways! -- 109.193.27.65 (talk) 19:40, 7 June 2010 (UTC)[reply]

CSS zoom vs HTML size=


What are the CSS font size percentages that would equal the HTML size="-1" and "-2" relative sizes? --70.167.58.6 (talk) 17:02, 7 June 2010 (UTC)[reply]

They mean two different things. A percent is a percent of the font size itself. So, if it is a 10pt font and you ask for it to be 80%, you get an 8pt font. If you ask for a font size of -1, you might get a 9pt font. You might get a 9.5pt font. You might get an 8pt font. The web browser is only being asked to make it one size smaller, but it is not told exactly how much smaller. My experience is that Firefox reduces by 1 point. So, if the base is a 10pt font, -1 will produce a 9pt font and +1 will produce an 11pt font. Because the base is 10pt, -1 is 90% and +1 is 110%. If you began with an 11pt font, -1 would produce a 10pt font, so it would reduce it to 91%, not 90%. -- kainaw 17:11, 7 June 2010 (UTC)[reply]
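The arithmetic above, written out (this assumes the reported Firefox behaviour of one size step = 1pt, which is an observation, not anything the spec guarantees):

```python
def percent_equivalent(base_pt, step):
    """Percentage reproducing an HTML size step of +/-1, assuming 1pt per step."""
    return (base_pt + step) / base_pt * 100

print(percent_equivalent(10, -1))  # ~90%:  size="-1" at a 10pt base
print(percent_equivalent(10, +1))  # ~110%: size="+1" at a 10pt base
print(percent_equivalent(11, -1))  # ~90.9%: the "91%, not 90%" mentioned above
```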
I think the issue here is that the HTML specification doesn't give absolute size differences for FONT SIZE values. Each browser presumably handles it a bit differently. CSS by contrast is meant to be handled fairly uniformly. I'm not sure there is a way to do a direct 1-to-1 conversion that looks the same on every browser. (All of this without assuming that the user has their own zoom/font settings.) --Mr.98 (talk) 17:58, 7 June 2010 (UTC)[reply]
So there's no relative font sizing from the default size in CSS? Obviously, I want to keep it a whole number and not have 10.3pt type. --70.167.58.6 (talk) 00:13, 8 June 2010 (UTC)[reply]
There is relative font sizing; it is percentage-based (e.g. 90%, 110%). It does not round to whole numbers. As for wanting whole numbers, I honestly don't see the reason for the worry. The difference between 10pt and 10.3pt is arbitrary and immaterial, as long as you are relatively consistent. (It has no relation to the actual size it will be in pixels on any given user's screen—it will be so many millimeters on my laptop, so many millimeters on my monitor, etc.) If you are really concerned with having absolute value font sizes, you will have to set them absolutely (e.g. create styles for the relative increases, decreases, etc. with hardcoded values). Of all of the many complaints one can have about CSS (and I have many!), this seems rather low on the totem pole to me. --Mr.98 (talk) 03:18, 8 June 2010 (UTC)[reply]

WWDC 2010


Will the Keynote be available via Apple's Keynote video podcast? Chevymontecarlo 19:13, 7 June 2010 (UTC)[reply]

Answered my own question. The video came up in my download list earlier today. Chevymontecarlo 16:55, 8 June 2010 (UTC)[reply]