Wikipedia:Reference desk/Archives/Computing/2012 August 28

From Wikipedia, the free encyclopedia
Computing desk
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


August 28


OPTIONS request by Firefox


Currently this is giving me problems: when I try to submit stuff (from a useful userscript for a page), these servers are 'silly' and can't reply correctly to an OPTIONS request. I don't have any control over the servers, but I really want to fix this problem...

Is there any way to disable the OPTIONS request, or to enable cross-domain Ajax just for one domain in Firefox? Or at least to allow cross-domain requests globally? Thanks. 190.158.212.204 (talk) 00:19, 28 August 2012 (UTC) Also, is there any way to increase the storage quota for a page in Google Chrome? It can be a user-side fix. I'm getting QUOTA_EXCEEDED_ERR: DOM Exception 22. 190.158.212.204 (talk) 00:35, 28 August 2012 (UTC)[reply]
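For context on why the browser sends OPTIONS at all: a cross-domain request that isn't "simple" (custom headers, non-GET/POST methods, etc.) triggers a preflight, and the remote server must answer it with matching Access-Control-Allow-* headers before the real request is allowed. A minimal sketch of that server-side handshake, with illustrative values (this is not something the userscript can fix client-side in stock Firefox):

```python
# Sketch of the CORS preflight handshake the remote server would need to
# support. The browser sends OPTIONS with Origin/Access-Control-Request-*
# headers; the server must echo matching Access-Control-Allow-* headers
# before the real request is attempted. Values here are illustrative.

def preflight_response(request_headers):
    """Build the headers a CORS-aware server should return for an
    OPTIONS preflight. Returns None if this isn't a preflight."""
    origin = request_headers.get("Origin")
    method = request_headers.get("Access-Control-Request-Method")
    if origin is None or method is None:
        return None  # not a preflight request
    return {
        "Access-Control-Allow-Origin": origin,
        "Access-Control-Allow-Methods": method,
        "Access-Control-Allow-Headers":
            request_headers.get("Access-Control-Request-Headers", ""),
        "Access-Control-Max-Age": "86400",  # let the browser cache it for a day
    }
```

Since the servers can't be changed, the usual workaround in a userscript is a privileged cross-domain request API (e.g. Greasemonkey's GM_xmlhttpRequest), which bypasses the browser's same-origin policy and never sends a preflight.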

Is a GPU Required in a Computer?


When a computer displays a graphic on the screen, does it have to use a GPU? Can the CPU communicate directly with a monitor? I am aware some modern CPUs have GPU circuitry built in, but for the sake of simplicity, please assume I am referring to CPUs manufactured before 2010. If they always use the GPU, how much do they rely on it for simple tasks like composing e-mails and browsing web sites? — Preceding unsigned comment added by 50.95.200.141 (talk) 05:16, 28 August 2012 (UTC)[reply]

At the very least, you need a PHY that operates a standard display protocol - like VGA or HDMI. Usually, there's a little hardware to control the PHY, and a video frame buffer. Without that hardware, you can't get a signal to a monitor. One could call that hardware a GPU, but in modern terminology, a GPU also usually supports certain specific mathematical operations: "T&L" (transform and lighting), often providing hardware support for a standard API like OpenGL. Those features aren't required to put pixels on a screen; but this hardware is so commonplace that modern operating systems make use of it even during normal use, like displaying windows, editing text, and browsing the web. Nimur (talk) 05:32, 28 August 2012 (UTC)[reply]
Transform & lighting was a fairly late addition to consumer GPUs, which is why it was advertised more prominently as something they support—the rest was taken for granted. The earliest GPUs just did bulk pixel-moving operations (blitting) faster than the CPU could (although they were called blitters then, not GPUs). The first 3D GPUs rendered shaded polygons but didn't work out the vertex locations or colors (that's the transform and lighting part). As long as the polygon count was small T&L could be done on the CPU, since it's per-vertex rather than per-pixel. -- BenRG (talk) 17:38, 29 August 2012 (UTC)[reply]
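To illustrate why CPU-side T&L was feasible at low polygon counts: the "transform" step is just a 4x4 matrix applied once per vertex, so its cost scales with vertex count rather than pixel count. A pure-Python sketch (illustrative, no GPU involved):

```python
# The "transform" in T&L: each vertex is multiplied by a 4x4 matrix.
# Cost is per-vertex, which is why a CPU could keep up when polygon
# counts were small.

def transform_vertex(m, v):
    """Apply a 4x4 row-major matrix m to a homogeneous vertex v = (x, y, z, w)."""
    return tuple(sum(m[row][col] * v[col] for col in range(4)) for row in range(4))

# Example matrix: a translation by (2, 3, 4).
translate = [
    [1, 0, 0, 2],
    [0, 1, 0, 3],
    [0, 0, 1, 4],
    [0, 0, 0, 1],
]
```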

Thanks for the reply. So, I assume, then, that disabling the GPU on most modern systems would render the monitor nonoperational, because the PHY chip and the framebuffer are built into the GPU? — Preceding unsigned comment added by 50.95.200.141 (talk) 06:41, 28 August 2012 (UTC)[reply]

That's usually where the functionality is built in. Of course, it can be elsewhere: on the CPU, or on the CPU's chipset or main logic board controller; there may be more than one unit, digitally switched to drive one or more connector sockets. On many PCs, the main GPU is started late in the boot process, so the very first signals you see on power-up are sourced by the simple built-in graphics system; once the necessary software has loaded, the operating system initializes the main GPU and switches the display signal to source from there. On embedded computers, or custom-designed systems, almost anything is possible. Nimur (talk) 15:55, 28 August 2012 (UTC)[reply]
A computer can do perfectly well without graphics circuitry of its own; it can have, say, a serial terminal attached to it instead. Early computers worked this way, which I think is responsible for the rise of editors like vi, as you can't easily draw windows, menus and such over a 300 baud line. Уга-уга12 (talk) 11:49, 28 August 2012 (UTC)[reply]
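The arithmetic behind that point is worth spelling out: at 300 baud with the common 8N1 framing, each character costs 10 bits on the wire, so throughput is about 30 characters per second. A quick back-of-envelope sketch:

```python
# Why full-screen redraws were impractical over a slow serial line:
# at 300 baud with 8N1 framing (10 bits per character), you get about
# 30 characters per second.

def full_redraw_seconds(baud, cols=80, rows=24, bits_per_char=10):
    """Seconds to repaint a cols x rows text screen at the given baud rate."""
    chars_per_second = baud / bits_per_char
    return cols * rows / chars_per_second

# An 80x24 screen at 300 baud: 1920 chars / 30 cps = 64 seconds.
```

A full minute per repaint explains why terse, line-oriented editing (and later, terminals with cursor addressing that let vi update only what changed) made sense.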

Going back to earlier personal computers, the Apple II, for example, did not have a dedicated GPU. Tarcil (talk) 17:29, 28 August 2012 (UTC)[reply]
The Apple II had an analog RF modulator that output an NTSC signal. It is usually called a "video controller unit" or a "modulator," not a "graphics processor." It was, as I mentioned above, just a PHY, and did not include hardware to accelerate mathematical operations like matrix transforms for 3D projection or texture scaling. The Apple II did include a hardware- and ROM-supported text mode. Nimur (talk) 18:13, 28 August 2012 (UTC)[reply]
My understanding of the term "GPU" has always been that it's a specialized processor that writes to the frame buffer, and has nothing to do with the circuitry that reads the frame buffer and sends that data to the monitor. The GPU article seems to agree. So the answer is simply no. Even on a modern graphics card the GPU doesn't communicate with the monitor. You could remove the GPU from a modern card and still use it, if the drivers and the card wiring supported that.
The part that reads the frame buffer is called a RAMDAC, though I guess that term isn't appropriate any more when the output is digital (DVI/DisplayPort/HDMI). -- BenRG (talk) 17:12, 29 August 2012 (UTC)[reply]
In the case of digital signaling, like HDMI and DVI, the more generic term, "PHY", is used. A RAMDAC is an implementation of a specific type of PHY for analog protocols (often VGA). Nimur (talk) 18:38, 29 August 2012 (UTC)[reply]
I've never heard "PHY" used in this way and googling [ramdac "phy"] doesn't turn up anything. I don't think it makes sense as a name, either, since the video output circuitry does a lot more than just send bytes over the wire, including colorspace conversion, hardware overlays and hardware cursors, and character generation. I'm not sure that all of this is covered by RAMDAC either, though.
Actually, according to the framebuffer article, "framebuffer" is a collective term for all of this circuitry. I'd always thought of it as just the VRAM. -- BenRG (talk) 20:54, 29 August 2012 (UTC)[reply]
Terminology isn't nearly as standardized for engineering subsystems as it is for consumer-facing end-products. So, when you go down to a retail electronics store and buy a "graphics card," it's pretty clear that you mean "a GPU that supports (commonly-used graphics programming API A, B, and C version X); a connector-socket or sockets that connects to (recent modern video display standard D, E, and F version Y);" and so on. When you're looking across different types of computers, or across many generations spanning decades of engineering, the terminology is a lot less interchangeable. I used to work at a silicon company that built a small camera processor (you might call it a "Digital Signal Processor"). For some customers, who wanted fancy animated graphical user-interfaces on their cameras, we used the camera's input pixel pipeline hardware to implement polygon rendering, and we used the DMAs to blit and scale and interpolate. By short-circuiting the hardware input from the sensor interface, we could even plug "textures" through the pipe. So, our chip and our operating system supported (some) 3D graphics, and a lot of 2D graphics operations. Nobody called it a "GPU" - we called it awful. And our "framebuffer" was just regular RAM; and our PHY was called "VOPU" for "video output unit." Almost nobody knew or used these acronyms, because we never marketed them. Anyway, my point is, there are "GPU"s - standard chips from NVIDIA and ATI and Matrox and Intel that support commonly-understood interfaces; and then there are "GPUs" - anything that resembles a "unit" that sort of "processes" "graphics." You can build any computer you want - with any kind of graphics processing capability - if you can resource the engineering talent and money to fab silicon. Otherwise, you have to take the feature-set that's available on the open market. Nimur (talk) 16:36, 30 August 2012 (UTC)[reply]
The component that sends the data to the monitor is the TMDS transmitter (for DVI connections). A RAMDAC is used for VGA output. The GPU is used as a graphics accelerator and is used by the CPU to offload some of the graphics-processing burden. The CPU can manipulate graphics, but is less efficient at doing so. Today, both the RAMDAC and TMDS transmitter are usually integrated onto the same die as the GPU. But many card manufacturers add on a standalone TMDS transmitter to the card to support dual DVI outputs.—Best Dog Ever (talk) 05:39, 30 August 2012 (UTC)[reply]
I once read a tutorial on how to bit-bang a TV signal using nothing but a single high/low pin on a microcontroller. It was certainly a demystifying experience. [here is such a tutorial] I think you'll find it interesting. Vespine (talk) 23:56, 2 September 2012 (UTC)[reply]
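The core of any such bit-banging tutorial is timing arithmetic: an NTSC scanline lasts about 63.5 microseconds, with a horizontal sync pulse of roughly 4.7 microseconds, and the microcontroller has to schedule its pin toggles in CPU cycles. A sketch of that calculation (the NTSC figures are standard; the 16 MHz clock is just an example):

```python
# Timing arithmetic behind bit-banging a video signal on a single pin:
# converting standard NTSC scanline durations into CPU cycle budgets.

NTSC_LINE_US = 63.5   # one scanline, in microseconds
NTSC_HSYNC_US = 4.7   # horizontal sync pulse, in microseconds

def cycles(duration_us, clock_hz):
    """CPU cycles available in duration_us at the given clock rate."""
    return int(duration_us * clock_hz / 1_000_000)

# At a 16 MHz example clock, one scanline is 1016 cycles, of which the
# sync pulse consumes 75 - the rest is the budget for drawing pixels.
```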

Play Ogg Theora and Vorbis files directly on iPad


Is there any app or extension which can play Ogg files directly (without conversion) on an iPad? Write English in Cyrillic (talk) 06:03, 28 August 2012 (UTC)[reply]

You can use VLC on a jailbroken iPad, but it is no longer available from the Apple App Store. -- Finlay McWalterTalk 11:23, 28 August 2012 (UTC)[reply]

DNS not working, but only for browsers


I have a laptop running Windows 7 x64 which had some kind of Internet monitoring/filtering software on it called "Action Alert". I noticed that this software was chewing up a lot of CPU, so I uninstalled it via the "Uninstall Action Alert" entry in the Windows Start Menu. Immediately after this, I noticed that all of the web browsers on the laptop (IE, Firefox and Chrome) are unable to resolve any host names. I tried googling the issue and found this forum post, which describes exactly the same problem I am having, and provides additional technical details. To the best of my ability I ran the same tests as mentioned in that post, and confirmed that ping still works, and typing an IP address directly in the address bar also works. Other computers on the network have no problems, so I know it's not a general network issue. I am not using any proxies, either. I have never modified Windows Firewall, so it is set to whatever default settings it came with.

What can I do to get the browsers working again? 98.103.60.35 (talk) 12:50, 28 August 2012 (UTC)[reply]

I would also check that nslookup www.google.com works. Have you checked your HOSTS file? Also it's possible the firewall has been set to prevent HTTP traffic - have you checked that?--Phil Holmes (talk) 15:00, 28 August 2012 (UTC)[reply]
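Another quick way to check name resolution outside any browser (and so separate a system-wide DNS failure from a browser or proxy problem) is to ask the OS resolver directly. A small standard-library sketch:

```python
# Test name resolution through the OS resolver, independent of any
# browser. If this succeeds while browsers still fail, the problem is
# in the browser/proxy layer, not in DNS itself.

import socket

def resolves(hostname):
    """Return True if the OS resolver can turn hostname into an address."""
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:
        return False
```

For example, resolves("www.google.com") failing while ping of a raw IP works would point at DNS; resolves succeeding while browsers fail would point at the leftover filtering software's proxy hooks (e.g. a stale LSP or proxy setting).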
Sorry for a lame suggestion, but I would run a malware detector, on the theory that "Action Alert" is a redirector of some sort that has installed something that petulantly blocks browser requests that don't go through "Action Alert". Tarcil (talk) 17:21, 28 August 2012 (UTC)[reply]

Googling vs. typing URLs


Is there any data on how often people Google the keywords they're looking for vs. entering a URL in their browser with the hope that it's what they are looking for? I'm thinking along the lines of someone looking for the Transformers movie — they could Google "transformers", where the movie (or its sequel) is the third GHit; or they could type "transformers.com" as a try at the URL, and end up at Hasbro's Transformers toy page. (PS: I found this link by googling, but it's got no actual data, just opinions.) Tarcil (talk) 17:14, 28 August 2012 (UTC)[reply]

I will say that using a search engine is more likely to get you what you want. Anyone could have registered transformers.com. ¦ Reisio (talk) 17:46, 28 August 2012 (UTC)[reply]
I would say the opposite; I know the exact URLs of almost all the websites that I care to read, and I usually navigate directly to them, rather than searching for them. Anyone could rank highly in a web-search for a keyword - purely at the discretion of the search-engine operator - and the result can change minute by minute; but DNS has a little bit more persistence and requires a formal registration. Certain domains, like .gov, .mil, and .edu, require "sufficient" paperwork to establish identity, while a web-search result simply satisfies the output-criteria of a proprietary, unpublished search-engine algorithm.
It will be almost impossible to collect data "in the wild," because unless you install spyware on a system, or snoop its network traffic, you can't know which URLs it visits. Commercially-sponsored, well-controlled studies have probably been performed to collect such data, but it's unlikely that data is available for free. You might find Mozilla Pancake useful as a start. Here are their user experience data - which looks pretty sparse. And, here's a paper called Effective Browsing and Serendipitous Discovery with an Experience-Infused Browser (PDF) from Stanford's Human-Computer Interaction group. (And, I discovered all of these links without using a search-engine of any kind - by directly typing in the URL of a reliable source, and browsing). Nimur (talk) 18:30, 28 August 2012 (UTC)[reply]
And you'd be wrong. Even if you already know the URI, people make typos. I prefer typing them out myself as well, but using a search engine is going to be more reliable for most people most of the time. ¦ Reisio (talk) 17:47, 29 August 2012 (UTC)[reply]
I'm wrong about many things, but I'm curious: which thing am I wrong about in this case? It's not clear, from your comment. Nimur (talk) 16:43, 30 August 2012 (UTC)[reply]
Considering the OP's link is from four years ago and there haven't been too many major changes to how we use browsers, I think there are still people typing URLs. Certainly, that is what I do if I am pretty sure about the URL - typing "www.<companyname>.<tld>" usually gets me the site I want, and if not, then I have to search. My reasoning is that there are so many other sites out there offering opinion, comments, or just a page of links to similar sites that I am not interested in, or worse still offering me a US site of the same company. So for example, if I'm looking for the French language site of a famous Swedish furniture store, ikea.fr gets me straight there, while googling offers me ikea.com and links to their US, UK, Canada, Australia and Singapore sites. Anyway, you asked for some data: How about this forum, in which the third post says "... read some stats recently that 70% of Internet users type URLs into search bars..." then adds "...the lack of an address bar in Chrome is meant to steer more people back to Google for searches." Unfortunately he doesn't provide a source. Or there's this page about people typing URLs. A search reveals more discussion on this topic, but a quick skim through found no proper data; just discussions. Astronaut (talk) 19:20, 28 August 2012 (UTC)[reply]
One illustration of Nimur's thought of ranking highly in a web search might go some way to explaining Tango's question here. Perhaps Wikipedia simply ranks higher than Indian universities. Astronaut (talk) 19:26, 28 August 2012 (UTC)[reply]
[Original research] In my experience as a public library employee, the less experienced the user, the more likely they are to append ".com" to everything and expect to get the site they want, no matter what (and then complain when it's the wrong site or material offensive to their taste). This generally covers only the people who use free public internet, though, so I'm sure it's not representative. Mingmingla (talk) 22:54, 30 August 2012 (UTC)[reply]

Google Image Search slows computer down on Fedora 17


Further problems with Fedora 17: When I go on Google to make a Google Image Search that returns very many image hits, and I click on several images to go to the websites they appear on, sooner or later my whole computer starts slowing down. Even moving the mouse becomes difficult. Linux System Monitor shows that the current CPU usage is well over 50%, sometimes near 100%, even though Firefox is the only program doing anything, and even then it's only loading a new website or going back to the previous one. Especially going back to the search results list can take over five minutes, during which even moving the mouse is difficult. Once a page has loaded, browsing it is fairly easy, but still prone to minor slowness. Once I close the browser window, everything becomes normal again. This did not happen with Fedora 14. What could there be in Google Image Search that causes such slowness? JIP | Talk 19:13, 28 August 2012 (UTC)[reply]

Have you checked the memory usage? A high level of paging can make things slow down like this.--Phil Holmes (talk) 08:50, 29 August 2012 (UTC)[reply]
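On Linux, one way to check this is to read /proc/meminfo: heavy swap use combined with little free RAM is the classic signature of the paging-induced slowdown described. A small sketch that parses meminfo-style output (field names follow the standard /proc/meminfo format):

```python
# Check for memory pressure by parsing /proc/meminfo-style output (Linux).
# Large swap-in-use alongside low MemFree suggests the slowdown is paging.

def meminfo_to_dict(text):
    """Parse 'Key:  value kB' lines into a {key: kilobytes} dict."""
    result = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        fields = rest.split()
        if fields:
            result[key.strip()] = int(fields[0])
    return result

def swap_in_use_kb(info):
    """Kilobytes of swap currently occupied."""
    return info.get("SwapTotal", 0) - info.get("SwapFree", 0)

# Usage on a live system: meminfo_to_dict(open("/proc/meminfo").read())
```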
A runaway JS script, maybe? Уга-уга12 (talk) 13:34, 29 August 2012 (UTC)[reply]
I have found out that it's a problem specifically with Google Image Search. Browsing the results list, going to the preview page of a search result, or going back to the results list slows the computer down. Especially going back to the results list can grind the computer to a halt for minutes. Actually browsing any site a search result is found on works all OK. And if I spend enough time browsing a search result (in the order of several minutes), going back to the results list works OK. But if I go back after only a few seconds, the whole computer slows down. JIP | Talk 18:25, 1 September 2012 (UTC)[reply]

A better window list applet for Cinnamon?


Still more problems with Fedora 17. After upgrading, I immediately noticed that neither the GNOME Shell nor the old GNOME Panel in GNOME 3 was anywhere near to my liking. I agree with Linus Torvalds's criticism that the GNOME project seems to be actively trying to take control away from the user. So I installed Cinnamon, which seems to sort-of work like the old GNOME Panel did. But for my image editing purposes, I would need a window list applet that does the following:

  • If there are few enough windows for all of them to fit comfortably with their names in the taskbar, list them individually.
  • Otherwise, if one application has opened too many windows, group that application's windows into a single list item, clicking on which produces a vertical list of that application's windows, preferably with a tiny preview picture.

The default window list applet in Cinnamon doesn't do that. Instead, it tries to list every window separately, even when there are hundreds of them, making the individual items unusable. The window list of the GNOME Panel in Fedora 14 used to work the way I want.
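The grouping policy described above can be sketched as a small function: list windows individually while they fit, otherwise collapse each application's windows into one group entry. The threshold and window records here are purely illustrative, not Cinnamon's actual API:

```python
# Sketch of the desired taskbar policy: individual entries while they fit,
# one grouped entry per application once the total exceeds the taskbar's
# capacity. Window records and the max_items threshold are illustrative.

from collections import OrderedDict

def taskbar_items(windows, max_items=10):
    """windows: list of (app_name, window_title) tuples.
    Returns individual titles if they fit, else one entry per application."""
    if len(windows) <= max_items:
        return [title for _, title in windows]
    groups = OrderedDict()
    for app, _ in windows:
        groups[app] = groups.get(app, 0) + 1
    return ["%s (%d)" % (app, count) for app, count in groups.items()]
```

With a handful of windows this yields one item per window; with hundreds of image-editor windows open, it collapses them to a single "gimp (200)"-style entry, which is the behavior the old GNOME Panel window list provided.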

And while I'm at it, is there a resources applet available? Such an applet should show tiny real-time graphs of CPU, memory, disk and network usage. (Not how much disk space is in use - how much the disk is being accessed.) Again, the GNOME Panel in Fedora 14 used to do this. JIP | Talk 19:46, 28 August 2012 (UTC)[reply]