
Talk:Operating system



Edit request 6


Please replace the content of the "Memory management" and "Virtual memory" sections, after the hatnote under "Memory management", with: (note that the virtual memory section in my version is a subheading of "memory management")


Memory hierarchy is the principle that a computer has multiple tiers of memory, ranging from fast but expensive and volatile (not retaining information after a power shutoff) cache memory, through slower, less expensive, and still volatile main memory, to the bulk of the computer's storage on nonvolatile (persistent) and inexpensive, but more slowly accessed, solid-state drives or magnetic disks.[1] The memory manager is the part of the operating system that manages volatile memory.[1] Cache memory is typically managed by hardware, while main memory is typically managed by software.[2]
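
As a concrete illustration (an editorial sketch, not part of the proposed text or the cited sources), the following toy C program makes the hierarchy visible: it traverses the same array twice, once in cache-friendly row-major order and once in cache-hostile column-major order. On typical hardware the second pass is several times slower, since far more accesses miss the cache and fall through to main memory. The array size is arbitrary.

    /* Toy demonstration of the memory hierarchy: the row-major pass
     * walks consecutive addresses and mostly hits the cache; the
     * column-major pass touches a new cache line on nearly every
     * access and falls through to slower main memory. */
    #include <stdio.h>
    #include <time.h>

    #define N 4096
    static int a[N][N];

    int main(void) {
        long sum = 0;
        clock_t t0 = clock();
        for (int i = 0; i < N; i++)        /* row-major: cache-friendly */
            for (int j = 0; j < N; j++)
                sum += a[i][j];
        clock_t t1 = clock();
        for (int j = 0; j < N; j++)        /* column-major: cache-hostile */
            for (int i = 0; i < N; i++)
                sum += a[i][j];
        clock_t t2 = clock();
        printf("row-major %.2fs, column-major %.2fs (sum=%ld)\n",
               (double)(t1 - t0) / CLOCKS_PER_SEC,
               (double)(t2 - t1) / CLOCKS_PER_SEC, sum);
        return 0;
    }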

Early computers had no virtual addresses. Multiple programs could not be loaded in memory at the same time, so during a context switch the entire contents of memory would be saved to nonvolatile storage, then the next program was loaded in.[2] Virtual address space provided increased security by preventing applications from overwriting memory needed by the operating system or other processes[3][4] and enabled multiple processes to run simultaneously.[5] Virtual address space creates the illusion of nearly unlimited memory available to each process, even exceeding the hardware memory.[6]
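
To illustrate that last point (again an editorial sketch, not from the proposed text): on a modern 64-bit POSIX system, a process can reserve far more virtual address space than the machine has RAM, and the kernel assigns physical frames only to the pages actually touched. MAP_NORESERVE is Linux-specific, and the 1 TiB figure is arbitrary.

    /* Reserve 1 TiB of virtual address space (far more than typical
     * RAM), then touch a single page. Physical frames are allocated
     * lazily, on first access, so the huge reservation is nearly
     * free. Assumes a 64-bit Linux system. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void) {
        size_t len = 1ULL << 40;                     /* 1 TiB */
        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }
        strcpy(p, "only this one page gets a physical frame");
        printf("%s\n", p);
        munmap(p, len);
        return 0;
    }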

Address translation is the process by which virtual addresses are converted into physical ones by the memory management unit (MMU).[7][8] To cope with the increasing amounts of memory and storage in modern computers, translation typically uses a multi-level page table, which the MMU walks, together with a translation lookaside buffer (TLB) that caches recent translations for increased speed.[9] As part of address translation, the MMU prevents a process from accessing memory in use by another process (memory protection).[10]
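
A toy sketch of that translation path in C (invented field names and a 10/10/12 address split; no real architecture is implied): the TLB is checked first, and only on a miss does the two-level walk happen.

    #include <stdint.h>
    #include <stdbool.h>

    #define PAGE_SHIFT 12                 /* 4 KiB pages */
    #define TLB_SLOTS  64

    struct tlb_entry { uint32_t vpn; uint32_t frame; bool valid; };
    static struct tlb_entry tlb[TLB_SLOTS];

    /* Translate a 32-bit virtual address split 10/10/12: ten bits of
     * first-level index, ten of second-level index, twelve of page
     * offset. A real MMU would also check permission bits and raise
     * a page fault on a missing entry; both are omitted here. */
    uint32_t translate(uint32_t *level1[1024], uint32_t vaddr) {
        uint32_t vpn = vaddr >> PAGE_SHIFT;
        struct tlb_entry *e = &tlb[vpn % TLB_SLOTS];
        if (e->valid && e->vpn == vpn)            /* TLB hit: skip the walk */
            return (e->frame << PAGE_SHIFT) | (vaddr & 0xFFFu);
        uint32_t *level2 = level1[vpn >> 10];     /* first-level lookup  */
        uint32_t frame   = level2[vpn & 0x3FFu];  /* second-level lookup */
        *e = (struct tlb_entry){ vpn, frame, true };  /* cache the result */
        return (frame << PAGE_SHIFT) | (vaddr & 0xFFFu);
    }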

Virtual memory

Illustration of one process using memory segmentation

Often the amount of memory requested by processes exceeds the computer's total memory.[11] One strategy is swapping: after a process has run for a while, it is idled and its memory is swapped to permanent storage, so that the memory can be reused for another process.[12] The downside of this approach is that over time the physical memory becomes fragmented, because not all processes use the same amount of physical address space.[13] Also, the user may want to run a process too large to fit in memory.[14] Free blocks are tracked either with bitmaps or with free lists.[15]
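
A minimal sketch of the bitmap approach mentioned above (free lists are the other option), with one bit per fixed-size allocation unit and a first-fit search; all sizes and names here are illustrative.

    #include <stdint.h>

    #define UNITS 1024
    static uint8_t bitmap[UNITS / 8];     /* one bit per unit; set = in use */

    static int  test_bit(int i)  { return bitmap[i / 8] &  (1 << (i % 8)); }
    static void set_bit(int i)   { bitmap[i / 8] |=  1 << (i % 8); }
    static void clear_bit(int i) { bitmap[i / 8] &= ~(1 << (i % 8)); }

    /* First fit: return the start of n contiguous free units, or -1. */
    int alloc_units(int n) {
        for (int i = 0; i + n <= UNITS; i++) {
            int j = 0;
            while (j < n && !test_bit(i + j)) j++;
            if (j == n) {                 /* found a big-enough hole */
                for (j = 0; j < n; j++) set_bit(i + j);
                return i;
            }
            i += j;                       /* skip past the unit in use */
        }
        return -1;
    }

    void free_units(int start, int n) { while (n--) clear_bit(start++); }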

The most common approach to managing overflow from memory is dividing each process's memory usage into segments called pages.[14] All of the memory is backed up in disk storage,[16] and not all of the process's pages need to be in memory for execution to proceed.[14] If the process requests an address that is not currently in physical memory (a page fault), the operating system will fetch the page and resume operation.[8]
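
Schematically, servicing a page fault looks like the following C sketch. Every function and field here is invented for illustration; a real handler also deals with locking, I/O scheduling, eviction policy, and copy-on-write.

    #include <stdio.h>

    typedef struct { int present; int frame; long disk_slot; } pte_t;

    /* Stubs standing in for real machinery (all invented for this sketch). */
    static int  next_free_frame = 0;
    static int  grab_or_evict_frame(void) { return next_free_frame++; }
    static void read_from_disk(long slot, int frame) {
        printf("loading disk slot %ld into frame %d\n", slot, frame);
    }

    /* The page is valid but not resident, so bring it in and mark the
     * mapping present; the CPU then retries the faulting access. */
    void handle_page_fault(pte_t *pte) {
        int frame = grab_or_evict_frame();
        read_from_disk(pte->disk_slot, frame);
        pte->frame   = frame;
        pte->present = 1;
    }

    int main(void) {
        pte_t pte = { 0, 0, 42 };
        handle_page_fault(&pte);
        printf("present=%d frame=%d\n", pte.present, pte.frame);
        return 0;
    }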

References

  1. ^ a b Tanenbaum & Bos 2023, p. 179.
  2. ^ a b Tanenbaum & Bos 2023, p. 180.
  3. ^ Tanenbaum & Bos 2023, p. 183.
  4. ^ Anderson & Dahlin 2014, pp. 371–372, 414.
  5. ^ Tanenbaum & Bos 2023, pp. 183–184.
  6. ^ Anderson & Dahlin 2014, pp. 425, 454.
  7. ^ Anderson & Dahlin 2014, p. 371.
  8. ^ a b Tanenbaum & Bos 2023, p. 193.
  9. ^ Anderson & Dahlin 2014, p. 414.
  10. ^ Silberschatz et al. 2018, p. 357.
  11. ^ Tanenbaum & Bos 2023, p. 185.
  12. ^ Tanenbaum & Bos 2023, p. 186.
  13. ^ Tanenbaum & Bos 2023, p. 187.
  14. ^ a b c Tanenbaum & Bos 2023, p. 192.
  15. ^ Tanenbaum & Bos 2023, p. 188.
  16. ^ Anderson & Dahlin 2014, p. 454.

Reason: Add sources, more closely harmonize the amount of detail for each subtopic with the amount of coverage in reliable sources. Buidhe paid (talk) 06:26, 4 February 2024 (UTC)

The cache is largely managed by hardware, not by the OS's virtual memory code. The part of the memory hierarchy that's involved with virtual memory is the part that's of interest in this article.
In addition, while main memory is volatile on the vast majority of current systems, on the first systems that supported demand-paged virtual memory the main memory was magnetic-core memory, which is non-volatile. The volatility of memory is relevant to the OS only if the OS provides hibernate/reawaken capabilities: the OS saves the contents of memory to some non-volatile storage, the hardware shuts down to a power-saving mode in which it doesn't refresh main memory, and, on reawakening, the hardware goes back to a mode in which it refreshes main memory while the OS (or firmware) reloads memory from the non-volatile storage. So this section shouldn't mention volatility. Guy Harris (talk) 08:22, 26 May 2024 (UTC)
 Not done for now: An editor has expressed a concern about this requested edit. ABG (Talk/Report any mistakes here) 11:30, 1 June 2024 (UTC)
We can't expect every reader to understand how computer hardware works. I think it is beneficial to give some basic background on this subject, even if it is not technically part of the OS. The volatility of memory is extensively discussed in OS textbooks, so it should not be omitted just because one of us thinks it is irrelevant. The content is supported by the cited sources, and the article cannot reasonably cover every single possible OS or hardware ever in existence. Buidhe paid (talk) 05:13, 3 June 2024 (UTC)

Edit request 7


Please replace the current content of the "User interface" section, after the hatnote and the image, with the following text:


On personal computers, user input typically comes from a keyboard, mouse, trackpad, or touchscreen, which are connected to the operating system with specialized software.[1] Programmers often prefer output in the form of plain text, which is simple to support.[2] In contrast, other users often prefer graphical user interfaces (GUIs), which are supported by most PCs.[3] GUIs may be implemented with user-level code or by the operating system itself, and the software to support them is much more complex. They are also supported by specialized hardware in the form of a graphics card that usually contains a graphics processing unit (GPU).[4]
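
As one concrete example of that "specialized software" (an editorial sketch, not from the proposed text): on Linux, raw keyboard and mouse events can be read from an evdev device node. The device path varies per machine, and reading it normally requires elevated privileges.

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <linux/input.h>

    int main(void) {
        /* The event node is machine-specific; event0 is just a guess. */
        int fd = open("/dev/input/event0", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }
        struct input_event ev;
        while (read(fd, &ev, sizeof ev) == (ssize_t)sizeof ev) {
            if (ev.type == EV_KEY)        /* key or button press/release */
                printf("key code %d value %d\n", ev.code, ev.value);
        }
        close(fd);
        return 0;
    }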

References

  1. ^ Tanenbaum & Bos 2023, pp. 396, 402.
  2. ^ Tanenbaum & Bos 2023, p. 402.
  3. ^ Tanenbaum & Bos 2023, pp. 395, 408.
  4. ^ Tanenbaum & Bos 2023, p. 409.

Reason: the current section is UNDUE, as major operating systems textbooks lack a top-level chapter about user interface, and cover the topic briefly if at all. My version exploits summary style to improve conciseness, and also resolves the issue of uncited text. Buidhe paid (talk) 06:16, 5 February 2024 (UTC)

Retained mention of shell, and the distinction between computers in general and PCs in particular. Keeping both images seems unnecessary. Unclear which (if either) Buidhe paid wants to retain. Am inserting references provided above. Will update request to indicate completion upon addition of sources.--FeralOink (talk) 17:09, 24 May 2024 (UTC)
Added sources, removed KDE visual as new version is available and image isn't needed. Will wait to close out COI template until Buidhe paid confirms satisfaction or proposes corrections/further changes.--FeralOink (talk) 17:49, 24 May 2024 (UTC)
My understanding is that shell is just another word for an interface to an OS. I maintain that the rest of the content in that section is UNDUE based on coverage in overview sources of OS. If a picture is included in that section, it should be a GUI—the vast majority of coverage that does exist is about GUIs. Buidhe paid (talk) 02:05, 26 May 2024 (UTC)
The term "shell" originally referred to command-line shells; it dates back to at least this 1965 paper "The SHELL: A Global Tool for Calling and Chaining Procedures in the System" by Louis Pouzin, which is about a shell for Multics.
Microsoft speaks of the "Windows Shell" as part of the overall GUI; it doesn't appear to refer to the entire GUI.
In Unix-like systems - including even macOS - however, "shell" usually seems to refer to a command-line shell, probably because of Unix's history, including its historical connection to Multics. However, "GNOME shell" refers to the GUI shell for the GNOME desktop environment, and "KDE shell" is also used for a GUI shell for the KDE desktop environment.
I think that neither solely providing an image of a command-line shell nor providing an image of a GUI desktop environment would fully represent the notion of a "shell"; perhaps no image should be provided, with the task of providing screenshot examples being left to shell (computing).
And I'm not sure what "GUIs may be implemented with user-level code or by the operating system itself." means. Most of the code for a GUI runs in user mode on most operating systems, but is provided as part of the "operating system" in the larger sense of "a platform atop which applications run" rather than "the kernel code that performs privileged tasks and manages low-level resources such as the CPU and memory". Graphical device drivers may run in kernel mode, as may some code above the driver layer, but, as far as I know, graphical widgets such as text boxes, scrollbars, buttons, and spinboxes, and window decorations, are implemented by code running in user mode.
"GUIs evolve over time, e.g. Microsoft modified the GUI for almost every new version of its Windows operating system" doesn't strike me as relevant here. It might be relevant on graphical user interface, but it's not obvious to me that it's really notable; anybody who's updated the OS on their personal computer, tablet, or smartphone is likely to have seen at least one update that changes the look or feel of the user interface.
"GUIs may also require a graphics card, including a graphics processing unit (GPU)." Does that refer to an add-on graphics card? It may have been true of early PCs, but wasn't true of the Macintosh or of workstation computers, as they had, at minimum, a frame buffer built in. The early ones didn't have a full-blown GPU - the Mac used the CPU to do all the rendering. This might be another detail best left to graphical user interface or some such page.
So, yes, removal of at least some stuff from that section might be a good idea. Guy Harris (talk) 08:59, 26 May 2024 (UTC)
Thank you for your review, Guy Harris! I had removed all the images except one. I will remove the one remaining bash screenshot per your comments about the history of shells and the fact that a shell might (UNIX command line) or might not (e.g. Windows; GUI shells for GNOME and KDE desktop environments) be the entire GUI. I was a UNIX user long ago. I DO believe it is important to make a distinction between personal computer and non-personal computer interactions with an operating system. (I think that is accomplished.)
I will gladly remove the sentence about Microsoft changing GUIs with each new version of Windows as it isn't well-sourced!
I will remove Buidhe paid's sentence about how GUIs are implemented per your comment.
You suggest removing Buidhe paid's sentence, "They are also supported by specialized hardware in the form of a graphics card that usually contains a graphics processing unit (GPU)." (with reference), because historically this wasn't always the case. As of about 2010, dedicated GPUs were often on a graphics card. I will remove that sentence if you think it is too ambiguous and thus best avoided (and also because the GUI article should cover it in depth).
Guy, does this capture your comments and seem satisfactory for the User Interface subsection of this Operating System article?

"A user interface (UI) is required to operate a computer, i.e. for human interaction to be supported. The two most common user interfaces are

  • command-line interface, in which computer commands are typed, line-by-line,
  • graphical user interface (GUI) using a visual environment, most commonly a combination of the window, icon, menu, and pointer elements, also known as WIMP.

For personal computers (PCs), user input is typically from a combination of keyboard, mouse, and trackpad or touchscreen, all of which are connected to the operating system with specialized software.[139] PC users who are not software developers or coders often prefer GUIs for both input and output; GUIs are supported by most PCs. The software to support GUIs is more complex than a command line for input and plain text output.[141] Plain text output is often preferred by programmers, and is easy to support.[140]"

I apologize for not indenting the above passage, as I couldn't get the Wiki syntax to cooperate.--FeralOink (talk) 22:26, 1 June 2024 (UTC)
Thank you for further refinements, Guy Harris. I am now closing the request as accepted.--FeralOink (talk) 07:23, 4 June 2024 (UTC)
 Done--FeralOink (talk) 07:42, 4 June 2024 (UTC)

Edit request 8


Please remove the unsourced section "Networking". Reason: only one of the operating systems textbooks has a section on networking. I checked the source and it is a brief overview of networking in general, and does not cover how operating systems support networking (which makes up the current content of the section). Thus, I believe the section should be removed both for verifiability reasons as well as for being UNDUE. Buidhe paid (talk) 06:29, 5 February 2024 (UTC)

 Done TechnoSquirrel69 (sigh) 23:50, 29 May 2024 (UTC)

Edit request 9


Please replace the current content of the "History" section, including the hatnote, with:

An IBM System 360/65 Operator's Panel. OS/360 was used on most IBM mainframe computers beginning in 1966.

The first computers in the late 1940s and 1950s were directly programmed in machine code, entered via plugboards or punched cards, without programming languages or operating systems.[1] After the introduction of the transistor in the mid-1950s, mainframes began to be built. These still needed professional operators[1] but had rudimentary operating systems such as Fortran Monitor System (FMS) and IBSYS.[2] In the 1960s, IBM introduced the first series of intercompatible computers (System/360). All of them ran the same operating system—OS/360—which consisted of millions of lines of assembly language that had thousands of bugs. OS/360 was also the first popular operating system to support multiprogramming, such that the CPU could be put to use on one job while another was waiting on input/output (I/O). Holding multiple jobs in memory necessitated memory partitioning and safeguards against one job accessing the memory allocated to a different one.[3]

Around the same time, terminals were invented so multiple users could access the computer simultaneously. The operating system MULTICS was intended to allow hundreds of users to access a large computer. Despite its limited adoption, it can be considered the precursor to cloud computing. The UNIX operating system originated as a development of MULTICS for a single user.[4] Because UNIX's source code was available, it became the basis of other, incompatible operating systems, of which the most successful were AT&T's System V and the University of California's Berkeley Software Distribution (BSD).[5] To increase compatibility, the IEEE released the POSIX standard for system calls, which is supported by most UNIX systems. MINIX was a stripped-down version of UNIX, developed in 1987 for educational uses, that inspired the commercially available, free software Linux. Since 2008, MINIX is used in controllers of most Intel microchips, while Linux is widespread in data centers and Android smartphones.[6]

Microcomputers

Command-line interface of the MS-DOS operating system
Graphical user interface of a Macintosh

The invention of large scale integration enabled the production of personal computers (initially called microcomputers) from around 1980.[7] For around five years, CP/M (Control Program for Microcomputers) was the most popular operating system for microcomputers.[8] Later, IBM bought DOS (Disk Operating System) from Bill Gates. After modifications requested by IBM, the resulting system was called MS-DOS (MicroSoft Disk Operating System) and was widely used on IBM microcomputers. Later versions increased their sophistication, in part by borrowing features from UNIX.[8]

Steve Jobs' Macintosh, which after 1999 used the UNIX-based (via FreeBSD)[9] macOS, was the first popular computer to use a graphical user interface (GUI). The GUI proved much more user friendly than the text-only command-line interface earlier operating systems had used. Following the success of the Macintosh, MS-DOS was updated with a GUI overlay called Windows. Windows was later rewritten as a stand-alone operating system, borrowing so many features from another operating system (VAX VMS) that a large legal settlement was paid.[10] In the twenty-first century, Windows continues to be popular on personal computers but has a smaller share of the server market. UNIX operating systems, especially Linux, are the most popular on enterprise systems and servers, but are also used on mobile devices and many other computer systems.[11]

On mobile devices, Symbian OS was dominant at first, being usurped by BlackBerry OS (introduced 2002) and iOS for iPhones (from 2007). Later on, the open-source, UNIX-based Android (introduced 2008) became most popular.[12]

References

  1. ^ a b Tanenbaum & Bos 2023, p. 8.
  2. ^ Tanenbaum & Bos 2023, p. 10.
  3. ^ Tanenbaum & Bos 2023, pp. 11–12.
  4. ^ Tanenbaum & Bos 2023, pp. 13–14.
  5. ^ Tanenbaum & Bos 2023, pp. 14–15.
  6. ^ Tanenbaum & Bos 2023, p. 15.
  7. ^ Tanenbaum & Bos 2023, pp. 15–16.
  8. ^ a b Tanenbaum & Bos 2023, p. 16.
  9. ^ Tanenbaum & Bos 2023, pp. 17–18.
  10. ^ Tanenbaum & Bos 2023, p. 17.
  11. ^ Tanenbaum & Bos 2023, p. 18.
  12. ^ Tanenbaum & Bos 2023, pp. 19–20.

Reasons: make it more concise, use summary style, fix uncited text. Buidhe paid (talk) 06:58, 6 February 2024 (UTC)

Plugboards weren't really "code" in the sense of machine code, and punched cards weren't the only way machine code could be entered; punched paper tape was also an input medium.
Programming languages came along relatively early; assembly language dates back to some of the earliest computers, and even FORTRAN dates back to the IBM 704.
The IBM 704 and IBM 709, both vacuum-tube rather than transistor computers, are both referred to as "mainframes" on their Wikipedia pages, so I don't think it's clear that the introduction of transistors was a requirement for building mainframes. The FORTRAN Monitor System ran on the 709, so operating systems date back before transistorized computers.
All S/360s (other than the incompatible IBM System/360 Model 20 and IBM System/360 Model 44) may have been able to run OS/360, but not all did; many ran, for example, DOS/360, as OS/360 may not have run well on smaller machines.
I'm not sure OS/360 was the first OS to support multiprogramming. The PDP-6 Monitor may have been available before OS/360 and perhaps even before DOS/360 (at least some configurations of which supported multiprogramming, as far as I know - and those may have come out before OS/360 MFT or MVT), and was a time-sharing OS that not only supported multiprogramming but supported time-slicing. The Burroughs MCP came out even earlier than either of those and, as far as I know, supported multiprogramming as well.
The first computer terminals weren't really invented at that point. They were just teleprinters, such as the Flexowriter and various models from Teletype Corporation (Model 28, Model 33, Model 35, etc.), which were invented earlier and put to use as computer terminals at that later time.
The Compatible Time Sharing System (CTSS) preceded Multics as a time-sharing OS. It may be more correct to speak of time-sharing systems in general as predecessors to both client-server and cloud computing, rather than just mentioning Multics in particular (other time-sharing OSes may not have had the term "information utility" used when discussing them, but I don't think that makes Multics special in that regard).
UNIX wasn't a direct derivative of Multics. Some aspects of UNIX were inspired by Multics, such as the hierarchical directory structure and the notion of a command-line interpreter in which command names were file names for programs that implemented the command (although Multics ran commands within the same process, rather than creating a new process, as UNIX did).
System V and BSD weren't completely incompatible with, for example, Seventh Edition (V7) UNIX or UNIX/32V. There were some incompatibilities introduced, but most of the system library APIs and commands were V7-compatible. POSIX provided an interface that both SV and later BSDs were changed to support; it's a standard for more than just system calls, in the sense of "APIs implemented as simple traps to the OS kernel" - it also includes APIs such as getpwnam() and getpwuid(), which are mostly implemented in a user-mode library, although they do perform system calls to read from a file or send requests to or receive replies from a directory server.
What are the Intel chips in which MINIX is used? MINIX may have inspired Linus Torvalds to write the original Linux kernel, but Linux wasn't, as far as I know, based on MINIX.
Were the first microprocessors based on LSI or VLSI?
The first Macintosh computers did not run Mac OS X/OS X/macOS; they ran the classic Mac OS, which was not UNIX-based. Mac OS X only showed up in the early 2000s; it was developed from the BSD-based NeXTSTEP. Guy Harris (talk) 10:31, 26 May 2024 (UTC)
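
To make the POSIX point above concrete (an editorial sketch): getpwnam() is specified by POSIX, yet it is implemented mostly in a user-mode library that performs system calls, or contacts a directory server, on the caller's behalf.

    #include <stdio.h>
    #include <pwd.h>

    int main(void) {
        struct passwd *pw = getpwnam("root");  /* library call, not a bare trap */
        if (pw != NULL)
            printf("user %s: uid %d, home %s\n",
                   pw->pw_name, (int)pw->pw_uid, pw->pw_dir);
        return 0;
    }
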
I have checked some of the alleged inaccuracies and tweaked some to be accurate both to the source text and to what you are saying. I maintain that my version is much better than the current version because at least it is more concise and better sourced, which makes it easier to improve in the future.
As for some specific points:
  • My text The UNIX operating system originated as a development of MULTICS for a single user, is based on the source: Ken Thompson... found a small PDP-7 minicomputer that no one was using and set out to write a stripped-down, one-user version of MULTICS. This work later developed into the UNIX operating system. Perhaps there is a more informative concise phrasing, but I do not see how that is contradicted by what you are saying.
  • As for MINIX use in Intel chips, Tanenbaum et al. says: "MINIX was adapted by Intel for a separate and somewhat secret 'management' processor embedded in virtually all its chipsets since 2008." He also says that Linux "was directly inspired by and developed on MINIX". I'm not entirely sure what relationship "developed on" entails (he mentions file systems) but we can go with inspired if you prefer.
  • As for "Were the first microprocessors based on LSI or VLSI?": he does not mention VLSI.
  • The source does not go into pre-MacOSX operating systems, and I have edited to clarify.
Buidhe paid (talk) 05:08, 3 June 2024 (UTC)
The first-generation computer history part of Tanenbaum and Bos just describes very early first-generation computers. Several things, including assembler languages, some early higher-level languages, and business data processing were provided by later first-generation computers.
Tanenbaum and Bos does not say that OS/360 was the first popular OS to support multiprogramming. What it says is

Despite its enormous size and problems, OS/360 and the similar third-generation operating systems produced by other computer manufacturers actually satisfied most of their customers reasonably well. They also popularized several key techniques absent in second-generation operating systems. Probably the most important of these was multiprogramming.

(emphasis mine).
They did, at least, mention CTSS when talking about time-sharing (although their claim about protection hardware not showing up until the third generation is better stated as that hardware becoming common in the third generation - the modified 7090s and 7094s used for CTSS had special custom hardware from IBM providing relocation and protection). That section should probably mention the term "time-sharing".
I'd ask for a citation from Tanenbaum and Bos on their claim that Thompson was trying to write a "stripped-down, one-user version of MULTICS" - that sounds like folklore rather than fact. Dennis Ritchie says, in The Evolution of the Unix Time-sharing System, that Unix came from their desire to "find an alternative to Multics" and, in [https://www.bell-labs.com/usr/dmr/www/retro.pdf The UNIX Time-sharing System: A Retrospective], that "a good case can be made that it is in essence a modern implementation of MIT's CTSS system" - note the "in essence", so he's not saying it's a version of CTSS (which it isn't). Thompson himself said, in an article in the May 1999 issue of IEEE Computer, that "The early versions [of Unix] were essentially me experimenting with some Multics concepts on a PDP-7", which isn't as strong as "a stripped-down, one-user version of MULTICS".
"MINIX is used in controllers of most Intel microchips" is a too-vague version of what Tanenbaum and Bos said, which is that it's the OS for a management processor in Intel chipsets, separate from the CPU. See, for example, "The Truth About the Intel's Hidden Minix OS and Security Concerns" and "MINIX: ​Intel's hidden in-chip operating system". Guy Harris (talk) 08:40, 3 June 2024 (UTC)[reply]
A few historical notes
  • Honeywell had multiprogramming on the Honeywell 800, announced in 1958 and installed in 1960. And, yes, so did the MCP in 1961 on the B5000
  • CDC, GE and UNIVAC all had block relocation prior to the S/360. The B5000 had segmentation. Atlas had paging.
  • Yes, the PDP-6 supported multiprogramming
I'm not sure whether a reference to Stretch is needed; it announced multiprogramming earlier than some of the others but delivery was delayed. -- Shmuel (Seymour J.) Metz Username:Chatul (talk) 10:16, 3 June 2024 (UTC)
"Yes, the PDP-6 supported multiprogramming" - and memory protection/address relocation. Guy Harris (talk) 17:04, 3 June 2024 (UTC)
Clearly you and I disagree on what the source intends to say. In my opinion, if something is already a feature of a popular product it cannot be popularized, because it is already popular. I do think that more than a sentence or so on this issue is probably Undue Weight—the details on this belong in a different article. Would you be happy if I just took out the mention of multiprogramming? (t · c) buidhe 12:44, 4 June 2024 (UTC)
It makes more sense to move it earlier. Something like "After the introduction of the transistor in the mid-1950s, mainframes began to be built. These still needed professional operators[1] but had rudimentary operating systems such as Fortran Monitor System (FMS) and IBSYS.[2] In the 1960s, vendors began to offer multiprogramming operating systems. In 1964, IBM introduced the first series of intercompatible computers (System/360)."
I'm not sure what to do about the Atlas and B5000; they were designed in the late 1950s but installed in the 1960s. -- Shmuel (Seymour J.) Metz Username:Chatul (talk) 17:10, 4 June 2024 (UTC)

References

  1. ^ Tanenbaum & Bos 2023, p. 8.
  2. ^ Tanenbaum & Bos 2023, p. 10.

Need wordsmithing for virtual memory


The text "If a program tries to access memory that is not in its current range of accessible memory, but nonetheless has been allocated to it, the kernel is interrupted in the same way as it would if the program were to exceed its allocated memory. (See section on memory management.) This kind of interrupt is referred to as a page fault." has multiple issues.

  1. There may be holes in the accessible memory
  2. The interrupt might not be a page fault
  3. A page fault might not be an error.

At first I was planning to just throw in a reference to segmentation, but that would not address the other issues. Can someone come up with an accurate and clean rewording that takes into account such issues as:

  1. Demand paging
  2. Discontinuous storage allocation
  3. Guard pages for expandable structures
  4. Protection rings
  5. Read-only pages and segments
  6. Segmentation

without going into too much detail? -- Shmuel (Seymour J.) Metz Username:Chatul (talk) 15:16, 30 May 2024 (UTC)

This passage is completely unsourced, and therefore the first priority is to rewrite it based on reliable sources (as I did above). Wordsmithing is the last step, after sourcing & content. Buidhe paid (talk) 06:32, 5 June 2024 (UTC)
My main concern is accuracy, but I don't want to make the text awkward in the process of correcting it. -- Shmuel (Seymour J.) Metz Username:Chatul (talk) 17:56, 5 June 2024 (UTC)
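
As a starting point for that rewording, here is a schematic C sketch of the distinctions raised above: the same trap may mean demand paging, guard-page growth, a copy-on-write duplication, or a genuine error. Every name in it is invented for illustration.

    #include <stddef.h>

    enum fault_action { LOAD_PAGE, GROW_STACK, COPY_PAGE, SIGNAL_ERROR };

    struct region {            /* one mapped range of a sparse address space */
        int writable, guard, copy_on_write, present;
    };

    /* r is the region containing the faulting address, or NULL if the
     * address falls in a hole. Only SIGNAL_ERROR is an actual error;
     * the other outcomes are normal operation. */
    enum fault_action classify_fault(const struct region *r, int is_write) {
        if (r == NULL)   return SIGNAL_ERROR;  /* hole in a discontiguous space  */
        if (r->guard)    return GROW_STACK;    /* guard page: grow the structure */
        if (is_write && !r->writable)          /* read-only page or segment      */
            return r->copy_on_write ? COPY_PAGE : SIGNAL_ERROR;
        if (!r->present) return LOAD_PAGE;     /* demand paging                  */
        return SIGNAL_ERROR;                   /* e.g. a protection violation    */
    }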

Wiki99 summary


Summary of changes as a result of the Wiki99 project (before, after, diff):

  • Large-scale rewrite from reliable sources, fixing many unsourced content issues
  • Added over 100 citations to the latest editions of various OS textbooks
  • Brought coverage of OS topics more in line with the due weight in reliable sources

Further possibilities for improvement:

  • Finish rewrite of article, updating the sections I didn't get to with new content based on reliable sources and summary style
  • Get the article to good article status

Buidhe paid (talk) 07:24, 5 August 2024 (UTC)

UNIX vs. Unix-like, Darwin vs. FreeBSD, VM vs. OS


There are multiple parts where info is just wrong. Darwin uses modified utilities from FreeBSD for compatibility, but the kernel and core of the OS are completely different. Android is not UNIX-based; it's based on Linux, which makes it Unix-like, not actual UNIX. A VM is a virtualized or emulated computer, not an OS; it is typically used to run a separate OS from the host machine's OS, but it isn't an OS. Please fix these and other errors. Squid4572 (talk) 02:53, 21 September 2024 (UTC)

Darwin is a combination of Mach code, FreeBSD (and, at least at one point, also NetBSD) code, and Apple-developed code. Whether BSD code from 4.4-Lite is "UNIX" or "Unix-like" is a matter of debate; the trademark "UNIX" can be used for any operating system that passes the test suite for the Single UNIX Specification, regardless of how much AT&T code, if any, is in the operating system, and most versions of macOS, starting with Leopard, pass that test suite, making them UNIXes. (Lion, for some unknown reason, was never certified as passing it; Sequoia has not - yet - been announced as having passed it.) I just removed that bit about FreeBSD, which, over and above it being incomplete and possibly misleading, was out of place in a sentence talking about the Mac being the first popular computer with a GUI, as that's referring to the situation in 1984, long before the Mac had "OPENSTEP for Mach TNG" as its operating system.
Android has, as far as I know, never passed the Single UNIX Standard test suite; the UNIX-like code in it is the Linux kernel and the Bionic C library, the latter being based on the FreeBSD C library. I've changed it to say "Later on, the open-source Android operating system (introduced 2008), with a Linux kernel and a C library (Bionic) partially based on BSD code, became most popular."
A virtual machine isn't an OS. A hypervisor, which provides a virtual machine, could be considered a type of OS; I renamed the "Virtual machine" section to "Hypervisor" and modified it to say that "A hypervisor is an operating system that runs a virtual machine."
(I think the problems there are a combination of citing OS texts in which some statements were made without sufficient research - Nth-hand sources, for N > 2, so too far removed, a bit like the telephone game - and some misreading of what those sources say.) Guy Harris (talk) 08:44, 21 September 2024 (UTC)
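
To make the renamed section's distinction concrete (an editorial sketch, Linux-specific): under the KVM hypervisor, a virtual machine is just a kernel resource a program creates; the guest operating system is whatever is later loaded into it.

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    int main(void) {
        int kvm = open("/dev/kvm", O_RDWR);
        if (kvm < 0) { perror("open /dev/kvm"); return 1; }
        printf("KVM API version %d\n", ioctl(kvm, KVM_GET_API_VERSION, 0));
        /* An empty virtual machine: a kernel object with no guest OS yet. */
        int vm = ioctl(kvm, KVM_CREATE_VM, 0);
        printf("created VM, fd %d\n", vm);
        close(vm);
        close(kvm);
        return 0;
    }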

Semi-protected edit request on 1 December 2024

45.138.90.118 (talk) 00:16, 1 December 2024 (UTC)
 Not done: it's not clear what changes you want to be made. Please mention the specific changes in a "change X to Y" format and provide a reliable source if appropriate. Myrealnamm (💬Let's talk · 📜My work) 00:21, 1 December 2024 (UTC)