1965-1969 Together with General Electric and MIT, Bell Telephone Laboratories worked on a new operating system called Multics. The main goal was to develop a multitasking operating system for mainframes that would allow a large number of users to work in parallel on a single machine with limited resources such as memory.
1969-1971 Bell Laboratories withdrew from the project because, despite a first prototypical implementation, they were not convinced of its success. Former project members at Bell Labs, among them Ken Thompson and Dennis Ritchie, soon began developing a new file system to make their development environment more usable. In the process it proved necessary to build several additional subsystems and tools. The new operating system was named "Unix", a pun stressing the contrast with the earlier Multics system.
1971 and the following years were devoted to enhancing and improving Unix. In particular, the decision to rewrite the entire operating system in C and to use it for teaching at universities increased its popularity. As a result, different flavors of the Unix system came to life to satisfy the needs of different developers. Finally, in 1987, AT&T offered commercial support for its System V.
In 1987 Andrew S. Tanenbaum, a computer science professor at the Vrije Universiteit Amsterdam, developed a Unix-like, purely educational operating system called Minix. His goal was to show his students how an operating system works internally and to make it available on inexpensive PCs. However, Tanenbaum placed various restrictions on Minix; for example, he did not accept enhancements to the Minix kernel, which effectively prevented the system from evolving further.
In 1991 the Finnish student Linus Torvalds took Minix as the starting point for an operating system of his own for the PC. The initial goal was a mere terminal emulator, but over time the project grew into a Unix-like system in its own right. Soon very few of the original Minix elements remained, thanks to the many enhancements and improvements. The new system was initially called "Freax" but was soon officially named "Linux" after its creator. Torvalds was continuously looking for help and openly offered his source code to fellow developers. Everyone was allowed to use and change the source code, as long as they made their own changes available under the same terms. The development of the open-source project Linux had begun.
In 1994 Torvalds considered the central component, the kernel, mature and complete enough to release version 1.0 in March under the GNU General Public License. Systems built on this kernel were already network-enabled and, combined with the X Window System, provided a graphical user interface.
Today Linux is a complete Unix system that keeps pace with current components and the latest ideas. The majority of free software packages are available for Linux, and virtually any functionality one can think of either can be or already has been integrated into the kernel. Many companies assemble complete Linux-based software collections, called "distributions", and offer support for them. In the server market in particular, Unix systems are popular because of their proven security and stability. Even in desktop computing Linux has become a viable alternative to Windows thanks to many improvements in ergonomics and compatibility. However, hardware compatibility still poses problems: it is not uncommon to wait years for Linux drivers for a rarely used hardware component, because many manufacturers release only Windows versions of their driver software, which then have to be ported to Linux.
Linux system architecture:
The central component of a Linux system is the so-called kernel. The kernel manages the hardware and offers services. Among the tasks of the kernel are:
- Process management (necessary for pseudo-parallel operations)
- Memory management
- File system
- Networking / network drivers
- Device and block device drivers
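User programs reach these kernel services through system calls. As a rough illustration (not part of the original text), a short Python sketch using only the standard library touches two of them, process management and the file system; the temporary file is a stand-in for any path:

```python
import os
import tempfile

# Each call below is a thin wrapper around a kernel service (a system call).
pid = os.getpid()                  # process management: ask the kernel for our PID
fd, path = tempfile.mkstemp()      # file system: create a file, get a descriptor
os.write(fd, b"hello kernel\n")    # write through the descriptor
os.close(fd)
with open(path, "rb") as f:
    data = f.read()                # read the bytes back
os.unlink(path)                    # remove the temporary file again
```

The same pattern, a library wrapper around a kernel entry point, underlies C's `open(2)` and `read(2)` as well.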
In practice the kernel is operated through so-called init scripts. These scripts run mainly while the system boots and orchestrate the initialization of all services and hardware components. The advantages of a static system kernel are its stability and simplicity; in addition, the kernel can be exchanged or modified when desired.
The software architecture is built on a modular dependency system: a program typically requires specific packages or libraries to be installed before it can run, and often several packages must be installed before the actual application can be used. To complicate matters, many software packages come in different versions. A central piece of any Linux distribution is therefore its installation tool, providing more or less sophisticated package management. Typically, the tool downloads an application and all of its dependencies from a defined Internet source (a repository) and installs everything locally, so the user does not need to worry about version numbers or incompatibilities.
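The core of such a package manager can be sketched as a dependency resolution step: each package is installed only after everything it depends on. A toy resolver in Python, with an entirely hypothetical dependency graph, shows the idea:

```python
# Toy dependency resolver: a depth-first walk over a (hypothetical)
# dependency graph yields an installation order, dependencies first.
DEPS = {
    "editor": ["libgui", "libc"],
    "libgui": ["libc"],
    "libc": [],
}

def install_order(pkg, deps, done=None):
    """Return the list of packages to install, dependencies before dependents."""
    if done is None:
        done = []
    for dep in deps[pkg]:
        install_order(dep, deps, done)
    if pkg not in done:               # skip packages already scheduled
        done.append(pkg)
    return done

order = install_order("editor", DEPS)
print(order)  # ['libc', 'libgui', 'editor']
```

A real package manager additionally handles version constraints and cyclic or missing dependencies, which this sketch omits.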
Server: offering services required by other applications, either locally or remotely. Server software has to respond to and manage a large number of service requests, so security, performance and stability are key concerns. Linux is the first choice for many because of its track record of stability and its multi-user capabilities. Typical server applications are web servers, FTP servers, mail servers and proxies.
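All of the services listed above follow the same accept-and-respond pattern. A minimal sketch in Python (not any particular server's implementation) accepts one TCP connection and echoes the request back; the client runs in the same process purely for demonstration:

```python
import socket
import threading

# Minimal TCP server sketch: accept one connection, echo the request back.
def serve_once(ready, port_holder):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))        # port 0: let the kernel pick a free port
    port_holder.append(srv.getsockname()[1])
    srv.listen(1)
    ready.set()                        # tell the client the port is published
    conn, _ = srv.accept()
    conn.sendall(conn.recv(1024))      # echo the data back unchanged
    conn.close()
    srv.close()

ready, port_holder = threading.Event(), []
t = threading.Thread(target=serve_once, args=(ready, port_holder))
t.start()
ready.wait()                           # wait until the server is listening
cli = socket.create_connection(("127.0.0.1", port_holder[0]))
cli.sendall(b"ping")
reply = cli.recv(1024)
cli.close()
t.join()
```

Production servers wrap this loop with concurrency, access control and logging, which is where the security and stability concerns above come in.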
Desktop computing: most distributions offer complete office software packages. This does not yet eliminate the need for the console entirely, but the available graphical user interfaces provide a good Windows replacement.
Embedded Linux: Linux also runs on devices such as MP3 players or cell phones. A specially tuned kernel manages the surrounding hardware and offers a minimal operating system. Linux is well suited to embedded solutions because its networking functionality is integrated directly into the kernel and its memory usage is low.