Thursday, July 30, 2009

multithreading models

  • many-to-one model
Many user-level threads are mapped to a single kernel thread.
Examples:
1. Solaris Green Threads
2. GNU Portable Threads
  • one-to-one model
Each user-level thread maps to a kernel thread.
Examples:
1. Windows NT/XP/2000
2. Linux
3. Solaris 9 and later

  • many-to-many model
Allows many user-level threads to be mapped to many kernel threads.
Allows the operating system to create a sufficient number of kernel threads.
Examples:
1. Solaris prior to version 9
2. Windows NT/2000 with the ThreadFiber package

thread library

The threads library allows concurrent programming in Objective Caml. It provides multiple threads of control (also called lightweight processes) that execute concurrently in the same memory space. Threads communicate by in-place modification of shared data structures, or by sending and receiving data on communication channels.

The threads library is implemented by time-sharing on a single processor. It will not take advantage of multi-processor machines. Using this library will therefore never make programs run faster. However, many programs are easier to write when structured as several communicating processes.

kernel thread

We have taken the execution aspect of a process and separated it out into threads
◆ To make concurrency cheaper
As such, the OS now manages threads and processes
◆ All thread operations are implemented in the kernel
◆ The OS schedules all of the threads in the system
OS-managed threads are called kernel-level threads or lightweight processes
◆ NT: threads
◆ Solaris: lightweight processes (LWPs)


user thread

To make threads cheap and fast, they need to be implemented at user level
◆ Kernel-level threads are managed by the OS
◆ User-level threads are managed entirely by the run-time system (a user-level library)
User-level threads are small and fast
◆ A thread is simply represented by a PC, registers, a stack, and a small thread control block (TCB)
◆ Creating a new thread, switching between threads, and synchronizing threads are done via procedure calls
» No kernel involvement
◆ User-level thread operations are roughly 100x faster than kernel thread operations
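
For contrast with both models, here is a minimal POSIX threads sketch (hedged: pthreads may be backed by user-level or kernel-level threads depending on the platform); compile with gcc demo.c -lpthread:

    /* Minimal pthreads sketch: create a thread and wait for it. */
    #include <pthread.h>
    #include <stdio.h>

    static void *worker(void *arg)
    {
        printf("hello from thread %s\n", (const char *)arg);
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;
        /* On a user-level implementation this is essentially a procedure
           call; on a kernel-level one it traps into the OS. */
        if (pthread_create(&tid, NULL, worker, "A") != 0) {
            fprintf(stderr, "pthread_create failed\n");
            return 1;
        }
        pthread_join(tid, NULL);   /* wait for the thread to finish */
        return 0;
    }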

benefits of multithreaded programming

Historically, code was written sequentially, meaning it executes one instruction after the next in a monolithic fashion, with no regard to the many possible resources available to the program. Overall performance can be severely degraded if the program performs a blocking call. Why are so many programs sequential? One guess is that most of us think in a sequential manner. Parallelizing our thoughts does not come naturally, nor is it an easy task.
However, with the increasing availability of symmetric multiprocessing machines, and even more advanced multi-core processors, programming multithreaded code is a skill worth learning.
Threads can add substantial performance improvements to certain types of applications, even on single-processor systems. Applications that access data from multiple sources, perform different types of manipulation on data, or transfer data to multiple end-points are all potential candidates for threading.
Basically, any time a program sequence may be stopped waiting, that sequence is a good candidate for its own thread. A program sequence may be stopped waiting for data from a hardware device, waiting for user input, or waiting for a specific state or condition to be met.
NexusWare Core has enabled developers to develop and successfully deploy applications such as lawful-interception applications, Class 5 soft-switch applications, SS7 line monitoring, SS7 STP, a suite of GSM mobile applications (SS7 link replacement, roaming broker), protocol converters, radar data processing, defense applications, and many more.
These types of applications are complex and require access to, and manipulation of, data from many different sources. Creating multiple threads within these applications has shown dramatic performance gains. Multi-threading is a good fit for the types of applications that run on NexusWare target hardware.

thread

  • Single-Threaded Processes -

Single-threaded apartments consist of exactly one thread, so all COM objects that live in a single-threaded apartment can receive method calls only from the one thread that belongs to that apartment. All method calls to a COM object in a single-threaded apartment are synchronized with the Windows message queue for the single-threaded apartment's thread. A process with a single thread of execution is simply a special case of this model.

  • Multi-Threaded Processes -

Each thread has a private stack
But threads share the process address space!
There's no memory protection!
Threads could potentially write into each other's stacks
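
A tiny pthreads sketch (illustrative; the data race is deliberate) makes the shared-address-space point concrete:

    /* Threads share globals (one address space) but each has its own stack. */
    #include <pthread.h>
    #include <stdio.h>

    int shared_counter = 0;              /* visible to every thread      */

    static void *bump(void *arg)
    {
        int local = 0;                   /* lives on this thread's stack */
        for (int i = 0; i < 1000; i++) { local++; shared_counter++; }
        /* Without a lock, the shared increment is a data race -- exactly
           the "no memory protection" hazard described above. */
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, bump, NULL);
        pthread_create(&b, NULL, bump, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("shared_counter = %d (may be < 2000 due to the race)\n",
               shared_counter);
        return 0;
    }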

interprocess communication



  • Direct communication


Under direct communication, each process that wants to communicate must explicitly name the recipient or sender of the message. In this scheme, the send and receive primitives are defined as send(P, message) (send a message to process P) and receive(Q, message) (receive a message from process Q). A communication link is established automatically between every pair of processes that want to communicate; the processes need to know only each other's identity, and a link is associated with exactly two processes.



  • Indirect communication


By comparison, with indirect communication the messages are sent to and received from mailboxes (also referred to as ports). Each mailbox has a unique identification, and two processes can communicate only if they share a mailbox: send(A, message) sends a message to mailbox A, and receive(A, message) receives a message from mailbox A. Here a link may be associated with more than two processes, and each pair of communicating processes may share several mailboxes.



  • synchronization


  1. Blocking send -- the sender is blocked until the message is received by the receiving process or mailbox

  2. Non-blocking send -- the sender sends the message and resumes operation immediately

  3. Blocking receive -- the receiver blocks until a message is available

  4. Non-blocking receive -- the receiver retrieves a valid message or returns an error code (see the message-queue sketch below)
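
The following hedged sketch shows cases 2 and 4 using POSIX message queues (the queue name /demo is illustrative; link with -lrt on Linux):

    /* Sketch of non-blocking send/receive with POSIX message queues. */
    #include <mqueue.h>
    #include <fcntl.h>
    #include <errno.h>
    #include <stdio.h>

    int main(void)
    {
        struct mq_attr attr = { .mq_maxmsg = 4, .mq_msgsize = 64 };
        mqd_t mq = mq_open("/demo", O_CREAT | O_RDWR | O_NONBLOCK, 0600, &attr);
        if (mq == (mqd_t)-1) { perror("mq_open"); return 1; }

        char buf[64];
        /* Non-blocking receive: returns immediately with an error code
           (EAGAIN) when no message is available, as in case 4 above. */
        if (mq_receive(mq, buf, sizeof buf, NULL) == -1 && errno == EAGAIN)
            printf("no message available, not blocking\n");

        mq_send(mq, "hi", 3, 0);           /* non-blocking send: queue not full */
        mq_receive(mq, buf, sizeof buf, NULL);
        printf("received: %s\n", buf);

        mq_close(mq);
        mq_unlink("/demo");
        return 0;
    }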


  • buffering

  1. Zero capacity -- the sender blocks until the receiver is ready; otherwise the message is lost

  2. Bounded capacity -- when the buffer is full, the sender blocks; when the buffer is not full, there is no need to block the sender

  3. Unbounded capacity -- there is never a need to block the sender



  • producer-consumer example

produce -- information to be consumed by the consumer

consume -- information produced by the producer
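
A classic bounded-buffer implementation of this example (a hedged pthreads sketch; buffer size and item counts are arbitrary) ties together the blocking behaviour from the synchronization and buffering notes above:

    /* Bounded-buffer producer/consumer sketch. Compile with -lpthread. */
    #include <pthread.h>
    #include <stdio.h>

    #define N 8                              /* bounded capacity */
    static int buf[N], in = 0, out = 0, count = 0;
    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t not_full  = PTHREAD_COND_INITIALIZER;
    static pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;

    static void *producer(void *arg)
    {
        for (int i = 1; i <= 20; i++) {
            pthread_mutex_lock(&m);
            while (count == N)               /* buffer full: producer blocks */
                pthread_cond_wait(&not_full, &m);
            buf[in] = i; in = (in + 1) % N; count++;
            pthread_cond_signal(&not_empty);
            pthread_mutex_unlock(&m);
        }
        return NULL;
    }

    static void *consumer(void *arg)
    {
        for (int i = 1; i <= 20; i++) {
            pthread_mutex_lock(&m);
            while (count == 0)               /* buffer empty: consumer blocks */
                pthread_cond_wait(&not_empty, &m);
            int item = buf[out]; out = (out + 1) % N; count--;
            pthread_cond_signal(&not_full);
            pthread_mutex_unlock(&m);
            printf("consumed %d\n", item);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t p, c;
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }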



Thursday, July 16, 2009

Inter-process communication (IPC)

Inter-process communication (IPC) is a set of techniques for the exchange of data among multiple threads in one or more processes. Processes may be running on one or more computers connected by a network. IPC techniques are divided into methods for message passing, synchronization, shared memory, and remote procedure calls (RPC). The method of IPC used may vary based on the bandwidth and latency of communication between the threads, and the type of data being communicated.
There are several reasons for providing an environment that allows process cooperation:
Information sharing
Computation speedup
Modularity
Convenience
IPC may also be referred to as inter-thread communication and inter-application communication.
IPC, together with the address space concept, forms the foundation for address space independence and isolation.

cooperating process

Cooperating process - a process that can affect, or be affected by, other processes executing in the system is called a cooperating process.

process operation

Process Operation -- Starting, controlling, and ending a process or procedure.
  1. process creation-

A process operates in a cycle consisting of the following steps:
1. the process is awakened by one of the awaited events
2. the process responds to the event, i.e., executes a fragment of its code method
3. the process puts itself to sleep
Before a process puts itself to sleep, it usually issues at least one wait request to specify the event(s) that will wake it up in the future. A process may also issue a persistent wait request, to be restarted cyclically by subsequent occurrences of the same event. A process that goes to sleep without specifying a single waking condition is terminated; there is no sense in keeping such a process around, as it will never run again. The effect is exactly the same as if the process had performed delete on its object handle as its last statement.
To put itself to sleep, a process (its code method) can execute the statement sleep, or simply return from the code method.
The following operation handles wait requests:
wait ai event state priority
Only the first two arguments are mandatory. The first identifies the agent (the so-called activity interpreter, or AI) responsible for triggering the waking event; the second specifies the actual event. One such activity interpreter is the timer, for which the event is the delay in milliseconds after which the timer is expected to go off.
The argument state is optional and defaults to an empty string. It specifies the state to be assumed when the awaited event wakes up the process; this state is returned in the process attribute State, which can be examined by the code method.

2. process termination-

Processes terminate in one of two ways:


Normal Termination occurs by a return from main or when requested by an explicit call to exit or _exit.
Abnormal Termination occurs as the default action of a signal or when requested by abort.
On receiving a signal, a process looks for a signal-handling function. Failure to find a signal-handling function forces the process to call exit, and therefore to terminate. The functions _exit, exit and abort terminate a process with the same effects except that abort makes available to wait or waitpid the status of a process terminated by the signal SIGABRT (see exit(2) and abort(3C)).
As a process terminates, it can set an eight-bit exit status code available to its parent. Usually, this code indicates success (zero) or failure (non-zero), but it can be used in any manner the user wishes. If a signal terminated the process, the system first tries to dump an image of core, then modifies the exit code to indicate which signal terminated the process and whether core was dumped. This is provided that the signal is one that produces a core dump (see signal(5)). Next, all signals are set to be ignored, and resources owned by the process are released, including open files and the working directory. The terminating process is now a ``zombie'' process, with only its process-table entry remaining, and that entry is unavailable for use until the process has finally terminated. Next, the process-table is searched for any child or zombie processes belonging to the terminating process. Those children are then adopted by init (by changing their parent process ID to 1). This is necessary since there must be a parent to record the death of the child. The last actions of exit are to record the accounting information and exit code for the terminated process in the zombie process-table entry and to send the parent the death-of-child signal, SIGCHLD (see ``Signals, job control and pipes'').
If the parent wants to wait until a child terminates before continuing execution, the parent can call wait, which causes the parent to sleep until a child zombie is found (meaning the child terminated). When the child terminates, the death-of-child signal is sent to the parent, although by default the parent ignores this signal. (Ignore is the default disposition; applications that fork children and need to know the return status should set this signal to other than ignore.) The search for child zombies continues until the terminated child is found, at which time the child's exit status and accounting information are reported to the parent (remember, the call to exit in the child put this information in the child's process-table entry) and the zombie process-table entry is freed. Now the parent can wake up and continue executing.
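
To make the exit/wait handshake concrete, here is a minimal POSIX sketch (not from the original text) in which a parent forks a child, the child exits with a status code, and the parent retrieves that code with wait:

    /* Sketch: parent waits for child and reads its 8-bit exit status. */
    #include <sys/wait.h>
    #include <unistd.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        pid_t pid = fork();               /* create a child process */
        if (pid == 0) {
            exit(42);                     /* child: normal termination */
        }
        int status;
        wait(&status);                    /* parent sleeps until child dies */
        if (WIFEXITED(status))
            printf("child %d exited with status %d\n",
                   (int)pid, WEXITSTATUS(status));
        else if (WIFSIGNALED(status))
            printf("child killed by signal %d\n", WTERMSIG(status));
        return 0;
    }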

process scheduling

1.The scheduler is the component of the kernel that selects which process to run next. The scheduler (or process scheduler, as it is sometimes called) can be viewed as the code that divides the finite resource of processor time between the runnable processes on a system. The scheduler is the basis of a multitasking operating system such as Linux. By deciding what process can run, the scheduler is responsible for best utilizing the system and giving the impression that multiple processes are simultaneously executing.
The idea behind the scheduler is simple. To best utilize processor time, assuming there are runnable processes, a process should always be running. If there are more processes than processors in a system, some processes will not always be running. These processes are waiting to run. Deciding what process runs next, given a set of runnable processes, is a fundamental decision the scheduler must make.
Multitasking operating systems come in two flavors: cooperative multitasking and preemptive multitasking. Linux, like all Unix variants and most modern operating systems, provides preemptive multitasking. In preemptive multitasking, the scheduler decides when a process is to cease running and a new process is to resume running. The act of involuntarily suspending a running process is called preemption. The time a process runs before it is preempted is predetermined, and is called the timeslice of the process. The timeslice, in effect, gives each process a slice of the processor's time. Managing the timeslice enables the scheduler to make global scheduling decisions for the system. It also prevents any one process from monopolizing the system. As we will see, this timeslice is dynamically calculated in the Linux scheduler to provide some interesting benefits.
Conversely, in cooperative multitasking, a process does not stop running until it voluntarily decides to do so. The act of a process voluntarily suspending itself is called yielding. The shortcomings of this approach are numerous: The scheduler cannot make global decisions regarding how long processes run, processes can monopolize the processor for longer than the user desires, and a hung process that never yields can potentially bring down the entire system. Thankfully, most operating systems designed in the last decade have provided preemptive multitasking, with Mac OS 9 and earlier being the most notable exceptions. Of course, Unix has been preemptively multitasked since the beginning.
During the 2.5 kernel series, the Linux kernel received a scheduler overhaul. A new scheduler, commonly called the O(1) scheduler because of its algorithmic behavior, solved the shortcomings of the previous Linux scheduler and introduced powerful new features and performance characteristics. In this section, we will discuss the fundamentals of scheduler design and how they apply to the new O(1) scheduler and its goals, design, implementation, algorithms, and related system calls.


2. Scheduling Queues - below is a list of the most common types of queues and their purpose.

• Job Queue - each entering process goes into the job queue. Processes in the job queue reside on mass storage and await the allocation of main memory.

• Ready Queue - the set of all processes that are in main memory and are waiting for CPU time is kept in the ready queue.

• Waiting (Device) Queues - the set of processes waiting for the allocation of certain I/O devices is kept in waiting (device) queues.

3.A context switch is the computing process of storing and restoring the state (context) of a CPU such that multiple processes can share a single CPU resource. The context switch is an essential feature of a multitasking operating system. Context switches are usually computationally intensive and much of the design of operating systems is to optimize the use of context switches. A context switch can mean a register context switch, a task context switch, a thread context switch, or a process context switch. What constitutes the context is determined by the processor and the operating system.

the concept of Process

  1. process state: the stage of execution that a process is in. It is these states which determine which processes are eligible to receive CPU time.
  2. Process Control Block
    All of the information needed to keep track of a process when switching is kept in a data package called a process control block. The process control block typically contains (a minimal C struct sketch appears at the end of this section):
    An ID number that identifies the process
    Pointers to the locations in the program and its data where processing last occurred
    Register contents
    States of various flags and switches
    Pointers to the upper and lower bounds of the memory required for the process
    A list of files opened by the process
    The priority of the process
    The status of all I/O devices needed by the process
    Each process has a status associated with it. Many processes consume no CPU time until they get some sort of input. For example, a process might be waiting for a keystroke from the user. While it is waiting for the keystroke, it uses no CPU time. While it's waiting, it is "suspended". When the keystroke arrives, the OS changes its status. When the status of the process changes, from pending to active, for example, or from suspended to running, the information in the process control block must be used like the data in any other program to direct execution of the task-switching portion of the operating system.
    This process swapping happens without direct user interference, and each process gets enough CPU cycles to accomplish its task in a reasonable amount of time. Trouble can begin if the user tries to have too many processes functioning at the same time. The operating system itself requires some CPU cycles to perform the saving and swapping of all the registers, queues and stacks of the application processes. If enough processes are started, and if the operating system hasn't been carefully designed, the system can begin to use the vast majority of its available CPU cycles to swap between processes rather than run processes. When this happens, it's called thrashing, and it usually requires some sort of direct user intervention to stop processes and bring order back to the system.
    One way that operating-system designers reduce the chance of thrashing is by reducing the need for new processes to perform various tasks. Some operating systems allow for a "process-lite," called a thread, that can deal with all the CPU-intensive work of a normal process, but generally does not deal with the various types of I/O and does not establish structures requiring the extensive process control block of a regular process. A process may start many threads or other processes, but a thread cannot start a process.
    So far, all the scheduling we've discussed has concerned a single CPU. In a system with two or more CPUs, the operating system must divide the workload among the CPUs, trying to balance the demands of the required processes with the available cycles on the different CPUs. Asymmetric operating systems use one CPU for their own needs and divide application processes among the remaining CPUs. Symmetric operating systems divide themselves among the various CPUs, balancing demand versus CPU availability even when the operating system itself is all that's running.
  3. In computer science, a thread of execution results from a fork of a computer program into two or more concurrently running tasks. The implementation of threads and processes differs from one operating system to another, but in most cases, a thread is contained inside a process. Multiple threads can exist within the same process and share resources such as memory, while different processes do not share these resources.
    On a single processor, multithreading generally occurs by time-division multiplexing (as in multitasking): the processor switches between different threads. This context switching generally happens frequently enough that the user perceives the threads or tasks as running at the same time. On a multiprocessor or multi-core system, the threads or tasks will generally run at the same time, with each processor or core running a particular thread or task. Support for threads in programming languages varies: a number of languages simply do not support having more than one execution context inside the same program executing at the same time. Examples of such languages include Python and OCaml, because parallelism in their runtime systems is limited by a central lock, called the "Global Interpreter Lock" in Python and the "master lock" in OCaml. Other languages may be limited because they use user threads, which are not visible to the kernel and thus cannot be scheduled to run concurrently. On the other hand, kernel threads, which are visible to the kernel, can run concurrently.
    Many modern operating systems directly support both time-sliced and multiprocessor threading with a process scheduler. The kernel of an operating system allows programmers to manipulate threads via the system call interface; threads implemented this way are called kernel threads, whereas a lightweight process (LWP) is a specific type of kernel thread that shares state and information.
    Programs can have user-space threads when threading with timers, signals, or other methods to interrupt their own execution, performing a sort of ad-hoc time-slicing.
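
Tying the PCB description above to code, here is a purely hypothetical C struct whose fields mirror the list in point 2; the names and sizes are invented for illustration, and a real PCB (such as Linux's struct task_struct) is far larger:

    /* Hypothetical process control block; all fields are illustrative. */
    #include <stdint.h>

    enum proc_state { NEW, READY, RUNNING, WAITING, SUSPENDED, TERMINATED };

    struct pcb {
        int             pid;            /* ID number identifying the process   */
        uintptr_t       pc;             /* where processing last occurred      */
        uintptr_t       regs[16];       /* saved register contents             */
        uint32_t        flags;          /* states of various flags/switches    */
        uintptr_t       mem_lo, mem_hi; /* bounds of the process's memory      */
        int             open_files[32]; /* files opened by the process         */
        int             priority;      /* scheduling priority                  */
        enum proc_state state;         /* current status (ready, waiting...)   */
    };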

Tuesday, July 7, 2009

virtual machines

implementation-A virtual machine was originally defined by Popek and Goldberg as "an efficient, isolated duplicate of a real machine". Current use includes virtual machines which have no direct correspondence to any real hardware. Virtual machines are separated into two major categories, based on their use and degree of correspondence to any real machine. A system virtual machine provides a complete system platform which supports the execution of a complete operating system (OS). In contrast, a process virtual machine is designed to run a single program, which means that it supports a single process. An essential characteristic of a virtual machine is that the software running inside is limited to the resources and abstractions provided by the virtual machine -- it cannot break out of its virtual world.

  • benefits-So what does this mean to developers and testers? Let's look at a few scenarios that developers and testers find themselves in. For testers it is important that they test software against the various supported operating systems that an application runs against. The traditional approach is to run multiple physical machines, each with a different operating system. This is bad for several reasons: space, maintenance, power, and feasibility come to mind. Deployment of the software to these various machines can also be an issue. Instead, a tester can run multiple virtual machines on one physical machine. Each virtual machine could have a different operating system. The application can be deployed to the virtual machines and tested.
    Another advantage of virtual machines is reproducibility. Build and test environments generally need to be well controlled. It would be undue work to have to wipe out a machine and rebuild it after each build or test run. A virtual machine allows the environment to be set up once. The environment is then captured. Any changes made after the capture can then be thrown away after the build or test run. Most emulation software packages offer this in some form or another.
    Another scenario, your application is currently released as version 1. Because of how the application is written you can only run a single version of your application on a machine. When you start development on version 2 you have to remove version 1. Part way through development an issue is found in the version 1 software that you need to replicate and fix. You can uninstall version 2 and install version 1, find and fix the issue and then revert back but that is a lot of work. A nicer approach is to have a virtual machine with version 1 installed. When you need to go back to version 1 you just start up the virtual machine. Even better is that you can easily compare the behavior of the two versions side by side rather than having to switch between two computers.
    IT departments have already found the benefits of running virtual servers over having multiple physical servers. Development and testing share many of the same benefits. Virtualization has become a buzzword in the industry. Windows is becoming more virtualized so even if you aren't using virtual machines today you may be in the future.

  • Example: A program written in Java receives services from the Java Runtime Environment (JRE) software by issuing commands to, and receiving the expected results from, the Java software. By providing these services to the program, the Java software is acting as a "virtual machine", taking the place of the operating system or hardware for which the program would ordinarily be tailored.

system generation

system generation (SYSGEN): The process of selecting optional parts of an operating system and of creating a particular operating system tailored to the requirements of a data processing installation.

system boot

The typical computer system boots over and over again with no problems, starting the computer's operating system (OS) and identifying its hardware and software components that all work together to provide the user with the complete computing experience. But what happens between the time that the user powers up the computer and when the GUI icons appear on the desktop?
In order for a computer to successfully boot, its BIOS, operating system and hardware components must all be working properly; failure of any one of these three elements will likely result in a failed boot sequence.
When the computer's power is first turned on, the CPU initializes itself, which is triggered by a series of clock ticks generated by the system clock. Part of the CPU's initialization is to look to the system's ROM BIOS for its first instruction in the startup program. The ROM BIOS stores the first instruction, which is the instruction to run the power-on self test (POST), in a predetermined memory address. POST begins by checking the BIOS chip and then tests CMOS RAM. If the POST does not detect a battery failure, it then continues to initialize the CPU, checking the inventoried hardware devices (such as the video card), secondary storage devices, such as hard drives and floppy drives, ports and other hardware devices, such as the keyboard and mouse, to ensure they are functioning properly.
Once the POST has determined that all components are functioning properly and the CPU has successfully initialized, the BIOS looks for an OS to load.
The BIOS typically looks to the CMOS chip to tell it where to find the OS, and in most PCs, the OS loads from the C drive on the hard drive even though the BIOS has the capability to load the OS from a floppy disk, CD or ZIP drive. The order of drives that the CMOS looks to in order to locate the OS is called the boot sequence, which can be changed by altering the CMOS setup. Looking to the appropriate boot drive, the BIOS will first encounter the boot record, which tells it where to find the beginning of the OS and the subsequent program file that will initialize the OS.
Once the OS initializes, the BIOS copies the OS's files into memory and the OS basically takes over control of the boot process. Now in control, the OS performs another inventory of the system's memory and memory availability (which the BIOS already checked) and loads the device drivers that it needs to control the peripheral devices, such as a printer, scanner, optical drive, mouse and keyboard. This is the final stage in the boot process, after which the user can access the system's applications to perform tasks.

system structure

Simple Structure



--> View the OS as a series of levels
--> Each level performs a related subset of functions
--> Each level relies on the next lower level to perform more primitive functions
--> This decomposes a problem into a number of more manageable subproblems



Layered Approach



The operating system is divided into a number of layers (levels), each built on top of lower layers. The bottom layer (layer 0), is the hardware; the highest (layer N) is the user interface.


With modularity, layers are selected such that each uses functions (operations) and services of only lower-level layers.

Thursday, July 2, 2009

system call

  • Process Control: These types of system calls are used to create and manage processes. Some examples are end, abort, load, execute, create process, terminate process, get/set process attributes, wait for time, wait for event, signal event, and allocate/free memory. For example, when a running program aborts abnormally, the operating system may dump memory to disk for a debugger, display an error message, and return control to the command interpreter.

  • File Management: These types of system calls are used to manage files. Some examples are Create file, delete file, open, close, read, write etc.
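
A sketch of these file-management calls on a POSIX system (the file name demo.txt is illustrative, error handling abbreviated):

    /* Create/open, write, read, close, and delete a file. */
    #include <fcntl.h>
    #include <unistd.h>
    #include <stdio.h>

    int main(void)
    {
        int fd = open("demo.txt", O_CREAT | O_RDWR | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); return 1; }

        write(fd, "hello\n", 6);                 /* write system call */
        lseek(fd, 0, SEEK_SET);                  /* rewind to start   */

        char buf[16];
        ssize_t n = read(fd, buf, sizeof buf);   /* read system call  */
        if (n > 0) fwrite(buf, 1, (size_t)n, stdout);

        close(fd);                               /* close system call */
        unlink("demo.txt");                      /* delete the file   */
        return 0;
    }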

  • Device Management: These types of system calls are used to manage devices. Some examples are Request device, release device, read, write, get device attributes etc.

  • Information Maintenance: Many system calls exist simply for the purpose of transferring information between the user program and the operating system. For example, most systems have a system call to return the current time and date. Other system calls may return information about the system, such as the number of current users, the version number of the operating system, the amount of free memory or disk space, and so on.
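
A short POSIX sketch of such calls (note that _SC_NPROCESSORS_ONLN is a common extension rather than a guaranteed constant):

    /* Information maintenance: time/date, process ID, a system attribute. */
    #include <unistd.h>
    #include <time.h>
    #include <stdio.h>

    int main(void)
    {
        time_t now = time(NULL);                  /* current time and date */
        printf("now: %s", ctime(&now));
        printf("pid: %d\n", (int)getpid());       /* process attribute     */
        printf("cpus online: %ld\n", sysconf(_SC_NPROCESSORS_ONLN));
        return 0;
    }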

operating system services

Operating systems are responsible for providing essential services within a computer system:
» Initial loading of programs and transfer of programs between secondary storage and main memory
» Supervision of the input/output devices
» File management
» Protection facilities

System Components

  • operating system process management-Multiprogramming systems explicitly allow multiple processes to exist at any given time, where only one is using the CPU at any given moment, while the remaining processes are performing I/O or are waiting.
    The process manager is one of the four major parts of the operating system. It implements the process abstraction. It does this by creating a model for the way the process uses CPU and any system resources. Much of the complexity of the operating system stems from the need for multiple processes to share the hardware at the same time. As a consequence of this goal, the process manager implements CPU sharing (called scheduling), process synchronization mechanisms, and a deadlock strategy. In addition, the process manager implements part of the operating system's protection and security.

  • Main memory management

Memory management is a tricky compromise between performance (access time) and quantity (available space). We always seek the maximum available memory space but we are rarely prepared to compromise on performance. Memory management must also perform the following functions:
» allow memory sharing (for a multi-threaded system);
» allocate blocks of memory space for different tasks;
» protect the memory spaces used (e.g. prevent a user from changing a task performed by another user);
» optimise the quantity of available memory, specifically via memory expansion systems.
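
As a small POSIX illustration of two of these functions (allocation and protection), here is a hedged sketch using mmap and mprotect; the 4096-byte page size is assumed for brevity:

    /* Ask the OS for a block of memory, then change its protection. */
    #include <sys/mman.h>
    #include <stdio.h>

    int main(void)
    {
        /* Allocate one anonymous read/write page from the OS. */
        void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        ((char *)p)[0] = 'x';                 /* use the block       */
        mprotect(p, 4096, PROT_READ);         /* now read-only       */
        /* A write here would fault (SIGSEGV): memory protection.    */

        munmap(p, 4096);                      /* return it to the OS */
        return 0;
    }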

  • file management

Specifically, one may create a new file or edit an existing file and save it; open or load a pre-existing file into memory; or close a file without saving it. Additionally, one may group related files in directories. These tasks are accomplished in different ways in different operating systems and depend on the user interface design and, to some extent, the storage medium being used.

  • I/O system management

One purpose of the I/O system is to hide the peculiarities of specific hardware devices from the rest of the operating system and from the user. The I/O subsystem consists of a memory-management component that includes buffering, caching, and spooling; a general device-driver interface; and drivers for specific hardware devices. Only the device driver knows the peculiarities of the specific device to which it is assigned.

  • secondary storage management

Secondary storage management is a classical feature of database management systems. It is usually supported through a set of mechanisms. These include index management, data clustering, data buffering, access path selection and query optimization.
None of these is visible to the user: they are simply performance features. However, they are so critical in terms of performance that their absence will keep the system from performing some tasks (simply because they take too much time). The important point is that they be invisible. The application programmer should not have to write code to maintain indices, to allocate disk storage, or to move data between disk and main memory. Thus, there should be a clear independence between the logical and the physical level of the system.

  • protection system

Protection refers to the mechanisms for controlling the access of programs, processes, or users to the resources defined by the computer system, and for guarding the system against any harm from computer attacks.

  • command-interpreter system

A command interpreter is the part of a computer operating system that understands and executes commands that are entered interactively by a human being or from a program. In some operating systems, the command interpreter is called the shell.
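
To ground this, here is a deliberately minimal shell-style interpreter sketch (single-word commands only; the mysh prompt is invented), using the fork/exec/wait pattern described earlier:

    /* Minimal command-interpreter loop: read a command, fork, exec, wait. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        char line[256];
        for (;;) {
            printf("mysh> ");
            if (!fgets(line, sizeof line, stdin)) break;   /* EOF exits */
            line[strcspn(line, "\n")] = '\0';
            if (line[0] == '\0') continue;
            if (strcmp(line, "exit") == 0) break;

            pid_t pid = fork();
            if (pid == 0) {                   /* child runs the command */
                execlp(line, line, (char *)NULL);
                perror("execlp");             /* only reached on failure */
                _exit(127);
            }
            wait(NULL);                       /* shell waits, then reprompts */
        }
        return 0;
    }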