Thursday, June 25, 2009

device status table

Is there a way to get any third-party devices that are regularly sending SNMP traps to CIM7 to show up in the Device Status field relating to the trap severity? I am getting plenty of major error messages from one of my switches, but the device is still showing as normal.

02. interrupt and trap


Interrupts and Traps. A great deal of the kernel consists of code that is invoked as the result of an interrupt or a trap.
While the words "interrupt" and "trap" are often used interchangeably in the context of operating systems, there is a distinct difference between the two.
An interrupt is a CPU event that is triggered by some external device.
A trap is a CPU event that is triggered by a program. Traps are sometimes called software interrupts. They can be deliberately triggered by a special instruction, or they may be triggered by an illegal instruction or an attempt to access a restricted resource.

When an interrupt is triggered by an external device, the hardware saves the status of the currently executing process, switches to kernel mode, and enters a routine in the kernel. This routine is a first-level interrupt handler. It can either service the interrupt itself or wake up a process that has been waiting for the interrupt to occur. When the handler finishes, it usually causes the CPU to resume the process that was interrupted. However, the operating system may schedule another process instead.

When an executing process requests a service from the kernel using a trap, the process status information is saved, the CPU is placed in kernel mode, and control passes to code in the kernel. This kernel code is called the system service dispatcher. It examines parameters set before the trap was triggered, often information in specific CPU registers, to determine what action is required. Control then passes to the code that performs the desired action. When the service is finished, control is returned to either the process that triggered the trap or some other process.

Traps can also be triggered by a fault. In this case the usual action is to terminate the offending process. It is possible on some systems for applications to register handlers that will be invoked when certain conditions occur -- such as a division by zero.
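That last point can be illustrated with a small Python sketch (not tied to any particular OS): a handler is registered, and the kernel's trap/interrupt machinery invokes it when a signal is delivered to the process. The use of SIGUSR1 here is just for illustration, and it is Unix-only.

```python
import os
import signal

# Record which signals our handler has seen.
handled = []

def on_signal(signum, frame):
    # Invoked by the kernel's signal-delivery machinery, not called directly.
    handled.append(signum)

# Register the handler, then deliver the signal to ourselves.
signal.signal(signal.SIGUSR1, on_signal)   # register
os.kill(os.getpid(), signal.SIGUSR1)       # trigger delivery

print(handled == [signal.SIGUSR1])         # -> True on Unix-like systems
```

The process never calls `on_signal` itself; the kernel transfers control to it when the condition occurs, which is the same shape as the fault handlers described above.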

01. bootstrap program

Code stored in ROM that is able to locate the kernel, load it into memory, and start its execution

In computing, booting is a bootstrapping process that starts the operating system when the user turns on a computer. Most computer systems can only execute code found in memory (ROM or RAM); modern operating systems are mostly stored on hard disk drives, LiveCDs, and USB flash drives. Just after a computer has been turned on, it doesn't have an operating system in memory. The computer's hardware alone cannot perform complicated actions such as loading a program from disk on its own, so a seemingly irresolvable paradox is created: to load the operating system into memory, one appears to need to have an operating system already installed.
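The paradox is resolved by chaining ever-larger loaders: a tiny fixed program in ROM starts a slightly bigger one, which starts the kernel. Here is a toy Python sketch of that chain -- the names (FAKE_DISK, rom_stub, the stages) are all made up for illustration, not a real boot sequence.

```python
# Toy model of chain loading. Each stage knows only enough
# to locate and start the next, larger stage.

FAKE_DISK = {
    "stage2": lambda: FAKE_DISK["kernel"](),  # stage 2 locates and starts the kernel
    "kernel": lambda: "kernel running",       # stand-in for the OS entry point
}

def rom_stub():
    """The code burned into ROM: tiny, fixed, and only able to start stage 2."""
    return FAKE_DISK["stage2"]()

print(rom_stub())  # -> kernel running
```

No stage needs a full operating system to run; each one only needs enough capability to hand off control to the next.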

hardware protection

Click the hyperlink below for the content:
http://informatik.unibas.ch/lehre/ws06/cs201/_Downloads/cs201-osc-svc-2up.pdf

storage hierarchy

caching
In computer science, a cache (pronounced /kæʃ/) is a collection of data duplicating original values stored elsewhere or computed earlier, where the original data is expensive to fetch (owing to longer access time) or to compute, compared to the cost of reading the cache. In other words, a cache is a temporary storage area where frequently accessed data can be stored for rapid access. Once the data is stored in the cache, it can be used in the future by accessing the cached copy rather than re-fetching or recomputing the original data.
A cache has proven to be extremely effective in many areas of computing because access patterns in typical computer applications have locality of reference. There are several kinds of locality, but this article primarily deals with data that are accessed close together in time (temporal locality). The data might or might not be located physically close to each other (spatial locality).
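Temporal locality is exactly what makes a cache pay off: the first access pays the full cost, and repeated accesses are served from the cached copy. A minimal Python sketch using the standard library's lru_cache, with a sleep standing in for a slow disk or network fetch:

```python
import functools
import time

@functools.lru_cache(maxsize=128)
def expensive_fetch(key):
    time.sleep(0.05)          # stand-in for a slow fetch of the original data
    return key.upper()

t0 = time.perf_counter()
expensive_fetch("page1")      # miss: pays the full access cost
cold = time.perf_counter() - t0

t0 = time.perf_counter()
expensive_fetch("page1")      # hit: served from the cached copy
warm = time.perf_counter() - t0

print(warm < cold)  # -> True
```

The second call returns in microseconds rather than tens of milliseconds, which is the whole argument for caches in one measurement.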

caching, coherency and consistency

Cache coherency problems can arise when more than one processor refers to the same data. Assuming each processor has cached a piece of data, what happens if one processor modifies its copy of the data? The other processor now has a stale copy of the data in its cache.
Cache coherency and consistency define the action of the processors to maintain coherence. More precisely, coherency defines what value is returned on a read, and consistency defines when it is available.
Unlike other Cray systems, cache coherency on Cray X1 systems is supported by a directory-based hardware protocol. This protocol, together with a rich set of synchronization instructions, provides different levels of memory consistency.
Processors may cache memory from their local node only; references to memory on other nodes are not cached. However, while only local data is cached, the entire machine is kept coherent in accordance with the memory consistency model. Remote reads will obtain the latest “dirty” data from another processor's cache, and remote writes will update or invalidate lines in another processor's cache. Thus, the whole machine is kept coherent.
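The directory idea can be sketched in a few lines of Python: the directory tracks which caches hold each line and invalidates the other copies on a write, so a later read never returns the stale value. This is illustrative only -- real protocols like the X1's track far more state per line (think MESI and friends).

```python
# Toy directory-based write-invalidate protocol.
class Directory:
    def __init__(self, n_caches):
        self.memory = {}
        self.sharers = {}                         # line -> set of cache ids holding it
        self.caches = [dict() for _ in range(n_caches)]

    def read(self, cpu, line):
        cache = self.caches[cpu]
        if line not in cache:                     # miss: fetch and record the sharer
            cache[line] = self.memory.get(line)
            self.sharers.setdefault(line, set()).add(cpu)
        return cache[line]

    def write(self, cpu, line, value):
        # Invalidate every other cached copy so no one can read stale data.
        for other in self.sharers.get(line, set()) - {cpu}:
            self.caches[other].pop(line, None)
        self.sharers[line] = {cpu}
        self.caches[cpu][line] = value
        self.memory[line] = value

d = Directory(n_caches=2)
d.memory["x"] = 1
d.read(0, "x"); d.read(1, "x")   # both caches now hold x == 1
d.write(0, "x", 2)               # cpu 0 writes: cpu 1's copy is invalidated
print(d.read(1, "x"))            # -> 2 (re-fetched, not the stale 1)
```

Without the invalidation step in `write`, the final read would return the stale 1 from cpu 1's cache -- which is precisely the coherency problem described above.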

storage structure

• Main Memory
– the only large storage medium that the CPU can access directly.

• Magnetic Disks
– rigid metal or glass platters covered with magnetic recording material.
– the disk surface is logically divided into tracks, which are subdivided into sectors.
– the disk controller determines the logical interaction between the device and the computer.

Moving Head Mechanism

• Magnetic Tapes
Magnetic tape is a medium for magnetic recording generally consisting of a thin magnetizable coating on a long and narrow strip of plastic. Nearly all recording tape is of this type, whether used for recording audio or video or for computer data storage. It was originally developed in Germany, based on the concept of magnetic wire recording. Devices that record and play back audio and video using magnetic tape are generally called tape recorders and video tape recorders respectively. A device that stores computer data on magnetic tape can be called a tape drive, a tape unit, or a streamer.

Magnetic tape revolutionized the broadcast and recording industries. In an age when all radio (and later television) was live, it allowed programming to be prerecorded. In a time when gramophone records were recorded in one take, it allowed recordings to be created in multiple stages and easily mixed and edited with a minimal loss in quality between generations. It is also one of the key enabling technologies in the development of modern computers. Magnetic tape allowed massive amounts of data to be stored in computers for long periods of time and rapidly accessed when needed.

Today, many other technologies exist that can perform the functions of magnetic tape. In many cases these technologies are replacing tape. Despite this, innovation in the technology continues and tape is still widely used.


Tuesday, June 23, 2009

4. user mode

In user mode, the executing code has no ability to directly access hardware or reference memory. Code running in user mode must delegate to system APIs to access hardware or memory. Due to the protection afforded by this sort of isolation, crashes in user mode are generally recoverable. Most of the code running on your computer executes in user mode.

1. bootstrapping

In computing, bootstrapping (from an old expression "to pull oneself up by one's bootstraps") is a technique by which a simple computer program activates a more complicated system of programs. In the start-up process of a computer system, a small program such as the BIOS initializes the hardware, tests that peripherals and external memory devices are connected, then loads a program from one of them and passes control to it, thus allowing the loading of larger programs, such as an operating system.
A different use of the term bootstrapping is to use a compiler to compile itself, by first writing a small part of a compiler of a new programming language in an existing language to compile more programs of the new compiler written in the new language. This solves the "chicken and egg" causality dilemma.

3. monitor mode

Monitor mode, or RFMON (Radio Frequency Monitor) mode, allows a computer with a wireless network interface card (NIC) to monitor all traffic received from the wireless network. Unlike promiscuous mode, which is also used for packet sniffing, monitor mode allows packets to be captured without having to associate with an access point or ad-hoc network first. Monitor mode only applies to wireless networks, while promiscuous mode can be used on both wired and wireless networks. Monitor mode is one of the six modes that 802.11 wireless cards can operate in: Master (acting as an access point), Managed (client, also known as station), Ad-hoc, Mesh, Repeater, and Monitor mode.

6. direct memory access

Direct memory access (DMA) is a feature of modern computers and microprocessors that allows certain hardware subsystems within the computer to access system memory for reading and/or writing independently of the central processing unit. Many hardware systems use DMA, including disk drive controllers, graphics cards, network cards and sound cards. DMA is also used for intra-chip data transfer in multi-core processors, especially in multiprocessor systems-on-chips, where each processing element is equipped with a local memory (often called scratchpad memory) and DMA is used for transferring data between the local memory and the main memory. Computers that have DMA channels can transfer data to and from devices with much less CPU overhead than computers without a DMA channel. Similarly, a processing element inside a multi-core processor can transfer data to and from its local memory without occupying processor time, allowing computation and data transfer to proceed concurrently.
Without DMA, using programmed input/output (PIO) mode for communication with peripheral devices, or load/store instructions in the case of multicore chips, the CPU is typically fully occupied for the entire duration of the read or write operation, and is thus unavailable to perform other work. With DMA, the CPU would initiate the transfer, do other operations while the transfer is in progress, and receive an interrupt from the DMA controller once the operation has been done. This is especially useful in real-time computing applications where not stalling behind concurrent operations is critical. Another and related application area is various forms of stream processing where it is essential to have data processing and transfer in parallel, in order to achieve sufficient throughput.
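That overlap of computation and transfer can be mimicked in Python, with a worker thread standing in for the DMA controller and an Event standing in for the completion interrupt. This is only an analogy -- real DMA bypasses the CPU entirely -- but the shape of "initiate, keep working, handle the interrupt" is the same.

```python
import threading

buffer_src = list(range(10_000))
buffer_dst = []
done = threading.Event()

def dma_controller():
    # The "DMA engine": copies data while the main thread does other work.
    buffer_dst.extend(buffer_src)
    done.set()                      # "raise the interrupt" on completion

# CPU initiates the transfer...
threading.Thread(target=dma_controller).start()

# ...and does useful work while the transfer is in progress.
cpu_work = sum(i * i for i in range(1000))

done.wait()                         # handle the completion "interrupt"
print(buffer_dst == buffer_src)     # -> True
```

With PIO, the loop that computed `cpu_work` would instead have been spent moving bytes one at a time.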

7. difference between RAM and DRAM

RAM (Random Access Memory) is a generic name for any sort of read/write memory that can be, well, randomly accessed. All computer memory functions as arrays of stored bits, "0" and "1", kept as some kind of electrical state. Some sorts support random access; others (such as the flash memory used in MP3 players and digital cameras) have a serial nature to them. A CPU normally runs through a short sequence of memory locations for instructions, then jumps to another routine, jumps around for data, etc. So CPUs depend on dynamic RAM for their primary memory, since there's little or no penalty for jumping all around in such memory.

There are many different kinds of RAM. DRAM is one such sort: Dynamic RAM. This refers to a sort of memory that stores data very efficiently, circuit-wise. A single transistor (an electronic switch) and a capacitor (a charge storage device) store each "1" or "0". An alternate sort is called Static RAM, which usually uses six transistors to store each bit. The advantage of DRAM is that each bit can be very small, physically. The disadvantage is that the stored charge doesn't last very long, so it has to be "refreshed" periodically. All modern DRAM types have on-board electronics that make the refresh process pretty simple and efficient, but it is one additional bit of complexity.

There are various sorts of DRAM around: plain (asynchronous) DRAM, SDRAM (synchronous, meaning all interactions are synchronized by a clock signal), DDR (double data rate... data goes to/from the memory at twice the rate of the clock), etc. These differences are significant to hardware designers, but not usually a big worry for end users... other than ensuring you buy the right kind of DRAM if you plan to upgrade your system.
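The SDR-versus-DDR difference is easy to see with back-of-the-envelope numbers: DDR transfers data on both clock edges, doubling transfers per cycle at the same clock. A small Python sketch (the function name is just illustrative; the 133 MHz figure is rounded -- PC133's nominal clock is 133.33 MHz, so real datasheet numbers are slightly higher):

```python
def peak_bandwidth_mb_s(clock_mhz, bus_width_bits, transfers_per_cycle):
    """Peak theoretical bandwidth in MB/s for a memory bus."""
    bytes_per_transfer = bus_width_bits // 8
    return clock_mhz * 1_000_000 * transfers_per_cycle * bytes_per_transfer / 1e6

sdr = peak_bandwidth_mb_s(133, 64, 1)   # SDRAM: one transfer per clock cycle
ddr = peak_bandwidth_mb_s(133, 64, 2)   # DDR at the same clock: two per cycle

print(sdr, ddr)  # -> 1064.0 2128.0
```

Same clock, same 64-bit bus, twice the peak bandwidth -- which is exactly what "double data rate" means.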

08. main memory

Refers to physical memory that is internal to the computer. The word main is used to distinguish it from external mass storage devices such as disk drives. Another term for main memory is RAM.
The computer can manipulate only data that is in main memory. Therefore, every program you execute and every file you access must be copied from a storage device into main memory. The amount of main memory on a computer is crucial because it determines how many programs can be executed at one time and how much data can be readily available to a program.
Because computers often have too little main memory to hold all the data they need, computer engineers invented a technique called swapping, in which portions of data are copied into main memory as they are needed. Swapping occurs when there is no room in memory for needed data. When one portion of data is copied into memory, an equal-sized portion is copied (swapped) out to make room.
Now, most PCs come with a minimum of 32 megabytes of main memory. You can usually increase the amount of memory by inserting extra memory in the form of chips.
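The swapping scheme described above fits in a few lines of Python: a fixed number of memory slots, with the least recently used portion swapped out to "disk" when room is needed. The slot count, portion names, and data here are all made up for illustration.

```python
from collections import OrderedDict

MEMORY_SLOTS = 3
memory = OrderedDict()   # portion id -> data, oldest first
disk = {f"p{i}": f"data{i}" for i in range(5)}

def access(portion):
    if portion in memory:
        memory.move_to_end(portion)          # recently used: move to the back
    else:
        if len(memory) >= MEMORY_SLOTS:      # no room: swap one portion out
            evicted, data = memory.popitem(last=False)
            disk[evicted] = data
        memory[portion] = disk[portion]      # swap the needed portion in
    return memory[portion]

for p in ["p0", "p1", "p2", "p3"]:
    access(p)
print(list(memory))  # -> ['p1', 'p2', 'p3']  (p0 was swapped out to disk)
```

Bringing in p3 forced p0 out: one equal-sized portion copied in, one copied out, just as the text describes.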

9. magnetic disk

A memory device, such as a floppy disk, a hard disk, or a removable cartridge, that is covered with a magnetic coating on which digital information is stored in the form of microscopically small, magnetized needles.

10. storage hierarchy

The range of memory and storage devices within the computer system. The following list starts with the slowest devices and ends with the fastest. See storage and memory.
VERY SLOW
Punch cards (obsolete)
Punched paper tape (obsolete)
FASTER
Bubble memory
Floppy disks
MUCH FASTER
Magnetic tape
Optical discs (CD-ROM, DVD-ROM, MO, etc.)
Magnetic disks with movable heads
Magnetic disks with fixed heads (obsolete)
Low-speed bulk memory
FASTEST
Flash memory
Main memory
Cache memory
Microcode
Registers

Sunday, June 21, 2009

essential properties

A. Batch: Jobs with similar needs are batched together and run through the computer as a group, by an operator or automatic job sequencer. Performance is increased by attempting to keep CPU and I/O devices busy at all times through buffering, off-line operation, spooling, and multiprogramming.

B. time-sharing
  A technique permitting many users simultaneous access to a central computer through remote terminals.
C. Real time

In computer science, real-time computing (RTC) is the study of hardware and software systems that are subject to a "real-time constraint"—i.e., operational deadlines from event to system response. By contrast, a non-real-time system is one for which there is no deadline, even if fast response or high performance is desired or preferred. The needs of real-time software are often addressed in the context of real-time operating systems, and synchronous programming languages, which provide frameworks on which to build real-time application software.

A real-time system is often one whose application is, in context, mission critical. The anti-lock brakes on a car are a simple example of a real-time computing system — the real-time constraint in this system is the short time in which the brakes must be released to prevent the wheel from locking. Real-time computations can be said to have failed if they are not completed before their deadline, where the deadline is relative to an event. A real-time deadline must be met, regardless of system load.
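The deadline notion can be captured in a few lines of Python. This is a sketch only -- a real RTOS enforces deadlines in the scheduler rather than checking after the fact -- and the function name and deadline value are made up.

```python
import time

def run_with_deadline(task, deadline_s):
    """Run task and report whether it finished within deadline_s of the event."""
    event_time = time.monotonic()           # the triggering event
    result = task()
    met = (time.monotonic() - event_time) <= deadline_s
    return result, met

result, met = run_with_deadline(lambda: sum(range(1000)), deadline_s=1.0)
print(met)  # -> True for this trivial task
```

In a hard real-time system, `met == False` would count as an outright failure of the computation, no matter how correct `result` is.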

D. network

A computer network is a group of interconnected computers. Networks may be classified according to a wide variety of characteristics. This article provides a general overview of some types and categories and also presents the basic components of a network.

E.

Distributed computing deals with hardware and software systems containing more than one processing element or storage element, concurrent processes, or multiple programs, running under a loosely or tightly controlled regime.

In distributed computing a program is split up into parts that run simultaneously on multiple computers communicating over a network. Distributed computing is a form of parallel computing, but parallel computing is most commonly used to describe program parts running simultaneously on multiple processors in the same computer. Both types of processing require dividing a program into parts that can run simultaneously, but distributed programs often must deal with heterogeneous environments, network links of varying latencies, and unpredictable failures in the network or the computers.

F. handheld
A mobile device (also known as cellphone device, handheld device, handheld computer, "Palmtop" or simply handheld) is a pocket-sized computing device, typically having a display screen with touch input or a miniature keyboard. In the case of the personal digital assistant (PDA) the input and output are combined into a touch-screen interface. Smartphones and PDAs are popular amongst those who require the assistance and convenience of a conventional computer, in environments where carrying one would not be practical. Enterprise digital assistants can further extend the available functionality for the business user by offering integrated data capture devices like Bar Code, RFID and Smart Card readers.

views of system and users

An operating system (commonly abbreviated to either OS or O/S) is an interface between hardware and user; it is responsible for the management and coordination of activities and the sharing of the resources of the computer. The operating system acts as a host for computing applications that are run on the machine. As a host, one of the purposes of an operating system is to handle the details of the operation of the hardware. This relieves application programs from having to manage these details and makes it easier to write applications. Almost all computers (including handheld computers, desktop computers, supercomputers, and video game consoles), as well as some robots, domestic appliances (dishwashers, washing machines), and portable media players, use an operating system of some type. [1] Some of the oldest models may, however, use an embedded operating system that may be contained on a compact disk or other data storage device.

Operating systems offer a number of services to application programs and users. Applications access these services through application programming interfaces (APIs) or system calls. By invoking these interfaces, the application can request a service from the operating system, pass parameters, and receive the results of the operation. Users may also interact with the operating system with some kind of software user interface (UI) like typing commands by using command line interface (CLI) or using a graphical user interface (GUI, commonly pronounced “gooey”). For hand-held and desktop computers, the user interface is generally considered part of the operating system. On large multi-user systems like Unix and Unix-like systems, the user interface is generally implemented as an application program that runs outside the operating system. (Whether the user interface should be included as part of the operating system is a point of contention.)

Common contemporary operating system families include BSD, Darwin (Mac OS X), Linux, SunOS (Solaris/OpenSolaris), and Windows NT (XP/Vista/7). While servers generally run Unix or some Unix-like operating system, embedded system markets are split amongst several operating systems.

Thursday, June 18, 2009

multiprogramming, batch, time-sharing system

THE multiprogramming system was a computer operating system designed by a team led by Edsger W. Dijkstra, described in monographs in 1965-66 and published in 1968. Dijkstra never named the system; "THE" is simply the abbreviation of "Technische Hogeschool Eindhoven", then the name (in Dutch) of the Eindhoven University of Technology of the Netherlands. The THE system was primarily a batch system[1] that supported multitasking; it was not designed as a multi-user operating system. It was much like the SDS 940, but "the set of processes in the THE system was static."[1]

The THE system apparently introduced the first forms of software-based memory segmentation (the Electrologica X8 did not support hardware-based memory management)[1], freeing programmers from being forced to use actual physical locations on the drum memory. It did this by using a modified ALGOL compiler (the only programming language supported by Dijkstra's system) to "automatically generate calls to system routines, which made sure the requested information was in memory, swapping if necessary."[1]

Time-sharing is sharing a computing resource among many users by multitasking. Its introduction in the 1960s, and emergence as the prominent model of computing in the 1970s, represents a major historical shift in the history of computing. By allowing a large number of users to interact simultaneously on a single computer, time-sharing dramatically lowered the cost of providing computing, while at the same time making the computing experience much more interactive.

Batch system - In computing, a system for processing data with little or no operator intervention. Batches of data are prepared in advance to be processed during regular 'runs' (for example, each night). This allows efficient use of the computer and is well suited to applications of a repetitive nature, such as bulk file format conversion, a company payroll, or the production of utility bills.
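The time-slicing at the heart of a time-sharing system can be sketched as a round-robin queue in Python: each job gets a fixed quantum of "work units" before the next job runs. The job names and work amounts here are made up for illustration.

```python
from collections import deque

def round_robin(jobs, quantum):
    """jobs: {name: remaining work units}. Returns the order jobs got the CPU."""
    queue = deque(jobs.items())
    order = []
    while queue:
        name, remaining = queue.popleft()
        order.append(name)                     # this job gets the CPU for one quantum
        remaining -= quantum
        if remaining > 0:
            queue.append((name, remaining))    # not finished: back of the line
    return order

print(round_robin({"alice": 3, "bob": 1, "carol": 2}, quantum=1))
# -> ['alice', 'bob', 'carol', 'alice', 'carol', 'alice']
```

No user monopolizes the CPU: even though alice has the most work, bob and carol each get a turn every cycle, which is what makes the system feel interactive to all of them.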

advantage of parallel systems

One capability of MatLab is that its Parallel Computing Toolbox can automatically parcel out work based on the availability of units of computation, whether they are cores, clusters, or grids. Further, the Parallel Computing Toolbox enables users to run MatLab applications in up to eight cores locally, taking advantage of the latest trends in desktop system processors.
For those of you who write your own MatLab code, you can add some code to help it still more. Using parallel commands as a part of, for example, FOR loops (assuming that the repeating computations are independent), you can explicitly tell those loops to execute simultaneously on whatever resources are available. Rather than FOR, the MatLab instruction is PARFOR. And any code will execute on all defined resources in the project.
There is more to analysis than code. You may also have extremely large datasets. By breaking up problems on different computers in a cluster or grid, you can keep a smaller set of data in memory during parallel computation and still collect the data and analyze it later. If you can't run on a desktop system because of heap space limitations, maybe now you can.
The end result is you can take existing MatLab code and run it in parallel on multiple cores, often with few or no changes. Depending on where you are executing this code, you can get some pretty significant performance improvements. And best of all, you can largely use your existing MatLab routines.
While not everyone uses MatLab, it provides engineers with the opportunity to make full use of the power of their desktop computers and other inexpensive clusters and grids, including the Amazon EC2 cloud. Just be prepared to say goodbye to renting time on your favorite supercomputer.

symmetric and asymmetric multiprocessing

Asymmetric multiprocessing - In asymmetric multiprocessing (ASMP), the operating system typically sets aside one or more processors for its exclusive use. The remainder of the processors run user applications. As a result, the single processor running the operating system can fall behind the processors running user applications. This forces the applications to wait while the operating system catches up, which reduces the overall throughput of the system. In the ASMP model, if the processor that fails is an operating system processor, the whole computer can go down.
Symmetric multiprocessing - Symmetric multiprocessing (SMP) technology is used to get higher levels of performance. In symmetric multiprocessing, any processor can run any type of thread. The processors communicate with each other through shared memory.
SMP systems provide better load-balancing and fault tolerance. Because the operating system threads can run on any processor, the chance of hitting a CPU bottleneck is greatly reduced. All processors are allowed to run a mixture of application and operating system code. A processor failure in the SMP model only reduces the computing capacity of the system.
SMP systems are inherently more complex than ASMP systems. A tremendous amount of coordination must take place within the operating system to keep everything synchronized. For this reason, SMP systems are usually designed and written from the ground up.

Goals of OS

The main functions of an OS include:
In a multitasking operating system, where multiple programs can be running at the same time, the operating system determines which applications should run in what order and how much time should be allowed for each application before giving another application a turn.
It manages the sharing of internal memory among multiple applications.
It handles and monitors input and output to and from attached hardware devices, such as hard disks, printers, and dial-up ports. [8]
It sends messages to each application or interactive user (or to a system operator) about the status of operation and any errors that may have occurred.
It can offload the management of what are called batch jobs (for example, printing) so that the initiating application is freed from this work.
On computers that can provide parallel processing, an operating system can manage how to divide the program so that it runs on more than one processor at a time.
It schedules the activities of the CPU and resources to achieve efficiency and the prevention of deadlock. [9]

Peer to Peer vs. Client/Server Networks

Peer-to-Peer Networks
A peer-to-peer network allows two or more PCs to pool their resources together. Individual resources like disk drives, CD-ROM drives, and even printers are transformed into shared, collective resources that are accessible from every PC.
Unlike client-server networks, where network information is stored on a centralized file server PC and made available to tens, hundreds, or thousands of client PCs, the information stored across peer-to-peer networks is uniquely decentralized. Because peer-to-peer PCs have their own hard disk drives that are accessible by all computers, each PC acts as both a client (information requestor) and a server (information provider). In the diagram below, three peer-to-peer workstations are shown. Although not capable of handling the same amount of information flow that a client-server network might, all three computers can communicate directly with each other and share one another's resources.
A peer-to-peer network can be built with either 10BaseT cabling and a hub or with a thin coax backbone. 10BaseT is best for small workgroups of 16 or fewer users that do not span long distances, or for workgroups that have one or more portable computers that may be disconnected from the network from time to time.
After the networking hardware has been installed, a peer-to-peer network software package must be installed onto all of the PCs. Such a package allows information to be transferred back and forth between the PCs, hard disks, and other devices when users request it. Popular peer-to-peer NOS software includes Windows 98, Windows 95, Windows for Workgroups, Artisoft LANtastic, and NetWare Lite.
Most NOSs allow each peer-to-peer user to determine which resources will be available for use by other users. Specific hard & floppy disk drives, directories or files, printers, and other resources can be attached or detached from the network via software. When one user's disk has been configured so that it is "sharable", it will usually appear as a new drive to the other users. In other words, if user A has an A and C drive on his computer, and user B configures his entire C drive as sharable, user A will suddenly have an A, C, and D drive (user A's D drive is actually user B's C drive). Directories work in a similar fashion. If user A has an A & C drive, and user B configures his "C:\WINDOWS" and "C:\DOS" directories as sharable, user A may suddenly have an A, C, D, and E drive (user A's D is user B's C:\WINDOWS, and E is user B's C:\DOS). I hope you got all of that?
Because drives can be easily shared between peer-to-peer PCs, applications only need to be installed on one computer... not two or three. If users have one copy of Microsoft Word, for example, it can be installed on user A's computer... and still used by user B.
The advantages of peer-to-peer over client-server NOSs include:
No need for a network administrator
Network is fast/inexpensive to set up & maintain
Each PC can make backup copies of its data to other PCs for security.
Easiest type of network to build, peer-to-peer is perfect for both home and office use.

Client - Server Networks
In a client-server environment like Windows NT or Novell NetWare, files are stored on a centralized, high speed file server PC that is made available to client PCs. Network access speeds are usually faster than those found on peer-to-peer networks, which is reasonable given the vast numbers of clients that this architecture can support. Nearly all network services like printing and electronic mail are routed through the file server, which allows networking tasks to be tracked. Inefficient network segments can be reworked to make them faster, and users' activities can be closely monitored. Public data and applications are stored on the file server, where they are run from client PCs' locations, which makes upgrading software a simple task--network administrators can simply upgrade the applications stored on the file server, rather than having to physically upgrade each client PC.
In the client-server diagram above, the client PCs are shown to be separate and subordinate to the file server. The clients' primary applications and files are stored in a common location. File servers are often set up so that each user on the network has access to his or her "own" directory, along with a range of "public" directories where applications are stored. If the two clients above want to communicate with each other, they must go through the file server to do it. A message from one client to another is first sent to the file server, where it is then routed to its destination. With tens or hundreds of client PCs, a file server is the only way to manage the often complex and simultaneous operations that large networks require.
Network Printing

In client-server networks, network printing is normally handled by a print server, a small box with at least two connectors: one for a printer, and another that attaches directly to the network cabling. Some print servers have more than two ports... they may, for example, support 2, 3, or 4 printers simultaneously. When a user sends a print job, it travels over the network cabling to the file server where it is stored. When the print server senses that the job is waiting, it moves it from the file server to its attached printer. When the job is finished, the print server returns a result message to the file server, indicating that the process is complete.
In the diagram below, the client PC sends a job to the file server. The file server, in turn, forwards the job to the print server, which sends it to the printer when it's available. Any client on the network can access the printer in this fashion, and it's quite fast. The print server can be placed anywhere on the network, and a network can have more than one print server... possibly one in an office's accounting department, another in marketing, and so on.
Print Servers are available for both client-server and peer-to-peer networks. They're incredibly convenient because they let you put a printer anywhere along your network even if there isn't a computer nearby. However, users often opt not to use a print-server with their peer-to-peer network. Why? Because every computer's resources are available to everyone on the network, A can print a job on B's printer... just as if A had a printer attached to her computer. In this example, the printer is attached to the computer on the right. When the PC on the left sends a job, it "thinks" that it is printing to a printer of its own. In actuality, the job travels over the network cables to the PC on the right, which stores and prints the job in the background. The user at the PC with the printer is never interrupted while his computer processes and prints the job transparently.

Remote Access & Modem Sharing

When a client-server network needs a gateway to the world, the network administrator usually installs a remote-node server, which serves two functions: remote access and modem sharing. Most remote-node servers attach directly to the network cabling; they provide a bridge between the network, a modem, and a telephone line.
Remote access allows users to dial into their home networks from anywhere in the world. Once a connection has been established over ordinary phone lines by modem, users can access any programs or data on the network just as if they were seated at one of its local workstations. Some remote access servers only provide access to a file server's disk drives. Others can provide access to both the file server and direct access to any PC's hard disk on the network. This saves time because it allows a remote user to communicate directly with any network user without having to go through the file server.
Modem sharing lets local network users dial out from their individual network computers to access the Internet. After firing up their favorite communications software, local users establish a link with the remote-node server over the network, which opens up an outgoing telephone line. Users' individual PCs don't need modems, which is a big money saver... only a single modem & phone line are required for tens or hundreds of users. In the case of peer-to-peer networks, by contrast, every PC requires its own modem for access to the outside world, unless you use special software packages like Wingate or Sygate that can provide the same ability to a Peer-to-Peer network.