Tuesday, June 23, 2009

IT 213- Operating System

1.)Bootstrap Program
In computing, bootstrapping (from an old expression "to pull oneself up by one's bootstraps") is a technique by which a simple computer program activates a more complicated system of programs. In the start-up process of a computer system, a small program such as the BIOS initializes and tests that the hardware, peripherals, and external memory devices are connected, then loads a program from one of them and passes control to it, thus allowing larger programs, such as an operating system, to be loaded.
A different use of the term bootstrapping is to use a compiler to compile itself: a small part of a compiler for a new programming language is first written in an existing language, and is then used to compile the rest of the compiler, which is written in the new language. This solves the "chicken and egg" causality dilemma.


2.)Difference between an interrupt and a trap, and their uses.

-Interrupt

An interrupt is an external hardware event (for example, a keypress) that causes the CPU to suspend the current instruction sequence and call a special interrupt service routine (ISR).
Typically, all computers provide a mechanism by which other modules (I/O, memory) may interrupt the normal processing of the processor. Figure 3.1 lists the most common classes of interrupts. An interrupt is a signal to the operating system that an event has occurred, and it results in a change in the sequence of instructions executed by the CPU. In the case of a hardware interrupt, the signal originates from a hardware device such as a keyboard (e.g., when a user presses a key), a mouse, or the system clock (a circuit that generates pulses at precise intervals, used to coordinate the computer's activities). A software interrupt is an interrupt that originates in software, usually raised by a program running in user mode.

-Trap

A trap, by contrast, is a software-generated interrupt that occurs synchronously with the running program, caused either by an error (for example, division by zero or an invalid memory access) or by an explicit request such as a system call. Interrupts are thus asynchronous and come from hardware; traps are synchronous and come from the executing program itself.
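A rough user-space analogy (on Unix-like systems) can be sketched with Python's signal module; the handler below plays the role of an ISR, though real ISRs run in the kernel in response to hardware. The names `isr` and `events` are just illustrative:

```python
import os
import signal

events = []

def isr(signum, frame):
    # "Interrupt service routine": record the event and return;
    # the interrupted code then resumes where it left off.
    events.append(signum)

# Install the handler, like placing an ISR in the interrupt vector table.
signal.signal(signal.SIGUSR1, isr)

# Raise the "interrupt" ourselves; normal flow is suspended, the
# handler runs, and control returns here.
os.kill(os.getpid(), signal.SIGUSR1)

print(events == [signal.SIGUSR1])  # True: the handler ran exactly once
```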





Program Flow of Control Without and With Interrupts




Transfer Control via Interrupts


3.)Monitor Mode

-Monitor mode, or RFMON (Radio Frequency Monitor) mode, allows a computer with a
wireless network interface card (NIC) to monitor all traffic received from the wireless network. Unlike promiscuous mode, which is also used for packet sniffing, monitor mode allows packets to be captured without having to associate with an access point or ad-hoc network first. Monitor mode only applies to wireless networks, while promiscuous mode can be used on both wired and wireless networks. Monitor mode is one of the six modes that 802.11 wireless cards can operate in: Master (acting as an access point), Managed (client, also known as station), Ad-hoc, Mesh, Repeater, and Monitor mode.

4.)User mode

-User mode is one of two distinct execution modes for the CPU (central processing unit) in
Linux.
It is a non-privileged mode in which each
process (i.e., a running instance of a program) starts out. It is non-privileged in that it is forbidden for processes in this mode to access those portions of memory (i.e., RAM) that have been allocated to the kernel or to other programs. The kernel is not a process, but rather a controller of processes, and it alone has access to all resources on the system.
When a user mode process (i.e., a process currently in user mode) wants to use a service that is provided by the kernel (i.e., access system resources other than the limited memory space that is allocated to the user program), it must switch temporarily into
kernel mode, which has root (i.e., administrative) privileges, including root access permissions (i.e., permission to access any memory space or other resources on the system). When the kernel has satisfied the process's request, it restores the process to user mode.
This change in mode is termed a mode switch, which should not be confused with a
context switch (i.e., the switching of the CPU from one process to another). On 32-bit x86 Linux, the standard way to switch from user mode to kernel mode is to issue software interrupt 0x80, which invokes a system call; modern kernels more commonly use the faster sysenter/syscall instructions for the same purpose.
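As a small illustration on a Unix-like system, every C-library call that wraps a system call ends in such a mode switch. The sketch below uses Python's ctypes purely for demonstration: it calls the C library's getpid() wrapper, which enters the kernel via the platform's system-call instruction and returns to user mode with the result.

```python
import ctypes
import os

# Load the C library already linked into this process.
libc = ctypes.CDLL(None)

# Each of these ends in a user-to-kernel mode switch: the getpid()
# wrapper executes the system-call entry instruction, the kernel runs
# in privileged mode, and control returns to user mode.
pid_via_libc = libc.getpid()

print(pid_via_libc == os.getpid())  # True: same kernel service, two wrappers
```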


5.)Device Status table

- A device-status table records the state of each I/O device: each entry indicates the device's type, its address, and its current state (idle or busy), together with a queue of pending requests for that device. The operating system indexes into this table when an interrupt signals that an I/O operation has completed, so it can determine which device was involved and see important device status at a glance.

6.)Direct Memory Access (DMA)

-Direct memory access (DMA) is a feature of modern computers and microprocessors that allows certain hardware subsystems within the computer to access system memory for reading and/or writing independently of the central processing unit. Many hardware systems use DMA, including disk drive controllers, graphics cards, network cards, and sound cards. DMA is also used for intra-chip data transfer in multi-core processors, especially in multiprocessor systems-on-chip, where each processing element is equipped with a local memory (often called scratchpad memory) and DMA is used for transferring data between the local memory and the main memory. Computers that have DMA channels can transfer data to and from devices with much less CPU overhead than computers without a DMA channel. Similarly, a processing element inside a multi-core processor can transfer data to and from its local memory without occupying processor time, allowing computation and data transfer to proceed concurrently.
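The computation/transfer concurrency can be pictured with a toy software analogy; the thread below stands in for a DMA engine (real DMA is done by dedicated hardware, not a thread), and all names here are invented:

```python
import threading

src = bytearray(b"x" * 1_000_000)
dst = bytearray(len(src))

def dma_transfer():
    # The "DMA engine" copies the buffer without the main thread's help.
    dst[:] = src

engine = threading.Thread(target=dma_transfer)
engine.start()                 # kick off the transfer

total = sum(range(100_000))    # "CPU" keeps computing during the transfer

engine.join()                  # like a transfer-complete interrupt
print(dst == src)              # True: the whole buffer arrived
```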


7.)Difference between RAM and DRAM
Random-access memory (usually known by its acronym, RAM) is a form of computer data storage. Today it takes the form of integrated circuits that allow stored data to be accessed in any order (i.e., at random). The word random thus refers to the fact that any piece of data can be returned in a constant time, regardless of its physical location and whether or not it is related to the previous piece of data.[1]
Dynamic random-access memory (DRAM) is the most common kind of RAM: each bit is stored as a charge on a tiny capacitor, which leaks and must therefore be refreshed periodically. RAM is thus the general category, while DRAM is one dense, inexpensive implementation of it, alongside static RAM (SRAM), which uses flip-flops and needs no refresh.

8.)Storage Structure



-Main Memory
Primary storage, presently known as memory, is the only storage directly accessible to the CPU. The CPU continuously reads instructions stored there and executes them as required. Any data actively operated on is also stored there in a uniform manner.
Historically, early computers used delay lines, Williams tubes, or rotating magnetic drums as primary storage. By 1954, those unreliable methods were mostly replaced by magnetic core memory, which was still rather cumbersome. A revolution came with the invention of the transistor, which soon enabled previously unimaginable miniaturization of electronic memory via solid-state silicon-chip technology.
This led to modern random-access memory (RAM), which is small and light but comparatively expensive. (The particular types of RAM used for primary storage are also volatile, i.e., they lose the information when not powered.)




-Magnetic Disk


A memory device, such as a floppy disk, a hard disk, or a removable cartridge, that is covered with a magnetic coating on which digital information is stored in the form of microscopically small magnetized needles. It can also be described as a storage device consisting of magnetically coated disks, on whose surfaces information is stored in the form of magnetic spots arranged to represent binary data.

-Moving Head disk Mechanism



-Magnetic tapes


Magnetic tape is a medium for magnetic recording generally consisting of a thin magnetizable coating on a long and narrow strip of plastic. Nearly all recording tape is of this type, whether used for recording audio or video or for computer data storage. It was originally developed in Germany, based on the concept of magnetic wire recording. Devices that record and playback audio and video using magnetic tape are generally called tape recorders and video tape recorders respectively. A device that stores computer data on magnetic tape can be called a tape drive, a tape unit, or a streamer.
Magnetic tape revolutionized the broadcast and recording industries. In an age when all
radio (and later television) was live, it allowed programming to be prerecorded. In a time when gramophone records were recorded in one take, it allowed recordings to be created in multiple stages and easily mixed and edited with a minimal loss in quality between generations. It is also one of the key enabling technologies in the development of modern computers. Magnetic tape allowed massive amounts of data to be stored in computers for long periods of time and rapidly accessed when needed.



9.)Storage Hierarchy


The hierarchical arrangement of storage in current computer architectures is called the memory hierarchy. It is designed to take advantage of memory locality in computer programs. Each level of the hierarchy has higher bandwidth, smaller size, and lower latency than the levels below it.
Most modern
CPUs are so fast that for most program workloads, the locality of reference of memory accesses and the efficiency of the caching and memory transfer between different levels of the hierarchy are the practical limitation on processing speed. As a result, the CPU spends much of its time idling, waiting for memory I/O to complete. This is sometimes called the space cost, as a larger memory object is more likely to overflow a small/fast level and require use of a larger/slower level.


-Caching

Caching, in general, is the keeping of copies of data close to where they are used so that future requests can be served faster. Web caching is a common example: web documents (e.g., HTML pages, images) are cached in order to reduce bandwidth usage, server load, and perceived lag. A web cache stores copies of documents passing through it; subsequent requests may be satisfied from the cache if certain conditions are met.
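The hit-versus-miss behavior can be sketched in Python with functools.lru_cache; the fetch function and URLs below are purely illustrative stand-ins for an expensive document fetch:

```python
from functools import lru_cache

calls = 0

@lru_cache(maxsize=None)
def fetch(url):
    # Stand-in for an expensive fetch; `url` is just an illustrative key.
    global calls
    calls += 1
    return f"<contents of {url}>"

fetch("http://example.com/a")  # miss: does the "expensive" work
fetch("http://example.com/a")  # hit: served straight from the cache
fetch("http://example.com/b")  # miss: a different key

print(calls)  # 2 -- only the misses ran the function body
```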


-Coherency and Consistency

Transactional Coherence and Consistency (TCC) offers a way to simplify parallel programming by executing all code in transactions. In TCC systems, transactions serve as the fundamental unit of parallel work, communication, and coherence. As each transaction completes, it writes all of its newly produced state to shared memory atomically, while restarting other processors that have speculatively read from modified data. With this mechanism, a TCC-based system automatically handles data synchronization correctly, without programmer intervention. To gain the benefits of TCC, programs must be decomposed into transactions. Decomposing a program into transactions is largely a matter of performance tuning rather than correctness, and a few basic transaction programming optimization techniques are sufficient to obtain good performance over a wide range of applications with little programmer effort.
In computing, cache coherence (also cache coherency) refers to the integrity of data stored in local caches of a shared resource. Cache coherence is a special case of memory coherence.
When clients in a system maintain
caches of a common memory resource, problems may arise with inconsistent data. This is particularly true of CPUs in a multiprocessing system. Referring to the "Multiple Caches of Shared Resource" figure, if the top client has a copy of a memory block from a previous read and the bottom client changes that memory block, the top client could be left with an invalid cache of memory without any notification of the change. Cache coherence is intended to manage such conflicts and maintain consistency between cache and memory.
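The stale-copy scenario above can be modeled as a toy in code, with a write-invalidate step as one common fix; the dictionaries and names below are invented purely for illustration:

```python
# Shared memory and two clients' private caches of it.
memory = {"block0": 1}
cache_top = {"block0": memory["block0"]}     # top client reads the block
cache_bottom = {"block0": memory["block0"]}  # bottom client reads it too

def write_with_invalidate(block, value, other_caches):
    # Write-invalidate coherence: update memory, then evict every
    # other cached copy so no one keeps reading stale data.
    memory[block] = value
    for cache in other_caches:
        cache.pop(block, None)

# The bottom client changes the block; without invalidation, the top
# client would silently keep its old value 1.
write_with_invalidate("block0", 2, other_caches=[cache_top])

# The top client's next read misses its cache and refetches from memory.
value = cache_top.get("block0", memory["block0"])
print(value)  # 2
```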




10.)Hardware Protection


-Dual-mode Operation


Dual-mode operation is needed to protect the OS from improper behavior of application programs. The CPU must provide at least two modes of operation: monitor mode and user mode. Applications run in user mode only, while the OS runs in monitor mode.

-I/O protection
I/O protection ensures that user programs cannot control I/O devices directly: all I/O instructions are privileged, so a user program must request I/O through the operating system, which validates the request before performing it. Hardware arrangements support this in systems containing one or more central processor (CP) entities, an I/O processor interface, and a shared main storage accessible to both, by protecting the storage blocks that CP programs and I/O programs access.


-Memory Protection
Memory protection is a way to control memory access rights on a computer, and is a part of nearly every modern operating system. The main purpose of memory protection is to prevent a process from accessing memory that has not been allocated to it. This prevents a bug within a process from affecting other processes, or the operating system itself. Memory protection also makes a rootkit more difficult to implement. Memory protection is a behavior that is distinct from ASLR and the NX bit.


-CPU Protection
CPU protection means protecting the CPU from being monopolized by a single process. To achieve this, the OS must be capable of preempting process execution, and it usually restricts the execution time of a process, typically by means of a timer.
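One way to picture preemption is a toy round-robin scheduler in which each "process" is a generator and the scheduler takes the CPU back after a fixed quantum of steps, standing in for the hardware timer; everything here is an invented sketch, not a real OS scheduler:

```python
def process(name, steps):
    # A "process" that wants to run `steps` instructions.
    for i in range(steps):
        yield f"{name}:{i}"

def round_robin(procs, quantum):
    trace = []
    queue = list(procs)
    while queue:
        proc = queue.pop(0)
        for _ in range(quantum):       # run for at most one quantum
            try:
                trace.append(next(proc))
            except StopIteration:      # process finished early
                break
        else:
            queue.append(proc)         # preempted: back of the queue
    return trace

# Process A wants 4 steps but can never hog the CPU past its quantum.
trace = round_robin([process("A", 4), process("B", 2)], quantum=2)
print(trace)  # ['A:0', 'A:1', 'B:0', 'B:1', 'A:2', 'A:3']
```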

































































































































































































Thursday, June 18, 2009

Operating System

1.What is the difference of OS in terms of user's view and system's view?
-USER'S VIEW - the OS is designed mostly for ease of use, with some attention to performance and little to resource utilization.
-SYSTEM'S VIEW - the OS serves as a resource allocator.
-The OS also acts as a control program, managing the execution of user programs to prevent errors and improper use of the computer.

2.Explain the goals of OS.

- Execute user programs and make solving user problems easier.
- Make the computer system convenient to use.



3.What's the difference between Batch systems, multiprogrammed systems, and time-sharing systems?

-Batch systems have been associated with mainframe computers since the earliest days of electronic computing in the 1950s. Because such computers were enormously costly, batch processing was the only economically viable way to use them. In those days, interactive sessions with either text-based computer terminal interfaces or graphical user interfaces were not widespread. Initially, computers were not even capable of having multiple programs loaded into main memory.


- In multiprogrammed systems, several jobs are kept in main memory at once and the CPU switches among them, so the processor always has useful work to do. Process delays are quite common in such systems, due to preemptions. Most lock-based synchronization algorithms perform poorly in the face of such delays, because a delayed process holding a lock can impede the progress of other processes waiting for that lock. Furthermore, lock-based algorithms are susceptible to problems such as deadlock and priority inversion. Lock-free and wait-free algorithms are implemented without locking mechanisms, and therefore do not suffer from these problems.


-Time-sharing is sharing a computing resource among many users by multitasking. Its introduction in the 1960s, and its emergence as the prominent model of computing in the 1970s, represent a major shift in the history of computing. By allowing a large number of users to interact simultaneously with a single computer, time-sharing dramatically lowered the cost of providing computing, while at the same time making the computing experience much more interactive.



4.Advantages of parallel systems.

Parallel processing advantages of shared-memory systems are these:
-Memory access is cheaper than inter-node communication, which means that internal synchronization is faster than using a lock manager.
-Shared-memory systems are easier to administer than a cluster.



5.Differentiate Symmetric Multiprocessing and Asymmetric Multiprocessing.


-Symmetric multiprocessing (SMP) involves a multiprocessor computer architecture where two or more identical processors connect to a single shared main memory. Most common multiprocessor systems today use an SMP architecture. In the case of multi-core processors, the SMP architecture applies to the cores, treating them as separate processors.
SMP systems allow any processor to work on any task no matter where the data for that task are located in memory; with proper
operating system support, SMP systems can easily move tasks between processors to balance the workload efficiently.

Asymmetric multiprocessing (ASMP) is a type of multiprocessing supported in DEC's VMS V.3 as well as a number of older systems including TOPS-10 and OS/360. It varies greatly from the standard processing model that we see in personal computers today. Due to the complexity and unique nature of this architecture, it was not adopted by many vendors or programmers during its brief stint between 1970 and 1980.
Whereas a symmetric multiprocessor (SMP) treats all of the processing elements in the system identically, an ASMP system assigns certain tasks only to certain processors. In particular, only one processor may be responsible for fielding all of the interrupts in the system, or perhaps even for performing all of the I/O in the system. This makes the design of the I/O system much simpler, although it tends to limit the ultimate performance of the system. Graphics cards, physics cards, and cryptographic accelerators that are subordinate to a CPU in modern computers can be considered a form of asymmetric multiprocessing. SMP is extremely common in the modern computing world; when people refer to "multi-core" or "multiprocessing" they are most commonly referring to SMP.



6.Differentiate client-server systems and peer-to-peer systems.

Client-server

computing or networking is a distributed application architecture that partitions tasks or workloads between service providers (servers) and service requesters, called clients.[1] Often clients and servers operate over a computer network on separate hardware. A server is a high-performance host that shares its resources with clients. A client does not share any of its resources, but requests a server's content or service function. Clients therefore initiate communication sessions with servers, which await (listen for) incoming requests.
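The request/response pattern above can be sketched as a minimal echo exchange over Python sockets on the loopback interface; the echo protocol, the OS-chosen port, and the helper thread are just for illustration:

```python
import socket
import threading

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # the server binds (port chosen by the OS)...
server.listen(1)
port = server.getsockname()[1]

def serve_one():
    conn, _ = server.accept()        # ...and awaits an incoming request
    conn.sendall(conn.recv(1024))    # shares its "service": echo the data back
    conn.close()

threading.Thread(target=serve_one).start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))  # the client initiates the session
client.sendall(b"hello")
reply = client.recv(1024)
client.close()
server.close()
print(reply)  # b'hello'
```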

Peer-to-peer (P2P)

networking is a method of delivering computer network services in which the participants share a portion of their own resources, such as processing power, disk storage, network bandwidth, and printing facilities. Such resources are provided directly to other participants without intermediary network hosts or servers.[1] Peer-to-peer network participants are simultaneously providers and consumers of network services, which contrasts with other service models, such as traditional client-server computing.




7.Differentiate the design issues of OS between a stand-alone PC and a workstation connected to a network.


a Stand-alone PC - refers to a device that is self-contained, one that does not require any other devices to function. For example, a fax machine is a stand-alone device because it does not require a computer, printer, modem, or other device. A printer, on the other hand, is not a stand-alone device because it requires a computer to feed it data.



A workstation is a high-end microcomputer designed for technical or scientific applications. Intended primarily to be used by one person at a time, workstations are commonly connected to a local network and run multi-user operating systems. The term workstation has also been used to refer to a mainframe computer terminal or to a PC connected to a network.

8.Define the essential properties of the following types of OS:



a.Batch - batch processing has been associated with mainframe computers since the earliest days of electronic computing in the 1950s. Because such computers were enormously costly, batch processing was the only economically viable way to use them. In those days, interactive sessions with either text-based computer terminal interfaces or graphical user interfaces were not widespread. Initially, computers were not even capable of having multiple programs loaded into main memory.

b.Time Sharing - time-sharing is sharing a computing resource among many users by multitasking. Its introduction in the 1960s, and its emergence as the prominent model of computing in the 1970s, represent a major shift in the history of computing. By allowing a large number of users to interact simultaneously with a single computer, time-sharing dramatically lowered the cost of providing computing, while at the same time making the computing experience much more interactive.

c.Real Time- real-time computing (RTC) is the study of hardware and software systems that are subject to a "real-time constraint"—i.e., operational deadlines from event to system response. By contrast, a non-real-time system is one for which there is no deadline, even if fast response or high performance is desired or preferred. The needs of real-time software are often addressed in the context of real-time operating systems, and synchronous programming languages, which provide frameworks on which to build real-time application software.
A real-time system may be one whose application can be considered (within context) to be mission critical. The anti-lock brakes on a car are a simple example of a real-time computing system: the real-time constraint here is the short time within which the brakes must be released to prevent the wheel from locking. Real-time computations can be said to have failed if they are not completed before their deadline, where the deadline is defined relative to an event. A real-time deadline must be met regardless of system load.
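The deadline criterion can be stated directly in code; the helper name and the millisecond figures below are invented for illustration, not measured values:

```python
def meets_deadline(release_time, finish_time, relative_deadline):
    # A hard real-time task fails if it finishes after its deadline,
    # regardless of whether its result is otherwise correct.
    return finish_time - release_time <= relative_deadline

# Anti-lock-brake-style example: respond within an assumed 5 ms budget.
ok = meets_deadline(release_time=0.0, finish_time=0.004, relative_deadline=0.005)
late = meets_deadline(release_time=0.0, finish_time=0.007, relative_deadline=0.005)
print(ok, late)  # True False
```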

d.Network- A computer network is a group of interconnected computers. Networks may be classified according to a wide variety of characteristics. This article provides a general overview of some types and categories and also presents the basic components of a network.


e.Distributed -Distributed computing deals with hardware and software systems containing more than one processing element or storage element, concurrent processes, or multiple programs, running under a loosely or tightly controlled regime.
In distributed computing a program is split up into parts that run simultaneously on multiple computers communicating over a network. Distributed computing is a form of
parallel computing, but parallel computing is most commonly used to describe program parts running simultaneously on multiple processors in the same computer. Both types of processing require dividing a program into parts that can run simultaneously, but distributed programs often must deal with heterogeneous environments, network links of varying latencies, and unpredictable failures in the network or the computers.

f.Handheld - a Handheld PC, or H/PC for short, is a computer built around a form factor smaller than any standard laptop computer. It is sometimes referred to as a palmtop. The first handheld device compatible with desktop IBM personal computers of the time was the Atari Portfolio of 1989. Other early models were the Poqet PC of 1989 and the Hewlett-Packard HP 95LX of 1991. Other MS-DOS-compatible handheld computers also existed.
Some Handheld PCs run Microsoft's Windows CE operating system, with the term also covering Windows CE devices released for the broader commercial market.