Tuesday, June 23, 2009

IT 213 - Operating System

1.)Bootstrap Program
In computing, bootstrapping (from the old expression "to pull oneself up by one's bootstraps") is a technique by which a simple computer program activates a more complicated system of programs. In the start-up process of a computer system, a small program such as the BIOS initializes the hardware, checks that peripherals and external memory devices are connected, then loads a program from one of them and passes control to it, allowing still larger programs, such as an operating system, to be loaded.
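To make that chain of control concrete, here is a small, self-contained C sketch of the idea (a user-space simulation, not real firmware); function names such as boot_loader() and load_kernel_from_disk() are invented for illustration:

/* Toy illustration of the boot chain: each stage does just enough work
 * to locate the next, larger stage and transfer control to it. */
#include <stdio.h>

typedef void (*entry_point)(void);          /* where control is transferred */

static void operating_system(void) {        /* the "large program" */
    printf("kernel: taking over the machine\n");
}

static entry_point load_kernel_from_disk(void) {
    /* a real loader would read sectors from a device; here we simply
     * return the kernel's entry point */
    return operating_system;
}

static void boot_loader(void) {             /* small program loaded by firmware */
    printf("boot loader: locating the kernel\n");
    entry_point kernel = load_kernel_from_disk();
    kernel();                               /* pass control to the kernel */
}

int main(void) {                            /* stands in for the BIOS/firmware */
    printf("firmware: hardware initialized, loading boot loader\n");
    boot_loader();
    return 0;
}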
A different use of the term bootstrapping is using a compiler to compile itself: a small part of the compiler for a new programming language is first written in an existing language, and that piece is then used to compile the rest of the compiler, which is written in the new language itself. This resolves the "chicken and egg" causality dilemma.


2.)Difference between an Interrupt and a Trap, and their uses

-Interrupt

An interrupt is an external hardware event (for example, a keypress) that triggers the CPU to interrupt the current instruction sequence and call a special interrupt service routine (ISR).
Typically, all computers provide a mechanism by which other modules (I/O, memory) may interrupt the normal processing of the processor. Figure 3.1 lists the most common classes of interrupts. An interrupt is a signal to the operating system that an event has occurred, and it results in a change in the sequence of instructions executed by the CPU. In the case of a hardware interrupt, the signal originates from a hardware device such as a keyboard (e.g., when a user presses a key), a mouse, or the system clock (a circuit that generates pulses at precise intervals, used to coordinate the computer's activities). A software interrupt is an interrupt that originates in software, usually from a program in user mode.

-Trap

A trap is a software-generated interrupt: it is raised either by an error in the running program (such as division by zero or an invalid memory access) or by a deliberate request from a user program for an operating-system service (a system call). In short, interrupts are caused asynchronously by hardware, while traps are caused synchronously by the executing program; both transfer control to a handler in the operating system.





Program Flow of Control Without and With Interrupts




Transfer Control via Interrupts
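As a rough illustration of this transfer of control, the following C sketch models an interrupt vector as a table of handler (ISR) function pointers; raise_interrupt() stands in for the hardware saving state and jumping through the vector. All names here are invented, and this is a user-space simulation rather than real kernel code:

#include <stdio.h>

#define NUM_VECTORS 8

typedef void (*isr_t)(int irq);

static void default_isr(int irq)  { printf("unhandled interrupt %d\n", irq); }
static void keyboard_isr(int irq) { printf("IRQ %d: key pressed\n", irq); }
static void timer_isr(int irq)    { printf("IRQ %d: clock tick\n", irq); }

static isr_t vector_table[NUM_VECTORS];

static void raise_interrupt(int irq) {
    /* hardware would save the CPU state here, then jump through the vector */
    if (irq >= 0 && irq < NUM_VECTORS)
        vector_table[irq](irq);
    /* ...and restore state / resume the interrupted program afterwards */
}

int main(void) {
    for (int i = 0; i < NUM_VECTORS; i++) vector_table[i] = default_isr;
    vector_table[0] = timer_isr;     /* install specific handlers */
    vector_table[1] = keyboard_isr;

    raise_interrupt(1);              /* simulate a keypress   */
    raise_interrupt(0);              /* simulate a clock tick */
    raise_interrupt(5);              /* no specific handler installed */
    return 0;
}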


3.)Monitor Mode

-Monitor mode, or RFMON (Radio Frequency Monitor) mode, allows a computer with a wireless network interface card (NIC) to monitor all traffic received from the wireless network. Unlike promiscuous mode, which is also used for packet sniffing, monitor mode allows packets to be captured without having to associate with an access point or ad-hoc network first. Monitor mode only applies to wireless networks, while promiscuous mode can be used on both wired and wireless networks. Monitor mode is one of the six modes that 802.11 wireless cards can operate in: Master (acting as an access point), Managed (client, also known as station), Ad-hoc, Mesh, Repeater, and Monitor mode.
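For example, libpcap exposes monitor mode through pcap_set_rfmon(). The sketch below (compile with -lpcap) assumes an interface named wlan0 and a driver that supports RFMON; both are assumptions, not guarantees:

#include <stdio.h>
#include <pcap.h>

int main(void) {
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t *handle = pcap_create("wlan0", errbuf);   /* assumed device name */
    if (handle == NULL) {
        fprintf(stderr, "pcap_create: %s\n", errbuf);
        return 1;
    }
    if (pcap_can_set_rfmon(handle) == 1)
        pcap_set_rfmon(handle, 1);   /* request monitor mode before activation */
    if (pcap_activate(handle) < 0) {
        fprintf(stderr, "pcap_activate: %s\n", pcap_geterr(handle));
        pcap_close(handle);
        return 1;
    }
    printf("capturing in monitor mode (if supported by the driver)\n");
    pcap_close(handle);
    return 0;
}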

4.)User mode

-User mode is one of two distinct execution modes for the CPU (central processing unit) in Linux. It is the non-privileged mode in which every process (i.e., a running instance of a program) starts out. It is non-privileged in that processes in this mode are forbidden to access those portions of memory (i.e., RAM) that have been allocated to the kernel or to other programs. The kernel is not a process, but rather a controller of processes, and it alone has access to all resources on the system.

When a user-mode process (i.e., a process currently in user mode) wants to use a service provided by the kernel (i.e., to access system resources other than the limited memory space allocated to the user program), it must switch temporarily into kernel mode, which has root (i.e., administrative) privileges, including root access permissions (i.e., permission to access any memory space or other resources on the system). When the kernel has satisfied the process's request, it restores the process to user mode.

This change in mode is termed a mode switch, and it should not be confused with a context switch (i.e., the switching of the CPU from one process to another). On 32-bit x86 Linux, the standard way to switch from user mode to kernel mode is to issue the 0x80 software interrupt.
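A minimal sketch of that mode switch on 32-bit x86 Linux (build with gcc -m32): the program places the system-call number and arguments in registers and issues int 0x80, at which point the CPU enters kernel mode and the kernel performs the write on the process's behalf:

#include <string.h>

int main(void) {
    const char msg[] = "hello from user mode\n";
    long ret;

    /* eax = 4 (sys_write), ebx = fd 1 (stdout), ecx = buffer, edx = length */
    __asm__ volatile ("int $0x80"
                      : "=a" (ret)
                      : "a" (4), "b" (1), "c" (msg), "d" (strlen(msg)));

    /* eax = 1 (sys_exit), ebx = exit status */
    __asm__ volatile ("int $0x80" : : "a" (1), "b" (0));
    return 0;   /* never reached: sys_exit does not return */
}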


5.)Device Status table

- In a monitoring console, a device status table shows the status of each important device at a glance: the number of events, grouped by severity, appears on the left side of the table, and clicking the 'Event Rainbow' opens the list of events for that device. Inside an operating system, the kernel keeps an analogous device-status table containing an entry for each I/O device that records its type, its address, and its current state (idle or busy), together with a queue of the requests waiting for that device.
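As a sketch only (not any particular kernel's actual layout), an entry in such a device-status table might be represented like this:

#include <stdio.h>

enum dev_state { DEV_IDLE, DEV_BUSY, DEV_ERROR };

struct io_request {
    int                pid;       /* process waiting on this request  */
    long               block;     /* block or record to transfer      */
    struct io_request *next;      /* next request in the device queue */
};

struct device_entry {
    const char        *name;      /* e.g. "disk0", "printer0"         */
    unsigned           address;   /* device/port address              */
    enum dev_state     state;     /* idle, busy, or error             */
    struct io_request *queue;     /* pending requests for this device */
};

int main(void) {
    struct io_request req = { 42, 1337, NULL };
    struct device_entry table[] = {
        { "disk0",    0x1F0, DEV_BUSY, &req },
        { "printer0", 0x378, DEV_IDLE, NULL },
    };
    for (unsigned i = 0; i < sizeof table / sizeof table[0]; i++)
        printf("%-9s state=%d pending=%s\n", table[i].name, (int)table[i].state,
               table[i].queue ? "yes" : "no");
    return 0;
}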

6.)Direct Memory Acces DMA

-Direct memory access (DMA) is a feature of modern computers and microprocessors that allows certain hardware subsystems within the computer to access system memory for reading and/or writing independently of the central processing unit. Many hardware systems use DMA, including disk drive controllers, graphics cards, network cards and sound cards. DMA is also used for intra-chip data transfer in multi-core processors, especially in multiprocessor systems-on-chip, where each processing element is equipped with a local memory (often called scratchpad memory) and DMA is used for transferring data between the local memory and the main memory. Computers that have DMA channels can transfer data to and from devices with much less CPU overhead than computers without a DMA channel. Similarly, a processing element inside a multi-core processor can transfer data to and from its local memory without occupying its processor time, allowing computation and data transfer to proceed concurrently.
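The following toy model shows the division of labor: the CPU fills in a transfer descriptor and "starts" the controller, which then moves the data on its own. The dma_* names are invented, and a plain memcpy() stands in for the hardware engine, which in reality would raise an interrupt on completion:

#include <stdio.h>
#include <string.h>

struct dma_descriptor {
    const void *src;    /* where the transfer reads from         */
    void       *dst;    /* where the transfer writes to          */
    size_t      len;    /* number of bytes to move               */
    int         done;   /* set by the "controller" when finished */
};

/* stands in for the DMA engine; real hardware works without the CPU */
static void dma_start(struct dma_descriptor *d) {
    memcpy(d->dst, d->src, d->len);
    d->done = 1;
}

int main(void) {
    char device_buffer[32] = "data arriving from a device";
    char main_memory[32]   = {0};

    struct dma_descriptor d = { device_buffer, main_memory,
                                sizeof device_buffer, 0 };
    dma_start(&d);                 /* CPU is free to do other work here */

    if (d.done)
        printf("DMA complete: \"%s\"\n", main_memory);
    return 0;
}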


7.)Difference between RAM and DRAM
Random-access memory (usually known by its acronym, RAM) is a form of computer data storage. Today it takes the form of integrated circuits that allow stored data to be accessed in any order (i.e., at random). The word random refers to the fact that any piece of data can be returned in a constant time, regardless of its physical location and whether or not it is related to the previous piece of data. Dynamic random-access memory (DRAM) is the most common kind of RAM: it stores each bit in a tiny capacitor that must be refreshed periodically, which makes it denser and cheaper, but slower, than static RAM (SRAM). In other words, RAM is the general category of directly addressable working memory, while DRAM is one particular technology for implementing it.

8.)Storage Structure



-Main Memory
Primary storage, presently known as main memory, is the only storage directly accessible to the CPU. The CPU continuously reads instructions stored there and executes them as required. Any data being actively operated on is also stored there in a uniform manner.
Historically, early computers used delay lines, Williams tubes, or rotating magnetic drums as primary storage. By 1954, those unreliable methods had mostly been replaced by magnetic core memory, which was still rather cumbersome. The invention of the transistor started a revolution, enabling a then-unbelievable miniaturization of electronic memory via solid-state silicon chip technology. This led to modern random-access memory (RAM), which is small and light but also relatively expensive. (The particular types of RAM used for primary storage are volatile, i.e. they lose their contents when not powered.)
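The phrase "the CPU continuously reads instructions stored there and executes them" can be pictured with a toy fetch-decode-execute loop over a small array standing in for main memory; the three-opcode instruction set below is invented for the example:

#include <stdio.h>

enum { OP_HALT = 0, OP_LOAD = 1, OP_ADD = 2 };   /* opcode, then one operand */

int main(void) {
    int memory[] = { OP_LOAD, 5, OP_ADD, 7, OP_ADD, 30, OP_HALT };
    int pc  = 0;    /* program counter: address of the next instruction */
    int acc = 0;    /* accumulator register */

    for (;;) {
        int opcode = memory[pc++];           /* fetch  */
        if (opcode == OP_HALT) break;        /* decode + execute */
        int operand = memory[pc++];
        if (opcode == OP_LOAD)     acc  = operand;
        else if (opcode == OP_ADD) acc += operand;
    }
    printf("accumulator = %d\n", acc);       /* prints 42 */
    return 0;
}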




-Magnetic Disk


A magnetic disk is a memory device, such as a floppy disk, a hard disk, or a removable cartridge, that is covered with a magnetic coating on which digital information is stored in the form of microscopically small magnetized regions. Put another way, it is a storage device consisting of magnetically coated platters, on whose surfaces information is recorded as magnetic spots arranged to represent binary data.

-Moving Head disk Mechanism
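The moving-head mechanism positions itself using a cylinder/head/sector address. Below is a sketch of the classic conversion from a logical block address to such a triple; the disk geometry constants are made up for illustration:

#include <stdio.h>

#define HEADS              16    /* surfaces per cylinder (assumed geometry) */
#define SECTORS_PER_TRACK  63    /* sectors on each track (assumed geometry) */

static void lba_to_chs(unsigned lba, unsigned *c, unsigned *h, unsigned *s) {
    *c = lba / (HEADS * SECTORS_PER_TRACK);        /* which cylinder      */
    unsigned rem = lba % (HEADS * SECTORS_PER_TRACK);
    *h = rem / SECTORS_PER_TRACK;                  /* which surface/head  */
    *s = rem % SECTORS_PER_TRACK + 1;              /* sectors count from 1 */
}

int main(void) {
    unsigned c, h, s;
    lba_to_chs(123456, &c, &h, &s);
    printf("LBA 123456 -> cylinder %u, head %u, sector %u\n", c, h, s);
    return 0;
}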



-Magnetic tapes


Magnetic tape is a medium for magnetic recording generally consisting of a thin magnetizable coating on a long and narrow strip of plastic. Nearly all recording tape is of this type, whether used for recording audio or video or for computer data storage. It was originally developed in Germany, based on the concept of magnetic wire recording. Devices that record and playback audio and video using magnetic tape are generally called tape recorders and video tape recorders respectively. A device that stores computer data on magnetic tape can be called a tape drive, a tape unit, or a streamer.
Magnetic tape revolutionized the broadcast and recording industries. In an age when all radio (and later television) was live, it allowed programming to be prerecorded. At a time when gramophone records were cut in a single take, it allowed recordings to be created in multiple stages and easily mixed and edited with minimal loss in quality between generations. It was also one of the key enabling technologies in the development of modern computers: magnetic tape allowed massive amounts of data to be stored for long periods of time and accessed rapidly when needed.



9.)Storage Hierarchy


The hierarchical arrangement of storage in current computer architectures is called the memory hierarchy. It is designed to take advantage of memory locality in computer programs. Each level of the hierarchy has the properties of higher bandwidth, smaller size, and lower latency than lower levels.
Most modern CPUs are so fast that, for most program workloads, the locality of reference of memory accesses and the efficiency of the caching and memory transfer between the different levels of the hierarchy are the practical limitations on processing speed. As a result, the CPU spends much of its time idling, waiting for memory I/O to complete. This is sometimes called the space cost, as a larger memory object is more likely to overflow a small/fast level and require the use of a larger/slower level.
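A quick way to see the effect of locality is to sum a large matrix row by row (sequential, cache-friendly accesses) and then column by column (strided accesses that defeat the cache). Exact numbers depend on the machine, but the column-major pass is usually noticeably slower:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 4096

/* sequential (row-major) traversal: consecutive addresses, cache-friendly */
static double sum_rows(const int *m) {
    double s = 0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            s += m[i * N + j];
    return s;
}

/* strided (column-major) traversal: jumps N ints at a time, cache-hostile */
static double sum_cols(const int *m) {
    double s = 0;
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            s += m[i * N + j];
    return s;
}

int main(void) {
    int *m = malloc((size_t)N * N * sizeof *m);
    if (m == NULL) return 1;
    for (long i = 0; i < (long)N * N; i++) m[i] = 1;

    clock_t t0 = clock();
    double a = sum_rows(m);
    clock_t t1 = clock();
    double b = sum_cols(m);
    clock_t t2 = clock();

    printf("row-major:    sum=%.0f  %.3f s\n", a, (double)(t1 - t0) / CLOCKS_PER_SEC);
    printf("column-major: sum=%.0f  %.3f s\n", b, (double)(t2 - t1) / CLOCKS_PER_SEC);
    free(m);
    return 0;
}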


-Caching

In general, a cache keeps copies of frequently used data in a smaller, faster level of storage so that later requests can be served more quickly. Web caching, for example, is the caching of web documents (e.g., HTML pages, images) in order to reduce bandwidth usage, server load, and perceived lag: a web cache stores copies of documents passing through it, and subsequent requests may be satisfied from the cache if certain conditions are met.
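A toy in-memory version of that idea is sketched below: documents are stored under their URL, repeated requests for the same URL are served from the cache, and real-world concerns such as eviction and freshness checks are deliberately left out:

#include <stdio.h>
#include <string.h>

#define CACHE_SLOTS 8

struct cache_entry { char url[64]; char body[128]; int used; };
static struct cache_entry cache[CACHE_SLOTS];

static const char *fetch(const char *url) {
    /* look for a cached copy first */
    for (int i = 0; i < CACHE_SLOTS; i++)
        if (cache[i].used && strcmp(cache[i].url, url) == 0) {
            printf("cache hit:  %s\n", url);
            return cache[i].body;
        }
    /* miss: pretend to contact the origin server, then store a copy */
    printf("cache miss: %s (fetching from origin)\n", url);
    for (int i = 0; i < CACHE_SLOTS; i++)
        if (!cache[i].used) {
            snprintf(cache[i].url, sizeof cache[i].url, "%s", url);
            snprintf(cache[i].body, sizeof cache[i].body, "<html>%s</html>", url);
            cache[i].used = 1;
            return cache[i].body;
        }
    return "<html>uncached</html>";   /* cache full; a real cache would evict */
}

int main(void) {
    fetch("http://example.com/index.html");   /* miss */
    fetch("http://example.com/index.html");   /* hit  */
    return 0;
}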


-Coherency and Consistency

Transactional Coherence and Consistency (TCC) offers a way to simplify parallel programming by executing all code in transactions. In TCC systems, transactions serve as the fundamental unit of parallel work, communication and coherence. As each transaction completes, it writes all of its newly produced state to shared memory atomically, while restarting other processors that have speculatively read from the modified data. With this mechanism, a TCC-based system automatically handles data synchronization correctly, without programmer intervention. To gain the benefits of TCC, programs must be decomposed into transactions. Decomposing a program into transactions is largely a matter of performance tuning rather than correctness, and a few basic transaction programming optimizations are sufficient to obtain good performance over a wide range of applications with little programmer effort.
In computing, cache coherence (also cache coherency) refers to the integrity of data stored in local caches of a shared resource. Cache coherence is a special case of memory coherence.
When clients in a system maintain caches of a common memory resource, problems may arise with inconsistent data. This is particularly true of CPUs in a multiprocessing system. If one client has a copy of a memory block from a previous read and another client changes that memory block, the first client can be left with an invalid copy in its cache without any notification of the change. Cache coherence is intended to manage such conflicts and maintain consistency between the caches and memory.
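The toy write-invalidate sketch below models that situation with two caches over a single shared value: a write by one cache invalidates the other's copy, so the next read goes back to memory instead of returning stale data. This is a simulation of the idea, not a real coherence protocol implementation:

#include <stdio.h>

static int shared_memory = 10;                /* the shared resource */

struct cache { int value; int valid; };

static int cache_read(struct cache *c) {
    if (!c->valid) {                          /* miss or invalidated copy */
        c->value = shared_memory;
        c->valid = 1;
    }
    return c->value;
}

static void cache_write(struct cache *c, struct cache *other, int v) {
    c->value = v;
    c->valid = 1;
    shared_memory = v;                        /* write-through for simplicity */
    other->valid = 0;                         /* invalidate the other copy    */
}

int main(void) {
    struct cache top = {0, 0}, bottom = {0, 0};

    printf("top reads %d\n", cache_read(&top));   /* 10, from memory */
    cache_write(&bottom, &top, 99);               /* bottom updates  */
    printf("top reads %d\n", cache_read(&top));   /* 99, not stale   */
    return 0;
}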




10.)Hardware Protection


-Dual-mode Operation


Dual-mode operation is needed to protect the operating system from improper behavior of application programs. The CPU must provide at least two modes of operation: monitor mode and user mode. Applications run only in user mode, while the OS runs in monitor mode.
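A small sketch of the mode-bit idea behind dual-mode operation (the names are invented and the check is simulated in user space): a "privileged" operation succeeds only in monitor mode and otherwise behaves like a trap to the operating system:

#include <stdio.h>

enum cpu_mode { MONITOR_MODE, USER_MODE };

static enum cpu_mode current_mode = USER_MODE;   /* applications start here */

static int halt_machine(void) {                  /* a privileged operation  */
    if (current_mode != MONITOR_MODE) {
        printf("trap: privileged instruction attempted in user mode\n");
        return -1;                               /* the OS would handle the trap */
    }
    printf("machine halted\n");
    return 0;
}

int main(void) {
    halt_machine();                 /* rejected: we are in user mode */
    current_mode = MONITOR_MODE;    /* in real hardware only a trap/interrupt does this */
    halt_machine();                 /* now allowed */
    return 0;
}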

-I/O protection
In general, I/O protection means that all I/O instructions are privileged: user programs cannot issue them directly and must request I/O through the operating system, which can then validate every transfer. One example is an I/O storage protection arrangement for a computer system containing at least one central processor (CP) entity, at least one I/O processor interface, and a shared main storage with a plurality of storage blocks accessible to both. Since the system executes CP programs and I/O programs that access data and programs in those blocks, the arrangement controls which blocks each I/O program is permitted to read or write.


-Memory Protection
Memory protection is a way to control memory access rights on a computer, and is part of nearly every modern operating system. The main purpose of memory protection is to prevent a process from accessing memory that has not been allocated to it. This prevents a bug within one process from affecting other processes or the operating system itself. Memory protection also makes a rootkit more difficult to implement. Memory protection is distinct from related defenses such as address space layout randomization (ASLR) and the NX bit.
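One classic mechanism behind this is a base/limit check on every address a process issues; the sketch below simulates it in plain C:

#include <stdio.h>

struct region { unsigned long base; unsigned long limit; };  /* per process */

static int check_access(const struct region *r, unsigned long addr) {
    if (addr < r->base || addr >= r->base + r->limit) {
        printf("trap: address 0x%lx outside allocated region\n", addr);
        return 0;                    /* the OS would typically abort the process */
    }
    printf("access to 0x%lx allowed\n", addr);
    return 1;
}

int main(void) {
    struct region proc = { 0x40000000UL, 0x10000UL };   /* a 64 KiB region  */
    check_access(&proc, 0x40001000UL);                  /* inside: allowed  */
    check_access(&proc, 0x40020000UL);                  /* outside: trapped */
    return 0;
}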


-CPU Protection
CPU protection means protecting the CPU from being monopolized by a single process. To do this, the OS must be capable of pre-empting a running process, and it usually restricts the execution time of each process with a timer that returns control to the operating system when the allotted time expires.
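As a user-space analogy, the POSIX sketch below arms a one-second timer with alarm(); the SIGALRM that interrupts the busy loop plays the role of the hardware timer interrupt that lets the OS preempt a runaway process:

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t time_slice_over = 0;

static void on_alarm(int sig) {
    (void)sig;
    time_slice_over = 1;            /* the "scheduler" regains control */
}

int main(void) {
    signal(SIGALRM, on_alarm);
    alarm(1);                       /* arm a 1-second "time slice" */

    unsigned long work = 0;
    while (!time_slice_over)        /* the "runaway" computation */
        work++;

    printf("preempted after %lu iterations\n", work);
    return 0;
}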