Saturday, September 3, 2011

Input/Output Management in an Operating System

Introduction:-
In computing, input/output, or I/O, refers to communication between an information processing system (such as a computer) and the outside world, which may be a human or another information processing system. Inputs are the signals or data received by the system, and outputs are the signals or data sent from it. The term can also be used as part of an action: to "perform I/O" is to perform an input or output operation. I/O devices are used by a person (or another system) to communicate with a computer.
A keyboard or a mouse may be an input device for a computer, while monitors and printers are considered output devices. Devices for communication between computers, such as modems and network cards, typically serve for both input and output.
Goals for I/O
• Users should access all devices in a uniform manner.
• Devices should be named in a uniform manner.
• The OS, without the intervention of the user program, should handle recoverable errors.
• The OS must maintain security of the devices.
• The OS should optimize the performance of the I/O system.


Input and output operations in an operating system:-
In computer architecture, the combination of the CPU and main memory (i.e. memory that the CPU can read and write to directly, with individual instructions) is considered the brain of the computer, and from that point of view any transfer of information to or from that combination, for example to or from a disk drive, is considered I/O.




Memory-mapped input/output in an o/s:-
The CPU and its supporting circuitry provide memory-mapped I/O, in which device registers are mapped into the processor's address space so that ordinary load and store instructions access them. This facility is used in low-level computer programming in the implementation of device drivers. An I/O algorithm is one designed to exploit locality and perform efficiently when data reside on secondary storage, such as a disk drive.



Port-mapped input/output in an o/s:-
Port-mapped I/O uses a separate address space for devices and usually requires instructions which are specifically designed to perform I/O operations (such as IN and OUT on x86).


Interaction between user and I/O:-
The output from input devices such as keyboards and mice is input for the computer. Similarly, printers and monitors take as input the signals that a computer outputs; they then convert these signals into representations that human users can see or read. For a human user, the process of seeing or reading these representations is receiving input. These interactions between computers and humans are studied in a field called human–computer interaction.



I/O procedure:-
A user process requesting I/O makes a call of the form DOIO(stream, mode, amount, destination, semaphore), where DOIO is the name of the relevant I/O procedure and:

stream: the identification number of the stream on which I/O is to take place.

mode: the operation required (input, output, scan, etc.); it may indicate the character code as well.

amount: how much data is to be transferred.

destination (source): the memory area to or from which data is to be transferred.

semaphore: the address of a semaphore, 'request serviced', to be signaled by the device handler when the I/O operation is complete.
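As a rough illustration, a DOIO-style request can be modeled in Python with a semaphore that the handler signals on completion. The names do_io and DeviceHandler are invented for this sketch; a real device handler would run asynchronously and drive actual hardware.

```python
import threading

def do_io(stream, mode, amount, destination, semaphore, handler):
    """Register an I/O request; `handler` signals `semaphore` when serviced."""
    request = {"stream": stream, "mode": mode, "amount": amount,
               "destination": destination, "semaphore": semaphore}
    handler.submit(request)

class DeviceHandler:
    """Toy device handler: services input requests immediately."""
    def submit(self, request):
        if request["mode"] == "input":
            # Simulate transferring `amount` bytes into the destination area.
            request["destination"].extend(b"x" * request["amount"])
        # Signal 'request serviced' so the waiting process may proceed.
        request["semaphore"].release()
```

The calling process would issue do_io and then block on semaphore.acquire() until the handler signals completion.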

Input/Output (I/O) Management in an o/s
So far we have studied how resources like the processor and main memory are managed. We shall now examine I/O management. Humans interact with machines by providing information through IO devices. Also, much of whatever a computer system provides as on-line services is essentially made available through specialized devices such as screen displays, printers, keyboards, and mice. Clearly, management of all these devices can affect the throughput of a system. For this reason, input/output management is also one of the primary responsibilities of an operating system. In this chapter we shall examine the role of operating systems in managing IO devices. In particular, we shall examine how the end use of the devices determines the way they are regulated for communication with either humans or with systems.
Issues in I/O Management
Let us first examine the context of input output in a computer system. We shall look at issues initially from the point of view of communication with a device. Later, we shall also examine issues from the point of view of managing events. When we analyze device communication, we notice that communication is required at the following three levels:

• The need for a human to input information and receive output from a computer.
• The need for a device to input information and receive output from a computer.
• The need for computers to communicate (receive/send information) over networks.

The first kind of IO devices operate at rates suited to human interaction. These may be character-oriented devices like a keyboard or an event-generating device like a mouse. Usually, human input using a keyboard amounts to a few key depressions at a time, which means that the communication is rarely more than a few bytes. Also, mouse events can be encoded by a small amount of information (just a few bytes). Even though a human input is very small, it is considered very important and therefore requires an immediate response from the system. A communication which attempts to draw attention often requires the use of an interrupt mechanism or a programmed data mode of operation. Interrupt as well as programmed data modes of IO shall be dealt with in detail later in this chapter.

Managing Events
Our next observation is that a computer system may sometimes be embedded to interact with a real-life activity or process. It is quite possible that in some operational context a process may have to synchronize with some other process. In such a case this process may actually have to wait to achieve a rendezvous with another process. In fact, whichever of the two synchronizing processes arrives first at the point of rendezvous would have to wait. When the other process also reaches the point of synchronization, the first process may proceed after recognizing the synchronizing event. Note that events may be communicated using signals, which we shall learn about later. In some other cases a process may have to respond to an asynchronous event that may occur at any time. Usually, an asynchronous input is attended to in the next instruction cycle: the OS checks for any event which may have occurred in the intervening period. This means that an OS incorporates some IO event recognition mechanism. IO handling mechanisms may be polling, programmed data transfer, an interrupt mechanism, or even direct memory access (DMA) with cycle stealing. We shall examine all these mechanisms in some detail. The unit of data transfer may be either one character at a time or a block of characters, and it may require setting up a procedure or a protocol, particularly when communication takes place over a network.
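The rendezvous described above can be sketched with two semaphores: whichever side arrives first blocks until its peer also arrives. This is a minimal illustration, not an OS facility; make_rendezvous is an invented name.

```python
import threading

def make_rendezvous():
    """Return two callables; each blocks until the other has also been called."""
    a_arrived = threading.Semaphore(0)
    b_arrived = threading.Semaphore(0)

    def side_a():
        a_arrived.release()   # announce arrival at the rendezvous point
        b_arrived.acquire()   # wait until the peer arrives

    def side_b():
        b_arrived.release()
        a_arrived.acquire()

    return side_a, side_b
```

Neither side can pass the rendezvous point until both have reached it, which is exactly the waiting behavior described above.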

Programmed Data Mode
In this mode of communication, execution of an IO instruction ensures that a program shall not advance till it is completed. To that extent one is assured that IO happens before anything else happens. As depicted in Figure 5.1, in this mode an IO instruction is issued to an IO device and the program executes in “busy-waiting” (idling) mode till the IO is completed. During the busy-wait period the processor is continually interrogating to check if the device has completed IO. Invariably the data transfer is accomplished through an identified register and a flag in a processor.
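A toy model of this busy-waiting loop follows, assuming an invented ToyDevice with a ready flag and a data register; real programmed I/O would interrogate hardware status registers rather than a Python object.

```python
class ToyDevice:
    """Stand-in for a slow device with a status flag and a data register."""
    def __init__(self, data):
        self._pending = list(data)
        self.ready = False        # flag the CPU interrogates
        self.data_register = None

    def tick(self):
        """One device-time step: make the next byte available, if any."""
        if self._pending:
            self.data_register = self._pending.pop(0)
            self.ready = True

def programmed_read(device, count):
    """Busy-wait loop: the program cannot advance until the IO completes."""
    received = []
    while len(received) < count:
        device.tick()             # stands in for real device progress
        while not device.ready:   # busy-waiting (idling) on the flag
            device.tick()
        received.append(device.data_register)
        device.ready = False      # acknowledge, clearing the flag
    return received
```

The inner while loop is the "busy-wait": the processor does nothing useful while interrogating the flag.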



Polling
In this mode of data transfer, shown in Figure 5.2, the system interrogates each device in turn to determine if it is ready to communicate. If it is ready, communication is initiated, and subsequently the system continues to interrogate in the same sequence. This is just like a round-robin strategy. Each IO device gets an opportunity to establish communication in turn; no device has a particular advantage (such as a priority) over the others. Polling is quite commonly used by systems to interrogate ports on a network. Polling may also be scheduled to interrogate at some pre-assigned time intervals. It should be remarked here that most daemon software operates in polling mode. Essentially, in hardware this may translate to the following protocol:
1. Assign a distinct address to each device connected to a bus.

2. The bus controller scans through the addresses in sequence to find which device wishes to establish a communication.

3. Allow the device that is ready to communicate to leave its data on the register.

4. The IO is accomplished. In case of an input the processor picks up the data; in case of an output the device picks up the data.

5. Move to interrogate the next device address in sequence to check if it is ready to communicate.
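The protocol above might be modeled as a single polling sweep; the device table and shared register are simplifications invented for this sketch.

```python
def poll_round(devices, register):
    """One polling sweep over all device addresses, in sequence.

    `devices` maps a device address to {"ready": bool, "data": ...};
    `register` stands in for the shared register data is left on.
    Returns the addresses serviced this round.
    """
    serviced = []
    for address in sorted(devices):              # scan addresses in order
        device = devices[address]
        if device["ready"]:                      # device wishes to communicate
            register.append((address, device["data"]))  # leave data on register
            device["ready"] = False              # IO accomplished for this round
            serviced.append(address)
    return serviced
```

Each sweep visits every address once, so no device can monopolize the bus, mirroring the round-robin fairness described above.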

Interrupt Mode
Let us begin with a simple illustration to explain the basic rationale behind the interrupt mode of data transfer. Suppose a program needs input from a device which communicates using interrupts. Even with present-day technology, devices are a thousand or more times slower than the processor. So if the program were to wait on the input device, it would idle through many processor cycles just waiting for the device to be ready to communicate. This is where the interrupt mode of communication scores. To begin with, a program may initiate an IO request and advance without suspending its operation. At the time when the device is actually ready to establish IO, the device raises an interrupt to seek communication. Immediately, the program's execution is suspended temporarily and the current state of the process is stored. Control is passed to an interrupt service routine (which may be specific to the device) to perform the desired input. Subsequently, the suspended process context is restored to resume the program from the point of its suspension.

                                                                                                                                                    
[Figure: interrupt-mode IO cycle — the device driver initiates IO; when input is ready, output completes, or an error occurs, the device raises an interrupt signal; the CPU, receiving the interrupt, transfers control to the interrupt handler, which processes the data; the CPU then resumes processing.]

Internal Interrupt:
The source of an interrupt may be a memory-resident process or a function from within the processor. We regard such an interrupt as an internal interrupt. A processor malfunction results in an internal interrupt. An attempt to divide by zero or to execute an illegal or non-existent instruction code results in an internal interrupt as well. An interrupt arising from a condition such as division by zero is called a trap. An internal interrupt may be caused by a timer as well. This may be because either the processor time slice allocated to a process has elapsed or the process needs to be pre-empted for some reason. Note that an RTOS may pre-empt a running process by using an interrupt to ensure that the stipulated response time is met. This would also be a case of internal interrupt.
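As a loose analogy, a divide-by-zero trap surfaces in Python as a ZeroDivisionError, which a handler can catch so that execution continues instead of crashing. safe_divide is an invented name, and the substituted values are just one possible recovery policy.

```python
def safe_divide(a, b):
    """Divide a by b, handling the divide-by-zero 'trap' gracefully."""
    try:
        return a / b
    except ZeroDivisionError:
        # The "trap handler": substitute a signed infinity (or 0 for 0/0).
        if a > 0:
            return float("inf")
        if a < 0:
            return float("-inf")
        return 0.0
```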

External Interrupt:
If the source of an interrupt is not internal, i.e. it is other than a process- or processor-related event, then it is an external interrupt. This may be caused by a device which is seeking the attention of the processor. As indicated earlier, a program may seek IO, issue an IO command, and proceed. After a while, the device from which IO was sought is ready to communicate and may raise an interrupt. This would be a case of an external interrupt.

Software Interrupt:
Most OSs offer two modes of operation, the user mode and the system mode. Whenever a user program makes a system call, be it for IO or a special service, the operation requires a transition from user mode to system mode. An interrupt is raised to effect this transition from user to system mode of operation. Such an interrupt is called a software interrupt.

We shall next examine how an interrupt is serviced. Suppose we are executing instruction i in program P when an interrupt signal is raised, and assume that an interrupt service routine is to be initiated to service the interrupt. A typical interrupt service proceeds in the following steps:

1. The current instruction is completed and execution of P is suspended.
2. The state (context) of P is saved.
3. The source of the interrupt is identified and control is transferred to the corresponding interrupt service routine.
4. The interrupt service routine performs the required service.
5. The saved context of P is restored, and P resumes from the instruction following i.
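The suspend–save–service–restore sequence can be sketched in miniature. The process "context" here is just a dictionary standing in for saved registers, and handle_interrupt is an invented name.

```python
def handle_interrupt(process_state, isr, device_data):
    """Save the process context, run the ISR, then restore the context."""
    saved = dict(process_state)   # save the current state of the process
    isr(device_data)              # control passes to the interrupt service routine
    process_state.clear()
    process_state.update(saved)   # restore context; the process resumes
    return process_state
```

After servicing, the process's state is exactly as it was at the point of suspension, so it resumes as if nothing had intervened.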

DMA (Direct Memory Access) Mode of Data Transfer
This is a mode of data transfer in which IO is performed in large data blocks. For instance, disks communicate in data blocks of sizes like 512 bytes or 1024 bytes. Direct memory access, or DMA, provides access to main memory without processor intervention or support. Such independence from the processor makes this mode of transfer extremely efficient. When a process initiates a direct memory access (DMA) transfer, its execution is briefly suspended (using an interrupt) to set up the DMA control. The DMA control requires the starting address in main memory and the size of the data to be transferred; this information is stored in the DMA controller. Following the DMA set-up, the program resumes from the point of suspension. The device communicates with main memory, stealing memory access cycles in competition with other devices and the processor.

Consider the case of a disk to main memory transfer in DMA mode. We first note that there is a disk controller to regulate communication from one or more disk drives. This controller essentially isolates individual devices from direct communication with the CPU or main memory. The communication is regulated to first happen between the device and the controller, and later between the controller and main memory or the CPU if so needed. Note that these devices communicate in blocks of bits or bytes as a data stream. Clearly, an unbuffered communication is infeasible via the data bus, which has its own timing control protocol.

Once the controller buffer has the required data, one can envisage putting the controller in contention with the CPU to obtain access to the bus. If the controller gets the bus, then by using the address and data buses it can directly communicate with main memory. This transfer is completely independent of program control from the processor.
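A toy model of a DMA transfer follows, using an invented DMAController class: the CPU only supplies the start address and size at set-up, and the block then moves without further CPU involvement, one stolen memory cycle per word.

```python
class DMAController:
    """Toy DMA controller: set up by the CPU, then transfers independently."""

    def setup(self, source_block, memory, start_address, size):
        # The CPU is briefly interrupted just to store these parameters.
        self.source = source_block
        self.memory = memory
        self.start = start_address
        self.size = size

    def transfer(self):
        """Move the whole block, stealing one memory cycle per word."""
        cycles = 0
        for i in range(self.size):
            self.memory[self.start + i] = self.source[i]
            cycles += 1              # one stolen memory access cycle
        return cycles
```

Note that transfer() never consults the CPU: once set up, the block moves under the controller's own control, which is the essence of cycle stealing.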



            
                    
I/O Hardware

IO management requires that a proper set-up be created on the computer system between an application and an IO device. An IO operation is a combination of HW and SW instructions, as shown in Figure
Following the issuance of an IO command, the OS kernel resolves it and then communicates with the device through its device driver.



Principles of I/O Hardware
 

Handling Interrupt Using Device Drivers
Let us assume we have a user process which seeks to communicate with an input device using a device driver process. Processes communicate by signaling. The steps below describe the complete operational sequence (with corresponding numbers).

1. Register with listener chain of the driver: The user process P signals the device
driver as process DD to register its IO request. Process DD maintains a list data
structure, basically a listener chain, in which it registers requests received from
processes which seek to communicate with the input device.

2. Enable the device: The process DD sends a device enable signal to the device.
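The registration step might be modeled with a simple FIFO listener chain; DeviceDriver and its methods are invented names for this sketch, and signaling is reduced to direct method calls.

```python
class DeviceDriver:
    """Toy device driver process DD with a listener chain of requests."""

    def __init__(self):
        self.listener_chain = []          # requests from waiting processes

    def register(self, process_id):
        """Step 1: a user process P registers its IO request with DD."""
        self.listener_chain.append(process_id)

    def device_ready(self, data):
        """On the device's signal, hand data to the first registered process."""
        if not self.listener_chain:
            return None
        return (self.listener_chain.pop(0), data)
```

Requests are serviced in arrival order, so each registered process eventually receives its input when the device signals readiness.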

Some Additional Points
In this section we discuss a few critical services like clocks and spooling. We also discuss several additional points relevant to IO management, such as caches.


Spooling:
Suppose we have a printer connected to a machine. Many users may seek to use the printer. To avoid print clashes, it is important to be able to queue up all the print requests. This is achieved by spooling. The OS maintains all print requests and schedules each user's print requests. In other words, all output commands to print are intercepted by the OS kernel. An area is used to spool the output so that a user's job does not have to wait for the printer to be available. One can examine the print queue status by using the lpq and lpstat commands in Unix.


Clocks:
The CPU has a system clock. The OS uses this clock to provide a variety of system- and application-based services. For instance, a print-out should display the date and time of printing. Below we list some of the common clock-based services.
• Maintaining the time of day. (Look up the date command under Unix.)

• Scheduling a program run at a specified time during the system's operation. (Look up the at and cron commands under Unix.)

• Preventing overruns by processes in preemptive scheduling. Note that this is important for real-time systems. In an RTOS one follows a scheduling policy like earliest deadline first; this policy may necessitate preemption of a running process.

• Keeping track of resource utilization or reserving resource use.

• Performance-related measurements (like timing IO and CPU activity).

Addressing a device:
Most OSs reserve some addresses for exclusive use by devices. A system may have several DMA controllers, interrupt handling cards (for some process control), timers, serial ports (for terminals) or terminal concentrators, parallel ports (for printers), graphics controllers, or floppy and CD-ROM drives, etc. A fixed range of addresses is allocated to each of these devices. This ensures that the device drivers communicate with the right ports for data.


Caching:
A cache is an intermediate level of fast storage. Often caches can be regarded as fast buffers. These buffers may be used for communication between disk and memory, or between memory and the CPU. The CPU memory caches may be used for instructions or data. When a cache is used for instructions, a group of instructions may be pre-fetched and kept there; this helps in overcoming the latency experienced in instruction fetch. In the same manner, when it is used for data it helps to attain a higher locality of reference. As for main memory to disk caches, one use is in disk rewrites: the technique is to collect all the write requests for a few seconds before the disk is actually written to. Caching is widely used to enhance the performance of systems.
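The write-collecting idea for disk caches can be sketched like this: repeated writes to the same block within the collection window are merged, so the disk sees only the latest value per block. WriteCache is an invented name for this simplified model.

```python
class WriteCache:
    """Toy disk write cache that coalesces writes before flushing."""

    def __init__(self):
        self.pending = {}                 # block number -> latest data

    def write(self, block, data):
        # Later writes to the same block overwrite earlier pending ones.
        self.pending[block] = data

    def flush(self, disk):
        """Write each dirty block to disk once; returns disk writes issued."""
        writes = 0
        for block, data in self.pending.items():
            disk[block] = data
            writes += 1
        self.pending.clear()
        return writes
```

Three logical writes below cost only two physical disk writes, illustrating the saving from collecting requests before the disk is written to.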


I/O channels:
An IO channel is essentially a small, dedicated processor that handles I/O from multiple sources. It ensures that I/O traffic is smoothed out.


OS and CDE:
The common desktop environment (CDE) is the norm nowadays. An OS provides some terminal-oriented facilities for operations in a CDE. In particular, the graphical user interface (GUI) within windows is now a standard facility. The kernel I/O system recognizes all cursor and mouse events within a window to allow a user to bring windows up, iconize them, scroll, reverse video, or even change the font and control the display. The I/O kernel provides all the screen management functions within the framework of a CDE.






I/O Buffering
• Instead of reading or writing data directly from the user’s memory, it is copied to or from an OS buffer
• Reasons for buffering
– Processes must wait for I/O to complete before proceeding
– Certain pages must remain in main memory during I/O



Single Buffer
• Operating system assigns a buffer in main memory for an I/O request
• Block-oriented
– Input transfers made to buffer
– Block moved to user space when needed
– Another block is moved into the buffer


Double Buffer
• Use two system buffers instead of one
• A process can transfer data to or from one buffer while the operating system empties or fills the other buffer
• More than two buffers can be used for circular buffering
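Double buffering can be modeled as follows, with the OS filling one buffer while the process drains the other and the two swapping roles each step. This is a simplified sequential sketch of inherently concurrent behavior; double_buffered_read is an invented name.

```python
def double_buffered_read(blocks):
    """Consume `blocks` through two alternating system buffers."""
    buffers = [None, None]
    consumed = []
    filling = 0                           # index of the buffer being filled
    for block in blocks:
        buffers[filling] = block          # OS fills one buffer...
        other = 1 - filling
        if buffers[other] is not None:    # ...while the process drains the other
            consumed.append(buffers[other])
            buffers[other] = None
        filling = other                   # the buffers swap roles
    # Drain whatever remains after the device stops producing.
    consumed.extend(b for b in buffers if b is not None)
    return consumed
```

In a real system the fill and drain steps overlap in time; the alternation shown here is what lets that overlap happen without the producer and consumer touching the same buffer.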

Levels of I/O
• User program
• User level I/O functions
• Device-independent OS software
• Device drivers
• Interrupt handlers
