Sunday, July 19, 2009
OPERATING SYSTEM
Ans: Scheduling criteria: Many criteria have been suggested for comparing CPU scheduling algorithms. The criteria include the following:
1. CPU utilization: Conceptually, CPU utilization can range from 0 to 100 percent. In a real system it should range from 40 percent to 90 percent.
2. Throughput: The number of processes that are completed per unit time is called throughput.
3. Turnaround time: The interval from the time of submission of a process to the time of its completion is the turnaround time.
4. Waiting time: Waiting time is the sum of the periods a process spends waiting in the ready queue; it does not include the time spent executing on the CPU or doing I/O.
5. Response time: The time from the submission of a request until the first response is produced is called response time.
Q. Describe FCFS scheduling, SJF scheduling, RR scheduling, priority scheduling, multilevel queue scheduling and multilevel feedback queue scheduling (with math).
Ans: FCFS scheduling: The simplest CPU scheduling algorithm is the first-come, first-served scheduling algorithm. With this scheme, the process that requests the CPU first is allocated the CPU first. The implementation of FCFS is easily managed with a FIFO queue: when a process enters the ready queue, its PCB is linked onto the tail of the queue.
The FCFS scheduling algorithm is non-preemptive: once the CPU has been allocated to a process, that process keeps the CPU until it releases it, either by terminating or by requesting I/O. This makes the algorithm troublesome for time-sharing systems. There is also a convoy effect, as all the other processes wait for one big process to get off the CPU.
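As a small illustration of the math, here is a minimal sketch in Python; the process names and burst times (24, 3 and 3 ms, all arriving at time 0 in that order) are made up for the example.

# FCFS: processes run in arrival order; each one waits for all earlier bursts.
processes = [("P1", 24), ("P2", 3), ("P3", 3)]  # (name, CPU burst in ms)

waiting = 0          # time the process now reaching the CPU has waited
total_waiting = 0
for name, burst in processes:
    print(f"{name}: waiting time = {waiting} ms")
    total_waiting += waiting
    waiting += burst  # every later process also waits for this burst

print("Average waiting time =", total_waiting / len(processes), "ms")
# Waiting times are 0, 24 and 27 ms, so the average is (0 + 24 + 27) / 3 = 17 ms.

If the two short processes were served first, the waiting times would be 0, 3 and 6 ms (an average of only 3 ms), which is exactly the convoy effect described above.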
SJF scheduling: A different approach to CPU scheduling is the shortest-job-first scheduling algorithm. This algorithm associates with each process the length of the process's next CPU burst. A more appropriate term for this scheduling method would be the shortest-next-CPU-burst algorithm, because scheduling depends on the length of the next CPU burst of a process, rather than its total length.
The SJF algorithm can be either preemptive or non-preemptive. Preemptive SJF scheduling is sometimes called shortest-remaining-time-first scheduling. The SJF scheduling algorithm is provably optimal, in that it gives the minimum average waiting time for a given set of processes.
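A minimal non-preemptive SJF sketch in the same style, with burst times (6, 8, 7 and 3 ms) chosen only for illustration: the ready processes are simply ordered by next CPU burst before the waiting times are computed.

# Non-preemptive SJF: the process with the shortest next CPU burst runs first.
processes = [("P1", 6), ("P2", 8), ("P3", 7), ("P4", 3)]  # (name, next CPU burst in ms)

order = sorted(processes, key=lambda p: p[1])  # P4, P1, P3, P2

waiting, total_waiting = 0, 0
for name, burst in order:
    print(f"{name}: waiting time = {waiting} ms")
    total_waiting += waiting
    waiting += burst

print("Average waiting time =", total_waiting / len(processes), "ms")
# SJF gives (0 + 3 + 9 + 16) / 4 = 7 ms; FCFS on the same bursts would give
# (0 + 6 + 14 + 21) / 4 = 10.25 ms, illustrating why SJF is optimal here.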
Priority scheduling: Another class of CPU scheduling algorithm is priority scheduling. A priority is associated with each process, and the CPU is allocated to the process with the highest priority. Equal-priority processes are scheduled in FCFS order. (SJF is a special case of priority scheduling: the larger the CPU burst, the lower the priority, and vice versa.) Priorities are generally indicated by some fixed range of numbers, such as 0 to 7 or 0 to 4095, and can be defined either internally or externally.
Priority scheduling can be either preemptive or non-preemptive. A major problem with priority scheduling algorithms is indefinite blocking, or starvation.
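A minimal non-preemptive priority scheduling sketch, assuming (as is common) that a lower number means a higher priority; the bursts and priorities are made up for the example.

# Non-preemptive priority scheduling: lowest priority number runs first.
processes = [("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4), ("P4", 1, 5), ("P5", 5, 2)]
# (name, CPU burst in ms, priority)

order = sorted(processes, key=lambda p: p[2])  # P2, P5, P1, P3, P4

waiting, total_waiting = 0, 0
for name, burst, priority in order:
    print(f"{name} (priority {priority}): waiting time = {waiting} ms")
    total_waiting += waiting
    waiting += burst

print("Average waiting time =", total_waiting / len(processes), "ms")
# (0 + 1 + 6 + 16 + 18) / 5 = 8.2 ms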
RR scheduling: The round-robin scheduling algorithm is designed especially for time-sharing systems. It is similar to FCFS scheduling, but preemption is added to switch between processes. A small unit of time, called a time quantum or time slice, is defined, generally from 10 to 100 milliseconds. The performance of the RR algorithm depends heavily on the size of the time quantum.
The average waiting time under the RR policy is often long. The RR scheduling algorithm is preemptive.
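A minimal round-robin sketch with a 4 ms time quantum and the same three bursts used in the FCFS example above (24, 3 and 3 ms, all arriving at time 0); waiting time is computed as turnaround time minus burst time.

from collections import deque

# Round robin: a process runs for at most one time quantum, then goes to
# the back of the ready queue if it still needs the CPU.
quantum = 4
ready = deque([("P1", 24), ("P2", 3), ("P3", 3)])  # (name, remaining burst in ms)
bursts = {name: burst for name, burst in ready}

clock = 0
completion = {}
while ready:
    name, remaining = ready.popleft()
    run = min(quantum, remaining)
    clock += run
    if remaining > run:
        ready.append((name, remaining - run))
    else:
        completion[name] = clock

total_waiting = 0
for name, burst in bursts.items():
    waiting = completion[name] - burst   # turnaround - burst (all arrive at time 0)
    total_waiting += waiting
    print(f"{name}: waiting time = {waiting} ms")

print("Average waiting time =", total_waiting / len(bursts), "ms")
# P1 waits 6 ms, P2 waits 4 ms and P3 waits 7 ms, an average of 17/3 ≈ 5.67 ms,
# which is better than the 17 ms that FCFS gives on the same bursts.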
Multilevel queue scheduling: A multilevel queue scheduling algorithm partitions the ready queue into several separate queues, and each queue has its own scheduling algorithm. In addition, there must be scheduling among the queues, which is commonly implemented as fixed-priority preemptive scheduling.
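A minimal sketch of the idea, assuming (for the example only) a foreground queue of interactive processes scheduled with RR and a background queue of batch processes scheduled FCFS, with fixed priority between the queues, so the background queue only runs while the foreground queue is empty (new arrivals, and therefore preemption of the background queue, are not modelled).

from collections import deque

# Two ready queues with fixed priority between them: foreground (RR, 4 ms
# quantum) always runs before background (FCFS).
quantum = 4
foreground = deque([("I1", 6), ("I2", 3)])   # (name, remaining burst in ms)
background = deque([("B1", 10)])

clock = 0
while foreground or background:
    if foreground:                            # foreground queue has priority
        name, remaining = foreground.popleft()
        run = min(quantum, remaining)
        clock += run
        if remaining > run:
            foreground.append((name, remaining - run))
        else:
            print(f"{name} (foreground) finishes at {clock} ms")
    else:                                     # background runs FCFS to completion
        name, remaining = background.popleft()
        clock += remaining
        print(f"{name} (background) finishes at {clock} ms")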
Tuesday, July 14, 2009
OOP
Ans: Process management: A process is a program in execution. A program is a passive entity, whereas a process is an active entity. A single-threaded process has one program counter specifying the next instruction to execute, so the execution of such a process must be sequential. A process is the unit of work in a system. Such a system consists of a collection of processes, some of which are operating system processes and the rest of which are user processes.
The operating system is responsible for the following activities in connection with process management (a small sketch follows the list):
a. Creating and deleting both user and system processes.
b. Suspending and resuming processes.
c. Providing mechanisms for process synchronization.
d. Providing mechanisms for process communication.
e. Providing mechanisms for process deadlock handling.
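As an illustration of process creation and deletion, here is a minimal sketch using the POSIX process primitives exposed by Python's standard os module (so it assumes a UNIX-like system).

import os

# Create a new process: fork() duplicates the calling process.
pid = os.fork()

if pid == 0:
    # Child process: do some work, then terminate (process deletion).
    print(f"child  pid={os.getpid()}, parent={os.getppid()}")
    os._exit(0)
else:
    # Parent process: suspend itself until the child finishes.
    print(f"parent pid={os.getpid()}, created child {pid}")
    finished_pid, status = os.waitpid(pid, 0)
    print(f"child {finished_pid} exited with status {status}")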
Q. What are the three major activities of an operating system in regard to memory management?
Ans: The three major activities of an operating system in regard to main memory management are –
a. Keep track of which parts of memory are currently being used and by whom.
b. Decide which processes are to be loaded into memory when memory space becomes available.
c. Allocate and deallocate memory space as needed.
Q. What do you mean by file system management? What are the major activities of an operating system in regard to file management?
Ans: File system management: The operating system abstracts from the physical properties of its storage devices to define a logical storage unit, the file. Files are mapped by the OS onto physical devices. A file is a collection of related information defined by its creator. Files represent programs and data. Data files may be numeric, alphabetic, alphanumeric or binary. Files may be free form or they may be formatted rigidly.
The operating system is responsible for the following activities in connection with file management (a small sketch follows the list):
a. Creating and deleting files.
b. Creating and deleting directories to organize files.
c. Supporting primitives for manipulating files and directories.
d. Mapping files onto secondary storage.
e. Backing up files on stable storage media.
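As a small illustration of these primitives, here is a minimal sketch using Python's standard os interface; the directory and file names are made up for the example.

import os

# Create a directory and a file inside it, write and read data,
# then delete both (creating and deleting files and directories).
os.mkdir("demo_dir")
path = os.path.join("demo_dir", "notes.txt")

with open(path, "w") as f:     # create a file and write to it
    f.write("files represent programs and data\n")

with open(path) as f:          # read it back
    print(f.read(), end="")

os.remove(path)                # delete the file
os.rmdir("demo_dir")           # delete the directory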
Q. What are the major activities of an operating system in regard to secondary storage management?
Ans: The major activities of an operating system in regard to secondary storage management are –
a. Free space management.
b. Storage allocation.
c. Disk scheduling.
Q. What do you mean by I/O system management? Write the activities of I/O system management.
Ans: The I/O system consists of several components:
a. A buffer-caching system.
b. A general device-driver interface.
c. Drivers for specific hardware devices.
Activities: The I/O subsystem hides the peculiarities of specific hardware devices from the user. For example, in UNIX the peculiarities of I/O devices are hidden from the bulk of the operating system itself by the I/O subsystem.
Monday, July 13, 2009
new
 Ans: Distributed  system: A distributed system is a collection of loosely coupled  processors interconnected by a communication network. There are four major  reasons for building distributed systems:
- Resource sharing: Resource sharing in a distributed system provides mechanisms for sharing files at remote sites, processing information in a distributed database, printing files at remote sites, using remote specialized hardware devices and performing other operations.
 
- Computation speedup: If a particular computation can be partitioned into subcomputations that can run concurrently, distributing them among the sites provides computation speedup.
- Reliability: If one site fails in a distributed system, the remaining sites can continue operating, giving the system better reliability.
- Communication: When several sites are connected to one another by a communication network, the users at different sites have the opportunity to exchange information.

 
 Fig: Distributed System.
Parallel system: A parallel system is also known as a multiprocessor system or a tightly coupled system. Multiprocessor systems have two or more processors in close communication, sharing the computer bus and sometimes the clock, memory and peripheral devices. Multiprocessor systems have three main advantages:
- Increased throughput: By increasing the number of processors, we expect to get more work done in less time.
- Economy of scale: Multiprocessor systems can cost less than equivalent multiple single-processor systems.
- Increased reliability: If functions can be distributed properly among several processors, then reliability is increased.

 
The main difference between the two is that the processors in a parallel system are tightly coupled, sharing the bus and often the clock and memory, while the processors in a distributed system are loosely coupled and communicate only over a network.
Friday, July 10, 2009
DEFINITION AND APPLICATIONS OF OPERATING SYSTEM
Ans: Operating System: An  operating system is a program that acts as an intermediary between a user of a  computer and the computer hardware. 
Purpose of an operating system: The primary purpose of an OS is to provide an environment in which a user can execute programs.
A secondary goal is to use the computer hardware in an efficient manner.
Components of a computer system: A computer system can be roughly divided into four components:
- The hardware
- The operating system
- The application programs
- The users
 
Fig: Abstract view of the components of  a computer system
The hardware provides the basic computing resources. The application programs define the ways in which these resources are used to solve the computing problems of the users. The OS controls and coordinates the use of the hardware among the various application programs for the various users.
Q. Discuss the operating system.
Ans: Operating  system: An OS is similar to a government. Like a government the OS  performs no useful function by itself. It simply provides an environment within  which other programs can do useful work. We can view an OS as a resource  allocator. A computer system has many resources to solve a problem. The OS acts  as the manager of these resources and allocates them to specific programs and  users as necessary for the latter’s tasks.
The OS must decide which requests are allocated resources so that the computer system operates efficiently and fairly.
An OS is a control program. A control program  controls the execution of user programs to prevent errors and improper use of  the computer.
The fundamental goal of a computer system is to execute user programs and to make solving user problems easier. The common functions of controlling and allocating resources are then brought together into one piece of software: the operating system.
A more common definition is that the operating system is the one program running at all times on the computer, with everything else being application programs.
Q. What are the main goals of an operating system?
Ans: Goals of  operating system:
The primary goal of an operating system is convenience for the user: to make it easier to execute programs and solve problems. A secondary goal is efficient operation of the computer system. This goal is particularly important for large, shared multiuser systems, which are typically very expensive, so it is desirable to make them as efficient as possible.

