Tuesday 27 September 2011

Capital Structure : Modigliani and Miller (M&M)




We know that the best capital structure for a corporation is the one that minimizes the WACC. This idea is partly derived from the work of two famous Nobel Prize winners, Franco Modigliani and Merton Miller, who developed the M&M Propositions I, II, III and IV.


M&M Proposition I


M&M Proposition I states that the value of a firm does NOT depend on its capital structure. For example, think of two firms that have the same business operations and the same kind of assets. Thus, the left side of their balance sheets looks exactly the same. The only difference between the two firms is the right side of the balance sheet, i.e. the liabilities and how they finance their business activities.
     Suppose stock (equity) makes up 70% of one firm's capital structure while bonds (debt) make up 30%, and in the other firm it is the exact opposite. Proposition I says the two firms are worth the same, because the assets of both firms are exactly the same.



M&M Proposition I therefore says that how the debt and equity are structured in a corporation is irrelevant. The value of the firm is determined by its real assets, not by its capital structure.

M&M Proposition II

    M&M Proposition II states that the cost of equity of the firm (Re) depends on three things:
1) Required rate of return on the firm's Assets (Ra)
2) Cost of debt of the firm (Rd)
3) Debt/Equity ratio of the firm (D/E)

The WACC formula can be manipulated and written in another form:
Ra = (E/V) x Re + (D/V) x Rd


The above formula can also be rewritten as

Re = Ra + (Ra - Rd) x (D/E)



This formula is what M&M Proposition II is all about.

As the Debt/Equity ratio increases -> Re will increase (upward sloping).
This is the basic identity of M&M Propositions I and II: the capital structure of the firm does not affect its total value.
- The WACC therefore remains the same even if the company borrows more debt (and increases its Debt/Equity ratio).
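For a quick numerical illustration (the rates below are assumptions, not figures from the propositions), here is a small Python sketch that computes Ra as the weighted average of Re and Rd and then recovers Re from Proposition II for different Debt/Equity ratios; the printed WACC stays equal to Ra while Re rises.

def wacc(re, rd, e, d):
    """Ra = (E/V) x Re + (D/V) x Rd, where V = E + D (no taxes)."""
    v = e + d
    return (e / v) * re + (d / v) * rd

def cost_of_equity(ra, rd, d_over_e):
    """M&M Proposition II: Re = Ra + (Ra - Rd) x (D/E)."""
    return ra + (ra - rd) * d_over_e

ra = 0.10   # assumed required return on the firm's assets
rd = 0.06   # assumed cost of debt

for d_over_e in (0.0, 0.5, 1.0, 2.0):
    re = cost_of_equity(ra, rd, d_over_e)
    # Recompute the WACC with E = 1 and D = D/E; it always comes back to Ra.
    print(d_over_e, round(re, 4), round(wacc(re, rd, 1.0, d_over_e), 4))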

·     M&M Proposition III: the distribution of dividends does not change the firm's market value; it only changes the mix of E (equity) and D (debt) in the financing of the firm.



·     M&M Proposition IV: in order to accept an investment, a firm should expect a rate of return at least equal to Ra, no matter where the financing comes from. This means that the marginal cost of capital should be equal to the average cost of capital. The constant Ra is sometimes called the "hurdle rate" (the rate of return required for capital investment).

Capital Structure: Theory

Friday 23 September 2011

Cost of Capital : Introduction


The cost of capital is a term used in the field of financial investment to refer to the cost of a company's funds (both debt and equity), or, from an investor's point of view, "the shareholder's required return on a portfolio of all the company's existing securities". It is used to evaluate new projects of a company, as it is the minimum return that investors expect for providing capital to the company, thus setting a benchmark that a new project has to meet.

The cost of debt is relatively simple to calculate, as it is composed of the rate of interest paid. In practice, the interest-rate paid by the company can be modelled as the risk-free rate plus a risk component (risk premium), which itself incorporates a probable rate of default (and amount of recovery given default). For companies with similar risk or credit ratings, the interest rate is largely exogenous (not linked to the company's activities).

The cost of equity is more challenging to calculate as equity does not pay a set return to its investors. Similar to the cost of debt, the cost of equity is broadly defined as the risk-weighted projected return required by investors, where the return is largely unknown. The cost of equity is therefore inferred by comparing the investment to other investments (comparable) with similar risk profiles to determine the "market" cost of equity.

Once cost of debt and cost of equity have been determined, their blend, the weighted-average cost of capital (WACC), can be calculated. This WACC can then be used as a discount rate for a project's projected cash flows.
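As a rough sketch (the figures and the simple after-tax adjustment below are assumptions for illustration), this is how the WACC blend can be computed and then used to discount a project's projected cash flows:

def wacc(cost_equity, cost_debt, equity_value, debt_value, tax_rate=0.0):
    # Blend of the two costs, weighted by market values; the (1 - tax_rate)
    # factor reflects the tax-deductibility of interest (an assumption here).
    v = equity_value + debt_value
    return (equity_value / v) * cost_equity + (debt_value / v) * cost_debt * (1 - tax_rate)

def npv(rate, cash_flows):
    """Discount the cash flows; cash_flows[0] is the initial outlay (negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

rate = wacc(cost_equity=0.12, cost_debt=0.07, equity_value=600, debt_value=400, tax_rate=0.30)
project = [-1000, 300, 400, 500]   # assumed projected cash flows
print(round(rate, 4), round(npv(rate, project), 2))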

Thursday 22 September 2011

Process Synchronization : Semaphores




A semaphore is a "process synchronization tool" that is accessed through two operations:
a. wait (P)
b. signal (V)
If many processes share the same variable, a process that wants to enter the critical section must wait until the process currently in its critical section has completed; when that process completes, it signals the waiting processes so that one of them can enter the critical section.
Semaphores provide mutual exclusion. They are used for process synchronization and, when used carefully, to avoid deadlock conditions.
System semaphores are used by the operating system to control system resources. A program can be assigned a resource by getting a semaphore (via a system call to the operating system). When the resource is no longer needed, the semaphore is returned to the operating system, which can then allocate it to another program.
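A minimal sketch of wait (P) and signal (V) using Python's threading.Semaphore; the shared counter stands in for any shared variable, and acquire/release play the roles of wait and signal here.

import threading

mutex = threading.Semaphore(1)   # binary semaphore guarding the critical section
counter = 0                      # shared variable

def worker(iterations):
    global counter
    for _ in range(iterations):
        mutex.acquire()          # wait (P): block until the critical section is free
        counter += 1             # critical section
        mutex.release()          # signal (V): let another waiting process enter

threads = [threading.Thread(target=worker, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                   # 40000 every time, thanks to mutual exclusion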

Solutions for Critical Section Problem




Summary of Techniques for Critical Section Problem

Software
  1. Peterson's Algorithm: based on busy waiting (a sketch follows this list)
  2. Semaphores: general facility provided by operating system (e.g., OS/2)
    • based on low-level techniques such as busy waiting or hardware assistance
    • described in more detail below
  3. Monitors: programming language technique.

Hardware
  1. Exclusive access to memory location
    • always assumed
  2. Interrupts that can be turned off
    • must have only one processor for mutual exclusion
  3. Test-and-Set: special machine-level instruction
  4. Swap: atomically swaps contents of two words
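To illustrate the software approach, here is a minimal sketch of Peterson's Algorithm for two threads (busy waiting on a flag array and a turn variable), written in Python only to show the structure rather than as production synchronization code:

import threading

flag = [False, False]   # flag[i] is True when thread i wants to enter its critical section
turn = 0                # whose turn it is to yield
counter = 0             # shared variable protected by the algorithm

def peterson_worker(i, iterations):
    global turn, counter
    other = 1 - i
    for _ in range(iterations):
        flag[i] = True           # entry section
        turn = other
        while flag[other] and turn == other:
            pass                 # busy wait
        counter += 1             # critical section
        flag[i] = False          # exit section
        # remainder section omitted

t0 = threading.Thread(target=peterson_worker, args=(0, 10000))
t1 = threading.Thread(target=peterson_worker, args=(1, 10000))
t0.start(); t1.start()
t0.join(); t1.join()
print(counter)                   # 20000 if mutual exclusion held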

Process Synchronization : Critical Section


Critical Section
  • set of instructions that must be controlled so as to allow exclusive access to one process
  • execution of the critical section by processes is mutually exclusive in time
Critical Section (S&G, p. 166) (for example, "for the process table")
repeat
      entry section
      critical section
      exit section
      remainder section
until FALSE


Solution to the Critical Section Problem must meet three conditions...
  1. mutual exclusion: if a process is executing in its critical section, no other process can be executing in its critical section
  2. progress: if no process is executing in its critical section and there exists some processes that wish to enter their critical sections, then only those processes that are not executing in their remainder section can participate in the decision of which will enter its critical section next, and this decision cannot be postponed indefinitely
    • if no process is in critical section, can decide quickly who enters
    • only one process can enter the critical section so in practice, others are put on the queue
  3. bounded waiting: there must exist a bound on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted
    • The wait is the time from when a process makes a request to enter its critical section until that request is granted
    • in practice, once a process enters its critical section, it does not get another turn until a waiting process gets a turn (managed as a queue)

Process Synchronization


In computer science, synchronization refers to one of two distinct but related concepts: synchronization of processes, and synchronization of data. Process synchronization refers to the idea that multiple processes are to join up or handshake at a certain point, so as to reach an agreement or commit to a certain sequence of action.

Process synchronization or serialization, strictly defined, is the application of particular mechanisms to ensure that two concurrently-executing threads or processes do not execute specific portions of a program at the same time. If one process has begun to execute a serialized portion of the program, any other process trying to execute this portion must wait until the first process finishes. Synchronization is used to control access to state both in small-scale multiprocessing systems -- in multithreaded and multiprocessor computers -- and in distributed computers consisting of thousands of units -- in banking and database systems, in web servers, and so on.

Sunday 18 September 2011

Dijkstra's Shortest Path Algorithm
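A minimal sketch of Dijkstra's shortest path algorithm using a binary heap as the priority queue; the example graph (an adjacency dictionary with edge weights) is made up purely for illustration:

import heapq

def dijkstra(graph, source):
    """graph: {node: [(neighbour, weight), ...]} with non-negative weights."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry, skip it
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

graph = {"A": [("B", 4), ("C", 1)], "C": [("B", 2), ("D", 5)], "B": [("D", 1)], "D": []}
print(dijkstra(graph, "A"))   # shortest distances from A: B = 3, C = 1, D = 4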




Friday 16 September 2011

Source of Finance

The term "sources of finance" refers to the sources from which a firm can get investment or money to run the business. Needs for money:
1. To run the business.
2. To maintain the status in market
3. For growth of Business.
4. To stay competitive in the market, etc.
These are some basic needs for finance. On the basis of these needs, there are different sources of finance, which are classified as follows:
I. On the basis of time period:
    A. Long-Term Finance (10 years or more)
    B. Short-Term Finance (1 to 3 years)
    C. Mid-Term Finance (5 to 10 years)




A. Long-term sources of finance: Long-term financing can be raised from the following sources:

  • Share capital or equity share
  • Preference shares
  • Retained earnings
  • Debentures/Bonds of different types
  • Loans from financial institutions
  • Loan from state financial corporation
  • Loans from commercial banks
  • Venture capital funding
  • Asset securitisation
  • International
B. Medium-term sources of finance: Medium-term financing can be raised from the following sources:
  • Preference shares
  • Debentures/bonds
  • Public deposits/fixed deposits for duration of three years
  • Commercial banks
  • Financial institutions
  • State financial corporations
  • Lease financing / hire purchase financing
  • External commercial borrowings
  • Euro-issues
  • Foreign currency bonds.
C. Short-term sources of finance: Short-term financing can be raised from the following sources:


  • Trade credit
  • Commercial banks
  • Fixed deposits for a period of 1 year or less
  • Advances received from customers
  • Various short-term provisions

Wednesday 14 September 2011

Function of Finance : Dividend Decision

Dividend decision is concerned with the amount of profits to be distributed and retained in the
firm.
Dividend: The term ‘dividend’ relates to the portion of profit, which is distributed to shareholders
of the company. It is a reward or compensation to them for their investment made in the firm.
The dividend can be declared from the current profits or accumulated profits.
Which course should be followed – dividend or retention? Normally, companies distribute
certain amount in the form of dividend, in a stable manner, to meet the expectations of
shareholders and balance is retained within the organisation for expansion. If dividend is not
distributed, there would be great dissatisfaction to the shareholders. Non-declaration of dividend
affects the market price of equity shares, severely. One significant element in the dividend
decision is, therefore, the dividend payout ratio, i.e. what proportion of profit is to be paid as dividend
to the shareholders. The dividend decision depends on the preference of the equity shareholders
and investment opportunities, available within the firm. A higher rate of dividend, beyond the
market expectations, increases the market price of shares. However, it leaves a small amount
in the form of retained earnings for expansion. The business that reinvests less will tend to
grow slower. The other alternative is to raise funds in the market for expansion. It is not a
desirable decision to retain all the profits for expansion, without distributing any amount in the
form of dividend.
There is no ready-made answer, how much is to be distributed and what portion is to be
retained. Retention of profit is related to
• Reinvestment opportunities available to the firm.
• Alternative rate of return available to equity shareholders, if they invest themselves
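As a small numerical illustration of the payout ratio and retention (the figures are assumed, not taken from the text):

net_profit = 1000.0          # assumed profit after tax
dividend_paid = 400.0        # assumed dividend declared

payout_ratio = dividend_paid / net_profit        # proportion of profit distributed
retention_ratio = 1 - payout_ratio               # proportion ploughed back
retained_earnings = net_profit - dividend_paid   # funds left for expansion

print(payout_ratio, retention_ratio, retained_earnings)   # 0.4 0.6 600.0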

Function of Finance : Liquidity Decision


Liquidity decision is concerned with the management of current assets. Basically, this is Working
Capital Management. Working Capital Management is concerned with the management of current
assets. It is concerned with short-term survival. Short term-survival is a prerequisite for long-term
survival.
When more funds are tied up in current assets, the firm would enjoy greater liquidity. In
consequence, the firm would not experience any difficulty in making payment of debts, as and
when they fall due. With excess liquidity, there would be no default in payments. So, there would
be no threat of insolvency for failure of payments. However, funds have economic cost. Idle current assets do not earn anything. Higher liquidity is at the cost of profitability. Profitability
would suffer with more idle funds. Investment in current assets affects the profitability, liquidity
and risk.  A proper balance must be maintained between liquidity and profitability of the
firm. This is the key area where finance manager has to play significant role. The strategy
is in ensuring a trade-off between liquidity and profitability. This is, indeed, a balancing act
and continuous process. It is a continuous process as the conditions and requirements of business
change, time to time. In accordance with the requirements of the firm, the liquidity has to vary
and in consequence, the profitability changes. This is the major dimension of the liquidity decision: working capital management. Working capital management is a day-to-day problem for the finance
manager. His skills of financial management are put to the test daily.

Function of Finance : Investment Decision


Function Of Finance :
A. Investment Decision:-
 Investment decisions relate to selection of assets in which funds are to be invested by the firm.
Investment alternatives are numerous. Resources are scarce and limited. They have to be rationed and discretely used. Investment decisions allocate and ration the resources among the competing
investment alternatives or opportunities.  The effort is to find out the projects, which are acceptable.
Investment decisions relate to the total amount of assets to be held and their composition
in the form of fixed and current assets. Both the factors influence the risk the organisation is
exposed to. The more important aspect is how the investors perceive the risk.
The investment decisions result in purchase of assets. Assets can be classified, under two
broad categories:
(i) Long-term investment decisions – Long-term assets
(ii) Short-term  investment decisions – Short-term assets
Long-term Investment Decisions:  The long-term capital decisions are referred to as capital
budgeting decisions, which relate to fixed assets. The fixed assets are long term, in nature.
Basically, fixed assets create earnings to the firm. They give benefit in future. It is difficult to
measure the benefits as future is uncertain.
The investment decision is important not only for setting up new units but also for expansion
of existing units. Decisions related to them are, generally, irreversible. Often, reversal of decisions
results in substantial loss. When a brand new car is sold, even after a day of its purchase, still,
buyer treats the vehicle as a second-hand car. The transaction, invariably, results in heavy loss for
a short period of owning. So, the finance manager has to evaluate profitability of every investment
proposal, carefully, before funds are committed to them.
Short-term Investment Decisions: The short-term investment decisions are, generally, referred
as working capital management. The finance manager has to allocate funds among cash and cash equivalents,
receivables and inventories. Though these current assets do not, directly, contribute to the earnings,
their existence is necessary for proper, efficient and optimum utilisation of fixed assets.

Scope of Financial Management

In order to achieve the objectives of the financial management, the financial manager of the business concern, has to manage various aspects of finance function which lay down the scope of his duty. These aspects are discussed as under:
1. Estimating the financial requirement.
2. Determining the structure of capitalization.
3. Selecting a source of finance.

4. Selecting a pattern of investment.
5. Management of cash flow.
6. Implementing financial control.
7. Proper use of surplus.

1. Estimating the financial requirement: On the basis of their forecast of the volume of business operations of the company, the finance executives have to estimate the amount of fixed capital and working capital required in a given period of time.

2. Determining the structure of capitalization: After estimating the requirement of capital, the finance executives have to decide about the composition of capital. They have to determine the relative proportion of owner's risk capital and borrowed capital. These decisions have to be taken in the light of the cost of raising funds from different sources, the period for which the funds are needed, and several other factors.

3. Investment decision: The funds raised from different sources are to be intelligently invested in various assets so as to optimize the return on investment. While making investment decisions, management should be guided by three important principles: safety, liquidity and profitability.

4. Management of cash flows: Cash is needed to pay off creditors, to purchase materials, to pay labour and to meet everyday expenses. There should not be a shortage of cash at any time, as it will damage the credit-worthiness of the company. There should not be excess cash beyond what is required either, because money has time value.

5. Management of earnings: The finance executive has to decide about the allocation of earnings among several competing needs. A certain amount of the total earnings may be kept as a reserve, a portion may be distributed among the ordinary and preference shareholders, and yet another portion may be ploughed back or re-invested. The finance executive must consider the merits and demerits of the alternative schemes for utilizing the funds generated from the company's own earnings.

6. Choice of sources of finance: The management can raise finance from various sources such as shareholders, banks and other financial institutions. The finance executive has to evaluate each source or method of finance and choose the best one. Financial management is the branch of accounting that deals with the acquisition of financial resources and their management.

What is Financial Management


Basically, "financial management" is made up of two words: finance and management. Finance is taken to mean money or wealth, and management refers to controlling or managing things according to the needs of the business. So financial management can be defined as:
The management of the finances of a business / organisation in order to achieve financial objectives
Taking a commercial business as the most common organisational structure, the key objectives of financial management would be to:
• Create wealth for the business
• Generate cash, and
• Provide an adequate return on investment bearing in mind the risks that the business is taking and the resources invested
There are three key elements to the process of financial management:

(1) Financial Planning
Management need to ensure that enough funding is available at the right time to meet the needs of the business. In the short term, funding may be needed to invest in equipment and stocks, pay employees and fund sales made on credit.
In the medium and long term, funding may be required for significant additions to the productive capacity of the business or to make acquisitions.
(2) Financial Control
Financial control is a critically important activity to help the business ensure that the business is meeting its objectives. Financial control addresses questions such as:
• Are assets being used efficiently?
• Are the business's assets secure?
• Do management act in the best interest of shareholders and in accordance with business rules?
(3) Financial Decision-making
The key aspects of financial decision-making relate to investment, financing and dividends:
• Investments must be financed in some way – however there are always financing alternatives that can be considered. For example it is possible to raise finance from selling new shares, borrowing from banks or taking credit from suppliers
• A key financing decision is whether profits earned by the business should be retained rather than distributed to shareholders via dividends. If dividends are too high, the business may be starved of funding to reinvest in growing revenues and profits further

Round Robin Scheduling

It is one of the oldest, simplest, fairest and most widely used scheduling algorithms, designed especially for time-sharing systems. A small unit of time, called timeslice or quantum, is defined. All runnable processes are kept in a circular queue. The CPU scheduler goes around this queue, allocating the CPU to each process for a time interval of one quantum. New processes are added to the tail of the queue.
The CPU scheduler picks the first process from the queue, sets a timer to interrupt after one quantum, and dispatches the process.
If the process is still running at the end of the quantum, the CPU is preempted and the process is added to the tail of the queue. If the process finishes before the end of the quantum, the process itself releases the CPU voluntarily. In either case, the CPU scheduler assigns the CPU to the next process in the ready queue. Every time a process is granted the CPU, a context switch occurs, which adds overhead to the process execution time.
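A minimal simulation of round robin scheduling with a circular ready queue; the process names, burst times and quantum below are assumptions for illustration:

from collections import deque

def round_robin(processes, quantum):
    """processes: list of (name, burst_time) pairs; returns the order of CPU slices."""
    ready = deque(processes)                       # circular ready queue
    timeline = []
    while ready:
        name, remaining = ready.popleft()          # pick the process at the head
        slice_len = min(quantum, remaining)
        timeline.append((name, slice_len))         # run it for at most one quantum
        remaining -= slice_len
        if remaining > 0:
            ready.append((name, remaining))        # preempted: back to the tail
        # otherwise the process finished and released the CPU voluntarily
    return timeline

print(round_robin([("P1", 5), ("P2", 3), ("P3", 8)], quantum=4))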
         

Tuesday 13 September 2011

PCB(Process Control Block)


A process in an operating system is represented by a data structure known as a process control block (PCB) or process descriptor. The PCB contains important information about the specific process including
  • The current state of the process, i.e. whether it is new, ready, running, waiting or terminated.
  • Unique identification of the process in order to track "which is which" information.
  • A pointer to parent process.
  • Similarly, a pointer to child process (if it exists).
  • The priority of process (a part of CPU scheduling information).
  • Pointers to locate memory of processes.
  • A register save area.
  • The processor it is running on.
The PCB is a central store of information that allows the operating system to locate key information about a process. Thus, the PCB is the data structure that defines a process to the operating system.
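As a sketch, the fields listed above could be grouped into a PCB-like record as follows; the field names are illustrative rather than those of any particular operating system:

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PCB:
    pid: int                                   # unique identification of the process
    state: str = "new"                         # new / ready / running / waiting / terminated
    parent: Optional["PCB"] = None             # pointer to the parent process
    children: List["PCB"] = field(default_factory=list)   # pointers to child processes
    priority: int = 0                          # CPU scheduling information
    memory_base: int = 0                       # pointers to locate the process's memory
    memory_limit: int = 0
    registers: dict = field(default_factory=dict)          # register save area
    cpu: Optional[int] = None                  # the processor it is running on

init = PCB(pid=1, state="running")
child = PCB(pid=2, parent=init, priority=5)
init.children.append(child)
print(init.pid, child.state, child.parent.pid)   # 1 new 1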
   

System Calls & System Programs


System calls provide an interface between a process and the operating system. System calls allow user-level processes to request services from the operating system that the process itself is not allowed to perform. In handling the trap, the operating system enters kernel mode, where it has access to privileged instructions, and can perform the desired service on behalf of the user-level process. It is because of the critical nature of these operations that the operating system performs them itself every time they are needed. For example, for I/O a process makes a system call telling the operating system to read or write a particular area, and this request is satisfied by the operating system.
System programs provide basic functioning to users so that they do not need to write their own environment for program development (editors, compilers) and program execution (shells). In some sense, they are bundles of useful system calls.
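For instance, a user-level program never touches the disk directly; it asks the kernel through system calls. On a POSIX-like system, Python's os module exposes thin wrappers around these calls, so the sketch below (with a made-up file name) shows I/O requested entirely through system calls:

import os

# Each os.* call below traps into the kernel, which performs the privileged
# work (talking to the disk) on behalf of this user-level process.
fd = os.open("example.txt", os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
os.write(fd, b"written via system calls\n")
os.close(fd)

fd = os.open("example.txt", os.O_RDONLY)
data = os.read(fd, 1024)
os.close(fd)
print(data.decode())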

Operating System Services


Following are the five services provided by an operating system for the convenience of its users.

Program Execution

The purpose of a computer system is to allow the user to execute programs, so the operating system provides an environment in which the user can conveniently run programs. The user does not have to worry about memory allocation, multitasking or anything of that sort; these things are taken care of by the operating system.

Running a program involves allocating and deallocating memory and, in a multiprocess system, CPU scheduling. These functions cannot be given to user-level programs, so user-level programs cannot help the user run programs independently without help from the operating system.

 

 

I/O Operations

Each program requires input and produces output, which involves the use of I/O. The operating system hides from the user the details of the underlying I/O hardware; all the user sees is that the I/O has been performed, without any of the details. So, by providing I/O, the operating system makes it convenient for users to run programs.
For efficiency and protection, users cannot control I/O directly, so this service cannot be provided by user-level programs.


File System Manipulation
The output of a program may need to be written to new files, or input may be taken from existing files. The operating system provides this service, so the user does not have to worry about secondary storage management. The user gives a command for reading from or writing to a file and sees his or her task accomplished. Thus the operating system makes it easier for user programs to accomplish their task.
This service involves secondary storage management. The speed of I/O, which depends on secondary storage management, is critical to the speed of many programs, and hence it is best left to the operating system to manage rather than giving individual users control of it. It would not be difficult for user-level programs to provide these services, but for the above-mentioned reasons it is best if this service is left with the operating system.

Communications

There are instances where processes need to communicate with each other to exchange information. This may be between processes running on the same computer or on different computers. By providing this service, the operating system relieves the user of the worry of passing messages between processes. In cases where the messages need to be passed to processes on other computers through a network, this can be done by user programs. The user program may be customized to the specifics of the hardware through which the message transits and provide the service interface to the operating system.

Error Detection

An error in one part of the system may cause malfunctioning of the complete system. To avoid such a situation, the operating system constantly monitors the system in order to detect errors. This relieves the user of the worry of errors propagating to various parts of the system and causing malfunctions.
This service cannot be handled by user programs, because it involves monitoring, and in some cases altering, areas of memory, deallocating the memory of a faulty process, or perhaps relinquishing the CPU of a process that goes into an infinite loop. These tasks are too critical to be handed over to user programs. A user program given these privileges could interfere with the correct (normal) operation of the operating system.

Process Scheduling Technique

Process Scheduling Queue


As processes enter the system, they are put in a job queue. This queue consists of all the processes in the system. The processes that are kept in main memory, waiting and ready to execute, are placed in a list called the ready queue.
The operating system also maintains other queues. When a process is allocated the CPU, it executes for a while and eventually quits, is interrupted, or waits for some particular event to occur, such as the completion of an I/O request to a tape drive, shared device or disk. Since the system contains many processes, the disk may be busy with the I/O request of some other process, so the process has to wait for the disk. Processes waiting for a particular I/O device are placed in a queue called the device (I/O) queue.
A common representation of process scheduling is the queuing diagram. The rectangular box represents the queue. The circle represents the resource.
Figure: process scheduling queues
A new process is initially put in the ready queue. It waits in the ready queue until it is selected for execution. Once it is allocated the CPU, it starts executing, and one of several events may occur:
1. A process may issue I/O request then be placed in I/O queue.
2. A process may create a sub process and wait for its termination.
3. A process may be interrupted and put back in the ready queue. The process continues this cycle until it terminates.
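A small sketch of these queues using Python deques, with one ready queue and one I/O (device) queue; the events are scripted by hand purely to show how a process moves between the queues:

from collections import deque

ready_queue = deque(["P1", "P2", "P3"])   # processes waiting for the CPU
io_queue = deque()                        # processes waiting for the disk

def dispatch():
    """Take the process at the head of the ready queue and give it the CPU."""
    return ready_queue.popleft()

running = dispatch()                 # P1 runs, issues an I/O request...
io_queue.append(running)             # ...and joins the I/O (device) queue

running = dispatch()                 # P2 runs, is interrupted...
ready_queue.append(running)          # ...and goes back to the ready queue

ready_queue.append(io_queue.popleft())   # the disk finishes, P1 becomes ready again

print(list(ready_queue), list(io_queue))   # ['P3', 'P2', 'P1'] []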

Process State




As a program executes, it generally changes state. The state of a process is defined by the current activity of that process. Each process may be in one of the following states:-
a.  New: – the process is being created.
b. Running: – instructions are being executed.
c. Waiting: – the process is waiting for some event to occur (e.g. the completion of an I/O operation, such as printing, or the reception of a signal).
d. Ready: – the process is ready to execute and is waiting to be assigned to the processor.
e. Terminated: – the process has finished execution.
The running state implies that the process is currently being run by the CPU. Ready to run means that the process only needs CPU attention and time to run; it is not blocked in any other sense. The waiting state implies that the process is not currently running and is waiting for some event to occur (such as I/O completion or the reception of a signal). The diagram of process states is given below:-
Figure: process states and their transitions
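The allowed transitions described above can be written down compactly; the sketch below only encodes the rules stated in this post, using the same state names:

# Allowed state transitions, as described above.
TRANSITIONS = {
    "new": {"ready"},                                  # admitted by the scheduler
    "ready": {"running"},                              # dispatched to the CPU
    "running": {"waiting", "ready", "terminated"},     # I/O wait, interrupt, or exit
    "waiting": {"ready"},                              # the awaited event occurred
    "terminated": set(),
}

def move(state, new_state):
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

state = "new"
for nxt in ("ready", "running", "waiting", "ready", "running", "terminated"):
    state = move(state, nxt)
print(state)   # terminated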

Process Creation


A process can create several new processes. The creating process is called the parent process, whereas a new process is called the child process of that process. A new process can in turn create more child processes, thus forming a tree structure.
Figure: process creation tree
Here, a child has only one parent, while a parent may have many children. Every process needs certain resources (CPU time, memory, I/O devices) to accomplish its work. When a process creates a sub-process, that sub-process may obtain its resources from the operating system, or the parent may partition its resources among its child processes.
There are several possible ways of executing when a parent creates a child process:-
a. Synchronous: – if a process is created from another as a synchronous process then the new process must complete execution before the old one can resume.
b. Asynchronous: – if a new process is created asynchronously, then two processes may run concurrently.
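A sketch of the two execution styles using Python's multiprocessing module (the child's work is a made-up sleep-and-print): in the synchronous case the parent joins the child immediately, while in the asynchronous case parent and child run concurrently:

from multiprocessing import Process
import time

def child_work(label):
    time.sleep(0.1)
    print("child", label, "done")

if __name__ == "__main__":
    # Synchronous: the parent waits for the child to finish before resuming.
    sync_child = Process(target=child_work, args=("sync",))
    sync_child.start()
    sync_child.join()
    print("parent resumes after the synchronous child")

    # Asynchronous: parent and child run concurrently.
    async_child = Process(target=child_work, args=("async",))
    async_child.start()
    print("parent keeps running while the asynchronous child executes")
    async_child.join()   # tidy up before exiting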

Processes


Early computer systems allowed only one program to be executed at a time; present-day computer systems allow multiple programs to be loaded into memory and executed at the same time or step by step. Each of these executing programs is a process. A process is the unit of work in a modern time-sharing system.
A modern system consists of a collection of processes: operating system processes executing system code and user processes executing user code, all of which may be executed concurrently by the CPU.
Even on a single-user system, such as Microsoft DOS or Macintosh OS, a user may be able to run several programs at one time: a word processor, a web browser and an email package.
Even if the user can execute only one program at a time, the operating system executes several internal activities to support or control that particular program, and we can regard these activities as processes too. A process can thus be defined as a program in execution.
A program by itself is not a process. A program is a passive entity, such as the contents of a file or the instruction code of a particular file, whereas a process is an active entity, which specifies the next instruction to execute and has a set of associated resources.
Operations on a process
A process is usually sequential and consists of a sequence of actions that take place one at a time. When a set of processes runs on a single CPU that uses time slicing to multitask, this is referred to as concurrent sequential processing or pseudo-parallel processing. If there is more than one CPU available, then processes may run in parallel.
The set of operations that an operating system can perform on a process includes:-
a. Create a process
b. Destroy a process
c. Run a process
d. Suspend a process
e. Get process information
f. Set process information
Inter-process communication:
Inter-process communication (IPC) is a facility provided by an operating system via which co-operating processes can communicate with each other. This facility allows processes to co-operate and synchronize their activities. IPC is provided by a message system. IPC is useful in distributed systems, where communicating processes may reside on different computers connected by a network.
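A minimal message-passing sketch of IPC using Python's multiprocessing.Queue, in which one co-operating process sends a message and another receives it:

from multiprocessing import Process, Queue

def producer(q):
    q.put("hello from the producer process")    # send a message

def consumer(q):
    print("consumer received:", q.get())        # receive (blocks until a message arrives)

if __name__ == "__main__":
    q = Queue()
    p1 = Process(target=producer, args=(q,))
    p2 = Process(target=consumer, args=(q,))
    p1.start(); p2.start()
    p1.join(); p2.join()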

What is a Kernel


All the operations involving processes are controlled by a part of the operating system called the kernel, nucleus or core. In a modern operating system the kernel is a small portion of the operating system, but it plays a very important role in the computer system. The kernel interacts with user mode, i.e. with user programs.
The kernel resides in primary storage (RAM) from the time the computer is switched on and controls the system. Other portions of the operating system may reside in secondary storage and are loaded into primary memory when needed.
The kernel gives services to user programs and, more deeply, gives services to the resources. The kernel is responsible for accepting interrupts in order to perform certain tasks: the user, a resource or an I/O device gives an interrupt signal to the kernel, and the kernel processes the interrupt. In a multiuser system, interrupts from different users are directed to the processor, and the processor gives service to and controls all users; this environment is created by the kernel, which is responsible for rapid response to resources as well as users. The main functions of the kernel are:-
a. Interrupt handling
b. Process creation and destruction.
c. In a multiprocessing system, transferring control from one process to another; this is called process switching.
d. It supports the file system.
e. It supports procedure call and return.
Alongside the kernel, the operating system provides different utilities which are used to control and manage programs and computer resources. These utilities are:
Compiler:
Compilers and interpreters are used to translate a high-level language into a machine-understandable form. The compiler translates the whole code into machine code at once; debugging is also performed in this phase. Because the whole code is translated at one time, any bugs found, in whichever line they occur, are reported at the end of compilation. Two types of compiler are used: the quick-and-dirty compiler and the optimizing compiler. The first is fast but inefficient, and the second is slower but more efficient.
Linker:
In the early days, each and every instruction needed to solve a particular problem, even a complex one, was coded into each machine-language program. Nowadays, instructions can be reused: routines reside at certain locations and can be used by a program, which reduces the length of the program code. A large number of subroutine libraries are supplied so that programmers may use system-supplied routines to perform common operations; input and output in particular are normally handled by routines outside the user program. Therefore, machine-language programs must normally be combined with other machine-language programs to form a useful execution unit. The linker links the program code with the predefined subroutines it needs. Program execution may be slowed by linking. In the linking stage, the linker combines whatever programs are required and loads them directly into primary storage to create an executable file.
Loader:
Programs must be placed in primary storage in order to be executed. Associating instructions and data items with particular primary storage locations is an important task; it is performed sometimes by the user, sometimes by translators, sometimes by a system program called a loader, and sometimes by the operating system. The association of instructions and data items with particular storage locations is called binding. In machine-language programming, binding is performed at coding time. The loader loads instructions and initial data into primary memory. There are two types of loader: the absolute loader and the relocating loader.
A loader is a program that places a program's instructions and data items into primary storage locations. An absolute loader places instructions and data into the precise locations indicated in the machine-language program, while a relocating loader may load a program at various places in primary storage, depending on the availability of space in primary storage at load time.
Shell:
The shell is the most important system program of the operating system. It is a program which accepts the user's commands from the command line and executes them. A shell is not only a command interpreter and a line editor but also a language with variables, arrays, functions and control statements. It is not graphics-oriented. The shell uses a prompt such as $, %, or # to take a command, and it carries out commands via system calls. The editor, compiler, linker and command interpreter are not part of the kernel, but they play a very important role in using the computer system. The shell provides the primary interface between the user and the operating system. When the user logs in to the computer, the shell is started up: it prints the prompt (a sign like $ or #) and waits to accept a user command.
For example, in DOS, if the user types date, the shell creates a child process and runs that program as the child. After the child process completes, the shell gives the result (output) and waits for another command.
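A toy sketch of that read-prompt-run-wait cycle, using Python's subprocess module instead of a real fork/exec and a "$" prompt chosen for illustration:

import subprocess

# Toy command loop: print a prompt, read a command, run it as a child
# process, wait for it to finish, then prompt again. Type "exit" to stop.
while True:
    try:
        command = input("$ ").strip()
    except EOFError:
        break
    if command == "exit":
        break
    if command:
        subprocess.run(command, shell=True)   # the child runs; the shell waits for it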

Model of operating system


Operating system structure:
An operating system might have many structures. According to the structure of the operating system, operating systems can be classified into many categories.
Some of the main structures used in operating systems are:
1. Monolithic architecture of operating system
Figure: monolithic structure of an operating system
It is the oldest architecture used for developing operating systems. The entire operating system resides in the kernel. A system call is involved, i.e. switching from user mode to kernel mode and transferring control to the operating system, shown as event 1. Many CPUs have two modes: kernel mode, for the operating system, in which all instructions are allowed, and user mode, for user programs, in which I/O and certain other instructions are not allowed. The operating system then examines the parameters of the call to determine which system call is to be carried out, shown as event 2. Next, the operating system indexes into a table that contains the procedures that carry out the system calls; this operation is shown as event 3. Finally, when the work has been completed and the system call is finished, control is given back to user mode, as shown as event 4.
2. Layered Architecture of operating system
The layered architecture of operating systems was developed in the 1960s. In this approach, the operating system is broken up into a number of layers. The bottom layer (layer 0) is the hardware layer and the highest layer (layer N) is the user interface layer, as shown in the figure.
Figure: layered architecture
The layers are selected such that each layer uses the functions and services of only lower-level layers. The first layer can be debugged without any concern for the rest of the system, because it uses only the basic hardware to implement its functions. Once the first layer is debugged, its correct functioning can be assumed while the second layer is debugged, and so on. If an error is found during the debugging of a particular layer, the error must be in that layer, because the layers below it have already been debugged. Because of this, the design of the system is simplified when the operating system is broken up into layers.
The OS/2 operating system is an example of the layered architecture of operating systems; another example is the earlier versions of Windows NT.
The main disadvantage of this architecture is that it requires an appropriate definition of the various layers and careful planning of the proper placement of each layer.
3. Virtual machine architecture of operating system
Figure: virtual machine architecture
A virtual machine is an illusion of a real machine. It is created by a real machine's operating system, which makes a single real machine appear to be several real machines. The architecture of a virtual machine system is shown above.
The best example of the virtual machine architecture is the IBM 370 computer. In this system each user can choose a different operating system; in fact, the virtual machine system can run several operating systems at once, each of them on its own virtual machine.
Like multiprogramming, it shares the resources of a single machine, but in a different manner.
The main components of the virtual machine system are:-
a. Control Program (CP): CP creates the environment in which virtual machines can execute. It gives each user the facilities of a real machine, such as a processor, storage and I/O devices.
b. Conversational Monitor System (CMS): CMS is a system application with facilities for developing programs. It contains an editor, language translators, and various application packages.
c. Remote Spooling Communication System (RSCS): provides virtual machines with the ability to transmit and receive files in a distributed system.
d. Interactive Problem Control System (IPCS): used to fix virtual machine software problems.
4. Client/server architecture of operating system
A trend in modern operating systems is to move as much code as possible into the higher levels and remove it from the operating system, minimising the work of the kernel. The basic approach is to implement most of the operating system functions in user processes. To request a service, such as reading a particular file, the user (client) process sends a request to the server process; the server checks whether the parameters are valid, then does the work and sends back the answer to the client. The client/server model works on a request-response technique, i.e. the client always sends a request to the server side in order to get a task performed, and on the other side the server, on completing that request, sends back a response. The figure below shows the client/server architecture.
Figure: client-server model
In this model, the main task of the kernel is to handle all the communication between the clients and the servers, achieved by splitting the operating system into a number of parts, each of which handles only some specific task: e.g. the file server, process server, terminal server and memory server.
Another advantage of the client-server model is its adaptability for use in distributed systems. If a client communicates with a server by sending it messages, the client need not know whether the message is handled locally on its own machine or was sent across a network to a server on a remote machine. As far as the client is concerned, the same thing happens in both cases: a request is sent and a reply comes back.
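A minimal sketch of the request-response technique, with a pretend file server process and a client exchanging messages over queues; the message format is invented for illustration:

from multiprocessing import Process, Queue

def file_server(requests, responses):
    while True:
        request = requests.get()                 # wait for a client request
        if request is None:                      # shutdown message
            break
        op, name = request
        if op == "read":
            responses.put(("ok", "contents of " + name))   # pretend to read the file
        else:
            responses.put(("error", "unknown operation"))

if __name__ == "__main__":
    requests, responses = Queue(), Queue()
    server = Process(target=file_server, args=(requests, responses))
    server.start()
    requests.put(("read", "notes.txt"))          # the client sends a request...
    print(responses.get())                       # ...and gets the response back
    requests.put(None)                           # tell the server to stop
    server.join()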