Distributed Operating Systems


A distributed operating system (DOS) is an operating system that runs across a collection of machines. Distributed systems use multiple central processors to serve multiple real-time applications and users, so data-processing jobs are distributed among the processors.

It connects multiple computers via a communication network. Each of these systems has its own processor and memory, and the CPUs communicate over high-speed buses or telephone lines. Individual systems that communicate over a single channel appear to users as a single entity. They are also known as loosely coupled systems.

This operating system consists of numerous computers, nodes, and sites joined together via LAN/WAN lines. It enables complete jobs to be distributed among several central processors, and it supports many real-time products and multiple users. Distributed operating systems can share their computing resources and I/O files while providing users with a virtual-machine abstraction.

Types of Distributed Operating System

There are various types of distributed operating systems. Some of them are as follows:

  1. Client-Server Systems
  2. Peer-to-Peer Systems
  3. Middleware
  4. Three-tier
  5. N-tier

Client-Server System

In this type of system, the client requests a resource and the server provides the requested resource. A server may serve multiple clients at the same time.

Client-Server Systems are also referred to as “Tightly Coupled Operating Systems”. This kind of system is primarily intended for multiprocessors and homogeneous multicomputers. Client-Server Systems function as a centralized system: the server handles all requests issued by client systems.

Server systems can be divided into two parts:

1. Compute Server System

This system provides an interface to which the client sends requests to perform an action. After completing the action, the server sends the result back to the client.

2. File Server System

It provides a file system interface for clients, allowing them to execute actions like file creation, updating, deletion, and more.
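The request/response pattern described above can be sketched with plain TCP sockets. This is a minimal illustration, not a real distributed OS component; the port choice, message format, and upper-casing “action” are assumptions made for the example.

```python
# Minimal sketch of the client-server pattern: the client requests an
# action, the server performs it and sends the result back.
import socket
import threading

def serve_once(server_sock):
    """Accept one client, execute its request (here: upper-casing), reply."""
    conn, _ = server_sock.accept()
    with conn:
        request = conn.recv(1024)      # client's request
        conn.sendall(request.upper())  # perform the "action", send result back

def main():
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
    server.listen(1)
    port = server.getsockname()[1]

    t = threading.Thread(target=serve_once, args=(server,))
    t.start()

    # Client side: request a resource, receive the result.
    with socket.create_connection(("127.0.0.1", port)) as client:
        client.sendall(b"compute this")
        reply = client.recv(1024)
    t.join()
    server.close()
    return reply

if __name__ == "__main__":
    print(main())  # b'COMPUTE THIS'
```

A real compute or file server would parse structured requests and serve many clients concurrently, but the request-execute-respond loop is the same.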

Peer-to-Peer System

The nodes play an important role in this system. The task is evenly distributed among the nodes. Additionally, these nodes can share data and resources as needed. Once again, they require a network to connect.

The Peer-to-Peer System is known as a “Loosely Coupled System”. This concept is used in computer network applications, since such systems contain a large number of processors that share neither memory nor clocks. Each processor has its own local memory, and the processors interact with one another via a variety of communication media such as telephone lines or high-speed buses.


Middleware

Middleware enables interoperability among applications running on different operating systems. Using middleware services, those applications can exchange data with one another.


Three-Tier

In a three-tier system, information about the client is stored in the middle tier rather than on the client, which simplifies development. This type of architecture is most commonly used in web applications.

Advantages of Distributed Operating System

  • It may share all resources (CPU, disk, network interface, nodes, computers, and so on) from one site to another, increasing data availability across the entire system.
  • Replicating data across sites reduces the risk of data loss; if one site fails, the user can access data from another operational site.
  • The sites operate independently of one another, so if one site crashes, the entire system does not halt.
  • It increases the speed of data exchange from one site to another.
  • It is an open system, since it may be accessed from both local and remote locations.
  • It helps reduce data processing time.
  • Most distributed systems are made up of several nodes that interact to make them fault-tolerant. If a single machine fails, the system remains operational.
Disadvantages of Distributed Operating System

There are various disadvantages of the distributed operating system. Some of them are as follows:

  1. The system must decide which jobs should be executed, when, and where. A scheduler has limitations, which can lead to underutilized hardware and unpredictable runtimes.
  2. It is hard to implement adequate security in a DOS, since both the nodes and the connections must be secured.
  3. A database connected to a DOS is relatively complicated and hard to manage in contrast to a single-user system.
  4. The underlying software is extremely complex and is not well understood compared to other systems.
  5. The more widely distributed a system is, the more communication latency can be expected. As a result, teams and developers must trade off availability, consistency, and latency.
  6. These systems aren’t widely available because they’re thought to be too expensive.
  7. Gathering, processing, presenting, and monitoring hardware-usage metrics for big clusters can be a real issue.


Design Issues of Distributed Systems

A distributed information system is defined as “a number of interdependent computers linked by a network for sharing information among them”. It consists of multiple autonomous computers that communicate or exchange information through a computer network. The main design issues of a distributed system are the following:

  1. Heterogeneity: Heterogeneity applies to the network, computer hardware, operating systems, and the implementations of different developers. A key component of a heterogeneous distributed client-server environment is middleware, a set of services that enables applications and end users to interact with each other across a heterogeneous distributed system.
  2. Openness: The openness of a distributed system is determined primarily by the degree to which new resource-sharing services can be made available to users. Open systems are characterized by the fact that their key interfaces are published. An open system is based on a uniform communication mechanism and published interfaces for access to shared resources, and it can be constructed from heterogeneous hardware and software.
  3. Scalability: The system should remain efficient even with a significant increase in the number of users and resources connected.
  4. Security: The security of an information system has three components: confidentiality, integrity, and availability. Encryption protects shared resources and keeps sensitive information secret when it is transmitted.
  5. Failure Handling: When faults occur in hardware or software, programs may produce incorrect results or may stop before completing the intended computation, so corrective measures should be implemented to handle such cases. Failure handling is difficult in distributed systems because failures are partial: some components fail while others continue to function.
  6. Concurrency: Several clients may attempt to access a shared resource at the same time, issuing read, write, and update requests on the same resources. Any object that represents a shared resource in a distributed system must ensure that it operates correctly in a concurrent environment.
  7. Transparency: Transparency ensures that the distributed system is perceived by users and application programmers as a single entity rather than as a collection of cooperating autonomous systems. The user should be unaware of where services are located, and moving from a local machine to a remote one should be transparent.
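The concurrency issue above — several clients performing read/write/update operations on the same resource — can be illustrated on a single machine with a lock protecting a shared counter. This is a local sketch of the principle, not distributed code; without the lock, concurrent read-modify-write updates can be lost.

```python
# Several "clients" (threads) update one shared resource; a lock makes
# each read-modify-write atomic so no update is lost.
import threading

counter = 0
lock = threading.Lock()

def client(n_updates):
    global counter
    for _ in range(n_updates):
        with lock:           # the shared resource must be safe under concurrency
            counter += 1     # read-modify-write, now atomic

threads = [threading.Thread(target=client, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000 — every update survived
```

In a real distributed system the same guarantee must be provided without shared memory, e.g. with distributed locks or by funneling updates through a single server.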

Communication Primitives

Message Passing Primitives

Message passing is one form of communication between two processes: a physical copy of the message is sent from one process to the other.

“In the message passing model, several processes run in parallel and communicate with one another by sending and receiving messages. The processes do not have access to shared memory.”

Message Passing Primitive Commands

  • SEND (msg, dest)
  • RECEIVE (src, buffer)

This is a low-level approach to IPC that puts the burden of communication on the programmer.
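The SEND/RECEIVE primitives above can be sketched with Python's `multiprocessing` queues: the processes share no memory, and each message is physically copied from sender to receiver, matching the model described. The doubling "computation" is an arbitrary stand-in for real work.

```python
# Sketch of SEND (msg, dest) and RECEIVE (src, buffer) using message
# queues between two processes that share no memory.
from multiprocessing import Process, Queue

def worker(inbox, outbox):
    msg = inbox.get()      # RECEIVE (src, buffer): block until a message arrives
    outbox.put(msg * 2)    # SEND (msg, dest): copy the result back

def main():
    to_worker, from_worker = Queue(), Queue()
    p = Process(target=worker, args=(to_worker, from_worker))
    p.start()
    to_worker.put(21)              # SEND to the worker process
    result = from_worker.get()     # RECEIVE the worker's reply
    p.join()
    return result

if __name__ == "__main__":
    print(main())  # 42
```

Note that the programmer must manage the message flow explicitly — which message goes where, and in what order — which is exactly the burden the text refers to.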

Message passing is the basis of MPI (Message Passing Interface) and PVM (Parallel Virtual Machine), both of which provide libraries of message passing commands in C/C++/Fortran.

Buffering Methods:

  • Standard method: user buffer -> sender’s kernel buffer -> receiver’s kernel buffer -> user buffer
  • Unbuffered method: user buffer -> user buffer. The sender and receiver need to know whether the buffer at the other end is in use, or whether it is static.

Synchronous methods: the two communicating processes rendezvous; no buffer is used.

Asynchronous methods: the message is simply buffered. This raises several issues:

  • Buffers need to be allocated and deallocated.
  • What if the buffer is full or empty?
  • What if the receiver’s machine dies?
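The buffer-full problem of asynchronous message passing can be modeled locally with a bounded queue standing in for a kernel buffer. This is an illustration under that assumption, not how any particular kernel implements it: when the buffer is full, a non-blocking send fails and the sender must block, drop, or retry.

```python
# A bounded queue models a "kernel buffer" for asynchronous sends.
import queue

buffer = queue.Queue(maxsize=2)   # buffer with room for only 2 messages

buffer.put("m1")                  # asynchronous send: just buffer the message
buffer.put("m2")

overflow = False
try:
    buffer.put_nowait("m3")       # buffer full -> the send cannot complete
except queue.Full:
    overflow = True               # sender must now block, drop, or retry

m = buffer.get()                  # receiver drains one message...
buffer.put_nowait("m3")           # ...and the retried send succeeds

print(overflow, m)  # True m1
```

A blocking `put()` would instead make the sender wait — effectively degrading the asynchronous send into a synchronous one whenever the buffer is full.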

Inherent Limitations

A distributed system is a collection of self-governing computer systems capable of communicating and cooperating with one another through interconnections between their hardware and software. It is a collection of loosely coupled processors that appears to its users as a single coherent system. Distributed systems have inherent limitations: in particular, there is no global state. This differentiates distributed computing from databases, in which a consistent global state is maintained.

These limitations affect both the design and the implementation of distributed systems. There are two main limitations:

  1. Absence of a Global Clock
  2. Absence of Shared Memory

These two limitations are explained below.

1. Absence of a Global Clock:
In a distributed system there are many machines, and each has its own clock. Each clock runs at a slightly different rate or granularity, so the clocks drift apart: even if they are initially synchronized, after a short time they fall out of synchronization, and no clock has the exact time.
Time must be known to a certain precision because it is used in a distributed system for the following:

  • Temporal ordering of events
  • Collecting up-to-date information on the state of the system
  • Scheduling of processes

Because message passing is asynchronous, there are limits on the precision with which processes in a distributed system can synchronize their clocks. Each clock can be synchronized against a more reliable clock, but transmission and execution delays cause the clocks to drift apart again. The absence of a global clock makes designing and debugging distributed algorithms more difficult.
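A standard technique for ordering events without a global clock — not described in the text above, but widely used — is Lamport's logical clock: each process keeps a counter, increments it on every local event, stamps outgoing messages with it, and on receipt advances to one past the maximum of its own and the sender's value. A minimal sketch:

```python
# Lamport logical clock: orders causally related events without any
# physical global clock.
class LamportClock:
    def __init__(self):
        self.time = 0

    def tick(self):
        """Advance on a local event."""
        self.time += 1
        return self.time

    def send(self):
        """Stamp an outgoing message with the current logical time."""
        return self.tick()

    def receive(self, msg_time):
        """Merge the sender's timestamp with our own on message receipt."""
        self.time = max(self.time, msg_time) + 1
        return self.time

p1, p2 = LamportClock(), LamportClock()
t_send = p1.send()           # p1 sends a message stamped 1
p2.tick(); p2.tick()         # p2 has already seen two local events
t_recv = p2.receive(t_send)  # p2's clock jumps to max(2, 1) + 1 = 3

print(t_send, t_recv)  # 1 3
```

The receive timestamp is always greater than the send timestamp, so causally related events are consistently ordered — even though no process knows the "real" time.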

2. Absence of Shared Memory:
Distributed systems have no physically shared memory; every computer in the distributed system has its own private physical memory. Because the computers do not share memory, no single machine can directly observe the global state of the entire distributed system. A process can obtain a coherent view of the system, but that view is only a partial view.
Because there is no global state, it is challenging to evaluate any global property of the system: the global state is partitioned among many computers into smaller pieces.


