
Top 10 List of Week 06

In this Top 10 list, I will go over the materials I learned during week 6 of my Operating Systems course. Each point contains a clickable link that will take you to the website I learned the material from, my thoughts and review of the page itself, and a short summary of the material.

One Concurrency in Operating System

Concurrency is the execution of multiple instruction sequences at the same time. It happens in an operating system when several process threads run in parallel. The running threads communicate with each other through shared memory or message passing. Because concurrency involves sharing resources, it can lead to problems such as deadlocks and resource starvation. It also underlies techniques such as coordinating the execution of processes, memory allocation, and execution scheduling to maximize throughput.
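To make the message-passing idea concrete, here is a minimal C sketch: a parent and a child process run concurrently and communicate through a pipe. The message text and buffer size are purely illustrative, and error checking is omitted for brevity.

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];
    pipe(fd);                       /* fd[0] = read end, fd[1] = write end */

    if (fork() == 0) {              /* child process runs concurrently with the parent */
        close(fd[0]);
        const char *msg = "hello from the child";
        write(fd[1], msg, strlen(msg) + 1);
        close(fd[1]);
        return 0;
    }

    close(fd[1]);
    char buf[64];
    read(fd[0], buf, sizeof(buf));  /* parent blocks until the message arrives */
    close(fd[0]);
    wait(NULL);
    printf("parent received: %s\n", buf);
    return 0;
}
```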

Two What Is An Interrupt?

An interrupt is a signal sent from a device or from software to the operating system. The interrupt signal causes the operating system to temporarily stop what it is doing and ‘service’ the interrupt. The interrupt handler is the part of the operating system responsible for dealing with interrupt signals. It prioritises interrupts as they are received, placing them into a queue as necessary. For every interrupt, the current task is stopped and its status saved (so it can resume later).
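Real interrupt handlers live inside the kernel, but POSIX signals give a rough user-space analogue you can actually run: the kernel suspends whatever the program was doing, runs the registered handler, and then resumes the program. This is only a sketch of the idea, not how an operating system's own interrupt handler is written.

```c
#include <stdio.h>
#include <signal.h>
#include <unistd.h>

/* Plays the role of an interrupt service routine: the kernel suspends main(),
   runs this function, then lets main() resume where it left off. */
static volatile sig_atomic_t got_signal = 0;

static void handler(int signum) {
    got_signal = 1;                 /* keep the handler short: just record the event */
}

int main(void) {
    signal(SIGINT, handler);        /* register the handler for Ctrl-C */
    while (!got_signal) {
        printf("working...\n");
        sleep(1);                   /* interrupted and resumed when SIGINT arrives */
    }
    printf("interrupt handled, resuming and shutting down\n");
    return 0;
}
```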

Three What is the difference between an OS and a Framework?

Framework: a reusable design for a software system (or subsystem). Framework is a very broad term; it can be used to describe many types of subsystems, and could even describe an operating system.

Operating System: the infrastructure software component of a computer system. Operating system is a more specific term; it implies facilitating interaction with a computer's (or group of computers') hardware layer, often through human user interfaces. I think Azure fits this description.

Four Advantages of Using Concurrency

Concurrency gives the following two advantages:

Five The Difference Between Dispatcher and Context Switcher

When an interrupt occurs, the CPU hands control to system-level code. This code is responsible for saving the context of the interrupted task, establishing a context to run system-level code, and restoring the context of a (possibly different) interrupted task. That is what I would call the context switcher.

The term dispatching is associated with scheduling and means roughly selecting the next task to run.

So in a typical task switch, for example one caused by a timer interrupt, the context switcher first saves the context of the interrupted task, establishes the context to run system code, and then calls the dispatcher, whose job is to select the next task to switch to. That task is returned to the context switcher, which restores the associated context.
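A real context switch happens in kernel mode with hardware support, but the (now semi-deprecated) POSIX ucontext API lets us sketch the same division of labour in user space: a dispatch() function picks the next task, and swapcontext() plays the context switcher by saving one context and restoring another. The task bodies and the round-robin policy here are purely illustrative.

```c
#include <stdio.h>
#include <ucontext.h>

#define STACK_SIZE (64 * 1024)

static ucontext_t main_ctx, task_ctx[2];
static char stacks[2][STACK_SIZE];
static int current = -1;

/* Dispatcher: selects the next task to run (simple round robin over two tasks). */
static int dispatch(void) {
    return (current + 1) % 2;
}

/* Context switcher, task side: save this task's context, restore the scheduler's. */
static void yield_to_scheduler(void) {
    swapcontext(&task_ctx[current], &main_ctx);
}

static void task(void) {
    for (int i = 0; i < 3; i++) {
        printf("task %d, step %d\n", current, i);
        yield_to_scheduler();
    }
}

int main(void) {
    for (int t = 0; t < 2; t++) {
        getcontext(&task_ctx[t]);
        task_ctx[t].uc_stack.ss_sp = stacks[t];
        task_ctx[t].uc_stack.ss_size = STACK_SIZE;
        task_ctx[t].uc_link = &main_ctx;           /* return to the scheduler when a task ends */
        makecontext(&task_ctx[t], task, 0);
    }
    /* Scheduler loop: the dispatcher picks a task, the context switcher restores it. */
    for (int round = 0; round < 6; round++) {
        current = dispatch();
        swapcontext(&main_ctx, &task_ctx[current]);
    }
    return 0;
}
```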

Six The Need For Synchronization

When multiple processes depend on a resource and need to access it at the same time, the operating system must ensure that only one process accesses it at any given point in time. This reduces concurrency.
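A mutual-exclusion lock is the most common way to get this guarantee. The sketch below (plain POSIX threads, illustrative names) has two threads updating a shared counter; the mutex ensures only one of them is inside the critical section at a time, so the final count is always 200000, whereas without the lock it would often come out lower. Compile with gcc -pthread.

```c
#include <stdio.h>
#include <pthread.h>

long counter = 0;                                   /* the shared resource */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;   /* guards the counter */

void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);                  /* only one thread may enter at a time */
        counter++;                                  /* critical section */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);             /* always 200000 with the lock */
    return 0;
}
```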

Seven How To Achieve Concurrency?

Consider being given two tasks: cooking and speaking to your friend over the phone. You could do these two things simultaneously, cooking while you speak over the phone; you would then be doing your tasks in parallel. Parallelism means performing two or more tasks simultaneously. In computer science, parallel computing refers to performing multiple calculations at the same time.

Concurrency and parallelism describe how our tasks or computations are carried out. In a single-core environment, concurrency happens with tasks executing over the same time period via context switching, i.e. at any particular instant only a single task is executing. In a multi-core environment, concurrency can be achieved via parallelism, in which multiple tasks are executed simultaneously.
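As a small illustration of the single-core vs multi-core point, the sketch below (task names borrowed from the cooking/phone analogy, purely illustrative) starts two threads and prints how many cores the OS reports. On one core the two threads are interleaved by context switching (concurrency); with two or more cores they may genuinely run at the same instant (parallelism).

```c
#include <stdio.h>
#include <unistd.h>
#include <pthread.h>

void *cook(void *arg)  { printf("cooking...\n"); return NULL; }
void *phone(void *arg) { printf("talking on the phone...\n"); return NULL; }

int main(void) {
    long cores = sysconf(_SC_NPROCESSORS_ONLN);   /* how many cores the OS reports */
    printf("online cores: %ld\n", cores);

    pthread_t a, b;
    pthread_create(&a, NULL, cook, NULL);
    pthread_create(&b, NULL, phone, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}
```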

Eight Process State Diagram In An Operating System

A process is a program that is currently in execution. A program by itself is not a process: it is a passive entity, just like the contents of a file stored on disk, while a process is an active entity. A process also includes the process stack, which contains temporary data (such as local variables, function parameters, and return addresses); a data section, which contains global variables; heap memory allocated to the process at run time; and a process state that defines its current state.

A process changes its state during its execution. Each process may be in one of the following states: new (the process is being created), ready (waiting to be assigned to the CPU), running (its instructions are being executed), waiting (blocked until some event such as I/O completes), and terminated (the process has finished execution).
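As a hedged sketch of how this might be represented: the enum below is just the textbook five-state model and the struct is a heavily stripped-down stand-in for a process control block, not any real kernel's definition.

```c
#include <stdio.h>

/* The classic five-state model, as a plain C enum. */
enum process_state { NEW, READY, RUNNING, WAITING, TERMINATED };

/* A stripped-down stand-in for a process control block (PCB). */
struct process {
    int pid;
    int priority;
    enum process_state state;
};

int main(void) {
    struct process p = { .pid = 1234, .priority = 5, .state = NEW };
    p.state = READY;       /* admitted by the long-term scheduler */
    p.state = RUNNING;     /* picked by the short-term scheduler / dispatcher */
    p.state = WAITING;     /* blocked, e.g. waiting for I/O to finish */
    p.state = READY;       /* I/O done, back in the ready queue */
    p.state = TERMINATED;
    printf("process %d ended in state %d\n", p.pid, p.state);
    return 0;
}
```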

Nine Difference between Long-Term and Short-Term Scheduler

The long-term scheduler is also known as the job scheduler. It regulates which programs are admitted to the system for processing: programs are placed in a queue, and the most suitable jobs are selected from the job pool as required. It controls the degree of multiprogramming (DOM).

The short-term scheduler is also known as the CPU scheduler. It decides which of the ready processes should be executed next by the CPU. It has less control over the degree of multiprogramming (DOM).

Ten Difference between Process and Thread

Process: a process is any program in execution. The process control block controls the operation of a process and contains information about it, for example the process priority, process ID, process state, CPU registers, etc. A process can create other processes, which are known as child processes. A process takes more time to terminate and is isolated, meaning it does not share memory with any other process.

Thread: a thread is a segment of a process, meaning a process can have multiple threads contained within it. A thread has three states: running, ready, and blocked. A thread takes less time to terminate than a process and, unlike a process, threads are not isolated: they share memory with the other threads of the same process.
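The memory-isolation difference is easy to demonstrate. In the sketch below (variable names illustrative), a child process created with fork() writes to a global variable and the parent never sees the change, while a thread created with pthread_create() writes to the same variable and the change is visible, because threads share the process's address space. Compile with gcc -pthread.

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <pthread.h>

int shared = 0;                    /* global variable in the data section */

void *thread_fn(void *arg) {
    shared = 42;                   /* a thread writes to the SAME address space */
    return NULL;
}

int main(void) {
    /* Child process: gets its own copy of the address space, so its write stays private. */
    pid_t pid = fork();
    if (pid == 0) {
        shared = 100;
        exit(0);
    }
    waitpid(pid, NULL, 0);
    printf("after child process: shared = %d\n", shared);   /* still 0 */

    /* Thread: shares the address space, so its write is visible here. */
    pthread_t t;
    pthread_create(&t, NULL, thread_fn, NULL);
    pthread_join(t, NULL);
    printf("after thread:        shared = %d\n", shared);   /* now 42 */
    return 0;
}
```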