Disciplined concurrent programming can improve the structure and performance of computer programs on both uniprocessor and multiprocessor systems. As a result, support for threads, or lightweight processes, has become a common element of new operating systems and programming languages. A thread is a sequential stream of instruction execution (Anderson1). A thread is the basic unit of CPU utilization. It consists of a thread ID, a program counter, a register set, and a stack. Threads of the same process share its code section, data section, and other operating system resources (Silberschatz 127).
Because of this sharing, one thread can read or write another thread's data; there is no protection between the threads of the same process. Threads were originally designed to allow parallelism within sequential execution and to permit blocking system calls (Choudhary 5).
A single thread executes a portion of a program while cooperating with other threads that are concurrently executing the same program. Much of what is normally kept on a per-heavyweight-process basis can be maintained in common for all threads in a single program, yielding dramatic reductions in the overhead and complexity of a concurrent program. The basic idea is to represent a single task, such as fetching a particular block, within a single thread of control, and to rely on the thread management system to multiplex concurrent activities onto the available processors. In this way, the programmer can consider each function performed by the system separately, and simply rely on automatic scheduling mechanisms to assign the available processing power (Anderson2).