Published August 26, 2022 | Version v1
Journal article | Open Access

Parallel Computational Models

Description

The purpose of this study is to examine the advantages of parallel computing. The term "parallel computing" refers to a strategy for exploiting all of a system's resources in order to maximize performance and programmability within time and cost constraints. The main motivations are to improve performance, reduce cost, and deliver accurate results. Look-ahead, pipelining, vectorization, concurrency, multitasking, multiprogramming, time sharing, multithreading, and distributed systems are just a few of the methods that can be used to express parallelism. Parallel computing is achieved by dividing a task into its component parts and assigning each part to a different processor, thereby reducing the time required to complete a program, as illustrated in the sketch below.
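The article itself contains no code; the following Python sketch is only a minimal illustration of the divide-and-assign idea described above. It splits a sum-of-squares computation into chunks and assigns each chunk to a separate worker process. The function names, the chunking scheme, and the choice of multiprocessing are assumptions for illustration, not taken from the paper.

```python
from multiprocessing import Pool, cpu_count

def partial_sum(chunk):
    """Compute the result for one slice of the data (one 'component part')."""
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=None):
    """Divide the data across worker processes and combine the partial results."""
    workers = workers or cpu_count()
    chunk_size = (len(data) + workers - 1) // workers
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with Pool(processes=workers) as pool:
        # Each chunk is handled by a different processor, so the
        # partial sums are computed concurrently.
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    numbers = list(range(1_000_000))
    print(parallel_sum_of_squares(numbers))
```

With enough independent work per chunk, the wall-clock time approaches the sequential time divided by the number of workers, which is the speedup the abstract alludes to.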

Files

V10I806.pdf (645.2 kB)
