Parallel Processing, Grid computing & Clusters

SERIAL COMPUTING

Traditionally, software has been written for serial computation:

• A problem is broken into a discrete series of instructions
• Instructions are executed sequentially, one after another
• All instructions are executed on a single processor
• Only one instruction may execute at any moment in time
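The points above can be sketched in code. This is an illustrative example (the problem and function name are hypothetical): summing squares is broken into a discrete series of instructions that one processor executes strictly one after another.

```python
# Serial computation: a single instruction stream on a single processor.
# Each loop iteration runs only after the previous one has finished.

def serial_sum_of_squares(numbers):
    total = 0
    for n in numbers:      # instructions execute sequentially
        total += n * n     # only one instruction at any moment in time
    return total

print(serial_sum_of_squares(range(1, 5)))  # 1 + 4 + 9 + 16 = 30
```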


PARALLEL COMPUTING

In the simplest sense, parallel computing is the simultaneous use of multiple compute resources to solve a computational problem:

• A problem is broken into discrete parts that can be solved concurrently
• Each part is further broken down into a series of instructions
• Instructions from each part execute simultaneously on different processors
• An overall control/coordination mechanism is employed
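The decomposition above can be sketched with Python's standard-library process pool. This is a minimal illustration, not a production pattern: the problem, chunking scheme and function names are hypothetical; the pool plays the role of the coordination mechanism.

```python
# Parallel computing sketch: the problem (summing squares) is broken into
# discrete parts, each part runs on a different worker process, and the
# partial results are combined at the end.
from multiprocessing import Pool

def part_sum_of_squares(chunk):
    # the series of instructions for one part, executed on one worker
    return sum(n * n for n in chunk)

def parallel_sum_of_squares(numbers, workers=4):
    chunks = [numbers[i::workers] for i in range(workers)]  # decomposition
    with Pool(workers) as pool:
        partials = pool.map(part_sum_of_squares, chunks)    # simultaneous execution
    return sum(partials)                                    # coordination/combination

if __name__ == "__main__":
    print(parallel_sum_of_squares(list(range(1, 101))))     # same answer as serial: 338350
```

Note that the answer is identical to the serial version; only the execution strategy differs.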


PARALLEL COMPUTING

The simultaneous use of more than one CPU or processor core to execute a program or multiple computational threads.

PARALLEL COMPUTING

Ideally, parallel processing makes a program run faster because there are more engines (CPUs or cores) running it.

In practice, it is often difficult to divide a program in such a way that separate CPUs or cores can execute different portions without interfering with each other. Most computers have just one CPU, but some models have several, and multicore processor chips are becoming the norm. There are even computers with thousands of CPUs.

With single-CPU, single-core computers, it is possible to perform parallel processing by connecting the computers in a network. However, this type of parallel processing requires very sophisticated software, called distributed processing software.

DIFFERENCE BETWEEN PARALLELISM & CONCURRENCY

• Concurrency is a term used in the operating systems and databases communities. It refers to the property of a system in which multiple tasks remain logically active and make progress at the same time by interleaving their execution order, thereby creating an illusion of simultaneously executing instructions.

• Parallelism, on the other hand, is a term typically used by the supercomputing community to describe executions that physically execute simultaneously, with the goal of solving a problem in less time or solving a larger problem in the same time. Parallelism exploits concurrency.

• Parallel processing is also called parallel computing. Sophisticated distributed computing software can effectively use idle processor cycles across a network.
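The distinction can be illustrated with Python's two standard executors. This is a sketch under stated assumptions (the task and helper names are hypothetical): threads under CPython's GIL give concurrency (interleaved progress on shared cores), while separate processes give parallelism (physically simultaneous execution). Both produce the same result.

```python
# Concurrency vs parallelism, illustrated with the two stdlib executors.
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def square(n):
    return n * n

def run(executor_cls, values):
    # same logical program; only the execution model differs
    with executor_cls(max_workers=2) as ex:
        return list(ex.map(square, values))

if __name__ == "__main__":
    values = [1, 2, 3, 4]
    print(run(ThreadPoolExecutor, values))   # concurrent: tasks interleave
    print(run(ProcessPoolExecutor, values))  # parallel: tasks run simultaneously
```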


WHY USE PARALLEL COMPUTING

Compared to serial computing, parallel computing is much better suited for modeling, simulating and understanding complex, real-world phenomena.

Main reasons to use parallel computing:

• Save time and/or money: In theory, throwing more resources at a task will shorten its time to completion, with potential cost savings. Parallel clusters can be built from cheap, commodity components.

• Solve larger problems: Many problems are so large and/or complex that it is impractical or impossible to solve them on a single computer, especially given limited computer memory. Example: web search engines/databases processing millions of transactions per second.

WHY USE PARALLEL COMPUTING

• Provide concurrency: A single compute resource can only do one thing at a time. Multiple compute resources can be doing many things simultaneously. For example, the Access Grid (www.accessgrid.org) provides a global collaboration network where people from around the world can meet and conduct work "virtually".

• Limits to serial computing: Both physical and practical reasons pose significant constraints to simply building ever faster serial computers:

  • Transmission speeds - the speed of a serial computer is directly dependent upon how fast data can move through hardware. Increasing speeds necessitate increasing proximity of processing elements.


WHY USE PARALLEL COMPUTING

  • Limits to miniaturization - processor technology is allowing an increasing number of transistors to be placed on a chip. However, even with molecular or atomic-level components, a limit will be reached on how small components can be.

  • Economic limitations - it is increasingly expensive to make a single processor faster. Using a larger number of moderately fast commodity processors to achieve the same (or better) performance is less expensive.

Current computer architectures are increasingly relying upon hardware-level parallelism to improve performance:

• Multiple execution units
• Pipelined instructions
• Multi-core processors

WHY USE PARALLEL COMPUTING

In summary:

• Absolute physical limits of hardware components
• Economic reasons - more complex = more expensive
• Performance limits
• Large applications - demand too much memory & time

Advantages:

• Increased speed & optimized resource utilization

Disadvantages:

• Complex programming models - difficult development


WHO IS USING PARALLEL COMPUTING

Several application areas of parallel processing:

• Scientific computation
• Digital biology
• Aerospace
• Resource exploration

WHO IS USING PARALLEL COMPUTING

• Science and Engineering: to model difficult problems in many areas of science and engineering.

• Industrial and Commercial: databases, web search engines, advanced graphics and virtual reality (particularly in the entertainment industry), networked video and multimedia technologies, etc.

• Global Applications: parallel computing is now being used extensively around the world, in a wide variety of applications.


GRID COMPUTING

• Grid computing is a term referring to the combination of computer resources from multiple administrative domains to reach a common goal. The grid can be thought of as a distributed system with non-interactive workloads that involve a large number of files.

• What distinguishes grid computing from conventional high-performance computing systems such as cluster computing is that grids tend to be more loosely coupled, heterogeneous, and geographically dispersed.

• Although a grid can be dedicated to a specialized application, it is more common that a single grid is used for a variety of different purposes. Grids are often constructed with the aid of general-purpose grid software libraries known as middleware.

GRID COMPUTING

• Grid size can vary by a considerable amount. Grids are a form of distributed computing in which a "super virtual computer" is composed of many networked, loosely coupled computers acting together to perform very large tasks.

• "Distributed" or "grid" computing in general is a special type of parallel computing that relies on complete computers (with onboard CPUs, storage, power supplies, network interfaces, etc.) connected to a network (private, public or the Internet) by a conventional network interface, such as Ethernet. This is in contrast to the traditional notion of a supercomputer, which has many processors connected by a local high-speed computer bus.


COMPARISON OF GRIDS & CONVENTIONAL COMPUTERS

• The primary advantage of distributed computing is that each node can be purchased as commodity hardware which, when combined, can produce a computing resource similar to a multiprocessor supercomputer, but at a lower cost. This is due to the economies of scale of producing commodity hardware, compared to the lower efficiency of designing and constructing a small number of custom supercomputers.

• The primary performance disadvantage is that the various processors and local storage areas do not have high-speed connections. This arrangement is thus well suited to applications in which multiple parallel computations can take place independently, without the need to communicate intermediate results between processors.


COMPUTER CLUSTER

• A computer cluster consists of a set of loosely or tightly connected computers that work together so that, in many respects, they can be viewed as a single system. Unlike grid computers, computer clusters have each node set to perform the same task, controlled and scheduled by software. The components of a cluster are commonly, but not always, connected to each other through fast local area networks.

• Clusters are usually deployed to improve performance and availability over that of a single computer, while typically being much more cost-effective than single computers of comparable speed or availability.

CLUSTER CATEGORIZATIONS

High-availability (HA) clusters (e.g. on Linux):
• Serve mission-critical applications
• Also known as failover clusters; implemented for the purpose of improving the availability of the services the cluster provides
• Provide redundancy and eliminate single points of failure

Network load-balancing clusters:
• Operate by distributing a workload evenly over multiple back-end nodes
• Typically configured with multiple redundant load-balancing front ends
• All available servers process requests
• Typical uses: web servers, mail servers, ...

Compute or science clusters:
• The cluster shares a dedicated network, is densely located, and probably has homogeneous nodes
• Example: Beowulf clusters


CLUSTER CATEGORIZATIONS

High-availability (HA) clusters

• Also known as failover clusters, these are implemented primarily for the purpose of improving the availability of services that the cluster provides. They operate by having redundant nodes, which are used to provide service when system components fail.

• The most common size for an HA cluster is two nodes, which is the minimum required to provide redundancy. HA cluster implementations attempt to use redundancy of cluster components to eliminate single points of failure.
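The failover idea can be sketched in a few lines. This is a toy illustration, not a real HA implementation (real clusters use heartbeats and cluster managers); the node names and the health flag are hypothetical.

```python
# High-availability sketch: a two-node failover cluster. Requests go to the
# primary node; if it is down, the redundant backup takes over, so the
# service stays available and there is no single point of failure.

class Node:
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy

    def handle(self, request):
        if not self.healthy:
            raise RuntimeError(f"{self.name} is down")
        return f"{self.name} served {request}"

def failover_handle(nodes, request):
    # try nodes in priority order; fall through to the next on failure
    for node in nodes:
        try:
            return node.handle(request)
        except RuntimeError:
            continue
    raise RuntimeError("all nodes down")

cluster = [Node("primary", healthy=False), Node("backup")]
print(failover_handle(cluster, "GET /index"))  # the backup serves the request
```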

CLUSTER CATEGORIZATIONS

Load-balancing clusters

• Load balancing links multiple computers together to share computational workload and function as a single virtual computer.

• Logically, from the user's side, they are multiple machines, but they function as a single virtual machine.

• Requests initiated by users are managed by, and distributed among, all of the standalone computers that form the cluster.

• This balances the computational work among the different machines, improving the performance of the cluster system.
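The distribution step can be sketched with a simple round-robin dispatcher, one common balancing policy among several (the class and backend names here are hypothetical).

```python
# Load-balancing sketch: a front end distributes incoming requests evenly
# across the back-end nodes in round-robin order, so the cluster appears
# to users as a single machine.
import itertools

class RoundRobinBalancer:
    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)  # endless rotation over nodes

    def dispatch(self, request):
        backend = next(self._cycle)              # pick the next back end in turn
        return backend, request

lb = RoundRobinBalancer(["node-a", "node-b", "node-c"])
for req in ["r1", "r2", "r3", "r4"]:
    print(lb.dispatch(req))                      # r4 wraps around to node-a
```

Real front ends add health checks and weighting, but the even-spreading idea is the same.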


CLUSTER CATEGORIZATIONS

Compute clusters

• Often clusters are used primarily for computational purposes, rather than handling I/O-oriented operations such as web service or databases. For instance, a cluster might support computational simulations of weather or vehicle crashes.

• The primary distinction within compute clusters is how tightly coupled the individual nodes are.

CLUSTER CATEGORIZATIONS

Compute clusters

• For instance, a single compute job may require frequent communication among nodes - this implies that the cluster shares a dedicated network, is densely located, and probably has homogeneous (similar) nodes.

• The other extreme is where a compute job uses one or a few nodes and needs little or no inter-node communication. This latter category is sometimes called "grid" computing. Tightly coupled compute clusters are designed for work that might traditionally have been called "supercomputing".

