DISCUSSION QUESTIONS



DQ #1

  • Be prepared to define:
    • API (Application Programming Interface)
    • Bandwidth
    • Barrier (in terms of synchronization)
    • Beowulf cluster
    • Broadcast
    • Cluster
    • Clock Speed
    • Concurrent execution
    • Core
    • Data parallelism
    • Task parallelism
    • Distributed computing
    • Embarrassingly parallel
    • Heterogeneous (in terms of a cluster and its components)
    • Homogeneous (in terms of a cluster and its components)
    • Hypercube
    • Latency
    • Load balance
    • Massively parallel processor
    • Parallel file system
    • Parallel overhead
    • Process
    • Shared memory
    • Threads
    • Workstation farm


DQ #2



DQ #3

  • Read the handout on parallel program design and be prepared to discuss it in class on Wednesday after the MLK break. The reading starts in section 2.7 (bottom of the first page). You do not need to read section 2.8.


DQ #4

  • Be prepared to define these OpenMP and concurrent computing terms:
    • Atomicity (atomic)
    • Deadlock
    • Fork/Join
    • Mutex
    • pragma directives
    • Race condition
    • Reduction variable
    • Semaphore
    • Shared address space


DQ #5

  • Be prepared to define the following MPI terms:
    • MPI Communicator
    • SPMD (Single Program Multiple Data)
    • Blocking and Non-Blocking Communication
    • Collective Communication
    • Point-to-Point Communication
    • Deadlock (in general)
  • List the differences between MPI and OpenMP, that is, between the distributed-memory and shared-memory parallel models.
  • Section 4.9: Exercises 1, 2, 3, and 6 from the book Introduction to Parallel Computing: From Algorithms to Programming on State-of-the-Art Platforms by Roman Trobec et al. (pages 129-130)
    Skip to