UPC: Distributed Shared Memory Programming




This is the first book to explain the Unified Parallel C (UPC) language and its use. Authors El-Ghazawi, Carlson, and Sterling are among the developers of UPC and have close links with the industrial members of the UPC consortium. Their text covers background material on parallel architectures and algorithms and includes UPC programming case studies. The book is an invaluable resource for the growing number of UPC users and application developers. More information about UPC can be found at http://upc.gwu.edu/
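For readers unfamiliar with the language, the short sketch below (an illustrative example, not taken from the book) shows the SPMD flavor of UPC covered in the opening tutorial: every thread executes main, the shared qualifier places data in the globally shared space, and MYTHREAD, THREADS, and upc_barrier are built into the language.

    #include <upc.h>
    #include <stdio.h>

    shared int total;   /* shared scalar; its affinity is to thread 0 */

    int main(void)
    {
        /* Each thread reports its identity */
        printf("Hello from thread %d of %d\n", MYTHREAD, THREADS);

        upc_barrier;    /* all threads synchronize here */

        if (MYTHREAD == 0)
            total = THREADS;   /* only thread 0 writes to the shared variable */

        upc_barrier;
        return 0;
    }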

An Instructor Support FTP site is available from the Wiley editorial department.

About the Authors

Tarek El-Ghazawi received his PhD in electrical and computer engineering from New Mexico State University. Currently, he is an associate professor in the Electrical and Computer Engineering Department at the George Washington University. His research interests are in high-performance computing, computer architecture, reconfigurable computing, embedded systems, and experimental performance evaluation. He has over 70 technical journal and conference publications in these areas. He has served as the principal investigator for over two dozen funded research projects, and his research has been supported by NASA, DoD, NSF, and industry. He has served as a guest editor for IEEE Concurrency and as an associate editor for the International Journal of Parallel and Distributed Computing and Networking. El-Ghazawi has also served as a visiting scientist at NASA GSFC and NASA Ames Research Center. He is a senior member of the IEEE and a member of the advisory board for the IEEE Task Force on Cluster Computing.

William Carlson received his PhD in electrical engineering from Purdue University. From 1988 to 1990, he was an assistant professor at the University of Wisconsin-Madison. His research interests include performance evaluation of advanced computer architectures, operating systems, and languages and compilers for parallel and distributed computers.

Thomas Sterling received his PhD as a Hertz Fellow from the Massachusetts Institute of Technology. His research interests include parallel computer architecture, system software, and evaluation. He holds six patents, is the co-author of several books, and has published dozens of papers in the field of parallel computing.

Katherine Yelick received her PhD in electrical engineering and computer science from the Massachusetts Institute of Technology. Her research interests include parallel computing, memory hierarchy optimizations, programming languages, and compilers. Currently, she is a Professor of Computer Science at the University of California, Berkeley.

Table of Contents

Preface vii

1. Introductory Tutorial 1

1.1 Getting Started 1

1.2 Private and Shared Data 3

1.3 Shared Arrays and Affinity of Shared Data 6

1.4 Synchronization and Memory Consistency 8

1.5 Work Sharing 10

1.6 UPC Pointers 11

1.7 Summary 14

Exercises 14

2. Programming View and UPC Data Types 17

2.1 Programming Models 17

2.2 UPC Programming Model 20

2.3 Shared and Private Variables 21

2.4 Shared and Private Arrays 23

2.5 Blocked Shared Arrays 25

2.6 Compiling Environments and Shared Arrays 30

2.7 Summary 30

Exercises 31

3. Pointers and Arrays 33

3.1 UPC Pointers 33

3.2 Pointer Arithmetic 35

3.3 Pointer Casting and Usage Practices 38

3.4 Pointer Information and Manipulation Functions 40

3.5 More Pointer Examples 43

3.6 Summary 47

Exercises 47

4. Work Sharing and Domain Decomposition 49

4.1 Basic Work Distribution 50

4.2 Parallel Iterations 51

4.3 Multidimensional Data 54

4.4 Distributing Trees 62

4.5 Summary 71

Exercises 71

5. Dynamic Shared Memory Allocation 73

5.1 Allocating a Global Shared Memory Space Collectively 73

5.2 Allocating Multiple Global Spaces 78

5.3 Allocating Local Shared Spaces 82

5.4 Freeing Allocated Spaces 89

5.5 Summary 90

Exercises 90

6. Synchronization and Memory Consistency 91

6.1 Barriers 92

6.2 Split-Phase Barriers 94

6.3 Locks 99

6.4 Memory Consistency 108

6.5 Summary 113

Exercises 114

7. Performance Tuning and Optimization 115

7.1 Parallel System Architectures 116

7.2 Performance Issues in Parallel Programming 120

7.3 Role of Compilers and Run-Time Systems 122

7.4 UPC Hand Optimization 123

7.5 Case Studies 128

7.6 Summary 135

Exercises 135

8. UPC Libraries 137

8.1 UPC Collective Library 137

8.2 UPC-IO Library 141

8.3 Summary 146

References 147

Appendix A: UPC Language Specifications, v1.1.1 149

Appendix B: UPC Collective Operations Specifications, v1.0 183

Appendix C: UPC-IO Specifications, v1.0 203

Appendix D: How to Compile and Run UPC Programs 243

Appendix E: Quick UPC Reference 245

Index 251

Reviews

"This book is a good introduction to the UPC programming philosophy." (Computing Reviews.com, February 15, 2006)