
Chapter 6 - Practical Considerations

Pthreads Programming
Bradford Nichols, Dick Buttlar and Jacqueline Proulx Farrell
 Copyright © 1996 O'Reilly & Associates, Inc.

Back at the beginning of Chapter 1, we claimed that multiple threads were more efficient than multiple processes at performing the same amount of work. Now that we've reached the end of the book, we've shown this to be true—by our examples throughout and, objectively, by the performance measurements we've just discussed. 
Efficient is an odd word to use. It hints at speed, and let's face it, speed is what we want from our programs. (Our programs give correct results; let's leave it at that!) However, speedy performance is only part of the story. We want to make it clear that threads not only streamline the many tasks in our programs, but they also allow us to make optimal use of our platform's processing cycles. 
Why should we spend CPU time running the operating system when we don't have to? When we choose threads over processes to multitask our programs, the CPUs spend less time in system scheduling code, managing the grand tectonic plate shifts that process context switches often seem to be (the many swap I/O requests, the allocation of memory for child processes, the copies of parent data to a child's address space). We avoid the system calls that establish and manage shared memory regions. Although we cannot forgo the expense of synchronizing access to our shared data, this is a liability for both the thread and the process multitasking models. (Judicious use of shared data and well-placed synchronization calls are key to any well-designed multitasking program.) All in all, multithreading benefits not just our program, but anyone else who is sharing the CPUs with us. 
