Pthreads to speed up code

// This is a left fold, since we iterate through the list from the start
#include <stddef.h>

int sequential_reduce(int (*function)(int, int), int* arr,
                      size_t size) {
    int initial = arr[0];
    size_t offset;
    for (offset = 1; offset < size; ++offset) {
        initial = function(initial, arr[offset]);
    }
    return initial;
}

// An example operation to reduce with
int add(int a, int b) {
    return a + b;
}

int main() {
    int arr[] = {1, 2, 3, 4, 5, 6};
    int sum = sequential_reduce(add, arr,
                                sizeof(arr) / sizeof(arr[0]));
    // Whatever you want
    return 0;
}
Pthreads is short for POSIX threads, a standardized way of doing multithreading on POSIX-compliant systems. A thread (short for "thread of execution") executes instructions independently of the other threads in its process.
int pthread_create(pthread_t *thread,
                   const pthread_attr_t *attr,
                   void *(*start_routine)(void *),
                   void *arg);
thread
somewhere to write the ID of the new thread
attr
options that you set at creation time; for the most part you don't need to worry about it (NULL is fine)
start_routine
the function where your new thread starts executing
arg
the argument to pass to start_routine

int pthread_join(pthread_t thread, void **retval);

thread
the value of the thread, *not* a pointer to it
retval
where to put the thread's return value (NULL if you don't care)

Just like with waitpid, you want to join all your terminated threads. Calling pthread_join on a thread makes your program wait for that thread to finish before continuing. There is no analog of waitpid(-1, ...), which waits for any child process to terminate, because if you need that "you probably need to rethink your application design" (man page).
You can guess what pthread_kill does: it sends a signal to a specific thread.
#include <pthread.h>

void* do_massive_work(void* payload) {
    /* Doing massive work */
    return NULL;
}

int main() {
    pthread_t threads[10];
    // Start ten threads, each beginning execution in do_massive_work
    for (int i = 0; i < 10; ++i) {
        pthread_create(threads + i, NULL, do_massive_work, NULL);
    }
    // Wait for all ten threads to finish
    for (int i = 0; i < 10; ++i) {
        pthread_join(threads[i], NULL);
    }
    return 0;
}
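The example above passes NULL for both arg and retval. Here is a minimal sketch of actually using them, assuming it is acceptable to smuggle a small integer through the void* argument and return value (the square function and the intptr_t casts are illustrative choices, not part of any required API):

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

// Each thread squares its argument and returns the result.
void* square(void* payload) {
    intptr_t x = (intptr_t) payload;  // unpack the integer from the pointer
    return (void*) (x * x);           // pack the result back into a pointer
}

int main() {
    pthread_t threads[4];
    for (intptr_t i = 0; i < 4; ++i) {
        // arg: the value this particular thread should square
        pthread_create(threads + i, NULL, square, (void*) i);
    }
    for (int i = 0; i < 4; ++i) {
        void* result;
        // retval: pthread_join writes the thread's return value here
        pthread_join(threads[i], &result);
        printf("%d squared is %ld\n", i, (long) (intptr_t) result);
    }
    return 0;
}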
Each thread gets its own registers, stack pointer, and stack. However, all threads within a program share the heap, static, and code (text) regions of memory.
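A quick way to see this for yourself: in the sketch below (the names shared_global and report are assumptions for illustration), every thread prints the same address for the global, which lives in the shared static region, but a different address for its local variable, which lives on that thread's private stack.

#include <pthread.h>
#include <stdio.h>

int shared_global = 42;  // static region: one copy, visible to all threads

void* report(void* payload) {
    int local = 0;        // stack: each thread gets its own copy
    printf("global at %p, local at %p\n",
           (void*) &shared_global, (void*) &local);
    return NULL;
}

int main() {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, report, NULL);
    pthread_create(&t2, NULL, report, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    // Both threads print the same global address but different local addresses.
    return 0;
}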
We want you to split up the work done by reduce across multiple threads in order to parallelize it. Dividing up the work should look something like the sketch below.
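One possible shape for that division, sketched here as a starting point rather than the required solution (the fixed NUM_THREADS, the parallel_data struct, and the assumptions that size >= NUM_THREADS and that function is associative are all illustrative choices): each thread reduces its own contiguous slice into a result slot that no other thread touches, and the main thread combines the partial results after joining.

#include <pthread.h>
#include <stddef.h>

#define NUM_THREADS 4

// Everything one thread needs, including a result slot only it writes to.
typedef struct {
    int (*function)(int, int);
    int* arr;
    size_t start;  // first index this thread owns
    size_t end;    // one past the last index this thread owns
    int result;    // partial result for this slice
} parallel_data;

void* reduce_chunk(void* payload) {
    parallel_data* data = (parallel_data*) payload;
    int acc = data->arr[data->start];
    for (size_t i = data->start + 1; i < data->end; ++i) {
        acc = data->function(acc, data->arr[i]);
    }
    data->result = acc;  // no lock needed: no other thread touches this slot
    return NULL;
}

// Assumes size >= NUM_THREADS and that function is associative.
int parallel_reduce(int (*function)(int, int), int* arr, size_t size) {
    pthread_t threads[NUM_THREADS];
    parallel_data data[NUM_THREADS];
    size_t chunk = size / NUM_THREADS;
    for (int i = 0; i < NUM_THREADS; ++i) {
        data[i].function = function;
        data[i].arr = arr;
        data[i].start = i * chunk;
        // The last thread also takes the leftover elements.
        data[i].end = (i == NUM_THREADS - 1) ? size : (i + 1) * chunk;
        pthread_create(threads + i, NULL, reduce_chunk, data + i);
    }
    for (int i = 0; i < NUM_THREADS; ++i) {
        pthread_join(threads[i], NULL);
    }
    // Combine the per-thread partial results in order.
    int result = data[0].result;
    for (int i = 1; i < NUM_THREADS; ++i) {
        result = function(result, data[i].result);
    }
    return result;
}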
You have been going through mutexes and other synchronization primitives in lecture, but the most efficient data structure uses no synchronization at all. As long as no two threads ever touch the exact same piece of memory, there is no race condition, and no locking is needed; each thread in the sketch above writes only its own result slot for exactly this reason. This is how you use threads to their full parallel potential.