Are Lock-Free Concurrent Algorithms Practically Wait-Free?
Abstract
Lock-free concurrent algorithms guarantee that some concurrent operation will always make progress in a finite number of steps. Yet programmers prefer to treat concurrent code as if it were wait-free, guaranteeing that all operations always make progress. Unfortunately, designing wait-free algorithms is generally a very complex task, and the resulting algorithms are not always efficient. While obtaining efficient wait-free algorithms has been a long-time goal for the theory community, most non-blocking commercial code is only lock-free. This paper suggests a simple solution to this problem. We show that, for a large class of lock-free algorithms, under scheduling conditions which approximate those found in commercial hardware architectures, lock-free algorithms behave as if they are wait-free. In other words, programmers can keep on designing simple lock-free algorithms instead of complex wait-free ones, and in practice, they will get wait-free progress. Our main contribution is a new way of analyzing a general class of lock-free algorithms under a stochastic scheduler. Our analysis relates the individual performance of processes with the global performance of the system using Markov chain lifting between a complex per-process chain and a simpler system progress chain. We show that lock-free algorithms are not only wait-free with probability 1, but that in fact a general subset of lock-free algorithms can be closely bounded in terms of the average number of steps required until an operation completes. To the best of our knowledge, this is the first attempt to analyze progress conditions, typically stated in relation to a worst-case adversary, in a stochastic model capturing their expected asymptotic behavior.
 Publication:

arXiv e-prints
 Pub Date:
 November 2013
 arXiv:
 arXiv:1311.3200
 Bibcode:
 2013arXiv1311.3200A
 Keywords:

 Computer Science - Distributed, Parallel, and Cluster Computing
 E-Print:
 25 pages