This paper reviews the theory and application of the “success run” test, found in reliability assessment applications, among others. A “success run” is a series of n independent tests without failure. The mathematical basis of the classical, non-parametric success run test is reviewed, and examples illustrate what can be said about the relation among reliability, confidence, and sample size. The Bayesian success run test is developed using the uniform distribution as the prior distribution of reliability. This is further extended to include cases of uniform priors where 0 < A < R < 1. The classical non-parametric case is reexamined, and a natural prior distribution is revealed for the special case when one's only prior assumption is 0 < A ≤ R ≤ 1. Finally, some comparisons are made among these models, and it is shown that the lower confidence bounds on reliability, using these conservative priors, converge rather rapidly to the results obtained with the classical non-parametric model.