A number of computer vision researchers have proposed segmentation methods based on some form of robust least K-th order statistic (as opposed to least squares) model fitting. The essential idea is that if K is smaller than the number of data points belonging to the given segment (i.e., smaller than the inlier count implied by the segment's fraction of the total data), the K-th order statistic of the residuals is insensitive to outliers. These approaches generally try to optimize K by finding the largest applicable value, determined by the largest segment remaining in the data. The K-th order statistic also serves a second purpose in these schemes: it is used to estimate the noise scale and thus set the threshold on residual magnitudes for rejecting outliers (or, equivalently, accepting inliers) to the model fit. The methodology advocated here is similar to these approaches, but it differs in two crucial ways: we do not attempt to optimize K at any stage, and we do not use the K-th order statistic to determine scale.
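To make the idea concrete, the following is a minimal sketch (not the authors' method) of least K-th order statistic line fitting in the spirit of LMedS: random two-point hypotheses are scored by the K-th smallest absolute residual, so that as long as K does not exceed the inlier count, outlier residuals never influence the score. The function name, trial count, and data are illustrative assumptions.

```python
import numpy as np

def fit_line_kth_order(points, K, n_trials=300, rng=None):
    """Fit a 2-D line y = a*x + b by minimizing the K-th smallest
    absolute residual over randomly sampled two-point hypotheses.
    `points` is an (N, 2) array; K is 1-indexed (K <= inlier count)."""
    rng = np.random.default_rng(rng)
    best_cost, best_model = np.inf, None
    for _ in range(n_trials):
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if x1 == x2:  # skip degenerate (vertical) samples
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        residuals = np.abs(points[:, 1] - (a * points[:, 0] + b))
        # K-th order statistic of the residuals (K-th smallest value):
        cost = np.partition(residuals, K - 1)[K - 1]
        if cost < best_cost:
            best_cost, best_model = cost, (a, b)
    return best_model, best_cost
```

Because the score is the K-th smallest residual rather than a sum, a hypothesis drawn from two inliers scores well no matter how extreme the outliers are, which is the robustness property the text describes.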