To maintain standards, all manufactured automotive items are subject to strict quality control. Quality control processes test items against agreed-upon specification limits: items that "pass" these tests are considered good enough for commercial use, while items that "fail" are discarded. Quality control testing for large numbers of manufactured products can be costly; as a result, manufacturers often implement inexpensive test processes. Unfortunately, inexpensive test processes usually yield lower-accuracy measurements, which can increase both the number of discarded items and the number of returned (or recalled) items. We have created a multi-dimensional ellipse statistical model, formed from both lower- and higher-accuracy measurements, with which the impact of lower-accuracy test procedures can be assessed. The typical method used to overcome lower-accuracy test results and optimize production yield is to shift the specification limits, increasing the capability index. However, this one-dimensional approach ignores the impact on returned and discarded items: while a small shift in specification limits may increase production yield, it could unintentionally increase returned yield. The ellipse model uses higher-accuracy measurements to assess the cost differential of returned and discarded items produced by specification changes. A sensitivity analysis performed on the lower-accuracy measurements then yields an optimal resource allocation. For example, the model can determine whether it is more economical to decrease the lower specification limit, increase the upper specification limit, or shift both limits proportionally. The generic mathematical formulation is completely defined and a MATLAB implementation is described.
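As a minimal sketch of the one-dimensional approach the abstract contrasts against, the snippet below computes a process capability index and the pass yield for a Gaussian process, and shows how shifting one specification limit changes the yield. This is an illustration only, not the paper's ellipse model; the process mean, standard deviation, and specification limits are hypothetical values chosen for the example.

```python
import math


def cpk(mu, sigma, lsl, usl):
    """Process capability index: distance from the mean to the nearest
    specification limit, in units of three standard deviations."""
    return min(usl - mu, mu - lsl) / (3.0 * sigma)


def pass_yield(mu, sigma, lsl, usl):
    """Fraction of items falling inside [lsl, usl], assuming the measured
    characteristic is Gaussian with mean mu and standard deviation sigma."""
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return phi((usl - mu) / sigma) - phi((lsl - mu) / sigma)


# Hypothetical process: mean 10.0, sigma 0.5, specification limits [8.5, 11.5]
# (a +/- 3-sigma window, so Cpk = 1.0 and the yield is roughly 99.73%).
base_yield = pass_yield(10.0, 0.5, 8.5, 11.5)

# Raising the upper specification limit increases the pass yield, but this
# one-dimensional view says nothing about items later returned or recalled,
# which is the gap the multi-dimensional ellipse model addresses.
shifted_yield = pass_yield(10.0, 0.5, 8.5, 12.0)
```

In this framing, any limit shift that widens the acceptance window raises yield; the paper's contribution is to weigh that gain against the cost of returned and discarded items using higher-accuracy measurements.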