Current data-monitoring techniques used by typical plant-floor systems are good at collecting data, but operators have little visibility into what that data might be indicating. Processing the data into a story that leads to corrective action is usually done by specialists (e.g., Black Belts, Quality Analysts), requires mining and integrating data from multiple systems, and is typically triggered only by the occurrence of systemic poor quality (e.g., scrap production, rework cycles, waste). It was theorized that significant improvements in 1) throughput, 2) quality (consistency of throughput), and 3) reduction of waste could all be achieved if operators were notified of the onset of variation before, rather than after, it cascades into a systemic failure mode. A new methodology was developed for automating traditional analysis practices and eliminating the need for operators to step aside and query systems for current quality diagnostics. To prove the concept, modern computing horsepower and commercially available software were used at the Ford Cleveland Engine Plant (CEP) to monitor existing data feeds from multiple source systems, build a continuous, consistent, real-time view of the operation, automatically apply well-proven six-sigma analysis routines to that view, and proactively alert operators only when an impending fault, new source of variation, or negative trend emerges. The paper discusses the journey to the CEP experiment, key objectives and methodology differentiators, and IT challenges, as well as lessons learned: successful implementations extend well beyond the challenges of deploying new tools and systems and must encompass lean-thinking cultural mindsets on both the plant floor and in management. Finally, next steps and a vision of where this automation methodology might lead in the future are proposed.
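To make the alerting idea concrete, the kind of six-sigma analysis routine described above can be illustrated with a classic statistical process control (SPC) check: flag any measurement that falls outside the control limits derived from a stable baseline. This is a minimal sketch only; the function name, data, and three-sigma threshold are illustrative assumptions, not the actual routines or data feeds deployed at CEP.

```python
# Hypothetical sketch of one SPC rule (a point beyond +/- 3 sigma of the
# baseline) applied to a streaming measurement feed, so an operator can be
# alerted at the onset of variation rather than after a systemic failure.
from statistics import mean, stdev

def spc_alerts(baseline, stream, sigma_limit=3.0):
    """Yield (index, value) for stream points outside baseline +/- sigma_limit * sigma."""
    center = mean(baseline)            # process centerline from stable history
    sigma = stdev(baseline)            # estimated process variation
    upper = center + sigma_limit * sigma
    lower = center - sigma_limit * sigma
    for i, x in enumerate(stream):
        if x > upper or x < lower:
            yield i, x                 # out-of-control point: alert the operator

# Illustrative data: a stable baseline, then a live stream with one excursion.
baseline = [10.0, 10.2, 9.8, 10.1, 9.9, 10.0, 10.1, 9.9]
stream = [10.0, 10.1, 12.5, 9.9]
alerts = list(spc_alerts(baseline, stream))  # flags the 12.5 excursion
```

In a deployment like the one the paper describes, such rules would run continuously against the consolidated real-time view, with richer trend and variation-shift tests (e.g., runs rules) layered on top of this basic limit check.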