For many mission-critical systems, internal processes or external standards mandate demonstrating that all requirements have been met via a set of requirements-based tests. Traditional coverage approaches, however, do not address this mandate because they measure only coverage of the design, determining which paths in the design have been executed. Determining whether a given set of test vectors covers the design requirements (as opposed to merely covering the design) is a challenge, and creating a set of test vectors to cover the requirements can be difficult and time-consuming. Using techniques first identified in the 1970s together with modern Model-Based Design tools, we present a novel approach, based on work presented in , to automatically generate a set of requirements-based test vectors. In this paper, we discuss how requirements captured in natural language can be modeled using Cause-Effect graphs, introduced in . We then translate the Cause-Effect graph into Simulink® and Stateflow® to identify conflicting requirements and to automatically generate a set of test vectors whose completeness can be assessed using coverage objectives such as Modified Condition/Decision Coverage (MCDC). We apply the same test vectors to the design model, which can be developed independently of the original requirements. This approach enables engineers to determine whether their design is sufficiently covered by the set of requirements-based test vectors. Any parts of the design not covered must be investigated further to determine whether those design elements should exist or whether there is a problem with the requirements.
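To make the Cause-Effect graph idea concrete, the following is a minimal sketch (not the Simulink®/Stateflow® workflow described above) of how two hypothetical natural-language requirements can be encoded as Boolean causes and effects, and how enumerating the cause combinations yields candidate test vectors while also exposing conflicting requirements. The requirement texts, signal names, and conflict rule are all illustrative assumptions, not taken from the paper.

```python
from itertools import product

# Hypothetical requirements modeled as a Cause-Effect graph:
#   R1: "If the door is closed AND the start button is pressed, run the motor."
#   R2: "If the door is open, sound the alarm."
# Causes are Boolean inputs; effects are Boolean outputs.
causes = ["door_closed", "start_pressed"]

def effects(door_closed, start_pressed):
    return {
        "run_motor": door_closed and start_pressed,  # effect of R1
        "sound_alarm": not door_closed,              # effect of R2
    }

# Enumerate every combination of causes to derive candidate test vectors,
# flagging any state the (assumed) requirements forbid: motor running
# while the alarm sounds would indicate conflicting requirements.
test_vectors = []
for vector in product([False, True], repeat=len(causes)):
    out = effects(*vector)
    assert not (out["run_motor"] and out["sound_alarm"]), "conflicting requirements"
    test_vectors.append((dict(zip(causes, vector)), out))

for inputs, outputs in test_vectors:
    print(inputs, "->", outputs)
```

In a real workflow these vectors would then be applied both to the requirements model (to assess MCDC-style coverage of the conditions) and to the independently developed design model, as the approach above describes.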