An important aspect of an autonomous vehicle system, beyond the core functions of path following and obstacle detection, is the ability to accurately and efficiently recognize the visual cues present on roads, such as traffic lanes, signs, and lights. This ability is essential because autonomous vehicles remain a small minority of road traffic and must integrate with conventionally operated vehicles; no dedicated infrastructure yet exists that would allow autonomous vehicles to navigate lanes and intersections non-visually. Recognizing these cues efficiently is a complicated task: it requires not only continuously gathering visual information from the vehicle's surroundings but also processing that information accurately, and the ambiguity of traffic control signals challenges even the most advanced decision-making algorithms. Based on its interpretation of its surroundings, the vehicle must then hold a predetermined position within its travel lane. In this work, a consumer-grade day camera vision sensor was mounted on a prototype scale model of an autonomous vehicle and used to recognize preprogrammed colors, along with their positions and sizes, in its field of view. This paper evaluates the sensor's ability to recognize this information accurately and precisely. The results showed that although the visual data received by the robotic vision sensor suffered from erratic position and size readings, originating mainly from varying lighting conditions, the data provided effective guidance cues when properly filtered. By applying a rolling average to interpret the data, a reliable and accurate system for recognizing lanes and traffic signs was established, enabling the test vehicle to navigate a simulated road effectively despite minor variations in lighting conditions and line spacing.
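The rolling-average filtering mentioned above can be sketched as follows. This is a minimal illustration only: the window size and the sample position readings are hypothetical, not values taken from the study.

```python
from collections import deque

def make_rolling_average(window_size):
    """Return a filter that smooths noisy sensor readings with a rolling average."""
    window = deque(maxlen=window_size)  # keeps only the most recent readings

    def update(reading):
        window.append(reading)
        return sum(window) / len(window)  # mean of the current window

    return update

# Hypothetical x-position readings of a tracked color block, with one
# spurious spike such as a lighting glitch might produce.
smooth = make_rolling_average(window_size=4)
raw = [160, 158, 240, 161, 159, 162]
filtered = [smooth(r) for r in raw]
```

The outlier at index 2 is not eliminated, but its influence is spread across the window, so the filtered trajectory changes far less abruptly than the raw readings and provides steadier steering cues.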