Visual odometry and mapping will be essential components of future ADAS and autonomous driving systems. In particular, visual odometry can provide localization when GPS estimates are lost or degraded, and can localize the vehicle relative to its surroundings for navigation and hazard avoidance. Feature-based visual odometry algorithms extract distinct corners/features from the scene and track them over time in order to maintain an estimate of ego-motion. Prior work has shown that odometry can fail depending on scene content: tracking is lost when too few detected points can be matched reliably, and the remaining points are false matches that appear as outliers to the motion estimator. For example, we observed that features on trees mostly fail to be tracked because they are easily confused with one another and become outliers to the motion model. Excluding such poor features in advance can increase the robustness of visual odometry algorithms, particularly in challenging visual conditions caused by weather, time of day, and the nature of the driving environment. This paper investigates the effect of scene content on visual odometry. A scene content classifier is used as a first step to analyze the scene and identify the image regions that produce the most reliable image features. The classifier computes structural features over image regions and labels each tile as Random, Textured, or Transient. Results show that features detected in Random tiles are least likely to contribute to tracking, while features in Textured and Transient tiles are more reliable. Excluding the unreliable regions in advance of motion estimation yields a more robust algorithm capable of maintaining accurate motion estimation even with few tracked features. The approach is evaluated on the widely used KITTI dataset (a joint project of the Karlsruhe Institute of Technology and Toyota Technological Institute).
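The tile-classify-then-filter pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's classifier: the per-tile statistic (plain intensity variance), the thresholds `lo`/`hi`, and the mapping from variance bands to the Random/Textured/Transient labels are all placeholder assumptions standing in for the structural features the paper computes. Only the overall flow is taken from the text: divide the image into tiles, label each tile, and drop features that fall in Random tiles before motion estimation.

```python
# Illustrative sketch of tile-based scene classification and feature filtering.
# The statistic, thresholds, and variance-to-label mapping are placeholder
# assumptions; the paper's classifier uses structural features instead.

def tile_variance(img, x0, y0, size):
    """Intensity variance of one square tile (stand-in statistic)."""
    vals = [img[y][x] for y in range(y0, y0 + size)
                      for x in range(x0, x0 + size)]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def classify_tiles(img, tile=4, lo=50.0, hi=2000.0):
    """Label each tile Random / Transient / Textured by variance bands.

    The bands (lo, hi) and the band-to-label assignment are purely
    illustrative choices for this sketch.
    """
    h, w = len(img), len(img[0])
    labels = {}
    for y0 in range(0, h - tile + 1, tile):
        for x0 in range(0, w - tile + 1, tile):
            v = tile_variance(img, x0, y0, tile)
            if v < lo:
                label = "Random"
            elif v < hi:
                label = "Transient"
            else:
                label = "Textured"
            labels[(x0 // tile, y0 // tile)] = label
    return labels

def filter_features(features, labels, tile=4):
    """Drop (x, y) features in Random tiles before motion estimation."""
    return [(x, y) for (x, y) in features
            if labels.get((x // tile, y // tile)) != "Random"]

# Tiny synthetic 8x8 grayscale image: one flat tile, one checkerboard
# tile, one gradient tile, one near-flat tile.
img = [[0] * 8 for _ in range(8)]
for y in range(8):
    for x in range(8):
        if y < 4 and x < 4:
            img[y][x] = 100                           # flat
        elif y < 4:
            img[y][x] = 255 if (x + y) % 2 == 0 else 0  # checkerboard
        elif x < 4:
            img[y][x] = (x + y) * 10                  # gradient
        else:
            img[y][x] = 5                             # flat

labels = classify_tiles(img)
kept = filter_features([(1, 1), (5, 1), (1, 5), (5, 5)], labels)
print(labels[(1, 0)], kept)
```

In a real pipeline the surviving features would then be passed to matching and robust motion estimation (e.g. with RANSAC), so that fewer outliers reach the estimator in the first place.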