Current implementations of vision-based Advanced Driver Assistance Systems (ADAS) depend largely on real-time vehicle camera data along with other on-board sensory data such as radar, ultrasonic, and GPS data. This data, when accurately reported and processed, helps the vehicle avoid collisions through established ADAS applications such as Forward Collision Avoidance (FCA), Autonomous Cruise Control (ACC), and Pedestrian Detection. Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) communication over Dedicated Short Range Communication (DSRC) provides basic sensory data from other vehicles or roadside infrastructure, including position information of surrounding traffic. Exchanging richer data, such as vision data, between vehicles and between vehicles and infrastructure offers a unique opportunity to advance driver assistance applications and Intelligent Transportation Systems (ITS). A primary example is receiving vision data from the vehicle ahead while approaching a busy intersection and using it as a priori data for a pedestrian detection algorithm, enabling a decision with a higher degree of confidence when the vehicle arrives at the intersection. While the potential for improving ADAS applications using V2V and V2I seems obvious, it remains unclear to what extent. This paper explores the potential of V2V and V2I communication concepts to advance vision-based ADAS. Three use cases are discussed in terms of feasibility and viability.