One of the principal bottlenecks in developing sensing and perception algorithms for autonomous vehicles is the difficulty of evaluating tracking algorithms against ground truth data. By ground truth we mean independent knowledge of the position, size, speed, heading, and class of objects of interest (moving and stationary) in complex operational environments. Our goal was to execute a data collection campaign at an urban test track in which the trajectories of moving objects of interest are measured with auxiliary instrumentation, in conjunction with several AVs carrying a full sensor suite of radar, LiDAR, and cameras. Multiple autonomous vehicles (both moving and stationary) collected measurements in a variety of scenarios designed to incorporate real-world interactions of vehicles with bicyclists and pedestrians. Trajectory data for a set of bicyclists and pedestrians were collected by separate means. In most cases, the real-time kinematic (RTK) receivers on the bicyclists and pedestrians achieve RTK-integer (a.k.a. RTK-fixed) or RTK-float accuracy, resulting in errors on the order of a few centimeters or a few decimeters, respectively; position accuracy on the instrumented interaction vehicles is on the order of 10 cm. We describe the data collection campaign at the University of Michigan M-City Urban Test Track, the interaction scenarios and test conditions, and show visualizations of the tests as well as initial evaluation results. These data will serve as a global-frame, multi-sensor/multi-actor canonical dataset for the development and evaluation of extended-object tracking algorithms for autonomous vehicles.