The earliest Digital Human Modeling (DHM) systems were non-interactive analysis packages with crude graphics. Next-generation systems added interactivity and articulated kinematic human models. The newest systems feature real-time computer graphics, deformable figures, motion controllers, and user interfaces. Our long-term goal is to free the user as much as possible from interactive manipulation of the human model by having the system directly understand and execute task instructions. We present a next-generation DHM testbed that includes a scriptable interface, real-time collision-avoidance reach, empirical joint motion models, a versatile locomotion engine, blends and combinations of motion-captured and synthetic motion, and a smoothly skinned, scalable human model.
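Blending captured and synthetic motion is commonly done per joint by interpolating joint rotations. The following is a minimal sketch of that idea, not the testbed's actual implementation; the quaternion representation, the function names, and the dictionary-based pose format are all illustrative assumptions.

```python
import math

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions (w, x, y, z)."""
    dot = sum(a * b for a, b in zip(q0, q1))
    # Negate one quaternion if needed so we interpolate along the shorter arc.
    if dot < 0.0:
        q1 = tuple(-c for c in q1)
        dot = -dot
    if dot > 0.9995:
        # Nearly parallel rotations: fall back to normalized linear interpolation.
        q = tuple(a + t * (b - a) for a, b in zip(q0, q1))
        n = math.sqrt(sum(c * c for c in q))
        return tuple(c / n for c in q)
    theta = math.acos(dot)
    s0 = math.sin((1.0 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return tuple(s0 * a + s1 * b for a, b in zip(q0, q1))

def blend_pose(captured, synthetic, weight):
    """Blend two poses (joint name -> quaternion) with a single blend weight."""
    return {joint: slerp(captured[joint], synthetic[joint], weight)
            for joint in captured}
```

A full motion blend would apply `blend_pose` frame by frame, possibly with a time-varying weight to ease in and out of the captured clip.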