3D scanners digitally capture the shape of physical objects. Mounting a 3D scanner on a robot is one approach to automating the scanning and inspection process: the robot moves the scanner over the surface of an object, and the point clouds collected along the way form a digital representation of that surface. The kinematic relationship between the component surface being scanned, the robot, and the scanner has been derived in previous work. However, a point on the physical surface could not be compared with the corresponding point in the point cloud acquired by the 3D scanner, because the relationship between the two workspaces was unknown. In this work we derive the transformation between the robot workspace and the scanner workspace (the C-Track camera space), so that the location of each collected point is known in the robot workspace. Knowing the relationship between all the workspaces is necessary for integrating the system and designing Automated Laser Line Scanning (ALLS) systems. It enables an autonomous system that scans and collects points while simultaneously verifying them, which in turn allows defect classification based on the location and properties of each point. Failing to connect the workspaces leaves the system fragmented: the information gathered about the component surface could not be used to design a trajectory for scanning a specific part, to locate the part relative to the robot workspace, or to determine the scanner's location accurately. After linking the coordinates of the component surface with the robot coordinates in the robot workspace, we validate the model by comparing a single point on the surface with a single point in the point cloud.
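The workspace linkage described above amounts to chaining homogeneous transformations so that a point measured in the scanner frame can be expressed in the robot base frame. The sketch below illustrates the idea; the frame names and the identity-rotation calibration values are placeholders, not the transforms derived in this work.

```python
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical calibration results (identity rotations, simple offsets) --
# placeholders standing in for the derived workspace transformations.
T_robot_ctrack = make_transform(np.eye(3), np.array([1.0, 0.0, 0.5]))    # robot base -> C-Track
T_ctrack_scanner = make_transform(np.eye(3), np.array([0.0, 0.2, 0.0]))  # C-Track -> scanner

# A point measured in the scanner frame, in homogeneous coordinates.
p_scanner = np.array([0.1, 0.0, 0.3, 1.0])

# Chain the transforms to express the point in the robot workspace.
p_robot = T_robot_ctrack @ T_ctrack_scanner @ p_scanner
print(p_robot[:3])  # -> [1.1 0.2 0.8]
```

With such a chain in place, every point the scanner collects can be attributed to a location in the robot workspace, which is the prerequisite for the point-by-point comparison used in the validation.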
In this experiment we use a FANUC S430i robot, a MetraSCAN-R scanner, robot offline programming software (Roboguide, Robcad, Workspace), and a test object to validate the proposed link between the robot workspace and the component surface. Validating the model provides the groundwork for future ALLS applications: an integrated system capable of determining which points must be visited in order to collect all the necessary points on the component surface. This paper will serve as the basis for future work with the 3D scanner, recording what was learned and what the outcomes were.
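The single-point validation can be sketched as a nearest-neighbor distance check: a reference point measured on the physical surface is compared against the acquired cloud once both are expressed in the robot workspace. The coordinates below are illustrative values, not measurements from the experiment.

```python
import numpy as np

# Hypothetical acquired point cloud, already transformed into the
# robot workspace (values are illustrative placeholders).
cloud_robot = np.array([
    [1.10, 0.20, 0.80],
    [1.15, 0.21, 0.79],
    [1.02, 0.18, 0.83],
])

# Reference point measured on the physical component surface (robot frame).
p_reference = np.array([1.11, 0.20, 0.80])

# Validation metric: distance from the reference point to the nearest cloud point.
distances = np.linalg.norm(cloud_robot - p_reference, axis=1)
error = distances.min()
print(f"nearest-point error: {error:.4f} m")  # -> nearest-point error: 0.0100 m
```

A small nearest-point error would indicate that the derived workspace transformation places the scanned points where the physical surface actually is.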