This sensor emulates a high-level camera that outputs the names of the objects located within its field of view.
The sensor first determines which objects are to be tracked: those marked with a Logic Property called Object (see the documentation on passive objects for details). If the Label property is defined, it is used as the exported name; otherwise the Blender object name is used.
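For illustration, a minimal sketch of this selection step is given below, assuming the Blender Game Engine (bge) Python API; this is not MORSE's actual implementation.

    from bge import logic

    def get_tracked_objects():
        """Collect the objects marked for tracking and their exported names."""
        scene = logic.getCurrentScene()
        tracked = {}
        for obj in scene.objects:
            # Only objects carrying the 'Object' Logic Property are tracked.
            if obj.get('Object', False):
                # Export the 'Label' property if present, else the Blender name.
                tracked[obj] = obj.get('Label', obj.name)
        return tracked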
A test is then made to identify which of these objects lie inside the camera's view frustum. Finally, a single visibility test is performed by casting a ray from the center of the camera to the center of each object: if anything other than the tested object is hit first by the ray, the object is considered occluded, even if only its center is actually blocked.
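This two-step test could be sketched as follows with the bge API, assuming a KX_Camera instance camera and a candidate game object obj; the function name is hypothetical and the code only illustrates the logic described above.

    def is_visible(camera, obj):
        # Step 1: reject objects whose center lies outside the view frustum.
        if not camera.pointInsideFrustum(obj.worldPosition):
            return False
        # Step 2: cast a ray from the camera towards the object's center.
        # If the first object hit is not the tested object itself, its center
        # is occluded and the object is reported as not visible.
        return camera.rayCastTo(obj) == obj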
The cameras make use of Blender's bge.texture module, which requires a graphics card capable of GLSL shading. Also, the 3D view window in Blender must be set to draw Textured objects.
visible_objects: (List) A list containing a description of each object visible to the camera. Each object is described by a dictionary containing:
- name: (String) the name of the object
- type: (String) the type of the object
- position: (Vector3) the position of the object (x, y, z)
- orientation: (Quaternion) the orientation of the object (w, x, y, z)
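As an example, a client that has decoded the sensor output into a Python dictionary could iterate over it as follows; the field names come from the list above, while the function and variable names are hypothetical.

    def print_visible_objects(data):
        for obj in data['visible_objects']:
            x, y, z = obj['position']
            print("%s (%s) at (%.2f, %.2f, %.2f), orientation %s"
                  % (obj['name'], obj['type'], x, y, z, obj['orientation']))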
The Empty object corresponding to this sensor has the following parameters:
No camera modifiers are available at the moment.