A camera capturing RGBA images
This sensor emulates a single video camera. It generates a series of RGBA images, encoded as binary character arrays with 4 bytes per pixel.
The cameras make use of Blender’s bge.texture module, which requires a graphics card capable of GLSL shading. Also, the 3D view window in Blender must be set to draw Textured objects.
The camera configuration parameters implicitly define a geometric camera in Blender units. The cam_focal attribute represents the distance, in Blender units, at which the largest image dimension spans 32.0 Blender units. The camera intrinsic calibration matrix is then defined as
alpha_u    0       u_0
   0    alpha_v    v_0
   0       0        1

where:

- alpha_u and alpha_v are the focal lengths expressed in pixels. Taking the image width as the largest dimension and assuming square pixels, alpha_u = alpha_v = cam_width * cam_focal / 32.0.
- u_0 = cam_width / 2 and v_0 = cam_height / 2 are the coordinates of the principal point (the image centre).
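Following the definitions above, the intrinsic matrix can be computed directly from the camera properties. A minimal sketch in plain Python (the property values are illustrative, and it assumes square pixels with the image width as the largest dimension):

```python
def intrinsic_matrix(cam_width, cam_height, cam_focal):
    """Build the 3x3 intrinsic matrix implied by the camera properties.

    Assumes square pixels and that the image width is the largest
    dimension, so the focal length in pixels is
    cam_width * cam_focal / 32.0.
    """
    alpha = cam_width * cam_focal / 32.0
    u_0 = cam_width / 2.0
    v_0 = cam_height / 2.0
    return [[alpha, 0.0, u_0],
            [0.0, alpha, v_0],
            [0.0, 0.0, 1.0]]

# e.g. a 256x256 image with cam_focal = 25.0
K = intrinsic_matrix(256, 256, 25.0)
# K[0][0] == 200.0 (focal in pixels), K[0][2] == 128.0 (principal point)
```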
You can set these properties in your scripts with <component>.properties(<property1> = ..., <property2> = ...).
(no documentation available yet)
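As an illustration of setting these properties, a Builder script could configure the image size and focal length before the camera is appended. This is a hedged sketch, not a definitive recipe: it assumes a working MORSE installation and uses the cam_width, cam_height and cam_focal configuration parameters described above with illustrative values.

```python
from morse.builder import *

robot = ATRV()
videocamera = VideoCamera()
# request 320x240 images with an illustrative focal length
videocamera.properties(cam_width=320, cam_height=240, cam_focal=25.0)
robot.append(videocamera)
```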
This sensor exports these datafields at each simulation step:
The image data captured by the camera, stored as a Python Buffer class object. The data is (cam_width * cam_height * 4) bytes, with the pixels stored in RGBA order.
The intrinsic calibration matrix, stored as a 3x3 row-major Matrix.
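The raw image buffer can be turned back into per-pixel values with plain Python. A minimal sketch, assuming the buffer stores rows contiguously in row-major order (the row direction itself is an assumption not stated above):

```python
def pixel_rgba(image_bytes, cam_width, x, y):
    """Return the (R, G, B, A) tuple of pixel (x, y).

    Assumes rows are stored contiguously in row-major order,
    4 bytes per pixel, as described for the image datafield.
    """
    offset = (y * cam_width + x) * 4
    return tuple(image_bytes[offset:offset + 4])

# A tiny 2x1 "image": one opaque red pixel, then one opaque blue pixel
buf = bytes([255, 0, 0, 255,  0, 0, 255, 255])
# pixel_rgba(buf, 2, 0, 0) == (255, 0, 0, 255)
# pixel_rgba(buf, 2, 1, 0) == (0, 0, 255, 255)
```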
Interface support:
Returns the current data stored in the sensor.
Return value
a dictionary with the sensor's current data
Capture n images
The following example shows how to use this component in a Builder script:
from morse.builder import *
robot = ATRV()
# creates a new instance of the sensor
videocamera = VideoCamera()
# place your component at the correct location
videocamera.translate(<x>, <y>, <z>)
videocamera.rotate(<rx>, <ry>, <rz>)
robot.append(videocamera)
# define one or several communication interfaces, like 'socket'
videocamera.add_interface(<interface>)
env = Environment('empty')
(This page has been auto-generated from MORSE module morse.sensors.video_camera.)