Acquisition Frame Rate
The Acquisition Frame Rate camera feature allows you to set an upper limit for the camera's frame rate.
This is useful if you want to operate the camera at a constant frame rate in free run image acquisition.
How It Works
If the Acquisition Frame Rate feature is enabled, the camera's maximum frame rate is limited by the value you enter for the acquisition frame rate parameter.
For example, setting an acquisition frame rate of 20 frames per second (fps) has the following effect: the camera will acquire no more than 20 fps, even if the current camera settings would allow a higher frame rate. If other settings limit the frame rate to less than 20 fps, the camera operates at the lower frame rate.
To determine the actual frame rate, use the Resulting Frame Rate feature.
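As a simple illustration of how the limit interacts with the camera's other frame rate constraints, the sketch below models the resulting frame rate as the minimum of the individual limits. The function name and the two-limit model are illustrative, not part of the pylon API:

```cpp
#include <algorithm>

// Simplified model: the resulting frame rate is the lowest of the limits
// imposed by the sensor/exposure timing, the transport bandwidth, and
// (if enabled) the user-set acquisition frame rate.
double resultingFrameRate(double sensorLimitFps,
                          double bandwidthLimitFps,
                          double acquisitionFrameRate,
                          bool frameRateEnabled)
{
    double fps = std::min(sensorLimitFps, bandwidthLimitFps);
    if (frameRateEnabled)
        fps = std::min(fps, acquisitionFrameRate);
    return fps;
}
```

With the 20 fps limit enabled, the result is 20 fps even if the sensor and bandwidth would allow more; with the limit disabled, the tighter of the remaining limits applies.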
Setting the Acquisition Frame Rate
// Set the upper limit of the camera's frame rate to 30 fps
camera.Parameters[PLCamera.AcquisitionFrameRateEnable].SetValue(true);
camera.Parameters[PLCamera.AcquisitionFrameRateAbs].SetValue(30.0);
Acquisition Mode
The Acquisition Mode camera feature allows you to choose between single frame and continuous image acquisition.
Available Acquisition Modes
In Single Frame acquisition mode, the camera will acquire exactly one image. After the Acquisition Start command has been executed, the camera waits for trigger signals. When a Frame Start trigger signal has been received and an image has been acquired, the camera switches off image acquisition. To acquire another image, you must execute the Acquisition Start command again.
In Continuous acquisition mode, the camera continuously acquires and transfers images until acquisition is switched off. After the Acquisition Start command has been executed, the camera waits for trigger signals. The camera will continue acquiring images until an Acquisition Stop command is executed.
// Configure single frame acquisition on the camera
camera.Parameters[PLCamera.AcquisitionMode].SetValue(PLCamera.AcquisitionMode.SingleFrame);
// Switch on image acquisition
camera.Parameters[PLCamera.AcquisitionStart].Execute();
// The camera waits for a trigger signal.
// When a Frame Start trigger signal has been received
// and an image has been acquired, the camera executes
// an Acquisition Stop command internally.
// Configure continuous image acquisition on the camera
camera.Parameters[PLCamera.AcquisitionMode].SetValue(PLCamera.AcquisitionMode.Continuous);
// Switch on image acquisition
camera.Parameters[PLCamera.AcquisitionStart].Execute();
// The camera waits for trigger signals.
// (...)
// Switch off image acquisition
camera.Parameters[PLCamera.AcquisitionStop].Execute();
Before a camera can start capturing images, image acquisition has to be switched on first. Otherwise, the camera won't react to incoming trigger signals.
After the Acquisition Stop command has been executed, image acquisition is switched off and the camera no longer reacts to incoming trigger signals.
Acquisition Status
The Acquisition Status camera feature allows you to determine whether the camera is waiting for trigger signals. This is useful if you want to optimize triggered image acquisition and avoid overtriggering.
Basler strongly recommends using the Acquisition Status feature only when the camera is configured for software triggering. When the camera is configured for hardware triggering, Basler recommends monitoring the camera's Trigger Wait signals instead.
To determine if the camera is currently waiting for trigger signals:
If the AcquisitionStatus parameter is true, the camera is waiting for a trigger signal of the selected trigger type. If the AcquisitionStatus parameter is false, the camera is busy.
// Specify that you want to determine if the camera is waiting for Frame Start trigger signals
camera.Parameters[PLCamera.AcquisitionStatusSelector].SetValue(PLCamera.AcquisitionStatusSelector.FrameTriggerWait);
// Get the acquisition status
bool isWaitingForFrameStart = camera.Parameters[PLCamera.AcquisitionStatus].GetValue();
if (isWaitingForFrameStart)
{
    // It is now safe to apply Frame Start trigger signals
}
Action Commands
The Action Commands camera feature allows you to execute actions on multiple cameras at roughly the same time by using a single broadcast protocol message.
If you want to execute actions on multiple cameras at exactly the same time, use the Scheduled Action Commands feature instead.
You can use action commands to perform the following tasks:
Action commands are broadcast protocol messages that you can send to multiple devices in a GigE network.
Each action protocol message contains the following information:
If the camera is within the specified network segment and if the protocol information matches the action command configuration in the camera, the camera executes the corresponding action.
Action Device Key
A 32-bit number of your choice used to authorize the execution of an action command on the camera. If the action device key on the camera and the action device key in the protocol message are identical, the camera executes the corresponding action. The device key is write-only; it can't be read out of the camera.
Action Group Key
A 32-bit number of your choice used to define a group of devices on which an action should be executed. Each camera can be assigned to one group only. If the action group key on the camera and the action group key in the protocol message are identical, the camera will execute the corresponding action.
Action Group Mask
A 32-bit number of your choice used to filter out a sub-group of cameras belonging to a group of cameras. The cameras belonging to a sub-group execute an action at the same time.
The filtering is done using a logical bitwise AND operation on the group mask number of the action command and the group mask number of a camera. If both binary numbers have at least one common bit set to 1 (i.e., the result of the AND operation is non-zero), the corresponding camera belongs to the sub-group.
Example: Assume that a group of six cameras is installed on an assembly line. To execute actions on specific sub-groups, the following group mask numbers have been assigned to the cameras (sample values):
Camera | Group Mask Number (Binary) | Group Mask Number (Hexadecimal) |
---|---|---|
1 | 000001 | 0x1 |
2 | 000010 | 0x2 |
3 | 000100 | 0x4 |
4 | 001000 | 0x8 |
5 | 010000 | 0x10 |
6 | 100000 | 0x20 |
In this example, an action command with an action group mask of 000111 (0x7) executes an action on cameras 1, 2, and 3. And an action command with an action group mask of 101100 (0x2C) executes an action on cameras 3, 4, and 6.
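The sub-group membership test described above can be sketched in a few lines of C++ (the helper name is illustrative, not part of the pylon API):

```cpp
#include <cstdint>

// A camera belongs to the addressed sub-group if its group mask and the
// group mask of the action command share at least one common bit, i.e.,
// the bitwise AND of the two masks is non-zero.
bool belongsToSubGroup(uint32_t commandMask, uint32_t cameraMask)
{
    return (commandMask & cameraMask) != 0;
}
```

With the sample values from the table, a command mask of 0x2C (101100) selects cameras 3 (0x4), 4 (0x8), and 6 (0x20), but not cameras 1, 2, or 5.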
Broadcast Address
A string variable used to define where the action command will be broadcast to. When using the pylon API, the broadcast address must be in dot notation, e.g., "255.255.255.255" (all adapters), "192.168.1.255" (all devices in a single subnet 192.168.1.xxx), or "192.168.1.38" (a single device). This parameter is optional. If omitted, "255.255.255.255" will be used.
Example Setup
The following example setup will give you an idea of the basic concept of action commands. To analyze the movement of a horse, a group of cameras is installed parallel to a race track.
When the horse passes, four cameras (subgroup 1) synchronously execute an action (image acquisition in this example). As the horse advances, the next four cameras (subgroup 2) synchronously capture images. One after the other, the subgroups continue in this fashion until the horse has reached the end of the race track. The resulting images can be combined and analyzed in a subsequent step.
In this sample use case, the following must be defined:
Using Action Commands
Configuring the Cameras
To configure the cameras so that they can receive action commands and perform one or more of the supported tasks, set the action device key, the action group key, and the action group mask on each camera. The same procedure applies if you want to configure Scheduled Action Commands on your cameras.
Issuing an Action Command
To issue an action command, call the IssueActionCommand method in your application.
Example:
// Example: Configuring a group of cameras for synchronous image acquisition.
// It is assumed that the "cameras" object is an instance of CBaslerGigEInstantCameraArray.
//--- Start of camera setup ---
for (size_t i = 0; i < cameras.GetSize(); ++i)
{
// Open the camera connection
cameras[i].Open();
// Configure the trigger selector
cameras[i].TriggerSelector.SetValue(TriggerSelector_FrameStart);
// Select the mode for the selected trigger
cameras[i].TriggerMode.SetValue(TriggerMode_On);
// Configure the source for the selected trigger
cameras[i].TriggerSource.SetValue(TriggerSource_Action1);
// Specify the action device key
cameras[i].ActionDeviceKey.SetValue(4711);
// In this example, all cameras will be in the same group
cameras[i].ActionGroupKey.SetValue(1);
// Specify the action group mask. In this example, all cameras will respond to any mask other than 0
cameras[i].ActionGroupMask.SetValue(0xffffffff);
}
//--- End of camera setup ---
// Send an action command to the cameras
GigeTL->IssueActionCommand(4711, 1, 0xffffffff, "192.168.1.255");
Auto Functions
Auto functions are particularly useful for maintaining good image quality when imaging conditions change frequently. Most auto functions are the automatic counterparts to setting a parameter manually. For example, the Gain Auto feature controls the GainRaw parameter automatically within specified limits.
The individual auto functions can be used at the same time. If you are using Exposure Auto and Gain Auto at the same time, you can use the Auto Function Profile feature to specify how the effects of gain and exposure time are balanced. The pixel data for the auto functions can come from one or multiple Auto Function ROIs. To operate properly, at least one Auto Function ROI must be assigned to each auto function.
Auto Function Profile
The Auto Function Profile camera feature allows you to specify how gain and exposure time are balanced when the camera is making automatic adjustments.
To set the auto function profile, set the AutoFunctionProfile parameter to one of the following values:
Gain Minimum: Gain is kept as low as possible and the frame rate is kept as high as possible during automatic adjustments.
Exposure Minimum: The exposure time is kept as low as possible during automatic adjustments.
Gain and exposure time are optimized to reduce flickering. If the camera is operating in an environment where the lighting flickers at a 50-Hz or a 60-Hz rate, the flickering lights can cause significant changes in brightness from image to image. Enabling the anti-flicker profile may reduce the effect of the flickering in the captured images.
Choose the frequency (50 Hz or 60 Hz) according to your local power line frequency (e.g., North America: 60 Hz, Europe: 50 Hz).
// Set the auto function profile to Gain Minimum
camera.Parameters[PLCamera.AutoFunctionProfile].SetValue(PLCamera.AutoFunctionProfile.GainMinimum);
// Set the auto function profile to Exposure Minimum
camera.Parameters[PLCamera.AutoFunctionProfile].SetValue(PLCamera.AutoFunctionProfile.ExposureMinimum);
// Enable Gain and Exposure Auto auto functions and set the operating mode to Continuous
camera.Parameters[PLCamera.GainAuto].SetValue(PLCamera.GainAuto.Continuous);
camera.Parameters[PLCamera.ExposureAuto].SetValue(PLCamera.ExposureAuto.Continuous);
Auto Function ROI
The Auto Function ROI camera feature allows you to specify the part of the sensor array that you want to use to control the camera's auto functions. You can create several Auto Function ROIs, each occupying a different part of the sensor array. The settings for the Auto Function ROI feature are independent of the settings for the Image ROI feature.
Changing Position and Size of an Auto Function ROI
By default, all Auto Function ROIs are set to the full resolution of the camera's sensor. However, you can change their positions and sizes as required. To change the position and size of an Auto Function ROI:
The position of an Auto Function ROI is specified based on the lines and columns of the sensor array.
Example: Assume that you have selected Auto Function ROI 1 and specified the following settings:
This creates the following Auto Function ROI 1:
Only the pixel data from the area of overlap between the Auto Function ROI and the Image ROI will be used by the auto function assigned to it.
If the Reverse X or Reverse Y feature or both are enabled, the position of the Auto Function ROI relative to the sensor remains the same. As a consequence, different regions of the image will be controlled depending on whether or not Reverse X, Reverse Y or both are enabled.
Assigning Auto Functions
By default, each Auto Function ROI is assigned to a specific auto function. For example, the pixel data from Auto Function ROI 2 is used to control the Balance White Auto auto function.
On some camera models, the default assignments can be changed. To do so:
Exposure Auto and Gain Auto Assignments Work Together
When making Auto Function ROI assignments, the Gain Auto auto function and the Exposure Auto auto function always work together. They are considered as a single auto function named "Intensity" or "Brightness", depending on your camera model.
This does not imply, however, that Gain Auto and Exposure Auto must always be enabled at the same time.
Guidelines
When you are setting an Auto Function ROI, you must follow these guidelines:
Guideline | Example |
---|---|
AutoFunctionAOIOffsetX + AutoFunctionAOIWidth ≤ Width of camera sensor | Camera with a 1920 x 1080 pixel sensor: AutoFunctionAOIOffsetX + AutoFunctionAOIWidth ≤ 1920 |
AutoFunctionAOIOffsetY + AutoFunctionAOIHeight ≤ Height of camera sensor | Camera with a 1920 x 1080 pixel sensor: AutoFunctionAOIOffsetY + AutoFunctionAOIHeight ≤ 1080 |
Overlap Between Auto Function ROI and Image ROI
The size and position of an Auto Function ROI can be identical to the size and position of the Image ROI, but this is not a requirement. For an auto function to work, it is sufficient if both ROIs overlap each other partially.
The overlap between Auto Function ROI and Image ROI determines whether and to what extent the auto function will control the related image property. Only the pixel data from the areas of overlap will be used by the auto function to control the image property of the entire image.
Basler strongly recommends completely including the Auto Function ROI within the Image ROI or choosing identical positions and sizes for Auto Function ROI and Image ROI.
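To illustrate why the overlap matters, the sketch below computes the rectangle of overlap between two ROIs. The Roi struct and the function are illustrative, not part of the pylon API:

```cpp
#include <algorithm>

struct Roi { int offsetX, offsetY, width, height; };

// Returns the rectangle of overlap between two ROIs. A width or height
// of 0 means the ROIs don't overlap at all, so the auto function would
// receive no pixel data.
Roi overlap(const Roi& a, const Roi& b)
{
    int x1 = std::max(a.offsetX, b.offsetX);
    int y1 = std::max(a.offsetY, b.offsetY);
    int x2 = std::min(a.offsetX + a.width,  b.offsetX + b.width);
    int y2 = std::min(a.offsetY + a.height, b.offsetY + b.height);
    return { x1, y1, std::max(0, x2 - x1), std::max(0, y2 - y1) };
}
```

For example, a 640 x 480 Auto Function ROI at (0, 0) and a 640 x 480 Image ROI at (320, 240) overlap only in a 320 x 240 region; only that region feeds the auto function.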
Specifics
Camera Model | Auto Function ROIs | Default Assignments | Assignments Can Be Changed |
---|---|---|---|
All ace GigE camera models | AOI 1, AOI 2 | AOI 1: Intensity (Gain Auto + Exposure Auto); AOI 2: White Balance (Balance White Auto) | Yes |
// Select Auto Function AOI 1
camera.Parameters[PLCamera.AutoFunctionAOISelector].SetValue(PLCamera.AutoFunctionAOISelector.AOI1);
// Specify position and size of the Auto Function ROI selected
camera.Parameters[PLCamera.AutoFunctionAOIOffsetX].SetValue(10);
camera.Parameters[PLCamera.AutoFunctionAOIOffsetY].SetValue(10);
camera.Parameters[PLCamera.AutoFunctionAOIWidth].SetValue(500);
camera.Parameters[PLCamera.AutoFunctionAOIHeight].SetValue(400);
// Enable Balance White Auto for the Auto Function ROI selected
camera.Parameters[PLCamera.AutoFunctionAOIUsageWhiteBalance].SetValue(true);
// Enable the 'Intensity' auto function (Gain Auto + Exposure Auto)
// for the Auto Function ROI selected
// Note: On some camera models, you must use AutoFunctionROIUseIntensity instead
camera.Parameters[PLCamera.AutoFunctionAOIUsageIntensity].SetValue(true);
Binning
The Binning camera feature allows you to combine sensor pixel values into a single value. This may increase the signal-to-noise ratio or the camera's response to light. To configure binning, the camera must be idle, i.e., not capturing images.
On monochrome cameras, the camera combines (sums or averages) the pixel values of directly adjacent pixels:
On color cameras, the camera combines (sums or averages) the pixel values of adjacent pixels of the same color:
Specifying a Binning Factor
You can choose between horizontal and vertical binning. You can use both binning directions at the same time or configure only vertical or only horizontal binning.
To specify a horizontal binning factor, enter a value for the BinningHorizontal parameter. To specify the vertical binning factor, enter a value for the BinningVertical parameter. The value of the parameters defines the binning factor. Depending on your camera model, the following values are available:
For example, entering a value of 3 for BinningHorizontal enables horizontal binning by 3. You can use horizontal and vertical binning at the same time. However, if you use different binning factors, objects will appear distorted in the image.
Choosing a Binning Mode
To select the binning mode for horizontal binning, set the BinningHorizontalMode parameter. To select the binning mode for vertical binning, set the BinningVerticalMode parameter. The binning mode defines how pixels are combined when binning is enabled. Depending on your camera model, the following binning modes are available:
Both modes reduce the amount of image data to be transferred. This may increase the camera's frame rate.
Considerations When Using Binning
When you are using binning, the settings for your Image ROIs and Auto Function ROIs refer to the binned rows and columns. For example, assume that you are using a camera with a 1280 x 960 sensor. Horizontal binning by 2 and vertical binning by 2 are enabled. In this case, the maximum ROI width is 640 and the maximum ROI height is 480.
Using binning with the binning mode set to Sum can significantly increase the camera’s response to light. When pixel values are summed, the acquired images may look overexposed. If this is the case, you can reduce the lens aperture, the intensity of your illumination, the camera’s Exposure Time setting, or the camera’s Gain setting.
Using binning effectively reduces the resolution of the camera’s imaging sensor. For example, if you enable horizontal binning by 2 and vertical binning by 2 on a camera with a 1280 x 960 sensor, the effective resolution of the sensor is reduced to 640 x 480.
Objects will only appear undistorted in the image if the numbers of binned lines and columns are equal. With all other combinations, objects will appear distorted. For example, if you combine vertical binning by 2 with horizontal binning by 4, the target objects will appear squashed.
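The effective-resolution arithmetic described above can be sketched as follows (the helper is illustrative, not part of the pylon API):

```cpp
struct Resolution { int width, height; };

// Effective sensor resolution after binning: each binned block of
// binH x binV pixels produces one output pixel, so the maximum ROI
// width and height shrink by the respective binning factors.
Resolution binnedResolution(int sensorW, int sensorH, int binH, int binV)
{
    return { sensorW / binH, sensorH / binV };
}
```

For the 1280 x 960 sensor from the example, horizontal and vertical binning by 2 yields 640 x 480; binning by 4 x 2 would yield 320 x 480, with objects appearing squashed horizontally.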
Binning Factors
Camera Model | Horizontal Binning Factors | Vertical Binning Factors | Allowed Combinations (H x V Binning) |
---|---|---|---|
acA2500-20gm | 1, 2, 3, 4 | 1, 2, 3, 4 | All combinations |
Binning Modes
Camera Model | Horizontal Binning Modes |
Vertical Binning Modes |
Allowed Combinations (H x V Binning Mode) |
---|---|---|---|
acA2500-20gm | Average Sum |
Average Sum |
All combinations |
// Enable horizontal binning by 4
camera.Parameters[PLCamera.BinningHorizontal].SetValue(4);
// Enable vertical binning by 2
camera.Parameters[PLCamera.BinningVertical].SetValue(2);
// Set the horizontal binning mode to Average
camera.Parameters[PLCamera.BinningHorizontalMode].SetValue(PLCamera.BinningHorizontalMode.Average);
// Set the vertical binning mode to Sum
camera.Parameters[PLCamera.BinningVerticalMode].SetValue(PLCamera.BinningVerticalMode.Sum);
Black Level
The Black Level camera feature allows you to change the overall brightness of an image by changing the gray values of the pixels by a specified amount. For example, if you set a black level that results in a gray value increase of 3, the gray value A of each pixel in the image is increased by 3 relative to its original value B: A = B + 3.
To adjust the black level, enter a value for the BlackLevel parameter. The minimum black level setting is 0.
Camera Model | Maximum Black Level [DN] |
---|---|
acA2500-20gm | 255 |
Black Level Effect
Camera Model | Change in BlackLevel Parameter Value | Resulting Change in Gray Value |
---|---|---|
acA2500-20gm | 8-bit pixel format: +/- 4; 10-bit pixel format: +/- 1; 12-bit pixel format: +/- 1 | +/- 1 |
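Based on the table above, a hypothetical helper can translate a BlackLevelRaw change into the resulting gray value change for the 8-bit case (integer division and the function name are assumptions for illustration):

```cpp
// For 8-bit pixel formats on the acA2500-20gm, a BlackLevelRaw change
// of +/- 4 changes the gray value by +/- 1 (see the table above), so the
// gray value change is the raw change divided by 4.
int grayValueChange8Bit(int blackLevelRawChange)
{
    return blackLevelRawChange / 4;
}
```

For example, setting BlackLevelRaw to 32 (as in the code sample below) raises the gray value of each pixel by 8 in an 8-bit format.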
// Set the black level to 32
camera.Parameters[PLCamera.BlackLevelRaw].SetValue(32);
Center X and Center Y
To enable Center X, set the CenterX parameter to true. The camera automatically adjusts the OffsetX parameter value to center the Image ROI horizontally. The OffsetX parameter becomes read-only.
To enable Center Y, set the CenterY parameter to true. The camera automatically adjusts the OffsetY parameter value to center the Image ROI vertically. The OffsetY parameter becomes read-only.
// Enable Center X
camera.Parameters[PLCamera.CenterX].SetValue(true);
// Enable Center Y
camera.Parameters[PLCamera.CenterY].SetValue(true);
Counter
The Counter camera feature allows you to count certain camera events, e.g., the number of images acquired. You can get the current value of a counter by retrieving the related data chunk. If your camera supports the Counter feature, multiple counters are available. With one exception (see below), every counter has the following characteristics:
Exception: On some camera models, Counter 2 can be used to control the sequencer. This counter has different characteristics due to its specific purpose.
Getting the Value of a Counter
To get the current value of a counter, retrieve the related data chunk using the Data Chunks feature.
Resetting a Counter
To reset a counter:
Additional Parameters
Camera Model | Counter Name | Function | Event Source | Related Data Chunk | Can Be Reset |
---|---|---|---|---|---|
All ace GigE camera models | Counter 1 | Counts number of hardware frame start trigger signals received, regardless of whether they cause image acquisitions or not | Frame Trigger | Trigger Input Counter Chunk | Yes |
All ace GigE camera models | Counter 2 | Counts number of acquired images | Frame Start | Frame Counter Chunk | Yes |
// Reset Counter 1 via software command
camera.Parameters[PLCamera.CounterSelector].SetValue(PLCamera.CounterSelector.Counter1);
camera.Parameters[PLCamera.CounterResetSource].SetValue(PLCamera.CounterResetSource.Software);
camera.Parameters[PLCamera.CounterReset].Execute();
// Get the event source of Counter 1
camera.Parameters[PLCamera.CounterSelector].SetValue(PLCamera.CounterSelector.Counter1);
string e = camera.Parameters[PLCamera.CounterEventSource].GetValue();
Data Chunks
Data chunks allow you to add supplementary information to individual image acquisitions. The desired supplementary information is generated and appended as data chunks to the image data. Image data is also considered a "chunk". This "image data chunk" can't be disabled and is always the first chunk transmitted by the camera. If one or more data chunks are enabled, these chunks are transmitted as chunk 2, 3, and so on.
The figure below shows a set of chunks with the leading image data chunk and appended data chunks. The example assumes that the CRC checksum chunk feature is enabled.
After data chunks have been transmitted to the computer, they must be retrieved to obtain their information. The exact procedure depends on your camera model and the programming language used for your application. For more information about retrieving data chunks, see the Programmer's Guide and Reference Documentation delivered with the Basler pylon Camera Software Suite.
Additional Metadata
Besides the data chunks, the camera adds additional metadata to individual images, e.g., the image height, image width, the Image ROI offset, and the pixel format used. This information can be retrieved by accessing the grab result data via the pylon API.
If all of the following conditions are met, the grab result data doesn't contain any useful information (image height, image width, etc. will be set to -1):
In this case, you must retrieve the additional metadata using the pylon chunk parser. For more information, see the code samples in the Programmer's Guide and Reference Documentation delivered with the Basler pylon Camera Software Suite.
Data chunks can also be viewed in the pylon Viewer.
Available Data Chunks
Gain Chunk (= GainAll Chunk)
If this chunk is available and enabled, the camera appends the gain used for image acquisition to every image. The data chunk includes the GainRaw parameter value.
Exposure Time Chunk
If this chunk is enabled, the camera appends the exposure time used for image acquisition to every image. The data chunk includes the ExposureTimeAbs parameter value. When using the Trigger Width exposure mode, the Exposure Time chunk feature is not available.
Timestamp Chunk
If this chunk is enabled, the camera appends the internal timestamp (in ticks) of the moment when image acquisition was triggered to every image.
Line Status All Chunk
If this chunk is enabled, the camera appends the status of all I/O lines at the moment when image acquisition was triggered to every image.
The data chunk includes the LineStatusAll parameter value.
Trigger Input Counter Chunk
If this chunk is available and enabled, the camera appends the number of hardware frame start trigger signals received to every image.
To do so, the camera retrieves the current value of the Counter 1 counter. On cameras with the Trigger Input Counter chunk, Counter 1 counts the number of hardware trigger signals received.
To manually reset the counter, reset Counter 1.
The trigger input counter only counts hardware trigger signals. If the camera is configured for software triggering or free run, the counter value will not increase.
Counter Value Chunk
If this chunk is available and enabled, the camera appends the number of acquired images to every image.
To do so, the camera retrieves the current value of the Counter 1 counter. On cameras with the Counter Value chunk, Counter 1 counts the number of acquired images.
To manually reset the counter, reset Counter 1.
Frame Counter Chunk
If this chunk is available and enabled, the camera appends the number of acquired images to every image.
To do so, the camera retrieves the current value of the Counter 2 counter. On cameras with the Frame Counter chunk, Counter 2 counts the number of acquired images.
To manually reset the counter, reset Counter 2.
Numbers in the counting sequence may be skipped when the acquisition mode is changed from Continuous to Single Frame. Numbers may also be skipped when overtriggering occurs.
Sequencer Set Active Chunk (= Sequence Set Index Chunk)
If this chunk is available and enabled, the camera appends the sequencer set used for image acquisition to every image.
The data chunk includes the SequencerSetActive or SequenceSetIndex parameter value (depending on your camera model).
Enabling this chunk is only useful if the camera's Sequencer feature is used for image acquisition.
CRC Checksum Chunk Feature
If this chunk is enabled, the camera appends a CRC (Cyclic Redundancy Check) checksum to every image.
The checksum is calculated using the X-modem method and includes the image data and all appended chunks, if any, except for the CRC chunk itself.
The CRC checksum chunk is always the last chunk appended to image data.
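If you want to verify the checksum on the host side, the X-modem CRC (CRC-16/XMODEM: polynomial 0x1021, initial value 0, no bit reflection) can be computed as sketched below. This is a generic implementation; consult the pylon documentation for the exact byte range over which the camera computes the checksum:

```cpp
#include <cstdint>
#include <cstddef>

// CRC-16/XMODEM: process each byte MSB-first, XORing the generator
// polynomial 0x1021 whenever the top bit of the CRC register is set.
uint16_t crc16Xmodem(const uint8_t* data, size_t len)
{
    uint16_t crc = 0;
    for (size_t i = 0; i < len; ++i) {
        crc ^= static_cast<uint16_t>(data[i]) << 8;
        for (int bit = 0; bit < 8; ++bit)
            crc = (crc & 0x8000) ? static_cast<uint16_t>((crc << 1) ^ 0x1021)
                                 : static_cast<uint16_t>(crc << 1);
    }
    return crc;
}
```

The standard check value for this variant is crc16Xmodem over the ASCII string "123456789", which yields 0x31C3; comparing the computed CRC with the value in the CRC chunk reveals transmission errors.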
Specifics
Camera Model | Available Data Chunks |
---|---|
All ace GigE camera models | Gain, Exposure Time, Timestamp, Line Status All, Trigger Input Counter, Frame Counter, Sequence Set Index, CRC Checksum |
// Enable data chunks
camera.Parameters[PLCamera.ChunkModeActive].SetValue(true);
// Select and enable Gain All chunk
camera.Parameters[PLCamera.ChunkSelector].SetValue(PLCamera.ChunkSelector.GainAll);
camera.Parameters[PLCamera.ChunkEnable].SetValue(true);
// Select and enable Exposure Time chunk
camera.Parameters[PLCamera.ChunkSelector].SetValue(PLCamera.ChunkSelector.ExposureTime);
camera.Parameters[PLCamera.ChunkEnable].SetValue(true);
// Select and enable Timestamp chunk
camera.Parameters[PLCamera.ChunkSelector].SetValue(PLCamera.ChunkSelector.Timestamp);
camera.Parameters[PLCamera.ChunkEnable].SetValue(true);
// Select and enable Line Status All chunk
camera.Parameters[PLCamera.ChunkSelector].SetValue(PLCamera.ChunkSelector.LineStatusAll);
camera.Parameters[PLCamera.ChunkEnable].SetValue(true);
// Select and enable Trigger Input Counter chunk
camera.Parameters[PLCamera.ChunkSelector].SetValue(PLCamera.ChunkSelector.TriggerInputCounter);
camera.Parameters[PLCamera.ChunkEnable].SetValue(true);
// Select and enable Frame Counter chunk
camera.Parameters[PLCamera.ChunkSelector].SetValue(PLCamera.ChunkSelector.FrameCounter);
camera.Parameters[PLCamera.ChunkEnable].SetValue(true);
// Select and enable Sequence Set Index chunk
camera.Parameters[PLCamera.ChunkSelector].SetValue(PLCamera.ChunkSelector.SequenceSetIndex);
camera.Parameters[PLCamera.ChunkEnable].SetValue(true);
// Select and enable CRC checksum chunk
camera.Parameters[PLCamera.ChunkSelector].SetValue(PLCamera.ChunkSelector.PayloadCRC16);
camera.Parameters[PLCamera.ChunkEnable].SetValue(true);
Standard Device Information Parameters
All Basler cameras mentioned in this documentation provide the following device information parameters:
Parameter Name | Access | Description |
---|---|---|
DeviceVendorName | R | The camera's vendor name, e.g., Basler. |
DeviceModelName | R | The camera's model name, e.g., acA3800-14um. |
DeviceManufacturerInfo | R | The camera's manufacturer name. Usually contains an empty string. |
DeviceVersion | R | The camera's version number. |
DeviceFirmwareVersion | R | The camera's firmware version number. |
DeviceID | R | The camera's serial number. |
DeviceUserID | RW | Used to assign a user-defined name to a camera. The name is displayed in the Basler pylon Viewer and the Basler pylon USB Configurator. The name is also visible in the "friendly name" field of the device information objects returned by pylon's device enumeration procedure. |
DeviceScanType | R | The scan type of the camera's sensor (Areascan or Linescan). |
SensorWidth | R | The actual width of the camera's sensor in pixels. |
SensorHeight | R | The actual height of the camera's sensor in pixels. |
WidthMax | R | The maximum allowed width of the Image ROI in pixels. The value adapts to the current settings for Binning, Decimation, or Scaling (if available). |
HeightMax | R | The maximum allowed height of the Image ROI in pixels. The value adapts to the current settings for Binning, Decimation, or Scaling (if available). |
Additional Device Information Parameters
Depending on your camera model, the following additional device information parameters are available:
Parameter Name | Access | Description |
---|---|---|
DeviceSFNCVersionMajor | R | If available, the major version of the Standard Features Naming Convention (SFNC) that the camera complies with, e.g., "2" for SFNC 2.3.1. |
DeviceSFNCVersionMinor | R | If available, the minor version of the Standard Features Naming Convention (SFNC) that the camera complies with, e.g., "3" for SFNC 2.3.1. |
DeviceSFNCVersionSubMinor | R | If available, the subminor version of the Standard Features Naming Convention (SFNC) that the camera complies with, e.g., "1" for SFNC 2.3.1. |
DeviceLinkSelector | RW | If available, allows you to select the link for data transmission. The parameter is preset to 0. Do not change this parameter. |
DeviceLinkSpeed | R | If available, the bandwidth negotiated on the specified link in bytes per second. |
DeviceLinkThroughputLimitMode | RW | If available, allows you to limit the maximum available bandwidth for data transmission. To enable the limit, set the parameter to On. The bandwidth is limited to the DeviceLinkThroughputLimit parameter value. |
DeviceLinkThroughputLimit | RW | If available, specifies the maximum available bandwidth for data transmission in bytes per second. To enable the limit, set the DeviceLinkThroughputLimitMode to On. |
DeviceLinkCurrentThroughput | R | If available, the actual bandwidth currently used for data transmission in bytes per second. |
DeviceIndicatorMode | RW | If available, allows you to turn the camera's status LED on or off. To turn the status LED on, set the parameter to Active. To turn the status LED off, set the parameter to Inactive. |
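As a rough illustration of how a throughput limit caps the frame rate, the sketch below divides the available bandwidth by the frame size. It considers payload only, ignoring protocol overhead, and the helper name is illustrative:

```cpp
// Upper bound on the frame rate imposed by a bandwidth limit:
// frames per second = bytes per second / bytes per frame.
// Real transports carry additional protocol overhead, so the
// achievable rate is somewhat lower.
double bandwidthLimitedFps(double throughputBytesPerSec, double bytesPerFrame)
{
    return throughputBytesPerSec / bytesPerFrame;
}
```

For example, a 1280 x 960 Mono8 image is 1,228,800 bytes; with a throughput limit of 125,000,000 bytes per second, the payload alone allows at most about 101.7 fps.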
// Example: Getting some of the camera's device information parameters
// Get the camera's vendor name
string s = camera.Parameters[PLCamera.DeviceVendorName].GetValue();
// Get the camera's firmware version
s = camera.Parameters[PLCamera.DeviceFirmwareVersion].GetValue();
// Get the camera's model name
s = camera.Parameters[PLCamera.DeviceModelName].GetValue();
// Get the width of the camera's sensor
Int64 i = camera.Parameters[PLCamera.SensorWidth].GetValue();
The camera can detect errors that you can correct yourself. If such an error occurs, the camera assigns an error code to this error and stores the error code in memory. After you have corrected the error, you can clear the error code from the list.
If several different errors have occurred, the camera stores the code for each type of error detected. The camera stores each code only once regardless of how many times it has detected the corresponding error.
Checking and Clearing Error Codes
Checking and clearing error codes is an iterative process, depending on how many errors have occurred.
Available Error Codes
Error Code | Value | Meaning |
---|---|---|
0 | No Error | The camera hasn't detected any errors since the last time the error memory was cleared. |
1 | Overtrigger | An overtrigger has occurred. |
2 | Userset | An error occurred when attempting to load a user set. Typically, this means that the user set contains an invalid value. Try loading a different user set. |
3 | Invalid Parameter | A parameter has been entered that is out of range or otherwise invalid. Typically, this error only occurs when the user sets parameters via direct register access. |
4 | Over Temperature | The camera is in the over temperature mode. This error indicates that an over temperature condition exists and that damage to camera components may occur. |
5 | Power Failure | This error indicates that the power supply is not sufficient. Check the power supply. |
6 | Insufficient Trigger Width | This error is reported in Trigger Width exposure mode when a trigger signal is shorter than the minimum exposure time. |
Specifics
Camera Model | Available Error Codes |
---|---|
acA2500-20gm | 1, 2, 3, 4, 5, 6 |
// Get the value of the last error code in the memory
string lasterror = camera.Parameters[PLCamera.LastError].GetValue();
// Clear the value of the last error code in the memory
camera.Parameters[PLCamera.ClearLastError].Execute();
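Because the camera stores one code per error type, you can check and clear error codes in a loop until the error memory is empty. This sketch assumes that the LastError parameter reports "NoError" when no errors are stored; the exact enumeration text may vary by camera model.
// Read and clear error codes until the error memory is empty
string error = camera.Parameters[PLCamera.LastError].GetValue();
while (error != "NoError")
{
    // Correct the cause of the error, then clear the error code
    camera.Parameters[PLCamera.ClearLastError].Execute();
    error = camera.Parameters[PLCamera.LastError].GetValue();
}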
Enabling Event Notification
Available Events
Frame Start Event
The Frame Start event occurs whenever a Frame Start trigger has been generated by the camera (free run) or applied externally (triggered image acquisition).
When this event occurs, the corresponding message contains the following information:
The names of the parameters containing the information vary by camera model.
Frame Start Overtrigger Event
The Frame Start Overtrigger event occurs whenever the Frame Start trigger has been overtriggered. This happens if you apply a Frame Start trigger signal when the camera is not ready to receive the signal.
When this event occurs, the corresponding message contains the following information:
The names of the parameters containing the information vary by camera model.
Frame Start Wait Event
The Frame Start Wait event occurs whenever the camera is ready to receive a Frame Start trigger signal.
When this event occurs, the corresponding message contains the following information:
The names of the parameters containing the information vary by camera model.
Frame Burst Start (= Acquisition Start) Event
The Frame Burst Start event and the Acquisition Start event are identical, only their names differ. The naming depends on your camera model.
In the following, the term "Frame Burst Start event" refers to both.
The Frame Burst Start event occurs whenever a Frame Burst Start trigger has been generated by the camera (free run) or applied externally (triggered image acquisition).
When this event occurs, the corresponding message contains the following information:
The names of the parameters containing the information vary by camera model.
Frame Burst Start Overtrigger (= Acquisition Start Overtrigger) Event
The Frame Burst Start Overtrigger event and the Acquisition Start Overtrigger event are identical, only their names differ. The naming depends on your camera model.
In the following, the term "Frame Burst Start Overtrigger event" refers to both.
The Frame Burst Start Overtrigger event occurs whenever the Frame Burst Start trigger has been overtriggered. This happens if you apply a Frame Burst Start trigger signal when the camera is not ready to receive the signal.
When this event occurs, the corresponding message contains the following information:
The names of the parameters containing the information vary by camera model.
Frame Burst Start Wait (= Acquisition Start Wait) Event
The Frame Burst Start Wait event and the Acquisition Start Wait event are identical, only their names differ. The naming depends on your camera model.
In the following, the term "Frame Burst Start Wait event" refers to both.
The Frame Burst Start Wait event occurs whenever the camera is ready to receive a Frame Burst Start trigger signal.
When this event occurs, the corresponding message contains the following information:
The names of the parameters containing the information vary by camera model.
Exposure End Event
The Exposure End event occurs whenever an image has been exposed.
When this event occurs, the corresponding message contains the following information:
The names of the parameters containing the information vary by camera model.
Event Overrun Event
If available, the Event Overrun event occurs if the camera's internal event queue has overrun. This happens if events are generated at a very high frequency and there isn't enough bandwidth available to send the events.
The Event Overrun event is a warning that events are being dropped. The notification contains no specific information about how many or which events have been dropped.
When this event occurs, the corresponding message contains the following information:
The names of the parameters containing the information vary by camera model.
Critical Temperature Event
If available, the Critical Temperature event occurs if the camera’s temperature state has reached a critical level.
When this event occurs, the corresponding message contains the following information:
Over Temperature Event
If available, the Over Temperature event occurs if the camera’s temperature state has reached the over temperature level.
When this event occurs, the corresponding message contains the following information:
Action Late Event
If available, the Action Late event occurs if the camera receives a scheduled action command with a timestamp in the past.
When this event occurs, the corresponding message contains the following information:
Specifics
Camera Model | Events Available | Event Parameters Available |
---|---|---|
acA2500-20gm | | |
// Enable the Exposure End event notification
camera.Parameters[PLCamera.EventSelector].SetValue(PLCamera.EventSelector.ExposureEnd);
camera.Parameters[PLCamera.EventNotification].SetValue(PLCamera.EventNotification.On);
// Enable the Critical Temperature event notification
camera.Parameters[PLCamera.EventSelector].SetValue(PLCamera.EventSelector.CriticalTemperature);
camera.Parameters[PLCamera.EventNotification].SetValue(PLCamera.EventNotification.On);
// Now, you must implement event handling in your application.
// For a C++ sample implementation, see the "Grab_CameraEvents" and "Grab_CameraEvents_Usb"
// code samples in the C++ Programmer's Guide and Reference Documentation delivered
// with the Basler pylon Camera Software Suite.
// For C and .NET sample implementations, see the "Events Sample" code sample in
// the C Programmer's Guide and Reference Documentation and the pylon .NET Programmer's
// Guide and Reference Documentation delivered with the Basler pylon Camera Software Suite.
Prerequisites
Enabling or Disabling Exposure Auto
To enable or disable the Exposure Auto auto function, set the ExposureAuto parameter to one of the following operating modes:
When the camera is capturing images continuously, the auto function takes effect with a short delay. The first few images may not be affected by the auto function.
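For example, to perform a single automatic adjustment and then freeze the resulting exposure time, you could use the Once operating mode (a sketch; the available operating modes may vary by camera model):
// Adjust the exposure time once; the camera sets the
// operating mode back to Off when the adjustment is complete
camera.Parameters[PLCamera.ExposureAuto].SetValue(PLCamera.ExposureAuto.Once);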
Specifying Lower and Upper Limits
The auto function adjusts the ExposureTimeAbs parameter value within limits specified by you.
To change the limits, set the AutoExposureTimeAbsLowerLimit and the AutoExposureTimeAbsUpperLimit parameters to the desired values (in µs).
Example: Assume you have set the AutoExposureTimeAbsLowerLimit parameter to 1000 and the AutoExposureTimeAbsUpperLimit parameter to 5000. During the automatic adjustment process, the exposure time will never be lower than 1000 µs and never higher than 5000 µs.
If the AutoExposureTimeAbsUpperLimit parameter is set to a high value, the camera’s frame rate may decrease.
Specifying the Target Brightness Value
The auto function adjusts the exposure time until a target brightness value, i.e., an average gray value, has been reached.
To specify the target value, use the AutoTargetValue parameter. The parameter's value range depends on the camera model and the pixel format used.
On Basler ace GigE camera models, you can also specify a Gray Value Adjustment Damping factor. On Basler dart and pulse camera models, you can specify a Brightness Adjustment Damping factor.
When a damping factor is used, the target value is reached more slowly.
Specifics
On some camera models, you can use the Remove Parameter Limits feature to increase the target value parameter limits.
Camera Model | Minimum Target Value | Maximum Target Value |
---|---|---|
All ace U GigE camera models | 50 / 800a | 205 / 3280a |
// Set the Exposure Auto auto function to its minimum lower limit
// and its maximum upper limit
double minLowerLimit = camera.Parameters[PLCamera.AutoExposureTimeAbsLowerLimit].GetMinimum();
double maxUpperLimit = camera.Parameters[PLCamera.AutoExposureTimeAbsUpperLimit].GetMaximum();
camera.Parameters[PLCamera.AutoExposureTimeAbsLowerLimit].SetValue(minLowerLimit);
camera.Parameters[PLCamera.AutoExposureTimeAbsUpperLimit].SetValue(maxUpperLimit);
// Set the target brightness value to 128
camera.Parameters[PLCamera.AutoTargetValue].SetValue(128);
// Select Auto Function ROI 1
camera.Parameters[PLCamera.AutoFunctionAOISelector].SetValue(PLCamera.AutoFunctionAOISelector.AOI1);
// Enable the 'Intensity' auto function (Gain Auto + Exposure Auto)
// for the Auto Function ROI selected
camera.Parameters[PLCamera.AutoFunctionAOIUsageIntensity].SetValue(true);
// Enable Exposure Auto by setting the operating mode to Continuous
camera.Parameters[PLCamera.ExposureAuto].SetValue(PLCamera.ExposureAuto.Continuous);
The Exposure Time camera feature specifies how long the image sensor is exposed to light during image acquisition.
To automatically set the exposure time, use the Exposure Auto feature.
Prerequisites
Setting the Exposure Time
To set the exposure time in microseconds, use the ExposureTimeAbs parameter.
The minimum exposure time, the maximum exposure time, and the increments in which the parameter can be changed vary by camera model.
Determining the Exposure Time
To determine the current exposure time in microseconds, get the value of the ExposureTimeAbs parameter.
This can be useful, for example, if the Exposure Auto auto function is enabled and you want to retrieve the automatically adjusted exposure time.
Exposure Time Mode
Depending on your camera model, the ExposureTimeMode parameter is available. It allows you to choose between the Standard and the Ultra Short exposure time mode. Using the Ultra Short exposure time mode lowers the value range of the ExposureTimeAbs parameter. It allows you to set very short exposure times.
You can set the ExposureTimeMode parameter to one of the following values:
Specifics
On some camera models, you can use the Remove Parameter Limits feature to increase the exposure time parameter limits.
Camera Model | Minimum Exposure Time [μs] | Maximum Exposure Time [μs] | Increment [μs] | ExposureTimeMode Parameter Available |
---|---|---|---|---|
acA2500-20gm | 137 | 1000000 | 1 | No |
// Determine the current exposure time
double d = camera.Parameters[PLCamera.ExposureTimeAbs].GetValue();
// Set the exposure time mode to Standard
// Note: Available on selected camera models only
camera.Parameters[PLCamera.ExposureTimeMode].SetValue(PLCamera.ExposureTimeMode.Standard);
// Set the exposure time to 3500 microseconds
camera.Parameters[PLCamera.ExposureTimeAbs].SetValue(3500.0);
The Exposure Mode camera feature allows you to choose a method for determining the length of exposure when the camera is configured for hardware triggering.
The resulting camera behavior also depends on the Trigger Activation setting.
To set the exposure mode:
Available Exposure Modes
Timed Exposure Mode
Timed exposure mode is available on all camera models.
In this mode, the length of exposure is determined by the value of the camera’s Exposure Time setting.
If the camera is configured for software triggering, exposure starts when the software trigger signal is received and continues until the exposure time has expired.
If the camera is configured for hardware triggering, the following applies:
Avoiding Overtriggering in Timed Exposure Mode
If the Timed exposure mode is enabled, do not attempt to trigger a new exposure start while the previous exposure is still in progress. Otherwise, the trigger signal will be ignored, and a Frame Start Overtrigger event will be generated.
This scenario is illustrated below for rising edge triggering.
Trigger Width Exposure Mode
Trigger Width exposure mode is available on some camera models.
In this mode, the length of exposure is determined by the width of the hardware trigger signal. This is useful if you intend to vary the length of exposure for each captured frame.
If the camera is configured for rising edge triggering, exposure starts when the trigger signal rises and continues until the trigger signal falls:
If the camera is configured for falling edge triggering, exposure starts when the trigger signal falls and continues until the trigger signal rises:
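A possible configuration for Trigger Width exposure mode with falling edge triggering could look like this (a sketch; Trigger Width exposure mode and the TriggerActivation parameter are available on some camera models only):
// Select and enable the Frame Start trigger
camera.Parameters[PLCamera.TriggerSelector].SetValue(PLCamera.TriggerSelector.FrameStart);
camera.Parameters[PLCamera.TriggerMode].SetValue(PLCamera.TriggerMode.On);
// Configure falling edge triggering
camera.Parameters[PLCamera.TriggerActivation].SetValue(PLCamera.TriggerActivation.FallingEdge);
// Enable Trigger Width exposure mode
camera.Parameters[PLCamera.ExposureMode].SetValue(PLCamera.ExposureMode.TriggerWidth);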
Exposure Time Offset
On some camera models, when using the Trigger Width exposure mode, the exposure is slightly longer than the width of the trigger signal. This is because an exposure time offset is added automatically to the time determined by the width of the trigger signal.
To achieve the desired exposure time in Trigger Width exposure mode, you must compensate for the exposure time offset. To do so:
Example: Assume you want to achieve an exposure time of 3000 µs and the exposure time offset is 64 µs. In this case, use 3000 - 64 = 2936 µs as the high or low time for the trigger signal.
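In code, the compensation could be expressed as follows (the 64 µs offset is an example value; check the Specifics table for your camera model):
// Compensate for the exposure time offset in Trigger Width exposure mode
double desiredExposureTime = 3000.0; // in microseconds
double exposureTimeOffset = 64.0;    // example value; varies by camera model
// Use the result as the high or low time for the trigger signal (= 2936 µs)
double triggerSignalWidth = desiredExposureTime - exposureTimeOffset;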
Avoiding Overtriggering in Trigger Width Exposure Mode
If the Trigger Width exposure mode is enabled, do not send trigger signals at too high a rate. Otherwise, trigger signals will be ignored, and Frame Start Overtrigger events will be generated.
You can avoid overtriggering in Trigger Width exposure mode by doing the following:
Specifics
Camera Model | Available Exposure Modes | Exposure Time Offset [µs] |
---|---|---|
acA2500-20gm | Timed, Trigger Width | Not specified |
// Select and enable the Frame Start trigger
camera.Parameters[PLCamera.TriggerSelector].SetValue(PLCamera.TriggerSelector.FrameStart);
camera.Parameters[PLCamera.TriggerMode].SetValue(PLCamera.TriggerMode.On);
// Set the trigger source to Line 1
camera.Parameters[PLCamera.TriggerSource].SetValue(PLCamera.TriggerSource.Line1);
// Enable Timed exposure mode
camera.Parameters[PLCamera.ExposureMode].SetValue(PLCamera.ExposureMode.Timed);
The Exposure Overlap Time Max camera feature allows you to optimize overlapping image acquisition.
Using this parameter is especially useful if you want to maximize the camera's frame rate, i.e., if you want to trigger at the highest rate possible.
The parameter is only available if you operate the camera in Trigger Width exposure mode.
Prerequisites
How It Works
You can use overlapping image acquisition to increase the camera's frame rate. With overlapping image acquisition, the exposure of a new image begins while the camera is still reading out the sensor data of the previous image.
In Trigger Width exposure mode, the camera doesn't "know" how long the image will be exposed before the trigger period is complete. Because of that, the camera can't fully optimize overlapping image acquisition.
To avoid this problem, enter a value for the ExposureOverlapTimeMaxAbs parameter that represents the shortest exposure time you intend to use (in µs). This helps the camera to optimize overlapping image acquisition.
If you have entered a value for the ExposureOverlapTimeMaxAbs parameter, make sure never to apply a trigger signal that is shorter than the given parameter value.
Setting the Exposure Overlap Time Max
To optimize the camera's frame rate in Trigger Width exposure mode, enter a value for the ExposureOverlapTimeMaxAbs parameter that represents the shortest exposure time you intend to use (in µs).
Example: Assume that you want to use the Trigger Width exposure mode to apply exposure times in a range from 3000 μs to 5500 μs. In this case, set the camera’s ExposureOverlapTimeMaxAbs parameter to 3000.
Additional Parameters
On some camera models, the ExposureOverlapTimeMode parameter is available.
If the parameter is available, you can set it to one of the following values:
If the parameter is not available, the camera always operates in the "Manual" mode.
Specifics
Camera Model | ExposureOverlapTimeMode Parameter Available |
---|---|
acA2500-20gm | No |
// Set the maximum overlap time between sensor
// exposure and sensor readout to 10000 microseconds
camera.Parameters[PLCamera.ExposureOverlapTimeMaxAbs].SetValue(10000.0);
Prerequisites
How It Works
The camera applies a gamma correction value (γ) to the brightness value of each pixel according to the following formula (red pixel value (R) of a color camera shown as an example):
The maximum pixel value (Rmax) equals 255 for 8-bit pixel formats or 1023 for 10-bit pixel formats.
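Assuming the standard gamma correction formula R_corrected = Rmax × (R / Rmax)^γ, the adjustment can be sketched as follows (illustration only; the camera performs this calculation internally):
// Sketch of gamma correction for a red pixel value in an 8-bit pixel format
double gamma = 1.2;
double rMax = 255.0;  // 255 for 8-bit pixel formats, 1023 for 10-bit pixel formats
double r = 100.0;     // original red pixel value
double rCorrected = rMax * Math.Pow(r / rMax, gamma);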
Enabling Gamma Correction
To enable gamma correction, use the Gamma parameter. The parameter's value range is 0 to ≈4.
In all cases, black pixels (brightness = 0) and white pixels (brightness = maximum) will not be adjusted.
If you enable gamma correction and the pixel format is set to a 12-bit pixel format, some image information will be lost. Pixel data output will still be 12-bit, but the pixel values will be interpolated during the gamma correction process. Basler does not recommend using the Gamma feature with 12-bit pixel formats.
Additional Parameters
Depending on your camera model, the following additional parameters are available:
Camera Model | Additional Parameters |
---|---|
All ace GigE camera models | |
// Enable the Gamma feature
camera.Parameters[PLCamera.GammaEnable].SetValue(true);
// Set the gamma type to User
camera.Parameters[PLCamera.GammaSelector].SetValue(PLCamera.GammaSelector.User);
// Set the Gamma value to 1.2
camera.Parameters[PLCamera.Gamma].SetValue(1.2);
The Gain camera feature allows you to increase the brightness of the images output by the camera. Increasing the gain increases all pixel values of the image. To adjust the gain value automatically, use the Gain Auto feature.
Prerequisites
Configuring Gain Settings
"Raw" and Absolute Gain Values
On some camera models, the gain must be entered as a "raw" value on an integer scale. The camera needs the raw value for its internal processing mechanism. The raw value, however, isn't the same as the actual gain value, which is expressed in decibels (dB).
In the camera-specific Gain Properties table, you can find a formula to calculate the absolute gain (in dB) from the raw gain value.
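For example, using the acA2500-20gm formula given in the Specifics section of this topic, the absolute gain could be calculated like this:
// Calculate the absolute gain (in dB) from the "raw" gain value
// using the acA2500-20gm formula: Gain = 20 * log10(GainRaw / 136)
Int64 gainRaw = camera.Parameters[PLCamera.GainRaw].GetValue();
double gainInDb = 20.0 * Math.Log10(gainRaw / 136.0);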
Analog and Digital Gain
Analog gain is applied before the signal from the camera sensor is converted into digital values. Digital gain is applied after the conversion, i.e., it is basically a multiplication of the digitized values.
Depending on your camera model, the mechanisms to control analog and digital gain can vary:
Specifics
Gain Properties
Camera Model | User-Settable Gain Control? | Gain Control Mechanism | Threshold | Gain Must be Entered as ... | Formula to Calculate Gain from Raw Gain Values |
---|---|---|---|---|---|
acA2500-20gm | No | Digital gain only | - | Raw | Gain = 20 × log10(GainRaw / 136) |
Gain Values
On some camera models, you can use the Remove Parameter Limits feature to increase the gain parameter limits.
Camera Model | Minimum Gain Setting | Minimum Gain Setting with Vertical Binning Enabled | Maximum Gain Setting (8-bit Pixel Formats) | Maximum Gain Setting (10-bit Pixel Formats) | Maximum Gain Setting (12-bit Pixel Formats) |
---|---|---|---|---|---|
acA2500-20gm | 136 | 136 | 542 | 542 | - |
// Set the "raw" gain value to 400
// If you want to know the resulting gain in dB, use the formula given in this topic
camera.Parameters[PLCamera.GainRaw].SetValue(400);
The Gain Auto camera feature automatically adjusts the gain within specified limits until a target brightness value has been reached.
The pixel data for the auto function can come from one or multiple Auto Function ROIs.
If you want to use Gain Auto and Exposure Auto at the same time, use the Auto Function Profile feature to specify how the effects of both are balanced.
To adjust the gain manually, use the Gain feature.
Prerequisites
Enabling or Disabling Gain Auto
To enable or disable the Gain Auto auto function, set the GainAuto parameter to one of the following operating modes:
When the camera is capturing images continuously, the auto function takes effect with a short delay. The first few images may not be affected by the auto function.
Specifying Lower and Upper Limits
The auto function adjusts the GainRaw parameter value within limits specified by you.
To change the limits, set the AutoGainRawLowerLimit and the AutoGainRawUpperLimit parameters to the desired values.
Example: Assume you have set the AutoGainRawLowerLimit parameter to 2 and the AutoGainRawUpperLimit parameter to 6. During the automatic adjustment process, the gain will never be lower than 2 and never higher than 6.
The auto function adjusts the gain until a target brightness value, i.e., an average gray value, has been reached.
To specify the target value, use the AutoTargetValue parameter. The parameter's value range depends on the camera model and the pixel format used.
On Basler ace GigE camera models, you can also specify a Gray Value Adjustment Damping factor. On Basler dart and pulse camera models, you can specify a Brightness Adjustment Damping factor.
When a damping factor is used, the target value is reached more slowly.
Specifics
On some camera models, you can use the Remove Parameter Limits feature to increase the target value parameter limits.
Camera Model | Minimum Target Value | Maximum Target Value |
---|---|---|
All ace U GigE camera models | 50 / 800a | 205 / 3280a |
// Set the Gain Auto auto function to its minimum lower limit
// and its maximum upper limit
double minLowerLimit = camera.Parameters[PLCamera.AutoGainRawLowerLimit].GetMinimum();
double maxUpperLimit = camera.Parameters[PLCamera.AutoGainRawUpperLimit].GetMaximum();
camera.Parameters[PLCamera.AutoGainRawLowerLimit].SetValue(minLowerLimit);
camera.Parameters[PLCamera.AutoGainRawUpperLimit].SetValue(maxUpperLimit);
// Specify the target value
camera.Parameters[PLCamera.AutoTargetValue].SetValue(150);
// Select Auto Function ROI 1
camera.Parameters[PLCamera.AutoFunctionAOISelector].SetValue(PLCamera.AutoFunctionAOISelector.AOI1);
// Enable the 'Intensity' auto function (Gain Auto + Exposure Auto)
// for the Auto Function ROI selected
camera.Parameters[PLCamera.AutoFunctionAOIUsageIntensity].SetValue(true);
// Enable Gain Auto by setting the operating mode to Continuous
camera.Parameters[PLCamera.GainAuto].SetValue(PLCamera.GainAuto.Continuous);
The Gray Value Adjustment Damping camera feature controls the speed with which pixel gray values are changed when Exposure Auto, Gain Auto, or both are enabled.
This feature is similar to the Brightness Adjustment Damping feature, which is only available on Basler dart and pulse camera models.
Prerequisites
The Exposure Auto or Gain Auto auto function or both must be set to Once or Continuous.
How It Works
The lower the gray value adjustment damping factor, the slower the target brightness value is reached. This can be useful, for example, to avoid the auto functions being disrupted by objects moving in and out of the camera’s area of view.
The Brightness Adjustment Damping feature, which is only available on Basler dart and pulse camera models, works vice versa: The lower the brightness adjustment damping factor, the faster the target brightness value is reached.
Specifying a Damping Factor
To specify a damping factor, adjust the GrayValueAdjustmentDampingAbs parameter value.
You can set the parameter in a range from 0.0 to 0.78125. Higher parameter values mean that the target value is reached sooner.
By default, the factor is set to 0.6836. At this setting, the damping control is as stable and quick as possible.
// Enable Gain Auto by setting the operating mode to Continuous
camera.Parameters[PLCamera.GainAuto].SetValue(PLCamera.GainAuto.Continuous);
// Set gray value adjustment damping to 0.5859
camera.Parameters[PLCamera.GrayValueAdjustmentDampingAbs].SetValue(0.5859);
The Image ROI camera feature allows you to specify the part of the sensor array that you want to use for image acquisition.
ROI is short for region of interest (formerly AOI = area of interest).
If an Image ROI has been specified, the camera will only transmit pixel data from within that region. This can increase the camera's maximum frame rate significantly.
The Image ROI settings are independent from the Auto Function ROI settings.
Prerequisites
Changing Position and Size of an Image ROI
With the factory settings enabled, the camera is set to a default resolution. However, you can change the position and size as required.
To change the position and size of the Image ROI:
The origin of the Image ROI is in the top left corner of the sensor array (column 0, row 0).
Example: Assume that you have specified the following settings:
This creates the following Image ROI:
If the Binning feature is enabled, the settings for the Image ROI refer to the binned lines and columns and not to the physical lines in the sensor.
Guidelines
When you are specifying an Image ROI, follow these guidelines:
Guideline | Example |
---|---|
OffsetX + Width ≤ SensorWidth | Camera with a 1920 x 1080 pixel sensor: OffsetX + Width ≤ 1920 |
OffsetY + Height ≤ SensorHeight | Camera with a 1920 x 1080 pixel sensor: OffsetY + Height ≤ 1080 |
Specifics
Image ROI Sizes
Camera Model | Minimum Width | Width Increment | Minimum Height | Height Increment |
---|---|---|---|---|
acA2500-20gm | 32 | 32 | 1 | 1 |
Image ROI Offsets
Camera Model | Minimum Offset X | Offset X Increment | Minimum Offset Y | Offset Y Increment |
---|---|---|---|---|
acA2500-20gm | 0 | 1 | 0 | 1 |
// Set the width to the maximum value
Int64 maxWidth = camera.Parameters[PLCamera.Width].GetMaximum();
camera.Parameters[PLCamera.Width].SetValue(maxWidth);
// Set the height to 500
camera.Parameters[PLCamera.Height].SetValue(500);
// Set the offset to 0,0
camera.Parameters[PLCamera.OffsetX].SetValue(0);
camera.Parameters[PLCamera.OffsetY].SetValue(0);
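To center the Image ROI on the sensor, you could derive the offsets from the sensor size (a sketch; width, height, and offsets must observe the increments listed in the Specifics tables):
// Center a 640 x 480 Image ROI on the sensor
camera.Parameters[PLCamera.Width].SetValue(640);
camera.Parameters[PLCamera.Height].SetValue(480);
Int64 offsetX = (camera.Parameters[PLCamera.SensorWidth].GetValue() - 640) / 2;
Int64 offsetY = (camera.Parameters[PLCamera.SensorHeight].GetValue() - 480) / 2;
camera.Parameters[PLCamera.OffsetX].SetValue(offsetX);
camera.Parameters[PLCamera.OffsetY].SetValue(offsetY);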
The Line Debouncer camera feature allows you to filter out invalid hardware input signals.
Only valid signals are allowed to pass through to the camera and become effective.
Prerequisites
The camera must be configured for hardware triggering.
How It Works
The line debouncer filters out unwanted short signals (contact bounce) from the rising and falling edges of incoming hardware trigger signals. To this end, the line debouncer evaluates all changes and durations of logical states of hardware signals.
The maximum duration of this evaluation period (the "line debouncer time") is defined by the LineDebouncerTimeAbs parameter. The line debouncer acts like a clock that measures the durations of the signals to identify valid signals.
The clock starts counting whenever a hardware signal changes its logical state (high to low or vice versa). If the duration of the new logical state is shorter than the line debouncer time specified, the new logical state is considered invalid and has no effect. If the duration of the new logical state is as long as the line debouncer time or longer, the new logical state is considered valid and is allowed to become effective in the camera.
Specifying a line debouncer time introduces a delay between a valid trigger signal arriving at the camera and the moment the related change of logical state is passed on to the camera. The duration of the delay is at least equal to the value of the LineDebouncerTimeAbs parameter. This is because the camera waits for the time specified as the line debouncer time to determine whether the signal is valid. Similarly, the line debouncer delays the end of a valid trigger signal.
The figure below illustrates how the line debouncer filters out invalid signals from the rising and falling edge of a hardware trigger signal. Line debouncer times that actually allow a change of logical state in the camera are labeled "OK". Also illustrated are the delays of logical states inside the camera relative to the hardware trigger signal.
Enabling the Line Debouncer
Choosing the Debouncer Value
Choosing a LineDebouncerTimeAbs value that is too low results in accepting invalid signals and signal states. Choosing a value that is too high results in rejecting valid signals and signal states. Basler recommends choosing a line debouncer time that is slightly longer than the longest expected duration of an invalid signal.
There is a small risk of rejecting short valid signals, but in most scenarios this approach delivers good results. Monitor your application and adjust the value if you find that too many valid signals are being rejected.
// Select the desired input line
camera.Parameters[PLCamera.LineSelector].SetValue(PLCamera.LineSelector.Line1);
// Set the parameter value to 10 microseconds
camera.Parameters[PLCamera.LineDebouncerTimeAbs].SetValue(10.0);
The Line Inverter camera feature allows you to invert the electrical signal level of an I/O line.
All high (1) signals are converted to low (0) signals and vice versa.
Enabling the Line Inverter
Enable the line inverter only when the I/O lines are not in use. Otherwise, the camera may show unpredictable behavior.
// Select Line 1
camera.Parameters[PLCamera.LineSelector].SetValue(PLCamera.LineSelector.Line1);
// Enable the line inverter for the I/O line selected
camera.Parameters[PLCamera.LineInverter].SetValue(true);
The Line Logic camera feature allows you to determine the logic of an I/O line.
The logic of an I/O line can either be positive or negative.
Determining the Line Logic
To determine the logic of an I/O line:
Line Logic Overview
Positive Line Logic
If the line logic is positive, the relation between the electrical status of an I/O line and the LineStatus parameter is as follows:
Electrical Status | LineStatus Parameter Value |
---|---|
Voltage level high | True |
Voltage level low | False |
Negative Line Logic
If the line logic is negative, the relation between the electrical status of an I/O line and the LineStatus parameter is as follows:
Electrical Status | LineStatus Parameter Value |
---|---|
Voltage level high | False |
Voltage level low | True |
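To check the relation shown in the tables above, you can read the LineStatus parameter for the selected line (a sketch; on most camera models, LineStatus is a Boolean parameter):
// Select Line 1
camera.Parameters[PLCamera.LineSelector].SetValue(PLCamera.LineSelector.Line1);
// Get the current status of the I/O line selected
bool status = camera.Parameters[PLCamera.LineStatus].GetValue();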
// Select a line
camera.Parameters[PLCamera.LineSelector].SetValue(PLCamera.LineSelector.Line1);
// Get the logic of the line selected
string e = camera.Parameters[PLCamera.LineLogic].GetValue();
The Line Minimum Output Pulse Width camera feature allows you to increase the signal width ("pulse width") of an output signal in order to achieve a minimum signal width.
Increasing the camera output signal width can be necessary to suit certain receivers that may require a certain minimum signal width to be able to detect the signals.
Specifying a Line Minimum Output Pulse Width
How It Works
To ensure reliable detection of camera output signals, the Line Minimum Output Pulse Width feature allows you to increase the output signal width to a minimum width. The minimum width is specified in microseconds, up to a maximum value of 100 μs.
// Select output line Line 2
camera.Parameters[PLCamera.LineSelector].SetValue(PLCamera.LineSelector.Line2);
// Set the parameter value to 10.0 microseconds
camera.Parameters[PLCamera.LineMinimumOutputPulseWidth].SetValue(10.0);
The Line Mode camera feature allows you to configure whether an I/O line is used as input or output.
You can configure the line mode of any general purpose I/O line (GPIO line). For opto-coupled I/O lines, you can only determine the current line mode.
Configuring the Line Mode
Configure the line mode only when the I/O lines are not in use. Otherwise, the camera may show unpredictable behavior.
Determining the Line Mode
// Select GPIO line 3
camera.Parameters[PLCamera.LineSelector].SetValue(PLCamera.LineSelector.Line3);
// Set the line mode to Input
camera.Parameters[PLCamera.LineMode].SetValue(PLCamera.LineMode.Input);
// Get the current line mode
string lineMode = camera.Parameters[PLCamera.LineMode].GetValue();
The Line Selector camera feature allows you to select the I/O line that you want to configure.
Selecting a Line
To select a line, set the LineSelector parameter to the desired I/O line.
Depending on the camera model, the total number of I/O lines, the format of the lines (opto-coupled or GPIO), and the pin assignment may vary. To find out what your camera model offers, check the physical interface section in the topic about your camera model. Possible tasks depend on whether the I/O line serves as input or output.
Once you have selected a line, you can do the following:
Task | Feature |
---|---|
Configuring the debouncer for an input line |
Line Debouncer |
Selecting the source signal for an output line |
Line Source |
Setting the minimum pulse width for an output line |
Line Minimum Output Pulse Width |
Setting the line status of a user settable output line |
User Output Value |
Setting the line mode of a GPIO line |
Line Mode |
Enabling the invert function |
Line Inverter |
Checking the status of a single I/O line |
Line Status |
// Select input line 1
camera.Parameters[PLCamera.LineSelector].SetValue(PLCamera.LineSelector.Line1);
The Line Source camera feature allows you to configure which signal to output on the I/O line currently selected.
This allows you to monitor the status of the camera or to control external devices. For example, you can monitor if the camera is currently exposing or you can control external flash lighting.
For each camera output line, you can set exactly one signal.
The camera sends all output signals with a short propagation delay. The delay is usually in the microsecond range.
Setting the Line Source
To set the line source:
Available Line Source Signals
Depending on your camera model, the following line source signals are available:
Trigger Wait
You can use the camera's "Trigger Wait" signals to optimize triggered image acquisition and to avoid overtriggering.
Trigger Wait signals go high when the camera is ready to receive trigger signals of the corresponding trigger type. When you apply the corresponding trigger signal, the Trigger Wait signal goes low. It stays low until the camera is ready to receive the next corresponding trigger signal.
For example, the Frame Trigger Wait signal goes high when the camera is ready to receive Frame Start trigger signals. When you apply a frame trigger signal, the signal goes low. It stays low until the camera is ready to receive the next Frame Start trigger signal:
If you operate the camera with overlapping image acquisition and the Exposure Overlap Time Max feature is available on your camera model, you can use that feature to optimize the Frame Trigger Wait signal.
Timer Active
You can use the Timer Active (or "Timer 1 Active") signal to monitor the camera's Timer feature. The signal goes high on specific camera events, e.g., on exposure start. The signal goes low after the duration specified. Optionally, you can delay the rise of the signal.
Exposure Active
If available, you can use the Exposure Active signal to monitor if the camera is currently exposing. The signal goes high when exposure starts. The signal goes low when exposure ends. On cameras configured for Rolling shutter mode, the signal goes low when exposure for the last row has ended.
The Exposure Active signal can be used to trigger a flash.
The signal is also useful in situations where either the camera or the target object is moving. For example, assume that the camera is mounted on an arm mechanism that moves the camera to different sections of a product assembly. Typically, you don't want the camera to move during exposure. In this case, you can monitor the Exposure Active signal to know when exposure is taking place. This allows you to avoid moving the camera during that time.
Flash Window
If available, you can use the Flash Window signal to determine when to use flash lighting. The signal goes high when you can start the flash lighting. The signal goes low when you should stop the flash lighting.
The signal indicates the period of time during a frame acquisition when all of the rows in the sensor are open for exposure.
Flash Window in Rolling Shutter Mode
If the camera is configured for Rolling shutter mode, Basler recommends the use of flash lighting, especially when you are capturing images of fast-moving objects. Otherwise, images can be distorted due to the temporal shift between the different exposure starts of the individual rows.
The following diagram illustrates the timing of the Flash Window signal when the camera is configured for Rolling shutter mode:
As shown above, in Rolling shutter mode, the Flash Window signal covers the period of time between the start of exposure of the last row (A) and the end of exposure of the first row (B).
In Rolling shutter mode, avoid extremely short exposure times or extremely large Image ROIs. Otherwise, the exposure time for the first row may end before exposure of the last row starts, i.e., (B) occurs before (A). In that case, the Flash Window signal would always be low:
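As a rough illustration of why short exposure times close the flash window, the following sketch models the rolling shutter with a fixed row-start offset. The model, parameter names, and numbers are simplifying assumptions for illustration only, not camera specifications:

```python
def flash_window_us(exposure_us: float, rows: int, row_offset_us: float) -> float:
    """Approximate flash window length in a simple rolling shutter model:
    row i starts exposing at i * row_offset_us and exposes for exposure_us.
    The window runs from the start of the last row's exposure (A) to the
    end of the first row's exposure (B); a negative result means (B)
    occurs before (A), i.e., the Flash Window signal stays low."""
    start_last_row = (rows - 1) * row_offset_us   # (A)
    end_first_row = exposure_us                   # (B)
    return end_first_row - start_last_row

print(flash_window_us(10000.0, 1000, 5.0))  # 5005.0 -> window exists
print(flash_window_us(1000.0, 1000, 5.0))   # -3995.0 -> no window
```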
Flash Window in Global Reset Release Shutter Mode
If the camera is configured for Global Reset Release shutter mode, you must use flash lighting. Otherwise, the brightness in each acquired image may vary significantly from top to bottom due to the differences in the exposure times of the rows. Also, when you are capturing images of fast-moving objects, images may be distorted due to the temporal shift between the different exposure ends of the individual rows.
The following diagram illustrates the timing of the Flash Window signal when the camera is configured for Global Reset Release shutter mode:
As shown above, in Global Reset Release shutter mode, the Flash Window signal spans the exposure time of the first row.
Global Shutter Mode
On cameras configured for Global shutter mode, the Flash Window signal is either not available or equivalent to the Exposure Active signal.
User Output
If an output line is configured to supply a User Output signal, you can set the status of the line by software. For more information, see the User Output Value and the User Output Value All features.
This can be useful to control external events or devices, e.g., a light source.
How to configure the output lines depends on how many User Output line sources (e.g., "User Output 1", "User Output 2") are available on your camera model.
Configuration: One User Output Line Source Available
If only one User Output line source is available ("User Output"):
Now, you can use the User Output Value or the User Output Value All feature to set the status of the line by software.
Configuration: Multiple User Output Line Sources Available
If multiple User Output line sources are available (e.g., "User Output 1" and "User Output 2"):
Now, you can use the User Output Value or the User Output Value All feature to set the status of the line by software.
Sync User Output
If available, you can use the Sync User Output signal to manually set the status of the line using the Sequencer feature.
The Sync User Output signal is similar to the User Output signal. The only difference is that Sync User Output signals can be controlled by the Sequencer feature, while the User Output signals can't.
The parameters related to the Sync User Output signals are also similar to the User Output parameters:
Specifics
Camera Model | Available Line Sources | User Output Signal Assignment |
---|---|---|
acA2500-20gm |
|
|
// Select Line 2 (output line)
camera.Parameters[PLCamera.LineSelector].SetValue(PLCamera.LineSelector.Line2);
// Select the Flash Window signal as the source signal for Line 2
camera.Parameters[PLCamera.LineSource].SetValue(PLCamera.LineSource.FlashWindow);
The Line Status camera feature allows you to determine the status of an I/O line (high or low).
To determine the status of all I/O lines in a single operation, use the Line Status All feature.
Determining the Status of an I/O Line
To determine the status of an I/O line:
A value of false (0) means that the line's status was low at the time of polling. A value of true (1) means the line's status was high at the time of polling.
If the Line Inverter feature is enabled, the camera inverts the LineStatus parameter value. A true parameter value changes to false, and vice versa.
Line Status and I/O Status
GPIO Line Configured as Input
If your camera has a GPIO line and that line is configured as input, the relation between its input status and the LineStatus parameter is as follows:
Input Status | LineStatus Parameter Value |
---|---|
Input open (not connected) | True |
Voltage level low | False |
Voltage level high | True |
This means that the line logic is positive.
GPIO Line Configured as Output
If your camera has a GPIO line and that line is configured as output, the relation between its output status and the LineStatus parameter depends on your camera model.
Opto-Coupled Input Line
If your camera has an opto-coupled input line, the relation between its input status and the LineStatus parameter is as follows:
Input Status | LineStatus Parameter Value |
---|---|
Input open (not connected) | False |
Voltage level low | False |
Voltage level high | True |
This means that the line logic is positive.
Opto-Coupled Output Line
If your camera has an opto-coupled output line, the relation between its output status and the LineStatus parameter is as follows:
Output Status | LineStatus Parameter Value | Electrical Status |
---|---|---|
0 (e.g., User Output Value set to false or Flash Window signal low) | True | Voltage level high (a) |
1 (e.g., User Output Value set to true or Flash Window signal high) | False | Voltage level low |
(a) An external pull-up resistor must be installed. Otherwise, the voltage level will be undefined.
This means that the line logic is negative.
Specifics
For information about the line status on GPIO lines configured for input and on opto-coupled I/O lines, see the tables above.
For information about the line status on GPIO lines configured for output, see the following table:
GPIO Lines Configured for Output
Camera Model | Output Status | LineStatus Parameter Value | Electrical Status |
---|---|---|---|
All ace GigE camera models | 0 (e.g., User Output Value set to false or Flash Window signal low) | True | Voltage level high (a) |
All ace GigE camera models | 1 (e.g., User Output Value set to true or Flash Window signal high) | False | Voltage level low |
(a) An external pull-up resistor must be installed. Otherwise, the voltage level will be undefined.
// Select a line
camera.Parameters[PLCamera.LineSelector].SetValue(PLCamera.LineSelector.Line1);
// Get the status of the line
bool status = camera.Parameters[PLCamera.LineStatus].GetValue();
The Line Status All camera feature allows you to determine the status of all I/O lines in a single operation.
To determine the status of an individual I/O line, use the Line Status feature.
Determining the Status of All I/O Lines
To determine the current status of all I/O lines, read the LineStatusAll parameter. The parameter is reported as a 64-bit value.
Certain bits in the value are associated with the I/O lines. Each bit indicates the status of its associated line:
Which bit is associated with which line depends on your camera model.
If the Line Inverter feature is enabled, the camera inverts the LineStatusAll parameter value. All 0 bits change to 1, and vice versa.
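Decoding the 64-bit value can be sketched as follows. The bit-to-line mapping shown is hypothetical, since the actual association depends on your camera model:

```python
def decode_line_status_all(status: int, bit_to_line: dict) -> dict:
    """Extract per-line status from a LineStatusAll value.
    bit_to_line maps a bit position to a line name (model-dependent)."""
    return {line: bool(status >> bit & 1) for bit, line in bit_to_line.items()}

# Hypothetical mapping: bit 0 = Line 1, bit 1 = Line 2, bit 2 = Line 3
mapping = {0: "Line1", 1: "Line2", 2: "Line3"}
print(decode_line_status_all(0b101, mapping))
# {'Line1': True, 'Line2': False, 'Line3': True}
```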
Line Status and I/O Status
→ See the Line Status feature documentation.
Specifics
Camera Model | Bit-to-Line Association |
---|---|
acA2500-20gm |
Example: All lines high = 0b111 |
// Get the line status of all I/O lines. Because the GenICam interface does not
// support 32-bit words, the line status is reported as a 64-bit value.
Int64 lineStatus = camera.Parameters[PLCamera.LineStatusAll].GetValue();
The LUT camera feature allows you to replace the pixel values in your images by values defined by you.
This is done by creating a user-defined lookup table (LUT).
You can also use the LUT Value All feature to replace all pixel values in a single operation.
How It Works
LUT is short for "lookup table", which is basically an indexed list of numbers. For Basler cameras, you can create a "luminance lookup table" to change the pixel values, i.e., the luminance or gray values, in your images.
In the lookup table, you can define replacement values for individual pixel values. For example, you can replace a gray value of 4095 (= maximum gray value for 12-bit pixel formats) by a gray value of 0 (= minimum gray value). This changes all completely white pixels in your images to completely black pixels.
Setting up a LUT can be useful, e.g., if you want to optimize the luminance of your images. By defining the replacement values in advance and storing them in the camera's LUT you avoid time-consuming calculations by your application. Instead, the camera can simply look up the desired new value in the LUT based on the pixel’s initial value.
Creating the LUT
Basler recommends using a programming loop (e.g., a for loop) to iterate through the values. See the sample code below.
If you want to change all pixel values, Basler recommends using the LUT Value All feature for faster execution.
Limitations
On all Basler cameras, a user-defined LUT can store up to 512 entries. This size is not sufficient to include all possible pixel values (e.g., 4096 entries for a 12-bit pixel format, 1024 entries for a 10-bit pixel format).
Therefore, the following limitations apply:
To determine the remaining pixel values, the camera performs a straight line interpolation.
Example: Assume that the camera uses a 12-bit pixel format. Also assume that you have created a LUT that converts a gray value of 24 to a gray value of 20 and a gray value of 32 to a value of 30. In this case, the camera determines the pixel values between 24 and 32 as follows:
Original Pixel Value | Value Stored in LUT | Interpolated Value | New Pixel Value (Rounded) |
---|---|---|---|
24 | 20 | 20 | 20 |
25 | - | 21.25 | 21 |
26 | - | 22.5 | 22 |
27 | - | 23.75 | 23 |
28 | - | 25 | 25 |
29 | - | 26.25 | 26 |
30 | - | 27.5 | 27 |
31 | - | 28.75 | 28 |
32 | 30 | 30 | 30 |
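The interpolation in the table above can be reproduced with a straight-line formula. Note that, judging by the table, the interpolated values are rounded down (23.75 maps to 23); this is a sketch of the arithmetic only, not of the camera's internal implementation:

```python
import math

def interpolate_lut(value: int, x0: int, y0: int, x1: int, y1: int) -> int:
    """Straight-line interpolation between two LUT entries (x0, y0)
    and (x1, y1), with the result rounded down as in the table."""
    interpolated = y0 + (y1 - y0) * (value - x0) / (x1 - x0)
    return math.floor(interpolated)

# LUT entries: gray value 24 -> 20 and gray value 32 -> 30
for v in range(24, 33):
    print(v, interpolate_lut(v, 24, 20, 32, 30))
# e.g., 25 -> 21 (from 21.25), 27 -> 23 (from 23.75), 30 -> 27 (from 27.5)
```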
Additional Parameters
The LUTSelector parameter allows you to select a lookup table.
Because there is only one user-defined lookup table available on Basler cameras, the parameter currently serves no function.
// Write a lookup table to the camera.
// The following lookup table inverts the pixel values
// (bright -> dark, dark -> bright).
// Only applies to cameras with a maximum pixel bit depth of 12 bit.
// The LUT stores 512 entries, so only every 8th pixel value can be set;
// the camera interpolates the values in between.
for (int i = 0; i < 4096; i += 8)
{
    camera.Parameters[PLCamera.LUTIndex].SetValue(i);
    camera.Parameters[PLCamera.LUTValue].SetValue(4095 - i);
}
// Enable the LUT
camera.Parameters[PLCamera.LUTEnable].SetValue(true);
The LUT Value All camera feature allows you to replace all pixel values in your images by values defined by you.
This is done by replacing the entire user-defined lookup table (LUT).
To replace individual entries in the lookup table, use the LUT feature.
How It Works
LUT is short for "lookup table", which is basically an indexed list of numbers. For more information, see the LUT feature description.
While the LUT feature allows you to change individual entries in the lookup table, the LUT Value All feature allows you to change all entries in the lookup table in a single operation.
In many cases, this is faster than repeatedly changing individual entries in the LUT.
To change all entries in the lookup table use the LUTValueAll parameter. The parameter structure depends on the maximum pixel bit depth of your camera.
12-bit Camera Models
On cameras with a maximum pixel bit depth of 12 bit, the LUTValueAll parameter is a register that consists of 4096 x 4 bytes. Each 4-byte word represents a LUTValue parameter value.
The LUTValue parameter values are sorted by the LUTIndex number in ascending order (0 through 4095).
Example:
10-bit Camera Models
On cameras with a maximum pixel bit depth of 10 bit, the LUTValueAll parameter is a register that consists of 1024 x 4 bytes. Each 4-byte word represents a LUTValue parameter value.
The LUTValue parameter values are sorted by the LUTIndex number in ascending order (0 through 1023).
Example:
Setting or Getting All LUT Values
To set all entries in the lookup table:
To get all entries in the lookup table:
The LUTValueAll parameter is not available in the pylon Viewer application. You can only set or get the parameter via the pylon API.
Specifics
Camera Model | Endianness of the 4-Byte Words (LUT Values) |
---|---|
All ace GigE camera models | Big-endian |
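Given the big-endian 4-byte words listed above, assembling the register contents could be sketched as follows. This shows the byte layout only; how the resulting buffer is actually written to the camera depends on the pylon API you use:

```python
import struct

def pack_lut_values(values) -> bytes:
    """Pack LUT values into consecutive big-endian 4-byte words,
    sorted by LUT index in ascending order."""
    return struct.pack(">%dI" % len(values), *values)

# Inverting LUT for a 12-bit camera: index i maps to 4095 - i
buffer = pack_lut_values([4095 - i for i in range(4096)])
print(len(buffer))          # 16384 bytes = 4096 x 4 bytes
print(buffer[:4].hex())     # '00000fff' (first word: 4095)
```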
// Write a lookup table to the device.
// The following lookup table inverts the pixel values
// (bright -> dark, dark -> bright).
// Only applies to cameras with a maximum pixel bit depth of 12 bit.
// Note: This is a simplified code sample.
// You should always check the camera interface and
// the endianness of your system before using LUTValueAll.
// For more information, see the 'LUTValueAll' code sample
// in the C++ Programmer's Guide and Reference Documentation
// delivered with the Basler pylon Camera Software Suite.
uint32_t lutValues[4096];
// Fill every entry; otherwise, parts of the register would
// contain undefined values.
for (int i = 0; i < 4096; i++)
{
    lutValues[i] = 4095 - i;
}
camera.LUTValueAll.SetValue(lutValues);
// Enable the LUT
camera.LUTEnable.SetValue(true);
The PGI feature set allows you to optimize the quality of your images.
The main purpose of the PGI feature set is to optimize images to meet the needs of human vision. It combines up to four image optimization processes.
How It Works
Depending on your camera model, a selection of the following image optimizations will be performed:
Noise Reduction
The noise reduction (also called "denoising") reduces random variations in brightness or color information in your images.
Sharpness Enhancement
The sharpness enhancement increases the sharpness of the images. The higher the sharpness, the more distinct the contours of the image objects will be. This is especially useful in applications where cameras must correctly identify numbers or letters.
5×5 Demosaicing
5×5 demosaicing (also called "debayering") carries out color interpolation on regions of 5×5 pixels on the sensor and is therefore more elaborate than the "simple" 2×2 demosaicing used otherwise by the camera.
Color Anti-Aliasing
Color errors, especially on sharp edges and in sections of the image with high spatial frequencies, are a common side effect of demosaicing algorithms. Even colorless structures can suddenly appear to have color. The color anti-aliasing optimization analyzes and corrects the discolorations.
For more information about the PGI image optimizations, see the Better Image Quality with Basler PGI white paper.
Enabling the PGI Feature Set
Automatic
On some camera models, the PGI feature set is enabled automatically whenever the pixel format is set to a non-Bayer color pixel format, i.e., to one of the available RGB, BGR, or YUV pixel formats.
Manual
On some camera models, you must manually enable the PGI feature set. To do so:
Setting the PGI Image Optimizations
Once you have enabled the PGI feature set, you can configure the individual image optimization processes.
Which image optimizations are available and can be configured depends on your camera model.
Configuring Noise Reduction
If this optimization is configurable, you can use the NoiseReductionAbs parameter to specify the desired noise reduction. The higher the parameter value, the more noise reduction is applied.
If this optimization is not configurable, noise reduction is applied automatically.
Noise reduction is best used together with sharpness enhancement. If the parameter value is set too high, fine structure in the image can become indistinct or even disappear.
Configuring Sharpness Enhancement
If this optimization is configurable, you can use the SharpnessEnhancementAbsparameter to specify the desired sharpness enhancement. The higher the parameter value, the more sharpening is applied.
If this optimization is not configurable, sharpness enhancement is applied automatically.
In most cases, best results are obtained at low parameter value settings and when using noise reduction at the same time.
Configuring 5×5 Demosaicing
If available, 5×5 demosaicing is performed automatically whenever the PGI feature set is enabled. You can't configure this optimization.
Configuring Color Anti-Aliasing
If available, color anti-aliasing is performed automatically whenever the PGI feature set is enabled. You can't configure this optimization.
Specifics
Camera Model | Enabling PGI Feature Set |
Available Image Optimizations | Configurable Image Optimizations |
---|---|---|---|
acA2500-20gm | Manual |
|
|
// Enable the PGI feature set
camera.Parameters[PLCamera.DemosaicingMode].SetValue(PLCamera.DemosaicingMode.BaslerPGI);
// Configure noise reduction (if available)
camera.Parameters[PLCamera.NoiseReductionAbs].SetValue(0.2);
// Configure sharpness enhancement (if available)
camera.Parameters[PLCamera.SharpnessEnhancementAbs].SetValue(1.0);
The Pixel Format camera feature allows you to choose the format of the image data transmitted by the camera.
There are different pixel formats depending on the model of your camera and whether it is a color or a mono camera.
Detailed information about pixel formats can be found in the GenICam Pixel Format Naming Convention 2.1.
Prerequisites
The camera must be idle, i.e., not capturing images. Otherwise, the PixelFormat parameter is read-only.
Choosing a Pixel Format
To choose a pixel format, set the PixelFormat parameter to one of the following values:
Determining the Pixel Format
To determine the pixel format currently used by the camera, read the value of the PixelFormat parameter.
Available Pixel Formats
Mono Formats
If a monochrome camera uses one of the mono pixel formats, it outputs 8, 10, or 12 bits of data per pixel.
If a color camera uses the Mono 8 pixel format, the values for each pixel are first converted to the YUV color model. The Y component of this model represents a brightness value and is equivalent to the value that would be derived from a pixel in a monochrome image. So in essence, when a color camera is set to Mono 8, it outputs an 8-bit monochrome image. This type of output is sometimes referred to as "Y Mono 8".
Bayer Formats
Color cameras are equipped with a Bayer color filter and can output color images based on the Bayer pixel formats given below.
If a color camera uses one of these Bayer pixel formats, it outputs 8, 10, or 12 bits of data per pixel. The pixel data is not processed or interpolated in any way. For each pixel covered with a red filter, you get 8, 10, or 12 bits of red data. For each pixel covered with a green filter, you get 8, 10, or 12 bits of green data. For each pixel covered with a blue filter, you get 8, 10, or 12 bits of blue data. This type of pixel data is sometimes referred to as "raw" output.
YUV Formats
Color cameras can also output color images based on pixel data in YUV (or YCbCr) format.
If a color camera uses this format, each pixel value in the captured image goes through a conversion process as it exits the sensor and passes through the camera. This process yields Y, U, and V color information for each pixel value.
The values for U and V normally range from -128 to +127. Because the camera transfers U values and V values with unsigned integers, 128 is added to each U value and V value before they are transferred from the camera. This way, values from 0 to 255 can be transferred.
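The offset applied to U and V values can be sketched as follows (the function names are illustrative, not part of the pylon API):

```python
def encode_uv(value: int) -> int:
    """Shift a signed U or V value (-128..127) into the unsigned
    0..255 range for transfer from the camera."""
    if not -128 <= value <= 127:
        raise ValueError("U/V values range from -128 to +127")
    return value + 128

def decode_uv(raw: int) -> int:
    """Recover the signed U or V value from the transferred byte."""
    return raw - 128

print(encode_uv(-128), encode_uv(0), encode_uv(127))  # 0 128 255
print(decode_uv(encode_uv(-42)))                      # -42
```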
RGB and BGR Formats
When a color camera uses the RGB 8 or BGR 8 pixel format, the camera outputs 8 bits of red data, 8 bits of green data, and 8 bits of blue data for each pixel in the acquired frame.
The two pixel formats differ only in the output sequence of the color data (red, green, blue vs. blue, green, red).
Maximum Pixel Bit Depth
The maximum pixel bit depth is defined by the pixel format with the highest bit depth among the pixel formats available on your camera.
Example: If the available pixel formats for your camera are Mono 8 and Mono 12, the maximum pixel bit depth of the camera is 12 bit.
Unpacked and Packed Pixel Formats
When a camera uses an unpacked pixel format (e.g., Bayer 12), pixel data is always 8-bit aligned. Padding bits (zeros) are inserted as necessary to reach the next 8-bit boundary.
Example (simplified):
Assume that you have chosen a 12-bit unpacked pixel format. The camera outputs 16 bits per pixel: 12 bits of pixel data and 4 padding bits to reach the next 8-bit boundary.
When a camera uses a packed pixel format (e.g., Bayer 12p), pixel data is not aligned. This means that no padding bits are inserted and that one byte can contain data of multiple pixels.
Example (simplified):
Assume that you have chosen a 12-bit packed pixel format. The camera outputs 12 bits per pixel. As a consequence, data for two pixels is always spread over 3 bytes.
The exact data alignment depends on the pixel format. You can find detailed information in the GenICam Pixel Format Naming Convention 2.1.
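The difference in payload size, plus one possible packing of two 12-bit pixels into 3 bytes, can be sketched as follows. The bit layout shown is only one possibility and the 2592 x 1944 resolution is used as an example; the actual layouts are defined by the GenICam Pixel Format Naming Convention:

```python
def frame_size_bytes(width: int, height: int, bit_depth: int, packed: bool) -> int:
    """Payload size of a single-channel image. Unpacked formats pad
    each pixel to the next 8-bit boundary; packed formats do not."""
    pixels = width * height
    if packed:
        return (pixels * bit_depth + 7) // 8
    bytes_per_pixel = (bit_depth + 7) // 8
    return pixels * bytes_per_pixel

def pack_two_12bit(p0: int, p1: int) -> bytes:
    """One possible packing of two 12-bit pixels into 3 bytes
    (p0 in the low 12 bits, p1 in the high 12 bits)."""
    word = p0 | (p1 << 12)
    return word.to_bytes(3, "little")

print(frame_size_bytes(2592, 1944, 12, packed=False))  # 10077696 (2 bytes/pixel)
print(frame_size_bytes(2592, 1944, 12, packed=True))   # 7558272 (1.5 bytes/pixel)
print(pack_two_12bit(0xABC, 0x123).hex())              # 'bc3a12'
```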
External Links
Specifics
Camera Model | Available Pixel Formats |
---|---|
acA2500-20gm |
|
// Set the pixel format to Mono 8
camera.Parameters[PLCamera.PixelFormat].SetValue(PLCamera.PixelFormat.Mono8);
The Precision Time Protocol (PTP) camera feature allows you to synchronize multiple GigE cameras in the same network.
The protocol is defined in the IEEE 1588 standard. Basler cameras support the revised version of the standard (IEEE 1588-2008, also known as PTP Version 2).
The precision of the PTP synchronization depends to a large extent on your network hardware and setup. For maximum precision, choose high-quality network hardware, use PTP-enabled network switches, and add an external PTP clock device with a GPS receiver to your network.
Why Use PTP
The Precision Time Protocol (PTP) feature enables a camera to use the following features:
How It Works
Through PTP, multiple devices (e.g., cameras) are automatically synchronized with the most accurate clock found in a network, the so-called master clock or best master clock.
The protocol enables systems within a network to do the following:
The master clock is determined by several criteria. The most important criterion is the device's Priority 1 setting. The network device with the lowest Priority 1 setting is the master clock. On all Basler cameras, the Priority 1 setting is preset to 128 and can't be changed. If your PTP network setup consists only of Basler cameras, the master clock will be chosen based on the device's MAC address.
For more information about the master clock criteria, see the IEEE 1588-2008 specification, clause 7.6.2.2.
Timestamp Synchronization
The basic concept of the Precision Time Protocol (IEEE 1588) is based on the exchange of PTP messages. These messages allow the slave clocks to synchronize their timestamp value with the timestamp value of the master clock. When the synchronization has been completed, the GevTimestampValue parameter value on all GigE devices will be as identical as possible. The precision highly depends on your network hardware and setup.
IEEE 1588 defines 80-bit timestamps for storing and transporting time information. Because GigE Vision uses 64-bit timestamps, the PTP timestamps are mapped to the 64-bit timestamps of GigE Vision.
If no device in the network is synchronized to a coordinated world time (e.g., UTC), the network will operate in the arbitrary timescale mode (ARB). In this mode, the epoch is arbitrary, as it is not bound to an absolute time. The timescale is relative and only valid in this network.
Enabling PTP Clock Synchronization
When powering on the camera, PTP is always disabled. If you want to use PTP, you must enable it.
To enable PTP:
Now, you can use the Scheduled Action Commands feature and the Synchronous Free Run feature.
Enabling PTP clock synchronization changes the camera's internal tick frequency from 125 MHz (= 8 ns tick duration) to 1 GHz (= 1 ns tick duration).
The Inter-packet Delay and the Frame Transmission Delay parameter values are adjusted automatically.
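The change in tick frequency can be illustrated by converting a timestamp tick count to seconds. This is plain arithmetic, not a pylon API call:

```python
def ticks_to_seconds(ticks: int, tick_frequency_hz: int) -> float:
    """Convert a camera timestamp tick count to seconds."""
    return ticks / tick_frequency_hz

# Without PTP: 125 MHz tick frequency (= 8 ns tick duration)
print(ticks_to_seconds(125_000_000, 125_000_000))    # 1.0
# With PTP enabled: 1 GHz tick frequency (= 1 ns tick duration)
print(ticks_to_seconds(125_000_000, 1_000_000_000))  # 0.125
```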
Checking the Status of the PTP Clock Synchronization
To check the status of the PTP clock synchronization, you must develop your own check method using the pylon API.
These guidelines may help you in developing a suitable method:
External Links
// Enable PTP on the current device
camera.Parameters[PLCamera.GevIEEE1588].SetValue(true);
// To check the status of the PTP clock synchronization,
// implement your own check method here.
// For guidelines, see section "Checking the Status of
// the PTP Clock Synchronization" in this topic.
The Remove Parameter Limits camera feature allows you to remove the factory limits of certain camera features.
When the factory limits are removed, extended parameter value ranges are available.
How It Works
Normally, a parameter's allowed value range is limited. These factory limits are designed to ensure optimum camera performance and, in particular, good image quality. For certain use cases, however, you may want to specify parameter values outside of the factory limits. This is where the ability to remove parameter limits comes in useful.
Which parameter limits can be removed depends on your camera model.
Removing a Parameter Limit
To remove a parameter limit:
Specifics
Camera Model | Removable Parameter Limits |
---|---|
acA2500-20gm |
|
// Select the Gain parameter
camera.Parameters[PLCamera.RemoveParameterLimitSelector].SetValue(PLCamera.RemoveParameterLimitSelector.Gain);
// Remove the limits of the selected parameter
camera.Parameters[PLCamera.RemoveParameterLimit].SetValue(true);
The Resulting Frame Rate camera feature allows you to determine the maximum frame rate with the current camera settings.
This is useful, for example, if you want to know how long you have to wait between triggers.
The frame rate is expressed in frames per second (fps).
Why Check the Resulting Frame Rate
Optimizing the Frame Rate
When the camera is configured for free run image acquisition and continuous acquisition, knowing the resulting frame rate is useful if you want to optimize the frame rate for your imaging application. You can adjust the camera settings limiting the frame rate until the resulting frame rate reaches the desired value.
For example, if your imaging application requires 30 fps and the current resulting frame rate is 25 fps, you can reduce the Image ROI height until the resulting frame rate reaches 30 fps.
Optimizing Triggered Image Acquisition
When the camera is configured for triggered image acquisition, knowing the resulting frame rate is useful if you want to trigger the camera as often as possible without overtriggering. You can calculate how long you must wait after each trigger signal by taking the reciprocal of the resulting frame rate: 1 / Resulting Frame Rate.
Example: If the resulting frame rate is 12.5, you must wait for a minimum of 1/12.5 = 0.08 seconds after each trigger signal. Otherwise, the camera ignores the trigger signal and generates a Frame Start Overtrigger event.
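The minimum wait time from the example can be computed directly (a trivial sketch; the function name is illustrative):

```python
def min_trigger_period_s(resulting_frame_rate_fps: float) -> float:
    """Minimum time to wait between trigger signals, in seconds,
    as the reciprocal of the resulting frame rate."""
    return 1.0 / resulting_frame_rate_fps

print(min_trigger_period_s(12.5))  # 0.08
```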
Checking the Resulting Frame Rate
To check the resulting frame rate, i.e., the maximum frame rate with the current camera settings, read the value of the ResultingFrameRateAbs parameter. The value is expressed in frames per second (fps).
The parameter value takes all factors limiting the frame rate into account.
Factors Limiting the Frame Rate
Several factors may limit the frame rate on any Basler camera:
External Links
// Get the resulting frame rate
double d = camera.Parameters[PLCamera.ResultingFrameRateAbs].GetValue();
The Reverse X and Reverse Y camera features allow you to mirror acquired images horizontally, vertically, or both.
Reverse X is available on all camera models. Reverse Y is available on selected camera models.
Enabling Reverse X
To enable Reverse X, set the ReverseX parameter to true.
The camera mirrors the image horizontally:
Enabling Reverse Y
On some camera models, the Reverse Y feature is also available.
To enable Reverse Y, set the ReverseY parameter to true.
The camera mirrors the image vertically:
Using Image ROIs or Auto Function ROIs with Reverse X or Reverse Y
If you have specified an Image ROI or Auto Function ROI while using Reverse X or Reverse Y, you have to bear in mind that the position of the ROI relative to the sensor remains the same.
As a consequence, the camera captures different portions of the image depending on whether the Reverse X or the Reverse Y feature are enabled:
Effective Bayer Filter Alignments (Color Cameras Only)
Depending on your camera model, the Bayer filter alignment changes when Reverse X, Reverse Y, or both are used.
For example, if you use a camera with a physical Bayer BG filter alignment and enable Reverse X, the actual Bayer filter alignment will be Bayer GB. The PixelFormat parameter value changes accordingly.
Specifics
Camera Model | Reverse X Available | Reverse Y Available | Changes in Bayer Filter Alignment |
---|---|---|---|
acA2500-20gm | Yes | Yes | N/A (mono camera) |
// Enable Reverse X
camera.Parameters[PLCamera.ReverseX].SetValue(true);
// Enable Reverse Y, if available
camera.Parameters[PLCamera.ReverseY].SetValue(true);
The Scheduled Action Commands camera feature allows you to send action commands that are executed in multiple cameras at exactly the same time.
If exact timing is not a critical factor in your application, you can use the Action Commands feature instead.
How It Works
The basic parameters of the Scheduled Action Command feature are the same as for the Action Commands feature:
In addition to these parameters, the Scheduled Action Command feature uses the following parameter:
Action Time
A 64-bit GigE Vision timestamp used to define when the action is to be executed.
The action is executed as soon as the internal timestamp value of a camera reaches the specified value.
With the Precision Time Protocol enabled, the timestamp value is synchronized across all cameras in the network. As a result, the action will be executed on all cameras in the network at exactly the same time.
The value must be entered in ticks. On Basler cameras with the Precision Time Protocol feature enabled, one tick equals one nanosecond.
Example: Assume you issue a scheduled action command with the action time set to 100 000 000 000. The action will be executed as soon as the timestamp value of all cameras in the specified network segment reaches 100 000 000 000.
If 0 (zero) is entered or if the action time is set to a time in the past, the action command will be executed immediately, equivalent to a standard action command.
Using Scheduled Action Commands
Configuring the Cameras
Follow the procedure outlined in the Action Commands topic.
Issuing a Scheduled Action Command
General Use
To issue a scheduled action command:
Issuing a Scheduled Action Command to Be Executed after a Certain Delay
To issue a scheduled action command that is executed after a certain delay:
All cameras in the network segment will execute the command simultaneously after the given delay.
Issuing a Scheduled Action Command to Be Executed at a Precise Point in Time
To issue a scheduled action command that is executed at a precise point in time:
// Example: Configuring a group of cameras for synchronous image
// acquisition. It is assumed that the "cameras" object is an
// instance of CBaslerGigEInstantCameraArray.
//--- Start of camera setup ---
for (size_t i = 0; i < cameras.GetSize(); ++i)
{
// Open the camera connection
cameras[i].Open();
// Configure the trigger selector
cameras[i].TriggerSelector.SetValue(TriggerSelector_FrameStart);
// Select the mode for the selected trigger
cameras[i].TriggerMode.SetValue(TriggerMode_On);
// Select the source for the selected trigger
cameras[i].TriggerSource.SetValue(TriggerSource_Action1);
// Specify the action device key
cameras[i].ActionDeviceKey.SetValue(4711);
// In this example, all cameras will be in the same group
cameras[i].ActionGroupKey.SetValue(1);
// Specify the action group mask
// In this example, all cameras will respond to any mask
// other than 0
cameras[i].ActionGroupMask.SetValue(0xffffffff);
}
//--- End of camera setup ---
// Get the current timestamp of the first camera
// NOTE: All cameras must be synchronized via Precision Time Protocol
cameras[0].GevTimestampControlLatch.Execute();
int64_t currentTimestamp = cameras[0].GevTimestampValue.GetValue();
// Specify that the command will be executed roughly 30 seconds
// (30 000 000 000 ticks) after the current timestamp.
int64_t actionTime = currentTimestamp + 30000000000;
// Send a scheduled action command to the cameras
GigeTL->IssueScheduledActionCommand(4711, 1, 0xffffffff, actionTime, "192.168.1.255");
The Sensor Readout Mode camera feature allows you to choose between sensor readout modes that provide different sensor readout times.
Decreasing the sensor readout time can increase the camera's frame rate.
To configure the sensor readout mode, set the SensorReadoutMode parameter to one of the following values:
// Set the sensor readout mode to Fast
camera.Parameters[PLCamera.SensorReadoutMode].SetValue(PLCamera.SensorReadoutMode.Fast);
// Get the current sensor readout mode
string e = camera.Parameters[PLCamera.SensorReadoutMode].GetValue();
The Sensor Readout Time camera feature allows you to determine the amount of time it takes to read out the data of an image from the sensor.
This feature only provides a very rough estimate of the sensor readout time. If you want to optimize the camera for triggered image acquisition or for overlapping image acquisition, use the Resulting Frame Rate feature instead.
Why Determine the Sensor Readout Time
Each image acquisition process includes two parts:
The Sensor Readout Time feature is useful if you want to estimate which part of the image acquisition process is limiting the camera's frame rate.
To do so, compare the exposure time with the sensor readout time:
Determining the Sensor Readout Time
To determine the sensor readout time under the current settings, read the value of the ReadoutTimeAbs parameter. The sensor readout time is measured in microseconds.
The result is only an approximate value and depends on various camera settings and features, e.g., Binning, Decimation, or Image ROI.
// Determine the sensor readout time under the current settings
double d = camera.Parameters[PLCamera.ReadoutTimeAbs].GetValue();
The Sequencer (GigE Cameras) camera feature allows you to define up to 64 sets of parameter settings, called sequence sets, and apply them to a sequence of image acquisitions.
As the camera acquires images, it applies one sequence set after the other. This enables you to quickly change camera parameters without compromising the maximum frame rate.
For example, you can use the Sequencer feature to quickly change between preconfigured Image ROIs or exposure times.
For a description of the Sequencer feature for USB 3.0 cameras, click here.
Prerequisites
All auto functions (e.g., Gain Auto, Exposure Auto) must be set to Off.
Enabling or Disabling the Sequencer
When enabled, the sequencer controls image acquisitions. It can't be configured in this state.
When disabled, the sequencer can be configured but is not controlling image acquisitions.
To enable the sequencer, set the SequenceEnable parameter to true.
How to disable the sequencer depends on your camera model:
What's in a Sequence Set?
Configuring Sequence Sets
Before you can use the Sequencer feature, you must populate the sequence sets with your desired settings. Each sequence set has a unique sequence set index number, ranging from 0 to 63.
To populate the sequence sets:
Example: Assume you need two sequence sets and want to populate them with different Image ROI settings. To do so:
You can now configure the sequencer to quickly change between the two Image ROIs.
Saving a Sequence Set
To save a sequence set:
The values of all sequence set parameters are stored in the selected sequence set.
Loading a Sequence Set
Sequence sets are loaded automatically during sequencer operation. However, loading a sequence set manually can be useful for testing purposes or when configuring the sequencer.
To manually load a sequence set:
The values of all sequence set parameters are overwritten and replaced by the values stored in the selected sequence set.
Configuring the Sequencer
After you have configured the sequence sets, you must configure the sequencer.
The sequencer can be operated in three modes, called "advance modes":
In all modes, sequence sets always advance in ascending order, starting from sequence set index number 0.
Auto Sequence Advance Mode
This mode is useful if you want to configure a fixed sequence which is repeated continuously.
You can enable this mode by setting the SequenceAdvanceMode parameter to Auto.
In this mode, the advance from one sequence set to the next occurs automatically as Frame Start trigger signals are received.
The SequenceSetTotalNumber parameter specifies the total number of sequence sets to be used. After the sequence set with the highest index number has been used, the cycle starts again at 0.
Example: Assume you want to configure the following sequence cycle:
To configure the above sequence cycle:
Using Sequence Sets Multiple Times
Optionally, each sequence set can be used several times in a row.
To specify how many times you want to use each sequence set:
Example: Assume you want to configure the following sequence cycle:
To configure the above sequence cycle:
Controlled Sequence Advance Mode
This mode is useful if you want to configure a dynamic sequence which can be controlled via line 1 or software commands.
You can enable this mode by setting the SequenceAdvanceMode parameter to Controlled.
As in the other modes, the advance always proceeds in ascending order, starting from sequence set index number 0.
You can, however, control the following:
The SequenceSetTotalNumber parameter specifies the total number of sequence sets you want to use. After the sequence set with the highest index number has been used, the cycle starts again at 0.
Configuring Sequence Set Advance
To configure sequence set advance:
Configuring Sequence Set Restart
To configure sequence set restart:
Free Selection Advance Mode
This mode is useful if you want to quickly change between freely selectable sequence sets without having to observe any particular order. You use the input lines of your camera to determine the sequence.
Bear in mind that the status of the line must be set at least one microsecond before the rise of the Frame Start trigger signal. You also have to maintain the status of the line for at least one microsecond after the Frame Start trigger signal has risen. Monitor the Frame Trigger Wait signal to optimize the timing.
How to configure free selection advance mode depends on how many input lines are available on your camera:
Cameras with One Input Line
Sequence sets are chosen according to the status of input line 1:
Only sequence sets 0 and 1 are available.
To enable free selection advance mode:
The SequenceAddressBitSelector and SequenceAddressBitSource parameters also control the operation of the free selection advance mode. However, these parameters are preset and can’t be changed.
Cameras with Two Input Lines
Sequence sets are chosen according to the status of line 1 (opto-coupled input line) and line 3 (GPIO line, must be configured as input), resulting in four possible combinations. This allows you to choose between four sequence sets. Consequently, only sequence sets 0, 1, 2, and 3 are available.
In order to configure the free selection advance mode, you must assign a "sequence set address bit" to each line. The combinations of these address bits determine the sequence set index number. The following table shows the possible combinations and their respective outcomes.
Address Bit 1 | Address Bit 0 | Sequence Set That Will Be Selected |
---|---|---|
0 | 0 | Sequence set 0 |
0 | 1 | Sequence set 1 |
1 | 0 | Sequence set 2 |
1 | 1 | Sequence set 3 |
For example, you can assign line 1 to bit 1 and line 3 to bit 0. This results in the following sample configuration:
To configure the bits and enable free selection advance mode:
You can also use only one input line in free selection advance mode. To do so, set the SequenceSetTotalNumber parameter to 2. Now, only bit 0 is used to choose a sequence set. The free selection advance mode will behave as described under "Cameras with One Input Line".
Camera Model | SequenceConfigurationMode Parameter Available |
---|---|
acA2500-20gm | Yes |
/*Configuring sequence sets*/
camera.Parameters[PLCamera.SequenceEnable].SetValue(false);
// Set the total number of sequence sets to 2
camera.Parameters[PLCamera.SequenceSetTotalNumber].SetValue(2);
// Configure the parameters that you want to store in the first sequence set
camera.Parameters[PLCamera.Width].SetValue(500);
camera.Parameters[PLCamera.Height].SetValue(300);
// Select sequence set 0 and save the parameter values
camera.Parameters[PLCamera.SequenceSetIndex].SetValue(0);
camera.Parameters[PLCamera.SequenceSetStore].Execute();
// Configure the parameters that you want to store in the second sequence set
camera.Parameters[PLCamera.Width].SetValue(800);
camera.Parameters[PLCamera.Height].SetValue(600);
// Select sequence set 1 and save the parameter values
camera.Parameters[PLCamera.SequenceSetIndex].SetValue(1);
camera.Parameters[PLCamera.SequenceSetStore].Execute();
/*Configuring the sequencer for auto sequence advance mode
Assuming you want to configure the following sequence cycle:
0 - 0 - 1 - 1 - 1 (- 0 - 0 - ...)*/
camera.Parameters[PLCamera.SequenceEnable].SetValue(false);
camera.Parameters[PLCamera.SequenceAdvanceMode].SetValue(PLCamera.SequenceAdvanceMode.Auto);
// Set the total number of sequence sets to 2
camera.Parameters[PLCamera.SequenceSetTotalNumber].SetValue(2);
// Load sequence set 0 and specify that this set is to be used
// 2 times in a row
camera.Parameters[PLCamera.SequenceSetIndex].SetValue(0);
camera.Parameters[PLCamera.SequenceSetLoad].Execute();
camera.Parameters[PLCamera.SequenceSetExecutions].SetValue(2);
camera.Parameters[PLCamera.SequenceSetStore].Execute();
// Load sequence set 1 and specify that this set is to be used
// 3 times in a row
camera.Parameters[PLCamera.SequenceSetIndex].SetValue(1);
camera.Parameters[PLCamera.SequenceSetLoad].Execute();
camera.Parameters[PLCamera.SequenceSetExecutions].SetValue(3);
camera.Parameters[PLCamera.SequenceSetStore].Execute();
// Enable the sequencer
camera.Parameters[PLCamera.SequenceEnable].SetValue(true);
/*Configuring the sequencer for controlled sequence advance mode*/
camera.Parameters[PLCamera.SequenceEnable].SetValue(false);
camera.Parameters[PLCamera.SequenceAdvanceMode].SetValue(PLCamera.SequenceAdvanceMode.Controlled);
// Set the total number of sequence sets to 2
camera.Parameters[PLCamera.SequenceSetTotalNumber].SetValue(2);
// Specify that sequence set advance is controlled via line 1
camera.Parameters[PLCamera.SequenceControlSelector].SetValue(PLCamera.SequenceControlSelector.Advance);
camera.Parameters[PLCamera.SequenceControlSource].SetValue(PLCamera.SequenceControlSource.Line1);
// Specify that sequence set restart is controlled
// via software command
camera.Parameters[PLCamera.SequenceControlSelector].SetValue(PLCamera.SequenceControlSelector.Restart);
camera.Parameters[PLCamera.SequenceControlSource].SetValue(PLCamera.SequenceControlSource.Disabled);
// Enable the sequencer
camera.Parameters[PLCamera.SequenceEnable].SetValue(true);
// Restart the sequencer via software command (for testing purposes)
camera.Parameters[PLCamera.SequenceAsyncRestart].Execute();
/*Configuring the sequencer for free selection advance mode
on cameras with ONE input line*/
camera.Parameters[PLCamera.SequenceEnable].SetValue(false);
camera.Parameters[PLCamera.SequenceAdvanceMode].SetValue(PLCamera.SequenceAdvanceMode.FreeSelection);
// Set the total number of sequence sets to 2
camera.Parameters[PLCamera.SequenceSetTotalNumber].SetValue(2);
// Enable the sequencer
camera.Parameters[PLCamera.SequenceEnable].SetValue(true);
/*Configuring the sequencer for free selection advance mode
on cameras with TWO input lines (1x opto-coupled, 1x GPIO set for input)*/
camera.Parameters[PLCamera.SequenceEnable].SetValue(false);
camera.Parameters[PLCamera.SequenceAdvanceMode].SetValue(PLCamera.SequenceAdvanceMode.FreeSelection);
// Set the total number of sequence sets to 4
camera.Parameters[PLCamera.SequenceSetTotalNumber].SetValue(4);
// Assign sequence address bit 0 to line 3
camera.Parameters[PLCamera.SequenceAddressBitSelector].SetValue(PLCamera.SequenceAddressBitSelector.Bit0);
camera.Parameters[PLCamera.SequenceAddressBitSource].SetValue(PLCamera.SequenceAddressBitSource.Line3);
// Assign sequence address bit 1 to line 1
camera.Parameters[PLCamera.SequenceAddressBitSelector].SetValue(PLCamera.SequenceAddressBitSelector.Bit1);
camera.Parameters[PLCamera.SequenceAddressBitSource].SetValue(PLCamera.SequenceAddressBitSource.Line1);
// Enable the sequencer
camera.Parameters[PLCamera.SequenceEnable].SetValue(true);
The Shutter Mode camera feature allows you to determine or configure the operating mode of the camera's electronic shutter.
The shutter mode refers to the way in which image data is captured and processed. Which shutter modes are available depends on the design of the imaging sensor.
Determining the Shutter Mode
To determine the current shutter mode, get the value of the ShutterMode parameter. The parameter can take the following values:
Configuring the Shutter Mode
If multiple shutter modes are available on your camera model, you can choose the desired shutter mode.
To do so, set the ShutterMode parameter to one of the following values:
Advantages and Disadvantages
Shutter Mode | Advantage | Disadvantage |
---|---|---|
Global shutter mode | Well suited for capturing fast-moving objects | Higher ambient noise |
Rolling shutter mode | Lower ambient noise | Image distortion can occur if very fast-moving objects are captured. |
Global Reset Release shutter mode | | Flash lighting must be used. |
Available Shutter Modes
Depending on your camera model, the following shutter modes are available:
Global Shutter Mode
During every image acquisition in Global shutter mode, all of the sensor's pixels start exposing at the same time and also stop exposing at the same time. Immediately after the end of exposure, pixel data readout begins and proceeds row by row until all pixel data has been read. This is particularly useful if you want to capture fast-moving objects or if the camera is moving rapidly while capturing images.
Cameras that operate in the Global shutter mode can provide an Exposure Active output signal. The signal goes high when exposure begins and goes low when exposure ends.
The sensor readout time is the sum of all row readout times. Therefore, the sensor readout time is influenced by the Image ROI height. You can determine the readout time by checking the value of the camera’s ReadoutTimeAbs parameter.
On some camera models, the Sensor Readout Mode feature is available. This feature allows you to reduce the sensor readout time.
Rolling Shutter Mode
In Rolling shutter mode, the camera exposes the pixel rows one after the other, with a temporal offset (tRow) from one row to the next. With this method, the ambient noise is typically significantly lower than with the global shutter method.
When frame start is triggered, the camera resets the first row and begins exposing it. For most cameras, this row is the first row of the Image ROI. For some cameras, the first row exposed is always the first row of the sensor, regardless of the Image ROI settings.
A short time later (= 1 x tRow), the camera resets the second row and begins exposing that row. After another short time (= 1 x tRow), the camera resets the third row and begins exposing that row.
This continues until a last row of pixels is reached. For most cameras, this row is the last row of the Image ROI. For some cameras, the last row exposed is always the last row of the sensor, regardless of the Image ROI settings.
The length of tRow varies by camera model.
The pixel values for each row are read out at the end of the exposure time of each row. The exposure time is the same for all rows. Because the readout time for each row is also tRow, the temporal shift for the end of readout is identical to the temporal shift for the start of exposure.
The sensor readout time is the sum of all row readout times: tRow x Image ROI height.
Therefore, the sensor readout time also depends on the Image ROI height. To determine the readout time, check the value of the camera’s ReadoutTimeAbs parameter.
Other Factors Influencing the Frame Period
Besides the exposure time and the sensor readout time, there are other factors influencing the frame period, e.g., the time needed to prepare the sensor for the next acquisition.
These other factors vary by camera model and configuration. Therefore, Basler recommends calculating the frame period. To do so, check the value of the camera's ResultingFrameRateAbs parameter value and take its reciprocal:
1 / resulting frame rate
This takes all influencing factors into account.
Possible Image Distortion (Rolling Shutter Effect)
If the object or the camera is moving very fast during image capture in Rolling shutter mode, image distortion may occur. This is also known as the rolling shutter effect.
This is due to the temporal shift between the start of exposure of the individual rows.
To prevent the rolling shutter effect, Basler recommends using flash lighting. Most cameras can supply a Flash Window output signal to facilitate the use of flash lighting.
Exposure Active Signal
If your camera model provides an Exposure Active output signal and the camera is configured for Rolling shutter mode, the Exposure Active signal goes high when the exposure time for the first row begins and goes low when the exposure time for the last row ends. This means that the signal width is greater than the exposure time.
Global Reset Release Shutter Mode
The Global Reset Release (GRR) shutter mode is a variant of the Rolling shutter mode. It combines the advantages of the Global and the Rolling shutter mode.
In GRR shutter mode, all of the pixels in the sensor start exposing at the same time. However, at the end of exposure, there is a temporal offset (tRow) from one row to the next.
The tRow values are the same as for the Rolling shutter mode and vary by camera model.
If the camera is operated in the GRR shutter mode, you must use flash lighting. Otherwise, the brightness in the acquired images will vary significantly from top to bottom due to the differences in the exposure times of the individual rows. Also, when you are capturing images of fast-moving objects, images can be distorted due to the temporal shift caused by the different exposure end times of the individual rows.
Most cameras can supply a Flash Window output signal to facilitate the use of flash lighting.
Other Factors Influencing the Frame Period
→ See Other Factors Influencing the Frame Period for Rolling shutter mode.
Depending on your camera model, the GlobalResetReleaseModeEnable parameter may also be available.
Specifics
Camera Model | Available Shutter Modes | Temporal Offset tRow[μs] | Additional Parameters |
---|---|---|---|
acA2500-20gm | Global | - | None |
// Determine the current shutter mode
string shutterMode = camera.Parameters[PLCamera.ShutterMode].GetValue();
// Set the shutter mode to rolling
camera.Parameters[PLCamera.ShutterMode].SetValue(PLCamera.ShutterMode.Rolling);
// Set the shutter mode to global reset release
camera.Parameters[PLCamera.ShutterMode].SetValue(PLCamera.ShutterMode.GlobalResetRelease);
The Stacked ROI camera feature allows you to define multiple zones of varying heights and equal width on the sensor array that will be transmitted as a single image.
Only the pixel data from those zones will be transmitted. This increases the camera's frame rate.
The Stacked ROI feature is similar to the Stacked Zones Imaging feature, which is only available on ace classic cameras.
Prerequisites
How It Works
The Stacked ROI feature allows you to define vertically aligned zones of equal width on the sensor array. The maximum number of zones depends on your camera model.
When an image is acquired, only the pixel information from within the defined zones is read out of the sensor. The pixel information is then stacked together and transmitted as a single image.
The zones always have the same width and are vertically aligned. To configure the zones, Basler recommends the following procedure:
Configuring the ROI Zones
Considerations When Using the Stacked ROI Feature
Specifics
Camera Model | Maximum Number of ROI Zones |
---|---|
acA2500-20gm | 8 |
// Configure width and offset X for all zones
camera.Parameters[PLCamera.Width].SetValue(200);
camera.Parameters[PLCamera.OffsetX].SetValue(100);
// Select zone 0
camera.Parameters[PLCamera.ROIZoneSelector].SetValue(PLCamera.ROIZoneSelector.Zone0);
// Set the vertical offset for the selected zone
camera.Parameters[PLCamera.ROIZoneOffset].SetValue(100);
// Set the height for the selected zone
camera.Parameters[PLCamera.ROIZoneSize].SetValue(100);
// Enable the selected zone
camera.Parameters[PLCamera.ROIZoneMode].SetValue(PLCamera.ROIZoneMode.On);
// Select zone 1
camera.Parameters[PLCamera.ROIZoneSelector].SetValue(PLCamera.ROIZoneSelector.Zone1);
// Set the vertical offset for the selected zone
camera.Parameters[PLCamera.ROIZoneOffset].SetValue(250);
// Set the height for the selected zone
camera.Parameters[PLCamera.ROIZoneSize].SetValue(200);
// Enable the selected zone
camera.Parameters[PLCamera.ROIZoneMode].SetValue(PLCamera.ROIZoneMode.On);
The Synchronous Free Run camera feature allows you to capture images on multiple cameras at the same time and the same frame rate.
How It Works
If you are using multiple cameras in free run mode, image acquisition is slightly asynchronous due to a variety of reasons, e.g., the camera's individual timings and delays.
The Synchronous Free Run feature allows you to synchronize cameras in free run mode. As a result, the cameras will acquire images at the same time and at the same frame rate.
Also, you can use the Synchronous Free Run feature to capture images with multiple cameras in precisely time-aligned intervals, i.e., in a chronological sequence. For example, you can configure one camera to start image acquisition at a specific point in time. Then you configure another camera to start 100 milliseconds after the first camera and a third camera to start 200 milliseconds after the first camera:
Also, you can configure the cameras to acquire images at the same time and the same frame rate, but with different exposure times:
Using Synchronous Free Run
General Use
To synchronize multiple cameras:
Synchronous Free Run With Time-Aligned Intervals
To synchronize multiple cameras with time-aligned intervals, i.e., in a chronological sequence:
The following steps must be performed using the pylon API.
Converting the 64-bit Timestamp to Start Time High and Start Time Low
The start time for the Synchronous Free Run feature must be specified as a 64-bit GigE Vision timestamp value (in nanoseconds), split into two 32-bit values.
The high part of the 64-bit value must be transmitted using the SyncFreeRunTimerStartTimeHigh parameter.
The low part of the 64-bit value must be transmitted using the SyncFreeRunTimerStartTimeLow parameter.
Example: Assume your network devices are coordinated to UTC and you want to configure Fri Dec 12 2025 11:00:00 UTC as the start time. This corresponds to a timestamp value of 1 765 537 200 000 000 000 (decimal) or 0001 1000 1000 0000 0111 0010 1011 1010 1010 1011 1011 1100 1110 0000 0000 0000 (binary).
The high and low parts of this value are as follows:
Therefore, to configure a start time of Fri Dec 12 2025 11:00:00 UTC, you must set the SyncFreeRunTimerStartTimeHigh parameter to 411 071 162 and the SyncFreeRunTimerStartTimeLow parameter to 2 881 282 048.
// Example: Configuring cameras for synchronous free run.
// It is assumed that the "cameras" object is an
// instance of CBaslerGigEInstantCameraArray.
for (size_t i = 0; i < cameras.GetSize(); ++i)
{
// Open the camera connection
cameras[i].Open();
// Make sure the Frame Start trigger is set to Off to enable free run
cameras[i].TriggerSelector.SetValue(TriggerSelector_FrameStart);
cameras[i].TriggerMode.SetValue(TriggerMode_Off);
// Let the free run start immediately without a specific start time
cameras[i].SyncFreeRunTimerStartTimeLow.SetValue(0);
cameras[i].SyncFreeRunTimerStartTimeHigh.SetValue(0);
// Specify a trigger rate of 30 frames per second
cameras[i].SyncFreeRunTimerTriggerRateAbs.SetValue(30.0);
// Apply the changes
cameras[i].SyncFreeRunTimerUpdate.Execute();
// Enable Synchronous Free Run
cameras[i].SyncFreeRunTimerEnable.SetValue(true);
}
The Temperature State camera feature indicates whether the camera's internal temperature is normal or too high.
When the temperature is too high, the camera operates in over temperature mode and immediate cooling is required.
How It Works
Information about the internal temperature is provided by two parameters:
Over Temperature Mode
When the temperature state parameter value is Critical or Error, the camera operates in over temperature mode. This mode provides a set of mechanisms that alert the user and help to protect the camera.
The mechanisms take effect at different device temperatures, depending on the alert level and on whether the camera is heating up (heating path) or cooling down (cooling path).
Normal camera operation requires that the temperature state stays at Ok and the housing temperature stays within the allowed range. To ensure this, follow the guidelines set out in the Environmental Requirements section of your camera model's topic.
At elevated temperatures, the camera may be damaged, the camera's lifetime is shortened, and image quality can degrade. The lifetime is also shortened by frequent high-temperature incidents.
Heating Path in Over Temperature Mode
Critical Temperature Level
When the device temperature reaches the critical temperature threshold, the camera is close to becoming too hot.
In this situation, the following happens:
Another CriticalTemperature event can only be sent after the device temperature has fallen to at least 4 °C below the critical temperature threshold.
Over Temperature Level
When the device temperature reaches the over temperature threshold, the camera is too hot. The camera must be cooled immediately. Otherwise, the camera may be damaged irreversibly.
In this situation, the following happens:
Cooling Path in Over Temperature Mode
Over Temperature Level
When the device temperature falls below the over temperature threshold, the following happens:
When the device temperature falls to 4 °C below the over temperature threshold, the following happens:
When the device temperature falls below the critical temperature threshold, the following happens:
The camera's temperature state and internal temperature are normal and therefore allow normal camera operation.
Determining the Temperature State
To make full use of the Temperature State feature:
Additional Parameters
The camera also provides a TemperatureSelector parameter. This allows you to choose the location within the device where the temperature is measured.
On Basler cameras, the parameter is preset to Coreboard and can't be changed.
Specifics
Camera Model | Critical Temperature Threshold | Over Temperature Threshold |
---|---|---|
acA2500-20gm | 72 °C (161.6 °F) | 78 °C (172.4 °F) |
// Get the current temperature state parameter value.
string e = camera.Parameters[PLCamera.TemperatureState].GetValue();
// Get the current device temperature parameter value.
double d = camera.Parameters[PLCamera.DeviceTemperature].GetValue();
The Test Images camera feature allows you to check the camera's basic functionality and its ability to transmit images.
Test images can be used for maintenance purposes and failure diagnostics. They are generated by the camera itself; neither the optics nor the imaging sensor is involved in their creation.
Displaying Test Images
Available Test Images
Depending on your camera model, the following test images are available:
// Select test image 1
camera.Parameters[PLCamera.TestImageSelector].SetValue(PLCamera.TestImageSelector.Testimage1);
// Acquire images to display the selected test image
// ...
// (Insert your own image grabbing routine here.
// For example, the InstantCamera class provides the StartGrabbing method.)
The Timer camera feature allows you to configure a timer output signal that goes high on specific camera events and goes low after a specific duration.
This is how the timer works:
On some camera models, you may have to increase the maximum timer duration and timer delay values.
Increasing the Maximum Timer Duration and Delay
On some camera models, the TimerDurationAbs and TimerDelayAbs parameters are limited to a default maximum value of 4 095.
To increase the maximum timer duration on these models:
To increase the maximum timer delay on these models:
Depending on the TimerDurationTimebaseAbs and TimerDelayTimebaseAbs parameter values, the camera may not be able to achieve the exact timer duration and delay desired.
For example, if you set the TimerDurationTimebaseAbs parameter to 13, the camera can only achieve timer durations that are a multiple of 13. Therefore, if you set the TimerDurationAbs parameter to 50 000 and the TimerDurationTimebaseAbs parameter to 13, the camera will automatically change the setting to the nearest possible value (e.g., 49 998, which is the nearest multiple of 13).
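As a sketch of the interaction described above, using the values from the example (the value the camera actually applies may differ slightly depending on your camera model):

// Set the time base for the timer duration to 13
camera.Parameters[PLCamera.TimerDurationTimebaseAbs].SetValue(13);
// Request a timer duration of 50 000; the camera rounds this to the
// nearest multiple of the time base (49 998 = 3846 x 13)
camera.Parameters[PLCamera.TimerDurationAbs].SetValue(50000);
// Read back the value that the camera actually applied
double appliedDuration = camera.Parameters[PLCamera.TimerDurationAbs].GetValue();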
Additional Parameters
Depending on your camera model, the following additional parameters are available:
Specifics
Camera Model | Default Maximum Value for Timer Duration and Delay | Available Trigger Source Events | Additional Parameters |
---|---|---|---|
acA2500-20gm | 16 777 215 | Exposure Start | TimerSelector |
// Select Line 2 (output line)
camera.Parameters[PLCamera.LineSelector].SetValue(PLCamera.LineSelector.Line2);
// Specify that the timer signal is output on Line 2
camera.Parameters[PLCamera.LineSource].SetValue(PLCamera.LineSource.TimerActive);
// Specify that the timer starts when exposure starts
camera.Parameters[PLCamera.TimerTriggerSource].SetValue(PLCamera.TimerTriggerSource.ExposureStart);
// Set the timer duration to 1000 microseconds
camera.Parameters[PLCamera.TimerDurationAbs].SetValue(1000);
// Set the timer delay to 500 microseconds
camera.Parameters[PLCamera.TimerDelayAbs].SetValue(500);
The Timestamp camera feature counts the number of ticks generated by the camera's internal device clock.
The timestamp value is used by several camera features, e.g., Chunk Features and Event Notification.
How It Works
As soon as the camera is powered on, it starts generating and counting clock ticks. The counter is reset to 0 whenever the camera is powered off and on again. On some camera models, you can also reset the counter during camera operation.
The number of ticks per second, i.e., the tick frequency, depends on your camera model.
The timestamp counter is also used to synchronize multiple cameras via PTP. On cameras synchronized via PTP, the timestamp values are (nearly) identical across cameras.
Determining the Current Timestamp Value
To determine the current value of the timestamp counter:
There is an unspecified and variable delay between sending the GevTimestampControlLatch command and the moment the command takes effect.
Specifics
Camera Model | Timestamp Tick Frequency | Counter Can Be Reset during Camera Operation |
---|---|---|
All ace GigE camera models | 125 MHz (= 125 000 000 ticks per second, 1 tick = 8 ns) or 1 GHz (= 1 000 000 000 ticks per second, 1 tick = 1 ns) (a) | Yes. To reset the counter, make sure that PTP (if available) is disabled and execute the GevTimestampControlReset command. |
(a) Depends on the camera configuration, e.g., on whether PTP is enabled. To determine the current tick frequency, get the value of the GevTimestampTickFrequency parameter.
// Take a "snapshot" of the camera's current timestamp value
camera.Parameters[PLCamera.GevTimestampControlLatch].Execute();
// Get the timestamp value
Int64 i = camera.Parameters[PLCamera.GevTimestampValue].GetValue();
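Because the tick frequency can differ between camera configurations, you can convert the raw tick count into a time value by dividing it by the current tick frequency. A sketch using the GevTimestampTickFrequency parameter mentioned above:

// Latch and read the current timestamp value
camera.Parameters[PLCamera.GevTimestampControlLatch].Execute();
Int64 timestampTicks = camera.Parameters[PLCamera.GevTimestampValue].GetValue();
// Get the current tick frequency in ticks per second
// (e.g., 125 000 000 on a 125 MHz clock)
Int64 ticksPerSecond = camera.Parameters[PLCamera.GevTimestampTickFrequency].GetValue();
// Convert the tick count to seconds
double seconds = (double)timestampTicks / ticksPerSecond;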
This feature is only available with hardware triggering.
To set the trigger activation mode:
// Select the Frame Start trigger
camera.Parameters[PLCamera.TriggerSelector].SetValue(PLCamera.TriggerSelector.FrameStart);
// Set the trigger activation mode to rising edge
camera.Parameters[PLCamera.TriggerActivation].SetValue(PLCamera.TriggerActivation.RisingEdge);
To add a trigger delay:
// Select the frame start trigger
camera.Parameters[PLCamera.TriggerSelector].SetValue(PLCamera.TriggerSelector.FrameStart);
// Set the delay for the frame start trigger to 300 µs
camera.Parameters[PLCamera.TriggerDelayAbs].SetValue(300);
To set the trigger mode:
By default, the trigger mode is set to Off for all trigger types. This means that free run image acquisition is enabled.
On some camera models, the Immediate Trigger Mode is available.
When the Immediate Trigger Mode is enabled, exposure starts immediately after triggering, but changes to image parameters become effective with a short delay, i.e., after one or more images have been acquired. This is useful if you want to minimize the exposure start delay, i.e., if you want to start image acquisition as soon as possible, and if your imaging conditions are stable.
To enable the Immediate Trigger Mode, set the BslImmediateTriggerMode parameter to On.
The setting takes effect whenever the TriggerMode parameter is set to On.
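On camera models that support the feature, enabling it could look like this (a sketch; as the specifics table shows, the feature is not available on ace GigE models):

// Enable the Immediate Trigger Mode
camera.Parameters[PLCamera.BslImmediateTriggerMode].SetValue(PLCamera.BslImmediateTriggerMode.On);
// The setting takes effect whenever the trigger mode is set to On
camera.Parameters[PLCamera.TriggerSelector].SetValue(PLCamera.TriggerSelector.FrameStart);
camera.Parameters[PLCamera.TriggerMode].SetValue(PLCamera.TriggerMode.On);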
Camera Model | Immediate Trigger Mode |
---|---|
All ace GigE camera models | Not available |
// Select the Frame Start trigger
camera.Parameters[PLCamera.TriggerSelector].SetValue(PLCamera.TriggerSelector.FrameStart);
// Enable triggered image acquisition for the Frame Start trigger
camera.Parameters[PLCamera.TriggerMode].SetValue(PLCamera.TriggerMode.On);
Selecting a Trigger Type
To select a trigger type, set the TriggerSelector parameter to one of the following values:
Once you have selected a trigger type, you can do the following:
Task | Feature |
---|---|
Enabling or disabling triggered image acquisition for the selected trigger type | Trigger Mode |
Enabling hardware or software triggering for the selected trigger type | Trigger Source |
Selecting the input line or software command to act as the source for the selected trigger type | Trigger Source |
Selecting the signal transition necessary for enabling the selected trigger type (falling edge or rising edge) | Trigger Activation |
Configuring a delay between the receipt of a hardware trigger signal and the moment the selected trigger type becomes effective | Trigger Delay |
Available Trigger Types
Frame Start Trigger
The Frame Start trigger is used to start the acquisition of a single image. Every time the camera receives a Frame Start trigger signal, the camera starts the acquisition of exactly one image.
In free run acquisition mode, which is enabled by default, Frame Start trigger signals are generated automatically by the camera.
This is the most commonly used trigger type. In most imaging applications, you only need to configure this trigger type.
Frame Burst Start Trigger (= Acquisition Start Trigger)
If available, you can use the Frame Burst Start trigger to start the acquisition of a series of images (a "burst" of images). Every time the camera successfully receives a Frame Burst Start trigger signal, the camera starts the acquisition of a series of images. The number of images per series is specified by the AcquisitionFrameCount parameter.
Using the Frame Burst Start Trigger
Use Case 1: Frame Burst Start Trigger On, Frame Start Trigger Off
One way to use the Frame Burst Start trigger is to enable the Frame Burst Start trigger and to disable the Frame Start trigger.
This way, every time the camera successfully receives a Frame Burst Start trigger signal, the camera automatically acquires a complete series of images. The number of images per series is specified by the AcquisitionFrameCount parameter.
For example, if the AcquisitionFrameCount parameter is set to 3, the camera automatically acquires 3 images.
Afterwards, the camera waits for the next Frame Burst Start trigger signal. On the next trigger signal, the camera acquires another 3 images, and so on.
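Use case 1 could be sketched as follows (assuming, as noted above, that the Acquisition Start trigger acts as the Frame Burst Start trigger on your camera model):

// Enable the Acquisition Start trigger (acts as the Frame Burst Start trigger)
camera.Parameters[PLCamera.TriggerSelector].SetValue(PLCamera.TriggerSelector.AcquisitionStart);
camera.Parameters[PLCamera.TriggerMode].SetValue(PLCamera.TriggerMode.On);
// Disable the Frame Start trigger
camera.Parameters[PLCamera.TriggerSelector].SetValue(PLCamera.TriggerSelector.FrameStart);
camera.Parameters[PLCamera.TriggerMode].SetValue(PLCamera.TriggerMode.Off);
// Acquire 3 images per Acquisition Start trigger signal
camera.Parameters[PLCamera.AcquisitionFrameCount].SetValue(3);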
Use Case 2: Frame Burst Start Trigger On, Frame Start Trigger On
Another way to use the Frame Burst Start trigger is to enable both the Frame Burst Start trigger and the Frame Start trigger.
In this configuration, the camera does not acquire images automatically after receiving a Frame Burst Start trigger signal. Instead, the camera waits for Frame Start trigger signals, which you can apply to acquire the images of the series one by one. For example, if the AcquisitionFrameCount parameter is set to 3, you can apply 3 Frame Start trigger signals one after the other.
When the number of images per series (e.g., 3 images) has been reached, the camera ignores all further Frame Start trigger signals. You must apply a new Frame Burst Start trigger signal to start the next series of images.
If you want to trigger both trigger types via hardware signals, you must assign different hardware trigger sources to the Frame Burst Start Trigger and the Frame Start Trigger, e.g., Line1 and Line3.
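Assigning distinct hardware trigger sources to the two trigger types could be sketched as follows (assuming your camera model provides the FrameBurstStart selector and Line 1 and Line 3 as trigger sources):

// Select the Frame Burst Start trigger and set its source to Line 1
camera.Parameters[PLCamera.TriggerSelector].SetValue(PLCamera.TriggerSelector.FrameBurstStart);
camera.Parameters[PLCamera.TriggerSource].SetValue(PLCamera.TriggerSource.Line1);
// Select the Frame Start trigger and set its source to Line 3
camera.Parameters[PLCamera.TriggerSelector].SetValue(PLCamera.TriggerSelector.FrameStart);
camera.Parameters[PLCamera.TriggerSource].SetValue(PLCamera.TriggerSource.Line3);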
Specifics
Camera Model | Available Trigger Types | Maximum Number of Images per Series |
---|---|---|
acA2500-20gm | Acquisition Start, Frame Start | 255 |
// Select and enable the Frame Start trigger
camera.Parameters[PLCamera.TriggerSelector].SetValue(PLCamera.TriggerSelector.FrameStart);
camera.Parameters[PLCamera.TriggerMode].SetValue(PLCamera.TriggerMode.On);
// Select and enable the Acquisition Start trigger
camera.Parameters[PLCamera.TriggerSelector].SetValue(PLCamera.TriggerSelector.AcquisitionStart);
camera.Parameters[PLCamera.TriggerMode].SetValue(PLCamera.TriggerMode.On);
// Set the number of images to be acquired per Acquisition Start trigger signal to 3
camera.Parameters[PLCamera.AcquisitionFrameCount].SetValue(3);
To trigger the camera by executing a software command:
// Select the Frame Start trigger
camera.Parameters[PLCamera.TriggerSelector].SetValue(PLCamera.TriggerSelector.FrameStart);
// Enable triggered image acquisition for the Frame Start trigger
camera.Parameters[PLCamera.TriggerMode].SetValue(PLCamera.TriggerMode.On);
// Set the trigger source for the Frame Start trigger to Software
camera.Parameters[PLCamera.TriggerSource].SetValue(PLCamera.TriggerSource.Software);
// Generate a software trigger signal
camera.Parameters[PLCamera.TriggerSoftware].Execute();
Configuring a Hardware Trigger Source
If a hardware trigger source is available on your camera model, you can set it as the source for a trigger. To do so:
Configuring a Software Trigger Source
Specifics
Camera Model | Available Hardware Trigger Sources | Available Software Trigger Sources |
---|---|---|
acA2500-20gm | Line 1 | Software |
// Select the Frame Start trigger
camera.Parameters[PLCamera.TriggerSelector].SetValue(PLCamera.TriggerSelector.FrameStart);
// Set the trigger source to Line 1
camera.Parameters[PLCamera.TriggerSource].SetValue(PLCamera.TriggerSource.Line1);
The User-Defined Values camera feature allows you to store user-defined values in the camera.
How It Works
The camera can store up to five user-defined values (named Value1 to Value5). These can be values that you require for your application (e.g., optical parameter values for panoramic images). The values are 32-bit signed integer values that you can set and get as desired. They serve purely as storage locations and have no impact on camera operation.
Configuring User-Defined Values
// Select user-defined value 1
camera.Parameters[PLCamera.UserDefinedValueSelector].SetValue(PLCamera.UserDefinedValueSelector.Value1);
// Set user-defined value 1 to 1000
camera.Parameters[PLCamera.UserDefinedValue].SetValue(1000);
// Get the value of user-defined value 1
camera.Parameters[PLCamera.UserDefinedValueSelector].SetValue(PLCamera.UserDefinedValueSelector.Value1);
Int64 UserValue1 = camera.Parameters[PLCamera.UserDefinedValue].GetValue();
The User Output Value camera feature allows you to set the status of an output line to high (1) or low (0) by software.
This can be useful to control external events or devices, e.g., a light source.
Prerequisites
The line source of the desired output line must be set to a User Output signal.
Setting the Output Line Status
How to set the output line status depends on how many User Output line sources are available on your camera model.
One User Output line source is available ("User Output"):
Multiple User Output line sources are available (e.g., "User Output 1", "User Output 2"):
// Select Line 2 (output line)
camera.Parameters[PLCamera.LineSelector].SetValue(PLCamera.LineSelector.Line2);
// Set the source signal to User Output 1
camera.Parameters[PLCamera.LineSource].SetValue(PLCamera.LineSource.UserOutput1);
// Select the User Output 1 signal
camera.Parameters[PLCamera.UserOutputSelector].SetValue(PLCamera.UserOutputSelector.UserOutput1);
// Set the User Output Value for the User Output 1 signal to true.
// Because User Output 1 is set as the source signal for Line 2,
// the status of Line 2 is set to high.
camera.Parameters[PLCamera.UserOutputValue].SetValue(true);
The User Output Value All camera feature allows you to configure the status of all output lines in a single operation.
This can be useful to control external events or devices, e.g., a light source.
Configuring the Status of All Output Lines
You can configure the status of all output lines with the UserOutputValueAll parameter. The parameter value is a 64-bit integer.
Certain bits in the value are associated with the output lines. Each bit configures the status of its associated line:
Specifics
Camera Model | Bit-to-Line Association |
---|---|
acA2500-20gm | |
// Set the status of all output lines in a single operation
// Assume the camera has two output lines and you want to set both to high
// 0b110 (binary) = 6 (decimal)
camera.Parameters[PLCamera.UserOutputValueAll].SetValue(6);
The User Sets camera feature allows you to save or load camera settings. You can also specify which settings will be loaded at camera startup.
A user set (also called "configuration set") is a group of parameter values. It contains all parameter settings needed to control the camera, with a few exceptions.
Some user sets are preset and read-only. These user sets are also called "factory sets".
Each user set includes the values of all camera parameters, with the following exceptions:
This means that when you load or save a user set, the values of all camera parameters will be loaded or saved, except for the parameters listed above.
Loading a User Set
Saving a User Set
Designating the Startup Set
Designating a startup set is only possible when the camera is idle, i.e., not acquiring images.
The user set that you designate as the startup set will be loaded whenever the camera is powered on.
To designate the startup set, set the UserSetDefaultSelector parameter to one of the available user sets, e.g., UserSet1.
Available User Sets
The Default user set is a read-only factory set.
Loading this set configures the camera to provide good camera performance in many common applications and under average conditions. The Default user set contains the initial parameter values that the camera is shipped with, i.e., the factory default settings.
The HighGain user set is a read-only factory set.
Loading this set increases the gain by 6 dB.
The HighGain user set contains the same parameter values as the Default user set, with the following exceptions:
The AutoFunctions user set is a read-only factory set.
Loading this user set enables the camera's Exposure Auto and Gain Auto auto functions.
The AutoFunctions user set contains the same parameter values as the Default user set, with the following exceptions:
User Set 1, User Set 2, and User Set 3
You can use the UserSet1, UserSet2, and UserSet3 user sets to load and save your own camera settings.
By default, these user sets contain the same parameter values as the Default user set. However, you can overwrite the values with your own settings.
Specifics
Camera Model | Available User Sets |
---|---|
acA2500-20gm | Default, HighGain, AutoFunctions, UserSet1, UserSet2, UserSet3 |
// Load the High Gain user set
camera.Parameters[PLCamera.UserSetSelector].SetValue(PLCamera.UserSetSelector.HighGain);
camera.Parameters[PLCamera.UserSetLoad].Execute();
// Load the User Set 1 user set
camera.Parameters[PLCamera.UserSetSelector].SetValue(PLCamera.UserSetSelector.UserSet1);
camera.Parameters[PLCamera.UserSetLoad].Execute();
// Adjust some camera settings
camera.Parameters[PLCamera.Width].SetValue(600);
camera.Parameters[PLCamera.Height].SetValue(400);
camera.Parameters[PLCamera.ExposureTimeAbs].SetValue(3500.0);
// Save the settings in User Set 1
camera.Parameters[PLCamera.UserSetSelector].SetValue(PLCamera.UserSetSelector.UserSet1);
camera.Parameters[PLCamera.UserSetSave].Execute();
// Designate User Set 1 as the startup set
camera.Parameters[PLCamera.UserSetDefaultSelector].SetValue(PLCamera.UserSetDefaultSelector.UserSet1);