Baumer industrial cameras are high-performance, high-quality cameras used in a wide range of applications, such as object detection, counting and recognition, motion analysis, and image processing.
Baumer's 10GigE cameras offer excellent image-processing performance and can stream high-resolution images in real time. They also feature fast data transfer, low power consumption, easy integration, and high scalability.
Thanks to their excellent and stable performance and quality, Baumer industrial cameras are frequently used for high-speed synchronized acquisition, where various image algorithms are typically applied to improve the quality of the captured images.
The BGAPI SDK is a software development kit created by Baumer for its camera product lines. It provides a set of APIs that let developers write professional applications to control Baumer cameras and to capture, process, and display their images and data. The BGAPI SDK supports multiple programming languages, including C++, C#, Visual Basic, LabVIEW, and Matlab, and ships with extensive sample code and documentation to help users get started and develop applications quickly.
The BGAPI SDK offers a rich feature set: it can control all camera parameters, including exposure time, gain, white balance, and trigger mode, and supports various data formats such as Raw, BMP, and JPG. It also provides live display, data acquisition, and image-processing functions, giving developers a highly customizable solution. In addition, the BGAPI SDK supports multi-camera systems and runs on various operating systems such as Windows, Linux, and macOS.
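For example, typical parameters are set through GenICam feature nodes. The following is a minimal sketch (not part of the example below): it assumes an already opened BGAPI2::Device* and uses common SFNC feature names ("ExposureTime", "Gain", "BalanceWhiteAuto", "TriggerMode"), which may differ slightly depending on the camera model and firmware.

#include "bgapi2_genicam/bgapi2_genicam.hpp"

// Minimal sketch: configure basic acquisition parameters on an already opened device.
void ConfigureCamera(BGAPI2::Device* device) {
    device->GetRemoteNode("ExposureTime")->SetDouble(10000.0);     // exposure time in microseconds
    device->GetRemoteNode("Gain")->SetDouble(1.0);                 // keep the gain low to limit noise
    device->GetRemoteNode("BalanceWhiteAuto")->SetString("Once");  // run auto white balance once (color cameras)
    device->GetRemoteNode("TriggerMode")->SetString("Off");        // free-running acquisition
}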
The AutoFocus function of an industrial camera means that the camera can focus on an object quickly and precisely, either on its own or with the help of external devices (such as photoelectric sensors or laser rangefinders).
AutoFocus greatly improves acquisition efficiency and accuracy. Especially in industrial production, machine vision, and smart manufacturing, it makes it possible to identify objects and measure parameters such as size and shape quickly and accurately, enabling automated production and quality control.
When selecting an industrial camera, consider whether AutoFocus is required for the specific application, and choose an appropriate focusing method and a suitable lens accordingly.
This article describes how to use the AutoFocus function through the BGAPI SDK.
The following shows how a Baumer industrial camera uses the AutoFocus function via the BGAPI SDK in C++.
The code is as follows (example):
#include <iostream>
#include <sstream>
#include <cstdint>
#include <map>
#include <thread>
#include <chrono>
#include <algorithm>
#include "bgapi2_genicam/bgapi2_genicam.hpp"
The core code for using the AutoFocus function on a Baumer industrial camera is shown below:
// SystemList
// Open a System
// Get the InterfaceList and fill it, open an Interface
// Get the DeviceList and fill it
// Open a Device
typedef struct _Result {
int64_t sharp_value;
int64_t lens;
int64_t method_value;
} Result;
std::map<int64_t, Result> result_map;
//---------------------------------------------------------------------------------------------------------------------
// This example uses a Software Trigger to get images from the camera. The trigger is issued regularly
// from a separate thread so that triggering and calculation can run in parallel.
void SoftwareTriggerThread(BGAPI2::Device* device, int64_t time_delay) {
std::this_thread::sleep_for(std::chrono::milliseconds(time_delay));
device->GetRemoteNode("TriggerSoftware")->Execute();
}
void ExampleAutoFocus(BGAPI2::Device* device) {
std::thread trigger_thread;
const uint64_t lens_delay = 70; // A delay required for settling the liquid lens
const int64_t focus_threshold = 85; // Threshold in percent for detection of a peak!
BGAPI2::ImageProcessor* image_processor = new BGAPI2::ImageProcessor();
device->GetRemoteNode("AcquisitionStop")->Execute(); // to make sure that camera stopped
BGAPI2::Node* lens_focus = device->GetRemoteNode("OpticFeatureValue");
BGAPI2::String pixel_format = device->GetRemoteNode("PixelFormat")->GetString();
const int image_width = static_cast<int>(device->GetRemoteNode("Width")->GetInt());
const int image_height = static_cast<int>(device->GetRemoteNode("Height")->GetInt());
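// Note: ROI, CheckAndFixRoi, ShowImage, AutoBrightnessWhiteBalance, CrossCalculateMono and the
// sobel_x/sobel_y matrices are helper types and functions from the Baumer example project; they are
// not reproduced in this listing.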
ROI roi = {200, 200, 400, 300}; // We only use a region of the image to set the focus
int64_t focus_start_step = 1000; // The step size to jump for next measurement
int64_t focus_step = focus_start_step;
int64_t lens_focus_max = std::min(static_cast<bo_int64>(40000), lens_focus->GetIntMax());
int64_t lens_focus_min = std::max(static_cast<bo_int64>(10000), lens_focus->GetIntMin());
int64_t focus_value = lens_focus->GetInt(); // the start value for the algorithm
// Detect and open the data stream
BGAPI2::DataStreamList *datastreamList = device->GetDataStreams();
datastreamList->Refresh();
BGAPI2::DataStream *datastream = datastreamList->begin()->second;
datastream->Open();
// Add 4 buffers to the data stream
BGAPI2::BufferList *bufferList = datastream->GetBufferList();
for (int i = 0; i < 4; i++) {
BGAPI2::Buffer* buffer = new BGAPI2::Buffer();
bufferList->Add(buffer);
buffer->QueueBuffer();
}
BGAPI2::Buffer* buffer_filled = nullptr;
//------------------------------------------------------------------------------------
// A high gain value is bad for a good auto focus (regarding noise)!
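// SFNC_GAINSELECTOR, SFNC_GAINSELECTORVALUE_ALL and SFNC_GAIN are feature-name string constants
// provided by the BGAPI2 SDK headers.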
device->GetRemoteNode(SFNC_GAINSELECTOR)->SetString(SFNC_GAINSELECTORVALUE_ALL);
device->GetRemoteNode(SFNC_GAIN)->SetValue("1.0");
datastream->StartAcquisitionContinuous();
// use camera auto function to obtain well balanced images
AutoBrightnessWhiteBalance(device, datastream, roi);
//------------------------------------------------------------------------------------
device->GetRemoteNode("TriggerSource")->SetString("Software");
device->GetRemoteNode("TriggerMode")->SetString("On");
device->GetRemoteNode("AcquisitionStart")->Execute();
device->GetRemoteNode("TriggerSoftware")->Execute();
int64_t sharp_min = 1L << 30;
int64_t sharp_max = 0;
int64_t sharp_maxno = 0;
const std::chrono::system_clock::time_point start_time = std::chrono::system_clock::now();
bool is_min_focus = false;
bool is_max_focus = false;
bool is_finished = false;
for (int i = 0; !is_finished; i++) {
int64_t read_focus_value = lens_focus->GetInt();
buffer_filled = datastream->GetFilledBuffer(1000);
if (buffer_filled == nullptr) {
std::cout << "Error: Buffer Timeout after 1000 ms" << std::endl;
device->GetRemoteNode("TriggerSoftware")->Execute();
} else if (buffer_filled->GetIsIncomplete() == true) {
std::cout << "Error: Image is incomplete" << std::endl;
buffer_filled->QueueBuffer();
} else {
// get a correct picture, check the calculated value and calculate the next step
//=======================================================
if (lens_focus) {
int64_t next_focus = focus_value + focus_step;
if (next_focus > lens_focus_max)
is_max_focus = true;
if (next_focus < lens_focus_min)
is_min_focus = true;
if (is_min_focus && is_max_focus) {
is_finished = true; break; // reached both limits!
}
if (!is_min_focus && !is_max_focus) {
if (focus_step > 0) {
focus_step = -focus_step - focus_start_step;
if (next_focus + focus_step < lens_focus_min) {
is_min_focus = true;
focus_step = +focus_start_step;
}
} else {
focus_step = -focus_step + focus_start_step;
if (next_focus + focus_step > lens_focus_max) {
is_max_focus = true;
focus_step = -focus_start_step;
}
}
} else {
if (!is_min_focus) {
focus_step = -focus_start_step;
} else if (!is_max_focus) {
focus_step = +focus_start_step;
}
next_focus = focus_value + focus_step;
}
focus_value = next_focus;
if (focus_value > lens_focus_max) {
is_finished = true;
}
//=======================================================
if (!is_finished) { // -> next setting and image
lens_focus->SetInt(focus_value);
trigger_thread = std::thread(&SoftwareTriggerThread, device, lens_delay);
}
void* buffer_pointer = buffer_filled->GetMemPtr();
uint64_t buffer_size = buffer_filled->GetSizeFilled();
BGAPI2::Image* image = image_processor->CreateImage(image_width, image_height,
pixel_format, buffer_pointer, buffer_size);
CheckAndFixRoi(&roi, image_width, image_height);
#if USE_OPENCV
// Display the image
if (pixel_format == "Mono8") {
ShowImage(buffer_pointer, image_width, image_height, CV_8UC1, &roi);
} else {
// Convert to BGR8
const size_t size = static_cast<size_t>(image->GetTransformBufferLength("BGR8"));
char* mem_buffer = new char[size];
if (mem_buffer) {
image_processor->TransformImageToBuffer(image, "BGR8", mem_buffer, size);
ShowImage(mem_buffer, image_width, image_height, CV_8UC3, &roi);
delete[] mem_buffer;
}
}
#endif
// Convert to Mono8 for sharpness measurement
const size_t size = static_cast<size_t>(image->GetTransformBufferLength("Mono8"));
uint8_t* mem_buffer = new uint8_t[size];
if (mem_buffer) {
image_processor->TransformImageToBuffer(image, "Mono8", mem_buffer, size);
int64_t sharpness = 0;
//-----------------------------------------------------------------
// The algorithm for measuring sharpness:
// Here you can use: Prewitt, Sobel, Scharr - or your own algorithm such as
// OpenCV Laplacian, DFT (FFT), Canny or something else.
// We use Sobel sharpening! See HelperFunctions -> sobel matrix
sharpness = CrossCalculateMono(mem_buffer, image_width, image_height,
sobel_x, sobel_y, roi);
//-----------------------------------------------------------------
result_map[read_focus_value].lens = read_focus_value;
result_map[read_focus_value].method_value = sharpness;
if (sharpness < sharp_min) {
sharp_min = sharpness;
}
if (sharpness > sharp_max) {
sharp_max = sharpness;
sharp_maxno = read_focus_value;
}
if (sharpness < (focus_threshold * sharp_max) / 100) {
if (read_focus_value < sharp_maxno) {
is_min_focus = true;
} else {
is_max_focus = true;
}
}
delete[] mem_buffer;
}
image->Release();  // release the image created by CreateImage above
buffer_filled->QueueBuffer();
}
}
if (trigger_thread.joinable())
trigger_thread.join();
}
int64_t meanvalue = 0;
int64_t meancount = 0;
int64_t auto_focus_value = -1;
int64_t new_focus = -1;
int64_t max_sharpness = sharp_max;
lens_focus_min = lens_focus_max = -1;
for (auto result : result_map) {
if (lens_focus_min < 0)
lens_focus_min = result.first;
if (result.second.method_value * 100 > focus_threshold * max_sharpness) {
lens_focus_max = result.first;
meanvalue +=
result.second.method_value * (result.first - lens_focus_min);
meancount++;
} else if (lens_focus_max < 0)
lens_focus_min = result.first;
}
int64_t outside_min = -1;
int64_t outside_max = -1;
for (auto result : result_map) {
if (result.second.method_value * 100 > focus_threshold * max_sharpness) {
outside_max = result.first;
} else if (outside_max < 0) {
outside_min = result.first;
} else {
outside_max = result.first;
break;
}
}
if (outside_min > 0 && outside_max > 0 &&
(result_map.find(outside_min + 1000) != result_map.end()) &&
(result_map.find(outside_max - 1000) != result_map.end())) {
int64_t threshold = (focus_threshold * max_sharpness) / 100;
int64_t min = 1000 * (threshold - result_map[outside_min].method_value) /
(result_map[outside_min + 1000].method_value - result_map[outside_min].method_value);
outside_min += min;
int64_t max = 1000 * (threshold - result_map[outside_max].method_value) /
(result_map[outside_max - 1000].method_value - result_map[outside_max].method_value);
outside_max -= max;
new_focus = (outside_min + outside_max) / 2;
}
if (new_focus < 0) {
new_focus = sharp_maxno;
}
if (lens_focus && (new_focus > lens_focus_min) && (new_focus < lens_focus_max)) {
auto_focus_value = new_focus;
lens_focus->SetInt(auto_focus_value); // set the new calculated value to the camera!
#if USE_OPENCV
// read an image and show it (if you use OpenCV)
SoftwareTriggerThread(device, lens_delay);
buffer_filled = datastream->GetFilledBuffer(1000);
if (buffer_filled) {
BGAPI2::Image* image = image_processor->CreateImage(image_width, image_height,
buffer_filled->GetPixelFormat(), buffer_filled->GetMemPtr(),
buffer_filled->GetSizeFilled());
const size_t size = static_cast<size_t>(image->GetTransformBufferLength("BGR8"));
char* mem_buffer = new char[size];
if (mem_buffer) {
image_processor->TransformImageToBuffer(image, "BGR8", mem_buffer, size);
ShowImage(mem_buffer, image_width, image_height, CV_8UC3, nullptr);
delete[] mem_buffer;  // free the temporary BGR8 buffer
}
image->Release();  // release the image created by CreateImage above
buffer_filled->QueueBuffer();
} else {
std::cout << "don't get last buffer!" << std::endl;
}
#endif
std::chrono::duration<double, std::milli> overall_delay =
std::chrono::system_clock::now() - start_time;
std::cout << "AutoFocus finished after " << overall_delay.count() << "ms and " <<
result_map.size() << " images, focus value = " << auto_focus_value << "!" << std::endl;
} else {
std::cout << "The camera couldn't focused correctly!" << std::endl;
}
// clean up camera and buffers
device->GetRemoteNode("AcquisitionAbort")->Execute();
device->GetRemoteNode("AcquisitionStop")->Execute();
device->GetRemoteNode("TriggerMode")->SetString("Off");
datastream->StopAcquisition();
bufferList->DiscardAllBuffers();
while (bufferList->size() > 0) {
BGAPI2::Buffer* buffer = bufferList->begin()->second;
bufferList->RevokeBuffer(buffer);
delete buffer;
}
datastream->Close();
if (image_processor != nullptr) {
delete image_processor;
image_processor = nullptr;
}
}
// SystemList
// Open a System
// Get the InterfaceList and fill it, open an Interface
// Get the DeviceList and fill it
// Open a Device
int main(int numArgs, char *args[])
{
int returncode = 0;
int64_t camfound = 0;
std::cout << "BGAPI2 Example 503 - AutoFocus" << std::endl;
#ifndef USE_OPENCV // OpenCV
// this part is used if no matching OpenCV installation was found by CMake!
std::cout << "Without OpenCV buffer images are not shown on screen and not saved to files!" << std::endl;
std::cout << "Availability of OpenCV is checked while CMake creates this project." << std::endl;
std::cout << "Please install OpenCV (version 2.3 or later) or set 'OpenCV_DIR' to the" << std::endl;
std::cout << "correct path in the CMakeList.txt script or as a variable in your environment" << std::endl;
std::cout << "and run CMake again. " << std::endl;
std::cout << "######################################" << std::endl << std::endl;
#endif // USE_OPENCV
try {
// First search for a camera which supports the liquid lens
// We use the feature "OpticFeatureValue" to check if the camera supports the liquid lens
// Get the list of systems and loop through
BGAPI2::SystemList *system_list = BGAPI2::SystemList::GetInstance();
system_list->Refresh();
for (auto system_pair : *system_list) {
auto system = system_pair->second; // gige, usb3, ..
system->Open();
// Get the list of interfaces on the system and loop through
auto interface_list = system->GetInterfaces();
interface_list->Refresh(100);
for (auto interface_pair : *interface_list) {
auto interface = interface_pair->second;
interface->Open();
auto device_list = interface->GetDevices();
device_list->Refresh(100);
// Get the list of devices and loop through them to find a camera supporting a liquid lens
for (auto device_pair: *device_list) {
BGAPI2::Device* device = device_pair->second;
device->Open();
std::stringstream camera_name;
camera_name << device->GetModel() << "(SN = " << device->GetSerialNumber() << ")";
if (device->GetRemoteNodeList()->GetNodePresent("OpticFeatureValue")) {
if (!device->GetRemoteNode("OpticFeatureValue")->IsWriteable()) {
continue;
}
} else {
continue;
}
std::cout << "Camera found!" << std::endl;
#ifdef _DEBUG
// Switch off the heartbeat in debug mode, otherwise the camera might disconnect during debug!
if (device->GetRemoteNodeList()->GetNodePresent("DeviceLinkHeartbeatMode"))
device->GetRemoteNode("DeviceLinkHeartbeatMode")->SetString("Off");
#endif
// now try to focus the camera
ExampleAutoFocus(device);
#ifdef _DEBUG
// Switch the heartbeat back on
if (device->GetRemoteNodeList()->GetNodePresent("DeviceLinkHeartbeatMode"))
device->GetRemoteNode("DeviceLinkHeartbeatMode")->SetString("On");
#endif
// close the camera
device->Close();
camfound = 1;
break; // if one camera found!
}
interface->Close();
if (camfound)
break; // if one camera found!
}
system->Close();
if (camfound)
break; // if one camera found!
}
if (camfound == 0) {
std::cout << "No camera found on any system and any interface!" << std::endl;
}
}
catch (BGAPI2::Exceptions::IException& ex) {
returncode = (returncode == 0) ? 1 : returncode;
std::cout << "Error in function: " << ex.GetFunctionName() << std::endl << "Error description: "
<< ex.GetErrorDescription() << std::endl << std::endl;
}
BGAPI2::SystemList::ReleaseInstance();
#ifdef USE_OPENCV // OpenCV
// Wait a delay
cv::waitKey(10000); // show the focused image for 10 seconds
cv::destroyAllWindows();
#else
for (int i = 0; i < 5; i++) { // a little while of 5 seconds...
std::this_thread::sleep_for(std::chrono::milliseconds(1000));
}
#endif
return returncode;
}
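The actual sharpness measurement is done by the helper function CrossCalculateMono together with the sobel_x/sobel_y matrices, which are part of the example project and are not shown above. As a rough illustration of the underlying idea only, the following sketch computes a Tenengrad-style Sobel sharpness score over a region of interest of a Mono8 image; the RoiSketch struct, the function name and its signature are assumptions for this sketch and do not match the original helper code exactly.

#include <algorithm>
#include <cstdint>
#include <cstdlib>

// Hypothetical ROI struct for this sketch (the example project defines its own ROI type).
struct RoiSketch { int x; int y; int width; int height; };

// Tenengrad-style sharpness: sum of Sobel gradient magnitudes over the ROI of a Mono8 image.
// A larger value indicates a sharper (better focused) image.
int64_t SobelSharpness(const uint8_t* img, int img_width, int img_height, const RoiSketch& roi) {
    static const int sx[3][3] = { {-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1} };   // Sobel x kernel
    static const int sy[3][3] = { {-1, -2, -1}, {0, 0, 0}, {1, 2, 1} };   // Sobel y kernel
    const int y0 = std::max(roi.y, 1), y1 = std::min(roi.y + roi.height, img_height - 1);
    const int x0 = std::max(roi.x, 1), x1 = std::min(roi.x + roi.width, img_width - 1);
    int64_t sum = 0;
    for (int y = y0; y < y1; y++) {
        for (int x = x0; x < x1; x++) {
            int gx = 0, gy = 0;
            for (int ky = -1; ky <= 1; ky++) {          // 3x3 convolution around (x, y)
                for (int kx = -1; kx <= 1; kx++) {
                    const int pixel = img[(y + ky) * img_width + (x + kx)];
                    gx += sx[ky + 1][kx + 1] * pixel;
                    gy += sy[ky + 1][kx + 1] * pixel;
                }
            }
            sum += std::abs(gx) + std::abs(gy);         // L1 approximation of the gradient magnitude
        }
    }
    return sum;
}

The autofocus loop above uses such a score as a figure of merit: it sweeps the lens value in steps of focus_start_step, stores the score for each lens position in result_map, and finally interpolates the center of the range whose scores exceed focus_threshold percent of the maximum score.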
Using the AutoFocus function on an industrial camera offers the following advantages:
Higher working efficiency: AutoFocus identifies and focuses on the target object quickly and accurately, avoiding the time lost to manual focus adjustment. It therefore increases production efficiency and saves time and labor costs.
Higher measurement accuracy: AutoFocus automatically adjusts the focus according to the shooting distance and the size of the object, ensuring image sharpness and measurement accuracy.
Adaptability to different scenes: AutoFocus is broadly applicable in industrial production, machine vision, smart manufacturing, and other fields. It can effectively recognize and focus on target objects of different sizes and shapes, making it highly versatile.
Lower operating difficulty: Compared with manual focusing, AutoFocus is easier to operate and use, lowering the skill requirements for operators and reducing training costs.
In short, AutoFocus is one of the key functions of modern industrial cameras and contributes greatly to working efficiency, measurement accuracy, and ease of operation.
The AutoFocus function of industrial cameras is widely useful across many industries. Some common applications include:
Machine vision inspection: AutoFocus helps machine vision systems automatically recognize parts and perform dimensional measurement, appearance inspection, and positioning, which is valuable in manufacturing, electronics, automotive, and other industries.
Materials science and non-destructive testing: With AutoFocus, industrial cameras can accurately detect surface defects (such as cracks and pores), improving the accuracy and speed of inspection for metal products, plastic products, and similar industries.
Healthcare: With AutoFocus, industrial cameras provide accurate imaging data to support pathology analysis, diagnostic imaging, and other medical applications, for example fast focusing in microscopy and endoscopy equipment.
Biotechnology: AutoFocus can be used for biological sample inspection, microscopic imaging, and gene-chip imaging, enabling high-speed, high-precision image processing and analysis.
Semiconductor industry: AutoFocus supports inspection and quality control in chip manufacturing, checking wafers, packages, and other components for defects, reducing defect rates and monitoring production quality.
Energy industry: The AutoFocus function can be used for the inspection and maintenance of energy facilities such as solar panels and nuclear power plants, helping to keep equipment running normally and safely.
Security and surveillance: AutoFocus can be used to adjust and control surveillance cameras, ensuring sharp surveillance footage and improving the effectiveness of security systems.
In summary, across industry applications the AutoFocus function of industrial cameras improves image-processing speed and accuracy, enables fast capture and inspection of target objects, and improves production efficiency and quality.