[Event Image] Converting an RGB Image to an Event Image

This post describes a method for converting conventional RGB images into event images, together with a simple Python implementation. The method simulates the characteristics of event-camera data by comparing each pixel against a shifted copy of the image, which is useful in research settings where real event-image data is hard to come by.


Preface

Sometimes we need event images for experiments, but real event-image data is scarce: searching online rarely turns up usable datasets, so we resort to converting ordinary images into event images to secure a data source.

Paper: Gehrig et al., "Video to Events: Recycling Video Datasets for Event Cameras", CVPR 2020. The reference repository contains code that implements video-to-events conversion as described in Gehrig et al. CVPR'20, along with the dataset used.

If you use this code in an academic context, please cite the following work:

Daniel Gehrig, Mathias Gehrig, Javier Hidalgo-Carrió, Davide Scaramuzza, "Video to Events: Recycling Video Datasets for Event Cameras", The Conference on Computer Vision and Pattern Recognition (CVPR), 2020

@InProceedings{Gehrig_2020_CVPR,
  author = {Daniel Gehrig and Mathias Gehrig and Javier Hidalgo-Carri\'o and Davide Scaramuzza},
  title = {Video to Events: Recycling Video Datasets for Event Cameras},
  booktitle = {{IEEE} Conf. Comput. Vis. Pattern Recog. (CVPR)},
  month = {June},
  year = {2020}
}
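The cited work simulates events from video by thresholding brightness changes between successive frames, since an event camera fires whenever the log-intensity at a pixel changes by more than a contrast threshold. As a rough illustration of that principle only (not the repository's actual implementation; the threshold value here is an assumption), a per-frame-pair sketch:

```python
import numpy as np

def frame_pair_to_events(prev_gray, curr_gray, threshold=0.2):
    """Toy sketch of event generation: a pixel fires an event where the
    log-intensity change between two frames exceeds a contrast threshold.
    `threshold` is an assumed illustrative value, not one from the paper."""
    eps = 1e-3  # avoid log(0) on dark pixels
    log_prev = np.log(prev_gray.astype(np.float64) + eps)
    log_curr = np.log(curr_gray.astype(np.float64) + eps)
    diff = log_curr - log_prev
    pos = diff > threshold     # ON events (brightness increased)
    neg = diff < -threshold    # OFF events (brightness decreased)
    return pos, neg
```

Real simulators such as the one in the repository additionally interpolate frames in time to produce asynchronous event timestamps; the sketch above only captures the per-pixel thresholding idea.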


Code:

import cv2
import numpy as np

def RGB_to_EventImg(im):
    gray_img = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
    # Shift offsets; values in the range [-5, 5] are recommended
    xshift = 3
    yshift = -3
    xlong = gray_img.shape[1] - 2 * abs(xshift)
    ylong = gray_img.shape[0] - 2 * abs(yshift)
    pic_shape = [ylong, xlong, 3]
    img = np.full(pic_shape, 0, dtype=np.uint8)
    # Record a pixel as an event if the intensity difference between it
    # and its shifted counterpart exceeds 10
    for i in range(abs(yshift), ylong):
        for j in range(abs(xshift), xlong):
            if int(gray_img[i + yshift][j + xshift]) - int(gray_img[i][j]) > 10:
                img[i][j] = [0, 255, 0]   # positive event -> green
            if int(gray_img[i + yshift][j + xshift]) - int(gray_img[i][j]) < -10:
                img[i][j] = [0, 0, 255]   # negative event -> red
    return img

if __name__ == '__main__':
    img = cv2.imread('test.png')
    cv2.imshow('frame', RGB_to_EventImg(img))
    cv2.waitKey(0)
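The double loop above is easy to follow but slow in pure Python. A hypothetical vectorized variant using NumPy slicing (operating on the already-converted grayscale image, so the `cv2.cvtColor` call stays with the caller; the function name is mine) might look like:

```python
import numpy as np

def gray_to_event_img(gray, xshift=3, yshift=-3, threshold=10):
    """Hypothetical vectorized variant of the loop version above.
    `gray` is a uint8 grayscale image; output matches the loop version."""
    g = gray.astype(np.int16)        # avoid uint8 wraparound on subtraction
    ay, ax = abs(yshift), abs(xshift)
    h = g.shape[0] - 2 * ay          # same output size as the loop version
    w = g.shape[1] - 2 * ax
    # signed difference between each shifted pixel and its reference pixel
    diff = g[ay + yshift:h + yshift, ax + xshift:w + xshift] - g[ay:h, ax:w]
    out = np.zeros((h, w, 3), dtype=np.uint8)
    region = out[ay:h, ax:w]         # a view: writes go through to `out`
    region[diff > threshold] = (0, 255, 0)     # positive event -> green
    region[diff < -threshold] = (0, 0, 255)    # negative event -> red
    return out
```

Boolean-mask assignment on the slice view replaces both nested loops, which is typically orders of magnitude faster on large images.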

Demo:

Before conversion:

After conversion:

I have to say, the result actually looks rather nice~

Done!

Super simple, right? If you found this useful, please like and follow!
