Running RFBNet Detection on the Allwinner V3s

RFBNet reaches accuracy comparable to detectors with very deep backbones while remaining real-time.

Paper: Receptive Field Block Net for Accurate and Fast Object Detection (ECCV 2018)

Link: https://arxiv.org/abs/1711.07767

GitHub: https://github.com/Linzaer/Ultra-Light-Fast-Generic-Face-Detector-1MB
Pros:

Modified from SSD and extremely fast: tested with MNN, MTCNN takes about 20 ms while this model needs only about 4 ms. Its accuracy is lower than MTCNN's.

Supports NCNN and MNN. Two model variants are provided: a slim version (faster) and an RFB version (more accurate).

Cons:

No facial landmark detection.

Cross-compiling MNN

MNN is Alibaba's open-source machine-learning inference framework and comes with detailed official documentation.

Download it from GitHub: https://github.com/alibaba/MNN

A Linaro toolchain can be used for cross-compilation; I used arm-linux-gnueabihf-g++ 6.3. Configure with cmake:

cmake .. \
    -DCMAKE_SYSTEM_NAME=Linux \
    -DCMAKE_SYSTEM_VERSION=1 \
    -DCMAKE_SYSTEM_PROCESSOR=arm \
    -DCMAKE_C_COMPILER=arm-linux-gnueabihf-gcc \
    -DCMAKE_CXX_COMPILER=arm-linux-gnueabihf-g++
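
Equivalently (a minimal sketch of my own, not from the original post; the file name is illustrative), the same settings can be collected into a CMake toolchain file and passed with -DCMAKE_TOOLCHAIN_FILE:

    # arm-linux-gnueabihf.toolchain.cmake
    set(CMAKE_SYSTEM_NAME Linux)
    set(CMAKE_SYSTEM_VERSION 1)
    set(CMAKE_SYSTEM_PROCESSOR arm)
    set(CMAKE_C_COMPILER arm-linux-gnueabihf-gcc)
    set(CMAKE_CXX_COMPILER arm-linux-gnueabihf-g++)

    # usage: cmake .. -DCMAKE_TOOLCHAIN_FILE=../arm-linux-gnueabihf.toolchain.cmake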

Build MNN:

mkdir build
cd build
cmake ..        # plus the cross-compilation options above
make -j4

The build produces libMNN.so; it can be inspected with the file command:
file libMNN.so
libMNN.so: ELF 32-bit LSB shared object, ARM, EABI5 version 1 (GNU/Linux), dynamically linked, BuildID[sha1]=8da2a34050d48644ca34e7bf6a622381475e6777, not stripped
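
As an extra sanity check (my suggestion, not part of the original write-up), readelf can show the ARM build attributes; a hard-float (gnueabihf) build should report that VFP registers are used for argument passing:

    readelf -A libMNN.so | grep -E 'Tag_ABI_VFP_args|Tag_FP_arch'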

Cross-compiling RFB-MNN

Copy libMNN.so to /home/t/Zero/Ultra-Light-Fast-Generic-Face-Detector-1MB-master/MNN/mnn/lib
and comment out everything OpenCV-related.

main.cpp is modified as follows:

//  Created by Linzaer on 2019/11/15.
//  Copyright © 2019 Linzaer. All rights reserved.

#include "UltraFace.hpp"
#include <iostream>
//#include <opencv2/opencv.hpp>
#include <cstring>
#include <fstream>
#include <iostream>
#include <chrono>
#include <cmath>
#include <memory>

using namespace std;

// Read a whole file into a heap buffer; the size is returned through sizeptr.
static unique_ptr<char[]> file_to_buffer(char *filename, int *sizeptr) {
    ifstream fin(filename, ios::in | ios::binary);
    if (!fin.is_open()) {
        cout << "Could not open file: " << filename << endl;
        exit(-1);
    }
    fin.seekg(0, std::ios::end);
    *sizeptr = fin.tellg();
    fin.seekg(0, std::ios::beg);
    unique_ptr<char[]> buffer(new char[*sizeptr]);
    fin.read((char *)buffer.get(), *sizeptr);
    fin.close();
    return move(buffer);
}

int main(int argc, char **argv) {
    if (argc <= 2) {
        fprintf(stderr, "Usage: %s <mnn .mnn> [image files...]\n", argv[0]);
        return 1;
    }

    string mnn_path = argv[1];
    UltraFace ultraface(mnn_path, 320, 240, 4, 0.65); // config model input

    string image_file = argv[2];
    cout << "Processing " << image_file << endl;

    int datasize = 0;
    unique_ptr<char[]> datafile = file_to_buffer(argv[2], &datasize);
    printf("datasize = %d\n", datasize);

    auto start = chrono::steady_clock::now();
    vector<FaceInfo> face_info;
    ultraface.detect((uint8_t*)datafile.get(), face_info);
    for (auto face : face_info) {
        printf("x1 %f y1 %f, x2 %f y2 %f \n", face.x1, face.y1, face.x2, face.y2);
    }
    auto end = chrono::steady_clock::now();
    chrono::duration<double> elapsed = end - start;
    cout << "all time: " << elapsed.count() << " s" << endl;

    /*
    for (int i = 2; i < argc; i++) {
        string image_file = argv[i];
        cout << "Processing " << image_file << endl;
        cv::Mat frame = cv::imread(image_file);
        auto start = chrono::steady_clock::now();
        vector<FaceInfo> face_info;
        ultraface.detect(frame, face_info);
        for (auto face : face_info) {
            cv::Point pt1(face.x1, face.y1);
            cv::Point pt2(face.x2, face.y2);
            cv::rectangle(frame, pt1, pt2, cv::Scalar(0, 255, 0), 2);
        }
        auto end = chrono::steady_clock::now();
        chrono::duration<double> elapsed = end - start;
        cout << "all time: " << elapsed.count() << " s" << endl;
        cv::imshow("UltraFace", frame);
        cv::waitKey();
        string result_name = "result" + to_string(i) + ".jpg";
        cv::imwrite(result_name, frame);
    }
    */
    return 0;
}
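
Since image decoding was removed along with OpenCV, main() simply reads the file verbatim and hands the bytes to detect(); the input therefore has to be a raw 320x240 BGR buffer (320 * 240 * 3 = 230400 bytes), matching the stride of 960 hard-coded in UltraFace::detect below. A hypothetical invocation (both file names are placeholders):

    ./test RFB-320.mnn face_320x240.bgr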

UltraFace.cpp is modified as follows:

int UltraFace::detect(/*cv::Mat &raw_image*/ uint8_t* source, std::vector<FaceInfo> &face_list) {
    /*
    if (raw_image.empty()) {
        std::cout << "image is empty ,please check!" << std::endl;
        return -1;
    }
    image_h = raw_image.rows;
    image_w = raw_image.cols;
    cv::Mat image;
    cv::resize(raw_image, image, cv::Size(in_w, in_h));
    */
    image_h = 1024;                               // size of the original input image
    image_w = 768;

    ultraface_interpreter->resizeTensor(input_tensor, {1, 3, in_h, in_w});
    ultraface_interpreter->resizeSession(ultraface_session);
    std::shared_ptr<MNN::CV::ImageProcess> pretreat(
            MNN::CV::ImageProcess::create(MNN::CV::BGR, MNN::CV::RGB, mean_vals, 3, norm_vals, 3));
    // stride 960 = 320 (resized width) * 3 channels; `source` must already be a 320x240 BGR buffer
    pretreat->convert(/*image.data*/ source, in_w, in_h, /*image.step[0]*/ 960, input_tensor);

    auto start = chrono::steady_clock::now();
    // run network
    ultraface_interpreter->runSession(ultraface_session);

    // get output data
    string scores = "scores";
    string boxes = "boxes";
    MNN::Tensor *tensor_scores = ultraface_interpreter->getSessionOutput(ultraface_session, scores.c_str());
    MNN::Tensor *tensor_boxes = ultraface_interpreter->getSessionOutput(ultraface_session, boxes.c_str());
    MNN::Tensor tensor_scores_host(tensor_scores, tensor_scores->getDimensionType());
    tensor_scores->copyToHostTensor(&tensor_scores_host);
    MNN::Tensor tensor_boxes_host(tensor_boxes, tensor_boxes->getDimensionType());
    tensor_boxes->copyToHostTensor(&tensor_boxes_host);

    std::vector<FaceInfo> bbox_collection;

    auto end = chrono::steady_clock::now();
    chrono::duration<double> elapsed = end - start;
    cout << "inference time:" << elapsed.count() << " s" << endl;

    generateBBox(bbox_collection, tensor_scores, tensor_boxes);
    nms(bbox_collection, face_list);

    return 0;
}

UltraFace.hpp is modified as follows:

//#include <opencv2/opencv.hpp>
int detect(/*cv::Mat &img*/ uint8_t* source, std::vector<FaceInfo> &face_list);

Compile and test

Compile with: arm-linux-gnueabihf-g++ -o test main.cpp UltraFace.cpp -L../mnn/lib/ -I../mnn/include -lMNN

Copy test and libMNN.so to the Allwinner V3s and run the test there (a run sketch follows below).
Note: OpenCV's only role in the original code was to supply the RGB pixel data of the image; with it removed, that raw buffer must be prepared by hand.
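
A minimal run sketch on the board (paths and file names are assumptions, reusing the placeholder names from the host-side example above):

    cd /root                                    # directory holding test, libMNN.so and the model
    export LD_LIBRARY_PATH=.:$LD_LIBRARY_PATH   # let the dynamic loader find libMNN.so
    ./test RFB-320.mnn face_320x240.bgr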

Capturing frames on the Allwinner V3s with V4L2

Reference code:

#include "v4l2_device.h"  typedef struct {void *start;int length;
} BUFTYPE;BUFTYPE *usr_buf;
static unsigned int n_buffer = 0;
static int tmp = 0;/*set video capture ways(mmap)*/
int init_mmap(int fd) {/*to request frame cache, contain requested counts*/struct v4l2_requestbuffers reqbufs;memset(&reqbufs, 0, sizeof(reqbufs));reqbufs.count = 4; /*the number of buffer*/reqbufs.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;reqbufs.memory = V4L2_MEMORY_MMAP;if (-1 == ioctl(fd, VIDIOC_REQBUFS, &reqbufs)) {perror("Fail to ioctl 'VIDIOC_REQBUFS'");system("sync");system("reboot");exit (EXIT_FAILURE);}n_buffer = reqbufs.count;printf("n_buffer = %d\n", n_buffer);usr_buf = (BUFTYPE *) calloc(reqbufs.count, sizeof(BUFTYPE));if (usr_buf == NULL) {printf("Out of memory\n");system("sync");system("reboot");exit(-1);}/*map kernel cache to user process*/for (n_buffer = 0; n_buffer < reqbufs.count; ++n_buffer) {//stand for a framestruct v4l2_buffer buf;memset(&buf, 0, sizeof(buf));buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;buf.memory = V4L2_MEMORY_MMAP;buf.index = n_buffer;/*check the information of the kernel cache requested*/if (-1 == ioctl(fd, VIDIOC_QUERYBUF, &buf)) {perror("Fail to ioctl : VIDIOC_QUERYBUF");system("sync");system("reboot");exit (EXIT_FAILURE);}usr_buf[n_buffer].length = buf.length;usr_buf[n_buffer].start = (char *) mmap(NULL, buf.length, PROT_READ | PROT_WRITE, MAP_SHARED, fd, buf.m.offset);if (MAP_FAILED == usr_buf[n_buffer].start) {perror("Fail to mmap");system("sync");system("reboot");exit (EXIT_FAILURE);}}return 0;
}int open_camera(void) {int fd;struct v4l2_input inp;fd = open(FILE_VIDEO, O_RDWR | O_NONBLOCK, 0);if (fd < 0) {fprintf(stderr, "%s open err \n", FILE_VIDEO);exit (EXIT_FAILURE);};inp.index = 0;if (-1 == ioctl(fd, VIDIOC_S_INPUT, &inp)) {system("sync");system("reboot");fprintf(stderr, "VIDIOC_S_INPUT \n");}return fd;
}int init_camera(int fd, int width, int height) {struct v4l2_capability cap; /* decive fuction, such as video input */struct v4l2_format tv_fmt; /* frame format */struct v4l2_fmtdesc fmtdesc; /* detail control value *///struct v4l2_control     ctrl;int ret;/*show all the support format*/memset(&fmtdesc, 0, sizeof(fmtdesc));fmtdesc.index = 0; /* the number to check */fmtdesc.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;/* check video decive driver capability */ret = ioctl(fd, VIDIOC_QUERYCAP, &cap);if (ret < 0) {fprintf(stderr, "fail to ioctl VIDEO_QUERYCAP \n");exit (EXIT_FAILURE);}/*judge wherher or not to be a video-get device*/if (!(cap.capabilities & V4L2_BUF_TYPE_VIDEO_CAPTURE)) {fprintf(stderr, "The Current device is not a video capture device \n");exit (EXIT_FAILURE);}/*judge whether or not to supply the form of video stream*/if (!(cap.capabilities & V4L2_CAP_STREAMING)) {printf("The Current device does not support streaming i/o\n");exit (EXIT_FAILURE);}printf("\ncamera driver name is : %s\n", cap.driver);printf("camera device name is : %s\n", cap.card);printf("camera bus information: %s\n", cap.bus_info);/*display the format device support*/printf("\n");while (ioctl(fd, VIDIOC_ENUM_FMT, &fmtdesc) != -1) {printf("support device %d.%s\n", fmtdesc.index + 1, fmtdesc.description);fmtdesc.index++;}printf("\n");/*set the form of camera capture data*/tv_fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE; /*v4l2_buf_typea,camera must use V4L2_BUF_TYPE_VIDEO_CAPTURE*/tv_fmt.fmt.pix.width = width;tv_fmt.fmt.pix.height = height;//tv_fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUV420;   /*V4L2_PIX_FMT_YYUV*/tv_fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YVU420; /*V4L2_PIX_FMT_YYUV*/tv_fmt.fmt.pix.field = V4L2_FIELD_ANY; /*V4L2_FIELD_NONE*/if (ioctl(fd, VIDIOC_S_FMT, &tv_fmt) < 0) {fprintf(stderr, "VIDIOC_S_FMT set err\n");exit(-1);close(fd);}init_mmap(fd);return 0;
}int start_capture(int fd) {unsigned int i;enum v4l2_buf_type type;/*place the kernel cache to a queue*/for (i = 0; i < n_buffer; i++) {struct v4l2_buffer buf;memset(&buf, 0, sizeof(buf));buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;buf.memory = V4L2_MEMORY_MMAP;buf.index = i;if (-1 == ioctl(fd, VIDIOC_QBUF, &buf)) {perror("Fail to ioctl 'VIDIOC_QBUF'");exit (EXIT_FAILURE);}}type = V4L2_BUF_TYPE_VIDEO_CAPTURE;if (-1 == ioctl(fd, VIDIOC_STREAMON, &type)) {printf("i=%d.\n", i);perror("VIDIOC_STREAMON");system("sync");system("reboot");close(fd);exit (EXIT_FAILURE);}return 0;
}int read_frame(int fd, unsigned char *outbuf, int *len) {struct v4l2_buffer buf;//unsigned int i;fd_set fds;struct timeval tv;int r;FD_ZERO(&fds);FD_SET(fd, &fds);/*Timeout*/tv.tv_sec = 2;tv.tv_usec = 0;r = select(fd + 1, &fds, NULL, NULL, &tv);if (-1 == r) {if (EINTR == errno) {printf("select received SIGINT \n");return 0;//perror("Fail to select");//exit(EXIT_FAILURE);}}if (0 == r) {fprintf(stderr, "select Timeout\n");exit(-1);}memset(&buf, 0, sizeof(buf));buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;buf.memory = V4L2_MEMORY_MMAP;if (-1 == ioctl(fd, VIDIOC_DQBUF, &buf)) {perror("Fail to ioctl 'VIDIOC_DQBUF'");exit (EXIT_FAILURE);}assert(buf.index < n_buffer);//unsigned char outbuf[1024*1024];  //process_image(usr_buf[buf.index].start, usr_buf[buf.index].length); memcpy(outbuf, usr_buf[buf.index].start, usr_buf[buf.index].length);*len = usr_buf[buf.index].length;//memmove(outbuf, usr_buf[buf.index].start, usr_buf[buf.index].length);tmp++;printf("index = %d \n", tmp);if (tmp == 30) {tmp = 0;FILE* fp = fopen("test.yuyv","w");fwrite(usr_buf[buf.index].start, 1, usr_buf[buf.index].length, fp);fclose(fp);}if (-1 == ioctl(fd, VIDIOC_QBUF, &buf)) {perror("Fail to ioctl 'VIDIOC_QBUF'");exit (EXIT_FAILURE);}return 1;
}void stop_capture(int fd) {enum v4l2_buf_type type;type = V4L2_BUF_TYPE_VIDEO_CAPTURE;if (-1 == ioctl(fd, VIDIOC_STREAMOFF, &type)) {perror("Fail to ioctl 'VIDIOC_STREAMOFF'");exit (EXIT_FAILURE);}
}void close_camera_device(int fd) {unsigned int i;for (i = 0; i < n_buffer; i++) {if (-1 == munmap(usr_buf[i].start, usr_buf[i].length)) {exit(-1);}}free(usr_buf);if (-1 == close(fd)) {perror("Fail to close fd");exit (EXIT_FAILURE);}
}
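
To sanity-check the capture path (a suggestion, assuming ffmpeg is available on a PC), the test.yuyv dump written above can be previewed with ffplay. ffplay has no separate YV12 pixel-format name, so with yuv420p the U and V planes are swapped and colours will look shifted, but the geometry should be correct:

    ffplay -f rawvideo -pixel_format yuv420p -video_size 800x600 test.yuyv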

main.cpp (capture application):

int main() {
    pthread_t t1, t2;

    states = SysStatesRead();
    if (states == 0) {
        system("aplay idla.wav");
    }
    int err = pthread_create(&t1, NULL, GpioProcess, NULL);
    if (err != 0) {
        printf("GpioProcess thread_create Failed :%s\n", strerror(err));
    }

    TP recTime_s = getTime();
    pModel = GtiCreateModel(recog_modelFile);
    TP recTime_e = getTime();
    printf("create model time diff %lld\n", getTimeDiff(recTime_s, recTime_e));

    err = pthread_create(&t2, NULL, VideoPorcess, NULL);
    if (err != 0) {
        printf("VideoPorcess thread_create Failed :%s\n", strerror(err));
    }

    int height = 600;
    int width = 800;
    int fd;
    int len = 0;
    unsigned char *cam_buf;
    int index = 0;
    int fps = 30;
    unsigned int tick_gap = 1000 / fps;
    uint32_t now = 0;
    uint32_t last_update = 0;

    cam_buf = (unsigned char *) malloc(1024 * 1024 * 3);
    memset(cam_buf, 0, 1024 * 1024 * 3);

    if (signal(SIGINT, sig_user) == SIG_ERR) {
        perror("catch SIGINT err");
    }

    fd = open_camera();
    if (fd > 0) {
        printf("Open Camera succ\n");
    }
    if (0 == init_camera(fd, width, height))
        printf("Init camera succ\n");
    usleep(100);
    start_capture(fd);
    printf("inited \n");

    runflag = 1;
    while (runflag) {
        last_update = GetTime();
        //printf("------------%ld \n", last_update);
        read_frame(fd, cam_buf, &len);
        now = GetTime();
        printf("++++++++++++%ld \n", now - last_update);
        index++;
        if (states == 1) {
            memcpy(Cam_buf, cam_buf, len);
            continue;
        }
        if (index == 14) {
            index = 0;
            //printf(">>>> d\n");
            memcpy(Cam_buf, cam_buf, len);
            //printf(">>>> sd\n");
        }
    }

    free(cam_buf);
    stop_capture(fd);
    close_camera_device(fd);
    GtiDestroyModel(pModel);
    return 0;
}
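
The camera delivers planar YVU420 (YV12) frames, while the modified UltraFace::detect expects a packed 320x240 BGR buffer with a stride of 960. A conversion sketch (nearest-neighbour resize plus integer BT.601 YUV-to-RGB; the helper name and coefficients are my own, under the assumption that the sensor really outputs the configured 800x600 YV12 frames):

    #include <stdint.h>

    // Convert one planar YVU420 (YV12) frame to a packed 320x240 BGR buffer,
    // the layout UltraFace::detect() above expects (stride 960 = 320 * 3).
    static void yv12_to_bgr_320x240(const uint8_t *src, int src_w, int src_h, uint8_t *dst_bgr)
    {
        const int dst_w = 320, dst_h = 240;
        const uint8_t *y_plane = src;
        const uint8_t *v_plane = src + src_w * src_h;                  // YV12 stores V before U
        const uint8_t *u_plane = v_plane + (src_w / 2) * (src_h / 2);

        for (int dy = 0; dy < dst_h; dy++) {
            int sy = dy * src_h / dst_h;                               // nearest-neighbour row
            for (int dx = 0; dx < dst_w; dx++) {
                int sx = dx * src_w / dst_w;                           // nearest-neighbour column
                int y = y_plane[sy * src_w + sx];
                int u = u_plane[(sy / 2) * (src_w / 2) + (sx / 2)] - 128;
                int v = v_plane[(sy / 2) * (src_w / 2) + (sx / 2)] - 128;

                int r = y + (359 * v) / 256;                           // ~1.402 * V
                int g = y - (88 * u + 183 * v) / 256;                  // ~0.344 * U + 0.714 * V
                int b = y + (454 * u) / 256;                           // ~1.772 * U

                uint8_t *p = dst_bgr + (dy * dst_w + dx) * 3;
                p[0] = (uint8_t)(b < 0 ? 0 : (b > 255 ? 255 : b));
                p[1] = (uint8_t)(g < 0 ? 0 : (g > 255 ? 255 : g));
                p[2] = (uint8_t)(r < 0 ? 0 : (r > 255 ? 255 : r));
            }
        }
    }

Called as yv12_to_bgr_320x240(cam_buf, 800, 600, bgr_buf) right after read_frame(), the resulting 230400-byte bgr_buf can be handed straight to ultraface.detect().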

With these pieces integrated, face detection now runs on the Lichee Pi (Allwinner V3s).
