Making a dataset with Kinect v2 on Ubuntu

1. Download KinectV2_dataset_make

git clone https://github.com/MRwangmaomao/KinectV2_dataset_make.git

After downloading, place the package under catkin_ws/src.
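For example, if the workspace already exists, the repository can be cloned straight into its src folder. This is a minimal sketch that assumes the workspace is at ~/catkin_ws; adjust the path if yours differs:

cd ~/catkin_ws/src
git clone https://github.com/MRwangmaomao/KinectV2_dataset_make.git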
2. Modify the save path
In get_image.cpp, which saves the depth and RGB images, modify the path that the files are saved to:
string save_path = "/home/xxxx/kinectdata"; // change this to your own path

This is the folder that stores the dataset. When creating the kinectdata folder, also create two subfolders named depth and rgb inside it; only then will the image files be saved into those two folders. Otherwise only the two txt files are generated and no image files are written.
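For example, the required folder layout can be created in advance with a single command (using the save_path shown above; substitute your own path):

mkdir -p /home/xxxx/kinectdata/depth /home/xxxx/kinectdata/rgb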
Then save the file and build the workspace:

cd catkin_ws
catkin_make
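After the build finishes, the workspace overlay usually needs to be sourced in each new terminal so that rosrun can find the dataset_make package (run from inside catkin_ws):

source devel/setup.bash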

3. Run

Launch kinect2_bridge and, in a second terminal, start the image-saving node:

roslaunch kinect2_bridge kinect2_bridge.launch
rosrun dataset_make get_image_node
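Before starting the recorder node, it can help to confirm that the bridge is actually publishing; for example (the exact topic names depend on the kinect2_bridge configuration):

rostopic list | grep kinect2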

Generate the dataset

Run the following in the dataset folder:

python associate.py rgb.txt depth.txt >associate.txt

Note: first copy associate.py from the downloaded source code into the dataset folder, so that it sits in the same directory as rgb.txt and depth.txt.
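Each line of associate.txt pairs one rgb.txt entry with the closest depth.txt entry, in the form "rgb_timestamp rgb_file depth_timestamp depth_file". The line below is purely illustrative; the actual file names depend on how get_image.cpp names the saved images:

1611111111.123456 rgb/1611111111.123456.png 1611111111.130210 depth/1611111111.130210.png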

The associate.py code is as follows:

#!/usr/bin/python
# -*- coding: utf-8 -*-
# Align the RGB and depth image timestamps,
# then align the result with the groundtruth camera trajectory.
# Usage: python2 associate.py rgb.txt depth.txt > associate.txt
#        python2 associate.py associate.txt groundtruth.txt > associate_with_groundtruth.txt
#
# Software License Agreement (BSD License)
#
# Copyright (c) 2013, Juergen Sturm, TUM
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#
#  * Redistributions of source code must retain the above copyright
#    notice, this list of conditions and the following disclaimer.
#  * Redistributions in binary form must reproduce the above
#    copyright notice, this list of conditions and the following
#    disclaimer in the documentation and/or other materials provided
#    with the distribution.
#  * Neither the name of TUM nor the names of its
#    contributors may be used to endorse or promote products derived
#    from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
# COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
#
# Requirements:
# sudo apt-get install python-argparse

"""
The Kinect provides the color and depth images in an un-synchronized way.
This means that the set of time stamps from the color images do not intersect with those of the depth images.
Therefore, we need some way of associating color images to depth images.

For this purpose, you can use the ''associate.py'' script. It reads the time stamps from the rgb.txt file
and the depth.txt file, and joins them by finding the best matches.
"""

import argparse
import sys
import os
import numpy


# Read a trajectory file
def read_file_list(filename):
    """
    Reads a trajectory from a text file.

    File format:
    The file format is "stamp d1 d2 d3 ...", where stamp denotes the time stamp (to be matched)
    and "d1 d2 d3.." is arbitrary data (e.g., a 3D position and 3D orientation) associated to this timestamp.

    Input:
    filename -- File name

    Output:
    dict -- dictionary of (stamp,data) tuples
    """
    file = open(filename)
    data = file.read()
    lines = data.replace(",", " ").replace("\t", " ").split("\n")
    list = [[v.strip() for v in line.split(" ") if v.strip() != ""] for line in lines if len(line) > 0 and line[0] != "#"]
    list = [(float(l[0]), l[1:]) for l in list if len(l) > 1]
    return dict(list)


def associate(first_list, second_list, offset, max_difference):
    """
    Associate two dictionaries of (stamp,data). As the time stamps never match exactly, we aim
    to find the closest match for every input tuple.

    Input:
    first_list -- first dictionary of (stamp,data) tuples
    second_list -- second dictionary of (stamp,data) tuples
    offset -- time offset between both dictionaries (e.g., to model the delay between the sensors)
    max_difference -- search radius for candidate generation

    Output:
    matches -- list of matched tuples ((stamp1,data1),(stamp2,data2))
    """
    # list() so that .remove() below works under both Python 2 and Python 3
    first_keys = list(first_list.keys())
    second_keys = list(second_list.keys())
    potential_matches = [(abs(a - (b + offset)), a, b)
                         for a in first_keys
                         for b in second_keys
                         if abs(a - (b + offset)) < max_difference]
    potential_matches.sort()
    matches = []
    for diff, a, b in potential_matches:
        if a in first_keys and b in second_keys:
            first_keys.remove(a)
            second_keys.remove(b)
            matches.append((a, b))

    matches.sort()
    return matches


if __name__ == '__main__':

    # parse command line
    parser = argparse.ArgumentParser(description='''
    This script takes two data files with timestamps and associates them''')
    parser.add_argument('first_file', help='first text file (format: timestamp data)')
    parser.add_argument('second_file', help='second text file (format: timestamp data)')
    parser.add_argument('--first_only', help='only output associated lines from first file', action='store_true')
    parser.add_argument('--offset', help='time offset added to the timestamps of the second file (default: 0.0)', default=0.0)
    parser.add_argument('--max_difference', help='maximally allowed time difference for matching entries (default: 0.02)', default=0.02)
    args = parser.parse_args()

    # read the two files
    first_list = read_file_list(args.first_file)
    second_list = read_file_list(args.second_file)

    matches = associate(first_list, second_list, float(args.offset), float(args.max_difference))

    if args.first_only:
        for a, b in matches:
            print("%f %s" % (a, " ".join(first_list[a])))
    else:
        for a, b in matches:
            print("%f %s %f %s" % (a, " ".join(first_list[a]), b - float(args.offset), " ".join(second_list[b])))
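The argparse section above also defines optional flags, so the matching tolerance and time offset can be set explicitly if needed; for example (the values shown are the script's own defaults):

python associate.py rgb.txt depth.txt --offset 0.0 --max_difference 0.02 > associate.txt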
