Dex-Net 3.0 Paper Translation

I. Introduction

1. Research objective: investigate the application of deep learning to robotic suction grasping.
2. Research significance: address the low robustness of suction grasps on objects with complex geometry.
3. Approach:
(1) Design a physical model of the suction contact.
(2) Construct the Dex-Net 3.0 dataset.
(3) Train a GQ-CNN.

II. Compliant Suction Contact Model

(A) Problem Statement
1. Objective: given a point cloud from a depth camera, find the suction grasp with the highest robustness.
2. Assumptions: to simplify the model, we assume:
  (1) The system is quasi-static (inertial effects during suction-cup motion are negligible).
  (2) Objects are rigid and non-porous.
  (3) Each object rests separately on a flat work surface in a stable resting pose.
  (4) A single depth camera points orthogonally at the work surface, with known position and orientation relative to the robot.
  (5) The end effector is a vacuum gripper of known geometry with a single suction cup made of linearly elastic material.
3. We therefore define:
  (1) A parameterization of a suction grasp: u = (p, v), where p is a 3D target point and v is the approach direction (parameterized by two angles).
  (2) The latent state that affects grasp success (e.g., object material and friction): x.
  (3) A model of the grasp-success distribution: p(S | u, x), where S is a binary grasp-quality variable; S = 1 means the grasp succeeds and S = 0 means it fails.
  (4) For a given point cloud y, grasp robustness is the probability of grasp success given the observation: Q(u, y) = P(S = 1 | u, y).
4. Our goal is, for a given point cloud, to find the suction grasp that maximizes robustness:

π*(y) = argmax_u Q(u, y)

Because of the latent state x, the robustness function Q cannot be computed directly. We can, however, train a GQ-CNN Q_θ on point clouds, suction grasps, and success labels from existing data, approximating π* by minimizing the cross-entropy loss L:

θ* = argmin_θ Σ_i L(S_i, Q_θ(u_i, y_i))
We then evaluate grasp outcomes building on prior work.
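As an illustration, here is a minimal Python sketch of the resulting policy (our own code, not the paper's; `quality_fn` stands in for a trained GQ-CNN): it scores candidate suction grasps against a depth image and returns the argmax.

```python
import numpy as np

def plan_suction_grasp(depth_image, candidates, quality_fn):
    """Score each candidate suction grasp u = (p, v) with a learned
    quality function Q_theta(u, y) and return the argmax, mirroring pi*.

    depth_image: the observed point cloud / depth image y.
    candidates:  a list of candidate grasps u.
    quality_fn:  a trained callable returning P(S = 1 | u, y) in [0, 1].
    """
    scores = np.array([quality_fn(depth_image, u) for u in candidates])
    best = int(np.argmax(scores))
    return candidates[best], float(scores[best])
```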

(B) Seal Formation
1. To evaluate grasp outcomes, the paper builds a quasi-static spring model and uses it to evaluate two criteria:
1) Whether a seal forms between the perimeter of the suction cup and the object surface.
2) Given a formed seal, whether the suction cup can resist the external wrenches on the object due to gravity and disturbances.
2. The model replaces a complex elasticity analysis with a spring system connecting the vertices {v1, v2, v3, ..., vn, a}; the springs fall into three classes (see the sketch after this list):
Perimeter (structural) springs: connect adjacent base vertices: vi ~ vi+1
Cone (structural) springs: connect each base vertex to the cone apex: vi ~ a
Flexion springs: connect base vertices two apart: vi ~ vi+2
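A minimal sketch of this spring topology (our own illustration; vertex indices only, no physics):

```python
def suction_cup_springs(n):
    """Index pairs for the three spring classes of a suction-cup model
    with perimeter vertices v_0..v_{n-1} and cone apex a (index n)."""
    apex = n
    perimeter = [(i, (i + 1) % n) for i in range(n)]  # v_i ~ v_{i+1}
    cone = [(i, apex) for i in range(n)]              # v_i ~ a
    flexion = [(i, (i + 2) % n) for i in range(n)]    # v_i ~ v_{i+2}
    return perimeter, cone, flexion
```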
3. On this basis, the paper gives criteria for whether a seal forms:
  (1) The cone faces of C must not collide with M during the approach or contact configuration.
  (2) The surface of M has no gaps inside the ring of contact traced by C's perimeter springs.
  (3) The energy required in each spring to maintain C's contact configuration is below a threshold (see the sketch after this list).
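A sketch of criterion (3), under our own simplifying assumption that each spring stores a quadratic energy in its strain (the paper's exact energy model may differ):

```python
import numpy as np

def seal_energy_below_threshold(rest_lengths, deformed_lengths,
                                stiffness, threshold):
    """Reject the seal if any spring needs more than `threshold` energy
    to hold the contact configuration (assumed quadratic strain energy)."""
    rest = np.asarray(rest_lengths, dtype=float)
    deformed = np.asarray(deformed_lengths, dtype=float)
    strain = (deformed - rest) / rest
    energy = 0.5 * stiffness * strain ** 2
    return bool(np.all(energy < threshold))
```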

(C) Wrench Space Analysis
Disturbing wrenches: in the grasped state, the loads on the object are represented by a contact model with a set of basis wrenches.
Suction ring contact model: the grasp wrench defined by this model has the following components:
  (1) Actuated normal force (fz): the force the suction-cup material applies to the object along the z axis.
  (2) Vacuum force (V): the magnitude of the constant force due to the air-pressure differential that holds the object to the suction cup.
  (3) Frictional force (ff = (fx, fy)): the force in the contact tangent plane due to the normal force between the suction cup and the object, fN = fz + V.
  (4) Torsional friction (τz): the torque resulting from frictional forces in the ring of contact.
  (5) Elastic restoring torque (τe = (τx, τy)): the torque about axes in the contact tangent plane resulting from elastic restoring forces in the suction cup pushing on the object along the boundary of the contact ring.
It follows that the contact wrench F must satisfy a set of linear constraints, which are used to compute wrench resistance:

Friction:           √3 |fx| ≤ μ fN,  √3 |fy| ≤ μ fN
Torsional friction: √3 |τz| ≤ r μ fN
Elastic restoring:  √2 |τx| ≤ π r κ,  √2 |τy| ≤ π r κ

where μ is the friction coefficient, r is the radius of the ring of contact, and κ is a material-dependent constant.
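A sketch that checks these constraints for a sampled contact wrench (our transcription of the constraint set above, not the paper's code):

```python
import numpy as np

def wrench_in_suction_cone(f, tau, V, mu, r, kappa):
    """Return True if the contact wrench (f, tau) satisfies the linear
    suction-ring constraints, with normal force fN = fz + V."""
    fx, fy, fz = f
    tx, ty, tz = tau
    f_n = fz + V  # the vacuum force adds to the actuated normal force
    return (np.sqrt(3) * abs(fx) <= mu * f_n and
            np.sqrt(3) * abs(fy) <= mu * f_n and
            np.sqrt(3) * abs(tz) <= r * mu * f_n and
            np.sqrt(2) * abs(tx) <= np.pi * r * kappa and
            np.sqrt(2) * abs(ty) <= np.pi * r * kappa)
```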

(D) Robust Wrench Resistance
We evaluate the robustness of a candidate suction grasp by evaluating seal formation and wrench resistance over distributions on object pose, grasp pose, and disturbing wrenches.
Definition: the robust wrench resistance metric for u and x is:

λ(u, x) = P(W = 1 | u, x)

where W = 1 if the grasp forms a seal and resists the disturbing wrench under perturbations in object pose, grasp pose, friction, and the disturbing wrench.

In practice, we estimate robust wrench resistance by drawing M samples, evaluating wrench resistance for each, and taking the sample mean.
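A Monte Carlo sketch of this estimator (the helper callables are ours and stand in for the perturbation and wrench-resistance models):

```python
def robust_wrench_resistance(u, x, sample_perturbation, resists_wrench, M=100):
    """Estimate lambda(u, x) as the fraction of M perturbed samples in
    which the grasp forms a seal and resists the sampled disturbing wrench."""
    successes = 0
    for _ in range(M):
        # Perturb grasp pose, object state, and the disturbing wrench.
        u_p, x_p, w_p = sample_perturbation(u, x)
        successes += int(resists_wrench(u_p, x_p, w_p))
    return successes / M
```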

III. The Dex-Net 3.0 Dataset

1. To learn to predict grasp robustness from noisy point clouds, we generate the Dex-Net 3.0 training dataset, containing point clouds, suction grasps, and grasp-success labels, by sampling tuples (Si, ui, yi) from the joint distribution p(S, u, x, y), which is composed of the following distributions:
  • States p(x): a prior over the possible objects, object poses, and camera poses the robot will encounter.
  • Grasp candidates p(u | x): a prior constraining grasp candidates to target points on the object surface.
  • Grasp success p(S | u, x): a stochastic model of wrench resistance for the gravity wrench.
  • Observations p(y | x): a sensor noise model.
2. To sample from the model (see the sketch after this list):
  (1) We first select an object at random from a database of 3D CAD models;
  (2) then sample the latent state, such as object pose and friction coefficient;
  (3) evaluate robust wrench resistance (ρ) and convert it to a binary label S by thresholding at 0.2;
  (4) sample a point cloud of the scene using rendering and an image noise model, and associate the label S with a pixel location in the image via projection.
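The four steps map onto a simple generation loop; in this sketch all helper callables are placeholders for the simulation pipeline, not the paper's actual code:

```python
import random

def generate_dexnet3_dataset(cad_models, sample_state, sample_grasp,
                             wrench_resistance, render, project,
                             num_samples, threshold=0.2):
    """Sample (S_i, u_i, y_i) tuples following steps (1)-(4) above."""
    dataset = []
    for _ in range(num_samples):
        obj = random.choice(cad_models)           # (1) random CAD model
        x = sample_state(obj)                     # (2) pose, friction, camera
        u = sample_grasp(x)                       # candidate grasp on surface
        S = int(wrench_resistance(u, x) > threshold)  # (3) binarize at 0.2
        y = render(x)                             # (4) noisy rendered depth image
        dataset.append((S, u, y, project(u, x)))  # label tied to a pixel
    return dataset
```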

IV. Deep Robust Suction Grasping Policy

A GQ-CNN is trained on the Dex-Net 3.0 dataset. The architecture is similar to that of Dex-Net 2.0, with two differences (see the sketch below):
  (1) The pose input is modified to include the angle between the approach direction and the table normal.
  (2) The pc1 layer is widened from 16 to 64 units.
The trained model reaches 93.5% classification accuracy on a held-out validation set.
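A PyTorch sketch of the modified pose stream (everything except the widened pc1 layer is our assumption, including the pose dimensionality):

```python
import torch
import torch.nn as nn

def approach_angle(approach_axis, table_normal):
    """The pose feature added in Dex-Net 3.0: the angle between the grasp
    approach direction and the table normal (both unit 3-vectors)."""
    cos = torch.clamp(torch.dot(approach_axis, table_normal), -1.0, 1.0)
    return torch.acos(cos)

class PoseStream(nn.Module):
    """Sketch of the GQ-CNN pose stream with pc1 widened from 16 to 64 units."""
    def __init__(self, pose_dim=2):  # e.g., grasp depth + approach angle
        super().__init__()
        self.pc1 = nn.Sequential(nn.Linear(pose_dim, 64), nn.ReLU())

    def forward(self, pose):
        return self.pc1(pose)
```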

