# Multi-purpose Tactile Perception Based on Deep Learning in a New Tendon-driven Optical Tactile Sensor

### From LRDE


## Abstract

In this paper, we create a new tendon-connected multi-functional optical tactile sensor, MechTac, for object perception in the field of view (TacTip) and localization of touching points in the blind area of vision (TacSide). In a multi-point touch task, the information from the TacSide and the TacTip overlaps and jointly affects the distribution of the papillae pins on the TacTip. Since the effects of the TacSide are much less pronounced than those on the TacTip, a perceiving-out-of-view neural network (O$^2$VNet) is created to separate the mixed information with unequal influence. To reduce the dependence of the O$^2$VNet on the grayscale information of the image, we create a new binarized convolutional (BConv) layer in front of the backbone of the O$^2$VNet. The O$^2$VNet not only achieves real-time temporal sequence prediction (34 ms per image), but also attains an average classification accuracy of 99.06%. The experimental results show that the O$^2$VNet maintains high classification accuracy even under image contrast changes.
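The abstract's claim that a binarized convolutional layer reduces dependence on grayscale information can be illustrated with a minimal sketch. This is not the paper's learned BConv layer; it is a hypothetical NumPy illustration assuming a simple mean-threshold binarization before a plain 2D convolution, which makes the output invariant to linear contrast changes of the input image:

```python
import numpy as np

def binarize(img: np.ndarray) -> np.ndarray:
    """Threshold the image at its mean intensity (illustrative choice)."""
    return (img > img.mean()).astype(np.float32)

def conv2d(img: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Naive valid-mode 2D convolution (no flipping, i.e. cross-correlation)."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1), dtype=np.float32)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i : i + kh, j : j + kw] * kernel)
    return out

def bconv(img: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Binarize first, then convolve: the grayscale levels are discarded."""
    return conv2d(binarize(img), kernel)

# Contrast invariance: scaling intensities by a > 0 and shifting by b
# rescales the mean threshold identically, so the binary mask — and hence
# the convolution output — is unchanged.
rng = np.random.default_rng(0)
img = rng.uniform(size=(16, 16)).astype(np.float32)
kernel = rng.normal(size=(3, 3)).astype(np.float32)
assert np.allclose(bconv(img, kernel), bconv(2.0 * img + 0.1, kernel))
```

Under this (assumed) thresholding scheme, any positive linear remapping of pixel intensities leaves the binary mask unchanged, which mirrors the robustness to image contrast changes reported in the abstract.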

## Bibtex (lrde.bib)

```
@InProceedings{zhao.22.iros,
  author    = {Zhou Zhao and Zhenyu Lu},
  title     = {Multi-purpose Tactile Perception Based on Deep Learning in
               a New Tendon-driven Optical Tactile Sensor},
  booktitle = {2022 IEEE/RSJ International Conference on Intelligent
               Robots and Systems},
  year      = 2022,
  month     = oct,
  abstract  = {In this paper, we create a new tendon-connected
               multi-functional optical tactile sensor, MechTac, for
               object perception in field of view (TacTip) and location of
               touching points in the blind area of vision (TacSide). In a
               multi-point touch task, the information of the TacSide and
               the TacTip are overlapped to commonly affect the
               distribution of papillae pins on the TacTip. Since the
               effects of TacSide are much less obvious to those affected
               on the TacTip, a perceiving out-of-view neural network
               (O\$^2\$VNet) is created to separate the mixed information
               with unequal affection. To reduce the dependence of the
               O\$^2\$VNet on the grayscale information of the image, we
               create one new binarized convolutional (BConv) layer in
               front of the backbone of the O\$^2\$VNet. The O\$^2\$VNet can
               not only achieve real-time temporal sequence prediction (34
               ms per image), but also attain the average classification
               accuracy of 99.06\%. The experimental results show that the
               O\$^2\$VNet can hold high classification accuracy even facing
               the image contrast changes.},
  note      = {accepted}
}
```