Difference between revisions of "Conference papers"

From LRDE

The edit adds the following publication entry:

{{Publication
| published = true
| date = 2022-07-22
| authors = Zhou Zhao, Zhenyu Lu
| title = Multi-purpose Tactile Perception Based on Deep Learning in a New Tendon-driven Optical Tactile Sensor
| booktitle = 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems
| address = Kyoto, Japan
| abstract = In this paper, we create a new tendon-connected multi-functional optical tactile sensor, MechTac, for object perception in field of view (TacTip) and location of touching points in the blind area of vision (TacSide). In a multi-point touch task, the information of the TacSide and the TacTip are overlapped to commonly affect the distribution of papillae pins on the TacTip. Since the effects of TacSide are much less obvious to those affected on the TacTip, a perceiving out-of-view neural network (O2VNet) is created to separate the mixed information with unequal affection. To reduce the dependence of the O2VNet on the grayscale information of the image, we create one new binarized convolutional (BConv) layer in front of the backbone of the O2VNet. The O2VNet can not only achieve real-time temporal sequence prediction (34 ms per image), but also attain the average classification accuracy of 99.06%. The experimental results show that the O2VNet can hold high classification accuracy even facing the image contrast changes.
| lrdeprojects = Olena
| lrdekeywords = Image
| lrdenewsdate = 2022-07-22
| note = accepted
| type = inproceedings
| id = zhao.22.iros
| bibtex =
@InProceedings{zhao.22.iros,
  author    = {Zhou Zhao and Zhenyu Lu},
  title     = {Multi-purpose Tactile Perception Based on Deep Learning in a
               New Tendon-driven Optical Tactile Sensor},
  booktitle = {2022 IEEE/RSJ International Conference on Intelligent Robots
               and Systems},
  year      = 2022,
  address   = {Kyoto, Japan},
  month     = oct,
  abstract  = {In this paper, we create a new tendon-connected
               multi-functional optical tactile sensor, MechTac, for object
               perception in field of view (TacTip) and location of touching
               points in the blind area of vision (TacSide). In a multi-point
               touch task, the information of the TacSide and the TacTip are
               overlapped to commonly affect the distribution of papillae
               pins on the TacTip. Since the effects of TacSide are much less
               obvious to those affected on the TacTip, a perceiving
               out-of-view neural network (O$^2$VNet) is created to separate
               the mixed information with unequal affection. To reduce the
               dependence of the O$^2$VNet on the grayscale information of
               the image, we create one new binarized convolutional (BConv)
               layer in front of the backbone of the O$^2$VNet. The O$^2$VNet
               can not only achieve real-time temporal sequence prediction
               (34 ms per image), but also attain the average classification
               accuracy of 99.06\%. The experimental results show that the
               O$^2$VNet can hold high classification accuracy even facing
               the image contrast changes.},
  note      = {accepted}
}
}}

The page's publication listing query is unchanged:

{{#ask: [[Category:Publications]] [[Publication type::inproceedings]]
| ?Has bibtex id=Bibtex id
| ?Has author=Authors
| ?Has title=Title
| ?Published in
| ?News date#MEDIAWIKI=Date
| format = template
| template = PublicationRow
| introtemplate = PublicationRowIntro
| outrotemplate = PublicationRowOutro
| sort = News date
| order = descending
| sep =
}}
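The {{#ask: ...}} block above is a Semantic MediaWiki query; the same selection can also be made programmatically through SMW's "ask" API module. The sketch below is a minimal illustration, assuming the wiki exposes the standard api.php endpoint (the URL is an assumption, not given on this page); the printout names mirror the properties used in the wiki source.

# Minimal sketch of running the page's listing query against the Semantic
# MediaWiki "ask" API. The endpoint URL is assumed, not confirmed by this page.
import requests

API = "https://www.lrde.epita.fr/wiki/api.php"  # assumed endpoint

query = (
    "[[Category:Publications]] [[Publication type::inproceedings]]"
    "|?Has bibtex id|?Has author|?Has title|?Published in|?News date"
    "|sort=News date|order=descending"
)

resp = requests.get(API, params={"action": "ask", "query": query, "format": "json"})
resp.raise_for_status()

# SMW returns one entry per matching page, keyed by page name.
for page, data in resp.json()["query"]["results"].items():
    titles = data["printouts"].get("Has title", [])
    print(page, "->", titles[0] if titles else "(no title)")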

Revision as of 16:34, 22 July 2022

Abstract

In this paper, we create a new tendon-connected multi-functional optical tactile sensor, MechTac, for object perception in field of view (TacTip) and location of touching points in the blind area of vision (TacSide). In a multi-point touch task, the information of the TacSide and the TacTip are overlapped to commonly affect the distribution of papillae pins on the TacTip. Since the effects of TacSide are much less obvious to those affected on the TacTip, a perceiving out-of-view neural network (O2VNet) is created to separate the mixed information with unequal affection. To reduce the dependence of the O2VNet on the grayscale information of the image, we create one new binarized convolutional (BConv) layer in front of the backbone of the O2VNet. The O2VNet can not only achieve real-time temporal sequence prediction (34 ms per image), but also attain the average classification accuracy of 99.06%. The experimental results show that the O2VNet can hold high classification accuracy even facing the image contrast changes.
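The abstract's key architectural idea is the binarized convolutional (BConv) layer placed in front of the backbone to reduce the network's dependence on grayscale values. The paper's actual BConv design is not reproduced on this page; the sketch below shows one common way such a layer is built (sign binarization with a straight-through estimator, thresholding the input at its per-image mean so the output is invariant to global brightness and contrast shifts). All class names and hyperparameters are illustrative assumptions, not the authors' code.

# A minimal PyTorch sketch of a binarized convolutional front layer,
# under the assumptions stated above.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinarizeSTE(torch.autograd.Function):
    """Sign-binarize on the forward pass; straight-through gradient on backward."""
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Standard straight-through estimator: pass gradients through,
        # clipped where |x| > 1.
        return grad_output * (x.abs() <= 1).to(grad_output.dtype)

class BConv2d(nn.Module):
    """Convolution over sign-binarized inputs and weights (values in {-1, +1})."""
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, padding=1):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_ch, in_ch, kernel_size, kernel_size))
        nn.init.kaiming_normal_(self.weight)
        self.stride, self.padding = stride, padding

    def forward(self, x):
        # Threshold the grayscale input at its per-image mean: the binarized
        # map is unchanged by global brightness or (positive) contrast shifts,
        # which matches the stated motivation for a BConv front layer.
        xb = BinarizeSTE.apply(x - x.mean(dim=(2, 3), keepdim=True))
        wb = BinarizeSTE.apply(self.weight)
        return F.conv2d(xb, wb, stride=self.stride, padding=self.padding)

if __name__ == "__main__":
    frame = torch.rand(1, 1, 128, 128)    # one grayscale tactile image
    out = BConv2d(in_ch=1, out_ch=16)(frame)
    print(out.shape)                      # torch.Size([1, 16, 128, 128])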


Bibtex (lrde.bib)

@InProceedings{zhao.22.iros,
  author    = {Zhou Zhao and Zhenyu Lu},
  title     = {Multi-purpose Tactile Perception Based on Deep Learning in a
               New Tendon-driven Optical Tactile Sensor},
  booktitle = {2022 IEEE/RSJ International Conference on Intelligent Robots
               and Systems},
  year      = 2022,
  address   = {Kyoto, Japan},
  month     = oct,
  abstract  = {In this paper, we create a new tendon-connected
               multi-functional optical tactile sensor, MechTac, for object
               perception in field of view (TacTip) and location of touching
               points in the blind area of vision (TacSide). In a multi-point
               touch task, the information of the TacSide and the TacTip are
               overlapped to commonly affect the distribution of papillae
               pins on the TacTip. Since the effects of TacSide are much less
               obvious to those affected on the TacTip, a perceiving
               out-of-view neural network (O$^2$VNet) is created to separate
               the mixed information with unequal affection. To reduce the
               dependence of the O$^2$VNet on the grayscale information of
               the image, we create one new binarized convolutional (BConv)
               layer in front of the backbone of the O$^2$VNet. The O$^2$VNet
               can not only achieve real-time temporal sequence prediction
               (34 ms per image), but also attain the average classification
               accuracy of 99.06\%. The experimental results show that the
               O$^2$VNet can hold high classification accuracy even facing
               the image contrast changes.},
  note      = {accepted}
}