Conference papers
From LRDE
{{Publication
| published = true
| date = 2022-07-22
| authors = Zhou Zhao, Zhenyu Lu
| title = Multi-purpose Tactile Perception Based on Deep Learning in a New Tendon-driven Optical Tactile Sensor
| booktitle = 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems
| address = Kyoto, Japan
| abstract = In this paper, we present a new tendon-connected multi-functional
optical tactile sensor, MechTac, for object perception
in the field of view (TacTip) and localization of touch points
in the blind area of vision (TacSide). In a multi-point touch task,
the information from the TacSide and the TacTip overlaps,
jointly affecting the distribution of the papillae pins on the TacTip.
Since the effects of the TacSide are much less pronounced than
those on the TacTip, a perceiving out-of-view neural network (O2VNet)
is created to separate the mixed information with unequal influence.
To reduce the dependence of the O2VNet on the grayscale information
of the image, we add a new binarized convolutional (BConv) layer
in front of the backbone of the O2VNet. The O2VNet not only achieves
real-time temporal sequence prediction (34 ms per image),
but also attains an average classification accuracy of 99.06%.
The experimental results show that the O2VNet maintains high classification accuracy
even under image contrast changes.
| lrdeprojects = Olena
| lrdekeywords = Image
| lrdenewsdate = 2022-07-22
| note = accepted
| type = inproceedings
| id = zhao.22.iros
| bibtex =
@InProceedings{zhao.22.iros,
  author    = {Zhou Zhao and Zhenyu Lu},
  title     = {Multi-purpose Tactile Perception Based on Deep Learning in a New Tendon-driven Optical Tactile Sensor},
  booktitle = {2022 IEEE/RSJ International Conference on Intelligent Robots and Systems},
  year      = 2022,
  address   = {Kyoto, Japan},
  month     = oct,
  abstract  = {In this paper, we present a new tendon-connected multi-functional
               optical tactile sensor, MechTac, for object perception
               in the field of view (TacTip) and localization of touch points
               in the blind area of vision (TacSide). In a multi-point touch task,
               the information from the TacSide and the TacTip overlaps,
               jointly affecting the distribution of the papillae pins on the TacTip.
               Since the effects of the TacSide are much less pronounced than
               those on the TacTip, a perceiving out-of-view neural network (O$^2$VNet)
               is created to separate the mixed information with unequal influence.
               To reduce the dependence of the O$^2$VNet on the grayscale information
               of the image, we add a new binarized convolutional (BConv) layer
               in front of the backbone of the O$^2$VNet. The O$^2$VNet not only achieves
               real-time temporal sequence prediction (34 ms per image),
               but also attains an average classification accuracy of 99.06\%.
               The experimental results show that the O$^2$VNet maintains high classification accuracy
               even under image contrast changes.},
  note      = {accepted}
}
}}
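
As an illustrative aside: the abstract describes a binarized convolutional (BConv) layer placed in front of the O2VNet backbone to reduce the network's dependence on grayscale intensity. The minimal PyTorch sketch below shows one plausible reading of such a layer (sign binarization with a straight-through estimator); it is an assumption for illustration, not the authors' implementation, and all class names and hyperparameters are hypothetical.

<syntaxhighlight lang="python">
# Illustrative sketch only: a generic binarized convolution placed in front
# of an image backbone. This is NOT the paper's code; the design (sign
# binarization with a straight-through estimator) is an assumption.
import torch
import torch.nn as nn


class BinarizeSTE(torch.autograd.Function):
    """Sign binarization; gradients pass through via a hard-tanh estimator."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Propagate gradients only where |x| <= 1 (straight-through estimator).
        return grad_output * (x.abs() <= 1).to(grad_output.dtype)


class BConv(nn.Module):
    """Binarized conv layer: subtracts the per-image mean and binarizes, so
    downstream layers see +/-1 features invariant to positive contrast scaling."""

    def __init__(self, in_ch=1, out_ch=16, k=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, k, padding=k // 2, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)

    def forward(self, x):
        x = BinarizeSTE.apply(x - x.mean(dim=(2, 3), keepdim=True))
        return self.bn(self.conv(x))


# Hypothetical usage: prepend the layer to any image backbone.
if __name__ == "__main__":
    layer = BConv(in_ch=1, out_ch=16)
    img = torch.rand(1, 1, 128, 128)  # fake grayscale tactile image
    print(layer(img).shape)           # torch.Size([1, 16, 128, 128])
</syntaxhighlight>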