Max-Tree Computation on GPUs
From LRDE
- Authors
- Nicolas Blin, Edwin Carlinet, Florian Lemaitre, Lionel Lacassagne, Thierry Géraud
- Journal
- IEEE Transactions on Parallel and Distributed Systems
- Type
- article
- Projects
- Olena
- Keywords
- Image
- Date
- 2022-03-09
Abstract
In Mathematical Morphology, the max-tree is a region-based representation that encodes the inclusion relationship of the threshold sets of an image. This tree has been proven useful in numerous image processing applications. For the last decade, works have been led to improve the building time of this structure, mixing algorithmic optimizations, parallel and distributed computing. Nevertheless, there is still no algorithm that takes benefit from the computing power of the massively parallel architectures. In this work, we propose the first GPU algorithm to compute the max-tree. The proposed approach leads to significant speed-ups, and is up to one order of magnitude faster than the current State-of-the-Art parallel CPU algorithms. This work paves the way for a max-tree integration in image processing GPU pipelines and real-time image processing based on Mathematical Morphology. It is also a foundation for porting other image representations from Mathematical Morphology on GPUs.
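The abstract's notion of a max-tree — a tree whose nodes are the connected components of the image's upper threshold sets, ordered by inclusion — can be illustrated with a minimal sequential sketch. The snippet below is not the paper's GPU algorithm; it is a classic union-find construction (in the style of Berger et al.'s component-tree algorithm) on a 1-D signal, written here only to make the data structure concrete. The function name `maxtree_1d` and the parent-array encoding are illustrative choices, not taken from the paper.

```python
def find(zpar, x):
    # Union-find root lookup with path halving.
    while zpar[x] != x:
        zpar[x] = zpar[zpar[x]]
        x = zpar[x]
    return x

def maxtree_1d(values):
    """Max-tree of a 1-D signal via union-find (illustrative sketch).

    Returns a parent array encoding the tree: each node is represented
    by a canonical pixel, parent[p] points toward the enclosing
    component at a lower level, and parent[root] == root.
    """
    n = len(values)
    # Process pixels from the highest gray level down to the lowest,
    # so each component is flooded before the component that contains it.
    order = sorted(range(n), key=lambda i: (values[i], i))
    parent = [0] * n
    zpar = {}
    for p in reversed(order):
        parent[p] = p
        zpar[p] = p
        for q in (p - 1, p + 1):          # 1-D adjacency
            if 0 <= q < n and q in zpar:  # neighbor already flooded
                r = find(zpar, q)
                if r != p:
                    parent[r] = p         # attach sub-component to p
                    zpar[r] = p
    # Canonicalization: every pixel of a flat zone points to the same
    # canonical pixel, processed root-first (increasing level).
    for p in order:
        q = parent[p]
        if values[parent[q]] == values[q]:
            parent[p] = parent[q]
    return parent
```

For the signal `[0, 2, 1, 2]`, the threshold set at level 1 is the single component `{1, 2, 3}`, and at level 2 it splits into `{1}` and `{3}`; the returned parent array reflects exactly this inclusion hierarchy. The GPU contribution of the paper lies in parallelizing this inherently sequential flooding/merging process, which the sketch above does not attempt.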
Documents
Bibtex (lrde.bib)
@Article{blin.22.tpds,
  author   = {Nicolas Blin and Edwin Carlinet and Florian Lemaitre and
              Lionel Lacassagne and Thierry G\'eraud},
  title    = {Max-Tree Computation on {GPU}s},
  journal  = {IEEE Transactions on Parallel and Distributed Systems},
  month    = mar,
  year     = {2022},
  volume   = {33},
  number   = {12},
  pages    = {3520--3531},
  abstract = {In Mathematical Morphology, the max-tree is a region-based
              representation that encodes the inclusion relationship of the
              threshold sets of an image. This tree has been proven useful in
              numerous image processing applications. For the last decade,
              works have been led to improve the building time of this
              structure; mixing algorithmic optimizations, parallel and
              distributed computing. Nevertheless, there is still no algorithm
              that takes benefit from the computing power of the massively
              parallel architectures. In this work, we propose the first GPU
              algorithm to compute the max-tree. The proposed approach leads
              to significant speed-ups, and is up to one order of magnitude
              faster than the current State-of-the-Art parallel CPU
              algorithms. This work paves the way for a max-tree integration
              in image processing GPU pipelines and real-time image processing
              based on Mathematical Morphology. It is also a foundation for
              porting other image representations from Mathematical Morphology
              on GPUs.},
  doi      = {10.1109/TPDS.2022.3158488}
}