Open-access research publication repository

A Generalized Graph Reduction Framework for Interactive Segmentation of Large Images [r-libre/3120]

Gueziri, Houssem-Eddine; McGuffin, Michael J. and Laporte, Catherine (2016). A Generalized Graph Reduction Framework for Interactive Segmentation of Large Images. Computer Vision and Image Understanding, 150, 44-57. https://doi.org/10.1016/j.cviu.2016.05.009

File(s) associated with this document:
PDF - A-Generalized-Graph-Reduction-Framework-for-Interactive-Segmentation-of-Large-Images.pdf
License: Creative Commons CC BY.
 
Document category: Journal articles
Peer-reviewed: Yes
Publication stage: Published
Abstract: The speed of graph-based segmentation approaches, such as random walker (RW) and graph cut (GC), depends strongly on image size. For high-resolution images, the time required to compute a segmentation based on user input renders interaction tedious. We propose a novel method, using an approximate contour sketched by the user, to reduce the graph before passing it on to a segmentation algorithm such as RW or GC. This enables a significantly faster feedback loop. The user first draws a rough contour of the object to segment. Then, the pixels of the image are partitioned into “layers” (corresponding to different scales) based on their distance from the contour. The thickness of these layers increases with distance to the contour according to a Fibonacci sequence. An initial segmentation result is rapidly obtained after automatically generating foreground and background labels according to a specifically selected layer; all vertices beyond this layer are eliminated, restricting the segmentation to regions near the drawn contour. Further foreground/background labels can then be added by the user to refine the segmentation. All iterations of the graph-based segmentation benefit from a reduced input graph, while maintaining full resolution near the object boundary. A user study with 16 participants was carried out for RW segmentation of a multi-modal dataset of 22 medical images, using either a standard mouse or a stylus pen to draw the contour. Results reveal that our approach significantly reduces the overall segmentation time compared with the status quo approach (p < 0.01). The study also shows that our approach works well with both input devices. Compared to super-pixel graph reduction, our approach provides full resolution accuracy at similar speed on a high-resolution benchmark image with both RW and GC segmentation methods. However, graph reduction based on super-pixels does not allow interactive correction of clustering errors. Finally, our approach can be combined with super-pixel clustering methods for further graph reduction, resulting in even faster segmentation.
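To make the layer-based reduction concrete, the sketch below illustrates the core idea in Python under stated assumptions; it is not the authors' implementation. The function names, the use of scipy's Euclidean distance transform, the synthetic ring contour, and the keep_layer value are all illustrative choices. The sketch partitions pixels into distance layers whose thickness grows as a Fibonacci sequence around a rough contour mask and keeps only the pixels up to a chosen layer, i.e. the band of the image that would then be handed to RW or GC.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt


def fibonacci_layers(dist, n_layers):
    """Assign each pixel a layer index based on its distance to the contour.

    Layer thickness follows a Fibonacci sequence (1, 1, 2, 3, 5, ...), so the
    partition stays fine near the contour and coarse far from it.
    Illustrative sketch only, not the authors' code.
    """
    thickness = [1, 1]
    while len(thickness) < n_layers:
        thickness.append(thickness[-1] + thickness[-2])
    bounds = np.cumsum(thickness)          # outer distance of each layer
    # np.digitize maps each distance value to the first bound it falls under
    return np.digitize(dist, bounds)


def reduce_graph_mask(contour_mask, keep_layer=4, n_layers=10):
    """Return a boolean mask of the pixels kept for segmentation.

    contour_mask : 2-D bool array, True on the user's rough contour.
    keep_layer   : pixels in layers beyond this index are discarded,
                   restricting the graph to a band around the contour.
    """
    # Euclidean distance (in pixels) from every pixel to the drawn contour
    dist = distance_transform_edt(~contour_mask)
    layers = fibonacci_layers(dist, n_layers)
    return layers <= keep_layer


if __name__ == "__main__":
    # Hypothetical usage: a 512x512 image with a rough circular "sketch"
    h, w = 512, 512
    yy, xx = np.mgrid[:h, :w]
    radius = np.hypot(yy - h / 2, xx - w / 2)
    contour = np.abs(radius - 120) < 1.5   # thin ring standing in for the user's contour
    keep = reduce_graph_mask(contour, keep_layer=5)
    print("kept", keep.sum(), "of", h * w, "pixels")
```

In this reading, only the kept pixels become graph vertices, so every RW or GC iteration runs on the reduced graph while resolution near the object boundary is preserved; the automatic foreground/background labelling from a selected layer is left out of the sketch.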
Official version URL: https://doi.org/10.1016/j.cviu.2016.05.009
Depositor: GUEZIRI, Houssem
Responsible: Houssem GUEZIRI
Deposited: 14 Dec. 2023 21:29
Last modified: 14 Dec. 2023 21:29
