
Surgical Planning Laboratory

The Publication Database hosted by SPL

Active Learning Guided Interactions for Consistent Image Segmentation with Reduced User Interactions

Institution:
General Electric Research, Niskayuna, NY, USA.
Publisher:
ISBI 2011
Publication Date:
Mar-2011
Journal:
Proc IEEE Int Symp Biomed Imaging.
Pages:
1645-8
Citation:
Proc IEEE Int Symp Biomed Imaging. 2011 Mar; 1645-8.
Keywords:
Active learning, SVM classification, interactive segmentation, learning based user guidance
Appears in Collections:
NAC, NA-MIC, SLICER
Sponsors:
P41 RR013218/RR/NCRR NIH HHS/United States
U54 EB005149/EB/NIBIB NIH HHS/United States
P50 AG005681/AG/NIA NIH HHS/United States
P01 AG003991/AG/NIA NIH HHS/United States
R01 AG021910/AG/NIA NIH HHS/United States
P50 MH071616/MH/NIMH NIH HHS/United States
U24 RR021382/RR/NCRR NIH HHS/United States
R01 MH056584/MH/NIMH NIH HHS/United States
Generated Citation:
Veeraraghavan H., Miller J.V. Active Learning Guided Interactions for Consistent Image Segmentation with Reduced User Interactions. Proc IEEE Int Symp Biomed Imaging. 2011 Mar; 1645-8.
Downloaded: 665 times.

Interactive techniques leverage the expert knowledge of users to produce accurate image segmentations. However, segmentation accuracy varies from user to user, and users may require training with the algorithm and its exposed parameters before they can obtain the best segmentation with minimal effort. Our work combines active learning with interactive segmentation and (i) achieves accuracy comparable to fully user-guided segmentation while requiring significantly fewer user interactions (50% fewer on average), and (ii) yields robust segmentations by reducing the variability of the result across user inputs. Our approach interacts with the user to suggest gestures or seed point placements. We present an extensive experimental evaluation on two publicly available datasets.
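
This record does not spell out the query strategy, so the sketch below is only a generic illustration of the idea named in the keywords: margin-based active learning with an SVM, where the system suggests the next seed point at the pixel the classifier is least certain about. It is a minimal sketch assuming scikit-learn; the function name suggest_seed and the feature layout are hypothetical and are not the authors' implementation.

import numpy as np
from sklearn.svm import SVC

def suggest_seed(features, labeled_idx, labels):
    # Hypothetical illustration, not the authors' code.
    # Train an SVM on the pixels the user has already labeled, then return
    # the index of the unlabeled pixel closest to the decision boundary,
    # i.e., the most uncertain pixel, as the suggested next seed placement.
    clf = SVC(kernel="rbf")
    clf.fit(features[labeled_idx], labels)
    unlabeled = np.setdiff1d(np.arange(len(features)), labeled_idx)
    margins = np.abs(clf.decision_function(features[unlabeled]))
    return unlabeled[np.argmin(margins)]

# Toy usage: per-pixel feature vectors (e.g., intensity plus coordinates)
# with four user-provided foreground/background seeds.
rng = np.random.default_rng(0)
features = rng.normal(size=(500, 3))
labeled_idx = np.array([0, 1, 2, 3])
labels = np.array([0, 0, 1, 1])
print("suggested seed pixel:", suggest_seed(features, labeled_idx, labels))

Querying where the SVM margin is smallest concentrates user effort on the regions the classifier finds ambiguous, which is how schemes of this kind can cut the number of interactions substantially.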

Additional Material
1 File (131.58kB)
Veeraraghavan-ISBI2011-fig4.jpg (131.58kB)