Salient Object Detection for Images Taken by People With Vision Impairments

Jarek Reynolds*, Chandra Kanth Nagesh*, and Danna Gurari
*denotes equal contribution
University of Colorado Boulder

Abstract

Salient object detection is the task of producing a binary mask for an image that deciphers which pixels belong to the foreground object versus the background. We introduce a new salient object detection dataset using images taken by people who are visually impaired who were seeking to better understand their surroundings, which we call VizWiz-SalientObject. Compared to seven existing datasets, VizWiz-SalientObject is the largest (i.e., 32,000 human-annotated images) and contains unique characteristics including a higher prevalence of text in the salient objects (i.e., in 68% of images) and salient objects that occupy a larger ratio of the images (i.e., on average, ∼50% coverage). We benchmarked seven modern salient object detection methods on our dataset and found they struggle most with images featuring salient objects that are large, have less complex boundaries, and lack text, as well as with lower-quality images. We invite the broader community to work on our new dataset challenge by publicly sharing the dataset at https://vizwiz.org/tasks-and-datasets/salient-object.

1. Introduction

Locating the most prominent foreground object in an image is a core computer vision problem, often referred to as salient object detection (as well as salient object segmentation and foreground object detection/segmentation) [8, 12, 32, 40].
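As a concrete illustration of the task output: a salient-object prediction for an H×W image is an H×W binary mask, and the "coverage" statistic referenced in this paper is simply the mask's foreground fraction. A minimal numpy sketch with toy arrays (illustrative only, not code from the paper):

```python
import numpy as np

# A salient-object prediction for an H x W image is an H x W binary mask:
# 1 = pixel belongs to the foreground object, 0 = background.
h, w = 4, 6
mask = np.zeros((h, w), dtype=np.uint8)
mask[1:3, 1:5] = 1  # a 2 x 4 toy "object" inside a 4 x 6 image

# Coverage = fraction of image pixels occupied by the salient object
# (the statistic that averages ~50% over VizWiz-SalientObject).
coverage = mask.mean()
print(round(float(coverage), 3))  # 8 foreground pixels / 24 total ≈ 0.333
```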
This work is motivated by the desire to have salient object detection models work well for images taken by people who are blind or with low vision¹ (i.e., people with vision impairments). Such a feature could offer several benefits to this community. For example, it could contribute to privacy preservation for photographers who rely on visual assistance technologies to learn about objects in their daily lives, using mobile phone applications such as Microsoft's Seeing AI, Google Lookout, and TapTapSee.² All content except the foreground content of interest could be obfuscated, which is important since private information is often inadvertently captured in the background of images taken by these photographers [24]. Additionally, localization of the foreground object would empower low vision users to rapidly magnify content of interest and also enable quick inspection of smaller details [21, 39].

Many salient object detection datasets have been created to enable progress in algorithm development [7, 8, 22, 42].

¹For people with low vision, solutions do not exist to correct their vision (e.g., by wearing glasses, surgery).
²Many companies record submitted data as evidence that potentially could be needed for legal reasons.

Figure 1. Example images demonstrating unique features of our new VizWiz-SalientObject dataset when compared to other datasets. The salient objects commonly contain text and occupy a larger portion of the image (i.e., high coverage).
A limitation of existing datasets is that they are typically built using high-quality images collected from photo-sharing websites on the Internet. As we will show in Section 3.2, such images commonly lack many characteristics that can be observed in real-world settings, especially for visual media taken by visually impaired photographers, who are trying to learn about the content they photograph [24], often photograph distinct types of content such as objects showing text [25], and cannot verify visual quality [13].

To fill this gap, we introduce a new salient object detection dataset based on images captured in an authentic use case where visually impaired photographers shared their images to solicit assistance in learning about the visual content. We created this dataset by crowdsourcing the collection of salient object annotations for nearly 40,000 images taken from the VizWiz-Captions dataset [25]. Examples of resulting annotated images are shown in Figure 1. After applying quality control filtration steps, our final dataset consists of 32,000 annotated images. We call our dataset VizWiz-SalientObject (or VizWiz-SO).

arXiv:2301.05323v1 [cs.CV] 12 Jan 2023
We conduct a detailed analysis revealing how this new dataset relates to existing datasets. When comparing our salient objects to the visual evidence needed to answer questions the photographers asked about their images (i.e., taken from the VizWiz-VQA-Grounding dataset [11]), we observe that over half the time the necessary visual evidence is the salient object. When comparing our dataset to seven existing datasets, we observe VizWiz-SalientObject is the largest (i.e., 32,000 human-annotated images) and is unique in its higher prevalence of text in the salient objects (i.e., in 68% of images) as well as salient objects occupying a larger ratio of the images (i.e., on average, ∼50%).

We also benchmark modern salient object detection algorithms on our new dataset to uncover open challenges for the research community. Experiments with seven algorithms reveal that they struggle most for images with salient objects that are large, have less complex boundaries, and lack text, as well as for lower-quality images. To facilitate progress on these challenging problems, upon publication, we will publicly share the dataset and an evaluation server with a leaderboard at the following link: https://vizwiz.org/tasks-and-datasets/salient-object.

In summary, our new dataset supports the development of more generalized algorithms that not only address the interests of people with vision impairments but also can benefit related applications that encounter similar real-world challenges observed in our dataset.
Relevant applications include robotics, lifelogging, and privacy protection.

2. Related Work

Salient Object Detection Datasets. Over the past couple of decades, many datasets were introduced to facilitate improving the design of algorithms that address salient object detection problems. Several survey papers provide comprehensive characterizations of the tens of datasets designed for this task [7, 8, 22, 42]. A common observation is that datasets were artificially constructed around high-quality images which often feature salient objects in the center of the images with a high contrast against the background. This is a mismatch with many real-world settings, especially for visual media taken by visually impaired photographers, who often photograph distinct types of content, such as objects showing text [25], with the aim to learn about that content. We introduce the first salient object detection dataset based on images taken by visually impaired people in an authentic use case where they were trying to learn about their visual surroundings. Compared to seven modern datasets, our dataset is larger, has a high prevalence of salient objects containing textual information, and shows objects that occupy larger portions of the images.

Salient Object Detection Algorithms. Researchers have designed novel algorithms to automatically perform salient object detection for over 20 years, with the status quo since 2015 being that state-of-the-art methods employ neural networks trained on large-scale annotated datasets. Several survey papers provide comprehensive characterizations of the hundreds of algorithms for this task [7, 8, 22, 42]. While convolutional neural network (CNN) based models became the mainstream method [1, 33, 43] in 2015, transformer-based models [30, 44] have become the mainstream approach over the past few years. To assess how well modern methods perform on our new dataset, we benchmark seven modern methods. We observe that existing methods fall below human performance and struggle most for salient objects that lack text and occupy a larger ratio of the image.

Visual Assistance Technologies. Visually impaired people can share their visual media (images and videos) with various technologies [3, 4, 6, 14, 18, 27, 32, 40] in order to receive assistance for daily tasks such as deciding what to eat, wear, and buy [10, 24]. The widespread impact of such technologies for real users is exemplified by reports from some of these companies that the technologies have tens to hundreds of thousands of users who have submitted millions of assistance requests [5, 9, 14, 17]. The most common reported goal for using such technologies is to learn about a (salient) object [9, 10, 23, 28, 47]. Given this common use case, salient object detection models could help with privacy preservation. Specifically, images (or video frames) could be edited before being shared with companies, by obfuscating the background, in order to reduce inadvertent disclosures of private content that often appears in the background of images taken by visually impaired photographers [24].

3. VizWiz-SalientObject Dataset

We now introduce our new salient object detection dataset, which we call VizWiz-SalientObject (VizWiz-SO).

3.1. Dataset Creation

Image Source. We focus on images taken by visually impaired people who shared them in an authentic use case where they were soliciting visual assistance. Specifically, we leverage the 39,181 labeled images from the VizWiz-Captions dataset, each of which is paired with five crowdsourced captions [25].
Observing that images from these photographers can have severe quality issues resulting in no detectable salient object (e.g., extreme blur or inadequate illumination), we did not use the images which were captioned as follows by at least four of the five crowdworkers: "Quality issues are too severe to recognize visual content." We also did not use the small images (i.e., both the height and width were less than 300 pixels) because of the challenges of collecting precise annotations for such images. This left us with 37,120 images for our annotation task.

Task Design. Our task interface for segmenting salient objects begins with a comprehensive instruction set at the top detailing both how to navigate the interface and how to complete challenging annotation scenarios. Next, it shows an image alongside two preliminary questions for verifying there is a single, unambiguous foreground object. The first question asks, "Is the image showing a screenshot?" If the answer is "yes", we conclude the image lacks a salient object. Next, we ask the more general, direct question of "Is there a single unambiguous foreground object?" An annotator is only prompted to segment the foreground object for images deemed by these preliminary questions to show a single, unambiguous foreground object. To demarcate the boundary of the salient object, the interface collects a series of points that are connected into polygon(s). When segmenting the salient object, the annotator is required to remove any holes (e.g., the hole of a donut) as well as capture all object parts when occlusions break a salient object into more than one polygon (e.g., a hand occluding a pencil breaks it into two parts). The annotator also has an option to select a button indicating that the salient object occupies the full image. We provide more details about the task interface as well as a screenshot of it in the Supplementary Materials.

Annotation Collection. We leveraged the benefits of an around-the-clock distributed workforce by crowdsourcing annotations via Amazon's crowdsourcing marketplace, Amazon Mechanical Turk (AMT). Although AMT can support our large-scale annotation needs, it brings concerns about annotation quality due to the anonymous nature of the crowdsourced workforce. Consequently, we implemented several measures to ensure the collection of high-quality annotations, as summarized below. First, we restricted who were potential candidates for our task. We only accepted workers who had at least a 98% acceptance rate while having completed at least 500 Human Intelligence Tasks (HITs) on AMT. Moreover, to encourage understanding of our initial and ongoing task instructions, we opted for crowdworkers only from the United States since that provided us confidence that they have English-language proficiency. In addition, we also required crowdworkers to pass a qualification assessment covering five challenging annotation scenarios documented in our instructions. The qualification images feature foreground objects consisting of complex boundaries, holes within the object, and occlusions obfuscating portions of the foreground object. Consequently, the task required crowdworkers to demonstrate an understanding of how to generate multiple polygons, annotate holes, handle occlusions, and draw complex boundaries.

We employed 40 AMT crowdworkers who completed our qualification task to complete annotations of all images. For each of the 37,120 images, we collected two annotations from the crowdworkers.³ During annotation collection, we monitored ongoing quality by tracking each worker's performance with respect to their frequency of indicating the presence of full-screen annotations or no prominent foreground object, as well as the level of detail they provided in their segmentations (e.g., a high prevalence of triangles). Cumulatively, the crowdworkers took 1,290 annotation hours over 11 days to complete annotating the 37,120 images.

Annotation Post-Processing.
We next analyzed the redundant annotations per image to determine how to use each annotated image in the final dataset. First, we removed 3,662 images for which workers agreed there was no single, unambiguous salient object, which occurred when both annotators either answered "Yes" to "Is the image a screenshot?" or "No" to "Is there a single most prominent foreground object?" Next, we manually inspected 7,443 images for which workers disagreed on the answers to either of the two preliminary questions and determined whether there is indeed a single, unambiguous object. Finally, with all images deemed to have a single, unambiguous salient object, we determined which annotation to assign as ground truth. To assist in this process, we computed the intersection over union (IoU) score between the two segmentations for all images with two or more segmentations. With IoUs ≥ 0.90, we deemed both annotations high quality and randomly selected one as ground truth. For the remaining 2,951 images with IoUs < 0.90, we manually reviewed the annotations to decide whether one was correct or whether the image should be discarded due to foreground object ambiguity.

3.2. Dataset Analysis

We now characterize the VizWiz-SalientObject (VizWiz-SO) dataset and how it relates to existing datasets.

3.2.1 Salient Objects vs. Answer Groundings for VQA

We first explore how the target content the photographers were asking about relates to an image's salient object.
To do so, we compare the annotations of the visual evidence needed to answer questions about the images, i.e., the answer groundings provided in the VizWiz-VQA-Grounding dataset [11], to the annotations of the salient objects in our dataset. We first identified all annotated images that were in common across the two datasets, yielding a total of 6,540 images.

[3] For a subset of images, we collected four annotations to support further analysis of human annotation performance, which we describe in the Supplementary Materials.

Figure 2. The histogram summarizes for 6,540 images the frequency of observing different levels of similarity between two segmentations per image, which show the salient object and the visual evidence needed to answer the photographer's question respectively. These findings reveal that visually impaired photographers often want to learn about the salient objects in their images.
For each image, we then measured the similarity between the answer grounding and salient object segmentations using the IoU metric. We visualize our results using a histogram where we categorize each image into one of ten interval bins, starting with IoU = [0.0, 0.1), incrementing in intervals of 0.1, and ending with IoU = [0.9, 1.0]. Results are shown in Figure 2.

We observe that about half of the images have a high similarity between the salient object and VQA answer grounding; e.g., 46% had an IoU ≥ 0.9. This reveals that visually impaired photographers often are trying to learn about the salient object in their images when seeking answers to their visual questions. We also observe that roughly one quarter of the images have a very low similarity between the salient object and VQA answer grounding; i.e., 25.7% of images had an IoU < 0.1. We manually reviewed these 1,680 images with IoUs less than 0.1 to understand the reasons for this finding.
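The IoU similarity used throughout this section, and the ten-bin histogram summarized in Figure 2, can be sketched as follows. This is a minimal illustration rather than the authors' code, and the example masks are arbitrary:

```python
import numpy as np

def iou(mask_a, mask_b):
    """Intersection over union between two binary segmentation masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # two empty masks agree perfectly
    return np.logical_and(a, b).sum() / union

def iou_histogram(scores):
    """Bin IoU scores into ten intervals [0.0, 0.1), ..., [0.9, 1.0];
    np.histogram makes the final bin inclusive of 1.0."""
    counts, edges = np.histogram(scores, bins=np.linspace(0.0, 1.0, 11))
    return counts, edges

# Example: two masks that overlap on 45 of 50 foreground pixels (IoU = 0.9).
a = np.zeros((10, 10)); a[:5, :] = 1
b = np.zeros((10, 10)); b[:5, :9] = 1
print(iou(a, b))  # 0.9
```

The 0.90 agreement threshold from the annotation post-processing step is then a single comparison, `iou(a, b) >= 0.90`.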
We discovered that 95% (i.e., 1,599) of these images have a salient object featuring a full-screen or large region while the VQA answer grounding captures a small aspect of the salient object. Examples include expiration dates on food packages or the current page number of an open book. The remaining 5% (i.e., 81) of these images featured a VQA answer grounding unrelated to the salient object.

More generally, we observe that the IoU scores follow a U-shaped distribution, with only a small portion of images having middling scores; e.g., only 7.9% (i.e., 511) of images had an IoU ≥ 0.3 and < 0.7. Among these images, we found the salient object contained the VQA answer grounding region 100% of the time. Two primary trends led to these less common IoU scores. The first is that larger VQA answer grounding regions occur with smaller salient objects. Examples include brands of cereal, types of soda, and denominations of currency. The second is salient objects featuring holes.
That is because the VizWiz-VQA-Grounding dataset did not account for holes in its annotation task. The absence of annotated holes in only one of the two segmentations led to lower IoU scores.

Altogether, these findings highlight that a valuable step for tackling many of this population's VQA goals is to initially locate the salient object. That is because the answer will likely be grounded entirely within the salient object or the background, rather than straddling both.

3.2.2 VizWiz-SO vs Existing Datasets

We next compare our dataset to seven datasets:

DUTS [41]: the most commonly used dataset for training state-of-the-art algorithms (e.g., [1,30,33,38,43,44]) due to its large size paired with diverse saliency challenges.
DUT-OMRON [46]: consists of images showing multiple salient objects, often with complex backgrounds. This is a useful reference for eventually extending our dataset to photographs taken by visually impaired photographers that show multiple salient objects. We share our collected metadata indicating when this occurs to facilitate this line of future research.

ECSSD [45]: consists of images featuring complex scenes that present textures and structures expected to be common in real-world salient object detection scenarios.

PASCAL-S [29]: derived from the PASCAL VOC [16] validation set, it is designed to facilitate salient object segmentation generalization on realistic images.

HRSOD [48]: explicitly designed for salient object detection on high-resolution images; this is relevant for our real-world application since images taken by people with vision impairments often are relatively high resolution.
UHRSD [44]: currently the largest ultra-high-resolution salient object detection dataset, which is relevant to our work since images taken by people with vision impairments can be ultra high resolution.

DAVIS-S [48]: derived from DAVIS [36], a densely annotated video segmentation dataset. This is relevant for our real-world application to analyze implications for video frames, since visually impaired photographers often stream live video with their cameras when using visual assistance technologies [4,18].

Of note, images in six of these datasets originate from photo-sharing websites on the Internet such as Flickr [29, 41, 44-46, 48], and so likely are high quality since they were deemed of sufficient quality to upload to the Internet.[4]

[4] The origins of the images for the final dataset are not reported [48].

Dataset         Images   Text   MR    Holes
DAVIS-S [48]        92    13%   22%   82%
PASCAL-S [29]      850    24%   31%   50%
HR [48]          2,010    15%   25%   62%
ECSSD [45]       1,000    15%    9%   29%
DUT-O [46]       5,168    11%   17%   28%
UH [44]          5,920    19%   35%   75%
DUTS [41]       15,572    13%   19%   41%
Ours            32,000    68%    1%    4%

Table 1. Characterization of our VizWiz-SO dataset and seven existing salient object detection datasets with respect to how many images are included ("Images"), the percentage of images that have text present in the salient objects ("Text"), the percentage of images that have salient objects consisting of more than one region ("MR"), and the percentage of images that have salient objects containing any holes ("Holes"). As shown, our dataset is distinct in that it contains more images, more salient objects with text present, more salient objects consisting of one region, and fewer salient objects containing holes.
(HR = HRSOD; UH = UHRSD)

Figure 3. Summary statistics for ours and seven other datasets with respect to four measures. Each box reveals statistics about all salient objects in a particular dataset, with the central mark capturing the median value, box edges the 25th and 75th percentile values, whiskers the most extreme data points not considered outliers, and individually plotted points the outliers. Our dataset is unique in that salient objects tend to have less complex boundaries, occupy larger portions of an image, and exhibit a greater diversity of sizes relative to the image.

For each salient object in every dataset, we characterize it in six ways. Three measures focus on detecting the presence versus absence of particular properties of the salient object: whether the salient object contains text,[5] consists of multiple regions,[6] or contains any hole(s).
The remaining three measures characterize the salient region itself. First, we identify the position of an object within an image by measuring its center of mass relative to the image coordinates, resulting in x and y coordinate values in the range of 0 to 1. Next, we characterize the object's boundary complexity by computing its isoperimetric quotient, which relates the object's area to the squared length of its perimeter. Values range from 0 to 1, with larger values indicating simpler boundaries that are less jagged/dented (e.g., a circle). Finally, to gauge the relative size of a salient object in the image, we compute its coverage ratio, meaning the fraction of all image pixels that are occupied by the salient object's pixels. We show summary statistics of our findings per dataset in Table 1 and Figure 3.
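The three region measures can be illustrated on a polygon annotation. This is a hedged sketch, since the paper does not give exact formulas: we assume the standard polygon centroid for center of mass, the isoperimetric quotient 4*pi*A/P^2 for boundary complexity (which equals 1 for a circle), and polygon area over image area for coverage ratio:

```python
import math

def polygon_area(pts):
    """Area of a simple polygon via the shoelace formula."""
    s = 0.0
    for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def polygon_perimeter(pts):
    """Total edge length of the polygon boundary."""
    return sum(math.dist(p, q) for p, q in zip(pts, pts[1:] + pts[:1]))

def center_of_mass(pts, img_w, img_h):
    """Polygon centroid, normalized to [0, 1] image coordinates."""
    a = cx = cy = 0.0
    for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
        cross = x1 * y2 - x2 * y1
        a += cross
        cx += (x1 + x2) * cross
        cy += (y1 + y2) * cross
    a /= 2.0
    return cx / (6.0 * a) / img_w, cy / (6.0 * a) / img_h

def boundary_complexity(pts):
    """Isoperimetric quotient 4*pi*A / P^2: 1.0 for a circle,
    smaller for more jagged/dented boundaries."""
    p = polygon_perimeter(pts)
    return 4.0 * math.pi * polygon_area(pts) / (p * p)

def coverage_ratio(pts, img_w, img_h):
    """Fraction of image pixels occupied by the salient object."""
    return polygon_area(pts) / (img_w * img_h)

# A 4x4 square object in an 8x8 image:
square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(center_of_mass(square, 8, 8))   # (0.25, 0.25)
print(boundary_complexity(square))    # pi/4, roughly 0.785
print(coverage_ratio(square, 8, 8))   # 0.25
```

In practice the annotations may contain multiple polygons and holes, so area and perimeter would need to be aggregated across regions; the single-polygon case above shows the core arithmetic.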
In particular, in Table 1, we report how many images are in each dataset paired with what percentage of those images have salient objects with text, multiple regions, and holes. In Figure 3, we visualize statistics summarizing the values for each dataset's salient objects with respect to center of mass, boundary complexity, and coverage ratio using boxplots.

[5] We obfuscate all image content but the salient object and then check whether Microsoft Azure's OCR API returns text.

[6] Multiple regions means there are multiple separate polygons. This can occur either because multiple salient objects were annotated or because of occlusions that lead to more than one region for a single salient object.

While our findings highlight that our VizWiz-SO dataset has many distinct characteristics, one commonality it has with most existing salient object detection datasets is that the salient objects typically occupy centered positions within an image. Specifically, in Figure 3, we observe this trend for all datasets except HRSOD.
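Footnote 5's text-detection procedure, obfuscating everything but the salient object before running OCR, can be sketched as follows. The masking step is shown concretely; the OCR call itself (Microsoft Azure's OCR API in the paper) is left as a hypothetical placeholder rather than a real client call:

```python
import numpy as np

def obfuscate_background(image, mask, fill=0):
    """Replace every pixel outside the salient-object mask with a
    constant fill value, so OCR can only see the object itself."""
    keep = mask.astype(bool)
    if image.ndim == 3:          # broadcast a 2-D mask over color channels
        keep = keep[..., None]
    return np.where(keep, image, fill).astype(image.dtype)

# Example: a 2x2 RGB image where only the diagonal is salient.
img = np.full((2, 2, 3), 200, dtype=np.uint8)
msk = np.array([[1, 0], [0, 1]])
out = obfuscate_background(img, msk)
print(out[0, 1])  # [0 0 0] -- background pixel blanked out
# The masked image would then be sent to the OCR service, e.g.:
# has_text = bool(azure_ocr(out))   # hypothetical OCR wrapper
```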
We found this somewhat surprising since visually impaired photographers cannot visually inspect their images to verify they are conforming to the common photographer's bias of centering contents of interest. Yet, given our findings from Section 3.2.1 that photographers often are interested in learning about an image's salient object, our findings suggest these photographers have skills in centering contents of interest in the pictures they take.

A unique aspect of our VizWiz-SO dataset is that it features more salient objects with textual data. Specifically, 68% of salient objects in VizWiz-SO contain text while the dataset with the next highest prevalence of text, PASCAL-S [29], only has it for 24% of the images (Table 1). A gap of this magnitude (i.e., 44 percentage points) suggests that our new dataset offers a considerable domain shift in the salient object detection problem space. We suspect part of this shift stems from the types of salient objects included, with many more daily objects such as products (e.g., food packages) included in our VizWiz-SO dataset.
Another unique aspect of VizWiz-SO is that far fewer images feature salient objects that consist of multiple regions; i.e., only 1% of images (Table 1). We suspect this distinction stems from our unique approach of adopting a rigorous annotation preprocessing step, where we require crowdworkers to verify that images have one unambiguous salient object before allowing them to annotate images for use in our final dataset. Any remaining objects in our dataset with multiple regions are therefore highly likely a result of occlusions breaking a single salient object into multiple polygons, which evidently is incredibly rare.

VizWiz-SO is also unique due to the rarity with which salient objects contain holes; i.e., holes are observed in only 4% of images (Table 1). From visual inspection, we suspect this finding reflects a domain shift in the types of content found in the datasets. For example, objects with holes in other datasets include people riding bikes, people dancing, and animals in intricate poses. In contrast, objects with holes in VizWiz-SO include retail packaging made to hang from hooks, pairs of scissors, and coffee mugs. We posit the lower prevalence of holes in VizWiz-SO stems from the fact that its images originate from an authentic use case where photographers primarily photograph household and retail items, which naturally feature fewer holes.

A further distinction of our VizWiz-SO dataset is that the salient objects tend to have less complex boundaries (Figure 3). We suspect this is again because of a domain shift in the types of objects in our dataset, with many more human-made items, such as food packaging boxes and cans, that by design are typically more structured in shape.
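Boundary complexity can be quantified in several ways. As one illustration (a sketch of a common shape-complexity measure, not necessarily the one used for Figure 3), the isoperimetric quotient relates a mask's perimeter to its area; compact, structured shapes like boxes score near 1, while intricate boundaries score higher:

```python
import numpy as np

def boundary_complexity(mask: np.ndarray) -> float:
    """Isoperimetric quotient perimeter**2 / (4 * pi * area) of a binary mask.

    A disk scores ~1; more intricate boundaries score higher. The perimeter
    is approximated by counting foreground pixels that touch at least one
    background pixel (4-connectivity).
    """
    mask = mask.astype(bool)
    area = mask.sum()
    if area == 0:
        return 0.0
    padded = np.pad(mask, 1, constant_values=False)
    # A foreground pixel lies on the boundary if any 4-neighbor is background.
    up, down = padded[:-2, 1:-1], padded[2:, 1:-1]
    left, right = padded[1:-1, :-2], padded[1:-1, 2:]
    boundary = mask & ~(up & down & left & right)
    perimeter = boundary.sum()
    return perimeter ** 2 / (4 * np.pi * area)
```

Under this measure, a square region scores near 1 while a thin, elongated region of the same area scores several times higher.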
A final distinction of salient objects in our VizWiz-SO is how much of the image they occupy (Figure 3). First, they tend to occupy a much larger portion of the image than observed in other datasets. Specifically, they on average occupy roughly half of all image pixels, with a mean coverage ratio of 0.50 and a median of 0.46. In contrast, the dataset with the next highest coverage ratio statistics is PASCAL-S [29], and over 75% of its images contain salient objects that occupy less than half of the image pixels. We attribute this distinction to the authentic use case of our dataset, where visually impaired photographers attempting to learn about the salient objects they are photographing seem to be taking zoomed-in or close-to-camera images of the content of interest.
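The coverage ratio discussed above is simply the fraction of image pixels that belong to the salient object; a minimal sketch (function name is ours):

```python
import numpy as np

def coverage_ratio(mask: np.ndarray) -> float:
    """Fraction of image pixels labeled as salient in a binary mask."""
    mask = np.asarray(mask, dtype=bool)
    return mask.sum() / mask.size
```

Averaging this ratio over all annotated masks yields the dataset-level statistics reported above (mean 0.50, median 0.46 for VizWiz-SO).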
Another unique aspect of our salient objects is that they exhibit a larger range of sizes, as shown by the gaps between the 25th and 75th percentile values of each box. For example, PASCAL-S features the next largest interquartile range with a 23-percentage-point gap (i.e., 19% to 42%). In contrast, the gap for VizWiz-SO is more than twice as large at 56 percentage points (i.e., 22% to 78%). Consequently, a unique challenge of our dataset for algorithms is that they can no longer assume a strong bias regarding a salient object's relative size.

4. Algorithm Benchmarking

We benchmark modern salient object detection algorithms to show how they perform on our new dataset.
We conducted all experiments on an Nvidia A100 GPU.

4.1. Experimental Design

Dataset Splits. We use the existing splits available for the VizWiz-Captions dataset [25], which translates to approximately a 60/20/20 training, validation, and test split for our VizWiz-SO dataset. In particular, from the 32,000 annotated images, the number of images in each split is 19,116, 6,105, and 6,779, respectively.

Evaluation Metrics. We evaluate each model with respect to five popular metrics for salient object detection models: Mean Absolute Error (MAE), Structure Measure (Sm), Mean F-Measure (Fm), Enhanced Alignment Measure (Em), and Intersection over Union (IoU).

Algorithms.
We benchmark the following seven methods from the past three years to assess the difficulty of our new dataset for modern salient object detection models:

Boundary Aware Segmentation Network (BASNet) [38]: an appealing model for real-time applications like our target use case because it can achieve 70fps at inference time while achieving competitive performance (i.e., it was a top performer in 2019).

Fusion, Feedback and Focus Network (F3Net) [43]: the state-of-the-art model on five datasets in 2020.

U2 Network (U2Net) [1]: an appealing model for real-world applications like our target use case because it has a very light footprint (4.7MB), and so is more suitable for resource-constrained devices such as smartphones. It achieved competitive performance in 2020.

Visual Saliency Transformer (VST) [30]: achieved state-of-the-art performance in 2021, and is based purely on a transformer architecture.
Pyramidal Feature Shrinking Network (PFSNet) [33]: achieved state-of-the-art performance on five datasets in 2021; it consists of a decoder that hierarchically aggregates adjacent feature nodes to avoid the problem of leaping feature fusion.

Pyramid Grafting Network (PGNet) [44]: introduced in 2022, it is a one-stage framework based on a transformer and CNN backbone that achieves state-of-the-art performance on five benchmark datasets [41, 44, 46, 48].

Dichotomous Image Segmentation (DIS) [37]: also introduced in 2022, as the state-of-the-art method for the DIS5K [37] dataset; it is designed for detecting salient objects in high-resolution images, which makes it relevant for our use case, where many images coming from people with vision impairments are relatively high resolution.

                 HP     BASNet  F3Net  U2Net  VST      PFSNet  PGNet         DIS     VST-FT  VST-S
                        [38]    [43]   [1]    [30]     [33]    [44]          [37]
Attr.
  Backbone       -      R-34    R-50   -      T2T-ViT  R-50    R-18+SWIN     U2Net   VST     ViT
  Training set   -      D       D      D      D        D       D+HR          DIS5K   D+VW    VW
  Input size     -      256²    352²   320²   224²     352²    224², 1024²   1024²   224²    224²
  Size (MB)      -      333     98     4.7    171      120     280           169     171     171
VizWiz-SO
  MAE ↓          0.02   0.28    0.28   0.26   0.17     0.32    0.21          0.36    0.19    0.21
  Sm ↑           0.92   0.59    0.55   0.61   0.65     0.48    0.62          0.46    0.64    0.63
  Fm ↑           0.96   0.77    0.74   0.80   0.83     0.70    0.79          0.61    0.74    0.72
  Em ↑           0.97   0.64    0.65   0.65   0.76     0.60    0.74          0.55    0.77    0.70
  IoU ↑          0.94   0.62    0.53   0.63   0.73     0.48    0.67          0.49    0.70    0.69

Table 2. Analysis of existing algorithms that we benchmark on our VizWiz-SO dataset, including both off-the-shelf models (which are cited) as well as those fine-tuned (-FT) and trained from scratch (-S). We first report differentiating attributes of the algorithm architectures and then present model performance with respect to five widely-used metrics. (HP=Human Performance; R=ResNet [26]; ViT=Vision Transformer [15]; Swin=Shifted window transformer [31]; D=DUTS-TR [41]; VW=VizWiz-SO; HR=HRSOD [48])
We further characterize each model by identifying the backbone architecture used, the datasets used for training, the input image size used for training, and the model footprint. These characteristics are reported in Table 2.

All models predict saliency maps at the same spatial resolution as the input image, where each pixel's value indicates its predicted saliency (e.g., ∈ [0, 1] or alternatively ∈ [0, 255]). The predictions generated by salient object detection models are then converted into binary masks.

Humans. We also evaluate human performance to establish an upper bound for what we should strive for from automated methods.
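The conversion from a continuous saliency map to a binary mask is typically a simple thresholding step. A minimal sketch, assuming a fixed threshold of 0.5 (actual binarization schemes vary by paper):

```python
import numpy as np

def binarize_saliency(saliency: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Threshold a saliency map into a binary mask.

    Maps in [0, 255] are first rescaled to [0, 1].
    """
    saliency = np.asarray(saliency, dtype=float)
    if saliency.max() > 1.0:  # handle maps stored as [0, 255]
        saliency = saliency / 255.0
    return (saliency >= threshold).astype(np.uint8)
```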
Since we get two human annotations per image in our dataset, we calculate human performance by comparing the two annotations in cases where the IoU is greater than 0.90.

4.2. Performance for Off-The-Shelf Models

We first evaluate each of the algorithms as is, in their original design. Results are shown in Table 2.

We observe that VST [30] is the top-performing model. Yet, it still falls short of human performance. For example, the gap in performance is 0.15 in terms of MAE, 0.211 in terms of IoU, 0.26 for Sm, and 0.2 for Em. Consequently, this dataset offers a new challenging benchmark for the community.

A further observation is that the models perform poorly on the VizWiz-SO dataset in comparison to their performance on the original datasets on which they were benchmarked. For example, the MAE and Sm performance of PGNet [44] on DUTS-TE is 0.028 and 0.912, respectively, versus 0.2123 and 0.6233, respectively, on our dataset.
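For reference, the MAE and IoU values being compared can be computed from a predicted mask and a ground-truth mask per the standard definitions (a sketch, not the authors' evaluation code):

```python
import numpy as np

def mae(pred: np.ndarray, gt: np.ndarray) -> float:
    """Mean absolute error between two masks/maps with values in [0, 1]."""
    return float(np.abs(pred.astype(float) - gt.astype(float)).mean())

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over union of two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = (pred | gt).sum()
    return float((pred & gt).sum() / union) if union else 1.0
```

IoU computed this way also underlies the human-agreement criterion from Section 4.1, where the two annotations per image are compared and cases with IoU greater than 0.90 are used.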
We hypothesize that part of the reason for this poor performance is that models trained and evaluated on other datasets are not able to generalize to the real-world challenges that arise in images taken by visually impaired photographers.

4.3. Performance When Training on VizWiz-SO

We next explore whether training the top-performing algorithm, VST [30], on our new dataset leads to improved performance. To do so, we analyze two additional models: (1) the pretrained VST [30] model fine-tuned on VizWiz-SO (VST-FT) and (2) the VST [30] architecture trained from scratch on VizWiz-SO (VST-S). We use the default hyperparameters reported in the VST [30] paper for model training.

Results are shown in Table 2. We observe that both models, i.e., those created by training from scratch and by fine-tuning on our VizWiz-SO dataset, achieve worse results than the baseline of not training the algorithm on our dataset. This suggests that the training data used by algorithms is not the only culprit for what makes our new dataset challenging. Rather, our findings suggest that new algorithmic frameworks are also needed to achieve strong generalization performance on our new dataset.

4.4. Fine-grained Analysis

We next conduct fine-grained analysis to better isolate what makes our dataset challenging for modern algorithms. To do so, we divide our VizWiz-SO test set according to the following four factors, with the first three based on metadata collected in Section 3.2 to characterize our dataset:

                        BASNet  F3Net  U2Net  VST    PFSNet  PGNet  DIS    VST-FT  VST-S
                        [38]    [43]   [1]    [30]   [33]    [44]   [37]
Text Presence  True     0.23    0.22   0.22   0.13   0.25    0.16   0.32   0.16    0.17
               False    0.35    0.38   0.32   0.24   0.42    0.29   0.40   0.24    0.26
Coverage       Small    0.06    0.16   0.07   0.11   0.16    0.12   0.10   0.09    0.11
               Medium   0.15    0.20   0.15   0.09   0.24    0.15   0.25   0.09    0.10
               Large    0.60    0.47   0.54   0.30   0.54    0.35   0.70   0.38    0.39
Complexity     High     0.15    0.21   0.15   0.12   0.24    0.16   0.21   0.11    0.12
               Low      0.38    0.34   0.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content='35 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content='21 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content='38 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content='25 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content='48 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content='26 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content='27 Image Quality Good 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content='22 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content='23 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content='21 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content='14 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content='26 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content='17 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content='30 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content='16 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content='17 Poor 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content='44 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content='43 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content='41 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content='27 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content='47 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content='34 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content='50 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content='30 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content='31 Table 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' Fine-grained analysis of existing algorithms with respect to presence of text on the salient object (“Text Presence”), relative size of the salient object in the image (“Coverage”), relative complexity of the salient object’s boundary (“Complexity”), and image quality (“Image quality”).' 
As shown, the algorithms perform worse when the salient objects lack text, occupy a large portion of the image, or have less complex boundaries, as well as when the image quality is poor.

Text Presence: two groups based on whether text is present in the salient object.

Coverage Ratio (Coverage): three groups based on the 33rd and 66th percentile values in our dataset. All images with a coverage ratio less than 0.32 have small coverage, between 0.32 and 0.62 have medium coverage, and greater than 0.62 have large coverage.

Boundary Complexity (Complexity): two groups created by splitting around the mean boundary complexity score (i.e., 0.66), with high boundary complexity when the score is less than the mean and low boundary complexity otherwise.

Image Quality: leveraging metadata from prior work [25], which indicates how many of the five crowdworkers flagged an image as of insufficient quality to recognize its content, we split the images into a good quality group, when no crowdworker flagged insufficient quality, and a poor quality group otherwise.

Due to space constraints, we only report results in the main paper with respect to the Mean Absolute Error [35]. Results for all benchmarked models are shown in Table 3. In terms of text presence, we see that the models perform better when text is present than when it is absent. For example, performance drops by 0.11 for the best-performing model, VST.
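The per-group evaluation described above can be sketched in a few lines. This is an illustrative assumption, not the authors' code: the function names and the toy inputs are ours, while the MAE definition and the 0.32/0.62 coverage thresholds follow the text.

```python
import numpy as np

def mean_absolute_error(pred, gt):
    """MAE between a predicted saliency map and a binary ground-truth mask.

    Both inputs hold values in [0, 1]; lower is better.
    """
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    return float(np.mean(np.abs(pred - gt)))

def coverage_ratio(gt):
    """Fraction of image pixels belonging to the salient object."""
    gt = np.asarray(gt, dtype=float)
    return float(gt.sum() / gt.size)

def coverage_group(ratio, small=0.32, large=0.62):
    """Bucket an image using the percentile-based thresholds from the paper."""
    if ratio < small:
        return "small"
    if ratio <= large:
        return "medium"
    return "large"

# Toy usage on one 4x4 image.
gt = np.zeros((4, 4))
gt[:2, :2] = 1.0                    # salient object covers 4/16 = 25% of pixels
pred = np.full((4, 4), 0.25)        # a flat, uncertain prediction

print(coverage_group(coverage_ratio(gt)))   # "small" (0.25 < 0.32)
print(mean_absolute_error(pred, gt))        # 0.375
```

In a full analysis, one would compute the MAE per image, bucket images by these criteria, and average within each bucket to obtain rows like those in Table 3.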
We suspect visual patterns that arise with text may serve as a valuable cue to models in locating salient objects. Next, we see that as the coverage ratio of the salient objects increases, the models tend to perform worse. For instance, the best-performing model, VST, has a performance dropoff of 0.19 between images with small coverage ratios and those with large coverage ratios. We see an even greater performance dropoff for other models, such as 0.60 for DIS. We suspect this performance gap arises in part from the fact that existing datasets largely lack such large salient objects, which could have affected both what algorithms were designed to handle and what they could learn from the data they observed. Further observed trends are that performance drops for salient objects with lower boundary complexity and for poorer quality images.
These are two additional factors that reflect domain shifts between our dataset and prior datasets, which could have affected both the design of algorithms and what they could learn from the training data.

5. Conclusions

We introduce the VizWiz-SalientObject dataset to encourage the community to design more generalized salient object detection models that can handle a larger range of challenges, motivated by our authentic use case, that also occur in many real-world applications. We offer our experimental findings from benchmarking modern salient object detection algorithms as a valuable starting point for identifying promising future research directions. To summarize, new models are needed to better handle salient objects that are large, have less complex boundaries, and lack text, as well as to work well in the presence of lower quality images.

We now close with a discussion of some ethical implications of our work. While we are motivated to better assist a population that is traditionally marginalized in society, we acknowledge our work can lead to potentially adverse social effects. Our concern is primarily centered on bad-actor behaviors intended to exploit the privacy, autonomy, and livelihoods of a population demographic inherently susceptible to such behavior. Bad actors could use our work to deceive visually impaired individuals in harmful ways, such as through fraud, scams, and other deceptive practices, for example by intercepting their visual media and replacing automatically detected salient objects with misinformation.

Acknowledgments. This project was supported in part by a National Science Foundation SaTC award (#2148080) and Amazon Mechanical Turk. We thank Leah Findlater and Yang Wang for contributing to this research idea.

References

[1] U2-net: Going deeper with nested u-structure for salient object detection.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' Pattern Recognition, 106:107404, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' 2, 4, 6, 7, 8 [2] Radhakrishna Achanta, Sheila Hemami, Francisco Estrada, and Sabine S¨usstrunk.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' Frequency-tuned salient region de- tection.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' In CVPR, number CONF, pages 1597–1604, 2009.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' 13 [3] AIpoly.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' Aipoly homepage.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' https://www.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content='aipoly.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' com/, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' (Accessed on 01/08/2020).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' 2 [4] Aira.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' Aira homepage.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' https://aira.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content='io/, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' (Ac- cessed on 01/08/2020).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' 2, 4 [5] BeSpecular.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' BeSpecularPressKit.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' 2 [6] BeSpecular.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' Bespecular.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' https://www.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content='bespecular.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' com/, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' (Accessed on 01/08/2020).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' 2 [7] Ali Borji, Ming-Ming Cheng, Qibin Hou, Huaizu Jiang, and Jia Li.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' Salient object detection: A survey.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' Computational visual media, pages 1–34, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' 1, 2 [8] Ali Borji, Ming-Ming Cheng, Huaizu Jiang, and Jia Li.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' Salient object detection: A benchmark.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' IEEE transactions on image processing, 24(12):5706–5722, 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' 1, 2 [9] Erin Brady and Jeffrey P Bigham.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' Crowdsourcing accessi- bility: Human-powered access technologies.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' 2 [10] Erin L Brady, Yu Zhong, Meredith Ringel Morris, and Jef- frey P Bigham.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' Investigating the appropriateness of social network question asking as a resource for blind users.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' In Proceedings of the 2013 conference on Computer supported cooperative work, pages 1225–1236.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' ACM, 2013.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' 2 [11] Chongyan Chen, Samreen Anjum, and Danna Gurari.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' Grounding answers for visual questions asked by visually impaired people.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' arXiv preprint arXiv:2202.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content='01993, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' 2, 3 [12] Ming-Ming Cheng, Niloy J Mitra, Xiaolei Huang, Philip HS Torr, and Shi-Min Hu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' Global contrast based salient region detection.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' IEEE transactions on pattern analysis and ma- chine intelligence, 37(3):569–582, 2014.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' 1 [13] Tai-Yin Chiu, Yinan Zhao, and Danna Gurari.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' Assessing im- age quality issues for real-world problems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3646–3656, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' 1 [14] Ned Desmond.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' Microsoft’s Seeing AI founder Saqib Shaikh is speaking at Sight Tech Global.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' 2 [15] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Syl- vain Gelly, Jakob Uszkoreit, and Neil Houlsby.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' An image is worth 16x16 words: Transformers for image recognition at scale, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' 7 [16] Mark Everingham, Luc Gool, Christopher K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' Williams, John Winn, and Andrew Zisserman.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' The pascal visual object classes (voc) challenge.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' Int.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' Comput.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' Vision, 88(2):303–338, jun 2010.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' 4 [17] Be My Eyes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' Be My Eyes: Our story.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' 2 [18] Be My Eyes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' Bringing sight to blind and low-vision people.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' https://www.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content='bemyeyes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content='com/, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' (Accessed on 01/08/2020).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' 2, 4 [19] Deng-Ping Fan, Ming-Ming Cheng, Yun Liu, Tao Li, and Ali Borji.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' Structure-measure: A new way to evaluate foreground maps.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' In ICCV, pages 4548–4557, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' 13 [20] Deng-Ping Fan, Cheng Gong, Yang Cao, Bo Ren, Ming- Ming Cheng, and Ali Borji.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' Enhanced-alignment measure for binary foreground map evaluation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' In IJCAI, pages 698– 704, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' 13 [21] American Federation for the Blind.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' Low vision optical de- vices.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' 1 [22] Ashish Kumar Gupta, Ayan Seal, Mukesh Prasad, and Pri- tee Khanna.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' Salient object detection techniques in computer vision—a survey.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' Entropy, 22(10), 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' 1, 2 [23] Danna Gurari, Kun He, Bo Xiong, Jianming Zhang, Mehrnoosh Sameki, Suyog Dutt Jain, Stan Sclaroff, Margrit Betke, and Kristen Grauman.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' Predicting foreground object ambiguity and efficiently crowdsourcing the segmentation (s).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' International Journal of Computer Vision, 126(7):714– 730, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' 2 [24] Danna Gurari, Qing Li, Chi Lin, Yinan Zhao, Anhong Guo, Abigale Stangl, and Jeffrey P Bigham.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' Vizwiz-priv: A dataset for recognizing the presence and purpose of private visual information in images taken by blind people.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' In Pro- ceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 939–948, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' 1, 2 [25] Danna Gurari, Yinan Zhao, Meng Zhang, and Nilavra Bhat- tacharya.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E4T4oBgHgl3EQf5Q72/content/2301.05323v1.pdf'} +page_content=' Captioning images taken by people who are blind.' 
In European Conference on Computer Vision, pages 417–434. Springer, 2020. 1, 2, 6, 8
[26] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition, 2015. 7
[27] iDentifi. identifi. http://getidentifi.com/, 2020. (Accessed on 01/08/2020). 2
[28] Hernisa Kacorri, Kris M Kitani, Jeffrey P Bigham, and Chieko Asakawa. People with visual impairment training personal object recognizers: Feasibility and challenges.
In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pages 5839–5849, 2017. 2
[29] Yin Li, Xiaodi Hou, Christof Koch, James M. Rehg, and Alan L. Yuille. The secrets of salient object segmentation. In 2014 IEEE Conference on Computer Vision and Pattern Recognition, pages 280–287, 2014. 4, 5, 6, 13
[30] Nian Liu, Ni Zhang, Kaiyuan Wan, Ling Shao, and Junwei Han. Visual saliency transformer, 2021. 2, 4, 6, 7, 8, 14
[31] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows, 2021.
7
[32] LookTel. Looktel recognizer. 1, 2
[33] Mingcan Ma, Changqun Xia, and Jia Li. Pyramidal feature shrinking for salient object detection. Proceedings of the AAAI Conference on Artificial Intelligence, 35(3):2311–2318, May 2021. 2, 4, 6, 7, 8, 14
[34] MDN. Fill-rule - SVG: Scalable Vector Graphics: MDN. 10
[35] Federico Perazzi, Philipp Krähenbühl, Yael Pritch, and Alexander Hornung. Saliency filters: Contrast based filtering for salient region detection. In CVPR, pages 733–740, 2012.
8, 12
[36] Federico Perazzi, Jordi Pont-Tuset, Brian McWilliams, Luc Van Gool, Markus Gross, and Alexander Sorkine-Hornung. A benchmark dataset and evaluation methodology for video object segmentation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016. 4
[37] Xuebin Qin, Hang Dai, Xiaobin Hu, Deng-Ping Fan, Ling Shao, and Luc Van Gool. Highly accurate dichotomous image segmentation, 2022. 7, 8, 14
[38] Xuebin Qin, Deng-Ping Fan, Chenyang Huang, Cyril Diagne, Zichen Zhang, Adrià Cabeza Sant'Anna, Albert Suàrez, Martin Jagersand, and Ling Shao. Boundary-aware segmentation network for mobile and web applications, 2021. 4, 6, 7, 8
[39] Abigale J Stangl, Esha Kothari, Suyog D Jain, Tom Yeh, Kristen Grauman, and Danna Gurari.
Browsewithme: An online clothes shopping assistant for people with visual impairments. In Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility, pages 107–118, 2018. 1
[40] TapTapSee. Taptapsee. 1, 2
[41] Lijun Wang, Huchuan Lu, Yifan Wang, Mengyang Feng, Dong Wang, Baocai Yin, and Xiang Ruan. Learning to detect salient objects with image-level supervision. In CVPR, 2017. 4, 5, 7, 13
[42] Wenguan Wang, Shuyang Zhao, Jianbing Shen, Steven C. H. Hoi, and Ali Borji.
Salient object detection with pyramid attention and salient edges. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2019. 1, 2
[43] Jun Wei, Shuhui Wang, and Qingming Huang. F3net: Fusion, feedback and focus for salient object detection, 2019. 2, 4, 6, 7, 8, 14
[44] Chenxi Xie, Changqun Xia, Mingcan Ma, Zhirui Zhao, Xiaowu Chen, and Jia Li. Pyramid grafting network for one-stage high resolution saliency detection, 2022. 2, 4, 5, 6, 7, 8, 13
[45] Qiong Yan, Li Xu, Jianping Shi, and Jiaya Jia. Hierarchical saliency detection. In 2013 IEEE Conference on Computer Vision and Pattern Recognition, pages 1155–1162, 2013.
4, 5, 13
[46] Chuan Yang, Lihe Zhang, Huchuan Lu, Xiang Ruan, and Ming-Hsuan Yang. Saliency detection via graph-based manifold ranking. In Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on, pages 3166–3173. IEEE, 2013. 4, 5, 7, 13
[47] Xiaoyu Zeng, Yanan Wang, Tai-Yin Chiu, Nilavra Bhattacharya, and Danna Gurari. Vision skills needed to answer visual questions. Proceedings of the ACM on Human-Computer Interaction, 4(CSCW2):1–31, 2020. 2
[48] Yi Zeng, Pingping Zhang, Jianming Zhang, Zhe Lin, and Huchuan Lu. Towards high-resolution salient object detection.
In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 7234–7243, 2019. 4, 5, 7, 13

Appendix

This document supplements the main paper with additional information concerning:

1. Dataset Creation (supplements Section 3.1)
- Annotation Task Interface
- Worker Qualification Task
- Analysis of Workers' Annotation Differences
2. Dataset Analysis: VizWiz-SO vs Existing Datasets (supplements Section 3.2.2)
3. Experimental Design (supplements Section 4.1)

A. Dataset Creation
A.1. Annotation Task Interface

The task interface displays five images within a tabbed container on the left and preliminary questions with task instructions on the right. A screenshot of the task interface (without instructions) is shown in Figure 4. To account for occlusions and holes while keeping the task simple for annotators, we permitted annotators to generate multiple polygons. For occlusions, annotators could use as many polygons as necessary for demarcating foreground objects partitioned into multiple polygons. For holes, we apply an even-odd fill rule to images featuring foreground objects with holes. With an even-odd fill rule, every area inside an even number of enclosed areas becomes hollow, and every region inside an odd number of enclosed areas becomes filled [34]. By treating the image's four corners as the first enclosed area, the outermost boundary of the foreground object becomes the second enclosed area.
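The even-odd rule that this layering references [34] is conventionally implemented as a ray-crossing test: a pixel is filled when a ray from it crosses an odd number of polygon edges in total. The sketch below is illustrative Python (the function names `crossings` and `filled` are ours, not part of the annotation tool), showing the standard formulation rather than the tool's actual code.

```python
def crossings(point, polygon):
    """Count edges of one polygon crossed by a horizontal ray cast from `point` toward +x."""
    x, y = point
    count = 0
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        # Only edges that straddle the ray's y-coordinate can be crossed.
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge meets the horizontal ray.
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                count += 1
    return count

def filled(point, polygons):
    """Even-odd rule over all traced polygons: odd total crossings means filled."""
    return sum(crossings(point, p) for p in polygons) % 2 == 1

# An outer square boundary with a square hole traced inside it.
outer = [(0, 0), (10, 0), (10, 10), (0, 10)]
hole = [(4, 4), (6, 4), (6, 6), (4, 6)]
print(filled((2, 2), [outer, hole]))  # True: inside the outer boundary only
print(filled((5, 5), [outer, hole]))  # False: inside both polygons, so a hole
```

Under this rule the annotator never has to declare which polygon is a hole; nesting parity decides it automatically.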
Moreover, holes within foreground objects represent the third layer of enclosed areas and become filled, allowing annotators to demarcate foreground objects featuring holes. In practice, annotators first trace the outermost boundary of the foreground object and close the path by clicking the first point a second time. We then instructed annotators to trace any holes within the foreground object, and so those holes end up in odd-numbered layers.

A.2. Worker Qualification Task

We administered a qualification task for workers to support our collection of high-quality ground truth annotations. The qualification task required annotating five images, each of which features a distinct challenging annotation scenario. All five images are shown in Figure 5.
The first two images show a table and a bench, offering examples with complex boundaries and holes. The next two images feature a person holding a coffee mug, to support educating a crowdworker about our expectations for annotating objects with complex geometries that have many curves and occlusions that require annotating multiple polygons. The final image is a spatula. This task verified a crowdworker's ability to correctly identify and annotate multiple holes that can arise within the salient object.

Figure 4. A screenshot of our annotation task interface.

Figure 5. The five images used for the worker qualification task. Each was selected to demonstrate a challenging annotation scenario such as complex boundaries, holes, and occlusions.
After crowdworkers annotated each qualification image, the backend code of our website checked if their annotation was sufficiently similar to the GT annotation (i.e., IoU similarity of at least 0.90). Crowdworkers could only proceed to the following image after they obtained an IoU ≥ 0.90 on the current image. Crowdworkers obtaining an IoU ≥ 0.90 on all five qualification assessment images on a per-image basis gave us substantial confidence that they would be able to successfully handle complex and challenging outlier cases within the original VizWiz Dataset.7
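The per-image acceptance check amounts to thresholding intersection-over-union between the worker's mask and the ground truth. The snippet below is a minimal NumPy sketch of that check, assuming annotations have been rasterized to binary masks; it is not the website's actual backend code.

```python
import numpy as np

def iou(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Intersection-over-union between two binary masks of the same shape."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as a perfect match
    return float(np.logical_and(a, b).sum() / union)

def passes_qualification(pred: np.ndarray, gt: np.ndarray, thresh: float = 0.90) -> bool:
    """A worker advances only when the per-image IoU meets the threshold."""
    return iou(pred, gt) >= thresh

# Toy example: prediction misses a 4-pixel-wide strip of an 80x80 ground-truth square.
gt = np.zeros((100, 100), dtype=bool)
gt[10:90, 10:90] = True
pred = np.zeros_like(gt)
pred[10:90, 14:90] = True
print(round(iou(pred, gt), 3))  # 0.95
print(passes_qualification(pred, gt))  # True
```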
A.3. Analysis of Workers' Annotation Differences

We collected a larger number of redundant annotations per image for a random subset of images to better explore when and why annotation differences are observed from different workers. Specifically, for this analysis, we collected four annotations as opposed to two for a subset of 1,237 images. Examples of the redundant annotations collected per image are shown in Figure 6. The first example (i.e., row 1 of Figure 6) highlights that annotation differences can stem from challenging annotation scenarios where objects contain holes (e.g., in the mug handle) or are occluded (e.g., by the straw).
For instance, the hole was not annotated in the third annotation. Additionally, only the fourth annotation captured the occlusion that arises from the straw. The second example (i.e., row 2 of Figure 6) highlights that annotation differences can stem from ambiguity regarding what is the salient object. As shown, the first two annotations flag the image as lacking a foreground object, the third annotation identifies the child holding the cup as the salient object, and the fourth identifies the child's cup as the salient object. The third example (i.e., row 3 of Figure 6) highlights that annotation differences also can arise for objects that simultaneously have complex boundaries and holes. In annotation one, the worker did not fully annotate the salient object, cutting out part of the object from the annotation. Only the third and fourth annotations accurately annotate all holes that are present in the salient object's boundary while also having tight boundaries in the annotation.

In summary, we found occlusions, holes, and saliency ambiguity to be the primary factors contributing to annotation differences. In the case of occlusions, worker differences can arise when deciding whether to include objects that are a composite part of the salient object. In the case of holes, annotation differences can arise regarding which holes to annotate. Last, we found that it can be ambiguous as to which object is the most salient.

7Some crowdworkers did not pass the qualification assessment due to time constraints. In these cases, crowdworkers would contact us with the images they annotated. If we were confident in their annotation abilities, we manually added these crowdworkers to the qualified worker pool.

Figure 6. Example annotations from our random subset where we collected four annotations as opposed to two. We find worker differences primarily occur in challenging annotation scenarios such as holes, occlusions, complex boundaries, and object saliency.

Figure 7. Example ground truth annotations from the HRSOD dataset, which exemplify that salient objects are usually not centered in the image. This is a common trend in the dataset.
B. Dataset Analysis

B.1. VizWiz-SO vs. Existing Datasets

We present finer-grained details about typical image resolutions for the different salient object detection datasets to expand upon discussions in the main paper about how VizWiz-SO relates to other datasets. Specifically, we report the median image width (Med. W), median image height (Med. H), and whether the dataset supports high-resolution images (High Res.), as defined by whether the median image height and width are greater than 1080 and 1920 respectively. Results are reported in Table 4. We observe that our new dataset, overall, provides higher resolution images than most datasets.
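As an illustration of the high-resolution criterion just described, the check can be sketched as below. This is our own minimal sketch, not the paper’s code: the function name `supports_high_res` and the choice of a strict comparison are assumptions based on the wording above.

```python
import numpy as np

def supports_high_res(widths, heights, min_w=1920, min_h=1080):
    """Flag a dataset as supporting high resolution when its median
    image height exceeds 1080 and its median image width exceeds 1920,
    following the definition stated in the text."""
    widths = np.asarray(widths)
    heights = np.asarray(heights)
    # Median over per-image dimensions, compared against the 1080p thresholds.
    return bool(np.median(heights) > min_h and np.median(widths) > min_w)
```

For example, a dataset with typical dimensions around 2704 x 3264 would be flagged as high resolution, while one with typical dimensions around 300 x 400 would not.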
            DAVIS-S [48]  PASCAL-S [29]  HR [48]  ECSSD [45]  DUT-O [46]  UH [44]  DUTS [41]  Ours
Med. W      1080          375            2704     300         300         3612     300        1296
Med. H      1920          500            3264     400         400         5000     400        968
High Res.   ✓             ✗              ✓        ✗           ✗           ✓        ✗          ✓

Table 4. Characterization of our VizWiz-SO dataset and seven existing salient object detection datasets with respect to metrics showcasing the image resolution. This includes median image width (“Med. W”), median image height (“Med. H”), and a flag indicating if high resolution (“High Res.”). (HR=HRSOD; UH=UHRSD)

We also expand on a surprising finding reported in our main paper that the HRSOD dataset is the only one for which salient objects do not occupy the typical center positions. To do so, we visualize the ground truth masks of some non-centered objects in Figure 7. In row one, we see that objects are horizontally distributed to the left and right positions of the images. Similarly, we observe in row two that the salient objects are vertically distributed to the top and bottom positions of the images.

C. Algorithmic Benchmarking

C.1. Experimental Design

We compute the five metrics used in the benchmarking section using the following definitions:

Mean Absolute Error [35] represents the average absolute difference between the predicted saliency map and its ground truth per pixel.
It can be given as:

MAE = \frac{1}{H \times W} \sum_{r=1}^{H} \sum_{c=1}^{W} |pred(r, c) - gt(r, c)| \quad (1)

where pred represents the predicted saliency map, gt represents the ground truth, (H, W) represents the height and width of the image, and (r, c) represents the pixel coordinates for the given image.

Structure Measure [19] is used to measure the similarity between the predicted saliency map and the ground truth. Since we convert both the predictions and ground truths into the [0, 1] range, we apply the formula directly to the predictions and maps. It can be defined as follows:

S_m = (1 - \alpha) S_r + \alpha S_o \quad (2)

where S_r is defined as the region-aware similarity score, S_o is defined as the object-aware similarity score, and \alpha represents the weight used to sum the two values. We set \alpha = 0.5, thereby ensuring equal contribution from both the region-aware and object-aware scores.

F-Measure [2] represents the precision and recall ratio for the given prediction.
It can be represented as:

F_m = \frac{(1 + \beta^2) \times Precision \times Recall}{\beta^2 \times Precision + Recall} \quad (3)

Here Precision = \frac{TP}{TP + FP} and Recall = \frac{TP}{TP + FN}, computed over the pixels of the entire prediction image. We set \beta^2 = 0.3 and report the average of all F-measures as F_m, similar to previous works.

Enhanced Alignment Measure [20] is used as the metric to measure the effectiveness of the saliency prediction against the ground truth. It captures pixel-level matching information and image-level statistics in one single metric by means of an enhanced alignment matrix \phi. It is defined as follows:

E_m = \frac{1}{H \times W} \sum_{r=1}^{H} \sum_{c=1}^{W} \phi_{FM}(r, c) \quad (4)

where \phi_{FM} represents the enhanced alignment matrix for the foreground map, (H, W) represents the height and width of the image, and (r, c) represents the pixel coordinates for the given image.

Intersection over Union, also known as the Jaccard Index, is used to determine the similarity between sample sets.
In this case it captures the overlap between the ground truth and the prediction map of the salient object. We convert the predictions into binary maps and compute the Jaccard Index over the two classes. It can be defined as follows:

IoU = J(A, B) = \frac{|A \cap B|}{|A \cup B|} \quad (5)

where A and B are images of the same size, consisting of integer class values {0, 1}.

We further show how the models performed on VizWiz-SO with qualitative examples shown in Figure 8. These examples feature a variety of challenges we observed for the models, such as a large salient object, less complex boundaries, lack of text on the salient object, and lower quality images. For example, we observe how the models fail to perform adequately in identifying larger salient objects (rows 4 and 5). We also observe the models perform better when salient objects contain text (rows 1 and 2) versus lack text (rows 5 and 6).
Further, we see models perform worse for images that are lower quality (rows 3, 4, and 5). Our fine-grained analysis in the main paper suggests each of these factors offers unique challenges for modern salient object detection models.

Figure 8. Examples of difficult images present in VizWiz-SO, with characteristics such as high coverage ratio, presence of text, less complex boundaries, and lower image quality. We show how the seven models perform on these cases as compared to the human annotation (GT=Ground Truth). We see that models such as PFSNet [33], DIS [37], and F3Net [43] do not always give us the correct salient objects, or sometimes no predictions at all. We also notice that VST [30] usually predicts salient objects with better accuracy compared to other models, but also suffers from not detecting the correct salient object.
[Figure 8 columns, left to right: Image, GT, BASNet, F3Net, U2Net, VST, PFSNet, PGNet, DIS]
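To make the closed-form metrics concrete, below is a minimal NumPy sketch of the per-image MAE, F-measure, and IoU from Eqs. (1), (3), and (5). This is our own illustration rather than the paper’s evaluation code; it assumes predictions have already been scaled to [0, 1] (and binarized for the F-measure and IoU). The structure measure (2) and enhanced alignment measure (4) are omitted, since they depend on the region/object similarity scores and the alignment matrix defined in [19] and [20].

```python
import numpy as np

def mae(pred, gt):
    """Eq. (1): mean absolute per-pixel error between a predicted
    saliency map and its ground truth, both float arrays in [0, 1]."""
    pred = np.asarray(pred, dtype=np.float64)
    gt = np.asarray(gt, dtype=np.float64)
    return np.abs(pred - gt).mean()

def f_measure(pred, gt, beta2=0.3):
    """Eq. (3): F-measure over binarized masks, with beta^2 = 0.3."""
    pred = np.asarray(pred).astype(bool)
    gt = np.asarray(gt).astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    if tp == 0:
        return 0.0  # no true positives: precision or recall is zero
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return (1 + beta2) * precision * recall / (beta2 * precision + recall)

def iou(pred, gt):
    """Eq. (5): Jaccard index between binarized masks (foreground class)."""
    pred = np.asarray(pred).astype(bool)
    gt = np.asarray(gt).astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: count as perfect agreement
    return np.logical_and(pred, gt).sum() / union
```

In a full evaluation loop, each metric would be computed per image and then averaged over the test set.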