Date on Senior Honors Thesis

12-2024

Document Type

Senior Honors Thesis

Department

Psychological and Brain Sciences

Author's Keywords

Visual Selective Attention; Feature Integration; Conjunction Search

Abstract

Feature integration and visual selective attention have been studied together to understand how we attend to different objects, often through conjunction search tasks. Conjunction search tasks involve searching for an object defined by multiple visual features in the presence of distractors. However, few studies have examined how different visual feature combinations impact search performance. Motivated by the functional and structural organization of the brain, this study investigated how differing feature combinations that theoretically place varying degrees of demand on feature integration impact attention efficiency. In two experiments, I explored reaction times (RT) and target detection sensitivity (d') across varying set sizes and visual feature conditions thought to systematically target distinct visual processing pathways in the brain (i.e., color-motion, luminance-motion, and shape-color). My findings revealed that, across both experiments, RT increased with set size, indicating that search takes longer when more distractors are present. In Experiment 1, color-motion targets yielded slower RTs than luminance-motion targets. In Experiment 2, color-motion was less efficient than luminance-motion in target-absent trials only, and the addition of motion as a relevant visual feature was associated with decreased search efficiency. These findings provide a better understanding of why some objects may be easier to find or harder to ignore, and allow us to place certain visual selective attention abilities along a spectrum of efficiency depending on which visual features are the target of attention. Future studies should use neuroimaging techniques to understand the role of neural connections in the integration of visual features within and across visual pathways.

Lay Summary

Every day, we search for various objects, whether it is our keys in a junk drawer or our phone on a messy kitchen table. Our ability to locate these items relies on how we process and combine the visual features that comprise them; for example, the shape and color of a phone. Different visual features are processed in different brain regions. For example, color and shape are processed in brain regions closer to one another, but farther away from regions that process luminance and motion. With this in mind, I investigated how our ability to find an object among distractors is impacted when that target object is defined by different visual feature combinations. To test this, I used a conjunction search task in which participants searched for a color-motion, luminance-motion, or shape-color target. As expected, I found that reaction time increased as the number of distractors increased. I also found that it took longer to locate color-motion objects than luminance-motion or shape-color objects. Additionally, finding moving objects took longer than finding stationary objects. These findings give us a better understanding of why some objects may be easier to find than others and how motion plays a role in our search for objects.
