The cognitive brown bag on Wednesday, April 3, will feature Junha Chang (https://www.umass.edu/pbs/people/junha-chang). As always, the talk will be at noon in Tobin 521B. Title and abstract are below. All are welcome.
Do Observers Integrate Separate Features to Form an Integrated Target for Better Search Guidance?
In most search tasks, observers receive target information as a cue before the search array appears. It is well established that observers actively use the cue in the form it is given, and that an exact target cue improves search performance. However, it is unclear whether observers can voluntarily integrate separate pieces of target feature information to match the predicted target form and earn the same benefits. To test this, we compared behavioral data and the amplitude of the contralateral delay activity (CDA), which indexes the number of representations held in visual working memory (VWM), across two cue conditions. In the split cue condition, participants viewed two separate cues, one for each target feature (i.e., a colored rectangle and an oriented bar), and were instructed to look for a target defined by the conjunction of the two cued features in the following search array. In the integrated cue condition, participants viewed two identical conjunction targets as cues (i.e., two colored, oriented bars). If participants integrate the two target features into a single object in the split cue condition, RTs and CDA amplitudes should be similar across the two conditions. In contrast, if participants maintain the two target features separately in the split cue condition, RTs should be longer and CDA amplitudes larger in the split cue condition than in the integrated cue condition. So far, we have found mixed results: longer RTs in the split cue condition, along with numerically, but not significantly, larger CDA amplitudes. This pattern may suggest that participants maintained an integrated target representation in VWM but guided attention by the individual features.