Does Judgment Feedback Affect Visual-Field Superiority as a Function of Stimulus Structure and Content?

Visual-field advantage was examined as a function of presentation mode (unilateral, bilateral), stimulus structure (word, face), and stimulus content (emotional, neutral) under two conditions, with and without judgment feedback. A split visual-field paradigm was used, with recognition accuracy and response latency as the dependent variables. Stimuli were recognized significantly better in the left visual-field than in the right visual-field. Unilaterally presented stimuli were recognized significantly better than bilaterally presented stimuli. Emotional content was recognized more accurately than neutral content. Multivariate ANOVA indicated that both words and faces were recognized better without judgment feedback than with it; however, stimuli were judged with significantly shorter response latency following judgment feedback.


Introduction
Split visual-field studies have employed different methodological variations by changing stimulus structure. It has been found that linguistic material is responded to more efficiently and more quickly in the right visual-field (RVF), that is, the left hemisphere (LH), whereas facial stimuli are perceived more distinctly and more quickly in the left visual-field (LVF), that is, the right hemisphere (RH). Recent literature shows an RH (LVF) advantage for the perception of emotional expression and an RVF advantage for the perception of neutral information. With respect to valence, the RH is held responsible for negative valence and the LH for positive valence. Disputes remain regarding the interactive effect of stimulus structure and stimulus content. Therefore, the question remains whether emotional content interacting with a linguistic component (words) confounds RH superiority to some extent. Atchley et al. (2003) documented that the RH is preferentially sensitive to the emotional context of stimuli. Unilaterally, rather than bilaterally, presented stimuli were recognized significantly better (Basu & Mandal, 2004). Hines (1975) argued that bilateral presentation gives an independent assessment of the abilities of the two hemispheres, whereas unilateral presentation gives a measure of the information lost during inter-hemispheric transfer. Recent studies indicate that the advantage of unilateral presentation might not imply attentional selectivity: sudden presentation of a stimulus in an unattended hemifield might automatically capture attention in an empty visual-field. Another important factor is judgment feedback (JF). Judgment feedback refers to knowledge of results, which has a possible effect on hemispheric dominance. Recognition of iconic memory takes place at the sensory level before the information reaches the brain. Whether JF reduces processing difficulty by constantly changing the behavioral strategy of the receiver remains an open question. The purpose of the present experiment, therefore, was to examine the effect of presentation mode on the visual-field advantage as elicited by stimulus structure and content with respect to judgment feedback.
It was hypothesized that (a) the visual-field advantage would be significantly greater for stimulus structure than for stimulus content;
(b) the effect would be significantly greater for unilateral than for bilateral presentation of stimuli;
(c) judgment feedback would be significantly more beneficial than no judgment feedback.

Tools
The experiment was run by a Java program on a personal computer. Stimuli were shown on the monitor and responses were saved in a database.

Design
The design of the experiment involving JF was a 2 (Visual-field: left visual-field, right visual-field) x 2 (Presentation mode: unilateral, bilateral) x 2 (Stimulus structure: word, face) x 2 (Stimulus content: neutral, emotional) x 2 (Judgment feedback: with JF, without JF) mixed factorial design, with visual-field, presentation mode, stimulus structure, and stimulus content as within-subject factors and judgment feedback as the between-subjects factor.
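The structure of this design can be made concrete with a short sketch. The level names below are taken from the design description; the class and method names are hypothetical, not part of the original Java program.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: enumerate the cells of the mixed factorial design.
public class DesignCells {
    static final String[] FEEDBACK     = {"withJF", "withoutJF"};    // between-subjects
    static final String[] VISUAL_FIELD = {"LVF", "RVF"};             // within-subject
    static final String[] PRESENTATION = {"unilateral", "bilateral"};
    static final String[] STRUCTURE    = {"word", "face"};
    static final String[] CONTENT      = {"neutral", "emotional"};

    // Crossing all five two-level factors yields 32 cells in total;
    // each subject contributes to the 16 within-subject cells of one JF group.
    public static List<String> cells() {
        List<String> out = new ArrayList<>();
        for (String jf : FEEDBACK)
            for (String vf : VISUAL_FIELD)
                for (String pm : PRESENTATION)
                    for (String ss : STRUCTURE)
                        for (String sc : CONTENT)
                            out.add(jf + "/" + vf + "/" + pm + "/" + ss + "/" + sc);
        return out;
    }
}
```

Because judgment feedback is between-subjects, the 32 cells split into two groups of 16 within-subject conditions.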

Sample
Participants were 320 right-handed engineering students (N = 320; with judgment feedback = 160, 80 male, 80 female; without judgment feedback = 160, 80 male, 80 female; mean age = 21.6 yrs, SD = 2.3 yrs; mean education = 16.5 yrs) from the Indian Institute of Technology, Kharagpur, India. Since students at I.I.T. come from different states, the sample was reasonably representative of the normal population. All subjects were right-handed as measured by a 20-item handedness questionnaire (Mandal, Pandey, Singh, & Asthana, 1992). None had a visual-field defect, and all had a 'left to right' reading habit. Left-handed subjects were excluded, since their lateralization pattern has been found to differ from that of right-handed subjects (Bryden, 1982). Subjects were chosen randomly.

Procedure
From a pool of standard photographs (Mandal, 1987), 48 facial expressions were taken for the present study: 6 expressions for each of the 6 universal emotions (happy, sad, fear, anger, surprise, disgust; 6 x 6 = 36) and 12 expressions of a neutral state. Similarly, the same number of words was selected: 36 emotion words representing the six universal emotions and 12 neutral words. The neutral words were prepared such that no word exceeded 5 letters.
Stimuli were prepared for unilateral and bilateral presentation. Twelve sequences were used, each consisting of 12 trials (total trials = 144). The stimuli were counterbalanced for structure (face, word), content (emotional, neutral), and visual-field (left, right). Of the twelve trials in each sequence, six target stimuli were emotional (3 each in the RVF and LVF) and six were neutral (3 each in the RVF and LVF). Using a Boolean-array method, it was ensured that no trial appeared twice in succession within the same sequence, and the stimuli were presented in randomized order.
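The Boolean-array randomization can be sketched as follows. This is an illustrative reconstruction under the assumption that the flag array marks trial indices already scheduled, so that each of the 12 trials occurs exactly once per sequence and therefore cannot recur in succession; the class and method names are hypothetical.

```java
import java.util.Random;

// Hypothetical sketch of a Boolean-array shuffle for one 12-trial sequence.
public class SequenceBuilder {
    public static int[] buildSequence(int nTrials, Random rng) {
        boolean[] used = new boolean[nTrials]; // one flag per trial index
        int[] order = new int[nTrials];
        for (int pos = 0; pos < nTrials; pos++) {
            int candidate;
            do {
                candidate = rng.nextInt(nTrials); // draw a random trial index
            } while (used[candidate]);            // reject indices already placed
            used[candidate] = true;               // mark this trial as scheduled
            order[pos] = candidate;
        }
        return order;
    }
}
```

Each call produces a random permutation of the trial indices, so no trial is presented twice within a sequence.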
Recognition accuracy was operationalized as a condition in which subjects had to respond in terms of stimulus structure as well as content, and in addition had to match the target stimulus with a set of test stimuli. Response latency was defined as the time between stimulus onset and the response. Subjects were first asked to fix their gaze at the center of the 17-inch computer screen; the target stimulus then appeared for 180 ms after an interval of 75 ms. The line from the center of the window to the top left/right corner of the image subtended an angle of 55° with the horizontal. Subjects responded with the arrow keys of the computer keyboard (up: emotional word; down: neutral word; left: emotional face; right: neutral face). Twenty-four practice trials were administered before the actual experiment began, and all subjects were asked to use the right index finger for all responses. The dependent variables were recognition accuracy (RA) and response latency (RL). In the first step, subjects identified the stimulus category, structure x content (e.g., emotional word, emotional face, neutral word, neutral face). In the second step, a second window appeared on the computer screen with 6 photographs or 6 words belonging to the identified category, but only if the stimulus was recognized correctly in the first step; in a completely different second set, the second window appeared irrespective of the correctness of the response. The subject had to press a numbered key (1-6) to identify the target stimulus from the pool of 6 test stimuli. The RA of this matching task was stored, but the RL of matching the target stimulus with the test stimuli was not. The computer recorded RL and RA in a database.
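The trial timing and the latency measure can be summarized in a minimal sketch. The constants reflect the intervals stated above; the class and method names are illustrative, not taken from the original Java program.

```java
// Hypothetical sketch of one trial's timing parameters and RL computation.
public class TrialTiming {
    static final long INTERVAL_MS = 75;   // blank interval before the target
    static final long EXPOSURE_MS = 180;  // target exposure duration

    // RL is defined as the time from stimulus onset to the key press.
    public static long responseLatency(long stimulusOnsetMs, long keyPressMs) {
        return keyPressMs - stimulusOnsetMs;
    }
}
```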

Results
Visual-field advantage was examined as a function of stimulus structure (verbal, nonverbal), stimulus content (emotional, neutral), the interaction of these factors, presentation mode (unilateral, bilateral), and judgment feedback.
Findings were analyzed for the main effect of judgment feedback, with stimulus structure, stimulus content, presentation mode, and visual-field as within-subject factors.
Table 1 shows the RA and RL data of the participants. The data were analyzed with a 2 (Visual-field: left visual-field, right visual-field) x 2 (Presentation mode: unilateral, bilateral) x 2 (Stimulus structure: face, word) x 2 (Stimulus content: emotional, neutral) x 2 (JF: with, without) mixed factorial design. JF was treated as the between-subjects factor.
3.1.1 Three-way interactions and two-way interaction breakdowns of JF (RA)
The three-way interaction of Visual-Field x Presentation Mode x Judgment Feedback was significant, F = 13.38, df = 1, p < .001. It indicated that RA without feedback was significantly higher than with JF in the RVF during bilateral presentation (see Figure 1).
The two-way interaction of Visual-Field x Presentation Mode was significant, F = 231.25, df = 1, p < .001. RA for the bilateral presentation mode was significantly lower in the RVF than in the LVF.
The two-way interaction of Presentation Mode x Judgment Feedback was significant, F = 20.26, df = 1, p < .001. It indicated that subjects had higher RA without feedback (mean = 11.9) than with JF (mean = 10.9) during bilateral presentation.
The interaction of Visual-Field x Judgment Feedback was also significant, F = 13.98, df = 1, p < .001. RA without feedback in the LVF (mean = 16.8) was higher than with JF in the LVF (mean = 14.9), compared with the corresponding RVF means (without feedback = 13.2, with JF = 12.2).
The three-way interaction of Presentation Mode x Stimulus Structure x Judgment Feedback showed that RA without feedback was significantly higher than with JF for face recognition during bilateral presentation, F = 34.44, df = 1, p < .001 (see Figure 2).
The two-way interaction of Stimulus Structure x Judgment Feedback was significant, F = 34.58, df = 1, p < .001. RA for faces without feedback (mean = 13.5) was significantly higher than with JF (mean = 12.6).
The two-way interaction of Presentation Mode x Judgment Feedback was significant, as reported earlier.
The three-way interaction of Stimulus Content x Presentation Mode x Judgment Feedback was significant, F = 9.20, df = 1, p < .003 (see Figure 3). The RA score without feedback was significantly higher than with JF for neutral content during bilateral presentation.
The two-way interaction of Stimulus Content x Presentation Mode was significant, F = 69.98, df = 1, p < .001. In bilateral presentation, the RA score was significantly lower for neutral content (mean = 10.8) than for emotional content (mean = 11.9).
The two-way interaction of Stimulus Content x Judgment Feedback was significant, F = 20.30, df = 1, p < .001. RA without feedback (mean = 13.8) was significantly higher than with JF.
The two-way interaction of Judgment Feedback x Presentation Mode was significant, as reported earlier.

Four-way interactions and two-way interaction breakdowns of JF (RA)
The four-way interaction of Visual-Field x Stimulus Structure x Stimulus Content x Judgment Feedback was significant, F = 8.17, df = 1, p = .005.
The two-way interaction of Visual-Field x Stimulus Structure was also significant, F = 33.05, df = 1, p < .001. Words in the LVF (mean = 15.74) were recognized more accurately than faces in the LVF (mean = 13.97).
The two-way interaction of Stimulus Structure x Stimulus Content was also significant, F = 27.96, df = 1, p < .001. Emotional words (mean = 14.75) were recognized significantly better than emotional faces (mean = 13.56).
The two-way interaction of Stimulus Content x Judgment Feedback was discussed earlier.
The two-way interaction of Visual-Field x Stimulus Content was significant, F = 209.73, df = 1, p < .001. Emotional content was recognized more accurately in the LVF (mean = 14.92) than in the RVF (mean = 13.39); neutral content was likewise recognized more accurately in the LVF (mean = 14.79) than in the RVF (mean = 12.05).
The four-way interaction of Visual-Field x Presentation Mode x Stimulus Structure x Stimulus Content was also significant for accuracy, F = 11.56, df = 1, p < .001.
The two way interactions of Stimulus Content x Presentation Mode, Visual-Field x Stimulus Structure, and Stimulus Structure x Stimulus Content were discussed earlier.
3.2.1 Three-way interactions and two-way interaction breakdowns of JF (RL)
RL scores showed that the three-way interaction of Visual-Field x Stimulus Structure x Judgment Feedback was significant, F = 8.63, df = 1, p = .004 (see Figure 4). The RL for words in the LVF with JF (976.59 msec) was shorter than without feedback (1443.88 msec). Similarly, the RL for words in the RVF with JF (994.17 msec) was shorter than without feedback (1453.30 msec).
The three-way interaction of Visual-Field x Stimulus Content x Judgment Feedback was significant, F = 12.52, df = 1, p < .001 (see Figure 5). Relative performance without feedback was slightly better than with JF for neutral content in the LVF.
The two-way interaction of Visual-Field x Stimulus Content confirmed this finding, F = 9.52, df = 1, p = .002. A significant difference was observed between neutral and emotional content in the LVF: emotional content in the LVF (mean = 1242.73 msec) produced shorter RL than neutral content in the LVF (mean = 1280.13 msec).
The three-way interaction of Stimulus Structure x Stimulus Content x Judgment Feedback showed that neutral faces took the longest RL both with and without JF, F = 8.42, df = 1, p = .004 (see Figure 6).
Since the two-way interactions of Stimulus Structure x Stimulus Content, Stimulus Content x Judgment Feedback, and Judgment Feedback x Stimulus Structure were not significant, they were not considered further.

Four-way interactions and two-way interaction breakdowns of JF (RL)
The four-way interaction of Visual-Field x Presentation Mode x Stimulus Structure x Judgment Feedback was significant, F = 21.31, df = 1, p < .001.
The interaction of Visual-Field x Stimulus Structure was significant, F = 13.24, df = 1, p < .001. Faces took longer RL in the LVF (mean = 1312.6 msec) than words in the LVF (mean = 1210.2 msec); the corresponding RVF means were 1283.2 msec for faces and 1223.74 msec for words.
The four-way interaction of Visual-Field x Stimulus Structure x Stimulus Content x Judgment Feedback was also significant, F = 14.94, df = 1, p < .001.

Discussion
The experiment showed that (1) the main effects of visual-field, presentation mode, stimulus structure, and stimulus content on RA were significant, and (2) the main effects of presentation mode and stimulus structure significantly affected RL.
Stimuli were recognized significantly better in the LVF than in the RVF. This finding is in line with Gilbert and Bakan (1973), who showed that the tendency to process information is greater in the LVF. Hillard (1973) also found LVF superiority using black-and-white photographs. The finding is further supported by Coronel et al. (1999), who found LVF superiority in the perception of stimuli in the majority of right-handed subjects, expressed as a shorter response time. Schweinberger et al. (2003) likewise found RH superiority for unfamiliar faces, measured as an LVF and a both-visual-fields advantage in accurately recognizing the expressions of unfamiliar faces.
Results corroborated that words were recognized with significantly greater accuracy than faces, suggesting that the lexical decision task in this study was cognitively less demanding than face recognition (Basu & Mandal, 2004).
Moreover, emotional content was recognized more accurately than neutral content. Nague and Moscovitch (2002) substantiated this finding: they found that explicit memory for emotional words is more dependent on the RH, whereas perception of emotional and non-emotional words is more dependent on the LH. The finding is also in line with Compton et al. (2005), who reported that emotional stimuli get special priority in information processing; they found that the across-field advantage is greater for angry and happy faces than for neutral faces. The result was reflected in both the RA and RL measures.
The unilateral advantage is consistent with earlier findings (Banich & Belger, 1990; Heinze et al., 1990; Luck et al., 1990). Banich and Belger (1990) showed a unilateral advantage in a physical matching task compared with bilateral presentation. The present finding confirms the proposition of Hines (1975) that unilateral, in comparison to bilateral, presentation of stimuli enhances RA.
The finding supports the behavioral data showing that subjects respond faster to unilateral than to bilateral stimuli (Lange et al., 1999). Lange et al. showed that event-related potential (ERP) effects of visuospatial attention occur with unilateral presentation, whereas an attention-related posterior contralateral positivity was not observed. The result can also be interpreted on the basis that a random sequence of single stimuli might automatically draw attention (Luck et al., 1990). Their task was to search for a target letter among distracters; according to them, reorienting attention after each irrelevant stimulus during bilateral presentation inhibits attentional selection.
The hypothesis that JF would elicit greater RA and shorter RL in the visual-field advantage, compared with no judgment feedback, was partially supported. RA scores without feedback were much higher than with JF, contradicting the hypothesis. However, RL without feedback was significantly longer than with JF, corroborating the hypothesis. Thus, error rates increased with JF along with quicker perception.
That JF does not enhance RA can be explained by a bias systematically embedded in the visual system, which JF failed to alter. JF would probably play a role in changing behavioral strategy only if performance were limited by error. The rationale behind the hypothesis was that recognition of iconic memory takes place at the sensory level before the information reaches the brain, and JF would therefore change the behavioral strategy. But the systematic bias already embedded in the visual system did not allow judgment feedback to have this effect.
JF changes behavioral strategy at the sensory-memory level, whereas face recognition involves analytical components. Face recognition thus requires processing information at a deeper level with detailed task analysis, and faces were recognized better without feedback.
One interesting finding is that error rates increased with JF along with quicker perception, giving rise to an open debate. JF may be viewed as example-based learning in which counterexamples are presented.

Conclusion
RA scores without feedback were much higher than with JF. Interestingly, RL without feedback was also significantly longer than with JF. Thus, although stimuli were perceived more accurately without feedback, RL was long; with JF, stimuli were perceived less accurately but with shorter RL. Error rates therefore increased with JF along with quicker perception.
Results showed that JF elicited greater accuracy for (a) words, with a more pronounced effect in the unilateral presentation mode, and (b) emotional content in the unilateral presentation mode. JF elicited shorter RL for emotional words.
The present study could not explain why the without-feedback condition elicited better recognition accuracy than JF. A new experiment involving example-based learning and counterexamples can be undertaken as future work.

Implications of this study for future research
These issues may be taken into consideration in the future, and a more sophisticated split visual-field task can be developed that gives due importance to both stimulus structure and content along with stimulus valence. In addition, other central measures, such as dichotic listening, can be used to assess the relationship of hemispheric dominance to judgment feedback.

Table 1 .
Recognition Accuracy and Response Latency: Means and Standard Deviations for Visual-Field, Stimulus Structure, Stimulus Content, and Presentation Mode.
*Maximum possible score per cell: 18 (for accuracy).
*Response latency is for correct responses only.

Table 2 .
Summary ANOVA with Judgment Feedback: Accuracy as the Dependent Measure. Tests of Between-Subjects Effects (Judgment Feedback, Accuracy).