Furthermore, this reaction time improvement increased following a sequence switch in each block (Figure 2D). We recorded the activity of 553 (230 in monkey 1, 323 in monkey 2) neurons in the lateral prefrontal cortex (lPFC) and 453 (210 in monkey 1, 243 in monkey 2) neurons in the dorsal striatum (dSTR), predominantly in the caudate nucleus. Neural activity was recorded simultaneously from both areas in all sessions. All reported effects were consistent in both animals, so the data were pooled. We examined activity relative to five factors: the task condition, the sequence executed in each trial, the specific movement being executed, the color bias, and learning-related action value, which was estimated using a reinforcement learning algorithm (see Experimental Procedures). Sequence and learning effects were less well defined in the random sets, but because of the consistent task structure we analyzed them as an internal control. We began by analyzing activity using an omnibus ANOVA across conditions, and then split the data by task condition to examine more specific hypotheses.
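As a rough illustration of this kind of per-neuron analysis, the sketch below fits a factorial model of firing rate against the five task factors and then refits it within each task condition. The column names, factor coding, and use of statsmodels are assumptions for illustration, not the authors' actual code.

```python
# Minimal sketch of a per-neuron omnibus ANOVA (hypothetical variable names;
# the published analysis may differ in factor coding and time windows).
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

def omnibus_anova(trials: pd.DataFrame) -> pd.DataFrame:
    """trials: one row per trial with the neuron's firing rate and task factors.

    Assumed columns: 'rate', 'condition' (fixed vs. random), 'sequence',
    'movement', 'color_bias', 'action_value'.
    """
    model = ols(
        "rate ~ C(condition) + C(sequence) + C(movement) + color_bias + action_value",
        data=trials,
    ).fit()
    return anova_lm(model, typ=2)  # type II sums of squares

def anova_by_condition(trials: pd.DataFrame) -> dict:
    """Refit the model separately on fixed and random trials for condition-specific tests."""
    return {
        cond: anova_lm(
            ols("rate ~ C(sequence) + C(movement) + color_bias + action_value", data=sub).fit(),
            typ=2,
        )
        for cond, sub in trials.groupby("condition")
    }
```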

Neurons were found that were related to all variables of interest. For example, some neurons had responses that depended on the specific movement being executed, but which also depended on the task condition (Figure 3). This lPFC neuron tended to respond strongly to the first and last movement of all sequences in both the random and the fixed condition, as has been seen in previous studies (Fujii and Graybiel, 2003).

However, it also had a robust response to the second movement in sequence one and sequence five, but only in the fixed condition. We also found neurons related to the color bias. For example, in the random sets (Figure 4A), this dSTR neuron had a very strong baseline firing rate that was additionally modulated by the color bias (Figure 4B), an effect that became statistically significant just after movement onset (Figure 4C).

Sequence selection was also modeled using a reinforcement learning algorithm (see Experimental Procedures). This allowed us to track the animal's estimate of the value of each eye movement, movement by movement and trial by trial. For example, in the fixed condition, following a switch from a block in which sequence seven had been correct to a block in which sequence two was correct, the animal continued trying to execute sequence seven in the first trial, and the value estimates reflected this. The first execution of the leftward movement had a high value (1.0), as this had been correct in the previous block (Figure 5A; switch + 0). After this point, the animal still believed that the sequence had not switched and therefore it executed a downward movement for the second movement. As the animal would assume this was correct, this movement would also have a high value (1.0).
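The exact model is specified in the Experimental Procedures; as a minimal sketch, a standard delta-rule update of per-movement values reproduces the pattern described above, where a movement that was correct throughout the previous block enters the new block with a value near 1.0 and is driven down by error feedback. The variable names and learning rate below are illustrative assumptions, not the authors' fitted parameters.

```python
# Illustrative delta-rule update of movement values within a sequence
# (a sketch only; the paper's actual reinforcement learning model may
# differ in form and in its fitted parameters).
from collections import defaultdict

ALPHA = 0.3  # learning rate (assumed, not the fitted value)

# value[(position, movement)] -> current estimate that this movement is correct
value = defaultdict(float)

def update(position: int, movement: str, reward: float) -> float:
    """Delta-rule update after feedback on one movement of the sequence."""
    v = value[(position, movement)]
    value[(position, movement)] = v + ALPHA * (reward - v)
    return value[(position, movement)]

# Example mirroring the block switch described above: the leftward first
# movement was correct throughout the previous block, so its value is ~1.0
# when the new block begins; an error on the first post-switch trial then
# pushes that value downward.
value[(1, "left")] = 1.0          # carried over from the previous block
update(1, "left", reward=0.0)     # first trial after the switch: movement now incorrect
print(value[(1, "left")])         # 0.7 with ALPHA = 0.3
```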
