AY Psychology Different Styles of Corrective Feedback Discussion
Question Description
I’m working on a psychology discussion question and need a sample draft to help me learn.
What are the different styles of corrective feedback? What do you prefer and why?
Unformatted Attachment Preview
An Analysis of Feedback from a Behavior Analytic Perspective
Kathleen A. Mangiapanello & Nancy S. Hemmes
The Behavior Analyst (2015) 38:51–75. DOI: 10.1007/s40614-014-0026-x. Published online: 14 January 2015. © Association for Behavior Analysis International 2015.
Author affiliations: The Graduate School, City University of New York, New York, NY, USA; Department of Psychology, Queens College, 65-30 Kissena Boulevard, Flushing, NY 11367, USA. Correspondence: kathleen.mangiapanello@qc.cuny.edu
Source: https://www.researchgate.net/publication/273349031

Abstract. The present paper presents a systematic analysis from a behavior analytic perspective of procedures termed feedback. Although feedback procedures are widely reported in the discipline of psychology, including in the field of behavior analysis, feedback is neither consistently defined nor analyzed. Feedback is frequently treated as a principle of behavior; however, its effects are rarely analyzed in terms of well-established principles of learning and behavior analysis. On the assumption that effectiveness of feedback procedures would be enhanced when their use is informed by these principles, we sought to provide a conceptually systematic account of feedback effects in terms of operant conditioning principles. In the first comprehensive review of this type, we compare feedback procedures with those of well-defined operant procedures. We also compare the functional relations that have been observed between parameters of consequence delivery and behavior under both feedback and operant procedures. The similarities observed in the preceding analyses suggest that processes revealed in operant conditioning procedures are sufficient to explain the phenomena observed in studies on feedback.

Keywords: Feedback · Behavioral processes · Operant conditioning · Reinforcement · Punishment · Contingency

Feedback is a term commonly used within the discipline of psychology; however, it is neither consistently defined nor analyzed. In his influential commentary, Peterson (1982) pointed out numerous shortcomings in the use of feedback in the behavior analytic literature, including his observation that it is sometimes “treated as a principle of behavior” (p. 101), yet is no better than “professional slang” (p. 102). Weatherly and Malott (2008) observed that control of feedback effects by associative learning processes is often assumed, rather than determined, further pointing out that this occurs even when the parameters of feedback are unfavorable for associative learning. Similarly, Catania (1998) noted that while it is often presumed that feedback functions as a reinforcer (or punisher), such assumptions may be misleading. Imprecision and inconsistency in defining feedback at both the stimulus and procedural level interfere with analyzing the specific function(s) that feedback serves.
It has been suggested that feedback may function similarly to the following: (a) a reinforcer or punisher (Carpenter and Vul 2011; Cook and Dixon 2005; Slowiak, Dickinson, and Huitema 2011; Sulzer-Azaroff and Mayer 1991), (b) an instruction (Catania 1998; Hirst et al. 2013), (c) a guide (Salmoni et al. 1984), (d) a discriminative stimulus (Duncan and Bruwelheide 1985-1986; Roscoe et al. 2006; Sulzer-Azaroff and Mayer 1991), (e) a rule (Haas and Hayes 2006; Prue and Fairbank 1981; Ribes and Rodriguez 2001), (f) a conditioned reinforcer (Hayes et al. 1991; Kazdin 1989), and (g) a motivational (Johnson 2013; Salmoni et al. 1984) or establishing stimulus (Duncan and Bruwelheide 1985-1986).

Peterson’s (1982) suggestion that the term be eradicated apparently was not well received. Since publication of that paper, 441 articles with feedback in the title and behavior in the journal name appeared in over 50 journals from 1983 through 2013. Publication frequency was obtained from a search conducted on PsycINFO (July 29, 2014) using the search parameters listed in Table 1. In these studies, feedback was often implemented in the following areas: (a) behavioral skills training (to improve teaching, academic, and a variety of motor skills), (b) health-related behavior (increasing exercise time and habit cessation), and (c) organizational behavior management (to improve customer service, increase productivity, decrease absenteeism, and increase safe work habits). Table 1 lists the journals that collectively accounted for greater than 80 % (356) of the 441 publications identified in the search. The three journals that focus on organizational behavior collectively had the highest frequencies of articles, with the Journal of Organizational Behavior Management yielding the highest count of all journals searched. High frequencies were also observed for the Journal of Motor Behavior and the Journal of Applied Behavior Analysis. Cumulative frequency per year is plotted in Fig. 1. This function suggests that the rate of publication increased after 2006.

While admitting the wisdom of Peterson’s (1982) suggestion that behavior analysts avoid references to feedback, Duncan and Bruwelheide (1985-1986) point out that this strategy risks limiting the influence of behavior analytic scholarship on the research and practice of those who take a nonbehavior analytic approach to problems in business and industry. As an alternative, Duncan and Bruwelheide urged “a finer-grained analysis of feedback, focused on behavior functions” (p. 112). The present report adopts this recommendation. In the first comprehensive review of this type, we compare feedback at a procedural level with well-defined operant procedures, noting overlap and dissimilarities. We also compare functional relations between parameters of consequence delivery and behavior under both feedback and operant procedures. We reason that convergence of these functions will support the conceptually systematic assertion that the processes involved in operant conditioning are sufficient to account for the phenomena revealed in studies of feedback. As pointed out by Alvero et al. (2001) and others (e.g., Duncan and Bruwelheide 1985-1986; Johnson 2013; Normand et al. 1999), to the extent that behavior control by feedback can be attributed to basic learning processes, its application can be informed by the vast literature on variables that influence the effects of operant procedures.
Table 1. Citations by journal (journal title: number of citations)

Journal of Organizational Behavior Management: 82
Journal of Motor Behavior: 56
Organizational Behavior and Human Decision Processes: 50
Journal of Applied Behavior Analysis: 42
Computers in Human Behavior: 21
Behavior Modification: 18
Journal of Organizational Behavior (Journal of Occupational Behavior): 16
Behavioral Therapy: 15
Games and Economic Behavior: 12
Journal of Behavior Therapy and Experimental Psychiatry: 12
Social Behavior and Personality: 12
Law and Human Behavior: 11
Small Group Research (Small Group Behavior): 9

Note. Frequency of citation was obtained from a search conducted on PsycINFO (July 29, 2014) using the following parameters: the term feedback appeared in the title and the word behavior appeared in the publication title; the source was limited to journal articles published between 1983 and 2013. Studies that did not include human participants were excluded. A total of 441 citations were identified, representing over 50 scholarly journals. Table 1 lists citation frequencies for the journals that collectively accounted for greater than 80 %, that is, 356 of the 441 publications identified in the search.

[Fig. 1: Cumulative frequency per year of journal articles containing the word feedback in the title, appearing in journals with behavior in the journal name]

In the absence of a consensus on defining feedback (Houmanfar 2013), and in the tradition of Skinner’s analysis of psychological terms (Skinner 1945; also see Schlinger 2013), an operational definition was adopted for this report in order to describe the minimal stimulus conditions likely to occasion emission of feedback as an utterance or as a written word. Although the objective was a definition that is applicable to a broad range of procedures, the manner in which both feedback and operant conditioning procedures could be programmed is potentially limitless; therefore, the present formulation focuses on that which is typical, as opposed to that which is conceivable.

For this report, feedback is defined as presentation of an exteroceptive stimulus whose parameters vary as a function of parameters of antecedent responding. The feedback stimulus may vary along one or more dimensions with any number of parameters of responding—both quantitative and qualitative. It may describe characteristics of the immediately prior response or of a predetermined target response (goal) and possibly the relation between the two (i.e., whether they are members of the same response class or, if not, how they differ). It may also describe the contingency between responding and the consequences of responding. The feedback stimulus may be presented in a number of forms or modalities (e.g., verbal statements, tones, images) and may vary with a single response dimension or a combination of response dimensions. While the foregoing definition and specifications are based on those of previous authors, collectively, they are more comprehensive than those identified in the literature. The intention was to provide a framework within which the planned analysis could be conducted systematically and with precision for a variety of feedback procedures.
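To make the operational definition concrete, here is a minimal Python sketch. It is purely illustrative: the response dimensions (units completed, accuracy), the goal values, and the wording of the message are hypothetical and are not drawn from the article. The point is only that the content of the feedback stimulus varies with parameters of the antecedent response and describes its relation to a target.

```python
from dataclasses import dataclass

@dataclass
class Response:
    """Hypothetical antecedent response with two measured dimensions."""
    units_completed: int
    accuracy: float  # proportion correct, 0.0 to 1.0

def feedback_stimulus(response: Response, goal_units: int, goal_accuracy: float) -> str:
    """Return a verbal feedback stimulus whose content varies with parameters of
    the antecedent response and with its relation to a predetermined goal."""
    met_goal = (response.units_completed >= goal_units
                and response.accuracy >= goal_accuracy)
    relation = "met the goal" if met_goal else "did not meet the goal"
    return (f"You completed {response.units_completed} units at "
            f"{response.accuracy:.0%} accuracy; the goal was {goal_units} units at "
            f"{goal_accuracy:.0%}. You {relation}.")

# Example: one statement combining objective content (the measured response) with
# its relation to the goal.
print(feedback_stimulus(Response(units_completed=42, accuracy=0.88),
                        goal_units=50, goal_accuracy=0.90))
```

Under the definition above, the same scheme could track any measured response dimension, be delivered in other modalities (a tone, an image), and need not reference a goal at all.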
A Comparison of Feedback with Reinforcement and Punishment Procedures

In this section, procedures labeled feedback are compared at a procedural level with common operant conditioning procedures, and functions showing behavior change as a function of procedural parameters under feedback and operant procedures are examined. A number of authors have pointed out that feedback procedures are implemented in a manner that is analogous to reinforcement (or punishment) procedures (e.g., Duncan and Bruwelheide 1985-1986; Peterson 1982). Most notably, under both feedback and operant procedures, consequences of responding are contingent on properties of responding. Likely for this reason, feedback is often assumed to function as a reinforcer or punisher (Weatherly and Malott 2008; Miltenberger 2012), yet these behavior control processes are rarely established independently, as discussed in the following section.

Unlike feedback stimuli, reinforcers and punishers are defined and classified by their effects on parameters of the response class upon which they are contingent, including probability, rate, magnitude, and latency. Reinforcement is defined as an increase in response strength (increased rate, probability, or magnitude or decreases in latency of the response), whereas changes in the opposite direction are consistent with a stimulus functioning as a punisher. Both feedback and operant procedures are classified as positive or negative; however, these terms are used differently when referring to feedback versus to operant procedures. For reinforcement and punishment procedures, the terms refer to the sign (positive or negative) of the contingency between responding and consequences, whereas for feedback procedures, positive and negative usually refer to the content of the feedback stimulus (affirmative or corrective, respectively), not to the contingency between responding and the presentation of feedback. Feedback procedures are most often programmed analogously to positive reinforcement or positive punishment procedures in that feedback stimuli are typically presented, rather than removed, contingent upon responding.

In the following sections, we examine the overlap of feedback procedures with operant conditioning, focusing on the following: (a) identification of the a priori behavior-altering effects of the consequence, (b) parameters of consequence delivery, and (c) precision of the relation between target responding and its consequences.

A Priori Establishment of Behavior-Altering Effects of Consequential Stimuli

Operant and feedback procedures can be compared in terms of whether behavioral functions are identified prior to their implementation. Traditions differ for the two types of procedures, possibly owing to differing epistemologies. In the feedback literature, a control system model is often used to account for the behavior-altering effects of feedback. In general, control system models hold that feedback stimuli provide information about current performance that is compared by the learner with preestablished goals or other standards for performance. Discrepancies result in adjustment of performance or goals (for a description of this model, see Duncan and Bruwelheide 1985-1986). Under control system models, a priori establishment of feedback stimulus effectiveness would amount to determining that stimuli accurately indicate the relation between current behavior and performance goals or standards.
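A schematic way to see what a control system model claims is the short sketch below. It is an assumed, toy formulation of the discrepancy-reduction idea (the gain parameter and the numeric values are invented for illustration), not a model taken from Duncan and Bruwelheide: the feedback stimulus conveys the discrepancy between current performance and a preestablished goal, and the learner adjusts performance in proportion to that discrepancy.

```python
def control_system_step(performance: float, goal: float, gain: float = 0.5) -> float:
    """One cycle of a simple discrepancy-reduction account of feedback
    (illustrative only): feedback reports the gap between performance and goal,
    and performance is adjusted in proportion to that gap."""
    discrepancy = goal - performance          # what the feedback stimulus conveys
    return performance + gain * discrepancy   # adjustment proportional to discrepancy

performance, goal = 40.0, 100.0
for cycle in range(5):
    performance = control_system_step(performance, goal)
    print(f"cycle {cycle + 1}: performance = {performance:.1f}")
# Performance converges toward the goal as successive discrepancies shrink.
```

Note that nothing in this loop requires the feedback stimulus to have any independently established reinforcing or punishing function of its own; that is the contrast the operant analysis that follows emphasizes.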
In the operant tradition, identification of effective response consequences is guided by the principle of transituationality (Meehl 1950). When a stimulus is determined to function as a reinforcer or punisher for at least one response prior to implementation of a behavior change procedure, its effectiveness in the latter procedure can be attributed to its previously identified reinforcer or punisher function. When such attribution is possible, the condition of transituationality has been achieved. To help achieve transituationality, and also enhance the efficacy of operant procedures, many researchers use stimulus preference assessments to identify potentially effective stimuli (e.g., Call et al. 2012; DeLeon et al. 2001; Kelly et al. 2014; Lee et al. 2010; Reed et al. 2009; Roane 2008; Vollmer et al. 2001). In these procedures, stimuli are presented individually or in pairs, with the relative proportion of trials on which each stimulus is approached taken as an index of preference for that stimulus (Fisher et al. 1992). Fisher et al. and others (e.g., Lee et al. 2010) have shown that preference assessments yield strong predictions regarding reinforcer efficacy. Graff and Karsten (2012) surveyed practitioners who serve individuals with developmental disabilities. Most behavior analysts (approximately 89 %) who responded reported using some form of stimulus preference assessment to determine effective consequences. This literature suggests that a priori establishment of potential reinforcer or punisher effectiveness of consequent stimuli is common among behavior analysts.

The reinforcer or punisher functions of feedback stimuli generally are not established prior to their implementation (Duncan and Bruwelheide 1985-1986), thereby compromising analysis of these procedures in terms of reinforcement or punishment. Nonetheless, reinforcement is often posited as a basis for increases in responding when feedback is contingent upon the measured response, particularly when procedural parameters, such as timing of the feedback stimulus, match those of typical reinforcement procedures (e.g., Cook and Dixon 2005). The proposition that effects of feedback are reducible to reinforcement has been examined in the literature. For example, in a recent study, Johnson (2013) performed a component analysis in which effects of objective feedback—description of the previous day’s performance—and evaluative feedback—statements consistent with excellent, good, average, or poor performance on the previous day—were dissociated. Both types of feedback were associated with higher levels of performance in comparison to a no-feedback condition; however, when the two types were combined, performance was considerably higher than when either type was presented separately. The author reasoned that the evaluative feedback may function as an establishing or abolishing operation, controlling the effectiveness of objective feedback as a reinforcer or punisher. However, without a priori assessment of the behavior-altering function of the consequences presented in this and other studies of feedback, it cannot be determined if behavior change was mediated by reinforcement (or punishment) or by other processes controlled by response-contingent feedback (e.g., instruction, establishing or abolishing operations, discriminative stimulus control, or elicitation). In the absence of direct evidence that feedback functions as a reinforcer, a review by Alvero et al. (2001) provides some indirect evidence.
Those authors distinguished between feedback-alone procedures—the consequences of responding consisted only of information about the quality or quantity of prior responding—and procedures that included feedback plus consequences, where consequences could include such events as praise, money, and time off from work, that is, presumed reinforcers. In the literature reviewed by Alvero et al., consequences were included with the feedback stimulus in 34 of 64 applications; however, the authors did not indicate whether behavior-altering effects of the consequences were assessed independently of the feedback procedure. Interestingly, Alvero et al. found that feedback-alone procedures were consistently effective in 47 % of the 68 applications, while procedures that combined feedback with consequences (presumed reinforcers) were consistently effective in 58 % of the applications. In an earlier review of 126 experiments, Balcazar et al. (1985-1986) reported values of 28 and 52 % for feedback alone and feedback with consequences, respectively. Whether the positive effects of feedback-alone procedures can be attributed to reinforcement remains an open question.

Even if feedback stimuli do not include previously established reinforcers or punishers, it is likely that feedback stimuli may function as some type of conditioned reinforcer or punisher (e.g., Duncan and Bruwelheide 1985-1986; Jerome and Sturmey 2014; Johnson and Dickinson 2012; Kazdin 1989; Peterson 1982). Conditioned reinforcers or punishers are established by means of a stimulus-stimulus contingency between a stimulus having no initial reinforcing or punishing effects and a primary reinforcer or punisher, or with an established conditioned reinforcer or punisher. Under feedback procedures, these behavior-controlling functions of feedback stimuli (e.g., “Good job” or “Your output is low”) may be present prior to introduction of training (Hayes et al. 1991; Roscoe et al. 2006), or they may be acquired during the feedback procedure, provided that a stimulus-stimulus contingency is arranged with an established reinforcer (or punisher) during the procedure. As Alvero et al. (2001) noted, in many cases, when feedback stimuli are used, they are paired with presumed or established reinforcers or punishers during training.

An experiment by Hayes et al. (1991) demonstrated an untrained behavior control effect of positive and negative verbal feedback stimuli—“correct” and “incorrect”—for correct and incorrect sorting responses, respectively, and transfer of that control to the arbitrary stimuli (i.e., stimuli having no measurable response-altering effect) with which the feedback stimuli were differentially paired during training on the sorting task. In contrast to the positive findings of Hayes et al., Slowiak et al. (2011) reported mixed results in a study of acquisition of conditioned reinforcer effectiveness of feedback stimuli that were correlated with monetary payments. In an interesting design, the authors assessed reinforcer effectiveness of feedback stimuli for two different responses. In the study, participants in each of two experimental conditions performed a simulated data entry task during which they could self-deliver objective feedback—statements describing the cumulative number of responses completed during the session, the number of correct responses, and mean rate of responding during the session.
For one group, pay was presented contingent upon data entry performance (termed incentive pay); for these subjects, feedback therefore corresponded to amount of money earned, creating an opportunity for feedback to acquire a conditioned reinforcer function. For the other group, pay was not contingent upon performance. One measure of conditioned reinforcer effectiveness of the feedback stimuli was the rate at which participants self-delivered feedback by executing a computer keyboard response. Participants in both groups self-delivered feedback frequently; however, this did not vary between conditions, suggesting that correspondence of feedback with pay did not differentially increase the reinforcing value of feedback (as a reinforcer for solicitation of feedback). On the other hand, level of performance on the data entry task was higher for participants receiving incentive pay. Conceivably, a differential conditioned reinforcing effect of feedback for data entry (but not for self-delivered feedback) was acquired as a function of correspondence between feedback and pay; alternatively, the payment contingency alone controlled the differences in data entry performance.

In summary, there are few direct data regarding the a priori effectiveness of feedback stimuli, though two literature reviews (Alvero et al. 2001; Balcazar et al. 1985-1986) indicate that feedback-alone procedures were ineffective in a rather large percentage of studies surveyed. Nonetheless, because the parameters of many effective feedback procedures overlap with those of operant procedures, it is likely that in some cases, feedback stimuli are functional reinforcers or punishers prior to training or become so as a result of stimulus-stimulus contingencies arranged between feedback stimuli and reinforcers or punishers under feedback procedures. These observations support design of feedback procedures that include the following: (1) a priori establishment of the reinforcing or punishing effectiveness of feedback stimuli and (2) combining feedback with preestablished reinforcing or punishing stimuli.

Parameters of Consequence Delivery

Under operant and feedback procedures, presentation of reinforcement or punishment typically is controlled by a contingency (positive or negative) between responding and consequences. The contingency specifies the characteristics of responses (criterion or target responses) that are eligible for consequence delivery, a schedule of consequence delivery that stipulates the conditions under which criterion responding will occasion a consequence (reinforcer presentation or omission), characteristics of consequence(s) to be delivered, and precision of the relation between responding and consequences. In the following sections, effects of three families of parameters of consequence delivery for feedback and operant conditioning are discussed: (a) interval between target responding and consequence delivery, (b) probability of consequence delivery, and (c) precision of the relation between responding and consequences. As will be shown, the range of these parameters differs, in some cases markedly, between the two types of procedures; however, the functions relating these parameters to behavior change show considerable similarity between feedback and operant procedures.
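As a concrete illustration of the components such a contingency specifies, the hypothetical sketch below bundles a response criterion, a probability of delivery, a response-consequence delay, and the consequence to be delivered into one object. All names and values are assumptions made for illustration, not notation or parameters from the article.

```python
import random
from dataclasses import dataclass
from typing import Callable, Optional, Tuple

@dataclass
class Contingency:
    """Hypothetical specification of consequence delivery."""
    is_criterion: Callable[[float], bool]  # which responses are eligible (target responses)
    p_consequence: float                   # p(consequence | criterion response)
    delay_s: float                         # response-consequence interval, in seconds
    consequence: str                       # the stimulus to be delivered

    def apply(self, response_value: float) -> Optional[Tuple[float, str]]:
        """Return (delay, consequence) if a consequence is scheduled, else None."""
        if self.is_criterion(response_value) and random.random() < self.p_consequence:
            return (self.delay_s, self.consequence)
        return None

# Example: deliver a feedback statement, after a 2-second delay, on half of the
# occasions on which at least 10 units are completed.
contingency = Contingency(is_criterion=lambda units: units >= 10,
                          p_consequence=0.5,
                          delay_s=2.0,
                          consequence="Good job -- you met the target.")
print(contingency.apply(12))  # sometimes (2.0, 'Good job -- ...'), sometimes None
print(contingency.apply(7))   # always None: the response does not meet the criterion
```

The three families of parameters discussed next (delay, probability, and precision of the response-consequence relation) correspond roughly to the delay_s, p_consequence, and is_criterion fields of this sketch.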
Delay of Consequences for Responding

A thoroughly investigated parameter of both feedback and operant conditioning procedures is the response-consequence interval (delay to reinforcement, punishment, or feedback). In reinforcement and punishment procedures, the magnitude of operant responding is strongly and inversely related to delay, with intervals as short as 0.5 s causing deleterious effects (e.g., Grice 1948); however, this effect is mitigated if an exteroceptive stimulus is presented during the delay (e.g., Lattal 1984; Richards 1981; Schaal and Branch 1988). A different picture emerges for feedback procedures, in which the impact of delay of feedback on performance is inconsistent. In some cases, longer delays to feedback are associated with superior performance (e.g., Swinnen et al. 1990a, b). Maddox et al. (2003) found either no effect (experiment 1) or a deleterious effect of delay (experiment 2) on category learning tasks. In contrast, Northcraft et al. (2011) found that immediate feedback (versus feedback presented according to a fixed-time 6-min schedule) was associated with superior performance (number of units correctly completed) on a simulated college class scheduling task. In the area of semantic learning, results are mixed, with a large percentage of studies showing an advantage of delayed over immediate feedback (e.g., Smith and Kimball 2010).

Salmoni et al. (1984), in a review of over 250 research reports on feedback and motor learning, pointed out that delay covaried with intertrial interval, the interval between feedback and the next opportunity to respond, or both. After accounting for these confounds, the authors concluded that there is little consistent evidence for an effect of feedback delay on acquisition or on post-acquisition performance. Metcalfe et al. (2009) targeted an analogous situation in the area of recall of educationally relevant material, where a test-retest paradigm was used. They report that in a large proportion of studies, manipulation of the interval between the response (first test) and feedback covaried with the interval between feedback and the next opportunity to respond (posttest). Under these conditions, the delay condition, in comparison with the immediate condition, provides subjects with the correct response at a shorter interval from the next opportunity to respond—a condition that likely would favor recall of the correct response, regardless of the delay between the initial response and presentation of feedback. Similarly to Salmoni et al., Metcalfe et al. pointed out that evidence of superiority of delayed versus immediate feedback can be determined only when the interval between feedback and the next opportunity to respond is controlled.

In summary, a consistent inverse relation between delay and learning exists for operant procedures, while the relation between delay and learning is inconsistent for feedback procedures and is complicated by vast procedural differences among studies that have investigated this parameter (cf. Carpenter and Vul 2011). Evidence of differential effects of delay under feedback versus operant procedures may signal a functional difference between the two types of procedures; however, the data admit of other interpretations, including learning-based accounts of delay effects that are applicable to both feedback and operant procedures (see Lattal 2010, for a comprehensive overview).
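The timing confound identified by Salmoni et al. and Metcalfe et al. can be seen with simple arithmetic. The values below are invented for illustration: if the interval from the initial response to the next response opportunity is held constant, then lengthening the delay to feedback necessarily shortens the interval between feedback and that next opportunity.

```python
# Assumed (hypothetical) fixed interval from the initial response to the next
# opportunity to respond, e.g., a posttest.
RESPONSE_TO_RETEST_S = 60

for feedback_delay in (0, 15, 45):  # hypothetical delay-to-feedback conditions, in seconds
    feedback_to_retest = RESPONSE_TO_RETEST_S - feedback_delay
    print(f"delay to feedback = {feedback_delay:2d} s -> "
          f"feedback precedes the next opportunity by {feedback_to_retest} s")

# Longer delays to feedback therefore place the correct answer closer in time to
# the retest, which by itself could favor the delayed-feedback condition.
```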
Two reports have sought to account for the inconsistent effects of delay of feedback, and for the discrepancy between studies with humans versus animals, in terms of a single underlying mechanism. Costa and Boakes (2011) and Lieberman et al. (2008) based their studies on a version of Revusky’s (1971) concurrent interference theory. Revusky argued that deleterious effects of delay between responding and consequences can be attributed to incursion of other events, including the subject’s own behavior, during the delay interval. When consequences are immediate, the strengthening effects of reinforcement will be accrued primarily by the criterion response. When a delay intervenes between the criterion response and a consequence, responses subsequent to the criterion response may achieve greater contiguity with the consequence, possibly leading to strengthening of competing or irrelevant responses.

Although numerous reports point to differential parametric effects of consequence delay between feedback and operant procedures, the consistency of this generalization is challenged by reviews that have taken into account effects of extraneous and confounded procedural variables. Accordingly, comparative data provide insufficient basis for analyzing performance under these two procedures according to differing processes. On the contrary, there exists credible evidence that the functions relating response-consequence delay and behavior change under operant and feedback procedures are similar and, therefore, attributable to common behavioral processes. Correspondingly, similar procedures may be used to mitigate the effects of response-consequence delay for both feedback and reinforcement or punishment preparations (see, for example, Dickinson et al. 1996; Lurie and Swaminathan 2009; Metcalfe et al. 2009; Reeve et al. 1993; Stromer et al. 2000).

Probability of Consequence Delivery

Another important parametric difference between operant conditioning and feedback procedures is response-consequence probability, or p(consequence|response). Typically, in operant procedures, reinforcers are programmed with a probability of less than 1.00 following a defined target response, while for punishment procedures, a probability of 1.00 (a continuous schedule) may be used (Mazur 2006). For both reinforcement and punishment procedures, the probability of a consequence for all nontarget responses is zero. In contrast, under feedback procedures, in many instances, some form of feedback stimulus is presented as a consequence of all responses, both criterial and noncriterial (although see, for example, Maes 2003, and Murch 1969). As a result, suppression of noncriterial responding is accomplished differently in operant procedures, where extinction is often used, versus use of negative feedback under feedback procedures.

Despite these systematic differences between feedback and operant procedures, the relation between behavior change and p(consequence|response) is similar for operant and feedback procedures. It might be anticipated that speed of acquisition of criterion performance would covary with p(consequence|response), given that higher values are associated with a greater frequency of opportunities to learn. This relationship is well documented for both operant conditioning and feedback procedures; however, evidence indicates that acquisition speed under operant (e.g., Williams 1989) and feedback procedures
(Salmoni et al. 1984; however, see Winstein and Schmidt 1990) is invariant when measured against the number of obtained reinforcers, regardless of the probability of reinforcement. For example, Williams showed that improvement in rats’ discrimination performance increased more rapidly as a function of number of trials when each correct response was reinforced, versus when the probability of reinforcement was 0.50. However, when discrimination performance was plotted as a function of number of reinforced trials, performance did not vary with probability of reinforcement.

Perhaps more important is the observation that for both feedback and operant procedures, learning (defined as post-acquisition performance when consequence presentation is thinned or discontinued) is inversely related to p(consequence|response) during acquisition (Maas et al. 2008; Mackintosh 1974; Salmoni et al. 1984; Winstein and Schmidt 1990). The foregoing observations are comparable to the familiar partial reinforcement extinction effect (PREE) observed in operant conditioning preparations, where responding trained with p(reinforcement|response) less than 1.00 persists longer when reinforcement is discontinued than responding trained under a continuous schedule.
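The invariance point made with the Williams (1989) example can be illustrated with a toy simulation. The model below is a simplifying assumption of this preview's editor, not Williams' procedure: accuracy starts at chance and rises by a fixed increment only when a correct response is actually reinforced, so curves plotted against trial number separate across reinforcement probabilities, while curves plotted against the number of reinforced trials coincide.

```python
import random

def simulate_acquisition(p_reinforcement, n_trials=200, increment=0.005, seed=0):
    """Toy acquisition model (an illustrative assumption, not Williams' procedure):
    accuracy starts at chance (0.5) and rises by a fixed increment only on trials
    in which a correct response is actually reinforced."""
    rng = random.Random(seed)
    accuracy, reinforcers, curve = 0.5, 0, []
    for _ in range(n_trials):
        correct = rng.random() < accuracy
        if correct and rng.random() < p_reinforcement:
            reinforcers += 1
            accuracy = min(1.0, 0.5 + increment * reinforcers)
        curve.append(accuracy)
    return curve, reinforcers

for label, p in (("continuous (p = 1.0)", 1.0), ("partial (p = 0.5)", 0.5)):
    curve, reinforcers = simulate_acquisition(p)
    print(f"{label}: accuracy at trial 100 = {curve[99]:.2f}, "
          f"reinforcers obtained = {reinforcers}")

# Plotted against trial number, the two conditions separate; but because accuracy
# here depends on reinforcer count alone, replotting against the number of
# reinforced trials makes the curves coincide -- the pattern Williams reported.
```

Only the acquisition-speed point is represented in this sketch; the inverse relation between p(consequence|response) and post-acquisition persistence described above is not modeled.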
Explanation & Answer: 1 Discussion