Welcome to Module 5. In this module, we will discuss antecedent conditions, covering stimulus control and motivating operations. First, let's take a look at stimulus control.

The aim of virtually all skill-acquisition programs is to promote stimulus control, so that specific responses occur reliably under particular instructional antecedent conditions and not under other antecedent conditions. Stimulus control is a principle in which the frequency, latency, duration, or amplitude of a behavior is altered by the presence or absence of an antecedent stimulus. In other words, the behavior occurs in the presence of a particular antecedent stimulus and does not occur in its absence. For example, a child will stand up upon hearing the instruction, "stand up," but will not stand up upon other instructions. In this case, the instruction, "stand up," has stimulus control over the response, and other antecedent stimuli do not.

The stimulus that has control over a response is called a discriminative stimulus, or SD. A stimulus becomes an SD because it has been correlated with a reinforcer and therefore signals that reinforcement is available. Specifically, in the presence of the SD, responses have been reinforced, while in the absence of the SD, under stimulus-delta conditions, responses have not been reinforced. Therefore, an SD has the ability to momentarily increase, or evoke, a response. The instruction, "stand up," can momentarily increase the response of standing up. The stimulus delta, or s-delta, is the antecedent stimulus that signals that reinforcement is not available for a particular behavior. For example, the instruction, "come here," is an s-delta for standing up. Standing up upon the instruction, "come here," will not produce reinforcement; thus, the instruction, "come here," does not evoke the response of standing up.

Stimulus control is developed through differential reinforcement: in the presence of an SD, a specific response is reinforced, while in the presence of an s-delta, the response is not reinforced. Take talking on the phone as an example. When the phone rings and you answer it, you will likely receive a social reinforcer. Through repeated correlation between the ringing and social reinforcement for talking on the phone, the ringing starts to signal reinforcement. As a result, the ringing evokes the response of answering the phone. However, an SD alone is not enough to ensure stimulus control, because responses may generalize to other similar but inappropriate conditions, for example, hearing a doorbell and answering the phone. In that case, talking on the phone will not produce social reinforcement. The doorbell signals that answering the phone will not be reinforced, and thus it does not evoke the response. Similarly, all other s-delta conditions will not evoke the response.

Although responses can be emitted under both an SD and a conditioned stimulus, or CS, their effects and the processes through which they acquire those effects are different. A CS acquires its ability to elicit a response through stimulus-stimulus pairing; this is respondent conditioning. As such, the relation between the CS and the elicited response does not depend on a consequence. By contrast, an SD acquires its ability to evoke a response through its association with the reinforcement produced by that response; this is operant conditioning, and the relation between the SD and the evoked response depends on the consequence. You should also note that a CS elicits a response, while an SD evokes a response.
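To make the differential-reinforcement process described above concrete, here is a minimal, purely illustrative sketch in Python. It is not part of the lecture material; the starting probabilities and the learning rate are made-up assumptions. A toy learner's probability of standing up increases when the response is reinforced under the SD, "stand up," and decreases when the response goes unreinforced under the s-delta, "come here."

```python
import random

# Hypothetical starting probabilities of emitting "standing up" under each instruction.
response_prob = {"stand up": 0.5, "come here": 0.5}
LEARNING_RATE = 0.1  # assumed step size, purely for illustration

def trial(instruction, reinforced_instruction):
    """One discrete trial: the learner may respond, then the consequence is applied."""
    responded = random.random() < response_prob[instruction]
    if not responded:
        return
    if instruction == reinforced_instruction:
        # Reinforcement under the SD: the response becomes more likely under this antecedent.
        response_prob[instruction] += LEARNING_RATE * (1 - response_prob[instruction])
    else:
        # Extinction under the s-delta: the response becomes less likely under this antecedent.
        response_prob[instruction] -= LEARNING_RATE * response_prob[instruction]

random.seed(0)
for _ in range(300):
    trial(random.choice(["stand up", "come here"]), reinforced_instruction="stand up")

print(response_prob)  # "stand up" drifts toward 1.0; "come here" drifts toward 0.0
```

After a few hundred trials, the response is emitted almost exclusively under the SD and rarely under the s-delta, which is exactly what we mean by stimulus control.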
Similar to an SD, if an antecedent stimulus is associated with punishment for a particular response, that stimulus may become a discriminative stimulus for punishment, or SDp. It acquires the ability to signal punishment and suppress that response. For example, if a teacher has consistently paired himself with a reprimand for a particular response, the teacher himself will become an SDp for the reprimand. Thus, in the presence of the teacher, the frequency of the response is lowered. Although responses occur at a lower frequency under both SDp and s-delta conditions, the ways they acquire their effects are different. As discussed, an SDp acquires its effect through association with punishment, while an s-delta acquires its effect through association with non-reinforcement, or extinction. For example, the teacher suppresses the behavior because the teacher has been associated with the reprimand, whereas the doorbell does not evoke answering the phone because it signals that answering the phone will not produce reinforcement.

Establishing and maintaining control over a response may depend on how salient the stimulus is, in other words, how likely the individual is to detect it. The more salient the stimulus, the better the stimulus control. For example, when you are teaching a student to sound out letters, the letters are typically isolated and enlarged so that the student will correctly point to and sound out each of them. However, if the letters are printed as they appear in words, sentences, and passages, it will be much more difficult for the student to attend to individual letters.

Masking and overshadowing are two phenomena associated with stimulus salience. In masking, even though an SD has already established control over behavior, its evocative effect is blocked by a competing stimulus. For example, although a student had practiced his presentation and could deliver it fluently in private, once he stood on the stage he was no longer able to follow the slides and his speech was no longer fluent. The slides had acquired control over his speech while he was practicing in private, but once he was on the stage, other competing stimuli that may have been more salient, such as the audience, blocked the control of the slides. Overshadowing, on the other hand, occurs when the acquisition of stimulus control is interfered with by a more salient stimulus. For example, the student could not focus on practicing his presentation because there was going to be an English test the next day and he had not studied for it. In this example, the development of stimulus control was interfered with.

Establishing stimulus control is the goal of virtually all skill-acquisition programs, and procedures for developing it have been documented in the literature. In general, practitioners use response- and stimulus-prompting procedures to evoke target responses and then gradually fade the prompts in order to transfer control from the prompts to the instructional or natural stimuli. These procedures are beyond the scope of this class, but I encourage you to continue reading and familiarize yourself with them.
Video 2

In addition to SDs, s-deltas, and SDps, motivating operations are another antecedent condition that affects both our responses and reinforcer effectiveness. When we want something, we say we are motivated for that thing. Motivating operations, or MOs, are the behavioral treatment of "wanting something." Instead of basing motivation on mentalistic events, motivating operations in behavior analysis are antecedent environmental variables that can be manipulated.

MOs have two functions: the value-altering effect and the behavior-altering effect. That is, one function of an MO is related to the consequences, and the other is related to behavior. The value-altering effect refers to the ability to either establish or abolish the reinforcing or punishing effectiveness of a consequence. For example, food deprivation establishes the reinforcing effectiveness of food, and food satiation abolishes it. The behavior-altering effect, on the other hand, refers to the ability to either evoke or abate current behavior that has been reinforced or punished. For example, food deprivation evokes food-seeking behavior that has historically produced food, while food satiation abates food-seeking behavior.

The effects of MOs on behavior can be direct or indirect. MOs can directly evoke or abate responses, such as food-seeking behavior during food deprivation; there may not be any SD related to food availability in the environment. On the other hand, because MOs can establish or abolish the value of a reinforcer, the evocative effectiveness of an SD is also affected. For example, recall that a Domino's sign can be an SD for pizza. In this case, food satiation, the MO, will reduce the effectiveness of the Domino's sign, the SD, in evoking behavior that produces pizza. Please pay attention to the verbs we use here: we use "establish" or "abolish" when referring to the value-altering effect on consequences, and "evoke" or "abate" when referring to the behavior-altering effect.

There are two general types of MOs: establishing operations, or EOs, and abolishing operations, or AOs. The effects of EOs and AOs are opposite to each other. In terms of the value-altering effect, EOs establish reinforcer effectiveness; in terms of the behavior-altering effect, EOs evoke all behavior that produces the reinforcer. By contrast, AOs abolish reinforcer effectiveness and abate all behavior that produces the reinforcer. For example, food deprivation is an EO that establishes the reinforcer effectiveness of food and evokes all behavior that produces food, while food satiation is an AO that abolishes the reinforcer effectiveness of food and abates all behavior that produces food.

When we discuss the effects of different stimuli on behavior, they are generally either behavior-altering or repertoire-altering effects. The behavior-altering effect is only on the current frequency of behavior, not on the future frequency. Therefore, only antecedent events, such as SDs, s-deltas, SDps, and MOs, have a behavior-altering effect; they evoke or abate current behavior. The repertoire-altering effect, also called the function-altering effect, is on the future frequency of behavior. Thus, only consequences, such as reinforcers, punishers, and extinction, can alter the function of behavior.
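Here is a minimal, purely illustrative Python sketch that simply encodes the vocabulary above; the class and attribute names are my own and are not standard terminology from any software package. It records that an EO establishes reinforcer effectiveness and evokes relevant behavior, while an AO abolishes reinforcer effectiveness and abates relevant behavior.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MotivatingOperation:
    """Toy record of an MO and its two effects (illustrative only)."""
    name: str
    kind: str  # "EO" or "AO"

    @property
    def value_altering_effect(self) -> str:
        # Effect on the consequence: establish vs. abolish reinforcer effectiveness.
        return ("establishes reinforcer effectiveness" if self.kind == "EO"
                else "abolishes reinforcer effectiveness")

    @property
    def behavior_altering_effect(self) -> str:
        # Effect on current behavior: evoke vs. abate.
        return ("evokes behavior that has produced the reinforcer" if self.kind == "EO"
                else "abates behavior that has produced the reinforcer")

for mo in (MotivatingOperation("food deprivation", "EO"),
           MotivatingOperation("food satiation", "AO")):
    print(f"{mo.name}: {mo.value_altering_effect}; {mo.behavior_altering_effect}")
```

The point of the sketch is only that every MO carries both effects at once, one on the value of the consequence and one on current behavior.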
Both MOs and SDs are antecedent variables that affect the current frequency of behavior because of their relations with the consequences. MOs establish or abolish the effectiveness of the consequences, while SDs and SDps signal the availability of the consequences. Even though MOs and SDs share some commonalities as antecedent variables, their definitions contrast. An SD is correlated with the availability of reinforcement, while an MO is related to the effectiveness of reinforcement. For example, even though food deprivation can evoke a response, deprivation of food cannot signal the availability of food. Thus, food deprivation cannot be an SD, but it is an EO that establishes the effectiveness of food as a reinforcer. Likewise, painful stimulation evokes all behavior related to reducing pain. However, the painful stimulation cannot be an SD, as it cannot signal its own reduction; rather, it increases the value of its removal as a reinforcer. An aspirin, on the other hand, can signal pain reduction. In this case, painful stimulation is an MO that increases the value of pain reduction, while the aspirin is an SD that evokes the behavior of taking the aspirin.

Let's take a look at another, similar example. Two teachers are teaching the same child. Teacher A removes the instruction after the child cries, but Teacher B never removes the instruction. As a result, the child cries during Teacher A's sessions and does not cry during Teacher B's sessions. In this case, what is the SD that evokes the crying, and what is the MO? To answer these two questions, we need to look at the maintaining reinforcer for crying: the removal of instruction. Remember that SDs signal the availability of reinforcement, so what signals the removal of instruction here? Recall that only Teacher A removes the instruction and Teacher B does not. The child behaves differently in the presence of the two teachers and cries only during Teacher A's sessions. That means Teacher A is the SD for the removal of instruction, while Teacher B is the s-delta. The instruction, on the other hand, is the MO that increases the value of its own removal, similar to painful stimulation; the instruction cannot signal its own removal, which is provided by Teacher A. Therefore, to identify the SDs and MOs in different scenarios, you must first determine the behavior of interest, then ask yourself which variable is related to the availability of reinforcement and which variable affects the reinforcement value. In the following ASR questions, you will practice differentiating various stimuli and their effects. Please pay attention to the feedback.

MOs for punishment also have value-altering and behavior-altering effects. An EO for a punisher increases the effectiveness of the punisher and abates behavior, while an AO for a punisher decreases the effectiveness of the punisher and evokes behavior. Using a positive punisher as an example: you have a headache and you are going to take away your child's toy; however, historically, when you take away the toy, your child screams. The headache can establish the effectiveness of your child's scream as a punisher and abate your behavior of taking away the toy. For negative punishment, the MOs for reinforcers and the MOs for negative punishment are the same. For example, if the EO for social attention is strong, the same EO will establish the effectiveness of the removal of social attention as a negative punisher and abate behavior that is punished by the removal of social attention.
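The two questions at the end of the teacher example can be written out as a small decision helper. This is a minimal sketch of my own; the function name and flags are hypothetical and do not come from the lecture or from any assessment tool. It only restates the rule that signaling availability points to an SD or s-delta, while altering value points to an MO.

```python
def classify_antecedent(signals_availability: bool, alters_value: bool) -> str:
    """Mirror the two questions from the lecture for a chosen behavior of interest."""
    if alters_value and not signals_availability:
        return "MO: alters the effectiveness of the reinforcer"
    if signals_availability and not alters_value:
        return "SD or s-delta: signals whether reinforcement is available"
    if signals_availability and alters_value:
        return "may serve both functions; analyze each effect separately"
    return "neither; probably not a controlling variable for this behavior"

# Behavior of interest: crying, maintained by the removal of instruction.
print("Teacher A  :", classify_antecedent(signals_availability=True,  alters_value=False))
print("Instruction:", classify_antecedent(signals_availability=False, alters_value=True))
```

In the example, Teacher A signals that the removal of instruction is available (an SD), Teacher B signals that it is not (an s-delta), and the instruction itself raises the value of its own removal (an MO).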
I should emphasize here that when analyzing scenarios, the function of a stimulus should be analyzed with respect to the behavior of interest, because the same stimulus can have different effects. Depending on the behavior, it can have a behavior-altering or a repertoire-altering effect. For example, when a child was hungry, he searched for the teacher. After he found the teacher, he requested and received food. Because the teacher had been paired with food, the teacher was a conditioned reinforcer that maintained the behavior of searching for the teacher. The teacher was also an SD that evoked the requesting behavior. Thus, in relation to searching for the teacher, the teacher is a conditioned reinforcer; in relation to requesting food, the teacher is an SD. Painful stimulation can also have multiple effects. It can be an EO that evokes responses related to pain reduction, such as searching for aspirin. It can also be an EO for punishment that abates responses that increase pain. If painful stimulation is contingent on a response, it can also be a punisher. Again, analyzing the function of a particular stimulus should be based on the behavior; in other words, you must identify the target behavior before analyzing the function of environmental variables.

Video 3

Based on their conditioning history, MOs can be classified as unconditioned motivating operations, or UMOs, and conditioned motivating operations, or CMOs. There are a total of nine UMOs. Five of them are deprivation and satiation UMOs: deprivation of food, water, oxygen, activity, or sleep can establish food, water, oxygen, activity, or sleep as an effective reinforcer and evoke behavior that produces that reinforcer, while satiation abolishes the value of these reinforcers and abates all behavior that produces them. The other four UMOs include the UMO relevant to sexual reinforcement: both males and females can be affected by the passage of time since the last sexual activity, so sex deprivation can function as a UEO, while sexual stimulation or orgasm functions as a UAO that abolishes the effectiveness of sex as a reinforcer and abates all behavior that produces it. Likewise, being too cold or too warm functions as a UEO that establishes getting warmer or getting cooler as a reinforcer and evokes all behavior that has had that effect. Last, painful stimulation establishes pain reduction as a reinforcer and evokes all behavior that results in pain reduction.

If a repertoire is taught with a certain reinforcer, then regardless of how well the behavior has been learned, the MO for that reinforcer must be in effect for the learner to demonstrate the response. That is, if food items are used to teach a specific skill, the learner will likely not demonstrate the skill unless the UEO for food is in effect. Thus, in practice, it is suggested that a variety of reinforcers, especially generalized ones, be used. In addition, you should also consider weakening the effects of UMOs during teaching. For example, I had a student who was 15 years old at the time and demonstrated severe aggression during sessions. A functional behavior assessment was conducted and revealed that the behavior was maintained by escape, that is, the removal of instruction. We implemented escape extinction and other reinforcement procedures. However, his aggression did not decrease and became highly variable. We knew that we had missed some factors, so we interviewed his mom.
It turned out that he was on several medications and could not eat or drink before coming to the afternoon sessions. In other words, he was hungry, thirsty, or both. Afterwards, we asked his mom to bring his lunch. Each session started with lunch, and a water bottle was available throughout the session. His escape behavior immediately dropped to zero. In this case, weakening the effects of the UEOs for water and food may have reduced the aggressive behavior that was maintained by escape from the task.

Conditioned motivating operations, or CMOs, are a result of learning. These stimuli were previously neutral but acquired motivating properties after association with other MOs, reinforcement, or punishment. Similar to UMOs, they have value-altering and behavior-altering effects. There are three types of CMOs: the surrogate, reflexive, and transitive CMOs.

The surrogate CMO, CMO-S, is a result of pairing with a UMO. For example, after the color blue is repeatedly paired with cold temperature, the color may acquire both the value-altering and behavior-altering effects of the cold temperature. A CMO-S can be weakened by unpairing; for example, if the color blue is repeatedly presented without the cold temperature, its motivating properties can be weakened.

The reflexive CMOs, CMO-Rs, are correlated with a worsening or improving condition. A CEO-R increases the value of the removal of the worsening condition as a reinforcer, while a CAO-R decreases the value of the removal of the worsening condition. CEO-Rs usually exist in escape-avoidance contingencies. For example, in a discriminated avoidance contingency, a warning stimulus is associated with a worsening situation, an electric shock: if the warning stimulus is not terminated, the shock will follow. The warning stimulus here is the CEO-R, and terminating the warning stimulus produces the negative reinforcement. Another common example of a CEO-R is an instructional demand. If a student does not respond to the demand, a worsening condition may follow, such as corrective procedures. Therefore, the instructional demand may evoke behavior that results in the removal of the instruction; that behavior may include correct responding or problem behavior. The effects of a CEO-R can be weakened by extinction, in which responses no longer terminate the warning stimulus. The effects can also be weakened by unpairing, for example, when the warning stimulus continues but the condition does not get worse. Another form of unpairing would be when the worsening continues even though the response has terminated the warning stimulus. CEO-Rs are also common in social interactions. For example, if your friend makes a request and you fulfill it, there will be social reinforcement; the request in this case is an SD for social reinforcement. However, if there is a delay in responding to or fulfilling your friend's request, social worsening can happen: your friend may repeat the request in a louder voice or get cranky. That is, social worsening happens if you do not respond quickly. If requests have been associated with social worsening, the requests may also become CEO-Rs that evoke a response to terminate the request.

The last type of CMO is the transitive CMO, CMO-T. CMO-Ts usually exist in behavior chains. They alter the value of a stimulus that is necessary to produce another reinforcement condition.
In other words, they establish another stimulus as a conditioned reinforcer. In the example of searching for the teacher when hungry and then requesting food from the teacher, the teacher becomes a conditioned reinforcer when hunger occurs. That is, hunger establishes the teacher as a conditioned reinforcer and alters the value of the teacher, who is necessary to produce another reinforcement condition, food. Now let's look at another example. When it is raining outside, you will search for an umbrella. In this case, the rain establishes the value of an umbrella, which is necessary to produce negative reinforcement, namely not getting wet from the rain. The rain is a CEO-T and the umbrella is a conditioned reinforcer. Likewise, a power outage is a CEO-T that evokes responses for finding a flashlight and establishes the flashlight as a conditioned reinforcer. The flashlight is then the SD that signals the availability of light.

Please pay attention to the difference between the CEO-R and the CEO-T when negative reinforcement is involved. A CEO-R directly evokes a response that terminates the warning stimulus, such as fulfilling a request, which terminates the request; there is no behavior chain involved. A CEO-T, on the other hand, alters the value of a stimulus, such as the umbrella, that is necessary to produce the negative reinforcement. In other words, the responses evoked by CEO-Ts do not directly produce negative reinforcement. All UMOs can function as CMO-Ts by establishing stimuli as conditioned reinforcers. For example, hunger can establish a fridge, thirst can establish a water bottle, and oxygen deprivation can establish an oxygen tank as conditioned reinforcers. The effects of a CEO-T can be weakened by satiation or by terminating the reason to perform the task, such as ingesting food and water, the rain stopping, or the power being restored. More permanent weakening may happen through extinction, in which the behavior no longer produces reinforcement, or in which the conditioned reinforcer is no longer necessary for accessing the final reinforcer. For example, if the teacher stops delivering food altogether, hunger will no longer establish the teacher as a conditioned reinforcer, and therefore it will not evoke the behavior of searching for the teacher.

The CMO-T is commonly used to teach requesting skills. Below, I will present two examples of how to teach requesting skills using a CMO-T. First, you may present a tightly closed box that contains food. In this case, a situation is contrived for the student to request help. The box is a CEO-T that establishes an adult's help as a temporary conditioned reinforcer through which the final reinforcer, the food item, can be obtained. A similar example would be asking the student to complete a puzzle while hiding one of the puzzle pieces. As the student is unable to complete the puzzle without the piece, the student needs to request information. In this case, a CEO-T for the missing piece is contrived, and the missing piece functions as a temporary conditioned reinforcer needed to complete the work.
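To wrap up, here is a minimal, purely illustrative Python sketch, in my own framing rather than the lecture's, of the CEO-R versus CEO-T distinction. The key question it encodes is whether the evoked response directly produces the negative reinforcement or only produces an intermediate conditioned reinforcer within a behavior chain.

```python
def classify_reflexive_vs_transitive(response_directly_terminates_condition: bool) -> str:
    """Split on whether the evoked response itself produces the (negative) reinforcement."""
    if response_directly_terminates_condition:
        # e.g., fulfilling a friend's request terminates the request itself; no chain involved.
        return "CEO-R: the response directly removes the worsening condition"
    # e.g., rain -> find the umbrella -> stay dry; the umbrella is only a link in the chain.
    return "CEO-T: the response produces a conditioned reinforcer needed for the final reinforcer"

print("Friend's request:", classify_reflexive_vs_transitive(True))
print("Rain and umbrella:", classify_reflexive_vs_transitive(False))
```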