Difference between a marker and a secondary reinforcer?

  • #1
RabbitWho
A marker is something that makes up for the delay of reinforcement by signalling to the subject "reinforcement is coming." For example, saying "good dog" before giving your dog a treat.

Secondary reinforcement is when something becomes reinforcing / causes pleasure through its association with a reinforcer, e.g. a dog comes to love praise. People love money because you can buy stuff with money.

So aren't they the same thing? Are there any concrete experiments or examples of where they are different?
If you go here http://books.google.com.tr/books?id=seR4AgAAQBAJ&pg=PA332&lpg=PA332&dq=psychology marking procedure lieberman&source=bl&ots=ML3GNiMm_h&sig=fkmXcbt4HnciW3nxNV3GTnDkH5k&hl=en&sa=X&ei=7C5zVOuyIsmeywPOjICIBA&ved=0CB0Q6AEwAA#v=onepage&q=psychology marking procedure lieberman&f=false
On page 332 it says that because they used the marker whether there was reinforcement or not, they can be confident it wasn't a secondary reinforcer. I don't understand how that follows.
 
  • #2
I can't view the page, but in any case I'll try to give you an answer.

A marker can be viewed as a signal that identifies a particular behavior or response. Looking at it from a non-rigorous point of view, it can also be seen as something that "imparts information" about a behavior.
Secondary reinforcement is, as you say, when something becomes reinforcing through its association with a primary reinforcer. This "something" is then called a secondary reinforcer.

Not all markers signal reinforcement, and not all markers are secondary reinforcers. For example, in dog training there is something known as a no-reward marker. One of the ways to use it is as follows: whenever the dog offers the correct behavior, reward him. When he offers an incorrect behavior, signal with the marker and give no reward. What happens over time is that the dog stops offering that incorrect behavior. Why? If we look at it as "imparting information", the no-reward marker has told the dog, "when I do this behavior, I don't get a treat". Note that the marker here functions as a form of negative punishment, not reinforcement!

If we look at it more rigorously, the no-reward marker has been paired/associated with a punishment, and the incorrect behavior has then been associated with the no-reward marker.
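
(Just to make the "imparting information" idea concrete, here is a tiny simulation I sketched myself, not something from the book: two made-up behaviors, "sit" and "paw", where "sit" always earns a reward and "paw" only ever earns the no-reward marker. The behavior names, the value-proportional choice rule, the learning rate, and the trial count are all illustrative assumptions.)

# Minimal sketch (illustrative assumptions only, not from the thread or the book):
# two behaviors compete; the correct one earns a reward (outcome 1.0) and the
# incorrect one earns only the no-reward marker (outcome 0.0). With a simple
# value-learning rule, the incorrect behavior is offered less and less often.
import random

LEARNING_RATE = 0.1                   # arbitrary illustrative value
values = {"sit": 0.5, "paw": 0.5}     # hypothetical behaviors, equal to start

def choose(values):
    """Pick a behavior in proportion to its current learned value."""
    total = sum(values.values())
    r = random.uniform(0, total)
    cumulative = 0.0
    for behavior, v in values.items():
        cumulative += v
        if r <= cumulative:
            return behavior
    return behavior  # fallback for floating-point rounding

for trial in range(200):
    behavior = choose(values)
    # "sit" is the correct behavior -> reward; "paw" -> no-reward marker only
    outcome = 1.0 if behavior == "sit" else 0.0
    values[behavior] += LEARNING_RATE * (outcome - values[behavior])

print(values)  # "sit" climbs toward 1.0; "paw" keeps falling, so it is
               # offered less and less often

Run it a few times: the value of "paw", and therefore how often it gets offered, keeps dropping, which is the "dog stops offering the behavior" effect described above.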

Now let's look at a marker that is a reinforcer, like a clicker. Before training, the clicker is repeatedly paired with rewards. During training, correct behaviors are marked by clicking the clicker, and a reward is given. Because the clicker is consistently paired with reinforcement, in this case it usually does become a secondary reinforcer.

As for the quote about being confident that the marker is not a secondary reinforcer: the marker must be consistently paired with reinforcement to become a secondary reinforcer. If something is associated equally with reinforcement and with a neutral or punishing response - or in fact, with any two conflicting stimuli - there's no reason for it to become preferentially associated with one of the types of stimuli. Think of those little jingles people's smartphones make when they get a message from one of their friends. Every time they hear that jingle, they are highly likely to check their phone. This is because the jingle has been consistently paired with getting a message. If, however, I programmed someone's phone to jingle half the time because they got a message, and half the time randomly throughout the day, they would become much less likely to automatically check their phone. I won't expand on this because, as I said, I can't view the page.
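
If it helps to see the pairing-consistency point with numbers, here's a minimal sketch using a simple prediction-error (Rescorla-Wagner-style) update. The thread doesn't name any particular model, so treat this as just one standard way to formalize "association strength"; the learning rate and trial count are arbitrary choices.

# Minimal sketch, assuming a Rescorla-Wagner-style prediction-error update:
# a cue always followed by reward ends up with a much stronger association
# than a cue followed by reward only half the time.
import random

def associative_strength(p_reward, trials=500, alpha=0.1):
    """Return the cue's learned association after `trials` presentations,
    where the cue is followed by reward with probability `p_reward`."""
    v = 0.0
    for _ in range(trials):
        outcome = 1.0 if random.random() < p_reward else 0.0
        v += alpha * (outcome - v)   # prediction-error update
    return v

random.seed(0)
print(associative_strength(1.0))  # consistent pairing   -> close to 1.0
print(associative_strength(0.5))  # pairing half the time -> around 0.5

The cue paired with reward on every trial ends up near the maximum association, while the "jingles half the time" cue settles around the middle, which is why the automatic phone-checking habit weakens.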

Hope that helped. :)
 
  • #3
Thanks! That's so, so, so helpful!
 

What is the difference between a marker and a secondary reinforcer?

A marker is a sound or signal used to communicate to an animal that it has performed the desired behavior and will receive a reward. A secondary reinforcer, on the other hand, is a stimulus that has been paired with a primary reinforcer (such as food) and has acquired reinforcing properties of its own through that conditioning.

How are markers and secondary reinforcers used in animal training?

Markers are used as a way to bridge the time between the desired behavior and the delivery of the reward. This helps the animal understand which specific behavior is being rewarded. Secondary reinforcers, on the other hand, are used to maintain and strengthen the behavior through association with the primary reinforcer.

Can a marker also be a secondary reinforcer?

Yes, in some cases a marker can also become a secondary reinforcer. For example, if a trainer consistently uses a clicker as a marker and always follows it with a food reward, the clicker can become a secondary reinforcer for the animal.

What are some examples of markers and secondary reinforcers?

Examples of markers include a clicker, a whistle, or a specific word or sound. Secondary reinforcers can include a variety of stimuli such as a toy, praise, or even a social interaction with the trainer.

How do markers and secondary reinforcers differ from primary reinforcers?

Primary reinforcers are innate, biological needs such as food, water, and shelter. They do not need to be conditioned and are inherently rewarding for animals. Markers and secondary reinforcers, on the other hand, need to be paired with a primary reinforcer in order to become effective in training.
