As highlighted in a July 2015 blog, Discomfort Triggers Behaviour Change, wildlife demand reduction campaigns can learn much from the decades of work done by anti-smoking and road safety groups. While these were the pioneers in behaviour change advertising, more recent examples include campaigns highlighting the tragedies of domestic violence and mental illness, which are helping to overcome the taboos around discussing these issues openly.
To further understand the value these sectors can offer in educating conservation groups about behaviour change campaigns, take a look at a November 2016 blog, Empirical Evidence Shows The Way.
The work of anti-tobacco and road safety campaigners also offers insights into how to evaluate the impact of behaviour change campaigns; they have been researching this for decades.
While researching how to conduct a quantitative evaluation of behaviour change, BTB found two techniques worth exploring: 1) the Randomised Response Technique (RRT) and 2) the Crosswise Model. More detail about these methods can be found here.
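The core idea behind RRT is that each respondent's answer is partly randomised, so no individual answer reveals anything sensitive, yet the true prevalence can still be recovered from the aggregate. As a minimal sketch of the forced-response variant (the prevalence, sample size and coin probability below are hypothetical, chosen only for illustration):

```python
import random

def simulate_rrt(true_prevalence, n, p_truth=0.5, seed=42):
    """Forced-response RRT: with probability p_truth the respondent answers
    the sensitive question truthfully; otherwise they are instructed to
    answer 'yes' regardless of the truth, so a 'yes' is never incriminating."""
    rng = random.Random(seed)
    yes_count = 0
    for _ in range(n):
        is_user = rng.random() < true_prevalence
        if rng.random() < p_truth:
            yes_count += is_user      # truthful answer
        else:
            yes_count += 1            # forced 'yes'
    observed = yes_count / n
    # Invert the mixture: observed = p_truth * pi + (1 - p_truth) * 1
    estimate = (observed - (1 - p_truth)) / p_truth
    return observed, estimate

observed, estimate = simulate_rrt(true_prevalence=0.10, n=10_000)
```

The Crosswise Model works on the same aggregate-inversion principle, pairing the sensitive question with an innocuous one of known prevalence instead of using a randomising device.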
Critical: Measure What Is Relevant
What is critical in any evaluation is that it measures what is relevant, rather than simply following the habit of measurement for measurement's sake, something done far too often across all sectors, including business, government and academia. Currently, too much importance is placed on measuring and too little on understanding. The two don't yield the same insights because we generally measure what can easily be measured, not necessarily what is relevant.

One way the conservation sector conducts irrelevant research is by surveying people who lack the financial means to purchase illegal wildlife products. For example, a recent survey of supposed consumers of rhino horn in Viet Nam interviewed people whose average salary was less than US$300 per month. This makes no sense when the price of rhino horn is quoted at more than US$65,000 per kg; the research was a perfect example of measurement for measurement's sake, neither useful nor relevant.
The Evaluation of Breaking The Brand’s 4th Campaign
In rolling out campaign four, Breaking The Brand had hoped to carry out both quantitative surveys and qualitative interviews with the target group in Viet Nam. The interviews and the survey were designed to be invitation only, to ensure that the right demographic was targeted for evaluation purposes. The right demographic, by Breaking The Brand's definition, is the group of people who, should they choose to purchase rhino horn, can afford genuine rhino horn and are unlikely to be buying fakes. To clarify, agreeing to take part in the survey in no way indicated that the participant/interviewee was a rhino horn user, simply that they belonged to the demographic group wealthy enough to afford genuine rhino horn.
Stage 1: Quantitative Evaluation (Survey)
Breaking The Brand chose to use a survey model regularly used to assess the impact of anti-tobacco and road safety campaigning. This approach includes measuring the perceived effectiveness of the campaign among the target groups. The evaluation measures four sub-scales:
- ‘Message Acceptance’
- ‘Negative Emotion’
- ‘Perceived Effectiveness’
- ‘Behavioural Intention’
The process first checks that the person is paying attention to the campaign in question. The next stage checks whether the message in the advert is personally relevant to them and their peer group. The evaluation then checks for an emotional and/or behavioural response.
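As a minimal sketch of how such an evaluation might be scored, assuming each sub-scale is measured with several Likert items (the item values and the 1-5 scale below are hypothetical, not taken from the actual survey), each sub-scale score is simply the mean of its items:

```python
# Hypothetical 1-5 Likert responses from one respondent, grouped by sub-scale.
responses = {
    "message_acceptance":      [4, 5, 4],
    "negative_emotion":        [3, 4, 4],
    "perceived_effectiveness": [5, 4, 4],
    "behavioural_intention":   [2, 3, 3],
}

def subscale_scores(responses):
    """Score each sub-scale as the mean of its item responses."""
    return {scale: sum(items) / len(items) for scale, items in responses.items()}

scores = subscale_scores(responses)
```

Averaging these per-respondent scores across the sample would then give a campaign-level reading for each of the four sub-scales.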
The basis for the invitation only, SurveyMonkey design is outlined in the linked document below. The aim was to have between 50-150 invited members of the target group to complete an anonymous online evaluation form (in both English and Vietnamese) on the effectiveness of Breaking The Brand advert. To achieve these numbers at least 500 (warm) people from the target demographic group would need to be invited.
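The invitation figure follows from an assumed response rate: 500 invitations yields 50 to 150 completions if roughly 10-30% of invitees respond (our reading of the figures above; the source does not state a rate). The arithmetic can be sketched as:

```python
import math

def invites_needed(target_completes, response_rate_percent):
    """Invitations required to expect `target_completes` responses at an
    assumed response rate (given as a whole percentage), rounded up."""
    return math.ceil(target_completes * 100 / response_rate_percent)

# At an assumed 10% response rate, even the lower target of 50 completed
# surveys requires 500 invitations; 150 completions at 30% needs the same.
invites_low = invites_needed(50, 10)
invites_high = invites_needed(150, 30)
```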
Stage 2: Qualitative Evaluation (Interviews)
The aim was to find between 10 and 25 members of the campaign target group to participate in one-on-one, face-to-face interviews. The method was a free-flowing, face-to-face interview of 20-30 minutes. These interviews were to be conducted with business people in Viet Nam (Hanoi and Ho Chi Minh City) with high disposable income.
As previously stated, we know from over 15 years' experience in executive coaching that the quality of the information you get from interviewing people you have never met before, and who may have good reason not to trust you (especially if they didn't initiate the coaching themselves), depends entirely on how you approach the interview.
The key elements are mutuality, vulnerability and genuine interest in the other person. These are usually not in place when interviews are conducted by market research agencies or university researchers. They tend to approach interviews either from a position of (expert) superiority or maintain their distance to be an ‘objective observer’. Neither approach has a great chance of success in getting people to truthfully talk about behaviours that may be illegal, subject to social stigma or contrary to established norms.
What works is the opposite. Establish mutuality as quickly as possible: find out the level of thinking and decision making the person operates from and talk in their language. This is not a skill we learned overnight, but it can be taught and learned through practice. It goes hand-in-hand with showing vulnerability. Instead of letting our egos dominate the conversation, we leave them outside the door before we enter and greet the person. That ensures the conversation is about them, not us; that they are the most important person in the frame; and that we can be playful, humble, curious and challenging all at the same time. We can play dumb and ask seemingly trivial questions to verify the assumptions we are making about the person and how they see the world. We can jokingly throw suggestions into the mix and observe the response to judge whether we have hit a nerve. We have the sensory acuity to see when the interviewee is becoming nervous because they feel they have revealed too much, and we do what is necessary to neutralise their concern so they become comfortable again and feel safe to tell us more.
Once you have acquired a good understanding of the user groups and their motivations and patterns of use, you should calibrate your findings with some members of the target group. Feed back your observations and look for confirmation or (polite) disagreement. In some cultures it is considered impolite to disagree, so make sure you know how to navigate such cultural obstacles. This is discussed in more detail in the Motivations to Use section on the How to Create a Demand Reduction Campaign page.
If conservation groups are looking to do these types of interviews, one option may be to enlist the support of experienced sociologists and anthropologists, who know how to build trust and appear non-judgemental and non-threatening. The interviews are best driven by natural curiosity, exploring the beliefs and values of the users and how they perceive the context and the choices they have. Too often we hear that, to save money, large conservation groups buy in the 'cheapest resource' to do this: students or people straight out of university. When the target group is wealthy (males), such interviewers have little (or no) status with the target group.
While Breaking The Brand was able to find between 10 and 25 people for the qualitative interviews, none of the international companies we approached with bases in Hanoi and Ho Chi Minh City agreed to support the quantitative evaluation. We will keep trying to enlist the support of these and other companies for future evaluations.
Breaking The Brand is, however, happy with the evaluation framework we have researched and created. We hope that other organisations will find it useful; if anyone has any questions about the evaluation process outlined here, please don't hesitate to contact us via email@example.com