
Children and young people’s experiences of completing mental health and wellbeing measures for research: learning from two school-based pilot projects



In recent years there has been growing interest in child and adolescent mental health and wellbeing, alongside increasing emphasis on schools as a crucial site for research and intervention. This has coincided with an increased use of self-report mental health and wellbeing measures in research with this population, including in school-based research projects. We set out to explore the way that children and young people perceive and experience completing mental health and wellbeing measures, with a specific focus on completion in a school context, in order to inform future measure and research design.


We conducted semi-structured interviews and focus groups with 133 participants aged 8–16 years following their completion of mental health and wellbeing measures as part of school-based research programmes, using thematic analysis to identify patterns of experience.


We identified six themes: Reflecting on emotions during completion; the importance of anonymity; understanding what is going to happen; ease of responding to items; intensity of completion; and interacting with the measure format.


Our findings offer greater insight into children and young people’s perceptions and experiences of reporting on their mental health and wellbeing. Such understanding can be used to support data collection procedures in child and adolescent mental health research that are both more ethical and more robust. We offer several practical recommendations for researchers, including for facilitating this process in a school context.


The mental health and wellbeing of children and young people (CYP) has become an international priority in recent years [1, 2], and there is growing recognition of the need for research in this area [3,4,5,6,7]. Self-report measures are often used across this research agenda, given emerging evidence that CYP are able to report and describe their own health experiences [8,9,10]. Informant discrepancies between reports of CYP and their parents, once seen as attributable to differences in “accuracy”, are now more commonly thought to reflect differences in perspective, with CYP offering valid and important insights into their own health [11,12,13]. This also reflects an increased emphasis on the voice of CYP in research and policy, with a “no decision about me without me” approach frequently adopted [14,15,16,17,18].

Given growing engagement and involvement of CYP in research, a range of general guidance has become available, offering both practical and ethical methodological advice (e.g., [19,20,21]). However, as noted by Crane and Broome in a recent review of the literature [22], there are particular aspects and types of research participation that can affect the way that CYP view and cooperate with research procedures. For instance, a focus on health compromising behaviours (e.g., drug use or suicidal ideation) can prompt suspicion around purported confidentiality procedures [23], while trust in researchers may influence level of cooperation in participation [24]. Investigating potentially sensitive topics, such as mental health, entails a range of considerations, given both ethical concerns regarding participants’ wellbeing and data implications relating to the reliability and validity of results [9, 25]. Social desirability, for instance, can be a central issue with both adults and younger participants, necessitating considerations around anonymity in the context and mode of data collection [25,26,27]. The perceived risks of asking about sensitive topics, such as distress, disclosure, and non-response rates, can sometimes overshadow the potential societal benefits of this type of research [28]. Research into sensitive topics, while encompassing potential risks, can be of great importance for policy and practice with CYP, whereas neglecting such research may contribute to avoidance and stigma at a societal level [9]. Indeed, there have been questions regarding the extent of impact of asking about sensitive topics; for instance, Langhinrichsen-Rohling et al. [9] encouraged a distinction between temporary distress in relation to completing measures and the unlikely event of lasting psychological harm.

Studies in this area are frequently conducted in schools, both for epidemiological and evaluative purposes, given a growing emphasis on schools as a context for prevention and promotion [29, 30]. From an ethical standpoint, past research has demonstrated additional challenges when engaging CYP in school-based research. In particular, obtaining valid consent in this context is complicated by the way pupils are generally afforded little choice in how they spend their time in school, meaning that research participation can be misconstrued as compulsory [31,32,33,34]. Moreover, a reliance on teachers to introduce and guide pupils through the process of completing measures has been noted as potentially problematic, as they are unlikely to be able to facilitate this process as comprehensively as a researcher involved with the project [35]. There is limited understanding of how the school environment may influence data quality, warranting investigation; for instance, completion of measures alongside peers in a classroom may influence responses to such measures, as past research indicates that the social environment can affect responses to sensitive and socially desirable items [36, 37].

At present, there is relevant literature relating to the experience of CYP research participation more generally [38, 39], as well as school-based research engagement [31,32,33,34,35] and mental health measure completion for clinical purposes [40]. However, to our knowledge, there is no prior research exploring CYP’s experiences of completing mental health and wellbeing measures for school-based research (though a recent study explored school-based completion of self-harm measures [41]). As researchers, we have a responsibility to explore and understand how self-report processes are experienced by CYP in mental health research, including in particular contexts, in order to offer appropriate procedures that are ethical, reliable, and valid, and that can meet the needs of this group.

The current study

We set out to explore the way that CYP perceive and experience completing mental health and wellbeing measures, with a focus on completion in a school context. We focused specifically on completion (i.e., directly responding to measures as participants in a research project) to capture perceptions and experiences of the full experience of engaging with this aspect of research. We have sought to centralise the CYP voice in this study by focusing explicitly on CYP’s perceptions and experiences and by co-authoring the study with a young person (HM, the fifth author; note that HM was independent of and older than the participants in the current study). HM is an expert by experience, having acted as an advisor for the National Health Service (NHS)’s mental health services as well as mental health charities throughout their adolescence and young adulthood, and so was well suited to the aim of the current study.



We adopted an exploratory qualitative design, focusing on interview and focus group data pertaining to the completion of an integrated measurement framework including quantitative mental health and wellbeing measures. These qualitative data were gathered as part of the piloting processes for two school-based projects, each of which had distinct but similarly focused measurement frameworks that were administered through similar procedures (as detailed below). Merging qualitative data across two projects is valuable as it allows for findings that capture more general experiences of measure completion, rather than experiences grounded in any single set of measures or context. Our full sample drawn from these projects comprises 133 CYP aged eight to 16 years. This broad age range allows insight into how researchers can facilitate experiences among this group as a whole, rather than within any one age group.

Research Project 1 (RP1)

Project overview

We used data collected during a formative pilot of the Wellbeing Measurement Framework (WMF), an inventory of measures designed to access a range of mental health and wellbeing indices; specific measures are shown in Table 1. The WMF was designed for use in secondary schools taking part in HeadStart, a 5-year, £58.7 million programme set up by The National Lottery Community Fund exploring ways to improve young people’s mental health and wellbeing. Note that piloting was carried out in non-HeadStart schools and so participants here had no engagement with the wider programme.

Table 1 Measures completed by participants


Sixty-five participants aged 10 to 16 years took part in focus groups for the piloting of the WMF at eight schools (five mainstream and three specialist). Participants volunteered to participate in focus groups after completing the measurement framework for piloting. As these focus groups were part of a formative piloting process, participants took part anonymously and detailed demographic data were not requested.

Measure completion process

The WMF (as shown in Table 1) included measures focused on mental health symptoms, wellbeing, stress, and factors associated with positive outcomes (e.g., family support). Each individual measure was presented sequentially, with participants clicking through to the next measure after each one. Measures comprising more sensitive items were limited in number by prioritising those most important for addressing key research questions, and measures mostly comprising positively phrased items were presented at the beginning and end of the overall measurement framework. As data collection was for research purposes only (rather than as a screening procedure), data were collected confidentially.

Pupils completed the measurement framework in their education settings, in classrooms with computers. At least two weeks prior, pupils and their parents/carers were provided with an information sheet outlining details of the research, the nature of participation, details of data storage, usage, and confidentiality, and contact details, along with an opt-out consent form. Immediately prior to completion, pupils were presented with this information in age-appropriate language, including reiterations that participation was voluntary and data would be treated confidentially (including that researchers did not work at their school, and that parents and teachers would not see their answers). Pupils then gave informed assent by ticking a box to proceed. Researchers facilitated the administration of the measurement framework, reiterating key information, guiding online access, and addressing queries; teachers were also present to offer support in many, though not all, cases. Schools were advised to allocate a standard lesson (i.e., 45–60 min) for pupils to complete the measurement framework.

Qualitative data collection

Eight focus groups were conducted. In one class per school, the facilitating researchers asked for volunteers to engage in focus groups immediately following completion of the measurement framework. Researchers carried out focus groups in private rooms in participants’ settings, with group size ranging from six to 11. As focus groups were primarily carried out for formative piloting of the WMF, these sessions were not audio recorded; instead, a second researcher took field notes throughout, documenting participants’ comments as closely as possible.

Focus groups enable participants to explore, compare, and contrast their perceptions and experiences with one another, allowing nuanced discussion and clarification [42, 43]. Researchers used a semi-structured topic guide, which facilitated discussion of key topics alongside unanticipated themes [44]. The topic guide (presented in Table 2) included 11 open-ended questions and probes focused on various aspects of completion, namely understanding of items and wording, likes/dislikes of measures and items, and perceptions of length and format. Copies of the measurement framework were provided to avoid reliance on recall and to facilitate specificity in comments.

Table 2 Interview and focus group questions

Participation in focus groups required opt-in assent from pupils and opt-out consent from their parents/carers. At the beginning of sessions, researchers verbally reminded participants of key information, including an overview of the project and the nature of participation, and reiterated that participation was entirely voluntary. Ethics approval was granted by the main institute’s Research Ethics Committees (Reference number 8097/002).

Research Project 2 (RP2)

Project overview

We used data gathered within a feasibility study for the Education for Wellbeing (EfW) programme, which trialled and evaluated five universal mental health interventions in English primary and secondary schools, commissioned by the Department for Education [45, 46]. Of these five interventions, three aimed to reduce emotional difficulties and two aimed to increase help-seeking intentions.


Sixty-eight participants aged eight to 15 years (M = 11.88; SD = 2.06) took part in interviews and focus groups across 10 EfW feasibility study schools in South East England. In RP2, 66% (n = 45) of participants were female and 34% (n = 23) were male, while 45% identified themselves as White British.

Measure completion process

Measurement frameworks were tailored to assess intended intervention outcomes and mechanisms, and so included a range of mental health indices. The frameworks differed slightly across age groups for RP2, with versions for both primary-aged (8–11 years) and secondary-aged (11+ years) participants (specific measures presented in each version shown in Table 1). In both versions, each individual measure was presented sequentially, with participants clicking through measures one at a time. Measures comprising mostly positively phrased items were presented at the beginning and end of the framework. As data collection was for research purposes only (rather than as a screening procedure), data were collected confidentially. The measurement framework was administered both before and after intervention delivery to evaluate effectiveness; the qualitative data used here focus on experiences of pre-intervention completion.

Pupils completed the measurement framework in classrooms with computers (prior to any intervention). At least two weeks prior, pupils and their parents/carers were provided with an information sheet outlining details of the research, the nature of participation, details of data storage, usage, and confidentiality, and contact details, along with an opt-out consent form. Teachers facilitated sessions with instructions for facilitating online access and key information to reiterate to pupils. Pupils were also presented with key information in age-appropriate language immediately prior to completion, including reiterations that participation was voluntary and data would be treated confidentially (including that researchers did not work at their school, and that parents and teachers would not see their answers). Pupils then gave informed assent by ticking a box to proceed. Schools were advised to allocate a standard lesson (i.e., 45–60 min) for completion of the measurement framework.

Qualitative data collection

Thirteen interviews and 11 focus groups were conducted. These sessions focused on experiences of completing the measurement framework as well as wider aspects of the project, and so were conducted 2–4 months after pre-intervention completion of measurement frameworks to allow for intervention delivery (but prior to post-intervention completion). Pupils volunteered to participate in interviews and focus groups (e.g., by submitting an expression of interest form provided by teachers). Sessions were carried out by researchers in private rooms within participants’ settings and were audio-recorded and transcribed verbatim, with group sizes ranging from two to five participants.

Both one-to-one interviews and focus groups were conducted. While interviews facilitate detailed exploration of individual perceptions and experiences, focus groups allow participants to explore, compare, and contrast such perspectives with one another [42, 43]. Researchers used a semi-structured topic guide. Most questions focused on participants’ experiences of interventions, as this was the primary focus of qualitative exploration in RP2, but a sub-section of seven questions focused exclusively on experiences of completing the measurement framework, namely likes/dislikes of measures/items and the completion experience, ease/difficulty of completion, perceptions of length and format, and suggestions for improvement (see Table 2). Copies of the measurement framework were provided to avoid reliance on recall, particularly given the time lapse after completion, and to ensure specificity in comments.

For qualitative data collection, information sheets were provided for participants. Participation required opt-in assent from pupils and opt-in consent from parents/carers. At the beginning of sessions, researchers verbally reminded participants of key information, including an overview of the project and the nature of participation, and reiterated that participation was entirely voluntary. Ethics approval was granted by the main institute’s Research Ethics Committees for qualitative data collection for the feasibility study of the EfW programme (Reference number 7963/003).

Summary of methods

In total, the current study draws on 32 data sources (i.e., interviews and focus groups) with 133 participants aged eight to 16 across RP1 and RP2. A summary of the methods across the two projects is shown in Table 3.

Table 3 Summary of methods across projects


A thematic analysis was conducted to identify group patterns across the data, utilising Braun and Clarke’s six-step approach [47]. An inductive approach was adopted given the exploratory nature of the study, generating themes from the data themselves rather than examining data in relation to existing theoretical models. The first three authors (OD, EA, and RM) familiarised themselves with the dataset by reading through each of the data sources and then generated initial codes across 60% of the dataset by systematically coding extracts in NVivo (Version 11; [48]). These three authors then reviewed the coding together to agree upon an initial set of themes. Next, three further authors (ES, KB, and AM) analysed the remaining 40% of the data against this initial set of themes. Finally, the first author (OD) reviewed, refined, and named final themes in consultation with all authors, including checks against the data and discussion with the study’s young advisor (HM).


We developed six main themes to capture CYP’s perceptions and experiences: Reflecting on emotions during completion; the importance of anonymity; understanding what is going to happen; ease of responding to items; intensity of completion; and interacting with the measure format. Table 4 presents these six themes alongside associated subthemes and illustrative quotes. This section details and explores the main themes, drawing on participants’ quotes to illustrate the particular aspects that they discussed. All themes drew on data from both projects included for analysis. To provide an indication of prevalence across the dataset, we have adopted the following system in reporting these findings: “most cases” where a finding is present for 24 or more of the 32 data sources, “many cases” where this is true of 16–23 sources, “some cases” for 8–15 sources, and “a few cases” where a finding is present for fewer than 8 sources. Note that these counts refer to data sources (interviews and focus groups) rather than individual participant-level responses, as we were not able to reliably distinguish between individuals within audio recordings of focus groups.

Table 4 Overview of main themes and associated subthemes

Reflecting on emotions during completion

In many cases, completion of the measurement framework was seen as an opportunity to “release” feelings and to reflect on one’s emotions, behaviours, and life; for instance, “you got to like, look back upon like previous actions and what, what made you feel that way” (RP2) and “I felt calm when it was completed” (RP1). Some of the participants in these cases highlighted that they did not typically have time to reflect in this way on a day-to-day basis, explaining “you actually got a second to think about it” (RP2) and “if you needed to stop your life for a second just to think what’s going on in my life, is it healthy, am I feeling alright, how am I going to deal with the responsibilities?” (RP2). As part of this, in some cases participants also described identifying elements of their lives that they were less happy with, such as difficulties with emotions; for instance, one participant explained they had “never thought about them [feelings], now I can work on them” (RP1). In some other cases, participants described taking stock of the positive aspects of their lives: “I need to change this. But some I don’t need to change. At least you know, okay, my lifestyle’s all right” (RP2). In a few cases, participants suggested that this might be uncomfortable for CYP who felt that something was difficult or lacking in their life: “those that don’t have friends might [not] want to think about it” (RP1).

There were a few cases where participants explained that completing the measures had made them think differently about how to handle an aspect of their life and wellbeing moving forward, such as reaching out to others or re-evaluating their strategies. For instance, one participant reflected: “it’s improved my anger […] I need to stop showing my temper, find another way to calm myself down to fix that situation” (RP2), while another explained that “you can understand how much you actually might need to talk to somebody or something and not keep it inside if that’s what you were doing” (RP2). In a few cases, participants therefore highlighted the value of providing information and directions for support at the end of the measurement framework.

The importance of anonymity

In some cases, participants commented on the degree of anonymity that they perceived in completing the measurement framework, given that their data would be sent to researchers rather than school staff: “instead of like… answering them to a teacher so they… know […] you had your own code to get on it so no one could like… figure out what you were answering” (RP2). In these cases, participants discussed feeling reassured by this and reflected that this particular feature gave them the chance to privately share their feelings, which felt different from talking to someone: “you’re talking to someone but not actually talking to someone […] they get the thing and the feelings and they won’t know who it is” (RP2). However, in a few cases participants were less certain about the extent to which their responses were anonymous within this system, and wanted to confirm with the researchers that the school could not see their responses or that nobody would check their individual responses, with questions including “can someone use your password and check your answers?” (RP1) and “these [items and responses] just go to you right?” (RP1). Indeed, in a few cases participants believed that somebody would see their responses and would then help them: “if you answer that, others will see and might do something about it” (RP1). Participants therefore suggested that the end of the measurement framework should include an option to disclose that there is something they would like to discuss or need support with, or to opt to share their responses with a teacher: “at the end you should have a box saying if there is anything you want to talk about” (RP1).

As noted previously, participants completed the measurement frameworks on computers alongside their peers, in sessions facilitated by researchers and/or teachers. While in a few cases participants stated they were comfortable with other people being present, in some cases participants described feeling exposed and worrying that someone else might look at their responses: “it could make you feel exposed a little bit” (RP2). Indeed, in a few cases participants reported instances of this: “people would look at your screen. Even though the teachers told you not to, people would still be. I saw people behind me look at each other’s screen” (RP2). Consequently, in a very small number of cases participants said that they might omit information and provide a false response where items related to behaviours seen as culturally or societally unacceptable. For instance, one participant commented in relation to a question about caregiving responsibilities, which featured a definition that included mention of drug and alcohol abuse: “for example Muslims cannot have alcohol or [drugs], so if we say yes, someone from the same religion might judge you” (RP1). Participants gave a number of suggestions as to how this issue could be reduced, namely: (a) allowing completion in smaller groups rather than full classes; (b) ensuring that pupils were not sat directly next to one another; (c) providing a more private space in schools to individually complete the measurement framework (e.g., completing on staff room computers); or (d) sharing the web link with pupils so that they could complete it at home.

Understanding what is going to happen

In some cases participants seemed to value knowing that the overall study might be helpful to others in the future, and felt that they were making a positive contribution in this way: “it was going into somewhere where it could help you know everyone that did have the problems” (RP2). However, in a few cases, participants felt that they had not been fully informed about certain details before they completed the measurement framework. In particular, participants in these cases commented that they were unsure how long the process would take (e.g., “[I would have liked to know] how long it was gonna go on for”; RP2), or how the data would be used (e.g., “I didn’t really know where it was all going”; RP2), and that they had not been given sufficient advance notice that they were taking part (e.g., “we only got like two weeks, no two days notice”; RP2) (as noted in the “Method” section, schools were required to send out information two weeks prior to data collection). In a few cases, participants asked the researchers these questions during the focus groups and interviews because they had not fully understood at the time of completion. While this demonstrates an interest and desire to understand, it also suggests that these participants did not have the level of information that they wanted about the purpose of the research at the time of completion. In a few cases participants were also unclear about whether completing the measurement framework was compulsory, and commented that this should be outlined clearly within the information presented at the start of the measurement framework: “you should say ‘if you don’t want to do it you can leave the room’” (RP1).
In a few cases, participants said they had been uncertain whether they were able to skip specific items if they wanted to (e.g., “were we allowed to skip questions?”; RP1), and felt this too should be made clearer: “in the beginning say they are personal, but you can skip some” (RP1). Participants also suggested including a response option that allowed them to explicitly state they didn’t want to respond to an item: “just have a box so people can say ‘I don’t want to answer’” (RP1).

Ease of responding to items

There were a number of comments around how the complexity of mental health as a construct played a role in participants’ experiences. In some cases this was viewed positively, whereby participants felt it meant that there was variety across the overall measurement framework (e.g., “like, different aspects were included of it”; RP2) and it also gave them the opportunity to think deeply about their feelings and their life (e.g., “you [wouldn’t] really usually think of those questions”; RP2). However, in some other cases participants felt that this complexity made items confusing and difficult to respond to: “I didn’t really understand the question properly” (RP2). Often in these instances, participants said that they had not previously considered the types of issues and feelings that they were being asked about: “what if you’ve never experienced these things?” (RP1). They frequently highlighted this in relation to hypothetical or scenario-based items; for instance, in the Attitudes Toward Mental Illness (Stigma) questionnaire [49], participants were asked whether they agreed or disagreed with a series of statements including “a mentally ill person should not be able to vote in an election”; one participant described these items as “questions that you didn’t even know the answer to” (RP2). Some items were seen as unclear due to vague wording (e.g., double-barrelled items, ambiguous wording) and unfamiliar words, which made them difficult to understand: “I had to ask the teacher like, to explain a question” (RP2). In a few cases participants highlighted that the temporal nature of the measures, where they were asked to reflect on the last month or the last two weeks, was challenging because they had a difficult time looking beyond how they were feeling on that particular day or beyond specific events: “if something happened [in the last two weeks], do I consider that or the whole month?” (RP1).
In a few cases, participants were also confused about the context of measures, as they were not sure whether they should only reflect on how they felt at school given that this was where they were taking part, which they commented should be clarified to avoid confusion: “you should be clear whether it is about home or school” (RP1).

Likert scale response formats were discussed in many cases, but participants were divided in their comments. In a few cases, participants explained that having different response options available gave them choice and the ability to more accurately capture their feelings. One participant reflected: “it wasn’t like yes, no, maybe. It was like I’m not sure, but I’m kind of sure, so it’ll be like a seven” (RP2) while another commented: “I think it was a good way to answer because it has like different variety of answers” (RP2). However, in a few other cases participants found the options confusing, with comments including that they didn’t understand the distinctions and scope between the anchors for response options (e.g., strongly disagree to strongly agree, never to always; “it’s difficult to know what’s between”; RP1), that some had too many response options (e.g., “sometimes there are too many boxes”; RP1), and that these changed across the overall measurement framework (given that multiple measures were combined, each with distinct anchor options). In a few cases, participants said that they wanted a space to provide further detail and explain their responses, as they felt that a numbered response format was restrictive and couldn’t capture the subjectivity of these experiences: “if I could write the answers, it would be… I would’ve explained why” (RP2).

In a few cases, participants reported drawing on others around them for support during completion, particularly their peers: “’cause erm we were discussing it with each other anyway, to know what to say if you didn’t know” (RP2). In these cases there were participants who recalled asking their teacher to explain something, but it was suggested that the teachers were not necessarily equipped to provide support: “they don’t even know how to explain it us properly” (RP2).

Intensity of completion

There were mixed perspectives on the length of the overall measurement framework and the time it took to complete, with participants in focus groups often disagreeing with each other about this feature, across both projects. In some cases participants indicated that this was acceptable, with comments such as “I think it was the right length” (RP2) and “it didn’t take quite long” (RP1). However, in some cases participants commented that it was too long: “it went on forever” (RP1) and in a few cases stated that it could be somewhat repetitive: “some repeated itself and I was like, it’s kind of the same content” (RP2). In many cases, participants drew attention to the sensitive or personal nature of some of the items, particularly those focused on mental health symptoms: “I think the questions to do with emotions and feelings, they are a little bit sensitive” (RP2). There were a few cases where participants said they recognised the necessity of such items: “I found a lot of the questions you know very personal, but which was a good thing because it’s […] about you so you know, not other people” (RP2). However, in some cases participants commented that some items were too personal and that there was a large volume of them; for instance, “they are too personal” (RP1) and “it’s a bit private” (RP1). In a few cases participants presented this as sometimes uncomfortable and intrusive, with comments such as “I felt kind of annoyed they’re asking like personal things” (RP2) and “we might think it is none of your business” (RP1). In a few cases participants suggested limiting the amount of these types of items: “I think just less of, like some of the feelings questions [would help with sensitivity]” (RP2).
In a few cases participants also drew attention to the placement of these types of items within the overall measurement framework, highlighting that as the items were mostly in the middle, this became less difficult over time: “midway through I wanted to stop because it got personal, but I continued and it got better” (RP1). In a few cases participants explained that while they were not entirely comfortable with the personal items, these didn’t affect the overall experience; for instance, one participant explained that initially they felt “a little bit sceptical, because some of the questions were a bit sensitive, but […] all in all, I think it was very helpful” (RP2).

Interacting with the measure format

Despite concerns in a few cases around being observed by peers when completing the measurement framework on the computer, in many cases participants felt that completing the measures on a computer was better than on paper, for several reasons. A number of these participants believed that this made their responses more secure and more likely to reach researchers rather than getting lost: “you believe it’s safer because it’s like, whereas on paper, your answers aren’t going to get lost just like that” (RP2). These participants also felt that this made the process feel generally more anonymous: “it felt like you were talking to someone but you were like talking to a computer instead” (RP2) and meant others wouldn’t be able to work out which responses belonged to them: “[computer was better than paper because] some people can recognise your style of writing” (RP1). These participants also explained that completing the measurement framework on a computer made the overall process feel familiar and accessible (“it was easy ‘cause like I’m used to doing it on the computer”; RP2), and that it was quicker and easier than completing on paper: “computer is much quicker” (RP2). However, in a few cases participants found the visual formatting confusing in places, because they could not always tell which response options related to which item, and they suggested making sure information was clearly spread out: “it was so close together you could make a mistake” (RP1).


Discussion

We set out to explore the way that CYP perceive and experience completing mental health and wellbeing measures, with a specific focus on completion in a school context, and developed six main themes: Reflecting on emotions during completion; the importance of anonymity; understanding what is going to happen; ease of responding to items; level of demand; and interacting with the measure format. Our findings offer a number of implications, both in relation to optimising the experiences of CYP and for obtaining quality data.

Measure completion provides a space to reflect

Many participants described reflecting on their emotions, with some describing a “release” seemingly indicative of a lessening either of an emotion or associated burden. Exploring negative emotions is considered valuable and is central within most therapeutic approaches [50,51,52,53] and although such inspection can encourage rumination and thus prolong negative affect [54, 55], participants did not describe such difficulties. Thus, findings suggest that responding to mental health and wellbeing measures may facilitate positive reflective processes, rather than distress as sometimes feared with sensitive topics. This complements and extends previous indications that responding to such measures may at worst cause temporary distress and is unlikely to induce lasting psychological harm [9]. The structural design of our measurement frameworks may have facilitated this (e.g., placement of measures with more sensitive items in the middle of the overall measurement framework, and a limited number of sensitive items). Such considerations may be important in developing measures and integrated measurement frameworks.

Findings highlight researcher responsibilities to CYP after completion. The emotional reflection processes described, and cases where participants reported wanting to make changes to their life, indicate a need to adequately support CYP to make disclosures or seek support after completion (e.g., having pastoral school staff available). Help-seeking research has drawn on the Theory of Planned Behaviour [56] to emphasise the importance of help-seeking intentions for behaviour change, but also highlights barriers including self- and perceived stigma and low help-seeking efficacy among CYP (e.g., see [57, 58]). Here, in both projects the research teams provided teachers with guidance regarding signposting of support following CYP completion of the measurement framework where appropriate. However, as suggested by participants, researchers could also provide such information directly to CYP at the end of a measurement framework and seek to equip teachers to create de-stigmatising classroom environments that encourage help-seeking.

Facilitating informed participation

Findings offer insight into several issues and misinterpretations that may arise when CYP engage with participant information, which can influence their experience of the participation process. Firstly, we note that some participants felt they had received insufficient information and prior warning, despite effort from researchers to provide detailed information sheets and two weeks’ notice prior to participation. Similarly, some participants believed someone would see their responses and offer support, which is worrying and warrants careful attention. We note that clear reiteration of confidentiality processes and signposting are key in mitigating this specific misunderstanding, including offering reminders at the end of measure completion; participants did also suggest including an option to disclose difficulties and request support, but this would require careful collaboration with schools to ensure requests are consistently followed through. Taken together, the issues noted above highlight scope for misinterpretation of information, indicating that participant information sheets may not be understood, trusted, or read. Alternative approaches such as video information presentation and provision of clear lesson plans for teachers may better aid understanding and reduce scope for misinterpretation.

Concern about the ambiguous nature of “informed consent” in school-based research is well documented [31,32,33,34, 59, 60]. Pupil participation in day-to-day classroom activity is generally compulsory, meaning that research engagement becomes “just another piece of schoolwork” imbued with an assumed lack of choice [31,32,33,34], perhaps especially when teachers are the ones introducing the research in large-scale studies. By the time of participation, researchers have negotiated access through gatekeepers in positions of control over CYP, namely teachers and parents, meaning that participation becomes a “fait accompli” rather than a free choice [31, 34, 59, 60]. Although our participants did not directly draw such links, we note that concerns about being able to opt out or skip items may reflect this context. The power dynamic of the classroom could perhaps be mitigated by having non-teaching staff (e.g., pastoral staff) facilitate participation, which could reduce expectations that participation is compulsory, and by making other activities available to demonstrate that there is a genuine choice.

Perceptions that teachers were not knowledgeable or equipped to offer support also point to issues for CYP in accessing help to understand their participation. Here, we implemented several changes following piloting, including developing “crib sheets” of frequently asked questions and relevant information, though we note that not all teachers may engage with such materials given wider workload demands. It may also be important to ensure that such guidance clearly explains ethical processes and boundaries alongside more practical information, so that teachers can provide further guidance and reassurance around issues such as confidentiality to reduce misinterpretation. The focus on mental health may itself be a barrier in this context, given that teachers do not always feel confident in supporting pupil mental health and wellbeing [61,62,63]. The presence of pastoral staff may lessen such issues and facilitate access to informed support as needed.

Confidentiality and privacy

Findings offer insight into confidentiality and privacy concerns among participants in the context of school-based research. At a system level, participants generally felt their responses were confidential and private, reflecting previous indications that self-administered measures (including online measures) are associated with lower social desirability bias given perceived removal from the researcher [25, 64]. Of course, there were exceptions to this, as some did not trust this confidentiality and others thought this would act as a screening procedure, as discussed above. At a more immediate level, findings suggest peers pose a direct privacy concern in a classroom setting, likely intensified by the ongoing connections that participants have with those around them and, for adolescent participants, heightened sensitivity to peer rejection [65, 66]. Findings indicate that environmental context can introduce a source of anxiety and may prompt false or omitted responses. Researchers could work with schools to develop practices minimising such issues; our participants suggested allowing pupils to complete within smaller groups or within spacious seating arrangements to increase privacy. Finally, we note that although some participants suggested completing measurement frameworks at home to facilitate privacy, this reduces the capacity to ensure support is immediately available. Findings also suggest that issues of social desirability may be heightened among particular groups when others are present, as reflected here in some Muslim participants’ concerns about particular items. This reflects previous findings that cultural norms can introduce social desirability bias [67, 68]. Researchers should be aware of such influences in interpreting findings, particularly in the context of diverse and cross-cultural populations and research.
Future research could further explore experiences and barriers across specific groups, including among individuals from diverse ethnic and cultural backgrounds, those with mental health difficulties, and those with additional needs and/or disabilities who may face further practical or cognitive barriers in engaging with measures and/or an integrated measurement framework.

Interpretability and readability of items and response options

Findings highlight barriers in interpreting items, particularly clarity and familiarity, which influence the extent to which participants feel able to respond. Measurement guidance emphasises the importance of interpretability and readability for reliability (e.g., [69, 70]), yet here even commonly used measures (e.g., the Strengths and Difficulties Questionnaire [SDQ]) were not always clearly understood due to features including unfamiliar vocabulary and double-barrelled items (e.g., “I fight a lot. I can make other people do what I want”; SDQ). Though it is advised that all measures (even for adults) should match the typical reading comprehension of a 12-year-old [69], readability studies have shown that CYP mental health measures are frequently not age-appropriate [71, 72]. Findings emphasise that measure developers should carefully consider item readability to ensure age-appropriateness. For researchers adopting pre-existing measures, this highlights the need for piloting regardless of how well validated measures are. Where permitted by developers, researchers could adapt and further validate a measure (e.g., see [73]); where not permitted, researchers could explore alternatives like providing definitions of frequently misunderstood words. Furthermore, although quality guidance advises researchers to specify a time period for respondents (e.g., the last month; [69]), participants found this difficult. Research has shown that the richness of one’s episodic thinking improves in adolescence, while younger children may experience difficulty in immersing themselves in past events [74,75,76]. This may be particularly problematic in reporting mental health and wellbeing, as more emotionally salient events can be easier to remember and re-construct [77, 78]. Taken together with our findings, this suggests that younger participants may over- or under-report their overall level of symptomatology or wellbeing.
Such findings highlight the benefits of age measurement invariance testing when developing and validating CYP measures, as well as methods such as cognitive interviewing to ensure that items effectively target the intended phenomenon [79,80,81,82,83].

In terms of response options, although some participants reported liking the granularity of the Likert scale, others found this restrictive. Indications that some participants did not feel adequately heard within this narrowed response scope raises questions of whether self-report can truly be considered to centralise CYP voice, as is often suggested [8,9,10]. Such comments also highlight that quantitative measures alone are insufficient in fully capturing the thoughts, feelings, and experiences of CYP. Participants suggested including open-text boxes alongside quantitative scales to allow elaboration if desired. Of course, Likert scales are inherently subjective given variation in the way participants interpret both items and response options [84, 85]; thus, opportunities to qualitatively contextualise responses may complement quantitative results. However, this would produce large volumes of data, which should be given careful consideration and would warrant different ethical and safeguarding considerations with CYP. Alternatively, broader mixed methods designs with a separate qualitative strand would facilitate deeper understanding of these phenomena and a fuller representation of CYP voice.

Findings also offer insight into the measurement features that constitute burden to CYP when completing mental health and wellbeing measures, namely length, repetition, and item sensitivity, and how they feel this could be mitigated. Aside from the ethical duty to minimise burden, such issues may affect data quality; for instance, inclusion of highly similar items within and across measures can reduce respondent precision [86, 87]. Measure developers should seek to identify small groups of key items where possible and minimise over-similarity across items [88]. Within integrated measurement frameworks, researchers should consider how measures compare with one another to avoid repetition [89]. Finally, it is inherently difficult to measure mental health constructs without sensitive items, and this does not necessarily mean such topics should not be explored. However, there is a need to be mindful about the extent and distribution of such items, which appeared meaningful here given participants’ comments that items “got better” as they went through the framework, and to take ethical steps such as signposting.

Positive perceptions of computer format

Participants’ reported preference for computer-based participation, rather than paper, reflects previous findings from research with adults [90] and is perhaps unsurprising given current levels of digital literacy among CYP. Here, such comparisons were hypothetical as participants only completed computer-based measures; nevertheless, participants highlighted multiple perceived benefits including greater security, anonymity, familiarity, and accessibility. Some existing research indicates benefits in research with CYP; for instance, Rew, Horner, Riesch, and Cauvin [91] reported higher attention in computer-based completion among school-aged children and suggested that this may feel less like a “test” when completed in schools. However, research indicates data quality issues for computer-based completion; Stieger and Reips [92] found that adults engaged in behaviours associated with lowered data quality, such as changing responses or excessive mouse movement. There is also mixed evidence regarding psychometric effects; though much of this is focused on adults, Patalay and colleagues found item-level differences based on completion mode for the SDQ [93], but not for the Me and My School measure [94]. Currently there is little examination of preferences or differing behaviours across completion mode among CYP, and digital advancements and increased digital literacy among recent generations warrant further up-to-date research.

Summary of recommendations

Participants’ experiences offer a range of implications and practical considerations for researchers collecting self-report data for child and adolescent mental health research, with additional points to consider in school-based research. We have drawn together the various recommendations outlined throughout this discussion for researchers to consider:

  • Present key information to participants in an accessible manner (e.g., videos), as the written information sheets typically used may not be fully digested by participants;

  • Ensure that information clarifies the purpose of data collection and how data will/will not be used, including explicit clarity on procedures of anonymity and confidentiality;

  • Remind participants of the anonymity and/or confidentiality (as appropriate) of their responses at the end of completion along with clear signposting for relevant avenues of support, and encourage schools (or other delivery agent) to facilitate help-seeking after completion;

  • Work with schools to take steps to make clear to CYP that their participation is voluntary rather than compulsory (e.g., having non-teaching staff lead sessions and ensuring alternative activities are available);

  • Clearly articulate to participants that they are able to skip items that they do not want to respond to and reiterate this throughout;

  • Ensure that steps are taken during completion to facilitate privacy, such as completing in smaller groups or more private spaces than in a typical classroom;

  • Pilot measures and integrated measurement frameworks with CYP prior to main project administration, including use of cognitive interviewing in the development of new measures;

  • Work closely with CYP to facilitate readability and interpretability within measures and integrated measurement frameworks, which could be further optimised for a CYP population using age measurement invariance testing and cognitive interviewing; where adaptation is not possible, provide definitions of frequently misunderstood words to facilitate understanding;

  • When integrating multiple measures, inspect overlap and fit across the framework to avoid unnecessary repetitiveness and length;

  • It may be beneficial to structure a measurement framework so that measures comprising mostly positive items are presented at the beginning and end to facilitate a more emotionally positive experience; indeed, recent evidence indicates mood-mitigation activities such as a doodle page at the end of a measurement framework may be helpful following emotionally sensitive measure completion [41];

  • Including a qualitative strand within the overall project may facilitate a deeper understanding of phenomena and ensure prioritisation of CYP voice; and

  • Computer-based administration may be preferable to paper completion for research with CYP.

Strengths and limitations

A key strength of the current study is its focus on how CYP themselves perceive and experience completing mental health measures for school-based research, as well as the inclusion of a young person as a co-author. This direct insight into the perspective of CYP is valuable as it can contribute to a clearer understanding of how researcher practices may be perceived by participants, including scope for ethical and data quality implications such as misinterpretation of key information. As a result, the study is able to offer clear recommendations for practice informed directly by CYP, making a timely contribution to the literature given increased use of self-report mental health measures in a school context. Of course, we note that our findings apply specifically to completion of mental health and wellbeing measures for research purposes, in an education context. As such, we highlight that our findings may not be transferable to other contexts, such as mental health screening in schools or assessment for mental health services, given differences in factors such as anonymity. Similarly, the focus and design of a research project (e.g., epidemiological versus experimental) may affect results; here, data from Research Project 1 came from a pilot sample who completed the measurement framework but were not in the main experimental group (i.e., participants in the current study were not participants in the HeadStart programme), while in Research Project 2 participants were engaged in an intervention linked with the measurement framework. Although the development of crosscutting themes across two projects is a strength, demonstrating that experiences are not necessarily specific to any one framework or project context, we note that we did not directly ask young people about these wider contexts and indeed were not equipped to compare experiences due to imbalances in the volume of data.
Further research should be undertaken to explore how CYP experience completing such measures across a range of contexts and research types, including direct comparisons and exploration of other forms of research engagement such as qualitative engagement.

A number of limitations should be noted. Participants volunteered to engage in interviews and focus groups after completing measurement frameworks, perhaps meaning that those with more positive experiences were more likely to participate, thus potentially affecting the representativeness of findings. Limited demographic information further reduced our ability to assess representativeness or identify differing group perceptions. Finally, as previously outlined, qualitative data were collected two to four months after completion; while copies of the measurement framework were provided to minimise the effects of this, more immediate responses, particularly emotional ones, may have been lost. We also note that although the current study’s use of a broad age range (eight to 16 years) allows insight into this group as a whole, rather than focusing on any one age group, future research could seek to explore experiences in a design that allows direct examination of variation across age groups.


Conclusions

We set out to explore the way that CYP perceive and experience completing mental health and wellbeing measures, with a specific focus on completion in a school context, and developed six main themes. Our findings provide insight into the ways that CYP experience completing such measures for school-based research and offer several implications for how researchers and schools can best facilitate this process. Firstly, our findings demonstrate that asking CYP about their thoughts and feelings relating to mental health does not appear to cause harm or long-term distress, but can instead be a valuable experience that allows emotional reflection. Our study also shows it is critical that participation information is presented in a way that is understandable and accessible to ensure that consent is truly “informed”, particularly in the context of completion in education settings. In terms of data quality, it is important that the time and effort CYP invest in participating leads to quality research that can generate robust evidence relating to child and adolescent mental health. This necessitates careful consideration of CYP and public involvement in the development and planning of measures and integrated measurement frameworks for use in such evaluations. We recommend that researchers make clear where such processes have been undertaken and clarify the steps they have taken to ensure that their data collection processes are designed to best suit the needs of CYP.

Availability of data and materials

The datasets generated and analysed in the current study are not publicly available due to ethical restrictions.


  1. Here we use measure to refer to a singular research instrument designed to measure one or more underlying constructs, item to refer to the individual statements a participant directly responds to within a measure, response options to denote the available answers for a participant to select by way of responding to an item, and measurement framework to describe an integrated set of multiple measures administered jointly to create a multivariate dataset. During qualitative data collection with participants, however, we used “question and answer” for familiarity and ease of discussion.



Abbreviations

CYP: Children and young people

EfW: Education for Wellbeing

NHS: National Health Service

RP1: Research Project 1

RP2: Research Project 2

SDQ: Strengths and Difficulties Questionnaire

WMF: Wellbeing Measurement Framework


  1. Kieling C, Baker-Henningham H, Belfer M, Conti G, Ertem I, Omigbodun O, et al. Child and adolescent mental health worldwide: evidence for action. Lancet. 2011;378:1515–25.

    Article  PubMed  Google Scholar 

  2. Patel V, Flisher AJ, Hetrick S, McGorry P. Mental health of young people: a global public-health challenge. Lancet. 2007;369:1302–13.

    Article  PubMed  Google Scholar 

  3. Research Councils. Widening cross-disciplinary research for mental health. 2017.

  4. UK Clinical Research Collaboration. UK health research analysis 2014. 2015.

  5. World Health Organization. Sixty-sixth world health assembly: comprehensive mental health action plan 2013–2020. Geneva: World Health Organization; 2013.

    Google Scholar 

  6. UK Government. New research to improve treatment for adolescent mental health. 2019.

  7. Wellcome. Wellcome boosts mental health research with extra £200 million. 2019.£200-million.

  8. Deighton J, Croudace T, Fonagy P, Brown J, Patalay P, Wolpert M. Measuring mental health and wellbeing outcomes for children and adolescents to inform practice and policy: a review of child self-report measures. Child Adolesc Psychiatry Ment Health. 2014.

    Article  PubMed  PubMed Central  Google Scholar 

  9. Langhinrichsen-Rohling J, Arata C, O’Brien N, Bowers D, Klibert J. Sensitive research with adolescents: just how upsetting are self-report surveys anyway? Violence Vict. 2006;21:425–44.

    Article  PubMed  Google Scholar 

  10. Riley AW. Evidence that school-age children can self-report on their health. Ambul Pediatr. 2004;4:371–6.

    Article  PubMed  Google Scholar 

  11. Eiser C, Varni JW. Health-related quality of life and symptom reporting: similarities and differences between children and their parents. Eur J Pediatr. 2013;172(10):1299–304.

    Article  PubMed  Google Scholar 

  12. De Los Reyes A. Introduction to the special section: more than measurement error: discovering meaning behind informant discrepancies in clinical assessments of children and adolescents. J Clin Child Adolesc Psychol. 2011;40:1–9.

    Article  Google Scholar 

  13. Castagna PJ, Calamia M, Davis TE. The discrepancy between mother and youth reported internalizing symptoms predicts youth’s negative self-esteem. Curr Psychol. 2019.

    Article  Google Scholar 

  14. Children Act 2004. 2004.

  15. Department of Health. Liberating the NHS: No decision about me without me. 2012.

  16. UN General Assembly. The United Nations convention on the rights of the child. 1989.

  17. Department of Health, NHS England. Future in mind: Promoting, protecting and improving our children and young people's mental health and wellbeing. 2015.

  18. Department of Health and Social Care. Report of the children and young people’s health outcomes forum 2014/2015: culture, engagement and voice theme group. 2015.

  19. Tisdall K, Davis J, Gallagher M. Researching with children and young people: research design, methods and analysis. London: SAGE Publications; 2009.

    Book  Google Scholar 

  20. Alderson P, Morrow V. The ethics of research with children and young people: a practical handbook, 2nd ed. London: SAGE Publications; 2011.

    Book  Google Scholar 

  21. Shaw C, Brady LM, Davey C. Guidelines for research with children and young people. NCB Research Centre. 2011, p. 63.

  22. Crane S, Broome ME. Understanding ethical issues of research participation from the perspective of participating children and adolescents: a systematic review. Worldviews Evid Based Nurs. 2017;14(3):200–9.

    Article  PubMed  PubMed Central  Google Scholar 

  23. Fisher CB. Adolescent and parent perspectives on ethical issues in youth drug use and suicide survey research. Ethics Behav. 2003;13(4):303–32.

    Article  PubMed  Google Scholar 

  24. Woodgate RL, Edwards M. Children in health research: a matter of trust. J Med Ethics. 2010;36(4):211–6.

    Article  PubMed  Google Scholar 

  25. Krumpal I. Determinants of social desirability bias in sensitive surveys: a literature review. Qual Quant. 2013;47(4):2025–47.

    Article  Google Scholar 

  26. Fear NT, Seddon R, Jones N, Greenberg N, Wessely S. Does anonymity increase the reporting of mental health symptoms? BMC Public Health. 2012;12(1):797-804.

    Article  PubMed  PubMed Central  Google Scholar 

  27. van de Mortel TF. Faking it: social desirability response bias in self-report research. Aust J Adv Nurs. 2008;25(4):40–8.

    Google Scholar 

  28. Becker-Blease KA, Freyd JJ. Research participants telling the truth about their lives: the ethics of asking and not asking about abuse. Am Psychol. 2006;61(3):218–26.

    Article  PubMed  Google Scholar 

  29. Fazel M, Hoagwood K, Stephan S, Ford T. Mental health interventions in schools in high-income countries. Lancet Psychiatry. 2014;1:377–87.

  30. Murphy JM, Abel MR, Hoover S, Jellinek M, Fazel M. Scope, scale, and dose of the world’s largest school-based mental health programs. Harv Rev Psychiatry. 2017;25(5):218–28.

  31. Pole C, Mizen P, Bolton A. Realising children’s agency in research: partners and participants? Int J Soc Res Methodol. 1999;2(1):39–54.

  32. Denscombe M, Aubrook L. “It’s just another piece of schoolwork”: the ethics of questionnaire research on pupils in schools. Br Educ Res J. 1992;18(2):113–31.

  33. Graham A, Powell MA, Taylor N. Ethical research involving children: encouraging reflexive engagement in research with children and young people. Child Soc. 2015;29:331–43.

  34. David M, Edwards R, Alldred P. Children and school-based research: “Informed consent” or “educated consent”? Br Educ Res J. 2001;27:347–65.

  35. Strange V, Forest S, Oakley A. Using research questionnaires with young people in schools: the influence of the social context. Int J Soc Res Methodol Theory Pract. 2003;6(4):337–46.

  36. Fendrich M, Johnson TP. Examining prevalence differences in three national surveys of youth: impact of consent procedures, mode, and editing rules. J Drug Issues. 2001;31:615–42.

  37. Johnson TP. Sources of error in substance use prevalence surveys. Int Sch Res Not. 2014;2014:1–21.

  38. Midgley N, Isaacs D, Weitkamp K, Target M. The experience of adolescents participating in a randomised clinical trial in the field of mental health: a qualitative study. Trials. 2016;17:364.

  39. Cooper Robbins SC, Rawsthorne M, Paxton K, Hawke C, Rachel Skinner S, Steinbeck K. “You can help people”: adolescents’ views on engaging young people in longitudinal research. J Res Adolesc. 2012;22:8–13.

  40. Wolpert M, Curtis-Tyler K, Edbrooke-Childs J. A qualitative exploration of patient and clinician views on patient reported outcome measures in child mental health and diabetes services. Adm Policy Ment Health Ment Health Serv Res. 2016;43(3):309–15.

  41. Lockwood J, Townsend E, Royes L, Daley D, Sayal K. What do young adolescents think about taking part in longitudinal self-harm research? Findings from a school-based study. Child Adolesc Psychiatry Ment Health. 2018;12(1):1–13.

  42. Gill P, Stewart K, Treasure E, Chadwick B. Methods of data collection in qualitative research: interviews and focus groups. Br Dent J. 2008;204:291–5.

  43. Kitzinger J. Qualitative research: introducing focus groups. BMJ. 1995;311:299–302.

  44. Wilkinson S. Focus groups. In: Smith JA, editor. Qualitative psychology: a practical guide to research methods. 3rd ed. London: SAGE Publications Ltd; 2015. p. 199–221.

  45. Hayes D, Moore A, Stapley E, Humphrey N, Mansfield R, Santos J, et al. Promoting mental health and wellbeing in schools: examining mindfulness, relaxation and strategies for safety and wellbeing in English primary and secondary schools: study protocol for a multi-school, cluster randomised control trial (INSPIRE). Trials. 2019;20:640–52.

  46. Hayes D, Moore A, Stapley E, Humphrey N, Mansfield R, Santos J, et al. A school based interventions study examining approaches for wellbeing and mental health literacy of pupils in year nine in England: study protocol for a multi-school, cluster randomised control trial (AWARE). BMJ Open. 2019;9:e029044.

  47. Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3:77–101.

  48. QSR International Pty Ltd. NVivo qualitative data analysis software: Version 11. 2015.

  49. Milin R, Kutcher S, Lewis SP, Walker S, Wei Y, Ferrill N, et al. Impact of a mental health curriculum on knowledge and stigma among high school students: a randomized controlled trial. J Am Acad Child Adolesc Psychiatry. 2016;55(5):383–391.e1.

  50. Cooper M, McLeod J. Pluralistic counselling and psychotherapy. London: SAGE Publications Ltd; 2011.

  51. Pascual-Leone A, Greenberg LS. Emotional processing in experiential therapy: why “the only way out is through”. J Consult Clin Psychol. 2007;75(6):875–87.

  52. Greenberg LS, Watson JC. Emotion-focused therapy: coaching clients to work through their feelings. 2nd ed. Washington, DC: American Psychological Association; 2015.

  53. Southam-Gerow MA, Kendall PC. Emotion regulation and understanding: implications for child psychopathology and therapy. Clin Psychol Rev. 2002;22:189–222.

  54. Kross E, Ayduk O, Mischel W. When asking “why” does not hurt. Psychol Sci. 2005;16(9):709–15.

  55. Torre JB, Lieberman MD. Putting feelings into words: affect labeling as implicit emotion regulation. Emot Rev. 2018;10:116–24.

  56. Ajzen I. The theory of planned behavior. Organ Behav Hum Decis Process. 1991;50:179–221. 

  57. Rickwood DJ, Deane FP, Wilson CJ, Ciarrochi J. Young people’s help-seeking for mental health problems. Aust e-J Adv Ment Health. 2005;4:218–51.

  58. Gulliver A, Griffiths KM, Christensen H. Perceived barriers and facilitators to mental health help-seeking in young people: a systematic review. BMC Psychiatry. 2010;10:113–22.

  59. Coyne I. Accessing children as research participants: examining the role of gatekeepers. Child Care Health Dev. 2010;36(4):452–4.

  60. Heath S, Charles V, Crow G, Wiles R. Informed consent, gatekeepers and go-betweens: negotiating consent in child- and youth-orientated institutions. Br Educ Res J. 2007;33(3):403–17.

  61. Hanley T, Winter LA, Burrell K. Supporting emotional well-being in schools in the context of austerity: an ecologically informed humanistic perspective. Br J Educ Psychol. 2019;90:1–18.

  62. Danby G, Hamilton P. Addressing the ‘elephant in the room’. The role of the primary school practitioner in supporting children’s mental well-being. Pastor Care Educ. 2016;3944:1–14.

  63. Kidger J, Gunnell D, Biddle L, Campbell R. Part and parcel of teaching? Secondary school staff’s views on supporting student emotional health and well-being. Br Educ Res J. 2010;36(6):919–35.

  64. Callegaro M. Social desirability. In: Lavrakas PJ, editor. Encyclopedia of survey research methods. London: SAGE Publications; 2011. p. 173–80.

  65. Blakemore SJ. Avoiding social risk in adolescence. Curr Dir Psychol Sci. 2018;27:116–22.

  66. Mulvey KL, Boswell C, Zheng J. Causes and consequences of social exclusion and peer rejection among children and adolescents. Rep Emot Behav Disord Youth. 2017;17(3):71–5.

  67. Johnson TP, van de Vijver FJR. Social desirability in cross-cultural research. Cross-Cult Surv Methods. 2003;325:195–204.

  68. Malham PB, Saucier G. The conceptual link between social desirability and cultural normativity. Int J Psychol. 2016;51:474–80.

  69. Terwee CB, Bot SDM, de Boer MR, van der Windt DAWM, Knol DL, Dekker J, et al. Quality criteria were proposed for measurement properties of health status questionnaires. J Clin Epidemiol. 2007;60:34–42.

  70. Scientific Advisory Committee of the Medical Outcomes Trust. Assessing health status and quality-of-life instruments: attributes and review criteria. Qual Life Res. 2002;11(3):193–205.

  71. Patalay P, Hayes D, Wolpert M. Assessing the readability of the self-reported Strengths and Difficulties Questionnaire. BJPsych Open. 2018;4:55–7.

  72. Jensen SA, Fabiano GA, Lopez-Williams A, Chacko A. The reading grade level of common measures in child and adolescent clinical psychology. Psychol Assess. 2006;18(3):346–52.

  73. Lereya ST, Humphrey N, Patalay P, Wolpert M, Böhnke JR, Macdougall A, et al. The student resilience survey: psychometric validation and associations with mental health. Child Adolesc Psychiatry Ment Health. 2016;10(1):44.

  74. Cole M, Cole SR, Lightfoot C. The development of children. New York: Worth; 2005.

  75. Wang Q, Capous D, Koh JBK, Hou Y. Past and future episodic thinking in middle childhood. J Cogn Dev. 2014;15(4):625–43.

  76. Gott C, Lah S. Episodic future thinking in children compared to adolescents. Child Neuropsychol. 2014;20(5):625–40.

  77. Krauel K, Duzel E, Hinrichs H, Santel S, Rellum T, Baving L. Impact of emotional salience on episodic memory in attention-deficit/hyperactivity disorder: a functional magnetic resonance imaging study. Biol Psychiatry. 2007;61(12):1370–9.

  78. Levine LJ, Safer MA, Lench HC. Remembering and misremembering emotions. In: Sanna LJ, Chang EC, editors. Judgements over time: the interplay of thoughts, feelings, and behaviors. Oxford: Oxford University Press, Inc.; 2006. p. 271–90.

  79. Beatty P, Willis G. Research synthesis: the practice of cognitive interviewing. Public Opin Q. 2007;71:287–311.

  80. Jobe JB. Cognitive research improves questionnaires. Am J Public Health. 1989;79:1053–5.

  81. Drennan J. Cognitive interviewing: verbal data in the design and pretesting of questionnaires. J Adv Nurs. 2003;42(1):57–63.

  82. Willis GB. Cognitive interviewing: a tool for improving questionnaire design. London: SAGE Publications; 2004.

  83. Willis GB, Artino AR. What do our respondents think we’re asking? Using cognitive interviewing to improve medical education surveys. J Grad Med Educ. 2013;5(3):353–6.

  84. Cummins RA, Gullone E. Why we should not use 5-point Likert scales: the case for subjective quality of life measurement. In: Proceedings second international conference on quality of life in cities. 2000. pp. 74–93.

  85. Heine SJ, Lehman DR, Peng K, Greenholtz J. What’s wrong with cross-cultural comparisons of subjective Likert scales? The reference-group effect. J Pers Soc Psychol. 2002;82(6):903–18.

  86. Smith GT, McCarthy DM. Methodological considerations in the refinement of clinical assessment instruments. Psychol Assess. 1995;7:300–8.

  87. Boyle GJ. Does item homogeneity indicate internal consistency or item redundancy in psychometric scales? Pers Individ Differ. 1991;12:291–4.

  88. Turner RR, Quittner AL, Parasuraman BM, Kallich JD, Cleeland CS, Mayo/FDA Patient-Reported Outcomes Consensus Meeting Group. Patient-reported outcomes: instrument development and selection issues. Value Health. 2007;10:86–93.

  89. Rolstad S, Adler J, Rydén A. Response burden and questionnaire length: is shorter better? A review and meta-analysis. Value Health. 2011;14(8):1101–8.

  90. Wijndaele K, Matton L, Duvigneaud N, Lefevre J, Duquet W, Thomis M, et al. Reliability, equivalence and respondent preference of computerized versus paper-and-pencil mental health questionnaires. Comput Hum Behav. 2007;23:1958–70.

  91. Rew L, Horner SD, Riesch L, Cauvin R. Computer-assisted survey interviewing of school-age children. Adv Nurs Sci. 2004;27(2):129–37.

  92. Stieger S, Reips U. What are participants doing while filling in an online questionnaire: a paradata collection tool and an empirical study. Comput Hum Behav. 2010;26:1488–95.

  93. Patalay P, Hayes D, Deighton J, Wolpert M. A comparison of paper and computer administered strengths and difficulties questionnaire. J Psychopathol Behav Assess. 2016;38(2):242–50.

  94. Patalay P, Deighton J, Fonagy P, Wolpert M. Equivalence of paper and computer formats of a child self-report mental health measure. Eur J Psychol Assess. 2015;31:54–61.

  95. NHS Health Scotland, University of Warwick, University of Edinburgh. The Warwick-Edinburgh mental well-being scale (WEMWBS). 2006.

  96. Goodman R, Meltzer H, Bailey V. The Strengths and Difficulties Questionnaire: a pilot study on the validity of the self-report version. Int Rev Psychiatry. 1998;15(1–2):173–7.

  97. Petrides KV, Sangareau Y, Furnham A, Frederickson N. Trait emotional intelligence and children’s peer relations at school. Soc Dev. 2006;15:537–47.

  98. Cohen S, Kamarck T, Mermelstein R. A global measure of perceived stress. J Health Soc Behav. 1983;24:385–96.

  99. Sun J, Stewart D. Development of population-based resilience measures in the primary school setting. Health Educ. 2007;107(6):575–99.

  100. Huebner ES. Initial development of the student’s life satisfaction scale. Sch Psychol Int. 1991;12(3):231–40.

  101. Angold A, Costello EJ, Messer SC, Pickles A, Winder F, Silver D. Development of a short questionnaire for use in epidemiological studies of depression in children and adolescents. Int J Methods Psychiatr Res. 1995;5:237–49.

  102. Deighton J, Tymms P, Vostanis P, Belsky J, Fonagy P, Brown A, et al. The development of a school-based measure of child mental health. J Psychoeduc Assess. 2013;31(3):247–57.

  103. Ravens-Sieberer U, Gosch A, Rajmil L, Erhart M, Bruil J, Duer W, et al. KIDSCREEN-52 quality-of-life measure for children and adolescents. Expert Rev Pharmacoecon Outcomes Res. 2005;5(3):353–64.

  104. Chisholm K, Patterson P, Torgerson C, Turner E, Jenkinson D, Birchwood M. Impact of contact on adolescents’ mental health literacy and stigma: the SchoolSpace cluster randomised controlled trial. BMJ Open. 2016;6:e009435.

  105. Chisholm D, Knapp MRJ, Knudsen HC, Amaddeo F, Gaite L, van Wijngaarden B, et al. Client Socio-Demographic and Service Receipt Inventory—European Version: development of an instrument for international research: EPSILON Study 5. Br J Psychiatry. 2002;177(S39):s28–33.

  106. Ratcliffe J, Couzner L, Flynn T, Sawyer M, Stevens K, Brazier J, et al. Valuing child health utility 9D health states with a young adolescent sample. Appl Health Econ Health Policy. 2011;9(1):15.

  107. Hart LM, Mason RJ, Kelly CM, Cvetkovski S, Jorm AF. “Teen Mental Health First Aid”: a description of the program and an initial evaluation. Int J Ment Health Syst. 2016.



Acknowledgements

Thanks are due to Margarita Panayiotou for sharing her expertise in item and measure development and quality, and for her contributions to data collection, as well as to Tanya Lereya for her role in data collection. Thanks must also go to the children and young people who shared their perspectives and experiences with us for the purposes of this study. We also wish to thank the anonymous peer reviewers for their valuable suggestions, which we feel have aided us in substantially strengthening this piece.


Funding

The piloting of the WMF was funded by the National Lottery Community Fund as part of the evaluation of HeadStart, while the piloting of the EfW framework was funded by the Department for Education as part of the feasibility study for the EfW evaluation. Funding bodies were involved in the overarching design of these evaluations, but played no role in the collection, analysis, and interpretation of data or in writing the manuscript. Funding bodies were, however, given the opportunity to view the manuscript prior to submission to check for accuracy of reporting.

Author information

Contributions

OD led the conception and design of the current study and led the analysis of data and drafting of the manuscript. EA and RM were major contributors to the study design and analytical process and contributed substantially to writing the manuscript. ES led the design of the qualitative piloting processes on which the study draws and made substantial contributions to the design of the current study and the analytical process. HM was a major contributor to the development of findings and helped shape the study's design and manuscript. DH was a major contributor to the design of the qualitative piloting process for EfW and contributed to the study's design and analytical process, as well as reviewing and refining the manuscript. KB and AM contributed to the design and analytical processes and supported the development and refinement of the manuscript. JD was the principal investigator for both main projects and contributed substantially to translating findings into clear recommendations and to revising the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Ola Demkowicz.

Ethics declarations

Ethics approval and consent to participate

Participation required opt-in consent from participants and their parents/carers before any data were collected. Ethics approval was granted by the University College London (UCL) Research Ethics Committees for the piloting of the WMF (Reference number 8097/002) and for qualitative data collection for the feasibility study of the EfW programme (Reference number 7963/003).

Consent for publication

The consent obtained from participants and their parents/carers included consent for publication of their data within written reports.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. The Creative Commons Public Domain Dedication waiver applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Demkowicz, O., Ashworth, E., Mansfield, R. et al. Children and young people’s experiences of completing mental health and wellbeing measures for research: learning from two school-based pilot projects. Child Adolesc Psychiatry Ment Health 14, 35 (2020).



Keywords

  • Mental health outcomes
  • Wellbeing
  • Measurement
  • Child and adolescent mental health
  • Self report
  • School surveys
  • Measure design
  • Research ethics