
Clinicians’ attitudes toward standardized assessment and diagnosis within child and adolescent psychiatry

Abstract

Background

There is a strong call for clinically useful standardized assessment tools in everyday child and adolescent psychiatric practice. Clinicians’ attitudes have been identified as a key facilitating factor when implementing new methods. An exploratory study was conducted to investigate clinicians’ attitudes toward standardized assessment and the usefulness of diagnoses in treatment planning.

Methods

411 mental health service personnel working with outpatient and inpatient assessment and treatment within the specialist child and adolescent mental health services of Stockholm County Council were asked to participate in the study, of whom 345 (84%) agreed to answer a questionnaire. The questionnaire included questions regarding Attitudes toward Standardized Assessment and Utility of Diagnosis. Descriptive analyses were performed and four subscales were compared with data from a similar study in the US using the same instruments. Demographic and professional characteristics (age, working years, gender, education, profession, management position, involvement in assessment, level of service) were studied as predictors of attitudes using univariate and multivariate linear regressions.

Results

Overall, the clinicians had quite positive attitudes and were more positive than those in a similar study conducted earlier in the US. Attitudes differed across several characteristics, but the only characteristic predicting all subscales was profession (counselor, nurse, psychiatrist, psychologist, other), with counselors being less positive than the other groups.

Conclusion

The overall positive attitudes toward standardized assessment are important for the development of evidence-based practice, and our study implies that clinicians in general value and are willing to use standardized assessment. Nevertheless, specific issues such as adequate training and the availability of translated assessment instruments need to be addressed. When implementing new methods in practice, there are general as well as specific resistances that need to be overcome. Studies in different cultural settings are important to further extend knowledge of which barriers are general and which are specific.

Introduction

Over recent decades the field of child and adolescent mental health care has changed, and the demand for structured, systematic and valid information on diagnosis and treatment has increased in order to prioritize and plan the organization of mental health services [1, 2]. Parallel to these changes, health care systems have been influenced by the evidence-based movement, which highlights the importance of using scientific findings in decision-making [3]. An overarching concept in this movement is evidence-based practice (EBP), characterized as a systematic approach that integrates best research evidence and standardized data with clinical expertise, while respecting patient preferences [4,5,6]. Although many different evidence-based initiatives have been undertaken within the field of child and adolescent psychiatry, EBP has so far been implemented only at a slow pace within this specialty [7, 8].

Appropriate diagnosis is essential for providing good medical and psychological treatments and for psychoeducation, i.e. helping patients and their families to recognize and understand symptoms [9,10,11]. Valid and accurate diagnoses are also stipulated in treatment protocols and are prerequisites for planning accurate interventions [10].

Making a diagnosis requires thorough assessment of medical history, symptoms, and function. Yet, traditionally, diagnostic assessment by clinicians has been more or less unstructured, capturing some but not all of the diagnostic criteria described in the disease classifications [12, 13]. A recent study within adult psychiatry showed that clinicians do not collect sufficient information to establish a correct diagnosis [14]. Furthermore, the traditional diagnostic process, and the information it yields, has consequently been subject to considerable variation [15].

The importance of standardized diagnostic interviews in child and adolescent psychiatric practice has been highlighted in several studies [12, 13, 16, 17], as well as within the field of clinical psychology [18]. Standardized diagnostic interviews are assumed to save time and speed up the assessment process by facilitating and clarifying the diagnostic process, detecting comorbidity systematically, yielding reliable diagnoses, and laying a more solid foundation for treatment [10, 19]. Lower use of structured interviews has been related to underestimation of patient acceptance and mistaken assumptions about patients’ feelings [20].

Despite the importance of assessment, most attention in the EBP literature has been given to Evidence-Based Treatments (EBT) and much less to assessment [2, 5, 21]. However, during recent years, the concept of Evidence-Based Assessment (EBA) has been launched as a part of EBP. Mash and Hunsley [22] propose that standardized assessments (SA) are not restricted to standardized interviews and can be conducted for purposes other than determining diagnosis, such as prognosis and prediction, treatment planning and monitoring. Similarly, Christon et al. [23] have proposed how EBA could be part of EBP in the treatment process.

EBA represents a strong call for valid and clinically useful assessment tools in everyday child and adolescent psychiatric practice, both for strengthening the diagnostic process and for enabling ongoing progress monitoring [24, 25]. Nevertheless, a survey among 1,927 psychiatrists and psychotherapists in Switzerland revealed that on average only 15% of patients were assessed using standardized assessment tools [20]. Further, Garland, Kruse and Aarons [1] found that standardized measures or scales were even less frequently used within child and adolescent psychiatric settings; 92% of child psychiatrists indicated they had never used the scores from standardized measures in their clinical practice. An inventory conducted in Sweden found that 39% of all psychiatric units used standardized assessment tools in the diagnostic process, but not frequently, and only 12% did so on a regular basis [26].

A key facilitating factor for the general success of implementing methods or innovations is whether clinicians find the procedures relevant [27]. Earlier studies have shown that clinicians’ incentive for diagnosing is often external, e.g. billing purposes, rather than clinical usefulness, which reduces investment in the assessment process [28, 29]. Concerns about using SA in the assessment process have also been highlighted; the arguments against SA include that it is time consuming, that structured interviews disturb the therapeutic relationship, and that clinical judgment alone is sufficient and more useful [20, 25]. In parallel, a review of therapist-level resistance to EBP showed that psychotherapists believe that they can objectively and without bias perceive the patient’s problem and treatment outcome [30]. Harvey and Gumport [31] have identified obstacles to EBT in general and call for more studies of therapists’ beliefs and preferences among a broader range of mental health professionals. The same call could probably be made about EBA, since even fewer studies have been conducted in that area.

Implementation of new clinical procedures is strongly influenced by clinicians’ attitudes. However, there is not yet enough knowledge about obstacles to the use of standardized tools in diagnostic assessment processes. Large-scale studies of child and adolescent mental health providers from various disciplines and in different countries are needed to inform specific efforts to encourage clinicians to use standardized tools systematically and thereby move toward more evidence-based assessment.

This is an exploratory study aiming to investigate clinicians’ attitudes toward standardized assessments and the usefulness of diagnosis. The research questions are:

  • What are the attitudes of clinicians in secondary mental health care in Stockholm, Sweden toward standardized assessment and the usefulness of diagnosis in treatment planning, and how do they differ from those of a US population?

  • Do Swedish clinicians’ attitudes differ between groups by demographic and professional characteristics?

Method

Participants and setting

In Sweden, the child and adolescent mental health services are divided into two parts: (1) primary mental health care (general physicians and psychologists not licensed as specialists in child and adolescent mental disorders) and (2) specialized mental health care (licensed specialists, i.e. psychiatrists/child psychiatrists and psychologists specialized in mental disorders, working in multidisciplinary teams together with nurses, counselors and others). The present study was conducted within the latter. The participants were mental health care personnel working with outpatient and inpatient assessment and treatment within the non-private specialist child and adolescent mental health care services in Stockholm County Council (CAMHS Stockholm). Each year, approximately 22,000 children and adolescents receive treatment for a mental disorder in one of the six departments in CAMHS Stockholm. This equals nearly 6% of the population under 18 years of age in the catchment area. CAMHS Stockholm consists of 12 outpatient clinics, four intermediate care units mainly working with patients in their homes or other environments, and one inpatient clinic. All 411 mental health service personnel working with assessment and treatment were asked to participate in the study, of whom 345 (84%) volunteered to participate. CAMHS Stockholm also includes seven outpatient clinics specialized in treating e.g. sexual abuse, self-harm, domestic violence, and immigrants with mental health problems, to which patients are referred after initial assessment in the general clinics. Clinicians in these specialized clinics were not included in this survey.

Most participants were female (78%) and the average age was 47.2 years (median 48). The participants had worked within the child and adolescent mental health services for an average of 10.3 years (median 7). The participants were psychologists (49%), counselors with a degree in social work and psychotherapy (22%), medical doctors/psychiatrists (10%), nurses (9%) and clinicians with other occupational backgrounds, e.g. mental health workers and pedagogues with therapeutic training (8%). The majority of participants (90%) had more than 3.5 years of university education. All clinical staff working at CAMHS Stockholm are involved in interdisciplinary assessments at the beginning of a new patient contact, but not all conduct in-depth assessments involving psychological, medical and/or observational tests. The characteristics of the participants are further presented in Table 1.

Table 1 Distribution of participants’ demographic and professional characteristics (n = 345)

Procedure

In each of the participating clinics, the clinic manager distributed a questionnaire either during staff meetings or individually via internal mailboxes. During the period when the survey was conducted, 461 clinicians were employed, although 50 of them did not receive the questionnaire due to circumstances such as long-term sick leave, educational leave or vacation. Clinicians who volunteered to participate completed the questionnaire individually and anonymously and returned it directly to the researchers in sealed envelopes.

Measures

The questionnaire included questions regarding demographic and professional characteristics (independent variables), the Attitudes toward Standardized Assessment (ASA) measure, consisting of three subscales, and the Utility of Diagnosis scale (dependent variables), developed in earlier studies [24, 25]. The scales were translated in collaboration with researchers in Norway and Denmark, and back-translated. One of the original developers of the questionnaire, Dr. Jensen-Doss, audited the back-translation to ensure correct meaning and approved the final translated Swedish version.

Demographic and professional characteristics

The demographic and professional characteristics included age, number of years working within CAMHS, gender, highest educational degree (categorized as PhD; university more than 3.5 years; university less than 3.5 years/other higher education), profession (categorized as counselor; nurse; psychiatrist/MD, including those in specialist training; psychologist; other), management position (unit manager or co-manager of the clinic, or not), degree of involvement in assessments (conducting in-depth diagnostic examinations or not) and level of service (outpatient; intermediate; inpatient). In this context, the clinicians’ psychotherapeutic training was of interest, as CBT (cognitive behavioral therapy) has a long tradition of using assessments [32]. However, since most participants had broad therapeutic training, indicating an eclectic approach, this factor could not be explored in the analysis.

Attitudes toward standardized assessment and usefulness of diagnosis

The ASA questionnaire was originally developed to assess clinicians’ attitudes toward SA in three different areas, each measured by a subscale [25]. In total, ASA consists of 22 items, all rated on a 5-point Likert scale from 1 (strongly disagree) to 5 (strongly agree). The questionnaire measures both positive and negative attitudes towards standardized assessments. Hence, to obtain a uniform scale direction, scores on negatively worded items were reverse-coded to correspond to the positively worded items. For each subscale, the average rating of the included items was calculated.

The ASA subscale Benefit over Clinical Judgment assesses to what extent standardized tools can improve assessment information compared to relying on clinical judgment alone. The scale consists of five items, with internal consistency α = .75 in the present study. The subscale Practicality assesses clinicians’ opinions of feasibility in practice, and consists of 10 items, with internal consistency α = .60 in the present study. The subscale Psychometric Quality assesses clinicians’ beliefs concerning the reliability and validity of standardized measures and how much they value these psychometric properties, and consists of 7 items, with internal consistency α = .69.

Separate from ASA, the Utility of Diagnosis scale assesses clinicians’ opinions regarding the usefulness of diagnosis in their clinical work (e.g. “Making a diagnosis is more important for obtaining services or benefits than for planning of treatment”), since this could affect the willingness to invest in the assessment process. The scale was developed by the same authors as ASA [24] and consists of five items, also rated on a 5-point Likert scale from 1 (strongly disagree) to 5 (strongly agree), but with somewhat lower internal consistency (α = .45) than the subscales included in ASA. When single items were excluded from the scale in further reliability analysis, the internal consistency improved somewhat, to α = .50, and when keeping only three items it improved further (α = .54). However, our judgment was that these improvements were not large enough to motivate changing the scale, and we decided to keep all items of the original scale.
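For readers who wish to reproduce internal consistency figures such as those above, Cronbach’s α can be computed directly from an item-level response matrix. The sketch below is illustrative only: the response matrix is invented and is not the study’s data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the scale totals
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses (rows = raters, columns = items)
responses = np.array([
    [4, 4, 5, 4, 4],
    [2, 3, 2, 2, 3],
    [5, 4, 4, 5, 5],
    [3, 3, 3, 2, 3],
    [1, 2, 1, 2, 1],
    [4, 5, 4, 4, 4],
])
print(f"alpha = {cronbach_alpha(responses):.2f}")
```

Dropping one column at a time and recomputing α corresponds to the item-exclusion reliability analysis described above.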

Data analysis

Prior to analysis we examined the normal distribution of continuous independent and dependent variables using tests of skewness and kurtosis, in which values between − 2 and 2 are considered acceptable, according to Almquist, Ashir and Brannstroem [33]. The two independent variables, age and working years, were somewhat skewed, whereas the four dependent variables, the attitude subscales, fulfilled the criteria for normality.
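As a sketch of this screening step (with simulated rather than the study’s data), skewness and kurtosis can be checked against the ±2 rule using scipy. Note that scipy reports excess kurtosis by default, i.e. 0 for a normal distribution.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def normality_screen(x, limit=2.0):
    """Return (skewness, excess kurtosis, acceptable?) for a variable."""
    s = float(skew(x))
    k = float(kurtosis(x))  # Fisher (excess) kurtosis: 0 under normality
    return s, k, abs(s) <= limit and abs(k) <= limit

# Simulated subscale scores for 345 respondents (illustrative only)
rng = np.random.default_rng(0)
scores = rng.normal(loc=3.2, scale=0.6, size=345)
s, k, ok = normality_screen(scores)
print(f"skew = {s:.2f}, kurtosis = {k:.2f}, acceptable = {ok}")
```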

To explore the first research question, on clinicians’ attitudes regarding standardized assessment and diagnosis and how they differ from a US population, descriptive statistics were computed and the four subscales were compared with data from a similar study in the US [24, 25], using an immediate form of the two-sample t test, ttesti, in Stata [34].
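Stata’s immediate command ttesti computes a two-sample t test from summary statistics alone (means, SDs, group sizes). A rough Python equivalent, also returning the Cohen’s d used in the Results, might look as follows; the numbers in the example are invented, not the study’s figures.

```python
import math
from scipy import stats

def ttest_from_summary(m1, sd1, n1, m2, sd2, n2):
    """Pooled-variance two-sample t test from summary statistics,
    analogous to Stata's ttesti; also returns Cohen's d."""
    df = n1 + n2 - 2
    sp2 = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df   # pooled variance
    t = (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
    p = 2 * stats.t.sf(abs(t), df)                       # two-sided p value
    d = (m1 - m2) / math.sqrt(sp2)                       # Cohen's d (pooled SD)
    return t, df, p, d

# Invented subscale summaries for two samples (not the study's figures)
t, df, p, d = ttest_from_summary(3.60, 0.55, 338, 3.20, 0.58, 500)
print(f"t({df}) = {t:.2f}, p = {p:.4g}, d = {d:.2f}")
```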

In preparation for answering the second research question, on differences between clinicians due to their characteristics, descriptives of the four subscales (means and standard deviations) were first calculated by categories of each demographic and professional characteristic and then tested in an ANOVA with post hoc analysis. The two continuous variables, age and number of working years within secondary mental health services, were dichotomized at the median. As a result of the ANOVA and post hoc analysis, three independent variables were recoded. First, highest educational degree was dichotomized by merging “PhD” and “University more than 3.5 years”, and by merging “University less than 3.5 years” and “Other higher education”. Second, level of service was dichotomized by merging “Outpatient” and “Intermediate” into one category, keeping “Inpatient” as the other category. Third, the profession categories “Nurse” and “Other” were merged into one category.

To answer the second research question, whether clinicians’ attitudes in Sweden differ between groups by demographic and professional characteristics and to what degree the same characteristics predict the attitudes, univariate and multivariate linear regressions were conducted. In the regression analyses, the continuous data for age and working years within secondary child and adolescent mental health services were used [35].
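The univariate-then-multivariate regression strategy can be sketched with statsmodels. The data below are simulated and the variable names illustrative, so none of the coefficients or R² values correspond to the study’s results.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-ins for the study's variables (illustrative only)
rng = np.random.default_rng(1)
n = 345
data = pd.DataFrame({
    "age": rng.normal(47, 9, n),
    "working_years": rng.normal(10, 7, n).clip(0),
    "profession": rng.choice(
        ["psychologist", "counselor", "psychiatrist", "nurse_other"], n),
})
# Attitude score built from the predictors plus noise
data["attitude"] = (
    3.0
    + 0.3 * (data["profession"] == "psychiatrist")
    - 0.2 * (data["profession"] == "counselor")
    - 0.01 * data["working_years"]
    + rng.normal(0, 0.5, n)
)

# Univariate regression: a single predictor
uni = smf.ols("attitude ~ C(profession)", data=data).fit()
# Multivariate regression: the same predictor controlling for the others
multi = smf.ols("attitude ~ C(profession) + age + working_years", data=data).fit()
print(f"univariate R2 = {uni.rsquared:.3f}, multivariate R2 = {multi.rsquared:.3f}")
```

Because the multivariate model nests the univariate one, its R² can only stay equal or increase; what matters is whether individual predictors remain significant once the others are controlled for, as reported in the Results.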

Since age and working years within secondary child and adolescent mental health services were strongly correlated, r(331) = .69, p < .001, we considered excluding one of them from the multivariate analysis. However, this did not change the explained variance, and hence both variables were kept in the model, making it possible to explore the strength of prediction for each of them. In order to compare profession categories, the multivariate regressions were conducted for each pair of categories, one at a time (Psychiatrist/MD vs psychologist; Psychiatrist/MD vs counselor; Psychiatrist/MD vs nurses/other; Psychologist vs counselor; Psychologist vs nurses/other; Counselor vs nurses/other).

Missing data were examined with post hoc analysis of variance for each dependent variable. This showed that participants with missing data did not differ from the others. The number of missing values for each characteristic is presented in Table 1, and since the overall rate of missing data was low, 5% or less, we decided to use listwise deletion.

A significance level of .05 (95% confidence) was used in all analyses. Cohen’s definitions of effect sizes [36] were used to describe the subscale differences in the two-sample t tests: d values of .20, .50 and .80 were interpreted as small, medium and large effects. For the strength of the regression models, R2 values of .02, .13 and .26 were interpreted as small, medium and large effect sizes [37].

Results

Clinicians’ attitudes to standardized assessments and diagnoses

The clinicians’ attitudes to standardized assessments and diagnostic interviews and to the usefulness of diagnosis in clinical work are presented in Table 2.

Table 2 Descriptive statistics of subscales and items for Attitudes toward Standardized Assessment and Utility of Diagnosis in CAMHS Stockholm (point scales, means, standard deviations, N) and comparison to the US (mean, standard deviation, N)

First, the clinicians in CAMHS Stockholm were most positive concerning Psychometric Quality (M = 3.81, CI 3.76; 3.87). According to the confidence intervals, they were less positive toward Utility of Diagnosis (M = 3.60, CI 3.54; 3.66) and even less positive toward Practicality (M = 3.19, CI 3.13; 3.23) and Benefit over Clinical Judgement (M = 3.14, CI 3.07; 3.21). In comparison to the US, clinicians in the Swedish setting were more positive concerning Benefit over Clinical Judgment (p < .001; Cohen’s d = .28) and Utility of Diagnosis (p < .001; Cohen’s d = .71), corresponding to small and medium effect sizes, respectively. No statistically significant differences between countries were found in attitudes concerning Psychometric Quality (p > .05; Cohen’s d = .06) or Practicality (p > .05; Cohen’s d = .00).

Table 2 also presents results at the single-item level; here the Swedish clinicians were most negative concerning the availability of standardized measures in other languages, valid for ethnic minorities.

Differences in attitudes by groups of demographic and professional characteristics

The descriptive results for each attitude subscale are presented by groups of demographic and professional characteristics in Table 3.

Table 3 Means (M) and standard deviations (SD) for clinicians’ attitudes to standardized assessment and utility of diagnosis by groups of demographic and professional characteristics

Demographic and professional characteristics as predictors of attitudes were studied by univariate and multivariate linear regressions; the results are presented in Table 4.

Table 4 Demographic and professional characteristics as predictors of clinician attitudes by four subscales; univariate (one independent variable) and multivariate (controls for all other independent variables) linear regressions

Profession alone explained 9.9% of the variance in the subscale Benefit over Clinical Judgement (F(3, 327) = 11.96, p < .001), a small effect size. Gender (F(1, 329) = 7.73, p < .010) and working years (F(1, 326) = 8.00, p < .001) also had small effect sizes. Entering all predictors into a multivariate regression analysis, they together explained 17.3% of the variance (F(10, 281) = 7.08, p < .001), a medium effect size. Most predictors from the univariate analyses remained significant, except for age and the differences between psychiatrists and the other professions (Table 4).

According to the univariate analysis, profession explained 5.9% of the variance in the Practicality scale (F(3, 327) = 6.81, p < .001), a small effect size. Entering all predictors into a multivariate regression analysis, they together explained 6.2% of the variance (F(10, 281) = 2.94, p < .001), a small effect size.

Profession alone explained 12.3% of the variance in the subscale Psychometric Quality (F(3, 329) = 15.40, p < .001), a medium effect size. Clinicians’ age explained 6.2% of the variance (F(1, 336) = 22.33, p < .001), working years explained 3.5% of the variance (F(1, 328) = 11.74, p < .001), and whether they conduct in-depth assessments or not explained 4.9% (F(1, 328) = 11.74, p < .001); all these predictors had small effect sizes. According to the multivariate regression analysis, all predictors together explained 13.0% of the variance (F(10, 283) = 5.36, p < .001), a medium effect size, with only profession still being statistically significant.

The only statistically significant predictor of the subscale Utility of Diagnosis in the univariate regressions was profession, which explained 5.2% of the variance (F(3, 331) = 6.02, p < .001). Entering all predictors into a multivariate regression analysis, they together explained only 1.8% of the variance in Utility of Diagnosis (F(10, 285) = 1.55, p = .122), a negligible effect size. Only one predictor remained statistically significant: psychiatrists were more positive than counselors.

Discussion

This study aimed to investigate the attitudes of clinicians in specialist child and adolescent mental health care in Stockholm, Sweden toward standardized assessment and the usefulness of diagnosis in treatment planning, and how they differ from those of a US population.

The main finding of the present study is that clinicians in CAMHS Stockholm overall had quite positive attitudes towards the use of standardized assessment tools and found diagnoses useful. The attitudes were more positive than in a similar previous study conducted in the US [24, 25]. The only characteristic that predicted attitudes in all subscales was profession.

Participants were most positive towards the psychometric quality of standardized assessments and the utility of diagnoses. They were somewhat less positive toward the usefulness of SA in practice and its benefit over clinical judgment. The patterns in attitudes across subscales were similar to those found in the US study [24, 25]. One exception was that clinicians in the present study seemed to be more positive towards the utility of diagnosis in treatment than those in the US study. This is interesting since the health care systems in the two countries are somewhat different. Clinicians in Sweden were also more likely than their colleagues in the US to report that standardized tools improve assessment information beyond relying on clinical judgment alone.

Our study also aimed to explore whether clinicians’ attitudes differ by demographic and professional characteristics. The only characteristic found to predict attitudes across all subscales was profession, with counselors being less positive than the other groups. Also, clinicians with fewer working years within CAMHS seemed to be more positive than those with longer experience, but this relationship was not sustained when controlling for all the other variables. The characteristics predicting the explained variance differed somewhat from those in the earlier mentioned US study by Jensen-Doss and Hawley [24, 25]. Even though profession seemed to be the most important predictor in both populations, it was not always the same professional groups that were most positive. This may be explained by cultural differences between the countries, i.e. how the mental health services are organized, but also by differences in the duties, educational backgrounds and social status of the professions [38].

One previously identified barrier against EBP in general is the belief that it could have a negative impact on the therapeutic relationship [39]. The clinicians in our study were not quite that pessimistic, which is positive from an implementation point of view [40]. However, the Swedish clinicians believed that SA does not offer additional information beyond what they can obtain from informal interviews or from just talking with the children and their parents. This finding is in line with other research; a review of therapist-level resistance to EBP showed that psychotherapists believe that they can objectively and without bias perceive the patient’s problem and treatment outcome [30], a belief not likely to be true [14, 20]. As mentioned earlier, our study indicates that clinicians with fewer working years within CAMHS were more positive than experienced clinicians, specifically regarding the benefit of SA over clinical judgment. This could be explained by less experienced clinicians appreciating more support in the diagnostic process, but also, as Nakamura, Higa-McMillan, Okamura and Shimabukuro suggest, by a more recent university education, influenced more by EBP [41].

The results from our study raise practical issues that need to be considered. First, few clinicians in our study, and even fewer than in the US study, agreed that assessment instruments in the languages that their clients speak are readily available. Addressing the language issue is crucial, since assessment strategies need to be not just scientifically sound, but also culturally sensitive and clinically relevant [42]. As the patient group in mental health services has changed over the last decades in Sweden, with increasing proportions of children and adolescents originating from countries other than Sweden, it is important to consider the availability of instruments in the most common languages when implementing EBA in clinical practice.

Second, about one-third of the clinicians reported that they did not have adequate training in using structured assessment tools, which implies a need for more education and practice in this area. According to several implementation theories, e.g. Rogers’ Diffusion of Innovations [43], and research within EBP [44], providers must not only have favorable attitudes towards a new technique, they also need knowledge about it before it can be successfully adopted into clinical practice. In a recent study of clinician training in cognitive behavioral therapy with a strong focus on SA tools, the researchers used the ASA questionnaire to investigate the change in attitudes and use of SA before and after training, and found that the clinicians developed more positive attitudes towards the psychometric quality and feasibility of SA in clinical practice with training [45]. The actual use of SA also increased during training, but declined somewhat after the training ended. This is in line with another study showing that training has a positive impact on attitudes and self-efficacy regarding the use of SA [46]. To sustain increased use of SA, a supportive learning environment is probably needed.

Finally, even if the practical issues are solved, successful implementation of EBA requires a competent and skillful organizational culture with commitment among the mental health service personnel [7]. As organizations and technologies change rapidly, solutions must be able to handle complex clinical situations and also be flexible. The arena where patients (especially the young ones) and professionals meet will be somewhat different in the future [47]. This will also be the future for SA. Although the development of technological solutions has exploded during the last decades, it is important that this trend continues in collaboration between clinicians and patients [48] as well as between practitioners and researchers [49].

Exchange of scientific and applied knowledge to meet these challenges, within nations and between societies, is therefore important. Whether EBA will be implemented in child mental health services in the future does not depend only on clinicians’ attitudes, knowledge, ability and motivation. The importance of organizational factors and resources has also been highlighted [50]. Generally, the motives for using SA must be clear and supported by suitable systems of the mental health service as well as by science. When implementing EBP in the future, an integrating approach is needed [23] in which both EBA and EBT are important, since they bridge the gap between science and community services [2].

Strengths and limitations in our study

The present study is an investigation within only one of many Swedish counties. However, almost a quarter of the Swedish population lives within Stockholm County, and CAMHS Stockholm serves more than 80% of the population in this age group in the catchment area. In addition, the high response rate in our study and our coverage of all professions within secondary child and adolescent mental health services increase the generalizability of our findings.

Our study did not include data collected from the US, and therefore it is important to be careful when drawing conclusions about the differences between the two national settings, Sweden and the US. The findings could, apart from pointing at possible cultural differences, also to some extent be due to differences in the samples and in the methodology applied in obtaining and analyzing data.

Profession was the main characteristic statistically significantly associated with the results from all subscales, and apart from cultural professional differences, the sizes of the professional groups differed. Counselors were quite narrowly defined as a group in the Swedish sample and were less positive toward SA than counselors in the US sample, which represented a larger and more inclusive group consisting of counselors and, to some degree, social workers. Further, compared to the US sample, the Swedish sample included more psychologists, who we found had less positive attitudes toward SA. Finally, clinicians in the private sector were not included in the Swedish sample; a group with less positive attitudes according to the US study. In addition, the Swedish sample included fewer clinicians with research training (PhD) than the US study, which found a high educational level to be a predictor of positive attitudes towards SA.

A limitation is that we were unable to explore the relationship between attitudes and actual use of SA. Participants did answer an open-ended question about this, but the responses could not be grouped or categorized reliably enough to be included in the analyses.

Finally, the psychometric properties of the subscales Utility of Diagnosis and Practicality must be mentioned. All characteristics together explained only 1.8% of the variance in Utility of Diagnosis, a negligible effect size and less than for any other subscale. This subscale also had the lowest internal consistency and showed troublesome face validity in the translation process. The subscale Practicality likewise had questionable internal consistency in the Swedish sample (α = .60), lower than in the US study (α = .75) [26]. In the present study, we chose not to change the number of items to improve the reliability, since we wanted to compare our results with those of the previous study.
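For readers less familiar with the internal-consistency index reported above, Cronbach’s α for a subscale of k items is given by the standard formula

```latex
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right)
```

where \(\sigma^{2}_{Y_i}\) is the variance of item \(i\) and \(\sigma^{2}_{X}\) is the variance of the total subscale score. By common rules of thumb, values around .60, as for Practicality here, are regarded as questionable, whereas values of .70 or above are considered acceptable.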

Implications and conclusion

This study aimed to investigate clinicians’ attitudes toward standardized assessment and the usefulness of diagnosis. The overall positive attitudes toward diagnosis and SA are important for the development of EBA within child and adolescent mental health services, and our study suggests that clinicians in general value diagnosis and are willing to use SA. When implementing new methods in practice, there are general as well as specific resistances that need to be overcome, and studies in different cultural settings are important to further extend the knowledge of what is universal and what is contextual. Our study indicates differences compared with earlier studies that could be explained by cultural circumstances and may help foster favorable EBA progress in other settings. Nevertheless, specific issues need to be addressed in order to achieve equitable and efficient health care, e.g. the lack of translated assessment tools and training. The health services and the scientific community need to collaborate to succeed in implementing more evidence-based assessment of mental disorders in children and adolescents.

Abbreviations

CAMHS Stockholm:

Specialist Child and Adolescent Mental Health Services within Stockholm County Council

EBM:

evidence-based medicine

EBP:

evidence-based practice

EBT:

evidence-based treatment

EBA:

evidence-based assessment

SA:

standardized assessments

References

1. Garland AF, Kruse M, Aarons GA. Clinicians and outcome measurement: what’s the use? J Behav Health Serv Res. 2003;30(4):393–405.
2. Achenbach TM. Future directions for clinical research, services, and training: evidence-based assessment across informants, cultures, and dimensional hierarchies. J Clin Child Adolesc Psychol. 2017;46(1):159–69.
3. Dawes M, Summerskill W, Glasziou P, Cartabellotta A, Martin J, Hopayian K, Porzsolt F, Burls A, Osborne J. Sicily statement on evidence-based practice. BMC Med Educ. 2005;5:1.
4. Spring B. Evidence-based practice in clinical psychology: what it is, why it matters; what you need to know. J Clin Psychol. 2007;63(7):611–31.
5. Youngstrom EA. Future directions in psychological assessment: combining evidence-based medicine innovations with psychology’s historical strengths to enhance utility. J Clin Child Adolesc Psychol. 2013;42(1):139–59.
6. American Psychological Association Presidential Task Force on Evidence-Based Practice. Evidence-based practice in psychology. Am Psychol. 2006;61(4):271–85.
7. Aarons GA, Glisson C, Green PD, Hoagwood K, Kelleher KJ, Landsverk JA, Research Network on Youth Mental Health, Weisz JR, Chorpita B, Gibbons R, et al. The organizational social context of mental health services and clinician attitudes toward evidence-based practice: a United States national study. Implement Sci. 2012;7:56.
8. Garland AF, Hurlburt MS, Hawley KM. Examining psychotherapy processes in a services research context. Clin Psychol Sci Pract. 2006;13(1):30–46.
9. Jorm AF. Mental health literacy: empowering the community to take action for better mental health. Am Psychol. 2012;67(3):231–43.
10. Ramirez Basco M, Bostic JQ, Davies D, Rush AJ, Witte B, Hendrickse W, Barnett V. Methods to improve diagnostic accuracy in a community mental health setting. Am J Psychiatry. 2000;157(10):1599–605.
11. Shear MK, Greeno C, Kang J, Ludewig D, Frank E, Swartz HA, Hanekamp M. Diagnosis of nonpsychotic patients in community clinics. Am J Psychiatry. 2000;157(4):581–7.
12. Angold A, Costello EJ. Nosology and measurement in child and adolescent psychiatry. J Child Psychol Psychiatry. 2009;50(1–2):9–15.
13. Galanter CA, Patel VL. Medical decision making: a selective review for child psychiatrists and psychologists. J Child Psychol Psychiatry. 2005;46(7):675–89.
14. Nakash O, Nagar M, Kanat-Maymon Y. Clinical use of the DSM categorical diagnostic system during the mental health intake session. J Clin Psychiatry. 2015;76(7):e862–9.
15. McClellan JM, Werry JS. Introduction–research psychiatric diagnostic interviews for children and adolescents. J Am Acad Child Adolesc Psychiatry. 2000;39(1):19–27.
16. Rettew DC, Lynch AD, Achenbach TM, Dumenci L, Ivanova MY. Meta-analyses of agreement between diagnoses made from clinical evaluations and standardized diagnostic interviews. Int J Methods Psychiatr Res. 2009;18(3):169–84.
17. First MB, Pincus HA, Levine JB, Williams JBW, Ustun B, Peele R. Clinical utility as a criterion for revising psychiatric diagnoses. Am J Psychiatry. 2004;161:946–54.
18. Joiner TE Jr, Walker RL, Pettit JW, Perez M, Cukrowicz KC. Evidence-based assessment of depression in adults. Psychol Assess. 2005;17(3):267–77.
19. Zimmerman M, Mattia JI. Psychiatric diagnosis in clinical practice: is comorbidity being missed? Compr Psychiatry. 1999;40(3):182–91.
20. Bruchmuller K, Margraf J, Suppiger A, Schneider S. Popular or unpopular? Therapists’ use of structured interviews and their estimation of patient acceptance. Behav Ther. 2011;42(4):634–43.
21. Hunsley J, Mash EJ. Evidence-based assessment. Annu Rev Clin Psychol. 2007;3:29–51.
22. Mash EJ, Hunsley J. Evidence-based assessment of child and adolescent disorders: issues and challenges. J Clin Child Adolesc Psychol. 2005;34(3):362–79.
23. Christon LM, McLeod BD, Jensen-Doss A. Evidence-based assessment meets evidence-based treatment: an approach to science-informed case conceptualization. Cogn Behav Pract. 2015;22(1):36–48.
24. Jensen-Doss A, Hawley KM. Understanding clinicians’ diagnostic practices: attitudes toward the utility of diagnosis and standardized diagnostic tools. Adm Policy Ment Health. 2011;38(6):476–85.
25. Jensen-Doss A, Hawley KM. Understanding barriers to evidence-based assessment: clinician attitudes toward standardized assessment tools. J Clin Child Adolesc Psychol. 2010;39(6):885–96.
26. Ramklint M, Hellgren L. Diagnostics in child psychiatry must be quality assured. Lakartidningen. 2009;106(36):2184–5.
27. Smart A. A multi-dimensional model of clinical utility. Int J Qual Health Care. 2006;18:377–82.
28. Frazer P, Westhuis D, Daley JG, Philips I. How clinical social workers are using the DSM-IV: a national study. Soc Work Ment Health. 2009;7(4):325–39.
29. Jampala VC, Zimmerman M, Sierles FS, Taylor MA. Consumers’ attitudes toward DSM-III and DSM-III-R: a 1989 survey of psychiatric educators, researchers, practitioners, and senior residents. Compr Psychiatry. 1992;33(3):180–5.
30. Lilienfeld SO, Ritschel LA, Lynn SJ, Cautin RL, Latzman RD. Why many clinical psychologists are resistant to evidence-based practice: root causes and constructive remedies. Clin Psychol Rev. 2013;33(7):883–900.
31. Harvey AG, Gumport NB. Evidence-based psychological treatments for mental disorders: modifiable barriers to access and possible solutions. Behav Res Ther. 2015;68:1–12.
32. Hunsley J. Translating evidence-based assessment principles and components into clinical practice settings. Cogn Behav Pract. 2015;22(1):101–9.
33. Almquist YB, Ashir S, Brännström L. A guide to quantitative methods. Stockholm: CHESS, Stockholm University; 2015. p. 342.
34. StataCorp. Stata Statistical Software: Release 14. College Station, TX: StataCorp LP; 2015.
35. Altman DG. Practical statistics for medical research. London: Chapman & Hall/CRC; 1991.
36. Cohen J. Statistical power analysis for the behavioral sciences. 2nd ed. New Jersey: Lawrence Erlbaum; 1988.
37. Kotrlik JW, Williams HA, Jabor MK. Reporting and interpreting effect size in quantitative agricultural education research. J Agric Educ. 2011;52(1):132–42.
38. Traynor M. Indeterminacy and technicality revisited: how medicine and nursing have responded to the evidence based movement. Sociol Health Illn. 2009;31(4):494–507.
39. Addis ME, Wade WA, Hatgis C. Barriers to dissemination of evidence-based practice: addressing practitioners’ concerns about manual-based psychotherapies. Clin Psychol Sci Pract. 1999;6(4):430–41.
40. Osterberg LD, Jensen-Doss A, Cusack KJ, de Arellano MA. Diagnostic practices for traumatized youths: do clinicians incorporate symptom scale results? Community Ment Health J. 2009;45(6):497–507.
41. Nakamura BJ, Higa-McMillan CK, Okamura KH, Shimabukuro S. Knowledge of and attitudes towards evidence-based practices in community child mental health practitioners. Adm Policy Ment Health. 2011;38(4):287–300.
42. Wood JM, Garb HN, Lilienfeld SO, Nezworski MT. Clinical assessment. Annu Rev Psychol. 2002;53:519–43.
43. Rogers EM. Diffusion of innovations. 5th ed. New York: Free Press; 2003.
44. Okamura KH, Hee PJ, Jackson D, Nakamura BJ. Furthering our understanding of therapist knowledge and attitudinal measurement in youth community mental health. Adm Policy Ment Health. 2018;45(5):699–708.
45. Lyon AR, Dorsey S, Pullmann M, Silbaugh-Cowdin J, Berliner L. Clinician use of standardized assessments following a common elements psychotherapy training and consultation program. Adm Policy Ment Health. 2015;42(1):47–60.
46. Edbrooke-Childs J, Wolpert M, Deighton J. Using patient reported outcome measures to improve service effectiveness (UPROMISE): training clinicians to use outcome measures in child mental health. Adm Policy Ment Health. 2016;43(3):302–8.
47. Youngstrom EA, Meter AV, Hunsley J, Prinstein MJ, Ong M-L, Youngstrom JK. Evidence-based assessment as an integrative model for applying psychological science to guide the voyage of treatment. Clin Psychol Sci Pract. 2017;24(4):331–66.
48. Sharples E, Qin C, Goveas V, Gondek D, Deighton J, Wolpert M, Edbrooke-Childs J. A qualitative exploration of attitudes towards the use of outcome measures in child and adolescent mental health services. Clin Child Psychol Psychiatry. 2017;22(2):219–28.
49. Jones DJ. Future directions in the design, development, and investigation of technology as a service delivery vehicle. J Clin Child Adolesc Psychol. 2014;43(1):128–42.
50. Mitchell PF. Evidence-based practice in real-world services for young people with complex needs: new opportunities suggested by recent implementation science. Child Youth Serv Rev. 2011;33(2):207–16.


Authors’ contributions

MD, SD and JOL developed the study design and translated the questionnaire. MD was the principal investigator, carried out the data collection, analyzed most of the data, and drafted, edited and finalized the manuscript. JOL, AM and MD planned the article; JOL, SD and AM supervised MD while drafting the manuscript, with JOL as main supervisor. EF supervised the multivariable regression analysis. All authors edited the manuscript. All authors read and approved the final manuscript.

Acknowledgements

The authors thank Amanda Jensen-Doss for approving the use of the questionnaire and for her generous sharing of experiences and knowledge. The authors also thank Einar Heiervang, Per Kristen Teigen and Tobias Edbom for their engagement in the early phase of the study. Finally, the authors thank all who participated in the study.

The paper was not presented at any meeting.

Competing interests

The authors declare that they have no competing interests.

Availability of data and materials

The data that support the findings of this study are not publicly available. Data are, however, available from the authors upon reasonable request.

Consent for publication

All participants consented to the publication of anonymous results.

Ethical approval and consent to participate

The Stockholm regional ethical review board gave ethical approval (2013/1505-31/5). All participants were asked for consent and participated voluntarily.

Funding

The study was funded by Stockholm County Council, Sweden.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Author information

Correspondence to M. Danielson.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


Keywords

  • Standardized assessment
  • Implementation
  • Utility
  • Mental health service