
Development and validation of the 12-item video consultation self-efficacy scale

Published 1 July 2024
Peer-reviewed article
Abstract

Background: Video consultations in healthcare are remote solutions for delivering assessments and treatments to patients. The acceptance and use of video consultations may depend on self-efficacy among healthcare practitioners. Measuring self-efficacy in providing video consultations may identify individuals with insufficient self-efficacy and enable targeted interventions and support. No valid and reliable scale was available for measuring self-efficacy in the Norwegian context. Therefore, our aim was to develop (Study 1) and validate (Study 2) a new research-based video consultation self-efficacy scale for Norwegian practitioners in specialized healthcare.

Method: In Study 1, we developed preliminary scale items based on results from a systematic review. These items were subjected to experts' opinions in a modified Delphi method-based study. The experts also suggested additional items. These results were then used to develop an initial video consultation self-efficacy scale. In Study 2, this scale was validated in a questionnaire study. Reliability was examined using item analysis and Cronbach's alpha (internal consistency). Construct validity was examined using exploratory factor analysis and Spearman's correlation (convergent and divergent validity).

Results: In Study 1, a total of 56 scale items were considered, resulting in a preliminary 15-item scale. In Study 2, item analysis and exploratory factor analysis resulted in a unidimensional 12-item video consultation self-efficacy scale. Cronbach's alpha (internal consistency) was α = .974. The Spearman correlations showed a moderate positive correlation between the 12-item scale and the Digital Competence Questionnaire, a weak positive correlation between the 12-item scale and the General Self-Efficacy Scale, and a weak positive correlation between the 12-item scale and the WHO-5 Well-Being Index. These results suggest that the scale is a reliable and valid measure for assessing practitioners' self-efficacy in providing video consultations to patients in specialized healthcare.

Implications: We recommend further, more comprehensive validation of the scale in different contexts in Norwegian specialized healthcare, such as in different clinical specialties and with larger samples.

Keywords: development and validation, self-efficacy scale, specialized healthcare, practitioners, video consultation

The use of video consultations (VCs) in healthcare increased during the COVID-19 pandemic (Ohannessian et al., 2020). Whether healthcare practitioners accept and use VCs in assessment and treatment may depend on their self-efficacy in using such solutions (Rho et al., 2014).

Self-efficacy refers to beliefs about one's own ability and capacity to perform the behaviors required to achieve specific goals (Bandura, 1997). It may influence how healthcare practitioners think, become motivated, and behave and, in this context, whether or not they use VCs. Furthermore, self-efficacy is related to psychological resilience among healthcare practitioners under challenging circumstances (Baluszek et al., 2023). Practitioners with higher degrees of self-efficacy handle situations more confidently and accomplish their tasks, which can be a decisive positive factor in the success of their endeavors and in their well-being. Low self-efficacy may lead to work-related stress, poor resilience, and reduced quality in VCs. Therefore, measuring self-efficacy in delivering VCs may identify practitioners who are more likely to show resistance, use VCs suboptimally, or experience high work-related stress when providing VCs. These practitioners may need, or would in any case benefit from, training and support. The measurement of VC self-efficacy may therefore be relevant for healthcare practitioners themselves and for leaders seeking to improve organizational support and training. In the long term, this may promote quality, safety, and resilience in VCs in healthcare.

We found no suitable measure for assessing video consultation self-efficacy among Norwegian practitioners in specialized healthcare. A scale for measuring self-efficacy must be tailored to the relevant domain of functioning (Bandura, 2006). We therefore set out to develop and validate a video consultation self-efficacy scale (VCSES).

Reliability in the form of internal consistency evaluates how well scale items measure the same construct, using measures such as Cronbach's alpha (Cronbach, 1951). Scores above 0.80 indicate a very high level of reliability. Construct validity refers to the extent to which a scale (here, the VCSES) truly assesses the underlying construct (Messick, 1995). Exploratory factor analysis (EFA) provides insight into how scale items group together and which items measure the same construct (dimensionality). This can help refine the scale and confirm its construct validity. Two subtypes of construct validity are convergent and divergent validity (Campbell & Fiske, 1959). Convergent validity is established when two measures of constructs that are theoretically expected to be related do, in fact, correlate with each other. Divergent validity is established when two measures of constructs that are theoretically expected not to be related do not correlate, or correlate only weakly, with each other.
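For reference, Cronbach's alpha for a scale of k items can be written with the standard formula (the notation here is chosen for illustration):

$$\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right)$$

where $\sigma^{2}_{Y_i}$ is the variance of item $i$ and $\sigma^{2}_{X}$ is the variance of the total scale score; alpha increases as the items covary more strongly relative to their individual variances.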

We included a measure of digital competence, assuming that digital competence and VC self-efficacy, while distinct, share certain conceptual similarities. We hypothesized a positive correlation between these two constructs. We also included a measure of general self-efficacy. General self-efficacy is linked to individuals' beliefs in their domain-independent abilities. Both constructs encompass confidence in handling challenges, involving, among other things, cognitive appraisal, adaptability, problem-solving skills, and emotional resilience. Despite their differences, both constructs reflect individuals' assessments of their own capabilities and beliefs, whether within a specific context such as VCs (video consultation self-efficacy) or across diverse situations in life (general self-efficacy). We therefore hypothesized a moderate positive correlation between VC self-efficacy and general self-efficacy.

Conversely, divergent validity assesses the extent to which measures of unrelated constructs do not covary (weak or no correlation), in this case whether the VCSES is distinct from measures of other constructs with which it should share only a weak correlation, or none at all. As a construct dissimilar to video consultation self-efficacy, we therefore chose well-being. Well-being and video consultation self-efficacy represent distinct facets of an individual's experience, characterized by differences in nature and scope. Well-being involves a generic, comprehensive, and subjective assessment of life satisfaction and health. In contrast, video consultation self-efficacy focuses on confidence and beliefs specific to VCs. We hypothesized that there would be no correlation between VC self-efficacy and well-being or, if there was a correlation, that it would be weak.

The current study

Our overall aim was to develop and validate a new VCSES. Research questions in Study 1 concern development of the scale:

  1. What are experts’ opinions and consensus about preliminary scale items for a VCSES?
  2. What are experts’ suggestions for additional challenges that practitioners may experience and/or perceive when performing VCs with patients in Norwegian specialized healthcare?

Study 2 concerns validation of the scale.

Regarding reliability, we developed the following research question:

  1. Is the new Video Consultation Self-Efficacy Scale a reliable measure (Cronbach's alpha of 0.8 or greater)?

Regarding validity, we proposed the following hypotheses:

  1. Exploratory factor analysis hypothesis:

We expect that all the items in the scale will load onto a single factor (component), suggesting that the scale measures a single underlying construct.

  2. Convergent validity hypotheses:

We expect a positive correlation between video consultation self-efficacy and digital competence.

We expect a moderate positive correlation between video consultation self-efficacy and general self-efficacy.

  3. Divergent validity hypothesis:

We expect no correlation, or at most a weak correlation, between video consultation self-efficacy and well-being.

Ethics

Both the study on the development of the VCSES and the study assessing reliability and construct validity were assessed by the Norwegian Agency for Shared Services in Education and Research (reference numbers 600109 and 850084, respectively). All respondents were informed in writing about the purpose of the studies and their rights. These included the right to access the personal data and information registered about them, to have incorrect or misleading personal data corrected, to have personal data about them deleted, and to complain to the Norwegian Data Protection Authority about the processing of their personal data.

Study 1: Methods and results

We began with a conceptualization of the target construct (Clark & Watson, 2016). Video consultation self-efficacy was defined as a belief in one's own ability to organize and carry out such consultations with patients. When designing the initial scale items, we relied on the guidelines for constructing self-efficacy scales (Bandura, 2006). The preliminary items were based on a previously published systematic review of challenges that specialized healthcare practitioners perceive and/or experience when providing video consultations (VCs) in the Nordic countries (Baluszek et al., 2022). Based on this systematic review, we proposed 38 preliminary video consultation self-efficacy scale (VCSES) items.

We then conducted a modified Delphi method-based study (Barrett & Heale, 2020) to elicit experts’ opinions about the 38 items and suggestions for additional items. The data on the experts’ opinions were collected between July and August 2022.

There is currently no consensus on the number of experts required in Delphi rounds. Sample sizes in other published studies typically vary between 10 and 100 (Nasa et al., 2021). We identified N = 95 potential experts via scientific publications, the website of a telemedicine research center, and Norwegian hospitals. Experts were defined as individuals with experience and/or knowledge of VCs in specialized healthcare. The experts were professionals such as psychologists, doctors, nurses, researchers, and information technology (IT) consultants. We selected this heterogeneous group of experts because we were interested in obtaining a broad spectrum of opinions from different perspectives. We sent individual emails to the N = 95 experts, providing information and an electronic link to the study's questionnaire. A total of 24 experts responded to the first round, of whom n = 17 completed the questionnaire in full and were subsequently included in the study.

First Delphi round

In the first Delphi round, the experts in our study (n = 17) completed a questionnaire with items on professional background and experience with VCs and a list of the 38 preliminary VCSES items. The experts were asked whether each item should be included in the VCSES. We opted for simplified three-choice response options: "Significant (should be included)", "Don't know", and "Not significant (should not be included)" for each item. Moreover, they could justify their choices, comment on items, and suggest additional items they thought were important but were not on the list. A week later, we sent a reminder to inform the experts about the study's progress and to motivate more of them to participate.

The experts in our study (n = 17) were healthcare personnel such as psychologists, nurses or doctors (n = 10), researchers (n = 8), people with an IT background (n = 1), and others (n = 3); multiple background choices were possible. To achieve consensus for an item, at least 70% of all the experts' responses had to be "Significant". We predetermined that 70% would be satisfactory, considering that reaching 100% consensus is unrealistic given the complexity of the study topic and the diversity of expert opinions. Furthermore, we considered a 70% consensus to be reasonable, given that our goal was to obtain supporting insights rather than definitive decisions on scale items. Qualitative feedback (opinions) from experts also played an important role, offering valuable insights into their perspectives on the preliminary scale items. Thirteen of the 38 items received ≥ 70% consensus after the first round and were retained and used in the second round. The experts provided six comments regarding other challenges in providing VCs, which resulted in 13 new items that were used in the second round.
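As an illustration, the consensus rule amounts to a simple per-item calculation. The sketch below (in Python) uses hypothetical responses, since the raw expert data are not reproduced here.

```python
# Minimal sketch of the >= 70% consensus rule described above.
# The example responses are hypothetical; in the study, each expert chose
# "Significant (should be included)", "Don't know" or "Not significant
# (should not be included)" for every preliminary item.
from collections import Counter

CONSENSUS_THRESHOLD = 0.70

def reaches_consensus(responses):
    """True if at least 70% of all expert responses for an item are 'Significant'."""
    counts = Counter(responses)
    return counts["Significant"] / len(responses) >= CONSENSUS_THRESHOLD

# Hypothetical item rated by 17 experts: 13 'Significant' responses (76%) -> consensus.
example_item = ["Significant"] * 13 + ["Don't know"] * 2 + ["Not significant"] * 2
print(reaches_consensus(example_item))  # True
```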

Second Delphi round

We asked the experts from the first round to participate again in the second, modified round. We asked them to reconsider 26 items (13 original and 13 new) based on anonymized results from the first round. In this round, we asked the experts to rank items specifically in the context of challenges of VCs within Norwegian specialized healthcare. A total of 15 experts responded, of whom n = 9 completed the questionnaire. The notable decrease (from 17 respondents in the first round to nine in the second) poses a recognized challenge for Delphi studies. This decrease may be attributed to the more comprehensive nature of the questionnaire in the second round, making it more demanding to participate. Respondents were healthcare personnel such as psychologists, nurses and doctors (n = 8), researchers (n = 3), and people with an IT background (n = 1). The 13 items that had received ≥ 70% consensus on self-efficacy beliefs also received ≥ 70% consensus on significance in the second round. Six of the 13 new items received ≥ 70% consensus on significance. In total, 19 items reached ≥ 70% consensus (13 original and six new) after both rounds. The items reaching the highest consensus were related to challenges with video, audio, or logging into VCs, and to finding out that there is no access to technical equipment.

Creation of the preliminary VCSES

After the Delphi rounds, we again revised all 51 preliminary VCSES items. We also added six items based on empirical knowledge and considerations gained in the research process. We rejected items that were not relevant for self-efficacy in the context of VCs. We rephrased every item so that it stated "I can organize and carry out video consultations with patients …" when a particular challenge occurs (or to achieve another desirable state). We chose a response scale ranging from 0 (cannot do at all) to 100 (very sure can do) in equidistant 10-point steps, allowing respondents to express the strength of their beliefs. This resulted in a preliminary 15-item VCSES.

Study 2: Methods

To validate the VCSES, we carried out a questionnaire study in a sample of healthcare practitioners (Clark & Watson, 1995) to test the construct validity of the VCSES. The questionnaire consisted of demographic questions about background and experience with VCs in specialized healthcare, and the preliminary 15-item scale measuring self-efficacy in providing VCs. Three other scales were included to test the convergent and divergent validity (construct validity) of the VCSES: the Digital Competence Questionnaire, a 5-item instrument measuring digital competence (Golz et al., 2021); the WHO-5 Well-Being Index, a 5-item instrument measuring well-being (Kaiser & Kyrrestad, 2019); and the General Self-Efficacy Scale, a 10-item instrument assessing general self-efficacy (Røysamb et al., 1998). We collected data between May and August 2023.

Respondents

To recruit participants, we posted a link to an electronic questionnaire on Facebook pages and groups for Norwegian practitioners in specialized healthcare. A total of N = 183 responded to the questionnaire study, of whom n = 100 respondents completed the demographic questions and the VCSES and were included in the study. Respondents were aged between 21 and 60 years, with 0 to 42 years of experience in healthcare. They were psychologists (n = 38), nurses (n = 32), doctors, including chief physicians and resident doctors (n = 20), nursing students (n = 8), and others (n = 6); multiple background choices were possible. The mean age of the respondents was 39 years, and the majority were female (n = 79, 79%). All respondents reported having done an internship or having worked in the Norwegian specialized healthcare system; 4% of them practiced or had practiced in a somatic outpatient clinic, 36% in a non-somatic outpatient clinic, 13% in both types of clinic, and 6% in neither. A total of 65% reported having knowledge of VCs, 58% reported having experience of VCs, and 51% reported using or having used VCs with patients, 6% of whom reported having used VCs more than three times per week and 45% less than three times per week.

Reliability

We first used Cronbach's alpha to investigate the internal consistency of the scale, i.e., the interrelatedness of the items in a measure and how well the items measure the same construct. To reduce the number of items, we then performed an item analysis on the original 15 items by iteratively removing the items with the lowest item-to-scale correlations (Ferketich, 1991) while maintaining Cronbach's alpha (Cronbach, 1951) above 0.9. Three items (2, 3, and 5) were removed, and the final scale was reduced from 15 to 12 items.
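A minimal sketch of this reliability and item-reduction step is shown below, assuming the item responses are held as 0–100 columns of a pandas DataFrame. The function names are ours, and the stopping rule is one plausible reading of the procedure; the article does not state which statistical software was used.

```python
# Sketch of the internal-consistency check and iterative item reduction described
# above (assumed data layout: one row per respondent, one 0-100 column per item).
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def corrected_item_total(items: pd.DataFrame) -> pd.Series:
    """Correlation of each item with the sum of the remaining items (item-to-scale correlation)."""
    return pd.Series(
        {col: items[col].corr(items.drop(columns=col).sum(axis=1)) for col in items.columns}
    )

def reduce_items(items: pd.DataFrame, min_alpha: float = 0.9) -> pd.DataFrame:
    """Iteratively drop the item with the lowest item-to-scale correlation as long as
    Cronbach's alpha of the reduced scale stays above min_alpha. The exact stopping
    rule used in the study is not fully specified; this loop is one plausible reading."""
    while items.shape[1] > 2:
        weakest = corrected_item_total(items).idxmin()
        candidate = items.drop(columns=weakest)
        if cronbach_alpha(candidate) < min_alpha:
            break
        items = candidate
    return items
```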

Construct validity

Dimensionality

We tested the 12 remaining VCSES items for construct validity by conducting EFA using principal component extraction with varimax rotation and a requirement of an eigenvalue > 1 for retaining factors. We tested the requirements for EFA with the Kaiser-Meyer-Olkin (KMO) test, which assesses the suitability of data for factor or principal component analysis by measuring the degree of correlation between variables. An acceptable KMO value (above 0.5) suggests the dataset is suitable for analysis.
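A minimal sketch of the EFA and KMO steps, assuming the third-party factor_analyzer Python package and a hypothetical DataFrame vcses_items with one 0–100 column per retained item (the article does not state which software was used):

```python
# Sketch of the dimensionality analysis described above, assuming the third-party
# `factor_analyzer` package. `vcses_items` is a hypothetical pandas DataFrame with
# one 0-100 column per item.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo

def run_efa(vcses_items: pd.DataFrame):
    # KMO measure of sampling adequacy; values above 0.5 are conventionally acceptable.
    _, kmo_total = calculate_kmo(vcses_items)

    # Extract all principal components first to inspect the eigenvalues.
    fa = FactorAnalyzer(n_factors=vcses_items.shape[1], rotation=None, method="principal")
    fa.fit(vcses_items)
    eigenvalues, _ = fa.get_eigenvalues()
    n_components = int((eigenvalues > 1).sum())  # Kaiser criterion: eigenvalue > 1

    # Re-extract only the retained components; varimax rotation is applied
    # only when more than one component is retained (it has no effect otherwise).
    rotation = "varimax" if n_components > 1 else None
    fa = FactorAnalyzer(n_factors=n_components, rotation=rotation, method="principal")
    fa.fit(vcses_items)
    loadings = pd.DataFrame(fa.loadings_, index=vcses_items.columns)
    _, _, cumulative_variance = fa.get_factor_variance()
    return kmo_total, n_components, loadings, cumulative_variance
```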

The sample size necessary to perform EFA is mainly dependent on the number of items in the scale. We used the approach that there should be at least five observations for each item (Hair, 2019). Hence, for a 12-item scale, we needed at least 60 respondents. We thus met this criterion with our sample of n = 100 respondents.

Additionally, to assess construct validity, we examined single items' cross loadings with the 12-item VCSES and the distribution of scores on the scale (ceiling and floor effects).
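A small sketch of the ceiling/floor check, again using the hypothetical vcses_items DataFrame and taking the scale score as the mean of the items (consistent with the VCSES row in Table 2); the article inspects the score distribution rather than applying a fixed cut-off, so none is hard-coded here.

```python
# Sketch of a ceiling/floor inspection: the share of respondents whose scale score
# lies at the lowest (0) or highest (100) possible value of the 0-100 range.
import pandas as pd

def ceiling_floor_shares(vcses_items: pd.DataFrame) -> pd.Series:
    scale_scores = vcses_items.mean(axis=1)  # mean of the 0-100 items per respondent
    return pd.Series(
        {
            "floor (score = 0)": float((scale_scores == 0).mean()),
            "ceiling (score = 100)": float((scale_scores == 100).mean()),
        }
    )
```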

Convergent and divergent validity

As the data were not normally distributed, we used Spearman's non-parametric correlations: between the VCSES and the Digital Competence Questionnaire and between the VCSES and the General Self-Efficacy Scale to examine convergent validity, and between the VCSES and the WHO-5 Well-Being Index to examine divergent validity.
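A minimal sketch of these correlation analyses using SciPy; the DataFrame column names below are hypothetical placeholders for per-respondent total scores.

```python
# Sketch of the Spearman correlation checks described above.
from scipy.stats import spearmanr

def report_spearman(vcses_scores, other_scores, label):
    """Spearman rank correlation between VCSES scores and another scale's scores."""
    rho, p_value = spearmanr(vcses_scores, other_scores, nan_policy="omit")
    print(f"VCSES vs {label}: rho = {rho:.3f}, p = {p_value:.3f}")

# Convergent validity (hypothetical column names):
# report_spearman(df["vcses"], df["digital_competence"], "Digital Competence Questionnaire")
# report_spearman(df["vcses"], df["general_self_efficacy"], "General Self-Efficacy Scale")
# Divergent validity:
# report_spearman(df["vcses"], df["who5"], "WHO-5 Well-Being Index")
```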

Study 2: Results

Reliability

Cronbach's alpha for the original 15-item scale was .972. The item analysis resulted in the removal of three items (items 2, 3, and 5) from the 15-item VCSES, which resulted in a 12-item VCSES. For the 12-item version, Cronbach's alpha was .974, which indicates high correlation among the items in the scale.

Construct validity

EFA on the 12-item VCSES revealed one component with an eigenvalue > 1, which explained 78% of the variance. The requirements for EFA were met, with KMO = 0.915. Examination of the single items' cross loadings with the 12-item VCSES showed that all 12 items had a correlation with the VCSES above 0.8 (Table 1).

Table 1

Component matrix

Item Rough English translation from Norwegian Component 1
7 I can organize and carry out video consultations with patients, despite this entailing new tasks and/or changes in my usual work routines in my work environment. .929
14 I can organize and carry out video consultations with patients so that I can comply with the organization’s rules and guidelines in my work environment. .919
15 I can organize and carry out video consultations with patients so that I can observe, hear, and communicate information in connection with video consultations with patients in my work environment. .913
9 I can organize and carry out video consultations with patients despite negative attitudes in my work environment and/or the patient’s environment. .913
8 I can organize and carry out video consultations with patients despite various delays in my work environment and/or the patient’s environment. .900
11 I can organize and carry out video consultations with patients even if I am tired in my work environment. .898
10 I can organize and carry out video consultations with patients despite negative experiences in my work environment and/or the patient’s environment. .890
12 I can organize and carry out video consultations with patients even if I am stressed in my work environment. .879
13 I can organize and carry out video consultations with patients so that the image and sound quality is good enough for me to make good professional judgments in connection with video consultations with patients in my work environment. .851
4 I can organize and carry out video consultations with patients despite the fact that I do not have a colleague with whom I can confer, when necessary, in my work environment. .849
1 I can organize and carry out video consultations with patients despite technical challenges and/or uncertainties in my work environment and/or the patient’s environment. .825
6 I can organize and carry out video consultations with patients despite various distractions and/or disturbances in my and/or the patient’s environment. .803

Note. Items 2, 3 and 5 were removed from the preliminary 15-item VCSES.

The distribution of scores on the scale was not heavily concentrated around the highest (ceiling) or lowest (floor) possible values (Table 2). This suggests that the scale may be more reliable and valid because it can capture variation in the VC self-efficacy construct without being limited by an excess of high or low values.

Table 2

Descriptive statistics for the individual items

Item Mean (SD) Median C.I. low C.I. high
1 51.60 (33.84) 50.00 44.89 58.31
4 59.70 (37.24) 70.00 52.31 67.09
6 49.00 (34.63) 50.00 42.13 55.87
7 63.30 (36.02) 70.00 56.15 70.45
8 52.10 (34.56) 50.00 45.24 58.96
9 57.90 (35.14) 60.00 50.93 64.87
10 54.80 (35.41) 60.00 47.77 61.83
11 63.50 (35.00) 70.00 56.56 70.44
12 61.50 (35.60) 70.00 54.44 68.56
13 52.00 (35.48) 50.00 44.96 59.04
14 62.50 (36.66) 70.00 55.23 69.77
15 59.30 (36.93) 70.00 51.97 66.63
VCSES 57.27 (31.31) 65.42 - -

Note. Items 2, 3 and 5 were removed from the preliminary 15-item VCSES.

Convergent validity

We observed a statistically significant moderate positive correlation between scores from the 12-item VCSES and the Digital Competence Questionnaire (see Table 3). This finding supported our hypothesis of a positive correlation between VC self-efficacy and digital competence.

We observed a statistically significant weak positive correlation between scores from the 12-item VCSES and the General Self-Efficacy Scale (see Table 3). This finding partially supported our hypothesis of a moderate positive correlation between VC self-efficacy and general self-efficacy; the correlation was positive but weak.

Divergent validity

We observed a statistically significant weak positive correlation between scores from the 12-item VCSES and the WHO-5 Well-Being Index (see Table 3). This finding partially supported our hypothesis that we would observe no correlation, or only a weak correlation, between video consultation self-efficacy and well-being, given that they are not theoretically similar constructs.

Table 3

Spearman correlations for 12-item VCSES

Measure | Correlation with 12-item VCSES**** | Sig. (2-tailed) | N
Digital Competence Questionnaire*** | .474** | < .001 | 96
General Self-Efficacy Scale*** | .209* | .048 | 90
WHO-5 Well-Being Index*** | .214* | .040 | 93

Note. * Correlation is statistically significant at the .05 level (2-tailed). ** Correlation is statistically significant at the .01 level (2-tailed). *** Ordinal variable. **** Continuous variable.

Discussion

In Study 1, we developed a new VCSES using the Delphi methodology. This approach allowed us to gather expert opinions on potential items for inclusion in the scale and to identify new items. The Delphi method has been used previously in the development of other self-efficacy scales (Kim et al., 2020) and is expected to be widely used in scale development in the future.

In Study 2, we found promising results regarding the reliability and construct validity of the 12-item VCSES. We found evidence for a high level of internal consistency among the items, suggesting that they measure the same underlying construct and, as such, that the VCSES is reliable. Furthermore, the EFA results suggest that the items measure one single underlying construct (self-efficacy in video consultations). The large proportion of variance explained points to the scale's robust factor structure and reliability. The single-factor structure simplifies interpretation and use, allowing all items to be combined into a single score for self-efficacy in VCs.

We observed a statistically significant moderate positive correlation between scores from the VCSES and the Digital Competence Questionnaire, as we hypothesized. However, we did expect this correlation to be stronger. The correlation may naturally vary due to random fluctuations or sample size limitations. Furthermore, we observed a statistically significant weak positive correlation between scores from the 12-item VCSES and the General Self-Efficacy Scale, indicating that video consultation self-efficacy is positively associated with general self-efficacy, as we hypothesized. Nevertheless, the observed correlation was weak, indicating that there is not a strong resemblance between video consultation self-efficacy and general self-efficacy. Indeed, VC self-efficacy and general self-efficacy are related, but they differ in their focus and application. General self-efficacy refers to an overall belief in one's abilities, whereas VC self-efficacy refers to a specific competence. We therefore suggest that in future validations, convergent validity should be assessed by examining correlations between the VCSES and other, more similar measures, such as scales measuring self-efficacy in the use of technology.

Regarding divergent validity, we observed a statistically significant weak positive correlation between scores from the 12-item VCSES and the WHO-5 Well-Being Index. This partly supports our hypothesis that these theoretically different constructs should not covary. However, our results also suggest that these constructs do correlate to some degree, which may indicate that they are more theoretically aligned than we initially assumed. We therefore suggest that in future validations, divergent validity should be assessed between the VCSES and other, more dissimilar measures. Overall, our analyses showed a reliable unidimensional 12-item VCSES. However, the scale may contain redundant items and could possibly be shortened.

Implications

VCSES has the potential to contribute to the evolution and improvement of VC services by providing assessment of VC self-efficacy among practitioners. The scale may be used to shed light on the barriers and facilitators in the acceptance and adoption of VCs. By understanding practitioners’ perceived capabilities and limitations, healthcare leaders may better organize and carry out VCs. They may also tailor strategies aimed at addressing specific challenges or concerns, such as technological proficiency, and at enhancing practitioners’ confidence, competence, and safety.

The VCSES we developed should be examined in future studies. After being further validated, it has the potential to be used in comparative analyses across diverse healthcare settings, clinical specialties, or geographic locations. As such, it may be valuable for better understanding why VC self-efficacy varies, allowing for the identification of contextual factors influencing practitioners' beliefs regarding confidence in VCs. Moreover, the use of the VCSES in longitudinal studies may provide insights into the evolution of video consultation self-efficacy over time. Tracking changes in practitioners' belief levels and identifying factors contributing to these changes can potentially inform the optimization of VC services. Finally, results from the VCSES may inform the development of healthcare guidelines related to best practices for VC services.

Limitations

Respondents were anonymous and, as such, we did not confirm their eligibility in either Study 1 or Study 2. The attrition in the two Delphi study phases is also a drawback. Moreover, after the first Delphi round, differences in the experts' justifications indicated a possible misunderstanding of the concept of self-efficacy. As a result, for the second round, we asked the experts to prioritize items with a focus on challenges only (not self-efficacy) within Norwegian specialized healthcare. In addition, the sample size in Study 2 was on the verge of acceptability.

Conclusions

This is the first report on the development of a Norwegian VCSES. We developed a 12-item scale and assessed the reliability and validity of this measure. Our results suggest that the 12-item VCSES is a unidimensional (one-component), reliable, and valid measure for assessing practitioners' self-efficacy in providing video consultations to patients in the current context. The scale should undergo further validation in diverse contexts involving more respondents, using comprehensive methods such as confirmatory factor analysis or item response theory, and including additional similar and dissimilar measures of constructs to resolve the ambiguity in the results for convergent and divergent validity. Upon comprehensive validation, the scale may be relevant for use among Norwegian practitioners, psychologists, and healthcare leaders.

Acknowledgements

We would like to thank Aleksandra Sevic and Viktoria Loretto Holsey Foss from the University of Stavanger for their contribution. We also gratefully acknowledge the consultations in psychometrics for JBB during a stay in 2023 at the Institute of Psychology, Jagiellonian University, Krakow, Poland.


Note. This manuscript has not previously been published. Some of the results were presented by JBB at her 90% PhD seminar. The study was funded by a PhD grant at the University of Stavanger, Norway, for JBB (PhD project).

The text first appeared in print in Tidsskrift for Norsk psykologforening, Vol. 61, No. 7, 2024.

Baluszek, J. B., Brønnick, K. K., & Wiig, S. (2023). The relations between resilience and self-efficacy among healthcare practitioners in context of the COVID-19 pandemic – a rapid review. International Journal of Health Governance, 28(2), 152–164. https://doi.org/10.1108/IJHG-11-2022-0098

Baluszek, J. B., Wiig, S., Myrnes-Hansen, K. V., & Brønnick, K. K. (2022). Specialized healthcare practitioners’ challenges in performing video consultations to patients in Nordic Countries – a systematic review and narrative synthesis. BMC Health Services Research, 22(1), 1432. https://doi.org/10.1186/s12913-022-08837-y

Bandura, A. (1997). Self-efficacy: The exercise of control. Freeman.

Bandura, A. (2006). Guide for constructing self-efficacy scales. In F. Pajares & T. Urdan (Eds.), Self-efficacy beliefs of adolescents (pp. 307–333). Information Age Publishing.

Barrett, D., & Heale, R. (2020). What are Delphi studies? Evidence-Based Nursing, 23(3), 68–69. https://doi.org/10.1136/ebnurs-2020-103303

Campbell, D. T., & Fiske, D. W. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56(2), 81–105. https://doi.org/10.1037/h0046016

Clark, L. A., & Watson, D. (2016). Constructing validity: Basic issues in objective scale development. In A. E. Kazdin (Ed.), Methodological issues and strategies in clinical research (4th ed., pp. 187–203). American Psychological Association. https://doi.org/10.1037/14805-012

Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16(3), 297–334. https://doi.org/10.1007/BF02310555

Ferketich, S. (1991). Focus on psychometrics. Aspects of item analysis. Research in Nursing & Health, 14(2), 165–168. https://doi.org/10.1002/nur.4770140211

Golz, C., Peter, K. A., Müller, T. J., Mutschler, J., Zwakhalen, S. M. G., & Hahn, S. (2021). Technostress and digital competence among health professionals in Swiss psychiatric hospitals: Cross-sectional study. JMIR Mental Health, 8(11), e31408. https://doi.org/10.2196/31408

Hair, J. F., Black, W. C., Babin, B. J., Anderson, R. E., & Tatham, R. L. (1998). Multivariate data analysis (5th ed.). Prentice Hall.

Kaiser, S., & Kyrrestad, H. (2019). Måleegenskaper ved den norske versjonen av WHO Well-Being Index (WHO-5). PsykTestBarn, 9(1). https://doi.org/10.21337/0063

Kim, T., Sydnes, A., & Batalden, B. (2020). Development and validation of a safety leadership self-efficacy scale (SLSES) in maritime context. Safety Science, 134, 105031. https://doi.org/10.1016/j.ssci.2020.105031

Messick, S. (1995). Standards of validity and the validity of standards in performance assessment. Educational Measurement: Issues and Practice, 14(4), 5–8. https://doi.org/10.1111/j.1745-3992.1995.tb00881.x

Nasa, P., Jain, R., & Juneja, D. (2021). Delphi methodology in healthcare research: How to decide its appropriateness. World Journal of Methodology, 11(4), 116–129. https://doi.org/10.5662/wjm.v11.i4.116

Ohannessian, R., Duong, T. A., & Odone, A. (2020). Global telemedicine implementation and integration within health systems to fight the COVID-19 pandemic: A call to action. JMIR Public Health and Surveillance, 6(2), e18810. https://doi.org/10.2196/18810

Rho, M. J., Choi, I. Y., & Lee, J. (2014). Predictive factors of telemedicine service acceptance and behavioral intention of physicians. International Journal of Medical Informatics, 83(8), 559–571. https://doi.org/10.1016/j.ijmedinf.2014.05.005

Røysamb, E., Schwarzer, R., & Jerusalem, M. (1998). Norwegian version of the general perceived self-efficacy scale. http://userpage.fu-berlin.de/~health/norway.htm