Psychological assessment of affective states has traditionally relied on individuals’ reports of their own feelings. However, people do not always identify and report emotions accurately (Quirin, Kazén, & Kuhl, 2009). This may partly be attributed to the complexity of affective experiences, which comprise different components such as situation appraisal, subjective feelings, expressive behavior, physiological responses, and action preparation (Scherer & Moors, 2019). It has been argued that these processes occur at both a pre-reflective (i.e., automatic) and a reflective (i.e., rational) level (Lieberman, 2019). Self-report methods may therefore not fully reflect an individual’s affective experience, which underscores the importance of studying implicit (i.e., automatic) affective processes.
Implicit affective processes are consistent with a dual-process view of appraisal theories of affect (Clore & Ortony, 2000). According to this view, information can be processed through reflective propositions and rules (which convey one or more appraisal values), but alternatively (or additionally) in an associative way, automatically activating learned associations between representations of the stimuli and previously stored appraisal outputs (Moors, 2013). Accordingly, affective experience would begin with several simultaneous pre-reflective, automatic processes that give rise to an experience that has not (yet) been reflected on. In line with this information-processing approach to affect, implicit affect is conceptualized as the automatic activation of cognitive representations of affective experiences (Quirin et al., 2009).
Previous research has demonstrated that affective processes, even when not fully recognized, can impact human behavior (e.g., Winkielman et al., 2005) and are related to brain processes (Lane, 2008; Pessoa, 2013) and health (e.g., Quirin & Bode, 2014; Lane, 2008; Weil et al., 2019). A number of procedures have been developed to tap affective processes indirectly, such as the Implicit Association Test (IAT; Greenwald et al., 2003; see also the IAT-Anxiety; Egloff & Schmukle, 2002) and the Affect Misattribution Procedure (AMP; Payne et al., 2005). However, these measures were developed to assess individuals’ attitudes (or self-concepts) rather than affect itself, which led to the development of the IPANAT.
The IPANAT aims to assess a pre-reflective (i.e., automatic) dimension of affect and draws on the principle of affect infusion as a method for assessing implicit affect. According to this principle, affect influences evaluative processes and thereby the judgments of unrelated objects. The goal of the test is thus to capture automatic affective processes as expressed in participants’ biased judgments. Accordingly, the IPANAT asks participants to rate the degree to which six nonsense words (i.e., SAFME, VIKES, TUNBA, TALEP, BELNI, and SUKOV) sound like six mood adjectives (i.e., happy, cheerful, energetic, helpless, tense, and inhibited). The test is thus composed of 36 items, which are scored on a 4-point Likert scale ranging from doesn’t fit at all to fits very well.
The IPANAT has shown good psychometric properties and construct validity (Quirin et al., 2009; Quirin et al., 2018). In addition, criterion-based validity is supported by research relating implicit NA and low implicit PA to slow blood pressure recovery after harassment (Brosschot et al., 2014; van der Ploeg et al., 2014) and after unconscious stress induction (van der Ploeg et al., 2019), as well as to both stress-contingent and circadian salivary cortisol, relationships that did not emerge for explicit affect (Mossink et al., 2015; Quirin, Kazén, Rohrmann, & Kuhl, 2009). An fMRI study demonstrated that implicit (IPANAT) but not explicit negative affect predicted the accuracy of recognizing briefly presented anger gestures, as well as concomitant neural correlates in the fear network of the brain (Suslow et al., 2015; see also Quirin & Lane, 2012, on the necessity of considering implicit affect in the neurosciences).
Bodenschatz et al. (2018) used eye-tracking in a healthy population to demonstrate that implicit NA predicts attention towards sad faces over and above self-reported depressive symptoms. Kazén et al. (2014) found that implicit NA predicted local processing, whereas implicit PA predicted global processing, in individuals with low versus high emotion regulation abilities, respectively; these effects were not found for explicit affect. Additional studies have demonstrated the validity of the IPANAT as an affect measure that is incremental to explicit affect (e.g., Dekker & Johnson, 2018; Quirin et al., 2011; Remmers et al., 2016). Hence, implicit affect assessed via the IPANAT appears to contribute to the understanding of affective phenomena.
In addition, the IPANAT has been adapted into many languages, displaying good psychometric properties (e.g., Hernández et al., 2020; Shimoda et al., 2014; Sulejmanov & Spasovski, 2017). Results from ten different countries showed that the best-fitting model consisted of two factors corresponding to positive affect and negative affect (on average, χ2/df = 2.53, CFI = .96, TLI = .91). Both factors showed good reliability coefficients (on average, implicit PA: α = .81; implicit NA: α = .78; Quirin et al., 2018).
Investigations of affect and health often require economical assessments: affective processes are fleeting after experimental affect induction (see Hermans et al., 2001), participants sometimes respond to the IPANAT in multiple assessments (as in ecological momentary assessment studies), or the test is administered in conjunction with other time-consuming measures. Therefore, the purpose of this study was to create and evaluate a brief version of the original test (called the IPANAT-18 in the remainder of this article). A validated brief version could also improve reliability in some experimental designs (e.g., when repeated measures of affect are needed) and avoid extra burden or boredom for participants. Thus, a brief version would improve the instrument’s utility without sacrificing its psychometric properties.
The sample included 242 Spanish adults (111 males). Participants’ ages, classified into bands of 18–24, 25–34, 35–44, 45–54, and 55–65 years, were distributed as follows: 18%, 18%, 26.8%, 18.9%, and 18.3%, respectively. Participants were recruited online by a Spanish market research firm (CERES) and received 12 euros as compensation for their participation. The only requirement for participation was being over 18 years of age. Participants first saw a full description of the experiment, which simultaneously served as the informed consent form. Participants who provided consent were then given a URL directing them to the experiment. More than 90% of participants (i.e., 218) reported having been born in Spain. Regarding education level, the majority of participants (52%) self-reported having a university degree or above, 37% reported a high school degree, and 11% reported a secondary school degree.
A Spanish version of the IPANAT was used (see Hernández et al., 2020). All testing took place online via Qualtrics (Qualtrics Provo, 2013). In total, the experiment took approximately 10 minutes to complete. A computerized version of the IPANAT presented one item per screen, after the presentation of the IPANAT instructions (i.e., the cover story). Participants were then asked to provide judgments of six artificial words across six mood adjectives. For each of the artificial words (SAFME, TALEP, BELNI, SUKOV, GOLIP, and KERUS), participants indicated on a four-point answer scale (1 = doesn’t fit at all, 2 = fits somewhat, 3 = fits quite well, and 4 = fits very well) to what extent the sound of the artificial word conveyed each of the following moods: happy, helpless, energetic, tense, cheerful, and inhibited. Thus, the test consisted of 36 items. The artificial words were presented in random order to avoid order effects, the adjectives within each artificial word were also randomized, and the six mood adjectives belonging to each artificial word were presented consecutively. Global scores for implicit PA and implicit NA were computed by averaging the scores derived from the positively and negatively valenced adjectives, respectively (following Quirin et al., 2009).
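The scoring rule just described can be sketched as follows. The response matrix below is purely illustrative (invented ratings, not data from the study); only the averaging logic reflects the procedure.

```python
# Illustrative 6x6 response matrix for one participant: rows are the six
# artificial words, columns are the six mood adjectives in the order
# happy, helpless, energetic, tense, cheerful, inhibited (ratings 1-4).
ratings = [
    [3, 1, 2, 1, 3, 2],
    [2, 2, 3, 1, 2, 1],
    [4, 1, 3, 2, 3, 1],
    [2, 3, 1, 2, 2, 3],
    [1, 2, 2, 3, 1, 2],
    [3, 1, 4, 1, 3, 1],
]

PA_COLS = (0, 2, 4)  # happy, energetic, cheerful
NA_COLS = (1, 3, 5)  # helpless, tense, inhibited

def scale_mean(rows, cols):
    """Mean rating across all word x adjective cells of one valence."""
    cells = [row[c] for row in rows for c in cols]
    return sum(cells) / len(cells)

implicit_pa = scale_mean(ratings, PA_COLS)  # averages 18 positively valenced cells
implicit_na = scale_mean(ratings, NA_COLS)  # averages 18 negatively valenced cells
```

Each global score is thus the mean of 18 cells (6 words × 3 same-valence adjectives) of the full 36-item test.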
After answering the IPANAT, participants were presented with a series of personality and affect questionnaires used to examine the construct validity of the IPANAT. Explicit PA and NA were assessed with two instruments. First, we used the widely applied Positive and Negative Affect Schedule (PANAS; Watson et al., 1988; Spanish version: López-Gomez et al., 2015). Second, explicit affect was also assessed by asking participants for explicit mood judgments of the same mood adjectives included in the IPANAT (i.e., asking individuals to report the extent to which they felt happy, cheerful, energetic, helpless, tense, and inhibited at the moment) on a rating scale from 0 (not at all) to 10 (absolutely) (following Quirin et al., 2009). Analogously to the original IPANAT, we composed a PA and an NA scale by computing average scores for happy, cheerful, and energetic, versus helpless, tense, and inhibited, respectively.
The goal of the present study was to create and evaluate a brief version of the IPANAT. Like other projective tests, the IPANAT uses judgments of artificial words to track changes in responses to ambiguous stimuli, with the objective of revealing pre-reflective emotions. As detailed above, the instrument’s items comprise six mood adjectives that are rated several times; the 36 items are thus in fact six truly different items presented repeatedly to capture biased responses. For a brief version of the IPANAT, it is therefore paramount to identify the necessary number of repetitions of the items rather than which particular items to keep (since they are redundant); a random selection of the right number of items should yield psychometric properties similar to those of the full test. As suggested by Taber (2018), high levels of Cronbach’s alpha indicate that items in a scale elicit the same pattern of responses, which implies that they are redundant. Although a higher number of items improves a scale’s reliability, additional items measuring the same thing as the existing items lead to an inefficient redundancy: almost no additional useful information is obtained, yet the instrument takes longer to administer.
Since we aimed to improve the usefulness of the test, reliability analyses for different numbers of items were conducted via Cronbach’s alpha coefficient to determine the best trade-off between the length of the test and good internal consistency. This item reduction analysis, based on classical test theory, has been found to be a reliable item reduction method (Erhart et al., 2010) in comparison with other methods such as Rasch item-fit analysis. As suggested by Erhart et al. (2010), we accompanied this item reduction method with additional analyses (i.e., confirmatory factor analysis) to corroborate the psychometric properties of the instrument. Once Cronbach’s alpha coefficient provided an indication of the smallest number of items required to retain the psychometric properties of the original IPANAT, the item reduction procedure consisted of a random selection of the stimulus words used in the IPANAT. The newly established set of items was then extracted from the original 36 items. Finally, the descriptive statistics, reliability coefficients, and latent structure of the full IPANAT were compared with those of the brief version.
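The alpha-based item reduction can be illustrated with a minimal sketch of the coefficient itself (standard formula; the function name and example data are ours, not from the study):

```python
from statistics import variance  # sample variance (ddof = 1)

def cronbach_alpha(scores):
    """Cronbach's alpha for a list of respondent rows of equal-length item scores.

    alpha = k / (k - 1) * (1 - sum of item variances / variance of total scores)
    """
    k = len(scores[0])
    item_vars = sum(variance([row[i] for row in scores]) for i in range(k))
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - item_vars / total_var)
```

Shortened scales are then simulated by passing column subsets (e.g., the items for two, three, or four artificial words) and comparing the resulting alphas, mirroring the item reduction curve in Figure 1.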
As mentioned above, the latent structure of the IPANAT-18 was evaluated using confirmatory factor analysis (CFA). The CFA model tested was based on the model proposed by the authors of the original test and on previous findings with the IPANAT (see Quirin et al., 2018). The model expressed the hypothesis that the IPANAT measures two factors, implicit NA and implicit PA. Scores for each of the six mood adjectives (i.e., three for PA and three for NA) were calculated by averaging the ratings of each adjective across the three artificial words; each adjective was then loaded onto its corresponding factor. It was a restricted model, which allowed each item to load only on its predicted factor. Previous findings indicate that a correlation between the two underlying factors could occur (see Quirin et al., 2018). Thus, in our study the two factors were allowed to be non-orthogonal, to better explore this possibility. According to Izquierdo et al. (2014), allowing the latent factors of a model to covary is the best way to examine their possible orthogonality.
The CFA models included error variances for each item, with loadings fixed to 1. We estimated factor loadings via the diagonally weighted least squares (DWLS) estimator, which has been specifically designed for ordinal data (Cheng-Hsien, 2016). We used chi-squared values and degrees of freedom to assess the fit of the CFA models, along with the Comparative Fit Index (CFI; Bentler, 1990), the Tucker-Lewis Index (TLI), the Root-Mean-Square Error of Approximation (RMSEA), and the Standardized Root Mean Square Residual (SRMR), as these are commonly recommended for assessing absolute model fit (Browne & Cudeck, 1992; Jackson et al., 2009; Steiger & Lind, 1980). Following the guidelines of Hooper et al. (2008), the present study used the following thresholds for determining model fit: chi-squared (CMIN/df) less than 3, CFI ≥ 0.95, TLI ≥ 0.95, RMSEA ≤ 0.05, and SRMR ≤ 0.08.
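These indices are normally reported directly by SEM software; the sketch below only illustrates, under their standard definitions, how two of them derive from a model’s chi-square, degrees of freedom, and sample size. Any null-model values passed in would come from the fitted model, not from this article.

```python
import math

def rmsea(chisq, df, n):
    """Root-mean-square error of approximation (Steiger & Lind, 1980).

    Equals 0 whenever the chi-square does not exceed its degrees of freedom.
    """
    return math.sqrt(max(chisq - df, 0.0) / (df * (n - 1)))

def cfi(chisq, df, chisq_null, df_null):
    """Comparative fit index (Bentler, 1990): model misfit relative to a
    baseline (null) model, truncated to the 0-1 range."""
    d_model = max(chisq - df, 0.0)
    d_null = max(chisq_null - df_null, d_model)
    return 1.0 - d_model / d_null if d_null > 0 else 1.0
```

For example, a chi-square below its degrees of freedom (as in the brief-version model reported below, χ2 = 3.93, df = 8) yields an RMSEA of exactly 0.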
Finally, we used correlational analyses and Z-tests to determine the relationships between the brief and full versions of the IPANAT, and between these and explicit measures of affect. Basic statistical analyses were conducted using IBM SPSS Statistics 22.0. CFAs were performed using R 3.6 and RStudio 1.2.
An analysis of the internal consistency of the IPANAT’s scales under progressive item reduction (see Figure 1) determined that the best ratio between number of items and an acceptable alpha coefficient was reached with three artificial words (i.e., 18 items): this corresponded to the smallest number of items with reliability coefficients similar to those reported for the different versions of the full IPANAT (see Quirin et al., 2018). Therefore, for the IPANAT-18, three artificial words (i.e., SAFME, TALEP, and BELNI) were randomly selected from the stimulus words used in the IPANAT-S.
After having completed the test, participants responded to a question about the presumed underlying aim of the IPANAT. Twelve individuals (4.95% of the initial sample of 242 participants) suggested that the test might assess affective states and were excluded. Descriptive statistics (mean scores, standard deviations, skewness, and kurtosis) for the brief and full versions of the IPANAT can be found in Table 1. There were no missing data. After evaluating the assumptions of multivariate normality and linearity, we identified that the assumption of multivariate normality was slightly violated in our sample. Therefore, we used the diagonally weighted least squares (DWLS) estimator, since this method provides more accurate parameter estimates under these conditions (Mîndrilă, 2010). Regarding sample size, the size used in the present study is adequate for the stability of the parameter estimates, since 10 participants per estimated parameter appears to be the general consensus (see Schreiber et al., 2006). The CFA model specified 6 regressions, 1 covariance, and 6 variances, totalling 13 parameters to be estimated. With a final sample size of 230, this yields an acceptable ratio of 17.69 participants per estimated parameter.
| Scale | M | SD | Skewness | Kurtosis | α |
| --- | --- | --- | --- | --- | --- |
| Implicit PA (Full version) | 1.82 | 0.58 | 0.20 | –0.89 | 0.91 |
| Implicit NA (Full version) | 1.59 | 0.44 | 0.60 | –0.41 | 0.87 |
| Implicit PA (IPANAT-18) | 1.82 | 0.61 | 0.19 | –1.00 | 0.86 |
| Implicit NA (IPANAT-18) | 1.57 | 0.46 | 0.78 | 0.42 | 0.77 |
As Table 1 shows, the mean scores for implicit PA are higher than the mean scores for implicit NA, which is consistent with previous findings with the IPANAT (Quirin et al., 2018). Table 1 also shows that the internal consistency estimates for the IPANAT-18 scales reached an acceptable level: implicit PA obtained an alpha coefficient of .86, and implicit NA of .77. Moreover, these alpha coefficients are comparable to those reported for the original version of the test (Quirin et al., 2009).
The model tested for the IPANAT-18 obtained a χ2 of 3.93 with 8 degrees of freedom, a χ2/df (CMIN) of 0.49, a CFI of 1, a TLI of 1, an RMSEA of 0.00, and an SRMR of 0.02. According to Hu and Bentler (1999), these values indicate a good fit between the model and the observed data (see also Schreiber et al., 2006). Table 2 depicts the χ2 values and fit indices of the full and brief versions of the test, and Table 3 depicts the standardized and unstandardized coefficients of the CFA models. Along with Figure 2, the results support a two-factor solution for the IPANAT-18. Moreover, the fit indices obtained by the brief version (18 items) are slightly lower than, yet comparable to, the fit indices found for the full version in this sample and those reported for ten different versions of the full test (see Quirin et al., 2018).
| Observed variable | Latent construct | IPANAT | IPANAT-18 |
| --- | --- | --- | --- |
The differences between the mean scores of implicit affect assessed with 18 versus 36 items were statistically non-significant: for implicit PA, t(229) = –.35, p > .05, and for implicit NA, t(229) = 1.22, p > .05. In addition, implicit affect mean scores from the 18-item and 36-item versions were strongly correlated (implicit PA, r = .92; implicit NA, r = .88).
As shown in Table 4, the correlations between the IPANAT-18 and explicit affect measures are of moderate strength. In addition, Z-tests were run to compare the correlations between the implicit and explicit affect scales. For implicit negative affect, the correlation with explicit negative affect (assessed by the PANAS) was significantly higher than the correlation with explicit positive affect, z = 2.441, p < .01. Conversely, implicit positive affect was more strongly correlated with explicit positive affect (assessed by the Same Adjectives scale) than with explicit negative affect, z = 3.107, p < .01.
| Measure | IPANAT-18 Implicit PA | IPANAT-18 Implicit NA |
| --- | --- | --- |
| PANAS PA | 0.15* | 0.07 ns |
| Explicit scale PA (Same Adjectives) | 0.26** | 0.08 ns |
| Explicit scale NA (Same Adjectives) | –0.05 ns | 0.15*** |
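Comparing two correlations that share one variable and are computed on the same sample calls for a dependent-correlation test; one standard procedure is that of Meng, Rosenthal, and Rubin (1992), sketched below (the article does not specify which variant was used, and the required correlation between the two explicit scales, r12, is not reported here, so any value passed in would be an assumption).

```python
import math

def fisher_z(r):
    """Fisher r-to-z transformation."""
    return 0.5 * math.log((1 + r) / (1 - r))

def meng_z(r1, r2, r12, n):
    """Z statistic for comparing two dependent correlations r(x, y1) and
    r(x, y2) that share variable x (Meng, Rosenthal, & Rubin, 1992).
    r12 is the correlation between y1 and y2; n is the sample size."""
    rbar2 = (r1 ** 2 + r2 ** 2) / 2          # mean squared correlation
    f = min((1 - r12) / (2 * (1 - rbar2)), 1.0)  # capped at 1 by convention
    h = (1 - f * rbar2) / (1 - rbar2)
    return (fisher_z(r1) - fisher_z(r2)) * math.sqrt(
        (n - 3) / (2 * (1 - r12) * h)
    )
```

The resulting Z is referred to the standard normal distribution; a positive value indicates that the first correlation is the larger one.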
Statistical analyses were also performed on the non-selected stimulus words of the IPANAT (i.e., SUKOV, GOLIP, and KERUS). The results indicated that this set of three artificial words also shows good psychometric properties. The fit indices of the CFA model for this alternative brief version were CMIN = 0.52, CFI = 0.99, TLI = 0.99, RMSEA = 0.01, and SRMR = 0.04.
The present study aimed to create and validate a brief version of the IPANAT, a measure for the indirect assessment of affect. Based on the results of the item reduction procedure, three artificial words (i.e., SAFME, TALEP, and BELNI) were randomly selected from the six stimulus words used in the IPANAT. The brief version of the IPANAT is therefore composed of 18 items. We explored the goodness of fit of the IPANAT-18 via CFA and found that the best-fitting model supports a two-factor structure of the test, corresponding to implicit PA and implicit NA, in line with the factor structure found for the original IPANAT (see Quirin et al., 2009). As mentioned in the results section, the chi-square value and fit indices indicated a good fit of the proposed model. In addition, the sample size used in the present study was adequate to produce relative stability of the parameter estimates. Internal consistency analyses showed good reliability for both scales, and the CFA goodness of fit was comparable to findings from previous validations of explicit affect instruments (López-Gomez et al., 2015).
In our study, the PA and NA dimensions turned out to be non-orthogonal, as also reflected in a positive correlation between the mean values of implicit PA and implicit NA. This is consistent with previous findings from cross-cultural studies with the IPANAT (see Hernández et al., 2020; Quirin et al., 2018). These authors argued that positive correlations between positive and negative affect could be due to the fact that different cultures attribute slightly different meanings to mood adjectives. This is also consistent with findings on adjectives referring to personality (Nye et al., 2008). Concordantly, previous cross-cultural studies with the IPANAT showed that high correlations between positive and negative affect were mostly due to a positive correlation between the mood adjectives energetic and tense (Quirin et al., 2018). In addition, it has been argued that in some languages the mood adjectives yield less variability in the response range; future studies exploring this hypothesis should therefore use a sample in a strong emotional context or under emotional priming. Nonetheless, according to Brown (2006), a factor structure with a positive correlation between factors may be the better-fitting model, particularly if the factor loadings are strong and the fit indices are better than those of a one-factor model, as previously found in CFAs of the IPANAT (see Hernández et al., 2020).
Not least, the convergent and discriminant validity of the IPANAT-18 was supported by valence-congruent correlations with explicit affect scales. The results showed that the correlations between the IPANAT-18 and explicit affect measures were significant and of moderate strength. These moderate correlations are consistent with results reported for the original IPANAT: Quirin et al. (2009) reported significant correlations of .20 between implicit and explicit PA and .22 between implicit and explicit NA. Moderate correlations between implicit and explicit measures are also consistent with findings for other implicit measures, such as the Implicit Association Test (Greenwald et al., 2003) and the Affect Misattribution Procedure (Payne et al., 2005) (see Echebarria-Echabe, 2013). According to some authors, these low correlations between implicit and explicit measures can be due to different factors, such as motivational biases in the explicit measure, participants’ lack of introspective access, or even complete independence of the underlying constructs (Hofmann et al., 2005). In addition, evidence of the discriminant validity of the IPANAT-18 can be obtained from our results, since Z-tests showed that implicit NA was more strongly correlated with explicit NA measures than with explicit PA measures, and the opposite was found for implicit PA.
Finally, a different set of artificial words (i.e., SUKOV, GOLIP, and KERUS) can be used as an alternative version of the IPANAT-18, since the results showed that a random selection of items (i.e., three artificial words by six mood adjectives) yields psychometric properties similar to those of the full test. This is useful for researchers of affective phenomena, particularly in experimental settings where repeated measurements are needed, since having different versions of the test could reduce anchoring effects on participants’ responses.
In conclusion, the present study suggests that the psychometric properties of the IPANAT-18 are almost as good as those of the full-length measure. Hence, the shorter measure should serve studies requiring less administration time than the original test. This is especially important for research where affective processes are experimentally induced, since induced affect is often fleeting (Hermans et al., 2001), so a brief version can better capture these processes. Likewise, research using repeated assessment, such as daily-diary studies, can also benefit from economical multiple assessment, since shorter instruments help avoid frustrating or burdening participants.
This work was partially made possible through a grant from the Templeton Rlg. Trust (TRT 0119) supporting MQ and GPH; by the National Council for Science and Technology of Mexico (CONACyT) supporting GPH, and the Spanish Government (under Grant PSI2016-76411-R) supporting GPH, SE and TR. Special thanks to Cafer Bakac for providing advice regarding analysis.
The authors have no competing interests to declare.
Bentler, P. M. (1990). Comparative fit indexes in structural models. Psychological Bulletin, 107(2), 238–246. DOI: https://doi.org/10.1037/0033-2909.107.2.238
Bodenschatz, C. M., Skopinceva, M., Kersting, A., Quirin, M., & Suslow, T. (2018). Implicit negative affect predicts attention to sad faces beyond self-reported depressive symptoms in healthy individuals: An eye-tracking study. Psychiatry Research, 265, 48–54. DOI: https://doi.org/10.1016/j.psychres.2018.04.007
Brosschot, J. F., Geurts, S. A. E., Kruizinga, I., Radstaak, M., Verkuil, B., Quirin, M., & Kompier, M. A. J. (2014). Does Unconscious Stress Play a Role in Prolonged Cardiovascular Stress Recovery? Stress and Health, 30(3), 179–187. DOI: https://doi.org/10.1002/smi.2590
Brown, T. A. (2006). Confirmatory factor analysis for applied research. New York: Guilford Publications. DOI: https://doi.org/10.5860/CHOICE.44-2769
Browne, M. W., & Cudeck, R. (1992). Alternative Ways of Assessing Model Fit. Sociological Methods & Research, 21(2), 230–258. DOI: https://doi.org/10.1177/0049124192021002005
Cheng-Hsien, L. (2016). Confirmatory factor analysis with ordinal data: Comparing robust maximum likelihood and diagonally weighted least squares. Behavior Research Methods, 48(3), 936–949. DOI: https://doi.org/10.3758/s13428-015-0619-7
Clore, G. L., & Ortony, A. (2000). Cognitive neuroscience of emotion. In R. D. Lane, L. Nadel, G. L. Ahern, J. Allen, & A. W. Kaszniak (Eds.), Series in affective science. Cognitive neuroscience of emotion (pp. 24–61). Oxford: Oxford University Press.
Dekker, M. R., & Johnson, S. L. (2018). Major depressive disorder and emotion-related impulsivity: Are both related to cognitive inhibition? Cognitive Therapy and Research, 42(4), 398–407. DOI: https://doi.org/10.1007/s10608-017-9885-2
Echebarria-Echabe, A. (2013). Relationship between implicit and explicit measures of attitudes: The impact of application conditions. Europe’s Journal of Psychology, 9(2), 231–245. DOI: https://doi.org/10.5964/ejop.v9i2.544
Egloff, B., & Schmukle, S. (2002). Predictive validity of an Implicit Association Test for assessing anxiety. Journal of Personality and Social Psychology, 83(6), 1441–1455. DOI: https://doi.org/10.1037/0022-3514.83.6.1441
Erhart, M., Hagquist, C., Auquier, P., Rajmil, L., Power, M., Ravens-Sieberer, U., & European KIDSCREEN Group. (2010). A comparison of Rasch item-fit and Cronbach’s alpha item reduction analysis for the development of a Quality of Life scale for children and adolescents. Child: Care, Health and Development, 36(4), 473–484. DOI: https://doi.org/10.1111/j.1365-2214.2009.00998.x
Greenwald, A., Nosek, B., & Banaji, M. (2003). Understanding and using the Implicit Association Test: I. An improved scoring algorithm. Journal of Personality and Social Psychology, 85(2), 197–216. DOI: https://doi.org/10.1037/0022-3514.85.2.197
Hermans, D., De Houwer, J., & Eelen, P. (2001). A time course analysis of the affective priming effect. Cognition & Emotion, 15(2), 143–165. DOI: https://doi.org/10.1080/02699930125768
Hernández, G. P., Rovira, T., Quirin, M., & Edo, S. (2020). A Spanish Adaptation of the Implicit Positive and Negative Affect Test (IPANAT). Psicothema, 32(2). DOI: https://doi.org/10.7334/psicothema2019.297
Hofmann, W., Gawronski, B., Gschwendner, T., Le, H., & Schmitt, M. (2005). A Meta-Analysis on the Correlation Between the Implicit Association Test and Explicit Self-Report Measures. Personality and Social Psychology Bulletin, 31(10), 1369–1385. DOI: https://doi.org/10.1177/0146167205275613
Hu, L., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal, 6(1), 1–55. DOI: https://doi.org/10.1080/10705519909540118
Izquierdo, I., Olea, J., & Abad, F. J. (2014). Exploratory factor analysis in validation studies: Uses and recommendations. Psicothema, 26(3), 395–400. DOI: https://doi.org/10.7334/psicothema2013.349
Jackson, D. L., Gillaspy, J. A., & Purc-Stephenson, R. (2009). Reporting practices in confirmatory factor analysis: An overview and some recommendations. Psychological Methods, 14, 6–23. DOI: https://doi.org/10.1037/a0014694
Kazén, M., Kuhl, J., & Quirin, M. (2014). Personality Interacts with Implicit Affect to Predict Performance in Analytic vs. Holistic Processing. Journal of Personality, 83(3), 251–261. DOI: https://doi.org/10.1111/jopy.12100
Lane, R. D. (2008). Neural substrates of implicit and explicit emotional processes: a unifying framework for psychosomatic medicine. Psychosomatic Medicine, 70(2), 214–231. DOI: https://doi.org/10.1097/PSY.0b013e3181647e44
Lieberman, M. D. (2019). Boo! The consciousness problem in emotion. Cognition and Emotion, 33, 24–30. DOI: https://doi.org/10.1080/02699931.2018.1515726
López-Gomez, I., Hervas, G., & Vázquez, C. (2015). An adaptation of the positive and negative affect schedules (PANAS) in a Spanish general sample. Behavioral Psychology-Psicología Conductual, 23(3), 529–548.
Mîndrila, D. (2010). Maximum likelihood (ML) and diagonally weighted least squares (DWLS) estimation procedures: A comparison of estimation bias with ordinal and multivariate non-normal data. International Journal of Digital Society, 1(1), 60–66. DOI: https://doi.org/10.20533/ijds.2040.2570.2010.0010
Moors, A. (2013). On the causal role of appraisal in emotion. Emotion Review, 5, 132–140. DOI: https://doi.org/10.1177/1754073912463601
Mossink, J. C. L., Verkuil, B., Burger, A. M., Tollenaar, M. S., & Brosschot, J. F. (2015). Ambulatory assessed implicit affect is associated with salivary cortisol. Frontiers in Psychology, 6. DOI: https://doi.org/10.3389/fpsyg.2015.00111
Nye, C. D., Roberts, B. W., Saucier, G. & Zhou, X. (2008). Testing the measurement equivalence of personality adjective items across cultures. Journal of Research in Personality, 42, 1524–1536. DOI: https://doi.org/10.1016/j.jrp.2008.07.004
Payne, B. K., Cheng, C. M., Govorun, O., & Stewart, B. D. (2005). An inkblot for attitudes: Affect misattribution as implicit measurement. Journal of Personality and Social Psychology, 89(3), 277–293. DOI: https://doi.org/10.1037/0022-3514.89.3.277
Pessoa, L. (2013). The Cognitive-Emotional Brain. From Interactions to Integration. Cambridge: MIT Press. DOI: https://doi.org/10.7551/mitpress/9780262019569.001.0001
Qualtrics Research Suite. (2013). Qualtrics and all other Qualtrics product or service names are registered trademarks or trademarks of Qualtrics, Provo, UT, USA. Retrieved from http://www.qualtrics.com.
Quirin, M., & Bode, R. C. (2014). An Alternative to Self-Reports of Trait and State Affect The Implicit Positive and Negative Affect Test (IPANAT). European Journal of Psychological Assessment, 30(3), 231–237. DOI: https://doi.org/10.1027/1015-5759/a000190
Quirin, M., Bode, R. C., & Kuhl, J. (2011). Recovering from negative events by boosting implicit positive affect. Cognition and Emotion, 25(3), 559–570. DOI: https://doi.org/10.1080/02699931.2010.536418
Quirin, M., Kazén, M., & Kuhl, J. (2009). When nonsense sounds happy or helpless: The Implicit Positive and Negative Affect Test (IPANAT). Journal of Personality and Social Psychology, 97(3), 500–516. DOI: https://doi.org/10.1037/a0016063
Quirin, M., Kazén, M., Rohrmann, S., & Kuhl, J. (2009). Implicit but not explicit affectivity predicts circadian and reactive cortisol: Using the implicit positive and negative affect test. Journal of Personality, 77(2), 401–426. DOI: https://doi.org/10.1111/j.1467-6494.2008.00552.x
Quirin, M., & Lane, R. D. (2012). The construction of emotional experience requires the integration of implicit and explicit emotional processes. Behavioral and Brain Sciences, 35(3), 159–160. DOI: https://doi.org/10.1017/S0140525X11001737
Quirin, M., Wróbel, M., Norcini Pala, A., Stieger, S., Brosschot, J., Kazén, M., … Kuhl, J. (2018). A cross-cultural validation of the implicit positive and negative affect test (IPANAT). European Journal of Psychological Assessment, 1–12. DOI: https://doi.org/10.1027/1015-5759/a000315
Remmers, C., Topolinski, S., & Koole, S. L. (2016). Why being mindful may have more benefits than you realize: Mindfulness improves both explicit and implicit mood regulation. Mindfulness, 7(4), 829–837. DOI: https://doi.org/10.1007/s12671-016-0520-1
Scherer, K. R., and Moors, A. (2019). The emotion process: event appraisal and component differentiation. Annual Review of Psychology, 70, 719–745. DOI: https://doi.org/10.1146/annurev-psych-122216-011854
Schreiber, J. B., Nora, A., Stage, F. K., Barlow, E. A., & King, J. (2006). Reporting structural equation modeling and confirmatory factor analysis results: A review. The Journal of Educational Research, 99(6), 323–338. DOI: https://doi.org/10.3200/JOER.99.6.323-338
Shimoda, S., Okubo, N., Kobayashi, M., Sato, S., & Kitamura, H. (2014). An attempt to construct a Japanese version of the Implicit Positive and Negative Affect Test (IPANAT). Japanese Journal of Psychology, 85(3), 294–303. DOI: https://doi.org/10.4992/jjpsy.85.13212
Suslow, T., Ihme, K., Quirin, M., Lichev, V., Rosenberg, N., Bauer, J., … Lobsien, D. (2015). Implicit affectivity and rapid processing of affective body language: An fMRI study. Scandinavian Journal of Psychology, 56(5), 545–552. DOI: https://doi.org/10.1111/sjop.12227
Taber, K. S. (2018). The use of Cronbach’s alpha when developing and reporting research instruments in science education. Research in Science Education, 48(6), 1273–1296. DOI: https://doi.org/10.1007/s11165-016-9602-2
van der Ploeg, M. M., Brosschot, J. F., Quirin, M., Lane, R. D., & Verkuil, B. (2019). Inducing unconscious stress: Subliminal anger and relax primes show similar cardiovascular activity patterns. Journal of Psychophysiology. DOI: https://doi.org/10.1027/0269-8803/a000247
van der Ploeg, M. M., Brosschot, J. F., & Verkuil, B. (2014). Measuring Unconscious Stress: the Implicit Positive and Negative Affect Test and Cardiovascular Activity After Anger Harassment. Psychosomatic Medicine, 76(3), A90–A91.
Watson, D., Clark, L. A., & Tellegen, A. (1988). Development and validation of brief measures of positive and negative affect: The PANAS scales. Journal of Personality and Social Psychology, 54(6), 1063–1070. DOI: https://doi.org/10.1037/0022-3514.54.6.1063
Weil, A-S., Hernández G. P., Suslow, T., & Quirin, M. (2019). Implicit Affect and Autonomous Nervous System Reactions: A Review of Research Using the Implicit Positive and Negative Affect Test. Frontiers in Psychology, 10, 1634. DOI: https://doi.org/10.3389/fpsyg.2019.01634
Winkielman, P., Berridge, K. C., & Wilbarger, J. L. (2005). Unconscious affective reactions to masked happy versus angry faces influence consumption behavior and judgments of value. Personality and Social Psychology Bulletin, 31(1), 121–135. DOI: https://doi.org/10.1177/0146167204271309