Four Cs of creativity.
The history of creativity assessment is as old as the concept itself. Researchers from various cultures and disciplines have attempted to define the concept of creativity and to offer valid ways to assess it. Creativity is generally defined as the ability to produce work that is novel and appropriate. Researchers in the field have attempted to measure creativity from different perspectives, trying to answer questions such as “What are the mental processes involved in creative thought?”, “Which personality traits are associated with creativity?”, “How can a product be judged to be creative?” and “What are the external forces that affect creativity?”. The answers to these questions underlie the most commonly used creativity assessment instruments. This chapter presents a brief overview of the assessment of creativity from the psychometric perspective and discusses the strengths and weaknesses of various instruments used in the field.
- creativity assessment
- psychometric approach
- divergent thinking
- tests of creative thinking
The belief that creativity is too difficult to measure is still a dominant myth and can be considered a byproduct of definitional issues. Researchers from various cultures and disciplines have attempted to define creativity and to offer a valid way to assess it. As creativity is a multifaceted phenomenon, defining and operationalizing it is a complicated task. For the sake of the discussion, one should start by defining “creativity”. The usefulness of higher-order cognitive constructs is related to the degree of clarity of their definitions. Unfortunately, most creativity research overlooks the importance of this point. In a content analysis of articles published in two major creativity research journals, the Creativity Research Journal and the Journal of Creative Behavior, researchers found that only 34% of the selected articles provided an explicit definition of creativity . In order to examine a concept scientifically, we should rely on operationalized definitions, and the relatively low rate of explicit definitions of creativity constitutes a major problem for the field. As a result, I will use the following definition, provided in Ref. , to clarify my perspective for this chapter. Creativity is “the interaction among aptitude, process, and environment by which an individual or group produces a perceptible product that is both novel and useful as defined within a social context”.
Starting with a definition helps, but it does not answer the question at hand: why assess creativity? Although this question may have hundreds of answers, the most basic and extensive one would be: because creativity is the apex of human evolution and the most desirable skill in the information age. Creative thinking was the main ability that moved humans forward, from using a hand ax to building complicated machines and developing complex languages. Furthermore, creativity has become one of the most popular skills that schools and organizations look for. The World Economic Forum, in its Future of Jobs report, ranked creativity third among the ten most important skills for the fourth industrial revolution , and creativity is also listed among the competencies of 21st-century skills. Today, supporting creativity is a common goal of kindergartens, research institutes and the biggest corporations in the world. The importance of creativity is anticipated to increase in the future due to various societal and economic trends, as explained in Ref. .
Globalized markets bring fiercer competition.
Product development cycles have shortened due to information and communication technologies (for example, today any manufactured product is redesigned within 5–10 years, and this period decreases to 6–12 months if the product is a technological device).
More and more jobs that do not require creativity are being automated.
As the job market demands more creativity, schools have started to restructure their goals and curricula to meet that need too. In the educational context, assessment of creativity is mostly about recognizing creativity and creating ideal conditions to nurture it, not about categorizing students as “creative” or “not creative”. In Ref.  possible purposes of creativity assessment have been discussed; these can be summarized as follows:
Guide individuals in recognizing their own strengths and support them in nurturing those strengths.
Develop a better understanding of human abilities like intelligence and creativity. In doing so, we gain insight into the workings of these complicated constructs.
Restructure the curriculum and learning experiences in accordance with students’ needs. If educators understand their students’ strengths and weaknesses regarding creativity, they can tailor educational opportunities to support creativity.
Employ creativity assessment as a program evaluation tool. Educators typically implement programs to enhance creativity; without pre- and post-assessments it would be impossible to know which approach worked best.
Utilize standard measures to provide a common language for professionals to discuss various aspects of creativity.
Despite its importance, creativity did not become a major research area in psychology for a long time. Until the middle of the 20th century creativity was seen as a marginal research topic, and only 0.2% of the references in the Psychological Abstracts indexes were about creativity . Even the term “creativity” was not widely used before the 1950s, although there were some influential works and essays written by philosophers and scientists (e.g. Bergson, Einstein, Kekulé, Poincaré) and early models proposed by researchers (see ). Modern creativity research began in the 1950s, and J. P. Guilford’s famous presidential address to the American Psychological Association ignited the wick . After Guilford’s call, various researchers began to work in the field of creativity. Before that, assessment of creativity was not even a concern, especially for young people or in the educational context, because previous studies were focused solely on extraordinary creative achievements or eminent creative people. However, Binet’s pioneering intelligence test constituted an exception: it included some items to measure “creative imagination” . Historically, some intelligence test developers considered creativity to be a part of intelligence, others a totally independent construct . In Ref. , the authors categorized approaches to the relation between creativity and intelligence into five groups: creativity as a subset of intelligence, intelligence as a subset of creativity, creativity and intelligence as overlapping sets, as coincident sets, and as disjoint sets. In the light of recent research, it can be claimed that the relation between intelligence and creativity depends on how each construct is defined and measured. Contemporary research widens these horizons: creativity is now seen as a psychological trait distributed in the general population, one that can be developed and measured .
The growing mindset that sees creativity as a flexible trait has increased attention to levels of creative magnitude. Creative accomplishments were categorized as everyday (little-c) and historical (Big-C) creativity. Imagine a 14-year-old math fan solving problems enthusiastically and compare her with Abel Prize winner Andrew Wiles. She will not be as creative as Wiles, and she does not need to be. Everyday creativity is certainly different from world-changing efforts. Yet the little-c/Big-C dichotomy was so sharp that one could not distinguish the creative levels ranging in between. Kaufman and Beghetto [14, 15] proposed a “Four-C Model of Creativity” (mini-c, little-c, Pro-c and Big-C) to offer a new perspective on this problem (see Table 1).
|Mini-c||Learning is closely related to creativity: when we learn a new thing or try to solve a new problem, some degree of creativity is involved. At the mini-c level the creative act or product is new and original for the individual himself or herself. For example, after several trials Sasha fired her first ceramic piece; although it was only at a beginner’s level, it was new and meaningful to her.|
|Little-c||The little-c level is one step beyond mini-c. The product or idea might be valuable to others. Sasha brought her ceramic piece home, and her family loved it and put it on top of the dresser so that they could use it and enjoy seeing it.|
|Pro-c||At the Pro-c level the individual works at a professional level, with years of experience and deliberate practice. Sasha majored in art in college and her artwork is now exhibited in galleries. Her work is followed by art experts and she is considered a creative artist.|
|Big-C||People who achieve the Big-C level are eminent and will be remembered in history books; one’s whole career and body of work is evaluated at this level. Sasha’s ceramics are bought by art collectors and exhibited regularly, and her life’s work earns her a place in the history of her art form.|
Thus, it can easily be seen that every level of “c” requires a different approach and technique for assessing creativity. Over the years, researchers and theorists have proposed several different methods and theories for assessing creativity (e.g., Amabile, Csikszentmihalyi, Kaufman and Baer, Sternberg and Lubart, Torrance) (see [16, 17, 18, 19, 20]). These few examples constitute just the tip of the iceberg; there exist dozens of definitions, methods and theories in the field of creativity. As an illustration, in Ref.  Treffinger presented more than 100 different creativity definitions, and since definitions guide assessment approaches, there are at least as many techniques to assess it. The reader can find information on more than 70 different creativity assessments on the Center for Creative Learning’s web page (see reference ). However, the variety of definitions and assessment techniques does not mean that creativity research has no consensus at all. Researchers have tried to identify the psychological factors that best predict creative outcomes and have proposed several assessment techniques that employ these factors as a means of measurement . Indeed, we can even argue that the field of creativity assessment has never been so prosperous.
2. The psychometric perspective in creativity research
Today it is accepted that creativity is a combination of cognitive, conative and emotional factors which interact dynamically with the environment. As all of these factors are present in human beings and all these variables affect us to a certain degree, it can be argued that a specific combination of them results in creativity. Throughout the history of creativity research, several researchers have tried to investigate the nature of creativity through the lens of the aforementioned factors. The 4P framework (process, person, product, press) proposed by Rhodes  is a widely accepted categorization in the psychometric study of creativity.
Process: Mental processes involved in creative thought or creative work.
Person: Personality traits or personality types associated with creativity.
Product: Products which are judged to be creative by a relevant social group.
Press (Environment): The external forces that affect the creative person or process (e.g. sociocultural context, trauma).
In this section, historical and recent research in the field of creativity assessment is presented. Not every single creativity test, scale or rating will be discussed; instead, the focus will be on the historical milestones and contemporary methods of creativity assessment. This chapter embraces the integrative review approach, with the aim of assessing, critiquing and synthesizing the literature on the assessment of creativity.
2.1 Assessing the creative process
Psychometric measures of creative process and potential have been applied extensively in the field. These processes involve cognitive factors that lead to creative production, such as finding and solving problems, selective encoding (i.e. selecting information that is relevant to the problem and ignoring distractions), evaluation of ideas, associative thinking, flexibility and divergent thinking. Nevertheless, from this long list of cognitive factors, the assessment of creative process has relied mostly on divergent thinking. Researchers in Ref.  even underlined the irony in the study of creativity: although creativity itself requires novel and original solutions to a problem, researchers have focused mostly on divergent thinking (DT) tasks. Not only were major efforts put into developing DT tests; even the earliest DT tests are still widely used in creativity research and education. Divergent thinking can be explained as a thought process used to generate creative ideas by searching for many possible solutions, whereas convergent thinking is the ability to arrive at the single “correct” solution. Guilford , who introduced these concepts, clearly underlined the difference between them.
In divergent thinking it is important to produce as many responses to verbal or figural stimuli as possible; in DT, more is better. After the examinee comes up with various answers, testers score them. The scoring is based on the concepts of originality (uniqueness of responses to a given stimulus), fluency (number of responses produced to a given stimulus), flexibility (number and/or uniqueness of categories of responses to a given stimulus) and elaboration (adding details to the ideas produced for a given stimulus) [25, 26]. As Guilford pioneered the research on creativity, initial efforts to assess it came from him and his colleagues too. Still, there were others who developed test batteries to measure creative thinking abilities, focusing mostly on process components (e.g., Kogan and Wallach, Torrance, Mednick).
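The scoring logic above can be sketched in a few lines of code. This is a minimal illustration, not any published test's official scoring rubric: the response pool, the category mapping and the normative frequencies are all hypothetical, and the 5% originality cutoff is just one common convention for crediting statistically infrequent responses.

```python
from collections import Counter

# Hypothetical norms: proportion of examinees giving each response to
# "unusual uses for a brick" (illustrative numbers, not real test norms).
NORM_FREQ = {"doorstop": 0.40, "paperweight": 0.25, "weapon": 0.10,
             "garden sculpture": 0.02}

# Hypothetical mapping of responses to semantic categories.
CATEGORY = {"doorstop": "weight", "paperweight": "weight",
            "weapon": "tool", "garden sculpture": "art"}

def score_dt(responses):
    """Score a divergent-thinking response pool on three classic indices."""
    fluency = len(responses)                                   # number of responses
    flexibility = len({CATEGORY.get(r, "other") for r in responses})  # distinct categories
    # Originality: credit responses given by fewer than 5% of examinees.
    originality = sum(1 for r in responses if NORM_FREQ.get(r, 0.0) < 0.05)
    return {"fluency": fluency, "flexibility": flexibility,
            "originality": originality}

scores = score_dt(["doorstop", "paperweight", "garden sculpture"])
```

Elaboration is omitted here because it requires a human judgment of how much detail each idea carries; the other three indices are simple counts once categories and norms are fixed.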
The TTCT-Verbal is entitled “Thinking Creatively with Words” and the figural form is entitled “Thinking Creatively with Pictures”. The verbal form consists of seven activities (one of which was dropped in later editions), whereas the figural form consists of three (see Table 2).
|Picture Construction||Participant uses a basic shape and expands on it to create a picture.|
|Picture Completion||Participant is asked to finish and title incomplete drawings.|
|Lines/Circles||Participant is asked to modify many different series of lines and circles.|
|Asking||Participant asks as many questions as possible about the picture.|
|Guessing Causes||Participant lists possible causes for the pictured action.|
|Guessing Consequences||Participant lists possible consequences for the pictured action.|
|Product Improvement||Participant is asked to make changes to improve a toy.|
|Unusual Uses||Participant is asked to think of many different possible uses for an ordinary item.|
|Unusual Questions||Participant asks as many questions as possible about an ordinary item (this item does not appear in later editions).|
|Just Suppose||Participant is asked to “just suppose” that an improbable situation has happened and then list possible ramifications.|
Mednick argued that people can achieve a creative solution through serendipity, similarity and mediation. His analysis showed that people’s associative hierarchies, or sets of responses to stimulus situations, differ. Noncreative people have steep hierarchies, with a strong or dominant response to a given situation. As an example, if someone says “table”, a person with a steep hierarchy will quickly answer “chair” and produce few other associations, whereas a person with a flat hierarchy will also reach more remote associations.
For the operational definition of his theory, Mednick developed the Remote Associates Test (the RAT). The RAT originally consisted of 30 items; each item included three stimulus words, and the participant was required to find a fourth word that links them all. For example, given the stimulus set ‘cottage/swiss/cake’, the fourth word that links them all is ‘cheese’. Some argued that, as the test requires a single correct answer, it does not seem to require creative thinking . However, one should note that the RAT is not aimed at measuring creative production directly; it measures the capacity to think creatively, and in order to reach the single answer one must think divergently along the way. Weisberg  joined this discussion with the example of a marathon runner: if one wants to identify a runner who has the potential to be a good marathon runner, one should measure lung capacity instead of running speed.
|Continuations (Cn)||Any use, continuation or extension of the six given figural fragments.|
|Completion (Cm)||Any additions, completions, complements, supplements made to the used, continued or extended figural fragments.|
|New elements (Ne)||Any new figure, symbol or element.|
|Connections made with a line (Cl)||Between one figural fragment or figure and another.|
|Connections made to produce a theme (Cth)||Any figure contributing to a compositional theme or “gestalt”.|
|Boundary breaking that is fragment dependent (Bfd)||Any use, continuation or extension of the “small open square” located outside the square frame.|
|Boundary breaking that is fragment independent (Bfi)||Any use or extension located outside the square frame independent of “small open square”.|
|Perspective (Pe)||Any breaking away from two-dimensionality.|
|Humor and affectivity (Hu)||Any drawing which elicits a humorous response, shows affection, emotion, or strong expressive power.|
|Unconventionality, a (Uc, a)||Any manipulation of the material.|
|Unconventionality, b (Uc, b)||Any surrealistic, fictional and/or abstract elements or drawings.|
|Unconventionality, c (Uc, c)||Any usage of symbols or signs.|
|Unconventionality, d (Uc, d)||Unconventional use of given fragments.|
|Speed (Sp)||A breakdown of points, beyond a certain score-limit, according to the time spent on the drawing production.|
|Field of expression||Exploratory-divergent thinking||Integrative-convergent thinking|
|Graphic||Abstract form; Concrete object||Abstract forms; Concrete objects|
|Verbal||Story endings; Story beginnings||Story with given title; Story with characters|
For convenience, the TCT-DP and EPoC have been presented under assessing the creative process, and the discussion of their psychometric evidence is included in the next part along with the other process assessment tools. As the reader may guess, there exist numerous tools for creativity assessment. Furthermore, there is growing interest in domain-specific creativity assessment, but domain-specific measures of creative potential are beyond the scope of this chapter; interested readers may check the suggested sources (e.g., see [46, 47, 48]).
2.1.1 Issues of reliability and validity in creativity assessment
The most important question regarding any measurement instrument, whether a thermometer or a test of creative thinking, is: is it reliable, does it produce consistent outcomes? To ensure reliability, psychometric instruments must show consistent results in tests of reliability such as test-retest reliability and split-half reliability. Research studies have shown that divergent thinking tests are reliable . However, there are important points for further consideration; for example, some studies found that performance on DT tasks is affected by instructions (if you instruct people to be creative, they score higher). Weisberg  highlighted this situation by asking: if you instruct the examinee to be smart in an IQ test, will he be smarter? Weisberg himself gives the answer: as children are used to answering the kinds of questions that appear in IQ tests, their scores will not change with an instruction to be smart. However, questions in creativity tests are different in nature; most of them do not have a single correct answer and children are not familiar with this kind of question. Thus, additional instruction might not be a flaw in tests of creativity.
Once the reliability of a testing instrument is established, questions about validity arise. Validity is a complex concept that can be established for a testing instrument via different analyses, such as discriminant, face, criterion and predictive validity. Tests of creative potential are reliable, yet major discussions and suspicions exist about their predictive and discriminant validity.
To start with the Guilford SOI model, it is known that there exists an enormous amount of assessment data and the archives are still available. SOI data was analyzed extensively over the years, and the results generally supported the model [49, 50], though some researchers said that revisions were needed  or concluded that the model has serious problems . The results are much the same for Wallach and Kogan: although the tests are reliable, there are mixed results about their validity.
The TTCT has been the most widely used and researched test of creativity, and thus has extensive data to support its reliability and validity. Research on the TTCT reports good reliability scores for scoring and test-retest reliability [53, 54]. The majority of predictive validity studies for the TTCT were run by Torrance himself: beginning in 1958 they included all grades 1 to 6 in two Minnesota elementary schools, and in 1959 all students in grades 7–12 took the TTCT. These students were followed up at four points (after 7, 12, 22 and 40 years) and data were collected about their creative achievements. The longitudinal studies have shown [20, 37, 55, 56] that TTCT results correlate with adult creative achievement, thus demonstrating predictive validity (for a detailed discussion see ). However, Baer  raised some questions about the relevance of the criterion variables (subscribing to a professional journal, learning a foreign language): are the questions asked about creative achievements in adult life solely related to creativity? One can justifiably argue that these criterion variables are strongly related to intelligence too. In addition, Torrance tests also correlate with intelligence, so the predictability of creative achievements might be based on intelligence, not on divergent thinking ability . On the other hand, Plucker  presented more positive results concerning the predictive validity of divergent thinking tests. He used multiple-regression analysis to reanalyze the Torrance data, examined its predictive power and provided support for the tests’ usefulness. Weisberg and Baer raise other criticisms, including about the design of the study; interested readers should refer to these sources (see [41, 58]).
Mednick’s Remote Associates Test enjoys mixed support in terms of reliability and validity too. Although the RAT has been shown to be reliable , the validity of the test is problematic . It is important to note that the criterion/predictive validity of the RAT, TCT-DP and EPoC has been subject to less investigation compared with divergent thinking tests like the SOI or TTCT. The TCT-DP has been normed in several countries, such as Germany, Korea, Poland and Australia, for different age groups. Reliability studies showed fair to very good scores in terms of parallel-test, scoring and differential reliability [42, 43]. Urban stated that the question of validity is hard to answer for the TCT-DP as there are no instruments directly comparable to it . So, they examined correlations with intelligence and verbally oriented divergent thinking tests, expected low or slightly positive correlations to ensure the instrument’s validity, and attained supportive findings for the test . As a modern creativity assessment instrument, EPoC was initially developed and validated in France with a French sample. Internal validity was acceptable, and for external validity researchers reached satisfactory results by showing that EPoC scores are independent of intelligence scores, moderately correlated with personality dimensions relevant to creativity like openness to experience, and highly correlated with classic divergent thinking tests [13, 44]. Although EPoC shows promising validity results, extensive research is needed to support its criterion and predictive validity.
Extensive discussion regarding the reliability and validity of creativity assessment is mostly based on divergent thinking tasks and tests. One major problem concerns the scoring systems: several studies showed that fluency can act as a contaminating factor on originality scores . To resolve the fluency problem, a new calculation named the Creativity Quotient (CQ) was proposed by researchers . The CQ formula rewards response pools that are highly fluent and flexible at the same time. The discussion on fluency scoring is ongoing, and some researchers advocate that fluency is a more complex construct than originally thought.
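The idea of rewarding fluency and flexibility jointly can be illustrated with a toy calculation. This sketch is only in the spirit of the CQ; the published formula differs in detail. The key property it demonstrates is diminishing returns: piling up near-duplicate ideas within one semantic category adds little, while each new category adds fresh credit.

```python
import math
from collections import Counter

def cq_like_score(categories):
    """Toy creativity score in the spirit of the Creativity Quotient.
    `categories` lists the semantic category of each response. Each category
    with n responses contributes log2(1 + n), so repeated ideas in one
    category yield diminishing returns while new categories add full credit.
    (Illustrative only; not the published CQ formula.)"""
    counts = Counter(categories)
    return sum(math.log2(1 + n) for n in counts.values())

# Ten responses crammed into one category vs. ten spread over five categories:
narrow = cq_like_score(["uses-as-weight"] * 10)        # log2(11), about 3.46
broad = cq_like_score(["a", "b", "c", "d", "e"] * 2)   # 5 * log2(3), about 7.92
```

Under such a scheme a purely fluent but inflexible response pool scores much lower than an equally fluent pool spread across categories, which is exactly the contamination problem the CQ was proposed to address.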
The debate on the predictive validity of divergent thinking tests is still ongoing; there seem to be two camps of researchers, one supporting the predictive power of DT [59, 64] and the other opposing it [41, 58]. In an extensive review, Kaufman and his colleagues  summarized the methodological issues in studies of DT tests’ predictive validity: scores may be susceptible to intervention effects; administration procedures can affect originality and fluency scores; statistical procedures may be inadequate; score distributions often violate the statistical assumption of normality; and creative achievement in adulthood may be domain-specific while the DT tests used are almost always domain-general. Runco , with all these criticisms in mind, advocated for DT tests by saying:
In the early 1960s and 1970s creativity assessment was pretty much equal to DT tests; however, after several decades and hundreds of studies, the field should embrace a wider perspective. We now have more complex systems theories of creativity, and it would be more fruitful for the field if upcoming research focused more on developing and testing contemporary instruments.
2.2 Assessing the creative person
Autonomous, self-confident, open to new experiences, independent and original: these are some of the character traits that creative persons possess, and the assessment of the creative person deals with such traits. Measures that focus on the characteristics of the creative person are self-reports or external ratings of past behavior or personality traits, and they have been reviewed extensively in the literature . Creative personality traits are diverse and can be perceived as both positive and negative, such as perseverance, tolerance for ambiguity, risk taking, psychoticism, dominance or non-conformity. One of the leading theories of personality is the five-factor theory; the five factors are neuroticism, extraversion, openness to experience, conscientiousness and agreeableness. Openness to experience is highly associated with creativity measures such as self-reports , verbal creativity  and psychometric tests .
Researchers study the common personality characteristics and past behaviors of people who are accepted as creative, and develop instruments to measure the personality correlates of creative behavior. Numerous personality scales and attitude checklists exist, such as:
The “person” perspective, or conative factors in creativity assessment, mainly assumes that significant personal characteristics and existing creative behavior are the best predictors of future creative behavior. Feist, an influential personality researcher, for example, investigated the personality characteristics of scientists versus nonscientists, more creative versus less creative scientists, and artists versus nonartists. In general, he showed that creative people are more open to new experiences, less conventional and less conscientious, more self-confident, self-accepting, ambitious, dominant, hostile and impulsive [81, 82]. In sum, self-reported creativity has attracted considerable attention in the field because it is fast and easy to score. However, researchers willing to use these instruments should take into account the validity issues and the possibility that respondents may not be telling the truth. Self-assessments of all kinds generally correlate with each other, but the data on their correlation with performance assessments are contradictory [83, 84, 85]. Thus, citing reference : “although self-assessments have a function and purpose, they are not useful in any type of high-stakes assessment”.
2.3 Assessing the creative product
Think about the Nobel, Oscar or Grammy prizes: how are the winners designated? Does the Nobel committee require the nominees to take the TTCT or fill out creativity questionnaires, or would a taxi driver’s opinion count as expert opinion in determining the nominees for chemistry? As explained in the theories of Csikszentmihalyi and Amabile, for any idea or product to be seen as creative it should be valued by others or by recognized experts in the field [86, 87]. Measuring the creativity of a product may be the most important aspect of creativity assessment, yet it has not received as much attention as process or personality variables. Some researchers even believe that product assessment is probably the most appropriate assessment of creativity, referring to it as the “gold standard” . Researchers have developed several instruments to evaluate creative products, such as the Creative Product Semantic Scale or the Student Product Assessment Form; these instruments ask educators to rate specific features of students’ products. Above all, though, the Consensual Assessment Technique (CAT) is the most popular way of assessing products. A brief explanation of each is provided below.
The CAT has been proven reliable in several studies [58, 85, 88, 93, 94], with inter-rater reliabilities ranging from .70 to .90. The average number of judges involved in the CAT studies run by Amabile  was just over ten. Using between 5 and 10 expert judges is recommended: fewer than 5 experts may result in low inter-rater reliability, while using more than 10 (although desirable) can be expensive and difficult. Although the CAT steadily shows high reliability in various studies, using experts in creativity assessment is not without controversy. For example, Amabile states that determining the necessary level of expertise for judges is important, and it is recommended that the experts have formal training and experience in the target domain. Furthermore, researchers have reported mixed results about expert and novice ratings: Kaufman and his colleagues showed low correlations between novice and expert raters , whereas another study reported higher correlations ; in more recent work, researchers approached the expertise problem from a different perspective and argued that it should be understood as a continuum . The CAT also possesses strong face validity, yet face validity (an instrument’s capability to measure what it appears to measure) is not sufficient. For example, experts can agree that a product is not creative and still be wrong (e.g. van Gogh was not valued as a creative artist by the experts of his time). The predictive validity discussion is even more complicated: it has been shown that CAT scores do predict later CAT scores, meaning they are stable across time in the same domain. But does this mean CAT scores can predict later creative achievement? Historiometric research data supports this argument; for example, analysis of Mozart’s music pieces from his early life predicted his later creative achievement .
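One common way to estimate the inter-rater reliability figures mentioned above is to treat the judges as "items" and the rated products as "cases" and compute Cronbach's coefficient alpha. The sketch below is a minimal, dependency-free illustration of that calculation; the judges and their 1-5 ratings are hypothetical.

```python
def cronbach_alpha(ratings):
    """Inter-judge reliability for CAT-style ratings.
    `ratings[j][p]` is judge j's creativity rating of product p. Judges are
    treated as 'items' and products as 'cases'; returns coefficient alpha."""
    k = len(ratings)                      # number of judges
    n = len(ratings[0])                   # number of products rated

    def var(xs):                          # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    judge_vars = sum(var(judge) for judge in ratings)
    totals = [sum(ratings[j][p] for j in range(k)) for p in range(n)]
    return k / (k - 1) * (1 - judge_vars / var(totals))

# Three hypothetical judges rating five products on a 1-5 scale:
judges = [[1, 2, 3, 4, 5],
          [2, 2, 3, 5, 5],
          [1, 3, 3, 4, 4]]
alpha = cronbach_alpha(judges)            # high agreement yields alpha near 1
```

With judges who broadly agree on the rank order of the products, as here, alpha lands well above the .70 floor reported for CAT studies; shuffling one judge's ratings would drive it down sharply.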
2.4 Assessing the creative press
Various environmental factors contribute to creative potential and have deep effects on it. Parental practices, trauma, birth order, culture, teaching practices and group interactions may all affect creativity. Following the previous example of Mozart, we know that he was born in Salzburg to a musical family (his father was a music teacher, composer, conductor and violinist). Imagine what would have happened to the same Mozart if he had been born in a small village in the Alps as the son of a shepherd: would he have been able to develop into a musical prodigy? Although creativity is highly related to cognitive factors, it is impossible to disregard the impact of the environment.
Because environmental factors are identified as important contributors to creative potential, studies aiming to determine the presence or absence of these factors in an individual’s environment become especially important. There are instruments for assessing classroom and learning environments, such as the Classroom Activities Questionnaire (CAQ; cited in ). However, the majority of instruments for assessing environmental effects on creativity concern organizational structures, such as KEYS: Assessing the Climate for Creativity . The CAQ has not been widely applied in research studies and therefore lacks psychometric data; KEYS, on the other hand, which was designed to “assess individuals’ perceptions and the influence of those perceptions on the creativity of their work” (, p. 1157), possesses evidence of reliability and validity and is widely applied in the organizational creativity field.
Creativity has various definitions and theories and is therefore understood, and assessed, in many ways. Enhancing students’ creative thinking skills has become one of the major goals of education. Unfortunately, Kim’s comprehensive research on the TTCT is disquieting. The normative data of the TTCT from 1974, 1984, 1990, 1998, and 2008 (272,599 participants) were re-analyzed, and it was found that creative thinking scores either remained static or decreased, starting at the sixth grade . There may be many reasons behind this failure. The inability to embed creativity in classroom practices may be one; the lack of development and implementation of up-to-date creativity assessment may be another. The field should move toward using comprehensive theories as the basis of assessment, renew the norms of existing creativity tests such as the TTCT, and pay more attention to validity studies of creativity assessment instruments.
This chapter has presented a brief overview of existing tools for creativity assessment; to approach a “perfect” measure, researchers should take the strengths and weaknesses of these approaches and instruments into account (a brief overview is provided in Table 5).
| Type of assessment | Examples | Advantages | Disadvantages |
| --- | --- | --- | --- |
| Process-based assessment (e.g. divergent thinking tests) | Torrance Tests of Creative Thinking | Well researched, with years of research data available | May tap only limited aspects of creativity |
| Person-based assessment (e.g. assessment by others) | Group Inventory for Finding Creative Talent or other instruments | Creativity is rated by a teacher, peer, or parent who knows the individual | Questions about validity and reliability |
| Person-based assessment (e.g. self-assessment) | Asking someone to rate his or her own creativity | Quick, cheap, and has high face validity | People can be subjective about their own level of creativity |
| Product-based assessment (e.g. consensual assessment technique) | Having experts rate a creative product | Allows very domain-specific information about creativity | Time consuming and expensive |
Furthermore, Sternberg’s argument  that the evaluation of creativity is always local has to be kept in mind. Judging any thought or product is relative to some set of norms, and this perspective raises questions for tests like the TTCT or Unusual Uses, because these tests assume that some sort of universal creativity exists and that they measure it. Sternberg believes that creativity should be assessed locally because, like intelligence, it has culture-dependent elements, and he suggests that “we should agree that our evaluations of what usually is viewed as constituting creativity – novel, surprising, and compelling ideas or products – represent local norms” (, p. 399).
Laypeople, philosophers, artists, and creativity researchers all agree that creativity is a complex phenomenon, and we know less about its scope and measurement than we would wish. However, from a historical perspective, more research has been conducted on creativity in recent years, and the field can be said to be in its prime. Hence, upcoming efforts to understand and assess creativity have the potential to produce more reliable, valid, and comprehensive methods and theories. As discussed in this chapter, creativity assessment has its limitations, and it is recommended that future efforts focus on building a theoretical basis and providing multifaceted, multimodal assessment systems to measure creativity in order to overcome the aforementioned limitations.
Conflict of interest
The author declares no conflict of interest.