PIAAC is designed to measure adult skills across a wide range of abilities, from basic reading and numerical calculations to complex problem solving. To achieve this, PIAAC assesses literacy, numeracy, and problem solving. The tasks developed for each domain are authentic, culturally appropriate, and drawn from real-life situations expected to be important or relevant in different contexts. The content and questions within these tasks reflect the purposes of adults’ daily lives across different cultures, as well as the changing nature of complex, digital environments, even if they are not necessarily familiar to all adults in all countries.
In 2023 (Cycle 2), PIAAC continues to assess adults' skills in literacy, including reading components, and numeracy. This approach allows for the measurement of trends and changes in adult skills between the PIAAC 2012/2014 and 2017 (Cycle 1) administrations and PIAAC 2023 (Cycle 2).
Since the first cycle, the digital landscape has evolved significantly, impacting both adults' personal and professional lives. In response to these changes, the literacy and numeracy frameworks were updated with innovative elements that mirror tasks found in digital settings. For example, in literacy, certain tasks now require consulting multiple information sources, including both static and dynamic texts. Similarly, in numeracy, some tasks involve dynamic applications that utilize interactive, digitally based tools.
In addition, the PIAAC 2023 assessment introduced a new numeracy component to obtain information on adults with low numeracy skills and replaced digital problem solving with a new domain: adaptive problem solving.
While literacy and numeracy results from the two PIAAC cycles can be compared, the digital problem-solving and adaptive problem-solving domains cannot be compared due to differences in their assessment frameworks and in the skills being measured.
Another major difference between the two cycles is the mode of assessment. In PIAAC Cycle 1, literacy and numeracy were assessed in both a paper-and-pencil format and on computers, while digital problem-solving items were computer-administered and reading components were limited to paper-and-pencil. In Cycle 2, all respondents completed the entire assessment on a tablet (for more details, see How PIAAC is Administered).
For more detailed information about the skills measured in 2023 (Cycle 2) or 2012-2017 (Cycle 1), click on the tabs below.
The PIAAC literacy conceptual framework was developed by panels of experts to guide the development and selection of literacy items and to inform the interpretation of results. It defines the underlying skills that the assessment aims to measure and describes how best to assess skills in the literacy domain. The literacy framework provides an overview of:
The conceptual framework for the literacy domain remains largely unchanged from PIAAC 2012/2014 and 2017 (Cycle 1), ensuring comparability with Cycle 1 and earlier adult literacy assessments. However, the PIAAC 2023 (Cycle 2) literacy experts updated the framework to reflect the growing importance of reading in digital environments, including various text types (such as digital texts and materials) that impose different cognitive demands and pose different challenges for readers. Specifically, the new framework highlights the growing need for readers to engage effectively with the diverse range of texts they encounter online.
It is important to note that the assessment tasks and materials in PIAAC are designed to measure a broad set of foundational skills needed to successfully interact with the range of real-life tasks and materials that adults encounter in everyday life. Completing these tasks does not require specialized knowledge or more specific skills: in this sense, the skills assessed in PIAAC can be considered “foundational” or, more appropriately, “general” skills required in a very broad range of situations and domains.
1 IALS and ALL definition: Literacy is using printed and written information to function in society to achieve one's goals and to develop one's knowledge and potential.
In PIAAC Cycle 2, literacy is defined as:
"Literacy is accessing, understanding, evaluating and reflecting on written texts in order to achieve one's goals, to develop one's knowledge and potential and to participate in society."
As described in the PIAAC Cycle 2 literacy framework, the word literacy is taken in its broadest but also most literal sense, to describe the proficient use of written language artifacts such as texts and documents, regardless of the type of activity or interest considered. This characterization of literacy highlights both the universality of written language (i.e., its potential to serve an infinite number of purposes in an infinite number of domains) and the very high specificity of the core ability underlying all literate activities, that is, the ability to read written language.
Some key terms within this definition are explained below.
“accessing…”
Proficient readers are not just able to comprehend the texts they are faced with. They can also reach out to texts that are relevant to their purposes, and search passages of interest within those texts (McCrudden and Schraw, 2007; Rouet and Britt, 2011).
"understanding…"
Most definitions of literacy acknowledge that the primary goal of reading is for the reader to make sense of the contents of the text. This can range from comprehending the meaning of individual words to comprehending the dispute between two authors making opposing claims on a social-scientific issue.
"evaluating and reflecting…"
Readers need to assess whether the text is appropriate for the task at hand, determining whether it will provide the information they need. Readers also make judgments about the accuracy and reliability of both the content and the source of the message (Bråten, Strømsø, and Britt, 2009; Richter, 2015).
"on written text…"
In the context of PIAAC Cycle 2, the phrase "written text" designates pieces of discourse primarily based on written language. Written texts may include non-verbal elements such as charts or illustrations.
"in order to achieve one’s goals,"
Just as written languages were created to meet the needs of emergent civilizations, at an individual level, literacy is primarily a means of achieving one's goals. Goals relate to personal activities but also to the workplace and to interaction with others.
"to develop one’s knowledge and potential and participate in society."
Developing one's knowledge and potential highlights one of the most powerful consequences of being literate. Written texts may enable people to learn about topics of interest, but also to become skilled at doing things and to understand the rules of engagement with others.
For the complete PIAAC literacy framework, see:
In PIAAC, results are reported as averages on a 500-point scale or as proficiency levels. Proficiency refers to competence that involves “mastery” of a set of abilities along a continuum that ranges from simple to complex information-processing tasks.
This continuum has been divided into six levels of proficiency. By assigning scale values to both individual participants and assessment tasks, it is possible to see how well adults with varying literacy proficiencies performed on tasks of varying difficulty. Although individuals with low proficiency tend to perform well on tasks with difficulty values equivalent to or below their level of proficiency, they are less likely to succeed on tasks receiving higher difficulty values. This means that the more difficult the task relative to each individual’s level of proficiency, the lower the likelihood he or she will respond correctly. Similarly, the higher one’s proficiency on the literacy scale relative to difficulty of the task, the more likely they are to perform well on the task.
The following descriptions summarize the types of tasks that adults at a particular proficiency level can reliably complete successfully.
Description of PIAAC literacy discrete proficiency levels
Proficiency level and score range | Task descriptions |
---|---|
Below Level 1 0–175 points
|
Most adults at Below Level 1 are able to process meaning at the sentence level. Given a series of sentences that increase in complexity, they can tell if a sentence does or does not make sense either in terms of plausibility in the real world (i.e., sentences describing events that can vs. cannot happen), or in terms of the internal logic of the sentence (i.e., sentences that are meaningful vs. not). Most adults at this level are also able to read short, simple paragraphs and, at certain points in text, tell which word among two makes the sentence meaningful and consistent with the rest of the passage. Finally, they can access single words or numbers in very short texts in order to answer simple and explicit questions. The texts at Below Level 1 are very short and include no or just a few familiar structuring devices such as titles or paragraph headers. They do not include any distracting information nor navigation devices specific to digital texts (e.g., menus, links or tabs). Tasks Below Level 1 are simple and very explicit regarding what to do and how to do it. These tasks only require understanding at the sentence level or across two simple adjacent sentences. When the text involves more than one sentence, the task merely requires dealing with target information in the form of a single word or phrase. |
Level 1 176–225 points
|
Adults at level 1 are able to locate information on a text page, find a relevant link from a website, and identify relevant text among multiple options when the relevant information is explicitly cued. They can understand the meaning of short texts, as well as the organization of lists or multiple sections within a single page. The texts at level 1 may be continuous, noncontinuous, or mixed and pertain to printed or digital environments. They typically include a single page with up to a few hundred words and little or no distracting information. Noncontinuous texts may have a list structure (such as a web search engine results page) or include a small number of independent sections, possibly with pictorial illustrations or simple diagrams. Tasks at Level 1 involve simple questions providing some guidance as to what needs to be done and a single processing step. There is a direct, fairly obvious match between the question and target information in the text, although some tasks may require the examination of more than one piece of information. |
Level 2 226–275 points
|
At Level 2, adults are able to access and understand information in longer texts with some distracting information. They can navigate within simple multi-page digital texts to access and identify target information from various parts of the text. They can understand by paraphrasing or making inferences, based on single or adjacent pieces of information. Adults at Level 2 can consider more than one criterion or constraint in selecting or generating a response. The texts at this level can include multiple paragraphs distributed over one long or a few short pages, including simple websites. Noncontinuous texts may feature a two-dimensional table or a simple flow diagram. Access to target information may require the use of signaling or navigation devices typical of longer print or digital texts. The texts may include some distracting information. Tasks and texts at this level sometimes deal with specific, possibly unfamiliar situations. Tasks require respondents to perform indirect matches between the text and content information, sometimes based on lengthy instructions. Some task statements provide little guidance regarding how to perform the task. Task achievement often requires the test taker to either reason about one piece of information or to gather information across multiple processing cycles. |
Level 3 276–325 points
|
Adults at Level 3 are able to construct meaning across larger chunks of text or perform multi-step operations in order to identify and formulate responses. They can identify, interpret or evaluate one or more pieces of information, often employing varying levels of inferencing. They can combine various processes (accessing, understanding and evaluating) if required by the task. Adults at this level can compare and evaluate multiple pieces of information from the text(s) based on their relevance or credibility. Texts at this level are often dense or lengthy, including continuous, noncontinuous, and mixed formats. Information may be distributed across multiple pages, sometimes arising from multiple sources that provide discrepant information. Understanding rhetorical structures and text signals becomes more central to successfully completing tasks, especially when dealing with complex digital texts that require navigation. The texts may include specific, possibly unfamiliar vocabulary and argumentative structures. Competing information is often present and sometimes salient, though no more than the target information. Tasks require the respondent to identify, interpret, or evaluate one or more pieces of information, and often require varying levels of inferencing. Tasks at Level 3 also often demand that the respondent disregard irrelevant or inappropriate text content to answer accurately. The most complex tasks at this level include lengthy or complex questions requiring the identification of multiple criteria, without clear guidance regarding what has to be done. |
Level 4 326–375 points
|
At level 4, adults can read long and dense texts presented on multiple pages in order to complete tasks that involve access, understanding, evaluation and reflection about the text(s) contents and sources across multiple processing cycles. Adults at this level can infer what the task is asking based on complex or implicit statements. Successful task completion often requires the production of knowledge-based inferences. Texts and tasks at Level 4 may deal with abstract and unfamiliar situations. They often feature both lengthy contents and a large amount of distracting information, which is sometimes as prominent as the information required to complete the task. At this level, adults are able to reason based on intrinsically complex questions that share only indirect matches with the text contents, and/or require taking into consideration several pieces of information dispersed throughout the materials. Tasks may require evaluating subtle evidence-claims or persuasive discourse relationships. Conditional information is frequently present in tasks at this level and must be taken into consideration by the respondent. Response modes may involve assessing or sorting complex assertions. |
Level 5 376–500 points
|
Above level 4, the assessment provides no direct information on what adults can do. This is mostly because feasibility concerns (especially with respect to testing time) precluded the inclusion of highly complex tasks involving complex interrelated goal structures, very long or complex document sets, or advanced access devices such as intact catalogs, deep menu structures or search engines. These tasks, however, form part of the construct of literacy in today's world, and future assessments aiming at a better coverage of the upper end of the proficiency scale may seek to include testing units tapping on literacy skills above Level 4. From the characteristics of the most difficult tasks at Level 4, some suggestions regarding what constitutes proficiency above Level 4 may be offered. Adults above Level 4 may be able to reason about the task itself, setting up reading goals based on complex and implicit requests. They can presumably search for and integrate information across multiple, dense texts containing distracting information in prominent positions. They are able to construct syntheses of similar and contrasting ideas or points of view; or evaluate evidence-based arguments and the reliability of unfamiliar information sources. Tasks above Level 4 may also require the application and evaluation of abstract ideas and relationships. Evaluating reliability of evidentiary sources and selecting not just topically relevant but also trustworthy information may be key to achievement. |
NOTE: Every test item is located at a point on the proficiency scale based on its relative difficulty. The easiest items are those located at a point within the score range below level 1 (i.e., 175 or less); the most difficult items are those located at a point at or above the threshold for level 5 (i.e., 376 points). An individual with a proficiency score that matches a test item’s scale score value has a 67 percent chance of successfully completing that test item. This individual will also be able to complete more difficult items (those with higher values on the scale) with a lower probability of success and easier items (those with lower values on the scale) with a greater chance of success.
In general, this means that tasks located at a particular proficiency level can be successfully completed by the “average” person at that level approximately two-thirds of the time. However, individuals scoring at the bottom of the level would successfully complete tasks at that level only about half the time while individuals scoring at the top of the level would successfully complete tasks at the level about 80 percent of the time. Information about the procedures used to set the achievement levels is available in the OECD PIAAC Technical Report.
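The relationship between proficiency scores, item difficulties, and success probabilities described in the note can be illustrated with a small amount of code. The sketch below is a simplified illustration, not the operational PIAAC scaling model: it maps a 0–500 literacy score to the discrete levels defined in the table above, and it uses a logistic response function whose slope is an assumption chosen so that the 67, 50, and 80 percent figures quoted above are reproduced.

```python
import math

# Lower cut scores of the PIAAC literacy proficiency levels (from the table above).
CUT_SCORES = [
    ("Below Level 1", 0),
    ("Level 1", 176),
    ("Level 2", 226),
    ("Level 3", 276),
    ("Level 4", 326),
    ("Level 5", 376),
]

def proficiency_level(score):
    """Map a 0-500 scale score to its discrete proficiency level."""
    level = CUT_SCORES[0][0]
    for name, lower in CUT_SCORES:
        if score >= lower:
            level = name
    return level

# Illustrative response-probability curve (an assumption, not the operational model):
# item difficulty is expressed on the same 500-point scale using the convention that
# P(success) = 0.67 when proficiency equals item difficulty.  The slope is chosen so
# that a 25-point gap (half a proficiency level) shifts the probability to roughly
# 0.50 or 0.80, matching the "half the time" / "80 percent" figures in the note.
RP67_LOGIT = math.log(0.67 / 0.33)   # about 0.71
SLOPE = RP67_LOGIT / 25.0            # per scale-score point

def p_success(proficiency, item_difficulty):
    """Probability of completing an item successfully under the illustrative model."""
    logit = SLOPE * (proficiency - item_difficulty) + RP67_LOGIT
    return 1.0 / (1.0 + math.exp(-logit))

if __name__ == "__main__":
    item = 250                             # an item of mid-Level 2 difficulty
    print(proficiency_level(250))          # Level 2
    print(round(p_success(250, item), 2))  # 0.67: proficiency matches item difficulty
    print(round(p_success(226, item), 2))  # ~0.51: respondent near the bottom of Level 2
    print(round(p_success(276, item), 2))  # ~0.81: respondent near the top of Level 2
```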
SOURCE: OECD 2023 Survey of Adult Skills International Report
In Cycle 2, the assessment is delivered on a tablet device.2 The assessment interface has been designed to ensure that most, if not all, respondents are able to take the assessment on the tablet even if they have limited experience with such devices.
The PIAAC literacy assessment tasks are organized along a set of dimensions that ensure broad coverage and a precise description of what people can do at each level of proficiency.
The following tables show the distribution of the 80 literacy items selected for the final item set in PIAAC across different dimensions.
Distribution of PIAAC Cycle 2 literacy items by cognitive strategy
Cognitive strategy | Number | Percent |
---|---|---|
Access | 30 | 38% |
Understand | 35 | 44% |
Evaluate and reflect | 15 | 19% |
Total | 80 | 100% |
Source: OECD 2023 PIAAC Reader’s Companion
Distribution of PIAAC Cycle 2 literacy items by text source
Text source | Number | Percent |
---|---|---|
Single | 51 | 64% |
Multiple | 29 | 36% |
Total | 80 | 100% |
Source: OECD 2023 PIAAC Reader’s Companion
Distribution of PIAAC Cycle 2 literacy items by text format
Text format | Number | Percent |
---|---|---|
Continuous | 40 | 50% |
Noncontinuous | 25 | 31% |
Mixed | 15 | 19% |
Total | 80 | 100% |
Source: OECD 2023 PIAAC Reader’s Companion
Distribution of PIAAC Cycle 2 literacy items by context
Context | Number | Percent |
---|---|---|
Work | 9 | 11% |
Personal | 33 | 41% |
Community | 28 | 35% |
Education and training | 10 | 13% |
Total | 80 | 100% |
Source: OECD 2023 PIAAC Reader’s Companion
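A note on the percentage columns in these tables: they appear to be simple rounded shares of the 80 items, which is why the rounded figures in the cognitive strategy table sum to 101 percent even though the total row shows 100 percent. A quick check (item counts taken from the table above):

```python
counts = {"Access": 30, "Understand": 35, "Evaluate and reflect": 15}
total = sum(counts.values())                                   # 80
percents = {k: round(100 * v / total) for k, v in counts.items()}
print(percents)                # {'Access': 38, 'Understand': 44, 'Evaluate and reflect': 19}
print(sum(percents.values()))  # 101 -- an artifact of rounding, not an error
```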
2 In the first cycle of the study, the assessment could be completed on a laptop computer or in paper-and-pencil format. The computer-based assessment (CBA) format constituted the default format, with the paper-based assessment (PBA) option being made available to those respondents who had little or no familiarity with computers, had poor information communications technology (ICT) skills, or did not wish to take the assessment on computer. In the second cycle, all countries administered the assessment on a tablet.
Examples of literacy items included in Cycle 2 are presented as screenshots of the displays that appear on the tablet used to deliver the assessment. These items were not administered in either the Field Test or Main Study; while no PIAAC proficiency levels are available for these items, estimated difficulty levels are provided. To view and interact with the full set of sample items, see the PIAAC released items.
This example, the first of three items in this unit, represents a low difficulty level.
SOURCE: OECD (2024). PIAAC Cycle 2 Released Cognitive Items.
The second item represents a moderate to high difficulty level.
SOURCE: OECD (2024). PIAAC Cycle 2 Released Cognitive Items.
This item represents a moderate to high difficulty level.
SOURCE: OECD (2024). PIAAC Cycle 2 Released Cognitive Items.
This item represents a low difficulty level.
SOURCE: OECD (2024). PIAAC Cycle 2 Released Cognitive Items.
This item represents a moderate difficulty level.
SOURCE: OECD (2024). PIAAC Cycle 2 Released Cognitive Items.
References
Bråten, I., Strømsø, H., and Britt, M. (2009). Trust Matters: Examining the Role of Source Evaluation in Students’ Construction of Meaning Within and Across Multiple Texts. Reading Research Quarterly, 44(1), 6-28. http://dx.doi.org/10.1598/rrq.44.1.1
McCrudden, M., and Schraw, G. (2007). Relevance and Goal-Focusing in Text Processing. Educational Psychology Review, 19(2), 113-139. http://dx.doi.org/10.1007/s10648-006-9010-7
Richter, T. (2015). Validation and Comprehension of Text Information: Two Sides of the Same Coin. Discourse Processes, 52(5-6), 337-355. http://dx.doi.org/10.1080/0163853x.2015.1025665
Rouet, J., and Britt, M. (2011). Relevance Processes in Multiple Document Comprehension. In M. McCrudden, J. Magliano, and G. Schraw (Eds.), Text Relevance and Learning from Text. Greenwich, CT: Information Age Publishing.
The development of the numeracy framework for PIAAC 2023 (Cycle 2) was based on a review conducted by experts charged with identifying changes in the field over the past decade and specifying ways to revise and update the Cycle 1 framework (Tout et al., 2017). The review suggested that the updated framework for Cycle 2 should:
Specifically, the numeracy framework provides an overview of:
1 Note that the PIAAC Cycle 2 numeracy framework was designed to ensure comparability with PIAAC Cycle 1 and earlier assessments.
For PIAAC Cycle 2, numeracy was defined as follows:
"Numeracy is accessing, using and reasoning critically with mathematical content, information and ideas represented in multiple ways in order to engage in and manage the mathematical demands of a range of situations in adult life."
As described in the PIAAC Cycle 2 numeracy framework, numeracy is a broad, multifaceted construct that refers to a complex competency. Therefore, the definition of numeracy given above should not be considered in isolation; it should be coupled with a more detailed definition of numerate behavior and further specification of its dimensions. This pairing is essential for operationalizing the construct of numeracy in an actual assessment, thereby contributing to the assessment’s validity and interpretability and further broadening the understanding of key terms in the definition itself. Consequently, a definition of numerate behavior similar to the one used for the ALL survey, but more concise, has been adopted for PIAAC:
"Numerate Behavior involves managing a situation or solving a problem in a real context, by responding to mathematical content/information/ideas represented in multiple ways..."
The PIAAC numeracy framework references several research studies (Maguire and Smith, 2016; Roth, 2012; Weeks et al., 2013) and notes that a key aspect of numeracy is the ability to recognize when mathematics is present in real-world situations and to act upon it. This skill, identifying how mathematics applies to everyday life, plays a critical role in being numerate, helping individuals connect the math they learned in school with practical, real-world applications. This concept is also highlighted in studies on workplace numeracy, such as calculating medication dosages, where recognizing embedded math is crucial.
Box 1: Numerate behavior and practices: Key dimensions and their components
Numeracy is an individual’s capacity to ...
1. access, use and reason critically
2. with mathematical content
3. represented in multiple ways
4. in order to engage in and manage the mathematical demands of a range of situations in adult life:
An individual’s numerate capacity is founded on the activation of several enabling factors and processes:
Four core dimensions of numeracy are defined for Cycle 2:
Numeracy as described in PIAAC comprises both cognitive elements (i.e., various knowledge bases and skills) and noncognitive or semicognitive elements (i.e., attitudes, beliefs, habits of mind, and other dispositions) that together help to shape a person's numerate behavior and practices.
It should be noted that the bottom section of Box 1 lists several enabling factors and processes whose activation also underlies numerate behavior. Most of these enabling factors and processes appeared in the ALL conceptual framework and in PIAAC Cycle 1. Overall, the definition of numeracy and the description of numerate behavior, along with the details in Box 1 and the further explanations of the core dimensions within the complete numeracy framework (see below), provide the structure and roadmap for the development of the numeracy assessment as part of PIAAC Cycle 2.
For the complete PIAAC numeracy framework, see:
In PIAAC, results are reported as averages on a 500-point scale or as proficiency levels. Proficiency refers to competence that involves “mastery” of a set of abilities along a continuum that ranges from simple to complex information-processing tasks.
This continuum has been divided into six levels of proficiency. By assigning scale values to both individual participants and assessment tasks, it is possible to see how well adults with varying numeracy proficiencies performed on tasks of varying difficulty. Although individuals with low proficiency tend to perform well on tasks with difficulty values equivalent to or below their level of proficiency, they are less likely to succeed on tasks receiving higher difficulty values. This means that the more difficult the task relative to each individual’s level of proficiency, the lower the likelihood he or she will respond correctly. Similarly, the higher one’s proficiency on the numeracy scale relative to difficulty of the task, the more likely they are to perform well on the task.
The following descriptions summarize the types of tasks that adults at a particular proficiency level can reliably complete successfully.
Description of PIAAC numeracy discrete proficiency levels
Proficiency level and score range | Task descriptions |
---|---|
Below Level 1 0–175 points
|
Adults at Below Level 1 demonstrate elementary whole number sense and can access and use mathematical knowledge to solve single-step problems, where the information is presented using images or simple structured information set in authentic, commonplace contexts with little or no text or distracting information. The mathematical content is non-formal and explicit.
Adults at this level can:
|
Level 1 176–225 points
|
Adults at Level 1 demonstrate number sense involving whole numbers, decimals, and common fractions and percentages. They can access, act on and use mathematical information located in slightly more complex representations set in authentic contexts where the mathematical content is explicit and uses informal mathematical terminology with little text and minimal distracting information. Adults can devise simple strategies using one or two steps for determining the solution.
Adults at this level can:
|
Level 2 226–275 points
|
At Level 2, adults can access, act on and use mathematical information, and evaluate simple claims, for tasks set in a variety of authentic contexts. They are able to interpret and use information presented in slightly more complex forms (e.g., doughnut charts, stacked bar graphs, or linear scales) that includes more formal terminology and more distracting information. Adults at this level can carry out multi-step mathematical processes.
Adults at this level can:
|
Level 3 276–325 points
|
At Level 3, adults can access, act on, use, reflect on and evaluate authentic mathematical contexts. This requires making judgements about how to use the given information when developing a solution to a problem. The mathematical information may be less explicit, embedded in contexts that are not always commonplace, and use representations and terminology that are more formal and involve greater complexity. Adults at this level can complete tasks where mathematical processes require the application of two or more steps and where multiple conditions need to be satisfied. Tasks may also require the use, integration, or manipulation of multiple data sources in order to undertake the mathematical analyses necessary for the specific task.
Adults at this level can:
|
Level 4 326–375 points
|
At Level 4, adults can use and apply a range of problem-solving strategies to access, analyze, reason, and critically reflect on and evaluate a broad range of mathematical information that is often presented in unfamiliar contexts. Such information may not be presented in an explicit manner. Adults at this level can devise and implement strategies to solve multi-step problems. This may involve reasoning about how to integrate concepts from different mathematical content areas or applying more complex and formal mathematical procedures.
Adults at this level can:
|
Level 5 376–500 points
|
At Level 5, adults can use and apply problem-solving strategies to analyze, evaluate, reason and critically reflect on complex and formal mathematical information, including dynamic representations. They demonstrate an understanding of statistical concepts and can critically reflect on whether a data set can be used to support or refute a claim. Adults at this level can determine the most appropriate graphical representation for relational data sets. |
NOTE: Every test item is located at a point on the proficiency scale based on its relative difficulty. The easiest items are those located at a point within the score range below level 1 (i.e., 175 or less); the most difficult items are those located at a point at or above the threshold for level 5 (i.e., 376 points). An individual with a proficiency score that matches a test item’s scale score value has a 67 percent chance of successfully completing that test item. This individual will also be able to complete more difficult items (those with higher values on the scale) with a lower probability of success and easier items (those with lower values on the scale) with a greater chance of success.
In general, this means that tasks located at a particular proficiency level can be successfully completed by the “average” person at that level approximately two-thirds of the time. However, individuals scoring at the bottom of the level would successfully complete tasks at that level only about half the time while individuals scoring at the top of the level would successfully complete tasks at that level about 80 percent of the time. Information about the procedures used to set the achievement levels is available in the OECD PIAAC Technical Report.
SOURCE: OECD 2023 Survey of Adult Skills International Report
In Cycle 2, the assessment is delivered on a tablet device.2 The assessment interface has been designed to ensure that most, if not all, respondents are able to take the assessment on the tablet even if they have limited experience with such devices.
The PIAAC numeracy assessment tasks are organized along a set of dimensions that ensure broad coverage and a precise description of what people can do at each level of proficiency.
The following tables show the distribution of the 80 numeracy items selected for the final item set in PIAAC.
Distribution of PIAAC Cycle 2 numeracy items by cognitive strategy
Cognitive strategy | Number | Percent |
---|---|---|
Access and assess | 23 | 29% |
Act on and use | 38 | 48% |
Evaluate, critically reflect and make judgements | 19 | 24% |
Total | 80 | 100% |
Source: OECD 2023 PIAAC Reader’s Companion
Distribution of PIAAC Cycle 2 numeracy items by representation
Representation | Number | Percent |
---|---|---|
Text or symbols | 15 | 19% |
Images of objects | 11 | 14% |
Structured information | 37 | 46% |
Dynamic applications | 17 | 21% |
Total | 80 | 100% |
Source: OECD 2023 PIAAC Reader’s Companion
Distribution of PIAAC Cycle 2 numeracy items by mathematical content area
Mathematical content area | Number | Percent |
---|---|---|
Quantity and number | 19 | 24% |
Space and shape | 16 | 20% |
Change and relationships | 17 | 21% |
Data and chance | 28 | 35% |
Total | 80 | 100% |
Source: OECD 2023 PIAAC Reader’s Companion
Distribution of PIAAC Cycle 2 numeracy items by context
Context | Number | Percent |
---|---|---|
Work | 25 | 31% |
Personal | 26 | 33% |
Social/community | 29 | 36% |
Total | 80 | 100% |
Source: OECD 2023 PIAAC Reader’s Companion
2 In the first cycle of the study, the assessment could be completed on a laptop computer or in paper-and-pencil format. The computer-based assessment (CBA) format constituted the default format, with the paper-based assessment (PBA) option being made available to those respondents who had little or no familiarity with computers, had poor information communications technology (ICT) skills, or did not wish to take the assessment on computer. In the second cycle, all countries administered the assessment on a tablet.
The first item is a complex multiple-choice item, scaled at Level 2 on the PIAAC Numeracy proficiency scale.
SOURCE: OECD (2024). PIAAC Cycle 2 Released Cognitive Items.
The second sample item is a complex multiple-choice item of moderate difficulty, classified at Level 3 on the PIAAC Numeracy proficiency scale.
SOURCE: OECD (2024). PIAAC Cycle 2 Released Cognitive Items.
The third sample item is a numeric entry item, scaled at Level 2 on the PIAAC Numeracy proficiency scale.
SOURCE: OECD (2024). PIAAC Cycle 2 Released Cognitive Items.
Adaptive problem solving (APS) is a new domain developed by international experts and introduced in PIAAC 2023 (Cycle 2). The APS conceptual framework was specifically designed to account for the digital environments that adults now routinely navigate, highlighting respondents’ ability to react to unforeseen changes and emerging information. In today’s fast-changing world, the ability to adapt quickly, continue learning, and apply knowledge is essential. With constant access to diverse information, the ability to adjust to unexpected changes has become even more crucial.
Specifically, the PIAAC Cycle 2 adaptive problem solving framework provides an overview of:
APS was introduced to replace the assessment of problem solving in technology-rich environments (PS-TRE) administered in Cycle 1 of PIAAC, although it is not linked to it. While the PS-TRE domain focused on evaluating individuals' problem-solving skills in specific, nonroutine interactions with technology-rich environments, the APS assessment is based on a broader notion of general problem-solving applicable across various information environments. APS is not limited to digitally embedded problems, although digital aspects remain significant.
For PIAAC Cycle 2, adaptive problem solving was defined as follows:
“Adaptive problem solving involves the capacity to achieve one’s goals in a dynamic situation, in which a method for solution is not immediately available. It requires engaging in cognitive and metacognitive processes to define the problem, search for information, and apply a solution in a variety of information environments and contexts.”
“Adaptive problem solving…”
As described in the PIAAC Cycle 2 adaptive problem solving framework, the term “adaptive” emphasizes the flexible nature of problem solving, regardless of the environment or context in which it occurs. This highlights that problem solving is a process that takes place in complex environments and is not a static sequence of preset steps. “Problem solving” was chosen as a core term to focus on situations that require nonroutine solutions (as opposed to routine tasks; see below), independent of any specific content domain.
Some key terms within this definition are explained below.
“…involves the capacity to achieve one’s goals in a dynamic situation…”
The broad term “capacity” in APS refers to a set of skills, primarily cognitive and metacognitive, that are assessed directly. Motivation to tackle problems and manage unforeseen changes is also essential, though it is measured indirectly. APS is a goal-driven activity that places problem solvers in dynamic situations, requiring them to adapt to evolving conditions.
“…in which a method for solution is not immediately available.”
A key aspect of problem solving is that the solution path is not immediately clear, requiring the problem solver to work through a process to reach the goal. This differentiates problem solving from routine tasks, in which solutions are usually more readily apparent.
"…it requires engaging in cognitive and metacognitive processes…"
Both cognitive and metacognitive skills are essential in APS. Cognitive skills involve organizing and integrating information into a mental model or evaluating whether operators are relevant for reaching the desired goal state. Metacognitive skills, such as setting goals and reflecting on progress, are equally important. While the role of metacognition has been acknowledged in previous assessment frameworks, it has often not been explicitly targeted; instead, it has been considered an implicit part of the overall assessment.
"…to define the problem, search for information, and apply a solution…"
The APS framework defines three broad problem-solving stages that are logically ordered: first, defining the problem; second, searching for information; and, finally, applying a solution. However, this is a schematic description; in practice, problem-solving activity switches between the stages and may even employ them simultaneously.
"…in a variety of information environments and contexts."
This final part of the definition stresses that in information-rich environments, in which virtually all of today’s problems are embedded, the different sources from which the information originates and the different contexts are highly relevant.
For the complete PIAAC adaptive problem solving framework, see:
In PIAAC, results are reported as averages on a 500-point scale or as proficiency levels. Proficiency refers to competence that involves “mastery” of a set of abilities along a continuum that ranges from simple to complex information-processing tasks.
This continuum has been divided into five levels of proficiency. By assigning scale values to both individual participants and assessment tasks, it is possible to see how well adults with varying adaptive problem solving proficiencies performed on tasks of varying difficulty. Although individuals with low proficiency tend to perform well on tasks with difficulty values equivalent to or below their level of proficiency, they are less likely to succeed on tasks receiving higher difficulty values. This means that the more difficult the task relative to each individual’s level of proficiency, the lower the likelihood he or she will respond correctly. Similarly, the higher one’s proficiency on the adaptive problem solving scale relative to difficulty of the task, the more likely they are to perform well on the task.
In adaptive problem solving, a range of difficulty levels is considered. For example, a set of “static” tasks that lack dynamic features requiring adaptive strategies is included to measure basic problem-solving abilities in individuals with more limited skills.
The following descriptions summarize the types of tasks that adults at a particular proficiency level can reliably complete successfully.
Description of PIAAC adaptive problem solving discrete proficiency levels
Proficiency level and score range | Task descriptions |
---|---|
Below Level 1 0–175 points
|
Adults at this level understand very simple static problems situated within a clearly structured environment. Problems contain no invisible elements, no irrelevant information that might distract from the core of the problem, and typically only require a single step solution.
Adults at this proficiency level are able to engage in basic cognitive processes required to solve problems if explicit support is given and if they are prompted to do so. |
Level 1 176–225 points
|
Adults at this level are able to understand simple problems and develop and implement solutions to solve them. Problems contain a limited number of elements and little to no irrelevant information. Solutions at this level are simple and consist of a limited number of steps. Problems are embedded in a context that includes one or two sources of information and presents a single, explicitly defined goal.
Adults at Level 1 engage in the following cognitive processes:
|
Level 2 226–275 points
|
Adults at this level can identify and apply solutions that consist of several steps in problems that require considering one target variable to judge whether the problem has been solved. In dynamic problems that exhibit change, adults at this level can identify relevant information if they are prompted to attend to specific aspects of the change or if changes are transparent, occur only one at a time, relate to a single problem feature, and are easily accessible. Problems at this level are presented in well-structured environments and contain only a few information elements with direct relevance to the problem. Minor impasses may be introduced, but these can be resolved easily by adjusting the initial problem-solving procedure.
Adults at Level 2 engage in the following cognitive processes:
Adults at this level engage in the following metacognitive processes:
|
Level 3 276–325 points
|
Adults at this level understand problems that are either more complex static problems or problems that have an average to high level of dynamics. They can solve problems with multiple constraints or problems that require the attainment of several goals in parallel. In problems that change and require adaptivity, adults deal with frequent and, to some extent, continuous changes. They discriminate between changes that are relevant and those that are less relevant or unrelated to the problem. Adults at this level can identify and apply multi-step solutions that integrate several important variables simultaneously and consider the impact of several problem elements on each other. In dynamically changing problems, they predict future developments in the problem situation based on information collected from past developments. They adapt their behavior according to the predicted change. Adults at Level 3 engage in the following cognitive processes:
Adults at this level engage in the following metacognitive processes:
|
Level 4 326–500 points
|
Adults at this level are able to define the nature of problems in ill-structured and information-rich contexts. They integrate multiple sources of information and their interactions, identify and disregard irrelevant information, and formulate relevant cues. Adults identify and apply multi-step solutions towards one or more complex goals. They adapt the problem-solving process to changes even if these changes are not obvious, occur unexpectedly, or require a major reevaluation of the problem. Adults are able to distinguish between relevant and irrelevant changes, predict future developments of the problem situation, and consider multiple criteria simultaneously to judge whether the solution process is likely to lead to success. Adults at Level 4 engage in the following cognitive processes:
Adults at this level engage in the following metacognitive processes:
|
NOTE: Every test item is located at a point on the proficiency scale based on its relative difficulty. The easiest items are those located at a point within the score range below level 1 (i.e., 175 or less); the most difficult items are those located at a point at or above the threshold for level 4 (i.e., 326 points), the highest adaptive problem solving proficiency level. An individual with a proficiency score that matches a test item’s scale score value has a 67 percent chance of successfully completing that test item. This individual will also be able to complete more difficult items (those with higher values on the scale) with a lower probability of success and easier items (those with lower values on the scale) with a greater chance of success.
In general, this means that tasks located at a particular proficiency level can be successfully completed by the “average” person at that level approximately two-thirds of the time. However, individuals scoring at the bottom of the level would successfully complete tasks at that level only about half the time while individuals scoring at the top of the level would successfully complete tasks at that level about 80 percent of the time. Information about the procedures used to set the achievement levels is available in the OECD PIAAC Technical Report.
SOURCE: OECD 2023 Survey of Adult Skills International Report
In Cycle 2, the assessment is delivered on a tablet device.1 The assessment interface has been designed to ensure that most, if not all, respondents are able to take the assessment on the tablet even if they have limited experience with such devices.
The PIAAC adaptive problem solving assessment tasks are organized along a set of dimensions that ensure broad coverage and a precise description of what people can do at each level of proficiency.
The following tables show the distribution of the 65 adaptive problem-solving items selected for the final item set in PIAAC.
Distribution of PIAAC Cycle 2 APS items by information environment
Information environment | Number | Percent |
---|---|---|
Digital | 26 | 40% |
Physical | 24 | 37% |
Social | 15 | 23% |
Total | 65 | 100% |
Source: OECD 2023 PIAAC Reader’s Companion
Distribution of PIAAC Cycle 2 APS items by cognitive strategy
Cognitive strategy | Number | Percent |
---|---|---|
Define the problem | 19 | 29% |
Search for solution | 33 | 51% |
Apply solution | 13 | 20% |
Total | 65 | 100% |
Source: OECD 2023 PIAAC Reader’s Companion
Distribution of PIAAC Cycle 2 APS items by metacognitive strategy
Metacognitive strategy | Number | Percent |
---|---|---|
Define the problem | 23 | 40% |
Search for solution | 22 | 39% |
Apply solution | 12 | 21% |
Total | 57 | 100% |
Note: Static items with no dynamic features do not require the application of metacognitive strategies.
Source: OECD 2023 PIAAC Reader’s Companion
Distribution of PIAAC Cycle 2 APS items by context
Context | Number | Percent |
---|---|---|
Work | 26 | 40% |
Personal | 27 | 42% |
Social/community | 12 | 18% |
Total | 65 | 100% |
Source: OECD 2023 PIAAC Reader’s Companion
1 In the first cycle of the study, the assessment could be completed on a laptop computer or in paper-and-pencil format. The computer-based assessment (CBA) format constituted the default format, with the paper-based assessment (PBA) option being made available to those respondents who had little or no familiarity with computers, had poor information communications technology (ICT) skills, or did not wish to take the assessment on computer. In the second cycle, all countries administered the assessment on a tablet.
Examples of adaptive problem-solving items included in Cycle 2 are presented as screenshots of the displays that appear on the tablet used to deliver the assessment. These items were not administered in either the Field Test or Main Study; while no PIAAC proficiency levels are available for these items, estimated difficulty levels are provided. To view and interact with the full set of released sample items, see the PIAAC released items.
SOURCE: OECD (2024). PIAAC Cycle 2 Released Cognitive Items.
Unit Name – Item # | Best Route – Item 1 |
Cognitive Process | Searching for solution: Searching for operators in the problem environment |
Metacognitive Process | Searching for solution: Evaluating operators/plans |
Problem Context | Personal |
Information Environment | Physical resources |
Item Format | Tap on stimulus |
Answers | Taps on the following locations (in this order): |
Estimated Difficulty | Low to moderate |
SOURCE: OECD (2024). PIAAC Cycle 2 Released Cognitive Items.
Unit Name – Item # | Best Route – Item 2 |
Cognitive Process | Searching for solution: Searching for operators in the problem environment |
Metacognitive Process | Applying a solution: Monitoring/regulating progress |
Problem Context | Personal |
Information Environment | Physical resources |
Item Format | Tap on stimulus |
Answers | Taps on the following locations (in this order): |
Estimated Difficulty | Low to moderate |
The PIAAC reading components framework, developed by international experts, is designed to assess adults with the lowest levels of literacy proficiency. Specifically, it examines whether they have the foundational skills necessary to develop the higher literacy and numeracy abilities required for functioning in society. PIAAC’s reading components assess basic elements of reading that are comparable across the range of languages in the participating countries.
The two basic elements of reading components
To provide more information regarding the skills of low-skilled readers, an assessment of reading component skills was introduced in PIAAC Cycle 1 (Sabatini and Bruce 2009). This covered three skills: print vocabulary, sentence processing, and passage fluency. Print vocabulary assessed basic vocabulary knowledge, sentence processing evaluated the ability to understand the semantic logic of simple sentences, and passage fluency assessed the capacity to understand passages of text. Reading components continue to be assessed in PIAAC Cycle 2 with some modifications. Only two skills (sentence processing and passage fluency) are assessed.1
For the complete PIAAC Cycle 2 framework, see:
1 Performance on the reading components tasks will also be integrated as part of the literacy proficiency scale in Cycle 2, adding precision to its lower end. Performance on the reading components assessment was reported separately from performance in literacy in PIAAC Cycle 1.
The reading components assessment includes two sets of tasks, both of which were administered in the first cycle of PIAAC. The first set focuses on the ability to process meaning at the sentence level. Respondents are shown a series of sentences, which increase in complexity, and asked to identify if the sentence does or does not make sense in terms of properties of the real world or the internal logic of the sentence. The second set of tasks focuses on passage comprehension. For these tasks, respondents are asked to read passages where, at certain points, they must select a word from two provided alternatives so that the text makes sense (see sample tasks in OECD [2019]).
Because PIAAC Cycle 2 is administered on tablets, it is possible to precisely record both accuracy and response times for the component tasks. The accuracy data in the sentence verification and passage comprehension tasks will serve as indicators of the mastery of basic reading comprehension processes. They will be included in the scaling of the items in the PIAAC literacy assessment, increasing measurement precision in the lower range of the scale. The response times will serve as an indicator of fluency in basic reading processes, allowing researchers to explore its potential contribution to the mastery of the more complex literacy tasks in the PIAAC literacy assessment.
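As a concrete illustration of how such per-item records might be summarized, the sketch below computes an accuracy score (the indicator used for scaling) and a simple response-time-based fluency indicator from one respondent's component-task responses. The record layout and field names are hypothetical and are not the PIAAC data format.

```python
from dataclasses import dataclass
from statistics import median
from typing import List

@dataclass
class ComponentResponse:
    """One reading-components item response (hypothetical layout)."""
    task: str               # "sentence_processing" or "passage_comprehension"
    correct: bool           # whether the selected option was the right one
    response_time_ms: int   # time from item display to the respondent's tap

def summarize(responses: List[ComponentResponse]) -> dict:
    """Accuracy feeds the literacy scaling; response time indicates fluency."""
    n = len(responses)
    accuracy = sum(r.correct for r in responses) / n
    fluency_ms = median(r.response_time_ms for r in responses)
    return {"items": n, "accuracy": accuracy, "median_rt_ms": fluency_ms}

# Example: three sentence-processing items answered by one respondent.
responses = [
    ComponentResponse("sentence_processing", True, 3200),
    ComponentResponse("sentence_processing", True, 2800),
    ComponentResponse("sentence_processing", False, 5100),
]
print(summarize(responses))  # {'items': 3, 'accuracy': 0.666..., 'median_rt_ms': 3200}
```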
Examples of reading components items included in Cycle 2 are presented as screenshots of the displays that appear on the tablet used to deliver the assessment. To view and interact with the full set of released sample items, see the PIAAC released items.
A single sentence is displayed on the screen and the respondent is asked to indicate if the sentence makes sense. As soon as the respondent taps on “YES” or “NO,” the next sentence displays on screen.
SOURCE: OECD (2024). PIAAC Cycle 2 Released Cognitive Items.
Respondents are asked to read a short article, which builds sentence by sentence on the screen. Most sentences include two underlined words, and respondents are asked to select the one word that best completes the sentence.
SOURCE: OECD (2024). PIAAC Cycle 2 Released Cognitive Items.
Organization for Economic Cooperation and Development (OECD). (2019). The Survey of Adult Skills: Reader’s Companion, Third Edition (OECD Skills Studies). Paris: OECD Publishing. https://dx.doi.org/10.1787/f70238c7-en.
Sabatini, J., and Bruce, J. (2009). PIAAC Reading Component: A Conceptual Framework (OECD Education Working Papers, No. 33). Paris: OECD Publishing. http://dx.doi.org/10.1787/220367414132
The PIAAC numeracy components framework, developed by international experts, is designed to parallel the existing reading components assessment and provide insights into the skills and knowledge of the many adults with low levels of numeracy.
The numeracy components assessment, a new element in Cycle 2 of PIAAC, is the first of its kind in the context of large-scale international adult surveys. Like reading components, numeracy components represent basic numeracy skills that serve as prerequisites for developing the more advanced skills measured in the main numeracy assessment. Including these components allows for a more accurate measurement of skills at the lowest end of the distribution.
The PIAAC Cycle 2 numeracy component skills focus on number sense. Number sense relates to the understanding of quantities and how numbers represent quantities. The numeracy component items ask participants to estimate quantities from real-life pictures and to estimate the relative magnitude of several numerical representations of quantities.
The numeracy component skills assessment includes two types of fluency-based measures, each focusing on different aspects of number sense:
How Many: where respondents are asked to look at an image and identify how many items are shown.
Which is Biggest: where respondents view four numbers and are asked to identify the biggest one.
For the complete PIAAC Numeracy Components framework, see:
Given constraints in skill level, reading demands, time, and task format, the numeracy expert group opted for a concise set of number sense items that would anchor the assessment of relevant numeracy components. These items prompt participants to estimate quantities from real-life pictures and to compare the relative sizes of different numerical quantities. The tasks were designed so that respondents can quickly view each question, select an answer, and move immediately to the next item without needing to read additional text.
The content is limited to a fundamental perspective on number sense and more specifically to:
Examples of numeracy components items included in Cycle 2 are presented as screenshots of the displays that appear on the tablet used to deliver the assessment. To view and interact with the full set of released sample items, see the PIAAC released items.
How many?
For the set of “How Many?” items, respondents are shown a screen with an image of a set of objects and are asked to tap on a number to indicate how many items are shown. As soon as a number is selected, the next screen displays. The items vary in terms of the number of objects shown and the format in which they are displayed (e.g., presented in an organized array, grouped, or in a random visual display).
SOURCE: OECD (2024). PIAAC Cycle 2 Released Cognitive Items.
Which is biggest?
For this set of items, respondents are shown a group of four numbers and asked to tap on the number that is biggest. As is the case with the “How Many?” items, once a selection is made, the next screen displays.
SOURCE: OECD (2024). PIAAC Cycle 2 Released Cognitive Items.
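To make the fluency-oriented item format concrete, here is a minimal console sketch of a "Which is biggest?" task: four numbers are displayed, the respondent picks one, and both correctness and elapsed time are recorded before moving immediately to the next item. The number sets and the keyboard interaction are illustrative assumptions; the actual items are delivered on a tablet with a tap interface.

```python
import time

# Illustrative "Which is biggest?" number sets (values are made up).
ITEMS = [
    [27, 301, 86, 154],
    [0.5, 0.09, 0.41, 0.38],
    [1.25, 0.9, 1.05, 1.2],
]

def administer(numbers):
    """Show four numbers, record which one was chosen and how long it took."""
    start = time.monotonic()
    print("Which is biggest?")
    for i, value in enumerate(numbers, start=1):
        print(f"  [{i}] {value}")
    pick = int(input("Enter 1-4: "))
    elapsed = time.monotonic() - start
    return numbers[pick - 1] == max(numbers), elapsed

if __name__ == "__main__":
    results = [administer(item) for item in ITEMS]
    accuracy = sum(ok for ok, _ in results) / len(results)
    mean_rt = sum(rt for _, rt in results) / len(results)
    print(f"accuracy = {accuracy:.2f}, mean response time = {mean_rt:.1f} s")
```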
The PIAAC Cycle 1 literacy framework expanded on the definition of literacy used in the International Adult Literacy Survey (IALS) and the Adult Literacy and Lifeskills Survey (ALL)1 and provides a broader definition of literacy:
"Literacy is understanding, evaluating, using and engaging with written text to participate in society, to achieve one's goals, and to develop one's knowledge and potential."
This definition (a) highlights the ranges of cognitive processes involved in literacy; (b) focuses on a more active, participatory role of individuals in society; and (c) includes a range of text types, such as narrative and interactive texts, in both print and electronic formats.
While this is broader than the definition used in IALS and ALL, selected items from those assessments are used to provide a link to IALS and ALL. PIAAC items include continuous texts (e.g., sentences and paragraphs), noncontinuous texts (e.g., schedules, graphs, maps), and electronic texts (including hypertext, or text in interactive environments, such as forms and blogs). Task activities are presented in home, work, and community contexts, addressing the various purposes that adults pursue in their lives.
Based on the PIAAC framework, literacy tasks include items (in both computer-based and paper-and-pencil modes) that cover a range of difficulties—low, middle, and high—to present a comprehensive picture of the range of skills of adults in each country.
1 IALS and ALL definition: Literacy is using printed and written information to function in society to achieve one's goals and to develop one's knowledge and potential.
The Definition of Literacy Domain was prepared by an international literacy expert team in order to reflect the overall understanding of how best to assess adult literacy. The framework provides an overview of
For PIAAC, literacy was defined as follows:
"Literacy is understanding, evaluating, using and engaging with written texts to participate in society, to achieve one’s goals, and to develop one’s knowledge and potential."
Some key terms within this definition are explained below.
"Understanding"
A basic task for the reader is constructing meaning, both large and small, literal and implicit, from text. This can range from something as basic as understanding the meaning of individual words to something as complex as comprehending the underlying theme of a lengthy argument or narrative. Certainly, evaluating or using a text implies some level of understanding and so provides an indirect measure of it, but it is the intent of the PIAAC assessment to have a more direct measure. The components framework provides the construct to support basic understanding, but the literacy assessment itself should also include tasks that explicitly tap more complex understanding, such as the relationships between different parts of the text, the gist of the text as a whole, and insight into the author’s intent. Readers also have to understand the social function of each text and the way this influences structure and content.
"Evaluating"
Readers continually make judgments about a text they are approaching. They need to assess whether the text is appropriate for the task at hand, determining whether it will provide the information they need. They have to make judgments about the truthfulness and reliability of the content. They need to account for any biases they find in the text. And, for some texts, they must make judgments about the quality of the text, both as a craft object and as a tool for acquiring information.
Such judgments are especially important for electronic texts. While published print information carries a sense of legitimacy, especially where the reader can assume there has been some review and edit process, sources for online information are more varied, ranging from authoritative to postings with unknown or uncertain authenticity. All information must be evaluated in terms of accuracy, reliability, and timeliness, but this is particularly important with online material.
"Using"
Much adult reading is directed toward applying the information and ideas in a text to an immediate task or to reinforce or change beliefs. Nearly all the tasks in previous international assessments have been of this kind. In some cases, using a text in this way requires just minimal understanding, getting the meaning of the words with some elementary recognition of structure (many menus, for example). In others, it requires using both syntactic and more complex structural understanding to extract the information. In all cases though, the reader approaches the text with a specific task in mind.
"Engaging with"
Many adults appear to read text only when some task requires them to do so. Others sometimes also read for the pleasure it brings them. That is, adults differ in how they engage with text and how much of a role reading plays in their lives. Studies have found that engagement with reading (that is, the attitude toward and practice of reading) is an important correlate of the direct cognitive measures. As such, it is necessary to understand these differences to get a full picture of adult literacy.
"Written text"
Previous literacy assessments have focused primarily on informative texts of both continuous and noncontinuous form. It is the intention of the new construct to expand the range of texts to include a greater variety of text types, such as narrative and interactive texts, and a greater variety of media. Until recently, most adult reading was of material printed on paper. Now, adults need to access and use text displayed on a screen of some kind, whether a computer, an ATM, or a mobile device such as a BlackBerry or iPhone. The PIAAC definition encompasses all of these.
It is worth noting that including electronic text opens the assessment to new types of text and content. While one can find examples of similar texts on paper, they are much less common in that form. Some of these novel form/content combinations include interactive texts, such as exchanges in comments sections of blogs or in e-mail response threads; multiple texts, whether displayed at the same time on a screen or linked through hypertext; and expandable texts, where a summary can be linked to more detailed information if the user chooses.
"Participate in society"
While earlier definitions referred to the role of literacy in “functioning” in society, the PIAAC use of “participating” is meant to focus on a more active role for the individual. Adults use text as a way to engage with their social surroundings, to learn about and to actively contribute to life in their community, close to home and more broadly. And for many adults, literacy is essential to their participation in the labor force. In this, we recognize the social aspect of literacy, seeing it as part of the interactions between and among individuals.
"Achieve one’s goals"
Adults have a range of needs they must address, from basic survival to personal satisfaction to professional and career development. Literacy is increasingly central to meeting those needs, whether simply to find one’s way through shopping or to negotiate complex bureaucracies whose rules are commonly available only in written texts. It is also important in meeting adult needs for sociability, entertainment, leisure, and work.
"Develop one’s potential"
Surveys suggest that many adults engage in some kind of learning throughout their lives, much of it self-directed and informal. Much of this learning requires some use of text, and, as individuals seek to improve their lives, whether at work or outside of it, they need to understand, use, and engage with printed and electronic materials.
For the complete PIAAC literacy framework, see
PIAAC Literacy: A Conceptual Framework
http://www.oecd-ilibrary.org/education/piaac-literacy-a-conceptual-framework_220348414075
In PIAAC, results are reported as averages on a 500-point scale or as proficiency levels. Proficiency refers to competence that involves “mastery” of a set of abilities along a continuum that ranges from simple to complex information-processing tasks.
This continuum has been divided into five levels of proficiency. Each level is defined by a particular score-point range associated with competence at specific information-processing tasks. Adults with literacy scores within the score-point range for a particular proficiency level are likely to successfully complete the tasks at that proficiency level as well as any lower proficiency levels. Adults with scores at a particular proficiency level might be able to complete a task at a higher proficiency level, but the probability is small and diminishes greatly the higher the level. The following descriptions summarize the types of tasks that adults at a particular proficiency level can reliably complete successfully.
See sample items for each level
Description of PIAAC literacy discrete proficiency levels
| Proficiency level and score range | Task descriptions |
|---|---|
| Below Level 1 (0–175 points) | The tasks at this level require the respondent to read brief texts on familiar topics to locate a single piece of specific information. There is seldom any competing information in the text, and the requested information is identical in form to information in the question or directive. The respondent may be required to locate information in short continuous texts; however, in this case, the information can be located as if the text were noncontinuous in format. Only basic vocabulary knowledge is required, and the reader is not required to understand the structure of sentences or paragraphs or make use of other text features. Tasks below Level 1 do not make use of any features specific to digital texts. |
| Level 1 (176–225 points) | Most of the tasks at this level require the respondent to read relatively short continuous, noncontinuous, or mixed texts in digital or print format to locate a single piece of information that is identical to or synonymous with the information given in the question or directive. Some tasks, such as those involving noncontinuous texts, may require the respondent to enter personal information into a document. Little, if any, competing information is present. Some tasks may require simply cycling through more than one piece of information. The respondent is expected to have knowledge and skill in recognizing basic vocabulary, determining the meaning of sentences, and reading paragraphs of text. |
| Level 2 (226–275 points) | At this level, texts may be presented in a digital or print medium and may comprise continuous, noncontinuous, or mixed types. Tasks at this level require respondents to make matches between the text and information and may require paraphrasing or low-level inferences. Some competing pieces of information may be present. Some tasks require the respondent to |
| Level 3 (276–325 points) | Texts at this level are often dense or lengthy and include continuous, noncontinuous, mixed, or multiple pages of text. Understanding text and rhetorical structures becomes more central to successfully completing tasks, especially navigating complex digital texts. Tasks require the respondent to identify, interpret, or evaluate one or more pieces of information and often require varying levels of inference. Many tasks require the respondent to construct meaning across larger chunks of text or perform multi-step operations in order to identify and formulate responses. Often, tasks also demand that the respondent disregard irrelevant or inappropriate content to answer accurately. Competing information is often present, but it is not more prominent than the correct information. |
| Level 4 (326–375 points) | Tasks at this level often require respondents to perform multi-step operations to integrate, interpret, or synthesize information from complex or lengthy continuous, noncontinuous, mixed, or multiple-type texts. Complex inferences and application of background knowledge may be needed to perform the task successfully. Many tasks require identifying and understanding one or more specific, noncentral idea(s) in the text in order to interpret or evaluate subtle evidence, claims, or persuasive discourse or relationships. Conditional information is frequently present in tasks at this level and must be taken into consideration by the respondent. Competing information is present and sometimes seemingly as prominent as correct information. |
| Level 5 (376–500 points) | At this level, tasks may require the respondent to search for and integrate information across multiple, dense texts; construct syntheses of similar and contrasting ideas or points of view; or evaluate evidence-based arguments. Application and evaluation of logical and conceptual models of ideas may be required to accomplish tasks. Evaluating the reliability of evidentiary sources and selecting key information is frequently a requirement. Tasks often require respondents to be aware of subtle, rhetorical cues and to make high-level inferences or use specialized background knowledge. |
NOTE: Every test item is located at a point on the proficiency scale based on its relative difficulty. The easiest items are those located at a point within the score range below level 1 (i.e., 175 or less); the most difficult items are those located at a point at or above the threshold for level 5 (i.e., 376 points). An individual with a proficiency score that matches a test item’s scale score value has a 67 percent chance of successfully completing that test item. This individual will also be able to complete more difficult items (those with higher values on the scale) with a lower probability of success and easier items (those with lower values on the scale) with a greater chance of success.
In general, this means that tasks located at a particular proficiency level can be successfully completed by the “average” person at that level approximately two-thirds of the time. However, individuals scoring at the bottom of the level would successfully complete tasks at that level only about half the time while individuals scoring at the top of the level would successfully complete tasks at the level about 80 percent of the time. Information about the procedures used to set the achievement levels is available in the OECD PIAAC technical report.
SOURCE: OECD Skills Outlook 2013
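To make the 67 percent convention above concrete, here is a minimal sketch, assuming a simple logistic response curve anchored so that the probability of success is exactly 0.67 when a respondent's proficiency equals an item's scale location. The slope (the `scale` value) is an arbitrary illustrative choice; this is not the operational PIAAC scaling model.

```python
import math

def p_correct(proficiency, item_location, scale=30.0):
    """Illustrative logistic response curve (not the operational PIAAC model).

    item_location is treated as the RP67 point: when proficiency equals
    item_location, the probability of success is 0.67 by construction.
    The scale value is an assumed slope chosen only for illustration.
    """
    offset = math.log(0.67 / 0.33)  # logit of 0.67
    return 1.0 / (1.0 + math.exp(-((proficiency - item_location) / scale + offset)))

# For an item located at 300 (middle of Level 3, 276-325), a respondent scoring
# at the item's location succeeds about two-thirds of the time, one at the bottom
# of the level roughly half the time, and one at the top about 80 percent.
for theta in (276, 300, 325):
    print(theta, round(p_correct(theta, 300), 2))   # ~0.48, 0.67, ~0.82
```

With these assumed settings, the output mirrors the half / two-thirds / 80 percent pattern described in the note above.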
PIAAC has two modes of assessment: computer-administered and paper-and-pencil. Respondents who are not familiar with computers are given the paper-and-pencil version of the assessment. PIAAC measures literacy and numeracy in both computer and paper modes.
The beginning of the computer-based assessment (CBA) includes a 5-minute literacy/numeracy core, while the paper-based assessment (PBA) begins with a 10-minute core of literacy/numeracy items in paper-and-pencil format. The literacy/numeracy core is a set of short, easy literacy and numeracy items that gather information about the basic literacy and numeracy skills of the participants and serve as a basis for routing.
Within the CBA, the literacy domain consists of 52 items based on the PIAAC definition of literacy, all of which are scored automatically. Of these 52 computer-based items, 30 were adapted from paper-based items administered as part of IALS and/or ALL and were used to link PIAAC results with those from IALS and ALL. The remaining 22 computer-based literacy items were newly created for PIAAC.
Within the PBA, the literacy domain consists of 24 items based on the PIAAC definition of literacy, which are scored by expert scorers. Of these 24 items, 6 are paper-based only and 18 are administered in both the paper-based and computer-based assessments.
Literacy items (both CBA and PBA) ask participants to answer questions about texts that are drawn from a broad range of real-life settings, including occupational, personal (home and family, health and safety, consumer economics, leisure, and recreation), community and citizenship, and education and training contexts. The text may be:
The questions or tasks using these texts are meant to assess three specific cognitive processes:
Distribution of items by type of text
| Type of text | Number | Percent |
|---|---|---|
| Print-based texts | 36 | 62 |
| Digital texts | 22 | 38 |
| Total | 58 | 100 |
Note: Each category includes continuous, noncontinuous, and combined texts.
Source: Literacy, Numeracy and Problem Solving in Technology-Rich Environments: Framework for the OECD Survey of Adult Skills. Paris: OECD, 2012, p. 25
Distribution of items by context
| Context | Number | Percent |
|---|---|---|
| Work | 10 | 17 |
| Personal | 29 | 50 |
| Community | 13 | 23 |
| Education | 6 | 10 |
| Total | 58 | 100 |
Source: Literacy, Numeracy and Problem Solving in Technology-rich Environments: Framework for the OECD Survey of Adult Skills. Paris: OECD, 2012, p. 25.
Distribution of items by task aspect
| Task aspect | Number | Percent |
|---|---|---|
| Access and identify | 32 | 55 |
| Integrate and interpret | 17 | 29 |
| Evaluate and reflect | 9 | 16 |
| Total | 58 | 100 |
Source: Literacy, Numeracy and Problem Solving in Technology-rich Environments: Framework for the OECD Survey of Adult Skills. Paris: OECD, 2012, p. 25.
Distribution of items by mode of administration
| Mode of administration | Number | Percent |
|---|---|---|
| Computer- and paper-based | 18 | 31 |
| Computer-based only | 34 | 59 |
| Paper-based only | 6 | 10 |
| Total | 58 | 100 |
Source: Adapted from Table 2.4 (p. 69) in the Technical Report of the Survey of Adult Skills (PIAAC) (3rd Edition). Paris: OECD, 2019.
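As a quick arithmetic check, the mode-of-administration counts in the table above can be reconciled with the 52 computer-based and 24 paper-based literacy items described earlier. The sketch below simply redoes that arithmetic; the variable names are illustrative.

```python
# Reconciling the literacy item counts reported above (illustrative arithmetic only).
cba_items = 52          # computer-based literacy items
pba_items = 24          # paper-based literacy items
in_both = 18            # items administered in both modes

computer_only = cba_items - in_both     # 34
paper_only = pba_items - in_both        # 6
total_unique = in_both + computer_only + paper_only
print(total_unique)                     # 58, the total shown in the distribution tables

for label, n in [("Computer- and paper-based", in_both),
                 ("Computer-based only", computer_only),
                 ("Paper-based only", paper_only)]:
    print(label, n, round(100 * n / total_unique))  # 31, 59, and 10 percent
```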
In the PIAAC literacy domain, item difficulty is reported along a five-level proficiency scale with Level 1 corresponding to the easiest items and Level 5 corresponding to the most difficult items.
The stimulus consists of a short report of the results of a union election containing several brief paragraphs and a simple table identifying the three candidates in the election and the number of votes they received. The test taker is asked to identify which candidate received the fewest votes. He or she needs to compare the number of votes that the three candidates received and identify the name of the candidate who received the fewest votes. The word “votes” appears in both the question and in the table and nowhere else in the text.
SOURCE: OECD Skills Outlook 2013: First Results From the Survey of Adult Skills. Paris: OECD, 2013. Page 65.
The stimulus consists of a job search results webpage containing a listing of job descriptions by company. The test taker is asked to identify which company is looking for an employee to work at night. He or she needs to review the job descriptions and identify the name of the company that meets the criteria.
SOURCE: Sample Items: Education and Skills Online http://www.oecd.org/skills/piaac/documentation.htm
The stimulus is a simulated website containing information about the annual fun run/walk organized by the Lakeside community club. The test taker is first directed to a page with several links, including “Contact Us” and “FAQs.” He or she is then asked to identify the link providing the phone number of the organizers of the event. In order to answer this item correctly, the test taker needs to click on the link “Contact Us.” This requires navigating through a digital text and some understanding of web conventions. While this task might be fairly simple for test takers familiar with web-based texts, some respondents less familiar with web-based texts would need to make some inferences to identify the correct link.
SOURCE: OECD Skills Outlook 2013: First Results From the Survey of Adult Skills. Paris: OECD, 2013, p. 65.
Respondents are asked to answer the question shown in the left pane by highlighting information in the list of rules for a preschool.
Correct response: 9:00 am.
SOURCE: Literacy, Numeracy and Problem Solving in Technology-Rich Environments: Framework for the OECD Survey of Adult Skills. Paris: OECD, 2012, p. 26.
The stimulus displays results from a bibliographic search from a simulated library website. The test taker is asked to identify the name of the author of a book called Ecomyth. To complete the task, the test taker has to scroll through a list of bibliographic entries and find the name of the author specified under the book title. In addition to scrolling, the test taker must be able to access the second page, where Ecomyth is located, by either clicking the page number (2) or the word “next.” The considerable irrelevant information in each entry in this particular task adds to its complexity.
SOURCE: OECD Skills Outlook 2013: First Results From the Survey of Adult Skills. Paris: OECD, 2013, p. 65.
The stimulus displays an exercise equipment chart. The respondent has to use the chart to determine which equipment received the largest number of “ineffective” ratings.
Correct response: Dumb bells/weights.
SOURCE: Literacy, Numeracy and Problem Solving in Technology-Rich Environments: Framework for the OECD Survey of Adult Skills. Paris: OECD, 2012, p. 27.
The test taker is presented with a newspaper article and an e-mail. The test taker is asked to identify a sentence in each source that contains common criticisms made about the devices described in the sources.
SOURCE: Sample Items: Education and Skills Online http://www.oecd.org/skills/piaac/documentation.htm
Another item based on the exercise equipment chart stimulus asks the test taker to use the chart to identify which muscles will benefit the most from the use of a particular piece of equipment.
Correct response: Abdominal muscles.
SOURCE: Literacy, Numeracy and Problem Solving in Technology-Rich Environments: Framework for the OECD Survey of Adult Skills. Paris: OECD, 2012, p. 26.
NOTE: Items on this page are not actual replicas of test items.
The primary goal of PIAAC’s numeracy assessment is to evaluate basic mathematical and computational skills that are considered fundamental for functioning in everyday work and social life. In the Definition of Numeracy Domain, numeracy is defined as:
"the ability to access, use, interpret, and communicate mathematical information and ideas, to engage in and manage mathematical demands of a range of situations in adult life"
This definition (a) highlights the range of cognitive processes involved in numeracy; (b) focuses on a more active, engaged role of individuals in society; and (c) includes a range of representations of mathematical information, such as text or graphs.
This definition is compatible with that used in the Adult Literacy and Lifeskills Survey (ALL),1 and selected items from that assessment are used to provide a link to ALL. Mathematical information in PIAAC items is represented in multiple ways, including pictures, mathematical symbols, formulas, diagrams, maps, graphs, tables, texts, and technology-based displays. Task activities are presented in home, work, and community contexts, addressing the various purposes that adults pursue in their lives.
Based on the PIAAC framework, numeracy tasks include items (in both computer-based and paper-and-pencil modes) that cover a range of difficulties—low, middle, and high—to present a comprehensive picture of the range of skills of adults in each country.
1 ALL definition: Numeracy is the knowledge and skills required to effectively manage and respond to the mathematical demands of diverse situations.
The Definition of Numeracy Domain was prepared by an international numeracy expert team to reflect the overall understanding of how best to assess adult numeracy. The framework provides an overview of
For PIAAC, numeracy was defined as follows:
"Numeracy is the ability to access, use, interpret, and communicate mathematical information and ideas, in order to engage in and manage the mathematical demands of a range of situations in adult life."
This definition captures essential elements found in numerous conceptualizations of numeracy in the extant literature. It is compatible with the definition used in the Adult Literacy and Life Skills Survey (ALL), an international assessment of adult skills conducted in 2003 and 2008, and appears to provide a solid basis from which to develop an assessment scale for PIAAC, with its emphasis on competencies in the information age. The inclusion of "engage" in the definition signals that not only cognitive skills but also dispositional elements (i.e., beliefs and attitudes) are necessary for effective and active coping with numeracy situations. It is also important to note that, while the definition of numeracy for PIAAC was developed in the context of an assessment program, it has been crafted to contribute to public dialogue about the goals of educational and social interventions aimed at developing adult competencies in general, and adult numeracy and related mathematical and statistical skills and dispositions in particular.
However, since numeracy is a broad, multifaceted construct referring to a complex competency, the definition of numeracy given above should not be considered by itself but should be coupled with a more detailed definition of numerate behavior and with further specification of the facets of numerate behavior. This pairing is essential to enable operationalization of the construct of numeracy in an actual assessment, thereby contributing to the assessment’s validity and interpretability and further broadening the understanding of key terms appearing in the definition itself. Consequently, a definition of numerate behavior similar in general terms to the one used for the ALL survey, but shorter, has been adopted for PIAAC:
"Numerate Behavior involves managing a situation or solving a problem in a real context, by responding to mathematical content/information/ideas represented in multiple ways..."
Table 1: Numerate behavior: Key facets and their components
Numerate behavior involves managing a situation or solving a problem...
Numerate behavior is founded on the activation of several enabling factors and processes:
The definition of numerate behavior pertains to four facets: Contexts, Responses, Mathematical Content/Information/Ideas, and Representations. Table 1 lists the components of the four facets, and these components are explained in more detail in the next section. Table 1 is based on the original description of the facets of numerate behavior developed for the ALL survey, but some changes have been implemented, such as the addition of "access" and "evaluate/analyze" as possible responses, the merging of the content categories of "change" and "pattern and relationship," and the reference to "technology-based displays" as another representation mode.
It should be noted that the bottom section of Table 1 lists several enabling factors and processes whose activation also underlies numerate behavior. Most of these enabling factors and processes appeared in the ALL conceptual framework, but some changes were introduced for PIAAC, such as the inclusion of "adaptive reasoning and mathematical problem-solving" as a separate factor. Overall, the definition of numerate behavior presented earlier, together with the detail in Table 1 and the further explanations within the complete numeracy framework (see below), provided a roadmap for the development of a numeracy scale for PIAAC.
For a more complete description of the numeracy framework, see:
PIAAC Numeracy: A Conceptual Framework
http://www.oecd-ilibrary.org/education/piaac-numeracy-a-conceptual-framework_220337421165
In PIAAC, results are reported as averages on a 500-point scale or as proficiency levels. Proficiency refers to competence that involves “mastery” of a set of abilities along a continuum that ranges from simple to complex information-processing tasks.
This continuum has been divided into five levels of proficiency. Each level is defined by a particular score-point range associated with competence at specific information-processing tasks. Adults with numeracy scores within the score-point range for a particular proficiency level are likely to successfully complete the tasks at that proficiency level as well as any lower proficiency levels. Adults with scores at a particular proficiency level might be able to complete a task at a higher proficiency level, but the probability is small and diminishes greatly the higher the level.
The following descriptions summarize the types of tasks that adults at a particular proficiency level can reliably complete successfully.
Description of discrete proficiency levels for PIAAC numeracy
See sample items for each level
| Proficiency level and score range | Task descriptions |
|---|---|
| Below Level 1 (0–175 points) | Tasks at this level require the respondents to carry out simple processes such as counting, sorting, performing basic arithmetic operations with whole numbers or money, or recognizing common spatial representations in concrete, familiar contexts where the mathematical content is explicit with little or no text or distractors. |
| Level 1 (176–225 points) | Tasks at this level require the respondent to carry out basic mathematical processes in common, concrete contexts where the mathematical content is explicit with little text and minimal distractors. Tasks usually require one-step or simple processes involving counting, sorting, performing basic arithmetic operations, understanding simple percentages such as “50 percent,” or locating and identifying elements of simple or common graphical or spatial representations. |
| Level 2 (226–275 points) | Tasks at this level require the respondent to identify and act on mathematical information and ideas embedded in a range of common contexts where the mathematical content is fairly explicit or visual with relatively few distractors. Tasks tend to require the application of two or more steps or processes involving calculations with whole numbers and common decimals, percentages, and fractions; simple measurement and spatial representation; estimation; or interpretation of relatively simple data and statistics in texts, tables, and graphs. |
| Level 3 (276–325 points) | Tasks at this level require the respondent to understand mathematical information that may be less explicit, embedded in contexts that are not always familiar, and represented in more complex ways. Tasks require several steps and may involve the choice of problem-solving strategies and relevant processes. Tasks tend to require the application of number sense and spatial sense; recognizing and working with mathematical relationships, patterns, and proportions expressed in verbal or numerical form; or interpretation and basic analysis of data and statistics in texts, tables, and graphs. |
| Level 4 (326–375 points) | Tasks at this level require the respondent to understand a broad range of mathematical information that may be complex, abstract, or embedded in unfamiliar contexts. These tasks involve undertaking multiple steps and choosing relevant problem-solving strategies and processes. Tasks tend to require analysis and more complex reasoning about quantities and data; statistics and chance; spatial relationships; or change, proportions, and formulas. Tasks at this level may also require understanding arguments or communicating well-reasoned explanations for answers or choices. |
| Level 5 (376–500 points) | Tasks at this level require the respondent to understand complex representations and abstract and formal mathematical and statistical ideas, possibly embedded in complex texts. Respondents may have to integrate multiple types of mathematical information where considerable translation or interpretation is required; draw inferences; develop or work with mathematical arguments or models; or justify, evaluate, and critically reflect upon solutions or choices. |
NOTE: Every test item is located at a point on the proficiency scale based on its relative difficulty. The easiest items are those located at a point within the score range below level 1 (i.e., 175 or less); the most difficult items are those located at a point at or above the threshold for level 5 (i.e., 376 points). An individual with a proficiency score that matches a test item’s scale score value has a 67 percent chance of successfully completing that test item. This individual will also be able to complete more difficult items (those with higher values on the scale) with a lower probability of success and easier items (those with lower values on the scale) with a greater chance of success.
In general, this means that tasks located at a particular proficiency level can be successfully completed by the “average” person at that level approximately two-thirds of the time. However, individuals scoring at the bottom of the level would successfully complete tasks at that level only about half the time while individuals scoring at the top of the level would successfully complete tasks at that level about 80 percent of the time. Information about the procedures used to set the achievement levels is available in the OECD PIAAC Technical Report.
SOURCE: OECD Skills Outlook 2013.
The PIAAC assessment has two modes of administration: computer-based and paper-and-pencil. Respondents who are not familiar with computers, as well as those who refuse to take the test on the computer, are given the paper-and-pencil version of the assessment. PIAAC measures literacy and numeracy in both modes of assessment.
The beginning of the computer-based assessment (CBA) includes a 5-minute literacy/numeracy core, while the paper-based assessment (PBA) begins with a 10-minute core of literacy/numeracy items in paper-and-pencil format. The literacy/numeracy core is a set of short, easy literacy and numeracy items that gather information about the basic literacy and numeracy skills of the participants and serve as a basis for routing.
There are 56 numeracy items in total in the PIAAC assessment. The CBA numeracy domain consists of 52 items, which are based on the PIAAC definition of numeracy and are scored automatically. Of these 52 computer-based items, 30 were adapted from paper-based items administered in ALL and were used to link PIAAC results with results from ALL. The remaining 22 computer-based numeracy items were newly created for PIAAC.
The PBA numeracy domain consists of 24 items, which are based on the PIAAC definition of numeracy and are scored by expert scorers. Of these 24 items, 4 items are paper-based only and 20 items are administered in both the paper-based and computer-based assessments.
Numeracy items (in both the CBA and PBA) ask participants to answer questions about four content areas: quantity and number; dimension and shape; pattern, relation, and change; and data and chance. The information in these questions may be presented as
Questions about the four key areas of mathematical content are placed in one of four contexts: everyday life, work-related, society or community, and further learning. The questions are meant to assess three types of specific cognitive processes:
Distribution of numeracy items by mathematical content
| Mathematical content | Number of items | Percent |
|---|---|---|
| Quantity and number | 13 | 23 |
| Dimension and shape | 16 | 29 |
| Pattern, relationships, and change | 15 | 27 |
| Data and chance | 12 | 21 |
| Total | 56 | 100 |
Note: Each category includes continuous, non-continuous and combined texts.
Source: Literacy, Numeracy and Problem Solving in Technology-rich Environments: Framework for the OECD Survey of Adult Skills. Paris: OECD, 2012, p. 40
Distribution of numeracy items by context
| Context | Number of items | Percent |
|---|---|---|
| Everyday life | 25 | 45 |
| Work-related | 13 | 23 |
| Society and community | 14 | 25 |
| Further learning | 4 | 7 |
| Total | 56 | 100 |
Source: Literacy, Numeracy and Problem Solving in Technology-rich Environments: Framework for the OECD Survey of Adult Skills. Paris: OECD, 2012, p. 40.
Distribution of numeracy items by response process
| Response process | Number of items | Percent |
|---|---|---|
| Identify, locate, or access | 3 | 5 |
| Act upon or use | 34 | 61 |
| Interpret, evaluate | 19 | 34 |
| Total | 56 | 100 |
Source: Literacy, Numeracy and Problem Solving in Technology-Rich Environments: Framework for the OECD Survey of Adult Skills. Paris: OECD, 2012, p. 40.
Distribution of numeracy items by mode of administration
| Mode of administration | Number of items | Percent |
|---|---|---|
| Both computer-based and paper-based | 20 | 36 |
| Computer-based only | 32 | 57 |
| Paper-based only | 4 | 7 |
| Total | 56 | 100 |
Source: Adapted from Table 2.4 (p. 69), including corrected numbers through communication with the authors, in the Technical Report of the Survey of Adult Skills (PIAAC) (3rd Edition). Paris: OECD, 2019.
In the PIAAC numeracy domain, item difficulty is reported along a five-level proficiency scale with Level 1 corresponding to the easiest items and Level 5 corresponding to the most difficult items.
The stimulus for this item consists of four supermarket price tags. These identify the product, the price per kilogram, the net weight, the date packed, and the total price. The test taker is asked to indicate the item that was packed first by simply comparing the dates on the price tags.
SOURCE: OECD Skills Outlook 2013: First Results from the Survey of Adult Skills. Paris: OECD, 2013. Page 77.
The stimulus for this item is a photo of a box containing tea light candles. The packaging identifies the product (tea light candles), the number of candles in the box (105 candles), and the weight of the box. Although the packaging partially covers the top layer of candles, it can be seen that the candles are packed in five rows of seven candles each. The instructions inform the test taker that there are 105 candles in a box. The test taker is asked to calculate how many layers of tea candles are packed in the box.
Correct response: 3
SOURCE: OECD Skills Outlook 2013: First Results from the Survey of Adult Skills. Paris: OECD, 2013, p. 77.
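A brief worked check of the arithmetic behind this item, assuming the visible layer really is five rows of seven candles as described above:

```python
# Illustrative check of the candle-box item (assumes 5 rows of 7 candles per layer).
candles_per_layer = 5 * 7                    # 35 candles in each layer
total_candles = 105                          # stated on the packaging
print(total_candles // candles_per_layer)    # 3 layers, the correct response
```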
The test taker is presented with a table showing the number of workplace injuries per month in 2010 and 2011, as well as a bar chart displaying the numbers for 2011. The test taker needs to compare the table to the bar chart in order to find the two bars that do not match the data in the table. The test taker is asked to click on the incorrect bars in the graph.
Correct response: The bars for June and July
SOURCE: Sample Items: Education and Skills Online, http://www.oecd.org/skills/piaac/documentation.htm
The test taker is presented with a graph showing the number of births in the United States from 1957 to 2007, with data for every 10 years. The test taker is asked to determine which period(s) had a decline in the number of births, and then respond by selecting one or more of the time periods provided in the left pane on the screen.
Correct response: 1957-1967 and 1967-1977
SOURCE: Literacy, Numeracy and Problem Solving in Technology-Rich Environments: Framework for the OECD Survey of Adult Skills. Paris: OECD, 2012, p. 40.
The test taker is presented with information about a shoe sale. The test taker is asked to calculate the total amount they would need to pay in order to purchase both pairs of shoes, and then type the correct dollar amount into the answer box.
Correct response: $48.95
SOURCE: Sample Items: Education and Skills Online, http://www.oecd.org/skills/piaac/documentation.htm
Question 1 of 2
The test taker is presented with a thermometer that shows both the Fahrenheit scale (°F) and the Celsius scale (°C). The first question asks the respondent to estimate the temperature shown on the thermometer in degrees Fahrenheit and type a numerical response into the answer box.
Correct response: Any value between 77.7 and 78.3
Question 2 of 2
In the second question for this item, the test taker is presented with the same thermometer showing the same temperature. The test taker is asked to calculate what the temperature would be in degrees Celsius (°C) if the temperature shown on the thermometer were decreased by 30 degrees Celsius. Again, the respondent is asked to type a numerical response into the answer box.
Correct response: Any value between -4 and -5
SOURCE: Literacy, Numeracy and Problem Solving in Technology-Rich Environments: Framework for the OECD Survey of Adult Skills. Paris: OECD, 2012, p. 41.
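A rough worked check of the two accepted answer ranges, assuming the thermometer reads about 78 degrees Fahrenheit (the middle of the range accepted for the first question):

```python
# Illustrative conversion for the thermometer item (78 °F is an assumed reading).
f = 78.0
c = (f - 32) * 5 / 9         # standard Fahrenheit-to-Celsius conversion
print(round(c, 1))           # ~25.6 °C
print(round(c - 30, 1))      # ~-4.4 °C after a 30-degree Celsius decrease,
                             # inside the accepted range of -4 to -5
```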
The test taker is presented with information about a restaurant’s income and expenditures for 3 months. The test taker is asked to calculate the mean for the total expenditures during the 3-month period and then type a numerical response into the answer box.
Correct response: One of the following three values: 595, 596, or 600 (no values between)
SOURCE: Literacy, Numeracy and Problem Solving in Technology-Rich Environments: Framework for the OECD Survey of Adult Skills. Paris: OECD, 2012. Page 42.
The test taker is presented with information about a restaurant’s income and expenditures for 3 months. The test taker is asked to calculate the mean for the total expenditures during the 3-month period and type the correct answer into the box.
Correct response: €84
SOURCE: Sample Items: Education and Skills Online, http://www.oecd.org/skills/piaac/documentation.htm
NOTE: Items on this page are not actual replicas of test items.
Problem solving in technology-rich environments (PS-TRE), also known as digital problem solving, was introduced in PIAAC Cycle 1 as an innovative addition to adult literacy and international large-scale assessments. In the Definition of Digital Problem-Solving Domain, PS-TRE is defined as:
"using digital technology, communication tools, and networks to acquire and evaluate information, communicate with others, and perform practical tasks."
This definition focuses on the skills and abilities required for (a) accessing and using digital environment tools and resources, (b) evaluating the validity of the information, and (c) communicating information.
PS-TRE assesses the cognitive processes of problem solving: goal setting, planning, selecting, evaluating, organizing, and communicating results. The environment in which PS-TRE assesses these processes was meant to reflect the reality that digital technology has revolutionized access to information and communication capabilities over the past decades. In particular, the Internet has immensely increased instantaneous access to large amounts of information in multiple formats and expanded the capabilities of instant voice, text, visual, and graphic communications across the globe.
In order to effectively operate in the digital environment, it is necessary to have mastery of information and communications technology (ICT) skills, including (a) skills associated with manipulating input and output devices (e.g., the mouse, keyboard, and digital displays), (b) awareness of concepts and knowledge of how the environment is structured (e.g., scrollbars, hyperlinks, files, folders, and different types of menus or buttons), and (c) the ability to interact effectively with digital information (e.g., how to open and close apps, move or highlight text, and save, delete, and send files). Such interaction involves familiarity with electronic apps, files, texts, images, graphics and numerical data, as well as the ability to locate, evaluate, and critically judge the validity, accuracy, and appropriateness of accessed information. These skills constituted the core aspects of the PS-TRE assessment.
PS-TRE tasks were performed in simulated software applications using commands and functions found in the common technology environments at the time–browser-based e-mail, web pages, and spreadsheets. These tasks ranged from purchasing particular goods or services online and finding interactive health information to managing personal information and business finances.
Based on the PIAAC framework, PS-TRE tasks include items (only in the computer-based mode and not in the paper-and-pencil mode) that cover a range of difficulties–low, middle, and high–to present a comprehensive picture of the range of skills of adults in each country.
The Definition of Digital Problem-Solving Domain was prepared by an international expert team in order to reflect the overall understanding of how best to assess adults’ ability to solve problems in technology-rich environments. The framework provides an overview of
In the context of the PIAAC survey, problem solving in technology-rich environments is defined as follows:
"Problem solving in technology-rich environments involves using digital technology, communication tools and networks to acquire and evaluate information, communicate with others and perform practical tasks. The first PIAAC problem-solving survey will focus on the abilities to solve problems for personal, work and civic purposes by setting up appropriate goals and plans, accessing and making use of information through computers and computer networks."
The two sentences in the definition each serve a specific purpose. The first sentence is aimed at providing a broad basis for the first as well as subsequent surveys of PS-TRE. The second sentence acknowledges some constraints that limit the scope of the first survey. We provide below a series of more specific comments on the words and phrases used in this definition.
"using digital technology, communication tools and networks"
PIAAC focuses on problems that are specifically related to the use of ICT. The problem-solving context means that routine or basic ICT skills will not be central to the framework. Instead, PS-TRE will focus on situations that involve the active construction of goals and strategies on the part of the user. We also acknowledge the increasing diversity and versatility of digital technologies, and we emphasize that a proper assessment of PS-TRE should not be limited to traditional desktop computing. Instead, we envision that mobile and integrated technologies may be involved in new types of problem solving that will need to be represented in future assessments.
"to acquire and evaluate information"
This phrase acknowledges that most uses of digital technologies involve the use of symbolic information, such as texts, graphics, links, and commands. Symbolic information is used as part of human-computer interfaces (e.g., icons, commands), and it constitutes the primary content of most computer applications (e.g., word processor, spreadsheet, Internet browser, and e-mail applications). The phrase also emphasizes that computers and computer networks, such as the Internet, mostly offer a multiplicity of information sources, from which the relevant and reliable pieces have to be chosen for the purpose at hand.
"communicate with others"
An important role of digital technologies is to provide powerful and flexible means for people to communicate with each other. Examples include e-mail, chats, short message systems, and IP audiovisual communication. Digital communication may take place in the context of purposeful, problem-like situations and therefore it is an integral part of the PIAAC PS-TRE construct.
"and perform practical tasks"
The ability to solve problems with digital technologies is tightly related to the achievement of personal, civic and work-related purposes, which, in turn, take the form of concrete, practical tasks. Examples include shopping, learning about laws and regulations, and organizing teamwork through online agendas and reservation systems. The problems assessed in PIAAC will use authentic, meaningful scenarios based on surveys of computer uses and input from participating countries.
"The first PIAAC problem-solving survey"
This is the first attempt to assess PS-TRE on a large scale and as a single dimension. This creates many challenges concerning the definition of tasks and the practical collection of data. Furthermore, digital technologies keep evolving at a rapid pace, as do the personal, social, and work-related uses of those technologies. While setting the stage for further rounds of surveys, the present framework will take a perspective on PS-TRE that takes into consideration feasibility issues as well as possible evolutions of technology and technology uses.
"will focus on the abilities to solve problems for personal, work and civic purposes"
In order to reflect the pervasiveness of ICT in society, PIAAC PS-TRE will assess problem-solving ability based on scenarios that pertain to these three important contexts.
"by setting up appropriate goals and plans,"
An assessment of problem-solving capacity should focus on situations where test takers cannot immediately reach their goal based on routine, mechanistic sets of actions. Instead, we focus on tasks that require test takers to actively construct a solution based on the resources available in the assessment environment.
"accessing and making use of information"
Again, this phrase emphasizes a specific aspect of PS-TRE, namely, that these are often information-rich problems that require individuals to access, interpret and integrate multiple sources of information.
"through computers and computer networks."
There is more to "technology-rich environments" than merely personal computers. A full assessment of PS in TRE would require a range of devices that mimic the diversity and versatility of the digital technologies of today's world. However, for feasibility reasons, this first survey will be limited to problems requiring the use of computers and internet-based services.
For a more complete description of the problem-solving framework, see:
PIAAC Problem Solving in Technology-Rich Environments: A Conceptual Framework
In PIAAC, results are reported as averages on a 500-point scale or as proficiency levels. Proficiency refers to competence that involves “mastery” of a set of abilities along a continuum that ranges from simple to complex information-processing tasks.
This continuum has been divided into four levels of proficiency. Each level is defined by a particular score-point range associated with competence at specific information-processing tasks. Adults with problem-solving scores within the score-point range for a particular proficiency level are likely to successfully complete the tasks at that proficiency level as well as any lower proficiency levels. Adults with scores at a particular proficiency level might be able to complete a task at a higher proficiency level, but the probability is small and diminishes greatly the higher the level. The following descriptions summarize the types of tasks that adults at a particular proficiency level can reliably complete successfully.
Description of discrete proficiency levels for PIAAC Problem Solving in Technology-Rich Environments (PS-TRE), or “Digital Problem Solving”
See sample items for each level
| Proficiency level and score range | Task descriptions |
|---|---|
| Below Level 1 (0–240 points) | Tasks are based on well-defined problems involving the use of only one function within a generic interface to meet one explicit criterion without any categorical or inferential reasoning or transforming of information. Few steps are required, and no subgoal has to be generated. |
| Level 1 (241–290 points) | At this level, tasks typically require the use of widely available and familiar technology applications, such as e-mail software or a web browser. There is little or no navigation required to access the information or commands required to solve the problem. The problem may be solved regardless of the respondent's awareness and use of specific tools and functions (e.g., a sort function). The tasks involve few steps and a minimal number of operators. At the cognitive level, the respondent can readily infer the goal from the task statement; problem resolution requires the respondent to apply explicit criteria; and there are few monitoring demands (e.g., the respondent does not have to check whether he or she has used the appropriate procedure or made progress toward the solution). Identifying content and operators can be done through a simple match. Only simple forms of reasoning, such as assigning items to categories, are required; there is no need to contrast or integrate information. |
| Level 2 (291–340 points) | At this level, tasks typically require the use of both generic and more specific technology applications. For instance, the respondent may have to make use of a novel online form. Some navigation across pages and applications is required to solve the problem. The use of tools (e.g., a sort function) can facilitate the resolution of the problem. The task may involve multiple steps and operators. The goal of the problem may have to be defined by the respondent, although the criteria to be met are explicit. There are higher monitoring demands, and there may be unexpected outcomes or impasses. The task may require evaluating the relevance of a set of items to discard distractors. Some integration and inferential reasoning may be needed. |
| Level 3 (341–500 points) | At this level, tasks typically require the use of both generic and more specific technology applications. Some navigation across pages and applications is required to solve the problem. The use of tools (e.g., a sort function) is required to make progress toward the solution. The task may involve multiple steps and operators. The goal of the problem may have to be defined by the respondent, and the criteria to be met may or may not be explicit. There are typically high monitoring demands, and unexpected outcomes and impasses are likely. The task may require evaluating the relevance and reliability of information in order to discard distractors. Integration and inferential reasoning may be needed to a large extent. |
NOTE: Every test item is located at a point on the proficiency scale based on its relative difficulty. The easiest items are those located at a point within the score range below level 1 (i.e., 240 or less); the most difficult items are those located at a point at or above the threshold for level 3 (i.e., 341). An individual with a proficiency score that matches a test item’s scale score value has a 67 percent chance of successfully completing that test item. This individual will also be able to complete more difficult items (those with higher values on the scale) with a lower probability of success and easier items (those with lower values on the scale) with a greater chance of success.
In general, this means that tasks located at a particular proficiency level can be successfully completed by the “average” person at that level approximately two-thirds of the time. However, individuals scoring at the bottom of the level would successfully complete tasks at that level only about half the time, while individuals scoring at the top of the level would successfully complete tasks at the level about 80 percent of the time. Information about the procedures used to set the achievement levels is available in the OECD PIAAC Technical Report.
SOURCE: OECD Skills Outlook 2013.
The PS-TRE domain consisted of 14 items that were computer-administered. PIAAC respondents who received the paper-and-pencil version of the assessment were not assessed in the PS-TRE domain.
All of the PS-TRE items were newly created for PIAAC specifically for testing participants’ ability to manage tasks that could include multiple steps and, in some cases, multiple “technology environments.” For example, items could require participants to navigate between e-mail and spreadsheet “environments” to locate information and create a table that represents that information for a specific purpose. The PS-TRE tasks were all scenario-based, ranging from easy to difficult. They measured four specific cognitive processes:
Distribution of tasks as a function of cognitive dimensions
| Dimension | Number1 | Percent1 |
|---|---|---|
| Setting goals and monitoring progress | 4 | 29 |
| Planning | 7 | 50 |
| Acquiring and evaluating information | 8 | 57 |
| Using information | 6 | 43 |
1 Parts do not sum to total because some tasks are coded to more than one dimension.
Source: Literacy, Numeracy and Problem Solving in Technology-rich Environments: Framework for the OECD Survey of Adult Skills. Paris: OECD, 2012, p. 52.
Distribution of tasks as a function of technology dimensions
| Dimension | Number1 | Percent1 |
|---|---|---|
| Web | 7 | 50 |
| Spreadsheet | 4 | 29 |
| E-mail | 9 | 64 |
1 Parts do not sum to total because some tasks are coded to more than one dimension.
Source: Literacy, Numeracy and Problem Solving in Technology-rich Environments: Framework for the OECD Survey of Adult Skills. Paris: OECD, 2012, p. 52.
Distribution of tasks by context
| Context | Number | Percent |
|---|---|---|
| Personal | 8 | 57 |
| Work / Occupation | 4 | 29 |
| Civic | 2 | 14 |
| Total | 14 | 100 |
Source: Literacy, Numeracy and Problem Solving in Technology-rich Environments: Framework for the OECD Survey of Adult Skills. Paris: OECD, 2012, p. 52.
In the PIAAC problem-solving in technology-rich environments domain, item difficulty is reported along a four-level proficiency scale with Below Level 1 corresponding to the easiest items and Level 3 corresponding to the most difficult items.
In this item, respondents need to select a set of files to download onto a portable music player. As shown in figure 4, the item requires respondents to select files meeting specified criteria in terms of genre (jazz and rock) and file size (maximum of 20 MB).
The software includes an automatic summing functionality (“Total Size Selected”) that facilitates the task by updating the total file size as files are selected or deselected. Respondents must monitor their progress as they select files, checking against the provided criteria to know when they have satisfied the constraints presented in the problem.
Figure 4: Downloading music files
It is also possible to sort the spreadsheet by file size and/or genre, a strategy that can increase task efficiency. The connection between the use of resources in a technology-rich environment and resulting efficiencies for solving problems is one aspect of the domain that is emphasized in the framework and therefore included across items in the assessment.
SOURCE: Literacy, Numeracy and Problem Solving in Technology-Rich Environments: Framework for the OECD Survey of Adult Skills. Paris: OECD, 2012, p. 55.
This task involves sorting e-mails into preexisting folders. An e-mail interface is presented with five e-mails in an inbox. These e-mails are responses to a party invitation. The test taker is asked to place the response e-mails into a preexisting folder to keep track of who can and cannot attend a party. The item requires the test taker to “Categorize a small number of messages in an e-mail application in existing folders according to a single criterion.” The task is performed in a single and familiar environment, and the goal is explicitly stated in operational terms. Solving the problem requires a relatively small number of steps and the use of a restricted range of operators and does not demand a significant amount of monitoring across a large number of actions.
SOURCE: OECD Skills Outlook 2013: First Results from the Survey of Adult Skills. Paris: OECD, 2013, p. 89.
This task involves navigating across a website and an e-mail interface to exchange a desk lamp that was delivered in error for the desk lamp that was actually ordered. The stimuli for this item include a company webpage showing several hyperlinks and an e-mail interface. The test taker is asked to use the website and the e-mail to fill out a customer service return form, request a return authorization number, and submit the authorization number.
SOURCE: Sample Items: Education and Skills Online, http://www.oecd.org/skills/piaac/documentation.htm
This task involves responding to a request for information by locating information in a spreadsheet and e-mailing the requested information to the person who asked for it. The test taker is presented with a word-processor page containing a request to identify members of a bike club who meet two conditions and a spreadsheet containing 200 entries in which the relevant information can be found. The required information has to be extracted by using a sort function. The item requires the test taker to “Organize large amounts of information in a multiple-column spreadsheet using multiple explicit criteria and locate and mark relevant entries.” The task requires switching between two different applications and involves multiple steps and operators. It also requires some amount of monitoring. Making use of the available tools greatly facilitates identifying the relevant entries.
SOURCE: OECD Skills Outlook 2013: First Results from the Survey of Adult Skills. Paris: OECD, 2013, p. 89.
This is an example of an item created for the PIAAC domain of Problem Solving in Technology-Rich Environments. In this item, respondents must access and evaluate information in the context of a simulated job search. As shown in the item directions located on the left side of the screen, respondents must find one or more sites that do not require users to register or pay a fee.
Figure 1: Opening screen of job search task
As the screen clip above shows, this item is situated in a simulated web environment that includes tools and functionality similar to those found in real-life applications. Users are able to click on links, move between pages, and bookmark sites, much as they would in a live web browser.
The response mode in this item reflects real-life actions within the environment; in this case, respondents are asked to bookmark their selection. In addition to scoring this item based on the selection of the two correct sites, the process data and path tracking that are possible in this computer-based item also contribute to the response data. For example, one of the websites, as shown in figure 2, meets the specified criteria, but the relevant information about fees and registration is not on the opening page. If a respondent bookmarks this site as a correct answer without clicking on the “Learn More” link to view the relevant information (shown in figure 3), we might interpret that response differently than we would if that page had been viewed. This breadth of information, combined with frameworks that specify behaviors of interest, allows us to learn more about what adults know and can do relative to the problem-solving construct as it is being measured in PIAAC.
Figure 2: Website where relevant information on fees and registration is not on opening screen
Figure 3: Second page of same website: Relevant information is located in the directions for the form and indicates that users must sign up (register) and pay a fee
SOURCE: Literacy, Numeracy and Problem Solving in Technology-Rich Environments: Framework for the OECD Survey of Adult Skills. Paris: OECD, 2012, pp. 53-54.
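The path tracking described for the job-search item can be illustrated with a small sketch. The event names, log structure, and helper function below are hypothetical and do not reflect PIAAC's actual log format or scoring rules; they simply show how one could check whether the page containing the fee and registration details was viewed before the site was bookmarked.

```python
from dataclasses import dataclass

@dataclass
class LogEvent:
    action: str   # e.g., "view_page" or "bookmark" (hypothetical event names)
    target: str   # identifier of the page or site acted upon

def bookmarked_after_viewing(log, site, evidence_page):
    """Return True only if `evidence_page` was viewed before `site` was bookmarked."""
    seen_evidence = False
    for event in log:
        if event.action == "view_page" and event.target == evidence_page:
            seen_evidence = True
        if event.action == "bookmark" and event.target == site:
            return seen_evidence
    return False

# A hypothetical respondent views the "Learn More" page before bookmarking the site.
log = [
    LogEvent("view_page", "site_B_home"),
    LogEvent("view_page", "site_B_learn_more"),  # fee and registration details shown here
    LogEvent("bookmark", "site_B"),
]
print(bookmarked_after_viewing(log, "site_B", "site_B_learn_more"))  # True
```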
This task involves managing requests to reserve a meeting room on a particular date using a reservation system. Upon discovering that one of the reservation requests cannot be accommodated, the test taker has to send an e-mail message declining the request. Successfully completing the task involves taking into account multiple constraints (e.g. the number of rooms available and existing reservations). Impasses exist, as the initial constraints generate a conflict (one of the demands for a room reservation cannot be satisfied). The impasse has to be resolved by initiating a new subgoal, i.e., issuing a standard message to decline one of the requests. Two applications are present in the environment: an e-mail interface with a number of e-mails stored in an inbox containing the room reservation requests and a web-based reservation tool that allows the user to assign rooms to meetings at certain times. The item requires the test taker to “Use information from a novel web application and several e-mail messages, establish and apply criteria to solve a scheduling problem where an impasse must be resolved, and communicate the outcome.” The task involves multiple applications, a large number of steps, a built-in impasse, and the discovery and use of ad hoc commands in a novel environment. The test taker has to establish a plan and monitor its implementation in order to minimize the number of conflicts. In addition, the test taker has to transfer information from one application (e-mail) to another (the room-reservation tool).
SOURCE: OECD Skills Outlook 2013: First Results from the Survey of Adult Skills. Paris: OECD, 2013, p. 89.
NOTE: Items on this page are not actual replicas of test items.
PIAAC’s reading components provide information about the literacy skills of adults at the lowest end of the literacy spectrum–specifically, whether they have the foundational skills to develop the higher literacy and numeracy abilities necessary for functioning in society. PIAAC’s reading components assess basic elements of reading that are comparable across the range of languages in the participating countries: reading vocabulary, sentence comprehension, and basic passage comprehension.
PIAAC developed the PIAAC reading components framework to define the foundational skills that should be measured because studies prior to PIAAC, including the National Assessment of Adult Literacy (NAAL), were unable to measure the skills of adults who had very low literacy skills. Using the PIAAC reading components items, it has been possible to put adults with very low literacy skills on the PIAAC literacy scale and compare performance on reading components across participating countries.
The three basic elements of reading components
The reading component portion of the assessment is optional for countries participating in PIAAC. In countries that choose to adopt the reading components tasks, all participants who take the paper-based assessment, as well as those who fail to pass the computer-administered ICT and literacy/numeracy “core” items, are directed to the reading components tasks.
The reading components assessment framework builds upon the basic principle that comprehension–i.e., the “meaning construction” processes of reading–is built upon a foundation of knowledge of how one’s language is represented in one’s writing system (i.e., the component print skills). This basic principle of learning to read has now been widely researched and accepted internationally (Curtis, 1980; Oakhill, Cain, and Bryant, 2003; Perfetti, 1985, 2003; Sabatini, 2003; Strucker, Yamamoto, and Kirsch, 2004). Evidence of an individual’s level of print skill can be captured in tasks that examine a reader’s ability and efficiency in processing the elements of the written language: letters/characters, words, sentences, and larger, continuous text segments.
Another key principle guiding the framework is that as one becomes proficient in reading, the component skills become more efficient and integrated. As learners, we spend extra time, effort, and energy to solve problems that are novel. On familiar tasks, we can often respond accurately and quickly, with seemingly little conscious effort. When basic tasks become easy, we can devote more effort to solving and learning from more complex problems and tasks. This is a basic tenet of “automaticity” (LaBerge and Samuels, 1974) and verbal efficiency theory (Perfetti, 1985, 1992, 2003). Component efficiency is typically indexed by assessing the speed or rate of processing, as well as accuracy. Speed or rate can be approximated by recording the time it takes to complete certain tasks or by setting a time limit and observing how many items are completed in the time frame allotted.
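As a rough illustration of how accuracy and rate can be combined, the sketch below summarizes a block of timed component items as proportion correct and items answered per minute. The data structure and the particular efficiency index are illustrative assumptions, not PIAAC's operational scoring.

```python
from dataclasses import dataclass

@dataclass
class ComponentResponse:
    correct: bool
    seconds: float  # response time for one item

def efficiency_summary(responses):
    """Summarize accuracy and processing rate for a block of reading-component items."""
    n = len(responses)
    n_correct = sum(r.correct for r in responses)
    total_minutes = sum(r.seconds for r in responses) / 60.0
    return {
        "accuracy": n_correct / n,
        "items_per_minute": n / total_minutes,
        "correct_per_minute": n_correct / total_minutes,
    }

# Example: 10 sentence-processing items, one answered incorrectly, about 4 seconds each.
responses = [ComponentResponse(correct=(i != 3), seconds=4.0) for i in range(10)]
print(efficiency_summary(responses))  # accuracy 0.9, 15 items per minute
```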
Item Design
Word Meaning (Print Vocabulary)
In the reading component skills framework, we seek to determine whether individuals can identify in print those words that are in the everyday listening lexicon of average adult speakers of the language; that is, the emphasis is on the everyday words of the language. This would be the language used in the neighborhood or market. It would be the language of popular media such as newspapers, radio, and television. This everyday vocabulary is the most comparable across countries.
The Word Meaning (Print Vocabulary) measure presents an image and four word choices. The respondent must select the word choice that matches the picture. Target words are concrete, imageable nouns of common objects. Distractors are designed to tap similar semantic and/or orthographic features of the target word.
Sentence Processing
A variety of psychological studies of reading show that the sentence is a natural breakpoint in the reading of continuous text (e.g., Kintsch, 1998). A skilled reader will generally pause at the end of each sentence. A variety of operations are typically performed, including encoding the propositions of the sentence, making anaphoric inferences, relating meaning units to background knowledge and to previous memory of the passage as it unfolds, and deciding which meaning elements to hold in working memory.
The Sentence Processing measure presents sentences of increasing difficulty (as indexed by length) and asks the respondent to make a sensibility judgment about the sentence with respect to general knowledge about the world or about the internal logic of the sentence.
Passage Comprehension
In recent research, a silent reading assessment task design has gained empirical support as an indicator of basic reading fluency and comprehension. The design uses a forced-choice cloze paradigm: that is, a choice is given between a word that correctly completes a sentence in a passage and an option that is incorrect. The incorrect item is meant to be obviously wrong to a reader with some basic comprehension skills. The integration of decoding, word recognition, vocabulary, and sentence processing is required to construct the basic meaning of a short passage. Fluent, efficient performance on such a basic, integrated reading task is a building block for handling longer, more complex literacy texts and tasks.
The Passage Comprehension measure presents three passages, each with embedded cloze items. Passages were constructed based on the kinds of text types that adults typically encounter: narrative, persuasive, and expository.
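A forced-choice cloze item of this kind can be represented very simply. The sketch below borrows the “wife / month” choice from the sample passage shown later on this page; the data layout and scoring function are illustrative only, not the operational item format or scoring.

```python
# Each cloze gap pairs the sensible completion with an obviously wrong distractor.
cloze_items = [
    {
        "stem": "The price will go up by 20 percent starting next ___.",
        "choices": ("month", "wife"),
        "answer": "month",
    },
]

def score_cloze(responses, items):
    """Return the proportion of gaps completed with the sensible word."""
    correct = sum(resp == item["answer"] for resp, item in zip(responses, items))
    return correct / len(items)

print(score_cloze(["month"], cloze_items))  # 1.0
```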
For the complete Reading Components framework see:
PIAAC Reading Components: A Conceptual Framework
http://www.oecd-ilibrary.org/education/piaac-reading-component-a-conceptual-framework_220367414132
PIAAC has two modes of assessment: computer-administered and paper-and-pencil. Respondents who are not familiar with computers are given the paper-and-pencil version of the assessment. PIAAC measures reading components only in the paper mode.
Within the paper-based assessment (PBA), which is scored by expert scorers, the Reading Components domain of PIAAC is designed to provide information on the reading abilities of adults with limited English literacy skills. Reading components include 34 items on reading vocabulary, 22 items on understanding the literal meaning of sentences, and 44 items on comprehending multi-paragraph passages. These questions are designed to provide information about the skills of the target population (i.e., the lowest performers) and to capture data on timing and accuracy. In this way, the reading components measure respondents' accuracy and fluency (with fluency indexed by response time) in each of the three Reading Components sections.
Print Vocabulary questions present an image and four concrete word choices, and participants must select the word choice that matches the picture. Sentence Processing questions ask participants to make a sensibility judgment about whether a sentence makes sense. Passage Comprehension questions present passages with embedded word choices, and participants must select the word that correctly completes each sentence.
For Print Vocabulary items, respondents are asked to circle the word that matches the picture.
SOURCE: Literacy, Numeracy and Problem Solving in Technology-rich Environments: Framework for the OECD Survey of Adult Skills. Paris: OECD, 2012, pp. 28-29.
For Sentence Processing items, respondents are asked to make a sensibility judgment about a sentence with respect to the real world or the internal logic of the sentence. The respondent reads the sentence and circles YES if the sentence makes sense or NO if the sentence does not make sense.
Three girls ate the song. | YES | NO
The man drove the green car. | YES | NO
The lightest balloon floated in the bright sky. | YES | NO
A comfortable pillow is soft and rocky. | YES | NO
A person who is twenty years old is older than a person who is thirty years old. | YES | NO
SOURCE: Literacy, Numeracy and Problem Solving in Technology-rich Environments: Framework for the OECD Survey of Adult Skills. Paris: OECD, 2012, p. 30.
For Passage Comprehension items, respondents are asked to read a passage and circle the word that makes the sentence make sense.
To the editor: Yesterday, it was announced that the cost of riding the bus will increase. The price will go up by 20 percent starting next wife / month. As someone who rides the bus every day, I am upset by this foot / increase. I understand that the cost of gasoline / student has risen. I also understand that riders have to pay a fair price / snake for bus service. I am willing to pay a little more because I rely on the bus to get to object / work. But an increase / uncle of 20 percent is too much.
This increase is especially difficult to accept when you see the city’s plans to build a new sports stadium. The government will spend millions on this project even though we already have a science / stadium. If we delay the stadium, some of that money can be used to offset the increase in bus fares / views. Then, in a few years, we can decide if we really do need a new sports cloth / arena. Please let the city council know you care about this issue by attending the next public meeting / frames.
SOURCE: Literacy, Numeracy and Problem Solving in Technology-rich Environments: Framework for the OECD Survey of Adult Skills. Paris: OECD, 2012, p. 30.