In early childhood, we don’t often use the words test or testing; we tend to use the words assessment and progress monitoring. In this age of accountability, however, make no mistake: children are being tested.
Since the advent of “No Child Left Behind” (2001), educational policies in the United States have put more and more pressure on states and local educational agencies to document children’s performance. Pressure has risen to produce data on children’s progress toward state standards, to develop large-scale indicators of children’s growth, and to measure “readiness” for kindergarten.
This trend of testing children has only increased as a result of the Race to the Top Early Learning Challenge, or what some of us like to refer to as the “Race to Nowhere.”
As I type, states are spending millions (yes, millions) of dollars on Comprehensive Statewide Assessment Systems (in other words, they are developing more tests for use with younger and more vulnerable children).
I’ve spent the better part of the past two decades immersed in assessment work, and will be one of the first to acknowledge its complexity.
We don’t, however, solve complex problems by finding complex solutions; rather, we solve complex problems by finding the simplest solutions possible.
As amazing as technology and sophisticated systems are, they often end up performing no better than far simpler, yet often ignored, answers. Same holds true for life. We spend so much time looking for the fancy methodologies, systems and technologies. We assume they’ve got to be better than something that appears so simple. So we ignore the simple and waste tons of time and money building something that makes us feel better, but doesn’t beat the easy answer. And that is a huge mistake. ~ Jonathan Fields, Founder – Good Life Project
Further, we shouldn’t create “solutions” that place children at risk or do harm just to meet a misguided policy or rule.
From my vantage point, there are at least five mistakes being made at all levels: by some direct service providers, by some local education agencies, and, perhaps most concerning, by some state leaders and policymakers. These mistakes are putting children’s development and learning at risk, and I’m increasingly convinced they are causing harm:
- Use the wrong tool for the job
- Adopt a narrow view of early development
- Misuse assessment terms
- Engage in standardized testing
- Neglect to link key program elements
Next, I offer what I see as “simple solutions” to the mistakes being made in testing children in this age of accountability. You can also read other posts I’ve written on the topic of early childhood assessment.
Mistake #1: Use the wrong tool for the job
The golden rule, when it comes to testing (or as we say in early childhood, assessing), is to use tools for the purposes for which they were designed. Think of it like this: you want to eat a bowl of soup, and while you have many options for getting the soup to your mouth, only a few are effective and efficient. Broadly speaking, you have fingers, bread, chopsticks, spoons, forks, or even the ability to lift the bowl to your lips. Which ones are well suited for eating soup? Well, you might ask, what type of soup? Is it thick like chowder, or thin like wonton? If I pick a spoon, will a flat, spatula-type spoon work, or do I need a deep ladle? See, it’s not as easy as it appears, and we’re just trying to eat soup.
Imagine trying to pick an assessment tool when you have multiple purposes, multiple decisions to make, children from different backgrounds, children of different ages, children served in different locations (or not served), children with and without identified disabilities/abilities, and a workforce that may or may not have training on how to use the tool.
See, it’s complicated, and we’re only on Mistake #1!
- Be clear as to what your purpose is before you pick a tool, and understand who you will be assessing. Watch this 9:24 screencast that describes the major purposes of early childhood assessment.
- Recognize that most tools can’t be used for more than one purpose. In other words, asking an assessment tool to give you scores and information that can be used for accountability and to guide instruction is like asking your spoon to work for eating soup and washing your car. See LINKing Authentic Assessment and Early Childhood Intervention: Best Measure for Best Practices to learn more about which tools are designed for which purposes.
- Use a tool ONLY if it has been validated for the purpose(s) and population(s) for which it will be used. For example, don’t use a developmental and behavioral screening tool, designed and validated for determining whether additional testing is needed, as outcome data for accountability purposes. Similarly, don’t use a tool designed to compare a child to a normative sample under standardized procedures to plan individualized and personalized instruction. Lastly, keep in mind that tools themselves aren’t valid or reliable; rather, validity and reliability apply to the scores we obtain and the interpretations we make from them.
Mistake #2: Adopt a narrow view of early development
Early development is complicated, variable, and highly dependent upon a number of factors including, relationships with responsive adults, quality nutrition, limited exposure to toxins (including stress), and opportunities to play and explore.
When we approach development and learning from a narrow perspective, we lose sight of the richness that is inherent in our collective cultural and individual differences. Further, we need to remind ourselves (and decision makers) that it is atypical, NOT typical, for children to reach a given milestone at the same age.
The simple solutions provided next stem from a whole-child approach, in which we understand that learning happens only when complex skills are nurtured and mature simultaneously. Further, they highlight a focus on the strengths and needs of individual children rather than on broad group or age comparisons.
- Pay attention to all relevant early childhood theories (e.g., developmental, ecological, transactional, sociocultural) not just maturational theories. See previous blog post on my concern in using a maturational theory to drive decisions for young children, particularly, around “readiness.”
- Prioritize and measure outcomes that are functional and meaningful for the child in the context of their family and community. Don’t rely on a narrow set of skills to define a child’s abilities, disability, readiness, or potential. See article on “readiness” and how a broader view of development and learning is needed. Think personalized not standardized (to borrow from Sir Ken Robinson).
- When individual performance data are aggregated, the sum should be seen as only as valid as its parts. In other words, aggregated data on changes in children’s acquisition of developmental competencies, or changes in trajectory, are meaningless unless related to aggregated data about the programs and services in which children participate. There must be a functional interrelationship between each child’s pattern of progress and the type, quality, length, and intensity of their programs, and the type of teaching and care strategies used (Pretti-Frontczak, Bagnato, & Macy, 2011).
Mistake #3: Misuse assessment terms
On a daily basis, I witness early childhood professionals and state leadership teams making decisions from assessment results. In particular, I hear people talking about how they will evaluate teacher performance, whether or not children from a district are ready for Kindergarten, and if a program is meeting various quality indicators.
In these situations, I hear terms like correlated, standardized, universal, and formative; and to be honest, I often cringe, because terms are used with a great deal of authority and very little accuracy.
Again, don’t get me wrong, I know assessment terms can be confusing. Take for example, the different types of validity. You have construct, content, criterion, and concurrent…just to name a few.
That said, there are simple solutions that can lead to effective testing practices and use of assessment terms.
- Educate self and others on what different terms mean. Start by downloading our B2K Fact Sheet: Assessment Terms Primer that contains definitions of common assessment terms.
- Access experts, who likely live in your state, to help you understand what different terms mean. If you aren’t sure who to reach out to, contact me (Kristie.email@example.com) any time.
- Don’t claim a tool or assessment process is helping to guide instruction, if you don’t have evidence from those who use the tool that it actually meets the claim.
Mistake #4: Engage in standardized testing
One term that is misused the most, or maybe misguides practice the most, is the term standardized. When countries determined that educational reform was in order, the systems that are thriving today chose a personalized learning approach. In contrast, the US, which is not thriving on many fronts, including education, chose a standardized approach.
My theory is that the root of our love affair with standardization is distrust and fear, particularly of things we don’t understand or things we want to control. In such instances, we tend to seek and rely upon practices we believe are more objective, and thus seemingly more useful in helping us gain knowledge and secure control. This perspective has led many to believe that standardized is better, is trustworthy, and is the truth! In fact, we know that truth varies based upon perspective, that opinions always influence findings, that outliers aren’t always a bad thing, and that what we should aim for is fidelity and systematic decision making, not standardization.
As it applies to standardized testing, I’ll highlight three major concerns.
- First, each school, district, county or parish, state, and geographical area comprises diverse groups of children. So, when we implement standardized tests on a national scale with divergent populations, the scores we get are practically meaningless. It is impossible to compare all of these different children, schools, and geographic areas on standardized testing results in a way that is equitable. Further, the more one aims to compare children with differing abilities to a normative group, the less valid and trustworthy the conclusions.
- Second, each school, district, state, and geographic area is vastly different in terms of funding, resources, type of professional development delivered, the adopted curriculum, teacher training, types of programs, local/state rules, etc. Thus, it is impossible to tell what is actually responsible for any difference in scores we see among children.
- Third, particularly in the case of young children, standardized testing practices are in direct conflict with developmentally appropriate practices and are not sensitive to individual patterns of progress, especially for those with identified disabilities and functional limitations.
- Understand the pitfalls of standardized testing, particularly with young children. Here are a few resources on the issue:
- NAEYC’s Response: Standardized Testing in Kindergarten
- ACEI Position Paper on Standardized Testing
- Resources from Dr. Sam Meisels, early childhood assessment expert
- Engage in authentic assessment practices where familiar people gather information in familiar settings, using familiar objects, and asking children/adults to do familiar things.
- Follow early childhood recommended assessment practices, such as the Division for Early Childhood’s Recommended Practices.
Mistake #5: Neglect to link program elements
Just as children are complex, so, too, are the programs that serve them. Early childhood programs vary in terms of staff, overarching mission, assessment tools used, sources that drive what children should be learning, the adopted curriculum, how progress and performance are measured over time, and the leadership and support providers receive.
Further, each program is governed by different licensing rules, requires different teacher certifications, has drastically different budgets, and must follow highly variable rules in terms of who receives services and in what amounts.
While this diversity and variability add to the complexity, I don’t see them as the problem. Rather, the underlying problem is that program elements, specifically assessment tools, standards, and curricula, have not been aligned or linked. Key elements need to be linked in order to give programs clear direction and, ultimately, aid in data interpretation and decision-making.
Take, for example, a teacher in a Head Start program: they are responsible for teaching performance indicators from the following sources, none of which were created by the same group or for the same reason, nor are all grounded in research.
- The Head Start Child Development and Early Learning Framework (U. S. Department of Health and Human Services, Administration for Children and Families, Office of Head Start [HHS/ACF/OHS], 2010)
- Items from a curriculum-based assessment (e.g., Teaching Strategies GOLD™ [GOLD]: Heroman, Burts, Berke, & Bickart, 2010)
- State early learning standards
- District “readiness” checklists
- Milestones found in the developmentally appropriate practice guidelines
- Children’s individualized education plan (IEP) goals and objectives
- Adopt and implement a curriculum framework designed to link assessment to instructional practices; serve as a foundation for curriculum design in blended early childhood programs; and provide a process for decision making for teachers who teach diverse groups of children.
- Don’t confuse stakeholders and providers about expectations by acting as though performance indicators across adopted/mandated tools, state early learning standards, and/or a program’s curriculum align. Stay tuned for a future blog post on the lack of alignment between tools, standards, and curricula, particularly as it relates to how many letters preschoolers should be able to identify by Kindergarten.
- Engage in a valid process of aligning or linking key practices such as a program’s curriculum, and/or state early learning standards.
Overall, when we think of early childhood assessment, we should think of young children who are engaged in play, engaged in creative exploration, and engaged in inquiry. Information regarding their knowledge, skills, and behaviors, used for any purpose, should be gathered during these authentic situations. Children should not be subjected to testing demands and situations that are timed, contrived, require practice sessions, or cause stress and lead to a disinterest in learning.
Summary of resources mentioned in the post:
- Screencast on the purposes of EC assessment
- Division for Early Childhood’s Recommended Practices
- National Association for the Education of Young Children
- LINKing Authentic Assessment and ECI: Best Measure for Best Practices
- Assessing Young Children in Inclusive Environments: The Blended Practices Approach
- Maryland Learning Links – additional blog posts on early childhood assessment
- Young Exceptional Children article on readiness
- Accountability in early childhood: No easy answers
- Why Giving Standardized Tests to Young Children is “Really Dumb”
Pretti-Frontczak, K., Bagnato, S., & Macy, M. (2011). Data driven decision-making to plan programs and promote performance in early childhood intervention: Applying best professional practice standards. In C. Groark (Series Ed.) & S. P. Maude (Vol. Ed.), Early childhood intervention: Shaping the future for children with special needs and their families: Vol. 2 (pp. 55–80). Santa Barbara, CA: ABC-CLIO/Praeger.