
Sometimes things grab my brain, grip it tight and refuse to let go. I try to go to class, eat my lunch and hang out with my friends, but the thing holds fast. The case of a prehistoric amputated foot was one such thing, launching me into a spiral of determining when I stopped being a ‘child’ and what ‘childhood’ even means. 

On Sept. 7, 2022, there was a major finding in the world of archaeology and medical history. In a cave in Indonesian Borneo, archaeologists found the grave of an individual whose foot had been successfully amputated 31,000 years ago, during the Stone Age. Previously, the earliest evidence of a successful surgical amputation came from the grave of a Neolithic farmer who lived 7,000 years ago (a forearm, partially healed).

As a history major and archaeology nerd, I couldn’t get enough of this discovery. On the night of Sept. 7, only a few published articles detailed the groundbreaking remains, and I read each one of them, hoping to find some missing piece about how the individual had died. I clicked on every article I could find, some written in other languages, and one nearly gave my computer a virus. One article, from WKOW, a news station based in Madison, Wisconsin, caught my attention.

While most articles focused on the estimated age at which the individual received the operation, using descriptions like “young” and “child,” WKOW went a step further. They identified the individual’s age at death — approximately 19-20 years old — and still called the individual a child. A New Scientist article from the same date used the word “child” three times in the first three sentences of the page.

The coverage’s declaration of childhood unnerved me. The 19 to 21-year-old was wholly picked up from their prehistoric context and dropped into 2022 North America’s definition of childhood. In their prehistoric context, by 21, the individual could have had many children themself.

Perhaps the news station made a mistake by calling a 19 to 21-year-old a child, or perhaps they hoped to draw on their readers’ sympathy and attention. Still, at 21 years and 8 months old, it felt jarring — and uncomfortably wistful — for me and my fellow 21-year-olds to be called children.

The Encyclopedia Britannica defines childhood as ending at age 12 or 13, but our cultural, medical and legal definitions of childhood are not consistent. Culturally in the U.S., we define the end of childhood as the end of our dependency on our parents. Medically, the brain does not reach ‘adulthood’ and finish developing until around age 25. Legally, the U.S. considers 18-year-olds adults but withholds the full privileges of adulthood (like being able to legally drink alcohol or check in to your own hotel room).

Even long after legal adulthood is reached, policies further confuse the definition of adulthood. At 25 years old, you no longer have to pay an extra “young-driver” fee to rent a car, and it is your last year of coverage on your parents’ health insurance.

Our cultural, medical and legal institutions — like WKOW with its 19 to 21-year-old child — disagree about what constitutes a child.

Is being a child the period before you can legally work? Is it the period before your first truly rational 25-year-old thought, snapped into place after the clock strikes midnight? Do you cease being a child the moment you live independently from those who were responsible for your health, safety and happiness? Is being a child simply about the act of being cared for?

Does the contested definition of a child have a concrete, “correct” answer, or is the confusion indicative of an ever-changing definition?

The concept of childhood has existed in many different forms throughout recent history. Some academics, such as renowned French Medieval historian Philippe Ariès, believed that it was only after the Late Middle Ages that children were depicted and described as something other than “small adults,” and that “childhood” was a modern concept.

Ariès posited that childhood was a social construct rather than an established biological life stage. He believed that applying modern concepts of “childhood” was ahistorical in a context where Medieval children experienced death and hardship like adults. Child labor began as soon as one could walk and wield a knife.

During the Enlightenment and the years that followed, philosophers defined children differently. Puritan doctrine held that children were inherently evil, born of sin. In his writing, John Locke argued against that prevailing belief and proposed that children are born blank slates, neither good nor bad, shaped instead by their environment. However, his philosophical construction of children puts pressure on how a child is raised. Childhood transforms from the Medieval period of “small adults” into an important developmental stage.

It was common to see children working in agriculture prior to industrialization, but new industry bred new dangers for young children. Victorian-era children were employed in workhouses, coal mines, textile mills and shrimp farms, and were often in servitude. They wielded sharp knives, squeezed into tiny spaces to fix big machines and suffered illness, serious injury and the loss of limbs and eyesight; if not death, then a sharply decreased chance of reaching old age.

Children were indispensable to factory production during wartime, but after World War I, as employment, trust in government and general living conditions plummeted during the Great Depression, President Franklin Roosevelt’s New Deal replaced children in factory positions with men in need of work. The 1930s saw further workforce change with the National Industrial Recovery Act of 1933 and the Fair Labor Standards Act of 1938: children under the age of sixteen could no longer work in mining or manufacturing, a national minimum wage was set and legal restrictions were put on child labor. Informed by this new context, the U.S. government defined childhood for the 20th century.

As I researched how the definition of childhood has shifted over time, it became increasingly clear to me that “childhood” is in part a social construct, informed by both historical and cultural context.

As a young adult who is not often called a child but still feels like one, I often think about the boundary between adult and child. Modern science has declared that the end of childhood can be marked by the emergence of new hormones, as early as age 8 or 9. But other markers, like the first time you go to the mall without your parents, the first time you drive and the first time you get your period, don’t point to any single, final end of childhood.

I may not be a child anymore on paper, as I drive, drink and vote (not all at once), but I remember being called one.  

The title of child benefits those who bestow it: factory operators looking for cheap labor, the government hoping to recover losses from the Great Depression, your parents, who believe that you are making all the wrong decisions.

There is only so much child labor history I can read before I remember that, before industrialization, children worked more often than not. Maybe the labor laws would not have changed if the Great Depression hadn’t forced powerful hands. Maybe if governments had gained jurisdiction over workers’ wages and hours earlier, the age of factory workers would not have become a factor in the United States’ economic recovery strategy during the Great Depression.

Maybe, regardless of age or legal definition, that Stone Age 19 to 21-year-old “child” with the amputated foot was a parent caring for their own children. 

We put a lot of pressure on “childhood” as a critical juncture during which children need special protection and care. But as the definition of childhood reveals itself to be unstable, perhaps the better question is not “what is a child?” but rather, “who is deserving of care and love?”

Regardless of age or life stage, the Stone Age individual exemplifies how healed and held everyone should be.

Statement Contributor Giselle Mills can be reached at gimills@umich.edu.