Societal Progress and the Externalization of Information

Is societal progress defined by its ability to externalize information?

Before spoken language, information lived in a person's head, and when that person died, it was lost.
Then came (the beginnings of) spoken language; once information was external to a single mind, it could be preserved.
Speech is ephemeral, so the next step was to invent something that lasted: writing.
Next came technologies to increase the distributability of information: printing presses and telegraphs.
Finally, in the modern age, we have even externalized the manipulation of information through the use of computers.
What is the next step but to externalize the very essence of learning itself?

One of my friends posted a response:
Each step of that externalisation has its own costs, and I would argue that diminishing returns kick in and result in an optimal level, though that would change depending on available resources.

To which I replied...

I'm not sure I agree. What costs are those? I can see few downsides to the introduction of spoken language (over whatever came before), of writing over speech, or of externalizing information-manipulation. From a cultural perspective, I guess you're referring to the "unknowns" of ML. Just as computers allowed the execution of physics/engineering analyses too complex to be done by hand, ML allows the derivation of meaning from information too large to be understood by a human.

Do we "lose" something? Sure, the same way no one here can recite from memory their family lineage going back 20 generations, though our ancestors without written language probably could. In the future (and probably already in the present), not outsourcing your thoughts is going to be strange. Or maybe you're looking at the physical cost: producing paper is more expensive than speaking, and producing a computer is more expensive than producing paper. Extrapolating, yes, at some point the "next stage" will be too expensive. But I would say that each step in information-externalisation comes with associated technologies. Large-scale production of paper was only possible after written language. Large-scale production of computers was probably only possible after automation. It stands to reason, then, that perhaps large-scale deployment of ML is awaiting its own enabling technology?

Resources aren't a completely fixed quantity, even in a closed system like Earth. We're digging up "depleted" Roman-era mines because our technology allows extraction of materials that were unusable to them. If ML enables the invention of fusion reactors, or improves the efficiency of solar collectors (or whatever else I can't conceive of), then it will "expand" our resources. Back in the day, people worried about firewood shortages. Those shortages happened, and then humans moved on to coal. From coal to oil, and from oil to (in many places) nuclear. We no longer care about firewood shortages. The opposite perspective is that there are absolute limits (which there certainly are); that view is presented in https://en.wikipedia.org/wiki/The_Limits_to_Growth

I guess I'm a techno-optimist eh?!