Cats and Theories
a blog by coticheque

Brain, knowledge and the Internet

A few weeks ago I finished ‘The Shallows’ by Nicholas Carr. Legitimately, this is one of the most horrifying books I’ve ever read. It focuses on the ongoing transitions in human patterns of thinking that we often fail to notice: the subtle differences between information and knowledge, scanning and learning, browsing and understanding. Are all media equally good for retaining knowledge, or are some inherently harmful to the very system that makes its acquisition possible?

The first part of the book focuses on the idea of brain neuroplasticity. Since the neurons in the human brain are driven by the principle of survival of the busiest, the brain rewires itself to accommodate changes in technologies, and we can be pretty sure that the use of the Internet triggers a very different set of neurons than reading a book does. The second part focuses on the conditions that make the acquisition of knowledge possible and allow us to transfer information from short-term memory to long-term memory. In the end, the book turns to the organization of information on the Internet – which aggravates the situation even further. Tech corporations such as Google and Facebook (alright, Alphabet and Meta), with advertising revenue models optimized to deliver the most interesting and relevant information to users, may neglect the unintended consequences of making information ever more fragmented and easier to consume (‘what unimaginable evil must be hiding in such a happy place as the Googleplex’).

Technological determinism, books and neuroplasticity

The theory of technological determinism states that technologies shape the course of human history. Maps made people think in abstract terms. Clocks made time non-continuous. Moreover, ‘once technologized, the world cannot be de-technologized’. We cannot go back to a world without clocks, maps, print and computers (is this the Heideggerian critique of technology?).

The dependence on technology is made even more serious by the neuroplasticity of the brain. In short, the human brain is much more flexible than we think. The nature of neurons ensures the survival of the busiest. For this reason, violinists have an enlarged sensory cortex that processes signals from the left hand, and taxi drivers an enlarged posterior hippocampus for spatial processing. Neurons want to receive input – and rewire themselves to accommodate whatever input is most available. One positive consequence is that consistent stimulation of certain neural activity can make complex tasks less cognitively demanding over time. Neuroplasticity is also one reason why it’s hard to create a strong AI. The mind is a system of ‘organization and causation’ – as much as the brain affects the mind, the mind equally affects the brain and its structure. To replicate the same relation, we would need to build a computer that rewires its hardware at the command of its software. Interestingly, neuroplasticity also bridges the gap between genetic determinism and free will, but that’s a different story.

That being said, by rewiring neural connections, technologies change the way we think – which can be easily illustrated by the invention of books. The shift from oral to literary culture caused a transition to more logical and analytical thinking. Oral traditions not only required knowledge to be formed and passed on through rhythmical structures, poetry, cliché phrases and proverbs for the sake of memorization, but ‘preliterate people must have also enjoyed a particularly intense sensual involvement with the world’ – ‘living in a world of substance rather than symbol’. Writing and reading caused detachment from this emotional involvement. Even Socrates didn’t approve of writing, while Plato did (being the first philosopher-writer), which is well illustrated in the Phaedrus – a dialogue dwelling on skepticism towards writing. Isn’t this perhaps the Heideggerian abandonment of Being that had supposedly started with Plato?

On a good note, the shift to literary culture made people develop new competences – such as the capacity for sustained attention, deep concentration and mental discipline. This is highly unnatural, as human brains are wired for the opposite: to be easily distracted, constantly scanning the surroundings for predators with fast-paced shifts in focus. The capacity for solitary contemplation and reflection, as well as intense concentration, was largely shaped by the invention of the writing script. That being said, historically, reading books has always been an activity reserved for a small social elite. In the 21st century, it seems the world is going back to this arrangement. ‘The era of mass book reading was a brief anomaly’.

What came to replace books as the storage of knowledge is the Internet. But is the new format suitable for this purpose? What is guaranteed is that browsing the Internet relies on a very different set of neurons than reading a book does.

Information acquisition and retention

Even though the Internet is full of information, it’s perhaps the least useful tool when it comes to transforming information into knowledge.

Essentially, humans have two types of memory: working memory and long-term memory. Long-term memory consists of grasped concepts, schemas and patterns of thinking. These are the tools of comprehension and understanding. The transfer of information from working memory to long-term memory requires conscious effort, repetition and concentration, and is often as cumbersome as ‘filling a bathtub with a thimble’. The Internet, with its infinite data feeds, easily creates an excessive cognitive load that makes our hypothetical thimble overflow and prevents anything in working memory from being digested and absorbed into long-term memory.

To say the same in more scientific terms: in order to be retained, information has to be digested, internalized, or ‘synthesized’. As already mentioned, long-term memory is based on understanding rather than the memorization of separate facts. It forms conceptual schemas by ‘organizing bits of information into patterns of knowledge’. For this process to work, new memories have to be consolidated through the synthesis of new proteins (causing not only biochemical but also anatomical changes in the brain). The entire process is largely supported by repetition and rehearsal, and takes substantial time. The following factors are crucial to it.

1) Context and connections

Intelligence is not about information, but mainly about connections. Long-term memory consolidation involves the formation of synaptic connections to existing neurons: context is therefore crucial, as new data has to be incorporated into the existing structure of knowledge stored in the brain. In other words, information has to be processed and ‘meaningfully and systematically associated with knowledge already established in memory’ – as Eric Kandel describes it.

The Internet, on the contrary, with its infinite data feeds, often fragments information and places it out of context. ‘We don’t see the forest when we search the Web. We don’t even see the trees. We see twigs and leaves.’ Google search doesn’t help either – after scanning a source for keywords, there’s rarely any incentive to evaluate the text as a whole. ‘The strip-mining of relevant content replaces the slow excavation of meaning’.

2) Time for reflection

The part of the brain crucial for consolidating knowledge is the hippocampus: it helps to stabilize memory in the cortex, which may take anywhere from a few days to many years (as was shown by the case of Henry Molaison, whose hippocampus was removed to cure his seizures). The hippocampus is especially active during sleep.

Internet feeds, with their infinite supply of new data, deprive the brain of the time needed for the reflection, contemplation and mental connections crucial for memory consolidation. As a result, the facts, ideas and experiences encountered on the Internet are remembered only for as long as the neurons storing them maintain their electric charge.

Research has shown that spending time in nature and other quiet places improves attentiveness and cognition. Alas, the ‘landscape of the psyche’ of the modern man has changed, and people have lost the space needed for solitary, single-minded concentration in favor of juggling and multitasking in an ‘ecosystem of interruption technologies’. The Internet divides attention and makes sustained concentration almost impossible. Perhaps there’s a downtrend in the number of writers and authors because people are losing the ability to focus and contemplate things long enough to accumulate the reflections that could become the source of a novel?

Google is not helping it either

Originally, Google was built as a mere sorting algorithm. It assumed that the relevance of a web page is determined by two factors: the number of references to it and the popularity of each page that links to it. Since then, the number of ranking factors has increased more than tenfold, making the information we search for and find ever more relevant, interesting and tailor-made for our needs. Legitimately, the Internet has become ‘the most interesting thing in the world’.
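The original two-factor idea described above is essentially the PageRank algorithm: a page matters if many pages link to it, and if the linking pages themselves matter. A minimal sketch, assuming a toy three-page web and the commonly cited damping factor of 0.85 (both illustrative choices, not from the book):

```python
# Minimal PageRank sketch: a page's score depends on how many pages
# link to it and on the scores (popularity) of those linking pages.
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with equal scores
    for _ in range(iterations):
        # every page keeps a small base score (the 'random surfer' teleport)
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                # dangling page: spread its rank evenly across all pages
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                # a page passes its own popularity on to the pages it links to
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Toy web of three pages: both A and B link to C, so C ranks highest.
toy_web = {"A": ["C"], "B": ["C"], "C": ["A"]}
scores = pagerank(toy_web)
```

In this toy graph C ends up with the highest score: it is both the most referenced page and referenced by the relatively popular A, which is exactly the two-factor relevance the original algorithm captured.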

However, it’s worth keeping in mind that the current business model of Google is not the storing of knowledge, but the selling and distribution of online ads. The original services, such as search for text, images and video, are merely complementary – meaning that the information available through Google services is just a secondary thing that facilitates the placement of ads. For Google, as well as for social networks, knowledge is just data to be mined to increase the throughput of users on a webpage (and get more revenue from ads). Due to their commercial nature, Internet websites and social networks essentially turn knowledge into content – and make this content available to generate traffic.

What is worse is that by suggesting and delivering information tailored to the interests of each user, Google and other social media platforms create an information overload: with the Internet legitimately being ‘the most interesting thing in the world’, we’re tempted to consume more information than our brain can possibly handle.

Knowing as a collective process

Even though the Internet is based on links and connections, these connections are not our own. Therefore, to remain intelligent, we have no choice but to merge with the intelligence of the network. For better or worse, this is what humans are actually mentally predisposed for: the coordination of thoughts and actions within larger social groups. Perhaps the Internet is exactly this kind of emerging group intelligence – one that can no longer be associated with any particular individual, but serves the group as a whole. An outsourced group intelligence, where you can trust the result but don’t have access to the underlying algorithm.

Marshall McLuhan writes: ‘Our isolated, fragmented selves, locked for centuries in the private reading of printed pages, were becoming whole again, merging into the global equivalent of a tribal village. We were approaching ‘the technological simulation of consciousness, when the creative process of knowing will be collectively and corporately extended to the whole of human society.’

Alas, by making knowledge external, the Internet also deteriorates the human ability for ‘critical thinking, inductive analysis and reflection’, as well as ‘logical reasoning, abstract thought, problem solving and creativity’ on the side of the individual. As the book’s author writes, the ‘Net diminishes primary kind of knowledge: the ability to know, in depth, a subject for ourselves, to construct within our minds the rich and idiosyncratic set of connections that give rise to a singular intelligence’. Idiosyncratic knowledge and judgements get lost.

This is well illustrated by the fact that modern-day teenagers seem to be quite smart – they’re great at finding trustworthy information online through quick scanning, pattern recognition and fast context evaluation. They have better judgement about what’s credible, don’t easily believe what they’re told, and thus don’t fall victim to conspiracies – unlike their parents. On the other hand, teenagers often underperform when it comes to logical reasoning and arguing for what makes particular facts true. Barely anyone can formulate a line of argumentation properly, as most opinions are picked up from the Internet. Therefore, in order to evaluate the impact of the Internet on intelligence, we first need to define intelligence itself: is it the ability to merely choose the right answer, or the ability to logically reason, analyze and synthesize information in order to come up with it in the first place? As Patricia Greenfield writes: ‘the Net is making us smarter, only if we define intelligence by the Net’s own standards’.

Once again, this makes me think about Heidegger and his idea that the essential feature of truth is its discoverability. Alas, with a distributed system of knowledge and the ‘technological simulation of consciousness’, this condition no longer holds – truth is no longer something that ought to be discovered personally. Knowledge is not to be obtained individually. On the contrary, it now belongs to the realm of shared distributed processing.

I write about cats and theories. About the blog »