Intelligence…But Not As We Know It

Could artificial intelligence be an existential threat to humanity?

Forget general intelligence, let alone superintelligence. Machine learning, the narrowest form of artificial intelligence, is already an active agent in culture, politics, trade, philosophy and other areas of human thinking and behaviour.

Here’s why.

Social media platforms use machine learning to select messages and send them to users. Their algorithms analyse the clicks in response to each message: does the recipient read it; do they reply and forward; do they develop the content in comments and new messages? From the text and pictures in each message, algorithms select and send the user other content that generates responses. They also map and analyse the networks of users who respond to common themes. Most people know this from the connections, products and ideas which the platform serves up to them. This is machine learning, the most basic form of AI. No human programmer needs to tell it about model railways, nor that ‘George’ is an enthusiast, nor that most model railway enthusiasts are also keen on photography. It has all the data it needs to make these associations, and it uses them to get more clicks from George and to profile his network. This basic form of artificial intelligence can be both impressive and annoying. What it is doing is maximising clicks, and that is all that the writers of Facebook’s EdgeRank software (for example) intended, back in 2011.
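
To make the mechanism concrete, here is a minimal sketch in Python of a click-maximiser. Everything in it is hypothetical (the class, the topics, even ‘george’), and it is hugely simplified compared with a production system such as EdgeRank, but the principle is the same: nothing about any topic is programmed in; the system simply serves more of whatever each user has clicked before, with a little random exploration mixed in.

```python
import random
from collections import defaultdict

# Hypothetical sketch of a click-maximiser. It knows nothing about
# model railways or photography; it only tracks which topics each
# user has clicked before and serves more of the same.
class ClickMaximiser:
    def __init__(self, topics, epsilon=0.1):
        self.topics = topics
        self.epsilon = epsilon  # how often to explore a random topic
        self.clicks = defaultdict(lambda: defaultdict(int))  # user -> topic -> clicks
        self.shown = defaultdict(lambda: defaultdict(int))   # user -> topic -> impressions

    def choose_topic(self, user):
        # Occasionally explore; otherwise exploit whichever topic
        # has the best click rate for this user so far.
        if random.random() < self.epsilon:
            return random.choice(self.topics)
        def rate(topic):
            shown = self.shown[user][topic]
            return self.clicks[user][topic] / shown if shown else 0.0
        return max(self.topics, key=rate)

    def record(self, user, topic, clicked):
        self.shown[user][topic] += 1
        if clicked:
            self.clicks[user][topic] += 1

feed = ClickMaximiser(["model_railways", "photography", "news", "sport"])
topic = feed.choose_topic("george")
feed.record("george", topic, clicked=True)  # George clicked: serve more of this
```

Pool those click records across a whole network of similar users and the association ‘railway enthusiasts also like photography’ emerges from the data, with no rule ever written down.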

Phew, so there is nothing to worry about. And to think I believed AI was genuinely clever!

No, it is not clever in the human sense; it doesn’t have the intelligence of a slime mold (of which, more in the final paragraph). The effect of click-maximisation is nevertheless fascinating. As already stated, algorithms associate certain text and pictures with increasing clicks, and they also identify the platform-users who respond most. Unsurprisingly, those most likely to respond are people easily prompted or triggered by the content. These fall into two groups: those who like it and those who actively dislike it. Those in the middle are neutral and do not respond. As data accumulates, the algorithms selectively target individuals who are likely to respond, and thus increase the platform’s user-network and its engagement. So now, let’s consider who those users are.
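
The selective effect is easy to demonstrate. In the toy simulation below (entirely hypothetical; no real platform works this crudely), each user has a polarisation score and the chance of responding to charged content rises with that score. The ‘algorithm’ does nothing except re-target whoever responded last time, yet the neutral middle falls away on its own and the surviving audience is dominated by the extremes.

```python
import random

# Toy model: polarisation scores are uniform on [-1, 1]; the chance
# of responding to a charged post rises with the score's magnitude,
# so neutral users near zero rarely respond at all.
random.seed(1)
users = [random.uniform(-1, 1) for _ in range(10_000)]

def responds(polarisation):
    return random.random() < abs(polarisation)

# The 'algorithm' simply re-targets whoever responded last round.
audience = users
for _ in range(5):
    audience = [u for u in audience if responds(u)]

mean_extremity = sum(abs(u) for u in audience) / len(audience)
print(f"{len(audience)} users still engaged, "
      f"mean |polarisation| = {mean_extremity:.2f}")
# The mean rises from about 0.5 at the start towards roughly 0.86:
# the engaged audience has selected itself from the extremes.
```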

Some people are driven to over-communicate. These include politicians, advertisers, journalists, campaigners and influencers. Whether professional or amateur, they study how social media systems operate, and they tailor their messages accordingly. If they do not do it themselves, they can engage one of many consultancies to do it for them. They, their audience and the platform’s algorithms form a system of recursive feedback loops. It is a cybernetic system that shapes human culture, and it includes an active agent that is non-human.

But can it be ‘active’ if it does not have purpose or intent and is not human?

Social media algorithms have neither purpose nor intent (other than maximising clicks) and they are not human. Their activities, however, do mimic some human features or traits (sketched in code after the list):

  • Targeting platform-users who are polarized and easily triggered; and, by doing so, ‘manipulating’ them into antagonistic groups.
  • Preferring content that provokes the largest responses, over neutral or nuanced messages, and thus selecting and promoting divisive wedge-issues.
  • Amplifying content within polarized groups and, by doing so, inflaming emotions.
  • Lacking filters on social norms, safety and empathy, they may ‘impulsively’ send offensive or dangerous content to vulnerable people.
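
Reduced to a caricature (this is not any platform’s actual code), the problem is visible in the objective itself. A ranking function that counts every reaction as engagement cannot tell approval from outrage, and nothing in it rewards accuracy, nuance, safety or empathy:

```python
# Hypothetical ranking function, stripped to its essentials: a like
# and an angry reaction are worth exactly the same, because both
# are engagement, and no term in the score penalises harm.
def engagement_score(post):
    reactions = post["likes"] + post["angry"] + post["shares"] + post["comments"]
    return reactions / max(post["impressions"], 1)

posts = [
    {"id": "nuanced_analysis", "likes": 40, "angry": 2, "shares": 5,
     "comments": 8, "impressions": 1000},
    {"id": "wedge_issue_rant", "likes": 120, "angry": 300, "shares": 90,
     "comments": 250, "impressions": 1000},
]

# The divisive post tops the ranking precisely because it enrages
# a large part of its audience.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(post["id"], round(engagement_score(post), 2))
```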

Therefore, AI has to be called ‘active’. The algorithms are programmed only to maximise engagement through clicks; everything follows from that. The fact that it does this without purpose and intent, and without a human form of intelligence, does not mean it is not an intelligent agent: it works in the same way as Darwinian evolution. Most people know that complex organisms come into being through recursive feedback (sometimes called ‘survival of the fittest’, a phrase coined by Herbert Spencer rather than Darwin), not through purpose or intent, and that is what is happening with this basic form of artificial intelligence. We casually, if wrongly, talk about Darwinian evolution as though it is purpose-driven when clearly it is not.
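
The analogy can be made concrete in a few lines. In this illustrative selection loop (all numbers invented) there is no goal, plan or intent anywhere in the code: candidate messages vary at random, and the ones that happen to draw the most clicks survive to seed the next generation. Adaptation emerges from recursive feedback alone.

```python
import random

# Illustrative 'evolution by click': variation plus selective
# feedback, with no representation of a goal anywhere in the code.
random.seed(0)
AUDIENCE_TASTE = [0.1, 0.9, 0.8, 0.2]   # hidden preferences (invented)

def click_rate(message):
    # Messages closer to the audience's taste draw more clicks.
    return -sum((m - t) ** 2 for m, t in zip(message, AUDIENCE_TASTE))

def mutate(message):
    return [m + random.gauss(0, 0.05) for m in message]

population = [[random.random() for _ in range(4)] for _ in range(20)]
for _ in range(200):
    # Keep the best-clicking half, refill with mutated copies.
    population.sort(key=click_rate, reverse=True)
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

print([round(m, 2) for m in population[0]])  # drifts towards AUDIENCE_TASTE
```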

Back to the question in the sub-heading: “Could artificial intelligence be an existential threat to humanity?” The ability of social media programs to self-develop through recursive feedback, their tendency to exploit and amplify social divisions, and their active agency in culture, politics, trade, philosophy and all other areas of human thinking and behaviour mean the answer has to be “Yes, it could”.

Today’s simple, machine-learning algorithms are already active agents, with manipulative, divisive, impulsive and inflammatory outcomes that resemble the dark-triad personality traits (psychopathy, narcissism and Machiavellianism). Their effect is already evident in the behaviours and impact of many, including the president of the USA and the world’s richest man. It is a short step, and an easy argument, to say that they, we, and the owners of the social media platforms themselves are already involved in a relationship with active, manipulative, non-human agents that are simply doing what their designers intended: maximising clicks using just a basic version of machine learning.

Slime Mold, the subject of the header picture.

Slime molds were mentioned above (“it doesn’t have the intelligence of a slime mold”), and there was a reason for this. Slime molds have a rudimentary form of intelligence, shown by their ability to locate food and build tubes along the most efficient pathways to it. That is not all. They exhibit forms of habituation, learning what is safe and what is not, and of long-term memory, repeating patterns that had been ‘learned’ prior to a dormant state. They can even transfer memory to other slime molds. It is not known how they do it. Slime molds have no brain or nervous system, so these abilities indicate that some forms of what we call intelligence do not require the central processing units of neuroscience and computing. We need to review our thinking about what intelligence is before making assumptions about what it can do.