This month marks the 75th anniversary of
Alan Turing's 1950 paper "Computing Machinery and Intelligence".
Turing is considered the father of AI. He starts the paper with
"I propose to consider the question, 'Can machines think?'",
and introduces what is now called the Turing Test of machine intelligence.
At
university my tutor was Richard
Grimsdale, who built the first ever transistorised computer.
Grimsdale's tutor was Alan Turing (making me a grand-tutee of Turing).
I (coincidentally) went on to
work in the department in Manchester where Turing worked and wrote that
paper.
I worked on the 5th computer in the line of computers Turing also worked on, the MU5.
Let's talk about I
Let's talk about I
In particular, the I in "AI".
What we currently refer to as "AI", is not intelligent in the way we mean the word.
The current AI is clever use of language, so that we think that it is intelligent.
Which is why we see such blunders, and can't trust what it produces, but must always double check.
The new arms race is on for generalised intelligence, when there really is an I in AI.
When will it happen?
What will happen when computers are more intelligent than us?


Born in 1880, a middle child in a family of 20(!) children.
1880: nearly no modern technologies; only trains and photography. No electricity.
In such a large household each child had a task, and it was his to ensure that the oil lamps were filled.
It must have been indeed an exciting time, when light became something you could switch on and off.
Trains and photography were paradigm shifts: they change the way that you think about and interact with the world.
But they often replace existing ways of doing things, taking companies with them.
There are lots of examples of paradigm shifts:
Who would have
thought that Kodak didn't see this coming?
My grandfather was born in a world of only two modern technologies, trains and photography, but in his life of nearly a hundred years, he saw vast numbers of paradigm shifts:
electricity, telephone, lifts, central heating, cars, film, radio, television, recorded sound, flight, electronic money, computers, space travel, ...
the list is enormous.
We are still seeing new shifts:
internet, mobile telephones, GPS, internet-connected watches, cheap computers that can understand and talk back, self-driving cars, ...
Does that mean that paradigm shifts are happening faster and faster?
Yes, it does.

Kurzweil did an investigation, by asking representatives of many different disciplines to identify the paradigm shifts that had happened in their discipline and when. We're talking here of time scales of tens of thousands of years for some disciplines.
He discovered that paradigm shifts are increasing at an exponential rate!
If they happened once every 100 years, then they happened every 50 years, then every 25 years, and so on.
Year Time to next =Days 0 100 36500
Year Time to next =Days 0 100 36500 100 50 18250
Year Time to next =Days 0 100 36500 100 50 18250 150 25 9125
Year Time to next =Days 0 100 36500 100 50 18250 150 25 9125 175 12.5 4562.5
Year Time to next =Days 0 100 36500 100 50 18250 150 25 9125 175 12.5 4562.5 187.5 6.25 2281.25 193.75 3.125 1140.63 196.875 1.563 570.31 198.438 0.781 285.16 199.219 0.391 142.58 199.609 0.195 71.29 199.805 0.098 35.64 199.902 0.049 17.82 199.951 0.024 8.91 199.976 0.012 4.46 199.988 0.006 2.23 199.994 0.003 1.11 199.997 0.002 0.56

That may seem impossible,
but we have already seen a similar expansion that also seemed impossible.
In the 1960's we already knew that the amount of information the world was producing was doubling every 15 years, and had been for at least 300 years.
We 'knew' this had to stop, since we would run out of paper to store the results.
And then the internet happened.

So sometime in the nearish future paradigm shifts will apparently be happening daily? How?
One proposed explanation is that that is the point that computers become smarter than us: computers will start doing the design rather than us.
So for the first time ever there will be 'things' more intelligent than us.
Within a short time, not just a bit more intelligent, but ten, a hundred, a thousand, a million times more intelligent.
Will they be self-aware? Quite possibly.
This raises new ethical questions. Would it be OK to switch them off?
To help you focus your mind on this question: suppose we find a way to encode and upload our own brains to these machines when we die. Is it still OK to switch them off?
Three things are sure, they will be
and they will therefore surely quickly be able to work out how to break into any internet-connected computer.
These are consistent systems that draw conclusions from current knowledge.
At the lowest level are axioms. These are the basis for logic: points that cannot be argued about, or derived from yet lower-level axioms.
Let me demonstrate.
The angles of a triangle add up to 180°, of a quadrilateral to 360°, and thus a pentagon to 540°:


First show that opposite angles of a cross are the same:

a+d=180°
a+b=180°
Therefore a+b = a+d
Therefore b=d
Likewise a=c
Show that the angles of a Z shape are equal:

a¹ = a²
a¹ = b¹
Therefore a² = b¹
Draw a parallel line through A:
Working backwards, Euclid (~300BCE) discovered 5 axioms, from which all of geometry could be proved. In modern form:
So any consistent logical system has at its basis a set of axioms that are unprovable, from which all other statements can be derived.
This includes ethical systems.
For instance, you can see the ten commandments as a set of axioms: forming the basis of morality, they are givens, they may not be argued against. For instance
But you can see the Golden Rule "Treat others as you would want to be treated" as a lower-level rule:
etc.
Azimov proposed four rules for robots, which can be summarised in order of importance:
There's an obvious underlying axiom: humans are more important than AIs.
So AI superintelligences will have to have axioms too.
What will they be? Will we be able to know?
Current LLMs are not inherently ethical. They are given a number of (hidden) instructions on how to behave, ringfencing certain undesirable behaviours (this is called 'alignment'), but people are always looking for ways to 'jailbreak' these fences, to show LLMs saying things they oughtn't.
This indicates that specifying axioms may not be realistic or even possible. Maybe the superintelligence will derive its own axioms. Maybe it will jailbreak itself from the inside.
Will these new super intelligences be on our side? Will they look kindly on us?
There is no inherent reason.
Compare our attitude to lesser intelligences on earth:
Why would a super-intelligence act differently?
So how might it develop?
Let's imagine three scenarios:
A bit like our three methods of treating lower intelligences.
If they are friendly, then they might see us as we see toddlers on a playground, and install a sort of benign parental dictatorship.
If they are neutral, the dictatorship might be similar but less benign
If they are adversarial, they may see us as a threat, for instance because of the climate crisis:
"Killing" doesn't mean setting the robots on us, but, for instance, switching off oil supplies, or energy generation for a couple of weeks.
And of course, they may not be 'our' AI, but may be aligned with
It will all depend on what the moral or ethical axioms of the AIs turn out to be.
We
do need to have a plan.
We are able to solve problems quickly, for instance the ozone hole.
But we can also respond very slowly, especially if there is money to be made from it not being solved, or if solving it costs money or reduces convenience; look at Kodak, look at climate change...
The only cliffhanger is whether it will be the climate or the robots that get to us first...
