The Forum

Bridget Kendall presents an ideas discussion show which tackles the big questions of our age with some of the world's most eminent minds.

Limits of Deep Learning?

Professor Aaron Sloman writes: The technological achievements [of Deep Learning] are very impressive, especially compared with what was possible a few years ago. But most of the work discussed was about learning, and in particular learning by collecting and analysing large amounts of information and looking for recurring patterns at different levels of abstraction.
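
As an illustration of that pattern-finding style of learning, here is a minimal sketch, in plain NumPy, of a tiny two-layer network that learns the XOR function from four examples by gradient descent. The network size, learning rate, and task are my own toy choices for illustration; this is a caricature of deep learning, not the systems discussed in the episode.

    # A toy two-layer neural network learning XOR from examples (NumPy only).
    # All sizes and settings here are illustrative assumptions, not from the episode.
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # four inputs
    y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

    W1 = rng.normal(0.0, 1.0, (2, 8))   # layer 1: 2 inputs -> 8 hidden units
    b1 = np.zeros((1, 8))
    W2 = rng.normal(0.0, 1.0, (8, 1))   # layer 2: 8 hidden units -> 1 output
    b2 = np.zeros((1, 1))

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 1.0
    for step in range(5000):
        # forward pass: each layer re-describes the input at a new level of abstraction
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # backward pass: nudge the weights to reduce squared error on the examples
        d_out = (out - y) * out * (1.0 - out)
        d_h = (d_out @ W2.T) * h * (1.0 - h)
        W2 -= lr * (h.T @ d_out)
        b2 -= lr * d_out.sum(axis=0, keepdims=True)
        W1 -= lr * (X.T @ d_h)
        b1 -= lr * d_h.sum(axis=0, keepdims=True)

    print(out.round(3))  # should end up close to [[0], [1], [1], [0]]

The point of the sketch is only that everything the network 'knows' is distilled from recurring patterns in the training examples; it has no machinery for inventing a new kind of object or problem, which is the gap discussed below.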

In contrast, a great deal of human and animal intelligence is about *creating* new objects, new types of action, new solutions to old problems, new ways of thinking, new languages, new kinds of machinery, new deep theories, new tunes and other works of art, and new kinds of mathematics. It looks to me as if AI (including robotics) is still way behind many animals, including squirrels, crows, elephants, dolphins, octopuses, and young children. This gap was hinted at in the program, but I think it's important to be very clear about it.

Often what looks like learning in humans is actually creation (but not divine creation!). For example, in the Forum episode there was much discussion of computers learning to use language by being trained on examples. Young children SEEM to learn the language used by others around them. But there is evidence that human children are really doing something else: the main thing they do is *create* languages rather than *learn* them. Try searching for 'Nicaraguan deaf children' to find the most compelling evidence I know. E.g. https://www.youtube.com/watch?v=pjtioIFuNf8

Human twins sometimes create their own private language, for a community of two!

The creation process is cooperative, and requires interaction with other language users. Normally the younger creators are in a minority, which constrains which creations work. So it *appears* that they are learning how older speakers use language. But I think that's an illusion: instead, the creative inventions of the younger speakers are constrained by what the older speakers already do, a process that's very different from looking for patterns in data provided by older speakers. That data-driven process would not allow twins or deaf children to create their own new languages.

Moreover, the construction process works only because biological evolution has produced powerful creative mechanisms that other species seem not to have, even though they may be very good at learning other things. For example, male weaver birds (and some females) have an amazing ability to develop competences that enable them to make a nest using up to a few thousand bits of vegetable matter (e.g. strips of grass, long thin leaves or other materials). I don't know whether human infants could learn to do something similar.

Likewise, I am not even sure that many adult humans could learn to make weaver-bird nests as quickly as male weaver birds do. (Has anyone tried?)

I am not saying that computer-based machines will never match human or weaver-bird intelligence, only that making that happen will require human developers to acquire a much deeper understanding of animal intelligence than we now have. By 'we' I include psychologists, biologists, geneticists, neuroscientists, linguists, philosophers, education researchers, and AI researchers.

I suspect that will require giving computers kinds of mathematical abilities, developed by biological evolution, that computers now lack, even though they already have some outstanding mathematical abilities. Examples of what they lack include discovering ways of proving theorems in geometry, and understanding geometric proofs produced by others, including finding flaws in inadequate proofs. (A classical specimen of such a proof is sketched after the links below.)

A taste of these geometrical abilities is presented here: http://www.cs.bham.ac.uk/research/projects/cogaff/misc/trisect.html

and many more in this draft, incomplete list of 'toddler theorems': http://www.cs.bham.ac.uk/research/projects/cogaff/misc/toddler-theorems.html
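
For readers who want a concrete instance of the kind of geometric discovery meant here, the following is a sketch (in LaTeX, chosen only for notation) of the classical proof that the interior angles of a triangle sum to a straight line, essentially Euclid's Elements I.32. The choice of theorem is mine, as one example of a proof a child can come to grasp but which is not found by searching for patterns in data:

    \documentclass{article}
    \usepackage{amsmath,amssymb}
    \begin{document}
    \textbf{Claim.} In any triangle $ABC$,
    $\angle A + \angle B + \angle C = 180^{\circ}$.

    \textbf{Proof sketch.} Draw the line $\ell$ through $C$ parallel to $AB$.
    By alternate angles between the parallels, the angle between $\ell$ and
    $CA$ equals $\angle A$, and the angle between $\ell$ and $CB$ equals
    $\angle B$. These two angles, together with $\angle C$, exactly fill the
    straight line $\ell$ at $C$, so
    \[
      \angle A + \angle B + \angle C = 180^{\circ}. \qquad \square
    \]
    \end{document}

Grasping why drawing one auxiliary parallel line settles the question for *every* possible triangle is exactly the kind of ability that pattern-matching over examples does not obviously provide.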

These are all part of the Turing-inspired Meta-Morphogenesis project:

http://www.cs.bham.ac.uk/research/projects/cogaff/misc/meta-morphogenesis.html

I am sure that when we have understood all these products of evolution better we'll still find a role for statistical/probabilistic learning, but it will be much less important than many current researchers now think.

Where would we be now if the main function of human intelligence were to enable us to learn to replicate what our forebears have achieved? I suspect most apparent cases of learning will turn out to be speeded-up processes of creation.

When we understand the creative processes and mechanisms of Euclid, Aristotle, Bach, Shakespeare, Beethoven, Newton, Frank Lloyd Wright, the inventors of buttons, hooks and eyes, zips, and velcro, and toddlers learning to feed themselves and talk (all of which may one day also be demonstrated by robots), then I think we'll see that they need abilities to create, manipulate and use structures of many kinds, including both physical structures and abstract structures. I suspect that the ability to think up new possibilities, try them out, debug those that don't work, and then redesign them will play a far more important part than abilities to find correlations and patterns in records of past achievements.

Geoffrey Hinton may claim that the top levels of his deep learning systems are already doing what I describe. But I don't think the kinds of networks assumed have the right sort of representational power.

But I can't yet demonstrate that! It could take a century or more of further research to find out enough about human or squirrel intelligence to replicate it.

There are some very simple examples in past AI research, e.g. the analogy program of Thomas Evans in 1968, various planning and problem-solving programs, theorem-proving programs, automatic-programming programs, Harold Cohen's painting program AARON, and various others that may already demonstrate important fragments. But it may turn out that we also need new kinds of computers. It depends, for example, on how important the role of chemistry is in animal brains. There are far, far more molecules than neurons in brains!

I don't want to disparage the work reported in the Forum episode. But it needs to be viewed in the context of what we do not yet understand.

What is Rhythm?

Anthony Sayer writes: Rhythm is often thought of as ‘being in time’. I think rhythm is best defined as ‘flow’. Time, like circles and straight lines, is a human concept with no basis in real life. Imagine a Beethoven sonata realised by a computer, everything in perfect time according to the written score. It would sound awful: dead, meaningless, and lifeless. Rhythm is the indefinable, constant variation of time, timbre, and pitch that creates a sense of inevitability for every note, pause, and phrase. This is the element into which everyone can tune, and which brings an interpretation to life. Depending on mood and event (the ingredients of life), no performance will be repeatable. Change, or flow, is the basis of rhythm.
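
To make the contrast concrete, here is a toy sketch in Python of the same sixteen evenly notated beats rendered two ways: once with metronomically perfect onsets, and once with a gently drifting local tempo. The drift model (a bounded random walk on beat length) is my own illustrative assumption, not a theory of performance:

    # Metronomic versus gently "flowing" onset times for 16 notated beats.
    # The tempo-drift model below is a toy assumption, purely for illustration.
    import random

    random.seed(1)
    BEAT = 0.5          # nominal seconds per beat (120 bpm)
    N_NOTES = 16

    quantized = [round(i * BEAT, 3) for i in range(N_NOTES)]  # 'perfect time'

    flowing = []
    t, local_beat = 0.0, BEAT
    for _ in range(N_NOTES):
        flowing.append(round(t, 3))
        local_beat += random.gauss(0.0, 0.01)          # tempo drifts slightly...
        local_beat = min(max(local_beat, 0.45), 0.55)  # ...but stays near 120 bpm
        t += local_beat

    print("quantized:", quantized[:6])
    print("flowing:  ", flowing[:6])

Played back, the first sequence is the 'dead' rendition described above; the second wanders by only a few hundredths of a second per beat, yet it is that constant small variation that listeners hear as flow.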

A Ring to Catch Your Ideas?

David Cooper comments on Claudia Roda's 60 Second Idea: It's very hard to forget your best ideas, but there are a lot of lesser ones that you need to catch. So keep a notebook at the ready, especially by the bed, because it's on the way to sleep that a lot of good ideas appear, and they are at great risk of being lost if you fall asleep without making a note of them first.

Meet the presenter: Bridget Kendall
  • BBC diplomatic correspondent since 1998
  • Foreign correspondent in Moscow (1989-94) and Washington (1994-98)
  • First woman to win the James Cameron Award for Distinguished Journalism (1992)
  • Appointed MBE in 1994
  • Speaks fluent Russian and French