
Mathematical art, search and robots

Jamillah Knowles | 17:36 UK time, Tuesday, 6 December 2011

Hello Outriders!

This week on the podcast our Chris Vallance has been out again - we keep letting him out, is this wise? Well it's certainly wise this week as he has been considering the idea of our robotic friends.

Meanwhile, back in the broom cupboard, I've been finding out about a new modifiable search engine and talking to the creator who accidentally turned beautiful mathematics into, of all things, a rather attractive hair accessory.

First a story of maths. No! Don't run away! This is a story of mathematical art. Henry Segerman is a research fellow in mathematics at the University of Melbourne and also a mathematical artist. We talked about the surprising world of 3D mathematical shapes and their possibilities when transferred from the virtual to the real world.

Also this week I had a chat with a creator of a new search engine. You might already be thinking of some large and well known companies on the web who do this, but you should maybe think again.

Michael Christen is working as a freelance consultant with the German Digital Library and he has created Yacy - a highly adjustable search engine that is in its infancy - so in fact you can help it grow.

Chris Vallance dropped by Outriders towers armed with tapes from a visit to Robotville at the Science Museum. He also spoke with two sci-fi authors with science backgrounds, Alastair Reynolds and Daniel H Wilson, about the interplay between robot fact and robot fiction.

That's all for this week, but if you would like to get in touch about your own electrical adventures, then there are many ways. You can email me at Outriders at bbc dot co dot uk, you can tweet at me as @bbc_outriders or find us on facebook - just type Outriders into your search box.

Until next week! ~ Jamillah


  • Comment number 1.

    I found the question of 'what exactly is a robot' very interesting. Partly this was in a fruitless but interesting intellectual way but it was also because, in the light of the recent problems that the US has been having with its unmanned airborne drones, I was wondering about The Three Laws.

    Years ago, I saw one definition of a robot as being 'a machine connected to a computer'. OK, that seems to work. It encompasses everything from a washing machine to HAL 9000. But by that definition, a Predator drone is also a robot; it is a machine controlled, at least for a large chunk of time, by a computer. My understanding is that, like an airliner, the only reason why a Predator drone can't be entirely autonomous is policy rather than engineering. American Airlines wants to be able to tell its passengers that humans can fly the aircraft at all times if need be. And the US Defence Department has decided that if a weapon is to be released that might kill a person, then another person should be the one who pushes the button. But the 747 could take off and land all by itself, if AA management were to let it. And the Predator is perfectly capable of working out for itself what an enemy target looks like. Image recognition may or may not be good enough to spot an infantryman at a thousand yards, but I'm pretty sure it IS good enough to spot a named, specific building at the same range.

    So, we have a definition ('a machine controlled by a computer') that seems perfectly reasonable, but which, were we to give those machines free rein, would be capable of violating the First Law. I read in one of Asimov's books - it might have been his memoirs, but it might also have been a commentary on his robot stories; I'm really not sure - of him arguing that the Three Laws were nothing special, because we implicitly apply them to every machine we build. Any machine, he said, from a wedge to the ISS, must meet three basic criteria. It has to be safe. It has to work. And it has to last. He wrote that, as such, at best he should be given credit for articulating these laws, rather than creating them. A very logical observation, even if, in making it, Asimov was being uncharacteristically modest. My memory is that in saying this, he wasn't suggesting that humans were being wise or foresighted in insisting on these characteristics in our creations; we were being self-interested. If Asimov was right in saying that 'all machines, and therefore all robots, must be safe to humans', then how do we explain the fact that we've quite obviously built one which isn't?

    And if he's wrong, where does that leave us? I can't take refuge in the hierarchy of laws. I can't say 'OK, so the Predator can kill people, but that doesn't break the First Law, because the bomb it drops or the missile it fires does it because a human ordered it to', because if the Predator really IS a robot, the First Law would override the order from a human.

    I can, I suppose, try and invoke the uncertainty about what a robot is. I could say 'Well OK, the Predator can kill people, but only if a human - rather than the machine - decides to drop that bomb or fire that missile. So it might look like a robot, but it's not one, really.' But this strikes me as a very weak argument - it's like saying that the atomic bomb isn't really a weapon, but a small bucket of sunshine that people sometimes get burned by. However much we argue about the name, the thing it refers to remains the same.

    So what I'm left with is that Asimov was wrong. He was wrong in believing that any robot would have the Three Laws buried in it. I can understand why he would have believed it, because it is certainly not in our interest as a species to go around building machines that can kill other members of the species. It's almost as if he had in mind a Prime Law of Humanity. (I can't say he had a 'Zeroth Law', as he spotted that one himself.) The Prime Law would read something like 'No human shall build a machine that harms humanity, or, through inaction, fail to build a machine that prevents humanity from coming to harm.' He was wrong because he assumed we as humans would be logical and broad-minded, as opposed to emotional and narrowly self-interested. If he ever had such a concept, the Prime Law of Humanity never stood a chance, did it?

    What started off as a 'well done on the podcast' note went from 'I wonder...?' to 'We're all doomed!'. Oops. Erm, Merry Christmas?


