Talking Points on Humanism and Artificial Intelligence

Talking points based on the article “AI on the Go: Notes on the current development and use of Artificial Intelligence” by Carl Mahoney, Australian Humanist #121.

I’m Mary-Anne, Deputy Convenor and Webmaster of the ACT Humanist Society. I’m a software engineer by profession, and I have been keenly following the progress of research into artificial intelligence, or AI. I was interested to read Carl Mahoney’s article “AI on the Go: Notes on the current development and use of Artificial Intelligence” in a recent edition of the Australian Humanist.

Carl Mahoney is a Humanist Society of Victoria member. He was professor and Dean of the Faculty of Architecture and Building, University of Technology, Papua New Guinea.

Mahoney makes a number of statements in the article that are interesting and worth discussing from the humanist perspective. This article sets out those talking points for use in discussion groups.

What is AI?

In his article, Carl Mahoney does not provide a definition of AI. He mentions a great many technological innovations. Some of these are examples of AI. Some are not. Here’s one definition:

The theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.
— Oxford Dictionary

Some of the innovations mentioned in Mahoney’s article include:

  • ancient Greek and Chinese automata
  • big data
  • autonomous vehicles
  • cyber-warfare
  • keystroke monitors
  • language translation
  • composing music
  • visual arts
  • remote controlled devices 
  • the Internet of Things
  • industrial robots
  • robotic vacuum cleaners
  • “brain peripherals such as those for sight and sound” (presumably cochlear and retinal implants) 

Choose one or two of these examples from the article. Do you think they are AI? Why / why not?


An interesting case is that of autonomous vehicles and robots. These can be classified into three types:

  1. Those that move in a set way (fixed or pre-programmed), like the ancient Greek and Chinese automata. Mahoney gives the example of a mining vehicle that recently collided with a human-driven vehicle in Western Australia; that was a more sophisticated example of this type.
  2. Those that are remote controlled, like military and hobby drones, deep sea submersibles, and bomb disposal robots.
  3. Those that use sophisticated programming to achieve a goal while receiving and processing the inputs of their environment. Google’s autonomous vehicles fall into this category.

Mahoney does not distinguish between these in his article. Only those in category 3 are genuinely AI.

Other examples of AI

There have been many developments in Artificial Intelligence that were not mentioned in the article. Some have made headline news over the last few years.

Give one example of AI that you know of. What does that development mean to you?


Here are some I find interesting:

  • Eliza - natural language processing. Developed 1964-66. An early, very primitive beginning of AI. Reproduced today in fewer than 600 lines of JavaScript and available online for chatting.
  • Aipoly - phone app for the blind. Machine learning, visual processing.
  • Apple’s Siri - phone app. Speech recognition, voice control, information retrieval.
  • Google’s Cloud Speech - speech recognition. Used to power voice control and information retrieval for phones and tablets, amongst other things.
  • Microsoft’s Tay - a chatbot created for 18- to 24-year-olds in the U.S. for entertainment purposes, released on Twitter in March 2016. Machine learning. Microsoft was forced to pull the plug on Tay and delete offending tweets after Twitter users taught her racist hate speech. https://blogs.microsoft.com/blog/2016/03/25/learning-tays-introduction/
  • Google DeepMind’s AlphaGo - deep neural networks combined with reinforcement learning. Won a five-game challenge match against Lee Sedol (4-1) in March 2016.
  • IBM’s Watson - natural language processing, machine learning, information retrieval. Won the first-place $1m prize on Jeopardy! in 2011, playing against two former champions. Watson was not connected to the internet during the game.
  • IBM’s Deep Blue - brute computational force chess player. Beat world chess champion Garry Kasparov in 1997.
  • Honda’s ASIMO - humanoid robot designed as a companion. Understands and can act on requests. Not for sale.
  • Aldebaran’s NAO, Romeo and Pepper. Pepper is designed as a companion, can recognise emotions, hear and speak, and can be adapted via downloadable apps. It is available for sale currently in Japan only for around $1600 USD with a service plan of $1200 USD per month!
  • Alpha 2 - small humanoid robot designed as household companion. On sale for less than $3000 USD.
  • Hiroshi Ishiguro’s humanoid robots - actually not AI (his Geminoid is remotely operated, for example) but worth a mention because of their very lifelike appearance.
  • In Japan the hotel Henn-Na (“Weird Hotel”) is staffed almost entirely by robots.
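To give a feel for how primitive Eliza really was: it held a “conversation” by scanning the user’s input for keywords and echoing fragments back inside canned templates. Here is a toy sketch of that keyword-and-template technique in Python (an illustration of the approach only, not Weizenbaum’s original program; the rules shown are invented for this example):

```python
import re

# A few keyword -> response-template rules, in the spirit of Eliza's
# pattern matching. Real Eliza scripts had many more rules, keyword
# priorities, and pronoun "reflection" (my -> your, I -> you).
RULES = [
    (re.compile(r"\bI am (.*)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.*)", re.IGNORECASE), "What makes you feel {0}?"),
    (re.compile(r"\bmy (.*)", re.IGNORECASE), "Tell me more about your {0}."),
]
DEFAULT = "Please go on."


def respond(text: str) -> str:
    """Return a canned response by applying the first matching rule."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            # Echo the captured fragment back inside the template.
            return template.format(match.group(1).rstrip(".!?"))
    return DEFAULT


if __name__ == "__main__":
    print(respond("I am worried about robots"))
    print(respond("It was a strange day"))
```

No understanding is involved at any point; the program never models what the words mean. That is what makes Eliza a useful baseline when discussing whether a given system is “really” AI.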

"The Computer"

Throughout his article, Carl Mahoney makes statements about "the computer", for example:
“AI is a child of the computer”, “The computer is now a fairly mature device”, “the computer is inherently regimented and logical”, “the computer has an inbuilt tendency towards what we call in human terms ‘fascism’”, “All matters normally given over to human discretion are difficult for the computer to handle”, “the difficulty of using the computer to manage our affairs”.

Given the huge variety of computational devices that have been created, does "the computer" have any meaning?


In my opinion, talking about “the computer” in a discussion on AI is akin to talking about “the animal” in a discussion on intellect. Saying “the computer is inherently regimented and logical” and “All matters normally given over to human discretion are difficult for the computer to handle” is like saying “the animal is inherently vague and unpredictable, and will never be good at logical reasoning”.

Machines You Can Relate To

Even in the absence of AI, sometimes a machine seems to take on a personality you can relate to. Carl Mahoney gives the example of robotic pets for comforting the elderly. There are two machines in our household that have inspired a feeling of anthropomorphism in me, such that they have names: “Rosie”, the robotic vacuum cleaner, and “Supernanny”, the car. Both have voices - Rosie communicates with simple phrases like “Please select mode” and “Check side brush”. Supernanny provides navigational guidance in a crisp British accent.

Are there any machines in your household that you have named? Do you think they are an example of AI?


Artificial General Intelligence

In his article, Carl Mahoney draws the distinction between specialised AI, in which machines can perform one function that was previously thought to be possible only for humans, and Artificial General Intelligence, in which a machine can perform any intellectual task a human can.

How will we know when we have achieved this? As early as 1950, Alan Turing described a test for machine intelligence: if, after five minutes of conversation, a machine can fool human judges into believing it is human often enough that the judges make the correct identification no more than 70% of the time, then the machine is said to have passed. This test is now known as the Turing Test. In 2014 a program called Eugene Goostman, which simulates a 13-year-old Ukrainian boy, fooled 33% of the judges after five minutes of conversation at an event organised by the University of Reading. Hugh Loebner, an American inventor, has offered a controversial prize for the first program to pass his more rigorous version of the Turing Test. That prize is yet to be claimed.

Mahoney warns about some of the dangers of AGI. In this he keeps very good company - Stephen Hawking, Elon Musk and Bill Gates have also been outspoken about the risks. But Mahoney leaves unexplored a question of interest to Humanists:

Is it ethical from a Humanist perspective, to be striving to develop AGI? 


It has been reported that because ASIMO’s walk is so eerily human-like, Honda engineers felt compelled to visit the Vatican just to make sure it was okay to build a machine that was so much like a human. (The Vatican thought it was okay.)
— http://science.howstuffworks.com/asimo.htm

And more on ethics and AI: Navy researchers are working on developing a robot with a sense of morality.

The dangers of AI are explored in fiction:

  • Terminator - 1984 - 2015 film series, James Cameron and Gale Anne Hurd
  • Battlestar Galactica - Several TV Series and films, 1978 - 2013, Creator Glen A Larson.
  • The Matrix - 1999 - 2003 film series, dir. The Wachowski Brothers
  • Blade Runner - 1982 - dir. Ridley Scott, based on novel by Philip K Dick
  • 2001: A Space Odyssey - 1968 film, dir. Stanley Kubrick, based on the novel by Arthur C Clarke

AGI rights

Another question of interest to Humanists that Mahoney leaves unasked is this: What if we did develop a machine indistinguishable from a human?

How should we, as Humanists, treat such a machine?


Fiction that explores the rights of machines that can think:

  • Fallout 4 - 2015 immersive computer game, Bethesda Game Studios
  • Humans - 2015 TV Series, AMC, Channel 4 and Kudos co-production
  • Ex Machina - 2015 film, dir. Alex Garland
  • Her - 2013 film, dir. Spike Jonze
  • A.I. Artificial Intelligence - 2001 film, dir. Steven Spielberg
  • Bicentennial Man - 1999 film, dir. Chris Columbus