It’s minimalist Blogjune – one pic, one thing I did today.
It’s a public holiday for the pretend birthday of some bloke in England. I could even hear a 12-gun salute for it at midday.
What I did today:
I sat in the sunshine and caught up on reading for an AI Ethics bookclub I accidentally joined last month.
Looking over the earlier books the group had discussed, I saw several I had set as readings for students (like Weapons of Math Destruction, Invisible Women and Made by Humans), so it seemed like a good idea.
This month’s book is from 1976, written over a two-year sabbatical by MIT computer scientist Joseph Weizenbaum. Between 1964 and 1966, he created ELIZA, a very simple chatbot-style program that used pattern matching to respond to typed enquiries. The DOCTOR script imitated a Rogerian therapist, almost as a parody of this style of reflective questioning, in which the therapist prompts the patient to speak using a rather scripted set of responses, built mainly from the patient’s own words.
So, a session with ELIZA, or with a real Rogerian therapist, would start something like this:
PATIENT: Hello, can you help me with my problems?
DOCTOR: You want me to help you with your problems?
PATIENT: Yes, I am having problems with my mother.
DOCTOR: What about your mother is troubling you?
PATIENT: She doesn’t listen to me.
DOCTOR: Why do you think she doesn’t listen to you?
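The mechanics behind an exchange like this are surprisingly simple. As a rough illustration (not Weizenbaum’s original implementation, which was written in MAD-SLIP, and with rules I’ve made up to match the sample dialogue), an ELIZA-style program boils down to two tricks: match the input against an ordered list of patterns, and “reflect” pronouns in the captured fragment so the patient’s words come back as the therapist’s question:

```python
import re

# Pronoun swaps so the patient's words can be mirrored back at them.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "yours": "mine",
}

# Ordered (pattern, response-template) pairs; the first match wins.
# These rules are invented for this example, not taken from ELIZA's script.
RULES = [
    (r"i am having problems with (.*)", "What about {0} is troubling you?"),
    (r"she doesn't (.*)", "Why do you think she doesn't {0}?"),
    (r"can you (.*)", "You want me to {0}?"),
    (r"(.*)", "Please tell me more."),  # fallback when nothing else matches
]

def reflect(fragment):
    """Swap first- and second-person words in a captured fragment."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(utterance):
    """Return the scripted 'therapist' reply to one patient utterance."""
    cleaned = utterance.lower().strip(" .?!")
    for pattern, template in RULES:
        match = re.search(pattern, cleaned)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("Hello, can you help me with my problems?"))
# → You want me to help you with your problems?
```

There is no understanding anywhere in this loop — just string matching and substitution — which is exactly what makes the bonding Weizenbaum observed so striking.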
You can play with it yourself on many sites, like this ELIZA emulation at the New Jersey Institute of Technology.
The program was demonstrated to many visitors to MIT to show what was possible. This use was relatable and understandable, while many other early computing applications required higher-level maths or computing knowledge to appreciate. But… people bonded with ELIZA. They asked for privacy when they chatted with it. They were outraged when Weizenbaum suggested recording the data being typed in as many, many people used it overnight. Something a little different was happening. Even scientists who KNEW that this was a prediction tool following a script attributed far more humanity and effectiveness to it than was reasonable.
The book, Computer Power and Human Reason is Weizenbaum’s attempt to make sense of how we should progress into a world where people may have a tendency to ascribe wisdom or humanity to a set of probabilistic scripts. I am not far in, but this quote from his introduction has already jumped out and shaken me by the lapels:
The reaction to ELIZA showed me more vividly than anything I had seen hitherto the enormously exaggerated attributions an even well-educated audience is capable of making, even strives to make, to a technology it does not understand. Surely, I thought, decisions made by the general public about emergent technologies depends much more on what that public attributes to such technologies than on what they actually are or can or cannot do.

— Weizenbaum, J. (1976) Computer Power and Human Reason. W.H. Freeman and Company, p. 7