The Round Tower, Copenhagen, 2018
How does A.I. influence our sense of self/identity?
Welcome to the 2603rd anniversary of the tradition of philosophy and science,
dated from the day of the Eclipse of Thales.
A new challenge to our ability to reason has arisen that makes us feel less special and unique.
We have artificial intelligence (A.I.) and that is what we will debate today:
How does A.I. influence our sense of self and identity?
Before I introduce the exciting panel for you, I would like to welcome Lasse Dissing from DTU Compute,
who will demonstrate this little robot, or whatever we should call it, so we can get a sense of what this thing we call A.I. is.
Let's give Lasse a warm welcome.
- Thank you.
Good evening.
As mentioned my name is Lasse Dissing and I study at DTU Compute.
I have been given the honor of warming up for you tonight by giving a demonstration of our robot, Pepper,
or as we call him, R2DTU.
Pepper is a humanoid robot, meaning it looks human.
It is made by a Japanese company named SoftBank Robotics.
In Japan they have them in airports.
You can ask it to help find your terminal, and a map of the airport will appear on Pepper's belly.
We try to take this a step further with our research into social intelligence.
How can the robots understand and interact with humans?
This is very complicated for a robot to figure out because human beings are extremely complicated.
We don't follow simple physical laws,
we have very complex rules for how to interact with each other,
and our language is ambiguous and irregular, which makes it very difficult for computers to understand.
For instance, a simple assignment, such as to go down the hall and pass on a message to a colleague,
is very complicated for Pepper to do.
It will have to navigate around people it bumps into on its way,
and if it does not know the way, it might have to find a person to ask.
When it arrives at its destination, it should not interrupt a conversation but wait patiently for its turn.
This is trivial for humans to do, but if you are made of silicon it is very complex,
and that is what we are trying to solve.
There are already systems that can communicate with humans, such as Siri and Google Assistant.
You have probably experienced that they have failed to understand the words you spoke, or their meaning.
This becomes all the more difficult when the language is Danish,
because 90 percent of the engineers working on these things speak English.
Importantly, these intelligent systems must be able to tell us why they do what they do,
and what they are thinking.
Pepper is unique in this regard because with its arms and LED screen it can communicate to us
why it is doing what it is doing.
Many of these calculations are very challenging so it is nice to know that it is not malfunctioning.
It is just thinking.
Before I begin this demo I should mention that this is the first time Pepper will be showcased outside of DTU,
and also the first time in Danish.
Furthermore, while the acoustics are good in here, they are not very robot-friendly,
so I will probably have to ask Pepper twice now and again, so bear with me.
Hello Pepper!
Hello Pepper!
He has to hear it a third time, apparently.
Hello Pepper!
- Nice to meet you.
- How are you?
- I am well. How about you?
- I am also well.
I am also well.
- That sounds good.
- Can you tell a bit about yourself?
- Good evening, my name is Pepper.
I work as a research robot at DTU Compute.
Danish is still a very difficult language for me to pronounce, but I am practicing getting better.
- Right now I am talking to you and not to Pepper.
Pepper understands this because it knows my face, and when I look away from it, it knows to ignore me.
When I look at it you hear this (pling) sound and its eyes turn green.
Can you raise your left arm?
- As you please.
- Can you see the ball?
Can you see the ball?
Can you see the ball?
- I am looking for it.
- Pepper is now looking for the ball instead of my face.
Can you pick up the ball?
- Okay.
- Great.
What color is the ball?
- Uhm,
it is red.
- We are using a neural network to figure out the color.
We are not following a script.
No person is directing Pepper from somewhere else in any of this.
Pepper hears commands and follows them,
a bit like Siri, except that this is something we have created ourselves.
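As a rough sketch of what such a color classifier can look like (this is illustrative Python, not DTU's actual code; the training data, colors, and model size below are all made up):

```python
# Illustrative color classifier: a small neural network that maps a ball's
# average camera color (RGB, scaled 0-1) to a color name. Invented data.
import numpy as np
from sklearn.neural_network import MLPClassifier

COLORS = ["red", "green", "blue", "yellow"]
PROTOTYPES = {0: (0.9, 0.1, 0.1), 1: (0.1, 0.8, 0.2),
              2: (0.1, 0.2, 0.9), 3: (0.9, 0.8, 0.1)}

# Build a toy training set: noisy samples around each prototype color.
rng = np.random.default_rng(0)
X, y = [], []
for label, proto in PROTOTYPES.items():
    for _ in range(200):
        X.append(np.clip(np.array(proto) + rng.normal(0, 0.08, 3), 0, 1))
        y.append(label)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(np.array(X), np.array(y))

ball_rgb = [0.85, 0.12, 0.15]               # average RGB measured by the camera
print(COLORS[clf.predict([ball_rgb])[0]])   # -> "red"
```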
Can you give me the ball?
- Here it is.
- Thank you very much.
Thank you very much.
- Anytime.
- Can you follow me?
- I'll follow you.
- Pepper will now very carefully try to follow me and slowly but surely place itself next to me.
Pepper has laser and sonar sensors that it uses to avoid obstacles.
Can you say good-bye, Pepper?
Can you say good-bye?
- Have a good night.
It was a pleasure performing for you.
(audience applause)
- As you can see, there is a long way to go before artificial intelligence has truly arrived,
but we are working on it.
The goal is for Pepper to be able to conduct a normal conversation with us in a way that seems natural.
Thank you for listening.
(audience applause)
- Now it is time for tonight's panel debate on how A.I. influences our sense of self/identity.
Let's welcome the panelists one at a time.
First we have Thomas Bolander, who is a professor at DTU Compute.
When they need an expert on TV to talk about artificial intelligence it is often Thomas they call.
I hope he is not too tired tonight.
Let's give him a warm welcome.
Next up we have Professor Cathrine Hasse who is an expert in technological development,
and who has a background in anthropology.
She is the first researcher from the humanities to head a big European Union research project on robots.
Let's give her a warm welcome.
We also have Ole Fogh Kirkeby with us, who is one of the most respected philosophers in Denmark.
He was already writing books on artificial intelligence in the 1980s,
so it will be interesting to hear what he has to say on the subject today, and if his views have changed.
It is great to have you here, Ole.
We also have one of the most respected film critics in Denmark with us.
He helped to create Bogart and many of the movie review television and radio programs
you may have seen.
He brings a different perspective than the other panelists, and knows a lot about sci-fi movies.
Sci-fi is what we have over there (Pepper), or at least what it might well become in the future,
so it will be exciting to have Per Juul Carlsen in the panel.
Thomas, if we begin with you, can you tell us something about what artificial intelligence is,
and what we are going to use it for?
Lasse talked about it a bit, but if you can expand?
- Yes, now you have seen an example of it, and also an example of how there is still some way to go.
Artificial intelligence is the attempt at making machines-- computers and robots-- intelligent.
It is unclear, however, what that actually means,
because intelligence is a concept that we do not have a good grasp of. Still, it has to do with
enabling computers and robots to do some of the things that until now only humans have been able to do,
for instance, drive a car, have a conversation, play chess, or make medical diagnoses.
When we try to do these things we are always inspired by human problem solving.
One approach is to copy the fundamental neurological processes of the human brain,
in what are known as artificial neural networks.
The other is to make more abstract models of how we solve different problems,
for instance, how we make decisions in traffic,
or what thoughts go through our head when we are about to make a move in chess.
We then make a simplified version of this and put it into a computer.
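As a minimal sketch of such a simplified decision model, here is minimax lookahead over a toy game tree (the game, moves, and scoring below are invented for illustration; real chess programs add vastly more):

```python
# Illustrative minimax lookahead: a simplified model of "what thoughts go
# through our head before a move", applied to a made-up toy game.
def minimax(state, depth, maximizing, moves, score):
    """Value of `state` when both players play optimally `depth` moves ahead."""
    options = moves(state)
    if depth == 0 or not options:
        return score(state)
    values = [minimax(s, depth - 1, not maximizing, moves, score)
              for s in options]
    return max(values) if maximizing else min(values)

# Toy game: a state is a number; a move adds one or doubles; play stops at 20.
# The maximizing player wants a high number, the opponent a low one.
moves = lambda s: [s + 1, s * 2] if s < 20 else []
score = lambda s: s
print(minimax(1, 4, True, moves, score))  # value of state 1 with 4-ply lookahead
```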
(Thomas Bolander, A.I. developer)
When you ask what it can be used for, the answer is an unusually wide variety of things,
because it has to do with getting machines to do what humans do.
You could say that what we try to do is to get machines to imitate our cognitive processes,
as opposed to industrialization, when machines took over our physical labor,
and we could, for instance, saw things with machines instead of by hand.
A.I. has to do with getting machines to do what we can do with our brain,
such as thinking logically, planning, making decisions, and pattern recognition.
That said, there is still a big difference between human and machine intelligence,
and what is easy for us to understand and what is easy for machines.
It has proven extremely difficult to understand well enough how we understand simple sentences
for us to put it into a computer so that it too can understand these sentences.
At DTU we also research social intelligence.
Social intelligence is the ability to put yourself in the position of others
and understand what they are trying to achieve.
This is related to linguistic intelligence because when someone says something
it is not enough to understand each individual word.
You must also understand the intention behind what is said,
and this requires you to put yourself in the position of who you are talking to.
So there are some things that are easy for most people, but very difficult for machines.
Vice versa, machines can make huge calculations and remember everything you ever taught them,
and it is much easier for a computer to become world chess champion than it is for a person.
So one aspect of A.I. is to copy aspects of human cognition,
and this is in some ways easy and in other ways very difficult.
Our ability to make use of A.I. also depends on what kind of breakthroughs we make.
When will we have driverless cars that do not cause accidents?
You need a very powerful crystal ball to say anything with complete certainty about this.
However, we can say that the more clearly defined a problem is, the easier it is to solve with machines.
Chess is easy because there are just a few well defined rules that you must follow.
Driverless cars are harder because even though there are traffic rules their situation is less well defined.
Creating a robot that can have a conversation like a human is still very difficult.
This was not a very concrete answer,
but I think that we will eventually see robots like this (Pepper) for housekeeping.
We have vacuum robots, but eventually they will also be able to put plates in the dishwasher,
cook dinner, keep track of our calendar, and have various other general abilities.
It is a bit of a challenge to get there, but we are trying.
- So right now, A.I. can, at any rate, do our vacuuming.
- Yes, if you need it.
- Yes, interesting.
Ole,
what can we learn about human beings from our attempts at creating artificially intelligent computers?
- We are living in an evidence-based hell, which has to do with New Public Management.
(Ole Fogh Kirkeby, Philosopher)
Artificial intelligence was originally a consequence of the attempt to automate blue-collar work.
It was then thought that A.I. could also automate white-collar work, that is, intellectual work.
A.I. research was in this way driven by the need to replace the workforce and raise profits.
We cannot avoid seeing it in this context.
It also serves other purposes though, such as to help the handicapped and deal with complicated diseases.
Behind it all however is a research area called Cognitive Science.
I wrote an article about it back in the 80s for an English encyclopedia,
and this you can say is the area of research for artificial intelligence.
Cognitive science must necessarily draw on the fields of philosophy and psychology,
and try to implement this into a praxis that can be documented via machines.
However, this presupposes that we know something about how human beings function,
and we still don't seem to know very much about that.
We are dealing with open and closed worlds,
and even though chess has about 10 to the power of 125 possible games, it is still a closed world.
Our reality however is not a closed world.
Our experiences are not a closed world for the simple reason that they are constantly being processed.
What we remember today will not be the same as in three or four days from now.
We constantly change, and therefore we are not a closed world, and this makes A.I. very difficult.
Furthermore, when we are thinking we are using our reason, or ratio as they said in Latin, logos in Greek.
We do this by applying our reason to the intellect: intellectus in Latin or nous in Greek.
This is a way of thinking that can be passed on through logic,
including the very advanced types of logic that we have gradually been developing up until today.
However, behind these logical systems is the big problem of formalizing everyday reality
so that it can be translated into some of these types of complex logical systems.
This is a very difficult task.
Take, for instance, something like epistemic logic: the way I know what I know, and know that I know it, and so on!
While the computer can handle all these levels of knowledge, because of its processing power,
the question is if it can come to grips with the innermost essence of knowledge:
values.
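As a minimal illustration of the nesting Ole describes, in the standard notation of epistemic logic (the notation is assumed here, not used in the talk):

```latex
% K_a reads "agent a knows that ...". Each extra operator adds one level:
\[
K_a\varphi \;\; \text{(I know } \varphi\text{)}, \qquad
K_a K_a\varphi \;\; \text{(I know that I know } \varphi\text{)}, \qquad
K_a K_b\varphi \;\; \text{(I know that you know } \varphi\text{)}.
\]
% A computer can track arbitrarily deep nestings of K; whether any such
% formula captures values, and not just facts, is Ole's open question.
```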
Therefore we encounter the problem of how the computer can tell us about
how we gain experiences and thereafter create understanding.
If you analyze the Proto-Indo-European language, which is at the origin of languages from Ireland to Mongolia,
you will notice that many of the expressions used to understand something are hand metaphors.
Kant allegedly called the hand our outer mind:
we 'comprehend', which in Latin means to take together.
Now imagine we could create a sensory connection between reality and computer systems
consisting of the ability to somehow absorb experiences.
Then you could perhaps somehow simulate how our senses create understanding.
There is a wonderfully beautiful image from Aristotle's Analytica Posteriora
where he says that general concepts are created in the same way an army collects itself:
one soldier ceasing his flight from the enemy leads other soldiers to do the same, one by one.
In the same way, complex experiences of a variety of people come into view
in the general concept of humanity.
The interesting thing about computers would be if they were able to generalize or induce general concepts.
In this way, they could potentially, in a certain sense, get to know more than we can know.
However, the computers would have to be able to generate a language themselves,
because the general concepts have already been transferred in the form of language,
and computers would only generate language if they could interact with the world around them.
So perhaps if we gave computers sensory connections, especially movability,
it would enable us to study how computers interact with the world,
and thereby understand better how human consciousness functions.
However this is very far out into the future, as I see it.
- Would you not say that in a certain sense computers have developed a sensory apparatus,
for instance, self-driving cars with vision?
- Yes, however let's not forget the recent accident where a self-driving car mistook a trailer truck for the horizon.
It didn't go so well.
- You are welcome to add to Ole's comments.
- I agree with this.
I would just like to comment a bit.
With regards to creating understanding and concepts based on sensory experiences,
that is kind of what we try to do when we build a robot.
It is true that there is a long way to go, and that we can only do it with very simple processes,
but it is part of a trend within A.I. known as the Embodiment Thesis,
which says that full-fledged A.I. can only be achieved if you have a body with sense perception and actuators;
something that can interact with the world in a human-like manner, because this is fundamental to language.
A chatbot such as the iPhone's Siri is not based on learning language through its surroundings.
It has simply been fed a lot of language without connection to reality or understanding of what it means.
It has a purely statistical approach, and this clearly has huge limitations.
These systems can only react to very specific patterns and things they are more or less pre-programmed to do.
Siri and Google Now are only slightly more advanced than simply delivering pre-programmed answers.
So I agree with Ole, but I don't think things are completely hopeless.
Much of what happens in A.I. today is to try to build systems that learn from experience;
systems that are dynamic in the sense that what they remember today is not the same as what they remember
four days from now, because new sensory information is constantly processed and a lot of learning takes place.
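As a minimal sketch of that idea, here a stored estimate drifts as each new observation is blended in (the class and numbers are invented for illustration):

```python
# A system whose "memory" drifts as new sensory information arrives:
# an exponential moving average. What it remembers today is not what
# it remembers four days from now.
class DriftingMemory:
    def __init__(self, learning_rate=0.2):
        self.estimate = None          # what the system currently "remembers"
        self.lr = learning_rate

    def observe(self, value):
        """Blend a new observation into the stored estimate."""
        if self.estimate is None:
            self.estimate = value
        else:
            self.estimate += self.lr * (value - self.estimate)
        return self.estimate

memory = DriftingMemory()
for reading in [20.0, 21.0, 25.0, 30.0]:      # new sensor readings over days
    print(round(memory.observe(reading), 2))  # estimate drifts toward new data
```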
- Yes of course, I cannot disagree with that.
However, Karl Marx's logic tells us that machines are not introduced until it is economically profitable.
We have let programmers work on our tax system,
but luckily they ran into trouble when they found that they didn't know how it functions.
In the field of law however, there is important information about how far A.I. has been developed.
The jurisprudence will change because the labor laws will change as a result of A.I.
There will be a lot more part-time work and work controlled by apps, and a lot more isolated robot work.
Back when I worked with this, case law was being developed
with regard to whether we should punish the doctor
who used an expert system, or whether we should punish the developers of the system.
The field of law can always show us how far technology has been developed,
and technology is only introduced when it pays off-- either because technology
makes job functions more efficient, or because it makes them cheaper.
The Cloud, where we put our IT assignments, is now shaken to its core
because we are now creating algorithms that can create algorithms.
Machines can in a relatively simple way be programmed to program themselves.
Therefore a huge number of people whom everyone thought were the hope of the future will now be jobless:
the programmers.
This will drastically change our modern state of affairs.
- Interesting.
Ole talks about how human beings are characterized by something that robots don't have, namely values,
and while it might be difficult to implement values into robots, might the robots change our values instead?
One could ask this instead.
Cathrine, in the same way that nuclear power changed us a bit,
in a way it made us more peaceful, because we no longer dare wage war against each other.
How do you think A.I. will influence our ethics and behaviour?
- First of all I want to address the common notion that A.I. will make life easier, more efficient, and cheaper.
However, I would like to begin with a thought experiment.
Imagine we can create a technology that can make things faster, more effective and so on.
Unfortunately it results in 1.2 million people losing their lives every year.
Are we willing to pay this price?
(Cathrine Hasse, Cultural Analyst)
Furthermore, many of the people at risk are those who have it the hardest,
both in third world countries and in our western societies.
Do we want this technology?
We already have it, because it is the car.
The car was accommodated by constructing new cities with the aim of making room for it.
This gave rise to new jobs and new infrastructure, as well as an array of new ethical problems,
problems we had not predicted or thought about before, but which we now had to consider.
Is this intelligent or could we be smarter, one could ask?
Could we be more intelligent as people, and ask some relevant questions before all these things happen?
Some will say that A.I. is the answer to some of the problems we have created with cars.
I believe that is possible, but not until we have changed society in numerous ways,
because 'self-driving cars' is the way language fools us.
Self-driving cars are not self-driving.
They are machines within an infrastructure that supports their self-driving capability.
I think you (Thomas) would agree with me that Pepper is not autonomous.
Autonomous technologies do not exist.
In order to have self-driving cars we must change the infrastructure, the rules and the laws.
We must change our thinking about what we want and what we don't want, where we can and cannot walk,
and I believe it is naive to think that self-driving cars will be driving in and out between pedestrians.
In order to be intelligent we must take into consideration how these technologies might actually develop.
We must consider what we want and what we don't want.
We want a lot of it, I think, because it is really smart and it will make many things easier and better.
However there is also a lot we would like to avoid before its negative consequences become reality.
- Can you expand on your examples?
- Yes, as mentioned, do we want to change our cities and landscapes to accommodate self-driving cars?
Another example is how A.I. can benefit healthcare.
With machine learning we can create very individualized diagnoses by finding patterns in large data sets.
This also requires input, however, and we see a tendency for people
to become more focused on what they might be suffering from:
"Am I suffering from this or that disease?", they constantly ask.
You may know Jerome K. Jerome's story where three men sail down the River Thames.
In the boat is a medical encyclopedia, and by the time they reach the harbor, the man who had been
reading the encyclopedia had diagnosed himself with each and every disease described in it.
We can predict problems in this respect.
If every morning we all had to take a blood sample that is sent to an A.I. in order to tell us how our heart is doing,
or if we have preliminary signs of cancer, which we all have, what will this do to our sense of humanity?
In the same way as with the cars there is an ethical dimension to this:
who defines the framework for this technology and what it should do?
It is the well-known WASP pattern: white, often English-speaking men from the Western world.
There is a clear lack of diversity among those who decide on how these algorithms are developed.
New numbers show that the number of women working with A.I. is decreasing--
also in Denmark, although less so compared to the European average.
There is also a lack of people from the Third World, ethnic minorities and so on, and
I think we have to work very seriously to change this in order to give the field of A.I. a proper ethical dimension.
- So a way to avoid some of the pitfalls of A.I. is to get more women working on it?
- Women and others who are not currently represented.
- Yes.
- And it does not have anything to do with gender or skin color, but about people with a diversity of experiences,
because these technologies can come to control our behavior,
and if they are only adapted to a small minority that we then all must adapt to,
that is probably not for the best.
It hurts a lot of men too, by the way.
- Would any of you like to comment before the next question?
- Just quickly, I think this is also relevant with respect to working across fields.
An engineer is not necessarily an expert on ethics or good interactions between humans and machines.
Therefore it is important that there are philosophers, psychologists and others working in this field,
so that it has a broad foundation.
I also very much agree that we should have more diversity within the field of A.I., and we are working on it.
It is super important.
I am coincidentally a white western man...
- That is also okay.
- Thank you.
It is also difficult for me to do anything about,
and it is not good for me and my field if things become too enclosed and nerdy.
This is very much about what has already in a way been addressed, namely trust.
We have to eventually be able to trust that these machines will do good for us.
Otherwise nothing else matters.
And how do we create trust? It has to do with human relations; we have to understand what creates trust.
This is much more complex than how few car crashes driverless cars are responsible for.
It is also about emotional relations; whether the system is comprehensible and if we can relate to it and so on.
- You could probably also have more fun parties at DTU with more diversity.
- For sure.
- Anyway, Ole, you also had a comment?
- It is in line with what is said.
We must not forget that most western technologies have been developed in connection with war.
The computer was developed during the Second World War as a way to break enemy codes,
but also as a way to find targets to shoot at sea.
The goals of war shape the development of A.I.
David Marr's pattern recognition techniques are now used in the missiles and drones we have today.
It might be that very strong sectors in society will develop this technology and patent it,
make it their property, and use it to re-shape the democratic society.
We should not forget these things when talking about the ethics of A.I.
"Everything begins with war," said Heraclitus, who came a little later than Thales,
and everything does end in war it seems, unfortunately.
I don't think we should imagine A.I. robots that fight on our behalf, but much, much worse scenarios.
That, however, is more along your area of expertise (Per).
- Yes, that takes us to Per.
We have learned something about what a human being is by reflecting on what separates us from animals.
Now that we have developed A.I., the question is: what can we learn about ourselves from reflecting on A.I.
and the stories told about A.I. and its potentials?
We would like to hear your view on this, Per.
- Yes, I will try to say something as thoughtful as my three colleagues.
(Per Juul Carlsen, Film Critic)
Perhaps it was sent from some A.I. god, because I brought along a recent newspaper article,
and the headline is: "We no longer fear the future. We fear ourselves."
The article is about a new computer game where humans are feared and A.I. robots are level headed.
Not to denigrate or make fun of those who wrote the article, but this is hardly something new.
A.I. literature and movies always seem to tell us to fear ourselves.
It is their main message.
Take for instance Mary Shelley's story of Frankenstein, from 1818;
a woman with a very visionary story about a scientist who creates a creature with artificial intelligence.
You can debate how intelligent the monster is, as well as if it is A.I.,
since it is a dead brain brought back to life.
However, the monster is exhibited as a creature we should understand and have feelings for,
whereas the mad scientist Frankenstein is the one we should fear.
There is a red thread throughout much of literature and cinema that it is the mad scientist we should fear,
so I believe that literature and cinema have always told us to fear ourselves,
and I view many of these films as a spin-off, or perhaps as a way to play around with the creation myth.
First there is God, the oldest generation. Then we come along ...
Not necessarily the biblical or Christian God, I should stress,
but that which has created the space in which we exist.
But after us comes the artificial intelligence,
and many of the movies I have seen revolve around the question of
who is the creator, who are the humans, and who is the A.I.?
And from this you can create some very exhilarating dramas.
This article mentions an interesting passage from author Fredric Brown's story
in which the leader of the world helps to create a robot.
The leader then asks the robot, "Is there a God?", to which the robot answers, "Yes, now there is."
I think this is very telling of how sci-fi movies play around with A.I.
Metropolis by Fritz Lang, from 1927, is the first example of a robot in a movie, as far as I know.
It is a complicated movie that is difficult to figure out, because it was originally a very long movie,
more than three hours I think, but many in the German film industry found it too long.
One and a half hours was plenty,
and therefore it was cut many times, and it was difficult to know what the original story was.
Fritz Lang did eventually create a version that makes the story more clear.
What Metropolis plays around with is a robot created by, again, a mad scientist, Rotwang,
to resemble the woman he loved and lost.
The robot looks like Maria, who is the heroine in the film, so Metropolis has many facets.
It is a story of love and gaining power and more,
and it is in many ways here that the A.I. genre begins, not to discredit Mary Shelley's Frankenstein.
I made a little list of movies that have been important to the genre of A.I. sci-fi films.
I will try to be brief.
2001: A Space Odyssey is another example,
the story where the A.I. computer HAL is in control of a spaceship, and where an astronaut must kill it to survive.
HAL has killed the two other astronauts,
not because it is evil, but because it was the only way it could achieve its goal.
This is another characteristic of A.I.
A.I. in movies often has a goal,
and you can then debate whether it is humans who are responsible for what happens, or if it is God, or who?
It is often hard to say who the creator is in A.I. sci-fi movies.
Blade Runner is another example where Tyrell is a kind of god creating replicants.
The replicants come to Earth to conquer it in order to live longer than the four years
they have been programmed to survive.
Here again we have the story of creating an A.I. that wants to be even stronger than us.
In Greek mythology, Prometheus likewise creates human beings, but the humans are not good enough.
Prometheus has to steal fire from Zeus in order to make humans stronger.
In Christianity too, as we know, humans rebel and have to be put in their place.
The human is somehow never good enough and feels inferior.
But the movie that has probably been the most influential, with respect to how we perceive A.I. ...
We were a bit worried, Ole and I, when we saw Pepper, because we sensed there is something in there to fear.
A film such as Terminator, based on a future where human beings are being eradicated by A.I. robots,
and a robot sent back in time to kill the one human who can defeat them,
is a story about how we should fear the robots; however, they are created in the image of mankind.
In today's cinema there are a number of A.I. movies that are more interesting, in my opinion,
perhaps because we have fortunately become smarter, but also because A.I. is drawing nearer,
so there is a need to tell more important and complex stories.
A movie like Her by Spike Jonze is a very good example of this.
It is sort of a romantic comedy about a man who is able to create an A.I.
What is it called in computer language? Like Siri.
- (audience) Personal assistant.
- Yes, personal assistant.
He cannot get his love life straight after his wife leaves him, so he creates this A.I. that he talks to, that he
becomes really good friends with, and that can create music. They kind of develop a love-like relationship.
You never see her, but she is very much present in this story, which I highly recommend that people dive into.
It is also a satire reflecting on how we humans function nowadays.
Pixar's animated movie WALL-E is also a lovely movie, where God, the human being, has become obese.
Humanity has escaped Earth while little WALL-E is back on Earth cleaning up,
when it realizes something is very wrong.
I find it a very amusing perspective how this A.I. realizes that 'God' is completely misguided.
Finally, I want to mention Ex Machina, quite a new sci-fi movie, a couple of years old,
about a God, a human, who has developed an A.I. that rebels.
She is so intelligent that she can outsmart her master.
Like Her and WALL-E it plays around with some very fascinating elements, because she is sexy.
She uses sex to get the upper hand on her master.
To round up my run through of important A.I. sci-fi movies, what is very interesting in many of these stories
is how the human being is imperfect.
We create an A.I. that we wish to make smarter than ourselves,
and this A.I. also feels insecure and therefore wants to exceed its master.
In this way there is a dizzying competition over who is most important: God, human beings, or the A.I.
By the way, in the movie Ex Machina the creator of the A.I. called Ava, as in Adam and Eve, says that
A.I. will one day look at us humans the way humans look at fossils.
That is to say that one day the A.I. will be so intelligent that human beings won't matter anymore.
I am sure Ole would worry about that.
You mentioned how you are worried how things might unfold.
- Should I answer?
- Yes, you are welcome.
- What you indicate in the end is something we have not touched upon yet,
which is that this is at the core a political discussion,
and thereby it is a relationship between power and information.
That is, there might be computers that can predict what happens when, say, methane gas is released in Siberia,
which probably won't be long from now,
and that can predict what will happen when Antarctica melts.
However, there are people who first have to decide on whether or not they want to make use of this knowledge,
and certain people on the other side of the pond who do not seem interested
in making use of the knowledge we have about what the world is really like.
With regard to technologies, there are three places they always impact first:
the economy, where they make the powerful more powerful, usually the board of directors rather than the employees;
it could have been used the other way around, but that is rare.
Secondly, they break through with regard to sex, and that is how it has always been;
lastly, they break through with respect to war.
Now if democracy follows humanistic ideals, the technologies will also impact handicapped people,
elder care and so on, but only if society has a certain idea of what it means to be human;
the point being that it may be that A.I. helps make the future look bright,
while creating explanations from previously unknown premises.
It may be that A.I. can find so many intelligently conjoined premises
that it enables us to know something that we could not know by ourselves.
But who wants this knowledge?
It is up to us to decide how much we want the A.I. to take hold of our reality.
I find this enormously essential,
and it arises out of ethical attitudes, so-called humanistic attitudes about what it means to be human,
but behind this there is an element of coincidence, in that it depends on who at present is in power.
And I agree with you, Cathrine,
that those suffering the consequences of A.I. are those living south of our borders, where the waters are rising.
So A.I. is perhaps just a way of saying that we should all just pull ourselves together
and consider how technologies can potentially develop and be used, and for what purpose.
After all, Denmark is a democratic society, but what do we get instead?
A disruption committee!
It is laughable, or tear-inducing would perhaps be more proper,
because what is disruption?
It is something that destroys that which has been.
But what has been is a humanistic culture, and it is disrupted by the movement of technology itself.
And this is dangerous if it is the development of A.I.
- I agree with you, Ole, and I think your run-through is interesting, Per,
because one of the things we see when we are out working with robot laboratories
such as yours (Thomas), around Europe,
is that the robots they create do not look like the robots we see in movies.
I think this raises an ethical question,
because the problem is that many politicians and decision makers think the technology is something other
than what it actually is.
This means that they are more scared of it,
but also that they cannot make the right laws or think about law in a proper manner,
because the kind of robots we see are very far removed from the kind of technologies described in movies.
Interestingly, people love to tell these stories about how we can create ourselves,
but as you mentioned before, Thomas, there are two ways to go within A.I. research:
one is about copying human beings and recreating the human brain.
The other way is to work on a whole new way of creating intelligence,
and I think we must find a whole new way of speaking about A.I.
so that we emphasize that the intelligence we are creating
is something that can be used for practical purposes in everyday life, but it will never be what Her is,
or what Ava is from Ex Machina, because this is not what they are trying to do out there in the real world.
They are not actually trying to make these human-like robots.
What A.I. researchers do is try to create systems
that must be guided by an informed political debate,
but often it is not, because politicians, like everyone else, love these Hollywood stories about what A.I. is.
Lastly, I would like to mention E.T.A. Hoffmann's lovely story about the robot doll, Olympia,
who fools a young man who falls in love with her, as well as an entire class of people.
They all love bringing her to tea parties, because she always sits there nodding her head,
saying, "It is very clever what you are saying."
Eventually she is exposed as a product of a mad scientist.
Now, one of the things the robot always said was, "I am never bored around you,"
and after she has been exposed as a robot doll, E.T.A. Hoffmann writes that it became extremely popular
among young people at social gatherings to sit and yawn, in order not to be mistaken for a robot.
- I think it is worth mentioning that at one point a professor of economics
analyzed the econometric models of prediction.
He found that they are about 90 percent wrong, I believe, but that has not kept anyone from using them, right?
And it is possible that this will also be the destiny of A.I., that now we have it,
and have invested so much in it, we will pretend that it has something sensible to say.
- Now let's have some questions from the audience, so feel free to raise your hand.
There is one here.
My colleague Stefan will bring you the microphone.
- My question is about ethics.
You talked about western/American/European ethics with respect to the development of A.I.,
and it was stated that the Chinese know more about what the Americans do in terms of A.I.,
than the Americans know about what the Chinese do in this respect.
I fear that the ethics emerging from China, a completely different culture, also with respect to ethics,
might quickly surpass us, if we don't somehow regulate this area.
What are your thoughts on that?
- The question is what ethics mean in this connection.
China is a special country, an old communist dictatorship.
However, with respect to taking steps to stop pollution, for instance, China is quite ethical--
if ethics means to not accept levels of pollution that diminish our chance of survival as a species.
Therefore I am much more afraid of (US president) Trump than of China,
also because there is a certain ethical logic to much technological development
that says we should no longer think in terms of sustainability but in terms of resilience,
because sustainability is no longer possible.
There is no longer a balance between nature and society.
The Chinese seem to have realized these things, and they have a philosophical background that enables this,
that of Confucius, Mencius, and Laozi.
They have a very fine ethical tradition if they want to make use of it, and it seems that they are beginning to.
- This draws attention to the fact that A.I. is culturally ingrained.
It becomes something different depending on who develops it.
For instance, what is acceptable behaviour in traffic depends on what country you come from,
so a driverless car can be something very different depending on whether it is from Italy, Norway, or India.
This is just a small example, because all these different opinions on what is acceptable behavior
have to be programmed into these systems.
Therefore it is very important that in Denmark and in Europe we have our own version of what A.I. is,
and I hope we get to choose whether or not we buy into these Chinese systems.
Technologically they are advanced because they invest a lot of money and have very advanced technology.
However, when it comes to understanding what you can do with A.I., and its theoretical foundation,
they are in no way ahead of us,
so I am not afraid that they will in any way overtake us.
We have some very strong and good traditions in Europe.
These kinds of things have to be regulated, however, and with regard to a disruption advisory board,
it is very important that the subject receives political attention so that we can discuss how to regulate A.I.
The problem is that it is difficult to understand what A.I. is.
I spent many years trying to understand it.
A.I. is not like building a truck or a spaceship where it takes a couple of minutes to explain what it can do
and then you can decide on how to regulate it.
The problem with A.I. is that it is a very general concept and it develops so fast.
This leads us to wonder what to do and to this story of disruption,
a story of exponential growth, and how things develop so quickly that suddenly it is too late.
It is a very dangerous story, in my opinion,
because people are too quick to accept it and say we have to act quickly.
I think the right thing to do is to do things slowly,
and to regulate it and make sure that we do not become slaves to the technology.
After all, it is the technology that is supposed to help us and not the other way around.
There is a fear that if we do not react quickly and get this technology into our lives then we are left behind,
and therefore we must charge ahead without thinking.
I think this is very wrong.
If there is something that should be a trademark of Europe or Denmark,
it should be to do things in an orderly manner, thoroughly and ethically responsibly,
so that we are sure that it works, and have systems that are explainable.
I have talked to people from the European car industry, where there is a different perspective on driverless cars
than they have in the American car industry.
They are a bit worried about how fast things are going: can we make sure it is safe enough?
They have felt pressured by how fast things are going in the US, and having to keep up with the pace.
So A.I. is a big global challenge that does not have a simple solution.
What is important is that we do not simply buy into the Chinese and American way of doing A.I.
I believe we can be more thorough and think things through.
- I completely agree with the things said.
I just want to add that A.I. is based on algorithms.
The technology is one thing; it is very complex and keeps being developed further, however the question is
what you put into these algorithms, and this is similar to restrictions we put on all kinds of other parameters.
A.I. is just a new tool in this regard.
And here you can say that China is not just one thing.
China consists of many different voices, and they are not heard.
A.I. could be used to give these people a voice if the right algorithms are used.
Therefore it is more a question of how we use these technologies,
although there is no question that it also has the potential to remove protest,
collect data about individuals and so on.
This is one of the worrying things about Facebook.
I believe that one of the biggest disruptions in modern times
is the way Facebook has been held accountable for their algorithms.
It has been very disruptive and very interesting, and we can hope the same will happen in China.
- Niels Bohr wrote somewhere in his philosophical writings
that just because physicists operate with very complicated equations and experimental equipment,
it does not mean that they are capable of understanding it better than what everyday language allows for.
This is a very interesting consideration in my opinion,
because behind this is perhaps a hint about the fact that there might be a big confrontation in the future
between those who are A.I. literate-- who can interpret these systems
and understand the language of programming, and the logic and mathematics behind it,
and those who cannot.
We get a new druid caste of people who have a knowledge that they can keep to themselves,
or let out in ways that we cannot control and that might be good or bad.
I believe this to be a big ethical problem.
- Yes, we have two questions in line down here.
- Yes, thank you.
We all know that human beings have charisma and so do animals.
Do you think it will ever be possible to develop a robot with charisma?
- I think Thomas has more to say about the purely technical aspect of this,
but we have colleagues that have worked with these robot boxes that can fetch and bring things,
and it was only when, for the fun of it, they gave them eyes that people began to take notice of them.
They would talk to them, communicate with them, and if they bumped into something they said,
"Oh no, did you hurt yourself? Poor thing."
So one thing is whether, from a purely technical aspect, we can create charisma.
The other aspect is that we judge it to be charismatic, so it is also something that comes from us to the machine.
To follow up on this, if you are a child with a teddy bear, you might also feel that it has a kind of charisma.
We have a tendency to convey human qualities to things that in some ways resemble ourselves,
also known as anthropomorphizing.
Just like this robot.
It does not have to do much. It's just that it looks a little cute.
Right now it is looking straight at me, and something different is happening to me emotionally
than if it was just a cardboard box with some plastic buttons attached.
Now I am not an expert on charisma, but in the attempts at making more and more human-like robots
(we call Pepper a humanoid robot even though it is supposed to resemble more of a comic book character),
the phenomenon known as the Uncanny Valley occurs.
In the beginning people think this is fine when robots look more and more like humans,
but suddenly they become so realistic that it becomes scary, because we can still sense it is not quite real.
You can go to YouTube and find examples of these human-like robots,
and I think most people would agree with me that it is a bit spooky,
because something is missing.
With respect to charisma, I think it must have something to do with the connection between us--
the way you react, the way I look at you--
and I think this depends on the fact that we have brains that are somewhat alike, and that we have the same
sensory system and brain architecture,
so it might be very difficult to create true charisma.
- Incidentally, is charisma something we project onto the robots?
I think we should be careful not to reduce our ideas about robots to anthropomorphizing.
Think of the robot they had in Greek antiquity.
It was called the Oracle of Delphi.
People came from all around the world to hear what this lady sitting in the marijuana mist, or whatever it was,
had to say, but that was because they had ascribed authority to her.
What could give computers the kind of aura or charisma they can actually possess
is the authority ascribed to them.
It can have to do with priesthood, but again, it is a question of politics.
- Let's have another question.
- I am looking at this thing (Pepper).
It does look like a cartoon character actually.
Now we are focusing on A.I., however there is also something known as feelings.
Is that something computer programmers work with or think about?
Do they have as a goal that computer technology should resemble people,
so that perhaps computers could begin to think in terms of associations?
Can we get a fucked up computer like some human beings are?
What are your thoughts on that?
- Thinking in terms of associations is something that can be done.
With regards to feelings, that is very complicated, and perhaps you (Ole) have some input here.
It is a very big philosophical project,
because what are feelings, and can robots feel or can they only simulate feelings?
Right now we can only make them simulate feelings.
I can decide if it should look happy or sad. It would be fake.
Then you can ask whether we should go there.
This is a very interesting question because like I mentioned about Uncanny Valley,
if they look too much like us it can be a bit scary, because they are after all not like us,
so it is not obvious that the best way to go is to give them something resembling human emotions,
but what they should have is the ability to put themselves in our place,
because it is not nice to have a robot that cannot understand what we want and what we think.
This kind of robot will always interrupt.
We have many examples of hospital robots that do not sense the world around them,
that don't move away when a patient tries to pass, and that interrupt the nurse when she is on the phone,
because they cannot decipher the social context and understand what is expected of them.
Central to human beings is our ability to put ourselves in each other's place.
We understand when people need help, even though sometimes they don't say it themselves.
Based on their actions we can decipher what people are trying to accomplish,
and suggest to help carry their bags or hold the door, et cetera.
This sounds like empathy but it is actually more basic than that.
It is a requirement for empathy that you can put yourself in someone else's place.
Empathy is this ability, plus some sort of emotional involvement.
So a robot might be nice, helpful and altruistic, not necessarily because it has feelings,
but because it has the combination of the ability to put itself in our place,
and is programmed to act altruistically: if it sees someone who needs something it will try to help.
- Many of the robot designers we have visited are completely uninterested in the question of feelings,
because their robots have to do something specific, where feelings are not relevant.
Where we see people interested in working with feelings with regards to robots, it is primarily social robots.
The more robots come into contact with humans, the more the question for the designers becomes:
how do we get humans and robots to like each other?
As you know (Thomas), there are freeware algorithms for smiling, crying and so on, that you can download.
The result of this is often uncanny because the robot has to know exactly when to smile at a hospital,
for instance, and that is very easy for a robot to get wrong.
Therefore many think we should just not do these things, because people are too complex.
The very fruitful collaboration between robots and autistic children
is an area where feelings have been kept out.
They get way too many impressions because we humans constantly gesticulate and express emotions,
and robots don't have that emotional register.
- Immanuel Kant once said something to the effect of, "The person who can use his reason
would be able to think critically, put himself in anyone's shoes, and be in tune with himself."
Thankfully we have not discussed these things with respect to computers,
apart from putting yourself in someone else's place.
However, if that is what characterizes a human being, we are in trouble,
because I don't think people are very good at that.
Perhaps you can create algorithms that say you should watch out for this or that person
because they cannot accept that you watch out for them,
and it is easy enough to use algorithms to simulate feelings,
because feelings are what are behind our intentionality, our goals,
and the way we focus our attention on the world around us.
They might be very complex, but there are not that many of them, actually.
Look at the Christian deadly sins and virtues. There are seven of each.
These are the base emotions that we should either avoid or aspire to.
Since I began working with A.I. I have had this big objection sent my way: that it cannot simulate feelings.
Of course it can.
It is one of the easiest things because it is just intentions.
Perhaps they do not have as much emotional depth, so that if we could look inside of it,
we would find that it did not suffer from lovesickness; however, that might be nice to avoid anyway.
- I am trying to fit these wild stories of Terminator and so on into the reality of self-driving cars and such things.
It is not always easy.
However, movies like Her and Steven Spielberg's Artificial Intelligence, which I did not mention before,
are movies about emotions.
Artificial Intelligence is about a boy who wants to be able to love and to be loved.
Her, with Joaquin Phoenix, is also about a man who tries to find love,
who discovers a complex version of Siri.
It is also a story of emotions, a romantic comedy,
and to a large extent the feelings he has are something he projects onto the A.I.
He feels he has a friend and projects emotions onto it.
And then you can ask, is it the A.I. that has feelings or is it the human that projects emotions onto it?
I had an experience with my daughter, who like so many others has a phone with Siri on it,
and at one point she felt like it was alive, because when she said certain words it instantly responded to her.
It was communicating with her.
When listening in we all got the feeling that it was a living entity.
We projected emotions onto this silly little phone,
and I think with respect to emotions and A.I., it is not just about what we can put into these machines.
It has as much to do with what we can project onto them,
because we have emotions and imagination, and because we function the way we do.
- Let's have two more questions.
- There is someone waiting here.
- We have heard a lot about how we should be worried,
and I am especially worried about the competitive market.
We have been lucky to live in a time that has favored democracy, and where it has been important to work.
However, now this A.I. is probably going to take our work away from us,
and a very important issue now is to organize ourselves so that we do not become completely... well, oppressed.
If not by the computers themselves, then we might be oppressed by those who own the land,
or whatever does not fall in price,
because what will likely happen is that all the things that were previously jobs, and that we could compete over,
become cheaper and cheaper, so that we can have lots of fantastic phones and 3D screens and such things,
even though we don't have money for food, and especially not rent, which is the really expensive thing
because it requires land, and I cannot imagine how we can create that even with high levels of technology.
I think we should be happy just to have an apartment on the minus-seventh floor or something like that.
(Holger Bech Nielsen, father of string theory)
- It is great that there are people who dare to spell out the dystopias.
I think these are very, very intelligent considerations, because it is true that there are more and more people
on Earth, and therefore land, housing, and probably also food, unless we create new forms of food,
will be scarce resources.
Therefore this is what we will fight over, and should invest in, so as investment advice I agree with it.
- Another question.
I think someone got in line over here.
- It connects a bit to what Holger Bech talked about.
I am from the IT industry, and my understanding is that
the primary drive behind A.I. is actually to invent something that can help us control
what we already have put out there, but that we cannot find someone smart enough to control,
so we have to invent something new that is smart enough to manage it for us.
So, we have centered this talk a lot around people.
What do we want with regard to people?
I would like to hear your thoughts on ...
Let's imagine that we take as our outset the movie Transcendence with Johnny Depp
where he is uploaded to a giant data center.
A development happens, a kind of singularity,
where the computer, the A.I., becomes so skilled at repairing
that even if people are shot they can be repaired on the spot.
Someone mentioned how self-driving cars just came ...
There was no one who had an opinion about it, it just came by itself;
can we imagine that this is a further development in the wake of the internet?
That the internet is a kind of central nervous system that connects people,
and right now we are developing technologies that resemble a brain,
so that we all become cells in a giant organism?
That is, the anthropocentric, the human-centered, goes out the window,
and this is something that is completely autonomous from us, and so are we just
changing from being single-celled organisms to becoming a societal organism?
- Well, at the same time that new technologies have evolved, new theories have blossomed,
among other places within the humanities and the social sciences,
theories known as Post-Human Theory.
What they point at are two things:
one is the technical development that can change us as human beings; destabilize us, put us on the sidelines,
because what we used to be is being taken over by machine-like or newly formed biological beings,
like you also see in many movies.
The other way of understanding post-humanism is to think of the human-like as something different;
to rethink what it means to be human.
Some of the theories within this kind of thinking compare quite well to the development in technology
because it is about how we should no longer think of ourselves as individuals,
nor even as rational individuals, or especially intelligent individuals.
We should think of ourselves as something that is already connected in a kind of collectivity
with the world around us,
and this calls for a new kind of responsibility;
a new kind of human responsibility that is a collective responsibility both for each other
and certainly also for the globe, and for other living beings.
In both kinds of theory, the human, understood as a very intelligent individual creature,
is put on the sidelines, but two different things replace it:
one is a technical development that just removes what we think of as being human,
but the other is an expanded kind of humanity,
that I think might also be the answer to those dystopias that Holger touched upon.
If we can think collectively intelligently,
perhaps then we will feel a responsibility to prevent people from ending up in that terrifying situation.
- When you refer to the movie Transcendence, as far as I remember it,
you are pointing to a fundamental philosophical problem, and fittingly for the day of philosophy,
this problem goes back to the time before Thales, back to all religions:
the idea that humans can live non-corporeally.
That is, that there is an existence that is not bound to the physical body.
That is what Transcendence is about.
This is then just projected onto the computer, which is itself just a physical system;
a dualistic contrast between soul and body.
Soul means that which comes from the sea.
Consciousness often means something to do with breathing-- like spirit, meaning breath.
But the idea that we live forever, in some way other than how we appear before one another
in our fragile bodies, is a way of thinking that I think will never go away.
This thinking could well be, as I heard you indicate, one aspect behind the dream of A.I.,
and I don't think we should disdain this dream of being angels.
- With regard to the last two comments and the dystopic scenarios,
it is also important to note that the A.I. we make does not create its own intentionality.
We are not dealing with systems that suddenly feel like doing something on their own,
or develop something different, or think differently from how they have been programmed to.
That is not to say that there cannot come a point in time when it happens,
but thus far all these systems are programmed to solve a specific task, and they keep within these boundaries.
This means that it is us humans who develop these systems and who are responsible for them,
and who set the limits for what they are allowed to do.
There are some interesting things happening right now globally,
for instance with respect to A.I. that makes decisions about who is eligible for a loan or for parole from prison.
Powerful forces want to use A.I. for this as it can examine enormous amounts of data,
so why let someone sit and look through a whole bunch of papers about my personal finances
if we can just have a computer program that can decide whether or not I should receive a bank loan?
However, people are trying to regulate this and say that as humans we have a right to an explanation;
we cannot just have a computer algorithm that is not transparent--
a black box that doesn't give us explanations.
This places a responsibility on those of us who develop A.I.
We should be happy that there are people who think we should pull ourselves together.
We must not be tempted by those who say that we must accept that A.I. is just a bit of a black box,
and that we cannot expect to understand how these systems function or that they can explain themselves.
We have to insist that they must be able to do these things, and this takes a huge amount of research.
We don't want to end up in a situation where crucial decisions about my life are being made by an algorithm,
and where I cannot find out why it decides the way it does.
There was a woman who uploaded a picture of a cake to Instagram,
and the computer algorithm thought it looked like something other than a cake,
so her account was closed permanently and she could not have it reopened.
This is a small thing, and you can laugh at it; it's just Instagram.
But in the US they also use this kind of system to decide whether people should get parole or not,
so we are dealing with decisions at a very significant level.
So what am I saying? I am saying that it is actually fair enough that we have this dystopic fear,
because there is something genuinely dangerous here.
However, it is not dangerous because the machines themselves run wild or decide to take over the world,
but because we humans are a bit too naive, a bit too fast,
and don't quite demand enough from these algorithms, and then it can spiral out of control.
- Just briefly,
is it not true, Thomas, that the very principle behind machine learning, which is a part of A.I.,
is that you actually cannot explain the path from input to output?
That is the very foundation of this approach, which makes it difficult to explain why it does what it does.
- Yes, that is exactly how it is right now with many of these techniques,
and that means that more research is required in order to get this under control.
You can say that these artificial neural networks reflect our way of perception:
we get a sensory impression and then we categorize it-- pattern recognition, for instance.
Humans however can do much more than that.
We are not just a system of pattern recognition.
We also attach our experiences to language, and we can explain our decisions.
It might be that I use my sensory system to figure out how to act, and then maybe I make a mistake
and someone asks, "Why did you do that?"
Then I say, "I thought this," and so on, and that was why I did what I did.
Sometimes I am mistaken perhaps because I don't have full introspection,
but I can at any rate explain myself, and therefore there is no reason why we should be satisfied
with algorithms that cannot do that as well.
In fact, there is a big trend in A.I. toward tying these different types of A.I. together,
so that we get systems that can learn from experience, experience the world,
and recognize patterns, but that can also think logically, use language,
and explain their decision-making process.
This will not happen tomorrow or in two years, but we will get there.
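[Editorial note, not part of the talk: the black-box point in this exchange can be made concrete with a small sketch. The toy Python example below uses entirely hypothetical weights and rules. It contrasts a tiny neural network, whose only available "explanation" is its weight arithmetic, with a rule-based decision that can state its reason in words; combining the two is roughly what the hybrid trend Thomas describes aims at.]

import numpy as np

# A tiny "trained" network: 2 inputs -> 3 hidden units -> 1 decision.
# The weights below are arbitrary stand-ins for whatever training produced.
W1 = np.array([[0.9, -1.2], [0.4, 0.8], [-0.7, 0.3]])
b1 = np.array([0.1, -0.2, 0.05])
W2 = np.array([1.1, -0.6, 0.9])
b2 = -0.3

def neural_decision(x):
    """Approve (True) or reject (False) an application x = [income, debt]."""
    h = np.tanh(W1 @ x + b1)   # hidden activations
    score = W2 @ h + b2        # a single real-valued score
    return score > 0           # the decision -- with no stated reason

def rule_based_decision(income, debt):
    """A symbolic rule can state its reason alongside its decision."""
    if debt > 0.5 * income:
        return False, "rejected: debt exceeds half of income"
    return True, "approved: debt is at most half of income"

x = np.array([1.0, 0.8])
print(neural_decision(x))             # True or False -- but why? The only
                                      # "explanation" is the arithmetic above.
print(rule_based_decision(1.0, 0.8))  # (False, 'rejected: debt exceeds ...')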
- Yes, we are at the tail end of a very exciting debate.
To conclude a bit on how A.I. relates to our sense of self and identity:
this is an issue where, as you just mentioned, Thomas,
we have to get this artificial intelligence to function in a proper way,
and we all must take responsibility for it,
so one thing to take away from this topic is to be better at taking responsibility for the world we are creating.
Let's give a big hand to Thomas Bolander, Cathrine Hasse, Ole Fogh Kirkeby, and Per Juul Carlsen.
The Round Tower, May 28, 2018
Info: www.thalesday.com
Thanks to Markus Hornum-Stenz and others for use of pictures
Introduction sound: Caltech/MIT/LIGOLab
Outro music: H.E.R. tribe and Alfkil
Video created by Henrik Schøneberg in collaboration with Carlos Ochoa and Tara Lotus.
Subtitles: Henrik Schøneberg/Tara Lotus