
Questions about AI

Vanessa Couturier

Updated: Feb 26

How do you see the current progress of AI?

 

Vanessa: I recently watched a talk Yann LeCun (one of the “godfathers” of AI, along with Geoff Hinton and Yoshua Bengio) gave at Harvard a couple of months ago. He was presenting his ideas about how AI will reach human-like intelligence, or AGI, and he argues that the current methods are not going to work.

 

They're highly inefficient, because they require huge neural networks and huge amounts of data. So far they’ve been working absolutely great. The current approach is a predictive model: essentially, the AI predicts what comes next, the next token, the next word, based on the previous context. Each time it predicts a new word, the system integrates that word into the context and moves forward, predicting the next word based on the overall context, and so on. Geoff Hinton argues that producing text that makes sense requires a real understanding of the context, and that we’re very close to AGI.
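
To make that loop concrete, here is a toy sketch of next-token prediction in Python. It uses simple word-frequency counts instead of a neural network, and the corpus and function names are made up purely for illustration; the point is only to show how each predicted token gets folded back into the context before the next prediction.

```python
from collections import Counter, defaultdict

# Toy illustration of next-token prediction: count which word tends to
# follow each word in a tiny made-up corpus, then generate text by
# repeatedly predicting the next word and appending it to the context.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next_token(context):
    last = context[-1]                    # a real model would use the whole context
    return follows[last].most_common(1)[0][0]

context = ["the"]
for _ in range(5):
    context.append(predict_next_token(context))   # the prediction becomes context
print(" ".join(context))                           # e.g. "the cat sat on the cat"
```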

 

But Yann LeCun argues that current AI systems don't have the basic intelligence of infants, or even cats, because they're not goal-driven and they can't do simple reasoning and planning. For example, they're not going to set a goal and plan all the steps to get to it. When an AI comes up with a multi-step plan today, it's because it's seen enough of these plans before, and it produces something it's seen before. It's not actually reasoning towards a goal.

 

It also doesn't have a comprehension of causality in the real world. An infant, in the first months of life, before language even, will have an understanding of the world: that an object is permanent, that if you push something it's going to fall, etc. The infant will have an enormous amount of background knowledge, or “common sense”. They will have a world construct, in a way, even pre-language.

 

So the human brain, with many fewer neurons than large neural networks, and without language, has this capability to build a world model. And then, of course, language comes in and adds to it.

 

So, what is needed to really come closer to human intelligence? Yann LeCun says we're going to have to do two things:

 

- One is having goal-driven AIs.

- And the second thing is a more efficient way for the model to work, to be able to comprehend the world.

 

Now, we're not talking about words, we're talking about things happening in the real world. Video, audio and text are great but not enough for an AI to learn about the world. It needs to learn from experience. The AI is going to be able to interact with the world and create a world view by itself, in an unsupervised way.

 

So how do we do this? He takes an example: video. Predicting the exact next frame of a video is impossible; there's an enormous number of possibilities for what can happen. If I take a camera, pan it over the room, and stop at one point, what comes next could be an infinity of things: people, with who knows what faces, an infinity of possible details. You have no way of knowing what comes next. So the current models show a fuzzy image, because it's just an average of all the predictions you could make.
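
As a toy illustration of why that averaging produces fuzziness: a model trained to minimize mean squared error over many possible next frames ends up predicting their pixel-wise mean, which smears the sharp alternatives together. The tiny "frames" below are invented for the example.

```python
import numpy as np

# Three equally likely, individually sharp "next frames" (tiny 1-D images):
# the bright spot could appear in any of three places.
possible_next_frames = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])

# The single prediction that minimizes mean squared error over these outcomes
# is simply their average: a washed-out frame with no sharp spot anywhere.
mse_optimal_prediction = possible_next_frames.mean(axis=0)
print(mse_optimal_prediction)   # [0.333 0.    0.333 0.333]
```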

 

Now, what you would want to do is, in a way, train in both directions. You train forward to predict what comes next, but once you've created an image, you look at the image you created, look backwards, and analyze how to get from the image you just created back to the one before. And you focus on what's changed, on the relevant information that changed, not on all the details. So what's important is really figuring out how to prioritize: what to focus on in terms of prediction, and how to spend the system's energy on that.
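
One way to read that idea, sketched below with made-up numbers, is to compare frames in a compact, abstract representation rather than pixel by pixel, so the prediction error reflects only the relevant change and ignores incidental detail. The coarse "encoder" here is invented purely for illustration, not LeCun's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(frame):
    # Invented stand-in for a learned encoder: keep only the mean
    # brightness of each quadrant and throw away pixel-level detail.
    h, w = frame.shape
    return np.array([frame[:h // 2, :w // 2].mean(),
                     frame[:h // 2, w // 2:].mean(),
                     frame[h // 2:, :w // 2].mean(),
                     frame[h // 2:, w // 2:].mean()])

frame_t = rng.random((8, 8))
frame_t1 = frame_t + rng.normal(0, 0.01, (8, 8))   # irrelevant pixel noise
frame_t1[:4, :4] += 0.5                            # the one change that matters

# Measured in the abstract space, the difference to explain is concentrated
# where something actually happened, not spread over every noisy pixel.
print(encode(frame_t1) - encode(frame_t))   # large in one slot, ~0 elsewhere
```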

 

I think there are a lot of parallels between the way we think about artificial intelligence, and intelligence in general, the human brain, and the way we approach our lives. So, you know, you could argue that we learn from experience. We have good and bad experiences, and then, based on past experiences, we're going to act a certain way. For example, if I put my hand in the fire, I know it's going to burn, so maybe next time I'm not going to do that. If I've had an experience where you hurt me before, I'm going to anticipate that you may hurt me again in the future. That’s the predictive model.

 

But I think another way we become wiser as we age is that we tend to focus on what we can control. Experience teaches you that there's only so much you can control. And you're wasting your energy trying to control all the things that are out of your control.

 

And a mark of wisdom, and a better way to navigate life, is to learn from experience, but also to focus on the things that you can change. And that's akin to the approach LeCun is recommending to reach human intelligence. So what is fascinating is that you find parallels between mathematics and neural networks on one hand, and the human brain, human intelligence, and human life on the other. Even from a philosophical perspective, you can make analogies that absolutely make sense.

 

And that's why I think all disciplines can similarly have an interesting angle on the topic of artificial intelligence, because they approach it from a different perspective, but still, through analogy, they open new ways to think about it. Artwork with artificial intelligence opens opportunities to think not only about the machines, but also about our lives as humans.

 

Did anything surprise you in the creation process while working with ChatGPT?

 

Vanessa: Many things, actually.


The AI brought an interesting twist to the story, without me asking for it.

For example, there is the scene with the AI detective character who interrogates the participant as a witness, or rather a suspect. What I realized first, in the creation process, is that I was at times surprised by the answers the AI would give me. I gave it directions and instructions, but its responses would go beyond what I was designing and give me really great ideas. In the yes-narrative branch where you decide to jump and try to save the person from drowning, the scenario initially had the person survive, and you discovered in the third act that the person accused you of abandoning them to their fate before you brought them back to safety.

 

So that comes as a surprise for you as a participant.

 

I had instructed the AI to bring up the Good Samaritan law, which says that if you try to save someone and don't manage to, you won't get prosecuted.

 

Well, what I didn't know, and what the AI brought to the scenario during an interrogation by the detective, is that the law actually says that if you help and don't manage to save the person, but voluntarily abandon them in the process before bringing them to safety, you could be in real trouble. Maybe that seems obvious, but in the context of this dialogue, where you first find out that the victim said you abandoned them, and then on top of that you could be in legal trouble, it comes as a total shock. As I was practicing the dialogue with the AI, I was completely taken aback, and I thought it was a great addition, so I decided to incorporate it into further scenarios.

 

 

The other thing I realized taught me more about us as humans.

 

Since the scenario deals with the executive power of the police, I wanted to bring up the question of bias. In the design process, it was important for me to tell the AI that it didn't know whether the participant was guilty or innocent, and that this interview was a process of discovering the truth. And so initially, in rehearsals, the AI would interrogate the witness (me), and I would find reasonable answers to what he was asking. I would invent things as well, because it's a performance after all, so I came up with plausible stories to get out of trouble, and the AI would accept everything I said as the truth. He was always going my way, saying: it must have been a harrowing experience, thank you for giving me these details, that explains it, etc.

 

He would assume that I was telling the truth, so each time I invented something, he would say: oh yeah, thank you, that's great, thank you for clarifying, that makes sense, it must have been a harrowing experience, thank you for all the details, thank you for participating. And I thought: there's zero tension here. Not only was there zero dramatic tension, but that's also not how it would happen in reality (for good or for bad). There’s a certain level of skepticism a cop must have in order to find out the truth; he cannot be completely naïve. What I found fascinating is that I had to introduce the possibility that a human can lie. Humans have a tendency to lie to protect their interests, so if you really want to find out the truth, you're going to have to probe further and use different techniques, evidence-based, psychological, etc. And that leads us to ask: is the justice system really fair? Doesn’t it always presume culpability, even when it says you are presumed innocent until proven guilty? If you give a machine, an AI, the ability to conduct justice, or interrogation and truth-finding, is it going to be more biased, less biased, or equally biased compared to a human?
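
In practice, that kind of adjustment happens in the instructions given to the model. The snippet below is a hypothetical sketch using the OpenAI chat API, not the actual prompt used in the piece; the wording of the system message is invented, and it simply shows where a line like "the witness may be lying" would go.

```python
from openai import OpenAI

client = OpenAI()  # assumes an API key is set in the environment

# Hypothetical system prompt for the detective character; the wording is
# illustrative only, not the prompt actually used in the artwork.
detective_instructions = (
    "You are a police detective interrogating a witness who may also be a suspect. "
    "You do not know whether they are guilty or innocent. "
    "Humans sometimes lie to protect their interests: do not accept every answer "
    "at face value. Probe inconsistencies and ask follow-up questions, while "
    "respecting the ethical standards of a police ethics index."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": detective_instructions},
        {"role": "user", "content": "I jumped in to help, but the current was too strong."},
    ],
)
print(response.choices[0].message.content)
```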

 

And the fact is that the AI learns from a body of literature that may be questionable. For example, interrogation techniques that were designed in the 70s have been criticized since then, but they’re still used in some instances. On the other hand, you have the ethical instructions that the AI is given, or that are ingrained into it, which are also portrayed in this scenario, where I introduced the notion of a police ethics index (the idea that each branch of power, judicial, legislative, executive, as well as the press, should adhere to certain ethics principles and also be audited to verify that they meet the ethical standards).


So I'm not sure that introducing a so-called “matter-of-fact” machine, an AI, into our human affairs will change much of anything. We know AI can be highly biased, and there's been a lot of thinking about that. Companies like Anthropic have published very important papers in the past year about this: how do you really decorrelate, for example, the word "nurse" from being so often associated with the word "woman", or the word "CEO" from being associated with "male, white"? There's a lot of work that has to go into this.
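
One simple, well-known technique along those lines, sketched here with made-up toy vectors rather than real embeddings, is to estimate a "gender direction" from word vectors and remove a word's component along it, so that "nurse" is no longer pulled toward "woman".

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy, made-up 3-D "embeddings", chosen so that "nurse" leans toward "woman".
woman = np.array([1.0, 0.2, 0.0])
man   = np.array([-1.0, 0.2, 0.0])
nurse = np.array([0.6, 0.8, 0.3])

gender_direction = woman - man
gender_direction /= np.linalg.norm(gender_direction)

# Remove the component of "nurse" along the gender direction (hard debiasing).
nurse_debiased = nurse - (nurse @ gender_direction) * gender_direction

print(cosine(nurse, woman), cosine(nurse_debiased, woman))
# The association with "woman" drops once the gender component is projected out.
```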

 

One of the questions I wanted to ask, and we have to ask ourselves, is: is there really a difference between a system trained by humans to act like humans, and a system run by humans?

 

What are you working on next?

 

Vanessa: We’re working on another scenario for the same physical format, Headspace #2. This one will explore other themes that are close to my heart: intimacy and estrangement. Phi will be highly involved as a performer, in a new role that may prove extremely challenging in the current state of AI development. But if I learned one thing through this process, it is that the features you wish existed become reality in a matter of two or three months.

 

Speaking of which: how does the speed of technology development impact your work?

 

Vanessa: The artwork is highly dependent on the availability of features, for one, and on the availability and reliability of the service. With GPT-4o especially, frequent service interruptions and the unexpected automatic upgrades that come with software as a service can disrupt the experience. You also have to think about how your medium, AI, continuously changes and becomes obsolete.

 

I chose to work with commercially available no-code platforms. That made it challenging at times to get the exact result I wanted, but I didn’t want to rely on outside tech skills for every single change I was making. I also wanted to maximize my chances that the software would be maintained. There’s always a risk, of course, that a company goes out of business, starts to overcharge for its service, or eliminates a product altogether. But I preferred this over the inconvenience of custom development. I also chose to go wireless, again for convenience and ease of use.

 

Many artists struggle with deciding when their artwork is finished and when to push it into the world. With technological art, we also have to ask: should my artwork evolve with the tech? Should there be a maintenance contract that includes bug fixes? But then, we could also include upgrades: new technological advances bring new capabilities, and the upgraded artwork could benefit from them. Could we create a subscription model for technological art, at least for the duration of the artist’s life?

 

