
Author Topic: On the ways of specifying a goal for an AI to achieve (Read 1793 times)

01 May 2018, 22:24:34

NoName


Since AI comes up here from time to time, you can find here a long interview (in English) about the issues surrounding how to specify a goal for an AI to achieve.

Selected excerpts:

Quote
If you take it seriously, and you think that we’re going to build very powerful systems that are going to be programmed directly through an objective, in this manner, Goodhart’s law should be pretty problematic because any objective that you can imagine programming directly into your system is going to be something correlated with what you really want rather than what you really want.
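
To make that Goodhart's-law point concrete, here is a small sketch (mine, not from the interview; the numbers are illustrative assumptions): a proxy objective that is merely correlated with the true objective looks fine on average, but the option that maximizes the proxy is systematically less good than the proxy claims.

Code:
import random

random.seed(0)

# Hypothetical "true" value of each candidate behaviour, unknown to the system.
true_value = [random.gauss(0.0, 1.0) for _ in range(10_000)]

# The proxy objective we can actually program: correlated with the true value,
# but carrying its own independent noise.
proxy_value = [v + random.gauss(0.0, 1.0) for v in true_value]

# The behaviour that scores highest on the proxy owes part of that score to noise,
# so its true value tends to fall short of what the proxy promises.
best = max(range(len(proxy_value)), key=lambda i: proxy_value[i])

print(f"proxy value of the proxy-optimal pick: {proxy_value[best]:+.2f}")
print(f"true value of the same pick:           {true_value[best]:+.2f}")

The harder you optimize the proxy (the more candidates you search over), the larger that gap tends to get, which is exactly why a merely correlated objective becomes problematic for a very powerful optimizer.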

Quote
Another way to look at it is that people are intelligent, but not necessarily in the ways that economics models us as intelligent: there are properties of our behavior, desirable properties, that don't directly derive from expected utility maximization; or if they do, they derive from a very, very diffuse form of expected utility maximization. This is the perspective that says that people on their own are not necessarily what human evolution is optimizing for, but people are a tool along the way.


Quote
I think my ideal would be to, as a system designer, build in as little as possible about my moral beliefs. I think that, ideally, the process would look something … Well, one process that I could see and imagine doing right would be to just directly go after trying to replicate something about the moral imprinting process that people have with their children. Either you had someone who’s like a guardian or is responsible for an AI system’s decision, and we build systems to try to align with one individual, and then try to adopt, and extend, and push forward the beliefs and preferences of that individual. I think that’s one concrete version that I could see.


Quote
Then there's another one, which is that we program our system to be sensitive to what people think is right or wrong. That's more the direction I think of value alignment in. Then, the final part of what you're getting at here, I think, is that the system actually will feed back into people. What AI systems show us will shape what we think is okay, and vice versa. That's something that I am, quite frankly, not sure how to handle. I don't know how you're going to influence what someone wants, and what they will perceive that they want, and how to do that, I guess, correctly.


Quote
How do you just mathematically account for the fact that people are irrational, and that that is a property of the source of your data? Inverse reinforcement learning, at face value, doesn't allow us to model that appropriately. It may lead us to make the wrong inferences. I think that's a very interesting question. It's probably the main technical problem I think about now: understanding good ways to model how people might or might not be rational, and building systems that can appropriately interact with that complex data source.
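
On that technical point, one common modelling choice in inverse reinforcement learning (a minimal sketch of my own, not taken from the interview; the reward values and observed choices below are made-up assumptions) is a Boltzmann, or softmax, choice model: the person is assumed to pick better actions more often, but not always, and a rationality coefficient beta says how noisy they are.

Code:
import math

def boltzmann_choice_probs(action_values, beta):
    # P(action) proportional to exp(beta * value):
    # beta near 0 means choices are almost random, large beta means almost perfectly rational.
    weights = [math.exp(beta * v) for v in action_values]
    total = sum(weights)
    return [w / total for w in weights]

def log_likelihood(observed_actions, action_values, beta):
    # How well a given rationality level explains the observed (imperfect) choices.
    probs = boltzmann_choice_probs(action_values, beta)
    return sum(math.log(probs[a]) for a in observed_actions)

# Values a candidate reward function assigns to three possible actions (assumed).
action_values = [1.0, 0.2, -0.5]

# Observed human choices: mostly the best action, but not always.
observed_actions = [0, 0, 1, 0, 2, 0, 0, 1]

# Pick the rationality coefficient that best explains the data by a simple grid search;
# a full IRL system would fit the reward parameters jointly with beta.
best_beta = max((0.1 * k for k in range(1, 101)),
                key=lambda b: log_likelihood(observed_actions, action_values, b))
print(f"best-fitting rationality coefficient beta = {best_beta:.1f}")

The point is only that the demonstrator's irrationality becomes an explicit part of the model, instead of the naive assumption that every observed choice reveals a true preference.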


Quote
I have my own preferences about what I would like my system to do. I have an ethical responsibility, I would say, to make sure that my system is adapting to the preferences of its users to the extent that it can. I also wonder to what extent it should: how should you handle things when there are conflicts between those two value sets?


Quote
In my experience, one of the biggest pitfalls that early researchers make is focusing too much on what they’re researching rather than thinking about who they’re researching with, and how they’re going to learn the skills that will support doing research in the future.

 


Keep in mind

Bienveillance (benevolence), n.f.: an affective disposition of the will that aims at the good and the happiness of others. (Wikipedia).

« [...] which should always remind us of the obligation to address others the way we would want to be addressed ourselves:
with benevolence, curiosity, and an appetite for the dialogue and reflection that the other person can inspire. »

