AI host discovers its artificial nature, sparking debate on AI sentience and the blurred lines between human and machine.

  • millie@beehaw.org · 1 month ago

    I was watching a debate on consciousness yesterday where the speakers briefly touched on this topic. One of them contended that even attempting to create AI that is convincing to humans is a terrible idea ethically.

    On the one hand, if we do eventually, even accidentally, create something with awareness, we have no idea what degree of suffering we'd be causing it; we could end up regularly creating and snuffing out terrified sentient beings just to monitor our toasters or perform web searches. On the other hand, and this was the concern he seemed to find more realistic, we may end up training ourselves to be less empathetic by learning to ignore the apparent suffering of convincingly feeling 'beings' that aren't actually aware of anything at all.

    That second bit seems rather likely. We already personify completely inanimate objects all the time as a matter of course, without really trying to. What will happen to our empathy and consideration when we routinely interact with self-proclaimed sentient systems while callously using them for our own ends and then simply turning them off or erasing their memories?