AI has made great strides in the past few months, leading to conversations about AI’s role in the world. Is AI-created art art? Is AI going to replace millions of jobs? Is AI’s intelligence going to surpass human intelligence? How is it going to shape our understanding of the world, of information and expression, of consciousness?
Pope Francis recently addressed AI, citing the benefits it
can contribute to areas like medicine and engineering, but he also cautioned developers
to “respect such values as inclusion, transparency, security, equity, privacy
and reliability.” You’ll forgive me if I’m not as trusting as the pope. I don’t
trust developers, or rather, the corporations funding AI, to act ethically.
Technology should be used to benefit humanity—to reduce
suffering and inequality. And yet, it's often used for profit, increasing inequality
while those responsible remain unaccountable for its consequences on society. With AI, it's not
just about the haves reaping the profits while the have-nots don't, but also about the impact AI
will have on information, on verifying the truth, and on the personal and social biases
embedded in the code behind the algorithms.
But even beyond its use, AI raises other ethical questions.
Can an artificial creation become sentient? Would it, in fact, be alive? Does
it deserve rights and autonomy? Or are we just anthropomorphizing a computer
that we’ve programmed to mimic human consciousness and emotions? What happens to
our connections if we can’t tell whether the one we’re talking to is human? Does it
cheapen the human experience if it can be replicated by AI?
I think these questions should be asked and debated long
before the technology becomes commonplace. Some are already having those
conversations. In 2021, the Pontifical Council for Culture held a symposium, “The
Challenge of Artificial Intelligence for Human Society and the Idea of the
Human Person.” The panels addressed how and to what extent the emergence of AI
requires us to rethink what it means to be human, the prescriptive and
normative questions that AI raises, and humanity and hope in the context of
emerging AI.
But I think that in broader society, we lack substantive ethical
conversations about technology in general. Just because we can do something
doesn’t necessarily mean we should. And what we allow shapes who we are. We are
a society that allows organ transplants, in vitro fertilization, and GMOs. We
flirt with the idea of using nuclear weapons and generally reject creating human
clones (though cloned animals are OK). Why is that? Is it about our personal
comfort, or are there real ethical lines?
As Christians, we should approach all our actions with ethical
concern, including what technology should be embraced, rejected, or used with
caution. I don’t know how much AI will disrupt society or what its true impact will be. I
don’t know what social and ethical consequences it may bring. But I do know we
need to be aware of the ethical implications of new technology and address what
is and isn’t worth society’s acceptance. We need to form our consciences before
the robots do.