“Synthetic Mirrors”
By Sabrina Li
Class of 2024, The Bishop’s School
Artificial intelligence has, in many settings, become synonymous with humanity’s downfall.
Doubt dominates discussions of AI: What if artificial intelligence takes over humanity? What if we
become nothing more than the slaves of our own creations?
But before quantifying the ability of AI to destroy humanity, how does this technology work?
Essentially, machines such as computers are trained to perform specific tasks by repeatedly “processing
large amounts of data and recognizing patterns in the data.” (1) This in turn allows the algorithm to acquire skills and teach itself, creating what we know as artificial intelligence.
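To make that description concrete, here is a deliberately simplified sketch of “learning from data,” written in Python; the numbers and the model are invented purely for illustration and are not drawn from any of the sources cited here:

    # A toy illustration of training: the model is shown labeled examples
    # and infers a pattern it can apply to inputs it has never seen.
    from sklearn.linear_model import LogisticRegression

    # Invented data: hours studied vs. whether a student passed (1) or failed (0).
    hours_studied = [[1], [2], [3], [4], [5], [6], [7], [8]]
    passed = [0, 0, 0, 0, 1, 1, 1, 1]

    model = LogisticRegression()
    model.fit(hours_studied, passed)  # "processing data and recognizing patterns"

    print(model.predict([[2.5], [6.5]]))  # applies the learned pattern: roughly [0, 1]

The program is never told the rule; it infers one from the examples, which is the essence of the pattern recognition described above.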
From this information arises a question seemingly more fundamental than AI’s potential
involvement in humanity’s downfall: Is AI turning our society into nothing more than a series of datasets?
Is the sum of humanity’s glory a finite number that can be quantified by an algorithm?
The distillation of human society into a set of numbers informing AI leads to more problems than
the objectification of humanity. If datasets directly inform the function of AI, then the flaws in society’s
dataset—our biases, our discrimination, our hate—are reflected in the algorithms we create. In fact, many
algorithms “require racial or ethnic minorities to be considerably more ill than their white counterparts to
receive the same diagnosis, treatment, or resources.” (2) And that’s only one example. Bias permeates
society, and therefore also permeates the artificial intelligence that uses society as its dataset. Bias “is in
the data used to train the AI,” and this bias, which is often discriminatory against marginalized groups,
“can rear its head throughout the AI’s design, development, implementation, and use.” (3)
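A toy example makes the mechanism plain. In the sketch below, every number is invented for illustration: a model is trained on skewed historical decisions, and it faithfully learns the skew, treating two equally ill patients differently because of their group.

    # A sketch of how bias in training data becomes bias in the model.
    # All data here is invented; "group" 0 stands for a majority group, 1 for a minority group.
    from sklearn.tree import DecisionTreeClassifier

    # Each row: [severity_of_illness, group]
    X = [[3, 0], [4, 0], [5, 0], [6, 0],
         [3, 1], [4, 1], [5, 1], [6, 1]]
    # Skewed historical decisions: majority patients received resources at lower severity.
    y = [1, 1, 1, 1,
         0, 0, 1, 1]

    model = DecisionTreeClassifier(random_state=0).fit(X, y)

    # Two patients with identical severity but different groups get different predictions.
    print(model.predict([[4, 0], [4, 1]]))  # [1, 0]

Nothing in the algorithm is malicious; it simply reproduces the pattern it was handed.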
From this perspective, it’s easy to accuse AI of not only objectifying society and depriving us of
our humanity, but also perpetuating the already-rampant discrimination. But why is AI still in use if we are
aware of such flaws? Of course, there are many benefits to AI. AI can reduce human error, handle large
amounts of data, perform dangerous tasks efficiently, and much more. (4) Still, if the algorithms are
obviously supporting systemic discrimination, why are they not put aside until their biases are eliminated?
AI doesn’t make the choice to be used; we humans make that choice. We decide that efficiency
and functionality are more important than the lives of marginalized groups. Our morals are not in danger
of being corrupted by artificial intelligence; rather, artificial intelligence is reflecting society’s morals and
values, flaws and all.
If AI is so problematic, why, then, do I continue to hope for it? Why am I convinced that AI will
revolutionize our future for the better?
To answer this, I look to Ghosts by Vauhini Vara, one of the most touching pieces of writing I
have ever read. Ghosts is a set of nine vignettes; in each vignette, Vara provides an increasingly complex
prompt about her sister’s death, and an artificial intelligence model, GPT-3, completes the rest. When
given a basic prompt, GPT-3 spat out a clichéd, generic story. However, as Vara “tried to write more
honestly, the AI seemed to be doing the same…Candor, apparently, begat candor,” which makes sense, as
GPT-3 generates language based on the language it is given; its function, as with all AI, is rooted in its
dataset. (5) What does this imply for AI’s role in society? Of course, Vara’s work demonstrates the potential of AI as a “really compelling tool that produces beautiful work.” (6) But more importantly: if GPT-3 can reflect Vara’s emotion, artificial intelligence can reflect all the love and empathy and creativity embedded in society’s numbers. If AI can act as society’s synthetic mirror, reflecting all of humanity’s flaws, it can also reflect all of humanity’s beauty.
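The principle that output is rooted in input can be shown with something far cruder than GPT-3. The sketch below is a toy word-pair model, nothing like GPT-3’s actual architecture, but it illustrates the same dependence: every word it produces comes from the text it was given.

    # A toy "language model": learn which word follows which in a source text,
    # then generate by repeatedly picking a plausible next word.
    import random
    from collections import defaultdict

    source_text = "candor begets candor and honesty begets honesty in writing"

    following = defaultdict(list)
    words = source_text.split()
    for current_word, next_word in zip(words, words[1:]):
        following[current_word].append(next_word)

    random.seed(0)
    word, generated = "candor", ["candor"]
    for _ in range(8):
        if word not in following:
            break
        word = random.choice(following[word])
        generated.append(word)

    print(" ".join(generated))  # whatever it "writes" is an echo of its source text

Feed such a model hateful text and it echoes hate; feed it candor and it echoes candor.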
There is no elusive “solution” to AI, because AI is not a problem. It is fundamentally a tool, a tool
that we created, a tool we can properly develop to use in both scientific and creative fields, a tool that can
bring out the best in humanity. Of course, if not properly regulated, tools can be dangerous—AI may in
fact take over society one day. But, ultimately, I believe that humanity—its creativity, its intelligence, its
love—will shine through no matter how much we are objectified, quantified, or made slaves to a
calculating robotic system.
1 “Artificial Intelligence: What it is and why it matters,” SAS, accessed May 4, 2024, https://www.sas.com/en_us/insights/analytics/what-is-artificial-intelligence.html.
2 Isabella Backman, “Eliminating Racial Bias in Health Care AI: Expert Panel Offers Guidelines,” Yale
School of Medicine, published December 21, 2023, https://medicine.yale.edu/news-article/eliminating-racial-bias-in-health-care-ai-expert-panel-offers-guidelines/.
3 Olga Akselrod, “How Artificial Intelligence Can Deepen Racial and Economic Inequities,” American
Civil Liberties Union, published July 13, 2021, https://www.aclu.org/news/privacy-technology/how-artificial-intelligence-can-deepen-racial-and-economic-inequities.
4 Rashi Maheshwari, “Advantages Of Artificial Intelligence (AI) In 2024,” Forbes Advisor, published
August 24, 2023, https://www.forbes.com/advisor/in/business/software/advantages-of-ai/.
5 Vauhini Vara, “I didn’t know how to write about my sister’s death—so I had AI do it for me,” The
Believer, published August 9, 2021, https://www.thebeliever.net/ghosts/.
6 Isabelle Levent and Lila Shroff, “On personalized media, alternative AI writing futures, and reconciling
the poetic with the political,” Embeddings, published July 11, 2023, https://embeddings.substack.com/p/vauhini.
Bibliography:
Akselrod, Olga. “How Artificial Intelligence Can Deepen Racial and Economic Inequities.” American
Civil Liberties Union. Published July 13, 2021.
https://www.aclu.org/news/privacy-technology/how-artificial-intelligence-can-deepen-racial-and-economic-inequities.
Backman, Isabella. “Eliminating Racial Bias in Health Care AI: Expert Panel Offers Guidelines.” Yale
School of Medicine. Published December 21, 2023.
https://medicine.yale.edu/news-article/eliminating-racial-bias-in-health-care-ai-expert-panel-offers-guidelines/.
Levent, Isabelle and Lila Shroff. “On personalized media, alternative AI writing futures, and reconciling
the poetic with the political.” Embeddings. Published July 11, 2023.
https://embeddings.substack.com/p/vauhini.
Maheshwari, Rashi. “Advantages Of Artificial Intelligence (AI) In 2024.” Forbes Advisor. Published
August 24, 2023. https://www.forbes.com/advisor/in/business/software/advantages-of-ai/.
SAS. “Artificial Intelligence: What it is and why it matters.” Accessed May 4, 2024. https://www.sas.com/en_us/insights/analytics/what-is-artificial-intelligence.html.
Vara, Vauhini. “I didn’t know how to write about my sister’s death—so I had AI do it for me.” The
Believer. Published August 9, 2021. https://www.thebeliever.net/ghosts/.