The Problem With Using AI in Your Personal Life
theatlantic.com
Tuesday, February 3, 2026
My friend recently attended a funeral, and midway through the eulogy, he became convinced that it had been written by AI. There was the telltale proliferation of abstract nouns, a surfeit of assertions that the deceased was “not just X—he was Y” coupled with a lack of concrete anecdotes, and more appearances of the word collaborate than you would expect from a rec-league hockey teammate. It was both too good, in terms of being grammatically correct, and not good enough, in terms of being particular. My friend had no definitive proof that he was listening to AI, but his position—and I agree with him—is that when you know, you know. His sense was that he had just heard a computer save a man from thinking about his dead friend.
More and more, large language models are relieving people of the burden of reading and writing, in school and at work but also in group chats and email exchanges with friends. In many areas, guidelines are emerging: Schools are making policies on AI use by students, and courts are trying to settle the law about AI and intellectual property. In friendship and other interpersonal uses, however, AI is still the Wild West. We have tacit rules about which movies you wait to see with your roommate and who gets invited to the lake house, but we have yet to settle anything comparable regarding, for example, whether you should use ChatGPT to reply to somebody’s Christmas letter. That seems like an oversight.
For the purposes of this discussion, I will define friendship adverbially, to mean any friendly communication—with boon companions but also family members, neighbors, and acquaintances—as well as those transactional relationships that call for an element of friendliness, such as with teachers and babysitters. There is reason to believe that use of AI in these friend-like relationships has already become widespread. In a Brookings Institution survey released in November, 57 percent of respondents said they used generative AI for personal purposes; 15 to 20 percent used it for “social media or communication.”
Respondents to the Brookings survey were not asked whether they had offered some disclaimer about their use of AI or were passing off its outputs as their own; few statistics seem to exist on that question. But in a 2024 survey released by Microsoft, 52 percent of respondents who used AI at work said they were reluctant to admit using it for “important tasks,” presumably because it might make them look replaceable. My feeling is that using AI for friendly communications operates on a similar principle—but the share of people who should be ashamed is closer to 100 percent.
Deception is only part of the problem; the main evil is efficiency. The people selling AI keep suggesting I use it to streamline tasks that I regard as fun and even meaningful. Apple’s iOS 26, for example, has made text messages more efficient by offering AI summaries of their contents in notifications and lists. Before I turned it off, this feature summarized a group chat—in which my friend sent a picture of the door to her spooky attic, normally locked but now ajar, that became the occasion for various jokes about her finally being haunted—as “a conversation about a wooden room.”
In addition to being inaccurate, this summary removed everything entertaining about the chat in order to reduce it to a bare exchange of information. Presumably the summary would have been more actionable if the conversation it summarized had focused on dates and times or specific work products instead of jokes, which are notoriously hard for AI to parse. But how many conversations with friends are about communicating facts?
When my brother texts “How’s it going?,” he’s not seeking information so much as connection. That connection is thwarted if I ask ChatGPT to draft a 50-word reply about how his baby is cute and I love him. To prevent hard-core get-it-done types from inflicting slop on the rest of us, we need to agree that my sending you material written by ChatGPT is insulting, the same way you would be insulted if I were to play a recording of myself saying “Oh, that’s interesting” every time you spoke.
The assumption that the main purpose of writing is to convey information quickly breaks down when you consider cases beyond signage and certain airport-oriented areas of publishing. In schoolwork for teachers, chats with friends, or even emails to business associates—relationships that are defined by mutual obligations—a primary function of any written text is, to borrow a phrase from cryptocurrency, proof of work. This work is the means by which the text was produced but also an end in itself, either because it benefits the writer or because it demonstrates commitment to the reader.
Generative AI sabotages the proof-of-work function by introducing a category of texts that take more effort to read than they did to write. This dynamic creates an imbalance that’s common to bad etiquette: It asks other people to work harder so one person can work—or think, or care—less. My friend who tutors high-school students sends weekly progress updates to their parents; one parent replied with a 3,000-word email that included section headings, bolded his son’s name each time it appeared, and otherwise bore the hallmarks of ChatGPT. It almost certainly took seconds to generate but minutes to read. As breaches of etiquette go, where this asymmetric email falls is hard to say; I would put it somewhere between telling a pointless story about your childhood and using your phone’s speaker on an airplane. The message it sent, though, was clear: My friend’s client wanted the relational benefits of a substantial reply but didn’t care enough to write one himself.
Writing is an act of taking care. College students write term papers not to inform their professors of the role of class in Wuthering Heights, but because putting what they have learned into words clarifies their understanding to both their instructors and themselves. Writing a eulogy both leads the eulogizer to think deeply about his relationship with the deceased and demonstrates his ongoing commitment to that relationship, even and especially after he can derive no benefit from it: Our goalie is dead, but we care enough to keep thinking about him even after he will stop no earthly puck.
A time-saving technology such as AI is appealing in the workplace because many people want to spend less time working. This calculus should not apply to our friendly relationships, which are not purely means to money or status but also ends in themselves—experiences of other people that are worthwhile as experiences and therefore diminished by efficiency. I don’t want these relations to become more efficient for the same reason I don’t want a robot that pets the dog for me. And if you don’t want to text me, then why do you want to be my friend?
Sometimes, of course, friendship is a pain. It would be easier to conduct friendship purely on our own terms, responding when we felt the urge and letting a computer talk to our friends when we didn’t want to. But that would not be friendship. A computer takes no care. We should not let it take the experience of caring away from us.