
Can Reading Make You Happier? | The New Yorker



Can Reading Make You Happier?
By Ceridwen Dovey
June 9, 2015



For all avid readers who have been self-medicating with great books their entire lives, it comes as no surprise that reading books can be good for you. Illustration by Sarah Mazzetti




Several years ago, I was given as a gift a remote session with a bibliotherapist at the London headquarters of the School of Life, which offers innovative courses to help people deal with the daily emotional challenges of existence. I have to admit that at first I didn’t really like the idea of being given a reading “prescription.” I’ve generally preferred to mimic Virginia Woolf’s passionate commitment to serendipity in my personal reading discoveries, delighting not only in the books themselves but in the randomly meaningful nature of how I came upon them (on the bus after a breakup, in a backpackers’ hostel in Damascus, or in the dark library stacks at graduate school, while browsing instead of studying). I’ve long been wary of the peculiar evangelism of certain readers: You must read this, they say, thrusting a book into your hands with a beatific gleam in their eyes, with no allowance for the fact that books mean different things to people—or different things to the same person—at various points in our lives. I loved John Updike’s stories about the Maples in my twenties, for example, and hate them in my thirties, and I’m not even exactly sure why.

But the session was a gift, and I found myself unexpectedly enjoying the initial questionnaire about my reading habits that the bibliotherapist, Ella Berthoud, sent me. Nobody had ever asked me these questions before, even though reading fiction is and always has been essential to my life. I love to gorge on books over long breaks—I’ll pack more books than clothes, I told Berthoud. I confided my dirty little secret, which is that I don’t like buying or owning books, and always prefer to get them from the library (which, as I am a writer, does not bring me very good book-sales karma). In response to the question “What is preoccupying you at the moment?,” I was surprised by what I wanted to confess: I am worried about having no spiritual resources to shore myself up against the inevitable future grief of losing somebody I love, I wrote. I’m not religious, and I don’t particularly want to be, but I’d like to read more about other people’s reflections on coming to some sort of early, weird form of faith in a “higher being” as an emotional survival tactic. Simply answering the questions made me feel better, lighter.

We had some satisfying back-and-forths over e-mail, with Berthoud digging deeper, asking about my family’s history and my fear of grief, and when she sent the final reading prescription it was filled with gems, none of which I’d previously read. Among the recommendations was “The Guide,” by R. K. Narayan. Berthoud wrote that it was “a lovely story about a man who starts his working life as a tourist guide at a train station in Malgudi, India, but then goes through many other occupations before finding his unexpected destiny as a spiritual guide.” She had picked it because she hoped it might leave me feeling “strangely enlightened.” Another was “The Gospel According to Jesus Christ,” by José Saramago: “Saramago doesn’t reveal his own spiritual stance here but portrays a vivid and compelling version of the story we know so well.” “Henderson the Rain King,” by Saul Bellow, and “Siddhartha,” by Hermann Hesse, were among other prescribed works of fiction, and she included some nonfiction, too, such as “The Case for God,” by Karen Armstrong, and “Sum,” by the neuroscientist David Eagleman, a “short and wonderful book about possible afterlives.”

I worked my way through the books on the list over the next couple of years, at my own pace—interspersed with my own “discoveries”—and while I am fortunate enough to have my ability to withstand terrible grief untested, thus far, some of the insights I gleaned from these books helped me through something entirely different, when, over several months, I endured acute physical pain. The insights themselves are still nebulous, as learning gained through reading fiction often is—but therein lies its power. In a secular age, I suspect that reading fiction is one of the few remaining paths to transcendence, that elusive state in which the distance between the self and the universe shrinks. Reading fiction makes me lose all sense of self, but at the same time makes me feel most uniquely myself. As Woolf, the most fervent of readers, wrote, a book “splits us into two parts as we read,” for “the state of reading consists in the complete elimination of the ego,” while promising “perpetual union” with another mind.



Bibliotherapy is a very broad term for the ancient practice of encouraging reading for therapeutic effect. The first use of the term is usually dated to a jaunty 1916 article in The Atlantic Monthly, “A Literary Clinic.” In it, the author describes stumbling upon a “bibliopathic institute” run by an acquaintance, Bagster, in the basement of his church, from which he dispenses reading recommendations with healing value. “Bibliotherapy is…a new science,” Bagster explains. “A book may be a stimulant or a sedative or an irritant or a soporific. The point is that it must do something to you, and you ought to know what it is. A book may be of the nature of a soothing syrup or it may be of the nature of a mustard plaster.” To a middle-aged client with “opinions partially ossified,” Bagster gives the following prescription: “You must read more novels. Not pleasant stories that make you forget yourself. They must be searching, drastic, stinging, relentless novels.” (George Bernard Shaw is at the top of the list.) Bagster is finally called away to deal with a patient who has “taken an overdose of war literature,” leaving the author to think about the books that “put new life into us and then set the life pulse strong but slow.”

Today, bibliotherapy takes many different forms, from literature courses run for prison inmates to reading circles for elderly people suffering from dementia. Sometimes it can simply mean one-on-one or group sessions for “lapsed” readers who want to find their way back to an enjoyment of books. Berthoud and her longtime friend and fellow-bibliotherapist Susan Elderkin mostly practice “affective” bibliotherapy, advocating the restorative power of reading fiction. The two met at Cambridge University as undergraduates, more than twenty years ago, and bonded immediately over the shared contents of their bookshelves, in particular Italo Calvino’s novel “If on a Winter’s Night a Traveller,” which is itself about the nature of reading. As their friendship developed, they began prescribing novels to cure each other’s ailments, such as a broken heart or career uncertainty. “When Suse was having a crisis about her profession—she wanted to be a writer, but was wondering if she could cope with the inevitable rejection—I gave her Don Marquis’s ‘Archy and Mehitabel’ poems,” Berthoud told me. “If Archy the cockroach could be so dedicated to his art as to jump on the typewriter keys in order to write his free-verse poems every night in the New York offices of the Evening Sun, then surely she should be prepared to suffer for her art, too.” Years later, Elderkin gave Berthoud, who wanted to figure out how to balance being a painter and a mother, Patrick Gale’s novel “Notes from an Exhibition,” about a successful but troubled female artist.



They kept recommending novels to each other, and to friends and family, for many years, and, in 2007, when the philosopher Alain de Botton, a fellow Cambridge classmate, was thinking about starting the School of Life, they pitched to him the idea of running a bibliotherapy clinic. “As far as we knew, nobody was doing it in that form at the time,” Berthoud said. “Bibliotherapy, if it existed at all, tended to be based within a more medical context, with an emphasis on self-help books. But we were dedicated to fiction as the ultimate cure because it gives readers a transformational experience.”

Berthoud and Elderkin trace the method of bibliotherapy all the way back to the Ancient Greeks, “who inscribed above the entrance to a library in Thebes that this was a ‘healing place for the soul.’ ” The practice came into its own at the end of the nineteenth century, when Sigmund Freud began using literature during psychoanalysis sessions. After the First World War, traumatized soldiers returning home from the front were often prescribed a course of reading. “Librarians in the States were given training on how to give books to WWI vets, and there’s a nice story about Jane Austen’s novels being used for bibliotherapeutic purposes at the same time in the U.K.,” Elderkin says. Later in the century, bibliotherapy was used in varying ways in hospitals and libraries, and has more recently been taken up by psychologists, social and aged-care workers, and doctors as a viable mode of therapy.






There is now a network of bibliotherapists selected and trained by Berthoud and Elderkin, and affiliated with the School of Life, working around the world, from New York to Melbourne. The most common ailments people tend to bring to them are the life-juncture transitions, Berthoud says: being stuck in a rut in your career, feeling depressed in your relationship, or suffering bereavement. The bibliotherapists see a lot of retirees, too, who know that they have twenty years of reading ahead of them but perhaps have only previously read crime thrillers, and want to find something new to sustain them. Many seek help adjusting to becoming a parent. “I had a client in New York, a man who was having his first child, and was worried about being responsible for another tiny being,” Berthoud says. “I recommended ‘Room Temperature,’ by Nicholson Baker, which is about a man feeding his baby a bottle and having these meditative thoughts about being a father. And of course ‘To Kill a Mockingbird,’ because Atticus Finch is the ideal father in literature.”



Berthoud and Elderkin are also the authors of “The Novel Cure: An A-Z of Literary Remedies,” which is written in the style of a medical dictionary and matches ailments (“failure, feeling like a”) with suggested reading cures (“The History of Mr. Polly,” by H. G. Wells). First released in the U.K. in 2013, it is now being published in eighteen countries, and, in an interesting twist, the contract allows for a local editor and reading specialist to adapt up to twenty-five per cent of the ailments and reading recommendations to fit each particular country’s readership and include more native writers. The new, adapted ailments are culturally revealing. In the Dutch edition, one of the adapted ailments is “having too high an opinion of your own child”; in the Indian edition, “public urination” and “cricket, obsession with” are included; the Italians introduced “impotence,” “fear of motorways,” and “desire to embalm”; and the Germans added “hating the world” and “hating parties.” Berthoud and Elderkin are now working on a children’s-literature version, “A Spoonful of Stories,” due out in 2016.

For all avid readers who have been self-medicating with great books their entire lives, it comes as no surprise that reading books can be good for your mental health and your relationships with others, but exactly why and how is now becoming clearer, thanks to new research on reading’s effects on the brain. Since the discovery, in the mid-nineties, of “mirror neurons”—neurons that fire in our brains both when we perform an action ourselves and when we see an action performed by someone else—the neuroscience of empathy has become clearer. A 2011 study published in the Annual Review of Psychology, based on analysis of fMRI brain scans of participants, showed that, when people read about an experience, they display stimulation within the same neurological regions as when they go through that experience themselves. We draw on the same brain networks when we’re reading stories and when we’re trying to guess at another person’s feelings.

Other studies, published in 2006 and 2009, showed something similar—that people who read a lot of fiction tend to be better at empathizing with others (even after the researchers had accounted for the potential bias that people with greater empathetic tendencies may prefer to read novels). And, in 2013, an influential study published in Science found that reading literary fiction (rather than popular fiction or literary nonfiction) improved participants’ results on tests that measured social perception and empathy, which are crucial to “theory of mind”: the ability to guess with accuracy what another human being might be thinking or feeling, a skill humans only start to develop around the age of four.

Keith Oatley, a novelist and emeritus professor of cognitive psychology at the University of Toronto, has for many years run a research group interested in the psychology of fiction. “We have started to show how identification with fictional characters occurs, how literary art can improve social abilities, how it can move us emotionally, and can prompt changes of selfhood,” he wrote in his 2011 book, “Such Stuff as Dreams: The Psychology of Fiction.” “Fiction is a kind of simulation, one that runs not on computers but on minds: a simulation of selves in their interactions with others in the social world…based in experience, and involving being able to think of possible futures.” This idea echoes a long-held belief among both writers and readers that books are the best kinds of friends; they give us a chance to rehearse for interactions with others in the world, without doing any lasting damage. In his 1905 essay “On Reading,” Marcel Proust puts it nicely: “With books there is no forced sociability. If we pass the evening with those friends—books—it’s because we really want to. When we leave them, we do so with regret and, when we have left them, there are none of those thoughts that spoil friendship: ‘What did they think of us?’—‘Did we make a mistake and say something tactless?’—‘Did they like us?’—nor is there the anxiety of being forgotten because of displacement by someone else.”

George Eliot, who is rumored to have overcome her grief at losing her life partner through a program of guided reading with a young man who went on to become her husband, believed that “art is the nearest thing to life; it is a mode of amplifying experience and extending our contact with our fellow-men beyond the bounds of our personal lot.” But not everybody agrees with this characterization of fiction reading as having the ability to make us behave better in real life. In her 2007 book, “Empathy and the Novel,” Suzanne Keen takes issue with this “empathy-altruism hypothesis,” and is skeptical about whether empathetic connections made while reading fiction really translate into altruistic, prosocial behavior in the world. She also points out how hard it is to really prove such a hypothesis. “Books can’t make change by themselves—and not everyone feels certain that they ought to,” Keen writes. “As any bookworm knows, readers can also seem antisocial and indolent. Novel reading is not a team sport.” Instead, she urges, we should enjoy what fiction does give us, which is a release from the moral obligation to feel something for invented characters—as you would for a real, live human being in pain or suffering—which paradoxically means readers sometimes “respond with greater empathy to an unreal situation and characters because of the protective fictionality.” And she wholeheartedly supports the personal health benefits of an immersive experience like reading, which “allows a refreshing escape from ordinary, everyday pressures.”



So even if you don’t agree that reading fiction makes us treat others better, it is a way of treating ourselves better. Reading has been shown to put our brains into a pleasurable trance-like state, similar to meditation, and it brings the same health benefits of deep relaxation and inner calm. Regular readers sleep better, have lower stress levels, higher self-esteem, and lower rates of depression than non-readers. “Fiction and poetry are doses, medicines,” the author Jeanette Winterson has written. “What they heal is the rupture reality makes on the imagination.”

One of Berthoud’s clients described to me how the group and individual sessions she has had with Berthoud have helped her cope with the fallout from a series of calamities, including the loss of her husband, the end of a five-year engagement, and a heart attack. “I felt my life was without purpose,” she says. “I felt a failure as a woman.” Among the books Berthoud initially prescribed was John Irving’s novel “The Hotel New Hampshire.” “He was a favorite writer of my husband, [whom] I had felt unable to attempt for sentimental reasons.” She was “astounded and very moved” to see it on the list, and though she had avoided reading her husband’s books up until then, she found reading it to be “a very rewarding emotional experience, both in the literature itself and ridding myself of demons.” She also greatly appreciated Berthoud guiding her to Tom Robbins’s novel “Jitterbug Perfume,” which was “a real learning curve for me about prejudice and experimentation.”

One of the ailments listed in “The Novel Cure” is feeling “overwhelmed by the number of books in the world,” and it’s one I suffer from frequently. Elderkin says this is one of the most common woes of modern readers, and that it remains a major motivation for her and Berthoud’s work as bibliotherapists. “We feel that though more books are being published than ever before, people are in fact selecting from a smaller and smaller pool. Look at the reading lists of most book clubs, and you’ll see all the same books, the ones that have been shouted about in the press. If you actually calculate how many books you read in a year—and how many that means you’re likely to read before you die—you’ll start to realize that you need to be highly selective in order to make the most of your reading time.” And the best way to do that? See a bibliotherapist, as soon as you can, and take them up on their invitation, to borrow some lines from Shakespeare’s “Titus Andronicus”: “Come, and take choice of all my library/And so beguile thy sorrow.” ♦




Ceridwen Dovey is the author of the short-story collection “Only the Animals”; the novels “Blood Kin,” “In the Garden of the Fugitives,” and “Life After Truth”; and the memoir-biography “On J.M. Coetzee: Writers on Writers.”

Two Paths for A.I. | The New Yorker




Two Paths for A.I.
The technology is complicated, but our choices are simple: we can remain passive, or assert control.
By Joshua Rothman
May 27, 2025

Illustration by Josie Norton
You’re reading Open Questions, Joshua Rothman’s weekly column exploring what it means to be human.


Last spring, Daniel Kokotajlo, an A.I.-safety researcher working at OpenAI, quit his job in protest. He’d become convinced that the company wasn’t prepared for the future of its own technology, and wanted to sound the alarm. After a mutual friend connected us, we spoke on the phone. I found Kokotajlo affable, informed, and anxious. Advances in “alignment,” he told me—the suite of techniques used to insure that A.I. acts in accordance with human commands and values—were lagging behind gains in intelligence. Researchers, he said, were hurtling toward the creation of powerful systems they couldn’t control.

Kokotajlo, who had transitioned from a graduate program in philosophy to a career in A.I., explained how he’d educated himself so that he could understand the field. While at OpenAI, part of his job had been to track progress in A.I. so that he could construct timelines predicting when various thresholds of intelligence might be crossed. At one point, after the technology advanced unexpectedly, he’d had to shift his timelines up by decades. In 2021, he’d written a scenario about A.I. titled “What 2026 Looks Like.” Much of what he’d predicted had come to pass before the titular year. He’d concluded that a point of no return, when A.I. might become better than people at almost all important tasks, and be trusted with great power and authority, could arrive in 2027 or sooner. He sounded scared.

Around the same time that Kokotajlo left OpenAI, two computer scientists at Princeton, Sayash Kapoor and Arvind Narayanan, were preparing for the publication of their book, “AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference.” In it, Kapoor and Narayanan, who study technology’s integration with society, advanced views that were diametrically opposed to Kokotajlo’s. They argued that many timelines of A.I.’s future were wildly optimistic; that claims about its usefulness were often exaggerated or outright fraudulent; and that, because of the world’s inherent complexity, even powerful A.I. would change it only slowly. They cited many cases in which A.I. systems had been called upon to deliver important judgments—about medical diagnoses, or hiring—and had made rookie mistakes that indicated a fundamental disconnect from reality. The newest systems, they maintained, suffered from the same flaw.



Recently, all three researchers have sharpened their views, releasing reports that take their analyses further. The nonprofit AI Futures Project, of which Kokotajlo is the executive director, has published “AI 2027,” a heavily footnoted document, written by Kokotajlo and four other researchers, which works out a chilling scenario in which “superintelligent” A.I. systems either dominate or exterminate the human race by 2030. It’s meant to be taken seriously, as a warning about what might really happen. Meanwhile, Kapoor and Narayanan, in a new paper titled “AI as Normal Technology,” insist that practical obstacles of all kinds—from regulations and professional standards to the simple difficulty of doing physical things in the real world—will slow A.I.’s deployment and limit its transformational potential. While conceding that A.I. may eventually turn out to be a revolutionary technology, on the scale of electricity or the internet, they maintain that it will remain “normal”—that is, controllable through familiar safety measures, such as fail-safes, kill switches, and human supervision—for the foreseeable future. “AI is often analogized to nuclear weapons,” they argue. But “the right analogy is nuclear power,” which has remained mostly manageable and, if anything, may be underutilized for safety reasons.



Which is it: business as usual or the end of the world? “The test of a first-rate intelligence,” F. Scott Fitzgerald famously claimed, “is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function.” Reading these reports back-to-back, I found myself losing that ability, and speaking to their authors in succession, in the course of a single afternoon, I became positively deranged. “AI 2027” and “AI as Normal Technology” aim to describe the same reality, and have been written by deeply knowledgeable experts, but arrive at absurdly divergent conclusions. Discussing the future of A.I. with Kapoor, Narayanan, and Kokotajlo, I felt like I was having a conversation about spirituality with Richard Dawkins and the Pope.

In the parable of the blind men and the elephant, a group of well-intentioned people grapple with an unfamiliar object, failing to agree on its nature because each believes that the part he’s encountered defines the whole. That’s part of the problem with A.I.—it’s hard to see the whole of something new. But it’s also true, as Kapoor and Narayanan write, that “today’s AI safety discourse is characterized by deep differences in worldviews.” If I were to sum up those differences, I’d say that, broadly speaking, West Coast, Silicon Valley thinkers are drawn to visions of rapid transformation, while East Coast academics recoil from them; that A.I. researchers believe in quick experimental progress, while other computer scientists yearn for theoretical rigor; and that people in the A.I. industry want to make history, while those outside of it are bored of tech hype. Meanwhile, there are barely articulated differences on political and human questions—about what people want, how technology evolves, how societies change, how minds work, what “thinking” is, and so on—that help push people into one camp or the other.

An additional problem is simply that arguing about A.I. is unusually interesting. That interestingness, in itself, may be proving to be a trap. When “AI 2027” appeared, many industry insiders responded by accepting its basic premises while debating its timelines (why not “AI 2045”?). Of course, if a planet-killing asteroid is headed for Earth, you don’t want NASA officials to argue about whether the impact will happen before or after lunch; you want them to launch a mission to change its path. At the same time, the kinds of assertions seen in “AI as Normal Technology”—for instance, that it might be wise to keep humans in the loop during important tasks, instead of giving computers free rein—have been perceived as so comparatively bland that they’ve long gone unuttered by analysts interested in the probability of doomsday.

When a technology becomes important enough to shape the course of society, the discourse around it needs to change. Debates among specialists need to make room for a consensus upon which the rest of us can act. The lack of such a consensus about A.I. is starting to have real costs. When experts get together to make a unified recommendation, it’s hard to ignore them; when they divide themselves into duelling groups, it becomes easier for decision-makers to dismiss both sides and do nothing. Currently, nothing appears to be the plan. A.I. companies aren’t substantially altering the balance between capability and safety in their products; in the budget-reconciliation bill that just passed the House, a clause prohibits state governments from regulating “artificial intelligence models, artificial intelligence systems, or automated decision systems” for ten years. If “AI 2027” is right, and that bill is signed into law, then by the time we’re allowed to regulate A.I. it might be regulating us. We need to make sense of the safety discourse now, before the game is over.

Artificial intelligence is a technical subject, but describing its future involves a literary truth: the stories we tell have shapes, and those shapes influence their content. There are always trade-offs. If you aim for reliable, levelheaded conservatism, you risk downplaying unlikely possibilities; if you bring imagination to bear, you might dwell on what’s interesting at the expense of what’s likely. Predictions can create an illusion of predictability that’s unwarranted in a fun-house world. In 2019, when I profiled the science-fiction novelist William Gibson, who is known for his prescience, he described a moment of panic: he’d thought he had a handle on the near future, he said, but “then I saw Trump coming down that escalator to announce his candidacy. All of my scenario modules went ‘beep-beep-beep.’ ” We were veering down an unexpected path.

“AI 2027” is imaginative, vivid, and detailed. It “is definitely a prediction,” Kokotajlo told me recently, “but it’s in the form of a scenario, which is a particular kind of prediction.” Although it’s based partly on assessments of trends in A.I., it’s written like a sci-fi story (with charts); it throws itself headlong into the flow of events. Often, the specificity of its imagined details suggests their fungibility. Will there actually come a moment, possibly in June of 2027, when software engineers who’ve invented self-improving A.I. “sit at their computer screens, watching performance crawl up, and up, and up”? Will the Chinese government, in response, build a “mega-datacenter” in a “Centralized Development Zone” in Taiwan? These particular details make the scenario more powerful, but might not matter; the bottom line, Kokotajlo said, is that, “more likely than not, there is going to be an intelligence explosion, and a crazy geopolitical conflict over who gets to control the A.I.s.”






It’s the details of that “intelligence explosion” that we need to follow. The scenario in “AI 2027” centers on a form of A.I. development known as “recursive self-improvement,” or R.S.I., which is currently largely hypothetical. In the report’s story, R.S.I. begins when A.I. programs become capable of doing A.I. research for themselves (today, they only assist human researchers); these A.I. “agents” soon figure out how to make their descendants smarter, and those descendants do the same for their descendants, creating a feedback loop. This process accelerates as the A.I.s start acting like co-workers, trading messages and assigning work to one another, forming a “corporation-within-a-corporation” that repeatedly grows faster and more effective than the A.I. firm in which it’s ensconced. Eventually, the A.I.s begin creating better descendants so quickly that human programmers don’t have time to study them and decide whether they’re controllable.



Seemingly every science-fiction novel ever written about A.I. suggests that implementing recursive self-improvement is a bad idea. The big A.I. companies identify R.S.I. as risky, but don’t say that they won’t pursue it; instead, they vow to strengthen their safety measures if they head in that direction. At the same time, if it works, its economic potential could be extraordinary. The pursuit of R.S.I. is “definitely a choice that people are eager to make in these companies,” Kokotajlo said. “It’s the plan. OpenAI and Anthropic, their plan is to automate their own jobs first.”

Could this type of R.S.I. work? (It’s never been done.) Doesn’t it depend on other technological factors—such as “scaling,” the ability of A.I. to improve as more computing resources are dedicated to it—which have held true in the past, but might falter in the future? (Some observers think it might already be faltering.) If R.S.I. took hold, would its progress hit a ceiling, or continue until the advent of “artificial superintelligence”—a level of intelligence that exceeds what human minds are capable of? (“It would be a very strange coincidence if the limit on intelligence happened to be just barely above the human range,” Kokotajlo said.)

The possibilities compound. Would superintelligence-driven innovation inspire a militarized arms race? Could superintelligent A.I.s end up manipulating or eliminating us while pursuing their own inscrutable ends? (In “AI 2027,” they use up the Earth’s resources while conducting scientific research we’re not smart enough to understand.) Or, in a happier development, might they solve the alignment problem for us, either domesticating themselves or becoming benevolent gods, depending on your point of view?

No one really knows for sure. That’s partly because A.I. is a fractious and changing field, in which opinions differ; partly because so much of the latest A.I. research is proprietary and unpublished; and partly because there can be no firm answers to fundamentally speculative questions—only probabilities. “AI 2027” unfolds with a confidence and narrative drive that belie the uncertainties inherent to its subject. The degree to which the scenario depends on a chain of optimistic technological predictions is arguably a flaw, perhaps a major one. (An informed friend associated the report’s views with “A.I.-pilled yea-sayers.”) But, actually, partiality is one of the reasons that scenarios are valuable. In any uncertain situation, we tend to regard the possibilities we hope won’t come to pass in a more hypothetical light. But, for as long as we’re reading it, a scenario forces us to at least try to believe in its reality. “AI 2027,” Kokotajlo told me, is “not wildly different” from what’s talked about “in cafeteria conversations at these companies.” They talk about it; now we’re imagining it. Are they imagining it? Are they taking it seriously enough that, if presented with an important choice about R.S.I., they’ll make a wise one?

Kokotajlo says they’re not. One widespread misapprehension about artificial intelligence is that dangerous or uncontrollable technology might simply “emerge,” without human intervention. (“They say it got smart,” someone says, of Skynet, in “The Terminator.”) But “AI 2027” portrays a string of affirmatively bad decisions, beginning with the choice, by researchers, to build self-improving A.I. before they have fully figured out how to look inside it and interpret its thoughts. The scenario asserts that, for reasons of competition and curiosity, people working in A.I. will actively seek to do what anyone who’s seen “WarGames” could tell them not to. “If you work for these companies, and you talk to them about what they want to do, which is what I did, they tell you that they’re going to do it,” Kokotajlo told me. “They know that they don’t have interpretability solved—that they can’t rigorously check the internal goals, or rigorously predict how the A.I. systems will behave in the future. But they’re moving ahead anyway.” “AI 2027” is partly a tech scenario, and partly a people scenario. It suggests that it’s the A.I. companies that are misaligned.

Unlike “AI 2027,” “AI as Normal Technology” has an East Coast sensibility. It’s a dry, conservative white paper, and draws much of its authority from knowledge of the past. Narayanan and Kapoor aren’t too concerned about superintelligence or a possible intelligence explosion. They believe that A.I. faces “speed limits” that will prevent hyper-rapid progress, and argue that, even if superintelligence is possible, it will take decades to invent, giving us plenty of time to pass laws, institute safety measures, and so on. To some extent, the speed limits they discern have to do with A.I. in particular—they flow from the high cost of A.I. hardware, the dwindling supply of training data, and the like. But Kapoor and Narayanan also think they’re inherent to technology in general, which typically changes the world more slowly than people predict.

The understandable focus of A.I. researchers on “intelligence,” Kapoor and Narayanan argue, has been misleading. A harsh truth is that intelligence alone is of limited practical value. In the real world, what matters is power—“the ability to modify one’s environment.” They note that, in the history of innovation, many technologies have possessed astonishing capabilities but failed to deliver much power to their inventors or users. It’s incredible, for instance, that some cars can drive themselves. But, in the United States, driverless cars are confined to a handful of cities and operated, as robo-taxis, by a small number of companies. The technology is capable, but not powerful. It will probably transform transportation—someday.

Artificial-intelligence researchers often worry about A.I., in itself, becoming too powerful. But Kapoor and Narayanan prefer a human-centered way of thinking: the point of technology is not to become powerful but to empower us. “Humans have always used technology to increase our ability to control our environment,” they write, and even wildly capable technologies have empowered us only slowly. New inventions take a long time to “diffuse” through society, from labs outward. “AI 2027” entertains the possibility of “cures for most diseases” arriving as soon as 2029. But, according to Kapoor and Narayanan’s view, even if the intellectual work of creating those cures could be rapidly accelerated through A.I., we would still have to wait a long time before enjoying them. Similarly, if an A.I. system speeds the invention of a lifesaving medical device, that device must still be approved by the Food and Drug Administration. Suppose that a superintelligent A.I. solves fusion power—the technology must still be tested, and a site for a proposed plant must be located, with willing neighbors. (The nuclear power plant constructed most recently in the United States, in Waynesboro, Georgia, took fourteen years to build and ran nearly twenty billion dollars over budget.) “My favorite example is Moderna,” Kapoor told me, referring to the pharmaceutical company. After Chinese researchers sequenced the genome of SARS-CoV-2, the virus which causes COVID-19, it took Moderna “less than a week to come up with the vaccine. But then it took about a year to roll it out.” Perhaps A.I. could design vaccines even faster—but clinical trials, which depend on human biological processes, simply take time.

The view that increases in intelligence will lead quickly and directly to technological outcomes, Narayanan told me, reflects a general underestimation, among coders, of “domain-specific” complexity and expertise. “Software engineering, even though it has engineering in the name, has a history of being disconnected from the rest of engineering,” he said. This means that A.I.-safety researchers might also be undervaluing the systems that are already keeping us safe. Kapoor and Narayanan concentrate in particular on the practices of industrial safety, which have been developed and proved over decades. In a factory, fail-safes and circuit breakers insure that systems default to harmless behaviors when they malfunction. (Machines, for instance, may shut down if carbon-monoxide levels rise, or if they detect a person inside them.) Redundancy allows managers to see when a single widget is producing an unusual result. Processes like “formal verification”—in which systems are subjected to carefully designed rules that promote safety—are often used when human beings work alongside complex machines.

The world, in this view, is already a pretty well-regulated place—and artificial intelligence will have to be integrated slowly into its web of rules. One question to ask is, Do we believe that those in charge of A.I. will have to follow the rules? Kapoor and Narayanan note “one important caveat” to their analysis: “We explicitly exclude military AI . . . as it involves classified capabilities and unique dynamics that require a deeper analysis.” “AI 2027,” meanwhile, is almost entirely focussed on the militarization of artificial intelligence, which unfolds quickly once its defense implications (“What if AI undermines nuclear deterrence?”) make themselves known. The two reports, taken together, suggest that we should keep a close watch on military applications of A.I. “AI as Normal Technology,” for its part, offers concrete advice for those in charge in many areas of society. Don’t wait, passively, for A.I. firms to “align” their models. Instead, start monitoring the use of A.I. in your field. Find ways to track evidence of its risks and failures. And shore up, or create, rules that will make people and institutions more resilient as the technology spreads.

“Deep differences in worldviews”: that seems about right. But what is a world view, ultimately? World views are often reactive. We formulate them in response to provocations. Artificial intelligence has been unusually provocative. It has prompted reflections on the purpose of technology, the nature of progress, and the relationship between inventors and the rest of us. It’s been a Rorschach test. And it’s also arrived at a particular moment, in a particular discursive world, in which opinions are strong, objections are instant, and differences are emphasized. The dynamics of intellectual life lead to doubling down and digging in. We have feedback loops, too.

Is there a single world view that could encompass the perspectives in “AI 2027” and “AI as Normal Technology”? I suspect there could be. Imagine walking onto a factory floor. A sign reads “Safety first!” Workers wear hard hats and high-viz safety gear. The machines don’t run the factory; instead, the workers manipulate the machines, which have been designed with both productivity and workers’ safety in mind. In this cognitive factory, serious thought has gone into best practices. A lot of emphasis is placed on quality control. A well-funded maintenance team inspects the machines and modifies them as necessary, to meet the factory’s requirements. Over in the R. & D. department, scientists sometimes invent promising upgrades. But, before those upgrades are integrated into the production line, they are thoroughly vetted, and the workers are consulted. The factory, moreover, has a mission. Its workers know what they’re trying to produce. They don’t just ship out whatever the machines happen to make. They steer the machines toward a well-understood goal.

A lot of us may soon find ourselves working on cognitive factory floors. Whatever we do, we could be doing it alongside, or with, machines. Since the machines can automate some of our thinking, it will be tempting to take our hands off the controls. But in such a factory, if a workplace accident occurs, or if a defective product is sold, who will be accountable? Conversely, if the factory is well run, and if its products are delightful, then who will get the credit?

The arrival of A.I. can’t mean the end of accountability—actually, the reverse is true. When a single person does more, that person is responsible for more. When there are fewer people in the room, responsibility condenses. A worker who steps away from a machine decides to step away. It’s only superficially that artificial intelligence seems to relieve us of the burdens of agency. In fact, A.I. challenges us to recognize that, at the end of the day, we’ll always be in charge. ♦






Joshua Rothman, a staff writer, authors the weekly column Open Questions. He has been with the magazine since 2012.

Can Trauma Help You Grow? | The New Yorker


Annals of Technology
Can Trauma Help You Grow?
By David KushnerMarch 15, 2016

Studies show that many trauma survivors experience some form of post-traumatic growth—a psychological phenomenon in which trauma deepens life’s meaning.Illustration by Wesley Allsbrook

When I tell people that I had a brother who was kidnapped and murdered, I’m often asked how my parents survived. I was only four when Jon died, so for a long time I had the same question. My family suffered an unfathomable loss. Yet I grew up as free as most kids in the nineteen-seventies: my friends and I biked around town for hours, losing ourselves in the woods, the lakes, the arcades, with no cell phones to find us. When I finally had children of my own, I wondered more than ever how my mom and dad had done it. How had they found the strength not only to survive but to let me go?

A few years ago, I began exploring this question while reporting and writing my memoir, “Alligator Candy,” about the murder and its aftermath. During that research, I found a new way to contextualize my family’s experience: a psychological phenomenon called post-traumatic growth. Psychologists have long studied resilience—the ability to bounce back and move on. But post-traumatic growth, which has been documented in hundreds of studies, is different; it’s what happens when trauma changes and deepens life’s meaning. In his recent book on the phenomenon, “What Doesn’t Kill Us,” Stephen Joseph, a psychologist at the University of Nottingham, describes victims of trauma experiencing enhanced relationships, greater self-acceptance, and a heightened appreciation of life. “To only look at the dark side and negative side is to miss out on something very important,” Joseph told me recently.

Needless to say, no one wants to go through trauma, or suggests it’s a good thing. I’d rather have Jon here with me now—watching Louis C.K., eating a bowl of pho, hearing about his kid’s messy room—than be writing this essay. But, as Rabbi Harold Kushner (no relation) wrote after the loss of his son, “I cannot choose.” The existence of post-traumatic growth suggests that, while the pain never vanishes, something new and powerful is likely to come. As my mother once told my other brother, Andy, and me, “It’s like, after a spring gets pushed all the way down, it rises even higher.”

For my family, tragedy came on a Sunday morning in the fall of 1973. We lived in the suburbs of Tampa, where my father chaired the anthropology department at the University of South Florida. Jon, a spry eleven-year-old with wavy red hair, biked to a nearby 7-Eleven for candy and didn’t return. He was missing for a week. Farmers, bikers, hippies, professors, the Air Force, and others from across town joined together to search. This was long before abductions became a national obsession, fueled by the Internet and the 24/7 news cycle. Things like this didn’t seem to happen.

Just as we were giving up hope, a woman told the police that her husband, John Paul Witt, had drunkenly confessed to kidnapping and killing my brother. He and a teen-age accomplice, Gary Tillman, had chosen Jon at random after, as they put it, “hunting” for victims over the course of a few weeks. Witt was executed in 1985, and Tillman is serving a life sentence. “The way some longtime residents remember it,” the St. Petersburg Times wrote, “the murder of 11-year-old Jonathan Kushner was when Tampa seemed to lose its small-town innocence.”



How could such a loss lead to any sort of growth? In the upheaval following my brother’s murder, that possibility was inconceivable. Our focus was on surviving the horror of what had happened. But, eventually, the experience began to shift. Many years later, in a journal entry, my father reflected on the change. “There’s something built-in that enables most human beings, not all, to be sure, but most, to get thru this…. It is built-in to enable us to get thru, force us, to survive, to stay alive,” he wrote. “After you’ve understood that it WILL be different, less raw, that the death can not be undone, that you will continue to live,” he continued, “the question becomes … ‘What shall I do with the rest of my life?’ ”


A few years after Jon’s death, my parents met a man named John Brantner. He was a psychologist from the University of Minnesota Medical School who had been lecturing around the country on what he called “positive approaches to dying.” In the wake of Elizabeth Kübler-Ross’s pioneering book “On Death and Dying,” published a few years earlier, in 1969, educators such as Brantner were part of a social movement that aimed to challenge the taboos of what he called “our death-denying culture.” “What do we know of the ones who have made a positive approach to separation, catastrophe, and death?” Brantner asked, during a presentation, in 1977. These “splendid people,” as he called them, “have come through great tribulation, are open, lack defensiveness, display intensity, purpose, passion in their lives…. They show wisdom, serenity, a kind of wholeness, a curious lighthearted and optimistic participation.”

He was talking, in essence, about post-traumatic growth—a term that wouldn’t be coined until nearly twenty years later, in 1995, by the University of North Carolina psychologists Richard Tedeschi and Lawrence Calhoun. Tedeschi and Calhoun had spent a decade surveying bereaved parents. Despite their pain and suffering, the couples consistently reported that they had undergone positive personal transformations, too. “One common theme,” Calhoun told me, “is that they say, ‘I still miss my child, I yearn for my child and get depressed, but I’m a different person—more compassionate and empathetic.’ ” That’s what my parents experienced. They launched one of the country’s first chapters of Compassionate Friends, a support group that had begun in England for bereaved parents. They helped start the Tampa area’s first hospice, organized conferences on death and dying, and conferred with Kübler-Ross, Elie Wiesel, and others. In the nineteen-fifties, my parents had been social activists who had participated in sit-ins; my mother had empowered women in childbirth as one of the country’s first Lamaze educators. Now, helping others who were suffering to survive their losses became crucial to helping them through their own.

Before he died, my father alluded in an e-mail to this period of their lives. He suggested that I read Anne Morrow Lindbergh’s memoir about the kidnapping and murder of her baby. “Lindbergh said that suffering alone doesn’t make for wisdom,” he wrote. “One has to remain vulnerable, open to more suffering and to more love.” My father drew my attention to a portion of the book where Lindbergh attributes her survival to the support she received from others. She expressed the idea that, as my dad put it, “you gotta have at least one person whom you love and who loves you, and talk to that person and be supported by that person.”

Not everyone experiences growth after trauma. In recent years, psychologists have studied survivors of cancer, war, and terrorist attacks and found that there are certain traits that increase its likelihood, such as optimism, extroversion, and openness to new experience. Clinical treatment can also facilitate progress. In my family, all these factors played a role. The public nature of our saga, moreover, had the effect of convening around us an unusually supportive community.

Studies show that, in the end, somewhere between thirty-five and seventy-five per cent of trauma survivors experience some form of post-traumatic growth. “We say that, if you do experience traumatic events, it is quite possible you will experience one or more elements of growth,” Calhoun told me, before adding, “Our wish for you is that you don’t experience trauma at all.” For my family and me, that wish remains. But we know it’s one that will never be granted—and so we must, as my father wrote, decide every day the manner in which we want to live. My brother Andy and I have been shaped by that way of thinking, too. We’ve always been haunted by Jon’s death, but, perhaps for that reason, we share a drive to get the most out of the lives we have. For Andy, that meant becoming a musician. I pursued my own adventures and, eventually, a career in journalism.

In 1975, three years after my brother died, my mother took to her journal to reflect on what she had found for herself: a way of living with death that brought new meaning to life. “I treasure what I treasure,” she wrote. “I am aware of the temporariness of relationships and life itself. I am aware of what matters and turns me on. Did Jon give me this gift? I believe so. My sweet, sweet, sweetness. I thank you for that. I carry you with me forever unseen now, just as I did when you were snuggling in my uterus … unseen but filling my belly and my mind, part of our family even before you were born, part of our family now after your life. Thank you for this capacity to love and understand. Do you still know that you are loved?”




David Kushner is the author of the memoir “Alligator Candy” and other books.


Yoga for Finding Inner Calm in L.A. Right Now | The New Yorker

Shouts & Murmurs
Yoga for Finding Inner Calm in L.A. Right Now

By Jena FriedmanJune 25, 2025



Photograph from Getty

Today’s practice is designed to help you relax, reset, and build strength for the impending American civil war. Let’s begin by getting off social media. Please—it’s not helping. O.K., fine, go ahead and post one more flying cat meme. I’ll wait.

Now sit tall, with your legs crossed, or remain curled up in the fetal position—sorry, the unborn person position, as I’m now legally required to call it. Close your eyes and draw your attention inward. Notice your breath. Is it shallow and panicked? That’s O.K. It’s just your body reacting naturally to the rapid erosion of civil liberties in real time.

Breathe in through your nose, and exhale out through your mouth, releasing any tension you may still be holding after watching footage of a sitting U.S. senator being forcefully dragged out of a press conference by the Department of Homeland Security.

Feel your breath as it flows in and out. Let your shoulders melt. You are safe here, as long as you’re a white, natural-born, English-speaking U.S. citizen who has never publicly or privately (don’t forget we live in a digital-surveillance state!) criticized the U.S. government under President Trump.

Now drop your right ear to your right shoulder. Roll your chin slowly over to your left shoulder. It’ll help muffle the sound of military helicopters overhead.


Next, come onto your knees, hips about a shoulder-width apart. Let your arms drift behind you, as if they’re cuffed—elbows drawn gently toward one another, like you’re a Mexican grandmother being detained by anonymous masked men for the crime of waiting for a bus without carrying proof of citizenship. Pause here. Feel that stretch in your shoulders, and in your capacity to still be shocked, six months into Trump’s second term.

Pause in child’s pose. Forehead on the mat, arms extended forward. Remember when we all thought the problem was plastic straws?

Now sit up and let your hips sink back. Feel the ground beneath you, which I’m told has been sold by the federal government to private entities for mineral rights. Also, I hope you’re vaccinated because I just found out that someone in the last class has tested positive for measles.

Slowly lower yourself onto your back, legs extended, head on the mat, arms by your sides, and palms facing up for savasana. This is a posture of total inaction—it’s also called “corpse pose,” or “Congress.” Take a deep breath in, and, as you exhale, allow your body to collapse into the floor like a peaceful protester who just took a rubber bullet to the head. Stay here for as long as you like living in a proto-fascist police state.

Now open your eyes. Great work, everybody. You showed up! That’s half the battle; the other half is being fought by seven hundred marines and two thousand National Guardsmen deployed to downtown L.A. “for your safety.”

When you’re ready, gently return to a seated position. Place your hands on your heart—or instead, let’s keep them visible. ICE is at the door. We’re being raided. I gotta run. Namas—Actually, I’m not sure I feel safe speaking another language in public right now.





Are You the Same Person You Used to Be? | The New Yorker




Becoming You
Are you the same person you were when you were a child?
By Joshua RothmanOctober 3, 2022

People have strong, divergent opinions about the continuity of their own selves.Illustration by Juan Bernabeu

I have few memories of being four—a fact I find disconcerting now that I’m the father of a four-year-old. My son and I have great times together; lately, we’ve been building Lego versions of familiar places (the coffee shop, the bathroom) and perfecting the “flipperoo,” a move in which I hold his hands while he somersaults backward from my shoulders to the ground. But how much of our joyous life will he remember? What I recall from when I was four are the red-painted nails of a mean babysitter; the brushed-silver stereo in my parents’ apartment; a particular orange-carpeted hallway; some houseplants in the sun; and a glimpse of my father’s face, perhaps smuggled into memory from a photograph. These disconnected images don’t knit together into a picture of a life. They also fail to illuminate any inner reality. I have no memories of my own feelings, thoughts, or personality; I’m told that I was a cheerful, talkative child given to long dinner-table speeches, but don’t remember being so. My son, who is happy and voluble, is so much fun to be around that I sometimes mourn, on his behalf, his future inability to remember himself.

If we could see our childish selves more clearly, we might have a better sense of the course and the character of our lives. Are we the same people at four that we will be at twenty-four, forty-four, or seventy-four? Or will we change substantially through time? Is the fix already in, or will our stories have surprising twists and turns? Some people feel that they’ve altered profoundly through the years, and to them the past seems like a foreign country, characterized by peculiar customs, values, and tastes. (Those boyfriends! That music! Those outfits!) But others have a strong sense of connection with their younger selves, and for them the past remains a home. My mother-in-law, who lives not far from her parents’ house in the same town where she grew up, insists that she is the same as she’s always been, and recalls with fresh indignation her sixth birthday, when she was promised a pony but didn’t get one. Her brother holds the opposite view: he looks back on several distinct epochs in his life, each with its own set of attitudes, circumstances, and friends. “I’ve walked through many doorways,” he’s told me. I feel this way, too, although most people who know me well say that I’ve been the same person forever.

Try to remember life as you lived it years ago, on a typical day in the fall. Back then, you cared deeply about certain things (a girlfriend? Depeche Mode?) but were oblivious of others (your political commitments? your children?). Certain key events—college? war? marriage? Alcoholics Anonymous?—hadn’t yet occurred. Does the self you remember feel like you, or like a stranger? Do you seem to be remembering yesterday, or reading a novel about a fictional character?



If you have the former feelings, you’re probably a continuer; if the latter, you’re probably a divider. You might prefer being one to the other, but find it hard to shift your perspective. In the poem “The Rainbow,” William Wordsworth wrote that “the Child is Father of the Man,” and this motto is often quoted as truth. But he couched the idea as an aspiration—“And I could wish my days to be / Bound each to each by natural piety”—as if to say that, though it would be nice if our childhoods and adulthoods were connected like the ends of a rainbow, the connection could be an illusion that depends on where we stand. One reason to go to a high-school reunion is to feel like one’s past self—old friendships resume, old in-jokes resurface, old crushes reignite. But the time travel ceases when you step out of the gym. It turns out that you’ve changed, after all.

On the other hand, some of us want to disconnect from our past selves; burdened by who we used to be or caged by who we are, we wish for multipart lives. In the voluminous autobiographical novel “My Struggle,” Karl Ove Knausgaard—a middle-aged man who hopes to be better today than he was as a young man—questions whether it even makes sense to use the same name over a lifetime. Looking at a photograph of himself as an infant, he wonders what that little person, with “arms and legs spread, and a face distorted into a scream,” really has to do with the forty-year-old father and writer he is now, or with “the gray, hunched geriatric who in forty years from now might be sitting dribbling and trembling in an old people’s home.” It might be better, he suggests, to adopt a series of names: “The fetus might be called Jens Ove, for example, and the infant Nils Ove . . . the ten- to twelve-year-old Geir Ove, the twelve- to seventeen-year-old Kurt Ove . . . the twenty-three- to thirty-two-year-old Tor Ove, the thirty-two- to forty-six-year-old Karl Ove—and so on.” In such a scheme, “the first name would represent the distinctiveness of the age range, the middle name would represent continuity, and the last, family affiliation.”

My son’s name is Peter. It unnerves me to think that he could someday become so different as to warrant a new name. But he learns and grows each day; how could he not be always becoming someone new? I have duelling aspirations for him: keep growing; keep being you. As for how he’ll see himself, who knows? The philosopher Galen Strawson believes that some people are simply more “episodic” than others; they’re fine living day to day, without regard to the broader plot arc. “I’m somewhere down towards the episodic end of this spectrum,” Strawson writes in an essay called “The Sense of the Self.” “I have no sense of my life as a narrative with form, and little interest in my own past.”

Perhaps Peter will grow up to be an episodic person who lives in the moment, unconcerned with whether his life forms a whole or a collection of parts. Even so, there will be no escaping the paradoxes of mutability, which have a way of weaving themselves into our lives. Thinking of some old shameful act of ours, we tell ourselves, “I’ve changed!” (But have we?) Bored with a friend who’s obsessed with what happened long ago, we say, “That was another life—you’re a different person now!” (But is she?) Living alongside our friends, spouses, parents, and children, we wonder if they’re the same people we’ve always known, or if they’ve lived through changes we, or they, struggle to see. Even as we work tirelessly to improve, we find that, wherever we go, there we are (in which case what’s the point?). And yet sometimes we recall our former selves with a sense of wonder, as if remembering a past life. Lives are long, and hard to see. What can we learn by asking if we’ve always been who we are?

The question of our continuity has an empirical side that can be answered scientifically. In the nineteen-seventies, while working at the University of Otago, in New Zealand, a psychologist named Phil Silva helped launch a study of a thousand and thirty-seven children; the subjects, all of whom lived in or around the city of Dunedin, were studied at age three, and again at five, seven, nine, eleven, thirteen, fifteen, eighteen, twenty-one, twenty-six, thirty-two, thirty-eight, and forty-five, by researchers who often interviewed not just the subjects but also their family and friends. In 2020, four psychologists associated with the Dunedin study—Jay Belsky, Avshalom Caspi, Terrie E. Moffitt, and Richie Poulton—summarized what’s been learned so far in a book called “The Origins of You: How Childhood Shapes Later Life.” It folds in results from a few related studies conducted in the United States and the United Kingdom, and so describes how about four thousand people have changed through the decades.

John Stuart Mill once wrote that a young person is like “a tree, which requires to grow and develop itself on all sides, according to the tendency of the inward forces which make it a living thing.” The image suggests a generalized spreading out and reaching up, which is bound to be affected by soil and climate, and might be aided by a little judicious pruning here and there. The authors of “The Origins of You” offer a more chaotic metaphor. Human beings, they suggest, are like storm systems. Each individual storm has its own particular set of traits and dynamics; meanwhile, its future depends on numerous elements of atmosphere and landscape. The fate of any given Harvey, Allison, Ike, or Katrina might be shaped, in part, by “air pressure in another locale,” and by “the time that the hurricane spends out at sea, picking up moisture, before making landfall.” Donald Trump, in 2014, told a biographer that he was the same person in his sixties that he’d been as a first grader. In his case, the researchers write, the idea isn’t so hard to believe. Storms, however, are shaped by the world and by other storms, and only an egomaniacal weather system believes in its absolute and unchanging individuality.

Efforts to understand human weather—to show, for example, that children who are abused bear the mark of that abuse as adults—are predictably inexact. One problem is that many studies of development are “retrospective” in nature: researchers start with how people are doing now, then look to the past to find out how they got that way. But many issues trouble such efforts. There’s the fallibility of memory: people often have difficulty recalling even basic facts about what they lived through decades earlier. (Many parents, for instance, can’t accurately remember whether a child was diagnosed as having A.D.H.D.; people even have trouble remembering whether their parents were mean or nice.) There’s also the problem of enrollment bias. A retrospective study of anxious adults might find that many of them grew up with divorced parents—but what about the many children of divorce who didn’t develop anxiety, and so were never enrolled in the study? It’s hard for a retrospective study to establish the true import of any single factor. The value of the Dunedin project, therefore, derives not just from its long duration but also from the fact that it is “prospective.” It began with a thousand random children, and only later identified changes as they emerged.

Working prospectively, the Dunedin researchers began by categorizing their three-year-olds. They met with the children for ninety minutes each, rating them on twenty-two aspects of personality—restlessness, impulsivity, willfulness, attentiveness, friendliness, communicativeness, and so on. They then used their results to identify five general types of children. Forty per cent of the kids were deemed “well-adjusted,” with the usual mixture of kid personality traits. Another quarter were found to be “confident”—more than usually comfortable with strangers and new situations. Fifteen per cent were “reserved,” or standoffish, at first. About one in ten turned out to be “inhibited”; the same proportion were identified as “undercontrolled.” The inhibited kids were notably shy and exceptionally slow to warm up; the undercontrolled ones were impulsive and ornery. These determinations of personality, arrived at after brief encounters and by strangers, would form the basis for a half century of further work.

By age eighteen, certain patterns were visible. Although the confident, reserved, and well-adjusted children continued to be that way, those categories were less distinct. In contrast, the kids who’d been categorized as inhibited or as undercontrolled had stayed truer to themselves. At age eighteen, the once inhibited kids remained a little apart, and were “significantly less forceful and decisive than all the other children.” The undercontrolled kids, meanwhile, “described themselves as danger seeking and impulsive,” and were “the least likely of all young adults to avoid harmful, exciting, and dangerous situations or to behave in reflective, cautious, careful, or planful ways.” Teen-agers in this last group tended to get angry more often, and to see themselves “as mistreated and victimized.”

The researchers saw an opportunity to streamline their categories. They lumped together the large group of teen-agers who didn’t seem to be on a set path. Then they focussed on two smaller groups that stood out. One group was “moving away from the world,” embracing a way of life that, though it could be perfectly rewarding, was also low-key and circumspect. And another, similarly sized group was “moving against the world.” In subsequent years, the researchers found that people in the latter group were more likely to get fired from their jobs and to have gambling problems. Their dispositions were durable.


That durability is due, in part, to the social power of temperament, which, the authors write, is “a machine that designs another machine, which goes on to influence development.” This second machine is a person’s social environment. Someone who moves against the world will push others away, and he’ll tend to interpret the actions of even well-meaning others as pushing back; this negative social feedback will deepen his oppositional stance. Meanwhile, he’ll engage in what psychologists call “niche picking”—the favoring of social situations that reinforce one’s disposition. A “well-adjusted” fifth grader might actually “look forward to the transition to middle school”; when she gets there, she might even join some clubs. Her friend who’s moving away from the world might prefer to read at lunch. And her brother, who’s moving against the world—the group skews slightly male—will feel most at home in dangerous situations.

Through such self-development, the authors write, we curate lives that make us ever more like ourselves. But there are ways to break out of the cycle. One way in which people change course is through their intimate relationships. The Dunedin study suggests that, if someone who tends to move against the world marries the right person, or finds the right mentor, he might begin to move in a more positive direction. His world will have become a more beneficent co-creation. Even if much of the story is written, a rewrite is always possible.

The Dunedin study tells us a lot about how differences between children matter over time. But how much can this kind of work reveal about the deeper, more personal question of our own continuity or changeability? That depends on what we mean when we ask who we are. We are, after all, more than our dispositions. All of us fit into any number of categories, but those categories don’t fully encompass our identities.

There’s an important sense, first of all, in which who you are is determined not by what you’re like but by what you do. Imagine two brothers who grow up sharing a bedroom, and who have similar personalities—intelligent, tough, commanding, and ambitious. One becomes a state senator and university president, while the other becomes a Mob boss. Do their parallel temperaments make them similar people? Those who’ve followed the stories of William Bulger and James (Whitey) Bulger—the Boston brothers who ran the Massachusetts Senate and the underworld, respectively—sometimes suggest that they were more alike than different. (“They’re both very tough in their respective fields,” a biographer observed.) But we’d be right to be skeptical of such an outlook, because it requires setting aside the wildly different substances of the brothers’ lives. At the Pearly Gates, no one will get them confused.




The Bulger brothers are extraordinary; few of us break so bad or good. But we all do surprising things that matter. In 1964, the director Michael Apted helped make “Seven Up!,” the first of a series of documentaries that would visit the same group of a dozen or so Britons every seven years, starting at age seven; Apted envisioned the project—which was updated most recently in 2019, with “63 Up”—as a socioeconomic inquiry “about these kids who have it all, and these other kids who have nothing.” But, as the series has progressed, the chaos of individuality has encroached on the clarity of categorization. One participant has become a lay minister and gone into politics; another has begun helping orphans in Bulgaria; others have done amateur theatre, studied nuclear fusion, and started rock bands. One turned into a documentarian himself and quit the project. Real life, irrepressible in its particulars, has overpowered the schematic intentions of the filmmakers.

Even seemingly unimportant or trivial elements can contribute to who we are. Late this summer, I attended a family function with my father and my uncle. As we sat at an outside table, making small talk, our conversation turned to “Star Trek,” the sci-fi TV show that premièred in 1966. My father and uncle have both watched various incarnations of it since childhood, and my dad, in particular, is a genuine fan. While the party went on around us, we all recited from memory the original version’s opening monologue—“Space: the final frontier. These are the voyages of the Starship Enterprise. . . .”—and applauded ourselves on our rendition. “Star Trek” is a through line in my dad’s life. We tend to downplay these sorts of quirks and enthusiasms, but they’re important to who we are. When Leopold Bloom, the protagonist of James Joyce’s “Ulysses,” wanders through a Dublin cemetery, he is unimpressed by the generic inscriptions on the gravestones, and thinks they should be more specific. “So and So, wheelwright,” Bloom imagines, or, on a stone engraved with a saucepan, “I cooked good Irish stew.” Asked to describe ourselves, we might tend to talk in general terms, finding the details of our lives somehow embarrassing. But a friend delivering a eulogy would do well to note that we played guitar, collected antique telephones, and loved Agatha Christie and the Mets. Each assemblage of details is like a fingerprint. Some of us have had the same prints throughout our lives; others have had a few sets.

Focussing on the actualities of our lives might belie our intuitions about our own continuity or changeability. Galen Strawson, the philosopher who says that he has little sense of his life “as a narrative,” is best known for the arguments he’s made against the ideas of free will and moral responsibility; he maintains that we don’t have free will and aren’t ultimately responsible for what we do. But his father, Peter Strawson, was also a philosopher, and was famous for, among other things, defending those concepts. Galen Strawson can assure us that, from a first-person perspective, his life feels “episodic.” Yet, from the third-person perspective of an imagined biographer, he’s part of a long plot arc that stretches across lifetimes. We may feel discontinuous on the inside but be continuous on the outside, and vice versa. That sort of divergence may simply be unavoidable. Every life can probably be viewed from two angles.

I know two Tims, and they have opposing intuitions about their own continuities. The first Tim, my father-in-law, is sure that he’s had the same jovially jousting personality from two to seventy-two. He’s also had the same interests—reading, the Second World War, Ireland, the Wild West, the Yankees—for most of his life. He is one of the most self-consistent people I know. The second Tim, my high-school friend, sees his life as radically discontinuous, and rightly so. When I first met him, he was so skinny that he was turned away from a blood drive for being underweight; bullied and pushed around by bigger kids, he took solace in the idea that his parents were late growers. This notion struck his friends as far-fetched. But after high school Tim suddenly transformed into a towering man with an action-hero physique. He studied physics and philosophy in college, and then worked in a neuroscience lab before becoming an officer in the Marines and going to Iraq; he entered finance, but has since left to study computer science.

“I’ve changed more than most people I know,” Tim told me. He shared a vivid memory of a conversation he had with his mother, while they sat in the car outside an auto mechanic’s: “I was thirteen, and we were talking about how people change. And my mom, who’s a psychiatrist, told me that people tend to stop changing so much when they get into their thirties. They start to accept who they are, and to live with themselves as they are. And, maybe because I was an unhappy and angry person at the time, I found that idea offensive. And I vowed right then that I would never stop changing. And I haven’t stopped.”

Do the two Tims have the whole picture? I’ve known my father-in-law for only twenty of his seventy-two years, but even in that time he’s changed quite a bit, becoming more patient and compassionate; by all accounts, the life he lived before I met him had a few chapters of its own, too. And there’s a fundamental sense in which my high-school friend hasn’t changed. For as long as I’ve known him, he’s been committed to the idea of becoming different. For him, true transformation would require settling down; endless change is a kind of consistency.

Galen Strawson notes that there’s a wide range of ways in which people can relate to time in their lives. “Some people live in narrative mode,” he writes, and others have “no tendency to see their life as constituting a story or development.” But it’s not just a matter of being a continuer or a divider. Some people live episodically as a form of “spiritual discipline,” while others are “simply aimless.” Presentism can “be a response to economic destitution—a devastating lack of opportunities—or vast wealth.” He continues:


There are lotus-eaters, drifters, lilies of the field, mystics and people who work hard in the present moment. . . . Some people are creative although they lack ambition or long-term aims, and go from one small thing to the next, or produce large works without planning to, by accident or accretion. Some people are very consistent in character, whether or not they know it, a form of steadiness that may underwrite experience of the self’s continuity. Others are consistent in their inconsistency, and feel themselves to be continually puzzling and piecemeal.

The stories we tell ourselves about whether we’ve changed are bound to be simpler than the elusive reality. But that’s not to say that they’re inert. My friend Tim’s story, in which he vows to change forever, shows how such stories can be laden with value. Whether you perceive stasis or segmentation is almost an ideological question. To be changeable is to be unpredictable and free; it’s to be not just the protagonist of your life story but the author of its plot. In some cases, it means embracing a drama of vulnerability, decision, and transformation; it may also involve a refusal to accept the finitude that’s the flip side of individuality.

The alternative perspective—that you’ve always been who you are—bears values, too. James Fenton captures some of them in his poem “The Ideal”:


A self is a self.
It is not a screen.
A person should respect
What he has been.

This is my past
Which I shall not discard.
This is the ideal.
This is hard.



In this view, life is full and variable, and we all go through adventures that may change who we are. But what matters most is that we lived it. The same me, however altered, absorbed it all and did it all. This outlook also involves a declaration of independence—independence not from one’s past self and circumstances but from the power of circumstances over the choices we make to give meaning to our lives. Dividers tell the story of how they’ve renovated their houses, becoming architects along the way. Continuers tell the story of an august property that will remain itself regardless of what gets built. As different as these two views sound, they have a lot in common. Among other things, they aid us in our self-development. By committing himself to a life of change, my friend Tim might have sped it along. By concentrating on his persistence of character, my father-in-law may have nurtured and refined his best self.

The passage of time almost demands that we tell some sort of story: there are certain ways in which we can’t help changing through life, and we must respond to them. Young bodies differ from old ones; possibilities multiply in our early decades, and later fade. When you were seventeen, you practiced the piano for an hour each day, and fell in love for the first time; now you pay down your credit cards and watch Amazon Prime. To say that you are the same person today that you were decades ago is absurd. A story that neatly divides your past into chapters may also be artificial. And yet there’s value in imposing order on chaos. It’s not just a matter of self-soothing: the future looms, and we must decide how to act based on the past. You can’t continue a story without first writing one.

Sticking with any single account of your mutability may be limiting. The stories we’ve told may become too narrow for our needs. In the book “Life Is Hard,” the philosopher Kieran Setiya argues that certain bracing challenges—loneliness, failure, ill health, grief, and so on—are essentially unavoidable; we tend to be educated, meanwhile, in a broadly redemptive tradition that “urges us to focus on the best in life.” One of the benefits of asserting that we’ve always been who we are is that it helps us gloss over the disruptive developments that have upended our lives. But it’s good, the book shows, to acknowledge hard experiences and ask how they’ve helped us grow tougher, kinder, and wiser. More generally, if you’ve long answered the question of continuity one way, you might try answering it another. For a change, see yourself as either more continuous or less continuous than you’d assumed. Find out what this new perspective reveals.

There’s a recursive quality to acts of self-narration. I tell myself a story about myself in order to synchronize myself with the tale I’m telling; then, inevitably, I revise the story as I change. The long work of revising might itself be a source of continuity in our lives. One of the participants in the “Up” series tells Apted, “It’s taken me virtually sixty years to understand who I am.” Martin Heidegger, the often impenetrable German philosopher, argued that what distinguishes human beings is our ability to “take a stand” on what and who we are; in fact, we have no choice but to ask unceasing questions about what it means to exist, and about what it all adds up to. The asking, and trying out of answers, is as fundamental to our personhood as growing is to a tree.

Recently, my son has started to understand that he’s changing. He’s noticed that he no longer fits into a favorite shirt, and he shows me how he sleeps somewhat diagonally in his toddler bed. He’s been caught walking around the house with real scissors. “I’m a big kid now, and I can use these,” he says. Passing a favorite spot on the beach, he tells me, “Remember when we used to play with trucks here? I loved those times.” By this point, he’s actually had a few different names: we called him “little guy” after he was born, and I now call him “Mr. Man.” His understanding of his own growth is a step in his growing, and he is, increasingly, a doubled being—a tree and a vine. As the tree grows, the vine twines, finding new holds on the shape that supports it. It’s a process that will continue throughout his life. We change, and change our view of that change, for as long as we live. ♦




Published in the print edition of the October 10, 2022, issue.


Joshua Rothman, a staff writer, authors the weekly column Open Questions. He has been with the magazine since 2012.




===

Buddhist Women's Salim: Remembering the Non-Duality (不二) of Buddha-Nature and Salim

Kim Jung-hee (김정희), Mosineun Saramdeul (모시는사람들), 2011-01-20


Book Description
This book attempts to open up the field of Buddhist women's studies from the standpoint of Buddhist feminism. Moving beyond the view of Korean women's history as nothing but a "history of cruelty to women" dominated by Confucian patriarchy, it re-examines that history within the orbit of Buddhism, shamanism, and the other traditions in which more women today entrust their hearts and draw wisdom and guidance for living than in any other religion. By remembering and summoning the non-patriarchal traditions that run through Korean history, it seeks alternatives for the women's issues of the present.

Tracing the author's hard path from her birth into a devout Christian family to her becoming a Buddhist, aspects of her life as a scholar of women's studies, and what it means to practice women's studies in Korea today, the opening of the book gives an overview of how the essays collected here came to be written and of the stories behind them.



Table of Contents


1. The Spirituality of Ecofeminism
1. Seeing spirituality and mind together
2. The background of ecofeminism's emergence
3. The spirituality of ecofeminism

2. Mind in Buddhism
1. Why did we begin to search for "mind"?
2. The non-dual (不二) mind, in which nothing is not life
3. Freedom through awakening to the emptiness of existence

3. Buddhism, Korean Culture, and Women
1. Korea's great mother-goddess myth: the Mago myth
2. Buddhism, Daoism, and women within patriarchal history
3. The people's life tradition and Buddhism

4. Thinking About Salim as Women's Buddha-Nature
1. From an age of life in crisis to an age of salim
2. The two faces of salim
3. Salim resurrected
4. The boundaries of salim, the way of salim

5. Buddhist Life Ethics and Lay Buddhist Women
1. The small liberation of the women who did not leave home
2. The Eightfold Path and the Five Precepts as Buddhist life ethics
3. Buddhist life ethics in the lives of Buddhist women
4. Can the small liberation be surpassed?

6. Thinking About Women's Journey of Recovering the Mind
1. Mind practice not separated from daily life
2. Recognizing the reveries of the patriarchal mind
3. Recovering the mind by leaving patriarchal femininity behind


From the Book


The spirit of resistance against the social forces that force women into being the second sex, and of critique and reflection on women's own servility, can be called "Feminism": a single large umbrella covering the various feminisms. In Korea, largely under the influence of American feminism, the 1980s saw liberal feminism, Marxist feminism, socialist feminism, and radical feminism compete theoretically and practically within the category of Feminism. Since the 1990s, postmodern feminism, which developed in Europe from the 1960s onward, and its American adaptations have occupied the mainstream. Meanwhile, as the crisis of the earth's sustainability brought on by ecological destruction came to be recognized as a grave problem from the 1970s on, ecological thought emerged, and within this current rose ecofeminism, which thinks ecological problems and women's problems together. Western ecofeminism was introduced to Korea in the mid-1990s through the Beijing Women's Conference and translations of ecofeminist books. Because Korea's cultural and historical context differs, Western ecofeminism has not been received with much warmth by Korean women activists in the life movement. To speak properly about our difference, this chapter first seeks an understanding of Western ecofeminism itself. (from Chapter 1)
Spirituality, the most basic concept in the theory and practice of ecological thought and ecofeminism, is called "mind" in East Asian cultures. Here I take the question of spirituality as a question of mind, and try to understand mind in Buddhism, along with the human ideal that naturally follows from it, non-self. On the Buddhist understanding of mind there already exist many writings: the sutras themselves and the commentaries and books of the Seon masters. For me, still brimming with the deluded mind of an ordinary being, to discuss mind in Buddhism feels like a day-old puppy that does not know to fear the tiger, and I am abashed. Nevertheless, to pursue research connecting mind with women's issues and with the ecological crisis, settling my understanding of mind is an unavoidable task for me as a researcher. I will therefore treat mind in Buddhism at the level required to carry the discussion of Chapter 3, "Buddhism and Women." (from Chapter 2)
This chapter introduces the studies in Buddhist feminism as life-feminism that I have carried out over the past ten years, connecting Buddhism and women. Life-feminism is the signifier of my theoretical and practical orientation: to study and practice women's issues and ecological issues in an integrated way, grounded in East Asian life philosophies such as Buddhism and Lao-Zhuang thought, in the tradition of salim, or in their integration with Western ecological thought, within a glocal current that does not erase the historical and cultural context of Korea and East Asia. Another question running through the chapter concerns the traces of non-patriarchal tradition observable in Korean history, and their origins. As the example of Shin Saimdang shows, patrilocal marriage had not yet taken hold even in the mid-Joseon period. Shin Saimdang's natal family had no son, so when her father died a few months after her marriage she completed the three-year mourning at her parents' home before going up to Seoul, where her husband lived, and even afterward she sometimes stayed with her widowed mother. Looking into our history, we notice traditions like these, which cannot possibly be regarded as patriarchal customs: traditions of a different grain from surrogate childbearing, harsh servitude to in-laws, or the cult of the "virtuous woman" that demanded even suicide. How are these traditions to be explained, and what are their origins? Though the puzzlement is not mine alone, their origins do not yet seem to have been properly studied. (from Chapter 3)
Salim ordinarily refers to women's work in the home, raising children and restoring the daily vitality of adults (which is in itself no small thing), but it is not confined to this. The women who gather shellfish on the Saemangeum tidal flats call their work gaet-salim, "tidal-flat salim"; here salim names work outside the house. We also speak of "national salim" (나라살림), where salim means the management and politics of the state, with the emphasis on its economic side. If salim had been fated from the start to be imprisoned in the household, such usages would be impossible. That we use the word so naturally tells us that salim was never confined to the home: the boundary between inside and outside shifts flexibly, and any activity alive with the mind of salim seems to have been, from the beginning, salim. (from Chapter 4)
Concretely, the questions this chapter raises are these. What is the logic of Buddhist life ethics? If Buddhist life ethics operates, dissolved, within the experience and lives of Buddhist women, in what form does it appear? Do Buddhist women who have embodied Buddhist life ethics fully show themselves to be ecological subjects at the individual and social levels? If in some respects they do not, does that stem from an inherent defect of Buddhist life ethics, or from factors external to the ethics? To answer these questions I examine the Eightfold Path and the Five Precepts as Buddhist life ethics, weighing the arguments for and against calling them an ecological Buddhist ethics. Second, through concrete case stories, I study the process by which Buddhist women establish themselves as existential and social subjects while internalizing Buddhist ethics. (from Chapter 5)
Here, building on the Buddhist account of mind examined in Chapter 2, and advancing its claim that daily life and mind practice are not two, I attempt to think through women's journey of recovering the mind in Buddhist terms. This Buddhist thinking moves between a structural reading and an ontological one: between reading the system and recovering the pure mind. Buddhist thinking is commonly taken to absolutize the self-determination of the individual while ignoring the system; I point out that this is mistaken, and develop the discussion on the understanding that Buddhist thinking comprehends both aspects. (from Chapter 6)



About the Author
Kim Jung-hee (author)

Co-representative of Gabeul (가배울), a group working to revive the traditional rural culture of Korea's southern provinces, she is also a scholar of women's studies. Her interests lie in life-feminism and local women's movements, and her books include Grassroots Women's Politics and the Possibility of Green Leadership; Fair Trade, Trade of Hope; and Southern Women and the Art of Salim.

Recent works: Ecological Thinkers of Planet Earth; Buddhism and Sexuality; Consume Less, Exist More … 16 titles in all.


Publisher's Description
According to the author's definition, "Buddhist feminism" is "the totality of theoretical and practical acts that study women's experience and gender relations according to the Buddhist system of thought; that present theoretical and practical alternatives to the women's problems thus analyzed and interpreted; and that strive to realize those alternatives, thereby contributing to building a world in which women and men flourish together."

For this, a fresh illumination of women's history is essential, because "awareness of, reflection on, and healthy inheritance of the deep culture of the ground on which one stands" is the starting point for recovering a healthy mind.
The author opens the book with a short confession about her own life ("The Road to Life-Feminism," pp. 5-19).

Countless discourses about women unfold, and civil society and the many layers of life cut across our lives in every direction; yet the women and the people living in the concrete sites of life may well be leading lives entirely apart from those discourses. Scholarship is, perhaps, the endless work of mending and adjusting the mesh of its net to fit reality, so that the shape of their lives can be properly captured.

Through the shape of women's lives that the author seeks to draw under the name "life-feminism"; spirituality (Buddha-nature, mind) and Korean culture; women as subjects of salim (the great salim), recovering the original breadth of today's diminished "salim"; "Buddhist women's studies"; and concrete cases of salim as Buddhist women's mind practice and bodhisattva action, the book explores Buddhism and women, women and

