Eugene McCarraher
Against Chrapitalism.
If the last five years of American politics have demonstrated anything, it's that Marx's dictum about the modern state couldn't be more indisputable: our government is the executive committee for the common affairs of the bourgeoisie. Now more than ever, our liberal democracy is a corporate franchise, and the stockholders are demanding an ever-higher return on their investment in America, Inc. Over the last four decades, the Plutocracy has decided to repeal the 20th century, to cancel the gains and protections won by workers, the poor, and others outside the imperial aristocracy of capital. Enough of this coddling of those Ayn Rand vilified as "moochers" and "looters." Return the country to its rightful owners: the "Job Creators," the Almighty Entrepreneurs, those anointed by Heaven to control the property interests of the American Empire. Endowed with the Divine Right of Capital, they deserve our thanksgiving and reverence, for without them we would not deserve to live, such common clay are we.
The Faith of the Faithless: Experiments in Political Theology
Simon Critchley (Author)
Verso
302 pages
$11.19
If anyone thinks that the re-election of President Barack Obama invalidates this judgment, think again. Mitt Romney may have been a more egregious and openly disdainful lord of the manor, but Obama has compiled an impeccable record of imperial corporate stewardship. Despite all the hype about a rising progressive coalition of non-whites and young people, there is no reason to believe that Obama's second term of office will be any less a model of deference.
The Plutocracy's beatific vision for the mass of Americans is wage servitude: a fearful, ever-busy, and cheerfully abject pool of human resources. Rendered lazy and recalcitrant by a half-century of mooching, American workers must be forced to be free: crush labor unions, keep remuneration low, cut benefits and lengthen working hours, close or narrow every avenue of escape or repose from accumulation. If they insist on living like something more than the whining, expendable widgets they are, reduce them to a state of debt peonage with an ensemble of financial shackles: mortgages, credit cards, and student loans, all designed to ensure that the wage slaves utter two words siren-sweet to business: "Yes, boss." It's the latest chapter in the depressing story that David Graeber relates in Debt: debt as an especially insidious weapon in the arsenal of social control. "There's no better way to justify relations founded on violence … than by reframing them in the language of debt," he writes, "because it immediately makes it seem that it's the victim who's doing something wrong."
Alas, we're living in the early, bewildering days of the demise of the American Empire, the beginning of the end of that obsession-compulsion known as the American Dream. The reasons are clear, if often angrily denied: military hubris and over-extension; a stagnant monopoly capitalism with a bloated financial sector; a population on whom it's dawning that low-wage labor is their inexorable fate; ecological wreckage that can only be limited or repaired by cessation of growth. The patricians' task will be threefold: finessing the increasingly obvious fact of irreversible imperial decline; convincingly performing the charade of democracy in the face of popular vassalage; and distracting or repressing the roiling rage and tumult among the plebs. How will the elites maintain and festoon their ever-more untenable hegemony?
Empires have always evaded but eventually accepted their impending senescence: first, willful, vehement denial, and redoubled, often violent devotion to the imperial customs and divinities; then the slow, entropic apocalypse of demoralization and retrenchment. As imperial twilight descends, a brisk if melancholy market of fashions in acquiescence will undoubtedly arise. Reconciled to the dystopian prospect of a world engulfed in war and famine, the affluent will sport a variety of brands of what Simon Critchley dubs "passive nihilism," a withdrawal from politics into tasteful, well-guarded enclaves of resignation. Radical visions may revive as well, but right now they're dispiritingly feckless. Looking at first like a pentecost of utopia, the "Occupy" movement has dismally failed to gain any popular traction, in part because of the utter mediocrity and incoherence of its demands. "Fairness" is populist pabulum; "we are the 99%" is a slogan, not serious political analysis. The injustice and indignity of capitalism have seldom been so openly wretched, but as Graeber ruefully observes, just when we need "to start thinking on a breadth and with a grandeur appropriate to the times," we seem to have "hit the wall in terms of our collective imagination."
Don't expect any breadth or grandeur from the Empire's Christian divines. Across the board, the imperial chaplains exhibit the most obsequious deference to the Plutocracy, providing imprimaturs and singing hallelujahs for the civil religion of Chrapitalism: the lucrative merger of Christianity and capitalism, America's most enduring covenant theology. It's the core of "American exceptionalism," the sanctimonious and blood-spattered myth of providential anointment for global dominion. In the Chrapitalist gospel, the rich young man goes away richer, for God and Mammon have pooled their capital, formed a bi-theistic investment group, and laundered the money in baptismal fonts before parking it in offshore accounts. Chrapitalism has been America's distinctive and gilded contribution to religion and theology, a delusion that beloved community can be built on the foundations of capitalist property. As the American Empire wanes, so will its established religion; the erosion of Chrapitalism will generate a moral and spiritual maelstrom.
What will American Christians do as their fraudulent Mandate from Heaven expires? They might break with the imperial cult so completely that it would feel like atheism and treason. With a little help from anarchists, they might be monotheists, even Christians again. Who better to instruct them in blasphemy than sworn enemies of both God and the state? Christians might discover that unbelievers can be the most incisive and demanding theologians. As Critchley asserts, " 'God' is the first anarchist, calling us into struggle with the mythic violence of law, the state, and politics by allowing us to glimpse the possibility of something that stands apart." By inciting us to curse and renounce the homespun idolatry of Chrapitalism, Critchley and Graeber can point Christians back to a terrible but glorious moment in their history: when the avant-garde of the eschaton were maligned as godless traitors. We'll need that dangerous memory in our frightful if doubtless very different time.
An anti-globalist firebrand and renowned anthropologist at Goldsmiths, University of London, Graeber has been touted as a guru for Occupy, writing portentously in the Guardian that it represents "the opening salvo in a wave of negotiations over the dissolution of the American Empire." Debt should be read as a scholarly barrage in that colloquy on imperial decay. Indeed, Graeber himself tells us that his is an Important Book. "For a very long time, the intellectual consensus has been that we can no longer ask Great Questions." Graeber's Great Answer is a tour de force of interdisciplinary erudition, a sprawling, disheveled, and fascinating mess of a book. After 200 pages of anthropology, economics, sociology, and philosophy—even a bit of religion and theology—the history of debt unfolds as a magpie collection of anecdotes: stories from around the globe about coinage, slavery, markets, trade, and law. The last two centuries get jammed into the last 40 pages; the last 40 years into the final 30. It's a rambling, ill-focused account, and it's not at all clear by the end of the volume exactly what the Great Answer is.
Graeber's history is less engrossing than his vigorous diatribe against the sado-science of economics—the ethical nexus of Chrapitalism—and his sustained assault on this phony discipline will endure in the annals of schadenfreude. There's been a Himalayan rise in the inflation rate of arrogance among economists since the 1970s, and having failed to see the current turmoil coming, practitioners of the dismal science should be required to eat a daily helping of humble pie. Their account of history (where they pretend to know any) has been discredited for over a century; drawing on an ample anthropological and historical literature, Graeber shows that money and markets emerged, not from Adam Smith's "natural liberty," but from the need of ancient states to provision their expanding temple-military complexes. From its "myth of barter" to its truncated, utility-maximizing humanism, economics, Graeber contends, has "little to do with anything we observe when we examine how economic life is actually conducted." Historically illiterate and morally cretinous, economics—not theology—is the most successful confidence game in the history of intellectual life, a testament to the power of avarice to induce and embellish human credulity.
In Graeber's view, economics' most nefarious impact on morality is its perverse account of social relations, especially those revolving around obligation and interdependence. Graeber distinguishes between obligations—the incalculable owing of favors, as when you give me something, and I owe you something back—and debt as a precisely enumerable obligation, and therefore calculable in terms of equivalence and money. Conceivable only when people are treated not as human beings but as abstractions, equivalence is the categorical imperative of pecuniary reason, and it sanctifies the self-righteous, skinflint buncombe that parades as an ethic of "character." Isn't paying one's debts the basis of morality and dependable personal character? Especially when translated into money, the quantification of debt can justify a lot of indecent, horrific conduct. Can't pay me back? I'll take your daughter, or foreclose on your home, or demand austerity measures that result in famine, disease, or destitution.
Graeber's alternative to debt and its moral atrocities is communism: "from each according to their ability, to each according to their needs." (Not, note well, according to their "deserts.") Knowing that he'll face a fusillade of umbrage about "totalitarianism," Graeber insists that communism "exists right now" and lies at "the foundation of all human sociability." Our lives abound with moments of everyday communism: we don't charge people who ask us for directions, and if we do, we're rightly considered jerks. Communism is not "egalitarianism"—which, as even Marx observed, partakes of the boring, inhuman logic of equivalence—and in Graeber's view, it doesn't entail any specific form of property. (An unromantic admirer of peasant societies and their moral economy of "the commons," Graeber appears to endorse what anthropologists sometimes call "usufruct," in which property becomes a kind of trusteeship dependent on the performance of a function.) A communist relationship—between spouses, lovers, friends—is not only one in which accounts are not kept, but one in which it would be considered "offensive, or simply bizarre" to even think of doing so. Love keeps no record of wrongs—or rights.
Thus communism restricts or negates a "freedom" conceived solely as lack of restraint. As Graeber explains, "freedom" has meant several things: release from debts, as in the biblical notion of "redemption"; friendship, as derived from the German freund, connoting amicable solidarity; and unfettered power, or libertas, enshrined in Roman jurisprudence, the right of a patriarch to do anything with his possessions. And as Graeber reminds us, those possessions included his family: famulus meant slave, while dominus, or master, derived from domus, or household. (Remember that next time you're tempted to swoon to claptrap about "family values.") The notion of absolute ownership of things originated in the absolute ownership of people. Roman libertas leavens the mean-spirited ideal of "freedom" in liberal capitalist democracies. As "self-ownership," freedom both makes property a right rather than a function and turns a right into a kind of alienable property. Of course, capitalists have every interest in getting us to see "freedom" this way, since "self-ownership" entails the notion that we can give away, sell, or rent out our freedom. As 19th-century craftsmen and workers understood better than we do today, wage labor is the slavery of capitalism: if you don't own the means of production, you work for those who do—unlike chattel, you enjoy the dubiously ennobling privilege of choosing your master.
Graeber affirms redemption and friendship against the command economy of libertas. Friends and lovers don't treat each other as servants or vendable objects, so freedom should be "the ability to make friends," the capacity to enter into human relations that are uncoerced and incalculable. And since friends are naturally communists, they'll live without thinking of their relations in a way that leads to double-entry bookkeeping; they'll live in the light of "redemption," which isn't about "buying something back" but rather about "destroying the entire system of accounting." To create a more humane and generous world, we must unlearn our moral arithmetic and throw the ledgers into the bonfire. A communist society of friends requires the abolition of capitalism.
Hence the expectation, after 500 pages, of a Great Answer with "breadth and grandeur"—but Graeber fails to deliver anything more than exhortation and tepid reformism. "History is not over … surprising new ideas will certainly emerge," he assures us; popular movements are having "all sorts of interesting conversations." Yet Graeber's own call for "a Biblical-style Jubilee" is magnanimous but disappointingly banal. A wholesale cancellation of consumer and international debt seems bold, but it's fundamentally conservative: it would liberate debtors while maintaining the existing arrangement and logic of capitalism. Property forms do matter; we can't treat them with the cavalier indifference that Graeber exhibits. To end the tyranny of debt, we would have to cultivate a political imagination that sees well beyond a jubilee.
While Graeber asserts that some great conceptual breakthrough could arise "from some as yet completely unexpected quarter," he pretty much dismisses religion as a source of moral and political innovation. Religion parrots the language of money and debt: "forgive us our debts, as we forgive our debtors," as the Lord's Prayer pleads, and religions often speak of the debt we owe to God or some other cosmic force. "Redemption" meant buying back, and the Atonement is often conceived as Christ's paying a debt we sinners owe to God. And besides, as Graeber observes, Christians don't take their own Savior at his word. Christian bankers and creditors don't forgive their debtors; why should God forgive them their sins? Yet Graeber concedes that Christianity harbors traces of a moral and ontological revolution against the regime of debt. "Redemption" could point to the destruction and transcendence of equivalence; as Thomas Aquinas and other medieval theologians explained, "our relation with the cosmos is ultimately nothing like a commercial transaction, nor could it be." You can pay off the bank or the bartender; how do you square a "debt" to God?
Graeber drops the point and moves on; Critchley makes "our relations with the cosmos" the central concern of his incisive volume. A philosopher at the New School for Social Research, Critchley has written often and profoundly on ethics in the wake of God's apparent death, especially in Infinitely Demanding (2007), where he sought to explain and overcome the demoralization he sees in liberal societies. Tracing what he calls their "motivational deficit" to the "felt inadequacy of secular conceptions of morality," Critchley proposed an account of moral and political agency in terms of "dividualism," where the self is incessantly called and divided by "fidelity to an unfulfillable demand." We can and should never be "at one" with ourselves; we can and should never be "authentic." The energy for political transformation resides in our "endless inauthenticity, failure, and lack of self-mastery."
With his new book, Critchley joins other left radicals—Slavoj Zizek, Alain Badiou, and Terry Eagleton—who seek in theology not some balm for disappointment, but a tonic to sharpen the mind and revive the spirit of anti-capitalist struggle. Presented as a modest portfolio of "experiments in political theology," Critchley's volume is a rich, audacious attempt to plumb the meaning of faith, the most sustained left-atheist engagement with Christian theology since the work of Ernst Bloch. Struck by Oscar Wilde's bracing assertion in De Profundis—"everything to be true must become a religion"—Critchley provides an exacting and indispensable reflection on the nature of political commitment.
From Hobbes and Locke to Rousseau and Marx to Rawls, Nozick, and Foucault, the modernity of modern politics has been thought to reside in the rejection of any conception of political order rooted in nature or divinity. But by grounding the political completely and unreservedly in the human, this apparently "secular" mode of politics requires that the human be "unchallengeable"—in other words, sacred. All political order depends, Critchley maintains, on allegiance to a "supreme fiction" whereby a people becomes a people—an "original covenant," as he puts it. Whether it's fascism, communism, or liberal democracy, modern political forms, Critchley contends, comprise "a series of metamorphoses of sacralization." In this view, the American civil religion is an especially brazen displacement and renaming of sacral devotion.
This is a provocative and unsettling claim, for it counters the tale of modernity narrated as "secularization" or "disenchantment." First told by Marx and Max Weber, it's been given a Christian re-statement most recently by Charles Taylor in A Secular Age (2007). I've long thought that religious intellectuals give too much credence to the "disenchantment of the world," and that they need, not to call for some reactive "re-enchantment," but to tell a new story about modernity. (As readers may know, I'm finishing a book that makes a Critchleyan claim about the history of capitalism.) For those who want to challenge the very narrative of "secularization," Critchley will be an invaluable interlocutor, if not quite a kindred spirit.
Still, Critchley's account of "the sacred" remains utterly human and terrestrial—it echoes a lineage that extends from Ludwig Feuerbach to Norman O. Brown—and it underlies the promise and failure of his attempt at a political theology without God. Honoring its "infinite demand," the dividualist self commits to a truth that is fundamentally religious—a "troth, the experience of fidelity where one is affianced and then betrothed." This is a powerful and persuasive phenomenology of faith as unswerving devotion. But from whom or what does this infinite demand to which we betroth ourselves originate? Critchley summarily rules out any origin "external to the self … any external, divine command, any transcendent reality." It seems that in Critchley's telling, we marry ourselves. Polonius is right: to thine own self be true.
This religious fidelity to ourselves underwrites both love and communism. In two chapters on Pauline theology and the late-medieval movement of the Free Spirit, Critchley hints at a radical politics sustained by faith and suffused by love. Perusing the writings of Marguerite Porete—a learned, lyrical Beguine mendicant who died at the stake in 1310—Critchley affirms her belief that sin could be overcome in this life through a mystical, quasi-erotic union with the Spirit, and that such a union requires what Simone Weil called a "decreation" of the ego in the transformative crucible of love. Love, for Porete, is a strenuous, intrepid pilgrimage into self-annihilation; "love dares the self to leave itself behind, to enter into poverty"; in Critchley's words, love is "the audacity of impoverishment," an exhilarating, paradoxically enriching loss, an abandonment of all security for the sake of communion—friendship—with divinity.
Thus, as Critchley interprets Paul, "who I am is not in my power"; called and divided, my identity requires "a certain affirmation of weakness." The self is not a seizure and assertion, but rather "the orientation of the self towards something that exceeds oneself." Freedom is not, as in Roman and liberal capitalist libertas, some "virile assertion of autarchy," but rather "the acknowledgement of an essential powerlessness." Freedom comes through submission to the anguish of love; it is not the possession but the endurance of all things. "Love," Critchley writes compellingly, "is not as strong as death. It is stronger."
For Porete and the Free Spirit, love and poverty—tokens of friendship with God—entailed a "faith-based communism," in which the wealth of God is held in common by all, without regard for class or status. (As Graeber emphasizes, friends and lovers are communists.) At the same time, "there is no longer any legitimacy to moral constraints … that do not directly flow from our freedom"—freedom understood as friendship with God. In Pauline terms, love is the law of our being.
Though (wrongly) condemned by the Inquisition for sexual libertinage, the Free Spirit was less about doing what you want than about changing what you want. A revolution of desire must both precede and accompany a revolution of politics. The Free Spirit explored the outer limits adumbrated by Paul and the earliest Christians—"rejects and refuseniks, the very filth of the world," as Critchley glosses Paul, who produced a "political theology of the wretched of the earth." Reading Paul (properly) in an eschatological light, Critchley sketches what he calls the Christian meontology: "an account of things that are not" together with an account of things that are, but are passing. (Like, say, the American Empire.) Meontology is the historical and political analogue to dividualism: we are called and divided from the present, beckoned to "see the world from the standpoint of redemption." We are to live as if the new world already is, and as if this world were already not—not cutting deals with the transient and god-forsaking powers and principalities of the age. Living as a vanguard, Christians reside—or better, travel—within the radical insecurity of time, since the parousia could occur at any moment and render all our calculations foolish.
Critchley clearly believes that the contemporary left must recuperate something of this eschatological faith, but his political theology founders on his avowed dismissal—and misconstrual—of Christian ontology. "To be is to be in debt," he writes, and "original sin is the theological name for the essential ontological indebtedness of the self." There are two problems with this account of ontological "debt." If, as Critchley holds, there is no "transcendent reality," then to whom or what do I "owe" this "debt"? To the "infinite demand" of whom or what do I owe my faith and commitment? If Critchley's "dividualism" is right, I owe it to myself—but I suspect that any debt that I owe to myself will be a fairly easy tab to settle, with ever-negotiable terms of repayment to myself as my lenient creditor.
In other words, I'm sinful—and here Critchley makes another mistake. Sin does not name our "ontological indebtedness"—that would make existence sinful in itself, rendering the calamity of sin incomprehensible. Graeber comes closer to getting it right when he remarks that sin "is our presumption in thinking of ourselves as being in any sense an equivalent to Everything Else that Exists … so as to be able to conceive of such a debt in the first place." Sin is not only a refusal to acknowledge our "indebtedness"—it's the very idea of our indebtedness itself, the notion that our ultimate relation to God is that of dependence, not of loving friendship. It's not just that we desired to be independent of God; it's that we didn't trust God, didn't desire his friendship. So when Critchley writes that Christian love rests on a conviction of "the absolute difference between the human and the divine," he forgets the Incarnation, where the divine entered into the human, and the human was raised to the level of divinity. (Following Paul, the Church Fathers would elaborate the Incarnation in the doctrine of theosis, or the deification of humanity.)
Being Christian consists in realizing that we don't "owe" God a single thing; it's not as though, in giving, he's parted with something, and become poorer or more diminished because of it. I would argue that this perversion of our relationship with God lies at the root of the American Dream, the delusion that the endless pursuit of libertas and wealth is an offering to God. Turning God into a ruthless creditor, we pile up money, achievements, property, and empire to settle the debt. And when the money runs out, the achievements fade, the property depreciates, and the empire crumbles, we wail about losing his favor, as if he's found us unworthy of lending on account of a low cosmic credit score.
In his magnificent sermon, "Poverty and God," the late Father Herbert McCabe reminds us that God is our Creator, not our creditor, nor some demanding investor in our earthly pursuits. "God makes without becoming richer … it is only creation that gains by God's act." (As Henry Miller once put it, "God doesn't make a dime on the deal.") Thus, God is literally poor because he "has no possessions … nothing is or acts for the benefit of God." We can't "give back" to God, or win his love with an impeccable credit history. His delight is to be with his children, not to hound them like a rude collection agent; what parent thinks of a child's life as a loan to be repaid or a debt to be squared?
Come to think of it, the God of Jesus Christ has no business sense at all, and violates every canon of the Protestant Ethic. He pays the same wage for one hour of work as for ten, and recommends that we lend without thought of return. (Finance capital could not survive a day with this logic, which is one excellent reason to recommend it.) He's an appallingly lavish and undiscriminating spendthrift, sending his sunshine on the good and the evil. He has a soft spot for moochers and the undeserving poor: his Son was always inviting himself into people's homes, and never asking if the blind man deserved to be cured. How can you run a decent economy this way?
He calls us his friends, and friends share all things; as Thomas Merton knew, "to be a Christian is to be a communist." And divine friendship is to live without debts by "throwing ourselves away"—giving (not charging) according to our ability, and receiving according to our need. "To aim at poverty," McCabe said, "to grow up by living in friendship, is to imitate the life-giving poverty of God, to be godlike." By comparison, the American Dream is a shabby hallucination. As the American Empire totters and slides into history's graveyard of hubris, the glorious poverty of friendship will be our only hope of moral renewal. It's a model of another, very different empire, one innocent of creditors and debtors: the people's republic of heaven, the realm of divine love's utterly unearned, unarmed, and penniless dominion.
Eugene McCarraher is associate professor of humanities and history at Villanova University. He is completing The Enchantments of Mammon: Capitalism and the American Moral Imagination.
Stan Guthrie
The unpredictable impact of Jesus.
When Jesus said we will know a tree by its fruit, he was referring to individual morality (see Matt. 7:15-20; Luke 6:43-45). Good trees produce good fruit, evil trees produce evil fruit, and never the twain shall meet. Yet given the charge by the late Christopher Hitchens and other New Atheists that “religion poisons everything”—that religion, including and especially Christianity, is a bad tree—Christians are right to search for and highlight the good fruit that drops from the tree of Christian faith and nourishes not just individuals but also whole societies.
Who Is This Man? Study Guide: The Unpredictable Impact of the Inescapable Jesus
John Ortberg (Author)
Zondervan
96 pages
$15.11
Christianity, its defenders never tire of asserting, has given the world such good fruit as political freedom, education, the uplift of women, concern for the poor and disabled, and so on. Is there room on the shelf for yet another volume on the good fruits that have fallen from the Christian tree? John Ortberg, pastor of Menlo Park Presbyterian Church, thinks so, and Who Is This Man? The Unpredictable Impact of the Inescapable Jesus is the result. Ortberg seems to be targeting the friendly outsider, whom Anthony Burgess once described—in a review of the work of C. S. Lewis—as “the half-convinced, … the good man who would like to be a Christian but finds his intellect getting in the way.”
In differentiating his book from some of its worthy predecessors, Ortberg draws a thought-provoking contrast between the expectations of a neutral observer at the dismal conclusion of Jesus’ three years of public ministry and the outsized, global impact of the Carpenter from Nazareth today. “Normally when someone dies, their impact on the world immediately begins to recede,” Ortberg writes in his opening chapter, yet “Jesus inverted this normal human trajectory, as he did so many others. Jesus’ impact was greater a hundred years after his death than during his life; it was greater still after five hundred years; after a thousand years his legacy laid the foundation for much of Europe; after two thousand years he has more followers in more places than ever.”
And Ortberg continues to draw effective contrasts: ancient views on human disposability vs. Jesus’ teaching about the dignity and worth of the individual; ancient views on loving one’s friends and hating one’s enemies vs. Christ’s command to love your enemies; the Roman belief that religion must serve the state vs. the Lord’s recognition of two realms, for God and Caesar; the prevailing pagan view of marriage as mainly a social arrangement for the rich vs. Jesus’ statement that marriage is primarily a God thing; and so on.
Ortberg fleshes out these insightful contrasts in chapters on Jesus’ transforming influence on human dignity, art, marriage, treatment of enemies, and his incomparable example of humility. (His conjectural exposition of the Lord’s parallel treatment of the Jews and the pagans in the Decapolis on “the other side” of the lake in Mark 6 and 8 is fascinating.) He especially shines when describing how Jesus overturns—as at the money-changers’ tables—the world’s perceptions about greatness. Speaking of Christ’s self-giving love, Ortberg mentions an extra-biblical Jewish story of some disciples who “love their rabbi so much that they try to wash his feet. But there are no stories of a higher-status person washing the feet of a lower-status person. We never read of a rabbi washing his disciples’ feet. Except this rabbi, who by the way said he was the Messiah.” Who Is This Man? is sprinkled with such sparkling vignettes. Ortberg’s pithy explanations of the ancient world move the text along briskly.
In contrasting the teachings of Christ with the ancient mindset, however, sometimes Ortberg seems to give short shrift to the Lord’s Jewish outlook and heritage. The author, for example, contrasts the rampant, outward-focused religious hypocrisy of the ancient world with the demand by Jesus that people clean both the inside and outside of their religious cup—that our good-looking fruit cannot be rotten to the core. Yet Jesus’ demand for inner righteousness quite properly reflects the prior call of Israel and Judah’s prophets for religious integrity:
I hate, I despise your feasts,
and I take no delight in your solemn assemblies.
Even though you offer me your burnt offerings and grain offerings,
I will not accept them;
and the peace offerings of your fattened animals,
I will not look upon them.
Take away from me the noise of your songs;
to the melody of your harps I will not listen.
But let justice roll down like waters,
and righteousness like an ever-flowing stream.
—Amos 5:21-24, ESV
Jesus, after all, came not to abolish the Law but to fulfill it. Occasionally, in a worthy attempt to present the Lord as sui generis, Ortberg portrays him in ways that could be seen as disconnecting him from his Jewish roots. Perhaps a few more overt tips of Ortberg's cap to the Jewish tree would put the fruit of Jesus in better context.
A few more quibbles about this readable, engrossing work: Ortberg says next to nothing about the church's various black eyes—its pogroms, Crusades, witch trials, and the like. That's why I believe this book is geared toward the friendly outsider who might slip into a pew at Menlo Park Presbyterian—not toward a hypercritical New Atheist. The book also has a few more typos than I would like, and its informal and occasionally confusing use of sources produced in me a minor sense of frustration. When an author is citing "evidence that demands a verdict," it is crucial to make the sources of that evidence as accessible as possible.
I was glad that Pastor Ortberg closed Who Is This Man? with a call to the reader to “put what Jesus said to the test. Run an experiment. We all learn how to live from somebody: our parents, our peers, favorite writers, our appetites, our boss, or a vague combination of these. Try learning how to live from Jesus. Come and see. Whatever your ideas about religion might be, you can try being a student of Jesus. And that’s a very good place to start.”
In other words, taste the fruit.
Stan Guthrie is an editor at large for Christianity Today magazine and author of A Concise Guide to Bible Prophecy, coming this summer from Baker Books.
Michael Robbins
The crumpled tissues and loose change of the vernacular.
Contemporary American poetry has a crush on the crumpled tissues and loose change of the vernacular—idiom, platitude, cliché. Consider some recent book titles: Quick Question (John Ashbery); Nice Weather (Frederick Seidel); Just Saying (Rae Armantrout). These are examples of what the linguist Roman Jakobson called the phatic function of language—interjections, small talk—designed to check if the channel of communication is working. The Romantic-modernist revolution that opened poetry to "the language of real men" (Wordsworth) and words that people "actually say" (Pound) culminates in poems with lines like "Thanks, Ray, this is just what the doctor ordered" and "Don't come in here all bright-eyed and bushy-tailed."
These last are from Susan Wheeler's "The Maud Poems," the first of three elegiac sequences that make up her fifth collection, Meme—not that they're elegies in a traditional sense. "The Maud Poems," for instance, combine stock expressions favored by Wheeler's mother, distillations of a lifetime's worth of penny wisdom borrowed from other mouths, with lyric effusions of starkly different register: "she's already spilled the beans" is set against "an owl, recalcitrant / in its non-hooting state."
There is no condescension here—who among us, reduced to our most hackneyed formulae, would come off better? By highlighting precisely what was least individual, most communal, about her mother, Wheeler reminds us that it is our initiation into language that makes us human. Maud's idioms mark her as a person of a certain age, a particular temperament—"Well, they went bloody blue blazes through their last dollar before you could say boo"—while her daughter's idiom appropriates them for art. There is something of the impulse of Language poetry here ("Attest—ament, filament, adamant, keen"; its closest relative might be Lyn Hejinian's My Life).
The book's title refers to a pseudo-concept popularized by intellectual featherweight Richard Dawkins. A meme is supposedly the cultural analogue of a gene, transmitting cultural information, responsible for the spread of songs and catchphrases and jeggings. In Wheeler's lexicon, it represents the idea that language is a virus, and the wasting away of generations is how it transmits. The transmission isn't perfect; its basic reproduction rate varies. As David Shields puts it in How Literature Saved My Life, "Language is all we have to connect us, and it doesn't, not quite."
And Wheeler's got it bad: in Meme, I'm pleased to report, she's even bringing the limerick back:
I picked up a gal in a bar.
She said she'd ignore my cigar.
But when I was done
Relieving my gun
She said I was not up to par.
Wheeler has always been bouncier than most of her like-minded frenetic post-Language peers. Alert to what the toxic glow of Fruity Pebbles tells us about capitalism, she loves a good bubblegum jingle. In certain moods she's closer to Frederick Seidel than Susan Howe, penning cracked power ballads her parents might have danced to on a boardwalk in an alternate universe:
If I had a way to make you live with me again
—Even as a rabbit, or a wren (if all that's true)—
I wouldn't see at all that girl against the wall.
You've a right to cause me trouble now, I know.
What's troubling about these poems is their implication that language is a function of mourning, rather than the other way around—that, as Nietzsche said, "What we find words for is something already dead in our hearts." Or as Wheeler's Emersonian sensibility has it: "Want to go watch a kibitzer crumble / In the puke-green pour of the moon?" Her grief gets physical, and while it might evade adequate expression, it remains indexed to the motion of words:
I am tired. Today
I moved a book from its shelf
to the bed. The span
of its moving was vast.
A lifespan—a kind of book—is vast; it is a brief movement across a room.
Like Ashbery, an obvious influence, Wheeler kibitzes and chats while her informal colloquies crumble and deliquesce. And like Ashbery in recent years, Wheeler occasionally dips into a melancholy and pseudo-archaic register:
When will you go away,
oh piercing, piercing wind?
When will at rest I be again?
Oh sleep that will not rain on me,
oh sleep that nothing brings.
Oh, when will a face appear
that cancels full th'other?
Or will there be no more for me
of anodyne palaver?
Westron wynde, when wilt thou blow, the small raine down can raine. This kind of ventriloquism, not quite leached of irony but still evocative of less relentless pleasures, is voguish at the moment. Tom Pickard is, in my view, the master. In "Hawthorn," from Ballad of Jamie Allan, he writes:
there is a hawthorn on a hill
there is a hawthorn growing
it set its roots against the wind
the worrying wind that's blowing
its berries are red its blossom so white
I thought that it was snowing
It would be lovely to have more poems from Wheeler in this mode, or at least more that exploit her winning facility for rhyme, and perhaps fewer that till the exhausted soil of "experimental" fields:
Anabaptists
field field to
lip on a / in a daisy
pond muck
Curtailing assumptions such that
frog muck
panopticon the hazards
signage escalator mutant tut
After such escalator mutant tut, what forgiveness? I know it's bad form to say so, but fifty years after The Tennis Court Oath, this sort of thing is just possibly beginning to seem a bit rote. Certainly someone as lyrically capable—and as capable of lyrical subversion—as Wheeler needn't clutch so at the au courant. "It was the winter of the Z-pack" is startling in its sabotage of romantic anticipation. The lyric speaker of these poems gets "smashed by a Prius on a wild goose / chase" and still manages to affirm the sight of a "halo against the light."
But her openness to the possibilities of poetry regardless of tribal affiliation is one of Wheeler's virtues. "Such is the state of our poetry caught in my throat on its way / to my mouth, why not do everything," she writes toward the end of the book, before concluding: "but of course we do nothing." When third-hand experimentation is the norm, in life as in poetry, everything can look an awful lot like nothing. In these spring-loaded poems, Wheeler honors the less than everything that gets done in a life by infusing elegy with verve, anachronism with new-minted coin. "Let's make like we're not through," she writes, and it's all any of us can do—go on making things, making likenesses, as if we were not already finished, not already broken up, not already out the other side, like so many people we knew, like all the things they said.
Michael Robbins is the author of Alien vs. Predator (Penguin).
John H. McWhorter
What we learn from California Indian languages.
Books & Culture, April 24, 2013
When Europeans encountered what is today California, that area contained 78 different languages. Not "dialects" of some single "Indian tongue," or even just three or four, but 78 languages as mutually unintelligible as French, Greek, and Japanese.
Victor Golla's California Indian Languages is a lush and handy primer on what is known about all of these languages, but a volume like this is as much an elegy as a survey. Not a single name of any of these languages would ring a bell for laymen—Pomo? Miwok? Wiyot?—and this is partly because almost all of them will be extinct within another generation. As of 2010, for only a few dozen were there fluent speakers alive, and most of them are elderly. European languages, especially English, intruded upon Native Americans' linguistic repertoire starting centuries ago, and eventually were often forced upon them on the pain of physical abuse in schools. Today, English is the everyday language for almost all Native Americans.
Once even a single generation grows up without living in a language, it is almost inevitable that no generation ever will again. The ability to learn a language well ossifies after the teen years, as most of us can attest from personal experience. If we do manage to wangle a certain ability in a language as adults, the chances are remote that we would use this second language with the spontaneous intimacy of parents and children at home. The generation of, say, Miwoks who only know a few words and expressions of their parents' native language—or enough to manage a very basic conversation but that's all—will not pass even this severely limited ability on to the next generation.
Yes, there are programs seeking to revivify these fascinating languages. Some groups have classes. In others, there are master-apprentice programs, in which elders teach younger people the ancestral language within a home setting. One reads about such efforts in the media rather frequently nowadays, in the wake of various books over the past twenty years calling attention to how many of the world's languages are on the brink of disappearing. By one estimate, only 600 of the current 6,000 will exist in a hundred years.
Golla's book, unintentionally, suggests that happenstance aspects of linguistic culture in indigenous California made the task of reviving these languages even harder than it might be otherwise. Ironically, one of the factors was something that many would find rather romantic in itself: Native Americans in California considered languages to be spiritually bound to the areas they were spoken in. This seemingly innocuous aspect of cosmology had a chain-reaction impact on the future of the languages.
First, in this world it was considered culturally incorrect to speak any but the local language when in its territory. This meant that those who traveled to another place—and few did, given this strong sense of local rootedness—made use of interpreters rather than learning the other language themselves. Hence California Native American languages were rarely learned by adults.
As it happens, when adults learn a language in large numbers and there is no written standard or educational system enshrining its original form, it becomes less complex. English lacks the three genders of its sister language German because large numbers of Vikings invaded and settled the island starting in the 8th century and married local women. They exposed children to their approximate Old English to such an extent that this kind of English became the norm. I am writing, then, in a language descended from "bad" Old English.
That this kind of thing happened so rarely in indigenous California had a secondary effect: the languages tend to be more complex than anything an English speaker would imagine. Taking lessons in Yokuts, spoken in the southern Central Valley, you would learn that the past tense ending is -ish. So: pichiw is "grab," and pichiw-ish is "grabbed." But it turns out that grab is unusual in Yokuts: not just some but most Yokuts verbs are irregular. Add -ish to ushu "steal," and it morphs into osh-shu, with a new o instead of u and a double sh. Add -ish to toyokh "to doctor" and it's tuyikh-shi. You have to know precisely how each verb gets deformed—and that's just two verbs.
In Salinan over on the coast, there's no regular way to make a plural: every noun resembles the handful in English like men and geese. House is tam, houses: temhal. One dog: khuch. More of them: khosten—and this is how it is for all nouns. All of the California languages are like this in various ways. A grammatical description of any one of them is, in its way, as awesome as a Gothic cathedral.
But this means that, past childhood, learning these languages is really tough. English speakers find it hard enough to get past Spanish putting adjectives after nouns and marking its nouns with gender. But when we get to languages where instead of just saying go or put you have to also append one of several dozen suffixes indexing exactly what the goer or putter was like and the material nature of what was gone or put—e.g., in Karuk, putting on a glove requires a suffix marking that what happened was "in through a tubular space"—we are faced with a task few busy adults will be in a position to master.
Many years ago I was assigned to spend a few weeks helping speakers of one of the varieties of Pomo recover their language. We had a good time. However, here was a language in which to say "She didn't stay very long and came back," you have to phrase it as, roughly, "Long time it wasn't, she sat and back here-went," putting the verb at the end instead of in the middle and also mouthing sounds unfamiliar to speakers of English or even Spanish (or Russian or Chinese!). I couldn't help thinking that for them—or me—to actually breathe life into this language now surviving only on the page was not going to happen. And they knew it. One told me that she was just hoping to be able to know enough of the language that her descendants could feel a connection to the past and their place in the world.
This struck me as a healthy and achievable goal. Books like Golla's, demonstrating the amazing complexity of these languages, also show that we must alter our sense of what it is to "know" a language. When someone says they play the piano, we do not assume they play like Horowitz. In the same way, in a new world there will exist languages that thrive as abbreviations of what they once were, useable by modern adults who seek a cultural signpost rather than a daily vehicle of communication. Anecdotally, this is already effectively the case with revived languages such as Irish Gaelic and Maori. Their new speakers, using the languages in cultural activities and even in the media to an extent, nevertheless use English much more. They are rarely speaking the language in as full a form as their ancestors did. Yet no one would suppose that this invalidates the effort.
It is unlikely that 6,000 languages will continue to be passed down in fuller form than this, and they will often survive in an even more restricted sense: flash cards, expressions, songs, perhaps some strictly "101" grammar. The difficulty of mastering languages beyond childhood is but one reason why. Amidst globalization, a few widely spoken languages dominate in print, media, and popular music and are necessary to economic success. In this, they inevitably come to be associated with status and sophistication.
The educated Westerner, and especially the anthropologist or linguist, cherishes the indigenous as "authentic" and as a token of diversity in its modern definition. These are laudable perspectives in many ways but are not always shared by those to whom an indigenous language is simply the one they learned on their mother's knee, as ordinary as English is to us. Such a person may not feel especially authentic or diverse to themselves. Often they prioritize increasing their income and embracing the wider world—especially for their children.
The flourishing of 6,000 languages points us back to a much earlier stage of humankind in which all people were distributed in small groups like those in indigenous California, where the basic unit was the "tribelet" of a few hundred people. In the modern world, for better or for worse—and quite often worse—people are coming together. The only question would be why there wouldn't be fewer languages. However, if most of the world's languages cannot continue to be spoken, surely we must utilize the advantage of writing to document what once was.
The fashion is to justify this on the basis of the languages recording the unique worldviews of their speakers. But that notion is more fraught than often supposed. Say we celebrate Karuk for showing that its speakers were especially sensitive to things like tubular insertion. Is the American white kid somewhere in Indiana really less attuned to the snug feeling of getting his fingers into gloves than a Karuk kid in California once was, even if English doesn't have a suffix with that meaning?
Rather, languages randomly mark some things more than others. Call California Native Americans fascinatingly connected to space and direction, but then be prepared to call them blind to the difference between "the hill" and "a hill"—most Native American languages leave that particular distinction largely to context. We assume Native Americans felt that nuance as deeply as we do even if their grammars do not happen to explicitly mark it with words or suffixes. Just as obviously, for Yokuts to have almost no regular verbs says nothing about how its speakers process existence.
Dying languages should be documented not as psychological templates but as awesomely alternate randomnesses from what European languages happen to be. Golla's book is valuable also, then, in its diligent chronicle of the researchers over the centuries who have dedicated themselves to the task of simply getting on paper how these languages work. One of the most resonant photos in the book—from almost a hundred years ago—is of founding California language scholar Alfred Kroeber, longtime anthropology professor at the University of California at Berkeley, who got down the basic structure of dozens of California languages during his career. The snapshot, unusually for an era in which smiling for photographs was not yet common coin, captures a man grinning in the great outdoors—a man who clearly relished his mission.
And quite a mission it was. A language is a huge business. First there is the basic grammatical machinery of the kind described above—but then there are the wrinkles. In English, we say one fish, two fish, okay—so fish is irregular: no two fishes. But then what about the Catholic Feast of the Seven Fishes? Try explaining that to a foreigner—such as a Japanese speaker I know who, despite her very good English, mentioned an obese man whose "meat" was hanging over the edges of a chair. We natives would say "flesh"—but why? "Meat" makes perfect sense: that we happen to prefer to say "flesh" or "flab" is just serendipity. You can say I'm frying some eggs, or I'm frying up some eggs. They don't mean the same thing—note that the version with up implies that the eggs will be ready for you to eat soon. But if you were teaching someone English, how likely would you be to get to that nuance?
To speak a language in full is to have full control over little things like that, and it's the rare outsider whose grammatical research can get down to details this fine. Even when well documented with a grammatical description and a dictionary, a great deal of what a language was has still been lost, just as a cat's skeleton cannot tell us that cats hold their tails in the air and curl up when they sleep.
For reasons of this kind, some insist that all efforts be made to keep such languages actually spoken, as "living things" rather than archival displays. However, Golla's book gives ample coverage to revival efforts, and the sad fact is that there is not a single report of a language that was once dying but has now been successfully passed on to a new generation. For all but a few lucky cases where happenstance has kept the language alive to the present day, documentation may be the best we can do.
In this light something bears mentioning that linguists traditionally step around. It is often implied that a great diversity of languages being spoken in the world is beneficial in the same way that genetic diversity is within a population. This, however, is more stated than demonstrated. If there had only ever been one language among all of the world's peoples, and all people could converse wherever they went, how commonly would people have regretted that there weren't thousands of mutually unintelligible languages? All humans could converse—who would have deemed that a disadvantage? Or, who would have said that it would be better if all humans had some other language alongside the universal one that only some people knew?
That is, amidst the downsides of language loss—including that most of those that die will be the smaller, indigenous ones—there are some benefits to there being fewer. A statement like that is understandably difficult to embrace for people watching generations of their own people grow up without something as central to cultural identity as their own language, as well as for scholars and activists who are equally dismayed. However, at least we have the technology to get on record a good deal of what the lost languages were like, and California Indian Languages is a perfect introduction to this record as it currently exists for 78 vastly different ways of talking.
John H. McWhorter teaches at Columbia University and is a contributing editor of The New Republic. He is the author most recently of What Language Is (Gotham).
Alister Chapman
Cricket and the aftermath of colonialism.
In 1989, the Derbyshire county cricket team played a local high school. Think Red Sox against Greenfield High. The professionals batted first, scoring a formidable total in a game where each side would bat one inning. (In cricket, runs are more easily come by than in baseball, and a team bats until all but one of the players are out. Scores of 300 or more are common.) When the school came out to bat, all eyes were on Derbyshire’s Michael Holding, a fast bowler who played for the world-beating West Indies. In a game where there is no equivalent of the pitcher’s mound, fast bowlers will run in before they bowl, gathering pace for thirty yards or more before hurling the ball towards the heavily padded batsman. That the ball typically bounces before it reaches its target makes things even more interesting, with bowlers like Holding able to bowl the ball short and make it fly up toward their opponent’s head. Holding had mercy on the schoolboys, however, trotting in and sending down very playable balls.
It wasn’t long before Derbyshire had dismissed the school’s best batsmen, and the tail-enders were coming in. Last of all came the youngest and smallest of the lot. He was in the team as a bowler, but one very different from Michael Holding. For while Holding used speed and power to beat the batsmen, the boy used guile. He was what is known as a spin bowler. Spin bowlers take just a few steps before they release the ball, but a variety of grips on the ball and a flick of the wrist can make for surprising results once the ball is in the air and especially after it has hit the ground. Spin bowlers are cricket’s artists.
On that particular day, this young spin bowler had claimed the most famous scalp of his career: Michael Holding. But when the boy came in to bat, Holding saw who it was and decided to play with him. He walked all the way to the back fence and steamed in to bowl. The ball he released was, in the end, just as gentle as those he had been serving up all afternoon. But I doubt it was as much fun for the boy as it was for Holding.
Those who were there that day could see precisely what was going on. But, as in baseball, a lot of what happens in cricket happens so far from the crowd that they have little idea. The subtleties of spin are almost always lost. People can see that there is a contest between the one who throws the ball and the one who has to hit it, but that’s about all. Cricket is a different kind of spectator sport from, say, basketball. The game is important, but the experience of being there just as much so.
In England, cricket is a game for watching on a lazy afternoon. You can turn and chat to your neighbor without worrying that you’ll miss too much. Or you can sit quietly and watch the players run back and forth, white on green, and absorb the atmosphere. Nostalgia comes easily, with memories of the peaceful green spaces of youth. Prime Minister John Major once mobilized anxiety about European integration by painting a picture of an unchanging Britain of “long shadows on county grounds [and] warm beer.”
But where Michael Holding grew up, things were different. Cricket had been introduced to the Caribbean by English colonizers, who cast themselves as gentlemen but ran slave plantations. For the black population of Jamaica, Barbados, Trinidad, and Tobago, cricket offered further English discipline and English fair play. The play, however, was not always fair. Colonial clubs operated color bars. International teams had quotas for whites. With the dawn of international cricket, rules were made in England and sometimes for England. In Kingston, cricketing memories were sour as well as sweet. And as cricket spread throughout Britain’s empire it became a tool of local discrimination too, with princes in India lording it over the Indian game just as the English elites did back in England.
Eventually, the colonials beat the conquerors. First were the Australians: the most famous trophy in cricket is a tiny urn containing the ashes from a wicket ceremonially burned after the Australians won in London in 1882. But then it was the turn of India, South Africa, Pakistan, the West Indies, New Zealand, and Sri Lanka. Cricket became a matter of national pride. The rivalry between Pakistan and India is immense. In 1990, the sport caused ethnic and political tension in England when a government minister suggested a “cricket test” of national loyalty: immigrants who continued to support the team from their country of origin rather than England were to be deemed insufficiently British.
The global home of cricket is now the Indian subcontinent. London’s Guardian reported that a billion people watched the India-Pakistan semi-final in the 2011 World Cup. Tens of millions watch the Indian Premier League, which has adopted a shorter form of the game where matches last less than three hours. The crowds are not sipping tea and listening to birdsong. Advertisers compete to sponsor teams, with logos emblazoned on multi-colored shirts. Players make more per week than in any league except the NBA.
It is appropriate, then, that the latest important contribution to the literature on cricket comes from this part of the world. Shehan Karunatilaka is a Sri Lankan living in Singapore. The Legend of Pradeep Mathew, his first book, tells the story of a journalist’s desire to write a book on Pradeep Mathew, a fictional Sri Lankan spin bowler. The journalist, W. G. Karunasena, is an alcoholic. His work on the book is a race against liver failure.
Karunasena saw Mathew’s brilliance while reporting on Sri Lankan cricket, but was puzzled by how few games he had played for his country—and by his mysterious disappearance. The book is a quest to uncover the mystery. It is not a happy story. Mathew left the game and went underground after extorting money from a corrupt official—a nod in the direction of the gambling that has tarnished the image of the game, not least in the Indian subcontinent.
Just as sad is the ethnic prejudice that runs through the book, with Mathew facing opposition as a Tamil from the Sinhalese who dominate cricket in Sri Lanka. Karunatilaka highlights England’s sins—“England will spend centuries working off their colonial sins by performing miserably at sport”—but Sri Lankans don’t come off much better. Tamil terrorism forms part of the backdrop for the story.
Yet the book is also filled with humor and warmth. Karunasena’s friends are kind, quirky, and often witty. His wife is devoted, and even his estranged son returns home. Beauty comes from cricket. Karunasena loves his family and friends, but sport is less complicated and offers more moments of perfection and rapture. In a crude paragraph early in the book, Karunatilaka tells his readers that if they have never seen a cricket match or have and wish they hadn’t, “then this book is for you.” But people outside the cricketing commonwealth will find it hard to put the pieces together. References to Botham, Boycott, Bradman, Khan, Muralitharan, Tendulkar, and Warne will be lost on readers who didn’t grow up spending happy hours watching the game on TV. Anyone who enjoys sports, however, will be able to appreciate Karunatilaka’s delighted descriptions and diagrams of spin bowling. The floater, leg break, googly, flipper, armball, lissa, carrom flick, and (most special of all) the double bounce ball are all here, explained with awe and wonder. Mathew can behave like an idiot, but he bowls like a god.
And that, for Karunasena at least, is life. Answering the question of whether sport has any use or value, he says:
Of course there is little point to sports. But, at the risk of depressing you, let me add two more cents. There is little point to anything. In a thousand years, grass will have grown over all our cities. Nothing of anything will matter.
Left-arm spinners cannot unclog your drains, teach your children or cure disease. But once in a while, the very best of them will bowl a ball that will bring an entire nation to its feet. There may be no practical use in that, but there is most certainly value.
Or, as the dying journalist puts it near the end, “Unlike life, sport matters.” Karunasena becomes a picture of human existence. He gives up drink for a while, but then gives in. His book is unfinished; the mystery is solved only after his death.
Many will enjoy the rich picture of modern Sri Lanka that emerges in The Legend of Pradeep Mathew, despite its sad anthropology. But if you want to learn about cricket, you might do better to pick up the Duke University Press edition of C. L. R. James’ 1963 classic Beyond a Boundary, which comes with a three-page explanation of the game at the beginning. James was raised in Trinidad, where he experienced both the joy and the injustice of cricket. He excelled with ball and books, moving to England where he became a cricket correspondent for the Guardian and a left-wing social critic. Beyond a Boundary tells his story and that of West Indian cricket. There is much to lament. But there is hope, too, the final page relating the story of a quarter of a million Australians taking to the streets to bid farewell to a touring West Indian team. The vision of cricket as a force for international good was warped but not all wrong. The dying Karunasena recognized that, too.
Alister Chapman, associate professor of history at Westmont College, is the author of Godly Ambition: John Stott and the Evangelical Movement (Oxford Univ. Press).
Alan Jacobs
What is a “graphic novel”?
Most of my fellow teachers of literature know that students often think of almost any book-length narrative work as a "novel." A paper might begin, "Augustine writes in his novel The Confessions …" or "Homer's Iliad is a novel that …." This is not a major intellectual failing, of course, but it should remind us of the extent to which the novel has become so dominant a genre that common readers think of it simply as narrative, or lengthy narrative, itself. It should also be a reminder to teachers that time devoted to explaining the history and uses of literary genre is time well spent.
This particular inexactitude happens in non-academic settings too, and indeed a new version of it has recently arisen. Stephen Weiner's Faster Than a Speeding Bullet: The Rise of the Graphic Novel refers to Art Spiegelman's Maus—an account of the author's father's experience in Auschwitz—as a graphic novel. Similarly, we might consider Dotter of Her Father's Eyes, a recent book by Mary M. Talbot and Bryan Talbot. On Amazon.com you may find it in the "Graphic Novel" category; its Wikipedia page, at least as I write, begins "Dotter of Her Father's Eyes is a 2012 graphic novel"—but then goes on to add, in the next sentence, "It is part memoir, and part biography of Lucia Joyce, daughter of modernist writer James Joyce." That the second sentence is not seen to contradict the first one reminds us once more how the word "novel" is commonly used; but it also reveals the limitations of our descriptive and critical vocabulary for this new form. The genres of graphic narrative proliferate beyond our ability to account for them.
Major comic artists like Will Eisner—in his Comics and Sequential Art (1985) and Graphic Storytelling and Visual Narrative (1996)—and Scott McCloud—in his Understanding Comics (1993) and Making Comics (2006)—have done yeoman work in explaining, for a wide readership but especially for would-be artists, the visual languages of graphic storytelling. Those are superb and, for anyone seriously interested in the subject, indispensable books. There is also a burgeoning academic and critical literature on graphic storytelling, as exemplified in The Comics Studies Reader (2009), edited by Jeet Heer and Kent Worcester. But we still struggle, I think, to know how best to write about graphic narrative—especially in that odd genre called the "book review."
A reviewer will want to say something about the shape of the story: its plot and structure, the way it organizes time and event. The adequacy and appropriateness of the language should be considered, as should those of the artwork. By "appropriateness" I mean, to use an old word, decorum, fitness: Do the language and the images fit the shape of the story? If they do not, does that indecorum seem meant? Is the resulting tension productive, or not? And then the reviewer should ask how the language and artwork interact. (These questions will vary in inflection and emphasis depending on whether the narrative is the work of a single artist—as in the case of William Blake's illuminated poems, or the recent work of Alison Bechdel or Chris Ware—or the product of collaboration, as is the norm in the world of "comics" narrowly defined.)
The graphic narrative is, then, a device with many moving parts. Randall Jarrell once defined the novel as "a prose narrative of some length that has something wrong with it"—not so far from the implicit definition of my students—and a graphic narrative might be even more naturally inclined to error. And everything I have said so far applies to fictional narratives: if the tale graphically told is historical or biographical, as is increasingly common these days, then one must also ask whether it is faithful to what we know, from elsewhere, of the story it tells. Yet another way for a book to have something wrong with it.
All of this throat-clearing brings us back to Dotter of Her Father's Eyes. It is a double story, whose protagonists are Lucia Joyce, daughter of James Joyce, and Mary M. Talbot herself, in her early years as Mary Atherton, daughter of James S. Atherton, whose The Books at the Wake: A Study of Literary Allusions in James Joyce's Finnegans Wake (1959) was one of the first major studies of that most daunting of masterpieces. (Dotter of Her Father's Eyes takes its title from a phrase in the Wake.)
The first thing that must be said about Dotter is that it's one of the most visually rich and sophisticated graphic narratives I have ever seen. Bryan Talbot renders the scenes from Mary Atherton's childhood in sepia tones, though patches of bright red or green are used occasionally to heighten certain moments; the life of the Joyce family is rendered in muted and mostly dark blues; and Mary's emergence into adulthood from the oppressive authority of her father is signaled by the use of fully colored panels. Typewriter-style typefaces appear in conjunction with, often in contrast to, the familiar style of comic lettering; and scattered through the book are photographs, chiefly of documents pertaining to James Atherton. A particularly interesting example comes on the last page of the narrative: a weathered card on which is typed the chorus of the old ballad "Finnegan's Wake" lies atop Atherton's University of Liverpool registration form, which in turn covers much of the last page of Finnegans Wake, which begins: "sad and weary I go back to you, my cold father, my cold mad father, my cold mad feary father." Layers upon layers, both literally and metaphorically.
The James Atherton presented here was never mad, but he was often angry: he is most present in his outbursts, verbally and sometimes physically violent, and otherwise in the determination with which he cut himself off from his family in order to work without interruption. It is clear that Mary Talbot found her father "feary" indeed, and her difficulties with him, and her pleasure in the rare moments of his kindness, make up her whole account: her mother appears here only as a kind vagueness. In the parallel story, James Joyce is never angry but is often distant: he seems puzzled by his daughter on the rare occasions when he drifts into her life, typically to adjudicate hostilities between Lucia and her mother Nora. Nora is the story's chief villain, constantly mocking and belittling her daughter, while the great writer is comparatively kind and gentle—but utterly unsupportive of Lucia's love for dance: "Lucia, Lucia. Be content. It's enough if a woman can write a letter and carry an umbrella gracefully."
This is a plausible portrait of Joyce, who seems to have married Nora Barnacle at least in part because of her ordinariness, her lack of interest in his own intellectual pursuits, and who was not above making fun, in Ulysses, of Molly Bloom's mental shortcomings. ("She had interrogated constantly at varying intervals as to the correct method of writing the capital initial of the name of a city in Canada, Quebec …. In calculating the addenda of bills she frequently had recourse to digital aid.") What is less plausible is the dominant portrait in Dotter: Lucia Joyce as a seemingly normal and healthy young woman who is legitimately frustrated by one relatively minor issue—romantic rejection by her father's secretary, the young Samuel Beckett—and one major one—her parents' refusal to support her calling to be a dancer. Her family's decision to place her in a mental institution seems, then, not only cruel but utterly inexplicable.
Bryan Talbot draws Lucia—it is hard to overstress the importance of this—so that she never looks like a seriously disturbed person; even her anger seems moderate, until the very end, and any extremity of response is presented as fully understandable in light of her family's treatment of her. The text and the imagery of this book are at one in pressing us to believe that Lucia was simply a gifted young woman whose parents, one in hostility and one in indifference, frustrated her career and then, when that angered her, allowed her brother to toss her into a mental institution, where she remained until her death in 1982. This could be a true story but is on the face of it deeply unlikely, and the book needs to do more to justify its interpretation, since it portrays the whole Joyce family as monstrous.
The historical record that we possess suggests a more complicated and more interesting story. Lucia grew up in chaotic circumstances, with frequent moves to dodge creditors that led the family on a constant odyssey across Europe and through different social, economic, and linguistic environments. Precisely how this affected her, and what vulnerabilities were part of her makeup from birth, we simply don't know, but her behavior seems always to have been odd. As a child she was prone to long periods of staring off into space, and as a young adult was mercurial at best: jumping impulsively from one style of dance to another and from school to school to school, repeatedly snipping the telephone lines when she felt her father was getting too many calls and therefore too much attention, and, finally, throwing a chair at her mother—the event that precipitated her brother Giorgio's decision to institutionalize her.
For all his indifference to Lucia's love of dancing, for which he was surely culpable, Joyce never thought that she was anything other than an extraordinary person: "Whatever spark or gift I possess has been transmitted to Lucia and it has kindled a fire in her brain." He knew that she was troubled, but refused to believe that she was mentally ill—though once, when he heard that she had attended Mass, he exclaimed, "Now I know she is mad." Given his own calling, he was especially sensitive to what he discerned as a peculiar linguistic power in her: "She is a fantastic being, speaking a curious abbreviated language of her own," he wrote to his patron and publisher Harriet Weaver. "I understand it or most of it." To another correspondent he wrote, "Lucia has no trust in anyone except me, and she thinks nobody else understands a word of what she says." And he even trusted her own self-understanding, as he told Weaver: "Maybe I am an idiot but I attach the greatest importance to what Lucia says when she is talking about herself. Her intuitions are amazing."
Carol Loeb Shloss, in her 2003 biography of Lucia, portrays Joyce as effectively a parasite, sucking the linguistic life out of Lucia and claiming it as his own in Finnegans Wake. (Shloss sees even Lucia's dancing—visitors to the Joyce household noted that she would practice in the same room where her father was writing—as providing rhythmical inspiration for his intricate and fanciful book.) This account has been called into serious question and makes Joyce scarcely less monstrous than he would be if he had allowed his daughter to be institutionalized for no reason stronger than a temper tantrum. But as an explanation it draws clearly on what we know, in that it shows a father deeply involved in his daughter's life and acknowledges that Lucia was anything but the cheerfully normal person we see in Dotter of Her Father's Eyes.
It's hard not to feel that the Talbots' portrayal of the Joyce family is shaped to bring it closer to the life of the Atherton family. James Joyce appears here as a distant, bemused half-presence—a little like James Atherton minus the terrible temper—but in real life was immensely and irresistibly charming to family and friends alike, though wildly erratic. One cannot doubt that his work on Finnegans Wake led him to neglect his family, and that Lucia resented this; but when he was present to her, his love and concern were evident, and he tirelessly sought to get her the best possible treatment. One of his friends estimated that in the last few years of Joyce's life three-quarters of his income went to her care, and he wrote detailed accounts of her condition for her therapists and doctors. He seems even to have thought of the Wake as a kind of counterspell to undo Lucia's madness, if madness it was: patting the manuscript of the work in progress, he once said, "Sometimes I tell myself that when I leave this dark night, she too will be cured."
In the end, Dotter of Her Father's Eyes tells with extraordinary visual sophistication a tale that, structurally and verbally, doesn't quite hold together. That Mary Talbot's father was a Joycean; that he was a difficult and even abusive man; that he sometimes used Joycean language when speaking to her (borrowing a phrase from A Portrait of the Artist as a Young Man, he called her "baby tuckoo" when she was small); that she too studied dance for a while—these correspondences, while they clearly created in Talbot's mind a strong link with Lucia Joyce, do not seem to me strong enough to make the parallel tales meaningfully parallel. It's a highly promising experiment in the visual presentation of intertwined life stories, and as such may bear rich fruit in the future; but its simplification of the immensely strange and convoluted relationship between James Joyce and his gifted but wounded daughter is unfortunate.
In 1936, after Lucia had begun her long circuit of moving from hospital to hospital, James Joyce panicked at the thought of what might happen to his daughter if the coming war were to separate them. He wrote to friends to ask for their help—any kind of help: "If you were where she is and felt as she must, you would perhaps feel some hope if you felt that you were neither abandoned nor forgotten." (One word echoes repeatedly through his late letters about Lucia: "abandoned.") On the penultimate page of Finnegans Wake, a few lines before the passage about "my cold mad feary father," there are lines that some have read as words of hope for poor lost Lucia: "How glad you'll be I waked you! How well you'll feel! For ever after." But Lucia herself, in 1941, when she was told that her father had just died, replied, "That imbecile. What is he doing under the earth? When will he decide to leave? He's watching you all the time."
Alan Jacobs is professor of English at Wheaton College. His edition of Auden's For the Time Being is just out from Princeton University Press. He is the author most recently of The Pleasures of Reading in an Age of Distraction (Oxford Univ. Press) and a brief sequel to that book, published as a Kindle Single: Reverting to Type: A Reader's Story.
Philip Jenkins
A neglected aspect of the “other Inkling.”
Can you imagine suddenly discovering a trove of major new works by one of the greatest Christian authors of the last century, a worthy companion of C. S. Lewis and T. S. Eliot? In a sense, we actually can do this, and we don’t even need to go excavating for manuscripts lost in an attic or mis-catalogued in a university archive. The author in question is Charles Williams (1886-1945), well-known to many readers as an integral member of Oxford’s Inklings group, and a writer venerated by Lewis himself. (Tolkien was more dubious.) T. S. Eliot offered high praise to both the work and the man. Among other admirers, W. H. Auden saw Williams as a modern-day Anglican saint, to whom he gave much of the credit for his own conversion, while Rowan Williams has termed that earlier Williams “a deeply serious critic, a poet unafraid of major risks, and a theologian of rare creativity.” Some thoroughly secular critics have joined the chorus as well.
Williams exercised his influence through his seven great novels, his criticism, and his overtly theological writings—although theology to some degree informed everything he ever wrote. Some, including myself, care passionately about his poetry (I said “care about,” not “understand”). Amazingly, though, given his enduring reputation, Williams’ plays remain all but unknown and uncited, even by those who cherish his other work. Now, these plays are not “lost” in any Dead Sea Scroll sense: as recently as 2006, Regent College Publishing reissued his Collected Plays. But I have still heard erudite scholars who themselves advocate a Williams revival ask, seriously, “He wrote plays?” Indeed he did, and they amply repay reading, for their spiritual content as much as for their innovative dramatic qualities. Two at least—Thomas Cranmer of Canterbury and The House of the Octopus—demand recognition as modern Christian classics, and others are plausible candidates.
As a dramatist, Williams was a late bloomer. Although he was writing plays from his thirties, most were forgettable ephemera, and his most ambitious work suffered from his desire to reproduce Jacobean styles. In 1936, though, as Williams turned fifty, his play Thomas Cranmer of Canterbury was produced at the Canterbury Festival. This setting might have daunted a lesser artist, as the previous year’s main piece was Eliot’s Murder in the Cathedral, which raised astronomically high expectations. Thomas Cranmer, though, did not disappoint. Cranmer was after all a fascinating and complex figure, the guiding force in the Tudor Reformation of the English church and a founding father of Anglicanism. Yet when the Catholic Queen Mary came to the throne in 1553, Cranmer repeatedly showed himself willing to compromise with the new order. He signed multiple denials of Protestant doctrine before reasserting his principles, recanting the recantations, on the very day of his martyrdom. Famously, he thrust his hand into the fire moments before he was executed, condemning the instrument by which he had betrayed his beliefs.
Williams’ play is a superb retelling of the history of the English Reformation, but most of the interest focuses on Cranmer himself. Williams studies the journey of a soul en route to salvation despite every effort it can make to resist that outcome—what he calls “the hounding of a man into salvation.” This powerfully reflects the belief in the working of Grace, of the Holy Spirit, that is such a keystone of Williams’ theological framework.
We follow Cranmer along his way through the acerbic commentary of the Skeleton, Figura Rerum, one of the mysterious characters Williams repeatedly used to reveal the inner spiritual aspects of the drama. Although they appear on stage, they normally remain unseen by most or all of the human characters. But the Skeleton is much more than a chorus or commentary: rather, he represents both God’s plan and Cranmer’s destiny, “the delator of all things to their truth.” He is also a Christ-figure, who speaks in mordant and troubling adaptations of Jesus’ words from the Gospel of John: “You believe in God; believe also in me; I am the Judas who betrays men to God.” He is “Christ’s back,” and anything but a Comforter. The Skeleton, moreover, is given some of Williams’ finest poetry, lines that stir a vague recognition until you realize the intimate parallels to Eliot’s yet-unwritten Four Quartets.
Despite Cranmer’s timid and bookish nature, he is led to a courage that will mean both martyrdom and salvation, and will moreover advance God’s purpose in history. Ultimately, having lost everything and all hope, he throws himself on God’s will (in one of Williams’ many echoes of Kierkegaard). “Where is my God?” asks a despairing Cranmer. The Skeleton replies,
Where is your God?
When you have lost him at last you shall come into God.
…
When time and space withdraw, there is nothing left
But yourself and I; lose yourself, there is only I.
But even at this moment of total surrender, the play offers no easy solutions, and no simple hagiography. In the last moments, with death imminent, Cranmer even agrees to the Skeleton’s comment that “If the Pope had bid you live, you should have served him.” If he is to be a martyr, that decision is wholly in God’s hands: “Heaven is gracious / but few can draw safe deductions on its method.”
The success of Thomas Cranmer marked a shift in Williams’ interests to drama. Over the next nine years, up to his death in 1945, he would publish only two novels, as against eight other dramas that, together with Cranmer, would make up his Collected Plays. Like his friend Christopher Fry and other English dramatists of the age, Williams sought to revive older forms, including mystery plays and pageants, and some of these works are among his most accessible. Seed of Adam and The House by the Stable are Nativity plays, but as far removed from any standard church productions as we might expect given the author. In Seed, Adam also becomes Augustus, and the Three Kings represent different temptations to which fallen humanity has succumbed. In the pageant Judgement at Chelmsford, episodes from the span of Christian history provide a context for one very new and thoroughly modern diocese largely composed of suburban and industrial regions, and already (in 1939) facing the prospect of destruction by bombing. Yet Williams unites ancient and modern, placing Chelmsford firmly in the Christian story alongside Jerusalem and Antioch: all times are one before the Cross.
But if all the plays are worth rediscovering, it is his very last—The House of the Octopus (1945), a theologically daring story of an encounter with absolute evil—that best makes the case for his stature as a first-class Christian writer. Remarkably too, this play gains enormously in hindsight because of its exploration of ideas that seemed marginal to Christian thought at the time, but which have become pressing in an age of global church expansion.
The House of the Octopus offers a highly developed statement of Williams’ elaborate theological system, which we can trace especially through the earlier novels. His key beliefs involved what he termed substitution and exchange, in a sense that went well beyond the customary interpretation of Christ’s atonement. For Williams, human lives are so intertwined that one person can and must bear the burdens of others. We must, he thought, share mystically in one another’s lives in a way that reflects the different persons of the Trinity: they participate in what Williams called Co-inherence. Moreover, this mutual sharing and participation extends across Time—to which God is not subject—and after death. In his novel Descent Into Hell (1937), a woman agrees to bear the sufferings and terrors of a 16th-century ancestor as he faces martyrdom in the Protestant cause; he in turn perceives that loving aid as the voice of a divine messenger—and he might well be right in his understanding.
Stricter Protestants found Williams’ vision of the overlapping worlds of living and dead unacceptably Catholic, if not medieval, and accused him of heresy. Wasn’t he teaching a doctrine of Purgatory? Williams was perhaps taking to extremes the Catholic/Anglican doctrine of the communion of saints, but he was guided above all by one scriptural principle, expounded in Romans 8: the denial that anything in time and space can separate us from God’s love.
If some of Williams’ visionary ideas fitted poorly in the England of his day, they could still resonate in newer churches not grounded in Western traditions. House of the Octopus, for example, used a non-European setting to suggest how familiar dogmas might be reimagined in other cultures. The play is set on a Pacific island during an invasion by the Satanic empire of P’o-l’u. Although the situation strongly recalls the Japanese invasion of Western-ruled territories in World War II, and the resulting mass slaughter of Christian missionaries, Williams never intended to identify P’o-l’u with any earthly state. This is a spiritual drama, and the leading character is Lingua Coeli, “Heaven’s Tongue,” or the Flame, a representation of the Holy Spirit, who remains invisible to most of the characters throughout the play.
When alien forces occupy the island, they immediately demand the submission of the native people, who have recently become Christian converts. Terrified, one young woman, Alayu, denies her Christian faith and agrees to serve instead as “the lowest slave of P’o-l’u,” but even that apostasy does not save her life. And this is where the theological issue becomes acute. The Western missionary priest, Anthony, is convinced that Alayu’s last-minute denial has damned her eternally. The local people, however, realize that salvation absolutely has to be communal as well as individual:
We in these isles
Live in our people—no man’s life his own—
From birth and initiation. When our salvation
Came to us, it showed us no new mode—
Sir, dare you say so—of living to ourselves.
The Church is not many but the life of many
In ways of relation.
Wiser than Fr. Anthony, they also know that death itself is a permeable barrier, and so is the seemingly rigid structure of Time itself. As a native deacon asks, could not Alayu’s original baptism have swallowed up her later sin?
If God is outside Time, is it so certain
That we know which moments of time count with him,
And how?
Alayu is saved after her death, through the support of her people and the direct intervention of the Flame. Formerly an apostate, the dead Alayu becomes a saint interceding for the living. As the native believers tell the horrified missionary, “Her blood has mothered us in the Faith, as yours fathered.” When Anthony in turn faces his own torment and martyrdom—and the danger of apostasy—it is Alayu who will give him strength: “He will die your death and you fear his fright.” Fr. Anthony learns that the Spirit’s power is far larger than he has ever dared believe. And he also realizes how deceived he was to think he could have kept his status as paternalistic ruler of his native church indefinitely, among believers who had at least as much direct access to the Spirit as he did himself.
Although Williams was claiming no special knowledge of newer churches and missions, recent developments have given his work a strongly contemporary feel. The ideas he was exploring in 1945 have become influential in those rising churches, especially the emphasis on the power of ancestors and the utterly communal nature of belief. In such settings, the ancient doctrine of the communion of saints, the chain binding living and dead, acquires a whole new relevance, and a new set of challenges for churches that thought these issues settled long since.
Like his other writings, Charles Williams’ plays offer plenty to debate and to argue with—but his ideas are not lightly dismissed. Some of us have been wrestling with them for the better part of a lifetime.
Philip Jenkins is Distinguished Professor of History at Baylor University’s Institute for Studies of Religion. He is the author most recently of Laying Down the Sword: Why We Can’t Ignore the Bible’s Violent Verses (HarperOne).
Naomi Schaefer Riley
Embedded reporting from the Millennial front.
In The World Until Yesterday, Jared Diamond notes that some traditional societies let small children play with and even suck on sharp knives. Diamond is not saying we should “emulate all child-rearing practices of hunter-gatherers.” (That’s good to know.) But maybe kids would learn some valuable lessons if we gave them a little more responsibility.
Twentysomething: Why Do Young Adults Seem Stuck?
Robin Marantz Henig (Author), Samantha Henig (Author)
304 pages
$10.33
Mission Adulthood: How the 20-Somethings of Today Are Rewriting the Playbook on Work, Love, and Life
Hannah Seligson, Mia Chiaromonte, Audible Studios
Audible
April 10, 2013
Which raises an interesting question: At what age should kids be allowed to use sharp knives? My six-year-old was trying to slice ravioli with a butter knife the other day and nearly gave me a heart attack. Do kids demonstrate that they’re old enough to do something and then we let them do it, or are they simply old enough because they’re doing it? Maybe age, like gender, is now just a social construct. The idea that there is a right age to use sharp knives or walk yourself to the bus stop or (looking to the future here) get married or have kids or start a career or move out of your parents’ basem*nt is … so 20th century.
That, anyway, is what I began to think after reading a couple of recent books from the crop of treatises purporting to explain Generation Y—people born between 1978 and 2000. In Twentysomething: Why Do Young Adults Seem Stuck?, Robin Marantz Henig and her daughter Samantha Henig, both journalists, offer a pop psychology tour of the scientific literature about so-called “emerging adults.” They propose to compare millennials with their boomer counterparts on a variety of subjects to determine (at the end of each chapter) where “now is new” and where the behavior of the current crop of young adults is the “same as it ever was.”
So, for instance, in a chapter about the way twentysomethings treat their brains and bodies, the authors conclude that “people still smoke too much and drink too much.” Samantha Henig notes that when she got her first cavity at age 24, she realized that she had to start to worry about her body’s “decay.” Conclusion? “When young people are responsible for their own health, good habits go to hell.” This is not exactly profound stuff. And while it may seem useful to compare boomers and millennials because boomer parents are often the ones wondering where they went wrong, we may also want to ask whether a comparison with the boomers is setting the bar a little low.
The Henig women mostly rely on studies by various psychologists, but they also put together their own survey, which they sent to friends and which was answered by 127 people. They don’t bank on the results for any broad claims, but their anecdotes regularly draw from this survey. It becomes quickly clear that the Henigs’ friends are a lot like them. In a chapter about marriage, they quote Michael, a 38-year-old engineer, who proposed to his girlfriend, “a graduate student at NYU who was doing her doctoral research on gender norms in courtship.” In a chapter on career choices, the Henigs quote a 32-year-old woman who, prior to pursuing a career in architecture, tried out a variety of other jobs, including “small writing gigs, short-term consultancy, researching for professors, nannying.” In other words, a reader would be forgiven for concluding that Generation Y are all college graduates from wealthy families who can’t quite settle on the perfect mate.
To her credit, Hannah Seligson looks a little further afield for the millennials she profiles in Mission Adulthood. She includes a leader of college Republicans who grew up in the Mormon church, a veteran of the Iraq war who is also a single mother, and the gay son of Mexican immigrants whose attention-deficit problems are making it difficult for him to hold down a job and pay off his college loans. Though she acknowledges that everyone she picked has a college degree, there is still plenty of diversity here. Her “guiding question,” she says, is “could we have met them a generation ago?”
But Mission Adulthood is still ultimately a defense of this generation against people who find them “lazy” or “stunted” or “entitled.” Seligson concludes that these critics are the “victims of prejudice. They dislike and disdain what they see because they do not understand it.” Maybe. But Seligson’s case is not helped when we hear her subjects say “What people in the past might have gotten from church, I get from the Internet and Facebook. That is our religion.” Or when Seligson describes “startup depression,” the anxiety that comes AFTER you’ve succeeded in getting tens or hundreds of thousands of dollars in venture capital for your wacky new business idea.
Most people think that millennials look different because they do everything later. They get married later and have kids later and settle on a career later and move out later. Why? Well, primarily because they can. Thanks to our longer life-span and modern technology, they have a lot more time to examine their choices. Both of these books go on at length about the paradox of choice and the related phenomenon of decision fatigue. Twenty-somethings are faced with too many options and exhausted by having to pick among them.
But they don’t want to close anything off. The Henigs cite a fascinating study showing that this generation wants primarily to keep its options open. In a computer game devised by MIT psychologists, young adult players are given a certain number of “clicks” which they can use to “open doors” or—once inside a room—to get a small amount of money. After a few minutes of wandering, the players figure out which rooms have the most money. Theoretically they should simply keep clicking in those rooms. But sometimes, doors will start closing. Even players who know they will earn more from using their clicks inside a room will start to panic and click to keep the doors from closing. (The metaphor kind of hits you over the head. Annoyingly, though, either the researchers didn’t try this on older people for comparison or the authors of the book failed to report it.)
So twentysomethings like to keep their options open. Oddly, in fact, they seem to be examining them earlier than previous generations (at least in recent memory). In the West, anyway, our helicopter parenting means that kids are thinking about what college they will go to when they’re in elementary school. They are told from toddlerhood that they can be whatever they want when they grow up, and by the time they reach college they are paralyzed by the choice. The Henigs cite one young woman, a budding art historian who had not taken a science course in years, suddenly agonizing about whether to take a college course called “Spanish for Doctors.” “What if I want to become a doctor? Shouldn’t I keep that option open?”
Millennials also start engaging in sexual activity younger, which means that to the extent that they date, they will have 15 years of relationships with the opposite sex before they even think about marriage. Again, the options seem limitless. And finally, thanks to our early (and perhaps over-) diagnosis of psychological ills, kids start taking drugs like Adderall and Ritalin earlier and earlier. (The Henigs argue that prescription drugs are the new LSD. As of 2005, a quarter of a million college students were abusing prescription drugs.)
So what is the right age to get married and have children and buy a house and get a steady job and become financially independent? It may be hard to offer young adults a specific answer. But it is also possible to say that putting off decisions does not mean better results. The Henigs describe, for instance, the “slide” into marriage that happens when couples living together just decide it’s easier to simply tie the knot rather than beginning the process of breaking up, dividing the stuff, etc. Later, they may slide into divorce.
Barry Cooper, a British author, recently warned in Christianity Today against worshipping “the god of open options.” This god, Cooper said, “is a liar. He promises you that by keeping your options open, you can have everything and everyone. But in the end, you get nothing and no one.” Good advice at any age.
Naomi Schaefer Riley is the author most recently of ‘Til Faith Do Us Part: How Interfaith Marriage is Transforming America, just published by Oxford University Press.
Allen C. Guelzo
Slavery and the Constitution.
On the Fourth of March, 1861, Abraham Lincoln took the oath of office as the sixteenth president from Chief Justice Roger Brooke Taney—and managed, at the same time, to box the chief justice on the judicial ear. Or, at least, to draw a bright line of constitutional understanding between himself and the author of Dred Scott v. Sanford. “There is some difference of opinion,” Lincoln announced, about whether the Constitution’s fugitive slave clause “should be enforced by national or by state authority.” This distinction might be immaterial to the fugitive, but if Congress were to pass laws on the subject, ought not “all the safeguards of liberty known in civilized and humane jurisprudence to be introduced, so that a free man be not, in any case, surrendered as a slave?” And just to make sure that no one assumed that he was merely calling for more accurate identification of suspects, Lincoln asked whether any such legislation should also explicitly “provide by law for the enforcement of that clause in the Constitution which guaranties that ‘The citizens of each State shall be entitled to all privileges and immunities of citizens in the several States?’”
A Slaveholders' Union: Slavery, Politics, and the Constitution in the Early American Republic
George William Van Cleve (Author)
University of Chicago Press
408 pages
$96.04
Slavery's Constitution: From Revolution to Ratification
David Waldstreicher (Author)
Hill and Wang
208 pages
$2.99
To the naked eye, there seems nothing particularly momentous in that question. But there was. The “free man” who should not be mistaken and “surrendered as a slave” could only be a free black man, otherwise it would have been impossible to mistake him for a slave in the first place. And Lincoln was here suggesting that congressional legislation should protect that free black man because, under the Constitution, “citizens of each State” are entitled to the procedural protections of the Constitution’s privileges and immunities clause. Citizens. Only four years before, the chief justice sitting behind Lincoln had pronounced in Dred Scott v. Sanford that the Constitution did not and could not recognize black people as citizens, whether they were free or slave. Now, on almost the anniversary of Dred Scott, Lincoln threw Taney’s own words back at him.
But he did more. Everything which was, at that moment, dividing the republic and threatening to tip it into civil war was, Lincoln said, strictly an argument about constitutional theories, not about the things the Constitution actually said. “Shall fugitives from labor be surrendered by national or by State authority?” Lincoln asked. “The Constitution does not expressly say. May Congress prohibit slavery in the territories? The Constitution does not expressly say. Must Congress protect slavery in the territories? The Constitution does not expressly say.” What reasonable American would want to smash the Union when the grounds of disagreement hung on theories? No one, presumably—unless of course the chief justice had, four years before, proclaimed that the Constitution did say, expressly, that Congress could not prohibit slavery in the territories, and that Congress really is obliged to protect it there because the Constitution “distinctly and expressly” affirms the “right of property in a slave.” If the Constitution recognizes the “right of property in a slave,” then that property has no rights of its own, and the owners of that property have every ground on which to demand its protection and sustenance by the federal government.
But what Taney announced as fact, Lincoln relegated to opinion. In 1858, during his celebrated debates with Stephen A. Douglas, Lincoln flatly declared that “the right of property in a slave is not distinctly and expressly affirmed in the Constitution.” Now, Lincoln was president, and the Constitution he had sworn to preserve, protect, and defend would be understood by him to offer no national recognition to slavery at all. From that seed, you might say, the Civil War sprang.
That Lincoln revered the Constitution is not really to say anything different from what almost every other American of his generation would have said about it. “No slight occasion should tempt us to touch it,” Lincoln warned in 1848. “Better, rather, habituate ourselves to think of it, as unalterable. It can scarcely be made better than it is …. The men who made it, have done their work, and have passed away. Who shall improve, on what they did?” Only the most radical of abolitionists were inclined to regard it, in William Lloyd Garrison’s kindling terms, as “an infamous bargain … a covenant with death and an agreement with hell” because it seemed to offer shelter to chattel slavery. But by the 1880s, there were many more voices questioning the untouchable perfection of the Constitution, and, unlike Garrison’s, those voices bore eerie parallels to others in that same decade which were beginning to question the untouchable perfection of the Bible. “The Constitution of the United States had been made under the dominion of the Newtonian Theory,” wrote Woodrow Wilson, whose 1885 PhD dissertation, Congressional Government: A Study in American Politics, frankly questioned the wisdom of a government of separated powers. “The trouble with the theory is that government is not a machine, but a living thing. It falls, not under the theory of the universe, but under the theory of organic life. It is accountable to Darwin, not to Newton.”[1] The 18th century had no sense of historical progression, development, and evolution, Wilson objected; it believed that certain fixed truths were available to be discovered, whether in physics or in government. Wilson’s century thought it knew better, and understood that the intricately balanced mechanisms of the U.S. Constitution were like one of David Rittenhouse’s orreries, and needed to be superseded by something more efficient, supple, and responsive to changes in the national environment.
Wilson didn’t get much of what he wanted (thanks in large measure to those unresponsive congressional mechanisms), but the Progressives who followed Wilson were undeterred by his failures, and they added a new sting to the Progressive impatience with the Constitution by holding up its embrace of slavery as the prime exhibit of the Constitution’s embarrassing backwardness. This same complaint was repeated very recently by Louis Michael Seidman, asking (in The New York Times) why we should continue to be guided by a document written by “a group of white propertied men who have been dead for two centuries, knew nothing of our present situation, acted illegally under existing law and thought it was fine to own slaves.” And it is repeated again in two extraordinary and thorough pieces of constitutional history by David Waldstreicher and George William Van Cleve, both assuring us in no uncertain voice that the Constitution was not only designed to accommodate slavery, but “simultaneously evades, legalizes and calibrates slavery.” If you could desire a telling historical reason to (as Seidman’s New York Times op-ed urged) “give up on the Constitution,” Waldstreicher and Van Cleve offer it as luxuriantly dressed as you could wish.[2]
The Garrisonians were the first to assault the Constitution as a pro-slavery document. “There should be one united shout of ‘No Union with Slaveholders, religiously or politically!’” declared Garrison in 1855, and one particularly good sampling of that disparagement comes from the pen of Frederick Douglass in 1849. Reacting to the insistence of Gerrit Smith and the Liberty Party that the Constitution “is not a pro-slavery document,” Douglass replied that it certainly was, and that it “was made in view of the existence of slavery, and in a manner well calculated to aid and strengthen that heaven-daring crime.” The proof was in the text of the Constitution itself:
• The Three-fifths Clause (Art. 1, sec. 2) gave the slave states disproportionate power in the House of Representatives.
• The authorization extended to Congress “to suppress insurrections” (Art. 1, sec. 8) had no other purpose than suppressing slave insurrections, as did the added pledge (in Art. 4, sec. 4) to protect the states “against Domestic violence.”
• The permission given to Congress to end the slave trade after twenty years (Art. 1, sec. 9) was a “full, complete and broad sanction of the slave trade.”
• The clause requiring the rendition of any “person held to service or labor in one State, escaping into another,” labeled escape from slavery a federal crime (Art. 4, sec. 2).
This made the Constitution “radically and essentially pro-slavery, in fact as well as in its tendency.”[3]
In more recent times, these arguments were taken up by Leon Higginbotham, Sanford Levinson, Thurgood Marshall, and Mark Graber, mostly as a way of substantiating their broader annoyance with the Constitution’s intractability to progressive policy changes. But in no place was the “pro-slavery Constitution” accusation laid down in more fiery detail than by Paul Finkelman, in his provocative Slavery and the Founders (1996; 2nd ed., 2001), where Finkelman not only embraced Douglass’ bill of indictment but added a few more counts of his own. It had been part and parcel of the New Social History in the 1970s and ’80s that slavery and race were the original sin of the American experiment, and that their presence belied any exceptionalist claims that the American founding represented a triumph for human liberty, undimmed by human tears. And in the long view, that was Finkelman’s point, too: Slavery and the Founders was written with the “belief that slavery was a central issue of the American founding,” and in no way creditable to that founding. Not only were the Three-fifths Clause, fugitive rendition, and the suppression of insurrections proof of the pro-slavery intentions of the Founders, but slavery also enjoyed special protection from the Constitution’s ban on export taxes (which gave a green light to the international marketing of slave-grown products), the dependence of direct taxation and the Electoral College on the Three-fifths Clause, and the limitation of civil suits and privileges-and-immunities to “citizens” (which could only be white people). “A careful reading of the Constitution reveals that the Garrisonians were correct: the national compact did favor slavery,” concluded Finkelman. “No one who attended the Philadelphia Convention could have believed that slavery was ‘temporary.’”
Finkelman lays the groundwork for both Waldstreicher and Van Cleve (Finkelman is cited more often in A Slaveholders’ Union than any other modern historian), who in turn raise Finkelman’s claims for a pro-slavery Constitution to yet higher degrees. Waldstreicher’s is the shorter of the two books, and more in the nature of a general summation of the neo-Garrisonian viewpoint. Like Finkelman, Waldstreicher believes that the Founders created a national “compact” which consciously sustained slavery (six out of the Constitution’s 84 clauses, he notes, bear on aspects of slavery), and allowed slavery’s interests to prevail in the federal Congress (since the house most responsible for fiscal matters was the place where the Three-fifths Clause brought its greatest weight to bear). But more than Finkelman, Waldstreicher does not believe that this was merely the result of paradox or political log-rolling in the Constitutional Convention. The Revolution itself was caused by the panic slaveholders felt over the implications of the 1772 Somerset decision in the Court of King’s Bench, which rendered slavery a legal impossibility in England. By denying slavery legal standing anywhere in the empire outside the colonies, Somerset alarmed American slaveholders, who were thus rendered instant converts to a revolution against imperial authority. In turn, the Constitution went out of its way to reassure American slaveholders, since it actually made slavery harder to get rid of than before.
Van Cleve’s book is less polemical than Waldstreicher’s, but longer and more methodical. In his reading, both the Revolution and the Constitution acted to strengthen slavery, either by sanctioning the colonial status quo on slave labor or by providing new protections for its expansion. Like Waldstreicher, Van Cleve believes that Somerset profoundly frightened American slaveholders—20 percent of all American wealth, Van Cleve adds, was invested in slaves—and the Constitutional Convention went out of its way to secure slavery’s place in American life. Not only did the Three-fifths Clause and the fugitive rendition provisions side entirely with pro-slavery forces, but the state delegations to the convention were given no instructions to seek an end to slavery, and none of the ratification debates (including the Federalist Papers) made slavery an issue. Southerners who took up ratification as their cause in the Southern ratifying conventions actually campaigned for ratification precisely “because the Constitution did not authorize the federal government to take action against it.” Nor does Van Cleve have any difficulty finding Southerners quite candid in their belief that “without security for their slave property … the union never would have been completed.” In that light, Chief Justice Taney’s dictum that the Constitution explicitly recognized slaves as property was merely the final corroboration of the Constitution’s lethal pro-slavery tilt.
Yet, in all of these assertions, from Douglass to Waldstreicher and Van Cleve, there creeps in an air of special pleading, an Eeyore-ish determination to read the Constitutional glass as perpetually half-empty, if not empty altogether. Van Cleve, for instance, always takes the slaveholders’ word as the sober statement of Constitutional fact, while anti-slavery observers are dismissed as wrong when they see slavery being diminished by the Constitution. And the notion that the Constitution’s provisions for the termination of the slave trade can be read as “protecting the interests of slave traders and those of states that wanted to import slaves” must crinkle the brow of any disinterested reader. Above all, this pleading has to engender the puzzled question of how a regime based on such a pro-slavery Constitution could, within the span of a single lifetime, bring to the east front of the Capitol a president who could deny that the Constitution gave slavery any sanction at all.
Waldstreicher offers an explanatory hint in Slavery’s Constitution by suggesting that anti-slavery forces simply abandoned the Constitution and appealed instead to a “higher law,” in the form of a natural-law right to liberty. “Antislavery survived the post-Revolutionary backlash epitomized by the Constitution because some Americans refused to believe that the Constitution, or even America, was the ultimate source of their cherished ideals.” What gets lost in Waldstreicher’s description of the “higher law” appeal is how much that appeal was based on the contention that the Constitution itself embodied natural law. That made the Constitution susceptible only of a reading which (like Somerset) made freedom the default position of national law, and limited the legalization of slavery to local or state law. James Oakes, in his marvelous new history of emancipation, Freedom National: The Destruction of Slavery in the United States, 1861-1865 (2013), reads the Constitutional glass as not just half-full but running over with anti-slavery assumptions: “The delegates at the Constitutional Convention … were certain the system was dying anyway,” based on their reading of natural-law economics and natural-law moral philosophy, and “concluded that the progress of antislavery sentiment was steady and irreversible.” Slavery was deliberately crowded off the national table by the Constitution. Why, after all, asked anti-slavery voices at the time of the Missouri debates in 1820, had the Constitution permitted the Northwest Ordinance to stay in effect, or allowed the banning of the slave trade, if slavery was constitutionally protected property? Why did the Constitution turn such linguistic somersaults to avoid actually using the word slave? Why did the fugitive slave provisions never specify that it was the federal government’s responsibility to render up fugitives? And in arguments made by both John Quincy Adams (in his plea for the release of the Amistad rebels) and William Henry Seward, the Constitution was presented as a component of the law of nations, which is itself (according to the guiding lights of international legal theory, Vattel, Grotius, and Wheaton) based on natural law.
As if to confirm the suspicion that Finkelman et al. were arguing for a conclusion rather than making a case, Don Fehrenbacher’s last book, The Slaveholding Republic: An Account of the United States Government’s Relations to Slavery (2001), set out a bristling phalanx of reasons why the Constitution had never been designed as a pro-slavery document. The convention itself, Fehrenbacher contended, was rent by bitter debates over slavery and its status, and the resulting document represented, not a triumph of a slaveholding consensus, but the hard-won survival of an institution under heavy attack. The members of the convention were, in the end, content to curtail slavery rather than exterminate it, partly because they were a Constitutional Convention charged with keeping the American union together rather than an anti-slavery revival meeting calling sinners to repentance, and partly because they were confident that measures like the prospective ban on the slave trade would hasten the death of slavery on their own. The wonder is that slavery managed to survive as long as it did before the anti-slavery assumptions of the Constitution forced slaveholders into rebellion. The proof, for Fehrenbacher, was in the pudding of secession: the secessionists promptly wrote a new constitution, defining, legalizing, and extending slavery, “in stark contrast to the Constitution of 1787 that had embarrassingly used euphemistic language to mask the existence of the tyrannical institution in a land presumably dedicated to liberty.”
Fehrenbacher’s chief labor was to show how, if the Constitution cast that chilly an eye on slavery, southerners managed to defend and extend the institution for so long; his answer was politics. Southerners adeptly seized control of the executive branch from the very first, and spun the helm of the federal government hard over in their favor. It was only in 1860, when they decisively lost that control, that the pretense of a pro-slavery Constitution was abandoned, along with the Union itself. All through the decades between 1790 and 1860, anti-slavery voices kept up a steady drumbeat of resistance to the “pro-slavery Constitution,” over and over again declaring that the Constitution was a natural-law document whose baseline was freedom. Lemuel Shaw’s decision in Commonwealth v. Aves (1836) saw no constitutional right of property in slaves.[4] Even a Southern court, in Rankin v. Lydia (1820), held that “freedom is the natural right of man,” and William Jay justified the revolt of the slaves on the Creole in 1841 on the grounds that, as soon as the Creole cleared Virginia waters, it came under the control of the law of the sea, which was in turn a subsection of natural law.[5] And then there was Lincoln, who in his breakthrough speech at the Cooper Institute in February 1860 announced that the Constitution, far from recognizing slavery, actually empowered Congress to vaporize it the moment slavery put a foot outside the states where it had been legalized. “An inspection of the Constitution will show that the right of property in a slave is not ‘distinctly and expressly affirmed’ in it.”
In the murk of historical interpretation, whether the Constitution was pro-slavery or anti-slavery will depend very much on whether someone is inclined to grant more authority to Lincoln than Douglass, to Fehrenbacher than Finkelman, to Lemuel Shaw than Roger B. Taney. Which means, in turn, that the deciding factor is likely to be buried a priori in whether one can be satisfied that an 18th-century Newtonian document should still be allowed to prevail in a political world suffused with 19th-century evolutionary assumptions about adaptation to changing mores and social conditions. That decision will be aggravated by the current furor over “gridlock” and “obstruction” in the federal government, and whether one branch of government has the privilege of slowing the rest of the government’s reaction times to an unscientific crawl. If “efficiency” (the demon-god of Wilsonian Progressives) or problem-solving or “responsiveness” is the prime desideratum in government, then the Constitution will surely appear as an outdated recipe for chronic political constipation. Hence, Seidman’s complaint that “Our obsession with the Constitution has saddled us with a dysfunctional political system” and “kept us from debating the merits of divisive issues and inflamed our public discourse. Instead of arguing about what is to be done, we argue about what James Madison might have wanted done 225 years ago.” And the temptation to tack on slavery as proof of the Constitution’s immovability will probably be irresistible—as Seidman attests. Never mind that this evolutionary times-are-not-now-as-they-were argument is ironically what Missouri Chief Justice William Scott used in denying Dred Scott’s appeal in the original hearing of the case.
The problem with Madison is not that his version of government is 225 years old, or that it is Newtonian or mechanistic. It is that Madison and his fellow delegates in Philadelphia did not care a wet slap for efficiency in government. They wanted liberty, and anything which slowed the pace of governmental decision-making, or which exhausted the power of one branch in argument with another, or which made government as safely unresponsive as it could be short of inanition, was by their lights precisely what a republic of liberty should prize (even if that guaranteed a large measure of inanition about slavery). What we want the Constitution to be has always had a peculiar way of determining what we think the Constitution was, and is.
Allen C. Guelzo is the Henry R. Luce Professor of the Civil War Era and a professor of history at Gettysburg College. He is the author most recently of Fateful Lightning: A New History of the Civil War and Reconstruction (Oxford Univ. Press) and Gettysburg: The Last Invasion, just published by Knopf.
1. Wilson, Constitutional Government in the United States (Columbia Univ. Press, 1908), p. 56; Terri Bimes and Stephen Skowronek, “Woodrow Wilson’s Critique of Popular Leadership: Reassessing the Modern-Traditional Divide in Presidential History,” Polity, Vol. 29 (Fall 1996), pp. 27-63.
2. Louis Michael Seidman, “Let’s Give Up on the Constitution,” The New York Times (December 30, 2012).
3. Douglass, “The Constitution and Slavery,” The North Star (March 16, 1849).
4. Shaw, in Commonwealth v. Aves, held that “by the general and now well established law of this Commonwealth, bond slavery cannot exist, because it is contrary to natural right, and repugnant to numerous provisions of the constitution and laws, designed to secure the liberty and personal rights of all persons within its limits and entitled to the protection of the laws.” See Reports of Cases Argued and Determined in the Supreme Judicial Court of Massachusetts, ed. Octavius Pickering (Boston, 1840), p. 219.
5. Robert M. Cover, Justice Accused: Antislavery and the Judicial Process (Yale Univ. Press, 1975), pp. 95-96; Stephen P. Budney, William Jay: Abolitionist and Anticolonialist (Praeger, 2005), pp. 66-67.
Laurance Wieder
Messianic unease.
Tradition holds that a messiah is born in every generation. “Yes,” the talmudic sages say. “Let the Messiah come, but not in our time.”
Since 2001, at least seven books have been published concerning three Jewish messiahs of the past three-and-a-half centuries: Sabbatai Sevi, Abraham Miguel Cardozo, and Jacob Frank. All such studies and historical revisions are written in the shadows or on the shoulders of Gershom Scholem’s landmark 20th-century history, Sabbatai Sevi: The Mystical Messiah. That work of scholarship made respectable again a mystical tradition based largely on the Zohar. Scholem established that Moses de Leon, a late-medieval Spanish Jew, wrote the Zohar. Among learned Jews, that book enjoyed a status equal to the Talmud, a respect undone by the Messiahship of Sabbatai Sevi. Following the English and French Revolutions, modern Jewry sought enlightenment before redemption, citizenship before Jerusalem. Philosophy became preferable to prophecy.
Born in Turkey in 1626, Sabbatai Sevi spent much of his adult life oscillating between a luminous conviction that he was the long-awaited Messiah son of David, and profound inner darkness. Hoping to be cured of his delusive sickness, in 1665 Sevi sought out a young healer of souls, the kabbalist-prophet Nathan of Gaza. Alas, instead of offering release, Nathan proclaimed the manic-depressive rabbi the Redeemer.
That same year, the newly anointed Messiah delivered a public address (reproduced by Ada Rapoport-Albert in Women and the Messianic Heresy of Sabbatai Zevi) on what the Messiah means for the other half of humanity:
As for you wretched women, great is your misery, for on Eve’s account you suffer agonies in childbirth. What is more, you are in bondage to your husbands and can do nothing great or small without their consent …. Give thanks to God, then, that I have come to the world to redeem you from all your sufferings, to liberate you and make you as happy as your husbands, for I have come to annul the sin of Adam.
Jewry flocked to their new hope. Despite his strange deeds and words, despite the opposition of some rabbis who rejected the Messiah, an army of believers followed Sabbatai Sevi’s call to march with him to Constantinople. There, he said, he would receive the earthly crown of empire from the Turkish sultan, and so bring on the new order. In 1666, a year of astrological portent, the Mystical Messiah was arrested outside the Ottoman capital. Called to an audience before the Grand Turk, Sabbatai took the turban.
The Jewish Messiah’s apostasy to Islam was enough to disillusion many of those who thought the promised time had arrived. Not to mention those who always doubted. In November 1666, Rabbi Joseph Halevi sent a letter to Jacob Sasportas of Hamburg. He observed that a full year had passed since dispatches from Alexandria, from Egypt, from the Holy Land, from Syria and from all Asia announced that redemption was at hand. “This good news was brought us by a brainless adolescent from Gaza, Nathan the Lying Prophet,” Halevi wrote, “who, not satisfied with proclaiming himself a prophet, went on to anoint king of Israel a coarse, malignant lunatic whose Jewish name used to be Sabbatai Sevi.”
For those able to weather their disappointment, Nathan of Gaza justified Sabbatai Sevi’s abandonment of faith on kabbalistic grounds. The Torah is now void. Words have no meaning, except what we say they mean. Bad is now good, good evil. The Messiah must sink low in order to mount high. And so forth. Some of the faithful remained faithful.
Six years after Sabbatai Sevi’s death in 1676 in Alkum, Sabbataian devotee Joseph Karillo and a companion called on Abraham Miguel Cardozo in Constantinople. Cardozo was a Catholic convert to Judaism who had accepted the Turkish-born messiah and claimed a messianic role of his own.
The companion recounted for Cardozo their final audience with Sabbatai Sevi. “After the New Year [Rosh Hashanah] …, he took us out to the seashore with him and said to us: ‘Each of you go back home. How long will you adhere to me? Until you see the rock that is on the seashore, perhaps?’ And we had no idea what he was talking about. So we left Alkum, and he died on the Day of Atonement [Yom Kippur], early in the morning.” A Sabbataian sect practices its version of Judaized Islam in Smyrna to this day.
Moshe Idel, a contemporary Israeli scholar of kabbalism, interprets history as a kind of sacred text, written in glyphs or emblems as well as in narrative. This method follows a path trod by the 18th-century Neapolitan philosopher of history Giambattista Vico, by the 20th-century essayist Walter Benjamin, and by Benjamin’s friend Gershom Scholem. They assume that events, like words, conceal meaning. Thus, the fact that Sabbatai Sevi was born in 1626 on the Ninth of Av, the day of the year on which both Temples fell, is a sign, and not only to the orthodox.
As Abraham Cardozo put it: “What save sadness did Sabbatai, who was born on a funeral day, predict? He was unfortunate in his very name, since, in the Hebrew language, Saturn is called Sabbatai, a sad and malignant star.”
Or as Gershom Scholem wrote in a poem from 1933:
In days of old all roads somehow led
To God and to his name.
We are not devout. We remain in the Profane,
And where ‘God’ once stood, [now] Melancholy stands.
Abraham Miguel Cardozo was born in 1630, in Portugal. His family were Marranos, or secret Jews. Raised Catholic and educated at a university in Spain, Cardozo left home after graduation to join his brother in Venice. There, he converted (back) to Judaism. A fervent scholar of Judaica, Cardozo identified himself with the Messiah son of Joseph, a figure who traditionally heralded rather than followed the Messiah from the line of David.
Cardozo accepted the Mystical Messiah, but did not follow Sevi into Islam. Indeed, he neither wanted nor expected Sabbatai to bring the Jews back to the Holy Land. “When the Redeemer comes,” Cardozo wrote, “the Jews will still be living among the Gentiles even after their salvation is accomplished. But they will not be dead men, as they had been previously.” As in the 19th-century dream of Enlightenment, through redemption Jews will experience happiness, and enjoy dignity and honor.
Cardozo’s dissent, like Nathan of Gaza’s, was rooted in the Zohar. But the Sephardic exile’s vision faced forward to William Blake’s irascible God and The Marriage of Heaven and Hell, rather than backward toward Andalusia, Moses de Leon, and the Zoharic circle of Simeon ben Yohai.
David J. Halperin summarizes the Marrano’s minority theology in his edition of Abraham Miguel Cardozo: Selected Writings:
The world hosts four basic religious systems: Absolute, Prophetic Monotheism (Judaism and Islam); Philosophical Deism; Christian Trinitarianism; and pagan polytheism. All four are false religions. Muslims and Jews insist that there is no God except the being philosophers call the First Cause. Yet the message that Moses brought to Israel, when he came to redeem them from Pharaoh, was that there is a God other than the First Cause. He is the God whom the Bible calls by the sacred four-letter Name, whom the ancient rabbis called the Blessed Holy One.
Where Sabbatai Sevi sounds grandiose, Cardozo’s voice is modest and extreme. “I am no Messiah,” he wrote later in life. “But I am the man of whom the Faithful Shepherd [Moses] spoke when he addressed Rabbi Simeon ben Yohai and his companions: ‘Worthy is he who struggles, in the final generation, to know the Shekhinah [the female side of the Divine Presence], to honor Her through the Torah’s commandments, and to endure much distress for Her sake.’ ”
Abraham Cardozo outlived his fallen Messiah by thirty years. Addressing Sevi’s failure to reappear, Cardozo explained that “Our ancient rabbis have said that King Messiah will tell every Jew who his father is, that is to say, his Father in heaven, God, whom they have forgotten in their exile. Sabbatai Sevi has not done this. He has not openly proclaimed to the Jewish people the divinity of the Shekhinah, the existence of the Great Name, the truth of God. Even if he was aware of all this, his awareness was for himself alone.”
Jacob Frank was another Messiah successful for himself alone. Born in 1726, this Polish Jew with a knack for commerce found his calling in mid-18th-century Smyrna. By force of personality, Frank assumed his messianic mantle in the Ottoman Sabbataian community. This Messiah’s revealed truth identified four aspects of holiness: the God of Life, of Wealth, of Death, and the God of Gods. Frank lived like an Oriental potentate on the offerings of his followers as he progressed from Turkey through Anatolia to Poland and Bohemia, all the while promising everlasting life on this earth to the many Sephardic and Ashkenazi Jews he converted—to Catholicism.
Frank’s converts assumed Polish names and received aristocratic patents when they followed their redeemer into the Catholic Church. Frank identified his daughter—born Rachel Frank in 1754 and later known as Eva—with the Shekhinah, as well as with the Madonna. The Frankists addressed Eva as “The Maiden” or “The Virgin.”
Frank’s sayings and stories are compiled in a book, The Words of the Lord. There, the master asks, “How could you think that the messiah would be a man? That may by no means be, for the foundation is the Maiden. She will be the true messiah. She will lead all the worlds.”
Pawel Maciejko calls his Frankist history The Mixed Multitude, alluding to both the generation that followed Moses out of Egypt and to the rising tide of spiritual and political democracy. Witnesses withheld their hosannahs. A contemporary rabbi’s account of one early Frankist-cum-Sabbataian ritual in Lanckoronie, Poland, in 1756, reads like a scene from Isaac B. Singer’s novel Satan in Goray: “And they took the wife of a local rabbi (who also belonged to the sect), a woman beautiful but lacking discretion, they undressed her naked and placed the Crown of the Torah on her head, sat her under the canopy like a bride, and danced a dance around her. They celebrated with bread and wine of the condemned, and they pleased their hearts with music like King David … and in dance they fell upon her kissing her, and called her ‘mezuzah,’ as if they were kissing a mezuzah.”
The outside world also took note of Jacob Frank. A 1759 issue of the English Gentleman’s Magazine featured an anonymous “Friendly Address to the Jews.” Its author expressed surprise at a report “that some thousands of Jews in Poland and Hungary had lately sent to the Polish bishop … to inform him of their desire to embrace the Roman Catholic Religion.” The correspondent suggested that if you think that the Christian religion is true, and believe the messiah is already come, then why not “embrace the Protestant religion, that true Christianity which is delivered to us … without the false traditions and wicked intentions and additions of the Popes, who have entirely perverted the truth, and corrupted primitive Christianity.”
Overtly Catholic, the Frankists also kept Jewish feasts and holy days. A few years after the Maiden’s death in 1816, a secret society, called the Asiatic Brethren of Bohemia, Poland, and Hungary, mirrored the Frankists. These Masonic Protestants celebrated Christian holidays as well as the birth and death of Moses, and Shavuot, “to bring about religious unity by leading Christianity back to its Jewish form.”
In his table talk, Frank dismisses Jewish worship and tradition with a wave of his hand: “All the Jews are seeking something of which they have not the slightest inkling. They have a custom of reciting every sabbath: ‘Come, my beloved, to meet the bride,’ calling out ‘Welcome’ to the Maiden. This is all mere talk and song. But we pursue her and try to see her in reality.”
“The whole Zohar is not satisfying for me,” he announced, “and we have no need for the books of kabbalah.”
With regard to his scriptural forebears, Frank models his conduct after an alternative lawgiver: “Moses did not die but went to another religion and God permitted it. The Israelites in the desert did not want to walk that road, and when they came to … bitterness, they became aware of that freedom and it was in that place where there was no obligation.”
What of Frank’s own place in suspended history? “All religion, all laws, and all the books published up to now as well as whoever reads them, are like reflections of words that died a long time ago. All that comes out of Death. The wise man’s eyes should always look to the person in front of him. This man does not look left or right or to the back, yet everybody turns his eyes towards him.”
Just before his own death in 1791, Frank announced: “I tell you, Christ is known to you as coming to liberate the world from Satan’s hands, but I came to liberate you from all laws and statutes that existed up to now. I have to destroy them all, and only then will God reveal himself.”
In The Poetry of Kabbalah: Mystical Verse from the Jewish Tradition, Peter Cole translates a popular hymn by Yisrael Najara that is still part of mainstream Jewish worship. The song, “Your Kingdom’s Glory,” was adopted by Sabbataians as an anthem of messianic kingship. Its seven stanzas were chanted in the Cathedral of Lublin in the presence of Jacob Frank. The hymn begins:
Let your Kingdom’s glory be revealed
over a poor and wandering people,
and reign, Lord who has ruled forever,
before the reign of any King.
Stanza four, the song’s center, states:
I hope for the time of your redemption
and wait with patience for your salvation.
If it tarries, Lord, in your absence,
I will look for no other King.
The plea concludes:
Bring my people back to you There [Sion’s mountain],
and I will rejoice around your altar.
With a new song, I will offer
thanks to you, my Lord and King.
More precise and moving, Cole’s verse-paraphrase of one passage from the Likutei Amarim (Tanya) of Rabbi Schneur Zalman, a hasidic contemporary of Jacob Frank, embodies the messianic fervor merely alluded to in the earlier, generic hymn:
All before Him is as nothing:
The soul stirs and burns
for the precious glory of His greatness,
to behold the light of the King
like coals of the fierce flame rising.
To be freed from the wick
or the wood to which it clings.
Historically, Christians are vexed with the Jews, who insist on waiting for their own messiah, amid discussion of how he will be known, what marks he shall bear in both the scriptural and the worldly sense, and when he will come. Islam, too, looks for the Mahdi and a day of salvation. Yet even those who believe that their messiah has appeared await a second coming.
So the question of who and what to accept, of how to recognize the truth, abides. I must ask it of myself, if I ask it of others: How could you believe? Or, How could you not?
Considering the matter of the pretender, or the fallen Messiah, the question changes: How could a person be so false, and yet walk the earth?
Legend tells that on the day the Temple was destroyed, the redeemer was born. At that very moment, a certain Jew was plowing his field, and his heifer lowed. A passing Arab said, “Weep, Jew. Your Temple is destroyed. I know this from your heifer’s moo.”
The heifer lowed again. The Arab said, “Rejoice, for the Messiah, who will deliver Israel, is born.”
The Jew asked the Messiah’s name and birthplace.
The Arab answered, “Menachem (the Comforter) son of Hezekiah, in Bethlehem.”
The Jew sold everything, became a garment merchant, and traveled until he reached Bethlehem. Women flocked to buy his wares, and urged Menachem’s mother to buy a little something from the merchant. She replied, “Better to have Israel’s enemies strangled, than to buy one rag for such a son. The day he was born was the day the Temple was destroyed.”
The Jew who came so far to find her said, “It may have fallen on the day your son was born, but I am certain that on his account the Temple will be rebuilt. Take what you need. I will come again, and you will repay me.”
Time passed. The Jew returned to Bethlehem, and sought out Menachem’s mother. “So tell me, how is your son?”
The woman answered, “Right after you spoke to me, a windstorm snatched him from my hands and carried him off.”
So it is said in the Book of Lamentations: Menachem the Comforter is far from me.
Laurance Wieder is a poet living in Charlottesville, Virginia. His books include The Last Century: Selected Poems (Picador Australia) and Words to God’s Music: A New Book of Psalms (Eerdmans). He can be found regularly at PoemSite (free subscription available from poemsite@gmail.com).
Books discussed in this essay:
Book of Legends/Sefer HaAggadah: Legends from the Talmud and Midrash, by Hayyim Bialik and Y. H. Rawnitzky (Schocken, 1992).
The Poetry of Kabbalah: Mystical Verse from the Jewish Tradition, translated and annotated by Peter Cole, co-edited and with an afterword by Aminadav Dykman (Yale Univ. Press, 2012).
Sabbatai Zevi: Testimonies to a Fallen Messiah, translated, with notes and introductions, by David J. Halperin (The Littman Library of Jewish Civilization, 2012 [2007]).
Abraham Miguel Cardozo: Selected Writings, translated and introduced by David J. Halperin (Paulist Press, 2001).
Saturn’s Jews: On Witches’ Sabbat and Sabbateanism, by Moshe Idel (Continuum, 2011).
Jacob Frank: The End to the Sabbataian Heresy, by Alexander Kraushaar, translated, edited, annotated, and introduced by Herbert Levy (Univ. Press of America, 2001).
The Mixed Multitude: Jacob Frank and the Frankist Movement, 1755-1816, by Pawel Maciejko (Univ. of Pennsylvania Press, 2011).
Women and the Messianic Heresy of Sabbatai Zevi, 1666-1816, by Ada Rapoport-Albert, translated by Deborah Greniman (The Littman Library of Jewish Civilization, 2011).
Sabbatai Sevi: The Mystical Messiah, 1626-1676, by Gershom Scholem (Princeton Univ. Press, 1973).
Satan in Goray, by Isaac Bashevis Singer (Farrar, Straus and Giroux, 1996 [1955]).