What sort of atheist would I be?

[Above is the image for Loyola University Chicago’s 2016 conference, “The Challenge of God: Continental Philosophy and the Catholic Intellectual Heritage.” All credit goes to Jacob Torbeck.]

Christ is in the grave. From Good Friday through Holy Saturday, Christians often imagine the world waiting with bated breath for the Son of God to burst forth in glory on Easter Sunday, but the Gospel accounts suggest otherwise: the disciples — before Holy Saturday was ever Holy — were coming to grips with the fact that their Messiah had failed. Perhaps more traumatic, their friend was dead. Anastasis is far from anyone’s lips.

What would it be like to live in that space — perpetually?

I’ve lived on both sides of the A/theist binary and still straddle it more than is comfortable most of the time in an expression of what Colby Dickinson likes to call my “Protestant intensity.” Philosopher Jacques Derrida once said that he “rightly passe[d] for an atheist,” a statement which provoked all sorts of mental gymnastics from his interpreters. In her book on the subject, professor Pamela Caughie defines “passing” as closely related to theories of “performativity,” including the notion that “any ‘I’ comes to be a subject only through a matrix of differential relations that make certain kinds of being possible.” Identity, in other words, is “something we do, not something we are,” and these doings tend to be “ethically and politically motivated” by a desire to respond to our various situations (4, 14). Thus, Derrida “rightly passed for an atheist” because it was the cultural category which he could inhabit most comfortably, even if it didn’t define him all that well.

Under that rubric, it could be said that I “rightly passed for a Christian” throughout my later high school and college years. “Christian” was a language and a set of habits I had learned; I knew how to Christian, as Gilles Deleuze might put it, placing the emphasis on the verb, identity being something we do rather than something we are, after all. “Faith” is a bad word for whatever I had then; it was theatrical rather than dramatic, a pure form without content. In fact, it was probably something more intense: I was an ironist, with deep contempt for the role I played.

In my actual thought-life, the Christian narrative had become a sort of mythical appendix to a different reality. I never fully fleshed it out to myself, but for me Jesus was a kind of blip on a cosmic radar: an event, maybe one that even had real effects, but none that pertained to me or to the vast majority of people I knew. St. Paul provided the rubric — “If anyone is in Christ, he is a new creation” (2 Cor 5:17) — and personal experience taught me that I didn’t fit. The Christian narrative didn’t include me, so I needed to know where I stood, and ultimately I settled into a few basic premises: Matter was (probably) conscious; the cosmos was (therefore) conscious; and the consciousness of the cosmos appeared to be beyond good and evil in a way that could easily be either alien or infantile. I coped with that the best I could and suspected that I would continue to do so eternally, through various forms of (non-) embodied existence.

Then the change came, and any account of that invites judgment. I had been, up to a certain point, what William Lynch calls a “facer of facts,” for whom “the beautiful thing… is to accept the absurdity and limitations of reality with nerve, sincerity, courage and authenticity” (20). Under another rubric: my courage failed; I fell back on religion, “escap[ing] into a tenuous world of infinite bliss,” and the rest is history. This is not how I would tell my story, but it is a hard story to tell without falling back on categories which, as Adam Kotsko says, appear regressive and naive in the modern world:

In debates over divine transcendence, the burden of proof is most often on the person who wants to reject it — and that position does make sense, as the Christian tradition has mostly embraced divine transcendence. That said, the cultures in which Christianity has mostly moved have also mostly embraced divine transcendence as a kind of cultural common sense. That is no longer the case in the Western world, however. In making sense of the world around us, the “God hypothesis” is obviously no longer necessary. Insisting on divine transcendence, therefore, means pushing up against an amazingly successful explanatory system that virtually no one questions in any serious or thorough-going way. There had better be a damn good reason to take that on! In short, I think that in the contemporary world, the burden of proof is on those who want to maintain divine transcendence.

I have to admit that Kotsko’s assessment feels right to a certain extent; there are certain “amazingly successful explanatory systems” within which God need play no role. I’m not talking about the typical positivist drivel which comes from the likes of Richard Dawkins or Jerry Coyne, but Deleuze’s account of purely material “becomings” along a “plane of immanence” is deeply compelling to me. This purely univocal account of being and experience seems to match up with my reality quite well, but metaphysically it also tends toward a sort of pantheism which is indistinguishable from atheism. As political theologian Clayton Crockett puts it, “I would see Christ more as a singular entity who expressed a powerful vision of life and then died, but that death is itself the resurrection into a repetition of difference that is both absolutely unique and completely inter-related to all other forms of life. There is a Christ-event, but also a Confucius-event, a Spinoza-event, etc.” Frankly, this sounds a lot like my old way of looking at things. I don’t know — maybe I could still find a form of life for myself in there somewhere.

… And yet, I can’t shake the feeling that this is too simple. In many ways, I find that my mental dilemma has reversed: whereas a few years ago I felt that Christianity filled in the holes of the world I’d made for myself, now I feel that same conviction clinging to me despite a wealth of alternative explanations. It may, perhaps, be an act of bad faith to cling to such convictions — and yet I know that to abandon those convictions would be an act of bad faith, totally and absolutely.

Maybe this is what leads me back to “Atheism for Lent” — belatedly, of course, since Lent has already passed us by. In this instance, “Atheism for Lent” means using the Lenten season to engage with historical criticisms of Christianity, allowing such criticisms to refine away the idolatries which so often accrete around our faith and allowing us to repent of them. However, such a process falls under heavy criticism from Kotsko and others who believe that such attempts to “resuscitate” Christianity in order to leave its “core” free from criticism are (what else?) acts of intellectual bad faith.

I don’t agree with this claim. I see many, many problems with it — but I also find it intimidating, and feel compelled to engage with it. This type of performance-art atheism — the sort which empties Christian forms of their spiritual content and repurposes them for more material, social uses — represents the sort of atheist I might be if I could manage to be one.

So by way of engaging the question “What sort of atheist would I be?”, I am working through Katharine Sarah Moody’s book, Radical Theology and Emerging Christianity (Ashgate, 2015) over the coming weeks. I’ll be posting part-reviews, part-meditation as I work through each chapter of the book and think about what I believe, what I don’t, and why. Should anyone want to come along for the ride, I hope you will find something helpful regarding the strange field of “radical theology,” but at minimum I hope that your imagination will be sparked; after all, I hardly think that any theology can live without such a spark.

Hay, Horses, and (What Sort of) Dogmatics (?)

I have a long-standing tacit debate going with my mother and my mother-in-law. It involves the reach of Church doctrine — because why wouldn’t it? My mom-in-law puts it in more immediate, community-minded terms; namely, how essential do we want to make something like, say, the Trinity, if the general population of the Church “doesn’t get it”? My own mother’s terms are a bit more pastoral: “Put the hay down where the horses can get to it.”

My general response cites a failure of Christian education and a failure to think highly enough of the laity, and in large degree I still think this is true. Amos, after all, was an Ancient Near-Eastern farmer who leveled a socio-economic critique so scathing that it would have bowled over Marx himself. Radical critiques of Church history typically read modern systems of power/knowledge onto early doctrinal developments and presume that one-tenth of one percent has always dictated the terms of knowledge. But I think it’s reductive to say that the Trinity (might as well stay consistent) emerged from these same sorts of power dynamics. It wasn’t some mystifying element held over the heads of the faithful in order to create a divide between them and the reigning hierarchy. Yes, at the Council of Nicaea the Church was represented by a fairly small number of men, but by many accounts an astonishingly small half of one percent dissented from the view which would then undergird the doctrine of the Trinity and become “orthodox.” To me, this indicates that the doctrine was far from incomprehensible to the people whom these men represented at the council — so why should our thoughts on that change now?

Perhaps they shouldn’t. But that doesn’t render the question unproblematic; in fact, the problem may well lie elsewhere, in the question my mom-in-law consistently poses: what happens if you can’t “grasp” something so essential?

In Radical Hermeneutics (1987), John D. Caputo attempts to re-think the entire foundation of knowledge, reason, and the roles they play in our lives. Extending the legacy of philosopher Jacques Derrida and his theory of deconstruction, Caputo advocates a kind of thinking which “exposes itself to the twilight world of ambiguous and undecidable figures”:

Its role is not so much to “come to grips” with [mystery]–that is the metaphorics of grasping, and we have insisted on [mystery’s] ability to elude our grip–as it is to cope with or, best of all, to stay in play with it. (271)

Here, Caputo suggests that “grasping” is the wrong way to go about learning. Throughout his study, Caputo makes a number of adjustments in how he discusses knowledge: the Latin veritas, an object of conquest loaded with certainties, gives way to an older notion, the Greek aletheia, which discloses itself, generously opening itself to inquiry without giving everything away. Heidegger resources aletheia in his thought, but Caputo wants to radicalize such disclosure into a-letheia. What is disclosed, for Caputo, is the lethe at the heart of all thinking: not a foundational reality which coquettishly gives us peeks of itself but one which opens itself up completely to reveal that, underneath, there is nothing but flux, change, and chaos. This flux, Caputo believes, is what the German mystic Meister Eckhart had in mind when he described the “Godhead,” as resourced in Heidegger:

Heidegger’s first, last, and constant thought, in my view, is that thinking is in the end directed at that lethic dimension, that the de-limitation of conceptual thinking issues in a Gelassenheit toward the lethe, the concealed heart of a-letheia, the mystery which withdraws, which never hands itself over in a form we can trust. (271)

So much for “putting the hay where the horses can get at it” …

Caputo dances beautifully across the mystical sublime, the “abyss” which accompanies the overwhelming reality of what we call God. But there are other interventions to be made here. I will leave it to Luke Ferretter to describe Caputo’s misreadings of Eckhart and the tradition to which he belonged; more immediately, one must consider what to make of Caputo’s claim that the lethe, or what Eckhart called “Godhead,” “never hands itself over in a form we can trust.”

Christianity, of course, proceeds from a different wager. It believes that, in fact, the Godhead handed itself over in a form that we can trust: the person of Jesus Christ and, secondarily, the plural testimonies contained in what we call Scripture and the interpretive traditions of the Church (however fragile that canon may be; you can consult Colby Dickinson on that one). The core of Christianity is the belief that the “abyss” beneath the name of God has conceded to name itself God the Father Almighty, Maker of Heaven and Earth. God puts Godself in play with us materially, historically, making a wager of trust something like possible.

So while I do not believe that Caputo’s “religion without religion” is representative of historic Christianity (nor can it be in any real way), I do want to retain a part of his critique; in this case, his critique of “the metaphorics of grasping.” With Caputo, I agree that language, always the first ingress into our Being, often constitutes the realities in which we live. Insofar as the “metaphorics of grasping” are operative within our discussions of doctrine, there is a constitutive project at work which places us in active relationship to the passivity of God and his nature. Under such dynamics, the privilege will indeed belong to those who are able to “grasp” God, who “get it,” and it will fall to those who cannot to fall in line behind those who can. This creates an economy of power in which those who “get it” merely have to accuse a dissenter of not getting it, an accusation which has no hope of being rebutted because those who “get it” already control the terms of the debate. Practically, this produces situations like our current crisis in Christian education, where doctrine is merely repeated ad nauseam in terms which no one understands anymore, a bludgeon by which the powerful mark themselves off from those who “just don’t get it.”

So what can we do about this? How can we change it, and avoid this sort of toxicity? For one, I am more than satisfied exchanging veritas for Heidegger’s aletheia without making Caputo’s enormous leap into the lethe. This is a conservatism for which he would fault me mightily, but so be it; I see no need to make such a leap. What I do see, however, is that this move returns God to the active position, free to place Godself under disclosure. We take an active role in this as well, watching for events of such self-disclosure and communication by which this God would make itself known, and not to a privileged few but to all people (Isaiah 65:1).

Where, then, heresy? This is something that I will need a long time to sort out, but by way of a preliminary answer, I would argue that, under the “metaphorics of grasping,” accusations of heresy become a means by which to marginalize and manage those who are unable to “grasp” and yet continue to speak as though they do. Privilege of interpretation belongs to those who “get it,” while others are kept in line, warned not to speak beyond their place and unable to defend their dissents against the tribunal. But in accounts of the early Church, unless we read them with the most uncharitable of suspicion (a popular and academically “sexy” move, to be sure), this isn’t how we see “heresy” at work. Rather, “heresy” tends to be a fairly active project; rather than being the mere failure to apprehend part of what power has decreed to be knowledge, heresy involves a deliberate rejection, an active misreading, or an over-emphasis of something else. In his feud with Athanasius, Arius did not merely “fail to grasp” the teaching that Christ was both human and divine; rather, he insisted on emphasizing the humanness of Christ such that his divinity was effaced. In this case Arius could be accused of “grasping” — veritas — where it was not right to grasp, reaching beyond what God had seen fit to disclose about himself — aletheia.

If we excise or invert the metaphorics of grasping, there is, I think, an opportunity to reconsider the terms. Church dogmatics is not a veritas, a confident “grasping” of what has attempted to stay hidden from us. But neither should dogma be a merely provisional, skittish “construal” of that which “never hands itself over in a form we can trust.” Rather, dogma emerges from a desire to respond to aletheia, to that which God has seen fit to reveal about himself. Aletheia is democratic, and so dogma at its best is the communal response to God’s self-disclosure. It says, “Yes, we have all seen the same thing, and though we may not understand it we trust the One who has shown it, and those who have seen the same things before us. It is enough, for now.” Dogma, when it finds its foundation in the community, is democratic because it finds its expression in response to aletheia, and because it is democratic it is also sparse, humble, and yet deadly serious, “valuable for teaching the truth, convicting of sin, correcting faults and training in right living” (cf. 2 Timothy 3:16). Orthodoxy is orthodoxy precisely because it “sets the hay down where the horses can get to it.” Under this rubric, it is what we call heresy, and not orthodoxy, which repeats the violence of the “metaphorics of grasping” in its dissatisfaction with aletheia, reaching instead for novelty and power, either in the certainty of veritas or the gnosticism of pure lethe. It denies God’s freedom to be responsible for Godself, and denies that Christ is trustworthy enough to reveal himself in a self-consistent way which creates a humbly normative framework by which the entire community can understand itself. A proper dogmatics, responding to revelation, should affirm along with Czeslaw Milosz that “The errors and childish imaginings of the explorers / Of mysteries should be forgiven.”

Under this reading, heresy rather than orthodoxy is an exercise in the illiberal and undemocratic; dogma is liberated, freed to be the curation of a communal voice. But the believed-in self-consistency of God as communicated by that dogma does not (indeed must not) close down the play, the messianic possibilities of the Spirit. Whereas Caputo sees all as play, as lethe, the Christian sees this as a play with purpose, the play of a person, a divine person in communication with (all) human persons, and this divine person is absolutely free. But this person is also good and trustworthy, and it is ultimately that belief which guards against what I have described above becoming a simple reversal, an inversion of terms which ultimately maintains the same power structure. Caputo himself admits that all such “displacements” are dangerous in precisely this way, no matter who is performing them. Lest we fall prey to this danger, we must remain open to that same messianic “play” by which God-in-Christ overturned the Law of Moses and undermined the theology of the Jews. The free play of the Holy Spirit may, indeed, leave our own conceptualizations and construals undermined at times, but that freedom does not bespeak a lethe which leaves us cast adrift. We and our ever-fragile dogmatics are, rather, left with a single consolation: ADONAI, God of the spirits of the prophets, is true and trustworthy (cf. Revelation 22:6).

Car-Crash Theology

He saluted the timestamp, archival marker as it was. On a purely solipsistic level, he wanted to believe that past could collapse into present, or that acts of thought were one long present. The ubiquity of the archive, and the marks it put on everything, would never again allow this. It created a topography behind him which insisted, beyond denial, that motion had occurred. Palaces, recognizable in proportion to completeness in proportion to decay, dotted the blurred-orange horizon and hailed their only-ever occupant, who now found himself a hundred stamps away. 

I’m currently thinking about the Shelleyan afterlife of theology when I should be reading any of a number of things which don’t seem to command my attention enough to do them, which is essentially the description of a “responsibility” in the first place. On hand I have five stones, variously consulted, plus a canteen. I’ll lay a cornerstone here, something I haven’t even looked at yet, and like every other one I will forget all about it, but the point is that at some point I’ll be able to come back and dig it up and remember that I ever put it here at all.

“The use for theology in a secular society is to understand our cultural heritage and diagnose its often unexpected influence” — Adam Kotsko

“Hmm, I think there is a better answer for the use [of] Death of God theology — it is similar to the new age of art.” 

Rarely, I’ve found, does anyone think about the “use” of theology beyond its polemical/proselytizing/evangelizing/apologetic forms. This is what theology “does”: it creates a framework in which to hold and maintain the believer while also potentially converting the non-believer. Traditionally, I think, this is what theology has done or been seen to do.

Kotsko thus suggests that the “use” of theology needs to be reconsidered for a secular society after the “death of God,” specifically the Hegelian-Altizerian interpretation which he thinks is “most interesting.” It is a functionally atheistic (post-theistic?) understanding of theology: theology conducted as genealogy, as archaeology, as a means of making sense of where we came from and where we’re going. This doesn’t mean relegating theology to the status of a sociological phenomenon; a real understanding of theo-logic has to go into this work of realizing the degrees to which religious belief has produced us, still pilots us, where we can permit it to do so (in a kind of chastened non-non-naivety) and where we must excise it. It’s not quite the making of Nietzsche’s Übermensch, as we get to decide exactly how much of the shadow of God’s corpse we want to live under at any given time. We can come and go from under it as we please.

The alternative, less overtly political “use” for theology after the death of God is in the kind of pastiche-role of an “art after metaphysics” such as that explored by John David Ebert. The “transcendental signified” — the universal concept, the keystone for a metaphysic — disappears behind the realities of language (à la Derrida). This is illustrated beautifully in Shusaku Endo’s Deep River, I think, in which the ostensibly universal significations of Catholic theology are, in frustration, locked into geographical particularity: “God revealed himself in Europe,” an exasperated Jesuit tells a would-be Japanese priest as he attempts to translate Western images of God into usable tools for his homeland. The unsaid insistence that Christianity is untranslatable outside of Western metaphysics turns into a damning affirmation: metaphysics — and consequently Christianity — are not in fact universal or “transcendental” categories of signification.

Ebert says that the contemporary art world has fully thrown itself into this realization. Dislocated forms, Eliot’s “fragments shored against ruins,” become the detritus by which we make new things: we recombine, redeploy, similar to what Jeffrey Nealon says is the role of conceptual poetics in the work of Kenneth Goldsmith and others. Theology is one of these many forms, these sense-making systems whose integrity has dissolved and left us with a “heap of broken images” which we can then put back together in ways that have never been seen before, to mean things they have never meant before — something intensely personal. Theology after the death of God becomes “profaned” in Giorgio Agamben’s sense: something tumbled from its pedestal, cast out of its sacred separation and into an earthy realm where it is played with, given new uses. The grammar of the “Book of God,” so pulled down to earth, becomes available for new ways of navigating Being.

So now that God is dead, “theology” becomes newly available in two forms: as archaeology, a means of understanding who we are and where we come from, or as theo-poetics, a reclaiming of God-language for personal use and sense-making. In either case, it is a tool put to a new use: as archaeology, it is a signpost that has itself been pulled out and used to dig for the foundations of the sign. In art, we might use a spelunking analogy: our guide has died, and now we use their gear to find a way forward ourselves. Both approaches are “nihilistic,” as Kotsko wouldn’t hesitate to argue, but this is not a project only for atheists. Catholic theologian Colby Dickinson cautions us to remember that such theologies rarely make actual claims about the ontological status of the divine (if such claims can be made at all). Such “immanent” theologies do not preclude the reality of a transcendent God who has (or has not) died (unless you’re an Altizer, of course); rather, they are profoundly political and cultural statements about the lack of any such God’s felt presence in modern society. As such, Dickinson is also interested in “finding new uses” for theology in such times. Insofar as he is speaking as a Catholic, I honestly don’t have any idea what the hell he means …

I do understand the logic behind both of these approaches, this “resuscitation” of a discourse that has been dethroned, returning its vocabulary to play and common use, or to archival work. I think I understand that logic now more than I did a year ago; the purpose is not to make claims about the divine in reality but to (re)use the vocabulary available for something different. The Temple has been demolished, brick by brick, just as was prophesied: let us return to the stones and make something new out of them, so that we may “shore ourselves against its ruins.” Perhaps this is the noble image evoked by such projects.

… And yet I can’t shake the image of a story I heard, about a horrific car crash in which a child was decapitated. When the EMTs arrived, the mother was holding the body in her lap, crying, trying to reattach the head, hope sparking through the chemical haze of adrenaline and grief which told her that this time, no, maybe this time, the red and gaping sever would shut like the beaten, battered gates of Hell in the wake of Resurrection.

Features and Bugs

“Sheep might have to put on wolves’ clothing, to fight as wolves do; of course, the innocent may risk bloodying their own jaws — captured by discourses they should have known were predatory.” — Mark Greif, The Age of the Crisis of Man (2015)

 

According to René Girard in I See Satan Fall Like Lightning (2001), what’s right with the world is that our current age enjoys an unprecedented care for victims. Unlike past ages, which were content to unload their violence on suddenly and unanimously selected victims, our world is no longer content with the selection of scapegoats. Indiscriminate violence against the innocent can no longer serve as a placeholder for justice. Such, according to Girard, is the power of the Gospel.

This is something I want to make sure I think on in light of my last post. I’ll be honest: “movements” scare me. Whether it be #BlackLivesMatter or #BlueLivesMatter or the #AllLivesMatter reaction (those are simply the most visible right now), no matter the source, part of me is always worried that someone is out for blood and that movements for “justice” are actually veiled coups, about-faces disguised as cries for equality which will go suspiciously silent the moment power changes hands. I went through the process of articulating this discomfort with a few people recently and got one particularly interesting response:

I will gently suggest that your discomfort […] should be examined, as they say in tech, as a feature not a bug.

What strikes me about this is the double edge to it. On the one hand, this is absolutely true: my discomfort, insofar as it is sourced in my own unexamined privilege and prejudices, is definitely “a feature not a bug.” Insofar as I am worried that “justice” means I might have to change the way in which I interface with the world, then I am on the wrong side of things — I am complicit, in need of forgiveness, repentance and patience.

But it turns out there is a darker side to this phrase: in the tech industry, when someone argues that something is a “feature not a bug,” there may actually be a con going on. When something goes wrong with a line of code, for instance, or a program fails to work properly or efficiently, the phrase may be used (jokingly) as an excuse. The argument that “it’s a feature not a bug” is in fact the programmer’s way of avoiding responsibility for a defective product that needed far more care and attention.

So, ironically, perhaps the single most true thing that could be said of any and all social movements is that “Discomfort is a feature, not a bug,” with all the implications of its rampant double-meaning. On the one hand, such discomfort needs to be accepted as a reality of having one’s privilege challenged; on the other, it can also be used to abdicate responsibility under the attitude that the ends justify the means.

The very ideas of justice and revolution, then, contain within themselves the promise of their success as well as their potential for a new kind of tyranny. And this ought to scare us, I think, because we are human creatures. We have a long history of piling up victims, as Girard argues. All special-interest groups have the deck stacked against them from the beginning because they cannot demand justice without reminding humanity of the mob mentality that all human culture is based on. There are two discomforts which must be held in tension, and the challengers and the challenged alike are responsible to one another in producing an actual event of justice: the ending of victimization without producing more victims.

What should encourage us, in the times ahead, is that such justice is a real possibility. If we believe, as Girard does, that we really have developed an attention to victims as well as a distaste for collective violence, then we can bank on that lesson, an instinct which does not come naturally but has been disclosed to us, one to which we have often failed to attend and against which we have even actively struggled. If our fear of retaliatory violence is real, or if we fear the inevitable and inexcusable justification of violence in the name of “justice,” then we have at least as much ability — even responsibility — to imagine an event of justice without violence, without the further accumulation of victims.

This doesn’t mean that there isn’t still real, or what Girard would call “Satanic,” potential here. Under the worst of conditions, victimhood can cease being something to rectify and can instead become something to celebrate. Just as our culture has a unique and unprecedented potential for justice through our attention to victims, so it carries the odd potential in which weakness and victimhood can themselves become just as much justifications for violence and prejudice as can privilege and power. The longer privilege and power — comfortable as they are with themselves — go without recognizing themselves as such, the more likely becomes the scenario in which “justice” simply becomes an act of oppression changing hands.

But if we remember this potential exclusively, we lose the courage to enact the other. This is what I’m learning, anyway. Non-violent justice is not a thing to be wished for but a thing to be performed. It must be demonstrated as a real, imaginable possibility in daily life and lived experience — it must be a way of being-in-the-world. To “imagine that things might be otherwise” requires intentional dedication to a number of things which our culture isn’t exactly hospitable towards: applied humility, slowness, patience, meaningful speech always balanced by the practice of fertile silence.

This, in any case, is the space which I want to occupy.

“All Lives Matter”

“One thing I have been paying attention to,” said a friend of mine recently, responding to the past week’s events, “is how much people are equating ‘speaking out’ with what others are sharing on social media. As if that somehow represents the zenith of a responsible social conscience and is the best, most serious gauge of how people feel and what they think. Like those of us who have refrained from digitally expressing outrage, condolences, etc. etc., have to apologize to the rest of the world for keeping our mouths shut and being thought fools.”

He went on to give what’s turned out to be one of the most resonant observations I’ve read in a long time:

It is embarrassing that as a culture we have decided that sharing and posting online constitute meaningful dialogue and serious commitment – even if we’re “signing a petition.” Social media is far less about “dialogue” (meaningful conversation) than it is about “monologue” (shouting your opinion into the open air). And often times those opinions are not backed by action, let alone thoughtful, consistent commitments toward alleged concerns.

Finally, he hits his real zinger:

The truth is, what we really want is pats on the back, little back rubs, call them what you will, validating our ideas. We done well, saying X is bad and Y is good.

This cut to the quick, and I mean hard, especially in light of my last post. In that post, I expressed the belief that “being capable of responding,” as David Tracy argues, places us in a situation where we should say something. Of course, the other side of the coin is that, when one has very little to say, it is often best to keep silent. Silence, however, has become tantamount to complicity in the digital age, while there is no patience for the type of “speaking out” which Tracy advocates because, frankly, that kind of activism doesn’t move fast enough for the contemporary culture. At this point the single worst thing you can apparently do is decide that you’re not going to participate in the masturbatory politics of social media.

This is almost exactly what I predicate my freshman writing courses on: the need to develop and perform strategies of wise, thoughtful engagement when our technology enables and even expects us to do the exact opposite. Believe it or not, today’s college students hate the way the public forum works, too.

But one question which I’ve failed to address, and have honestly not given enough thought to, is simply this: Where? Where does one actually find a place in which to perform the kind of measured discourse which might produce effective politics? I can list a few options: The academy, the Church, public hearings organized by the State, official debate forums… All of these fall apart very quickly. Even this platform right here proves to be a misery, because as soon as I finish this blog post, I am going to share it. I am going to ask people to read it and I will be interested in what they think. The moment that happens, an entire cultural apparatus is going to pick it up and turn it into precisely the kind of artifact that my friend decried.

Writing and critical thought themselves have, for these reasons, become a sort of hypocrisy in a culture that doesn’t want to wait for them to do their extended work. The political climate in which we find ourselves is like that of a huge frozen lake that’s begun to crack under our feet: long, cruel, spidery cracks that dare us to try something.

“Come, let us reason together,” say the writer and the critic as they explore the territory, methodically tapping their way across the ice and looking for strong places to lay their weight.

“Get over here, you’re going to get us killed!” screams everybody else, as the ice groans under their collective ideological baggage where they’ve huddled together for familiar warmth, not thinking for a moment that this might be precisely what causes the fatal collapse.

We no longer have any patience for anything which doesn’t amount to out-and-out confirmation bias. We are, in short, obsessed with the ideological use of the products which we continually churn out into the public sphere. We have commercialized our speech and determined its value based on how well it makes us feel good about X or demonizes Y.

Philosopher Giorgio Agamben has a prognosis for all this, and it’s not good:

Contemporary politics is this devastating experiment that disarticulates and empties institutions and beliefs, ideologies and religions, identities and communities all throughout the planet, so as then to rehash and reinstate their definitively nullified form.

These empty, “nullified” forms of activism are precisely what you get when social media becomes “the zenith of a responsible social conscience.” We have no patience for any form of thought or identity which can’t be incarnated in the almighty emoticon. Our culture has collectively decided that it will use technology to evacuate communication of its content and power before promptly turning around and insisting that it’s done no such thing. “Now shut the hell up,” says Facebook, “and put this flag filter on your profile picture.”

“I’m tired of seeing Life politicized,” says Agamben. “Through terror, through the complete commodification of the human being, through racism, through capitalist structures. Identity politics. Through the Medical establishment. Even through religion.”

Catholic theologian Henri de Lubac has similar words:

People imagine that by reducing everything to immanence, everything, beginning with himself, would be given back to man; on the contrary, it meant robbing him of everything he possessed and “alienating” him absolutely. For it implied reducing everything to duration.

“The sole form in which life as such can be politicised,” says Agamben, “is its unconditioned exposure to death – that is, bare life.” This unconditioned exposure to death — the Orlando shooting, the killing of a black man in Houston, retaliatory killings of a number of police officers in Dallas — all of these deaths have exposed people to the fact that their political status is implicitly “reduced to duration,” namely the length of time for which they remain useful for production. “The way in which humanism, which regards man as the supreme value, ‘gives value to man’ ends by resembling the exploitation of land or livestock.”

That is how you commodify a person, their words and thoughts: you reduce their identity to their usefulness in a competitive capitalist superstructure which is only interested in itself. The unconditioned exposure to death jars them; they begin to face down this reality and they start asking questions. They realize they have been “disfigured,” as de Lubac says, and they demand a ground for their being. That is what a movement like Black Lives Matter is, in the end, all about: “Tell us we’re valuable! Tell us why we’re valuable! Show us!”

“All Lives Matter” is the insidious response, and insidious because of this: there is nothing actually on offer. Those who have faced down death, the bareness of their lives, have asked for something absolute, something particular, some assurance which does not reduce them to their labor, and culture has nothing to give them. “All Lives Matter,” we chant, “Black Lives Matter, Pray for Orlando, End the Hate,” and the utter emptiness which those words disguise weighs them down.

Because, in the end, every single statement of either solidarity or revolution spoken into the politicized sphere of social media is recast within that platform as a mere assertion rather than providing hurting, frightened people with the ground of Being which they are demanding. The public sphere we have created divides every issue into clean black-and-white, us-and-them, and paradoxically this clean division utterly erases the particularity of individual situations; it ravenously co-opts and appropriates real people into its sprawling ideological narrative.

The erasure of people and particulars for the maintenance of that narrative ultimately guarantees that all our most heartfelt prayers and protests alike are reduced to soothing coos of “All Lives Matter.” And by this, we mean that “all lives matter the same,” which is to say that no life matters beyond its capacity for production. “A capacity,” says culture with a knowing glare, “which you have all disrupted by taking all this time insisting that you matter.”

Until there is actual content behind the assertion that life matters in all its forms, any rallying cry that “Lives Matter,” whoever’s lives they may be, will be essentially meaningless, not knowing what it wants. The words will simply be a negative space, a canker, a weight into which we will fall and fold, “collapsed into the relative, carrying the whole of man with it.”

And, without another medium, we will still go down tweeting about how much “life matters.”


On being capable of responding

‘Not all are guilty but all are responsible.’

“To see how ambiguous our history has been, however, is not simply to retire into that more subtle mode of complacency, universal and ineffectual guilt. Rather, as Abraham Joshua Heschel insisted: ‘Not all are guilty but all are responsible.’ Responsible here means capable of responding: capable of facing the interruptions in our history; capable of discarding any scenarios of innocent triumph written, as always, by the victors; capable of not forgetting the subversive memories of individuals and whole peoples whose names we do not even know. If we attempt such responses, we are making a beginning — and only a beginning — in assuming historical responsibility” (David Tracy, Plurality & Ambiguity).

This weekend, early on the morning of Sunday, June 12, 2016, Omar Mateen opened fire with a semi-automatic rifle inside a gay nightclub in Orlando, Florida, killing forty-nine people and wounding at least as many others.

Meanwhile, I am privileged, normative, living hundreds of miles away in the American Midwest, which certainly leaves me with very little to say. I know that, and I know that regarding that about which one has nothing to say, it is best to keep silent — especially where my friends have already done so much. The best I can do is immortalize this moment here, bear witness to it, show that I am thinking about it and will not stop thinking about it. Perhaps the most important thing I can do is make a monument here, a Beth-El, making sure that there is just one more waypoint through which someone can go on their way to remember the victims.

The only contexts I have for this are my books, my only interface with the outside world right now is my library. There’s something miserably dehumanizing about that. I learned of the attack just as I was finishing David Tracy’s Plurality & Ambiguity. Writing in 1987, in the heyday of cultural theory, Tracy, a Catholic theologian, argues for the power of conversation in navigating the intense plurality and ambiguity of our human condition — of our language, our history, and our hope. Confidently, Tracy describes the power of critical theory for the turn of the century:

Any theory that allows primacy to critical reflection is on the way to becoming critical theory. A critical theory in the full sense, however, is any theory that renders explicit how cognitive reflection can throw light on systemic distortions, whether individual or social, and through that illumination allow some emancipatory action.

Tracy goes on to describe his optimism in the possibility of such emancipatory action through critical theory: “The uniqueness of modern critical theories…is that our situation is now acknowledged to be far more historically conditioned, pluralistic, and ambiguous than theories like Aristotle’s could acknowledge.” Tracy cites such subversive and re-visionary voices as Michel Foucault, Julia Kristeva, and Edward Said as exemplars of this kind of thinking. Our power is in our awareness, says Tracy, and through the humility of our moment, we can pursue “genuinely new strategies of attention, resistance, and hope.”

Twenty-nine years later, the victims of Omar Mateen were not recipients of emancipatory action. The LGBTQ+ community as a whole, brutally reminded of their terrifying lack of safety in this world, is not experiencing emancipatory action. The Muslim community whose beliefs and ideals are being blamed for Mateen’s hatred and violence are not recipients of emancipatory action. Praying Christian communities, whose motives are questioned and even decried due to the Moral Majority’s long history of homophobia and oppression, are not experiencing emancipatory action. Despite all our awareness of history, plurality, and ambiguity, emancipatory action is a far-flung wish that comes fifty lives and more too late.

I guess I’m trying to say that, on a certain level, Tracy was wrong. Insofar as Tracy seemed to see a theological light at the end of the tunnel via the road of Critical Theory, I think Tracy may have been wrong. “The golden age of cultural theory is long past,” writes Terry Eagleton in 2003, and this sentence alone, it seems, is enough to blithely dismiss a number of Tracy’s hopes, as “plurality” is devolving into the viciously-policed boundary-lines of warring ideologies which have less and less patience for “ambiguity.”

“Indeed, there are times when it does not seem to matter all that much who the Other is,” continues Eagleton. “It is just any group who will show you up in your dismal normativity.” If I as an academic may be forgiven for saying this, Orlando, and all its echoes, represents just such a situation in which it ought not matter all that much who the Other is. As I write, processes of Othering are happening all over the place as people look for someone to blame: “There is just Them and Us, margins and majorities.” But there are also those who are choosing to stop Othering just long enough to help pick up the pieces of people’s lives, without caring whether or not these hands are gay, straight, trans, cis-het, Muslim, Christian or atheist, only caring that there are hands at all, hands to shore fragments against ruins.

I’ll be reading Tracy’s The Analogical Imagination next, the theology which undergirds the view he presents in Plurality and Ambiguity. Following him will be more of Eagleton, with After Theory (2003) and his recent Culture and the Death of God (2014). I want to know where Tracy failed, or where history and society failed him, and what explanation there might be for a society that just wants to fight about what’s happening to it rather than reason together. I want to know why just a few venomous voices are allowed to dominate the “conversation” which Tracy once imagined so hopefully, cheapening the silence the rest of us keep not because we’re passive or afraid, overwhelmed by what Tracy calls the complacency of “universal and ineffectual guilt,” but because we’re just trying to take a few moments to focus on holding one another.

And, I want to know what to do next.

On (maybe) asking how to dance

There’s something tragic about David Tracy.

I’m currently in the middle of reading his Plurality and Ambiguity (1987), after which I will work through The Analogical Imagination (1981). Though chronologically inverted, P&A really is his cultural diagnostic and his rationale for writing TAI, which is his attempt at a systematic theology of Christian pluralism.

If I can try to sum up: Tracy is moved to respond to an age of theological tyrannies, brow-beating fundamentalisms which cannot appreciate how complicated and even indeterminate the theological imagination and its history really is. Rather than easy systematic theologies which totalize the religious imagination and yield it up to easy human disposal (at the expense of everyone else’s religious experience, one might add), Tracy offers up one of the greatest voices of late-twentieth-century liberal theology and argues for the public appreciation of plurality and ambiguity in all its forms across all religions. This does not become an easy relativism, however, which Tracy says is woefully inadequate and implausible; Tracy is insistent that practical pluralism cannot come at the expense of a rigorous methodology which insists on pursuing the truth. As to how that truth manifests, however, Tracy outs himself in Plurality and Ambiguity, saying,

“My own hope is grounded in a Christian faith that revelations from God have occurred and that there are ways to authentic liberation.”

That hope, however, remains just that: not a politic, not a polemic, not even a dogma, really, but a hope — an eschaton.

In many ways, Tracy’s thought is a more accessible, public iteration of Hans Urs von Balthasar’s. Both thinkers argue for a re-aestheticization of Christian faith, both argue that Christian hope is ultimately eschatological, deferred into the future rather than manifested in the political present, and both believe that the reality of the event of Christ requires a plurality of representations and interpretations. Indeed, history itself is the unfolding of such work.

All that said, I can’t help but feel that Balthasar does the work better, in some ways, than Tracy. Perhaps it’s the angle of approach; Tracy both does and does not walk the postmodern line of undecidability. Not only is one still obliged to stake a claim, says Tracy, even in a pluralistic context such as our own, but he believes that a rigorous methodology can help us keep in step with truth — though not, perhaps, just yet. So it is that Tracy places his hope in Christianity, stakes his claim that Christianity will ultimately prove itself to be true, while also acknowledging that, until the end of days, that claim has yet to be definitively proven and that one must remain open accordingly. Tracy then attempts to build a theology which can contain this position.

Balthasar, meanwhile, stakes his claim within Christian theology first and, from his theological imagination, a pluralism emerges which is itself rooted in the formal beauty which Balthasar finds in the Catholic metaphysic. Perhaps this is where Balthasar beats out Tracy for me, insofar as Tracy seems to do precisely what Balthasar cautions against: he begins with the beauty of pluralism, which Balthasar would call a “worldly” aesthetic, and uses those standards to gauge his theology. Balthasar, meanwhile, begins with the “Glory of the Lord,” and finds that a certain pluralism emerges quite beautifully from within Catholic doctrine itself.

I think this different point of entry and its resulting implications lie at the heart of why Tracy seems a tragic figure to me. His planned book on practical theology has never ultimately surfaced, and his presence in the conversation between Christianity and postmodernism has significantly waned. Tracy bases much of his thought on Heidegger’s insistence that whenever something is revealed, something else also disappears from view, and Stephen H. Webb chronicles Tracy’s own descent into “hiddenness,” calling him “our Erasmus.” Among Tracy’s rare recent appearances is the transcription of his response to Richard Kearney which appears in the latter’s book, Reimagining the Sacred (2015). If one listens to the actual lecture, an exhausted-sounding Tracy does indeed seem to be trying to uphold the classical hope of a Christian God whose omnipotent power manifests as boundless love and self-limiting relation, over and against “the smallest possible God” of Kearney’s anatheism, the (end?) product of a postmodern theology which Tracy himself helped create.

To some degree, I wonder if Tracy’s exhaustion is sourced in the same place as Jeffrey Nealon’s critique in Post-Postmodernism (2012). There, Nealon argues that the postmodern modes of social critique which interrupted claims at totalization, which disturbed the notion that we can have the world at our disposal, no longer function the way they once did because we have so thoroughly internalized the lessons of plurality and ambiguity which Tracy helped elucidate. Truthfully, this sort of redundancy creeps into my reading of Tracy on some level: even as a religious believer who has staked my claim in a given interpretation, I have accepted that my own tradition is itself surrounded by a plurality of others and that my own position is not innocent of intense ambiguity, even darkness and trauma. While Tracy successfully diagnoses such a situation, I did not need him to do it for me; I only needed to grow up in the first decade of the twenty-first century, where fundamentalism is on the wane and yet not being replaced by the sort of liberal theology of which Tracy was perhaps the last great expounder. Not that such camps don’t exist, but on the whole it seems that some other kind of religious identity is quietly asserting itself.

If Nealon is correct, and the future of literature lies in its very “falsehood,” in its power to “give another account of the real altogether,” perhaps the same can be said of theology to come. Again, perhaps this involves not the disappearance of doctrine, as in Amy Hungerford’s account of a literary religion of content-less form, but rather doctrine’s resurgence, though not as dogma but as drama. Certainly, stories such as Shusaku Endo’s Deep River (1994) or Marilynne Robinson’s Gilead (2004) should be read as such, not as catechisms but more as invitations — imagining a dance of sorts, and a space in which the dance does indeed flow, have rhythm and logic, and then perhaps finding that this rhythm and form carry over into lived experience, into the real world where, as Tracy says, theory can once again prove useful for life.

Webb reminds us that Tracy was always critical of evangelizing theologies, but perhaps this has less to do with evangelism itself than with its mixed-up set of priorities. Certainly it does no good to shout the steps of a dance over the top of everyone else who is already contentedly dancing in some other way. As Tracy would insist, all must be allowed to dance as they will, but if, as in Balthasar’s thought, the steps of one dance come together as something truly beautiful, as something which truly does justice to all the others, then might we end up asking to learn that dance after all?

… Well, it depends. As Tracy says:

“Others — and this, I believe, is the most serious charge — find themselves, despite their acknowledgement of the cultural and ethical achievements of religion, unable to consider seriously the intellectual claims of theology because the history of religions also includes such an appalling litany of murder, inquisitions, holy wars, obscurantisms, and exclusivisms.”

The stakes of this invitation do, indeed, seem to come down to beauty after all.

ēthikē dramatikos

Be beautiful, therefore…

I want to expand a bit on something I only mentioned in passing in my last post: the idea of a dramatic ethics.

Hans Urs von Balthasar argues, in The Glory of the Lord, that aesthetics, when properly understood, should have ethical implications and aftershocks. In other words, morality is an effect of beauty. This is an instance of, as I mentioned before, “the beautiful” having a reciprocal and interdependent relationship with the true and the good.

This, certainly, is not typically the way we think about ethics or morality. The Old Testament, for instance, we boil down to a long litany of “do this” or “don’t do that.” Morality, in our public consciousness, is a set of black-and-white prohibitions which are either tyrannically repressive, or just enough to keep us from killing each other. To maintain social order, we codify our morality and yield it up to the jurisdiction of the State, ensuring that everyone keeps to the bare minimum code of behavior needed to get along, but not so much as to infringe on our individual rights to do whatever the hell we want. In fact, this is even how we diet.

This, certainly, is the only outcome when morality is understood in purely juridical terms; when aesthetics has been evicted from our set of concerns.

Perhaps the most well-known and gag-inducing example of this attitude is the pithy phrase “What would Jesus do?”, made popular by evangelicals in the 1990s. Self-contained in this phrase is the assumption that the key to a moral life lies in taking every single action and decision, running it through the heuristic of Christ’s own behavior in the Gospels, doing a quick cross-reference and then acting accordingly.

The problem is, in my Bible at least, I’ve got about a hundred-ish pages of Jesus actually doing anything; most of it is repeat material, and, as if this were the least of my concerns, Rabboni didn’t exactly make the clarity of his words or behavior a top priority (Matthew 13:10ff).

So what does this mean for the Imitatio Christi?

I truly think the answer lies in drama.

For Balthasar, drama is ultimately the very form of Being itself, and the dramatic action of the Gospel narrative is the definitive disclosure of the way in which the world works. The self-emptying, saving God-in-Christ reveals himself to be the form of all creation, and so morality becomes a matter of participating in that form in a way that gestures to it, does justice to it, calls attention to it.

But how does this interface with ethics? If morality is fundamentally dramatic, then dramatic logic should apply to our ethics, including one of the oldest forms of drama: tragedy. Tragedy, it seems, throws a wrench into even the most confident ethical systems. I’ve been preoccupied with this question ever since an old high school teacher of mine told the story of his wife going into labor, with severe complications. Ultimately, the doctors approached him with the decision to save either his wife or his unborn son. He chose the former.

In fact, this scenario is very similar to that encountered by the Jesuit priest Rodrigues in Shusaku Endo’s novel Silence (1966), who is forced by the Japanese shogunate either to publicly apostatize or to be responsible for the murder of hundreds of villagers. Meanwhile, the aged and enfeebled Alice Bell in Pat Barker’s Union Street (1982) wanders out of her room and into the freezing cold, not because of any overwhelming desire to die but because the people on whose care she depends have institutionalized her, abdicating their responsibility so thoroughly that she is left with the choice of either dying on her own terms or dying slowly of neglect.

Modern Christian morality is going to immediately pose the question: did they do something wrong? There will be the ardent stance that unproblematically says, yes, your teacher should have saved the unborn child. That, or he should have made no decision and left it in God’s hands. Or, yes, Rodrigues should have given the people over to death and left the rest up to God, if he truly believed in him. Similarly, Alice had no right to seek death on her own terms. Perhaps something in these claims is true. But I think that tragedy ruins us for decisions like this, forces the reminder that our easy groupings of “do this, not that” fail to do any kind of justice to the reality of what Martin Heidegger calls the fallenness of being-in-the-world, or what Paul Ricoeur describes as the human being as fundamentally the acting and suffering person.

Tragedy produces situations in which the only appropriate ethical response is to find a way to participate in the dramatic life which weeps despite knowing itself to be the Resurrection and the Life (John 11:21-35). Whatever the outcome, whatever the “right” answer, I feel the really important thing here is that we are forced to account for something lying far afield of our moral systems. We are forced to imagine, as Philip Yancey does, “the Jesus who speaks from the fumie, whose love extends to apostasy and beyond,” held in impossible tension with the Jesus who makes it very clear that he will disown those who disown him (Matthew 10).

“Be perfect, therefore, just as your Father in Heaven is perfect” (Matthew 5:48). This is perhaps the most difficult of Jesus’ teachings. I feel it becomes even harder, not easier, when we divest ourselves of our expectations of moral perfection and instead read, “Be beautiful, therefore, just as your Father in Heaven is beautiful.”

“Seeing the form.”

While the “true” and the “good” may not change, the “beautiful” is precisely the capacity for the “true” and the “good” to adapt to the changing needs of history.

I’m about three days into my dedicated time of reading for comprehensive exams, and my track record is abysmal.

Doing the math, I should be reading about 10 books a month to stay ahead of the curve. That’s a book every three days.

Three days in, I am on page 125 of Catholic theologian Hans Urs von Balthasar’s The Glory of the Lord.

That is, page 125 out of 663.

Of Volume I.

Of 7.

Not that I need to read all seven volumes, but still. I could be doing better.

At the same time, the slow grind has been useful, especially since Balthasar is so dense. In his introductory chapter to The Glory of the Lord, Balthasar provides a history lesson on the fall of “beauty” from the world of theology. He begins with Romanticism (the philosophical movement of the long 19th Century) and demonstrates the ways in which art has been divorced from the other transcendentals, the “good” and the “true.”

The main effects have been these: first, there is the privatization of religion into what theologian David Tracy calls “a private consumer product that some people seem to need.” Following closely, art, or “the beautiful,” becomes similarly privatized as (Tracy here, again) “a now attractive, now repulsive expression of another’s private self.”

For Balthasar, this privatization began when “beauty” came to be understood as something intensely personal and subjective, no longer understood as the “form” of the world, as is the case in Catholic sacramentalism. From there, the historicity which has undergirded traditional Christian religious belief ceases to be necessary and gives way to the existentialism of modern religion, in which God is not a person to be engaged but a concept to be orbited. For Balthasar, when aesthetic form disappears from our understanding of the Gospel, that form being rooted in God’s self-disclosure throughout history, then we are indeed left only with sterile concepts; “Resurrection,” for instance, no longer figures Christ’s victory over death, but rather a figurative “resurrection” into the memory of the church and the potentials of new life here on earth and among other people.

This, I think, is precisely what Catholic novelist Shusaku Endo encountered in trying to translate his Christianity into something intelligible to his native Japan:

‘When [Jesus] was killed,’ Otsu muttered, staring at the ground, as though speaking only to himself, ‘the disciples who remained finally understood his love and what it meant. Every one of them had stayed alive by abandoning him and running away. He continued to love them even though they had betrayed him. As a result, he was etched into each of their guilty hearts, and they were never able to forget him. The disciples set out for distant lands to tell others the story of his life.’ Otsu spoke as though he had opened up a picture-book and was reading a story to the impoverished children of India. ‘After that, he continued to live in the hearts of his disciples. He died, but he was restored to life in their hearts.’

This passage, from Endo’s final novel Deep River (1994), has less to do, I think, with the modern refusal to accept anything as miraculous as the bodily resurrection of Christ. Rather, I think it is also an aesthetic problem, one which arises when beauty is no longer a priority for theology — when “seeing the form” becomes clouded and problematic, especially when making the jump into a deeply aesthetic yet modernized culture such as Japan, for which “form” means something very different. It is one thing when the “form” requires, as Tracy and Balthasar both argue, an inexhaustible plurality of expressions. It’s a wholly other thing when “form” ceases to be the means by which faith might be expressed.

This, certainly, is the case for a post-structuralist culture for which “form” is a four-letter word, another attempt at dominating reality and rendering it up to our disposal. Living in the world, today, means finding a way to live without form altogether. This is made evident enough by the fact that those who manage to do so insist on praising their achievement.

But this vantage is itself historically grounded, the product of over a hundred years of history. Modern philosophy is, in many ways, the child who grew up in the middle of the messy divorce between theology and aesthetics. But, I think, it is possible to imagine things being otherwise: perhaps “beauty” is not merely an after-effect of numerous subjective cross-sections of the “true” and the “good,” themselves as subjective as anything else, but part of what determines what is “good” and “true” in the first place. Maybe it is possible, for instance, to think of ethics not as dogma but as drama. Such imagining is, I think, precisely the work of literature, and the potential role of religious literature in the 21st century. But it is only effective if history is given central importance in the journey, not only because it tells us where we come from but because, according to Balthasar, history dictates the very needs and means (i.e., “style”) by which the form of the beautiful must be expressed. History explains where we come from, and so clarifies what we need in the present; while the “true” and the “good” may not change, the “beautiful” is precisely the capacity for the “true” and the “good” to adapt to the changing needs of history.

But, as one fellow in the faith told me recently, and very bluntly, “Oh, I don’t care about history.” Specifically, what use is history to the immediate needs of ministry?

Oh. I wonder.

On being part of the problem

In which I attempt a renegotiation with what it means for me to be a writer.

I recently found out that I am a blight on my neighborhood. I love this moniker: the “transient academic.” It’s exciting to have something to call myself, about six months out from a finished PhD. Whenever anyone asks, “What do you do?”, I feel the impulse to give a two-minute dissertation defense that justifies my existence. But as soon as I see eyes glaze over, I clam up and say, “I teach.” Then again, anyone not interested in how contemporary literature re-imagines the credal and liturgical possibilities of Christian theology in light of continental philosophy is also unlikely to be impressed that I teach freshman writing.

One commentator on the above article from Evanston Now, going (appropriately) by the name of “Jacques,” astutely articulates everything wrong with me and what I do:

For the time being setting aside — for the sake of my readers’ patience and with a certain respect for the limits imposed on my discourse by the conventions of the “internet comment thread” — the crucially important problematic of transience which threatens to undermine, even as it makes possible, the very discussion on which we have embarked, as well as the difficulty that besets any attempt to delimit a “right sort” of academics, that is, that the category’s own purity can only be established by certain exclusion that inscribes it necessarily and from the very beginning with its other, the group which provokes such anxiety within a certain discourse that claims the authority of the Southeast Evanston Association, even as said Association attempts to disown that discourse, is almost certainly (within the horizons of the present discussion, respecting the form in which it was initially proposed, with all the presuppositions and limitations it entailed) those philosophers (still so numerous, alas, and who enjoy an especial prominence in this country) who are still attempting to shore up various forms of positivism and logocentrism.

In other words, as an academic, I am trained in and excel at — as does Jacques here — being insufferably obtuse on purpose.

It doesn’t have to be this way. Let me demonstrate:

The SEA’s denunciation of “transient academics” is only possible because it assumes and tries to preserve an idea of what the “right sort” of academic looks like, which is itself only meaningful when you have crappy “transient academics” (whatever that means) to compare them to. The SEA doesn’t really want to get rid of “transient academics” because it needs them to maintain its own superiority complex.

That’s my translation of Jacques’s comment, which is an example of what the academy (and by now everyone else) calls deconstruction. In my little note, I’ve tried to give an example of what deconstruction does. But Jacques, facetiously, has zeroed in on how it sounds.

The sound of deconstruction has achieved its own ironic sort of social life and capital. Nobody knows what deconstruction means or does or what it’s about, but speaking in deconstructive registers makes the same point, no matter the content: “I am smart enough to sound smart enough to know what I’m saying is garbage.”

This is complicated for me, because I don’t believe deconstruction is complete garbage. I have problems with deconstruction, but for different reasons than most people have problems with it.

Despite this rupture in understanding between everyday life and the Ivory Tower, we see that, at present, “deconstruction is the case” — so put by Penn State professor Jeffrey Nealon in his book Post-Postmodernism (2012). As evidenced not merely by the comments section of the Evanston Now article but by culture at large, academic rhetoric continues to become furniture for everyday discourse—a sort of White Elephant gift that we shove into the corner and laugh at once in a while. Glaringly, the world and the academy seem to be talking about completely different issues. Take, for instance, Nealon’s commentary on the liberal academic politics of “resistance” and “interruption”:

For interruption to function plausibly as a mode of resistance to truth, the primary social and theoretical ‘problem’ logically has to rest in a social system that does whatever sinister work it does through the desire for totalization. 

Nealon is arguing that there was once a time, not even that long ago, when “truth” was inscribed by the status quo as an act of domination: truth means one thing and one thing only, and that happens to be what I say it means. Under such circumstances, according to Nealon, criticism and interpretation were major modes of social resistance because they “interrupted” authoritarian claims to truth: if I can show that your “truth” is a product of history and social forces, that it can just as easily be re-construed and reinterpreted to serve another agenda of “truth,” then I have stopped your totalitarian effort in its tracks. That is precisely what deconstruction does: it takes the piss out of the “Word of God,” in whatever form that takes, by reading it against itself — like a chump.

So what’s changed? Well, for one, our sitting President is so good at interrupting himself that we are left with very little to do. Despite its ironic redeployment even from the mouths of babes, deconstruction has done its work. We are so thoroughly aware that truth claims are specious that pointing this out has ceased to be an act of social resistance and has become a tiresome exercise in stating the obvious. Our culture, down to our economics and politics, now takes plurality and indeterminacy as its starting point. Hell, interruption is a form of marketing now.

This might be more encouraging if we also lived in a meritocracy of free ideas, but this hasn’t happened either. Instead, we see splintered ideologies going to war with one another inside the public sphere of cooperative tolerance we pretend we live in. According to Rita Felski, we are so trained in suspicion that it has become part of our social muscle memory, to the point where we have no faith in conversation anymore.

Deconstruction did not teach us how to be hospitable to one another, as Derrida arguably intended. Rather it taught us we are factories of violence precisely because we are linguistic animals. We are so sure that every utterance is a power-play in disguise, designed to interrupt us and lock us down; obfuscation becomes an act of self-defense or, at worst, an act of preemptive intellectual terrorism. Even sounding academic signals you’re not really interested in having a conversation.

So what do we do about that?

A few years ago, when I attended the Western Conference on Christianity and Literature in California, the theme was “Shepherding Language: Restoring Our Faith in Words.” This is a good and noble pursuit, but as Jacques pointed out regarding the Southeast Evanston Association, “shepherding language” will always involve a particular idea of what that ought to look like. For some, such as one conference keynote speaker, “shepherding language” involves what Nealon elsewhere calls “a wholly untenable and manipulative fall back into tradition”: the brute insistence that there is a “plain and simple” way of doing things that has become obfuscated and overcomplicated. I don’t think anything has given the lie to that attitude quite like the information age: there are far too many things being said too quickly. Any “right way” of speaking is going to be drowned out in a moment. If there is already so much text, then why add any more?

On this note, Marilyn McEntyre has become one of my favorite people because of her refusal to avoid speaking. She speaks without certainty, insisting on the desire to communicate in a world that will automatically be suspicious of both clear speech and obfuscation. For McEntyre, “restoring faith in words” involves not rediscovering that faith but remaking it, re-investing it, and speaking “in good faith.” If Richard Kearney has prescribed “anatheism” as a return to or rediscovery of belief in God “after” one has lost faith, then perhaps McEntyre is proposing a redefinition of analogy: ana logos, a return to words after the death of words.

That’s what I hope for here, anyway. I am a fifth-year doctoral student in the English department at Loyola University Chicago. I am currently finishing a dissertation on three religious authors — Denise Levertov, Wendell Berry, and Eugene Vodolazkin — and exploring how they each “anatheistically” articulate beliefs in divine power and sovereignty, despite the fact that such things have been critiqued three ways to Sunday. In my comprehensive exams, I found myself saying that postmodernism was the “gauntlet” through which religious belief and identity have to pass in today’s world. One of my professors, Andrew McKenna, has insisted on calling it a “guillotine.” He’s probably more right.

So I’m conversing with the guillotine, and practicing ana logos: returning to language, documenting my journey through the rest of my doctorate and after, and being as conscious as I can about saying things in ways that matter. I talk about God, art, writing, politics, and being human. I practice talking about such things as though I were talking to a systems engineer (my best friend), a crisis counselor (my wife), my pastor, or even my mother-in-law …

Especially my mother-in-law.

This is my attempt to write “without footnotes.” Though, I suppose I’m doing the opposite.

These are all my footnotes.