THE BREAKFAST CLUB just turned 30! I wrote a short piece about the experience of teaching a film I adored in my youth to a brand new generation of students. The big surprise? They loved it as much as I do:
“It’s a hard thing, teaching students of another generation about a movie you loved as a child. Indeed, whenever I teach a film that I loved passionately in my youth—“E.T.,” “Star Wars,” and “Fast Times at Ridgemont High”—I try to divorce my affective attachment to it from my pedagogy. It’s not that I don’t let students know when I truly love a film—I gush about “Breathless” and “Double Indemnity” and “Killer of Sheep.” It’s just that I don’t trust the tastes I cultivated during my youth, back when my raw, hormonal heart dictated the music I listened to and the movies I watched. My undeveloped cinematic palate is somehow less authentic, at least to the teacher in me, than the tastes I formed post-college, when I began to study the cinema as a critical object. So I overcompensate for the love object. I try to point out its flaws ahead of time, to prepare myself for disappointment. I am sure they will find “The Breakfast Club” racist, close-minded, and unsatisfying. They will surely shit on my youth.”
Read the full piece here.
Several months ago I published a 2-part guide to the academic job market right here on my blog (for free!!!!!!!!!!), as a way to help other academics explain this bizarre, yearly ritual to family and friends. Indeed, several readers told me that the posts really *did* help them talk to their loved ones about the academic job market (talking about it is the first step!). Yes, I’m working miracles here, folks. And then, this happened:
“A few months ago, as I was sitting down to my morning coffee, several friends – all from very different circles of my life – sent me a link to an article, accompanied by some variation of the question: “Didn’t you already write this?” The article in question had just been published by a popular online publication, one that I read and link to regularly and that has close to 8 million readers.
Usually, when I read something online that’s similar to something I’ve already published on my tiny WordPress blog, I chalk it up to the great intellectual zeitgeist. Because great minds do, usually, think alike, especially when those minds are reading and writing and posting and sharing and tweeting in the same small, specialized online space. I am certain that most of the time, the author in question is not aware of me or my scholarship. It’s a world wide web out there, after all. Why would someone with a successful, paid writing career need to steal content from me, a rinky-dink blogger who gives her writing away for free?
But in this case, the writer in question was familiar with my work. She travels in the same small, specialized online space that I do. She partakes of the same zeitgeist. In fact, she had started following my blog just a few days after I posted the essay that she would later mimic in conceit, tone and even overall structure.
Ethically speaking, idea theft is just as egregious as plagiarism, especially when those ideas are stolen from free sites and appropriated by those who actually make a profit from their online labor.
When pressed on this point, the writer told me that she does read my blog. She even had it listed on her own blog’s (now-defunct) blogroll. But she denied reading my two most recent posts, the posts I accused her of copying. Therefore she refused to link to or cite my blog in her original piece, a piece that generated millions of page views, social media shares, praise and, of course, money, for both her and the publication for which she is a columnist.
So if a writer publishes a piece (and profits from a piece) that is substantially similar to a previously published piece, one which the writer had most certainly heard of, if not read, is this copyright infringement? Has this writer actually done something wrong?”
Well, Christian Exoo and I decided to try to find out. To read our article “Plagiarism, Patchwriting and the Race and Gender Hierarchy of Online Idea Theft” at TruthOut, click HERE.
My Mom takes her yearly Oscar picks seriously (you can read her previous picks here and here). She tries to see all of the critically acclaimed films. She even attended a screening of the Live Action Short Films. Her pick? Boogaloo and Graham (Michael Lennox and Ronan Blaney). I’ll have to take her word for it. You can watch that short here:
Amanda: We need to start by putting our cards on the table by telling the readers what we haven’t yet seen. I’ll admit, 2014 was a very busy year for me, and many of the nominated films never even played in my city, so while I tried to cram in as many of these as I could before our annual interview, there are still a lot I never saw. The critical darlings I have yet to see include: The Imitation Game (Morten Tyldum), Foxcatcher (Bennett Miller), Still Alice (Wash Westmoreland), The Judge (David Dobkin), Wild (Jean-Marc Vallée), and Inherent Vice (PT Anderson).
Mom: Of the big movies, I haven’t seen Selma (Ava DuVernay) or Still Alice. Everything else, I have seen.
Impressive! So what is your pick for Best Picture?
It’s very difficult this year; I loved all of them. One of my favorite films through the entire process was Theory of Everything (James Marsh). I thought it was phenomenal the way Hawking’s life was portrayed. But, recently I’ve changed my mind a little, maybe because I’ve been reading the reviews and editorials and American Sniper…
…was one that they truly loved because it managed to bring people back into the movie theaters.
It did, but is that what a Best Picture is? A film that a lot of people go to see? Because if that’s the case, then 50 Shades of Grey (Sam Taylor-Johnson) will win the Oscar for Best Picture next year.
Good point. But I loved American Sniper because its portrayal of an American soldier going to war, what he had to deal with, and the effect on him as well as his family, was so powerful.
I agree with you there. It was powerful. But for me, the big problem with American Sniper is that I thought its message was very confused. I don’t know what Clint Eastwood thought he was doing with that movie because it seems to be telling two very contradictory stories about contemporary war. I thought the film glamorized Chris Kyle (Bradley Cooper), who was a devoted soldier and an excellent marksman, but what was problematic for me was that it also glamorized what he was doing: shooting people. I think it’s possible to honor the soldier and tell his story without glamorizing killing. Having said that, it was a well-made film because Eastwood is a good filmmaker.
But Eastwood is giving us the perception of the soldier. If my son, or grandson, or granddaughter, were in the army and going in and out of those homes, I would like a sniper there to protect them!
I’m not disputing the need for a soldier like Chris Kyle. I’m disputing the way Eastwood depicted the act of killing, the way he glamorized it. I think that was irresponsible and I also think that’s precisely why the film is so popular right now.
Well, look, Eastwood was just following the book written by Chris Kyle, that’s all he did…
Look, we’re not going to agree on this one. But still, I keep coming back to Theory of Everything. That’s my pick for Best Picture. I know you didn’t care for it.
Can you tell me something that was good about Theory of Everything that *doesn’t* include Eddie Redmayne’s performance?
Well, it’s the story of this genius and what he endured and what he’s accomplished and the fact that he’s still alive. I thought it was a beautiful film.
Let me pause you right there. I agree that Stephen Hawking’s story is amazing. The fact that he was struck with this debilitating disease and the fact that he was also a genius who profoundly shaped the way we understand the universe, those are two remarkable stories happening to the same man at once. But that’s my point — Hawking’s is a great story on its own, regardless of how the filmmaker tells it. I don’t feel like the movie did anything interesting with this truly remarkable story. I think Hawking deserves a better film than this. Eddie Redmayne’s performance is the only redeeming part of the film for me.
I should also say that one of the other best films I saw this year was Whiplash (Damien Chazelle).
I was very “meh” on Whiplash.
I thought the portrayal of Fletcher, by J.K. Simmons, and this young vulnerable boy (Miles Teller) at a music conservatory — and this was obviously supposed to be Juilliard — was so amazing. We get to watch this professor, who knew he found a great talent who was not responding, and the way he chose to address that. Definitely one of the most memorable films that I’ve seen.
I’ll give it that. And frankly, I thought Miles Teller, who plays Andrew, should have been given a Best Actor nomination…
I don’t know about that — he was so overshadowed by Simmons.
And you know, the problem for me this year is that there were a lot of really wonderful films. It’s hard to pick.
I disagree! I was disappointed with pretty much everything I saw.
So let’s talk about Boyhood (Richard Linklater). I thought it was very cute, very nice, how they filmed this boy from when he was a child up through his young adulthood but…
[she trails off, shrugs her shoulders]
I agree. I love Linklater — Dazed and Confused (1993) is one of my favorite films — and the performances in Boyhood were just fine. But had this been a film with different actors playing the role of Mason (Ellar Coltrane), I don’t think I would have given it a second thought.
Exactly. It was very sweet, but that’s all it was.
Boyhood was very satisfying. I really enjoyed the cinematic experience of watching a character age over time. Just lovely. But now that the film is over? I just don’t care. I mean, whatever, he grew up, so what?
Right. Another film we haven’t talked about is The Imitation Game, which was superb because of Benedict Cumberbatch. Other than that? It wasn’t that great.
And we can’t base a Best Picture pick solely on the strong performance of the lead actor. Let’s chat briefly about Grand Budapest Hotel (Wes Anderson). I am huge Wes Anderson fan. But I fell asleep during this movie and I had no interest in waking up.
I thought it was stupid!
I also thought it was stupid. I guess we can move on then?
I’d like to talk about Selma even though you haven’t had a chance to see it yet. I thought the film was well-directed, but there was nothing remarkable about it. The best thing about it — beyond the story — was the performance by David Oyelowo. I thought he was really great in this role and that he absolutely should have been given a Best Actor nomination. Eddie Redmayne remains my Best Actor pick but Oyelowo’s portrayal of Martin Luther King, Jr. was subtle and powerful at the same time. It would have been very easy to get it wrong — after all, we’ve all seen footage of King giving speeches on television, but Oyelowo did a lovely job. But as for the film itself? Like Theory of Everything, Selma tells a remarkable story in a rather bland film. A Best Picture has to do something other than pick a good story. It’s also about how the story is told.
Let’s move on to the only movie that I think should never win Best Picture, and that is Birdman or (The Unexpected Virtue of Ignorance) (Alejandro González Iñárritu).
And that is actually my pick for Best Picture.
And let me tell you why. It was the only movie from the Best Picture list that really stuck with me after it was over. The movie is essentially, except for the very end, a single take (or what appears to be a single take). The movie never pauses. To achieve this — beyond the use of CGI to mask the moments where the cuts actually took place — Iñárritu and his crew had to plot out where every actor had to be at every moment. Everyone had to hit their marks at the exact right time. To do all that, to coordinate all that with everyone on set, is amazing. And then on top of the technical feats, I thought the story of an aging action hero trying to do an adaptation of a Raymond Carver story for the stage, was really smart and timely. It stuck with me.
So you want Birdman?
As Best Picture of the year?
So let’s talk quickly about the acting categories. We’ll start with Best Actor in a leading role.
This one is tough for me. Through the entire season, Eddie Redmayne has been my choice. But there was another actor whose performance has haunted me, and that’s Steve Carell in Foxcatcher. He is very disturbing — you hate him. He attracts these young men who are wrestlers and he wants to be their coach, their father, everything. I went home and Googled this guy, du Pont, and Steve Carell looked, talked, and acted just like this man. It was one of the most disturbing films I’ve ever watched, but Carell’s performance as this mentally deranged man was really impressive. Still, Redmayne is my pick.
Me too. He truly gave the best performance. And granted, the role provided a lot of opportunities for Redmayne to demonstrate his acting skill. He had to impersonate a living person whose demeanor and expressions are well known, and it was also a very physical role, mimicking the impact of ALS on his character’s body. It was pretty amazing…
So we agree on that. Let’s move on to Best Supporting Actor. I haven’t seen two of these movies, so…
Well, I’ve seen all of these films and if JK Simmons doesn’t win for Whiplash then I am done with these interviews.
Really? You’re willing to give up ever appearing on this blog again…
You’ve seen Whiplash?
Well then. I would accept no pay to keep me back in [your blog]…
So I’m going to have to call the Academy is what you’re saying?
Wow, that’s a shame, because I think Ed Norton should win for Birdman.
[noises of disgust, growls?, other indistinguishable noises]
Why can’t I have my pick?! On top of all the crazy stuff that was going on in that movie, all I could think about was Ed Norton. He was electric in that film: funny, twitchy, narcissistic. So dynamic…
I can’t even remember him from that film…
WHAT? We clearly didn’t see the same movie. He’s my pick, he’s not going to win…
He better not…
But he’s my pick all the same. Let it be noted.
[blows a raspberry]
Best Actress. Now I’ve only seen two of these so I don’t feel qualified to make a pick.
I’ve not seen Still Alice but I hear Julianne Moore is great.
She is great in everything she does. Let’s just both award this to Julianne Moore.
Yes, let’s do it! I’ll tell you why: I wasn’t impressed with any of the female roles this year.*
Know why? Because all the female roles were shit.
Well, you might be right.
Best Supporting Actress. Once again, I haven’t seen two of the nominees but I think Patricia Arquette deserves this one. I was so fascinated by her character. We meet her as this struggling single mother who gradually collects these degrees and then becomes an intellectual in her own right. But she keeps marrying the worst men.
I’ve seen all of them and none of them impressed me. They were all just there. Sure, we’ll give it to Patricia.
I just really hated Birdman.
*The day after this interview, Nana admitted to being a little hasty with her Best Actress pick. Over breakfast she told me she wanted to change her pick to Reese Witherspoon in Wild. She also called me from the road today to be sure that change was noted. Please note it.
I don’t normally cross-post between my blogs, as the readership differs for each. But today I very much want to share a story I wrote to illustrate a series of photographs my then-3-year-old daughter took just after the birth of her baby brother. If such things interest you, I encourage you to head over to Tell Us A Story and give it a read.
Here is how it begins:
“Only years later did I think to upload any of the hundreds of photos my daughter took with her brand new Fisher Price “Kid Tough” digital camera during the first few months of 2010. In addition to her burgeoning interest in amateur photography, it was during this time that my daughter learned what it meant to have a sibling, a brother who arrived, angry and red, late in the evening on that January 13th.”
Read the rest by clicking here.
Note: a big thanks to Vimala Pasupathi for the constructive conversations that culminated in this post.
If you are a college-level educator, you have most likely experienced the following scenario: a once-promising student stops attending class or turning in her assignments. You know this student, her work ethic and temperament, and thus, her uncharacteristic behavior concerns you. You send the student several email inquiries — gentle nudges about upcoming assignments, reminders that her grade is free-falling, offers to chat during your office hours. Finally, the student shows up in your office looking wan and shaken. She tells you she’s been having trouble getting up in the morning. The thought of leaving her bed exhausts her. She has no energy. She can’t concentrate. She is missing all of her classes, not just yours. She is in danger of failing the entire semester and losing her financial aid and if she loses her financial aid, she tells you, she’ll have nowhere to live. She looks at you, with tears in her eyes, grateful to finally have someone to talk to. It’s clear that this is the first time she’s articulated these spiraling fears to anyone out loud. “What should I do?” she asks you, and she means it. She wants you to tell her what she should do.
According to a 2012 survey conducted by the National Alliance on Mental Illness (NAMI), 64% of students polled said they dropped out of college for a mental health-related reason. A 2013 poll conducted by the Association for University and College Counseling Center Directors found that the top mental health concern among college students was anxiety (41.6%), followed by depression (36.4%) and relationship problems (35.8%). These numbers, apparently, have been on the rise since the mid-1990s, and Psychology Today’s Gregg Henriques believes it has become a full-scale crisis: the College Student Mental Health Crisis (CSMHC). These claims are not news to those of us who work with college students every day. Every year more and more students miss classes, skip entire semesters, and even drop out of school due to mental health issues. And those are just the students who openly discuss their mental health struggles. Many more remain silent, and thus undiagnosed and untreated.
These statistics are certainly troubling for professors who work with these students on a daily basis. But perhaps just as troubling are the increased responsibilities piled onto the already overburdened instructor, responsibilities no one is talking about. At the same time that universities are asking more and more of faculty in terms of assessment, recruitment and program development (on top of teaching, service and gasp! research), professors are now increasingly finding themselves in the position of playing armchair psychologist to their students. For those of us who work at universities catering to low-income, first-generation, or non-white college students, the odds that these students will have undiagnosed mental health struggles are even greater. Yet most faculty working today are not provided with the resources (in terms of training, time or, most importantly, financial compensation) to competently deal with this crisis in student mental health. And make no mistake: this has, for better or worse, become our responsibility. Paul Farmer, chief executive of Mind, believes:
Higher education institutions need to ensure not just that services are in place to support mental wellbeing, but that they proactively create a culture of openness where students feel able to talk about their mental health and are aware of the support that’s available.
Yes, today the college instructor frequently finds herself in the difficult position of having to simultaneously play the role of psychiatrist, family counselor, financial advisor, and life coach, all while having to make very real, very difficult decisions about the student’s academic future. The standard advice from the university is to send the student to campus mental health services, but these centers often have very long waits and/or find themselves underfunded and understaffed. As Arielle Eiser reports:
College counseling centers are frequently forced to devise creative ways to manage their growing caseloads. For example, 76.6 percent of college counseling directors reported that they had to reduce the number of visits for non-crisis patients to cope with the increasing overall number of clients.
More often than not, recommending that the student head to a campus counseling center means simply passing the buck. In my personal experiences at least, that student will disappear from campus, becoming one of the 64% who leave college due to mental health issues.
As an academic advisor my job is to shepherd a group of students through their English major — they must meet with me each semester to discuss their schedule, their progress towards graduation, and their academic standing. Each semester I get a list of student names, along with their registration code for the next semester (a process which ensures that students must meet with me prior to registering for classes). It always breaks my heart when I look at that list of advisees and see the ones with no registration code next to their names. These are the students who have not re-enrolled for the semester. These are the students I have lost.
If only I had checked in on that student after our last tearful meeting. If only I had taken the time to make sure she was still going to class, turning in her work, registering for her next semester. A single email, hastily written and sent, might have been the difference between staying in and dropping out. These are the kinds of emails my best self sends, the self I wish I were all the time, but which I am only when my deadlines are met, my children are healthy, and I’m caught up on Downton Abbey. These unmade choices torture me because they exist as possibilities, reminding me of everything I might have done and didn’t. My job and salary don’t depend on sending those emails. Therein lies the rub. When students fail and drop out of the system, who is to blame? It’s the student, sure, but it’s also those of us who are tasked with advising them. And it is this unpaid, unmarked labor that becomes “key” to student retention, a job which has, quite suddenly, been shuffled onto my already very full plate.
So much of the labor expected of faculty today, both on and off the tenure track, is unmarked and unpaid. As our salaries stagnate, our job descriptions inflate exponentially. Even as middle management (the dreaded Associate Deans) has skyrocketed over the last few years, faculty, ironically, are being asked to take on more and more of the management burden. Our department chairs no longer assess our research, service and teaching contributions. Instead, we assess ourselves and turn those documents in to our chair, who then quickly rifles through our summaries, offering us arbitrary numbers meant to represent our achievements. The university no longer assesses the value of our individual programs. Instead, we assess our programs — through Byzantine rubrics and committees and “objectives” — and then turn these documents in to our middle-management overlords for quick perusals. The university is no longer tasked with recruiting new students to our programs. No, that is now my responsibility, despite the fact that I have no training in marketing or recruitment. I am expected to spend my work hours (the hours for which I pay for childcare) pitching English courses to community college students or thinking of sexier ways to describe my courses to undeclared majors. And then, if my classes don’t fill up? Yeah, that’s my fault. And I’m told I have to teach freshman composition.
Almost every week I receive a new email announcing the formation of yet another subcommittee on which I am supposed to volunteer to serve. I should volunteer, you see, because we all need to pitch in together and help! We’re a team! Almost daily I receive an email inviting me to attend another training workshop that will show me how to better assess my program or better manage the time that is increasingly being taken up with deleting emails inviting me to time management seminars. There is simply not enough time.
So how do I help my anxious, depressed, spiraling-out-of-control students when I don’t even know how to help myself with these problems? If I ignore the students’ cries for help, their mental health is compromised. If I help them, mine is compromised. This zero-sum game involves just me and the students. One of us is going to lose, and right now, it’s both of us.
Academic writing has taken quite a bashing since, well, forever, and that’s not entirely undeserved. Academic writing can be pedantic, jargon-y, solipsistic and self-important. There are endless think pieces, editorials and New Yorker cartoons about the impenetrability of academese. In one such piece, “Why Academics Can’t Write,” Michael Billig explains:
Throughout the social sciences, we can find academics parading their big nouns and their noun-stuffed noun-phrases. By giving something an official name, especially a multi-noun name which can be shortened to an acronym, you can present yourself as having discovered something real—something to impress the inspectors from the Research Excellence Framework.
Yes, the implication here is that academics are always trying to make things — a movie, a poem, themselves and their writing — appear more important than they actually are. These pieces also argue that academics dress simple concepts up in big words in order to exclude those who have not had access to the same educational expertise. In “On Writing Well,” Stephen M. Walt argues:
jargon is a way for professional academics to remind ordinary people that they are part of a guild with specialized knowledge that outsiders lack…
This is how we control the perimeter, our critics charge; this is how we guard ourselves from interlopers. But this explanation seems odd. After all, the point of scholarship — of all those long hours of reading and studying and writing and editing — is to uncover truths, backed by research, and then to educate others. Sometimes we do that in the classroom for our students, of course, but even more significantly, we are supposed to be educating the world with our ideas. That’s especially true of academics (like me) employed by public universities, funded by taxpayer dollars. That money supports higher education so that we can (ideally) contribute to the world’s knowledge about our specific fields of study.
So if knowledge-sharing is the mission of the scholar, why would so many of us consciously want to create an environment of exclusion around our writing? As Steven Pinker asks in “Why Academics Stink at Writing”:
Why should a profession that trades in words and dedicates itself to the transmission of knowledge so often turn out prose that is turgid, soggy, wooden, bloated, clumsy, obscure, unpleasant to read, and impossible to understand?
Contrary to popular belief, academics don’t *just* write for other academics (that’s what conference presentations are for!). We write believing that what we’re writing has a point and purpose, that it will educate and edify. I’ve never met an academic who has asked for help with making her essay “more difficult to understand.” Now, of course, some academics do use jargon as subterfuge. Walt continues:
But if your prose is muddy and obscure or your arguments are hedged in every conceivable direction, then readers may not be able to figure out what you’re really saying and you can always dodge criticism by claiming to have been misunderstood…Bad writing thus becomes a form of academic camouflage designed to shield the author from criticism.
Walt, Billig, Pinker and everyone else who has, at one time or another, complained that a passage of academese was needlessly difficult to understand are right to be frustrated. I’ve made the same complaints myself. However, this generalized dismissal of “academese,” of dense, often-jargony prose that is nuanced, reflexive and even self-effacing, is, I’m afraid, just another bullet in the arsenal for those who believe that higher education is populated with uptight, boring, useless pedants who just talk and write out of some masturbatory infatuation with their own intelligence. The inherent distrust of scholarly language is, at its heart, a dismissal of academia itself.
Now I’ll be the first to agree that higher education is currently crippled by a series of interrelated and devastating problems — the adjunctification and devaluation of teachers, the overproduction of PhDs, tuition hikes, endless assessment bullshit, the inflation of middle-management (aka, the rise of the “ass deans”), MOOCs, racism, sexism, homophobia, ableism, ageism, it’s ALL there, people — but academese is the least egregious of these problems, don’t you think? Academese — that slow, nuanced, ponderous way of seeing the world — we are told, is a symptom of academia’s pretensions. But I think it’s one of our only saving graces.
The work I do is nuanced and specific. It requires hours of reading and thinking before a single word is typed. This work is boring at times — at times even dreadful — but it’s necessary for quality scholarship and sound arguments. Because once you start to research an idea — and I mean really research, beyond the first page of Google search results — you find that the ideas you had, those wonderful, catchy epiphanies that might make for a great headline or tweet, are not nearly as sound as you assumed. And so you go back, armed with the new knowledge you just gleaned, and adjust your original claim. Then you think some more and revise. It is slow work, but it’s necessary work. The fastest work I do is the writing for this blog, which I see as a space of discovery and intellectual growth. I try not to make grand claims for this blog, mostly for that reason.
The problem, then, with academic writing is that its core — the creation of careful, accurate ideas about the world — is born of research and revision and, most important of all, time. Time is needed. But our world is increasingly regulated by the ethic of the instant. We are losing our patience. We need content that comes quickly and often, content that can be read during a short morning commute or a long dump (sorry for the vulgarity, Ma), content that can be tweeted and retweeted and Tumblred and bit.ly-ed. And that content is great. It’s filled with interesting and dynamic ideas. But this content cannot replace the deep structures of thought that come from research and revision and time.
Let me show you what I mean by way of example: Alessandra Stanley’s New York Times profile of Shonda Rhimes, the one that opened by invoking the “angry black woman.”
Stanley has already taken quite a drubbing for this piece (and deservedly so) so I won’t add to the pile-on. But I do want to point out that had this profile been written by someone with a background in race and gender studies, not to mention the history of racial and gendered representation in television, this profile would have turned out very differently. I’m not saying that Stanley needed a PhD to properly write this piece. What I’m saying is: the woman needed to do her research. As Tressie McMillan Cottom explains:
Here’s the thing with using a stereotype to analyze counter hegemonic discourses. If you use the trope to critique race instead of critiquing racism, no matter what you say next the story is about the stereotype. That’s the entire purpose of stereotypes. They are convenient, if lazy, vehicles of communication. The “angry black woman” traffics in a specific history of oppression, violence and erasure just like the “spicy Latina” and “smart Asian”. They are effective because they work. They conjure immediate maps of cognitive interpretation. When you’re pressed for space or time or simply disinclined to engage complexities, stereotypes are hard to resist. They deliver the sensory perception of understanding while obfuscating. That’s their power and, when the stereotype is about you, their peril.
Wanna guess why Cottom’s perspective on this is so nuanced and careful? Because she studies this shit. Imagine that: knowing what you’re talking about before you hit “publish.”
Or how about this recent piece on the “rise” of black British actors in America?
Carter’s profile of black British actors in Hollywood does a great job of repeating everything said by her interview subjects but is completely lacking in an analysis of the complicated and fraught history of black American actors in Hollywood. And that perspective is very, very necessary for an essay claiming to be about “The Rise of the Black British Actor in America.” So what is someone like Carter to do? Well, she could start by changing the title of her essay to “Black British Actors Discuss Working in Hollywood.” Don’t make claims that you can’t fulfill. Because you see, in academia, “The Rise of the Black British Actor in America” would actually be a book-length project. It would require months, if not years, of careful research, writing, and revision. One simply cannot write about hard-working black British actors in Hollywood without mentioning the ridiculous dearth of good Hollywood roles for people of color. As Tambay A. Obenson rightly points out in his response to the piece:
Unless there’s a genuine collective will to get underneath the surface of it all, instead of just bulletin board-style engagement. There’s so much to unpack here, and if a conversation about the so-called “rise in black British actors in America” is to be had, a rather one-sided, short-sighted Buzzfeed piece doesn’t do much to inspire. It only further progresses previous theories that ultimately cause division within the diaspora.
But the internet has created the scholarship of the pastless present, where a subject’s history can be summed up in the last thinkpiece that was published about it, which was last week. And last week is, of course, ancient history. Quick and dirty analyses of entire decades, entire industries, entire races and genders, are generally easy and even enjoyable to read (simplicity is bliss!), and they often contain (some) good information. But many of them make claims they can’t support. They write checks their asses can’t cash. But you know who CAN cash those checks? Academics. In fact, those are some of the only checks we ever get to cash.
Academese can answer those broad questions, with actual facts and research and entire knowledge trajectories. As Obenson adds:
But the Buzzfeed piece is so bereft of essential data, that it’s tough to take it entirely seriously. If the attempt is to have a conversation about the central matter that the article seems to want to inform its readers on, it fails. There’s a far more comprehensive discussion to be had here.
A far more comprehensive discussion is exactly what academics have been trained to do. We’re good at it! Indeed, Obenson has yet to write a full response to the Buzzfeed piece because, wait for it, he has to do his research first: “But a black British invasion, there is not. I will take a look at this further, using actual data, after I complete my research of all roles given to black actors in American productions, over the last 5 years.” Now, look, I’m not shitting all over Carter or anyone else who has ever had to publish on a deadline in order to collect a paycheck. I understand that this is how online publishing often works. And Carter did a great job interviewing her subjects. It’s a thorough piece that will certainly influence Buzzfeed readers to go see Selma (Ava DuVernay). But it is not about the rise of the black British actor in America. It is an ad for Selma.
Now don’t get me wrong, I’m not calling for an end to short, pithy, generalized articles on the internet. I love those spurts of knowledge, those bite-sized bits of information. I may be well-versed in film and media (and really then, only my own small corner of it) but the rest of my understanding of what’s happening in the world of war and vaccines and space travel and Kim Kardashian comes from what I can read in 5-minute intervals while waiting for the pharmacist to fill my prescription. My working-mom brain, frankly, can’t handle much more than that. And that is how it should be; none among us can be an expert in everything, or even in more than a few things.
But here’s what I’m saying: we need to recognize that there is a difference between a 100,000-word academic book and a 1,500-word thinkpiece. They have different purposes and functions and audiences. We need to understand the conditions under which claims can be made and what evidence is necessary to support them. That’s why articles are peer-reviewed and monographs are carefully vetted before publication. Writers who are not experts can pick up these documents and read them and then…cite them! In academia we call this “scholarship.”
No, academic articles rarely yield snappy titles. They’re hard to summarize. Seriously, the next time you see an academic, corner them and ask them to summarize their latest research project in 140 characters — I dare you. But trust me, people — you don’t want to call for an end to academese. Because without detailed, nuanced, reflexive, overly-cited, and yes, even hedging writing, there can be no progress in thought. There can be no true thinkpieces. Without academese, everything is what the author says it is, an opinion tethered to air, a viral simulacrum of knowledge.
The WordPress.com stats helper monkeys prepared a 2014 annual report for this blog.
Here’s an excerpt:
The Louvre Museum has 8.5 million visitors per year. This blog was viewed about 99,000 times in 2014. If it were an exhibit at the Louvre Museum, it would take about 4 days for that many people to see it.
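For the curious (I couldn’t resist checking the stats monkeys’ arithmetic), the comparison holds up: 8.5 million visitors a year works out to roughly 23,300 people through the Louvre’s doors each day, so

$$\frac{99{,}000 \text{ views}}{8{,}500{,}000 / 365 \text{ visitors per day}} \approx \frac{99{,}000}{23{,}288} \approx 4.3 \text{ days}$$

About four days at the Louvre it is.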