From the EverNote Files: A Clutter of Lists

“Recreational Lists”
Created: 6/3/2015
Notebook: BeccaLists!

Contents (titles only):

  • Comic ideas to storyboard during boring meetings
  • Technology that makes me feel like it’s the future
  • Songs that Make Good Karaoke Picks – Sorted by Venue
  • Kinds of jokes I make/hear/like a lot
  • Ways contemporary academic life resembles Victorian England
  • Amazing book titles
  • “Iconic” clothing items I have owned
  • Articles I should probably write at some point
  • Names for wireless networks I wish I’d thought of first
  • Pros and cons of major citation styles as viewed by Lazy College Senior
  • Sweet collective nouns

***

Takeaway: My list obsession transcends personal/professional divisions like almost nothing else; might be worth trying to tap into that more in my teaching? If nothing else, it would give me an excuse to make even more lists for the followup ENote: “Recreational Lists 2: Back in the Habit (of Making Sweet Lists).”

Future Action: blog post about listing as a practice (form? genre?); portable lesson plan for teaching effective use of lists in academic writing, pre-writing, argumentative writing; return to recreational listmaking as your standard work break activity

Grammar Nazis, Correctness Police

One imagined/potential goal when I designed my dissertation project was to present a snapshot of students’ process for writing casually online. So I asked a lot of questions in early interviews about practices for re-reading posts, perceptions of others’ writing, how students make quality and credibility judgments based on others’ writing, etc. And almost invariably, they brought up “grammar” as a major marker of not only credibility but overall character. “I hate it when people start using all the chat speak stuff, that just drives me crazy,” one student told me.  “No one ever corrects themselves,” said another, “and I don’t do it because I don’t want to be a dick – but I notice.” “I’m a very big grammar person,” said another, “so even if I tried I can’t just accept ‘ur’ instead of ‘your.’ It just bothers me.”

Purportedly correct alternate answer: “80% of college students”

A disclaimer here: I’m not deeply read in this area; aside from some snapshots out of the Standard English debates I don’t know very much about how this is discussed in the field at large. In this post, however, I’m focusing almost entirely on my experience.

1. They think “grammar” means “correctness.”

“I’m kind of a grammar Nazi, so I always feel like I have to make sure I have my little like, quotation marks or whatever. And commas and stuff. [I] never say LOL, never shorten stuff.”

“I’m a very big grammar person, so even if I tried I can’t just be like ‘ur’ instead of ‘your.’ It just bothers me.”

“I hate it when people start using all the chat speak stuff.”

This shouldn’t surprise me but somehow it did. I think it’s because while both grammar and correctness qualify as relatively petty obsessions in my mind, an obsession with correctness in the context of casual online writing just seems ridiculous. Like using your finger to hold back a dam that broke three months ago. And I think students know – they must know – that a misspelling is not a grammar error. But it sounds less crotchety to say you like good grammar than to say that you expect everyone to always be correct. But this also points to something else – that students don’t seem to actually know what “correct grammar” means. Which perhaps goes some way towards explaining why it’s such a source of worry and preoccupation. It’s a cycle where they know they are expected to have “good grammar” but aren’t sure what that means –> they latch on to the markers of correctness they *do* know and hold on like the dickens –> they grow defensively critical when others don’t do the same because it adds credibility to their interpretation. Which brings me to the next point:

2. Grammar/correctness is a judge-y subject.

“At a certain point you shouldn’t judge that but at the same time you do – it does definitely affect your opinion of that person.”

“Ohhhh my gosh. Take 2 seconds to proofread it and just correct yourselves – it’ll make you seem so much smarter and will get your argument over, make it seem more valid.”

“If someone has poor grammar or punctuation and stuff, any of that kind of thing – you know, I hate to say it but yeah, it reduces my opinion of them. People who can’t formulate a proper argument or can’t logically defend it.”

This last answer is perhaps the most revealing, as it shows most directly the implicit association between basic errors and intellectual ability hiding in all the comments I heard about grammar (“grammar”) judgments. The overall light tone and liberal use of the “grammar nazi” appellation when talking about grammar obsessions indicate that students are, to some extent, aware of the potential silliness of focusing too much on these relatively minor issues. But alongside that are quotes like this, which show that even if they’re aware there’s a joke to be made at their expense they are also serious about their expectations. They do think that correctness signals something deeper about a writer, and they make judgments accordingly. These judgments aren’t limited to faceless internet strangers either. Students admitted that they apply them to their real-life friends as well – people they presumably know firsthand are not unintelligent. “My friends, if they say something hilarious and [I] want to retweet it but they use “u” instead of – I will NOT retweet “u.” It does bug me, yeah…I definitely look down if you text me just horrible “r u” kind of stuff. And I mean, I’m not going to say anything about it but I’m just, it just, I just feel like I’m texting an 8th grader.” And another student said that while she wouldn’t go so far as to “unfriend” someone who makes mistakes, “I do look at it differently and am like “really?” Especially if it’s a friend that I know is really smart and would get mad if I ever said something that was wrong, politics-wise, and they spell “your” wrong.”

My data doesn’t say anything about whether or not the average correctness of student writing has changed with the advent of digital communication. But it does seem to say, as evidenced by the quotes above, that students don’t think it should be changing. It suggests that students continue to hold the same kind of assumptions about Standard English proficiency (and deployment) as the generations preceding them. There’s some slippage here, mostly in regard to genre – for example, most students said that things are a bit different when it comes to instant messaging (though not, notably, texting). But overall, the influx of casual digital writing opportunities does not seem to have done much to change dominant notions about the link between correctness and quality. The fact that these judgments hold for real-life relationships and not just online strangers, in particular, pokes at the popular lament that online informality is eroding people’s standards. If these students judge not only strangers for basic errors but those they’ve chosen to be friends with in real life as well, then Standard English conventions clearly still have some real strength left.

3. While “grammar nazi” mindsets are dominant, they are not universal.

“It does definitely affect your opinion of that person – but at the same time at a certain point you shouldn’t judge.”

“There are the grammar trolls looking, looking for those things. You can be perfectly credible, everything cited, you can have one thing – forget a punctuation, misspell a word, first comment after that’s going to be ‘someone didn’t go to school, [someone] can’t spell anything.'”

“A lot of people go on [FB] just to correct people, like if someone wrote a sentence wrong in their status or something. Grammar nazis. I don’t like to do that to people – as long as you can understand them, I don’t see why it matters so much.”

Unlike the other sections, these three quotes are the only ones I have as evidence for this point. But all three of them point to a different reasoning for breaking with the grammar/correctness party line. The first one (referring to posts on Reddit) suggests that this student recognizes that there’s not a direct correlation between correctness and intellectual capacity. The second one is in the same vein, but from the other side – it assumes that errors happen even to the best of us, and that it’s the people who revel in those errors who are deserving of judgment. And the final one is simply indifferent – to her, “your” vs “you’re” is the opposite of a big deal.

Though it wasn’t a conscious hypothesis, in retrospect this is the view I expected students to take. Because it’s the one I’d like to have, that my ideal self would espouse and internalize. Because I firmly believe that, at least in the context of Facebook and Reddit, basic errors shouldn’t matter. While I haven’t kept up with contemporary work on the subject, the reading I did in coursework and prelims about Standard English and the role of conventions early in the field’s history made a strong impression on me. But I hate typos and misused homophones. I can’t not notice them, and they do irritate me. This makes me feel guilty and conflicted. And it makes me more than typically interested in how my students talk about this issue, because I sympathize with the tension, confusion and conviction that pepper their responses.

What does all this mean? I don’t know, because it turns out I’m writing an entirely different dissertation. But it’s something I hope to come back to. Because it feels significant (if only to my personal scholarly development). This is data I’d love to get in conversation with the theories and experiences of others.

 

Membership Anxiety in Digital Communities

In the run-up to Computers and Writing, the DRC fellows have been engaged in an interesting conversation about the communities we belong to – mostly digital, but face-to-face as well. The discussion started as ostensible planning for the panel/workshop we’ll be leading on our work with the DRC wiki this past year, and its ongoing attempt to build a history of digital rhetoric and writing through community efforts. We got to talking about how being a contributing member of any community – but particularly professional ones – is a commitment, something that takes investments of both time and effort. To help us think about how we ourselves make decisions about distributing our efforts, and about community membership in general, we started an email conversation by all answering these 4 seemingly simple questions:

  1. In what digital communities do you participate?
  2. In what f2f communities do you participate?
  3. As follow-ups to the two questions above… What do those communities do/accomplish — what is their shared purpose? Who facilitates the operation of those communities? What are the discourses and genres of those communities?
  4. How do you choose which communities to participate in? Why those communities over others? What “pressures” or demands do these communities place on your “attention,” and how do you negotiate those demands?

Answering these questions was surprisingly tough! I’m already thinking a lot about digital communities and participation these days, so any additional thinking about those topics inevitably pulls on a huge network of mental strings, making things instantly complicated. For example, the term “digital community” – what should I be defining as a community, and what’s more of a participatory or affinity-based space? Should I include those? Am I even really an active member in any of the digital communities I visit these days? And on and on.

I wasn’t the only one who struggled with defining community involvement and what should be included. It became clear from the first wave of the discussion that when we think of our community membership, what comes to mind is professional affiliations, or groups where our involvement has definite links to our professional interests. These lines weren’t always hard and fast, however; for example, one Fellow struggled to identify whether her activity on Wikipedia fell into “personal” or “professional.”

Of course, one of the great things about the rise of all these digital spaces is that it’s encouraged a more permeable boundary between professional and personal – allowing the different discourses, personas and spaces we inhabit to blend more easily, and without seeming out of place. This is something that comes up in the Florida School’s discussions about electracy. In their quest to “jump right in and shape the electronic apparatus” by “inventing new modes of discourse that take both critical theory and digital media for granted” (6), they advocate embracing a much more fluid approach to hat-wearing. When we create theories for understanding our hypermediated world, we need to be drawing from all four sectors of experience: family, entertainment, school (community history) and career (disciplinary field).

This idea fascinated me when I encountered it during my reading for prelims. The idea that successful scholarship not only could but should draw on the discourses and experiences of my life outside academia felt very strange, and at the same time very right. In the years since then, I’ve come to see this idea operating throughout the disciplinary activity around me. It seems bound up, though perhaps only implicitly, with the growing acceptance that all reading and writing is multimodal, and therefore invention today requires us to draw from a wide range of available modes and genres. (Is this still a “growing acceptance?” Or is this an accepted thing in Comp/Rhet now? It can be so hard to tell from inside this comfortable subtower of computers and writing.) As we extend the range of media and material from which we can (and arguably must) draw to do our best work, that range seems to be accommodating more of the sector-crossing that Ulmer and the other Electrates advocate.

It’s also possible, of course, that I see things this way because I’ve become more comfortable in my skin as an academic in the past few years. But the fact that these boundaries proved so hard to define for my fellow…Fellows supports the idea that we’re still adjusting to the idea that our experience sectors can’t be kept entirely separate anymore if we’re to be most successful within them, even if we might be more comfortable that way. In addition to struggling to define whether a particular community should be coded as “work” or “personal,” one fellow also commented that she’d initially left a community off her list because it seemed so directly personal – related to her interest in running. But she was able to see several clear ways in which her experiences in that community had enabled and shaped her participation in other more decidedly professional digital spaces. Moments like these suggest to me that the Electracy folks are right, at least for those of us in digital studies: the best work comes when we’re able to draw on all our available experiences. Just as I can’t make my best contributions to the digital projects of the DRC without drawing on my experiences with personal blogging or editing gaming-based wikis, I can’t make my best scholarly contribution to conversations about digital literacy and theories thereof without pulling on my experiences growing up in a house full of computers in various stages of assembly, or my own struggles to just get my damn website organized the way I want it. (The latter situation is still ongoing as of press time.)

This is one of the reasons I find it so useful to think in terms of affinity spaces, rather than – or I guess in addition to – communities. (It’s worth noting, if only in hopes of shaming myself to action, that despite being aggravated every time I visit by the shortcomings of this entry, I have not yet taken the step of editing it.) The whole idea of affinity spaces is that they let us think of groups of people in terms of their shared interest or purpose, rather than a shared set of traits or practices for interacting within the space. Members of affinity spaces might be active contributors – or they might just be active lurkers. They might be involved in the space because they have a strong personal curiosity about its central topic, or because the topic relates in some way to their professional activities. Or both. The idea of affinity spaces allows for that kind of diversity, and gives participants a way to think of themselves in relationship to the space that doesn’t bring with it a fixed set of requirements. Freed from the anxiety that comes with asking “am I really a member?”, it’s easier to step back and see the full network of spaces, groups and – yes – communities in which we participate.

Ode to #eng177

This past semester, I TA’d for Eng177: Literature and Videogames with my advisor, Jim Brown, as lecturer. I’ll probably be talking about my experience with this class in a number of posts over the next few months, but today I want specifically to reflect on the class’s Twitter component: five compulsory tweets per lecture on the class’s #eng177 hashtag, plus several assignments using Storify to turn those Tweets into curated reflections. This was an unexpectedly cool and complex addition to the course, and one I’ve been thinking about a lot as the semester wraps up.

Now, the ideal way to do this reflection would clearly have been as a Storify! But many students have already deleted or privatized their accounts now that the semester is completed; it’s also difficult to draw on Tweets from several months back, which is something I’d have wanted to do for sure. I’ve got hopes of putting one together anyhow, but for now a hyperlink-heavy blog post will have to do. A big thank-you to Peter Wagner, Cooper Chell, and Kevin Fentress for allowing me to use their work here. (And if there’s an easy workaround for the time span issue, please point me to it in a comment!)

TL;DR – Mandatory livetweeting in this lecture course had a lot of benefits I didn’t necessarily expect, and turned out to be pretty awesome.

When I learned from Jim that students would be livetweeting during lectures, I was a little skeptical. It’s not that I couldn’t imagine how livetweeting could possibly benefit the course – Jim said he’d done it before to some success, and I could imagine how networked notetaking might do some cool things. But the idea of encouraging students to actively use social media during lectures themselves was new to me, and I admit it – I wasn’t convinced they’d use it productively. (In my defense, the students were skeptical at first too – most notably on the grounds that having to tweet five times per lecture would prove a multitasking overload.)

In hindsight I feel pretty guilty about this reaction. It shows the kind of assumptions about how students interact with technology (especially in the context of their educations) that I try to push against in my own research – this idea that what I might initially assume about their use of Twitter in a class setting necessarily mirrors reality. And it also made me realize that my policy of not allowing laptops during class unless the day’s activities specifically call for them might be (/is probably) both outdated and based on the same kind of faulty assumptions.

Chris Gerben spoke really well to this concern in his 4C14 presentation; he made the point that, as is to some extent natural for academic disciplines, we tend to be behind the curve in the genres of communication we’re studying closely – that by the time we’re studying Facebook or blogs, students have moved on to using different spaces and forms, or at least using those ones differently. My own research bears this out so far; for example, while students are definitely still using Facebook regularly, they’re not using it for social networking or as a primary means of connecting with distant friends. If we’re going to keep up with the call to be thinking about, teaching, and engaging in composition in the newest of keys, Gerben argued, we really need to hurry ourselves along. We need to avoid getting our focus and assumptions stuck in how we tend to be composing digitally (she says in her blog post) and remember that students often do and see things differently. My experience Tweeting this semester wasn’t the most dramatic possible example of this, but it was enough to make me feel called out. To remind me that if I’m going to stay ahead of my assumptions, I need to be channeling Mad-Eye Moody.

#eng177 showed me that yes, students can use social media productively during class sessions themselves. It showed me how Twitter creates connections and facilitates interactions that wouldn’t happen otherwise within small, temporary communities like a semester-long course. It showed me how readily students will take up such opportunities and use them productively – and get into exactly the kind of exchanges, both argumentative and dialectic, that we hope for when we design the contents of each class meeting.

Exchanges weren’t the only way tweeting got used, of course. One of the primary practices was tweeting-as-notetaking – preserving important ideas from lecture. This was a reliable way to get in one’s five required Tweets without resorting to the kind of empty “here’s my last Tweet today” stuff we said wouldn’t receive credit. But in addition to being a good default, many students did seem to use them as a record of the lecture material – I saw such Tweets show up a lot in the final Storify assignments that asked them to connect lecture concepts to game playthroughs, pointing to these Tweets as an academic resource as well as an interactional tool. This practice also helped to allay students’ initial concerns that Tweeting combined with traditional notetaking would be too much to handle; by replacing “traditional” notes, either partially or completely, note-Tweeting helped ease the multitasking burden.

But as the semester went on, #eng177 did see considerable interaction between students (and instructors as well!). This was probably my favorite thing about the hashtag – the way it created all these little exchanges between the members of the course community that wouldn’t otherwise have taken place. Sometimes these exchanges were implicit, the stream bubbling with related comments about some provocative or controversial topic from that day’s material. This Storify by Cooper Chell documents possibly the best example of this, our first lecture on the game Gone Home; note how even outside direct replies to each other, students created a stream rich in different opinions and crisscrossing perspectives. And here’s one by Peter Wagner that shows some more direct conversations on a controversial but slightly less argumentative topic: different takes on the end of the novel Ready Player One. Through these exchanges, students were able to see that even though the class liked this novel significantly more than the last one we read, they still found the ending controversial. The Twitter stream gave them a place to exchange these thoughts – and in a form that is quick, easy and relatively fun.

Unlike with Blackboard-based discussion components, students seemed to relish the chance to post their opinions via Twitter, using language that was frequently informal but also active and expressive to do so. Having Twitter as a means of facilitating this kind of interaction was particularly welcome given the form of this particular class. It’s hard to make lectures of 100+ students discussion-friendly under any circumstances, and in this case it could be harder than usual in the discussion sections as well; the schedule of the class had the TAs meeting with students in double-sections of 40 students, held in the same space as the lecture, which while great for multimedia work wasn’t ideal for discussion. So having an extra forum for students to engage in back-and-forth with each other was really wonderful.

The livetweeting also led to a couple serendipitous encounters with figures outside the class community. Through the power of Twitter, it seems that game critics and developers can sometimes, Beetlejuice-like, be summoned by repeating their names. In this way, our class found itself interacting with Mike Bithell, creator of Thomas Was Alone; Porpentine, who wrote the Twine game Cyberqueen (among others); and game designer/scholar Ian Bogost. These interactions were all unplanned, coming up solely through the designers hearing (seeing?) themselves invoked through the class hashtag. And they contributed materially to the course experience, especially in the latter two cases. Bogost chimed in with advice on what it means to imitate another person’s writing rather than paraphrasing it, provoking students in both the content and style of his advice to better efforts; Porpentine ended up graciously agreeing to do a guest lecture via Skype about her experiences and advice as a game designer. Twitter also made it easy for the class to interact with an in-person guest lecturer, Matt Haselton; by using the class hashtag Matt was able to keep participating in the conversations his lecture started even after he’d gone back to his day job at Filament Games. These are things that wouldn’t have happened without Twitter, and which were extremely cool – both from my perspective and, it seemed to me, the students’ as well.

Finally, #eng177 was a really productive learning experience for me, as both an instructor and a user of social media. Though I try very hard to stay on top of it, I know I’m prone to exactly what Chris Gerben poked at in his talk – thinking through the frame of the digital genres I know best and use most. I don’t embrace new ones as prolifically or adventurously as I should, and until this semester this reticence included even something as relatively mainstream as Twitter. I had an account, but used it almost exclusively for following people in the field. Now, I know this is a perfectly legitimate use of Twitter – indeed, one that’s regarded by many as a best Tweeting practice. But having never really used it for interacting, I wasn’t really conscious of how it functioned – for example, how easily people are summoned into conversations when mentioned by name. Tweeting to #eng177 exposed me to how well Twitter can function for conversation between finite communities; how it can extend both digital and f2f conversations beyond the moment in which they initially take place; how it can make it easy to pull in outside material to existing conversations; and how it can connect you with members of the community you wouldn’t otherwise talk to. This last one in particular I found very valuable. While I naturally felt a bit more affinity with and interest in what my own crew had to say, it was refreshing to not be limited to just interacting with half the class. I started to look out for certain students whose ideas regularly connected with me – many of whom were in other sections. This is something I’d certainly have missed out on without #eng177.

For all these reasons, #eng177 was a really cool experience, and a wonderful addition to the course. I haven’t seen the evaluations yet, so jury’s still out on exactly what the students thought – but from watching the stream and hearing them chatter before and after class, I think it was at least a reasonable success for them too. Of course, there are always varying levels of involvement with a component like this – just like with any part of a course. Some students never rose above perfunctory use, going through the motions; others grasped it actively from day one. But mostly I saw an active in-between: the majority of students used it in both ways, sharing actively and naturally when they had something to contribute – which happened more often than I’d ever have thought at the beginning of the semester. Students (and the general ever-growing magic of digital media) proved the skeptic in me wrong, and gave me a new tool for my digital pedagogy toolbox. Of course, by the time I teach a 100+ lecture of my own we might all be communicating directly via hivemind – in which case I look forward to designing my first course-specific neural stamp – five mental waves per lecture required.

 

Reflecting on Telltale’s Walking Dead: Stakes, Death and Narrative

[Warning – this post contains minor spoilers for Season One of Telltale’s Walking Dead and Mass Effect 3]

Grant me the serenity to accept the things I cannot change, courage to change the things I can, and wisdom to know the difference.

This is a centuries-old and much-loved Christian prayer. If you swap out “courage” for “basic manual dexterity,” it is also a pretty good summary of how I felt playing much of Telltale’s Walking Dead. And while I got better at all three as the story progressed, the anxiety I felt over the stakes of my agency remained the defining characteristic of my experience.

Here’s what I knew about Walking Dead going in: that it was widely recognized as Really Awesome, that it used the medium to do some sweet things with narrative, and that Matt thought its narrative/game blend would make a good start on getting me used to using a controller. I hadn’t played a new video game in almost a decade, and so I didn’t have a lot of preconceived notions of what to expect.

I was therefore somewhat surprised to find myself playing a point-and-click adventure game much in the style of those I played as a child. It took me a while – longer than I like to admit – to notice the connection. Even though I know classic genres have been making a resurgence, it hadn’t occurred to me that my old-school playing days might have relevance here. The tension and immediacy of the real-time sequences contributed to the disconnect; that kind of “oh shit oh shit QUICKLY” is something I associated with point-and-clicks only by absence. But nerdery, like blood, will out: despite it being clear pretty early on that the “look” action is only useful in specific and pivotal situations, I still found myself compelled to look at Every. Single. Thing on the farm in Chapter 2.

See this farm? Because I have. In great detail.

The other big surprise was how deeply unnerving my character Lee’s death was, every time I let it happen. Even I have played enough video games to know that dying is part of the process. In fact I came into Walking Dead on the heels of playing through about half of Limbo, a game that’s literally been described as “trial and death.” You try something, it doesn’t work, you head back and try something else. But in Walking Dead, I found myself right from the start incredibly on edge about the prospect of letting Lee die. The first time he’s threatened with death, I was mashing the hell out of those buttons to get the cuffs off and the gun loaded. And there was a genuine sense of relief, of tension passed, when I managed it. The encounter with the zombie babysitter at Clementine’s house took me considerably longer to bring off safely – I think I died three times before I managed it – and by the end I was stressed. When Lee dies it upsets me. Even as early as those first few scenes.

The first time I let Lee die – traumatic for us both

Everyone knows stakes are high during the zombie apocalypse. And with writing this good it’s no surprise that you get emotionally attached to characters, and are affected by what happens to them. But when your character dies, you just come back to life and try it again, until you manage to survive the zombie teenager/crazed farmer/dangerous parkour in question. So why did keeping him alive stress me out so much? It’s more than just that potentially lethal scenarios tend to play out in real time (though that’s certainly a factor). The game manages to communicate a very real sense of stakes very quickly, both in terms of the environment – there are horrifying animated corpses ready to eat you everywhere and at any time – and in terms of character, and to blend those stakes together in a way that gave Lee’s deaths an atypical resonance for me. As I desperately tried to make my all-thumbs controller hand load and fire the rifle, my thought was not just “shit, I can’t let him die in his very first zombie encounter” but also “shit, I can’t let him die with all those demons on his chest.”

Since playing Walking Dead, I’ve played through most of the Mass Effect trilogy. I adore those games (post forthcoming), and like most fans of the series have an intense connection to my Shepard. She’s the product of 90+ hours of battles and narrative decisions, she looks like a hot Russian version of myself, and I am going to be so sad to let her go. Violet Shepard and I have a thing going at this point. But when she dies, I just sigh and boot up my last save. Of the many things in Mass Effect that make me feel (Mordin! NEVER FORGET), her dying in combat isn’t one of them. But in Walking Dead, Lee dying makes me feel something unique, a kind of nervous tension that I don’t get from any other moments in the game.

If you were in Lee’s shoes, you’d be angsty too

In both Mass Effect and Walking Dead, narrative is a strong part of the game’s reward system. That feeling of investment you build in your Shepard, and in Lee, is bound up in the game’s nature – it’s your reward for playing, part of what’s driving you to finish. In Mass Effect though, this narrative reward shares the stage with the rewards of a good shooter – levels gained, battles won in style, weapons upgraded, etc. You’re here for the story, but you’re doing other stuff as well – and it’s only in those “other stuff” moments, the moments when the game is not so much about narrative reward, that Shepard can die. Her death doesn’t end the narrative – it ends the battle. In Walking Dead, the narrative is pretty much the whole reward. And though Lee’s potential deaths tend to come during the most “gamelike” moments, for lack of a better term, those sequences remain firmly connected to the narrative progression. There’s no clear line separating the shooting and the story, as there tends to be in Mass Effect. When Lee dies, you feel the stakes of that death for the storyline – because you’re very much in it all the time. What happens to Clem now? Oh god, is everyone going to get eaten? By humans??? I have *got* to work on my button-mashing skills.

My feelings towards Lee were a weird blend of identification and readerly distance. Right from the first conversation in the police car, I found myself choosing answers according to my own values – what I hope I would do in his place. Sometimes this involved thinking ahead to the impact my choice might have on the story – but more often my choices were simply (if you’ll pardon the cliche) from the heart. This made the game a very revealing look into my own values, sometimes uncomfortably so. It created a stronger sense of empathy for his character than I think I’d have felt if I’d chosen more “results”-mindedly; it also increased my sense of responsibility for him.

It starts – the comfortable lie or the tragic truth?

This might become simple identification in another kind of game, but Walking Dead reminds you at just the right intervals that Lee’s not you. He often uses slightly different language than that of the player choice, for example – though a small thing, I found this to be one of the most effective means of underscoring his nature as independent from mine. And of course, Lee is a black man, an identity whose attendant stereotypes and prejudices, the game argues, survive literally to the end of our world. These reminders that Lee ≠ Becca heightened the stakes for my role in playing through him. It’s not myself I’m letting down when I die – better not mess this up.

This all means that while yes, I enjoyed the game, the predominant feeling throughout was one of stress and emotional exhaustion.*  Matt pointed out to me early on (probably in an effort to reduce my stress) that hey, this is a zombie realism story, not Grand Theft Auto: Zombie Narratives! – no one’s getting a happy ending. Intellectually, I knew my answering truthfully about this or that when a lie might have been more diplomatic had no impact on the central events of the story – the gang would have taken that field trip to Murder Farms in the next chapter either way. But somehow I was still fretting halfway through Chapter 2 about my readiness to have Lee step into the trap, wishing I’d been more cautious, even though I knew it wouldn’t have mattered.

I would argue that’s what makes Walking Dead such a successful example of its genre. It combines the walking narrative of games like Dear Esther, Gone Home, or The Stanley Parable with just the right amount of active risk, creating multiple levels on which to build the player’s emotional stake in the experience. Lee’s narrative choices (and some choices in real-time) shape how the main, unavoidable events of each chapter play out. But by providing the possible (if temporary) dead ends of in-chapter death, the game also gives you agency stakes in progression through those central events. If Lee dies, he’s not ever going to make it to Savannah. Even what coding dictates as the only path forward thus becomes, in a sense, the responsibility of the player. And while it can be intense, I found the result very rewarding.

*In retrospect, this feeling was heightened considerably by its being my first experience with a controller in almost a decade. When I play the second season now, the anxiety is less, the stakes feel a bit lower; I attribute a lot of that to my increased (though still limited) competence with the controller. But I also think Lee’s story was more intense in many ways as well. For all she drove me in Season One, I’m just not as connected to Clementine yet.

The Art of Trolling, Pt 1

One of the things I’d like to do with this blog is address some of the interesting results from my dissertation research that, for whatever reason, don’t seem likely to make it into the dissertation itself. When I think of such elements, there’s one that clearly rises to the top, that I’ve been itching to talk about for a while now: trolling.

For those who aren’t familiar with the term, online trolls are users who post in digital spaces with the intent of provoking others into emotional and intense responses. Well, really any online behavior intended to provoke a strong response is trolling – it doesn’t have to be written. Trap your Minecraft friend in an inescapable all-obsidian cube while they’re preoccupied with crafting? You’re a troll. Profess that Obamacare is responsible for a 25% rise in healthcare costs for 70% of Americans on a left-leaning social news site? You’re a troll.

There’s an easy reason why trolling isn’t going to make an appearance (or much of one) in my final project: I barely collected any data on it. Of all the students I interviewed, only one said they’d engaged in trolling online. And though a number of others alluded to trolling behavior as a reason they don’t engage in certain kinds of spaces, this took a massive backseat to general perceptions that online discourse among strangers is low-quality. Trolls, while not a positive, didn’t rate highly as a negative either.

This is in line with what we know about trolls from existing studies. (Apparently I’m not the only academic who finds this interesting.) A recent University of Manitoba study estimates that trolls make up less than 5% of internet users – pretty slim. And from one angle – and it’s a fair one, I’ll be the first to admit – it’s not that interesting. People have been saying inflammatory things just to get a rise out of others for like, ever. I think there’s a cave drawing to that effect in Lascaux. What made my interviewee’s comments stick in my mind, however, is the way that they paint trolling as a practice that resists some of the growing critiques of digital communication – and how they show it to be an activity that actively seeks to break down the reticence I saw from so many students to get into the fray and speak back.

This student – let’s call him Joel – said he actively enjoys both reading trolled conversations and indulging in some trolly behavior of his own. “I like reading comments from trolls just because they’re just – I laugh pretty hard at them,” he told me. “Especially at the people who actually respond to the trolls and think they’re being serious. People get pulled in, and it’s kind of a fun thing to watch.” Joel also said he did some trolling himself – specifically, that if he was going to write at all in a public setting, it was going to be as a troll. Joel clearly wasn’t interested in spending time engaging strangers in deep discussion of major issues (though, notably, he did say he’ll have such arguments with friends via social media) – but he did want to elicit responses from them. He liked writing to strangers in a way that made them want to write back – just not in a particularly noble way.

Joel did specify that he has limits – “I’d never take it to the level of mocking kids with cancer, that’s a little too far for me.” Joel’s trolling activity is writing-based, focused on people with strong views and taking place in persuasive exchanges; he’s not interested in mocking suicide victims or posting home addresses of pro-Tibet Chinese nationals. Joel engages in trolling as a form of intellectual and emotional release, “a form of expression I guess, to think about it. Not profoundly but abstractly…it’s kind of counterculture to that whole fact checking practice, to people who are the internet police.” Joel’s trolling is highly engaged, argument-centered, (pseudo)fact-based literate online activity, taking place on comment threads for political issues, scientific articles, social news debates.

This is pretty interesting. Because by and large, from what I found in my research, students don’t like commenting on things – but when they do, it’s on the same kinds of issues and arguments that Joel enjoys trolling. Fewer than half of them said they regularly (or even infrequently) write in online spaces beyond the sphere of social media. For this reason, Joel’s comments about being motivated to write because of the rewards trolling brings with it really stuck in my mind. And the more I think about it, the more there does seem to be something about trolling that bucks some of the drawbacks being bandied about concerning online writing.

For example, I’ve been reading up on Jodi Dean’s notion of “communicative capitalism” recently, having come across it in Mark Pepper’s awesome piece in Kairos. Basically, the upshot of Dean’s argument is that online writing, while prolific and seemingly democratic, does very little to create actual discourse or engagement between parties. Instead, it results in a cycle where people are tossing thoughts out there in writing just for the sake of having created that bit of communication, rather than with the intention of getting others to engage with those ideas in any way. “The exchange value of messages overtakes their use value,” says Dean. “Uncoupled from contexts of action and application…[the message’s] particular content is irrelevant. Who sent it is irrelevant. Who receives it is irrelevant. That it need be responded to is irrelevant. The only thing that is relevant is circulation, the addition to the pool.” The goal is to hear yourself talk so you can feel good about having done so – not so others will talk back to you.

When you troll, though, interaction is the whole point. A trolling comment that elicits no response is a total failure. Trolling is a form of online participation with a very specific rhetorical goal – evoke response. This sets it up in opposition to Dean’s theory, which posits self-satisfaction and individual expression as the driving motivation for online writing. A troll has to pay attention to their audience – if you don’t know what kind of bait to leave them, you won’t have much success in getting them to take it. And at higher levels of the “art,” you need to have some knowledge of the issue you’re trolling about. For example, Joel told me that his favorite places to troll are those where the other commenters are (or fancy themselves to be) very knowledgeable and fact-focused about the matter at hand, because such audiences can’t let weak or untrue fact-based claims slide. So Joel will link to unscientific or low-credibility sources in his posts, knowing that others in the conversation will feel the need to jump in and roundly demonstrate with their own sources why his claims are wrong.

Can this kind of exchange be termed, as Pepper puts it, “a sparking of in-depth discourse?” Probably not. No one’s mind is being changed, and emotions are probably running higher than reason on both sides. But both the trollee and the troll are drawing on rhetorical skills to meet their respective goals. The troll is feeling out the audience and choosing their words and tools accordingly; the trollee is looking to what’s been said and drawing on their own arsenal to respond to its argument. Not in every case – a lot of trolling devolves pretty quickly into name-calling and exasperation. But Joel says he’s in it for the ones that don’t – that he gets the most pleasure from making people bend over backwards trying to show him why he’s wrong, when all the time he feels it should be clear he *can’t* be shown that, because it should be obvious he’s just a troll. So I’d argue that there’s engagement here, on both sides – that for Joel (and presumably therefore at least some subset of the trolls out there) the content is *not*, as Dean says of online communication, irrelevant; it is in fact essential to creating engagement. This makes trolling, at least in my view, very interesting. It is a form of online writing whose sole and central purpose is engaging in conversation (loosely defined) with others. In a landscape where more than half of students never get around to leaving any comment, and where barriers to writing online are arguably much higher than we are aware of, this seems like not such a bad goal to me – problematic though it may be.

Trolling has a lot going against it, and I don’t want it to seem as if I’m arguing in its favor. There’s evidence, in fact, that the presence of trolls in online debates has a decidedly negative effect on dialectic, causing people to “double down” on their preexisting views rather than opening them up to any actual facts or alternate perspectives being presented. And my own research suggests that trolling may play a role in discouraging writing activity online in a wide variety of contexts – not only comment threads for political or scientific articles, but in more strongly affinity-based spaces as well. What I’m saying, rather, is that examining trolling practices and motivations more closely might tell us some useful things about how written engagement works online. About what pushes people to engage in written conversation with strangers, and what makes them want to do so in the first place.

(Image credit: Mike Puncekar)

Death of a Dispatcher: The Rhetoric of Alterna-Cab Services

In his recent piece for The Atlantic, “The App-Filled Future of Luxury is Avoiding People,” Ian Bogost makes the claim that for many startups these days, particularly service-oriented ones, technology elbows out the actual product in terms of what’s being marketed. Focusing specifically on transportation services, he claims that “utility is beside the point” when it comes to designing customer apps, because what’s really being marketed is a service for people who like to use technology. “When tech startups flaunt their apps, they’re often pandering to an audience that identifies with mobile and web technology, rather than one that needs to make use of it,” he says. “Like its techno-automotive cousins Tesla and Uber, services like Silvercar represent a shift from designing products and services to support a general population to focusing on an elite capable of wrangling, negotiating, or paying their way out of the drudgery of ordinary life.”

Hmm. Okay.

There are a lot of problems with Bogost’s argument here – including an ambiguity about what exactly it is. The title of the article suggests a point about the translation of elitism into avoidance of those of us common enough to be taking taxis and renting from Avis, but his conflation of tech-elite rental company Silvercar with companies like Uber and Tesla disrupts this line of reasoning pretty glaringly. Uber’s website bills the individual personalities of its fleet of drivers as a distinct perk of their service; Tesla may be limited to luxury cars at the moment, but they are also very open that their endgame is to make a reliable electric car for the everyman. Namechecking these companies to shore up an argument that tech-for-tech’s-sake has become a staple trapping of the wannabe-elite is counterproductive.

But nested within this somewhat confusing argument about elitism and transportation startups, Bogost makes an interesting point about the rhetoric of startups like these. Reserving a rental car via app, he argues, has few if any material advantages over doing it via phone or computer. Yet companies like Silvercar are selling themselves in large part based on their app-centricity. “This is technology for show,” says Bogost, “produced for rhetorical effect rather than functionality.” The rhetoric surrounding these companies sells them as using technology because that’s what we do to set ourselves apart now – rather than because that technology is actually an improvement over the previous way of doing business.

Here, alterna-taxi services like Uber and Lyft make perfect test cases. Calling for a taxi is pretty straightforward in principle – you call a number, give the dispatcher some basic information, and wait at the specified point for your clearly marked taxi. Alternately, you wait in a designated public space (possibly alongside other people with the same goal) for a taxi to cruise by; you might end up sharing it with some of those strangers depending on your respective destinations and patience levels. There are ways one can imagine an app materially improving on this system (for example, by removing the need to know your exact address) – but are those the features being showcased in the marketing rhetoric?

Let’s look at Uber first. Here’s the first image you see when arriving at Uber’s site:

Uber Welcome

Coming off of Bogost’s analysis, this is pretty damning. Well, “damning” is probably strong – it’s not a crime, per se, to prey on our desire to keep up with the mobile-savvy Joneses. But there’s a definite, specific care here to underline that Uber is an app-based service. You’re connected with your ride via the safety and isolation of your very own smartphone; it’s the app that gets your ride on its way, nothing so human as a dispatcher. You won’t have to speak to anyone to arrange your ride. Lest this seem too impersonal, it’s clear that you’ll have to speak to your driver. No Google Cars here, good sir. But technology will be making that connection “at the tap of a button,” emphasizing that this process is screen-based. And we all know, screens are cool. Screens are cosmopolitan, monochrome, multi-cultural cool.

But anyone who uses screens on the regular also knows that they can be really difficult to deal with sometimes. They freeze, they crash, they are sometimes confusing. And they don’t respond well to unexpected problems. What do I do if my Uber ride doesn’t show up? Will I *gasp* have to call someone? Is it really any faster to use Uber than to call a (competent) dispatcher and request a cab with, you know, my voice? This image sells Uber by creating an association between app use and convenient cool – but it’s not clear in this (or in any of the other four banner images) exactly how this app-based taxi is improving on the process of calling except because, you know, screens! This is not to say that actual, concrete advantages to using an app for taxi service do not exist – simply that they’re not a rhetorical focus here. Uber is pretty much literally advertising an app, rather than a service. “The Uber App” connects me with my driver. “The Uber App” allows me to “request, ride and pay using my mobile phone.”

Contrast this to Lyft, a similar service with a different rhetorical approach. Here’s what greets me at Lyft.com:

Lyft welcome shot

As with Uber, there’s still no chance I’m going to miss the fact that this is a service that revolves around an app. But unlike Uber, the app is not the entirety of the sales pitch. Here, the experience of interacting with the driver is elevated to top billing – rather than showcasing the lack of interaction, Lyft wants to draw my attention to how their app powers up the interactional portion of the proceedings. This is a service facilitated by an app – but by drawing attention to a quality not directly tied to the tech component of the process, Lyft presents itself as more than just a screen-for-phone swap. It’s selling based on its app, yes – but also on the basis of purported material advantages over the traditional taxi system.

This divergence continues as we move past the banners on both sites. Uber presents the advantages of their service as a series of thumbnails depicting the app in action on a shiny iPhone, overlaid with brief descriptions (“Reliable pickups!” “Clean pricing!”). Here, we’re seeing the material side of what Uber’s offering, but firmly – literally – within the scope of its app. The app is clearly the product; while these are arguably real improvements (some of them at least) it’s very clear that their utility is infinitely leveled up by the fact that it’s APP-BASED. Can a normal taxi driver split my fare? Of course. Can I get a normal taxi with one click? Sure – I click “Badger Cab” in my contacts list. Uber’s rhetoric is aimed at communicating the deep uncoolness of that in the face of app-ness. Lyft, on the other hand, has only one image of its app on the homepage, instead featuring blocks of text describing the service as “easy,” “affordable,” and “friendly” – only one of which is presented as fully dependent on the app-based nature of the service.

Which brings me back to Bogost’s point – that technology in cases like these is functioning more for rhetorical effect than for functionality. My brief case study of Uber and Lyft doesn’t prove this one way or the other. I’ve never used either of these apps myself (though I did download and poke around with both), and the few friends I polled who have used them had no strong feelings about their functionality either way. And from my look over this alternate-taxi model, there do seem to be some distinct potential advantages to using an app besides just avoiding the rabble and looking cool. But Uber’s intense rhetorical focus on their app-ness supports Bogost’s assertion that functionality is an increasingly tangential selling point – particularly for the elite. Lyft is clearly aimed at a younger, less financially independent demographic (no company courting the Wall Street or Capitol Hill suit set chooses a furry fuchsia mustache as its corporate symbol); it correspondingly focuses less on its app-ness, but not much more on that app’s functionality. “Our app takes the interaction out of calling a cab!” Uber’s rhetoric declares. “Our app makes the interaction in calling a cab way more awesome!” says Lyft’s.

Of course, in the end these are both elite services, precisely because they are app-based. To use either one, you have to have a smartphone. All the discussion swirling around these models, the criticism of city governments’ perceived efforts to hamper them by holding them to the same standards as traditional taxi services, has an obfuscating rhetoric of its own – aimed at holding these services up as scrappy underdogs, fighting for us little guys and our potential best consumer experience. Lyft and Uber are not services for underdogs. They are for middle- and upper middle-classites who are too hip to make phone calls. The main difference in their rhetoric, then, boils down to this: Uber’s approach is aimed at those who want to underline this quality; Lyft’s is aimed at those who are trying to forget it.

Time to Rock, Blog-style

I’ve made numerous forays into professional/academic-related (or perhaps “suitable” is the more apt term – who can really keep their discourse spheres separate these days?) blogging in the past, but while several of them have been lengthy, none has really been quite what I want tied to something as permanent as a custom domain.

But no more! Behold, beccatarsa.com and its associated content! After my false starts and dry runs, I’m pleased to have this site up and running, where I’ll be hosting more permanent and publishable versions of my rhet/comp-related ideas. The stock, so to speak, of my digital thinking.

This post is mostly a placeholder while I continue to fool with layout and settings – but stay tuned. I’ve got many feelings about my work and digital leisure stored up for this project, and I’m excited to share them.