Digital Communities in Action: Reddit and the Unidan Ban

Recently, there was a major incident on Reddit – its most popular user, Unidan, was banned. If you’re a Redditor, you probably already know this – it was incredibly hard to miss.  If you’re not, here’s a very brief Reddit primer: Reddit is a large content aggregator and participatory digital space, composed of thousands of “subreddits” – forums devoted to a particular topic, interest or pursuit. Reddit uses a “karma”-based voting system to determine the ranking, and therefore visibility, of content throughout the site – an “upvote” earns a post and its author 1 karma point, and a downvote removes one. Karma has no value apart from status, but it is a huge deal within the community and drives participation and content production in all kinds of ways.
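(For the programming-inclined: the mechanic really is as simple as it sounds. Here's a toy sketch of score-based ranking – my own illustration, not Reddit's actual ranking algorithm, which also weighs things like post age and fuzzes the displayed totals.)

```python
# Toy illustration of karma-based ranking (not Reddit's real algorithm,
# which also factors in post age and fuzzes vote totals).
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    upvotes: int
    downvotes: int

    @property
    def karma(self) -> int:
        # An upvote adds a point; a downvote removes one.
        return self.upvotes - self.downvotes

posts = [
    Post("TIL a thing about crows", 412, 38),
    Post("My shitty watercolor of a capybara", 980, 12),
    Post("An unpopular opinion about semicolons", 55, 140),
]

# Higher-karma posts sort to the top, so they get seen (and voted on) more.
for post in sorted(posts, key=lambda p: p.karma, reverse=True):
    print(f"{post.karma:>5}  {post.title}")
```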

Here’s a bio of Unidan written before his banning. He was wildly popular, with a karma count through the roof, and arguably had more name recognition than any other user (with the possible exception of successful novelty accounts like shitty_watercolors and awildsketchappeared). On July 28th, 2014, he was permanently banned by the admins and his account was deleted, sparking a flurry of posting and voting activity throughout the site.

There are a bunch of cool things on display in the Unidan kerfuffle – lessons about the digital landscape and the qualities of its communities – but I’m going to focus on just two here.

1) It’s evidence of Reddit as a unified community
Since it shows up a lot in the student interviews for my dissertation, I’ve done a lot of thinking about how to classify Reddit and its warren of subspaces. On the one hand, Reddit’s raison d’être is as a home for affinity spaces – a tool for helping people build spaces devoted to specific common interests and pursuits, with their own membership, character, content, and associated practices. One of the arguments I make in my intro (through an extended and serious close reading of r/AdviceAnimals that was not as fun to write as I expected) is that even subreddits with loosely defined goals and interests can – and should – be considered affinity spaces. Since on average a randomly selected handful of subreddits will have widely divergent topics, frequently leading to equally divergent discourse conventions and posting practices, Reddit as a whole begins to look less like an affinity space and more like a routing station for them – a one-stop shop for (almost) all your affinity needs.

But the aftermath of the Unidan banning makes it extremely clear that while it may not be an affinity space, Reddit is a community. Not a community of practice; not an affinity space; and more than simply a participatory culture. It’s a straight-up community, bitches. Unidan, charming and/or notorious as he can be by turns, can’t really be termed a “common interest” as digital participation scholars like Gee and Halverson use the concept. His ban didn’t trigger a seismic posting event because all those people feel such a continuous and underlying affinity for the project of critically recording and analyzing his activity. People tuned in and contributed to the Unidan debate in such numbers because it was an event of cultural importance.

It’s certainly possible to detect this underlying cultural thread outside of unusual/abnormal/major events like this one – by tracing the scope of the Reddiquette’s influence, for example, or analyzing responses to “Good Guy Reddit” incidents, where Redditors’ real-life actions have a notably positive impact for others. But what’s great about the Unidan Ban is how starkly it shows that culture cutting across the divides that most obscure it at calmer moments. r/AdviceAnimals thinks he’s a martyr to another user’s oversensitivity; r/subredditdrama thinks he’s high on his own drama; and r/TIFU is just having none of it. (And then r/AdviceAnimals abruptly changes sides, because of course they did.) But wherever you look, Redditors are showing – not just with the usual votes, but with an outpouring of words as well – that this is a Community Matter, and that they are part of that community.

2) Digital communities make the collective memory process highly traceable – and Reddit particularly so
Reading through the various threads about Unidan’s banning, you can literally see public history in process. The AdviceAnimals side-switching is a great example of this: as the hours go on, the community’s version of the story switches from one where /u/Ecka6 (a user who got into a fight with Unidan shortly before his ban) is to blame, to one where Unidan himself is at fault. Because far more people on Reddit vote than contribute written comments, you can trace the influence of the story depicted in a given thread on the wider community by watching the voting activity it creates: we know /u/Ecka6’s guilt was a community narrative and not just the opinion of the voices in the one thread because the comments by those voices received many upvotes, and because /u/Ecka6’s posts – not just those to Unidan but all of the posts in their recent history – received a huge number of downvotes in the hours following the narrative’s creation.

By watching the voting tide shift in response to new comments/threads revising that narrative, you can see how the changes spread from individuals to the community as a whole. For a while in the middle, both /u/Ecka6 and Unidan are being “brigaded” by the AA community, until the new (and more or less final) narrative spreads far enough to become the single accepted story. This same process, or some version of it, took place across Reddit as a whole – posts by individuals shifting subreddit communities, those communities using their voting power to influence the visibility of posts in other places, until over time what’s left is a (relatively) unified and widely accepted picture of events.

Unfortunately, the very thing that makes this process so uniquely visible on Reddit makes it hard to document after the fact. Karma scores for comments and threads change constantly. And since the default settings display comments by score (making what’s currently popular most visible at a given time), this creates a snowball effect that’s hard to account for unless you saw it happen. Adding to this difficulty is the fact that comments, user accounts, and sometimes whole threads are frequently deleted entirely – sometimes by the author themselves, sometimes by moderators for violations of the Reddiquette. An interested academic party poking around after the fact can see that the deleted content was influential in some way, but not how or why. So unless you’re at ground zero for a major event and able to put some time into actively tracking it, you’re inevitably going to lose some of the story. A while back I had the idea of writing an article about Reddit’s “investigation” of the Boston Bomber and the effects of the detailed (and totally incorrect) theory they created. This could have made an awesome article – but I didn’t get past the idea stage because all the relevant threads had been locked or deleted by the time I got there. I was a bit faster on this Unidan post, but even so there’s a lot missing – the r/AA threads calling for downvotes against Unidan and /u/Ecka6 are gone, and I had to rely on secondhand accounts of events like this one rather than my own observations, which works for a blog but wouldn’t fly as an article.
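If I do find myself at ground zero for the next big event, the obvious fix is to snapshot the data while things are unfolding rather than trusting it to still be there later. Here’s a minimal sketch of what that might look like using the PRAW library (a Python wrapper for Reddit’s API) – the thread ID, credentials, and polling interval are all hypothetical placeholders, not anything from an actual study:

```python
# Periodically snapshot every comment's score in a thread, so later analysis
# doesn't depend on whatever karma totals (or comments) happen to survive.
# Hypothetical sketch: the thread ID, credentials, and interval are placeholders.
import csv
import time
from datetime import datetime, timezone

import praw  # third-party Reddit API wrapper

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="event-snapshot-sketch/0.1",
)

THREAD_ID = "abc123"  # placeholder: the submission you're tracking

def snapshot(thread_id, writer):
    """Write one timestamped row per comment currently visible in the thread."""
    submission = reddit.submission(id=thread_id)
    submission.comments.replace_more(limit=0)  # flatten "load more comments" stubs
    taken_at = datetime.now(timezone.utc).isoformat()
    for comment in submission.comments.list():
        writer.writerow([taken_at, comment.id, comment.score, comment.body])

with open("event_snapshots.csv", "a", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    while True:
        snapshot(THREAD_ID, writer)
        f.flush()  # keep the file current between polls
        time.sleep(15 * 60)  # re-poll every 15 minutes (arbitrary choice)
```

Even a crude record like this – comment scores and bodies stamped with a collection time – would preserve the vote swings and soon-to-be-deleted comments that vanish from the live site.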

Reddit has long struck me as a rich site for academic study. (I’m not the first to notice this potential. This article by Kristy Roschke on vegan subreddits as affinity spaces is the latest example to roll across my path, but there’s lots more – to say nothing of the community’s own meta-efforts.) Watching Unidan’s ban play out helped me articulate some of the reasons for that – but it also highlighted some potential research challenges I hadn’t considered before. The addition of the karma system to the already fast-moving digital community environment makes reviewing threads after the fact an incomplete and potentially even unreliable technique. And since the scope of the space makes surveying everything impossible, you’d need to think carefully about how to match your collection method to your research goals. Embedded ethnography seems like a possible match for some research goals, since it would enable you to study voting patterns more closely and increase your ability to notice and track major events from early on – but being limited to one or just a handful of subreddits would limit the scope of your findings as well, missing out on the breadth and depth that make Reddit such a unique site of inquiry.

Tracing out Reddit research designs for hypothetical research objectives would be a good mental exercise for me – to say nothing of giving me some strong starting points for future work. It’s added to my list of topics for future posts; until then, I’ll set the back of my mind to work generating some possible Reddit-friendly RQs.

Screencasting the Human Element

My thanks to Mike Shapiro for allowing me to share my tutor’s response to his work, and to Anne Wheeler, whose thoughtful framing of writing center “humanity” informed many parts of this post.

A screencast is a video that combines a recording of activity on the tutor’s screen with an audio recording of their comments during that activity. Essentially, it’s a video conference in miniature where writing replaces the tutor’s face. I’ve been helping the UW writing center pilot-test them as a response method for student drafts this summer, and it’s been a pretty great experience. I find them to be a really promising tool, one that pushes me to focus on structuring my comments clearly while allowing me to keep my “human touch” as a tutor.

Here’s how my screencast process plays out: After reading the draft a few times, I compose a little “script” on a Sticky note of the main points I’m going to make (the first of which is usually of the “existing strengths” variety), each with a brief explanation to guide me when I record. Each explanation has to include at least one concrete anchor in the text, which I highlight in bold; this helps guide my action in the draft during the recording, making it easy for me to find the sentences/phrases I want to visually highlight. If I have smaller comments that don’t fit under those main points, I’ll make an “if there’s time” category at the bottom of the script. Since I pretty much never have time left after my main points, I revisit this section post-recording. If the comments still seem important, I’ll add them as margin notes and send the draft along with the video. (You can watch the result here, if you’re interested.)

At UW, we use Jing to record screencasts. It’s got two major things to recommend it: first, it’s incredibly easy to use; and second, it limits screencasts to a maximum length of 5 minutes. (And also it’s free. Three reasons!) I cannot emphasize enough how useful this time limit is. As outlined in this wonderful blog post by Mike Shapiro, co-director of the Writing Center’s summer session, targeting a few high-level concerns with concrete suggestions for addressing them has been shown to be the most effective form of distance feedback. This sounds easy in theory, but I think most instructors will agree that it can be difficult in execution. You get bogged down in details and never get to the larger issues. You let jargon replace concrete examples in your revision suggestions because you forget that what’s clear to you may be opaque to them. (“Use less awkward constructions;” “work on connecting your ideas”) Or you simply get carried away, forgetting that less is usually more. (I’ve been guilty of all three, but I’m especially prone to the last one. It can be bad – like full single-spaced page bad.)

But in a screencast, you go in knowing you have only five minutes. And that in those five minutes, you have only your voice and what’s on the screen to work with. This makes it very hard not to anchor feedback in concrete examples – because otherwise, you’re just talking over a screenshot. Likewise, there’s no time to get caught in tiny details or grand abstractions. To comment effectively on an entire draft in 5 minutes, you have to have a plan. You have to be clear and concise, and you have to choose the 3 points (at most) that will most improve the draft. Combined with the emphasis on concrete anchoring in the document, I’ve found this to be a recipe for just the kind of feedback research (and experience!) encourages.

Screencasting also makes my “best practices” list for its ability to bring a human element into asynchronous feedback. A big part of writing center ethos is, as Michael Pemberton puts it, “being with people…seeing their faces, hearing their voices, reading their body language, experiencing a strong sense of presence as we talk.” Anyone who’s worked in a writing center knows that this “being with people” about their writing powers up your ability to help them as writers. Building rapport, demonstrating engagement with a student’s individual situation and writing through body language and tone, being able to talk through ideas and suggestions until they work – these are key tools in the writing tutor’s toolbox.

Digitally expanding the writing center can be a fraught topic because at first glance, practices like synchronous chat and email feedback seem poised to drain writing centers of this essential togetherness. If the human element is what makes writing centers successful, this argument goes, practices without that element will necessarily be less effective. (And if they are effective, they challenge a heavily-invested-in piece of the writing center narrative.) Lee-Ann Kastman Breuch worried in 2005 that “online tutoring goes much against the idea of a writing center – the idea of Burkean Parlors, of ongoing conversation” (31). While we understand more about the benefits of digital tutoring than we did then, many still share Breuch’s concern: a 2006 survey by Stephen Neaderhiser and Joanna Wolfe looking at delivery locations across 500 writing centers characterized the response of many administrators to the idea of online chat instruction as neutral at best, a kind of “they can come to us” attitude.

I would argue (and I’m hardly the first) that rather than seeing the human element as something threatened by digital expansion, it should be something that guides that process: we should choose and deploy digital practices that preserve or even expand that ethos. (There are other important things to consider alongside this, some of which I’ll look at in future posts.) Based on my experiences so far, screencasting seems like a slam dunk in this area, especially over email feedback.

When I give email feedback, I don’t get to draw on my full arsenal as a tutor; written words are (at least for me) far less powerful tools than spoken ones. Mike Shapiro commented in a recent discussion about this project that one of the major resources writing centers draw on is the thousands of hours’ experience we have talking about writing – and only a few hundred (if that) writing about it. Screencasting allows a tutor to draw on both the humanizing and pedagogical benefits of the WC’s “talk history.” Even if students can’t talk back to me, my being able to speak to them feels like a big step up from doing everything through writing. I regain tone as a tool for signaling engagement, enthusiasm and inquiry; this in turn boosts my ability to structure and present my comments effectively.

The comments Mike and I have received on our screencasting suggest that students like it just as much as I do. Some of this is for the reasons I’ve discussed above – and also, students just really like videos. I forget that all the time because I hate them; if I see a video instead of a written article I’m closing the tab before it can even load. But students are at home with them. So while video comments may seem like a step down to us, students (for the most part) don’t see it that way. And reaching someone where they are can go a long way towards making a human connection. (For more on how students feel about writing and videos, see every chapter in my dissertation.)

The future of screencasting at the UW Writing Center is still up in the air. But I know it will continue in at least one place: I’ll be using it to comment on the drafts of my Intermediate Composition students this fall. Because even if it doesn’t prove to be a best practice for writing instructors overall, it definitely seems to be one for me. Hopefully my students’ revisions and feedback will prove me right – and if not, show me where and why I’m wrong.

 

Works Cited
Breuch, Lee-Ann Kastman. “The Idea(s) of an Online Writing Center: In Search of a Conceptual Model.” Writing Center Journal 25.2 (2005): 21-37.

Neaderhiser, Stephen and Joanna Wolfe. “Between Technological Endorsement and Resistance: The State of Online Writing Centers.” Writing Center Journal 29.1 (2009): 49-77.

Pemberton, Michael. “Foreword.” In Beth L. Hewett’s The Online Writing Conference. Portsmouth: Heinemann, 2010.

What Games Don’t Teach pt II: Controller Woes

When Matt and I first started talking about videogames and education, one of the things that came up most prominently on his side was the idea of instructor illiteracy. Even if you can sell an instructor on the value of videogames for their teaching, there’s still the pragmatic obstacle posed by how foreign the medium is to many. Yes, teaching is a learning process for both students and instructors, admitting novice status can be a learning moment, etc etc. At the end of the day, teaching something you aren’t comfortable with yourself is much harder, and much less appealing a use of personal resources, than something you are. Such lesson units are challenges we actively set out to overcome, rather than accessible entry points for busy instructors on the fence.

I proposed a panel recently about this topic – how to present videogames to instructors (specifically in composition) as an effective learning tool for their classrooms. In putting it together, I thought a lot about what the obstacles to videogame pedagogy might be for these instructors.

Being new to videogames brings a lot of stuff to learn, some of which I discussed in my last post: inventory management, puzzle-solving strategies (try everything with everything!), conventions for moving the plot along (talk to everyone you know!). But I think some of these new skills pose higher barriers for bringing videogames into the classroom than others. Adapting to the ways games signal narrative significance, for example, is a learning process we’re used to; we’re practiced at picking conceptual patterns out of the noise. It’s the backbone of most of our training, both pedagogically and within our specific subjects. Not to suggest it’s a piece of cake picking up these tropes. But they are, I would argue, relatively easy for even a new-to-gaming instructor to recognize. And that recognition, when combined with a little faith that there’s something to this “games for education” stuff, can lower the access barrier enough to jump in.

But learning to deal with a modern controller? Oh hell no.

Physically mastering a controller has proven, for me, to be by far the most frustrating aspect of learning to game. Learning to drive an *actual* car feels like it was easier than learning to drive in LA Noire. It’s not intuitive at all! So many buttons! At this point, I’ve essentially given up. This difficulty mastering (or even adequate-ing) something that literally millions of people can do effortlessly is frustrating and embarrassing. So I can’t solve this puzzle – big deal. That’s what puzzles are for. The fact that manual un-dexterity prevents me from solving this puzzle despite knowing what to do, on the other hand – that’s so frustrating it almost cancels out the pleasure of play.

This is pretty much how I feel about controllers.

Learning to use a controller is not part of the fun and engaged learning that makes videogames so appealing to scholars and educators. It’s more like being presented with chopsticks for the first time when you are very very hungry, and the meal in front of you looks so good: you just have to master this one little set of movements to start enjoying yourself, but the stakes for that learning are high – your shirtfront; your character’s life. (Now granted, I can’t really use chopsticks well either so, you know, make of that what you will.)

My point here, though, is to ask how the physical difficulty of learning to play videogames might affect their adoption by instructors, especially those who are newer to digital teaching practices overall. This isn’t one of the obstacles I considered when writing my proposal, and that oversight seems so glaring now. The first time you pick up a controller or set your hands on WASD is inevitably awkward and distracting. That memory is very fresh for me, in a way I’d cautiously guess it is not for many of videogame pedagogy’s advocates. Fumbling to navigate a digital world obscures its potential for both enjoyment and learning; it replaces immersion with self-consciousness. If this is all you experience of videogames, their pedagogy is going to be a hard sell.

I watched this self-consciousness play out during #eng177’s lectures on Braid this spring. Jim brought the game up on the main screen a few times for live playthroughs of the sections we were discussing, by either himself or a volunteer student. While both he and the student who took over did an admirable job staying cool and navigating through the levels, it was obviously a bit nerve-wracking for both of them. Even for experienced players, controlling a game is a delicate process, one that’s quick to be thrown off by outside factors like the pressure of an audience or the need to narrate your actions as you go. Only two students were willing to step up at all, with one of them shouldering most of the playing throughout the unit. Doing anything in front of 120 peers/students is nerve-wracking, of course – but this experience suggested it’s particularly true of liveplaying.

Game controls have a steep learning curve: they’re foreign and challenging at first, but you catch on quickly. The trouble with steep learning curves for those long past them is that because the learning happens so quickly, it’s easy to forget how hard the initial experience can be. This probably doesn’t matter so much for teaching your mom or your Amish buddy to play Assassin’s Creed – but for the instructor testing out a potential teaching tool, that learning curve matters. The resemblance of digital tools like blogging to practices we’re already comfortable with gives them a relatively shallow learning curve by comparison. This isn’t a bad thing, but it is an obstacle to encouraging wider experimentation in composition with videogame pedagogy.

Those of us who believe in videogames’ potential for composition instruction need to keep this curve in mind when we evangelize. To remember to acknowledge, when we talk to curious newcomers, the initial awkwardness of the physical learning process as well as the great potential of the intellectual one. To let them know, if only in passing, that the frustration passes quickly, and when it does – that’s when things get cool. And maybe steer them away from consoles.

 

[In-post image: IncarnateFilms]

What Games Don’t Teach pt I: the Kelly Chambers Incident

Since Gee’s idea of Affinity Spaces is a major part of my dissertation’s conceptual frame, I’ve been reading a lot of stuff about the power of games as learning tools. (Also I date a Filament designer, which tends to bring with it a good dose of similarly themed conversations.) These arguments had been interesting in the abstract, but really came to life for me when I started playing myself. Because video games can be really complicated these days! Matt started me out relatively slow, with Knights of the Old Republic as my first fighting-style RPG, but even that felt dauntingly complex at the start. But the game made me want to play it despite the anxiety I felt about all this, and with a little help from Matt I learned the ropes before I was even off Taris. And now I’m playing Skyrim unassisted with only occasional anxiety about figuring out the skill trees.

Playing these games made me appreciate the conversation about games as uniquely powerful learning tools in a new and much more concrete way. Many of the games I played this past year are complicated by any definition, and yet make learning to harness and navigate those complications not only manageable but genuinely pleasurable. But in addition to teaching you the systems and rules specific to the particular game, video games also teach you how to navigate the rules of the genre overall. You learn how puzzle games work by playing a few of them, and then you have a leg up on the next one you come across. You learn that shotguns are for close range and rifles for long (or you’re supposed to – sometimes you’re an idiot and need your boyfriend to point it out to you); that sometimes you solve puzzles by trying every combination of action and object until something works; that levels and dungeons are usually one-way, and if you’re spending a lot of time backtracking through already-cleared environments you probably took a wrong turn. These conventions aren’t completely universal, of course – and expecting them to be sets you up to be caught by the inevitable moment when a game deliberately breaks them to mess with you (a practice that reached its purest and most meta form in The Stanley Parable).

But generally, by playing a bunch of games you learn not only how those games work, but what you can reasonably expect from other games like them. This is both cool and useful, as it essentially gives you a head start, even some shortcuts, for learning these new games in the future. It means that when you start a new game, you can, as Gee puts it in an account of playing real-time strategy games, “already know something, have a small foot up.” Each game is “a precursor for later learning” as well as a learning experience in and of itself. When you arrive at that later learning opportunity, you’ll be able to make useful and cautiously dependable assumptions about new systems and situations even before you’ve finished learning the specifics.

Most of my game experiences (at least as measured in hours of play) are with RPGs, in a genre I’d describe as “combat/narrative blends” – they have a lot of fighting, but a decent chunk of story as well (sometimes a very good one). Over the last 18 months, I’ve experienced a lot of these “small foot up” moments Gee talks about, with the result that each new game I pick up is easier to learn – and, maybe even more important to my overall play experience, navigating their environments and situations is more and more intuitive. The best example I can think of deals with level conventions: I’ve learned that I can reasonably expect each dungeon/mission/whatever to have one relatively obvious route through it from start to finish, and how to recognize the conventional visual signals used to mark it. I used to get painfully disoriented in even simple level layouts, to the point that it was embarrassing; I’d be looking over my shoulder hoping no one was witness to my getting lost in the Presidium again. But now that I have a few more RPG notches in my belt, I have a much better instinct for the conventions and goals underlying level design, and (usually) have no trouble finding my way unless I’m supposed to. This, and a host of other accumulated lessons, suggest that by and large, combat/narrative RPGs are good at teaching about their genre as a whole as well as themselves.

Except there’s one convention that video games have completely failed to teach me – that of the time-sensitive mission.

In most RPGs, time-sensitive missions are the exception rather than the rule. Taking out the mafia kingpin, storming the lab on Virmire, starting up the water purifier, scoping out the dragon burial site for possible resurrecting dragons – no matter how high the narrative stakes for these tasks, or what your NPC companions are screaming in your ear, you’ve got all the time you need to take care of it. So check out that random hallway. Search every inch of the base. Heck, wander off on a side mission for a few days if you feel like it. It’ll be there when you get back, and no one will actually be any worse off than if you’d taken your sidekick at their word and raced through like this was all actually unfolding in real time. Unless there’s some kind of visible timer on the screen, or screamingly explicit narrative cue, there’s no rush.

This is pretty standard across the RPG genre (at least in my experience), and so presumably is something these games should be teaching players along with level conventions and appropriate small arms use. But if so, it’s in a way that my learning style doesn’t recognize, because I cannot seem to internalize it in any meaningful way. Like, at all.

My (admittedly limited) gaming lessons have consistently presented urgency as something explicitly signaled through visual markers (as in the very first Mass Effect mission, when the bombs must be disarmed before the timer reaches zero). Sometimes that marker isn’t precisely time – it might be shield strength, or a companion’s health – but it gives some visual cue that I can’t be screwing around. On the other hand, they’ve also taught me that narrative matters – that what people say and do around me has implications for what I will be able to say and do subsequently. Mass Effect and Fallout 3 taught me that the cues characters give about themselves and the stories around them are important for making decisions that build the narrative outcomes I want. Bioshock and Bioshock Infinite taught me that even seemingly insignificant background details in dialogue and story can turn out to be majorly important. These experiences have primed me to take narrative, including character dialogue, seriously. They have failed to teach me to ignore that dialogue when it’s rushing my character along – despite that being what I’d argue is a pretty necessary lesson.

Because this learning failure has a definite adverse effect on my play experience. I rush past gorgeously rendered vistas and valuable equipment upgrades. I miss entire side quests and quirky characters. I sometimes experience something akin to actual stress over reaching an objective I’ve been told is pressingly urgent when in reality I could wander off, clean my entire apartment, have a leisurely drink with friends, and then come back and finish the mission to exactly the same effect as my instinctive 15-minute bull rush. Just a few days ago I caught myself abandoning a Skyrim questline I was enjoying immensely because a character I’d just met told me I had to meet her at some random location right away. This didn’t have any negative effect on that original questline (of course). But it had an effect on my enjoyment of the game – I felt rushed and resentful. By the time I remembered that urgency is just a narrative trapping in RPG quests I was already at ground zero for the new quest’s first mission.

I’m not sure why I have so much difficulty resolving these conflicted learning objectives – why I can’t seem to learn that narrative tags about urgency are decorative and not declarative. I’m pretty sure it’s not a widespread problem; if it was, designers would already have switched up how they’re coding and scripting these moments to improve their games’ ability to teach both lessons effectively. I have a fairly conventional learning style – traditional school worked fine for me, I’m not unusually visual or tactile or anything like that. So the sensation of falling through a learning-styles crack (even one so laughably insignificant) is kind of a weird (and, because the stakes are so low, interesting) experience. It makes me wonder what a game would look like that did teach me this effectively. Additional tags about the importance of being thorough and exploring? A section in the tutorial mission that explicitly says “sometimes missions are timed; here’s what that looks like”? The latter seems clunky; the former could be too subtle. But then, even if such a game existed, there’s a chance I still wouldn’t learn. Because sometimes bad design reinforces the wrong lesson – and as Skinner taught us, inconsistent reinforcement is the most powerful kind. Which brings us to Mass Effect 2 and the Kelly Chambers Incident.

In its final stages, Mass Effect 2 breaks the rule that urgency gets exceptional markers: once the final mission becomes available a hidden clock starts, and the longer you wait before starting it, the more of your captured crewmates are killed by the enemy. This includes the peppy and, in Matt’s words, “really attractive” Kelly Chambers, a character who (if she survives) plays a role in the following game. A role I will probably never see, because my Shepard’s Kelly is Collector paste. I’d grown used to people telling me missions were urgent, vital, pressing; and nothing about the dialogue surrounding the final mission cued me to think it was any different in this regard, that this time it really was urgent.

So I dicked around and finished a few sidequests, and Kelly died, and I’m still super pissed about it. The ONE time my natural instinct to rush to the rescue would actually have been correct, I ignored it. Consciously. Because I remembered what the game(s) had taught me.

This seems like bad design to me. What good are stakes if you don’t know they’re there – and can’t reasonably be expected to have guessed it? By changing up a basic rule of the genre without warning, Mass Effect 2‘s final mission undercuts the lessons taught by the rest of the series, and other narrative-based RPGs as well. I would argue that this is an example of bad design on their part – of letting a narrative shock outweigh the game’s consistency as a learning tool of the genre. This is not to say games should never play with or flout our expectations – not at all. But they need to do so in a way that’s consistent with the rules the game’s taught us govern its systems. In a good game, we shouldn’t be overly comfortable about what’s coming our way, but we shouldn’t feel cheated by it either.

Now if you’ll excuse me, I was told several days ago that I’m desperately needed at a dragon burial site, so I should probably look into that.

 

Transmedia Negotiation and #donald4spiderman

I’ve been playing around with the idea of “transmedia negotiation” lately. It’s a mashup term I coined and then discarded in the process of drafting Ch2 of the diss, combining two of the digital communication skills Jenkins et al outline in their sweet white paper about participatory media: transmedia navigation, and negotiation.

In the paper, these two skills are treated distinctly, with no suggestion that they’re any more related than any of the other nine skills on the paper’s list. But the very first time I read through the paper’s discussion of the two, they struck me as very closely linked – perhaps not so much that a distinction is unnecessary, but that there’s a lot to be gained by thinking about how they function together. And the reason, I think, that this connection jumped out at me is that the paper uses Spiderman as an example to illustrate one of the concepts – an illustration that led my train of thought directly to another Spiderman-related example in which the two skills operate hand in hand.

But I’m getting ahead of myself. First, a quick rundown of the skills in question – starting with transmedia navigation. Jenkins et al.’s interpretation of transmedia navigation focuses on the media aspect – the ability to follow threads and information from text to image, video to interactive game, and so on. The paper illustrates this skill effectively with the example of the character Spiderman: Spiderman has a different look depending on whether he’s appearing in a printed comic, a video game, a live-action movie, and so forth. Transmedia navigation, in its most basic form, allows us to recognize all these manifestations as part of the same character. This ability to track particular subjects or interests across multiple media manifestations, the paper posits, gives students “multiple points of entry” for thinking about, discussing, and working with the subject, “enabl[ing] many different forms of participation” (87). Negotiation, the second skill, is “the ability to travel across diverse communities, discerning and respecting multiple perspectives, and grasping and following alternative norms” (97). It’s the ability both to “negotiate between dissenting perspectives,” and to “negotiate through diverse communities.” Jenkins et al. note that while the promise of social media and other highly participatory spaces is one of inclusion and diversity, “[they] also can be deployed as a weapon of exclusion and, as a consequence, as a tool for enforcing conformity to peer expectations” (99). Negotiation is the skill that allows people to recognize the value of new or conflicting viewpoints when they encounter them throughout their travels online.

In other words, I would argue, it’s the processing arm of transmedia navigation. Transmedia navigation is the skill that deals with recognition – with finding and following ideas and knowledge across the wide range of possible means and spaces for its expression. But once something is recognized as part of the conversation, it’s negotiation that makes sense of it, that evaluates and processes its place in the overall picture. There are definitely distinct steps involved here – first following the information to a particular space or manifestation, then processing its place in the whole. But I would argue that these two skills operate very closely, so closely that any productive consideration of one must take into account the other. The real skill to be watching is the mashup: transmedia negotiation.

To see this claim in action, let’s return to the white paper’s Spiderman example. The typical fan, as the example above shows, has enough transmedia savvy to recognize that the character they watched on Sunday morning cartoons is the same one they see in comic books – it’s arguably that act of navigation that feeds their fandom, allowing them to consume the outsized portions of mayhem and mythos that make one a bona fide superhero fan. But a Spidey-related dustup in 2010 shows that without a corresponding ability to negotiate diversity within that range of representation, this power breaks down – fans are prevented from recognizing representations of the character that don’t meet their existing expectations.

All you Spiderman fanatics out there already know what I’m talking about, no doubt. But since that crossover population is probably not large, an explanation for the rest of you:

In May of 2010, the handsome and talented Donald Glover was put forward on Twitter by a fan as a candidate for the role of Spiderman in the upcoming franchise reboot. The campaign quickly gained visibility on Twitter through use of the hashtag “#donald4spiderman,” which in turn made it a hot topic of discussion on entertainment media sites, fan forums, and other forms of social media. But while many were enthusiastically supportive of the idea – including series creator Stan Lee – many other voices were resoundingly negative. Here’s Donald Glover’s own summary of how things went down:

“They were talking about, on this geek blog, remaking Spiderman, you know they’re redoing it. So they said that maybe this new Spiderman, since they’re making it so, you know, quickly after making these other Spidermans, maybe they should make it real different. You know, make it kind of dark and edgy, like The Dark Knight, and put it in modern day times. And maybe Spiderman doesn’t have to be white, maybe he can be Black, or Hispanic or something like that. And then somebody put a big picture of me in the comments, being like ‘Donald Glover can play Spiderman! He’s nerdy!’ And I was like ‘okay,’ and somebody sent that to me so I was like ‘oh yeah’ so I put that on my twitter. ‘Donald for Spiderman, let’s do this.’ You know kind of a joke, but also who doesn’t want to be Spiderman?

And that’s when the world went crazy. Half the world was like ‘Donald for Spiderman. We’re only gonna watch the next Spiderman if Donald Glover is playing Peter Parker.’ And the other half was like ‘He’s black, kill him.’”

Glover himself undoubtedly saw a wider range of responses from both sides, but there was plenty out there in the public sphere, on social media and entertainment news sites of all stripes, showing the breakdown of transmedia negotiation in action. While there were definitely some straight-up racist comments here and there, the overwhelming majority protested the idea on the basis of recognition: the representations they knew don’t show Spiderman as black, and so obviously he can’t be.

“I am not a racist, but I would like for Movies to match themes of the books, screenplays, or games they are based off of.”

“I dont wanna see famously white characters become Black, Nor do I wanna see Black characters become White! Stick to the god damn source material!”

“Look you can say this isnt a race issue, you can say why shouldnt he have a shot but spider man is a white character…ive read comics and watched the movies ad cartoons since i was a kid, [their] job is to match those characters as closely as possible. i hope this kid a great career but this isnt the role for him!” **

In other words – they could not recognize this representation of the character as a valid one because it ran counter to their own expectations. Their ability to navigate the array of spaces and media in which Spiderman appears was compromised by the limits of their capacity to negotiate the “dissenting perspectives” represented by Glover’s candidacy, and the “diverse communities” where that candidacy was supported.

Ultimately, Marvel passed over Glover in favor of the more traditional-looking Andrew Garfield. However, following Peter Parker’s death in Ultimate Comics Spider-Man #160, a subsequent issue unveiled Miles Morales as the new Spiderman – a half-black, half-Hispanic teenager who looks suspiciously familiar…

[In-post image: Miles Morales as Spiderman]

The activity surrounding #donald4spiderman serves as a strong illustration of how closely transmedia navigation and negotiation are bound together. It shows that recognition is a complicated act, and one that’s not just a matter of following ideas from comic page to movie screen. Critics of the hypothetical casting weren’t just voicing dissenting opinions – they were unable to recognize a black Spiderman as part of the fandom they knew. Their dissent, and the community divisions that ensued, were products of both navigation and negotiation failures. Fanart and mock-ups of Glover as Spiderman circulated actively as part of the conversation, and many of the comments, for and against, feature visual language – being unable to “see” Glover in the role, or arguments that it “just wouldn’t look right.” But the same failures underlie more abstract, less visually motivated moments of disconnect as well. As the quotes above show, readers who objected “on principle” to the idea of a black Spiderman frequently asserted that it’s not a question of race, that racism has nothing to do with the issue. But the idea that a traditionally white character cannot be represented otherwise without sacrificing its essence is absolutely about race. Claiming otherwise doesn’t make you a racist, but it does indicate a failure to recognize how one conversation relates to another. Just as objectors can’t see Donald Glover slinging webs as Spiderman, they can’t see the debate surrounding that failure as part of the larger one about race in America.

Recognizing the influence of racial issues in conversations about Spiderman, like recognizing a black actor in webbed spandex as Spiderman, is a function of transmedia negotiation – the ability to recognize and respect alternate perspectives across a wide array of networked representations. One without the other is no good. In order to encounter those moments of difference we need to know how to follow the conversation; but the conversation’s lost on us if we can’t negotiate its contradictions.

 

**Quotes taken from comment section here; lightly cleaned up in places for readability.

Grammar Nazis, Correctness Police

One imagined/potential goal when I designed my dissertation project was to present a snapshot of students’ process for writing casually online. So I asked a lot of questions in early interviews about practices for re-reading posts, perceptions of others’ writing, how students make quality and credibility judgments based on others’ writing, etc. And almost invariably, they brought up “grammar” as a major marker of not only credibility but overall character. “I hate it when people start using all the chat speak stuff, that just drives me crazy,” one student told me.  “No one ever corrects themselves,” said another, “and I don’t do it because I don’t want to be a dick – but I notice.” “I’m a very big grammar person,” said another, “so even if I tried I can’t just accept ‘ur’ instead of ‘your.’ It just bothers me.”

Purportedly correct alternate answer: “80% of college students”

A disclaimer here: I’m not deeply read in this area; aside from some snapshots out of the Standard English debates I don’t know very much about how this is discussed in the field at large. In this post, however, I’m focusing almost entirely on my experience.

1. They think “grammar” means “correctness.”

“I’m kind of a grammar Nazi, so I always feel like I have to make sure I have my little like, quotation marks or whatever. And commas and stuff. [I] never say LOL, never shorten stuff.”

“I’m a very big grammar person, so even if I tried I can’t just be like ‘ur’ instead of ‘your.’ It just bothers me.”

“I hate it when people start using all the chat speak stuff.”

This shouldn’t surprise me but somehow it did. I think it’s because while both grammar and correctness qualify as relatively petty obsessions in my mind, an obsession with correctness in the context of casual online writing just seems ridiculous. Like using your finger to hold back a dam that broke three months ago. And I think students know – they must know – that a misspelling is not a grammar error. But it sounds less crotchety to say you like good grammar than to say that you expect everyone to always be correct. But this also points to something else – that students don’t seem to actually know what “correct grammar” means. Which perhaps goes some way towards explaining why it’s such a source of worry and preoccupation. It’s a cycle: they know they are expected to have “good grammar” but aren’t sure what that means –> they latch on to the markers of correctness they *do* know and hold on like the dickens –> they grow defensively critical when others don’t do the same because it adds credibility to their interpretation. Which brings me to the next point:

2. Grammar/correctness is a judge-y subject.

“At a certain point you shouldn’t judge that but at the same time you do – it does definitely affect your opinion of that person.”

“Ohhhh my gosh. Take 2 seconds to proofread it and just correct yourselves – it’ll make you seem so much smarter and will get your argument over, make it seem more valid.”

“If someone has poor grammar or punctuation and stuff, any of that kind of thing – you know, I hate to say it but yeah, it reduces my opinion of them. People who can’t formulate a proper argument or can’t logically defend it.”

This last answer is perhaps the most revealing, as it shows most directly the implicit association between basic errors and intellectual ability hiding in all the comments I heard about grammar (“grammar”)  judgments. The overall light tone and liberal use of the “grammar nazi” appellation when talking about grammar obsessions indicates that students are, to some extent, aware of the potential silliness of focusing too much on these relatively minor issues. But alongside that are quotes like this, which show that even if they’re aware there’s a joke to be made at their expense they are also serious about their expectations. They do think that correctness signals something deeper about a writer, and they make judgments accordingly. These judgments aren’t limited to faceless internet strangers either. Students admitted that they apply them to their real-life friends as well – people they presumably know firsthand are not unintelligent. “My friends, if they say something hilarious and [I] want to retweet it but they use “u” instead of – I will NOT retweet “u.” It does bug me, yeah…I definitely look down if you text me just horrible “r u” kind of stuff. And I mean, I’m not going to say anything about it but I’m just, it just, I just feel like I’m texting an 8th grader.” And another student said that while she wouldn’t go so far as to “unfriend” someone who makes mistakes, “I do look at it differently and am like “really?” Especially if it’s a friend that I know is really smart and would get mad if I ever said something that was wrong, politics-wise, and they spell “your” wrong.”

My data doesn’t say anything about whether or not the average correctness of student writing has changed with the advent of digital communication. But it does seem to say, as evidenced by the quotes above, that they don’t think it should be changing. It suggests that students continue to hold the same kind of assumptions about Standard English proficiency (and deployment) as the generations preceding them. There’s some slippage here, mostly in regard to genre – for example, most students said that things are a bit different when it comes to instant messaging (though not, notably, texting). But overall, the influx of casual digital writing opportunities does not seem to have done much to change dominant notions about the link between correctness and quality. The fact that these judgments hold for real-life relationships and not just online strangers, in particular, pokes at the popular lament that online informality is eroding people’s standards. If these students judge not only strangers for basic errors but those they’ve chosen to be friends with in real life as well, then Standard English conventions clearly still have some real strength left.

3. While “grammar nazi” mindsets are dominant, they are not universal.

“It does definitely affect your opinion of that person – but at the same time at a certain point you shouldn’t judge.”

“There are the grammar trolls looking, looking for those things. You can be perfectly credible, everything cited, you can have one thing – forget a punctuation, misspell a word, first comment after that’s going to be ‘someone didn’t go to school, [someone] can’t spell anything.’”

“A lot of people go on [FB] just to correct people, like if someone wrote a sentence wrong in their status or something. Grammar nazis. I don’t like to do that to people – as long as you can understand them, I don’t see why it matters so much.”

Unlike the other sections, these three quotes are the only ones I have as evidence for this point. But all three of them point to a different reasoning for breaking with the grammar/correctness party line. The first one (referring to posts on Reddit) suggests that this student recognizes that there’s not a direct correlation between correctness and intellectual capacity. The second one is in the same vein, but from the other side – it assumes that errors happen even to the best of us, and that it’s the people who revel in those errors who are deserving of judgment. And the final one is simply indifferent – to her, “your” vs “you’re” is the opposite of a big deal.

Though it wasn’t a conscious hypothesis, in retrospect this is the view I expected students to take. Because it’s the one I’d like to have, that my ideal self would espouse and internalize. Because I firmly believe that, at least in the context of Facebook and Reddit, basic errors shouldn’t matter. While I haven’t kept up with contemporary work on the subject, the reading I did in coursework and prelims about Standard English and the role of conventions early in the field’s history made a strong impression on me. But I hate typos and misused homophones. I can’t not notice them, and they do irritate me. This makes me feel guilty and conflicted. And it makes me more than typically interested in how my students talk about this issue, because I sympathize with the tension, confusion and conviction that pepper their responses.

What does all this mean? I don’t know, because it turns out I’m writing an entirely different dissertation. But it’s something I hope to come back to. Because it feels significant (if only to my personal scholarly development). This is data I’d love to get in conversation with the theories and experiences of others.

 

Membership Anxiety in Digital Communities

In the run-up to Computers and Writing, the DRC fellows have been engaged in an interesting conversation about the communities we belong to – mostly digital, but face-to-face as well. The discussion started as ostensible planning for the panel/workshop we’ll be leading on our work with the DRC wiki this past year, and its ongoing attempt to build a history of digital rhetoric and writing through community efforts. We got to talking about how being a contributing member of any community – but particularly professional ones – is a commitment, something that takes investments of both time and effort. To help us think about how we ourselves make decisions about distributing our efforts, and about community membership in general, we started an email conversation by all answering these 4 seemingly simple questions:

  1. In what digital communities do you participate?
  2. In what f2f communities do you participate?
  3. As follow-ups to the two questions above… What do those communities do/accomplish — what is their shared purpose? Who facilitates the operation of those communities? What are the discourses and genres of those communities?
  4. How do you choose which communities to participate in? Why those communities over others? What “pressures” or demands do these communities place on your “attention,” and how do you negotiate those demands?

Answering these questions was surprisingly tough! I’m already thinking a lot about digital communities and participation these days, so any additional thinking about those topics inevitably pulls on a huge network of mental strings, making things instantly complicated. For example, the term “digital community” – what should I be defining as a community, and what’s more of a participatory or affinity-based space? Should I include those? Am I even really an active member in any of the digital communities I visit these days? And on and on.

I wasn’t the only one who struggled with defining community involvement and what should be included. It became clear from the first wave of the discussion that when we think of our community membership, what comes to mind is professional affiliations, or groups where our involvement has definite links to our professional interests. These lines weren’t always hard and fast, however; for example, one Fellow struggled to identify whether her activity on Wikipedia fell into “personal” or “professional.”

Of course, one of the great things about the rise of all these digital spaces is that it’s encouraged a more permeable boundary between professional and personal – allowing the different discourses, personas and spaces we inhabit to blend more easily, and without seeming out of place. This is something that comes up in The Florida School’s discussions about electracy. In their quest to “jump right in and shape the electronic apparatus” by “inventing new modes of discourse that take both critical theory and digital media for granted” (6), they advocate embracing a much more fluid approach to hat-wearing. When we create theories for understanding our hypermediate world, we need to be drawing from all four sectors of experience: family, entertainment, school (community history) and career (disciplinary field).

This idea fascinated me when I encountered it during my reading for prelims. The idea that successful scholarship not only could but should draw on the discourses and experiences of my life outside academia felt very strange, and at the same time very right. In the years since then, I’ve come to see this idea operating throughout the disciplinary activity around me. It seems bound up, though perhaps only implicitly, with the growing acceptance that all reading and writing is multimodal, and therefore invention today requires us to draw from a wide range of available modes and genres. (Is this still a “growing acceptance?” Or is this an accepted thing in Comp/Rhet now? It can be so hard to tell from inside this comfortable subtower of computers and writing.) As we extend the range of media and material from which we can (and arguably must) draw to do our best work, that range seems to be accommodating more of the sector-crossing that Ulmer and the other Electrates advocate.

It’s also possible, of course, that I see things this way because I’ve become more comfortable in my skin as an academic in the past few years. But the fact that these boundaries proved so hard to define for my fellow…Fellows supports the idea that we’re still adjusting to the idea that our experience sectors can’t be kept entirely separate anymore if we’re to be most successful within them, even if we might be more comfortable that way. In addition to struggling to define whether a particular community should be coded as “work” or “personal,” one fellow also commented that she’d initially left a community off her list because it seemed so directly personal – related to her interest in running. But she was able to see several clear ways in which her experiences in that community had enabled and shaped her participation in other more decidedly professional digital spaces. Moments like these suggest to me that the Electracy folks are right, at least for those of us in digital studies: the best work comes when we’re able to draw on all our available experiences. Just as I can’t make my best contributions to the digital projects of the DRC without drawing on my experiences with personal blogging or editing gaming-based wikis, I can’t make my best scholarly contribution to conversations about digital literacy and theories thereof without pulling on my experiences growing up in a house full of computers in various stages of assembly, or my own struggles to just get my damn website organized the way I want it. (The latter situation is still ongoing as of press time.)

This is one of the reasons I find it so useful to think in terms of affinity spaces, rather than – or I guess in addition to – communities. (It’s worth noting, if only in hopes of shaming myself to action, that despite being aggravated every time I visit by the shortcomings of this entry, I have not yet taken the step of editing it.) The whole idea of affinity spaces is that they let us think of groups of people in terms of their shared interest or purpose, rather than a shared set of traits or practices for interacting within the space. Members of affinity spaces might be active contributors – or they might just be active lurkers. They might be involved in the space because they have a strong personal curiosity about its central topic, or because the topic relates in some way to their professional activities. Or both. The idea of affinity spaces allows for that kind of diversity, and gives participants a way to think of themselves in relationship to the space that doesn’t bring with it a fixed set of requirements. Freed from the anxiety that comes with asking “am I really a member?”, it’s easier to step back and see the full network of spaces, groups and – yes – communities in which we participate.

Ode to #eng177

This past semester, I TA’d for Eng177: Literature and Videogames with my advisor, Jim Brown, as lecturer. I’ll probably be talking about my experience with this class in a number of posts over the next few months, but today I want specifically to reflect on the class’s Twitter component: five compulsory tweets per lecture on the class’s #eng177 hashtag, plus several assignments using Storify to turn those Tweets into curated reflections. This was an unexpectedly cool and complex addition to the course, and one I’ve been thinking about a lot as the semester wraps up.

Now, the ideal way to do this reflection would clearly have been as a Storify! But many students have already deleted or privatized their accounts now that the semester is completed; it’s also difficult to draw on Tweets from several months back, which is something I’d have wanted to do for sure. I’ve got hopes of putting one together anyhow, but for now a hyperlink-heavy blog post will have to do. A big thank-you to Peter Wagner, Cooper Chell, and Kevin Fentress for allowing me to use their work here. (And if there’s an easy workaround for the time span issue, please point me to it in a comment!)
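As far as I can tell, there may be no easy fix after the fact; since Twitter’s standard search only reaches back roughly a week, the more realistic option is probably prevention for next time – archiving the hashtag while the semester is still running, so the Tweets are there to curate whenever the reflection finally gets written. Here’s a minimal sketch of what that might look like, assuming the Tweepy library (its classic interface) and Twitter API credentials; the key strings and filename below are hypothetical placeholders, and the script would need to be run periodically throughout the term:

```python
# A sketch, not a tested workflow: periodically archive #eng177 Tweets to a CSV
# so they're still available to curate (in Storify or elsewhere) months later.
# Assumes Tweepy and valid API credentials; the key strings and filename are
# placeholders, not real values.
import csv
import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

with open("eng177_archive.csv", "a", newline="", encoding="utf-8") as outfile:
    writer = csv.writer(outfile)
    # The standard search endpoint only reaches back about a week, so this
    # loop would need to run regularly (say, weekly) during the semester.
    for tweet in tweepy.Cursor(api.search, q="#eng177", count=100).items():
        writer.writerow([tweet.id_str, tweet.created_at,
                         tweet.user.screen_name, tweet.text])
```

Deduplicating repeated runs by Tweet ID is left as an exercise, but each archived ID could later be used to rebuild a permalink and pull the Tweet back into whatever curation tool is handy.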

TL;DR – Mandatory livetweeting in this lecture course had a lot of benefits I didn’t necessarily expect; it turned out to be pretty awesome.

When I learned from Jim that students would be livetweeting during lectures, I was a little skeptical. It’s not that I couldn’t imagine how livetweeting could possibly benefit the course – Jim said he’d done it before with some success, and I could imagine how networked notetaking might do some cool things. But the idea of encouraging students to actively use social media during lectures themselves was new to me, and I admit it – I wasn’t convinced they’d use it productively. (In my defense, the students were skeptical at first too – most notably on the grounds that having to tweet five times per lecture would prove a multitasking overload.)

In hindsight I feel pretty guilty about this reaction. It shows the kind of assumptions about how students interact with technology (especially in the context of their educations) that I try to push against in my own research – this idea that what I might initially assume about their use of Twitter in a class setting necessarily mirrors reality. And it also made me realize that my policy of not allowing laptops during class unless the day’s activities specifically call for them might be (/is probably) both outdated and based on the same kind of faulty assumptions.

Chris Gerben spoke really well to this concern in his 4C14 presentation; he made the point that, as is to some extent natural for academic disciplines, we tend to be behind the curve in the genres of communication we’re studying closely – that by the time we’re studying Facebook or blogs, students have moved on to using different spaces and forms, or at least using those ones differently. My own research bears this out so far; for example, while students are definitely still using Facebook regularly, they’re not using it for social networking or as a primary means of connecting with distant friends. If we’re going to keep up with the call to be thinking about, teaching, and engaging in composition in the newest of keys, Gerben argued, we really need to hurry ourselves along. We need to avoid getting our focus and assumptions stuck in how we tend to be composing digitally (she says in her blog post) and remember that students often do and see things differently. My experience Tweeting this semester wasn’t the most dramatic possible example of this, but it was enough to make me feel called out. To remind me that if I’m going to stay ahead of my assumptions, I need to be channeling Mad-Eye Moody.

#eng177 showed me that yes, students can use social media productively during class sessions themselves. It showed me how Twitter creates connections and facilitates interactions that wouldn’t happen otherwise within small, temporary communities like a semester-long course. It showed me how readily students will take up such opportunities and use them productively – and get into exactly the kind of exchanges, both argumentative and dialectic, that we hope for when we design the contents of each class meeting.

Exchanges weren’t the only way tweeting got used, of course. One of the primary practices was tweeting-as-notetaking – preserving important ideas from lecture. This was a reliable way to get in one’s five required Tweets without resorting to the kind of empty “here’s my last Tweet today” stuff we said wouldn’t receive credit. But these Tweets were more than just a good default: many students did seem to use them as a record of the lecture material – I saw such Tweets show up a lot in the final Storify assignments that asked students to connect lecture concepts to game playthroughs, treated as an academic resource as well as an interactional tool. This practice also helped to allay students’ initial concerns that Tweeting combined with traditional notetaking would be too much to handle; by replacing “traditional” notes, either partially or completely, note-Tweeting helped ease the multitasking burden.

But as the semester went on, #eng177 did see considerable interaction between students (and instructors as well!). This was probably my favorite thing about the hashtag – the way it created all these little exchanges between the members of the course community that wouldn’t otherwise have taken place. Sometimes these exchanges were implicit, the stream bubbling with related comments about some provocative or controversial topic from that day’s material. This Storify by Cooper Chell documents possibly the best example of this, our first lecture on the game Gone Home; note how even outside direct replies to each other, students created a stream rich in different opinions and crisscrossing perspectives. And here’s one by Peter Wagner that shows some more direct conversations on a controversial but slightly less argumentative topic: different takes on the end of the novel Ready Player One. Through these exchanges, students were able to see that even though the class liked this novel significantly more than the last one we read, they still found the ending controversial. The Twitter stream gave them a place to exchange these thoughts – and in a form that is quick, easy and relatively fun.

Unlike with Blackboard-based discussion components, students seemed to relish the chance to post their opinions via Twitter, using language that was frequently informal but also active and expressive to do so. Having Twitter as a means of facilitating this kind of interaction was particularly welcome given the form of this particular class. It’s hard to make lectures of 100+ students discussion-friendly under any circumstances, and in this case it could be harder than usual in the discussion sections as well; the schedule of the class had the TAs meeting with students in double-sections of 40 students, held in the same space as the lecture, which while great for multimedia work wasn’t ideal for discussion. So having an extra forum for students to engage in back-and-forth with each other was really wonderful.

The livetweeting also led to a couple serendipitous encounters with figures outside the class community. Through the power of Twitter, it seems that game critics and developers can sometimes, Beetlejuice-like, be summoned by repeating their names. In this way, our class found itself interacting with Mike Bithell, creator of Thomas Was Alone; Porpentine, who wrote the Twine game Cyberqueen (among others); and game designer/scholar Ian Bogost. These interactions were all unplanned, coming up solely through the designers hearing (seeing?) themselves invoked through the class hashtag. And they contributed materially to the course experience, especially in the latter two cases. Bogost chimed in with advice on what it means to imitate another person’s writing rather than paraphrasing it, and both the content and style of his advice provoked students to better efforts; Porpentine ended up graciously agreeing to do a guest lecture via Skype about her experiences and advice as a game designer. Twitter also made it easy for the class to interact with an in-person guest lecturer, Matt Haselton; by using the class hashtag Matt was able to keep participating in the conversations his lecture started even after he’d gone back to his day job at Filament Games. These are things that wouldn’t have happened without Twitter, and which were extremely cool – both from my perspective and, it seemed to me, the students’ as well.

Finally, #eng177 was a really productive learning experience for me, as both an instructor and a user of social media. Though I try very hard to stay on top of it, I know I’m prone to exactly what Chris Gerben poked at in his talk – thinking through the frame of the digital genres I know best and use most. I don’t embrace new ones as prolifically or adventurously as I should, and until this semester this reticence included even something as relatively mainstream as Twitter. I had an account, but used it almost exclusively for following people in the field. Now, I know this is a perfectly legitimate use of Twitter – indeed, one that’s regarded by many as a best Tweeting practice. But having never really used it for interacting, I wasn’t really conscious of how it functioned – for example, how easily people are summoned into conversations when mentioned by name. Tweeting to #eng177 exposed me to how well Twitter can function for conversation within finite communities; how it can extend both digital and f2f conversations beyond the moment in which they initially take place; how it makes it easy to pull outside material into existing conversations; and how it can connect you with members of the community you wouldn’t otherwise talk to. This last one in particular I found very valuable. While I naturally felt a bit more affinity with and interest in what my own crew had to say, it was refreshing not to be limited to interacting with just half the class. I started to look out for certain students whose ideas regularly connected with me – many of whom were in other sections. This is something I’d certainly have missed out on without #eng177.

For all these reasons, #eng177 was a really cool experience, and a wonderful addition to the course. I haven’t seen the evaluations yet, so the jury’s still out on exactly what the students thought – but from watching the stream and hearing them chatter before and after class, I think it was at least a reasonable success for them too. Of course, there are always varying levels of involvement with a component like this – just like with any part of a course. Some students never rose above perfunctory use, going through the motions; others grasped it actively from day one. But mostly I saw an active in-between: the majority of students used it in both ways, sharing actively and naturally when they had something to contribute – which happened more often than I’d ever have thought at the beginning of the semester. Students (and the general ever-growing magic of digital media) proved the skeptic in me wrong, and gave me a new tool for my digital pedagogy toolbox. Of course, by the time I teach a 100+ lecture of my own we might all be communicating directly via hivemind – in which case I look forward to designing my first course-specific neural stamp – five mental waves per lecture required.

 

Reflecting on Telltale’s Walking Dead: Stakes, Death and Narrative

[Warning – this post contains minor spoilers for Season One of Telltale’s Walking Dead and Mass Effect 3]

Grant me the serenity to accept the things I cannot change, courage to change the things I can, and wisdom to know the difference.

This is a centuries-old and much-loved Christian prayer. If you swap out “courage” for “basic manual dexterity,” it is also a pretty good summary of how I felt playing much of Telltale’s Walking Dead. And while I got better at all three as the story progressed, the anxiety I felt over the stakes of my agency remained the defining characteristic of my experience.

Here’s what I knew about Walking Dead going in: that it was widely recognized as Really Awesome, that it used the medium to do some sweet things with narrative, and that Matt thought its narrative/game blend would make a good start on getting me used to using a controller. I hadn’t played a new video game in almost a decade, and so I didn’t have a lot of preconceived notions of what to expect going in.

I was therefore somewhat surprised to find myself playing a point-and-click adventure game much in the style of those I played as a child. It took me a while – longer than I like to admit – to notice the connection. Even though I know classic genres have been making a resurgence, it hadn’t occurred to me that my old-school playing days might have relevance here. The tension and immediacy of the real-time sequences contributed to the disconnect; that kind of “oh shit oh shit QUICKLY” is something I associated with point-and-clicks only by absence. But nerdery, like blood, will out: despite it being clear pretty early on that the “look” action is only useful in specific and pivotal situations, I still found myself compelled to look at Every. Single. Thing on the farm in Chapter 2.

See this farm? Because I have. In great detail.

The other big surprise was how deeply unnerving my character Lee’s death was, every time I let it happen. Even I have played enough video games to know that dying is part of the process. In fact I came into Walking Dead on the heels of playing through about half of Limbo, a game that’s literally been described as “trial and death.” You try something, it doesn’t work, you head back and try something else. But in Walking Dead, I found myself right from the start incredibly on edge about the prospect of letting Lee die. The first time he’s threatened with death, I was mashing the hell out of those buttons to get the cuffs off and the gun loaded. And there was a genuine sense of relief, of tension passed, when I managed it. The encounter with the zombie babysitter at Clementine’s house took me considerably longer to bring off safely – I think I died three times before I managed it – and by the end I was stressed. When Lee dies it upsets me. Even as early as those first few scenes.

The first time I let Lee die – traumatic for us both

Everyone knows stakes are high during the zombie apocalypse. And with writing this good it’s no surprise that you get emotionally attached to characters, and are affected by what happens to them. But when your character dies, you just come back to life and try it again, until you manage to survive the zombie teenager/crazed farmer/dangerous parkour in question. So why did keeping him alive stress me out so much? It’s more than just that potentially lethal scenarios tend to play out in real time (though that’s certainly a factor). The game manages to communicate a very real sense of stakes very quickly, both in terms of the environment – there are horrifying animated corpses ready to eat you everywhere and at any time – and in terms of character, and to blend those stakes together in a way that gave Lee’s deaths an atypical resonance for me. As I desperately tried to make my all-thumbs controller hand load and fire the rifle, my thought was not just “shit, I can’t let him die in his very first zombie encounter” but also “shit, I can’t let him die with all those demons on his chest.”

Since playing Walking Dead, I’ve played through most of the Mass Effect trilogy. I adore those games (post forthcoming), and like most fans of the series have an intense connection to my Shepard. She’s the product of 90+ hours of battles and narrative decisions, she looks like a hot Russian version of myself, and I am going to be so sad to let her go. Violet Shepard and I have a thing going at this point. But when she dies, I just sigh and boot up my last save. Of the many things in Mass Effect that make me feel (Mordin! NEVER FORGET), her dying in combat isn’t one of them. But in Walking Dead, Lee dying makes me feel something unique, a kind of nervous tension that I don’t get from any other moments in the game.

If you were in Lee’s shoes, you’d be angsty too

In both Mass Effect and Walking Dead, narrative is a strong part of the game’s reward system. That feeling of investment you build in your Shepard, and in Lee, is bound up in the game’s nature – it’s your reward for playing, part of what’s driving you to finish. In Mass Effect though, this narrative reward shares the stage with the rewards of a good shooter – levels gained, battles won in style, weapons upgraded, etc. You’re here for the story, but you’re doing other stuff as well – and it’s only in those “other stuff” moments, the moments when the game is not so much about narrative reward, that Shepard can die. Her death doesn’t end the narrative – it ends the battle. In Walking Dead, the narrative is pretty much the whole reward. And though Lee’s potential deaths tend to come during the most “gamelike” moments, for lack of a better term, those sequences remain firmly connected to the narrative progression. There’s no clear line separating the shooting and the story, as there tends to be in Mass Effect. When Lee dies, you feel the stakes of that death for the storyline – because you’re very much in it all the time. What happens to Clem now? Oh god, is everyone going to get eaten? By humans??? I have *got* to work on my button-mashing skills.

My feelings towards Lee were a weird blend of identification and readerly distance. Right from the first conversation in the police car, I found myself choosing answers according to my own values – what I hope I would do in his place. Sometimes this involved thinking ahead to the impact my choice might have on the story – but more often my choices were simply (if you’ll pardon the cliche) from the heart. This made the game a very revealing look into my own values, sometimes uncomfortably so. It created a stronger sense of empathy for his character than I think I’d have felt if I’d chosen more “results”-mindedly; it also increased my sense of responsibility for him.

It starts – the comfortable lie or the tragic truth?

This might become simple identification in another kind of game, but Walking Dead reminds you at just the right intervals that Lee’s not you. He often uses slightly different language than that of the player choice, for example – though a small thing, I found this to be one of the most effective means of underscoring his nature as independent from mine. And of course, Lee is a black man, an identity whose attendant stereotypes and prejudices, the game argues, survive literally to the end of our world. These reminders that Lee ≠ Becca heightened the stakes for my role in playing through him. It’s not myself I’m letting down when I die – better not mess this up.

This all means that while yes, I enjoyed the game, the predominant feeling throughout was one of stress and emotional exhaustion.*  Matt pointed out to me early on (probably in an effort to reduce my stress) that hey, this is a zombie realism story, not Grand Theft Auto: Zombie Narratives! – no one’s getting a happy ending. Intellectually, I knew my answering truthfully about this or that when a lie might have been more diplomatic had no impact on the central events of the story – the gang would have taken that field trip to Murder Farms in the next chapter either way. But somehow I was still fretting halfway through Chapter 2 about my readiness to have Lee step into the trap, wishing I’d been more cautious, even though I knew it wouldn’t have mattered.

I would argue that’s what makes Walking Dead such a successful example of its genre. It combines the walking narrative of games like Dear Esther, Gone Home, or The Stanley Parable with just the right amount of active risk, creating multiple levels on which to build the player’s emotional stake in the experience. Lee’s narrative choices (and some choices in real-time) shape how the main, unavoidable events of each chapter play out. But by providing the possible (if temporary) dead ends of in-chapter death, the game also gives you agency stakes in progression through those central events. If Lee dies, he’s not ever going to make it to Savannah. Even what coding dictates as the only path forward thus becomes, in a sense, the responsibility of the player. And while it can be intense, I found the result very rewarding.

*In retrospect, this feeling was heightened considerably by its being my first experience with a controller in almost a decade. When I play the second season now, the anxiety is less, the stakes feel a bit lower; I attribute a lot of that to my increased (though still limited) competence with the controller. But I also think Lee’s story was more intense in many ways as well. For all she drove me in Season One, I’m just not as connected to Clementine yet.

The Art of Trolling, Pt 1

One of the things I’d like to do with this blog is address some of the interesting results from my dissertation research that, for whatever reason, don’t seem likely to make it into the dissertation itself. When I think of such elements, there’s one that clearly rises to the top, one I’ve been itching to talk about for a while now: trolling.

For those who aren’t familiar with the term, online trolls are users who post in digital spaces with the intent of provoking others into emotional and intense responses. Well, really any online behavior intended to provoke a strong response is trolling – it doesn’t have to be written. Trap your Minecraft friend in an inescapable all-obsidian cube while they’re preoccupied with crafting? You’re a troll. Profess that Obamacare is responsible for a 25% rise in healthcare costs for 70% of Americans on a left-leaning social news site? You’re a troll.

There’s an easy reason why trolling isn’t going to make an appearance (or much of one) in my final project: I barely collected any data on it. Of all the students I interviewed, only one said they’d engaged in trolling online. And though a number of others alluded to trolling behavior as a reason they don’t engage in certain kinds of spaces, this took a massive backseat to general perceptions that online discourse among strangers is low-quality. Trolls, while not a positive, didn’t rate highly as a negative either.

This is in line with what we know about trolls from existing studies. (Apparently I’m not the only academic who finds this interesting.) A recent University of Manitoba study estimates that trolls make up less than 5% of internet users – pretty slim. And from one angle – and it’s a fair one, I’ll be the first to admit – it’s not that interesting. People have been saying inflammatory things just to get a rise out of others for like, ever. I think there’s a cave drawing to that effect in Lascaux. What made my interviewee’s comments stick in my mind, however, is the way that they paint trolling as a practice that resists some of the growing critiques of digital communication – and how they show it to be an activity that actively seeks to break down the reticence I saw from so many students to get into the fray and speak back.

This student – let’s call him Joel – said he actively enjoys both reading trolled conversations and indulging in some trolly behavior of his own. “I like reading comments from trolls just because they’re just – I laugh pretty hard at them,” he told me. “Especially at the people who actually respond to the trolls and think they’re being serious. People get pulled in, and it’s kind of a fun thing to watch.” Joel also said he did some trolling himself – specifically, that if he was going to write at all in a public setting, it was going to be as a troll. Joel clearly wasn’t interested in spending time engaging strangers in deep discussion of major issues (though, notably, he did say he’ll have such arguments with friends via social media) – but he did want to elicit responses from them. He liked writing to strangers in a way that made them want to write back – just not in a particularly noble way.

Joel did specify that he has limits – “I’d never take it to the level of mocking kids with cancer, that’s a little too far for me.” Joel’s trolling activity is writing-based, focused on drawing people with strong views into persuasive exchanges; he’s not interested in mocking suicide victims or posting home addresses of pro-Tibet Chinese nationals. Joel engages in trolling as a form of intellectual and emotional release, “a form of expression I guess, to think about it. Not profoundly but abstractly…it’s kind of counterculture to that whole fact checking practice, to people who are the internet police.” Joel’s trolling is highly engaged, argument-centered, (pseudo)fact-based literate online activity, taking place on comment threads for political issues, scientific articles, and social news debates.

This is pretty interesting. Because by and large, from what I found in my research, students don’t like commenting on things – but when they do, it’s on the same kinds of issues and arguments that Joel enjoys trolling. Fewer than half of them said they regularly (or even infrequently) write in online spaces beyond the sphere of social media. For this reason, Joel’s comments about being motivated to write because of the rewards trolling brings with it really stuck in my mind. And the more I think about it, the more there does seem to be something about trolling that bucks some of the drawbacks being bandied about concerning online writing.

For example, I’ve been reading up on Jodi Dean’s notion of “communicative capitalism” recently, having come across it in Mark Pepper’s awesome piece in Kairos. Basically, the upshot of Dean’s argument is that online writing, while prolific and seemingly democratic, does very little to create actual discourse or engagement between parties. Instead, it results in a cycle where people are tossing thoughts out there in writing just for the sake of having created that bit of communication, rather than with the intention of getting others to engage with those ideas in any way. “The exchange value of messages overtakes their use value,” says Dean. “Uncoupled from contexts of action and application…[the message’s] particular content is irrelevant. Who sent it is irrelevant. Who receives it is irrelevant. That it need be responded to is irrelevant. The only thing that is relevant is circulation, the addition to the pool.” The goal is to hear yourself talk so you can feel good about having done so – not so others will talk back to you.

When you troll, though, interaction is the whole point. A trolling comment that elicits no response is a total failure. Trolling is a form of online participation with a very specific rhetorical goal – evoking a response. This sets it squarely against Dean’s theory, which posits self-satisfaction and individual expression as the driving motivation for online writing. A troll has to pay attention to their audience – if you don’t know what kind of bait to leave them, you won’t have much success in getting them to take it. And at higher levels of the “art,” you need to have some knowledge of the issue you’re trolling about. For example, Joel told me that his favorite places to troll are those where the other commenters are (or fancy themselves to be) very knowledgeable and fact-focused about the matter at hand, because such audiences can’t let weak or untrue fact-based claims slide. So Joel will link to unscientific or low-credibility sources in his posts, knowing that others in the conversation will feel the need to jump in and roundly demonstrate with their own sources why his claims are wrong.

Can this kind of exchange be termed, as Pepper puts it, “a sparking of in-depth discourse?” Probably not. No one’s mind is being changed, and emotions are probably running higher than reason on both sides. But both the trollee and the troll are drawing on rhetorical skills to meet their respective goals. The troll is feeling out the audience and choosing their words and tools accordingly; the trollee is looking to what’s been said and drawing on their own arsenal to respond to its argument. Not in every case – a lot of trolling devolves pretty quickly into name-calling and exasperation. But Joel says he’s in it for the ones that don’t – that he gets the most pleasure from making people bend over backwards trying to show him why he’s wrong, when all the time he feels it should be clear he *can’t* be shown that, because it should be obvious he’s just a troll. So I’d argue that there’s engagement here, on both sides – that for Joel (and presumably therefore at least some subset of the trolls out there) the content is *not*, as Dean says of online communication, irrelevant; it is in fact essential to creating engagement. This makes trolling, at least in my view, very interesting. It is a form of online writing whose sole and central purpose is engaging in conversation (loosely defined) with others. In a landscape where more than half of students never get around to leaving any comment, and where barriers to writing online are arguably much higher than we are aware of, this seems like not such a bad goal to me – problematic though it may be.

Trolling has a lot going against it, and I don’t want it to seem as if I’m arguing in its favor. There’s evidence, in fact, that the presence of trolls in online debates has a decidedly negative effect on dialectic, causing people to “double down” on their preexisting views rather than opening them up to any actual facts or alternate perspectives being presented. And my own research suggests that trolling may play a role in discouraging writing activity online in a wide variety of contexts – not only comment threads for political or scientific articles, but in more strongly affinity-based spaces as well. What I’m saying, rather, is that examining trolling practices and motivations more closely might tell us some useful things about how written engagement works online. About what pushes people to engage in written conversation with strangers, and what makes them want to do so in the first place.

(Image credit: Mike Puncekar)