Technology Literacy Narrative: Part 1

This post is my response to an assignment I gave the graduate students in my Digital Rhetoric class: Technology Literacy Narratives. My goal was to finish the assignment before giving it to them, as a test run of the prompt, but (surprise surprise) I got carried away and ran out of time halfway through. The plan now is to post part II by mid-semester, when students will begin remediating their narratives as multimodal objects.

It’s probably too much to say that the drive to explore the link between the written and the digital is in my blood, but it does seem to run in the family. My father credits his love affair with the classic text adventure Zork with steering him towards his career in computer engineering. And despite being as different as two siblings could be while we were growing up, my brother and I have both found our way to careers focused on writing and the digital. (Ben, being both more practical-minded and more charismatic than I am, now works as Community Manager for Gamepedia.com, over at Curse Inc.) It seems, in some form or another, we’ve always been a Computers and Writing household.

I grew up surrounded by computers. My Dad had a Sun Microsystems computer in his office going back as far as I can remember, but despite their ubiquity I was never even remotely interested in learning how they were built or controlled. I followed in my father’s footstep, singular: I was captivated by the idea of computer games. I say “idea of” because for years, the best “game” available involved releasing crudely animated roaches onto the desktop; the goal was to “crush” them as they scurried from one hiding spot to another. I was beyond terrible at this game. I lacked the motor control (and, let’s be honest, the patience) to figure out how to track and trap them. As a result, my father often came back to a desktop positively swarming with e-roaches. (Watching him squish them effortlessly and handily, far from frustrating me, made me admiring and hopeful. Maybe I too would someday become a carefree exterminator of computer bugs.)

The Sun was the only machine in the house I could actually use, albeit in a limited capacity – but it was far from the only hardware in the house. My father was a hardware hobbyist of the old school, always building and tinkering and, above all, collecting. Dad’s “Computer Junk,” we called it: metal shelving units stacked with monitors, peripherals, half-empty towers. It was all so interesting, so exotic, and it made me feel special – then, and more and more as years passed and computers took on more and more significance outside my basement – that I was close to all this, and that my Dad knew what all these gemlike, inscrutable items could do once brought together.

But I never wanted to do that myself. This feels lame to admit now, looking back. I wish I’d felt more of a draw towards it – but that’s just not how technology spoke to me. I was more creative, intuitive, bookish; and in those early years, there wasn’t a clear path for connecting my hobbies with my dad’s hardware.

My dad’s hobby kept us at the (relative) cutting edge of personal computing developments, but he was always very deliberate about teaching us to be critical and effective users of these systems, even if we didn’t understand the hardware behind them. I was 12 when my dad taught me the proper way to compose a support request – be specific, remember what you did right before the error, provide information on any error messages received, etc etc. He presented it as no different than learning proper phone manners, or how to open a bank account – just a skill you need to function efficiently in the world.

My father called these lessons “computer literacy.” Being computer literate didn’t mean knowing everything about computers (which at this point it was plain I never would), but knowing enough about their basic workings to troubleshoot simple issues yourself and, failing that, to know how to request help correctly. These lessons made a deep impression; I write every support request as if my dad is looking over my shoulder. This was arguably my first rhetorical lesson about writing with/about technology: consider your audience and purpose, and write so that those knowledgeable people have what they need to fix your shit. Decades later, this has evolved into a cornerstone of my teaching.

The cornerstone of my research, by contrast, came from something I discovered on my own: participatory media (or what passed for it in 2001). My first experience using a computer to interact with digital strangers was through a website called the Brunching Shuttlecocks. I don’t remember exactly how I found the site originally, but I remember loving it immediately. Very much a precursor to the web of today, Shuttlecocks featured a range of recurring posts – analyses of movie reviews posted by a far-right religious website, simple comic animations starring cartoon versions of the site’s creators, comical “Ratings” of random stuff…boilerplate stuff now, but back then it felt novel – and, in those early days of the net, intimate. When things were published, there was the sense that they were *for* regular readers like me – and the small scale of the digital world back then meant I could know those other regulars too. And thus I joined my first digital community.

This was a major moment in my technology literacy story, as it turns out. For one, it’s where I first engaged with the type of activity I’d go on to study for my PhD – that voluntary writing between putative strangers that’s driven only by shared interest and digital friendship. It’s where I found out about Kingdom of Loathing, a delightful stick-figure MMORPG that I played on and off for nearly 15 years and which also heavily influenced my eventual research (more on KoL later). It’s where I saw firsthand the power of digital interactions to create genuine bonds between people. I was young – well under 18 when I first joined – and that youth plus my natural shyness in new environments kept me at the margins. But even so, I was arguably more active with the Brunchmas than I have been in any participatory community since. I’ve never reflected on this before, but I really was so lucky with them, to stumble into such a good first experience. The value they placed on their relationships with each other was obvious; even though I never attended the yearly meet-ups or became personally close with anyone beyond the bounds of the forum and its games, it was clear to me that a digital beginning had no bearing on a friendship’s potential. The Brunching Forum was also the first of many signs that participatory internet writing could be as rich (and even “correct”) as any other kind. Now, I have a name for what I was seeing in action: discourse expectations. Back then, I just thought everyone online wrote with mechanically perfect wit.

For a long time after this, my technology literacy developed mostly through gaming. We never had a console (anathema to my PCMasterRace father), but video games were a staple family activity in my house. Having cut his teeth playing Zork collaboratively with my mother in college, my Dad viewed all adventure games as potential family pursuits. And so my best gaming memories are not solo, as you might expect, but collective. My dad, my brother, and I huddled around the PC in the upstairs office, making our way through the Great Stygian Abyss (Ultima Underworld) or the planes of the blackrock gem (Ultima Underworld 2) – this is when I first saw the power of games to make you learn them. The complexities of the skill trees, the casting systems, inventory management: these things were impossibly complicated to my 9-year-old self, but as we passed through the game’s world they snapped into place. When Ultima Underworld 2 rolled around, I entered already knowing those tropes, understanding what the game expected of me and how to work within that system.

And then, in 1993, there was Return to Zork. This game, more than any other, was special to my family. Zork was the game that drew my father toward his chosen field, and the first he played with my mother; unwrapping this new chapter was the highlight of our Christmas. In an unprecedented move, the adults set up the PC on the dining room table. All of us – kids, mom, dad, and my uncle – marveled at the then-stunning motion capture graphics and agonized over the puzzles. It’s a great game, even without the nostalgia factor, and one that still occupies a special place in our hearts. It’s right there in the inscription on the locket my parents gave me at my PhD graduation: “Here’s to us.”

To be continued…

Courses I’d be into Teaching

Coding and Conventions for Online Writing (200-300 level)

The rise of new media has had a profound effect on how and what we write – and professional writing is no exception. A successful digital writer does a lot more than cut and paste – they use uniquely digital resources to power up written language, creating texts that can speak directly to their online situations. This course will give you a strong set of digital composing strategies and teach you how to use them effectively alongside the ones already in your writing toolbox. You’ll research the history, conventions, and trends of online writing within a field or genre that interests you, and explore how individuals and organizations construct and distribute knowledge within electronic spaces. You’ll practice composing effective digital texts in a range of genres and styles, and complete a collaborative online project suitable for inclusion in a professional portfolio.

This course is designed around the belief that all digital code, including markup languages like HTML, is rhetorical – and that understanding it is therefore an essential part of effectively critiquing and composing digital texts. As such, you’ll practice the basics of composing with markup languages throughout the semester, and learn how to use these simple techniques to boost the rhetorical impact of your written language. It doesn’t take a computer science degree to write with code – just a grasp of the general principles and a willingness to seek out and experiment with more complex ones as the need arises.
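A tiny, hypothetical example of the kind of rhetorical markup choice I mean – the same sentence, marked up two ways:

    <!-- <em> marks simple stress: a nudge to the reader -->
    <p>The revised draft is due <em>Friday</em>.</p>

    <!-- <strong> marks strong importance: an alarm bell -->
    <p>The revised draft is due <strong>Friday</strong>.</p>

Browsers render the first in italics and the second in bold by default, but the real difference is semantic – stress versus importance – and choosing between them is a rhetorical decision, not just a technical one.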

 

Video Games and Learning (Subtitle: Everyone I Know Works in Digital Learning)

Video games are a powerful way to engage and inspire learning. They can also be a powerful medium for storytelling – one that many digital literacy scholars believe helps to drive that unique learning engagement. What does the power of video games to engage and instruct us mean for society? How might that power be used to transform learning both inside and beyond the classroom?

This course is designed to explore these questions the same way video games do themselves: through interdisciplinary, multimodal learning. We’ll read work from prominent games+learning scholars, and play a wide range of games – mostly independent or open-source ones, but a few more prominent titles as well. Thanks to the magic of Skype, we’ll be able to talk directly with the designers of some of these games, as well as with other digital learning professionals – creating a unique chance for open dialogue between students, educators, and industry professionals. In addition to writing about the course’s ideas, you’ll also collaborate to produce games of your own using open-source platforms like Twine.

 

Opening the Archives: Digital and Cultural Logics

“The digital is the realm of the open. Anything that attempts to close this space should be recognized for what it is: the enemy.” – The Digital Humanities Manifesto (UCLA 2009)

This common view of information access as a moral imperative has led to some valuable and exciting digital knowledge projects – including Google itself. But this view can also blind us to the political and cultural implications of this openness. The information within those open-access bytes was created in a local context – one that isn’t necessarily compatible with the universal access missions that characterize digital curation today. How can we use digital technologies to unite people through information while also respecting their differences?

This class explores this question in both theory and practice. You’ll learn about theories of digital curation and information design. We’ll examine both the history and the results of the universal access movement, and consider what these digital advancements mean for the groups whose knowledge may eventually become part of these projects. Then, in the second half of the course, we’ll work with a pair of [University X] digital archivists to create a digital archive for a local community partner – one designed to respect their stories as well as preserve them. Partners for similar projects in the past have included community literacy programs, housing and farming cooperatives, and the Southern Wisconsin Hmong Organization. [inspiration shoutout: Kimberly Christen Withey and her amazing C&W keynote this summer]

Image Credit: Kentucky Route Zero, Cardboard Computer

5 for Execution, 8 for Reflection: Casual Composing with Visual Rhetoric

Disclaimer/Statement of Self-Awareness: The image post I discuss in this entry is not, in immediate hindsight, as funny or clever as it seemed at the time. Don’t judge. We’ve all been there.


So I’m finishing up my final edit/proofread of my forthcoming College English piece. Since CE is a legitimate publication (unlike my advisor’s inbox or this blog), this job required me to break out the full MLA Handbook. Luckily, as a newly minted MLA member – thanks, Vancouver! – I own one of these, both hard copy and online access. All I needed in order to view the guidelines onscreen as I edited was a code from the back of the hard copy.

This code, amusingly, is printed under a thin layer of foil, scratch ticket-style – a-like so: [photo 1]

This gave me the unique and somehow appropriate feeling of winning a guide to a formatting style that I had already paid $80 for – yay! It was the opposite of ironic – exactly what I expected it to be in every way. But it was also funny (at least to me), and so I snapped a picture for my brother, the only person I know who buys scratch tickets unironically.

#stylewin

I’d been looking for an excuse to buy a multi-photo framing app for Instagram, so 99 cents later I was deeply engaged in 1) experimenting with ways to convey the humor and sarcasm through image sharing and 2) wasting a lotta time. What job talk? With PicFrame as my composing tool, the choices at hand were about content and quantity. Which pictures sell the joke best – and how many?

I started with just two – the front cover of the guide and the scratched-off code in the back – but it felt flat. What I wanted, I realized, what I was imagining in my head, was a type of photo post I often see on Tumblr: a series of three or more pictures where at least two are progressive zoom-ins of the same shot. (Sort of like in this meme.) Since I wanted a single post suitable for both Instagram and FB, though, that format didn’t quite fit. But it sent me checking out four-image frame layouts that created more of a “journey” through the MLA Scratch-Off Experience of 2015:

[photo 4] [photo 3]

I liked the “journey” built by the four-image layouts, but without more zooming power it still fell flat. And even without going for zooming effects, four images were too many for Instagram – too busy and cramped, at least for what I wanted. It also created a dull impression, since very similar images inevitably ended up side by side. Very “meh.”

[photo 2]

The image on the right is what I ultimately posted to Instagram (and Twitter?): the page that started things off, the hard copy I was instructed to find, and the scratched-off result (with quarter for added lottery ticket effect). Not a stuck landing, still not doing what I wanted it to – but there’s only so much procrastination one can justify. That said, this experience opened up a lot of in-action reflection on what (and how much) goes into even the most casual visual/digital composing.

Even though I try to complete the NM/MM assignments I give my students alongside them (if not beforehand), it really is different when you’re working towards your own specific goal – your own vision, even. Sure, “vision” is a dramatic way to refer to an Instagram post. But it’s also a better way of explaining what was going on in my head as I tried and rejected those various options. I was going for a specific effect, I had a rhetorical style (strategy?) in mind, and I kept working at it ’til I found something that matched. Or more accurately, until I ran out of time and forced myself to settle. Goldilocks I am not: I can’t always get it just right. Especially when I’m composing with non-native rhetoric.

I’ve got two main takeaways here:

1) I want to do more digital and MM composing of my own, both casual and professional. It’s fun for its own sake, it’s got the extra meta-fun layer, and it is so damn satisfying when you figure out how to achieve even a small part of that vision, whatever it may be. The blogging I did all summer scratched this itch pretty well for a while, and getting back to it is a good first step – but I want to ultimately create some kind of professional artifact, and/or one that blends personal and professional (as the blog does), that lets me experience new and reflective composing processes like this one.

2) Using digital and/or visual rhetoric to achieve a goal is different from using only (or primarily) language. And it doesn’t take a serious or large-scale goal to experience that – it’s visible in even the simplest, most casual acts of writing. Which brings into focus something I’ve been groping towards in my dissertation stuff recently – the value of casual online composing for writing instruction.

…brings it more into focus, anyhow. I’m not quite finished processing what I want to say about new (“new”) rhetorics and casual writing. But it feels like a sign that working to bring that writing – casual and often painfully superficial-looking, to both us and to them – into the classroom does have potential value. That starting small might be worth a look. That starting by showing students the rhetorical choices they’re already navigating on a daily basis, with barely a thought, might facilitate a unique opportunity for transfer and conscious composing.

To be continued in my dissertation conclusion!


 

Micro-Fretting about Hyperlinks

I’ve been thinking a lot about hyperlinks lately – very 90s of me, I know. It started with the blogging, actually – as I’ve written these posts, I’ve realized how much I like hyperlinks as composing resources. I can cite without an unsightly in-text spelling out of the piece in question. I can include pictures that enhance the writing in some small way (usually visual jokes, but sometimes something more substantial) but don’t warrant in-page inclusion, or would be too disruptive if displayed within the post itself. I can make it easy to find more info on things I’m discussing directly, or even things I mention tangentially that might be of interest to specific readers.

That’s one of the biggest – or at least the most obvious – draws: it makes things easier. Easier for me – throwing in a hyperlink is almost effortless in the WordPress visual editor (though it would be even easier if it didn’t automatically copy text as you highlight it, meaning you have to go copy the link after highlighting the relevant text). And easier for readers – the post is unbroken, and additional materials are a simple command-click away. It feels elegant, both formally and rhetorically. It opens choices for me as I write that wouldn’t exist otherwise.

It’s the choices piece of that I’ve been thinking about lately. Chiefly: when linking to outside material, how do I decide which words should become hypertext? This is a small decision, but one I find myself consciously considering all the time without any real sense of what ought to be guiding an effective choice. I want the power hyperlinks bring – but what’s the best way to present them within my writing?

John Slatin put his finger on this payoff/problem combo nearly 25 years ago – hypertext is great for pulling in lots of stuff you couldn’t include otherwise, but it’s a very imprecise tool for conveying the relationship between the linked material and the ideas of the text. Now, he’s talking more about how hypertext works in the complex, multi-path works that enraptured early digital media scholars than about my very direct and relatively linear blog links. It’s pretty obvious what the linked picture in my post refers to regardless of what the exact link text is. But the exact text matters. At least I can’t overcome the feeling that it does.

Here’s an example, from my recent screencasting post, of what I’m talking about: in the second paragraph, I wanted to link a picture of a typical script for my screencast responses. The sentence where I wanted to include the link reads, “After reading the draft a few times, I compose a little “script” on a Sticky note of the main points I’m going to make.” What’s the most rhetorically effective and/or elegant hypertext version of that sentence? (Linked text marked in [brackets].)

  1. “After reading the draft a few times, [I compose a little “script”] on a Sticky note of the main points I’m going to make.”
  2. “After reading the draft a few times, I compose [a little “script”] on a Sticky note of the main points I’m going to make.”
  3. “After reading the draft a few times, I compose a little [“script”] on a Sticky note of the main points I’m going to make.”
  4. “After reading the draft a few times, I [compose a little “script” on a Sticky note] of the main points I’m going to make.”
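(For the record, in raw HTML the entire difference between these options is where the anchor element opens and closes. A sketch of #1 and #3 – with a placeholder standing in for the image’s actual URL:)

    <!-- Option 1: the link spans the whole clause -->
    After reading the draft a few times, <a href="IMAGE-URL">I compose a little “script”</a> on a Sticky note of the main points I’m going to make.

    <!-- Option 3: the link spans a single word -->
    After reading the draft a few times, I compose a little <a href="IMAGE-URL">“script”</a> on a Sticky note of the main points I’m going to make.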

The third option feels obviously weak to me because the link is so small – harder to click, less noticeable. Repeated one-word linking also feels almost “twee” to me; and it provides the reader with less information about what exactly is being linked, making it more work to place the hypertext relative to the surrounding context (though in this case that’s not much of a problem). The fourth option is the most complete description of the linked material – a script composed by me using the Sticky Notes program. But it’s so long – its length suggests an importance for the linked picture that’s out of proportion with my intentions.

In Goldilocks terms, #2’s visual length feels “just right” in proportion to how important the image is to my rhetorical goals for the sentence. But I chose #1 instead – and this is where my ability to articulate the rhetorical choices at work breaks down most. Why did I include “I compose”?

Slatin says that rhetorical hypertext guides the reader towards “the pattern which connects” – the organizing idea or value of the text as a whole. Perhaps this is why “I compose a little script” felt like the best rhetorical choice – it emphasizes that both the linked picture and the text as a whole are making a point about the act of screencasting. Now, I make a lot of other points in the post about it too – it’s not my tightest piece of writing. But looking back at the notes I used to put the post together, taken during the week or so when I was working most actively with the method, it’s clear that what I was most intrigued by was the effect the act of constructing a screencast had on my practices as a tutor. Is this the “pattern which connects” everything in the final post? Not really, no. But it was, semi-consciously, the foremost idea in my mind at the time of writing.

So maybe that’s a starting point for me in making these decisions: choose the text that best contributes to your most important idea, or an idea that you’re trying to push forward with both your main text and your link. I was excited about the tutor’s role and practices in this process when I made this link; including “I compose” added emphasis to that idea without also adding undue bulk. And so it felt like the best – the “right” – option.

And as it’s virtually certain that someone (or several someones) has written insightfully about hypertext much more recently than Slatin, someday, when the thick of the job market is behind me, I can flesh out my decision-making with their work as well. And then I will be a linking powerhouse, an unstoppable rhetorically integrated digital force.

What Games Don’t Teach pt II: Controller Woes

When Matt and I first started talking about videogames and education, one of the things that came up most prominently on his side was the idea of instructor illiteracy. Even if you can sell an instructor on the value of videogames for their teaching, there’s still the pragmatic obstacle posed by how foreign the medium is to many. Yes, teaching is a learning process for both students and instructors, admitting novice status can be a learning moment, etc etc. At the end of the day, teaching something you aren’t comfortable with yourself is much harder, and much less appealing a use of personal resources, than teaching something you are. Such lesson units are challenges we actively set out to overcome, rather than accessible entry points for busy instructors on the fence.

I proposed a panel recently about this topic – how to present videogames to instructors (specifically in composition) as an effective learning tool for their classrooms. In putting it together, I thought a lot about what the obstacles to videogame pedagogy might be for those instructors.

Being new to videogames brings a lot of stuff to learn, some of which I discussed in my last post: inventory management, puzzle-solving strategies (try everything with everything!), conventions for moving the plot along (talk to everyone you know!). But I think some of these new skills pose higher barriers for bringing videogames into the classroom than others. Adapting to the ways games signal narrative significance, for example, is a learning process we’re used to; we’re practiced at picking conceptual patterns out of the noise. It’s the backbone of most of our training, both pedagogically and within our specific subjects. Not to suggest it’s a piece of cake picking up these tropes. But they are, I would argue, relatively easy for even a new-to-gaming instructor to recognize. And that recognition, when combined with a little faith that there’s something to this “games for education” stuff, can lower the access barrier enough to jump in.

But learning to deal with a modern controller? Oh hell no.

Physically mastering a controller has proven, for me, to be by far the most frustrating aspect of learning to game. Learning to drive an *actual* car feels like it was easier than learning to drive in LA Noire. It’s not intuitive at all! So many buttons! At this point, I’ve essentially given up. This difficulty mastering (or even adequate-ing) something that literally millions of people can do effortlessly is frustrating and embarrassing. So I can’t solve this puzzle – big deal. That’s what puzzles are for. The fact that manual un-dexterity prevents me from solving this puzzle despite knowing what to do, on the other hand – that’s so frustrating it almost cancels out the pleasure of play.

This is pretty much how I feel about controllers.

Learning to use a controller is not part of the fun and engaged learning that makes videogames so appealing to scholars and educators. It’s more like being presented with chopsticks for the first time when you are very very hungry, and the meal in front of you looks so good: you just have to master this one little set of movements to start enjoying yourself, but the stakes for that learning are high – your shirtfront; your character’s life. (Now granted, I can’t really use chopsticks well either so, you know, make of that what you will.)

My point here, though, is to ask how the physical difficulty of learning to play videogames might affect their adoption by instructors, especially those who are newer to digital teaching practices overall. This isn’t one of the obstacles I considered when writing my proposal, and that oversight seems so glaring now. The first time you pick up a controller or set your hands on WASD is inevitably awkward and distracting. That memory is very fresh for me, in a way I’d cautiously guess it is not for many of videogame pedagogy’s advocates. Fumbling to navigate a digital world obscures its potential for both enjoyment and learning; it replaces immersion with self-consciousness. If this is all you experience of videogames, their pedagogy is going to be a hard sell.

I watched this self-consciousness play out during #eng177’s lectures on Braid this spring. Jim brought the game up on the main screen a few times for live playthroughs of the sections we were discussing, by either himself or a volunteer student. While both he and the student who took over did an admirable job staying cool and navigating through the levels, it was obviously a bit nerve-wracking for both of them. Even for experienced players, controlling a game is a delicate process, one that’s quick to be thrown off by outside factors like the pressure of an audience or the need to narrate your actions as you go. Only two students were willing to step up at all, with one of them shouldering most of the playing throughout the unit. Doing anything in front of 120 peers/students is nerve-wracking, of course – but this experience suggested it’s particularly true of liveplaying.

Game controls have a steep learning curve: they’re foreign and challenging at first, but you catch on quickly. The trouble with steep learning curves, for those long past them, is that because the climb happens so quickly, it’s easy to forget how hard the initial experience can be. This probably doesn’t matter so much for teaching your mom or your Amish buddy to play Assassin’s Creed – but for the instructor testing out a potential teaching tool, that learning curve matters. The resemblance of digital tools like blogging to practices we’re already comfortable with gives them a relatively shallow learning curve by comparison. This isn’t a bad thing, but it is an obstacle to encouraging wider experimentation in composition with videogame pedagogy.

Those of us who believe in videogames’ potential for composition instruction need to keep this curve in mind when we evangelize. To remember to acknowledge, when we talk to curious newcomers, the initial awkwardness of the physical learning process as well as the great potential of the intellectual one. To let them know, if only in passing, that the frustration passes quickly, and when it does – that’s when things get cool. And maybe steer them away from consoles.

 

[In-post image: IncarnateFilms]

What Games Don’t Teach pt I: the Kelly Chambers Incident

Since Gee’s idea of Affinity Spaces is a major part of my dissertation’s conceptual frame, I’ve been reading a lot of stuff about the power of games as learning tools. (Also I date a Filament designer, which tends to bring with it a good dose of similarly themed conversations.) These arguments had been interesting in the abstract, but really came to life for me when I started playing myself. Because video games can be really complicated these days! Matt started me out relatively slow, with Knights of the Old Republic as my first fighting-style RPG, but even that felt dauntingly complex at the start. But the game made me want to play it despite the anxiety I felt about all this, and with a little help from Matt I learned the ropes before I was even off Taris. And now I’m playing Skyrim unassisted with only occasional anxiety about figuring out the skill trees.

Playing these games made me appreciate the conversation about games as uniquely powerful learning tools in a new and much more concrete way. Many of the games I played this past year are complicated by any definition, and yet make learning to harness and navigate those complications not only manageable but genuinely pleasurable. But in addition to teaching you the systems and rules specific to the particular game, video games also teach you how to navigate the rules of the genre overall. You learn how puzzle games work by playing a few of them, and then you have a leg up on the next one you come across. You learn that shotguns are for close range and rifles for long (or you’re supposed to – sometimes you’re an idiot and need your boyfriend to point it out to you); that sometimes you solve puzzles by trying every combination of action and object until something works; that levels and dungeons are usually one-way, and if you’re spending a lot of time backtracking through already-cleared environments you probably took a wrong turn. These conventions aren’t completely universal, of course – and expecting them to be sets you up to be caught by the inevitable moment when a game deliberately breaks them to mess with you (a practice that reached its purest and most meta form in The Stanley Parable).

But generally, by playing a bunch of games you learn not only how those games work, but what you can reasonably expect from other games like them. This is both cool and useful, as it essentially gives you a head start, even some shortcuts, for learning new games in the future. It means that when you start a new game, you can, as Gee puts it in an account of playing real-time strategy games, “already know something, have a small foot up.” Each game is “a precursor for later learning” as well as a learning experience in and of itself. When you arrive at that later learning opportunity, you’ll be able to make useful and cautiously dependable assumptions about new systems and situations even before you’ve finished learning the specifics.

Most of my game experiences (at least as measured in hours of play) are with RPGs, in a genre I’d describe as “combat/narrative blends” – they have a lot of fighting, but a decent chunk of story as well (sometimes a very good one). Over the last 18 months, I’ve experienced a lot of these “small foot up” moments Gee talks about, with the result that each new game I pick up is easier to learn – and, maybe even more important to my overall play experience, navigating their environments and situations is more and more intuitive. The best example I can think of deals with level conventions: I’ve learned that I can reasonably expect each dungeon/mission/whatever to have one relatively obvious route through it from start to finish, and how to recognize the conventional visual signals used to mark it. I used to get painfully disoriented in even simple level layouts, to the point that it was embarrassing; I’d be looking over my shoulder hoping no one was witness to my getting lost in the Presidium again. But now that I have a few more RPG notches in my belt, I have a much better instinct for the conventions and goals underlying level design, and (usually) have no trouble finding my way unless I’m supposed to. This, and a host of other accumulated lessons, suggest that by and large, combat/narrative RPGs are good at teaching about their genre as a whole as well as themselves.

Except there’s one convention that video games have completely failed to teach me – that of the time-sensitive mission.

In most RPGs, time-sensitive missions are the exception rather than the rule. Taking out the mafia kingpin, storming the lab on Virmire, starting up the water purifier, scoping out the dragon burial site for possible resurrecting dragons – no matter how high the narrative stakes for these tasks, or what your NPC companions are screaming in your ear, you’ve got all the time you need to take care of them. So check out that random hallway. Search every inch of the base. Heck, wander off on a side mission for a few days if you feel like it. It’ll be there when you get back, and no one will actually be any worse off than if you’d taken your sidekick at their word and raced through like this was all actually unfolding in real time. Unless there’s some kind of visible timer on the screen, or screamingly explicit narrative cue, there’s no rush.

This is pretty standard across the RPG genre (at least in my experience), and so presumably is something these games should be teaching players along with level conventions and appropriate small arms use. But if so, it’s in a way that my learning style doesn’t recognize, because I cannot seem to internalize it in any meaningful way. Like, at all.

My (admittedly limited) gaming lessons have consistently presented urgency as something explicitly signaled through visual markers (as in the very first Mass Effect mission, when the bombs must be disarmed before the timer reaches zero). Sometimes that marker isn’t precisely time – it might be shield strength, or a companion’s health – but it gives some visual cue that I can’t be screwing around. On the other hand, they’ve also taught me that narrative matters – that what people say and do around me has implications for what I will be able to say and do subsequently. Mass Effect and Fallout 3 taught me that the cues characters give about themselves and the stories around them are important for making decisions that build the narrative outcomes I want. Bioshock and Bioshock Infinite taught me that even seemingly insignificant background details in dialogue and story can turn out to be majorly important. These experiences have primed me to take narrative, including character dialogue, seriously. They have failed to teach me to ignore that dialogue when it’s rushing my character along – despite that being what I’d argue is a pretty necessary lesson.

Because this learning failure has a definite adverse effect on my play experience. I rush past gorgeously rendered vistas and valuable equipment upgrades. I miss entire side quests and quirky characters. I sometimes experience something akin to actual stress over reaching an objective I’ve been told is pressingly urgent, when in reality I could wander off, clean my entire apartment, have a leisurely drink with friends, and then come back and finish the mission to exactly the same effect as my instinctive 15-minute bull rush. Just a few days ago I caught myself abandoning a Skyrim questline I was enjoying immensely because a character I’d just met told me I had to meet her at some random location right away. This didn’t have any negative effect on that original questline (of course). But it had an effect on my enjoyment of the game – I felt rushed and resentful. By the time I remembered that urgency is just a narrative trapping in RPG quests, I was already at ground zero for the new quest’s first mission.

I’m not sure why I have so much difficulty resolving these conflicted learning objectives – why I can’t seem to learn that narrative tags about urgency are decorative and not declarative. I’m pretty sure it’s not a widespread problem; if it was, designers would already have switched up how they’re coding and scripting these moments to improve their games’ ability to teach both lessons effectively. I have a fairly conventional learning style – traditional school worked fine for me, I’m not unusually visual or tactile or anything like that. So the sensation of falling through a learning-styles crack (even one so laughably insignificant) is kind of a weird (and, because the stakes are so low, interesting) experience. It makes me wonder what a game would look like that did teach me this effectively. Additional tags about the importance of being thorough and exploring? A section in the tutorial mission that explicitly says “sometimes missions are timed; here’s what that looks like”? The latter seems clunky; the former could be too subtle. But then, even if such a game existed, there’s a chance I still wouldn’t learn. Because sometimes bad design reinforces the wrong lesson – and as Skinner taught us, inconsistent reinforcement is the most powerful kind. Which brings us to Mass Effect 2 and the Kelly Chambers Incident.

In its final stages, Mass Effect 2 breaks the rule that urgency gets explicit markers: once the final mission becomes available, a hidden clock starts, and the longer you wait before starting it, the more of your captured crewmates are killed by the enemy. This includes the peppy and, in Matt’s words, “really attractive” Kelly Chambers, a character who (if she survives) plays a role in the following game. A role I will probably never see, because my Shepard’s Kelly is Collector paste. I’d grown used to people telling me missions were urgent, vital, pressing; and nothing about the dialogue surrounding the final mission cued me to think it was any different in this regard, that this time it really was urgent.

So I dicked around and finished a few sidequests, and Kelly died, and I’m still super pissed about it. The ONE time my natural instinct to rush to the rescue would actually have been correct, I ignored it. Consciously. Because I remembered what the game(s) had taught me.

This seems like bad design to me. What good are stakes if you don’t know they’re there – and can’t reasonably be expected to have guessed? By changing up a basic rule of the genre without warning, Mass Effect 2‘s final mission undercuts the lessons taught by the rest of the series, and by other narrative-based RPGs as well – it lets a narrative shock outweigh the game’s consistency as a learning tool for its genre. This is not to say games should never play with or flout our expectations – not at all. But they need to do so in a way that’s consistent with the rules the game’s taught us govern its systems. In a good game, we shouldn’t be overly comfortable about what’s coming our way, but we shouldn’t feel cheated by it either.

Now if you’ll excuse me, I was told several days ago that I’m desperately needed at a dragon burial site, so I should probably look into that.

 

Tensions over Blogging

Blogs, amiright? They’re writing-based but multimodally flexible, they’ve got low entry barriers, and they’ve been around long enough that they’ve started to feel comfortable even for the less digitally-minded among us – like just another part of the written landscape. But lately, I’ve been noticing some strange tensions across attitudes towards the humble blog. On the one hand – so many pros! They’re everything we could want in a tool for teaching (and practicing) rhetoric and writing in this digital and participatory age. But on the other hand – are they really? Or, put another way – are they still?

This tension pokes through at a few points in Deborah Brandt’s thoughtful meta-post on the UW Writing Center blog this April. Brandt opens by discussing her longtime resistance to blogs, despite the advantages typically attributed to them. “Aren’t blogs among the most appealing forms of expression?” she asks. “Breezy, easy, low stakes, anything goes—an embodiment of the best democratic potential of the Internet?” But she’s skeptical of the genre nonetheless. So much so, in fact, that she devotes the rest of the entry to interrogating the potential pitfalls of such a perspective.

Brandt raises a number of important points in her post – about the dangers presented by blogs’ informal and shifting formation of author identities; about whether blogging assignments violate students’ legal right to privacy; and about the slipperiness with which an individual’s voice can be appropriated to serve institutional or corporate agendas. But it’s the initial tension she presents that caught my interest for this post: between the “default” view of blogging as low-stakes and democratic, and the more complicated, even (somewhat) negative view that lies underneath it. Brandt presents the first as the voicing of a prevailing attitude, an “everyone knows” assumption that blogs are democratic, lower-stakes, and therefore great tools for contemporary writers of all kinds. A view towards which, despite her clear and concrete reservations, she seems to still feel some attraction, even obligation – framing it as a rhetorical question for herself as well as her readers.

Lately, I notice this tension throughout conversations about blogging – academic and otherwise.  It’s rarely expressed as eloquently, and the concerns about the more negative edge of the blogging sword aren’t always in quite the same vein – but in 4Cs panels and student interviews, academic blogs and TA training, there’s an ongoing uncertainty about the value of blogging – and sometimes, more pointedly, whether or not it’s worth doing at all.

On the academic side, there are concerns like Brandt’s that the very qualities that make blogging attractive also give rise to its dangers. We already have so many ways to implicate ourselves forever through our online behavior, and for that behavior to be exploited by bureaucratic interests; blogging, these arguments worry, can too easily become a long-form and even more potentially destructive path towards the same. And there are arguments for blogging as just another path towards the echo chamber effect, encouraging writers to reify their existing opinions for an audience who already holds them. But to me, the most thought-provoking argument against giving blogging top attention among digital genres is that our students are already past it – “over” blogging (at least in the form we tend to consider it) and on to other things, to microblogging and Pinning and the visually focused Instagram. And while there’s still plenty of blogging to be found among the Comp/Rhet set, it’s definitely not an area in which everyone’s practicing what the prevailing attitude preaches. Collin Brooke recently wrote a blog post of his own about the relative dearth of digital “stock” (of which blogging is a primary element) as compared to the more ephemeral “flow” of Tweets and Tumblings. For all the popularity of blogging in the classroom, it seems like not that many of us are interested in it ourselves.

All of these arguments were mirrored, in some form or another, in my interviews with students about digital writing. Some students had abandoned blogging because they’d come to see the writing they did there as too personal to continue so publicly; some never began at all for similar reasons, saying they didn’t feel comfortable putting the things they feel and think right now out there so permanently, or didn’t see the appeal of doing so. More than one expressed skepticism or disdain for blogging by others that “just made the same point everyone else makes”; speaking to an audience of your friends, one student said wryly, isn’t going to affect anything. And while no one said anything so direct as, say, “blogging is for old people,” there was lots of implicit support for the idea that students don’t blog because they’re expending that expressive energy in other forms. Students had Tumblrs, Pinterest boards, Instagram feeds, Vine accounts – but none of them had an active blog. A few had been active bloggers in the past, but for one reason or another – usually encompassing one of the three reasons above – they’d drifted out of the habit.

Yet despite all these concerns, students felt the same conflicted pull towards the genre as those of us over in Comp/Rhet. Students who kept active blogs in the past universally spoke of the experience positively, almost wistfully at times. Blogging helped them clarify and articulate otherwise confusing opinions and feelings; it was an outlet for frustration and self-expression (emotional and intellectual). That’s not something, the interviews suggest, that they’ve found so freely in these other forms of digital writing. And for almost a third of them, keeping a blog was an attractive future project. Some even had specific ideas about what these blogs would look like – a home for an ongoing graphic novel project, a silly collection of frisbee-related jokes, a place to informally muse about all things science. These students don’t have blogs now – but the idea of one is not without interest. All of this points to the same kind of disconnect visible in C/R’s conversation: there’s something about blogs that makes us reluctant to write them off – but often equally reluctant to actually write them.

My own experience follows this pattern to a T. I was a very prolific blogger throughout college, a direct offshoot of my years of “analog” journaling. This blog was sometimes public, sometimes private, and very much a product of College Becca’s worldview. Once I got to grad school, however, this kind of blogging no longer seemed like what I wanted to be putting out in public, so I began a new blog with vague ideas of being more “academic” but no specific idea what that would look like. Without that guidance the new blog drifted into a similar vein as the old, and I largely lost interest. Every year or so I’d resolve to begin blogging about my work for real, a few times even going so far as to create accounts with a post or two, but despite this underlying feeling that I really should be blogging, it never stuck. (Until now, that is – fingers crossed.) In retrospect, I think a big part of my blogging inertia in those years came from my confusion about what it meant to transform this activity and genre that until then had been primarily personal into one that was only semi-personal. I knew blogging could do a whole range of things, but now I needed it to be something pretty specific – and making that transition was harder than I expected, harder than I felt it was supposed to be for me as, you know, a student of these issues.

When blogging was the only public digital writing game around, it made sense to use it for a very broad range of purposes and posting styles – and people did. But now we have a much wider range of platforms and genres available for expressing that impulse to share, write, and interact online. Social media updates have taken over as a way to keep social circles up to date on the minutiae of daily life; Twitter has become the central platform for self-promotion. Before, it wasn’t as necessary – or even useful – to think too much about what a blog should be – its promise was located in all the things it could be. I think now, though, it’s become very much a question of “should.” Because what this picture says to me is that we’re all aware that blogging fills a role that isn’t quite there in all these other available forms. As Robin Sloan says, and Brooke takes up, it’s a means of generating more permanent, complex “stock” – the material that underlies our online presence.

I’ve mentioned this idea of stock vs flow here before; I find it extremely useful. And I’d argue that the persistent concern over blogging – its (some would argue) disproportionate presence in our teaching and research, its distinct but distant appeal to contemporary students – is a result of blogging’s continued force as a stock-builder. We sense, consciously or not, the truth of Sloan’s reminder that “we neglect stock at our own peril.” And we sense that blogs are a good solution for that problem. But stock is harder than flow. Not just because it’s longer-form and less woven into daily routine – but because it’s a longer game, and one whose success rests on the permanence and coherence of its whole. And that makes it a harder, more conscious effort to pinpoint what we want our stock to be – what it should be.

It’s not purely a question of the individual, either – for better or worse, the majority of us are going to end up part of some industry, field, or organization, and there are some conventions and best practices that go along with that. Blogging is a great way to build our portfolios as both individuals and members of that larger (and often future) group. But that’s a specific purpose, with a particular kind of imagined audience. The work of formulating those is greater; the risks of miscalculating are larger; and it’s usually both easier and more immediately enjoyable to use our energy on one of those other forms. But at the same time – we’re glad blogging’s out there. Because we know we’ll likely need (or simply want) what it offers at some point down the line.

Which is to say, I don’t think we should write off blogging yet; I think it’s still a genre worth teaching, studying, and engaging with. But part of doing those things effectively means being aware of its limits – including the limits of its authorship and appeal among our students (and ourselves). One of the arguments for teaching with blogs as opposed to more “traditional” academic genres is that students find them more recognizable and relatable. But there’s a lot of evidence out there that this isn’t necessarily the case, or not in the way we might be assuming. At the same time, there are signs that students are interested in blogging in specific ways, in its potential for projects and writing outside the rest of their social media circles. The trick to resolving this tension, then, might be in considering blogs more specifically – as sites for a more particular purpose and kind of writing. Because graphic novels, extended frisbee jokes, and one-off academic musings need their digital genres too; and someday we’re all likely to need our names to turn up some rich stock when we’re Googled.

From the EverNote Files: A Clutter of Lists

“Recreational Lists”
Created: 6/3/2015
Notebook: BeccaLists!

Contents (titles only):

  • Comic ideas to storyboard during boring meetings
  • Technology that makes me feel like it’s the future
  • Songs that Make Good Karaoke Picks – Sorted by Venue
  • Kinds of jokes I make/hear/like a lot
  • Ways contemporary academic life resembles Victorian England
  • Amazing book titles
  • “Iconic” clothing items I have owned
  • Articles I should probably write at some point
  • Names for wireless networks I wish I’d thought of first
  • Pros and cons of major citation styles as viewed by Lazy College Senior
  • Sweet collective nouns

***

Takeaway: My list obsession transcends personal/professional divisions like almost nothing else; might be worth trying to tap into that more in my teaching? If nothing else, it would give me an excuse to make even more lists for the followup ENote: “Recreational Lists 2: Back in the Habit (of Making Sweet Lists).”

Future Action: blog post about listing as a practice (form? genre?); portable lesson plan for teaching effective use of lists in academic writing, pre-writing, argumentative writing; return to recreational listmaking as your standard work break activity

Grammar Nazis, Correctness Police

One imagined/potential goal when I designed my dissertation project was to present a snapshot of students’ process for writing casually online. So I asked a lot of questions in early interviews about practices for re-reading posts, perceptions of others’ writing, how students make quality and credibility judgments based on others’ writing, etc. And almost invariably, they brought up “grammar” as a major marker of not only credibility but overall character. “I hate it when people start using all the chat speak stuff, that just drives me crazy,” one student told me.  “No one ever corrects themselves,” said another, “and I don’t do it because I don’t want to be a dick – but I notice.” “I’m a very big grammar person,” said another, “so even if I tried I can’t just accept ‘ur’ instead of ‘your.’ It just bothers me.”

Purportedly correct alternate answer: “80% of college students”

A disclaimer here: I’m not deeply read in this area; aside from some snapshots out of the Standard English debates I don’t know very much about how this is discussed in the field at large. In this post, however, I’m focusing almost entirely on my experience.

1. They think “grammar” means “correctness.”

“I’m kind of a grammar Nazi, so I always feel like I have to make sure I have my little like, quotation marks or whatever. And commas and stuff. [I] never say LOL, never shorten stuff.”

“I’m a very big grammar person, so even if I tried I can’t just be like ‘ur’ instead of ‘your.’ It just bothers me.”

“I hate it when people start using all the chat speak stuff.”

This shouldn’t surprise me, but somehow it did. I think it’s because while both grammar and correctness qualify as relatively petty obsessions in my mind, an obsession with correctness in the context of casual online writing just seems ridiculous. Like using your finger to hold back a dam that broke three months ago. And I think students know – they must know – that a misspelling is not a grammar error. But it sounds less crotchety to say you like good grammar than to say that you expect everyone to always be correct. But this also points to something else – that students don’t seem to actually know what “correct grammar” means. Which perhaps goes some way towards explaining why it’s such a source of worry and preoccupation. It’s a cycle: they know they are expected to have “good grammar” but aren’t sure what that means –> they latch on to the markers of correctness they *do* know and hold on like the dickens –> they grow defensively critical when others don’t do the same, because it adds credibility to their interpretation. Which brings me to the next point:

2. Grammar/correctness is a judge-y subject.

“At a certain point you shouldn’t judge that but at the same time you do – it does definitely affect your opinion of that person.”

“Ohhhh my gosh. Take 2 seconds to proofread it and just correct yourselves – it’ll make you seem so much smarter and will get your argument over, make it seem more valid.”

“If someone has poor grammar or punctuation and stuff, any of that kind of thing – you know, I hate to say it but yeah, it reduces my opinion of them. People who can’t formulate a proper argument or can’t logically defend it.”

This last answer is perhaps the most revealing, as it shows most directly the implicit association between basic errors and intellectual ability hiding in all the comments I heard about grammar (“grammar”) judgments. The overall light tone and liberal use of the “grammar nazi” appellation when talking about grammar obsessions indicate that students are, to some extent, aware of the potential silliness of focusing too much on these relatively minor issues. But alongside that are quotes like this, which show that even if they’re aware there’s a joke to be made at their expense, they are also serious about their expectations. They do think that correctness signals something deeper about a writer, and they make judgments accordingly. These judgments aren’t limited to faceless internet strangers, either. Students admitted that they apply them to their real-life friends as well – people they presumably know firsthand are not unintelligent. “My friends, if they say something hilarious and [I] want to retweet it but they use “u” instead of – I will NOT retweet “u.” It does bug me, yeah…I definitely look down if you text me just horrible “r u” kind of stuff. And I mean, I’m not going to say anything about it but I’m just, it just, I just feel like I’m texting an 8th grader.” And another student said that while she wouldn’t go so far as to “unfriend” someone who makes mistakes, “I do look at it differently and am like “really?” Especially if it’s a friend that I know is really smart and would get mad if I ever said something that was wrong, politics-wise, and they spell “your” wrong.”

My data doesn’t say anything about whether or not the average correctness of student writing has changed with the advent of digital communication. But it does seem to say, as evidenced by the quotes above, that they don’t think it should be changing. It suggests that students continue to hold the same kind of assumptions about Standard English proficiency (and deployment) as the generations preceding them. There’s some slippage here, mostly in regard to genre – for example, most students said that things are a bit different when it comes to instant messaging (though not, notably, texting). But overall, the influx of casual digital writing opportunities does not seem to have done much to change dominant notions about the link between correctness and quality. The fact that these judgments hold for real-life relationships and not just online strangers, in particular, pokes at the popular lament that online informality is eroding people’s standards. If these students judge not only strangers for basic errors but those they’ve chosen to be friends with in real life as well, then Standard English conventions clearly still have some real strength left.

3. While “grammar nazi” mindsets are dominant, they are not universal.

“It does definitely affect your opinion of that person – but at the same time at a certain point you shouldn’t judge.”

“There are the grammar trolls looking, looking for those things. You can be perfectly credible, everything cited, you can have one thing – forget a punctuation, misspell a word, first comment after that’s going to be ‘someone didn’t go to school, [someone] can’t spell anything.'”

“A lot of people go on [FB] just to correct people, like if someone wrote a sentence wrong in their status or something. Grammar nazis. I don’t like to do that to people – as long as you can understand them, I don’t see why it matters so much.”

Unlike the other sections, these three quotes are the only ones I have as evidence for this point. But all three of them point to a different reasoning for breaking with the grammar/correctness party line. The first one (referring to posts on Reddit) suggests that this student recognizes that there’s not a direct correlation between correctness and intellectual capacity. The second one is in the same vein, but from the other side – it assumes that errors happen even to the best of us, and that it’s the people who revel in those errors who are deserving of judgment. And the final one is simply indifferent – to her, “your” vs “you’re” is the opposite of a big deal.

Though it wasn’t a conscious hypothesis, in retrospect this is the view I expected students to take. Because it’s the one I’d like to have, the one my ideal self would espouse and internalize. Because I firmly believe that, at least in the context of Facebook and Reddit, basic errors shouldn’t matter. While I haven’t kept up with contemporary work on the subject, the reading I did in coursework and prelims about Standard English and the role of conventions early in the field’s history made a strong impression on me. But I hate typos and misused homophones. I can’t not notice them, and they do irritate me. This makes me feel guilty and conflicted. And it makes me more than typically interested in how my students talk about this issue, because I sympathize with the tension, confusion, and conviction that pepper their responses.

What does all this mean? I don’t know, because it turns out I’m writing an entirely different dissertation. But it’s something I hope to come back to. Because it feels significant (if only to my personal scholarly development). This is data I’d love to get in conversation with the theories and experiences of others.