Friday, June 26, 2009

CI 5472 Post 8 - Beam Me Up, Scotty!

**RESPONSE ONE: "Conversational Agents and Their Longitudinal Affordances on Communication and Interaction"**

Before continuing with the remainder of my post, please view the video that I spliced together, embedded below. Relax, it's very short (clocking in at just 1 minute, 25 seconds), and you'll have a good laugh. NOTE: As it's a pretty large, high-quality file, you may have to give your computer a couple of extra seconds to load it before you can view the video free of stutter.



Even with Star Fleet's hyper-advanced computing technology (alright, this last post proves I'm possibly the nerdiest person in this class!), the individuals of the 24th century echo complaints and behaviors identical to those of the 21st century participants discussed across the two articles assigned for today; specifically, the complaint that the artificial intelligence (AI) of pedagogical agents is "inept" / has limited capabilities, and the resulting user behavior of often extreme frustration and even "abuse" (depending on how you conceptualize computer "abuse") if the AI fails to deliver the desired response. However, as opposed to the fictional 24th century Star Fleet officers who you might say have the "right" to be frustrated when their hyper-advanced technology isn't able to correctly respond to basic commands that even current, real 21st century computers can handle, the authors of the article "Conversational Agents and Their Longitudinal Affordances on Communication and Interaction" argue that individuals may hold "inflated expectations from agents that are presented in human form" (262). With all "Trekkie" humor aside, I feel as though expecting MUCH, IF ANYTHING beyond a simple response is EXTREMELY "inflated" / unrealistic given the current state of computer-based artificial intelligence our leading minds have achieved. In short, before ANY kind of artificial intelligence can be productively used in the educational setting, I argue that both instructors and students need to realign the expectations they bring to ANY type of AI. However, such a realignment of expectations is MUCH easier said than done, and can ONLY be accomplished through the detailed study of not only how AI technology works, but what the medium is capable of given the "constraints" of our CURRENT / NEAR FUTURE level of technological advancement. Otherwise, users will of course be met with nothing but frustration if they don't understand that current / near future AI technology CANNOT fulfill what they might expect.

Although slightly off on a tangent, I liken my argument to teaching my ex-girlfriend Amy how to drive a manual transmission. Because she had ridden in / driven vehicles with automatic transmissions her entire life, I argue that she more or less expected the car to shift "itself." However, after spending about 4 hours trying to teach her, witnessing her kill the car about 1,000 times, and ending up with an extreme case of whiplash, our teaching session ended in utter frustration and disappointment. Then, I had an idea. I thought to myself, "What if she could see / understand the mechanics of how exactly a manual transmission works? I wonder if that would help her know what to expect from this car and figure out how to drive it." Going on my hunch, I logged on to www.howstuffworks.com (an amazing website that shows the reader in both text and video the mechanics, electronics, chemistry, biology, physics, general theory, and so on behind how a variety of both real AND fictional devices work / are supposed to work), and looked up material associated with the search term "manual transmission." We then sat down and read the articles / watched the accompanying videos in which the authors both explained and showed us what was going on inside of a manual transmission as it worked. We were shown how the flywheel of the engine and the clutch plate are apart when your foot is on the clutch, and how they are joined together as one when your foot is off of the clutch. "Oooh, that's how it works," Amy responded. "So one part is always spinning, and then you gradually ease the other part in with your foot until the rest of the car 'catches'?" Eureka! We then got back into the car, and after very nearly killing it (not a full kill, mind you), Amy was driving the manual transmission like a pro. In fact, about ten minutes after her first successful take-off into first gear, we were already on the highway in overdrive... I was absolutely terrified.

My point is this: until Amy understood the "constraints" of the device as based on the "rules" of its core design, she had absolutely no clue what to expect, let alone how to go about maximizing its productivity as based on her needs / the task at hand. Unfortunately, the incredibly advanced theory behind AI technology / design is nowhere near as simple as the workings of a manual transmission. In the authors' article, even the most SIMPLIFIED explanation of AI technology is incredibly complex:

"... we abstain from using the term intelligent because the world intelligence signifies a higher-oder cognitive ability. Even though the conversational agents we employ may appear to be intelligent, in actuality, they are not - the software is simply trained to match comments to responses. The interaction between student and agent is not pre-determined, but shaped by both student comments and agent responses. For example, if a user asks the agent, 'What does it mean for a website to be accessible?' the conversation will center on this particular question. If the user then asks a further question about a specific aspect of the agent's answer, the conversation can be said to have been influenced by the agent response but would still be defendant on the user's subsequent comment. Using an artificial intelligence engine... student comments are analyzed into meaningful segements and, through an iterative algorithm, are matched to responses" (252-253).

Although this is one of the more basic descriptions of underlying AI design, this explanation is still incredibly complex. However, like the manual transmission, there are similar "constraints" placed on the overall function of an AI as based on the programming / "rules" governing its core design. In other words, the better / more advanced the underlying design, the more capability the AI will have. The less advanced the underlying design, the less capability the AI will have. As such, as Doering and Veletsianos point out, an AI is far from intelligent. Instead, an AI is only as intelligent as the "rules" on which programmers base its design.
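To make this "constraint" idea concrete, here is a minimal, purely illustrative sketch of what "matching comments to responses" might look like. This is my own toy example in Python, NOT the authors' actual engine; the rule table, keywords, and function names are all my inventions. But it shows how an agent's apparent "intelligence" extends exactly as far as the rules its designers wrote, and no further:

```python
# Toy illustration of comment-to-response matching (my own invention, not the
# authors' engine). A real conversational agent parses comments into segments
# and matches them with an iterative algorithm; this sketch just uses keywords.

RESPONSE_RULES = {
    # keywords the "agent" looks for -> canned response it returns
    ("accessible", "accessibility"):
        "An accessible website is one that people with disabilities can perceive, navigate, and interact with.",
    ("screen reader",):
        "A screen reader converts on-screen text into speech or braille output.",
}

DEFAULT_RESPONSE = "I'm sorry, I don't have a response for that. Could you rephrase?"


def respond(student_comment: str) -> str:
    """Match the student's comment against the rule table and return a response."""
    text = student_comment.lower()
    for keywords, canned_response in RESPONSE_RULES.items():
        if any(keyword in text for keyword in keywords):
            return canned_response
    # No rule matched: this is where the design "constraint" shows itself.
    return DEFAULT_RESPONSE


if __name__ == "__main__":
    print(respond("What does it mean for a website to be accessible?"))
    print(respond("Beam me up, Scotty!"))  # falls through to the default response
```

Ask this toy agent about website accessibility and it sounds almost clever; ask it to beam you up and it hits the wall of its design immediately. The real agents in the studies are vastly more sophisticated, but the same principle of rule-bound capability applies.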

However, I argue that MANY, MANY instructors AND students DO NOT understand this idea of inherent "constraint" as based on design, and instead think that ANY AI is a supercomputer from the future that can assist with ANY task at hand, regardless of the nature of the task. But, as I've been discussing, the nature of the task at hand can be either compatible or incompatible with the function of the particular AI as based on its core design.

As such, just as I argue that EVERYONE should at least learn the basics of how a manual transmission works because they might have to drive a stick at some point in their lives, I argue that EVERYONE, both teachers and students, should be required to learn about basic programming design for a number of reasons. First, as I've bantered about throughout this entire post, learning about programming design, as well as the "constraints" that design places on program function, will help individuals realign their expectations toward not only pedagogical agents, but technology in general. Although Star Trek is an extremely cool show, the holo-decks, transporters, food replicators, phasers, and warp drive of its universe are either a LONG way off or an absolute impossibility. Second, whether we like it or not, and as cliche as this sounds, technology IS the "way of the future" (ugh, I feel ridiculous saying that). But, it's true. The better teachers understand the underlying design(s) of technologies, the better they will be able to create relevant, productive, and practical technology-based assignments that not only "work" in the classroom, but truly advance learning. The better students understand the underlying design(s) of technology (and I argue that there is otherwise not much encouragement for them to do so, as they are now "born into" technology and take it for granted), the better they will be able to use it to do the types of inquiry that we expect of them in our classrooms.

Lastly, although this post might sound a lot like a general lecture on teachers and students sharpening their general "digital / computer" literacies, I am arguing for more than that. As I mentioned in the previous paragraph, I argue that both teachers and students take technologies for granted as they are becoming / being used not only as a means to enhance a particular task at hand, but as inherent to that task (in other words, no technology = no task at all!). Although the skill of learning how to embed a video on a web page or use Photoshop is one thing, understanding the core "rules" on which these programs are built is something completely different, and frankly, more difficult. However, until we put time into learning at least a basic understanding of these core "rules" that "constrain" the use of EVERY new technology that is released from now until the holo-deck, we will continue to have "inflated" / unrealistic expectations for what technology can do for us. As such, we run the risk of overlooking not only potential educational gains, but global gains, if we scream at our computers to do what they by design cannot, vs. productively using them for the purposes for which they were built.

----------------------------------------

**RESPONSE TWO: "When sex, drugs, and violence enter the classroom: Conversations between adolescents and a female pedagogical agent"**

After reading the longitudinal study discussed above, the following article initially creeped me out beyond belief. If you recall my manual transmission analogy, at no point did I tell Amy not to be "too hard on" / "abuse" the car during her frustration. Understanding that the car was an inanimate object, I was more than happy that Amy took her frustration out on it rather than on me sitting next to her in the passenger seat! However, Doering, Scharber, and Veletsianos appear to invest "Joan," the pedagogical agent described in their study, with much more human-like qualities than I do my car. Although the qualitative data gathered was possibly referred to as "abuse suffered at the fingertips of... social studies students" as a sort of rhetorical device to hook the reader into the study, I initially found the conceptualizing, or "anthropomorphizing" as the authors put it, of Joan as anything but an inanimate, man-made "object" EXTREMELY uncomfortable to process. However, after some reflection, such anthropomorphizing of Joan really made me think about how our perceived relationship to the technologies we use may actually influence the specific ways we communicate with each other as global citizens.

Prior to reading this article, I considered the computer on which I write these blog entries to hold a subordinate position in relation to myself, identical to the subordinate position the authors discovered Joan to hold in relation to "her" student users. Although I by no means consider my computer to be ANYTHING more than the sum of its parts after reading this article, it does make me question whether such a collective, dominating attitude toward the technology we use, not only as isolated users but as a collective, global community, is "healthy" given the interconnectedness between actual human beings the technology affords. If this study suggests that human beings are so willing to exercise their power over the device when they perceive anonymity, what does such behavior mean for the "real" human beings that we use technology to communicate with?

Basically, what I'm getting at is the following... Although it still creeps me out beyond belief to conceptualize the technology we use as something that can actually "feel" the verbal / physical abuses that we might direct at it, it's a fact that the human beings with whom we use technology to communicate can. As such, although I'm not advocating that we should treat the technology we use as our "equals," I am advocating that we begin to conceptualize the technologies we use as a true extension of our very selves. If students are learning that it's socially appropriate to direct such extreme violence, hatred, and abuse toward a make-believe approximation of a human being, what does that say about how they might communicate with their peers, parents, teachers, and the countless other individuals they encounter throughout their lives? The authors echo this very concern as they state,

"These conversations are the online equivalents of face-to-face conversations that students could exchange with each other in the hallway or with a teacher. These types of comments are considered serious offenses in schools, punishable by detention, expulsion, and sexual harassment lawsuits."

In short, what makes me feel "concerned" for Joan is the fact that "she" represents not only women, but perhaps subordinated human beings as a whole. If students were so willing to treat "her" the way that they did, how might they treat similar subordinated groups in our society? Will they work with them to gain power? Or work to keep them subordinate to their power? If we as educators are as concerned as we preach about agency through critical literacies, these are some questions that we need to address.

That is all!

Goodbye 5472! I wish the best of luck to you all!
