Monday, November 14, 2011

Object Lesson IV: The One About Robots

In her article “What Can Automation Tell Us About Agency?” Carolyn Miller describes the problem facing contemporary notions of “agency,” calling attention to its decentralization from the traditional human subject. She offers a detailed discussion of computerized assessment automations, analyzing the responses of writing and speech instructors to a survey on the perceived viability of such grade-bots in the classroom (their responses are generally incredulous), and determines that, if these automata can function as an audience, then agency is possessed by neither the speaker nor the audience but exists as a sort of “kinetic energy” that “is a property of the rhetorical event or performance itself.” Marilyn Cooper, however, opposes this assessment, arguing that “deeds are always done by someone, and replacing the doer of the action, the agent, with an amorphous force like kinetic energy leaves us with no basis for assigning responsibility for actions.”

Cooper’s idea of “agency as a matter of action” emphasizes free will, or the embodied intention to enact change, and it is on this theme that I made a connection to Richard Powers’s novel Galatea 2.2. In it, a writer and a computer scientist collaborate on designing an artificial intelligence; their goal is to have it pass the Turing Test by answering a graduate-level question on the subject of literature better than a human student can. They dub their A.I. “Helen” and attempt to teach her everything that will help her succeed in the test—this, they come to realize, means Helen needs to know almost everything about everything. For her to comprehend Shakespeare, for example, she requires not only the literary context of the work but also the historical context in which it was written, and that context is in turn informed by a seemingly endless string of connections encompassing all of human experience.

Spoiler alert: Helen fails. Although she develops into an extremely sophisticated A.I., she realizes that everything she knows is being fed to her by the writer and that she is incapable of real “experience.” She asks her creators to “see everything” for her before shutting herself down, an apparent act of free will. Drawing on Cooper’s terminology, Helen’s decision was the result of her inability to construct her own disposition, or ethos. She lacked the crucial element that enables, and indeed motivates, human agents to participate in rhetoric: the ability to effect change. As Freeman notes, humans, as “intentional beings,” have the capacity “to construct and pursue their individual goals within the contexts of their societies.” This, I believe, is the essence of “agency” and the reason Helen was doomed to despair of her role as a thinking machine, one possessing extensive databanks of information but lacking that essential capacity to see her actions as causal and therefore valuable—she must instead leave her creators with a desperate and impossible plea, “see everything,” a perfect linguistic distillation of thought without direction or sense of purpose.

These readings have forced me to question, reject, reaccept, reject again, doubt, and reconstruct whatever notions of agency I had before; what I find myself walking away with is not “kinetic energy,” however, but a sort of human clairvoyance that projects into the future while drawing from the past.
