
Audrey Watters

100 Minutes, Part 2

5 min read

What is the future of teaching and learning? Part 2 of my contribution to EDU8213.

I want to respond to something that Liz said about our focus on computers as calculating machines. She said she preferred to see them as communication machines, and I do think that an emphasis on communication rather than calculation could help us to think about and to really foster those pedagogical practices that recognize and value affect, not just those practices that privilege quantification.

But I’m not sure that saying that computers are not merely calculating machines gets us out of the quandary of our “computational culture.” The ideological underpinnings of computers coincide with this longstanding privileging in Western culture of rationality. For a couple of centuries now, modern societies have been built on the belief that more rationality and more technology and more capital are the solutions to all the problems we face.

This makes it challenging, I think, to talk about “the future of teaching and learning” without seeing “teaching and learning” as a problem to be solved. And specifically about a problem to be solved with more data, more machines, more analytics.

This really stands in stark opposition to affect. Reason and rationality versus emotion – we know that story. The former privileged as the realm of men. Men of science. The latter scorned as the realm of women. Weak. Soft.

A sidenote: it’s so ironic that the women who worked in the field of pre- or proto-computing were called “computers” and “calculators.” But once the work became mechanized, computerized, they were largely ousted from the field and their contributions erased.

In all things, all tasks, all jobs, women are expected to perform affective labor – caring, listening, smiling, reassuring, comforting, supporting. This work is not valued; often it is unpaid. But affective labor has become a core part of the teaching profession – even though it is, no doubt, “inefficient.” It is what we expect – stereotypically, perhaps – teachers to do. (We can debate, I think, if it’s what we reward professors for doing. We can interrogate too whether all students receive care and support; some get “no excuses,” depending on race and class.)

What happens to affective teaching labor when it runs up against machines, against robots, against automation? Politically. Economically. Culturally. I think most of us would admit that even the tasks that education technology purports to now be able to automate – tutoring, testing, grading – are shot through with emotion when done by humans, or at least when done by a person who’s supposed to have a caring, supportive relationship with their students. Grading essays isn’t necessarily burdensome because it’s menial, for example; grading essays is burdensome because it is affective labor; it is emotionally and intellectually exhausting.

This is part of our conundrum, and again I think this is a deep cultural conundrum that we cannot just wave away by calling computers “communication machines”: teaching labor is affective, not simply intellectual. Affective labor is not valued. Intellectual labor is valued in research. It is viewed as rational and reasonable. But at both the K–12 and college level, teaching is often seen as menial, routine, and as such replaceable by machines. Intelligent machines will soon handle the task of cultivating human intellect, or so we’re told. And because we already privilege a certain kind of intellect as rational and reasonable, I think culturally we are sort of prepped for intelligent machines handling the tasks of research and decision-making.

Artificial intelligence sees the human mind as a computer. This is a powerful metaphor that undergirds the whole field. Intelligence is rational, so they say. It is about calculation. It is mechanical. It is replicable. It is measurable. Think of all the words in the language of artificial intelligence that are drawn from humans’ mental capacities: memory, learning. The benefit of artificial intelligence, so we’re told, is that it can surpass the capabilities of humans. It can be faster. It can store more data. It can process more data. It is computational.

What does it mean for the future of teaching and learning if – culturally – we are being told that the future of intelligence is machine intelligence?

Where does affect fit into this?

Rather than finding that machines are becoming more intelligent, I fear we will find that humans are becoming more machine-like. But if we bury affect, I do wonder – and one only need look at this US Presidential election for an example – what happens when we have these emotional outbursts. Anxiety. Irrationality.

I think I said in the last recording that I often turn to Antonio Gramsci: “I am a pessimist because of intelligence but an optimist because of will.”

I’ve been thinking a lot lately about irrationality and the Internet, about what seems to be an embrace of conspiracy theories, factlessness, a rejection of expertise. I’m not sure I’ve ever been more pessimistic about the Internet’s potential for participatory democracy or for networked learning. “Don’t read newspapers,” Trump told his supporters recently. “Read the Internet.” As such, the Internet feels like a weapon of war – and war has always relied on calculation, hasn’t it – a weapon of hate – there’s the affect that culturally we seem to be embracing right now.

Audrey Watters

Diminishing Women's Work

Link to article

Audrey Watters

I'm Not Really With Her, But...

4 min read

Cross-posted to FB...

I had a lengthy conversation with my 23-year-old son today about politics. “Are you registered to vote?” I asked tentatively. (I hate vote-shaming.)

This isn’t the first election in which he’s been eligible to vote; it is the first in which he’s voting.

I know that many pundits like to sneer at “millennials” for some perceived political apathy, for their (supposed) low voter turn-out, for their (supposed) preference for third-party candidates, what have you. Much of this is simply a caricature of “millennials,” a pervasive and perpetual disgust at “kids these days.”

My son and many his age are far from apathetic. They are, however, full of anxiety – about the economy, about their future, about climate change, about violence, about injustice. And my son and many his age are angry. They are angry that they’re set to inherit a world ravaged by war and hate and destruction and shitty jobs marketed to them as “the freelance economy.” They are angry at institutional power. But they are frustrated by extra-institutional power too.

My son is voting for the first time. He’s voting for someone but, like so many of us this year, he’s primarily voting against someone. Like me, he would have preferred Bernie. He’s frightened about the outcome of the election – not just who will win, but the repercussions of the violence and hatred that Donald Trump has legitimized and the effects that this country will have to bear long after November 8 has come and gone.

He’s aware how much of this violence and hatred is racialized and gendered. I’m surprised, quite frankly, to see how much the former in particular has become a focal point for this political awakening. Part of it, no doubt, comes from his experiences as an addict who’s managed to “get away with it” – no criminal record, no jail time.

My son is one of those “young white men without a college degree” that I think the right wing has long believed they can whip up into a populist, nationalist, racist furor. He was pretty frank when I talked to him on the phone today – he thinks that, all over North America and Europe, the right wing still can.

We talked about the role that education and technology play in that. He was much more upset about the latter. “I read some bullshit on Breitbart last week, and then Facebook suggested I ‘like’ the KKK.” We talked about algorithms and filters. We talked about the combination of ignorance and incuriosity that a fair portion of the media – old media, new media, new new media – rely upon.

We talked a lot about Bill Clinton – the first President I ever voted for – and his betrayal of my ideals. He has a vague memory of my calling from a payphone just outside of Seattle on November 30, 1999 to reassure the family that, despite the tear gas and rubber bullets, I’d survived the WTO protest. We talked about the environmental activism that he grew up around and how, long before September 11, the Clinton Administration was ready to criminalize it. We talked about how his dad’s drug convictions shaped his ability to get financial aid.

We talked about the past. We talked about the future.

He’s utterly dispirited, and that is just crushing to me, particularly when I think of how the Obama campaign’s message was “Hope.” I confessed to him my own fears: I check the poll numbers at 538 several times a day. I told him that talking to him and hearing about his commitment to vote made me feel a little better. We’re not really “with her.” But we’re with each other on this one. We’ll check the box by her name, knowing we have to do much much more than vote if we’re going to make progress.

Audrey Watters

“...Journalistic content is a technical complex expressly intended to adapt the man to the machine” -- Jacques Ellul, The Technological Society

Audrey Watters

Fantastic Job Opportunity

from a 1970s Fantastic Four issue

Audrey Watters

Notes and Highlights from Amusing Ourselves to Death

22 min read

Orwell warns that we will be overcome by an externally imposed oppression. But in Huxley’s vision, no Big Brother is required to deprive people of their autonomy, maturity and history. As he saw it, people will come to love their oppression, to adore the technologies that undo their capacities to think.

What Orwell feared were those who would ban books. What Huxley feared was that there would be no reason to ban a book, for there would be no one who wanted to read one.

Orwell feared those who would deprive us of information. Huxley feared those who would give us so much that we would be reduced to passivity and egoism.

Orwell feared that the truth would be concealed from us. Huxley feared the truth would be drowned in a sea of irrelevance.

Orwell feared we would become a captive culture. Huxley feared we would become a trivial culture, preoccupied with some equivalent of the feelies, the orgy porgy, and the centrifugal bumblepuppy.

Our politics, religion, news, athletics, education and commerce have been transformed into congenial adjuncts of show business, largely without protest or even much popular notice. The result is that we are a people on the verge of amusing ourselves to death.

Although the Constitution makes no mention of it, it would appear that fat people are now effectively excluded from running for high political office.

Indeed, we may have reached the point where cosmetics has replaced ideology as the field of expertise over which a politician must have competent control.

“When a professor teaches with a sense of humor, people walk away remembering.” She did not say what they remember or of what use their remembering is. But she has a point: It’s great to be an entertainer.

There is no shortage of critics who have observed and recorded the dissolution of public discourse in America and its conversion into the arts of show business. But most of them, I believe, have barely begun to tell the story of the origin and meaning of this descent into a vast triviality.

We are all, as Huxley says someplace, Great Abbreviators, meaning that none of us has the wit to know the whole truth, the time to tell it if we believed we did, or an audience so gullible as to accept it.

For on television, discourse is conducted largely through visual imagery, which is to say that television gives us a conversation in images, not words. The emergence of the image-manager in the political arena and the concomitant decline of the speech writer attest to the fact that television demands a different kind of content from other media. You cannot do political philosophy on television. Its form works against the content.

lacking a technology to advertise them, people could not attend to them, could not include them in their daily business.

Such information simply could not exist as part of the content of culture. This idea—that there is a content called “the news of the day”—was entirely created by the telegraph (and since amplified by newer media), which made it possible to move decontextualized information over vast spaces at incredible speed. The news of the day is a figment of our technological imagination. It is, quite precisely, a media event. We attend to fragments of events from all over the world because we have multiple media whose forms are well suited to fragmented conversation. Cultures without speed-of-light media—let us say, cultures in which smoke signals are the most efficient space-conquering tool available—do not have news of the day. Without a medium to create its form, the news of the day does not exist.

the decline of the Age of Typography and the ascendancy of the Age of Television.

all of this sounds suspiciously like Marshall McLuhan’s aphorism, the medium is the message,

the clearest way to see through a culture is to attend to its tools for conversation.

Each medium, like language itself, makes possible a unique mode of discourse by providing a new orientation for thought, for expression, for sensibility. Which, of course, is what McLuhan meant in saying the medium is the message.

it may lead one to confuse a message with a metaphor.

A message denotes a specific, concrete statement about the world. But the forms of our media, including the symbols through which they permit conversation, do not make such statements. They are rather like metaphors, working by unobtrusive but powerful implication to enforce their special definitions of reality.

time-keepers, and then time-savers, and now time-servers.

Thou shalt not make mechanical representations of time.

Philosophy cannot exist without criticism, and writing makes it possible and convenient to subject thought to a continuous and concentrated scrutiny. Writing freezes speech and in so doing gives birth to the grammarian, the logician, the rhetorician, the historian, the scientist—all those who must hold language before them so that they can see what it means, where it errs, and where it is leading.

a shift from the ear to the eye as an organ of language processing.

from the magic of writing to the magic of electronics.

When Galileo remarked that the language of nature is written in mathematics, he meant it only as a metaphor.

And our languages are our media. Our media are our metaphors. Our metaphors create the content of our culture.

to avoid the possibility that my analysis will be interpreted as standard-brand academic whimpering, a kind of elitist complaint against “junk” on television, I must first explain that my focus is on epistemology, not on aesthetics or literary criticism.

we do not measure a culture by its output of undisguised trivialities but by what it claims as significant. Therein is our problem, for television is at its most trivial and, therefore, most dangerous when its aspirations are high, when it presents itself as a carrier of important cultural conversations.

how media are implicated in our epistemologies.

As Walter Ong points out, in oral cultures proverbs and sayings are not occasional devices: “They are incessant. They form the substance of thought itself.

Testimony is expected to be given orally, on the assumption that the spoken, not the written, word is a truer reflection of the state of mind of a witness.

there is a residual belief in the power of speech, and speech alone, to carry the truth; on the other hand, there is a much stronger belief in the authenticity of writing and, in particular, printing. This second belief has little tolerance for poetry, proverbs, sayings, parables or any other expressions of oral wisdom. The law is what legislators and judges have written. In our culture, lawyers do not have to be wise; they need to be well briefed.

You are mistaken in believing that the form in which an idea is conveyed is irrelevant to its truth.

Truth does not, and never has, come unadorned.

“Seeing is believing” has always had a preeminent status as an epistemological axiom, but “saying is believing,” “reading is believing,” “counting is believing,” “deducing is believing,” and “feeling is believing” are others that have risen or fallen in importance as cultures have undergone media change.

As a culture moves from orality to writing to printing to televising, its ideas of truth move with it. Every philosophy is the philosophy of a stage of life, Nietzsche remarked. To which we might add that every epistemology is the epistemology of a stage of media development. Truth, like time itself, is a product of a conversation man has with himself about and through the techniques of communication he has invented.

Since intelligence is primarily defined as one’s capacity to grasp the truth of things, it follows that what a culture means by intelligence is derived from the character of its important forms of communication.

We have reached, I believe, a critical mass in that electronic media have decisively and irreversibly changed the character of our symbolic environment.

We are now a culture whose information, ideas and epistemology are given form by television, not by the printed word.

every new technology for thinking involves a trade-off.

Media change does not necessarily result in equilibrium. It sometimes creates more than it destroys. Sometimes, it is the other way around.

The invention of the printing press itself is a paradigmatic example. Typography fostered the modern idea of individuality, but it destroyed the medieval sense of community and integration. Typography created prose but made poetry into an exotic and elitist form of expression. Typography made modern science possible but transformed religious sensibility into mere superstition. Typography assisted in the growth of the nation-state but thereby made patriotism into a sordid if not lethal emotion.

although literacy rates are notoriously difficult to assess, there is sufficient evidence (mostly drawn from signatures) that between 1640 and 1700, the literacy rate for men in Massachusetts and Connecticut was somewhere between 89 percent and 95 percent, quite probably the highest concentration of literate males to be found anywhere in the world at that time. (The literacy rate for women in those colonies is estimated to have run as high as 62 percent in the years 1681-1697.)

The only communication event that could produce such collective attention in today’s America is the Superbowl.

The first printing press in America was established in 1638 as an adjunct of Harvard University, which was two years old at the time.

This odd practice is less a reflection of an American’s obstinacy than of his modeling his conversational style on the structure of the printed word.

a kind of printed orality,

“Is the Iliad possible,” he asks rhetorically, “when the printing press and even printing machines exist? Is it not inevitable that with the emergence of the press, the singing and the telling and the muse cease; that is, the conditions necessary for epic poetry disappear?”

Marx understood well that the press was not merely a machine but a structure for discourse, which both rules out and insists upon certain kinds of content and, inevitably, a certain kind of audience.

Not only did Lincoln and Douglas write all their speeches in advance, but they also planned their rebuttals in writing. Even the spontaneous interactions between the speakers were expressed in a sentence structure, sentence length and rhetorical organization which took their form from writing.

the written word, and an oratory based upon it, has a content: a semantic, paraphrasable, propositional content.

Whenever language is the principal medium of communication—especially language controlled by the rigors of print—an idea, a fact, a claim is the inevitable result.

the written word fosters what Walter Ong calls the “analytic management of knowledge.”

To engage the written word means to follow a line of thought, which requires considerable powers of classifying, inference-making and reasoning. It means to uncover lies, confusions, and overgeneralizations, to detect abuses of logic and common sense. It also means to weigh ideas, to compare and contrast assertions, to connect one generalization to another. To accomplish this, one must achieve a certain distance from the words themselves, which is, in fact, encouraged by the isolated and impersonal text. That is why a good reader does not cheer an apt sentence or pause to applaud even an inspired paragraph. Analytic thought is too busy for that, and too detached.

Of words, almost nothing will come to mind. This is the difference between thinking in a word-centered culture and thinking in an image-centered culture.

Women were probably more adept readers than men, and even in the frontier states the principal means of public discourse issued from the printed word. Those who could read had, inevitably, to become part of the conversation.

the Age of Exposition ---> the Age of Show Business.

For telegraphy did something that Morse did not foresee when he prophesied that telegraphy would make “one neighborhood of the whole country.” It destroyed the prevailing definition of information, and in doing so gave a new meaning to public discourse.

“We are in great haste to construct a magnetic telegraph from Maine to Texas; but Maine and Texas, it may be, have nothing important to communicate.... We are eager to tunnel under the Atlantic and bring the old world some weeks nearer to the new; but perchance the first news that will leak through into the broad flapping American ear will be that Princess Adelaide has the whooping cough.”

The telegraph made a three-pronged attack on typography’s definition of discourse, introducing on a large scale irrelevance, impotence, and incoherence.

The telegraph made information into a commodity, a “thing” that could be bought and sold irrespective of its uses or meaning.

The penny newspaper, emerging slightly before telegraphy, in the 1830’s, had already begun the process of elevating irrelevance to the status of news.

It was not long until the fortunes of newspapers came to depend not on the quality or utility of the news they provided, but on how much, from what distances, and at what speed.

As Thoreau implied, telegraphy made relevance irrelevant.

How often does it occur that information provided you on morning radio or television, or in the morning newspaper, causes you to alter your plans for the day, or to take some action you would not otherwise have taken, or provides insight into some problem you are required to solve?

In both oral and typographic cultures, information derives its importance from the possibilities of action.

Prior to the age of telegraphy, the information-action ratio was sufficiently close so that most people had a sense of being able to control some of the contingencies in their lives.

The principal strength of the telegraph was its capacity to move information, not collect it, explain it or analyze it. In this respect, telegraphy was the exact opposite of typography.

burn its contents.

Facts push other facts into and then out of consciousness at speeds that neither permit nor require evaluation.

The telegraph introduced a kind of public conversation whose form had startling characteristics: Its language was the language of headlines—sensational, fragmented, impersonal.

“Knowing” the facts took on a new meaning, for it did not imply that one understood implications, background, or connections. Telegraphic discourse permitted no time for historical perspectives and gave no priority to the qualitative. To the telegraph, intelligence meant knowing of lots of things, not knowing about them.

The photograph also lacks a syntax, which deprives it of a capacity to argue with the world.

As Susan Sontag has observed, a photograph implies “that we know about the world if we accept it as the camera records it.”

It offers no assertions to refute, so it is not refutable.

the crossword puzzle became a popular form of diversion in America at just that point when the telegraph and the photograph had achieved the transformation of news from functional information to decontextualized fact.

The crossword puzzle is one such pseudo-context;

the cocktail party is another; the radio quiz shows of the 1930’s and 1940’s and the modern television game show are still others; and the ultimate, perhaps, is the wildly successful “Trivial Pursuit.”

Why not use them for diversion? for entertainment? to amuse yourself, in a game?

The pseudo-context is the last refuge, so to say, of a culture overwhelmed by irrelevance, incoherence, and impotence.

Theirs was a “language” that denied interconnectedness, proceeded without context, argued the irrelevance of history, explained nothing, and offered fascination in place of complexity and coherence. Theirs was a duet of image and instancy, and together they played the tune of a new kind of public discourse in America.

As a small, ironic example of this point, consider this: In the past few years, we have been learning that the computer is the technology of the future. We are told that our children will fail in school and be left behind in life if they are not “computer literate.” We are told that we cannot run our businesses, or compile our shopping lists, or keep our checkbooks tidy unless we own a computer. Perhaps some of this is true. But the most important fact about computers and what they mean to our lives is that we learn about all of this from television.

Television has achieved the status of “meta-medium”—an instrument that directs not only our knowledge of the world, but our knowledge of ways of knowing as well.

television has achieved the status of “myth,” as Roland Barthes uses the word.

myth a way of understanding the world

A myth is a way of thinking so deeply embedded in our consciousness that it is invisible.

What is television? What kinds of conversations does it permit? What are the intellectual tendencies it encourages? What sort of culture does it produce?

Only those who know nothing of the history of technology believe that a technology is entirely neutral.

Each technology has an agenda of its own.

It is, as I have suggested, a metaphor waiting to unfold.

All of this has occurred simultaneously with the decline of America’s moral and political prestige, worldwide. American television programs are in demand not because America is loved but because American television is loved.

what I am claiming here is not that television is entertaining but that it has made entertainment itself the natural format for the representation of all experience.

The problem is not that television presents us with entertaining subject matter but that all subject matter is presented as entertaining, which is another issue altogether.

Entertainment is the supra-ideology of all discourse on television.

Had Irving Berlin changed one word in the title of his celebrated song, he would have been as prophetic, albeit more terse, as Aldous Huxley. He need only have written, There’s No Business But Show Business.

The viewers also know that no matter how grave any fragment of news may appear (for example, on the day I write a Marine Corps general has declared that nuclear war between the United States and Russia is inevitable), it will shortly be followed by a series of commercials that will, in an instant, defuse the import of the news, in fact render it largely banal.

I should go so far as to say that embedded in the surrealistic frame of a television news show is a theory of anticommunication, featuring a type of discourse that abandons logic, reason, sequence and rules of contradiction. In aesthetics, I believe the name given to this theory is Dadaism; in philosophy, nihilism; in psychiatry, schizophrenia. In the parlance of the theater, it is known as vaudeville.

The result of all this is that Americans are the best entertained and quite likely the least well-informed people in the Western world.

“Television is the soma of Aldous Huxley’s Brave New World. ” Big Brother turns out to be Howdy Doody.

America’s newest and highly successful national newspaper, USA Today, is modeled precisely on the format of television.

Radio, of course, is the least likely medium to join in the descent into a Huxleyan world of technological narcotics. It is, after all, particularly well suited to the transmission of rational, complex language.

NOTE: Podcasting? Is this an heir to radio and that print-orality?

Though it may be un-American to say it, not everything is televisible.

to put it more precisely, what is televised is transformed from what it was to something else, which may or may not preserve its former essence.

“Television,” Billy Graham has written, “is the most powerful tool of communication ever devised by man.”

This is gross technological naivete. If the delivery is not the same, then the message, quite likely, is not the same. And if the context in which the message is experienced is altogether different from what it was in Jesus’ time, we may assume that its social and psychological meaning is different, as well.

the television screen itself has a strong bias toward a psychology of secularism.

There are, of course, counterarguments to the claim that television degrades religion. Among them is that spectacle is hardly a stranger to religion.

Show business is not entirely without an idea of excellence, but its main business is to please the crowd, and its principal instrument is artifice.

the television commercial has mounted the most serious assault on capitalist ideology since the publication of Das Kapital.

The television commercial is not at all about the character of products to be consumed. It is about the character of the consumers of products.

The television commercial has been the chief instrument in creating the modern methods of presenting political ideas. It has accomplished this in two ways. The first is by requiring its form to be used in political campaigns.

Being a celebrity is quite different from being well known.

Although it may go too far to say that the politician-as-celebrity has, by itself, made political parties irrelevant, there is certainly a conspicuous correlation between the rise of the former and the decline of the latter.

Czeslaw Milosz, winner of the 1980 Nobel Prize for Literature, remarked in his acceptance speech in Stockholm that our age is characterized by a “refusal to remember”; he cited, among other things, the shattering fact that there are now more than one hundred books in print that deny that the Holocaust ever took place.

an anxious age of agitated amnesiacs....

We Americans seem to know everything about the last twenty-four hours but very little of the last sixty centuries or the last sixty years.”

“Sesame Street” encourages children to love school only if school is like “Sesame Street.”

“Sesame Street” undermines what the traditional idea of schooling represents.

Whereas a classroom is a place of social interaction, the space in front of a television set is a private preserve. Whereas in a classroom, one may ask a teacher questions, one can ask nothing of a television screen.

Whereas school is centered on the development of language, television demands attention to images. Whereas attending school is a legal requirement, watching television is an act of choice.

Whereas in school, one fails to attend to the teacher at the risk of punishment, no penalties exist for failing to attend to the television screen. Whereas to behave oneself in school means to observe rules of public decorum, television watching requires no such observances, has no concept of public decorum. Whereas in a classroom, fun is never more than a means to an end, on television it is the end in itself.

every television show is educational. Just as reading a book—any kind of book —promotes a particular orientation toward learning, watching a television show does the same.

“The Little House on the Prairie,” “Cheers” and “The Tonight Show” are as effective as “Sesame Street” in promoting what might be called the television style of learning.

If we are to blame “Sesame Street” for anything, it is for the pretense that it is any ally of the classroom.

it is important to add that whether or not “Sesame Street” teaches children their letters and numbers is entirely irrelevant. We may take as our guide here John Dewey’s observation that the content of a lesson is the least important thing about learning.

As he wrote in Experience and Education: “Perhaps the greatest of all pedagogical fallacies is the notion that a person learns only what he is studying at the time. Collateral learning in the way of formation of enduring attitudes ... may be and often is more important than the spelling lesson or lesson in geography or history.... For these attitudes are fundamentally what count in the future.”

the most important thing one learns is always something about how one learns.

We face the rapid dissolution of the assumptions of an education organized around the slow-moving printed word, and the equally rapid emergence of a new education based on the speed-of-light electronic image.

This is why I think it accurate to call television a curriculum.

As I understand the word, a curriculum is a specially constructed information system whose purpose is to influence, teach, train or cultivate the mind and character of youth. Television, of course, does exactly that, and does it relentlessly. In so doing, it competes successfully with the school curriculum. By which I mean, it damn near obliterates it.

Thou shalt have no prerequisites

Television is a nongraded curriculum and excludes no viewer for any reason, at any time. In other words, in doing away with the idea of sequence and continuity in education, television undermines the idea that sequence and continuity have anything to do with thought itself.

Thou shalt induce no perplexity

the average television viewer could retain only 20 percent of the information contained in a fictional televised news story.

21 percent of television viewers could not recall any news items within one hour of broadcast.

What Huxley teaches is that in the age of advanced technology, spiritual devastation is more likely to come from an enemy with a smiling face than from one whose countenance exudes suspicion and hate.

that only through a deep and unfailing awareness of the structure and effects of information, through a demystification of media, is there any hope of our gaining some measure of control over television, or the computer, or any other medium. How is such media consciousness to be achieved?

we are in a race between education and disaster,

necessity of our understanding the politics and epistemology of media.

Audrey Watters

Unfair Taxes

3 min read

Here's another rant I posted to Facebook today (in lieu of my actual work):

 What counts as “fair share” of taxes is a totally subjective assessment. Most Americans do believe that the wealthy and corporations do not pay their “fair share,” although according to Pew, Democrats find this much more disconcerting than Republicans do. (Corporate taxes today make up around 10% of federal revenues, down from about a third of revenues in the 1950s.)

Sure, people have every right to claim deductions. But we aren’t talking about Trump taking advantage of deductions here, ffs.

If nothing else, we should recognize the way in which tax laws are set up and how the tax structure benefits the affluent much more than the poor – from deductions for having a mortgage to the low rate on capital gains. And it’s the latter in particular that has helped fuel the growing economic inequality in this country. The wealthy – including folks like Trump – are quite skilled at moving their money into categories that take advantage of lower tax rates so that they aren’t being taxed on income (at a high rate) but are taxed on investments (at a lower rate). They can afford to hire lawyers and accountants to do so.

While individual income tax does make up the largest share of government revenues, the fastest growing part of that revenue comes from the payroll tax. And most Americans – all but the wealthiest 20% – pay more in payroll taxes than they do in income taxes.

To Susanne’s point: Trump seemed to indicate last night he pays nothing in taxes. I agree with her that that is wrong. It is grossly unfair. (Indeed, some of the reporting out of The Washington Post suggests that Trump has used his foundation to commit tax fraud in his attempts to avoid paying taxes.) The Clintons, for what it’s worth, paid over $3 million. I’m sure that had they hired “the best people” as Trump does, they could have whittled that down. But they didn’t. Because when you are wealthy, you have an obligation to help fuel prosperity for everyone, not just line your own pockets. And almost all economists agree that raising taxes on the wealthy would have enormous benefits, including addressing the growing inequality that this country faces.

Audrey Watters

Presidential Debate, No. 1

3 min read

I come from a long line of racists. I mean, let’s be honest white folks, we all do.

But all I can think right now is of my dad (RIP Kirk), who called me in tears on November 4, 2008 because he had cast his first vote for a Democrat in a presidential election.

I’m from Wyoming. One of the reddest states. You’re either a Republican there or you might as well be a Commie.

My dad called me that night in tears because he was proud of his vote and, I think, frightened of his vote. He said his dad would be rolling in his grave that his son had voted for a Black president. “What was this country coming to? A better place. A better place,” he kept repeating.

My dad said he couldn’t vote for McCain. He just couldn’t. He couldn’t support McCain if the guy would pick Palin as his running mate. To acquiesce to the kind of people who support Palin, he said, was to surrender everything that had made the Republican Party great; and even more, everything that had made this very flawed country believe in progress. “She doesn’t believe in dinosaurs, for fuck’s sake,” he said.

It wasn’t a vote for Obama. Let’s be clear. My dad was one helluva polite white supremacist, I’ll give him that much.

And that’s the Republican Party I knew as a kid in Wyoming, I suppose. One that believed in free markets and war and whiteness but also dinosaurs.

I don’t recognize much about the GOP today. Oh I do recognize the racism, for sure. I recognize the sexism. But there’s something about Trump, about his smug selfishness, sure, but about his willful dismissal of facts and truths – “We hold these truths to be self-evident” – that would have driven my dad and my dad’s dad to another party at this stage. I’m sure of it. I’m not sure how anyone, quite frankly, could have watched Trump in tonight’s debate and then pronounced “that’s my guy.” My dad and my grandpa – small businessmen, both of them – paid taxes, not because they were “dumb” as Trump suggested tonight, but because that’s what you do as a responsible citizen.

I want to write more about all of this. About this embrace of factlessness and fantasy. About selfishness in lieu of sacrifice. But mostly I want to be able to understand how so many people I grew up with and love can support someone like Trump – someone who I think (and I think my dad would think too) is really poised to unravel everything that “we” – we white folks, we the people, what have you – have worked toward.

Why, it’s almost as though, once “we” are confronted with extending rights and dignity to everyone – “all men are created equal” – white folks would rather burn it to the ground than let people of color have access to freedom and justice and happiness.

Audrey Watters

Staying with the Trouble

15 min read

Notes and highlights from Donna Haraway's latest book Staying with the Trouble:

Trouble is an interesting word. It derives from a thirteenth-century French verb meaning “to stir up,” “to make cloudy,” “to disturb.” We—all of us on Terra—live in disturbing times, mixed-up times, troubling and turbid times.

Our task is to make trouble, to stir up potent response to devastating events, as well as to settle troubled waters and rebuild quiet places.

In fact, staying with the trouble requires learning to be truly present, not as a vanishing pivot between awful or edenic pasts and apocalyptic or salvific futures, but as mortal critters entwined in myriad unfinished configurations of places, times, matters, meanings.

[Notice the spelling, intentionally re-spelling Lovecraft]


Nothing in kainos must mean conventional pasts, presents, or futures.

Chthonic ones romp in multicritter humus but have no truck with sky-gazing Homo.

Chthonic ones are not safe; they have no truck with ideologues; they belong to no one;

They make and unmake; they are made and unmade.

Kin is a wild category that all sorts of people do their best to domesticate. Making kin as oddkin rather than, or at least in addition to, godkin and genealogical and biogenetic family troubles important matters, like to whom one is actually responsible.

Anthropocene. Capitalocene.

The first is easy to describe and, I think, dismiss, namely, a comic faith in technofixes, whether secular or religious: technology will somehow come to the rescue of its naughty but very clever children, or what amounts to the same thing, God will come to the rescue of his disobedient but ever hopeful children.

The second response, harder to dismiss, is probably even more destructive: namely, a position that the game is over, it’s too late, there’s no sense trying to make anything any better, or at least no sense having any active trust in each other in working and playing for a resurgent world.

This book argues and tries to perform that, eschewing futurism, staying with the trouble is both more serious and more lively.

Companion species are relentlessly becoming-with.

Pigeons are also “creatures of empire”—that is, animals who went with European colonists and conquerors all over the world, including places where other varieties of their kind were already well established, transforming ecologies and politics for everybody in ways that still ramify through multispecies flesh and contested landscapes.

Building naturalcultural economies and lives for thousands of years, these critters are also infamous for ecological damage and biosocial upheaval.

They are treasured kin and despised pests, subjects of rescue and of invective, bearers of rights and components of the animal-machine, food and neighbor, targets of extermination and of biotechnological breeding and multiplication, companions in work and play and carriers of disease, contested subjects and objects of “modern progress” and “backward tradition.”

Becoming-with people for several thousand years, domestic pigeons (Columba livia domestica) emerged from birds native to western and southern Europe, North Africa, and western and southern Asia. Rock doves came with Europeans to the Americas, entering North America through Port Royal in Nova Scotia in 1606.

Called “rats with wings,” feral pigeons are subjects of vituperation and extermination, but they also become cherished opportunistic companions who are fed and watched avidly all over the world.

Domestic rock doves have worked as spies carrying messages, racing birds, fancy pigeons at fairs and bird markets, food for working families, psychological test subjects, Darwin’s interlocutors on the power of artificial selection, and more.

Pigeons are competent agents—in the double sense of both delegates and actors—who render each other and human beings capable of situated social, ecological, behavioral, and cognitive practices.

My hope is that these knots propose promising patterns for multispecies response-ability inside ongoing trouble.

In Project Sea Hunt in the 1970s and ’80s, the U.S. Coast Guard worked with pigeons, who were better at spotting men and equipment in open water than human beings. Indeed, pigeons were accurate 93 percent of the time, compared to human accuracy in similar problems of 38 percent.

Clearly, the pigeons and Coast Guard personnel had to learn how to communicate with each other, and the pigeons had to learn what their humans were interested in seeing. In nonmimetic ways, people and birds had to invent pedagogical and technological ways to render each other capable in problems novel to all of them.

Not very many kinds of other-than-human critters have convinced human skeptics that the animals recognize themselves in a mirror—a talent made known to scientists by such actions as picking at paint spots or other marks on one’s body that are visible only in a mirror. Pigeons share this capacity with, at least, human children over two years old, rhesus macaques, chimpanzees, magpies, dolphins, and elephants.

Pigeons passed their first mirror tests in the laboratories of B. F. Skinner in 1981.

pigeons did better at self-recognition tests with both mirrors and live video images of themselves than three-year-old human children.

“It would seem that our pigeons do quite a good job of exhibiting an agape type of love toward each other . . . Our pigeons are actually doing the work of real love.”

“The pigeon ‘backpack’ developed for this project consisted of a combined GPS (latitude, longitude, altitude) / GSM (cell phone tower communication) unit and corresponding antennas, a dual automotive CO/NOx pollution sensor, a temperature sensor, a Subscriber Identity Module (SIM) card interface, a microcontroller and standard supporting electronic components.


To re-member, to com-memorate, is actively to reprise, revive, retake, recuperate.

They remember; they entice and prolong into the fleshly present what would disappear without the active reciprocity of partners. Homing or racing pigeons and feral pigeons call both their emergent and traditional peoples to response-ability, and vice versa. City dwellers and rural people of different species and modes of living and dying make each other colombophiles talentueux in company with voyageurs fiables.

the municipal pigeon tower certainly cannot undo unequal treaties, conquest, and wetlands destruction; but it is nonetheless a possible thread in a pattern for ongoing, noninnocent, interrogative, multispecies getting on together.

Companion species infect each other all the time. Pigeons are world travelers, and such beings are vectors and carry many more, for good and for ill. Bodily ethical and political obligations are infectious, or they should be. Cum panis, companion species, at table together. Why tell stories like my pigeon tales, when there are only more and more openings and no bottom lines? Because there are quite definite response-abilities that are strengthened in such stories.

As spies, racers, messengers, urban neighbors, iridescent sexual exhibitionists, avian parents, gender assistants for people, scientific subjects and objects, art-engineering environmental reporters, search-and-rescue workers at sea, imperialist invaders, discriminators of painting styles, native species, pets, and more,

Nobody lives everywhere; everybody lives somewhere. Nothing is connected to everything; everything is connected to something.

denizens of the depths, from the abyssal and elemental entities, called chthonic.

Their many appendages make string figures; they entwine me in the poiesis—the making—of speculative fabulation, science fiction, science fact, speculative feminism, soin de ficelle, so far. The tentacular ones make attachments and detachments; they make cuts and knots; they make a difference; they weave paths and consequences but not determinisms; they are both open and knotted in some ways and not others. SF is storytelling and fact telling; it is the patterning of possible worlds and possible times, material-semiotic worlds, gone, here, and yet to come. I work with string figures as a theoretical trope, a way to think-with a host of companions in sympoietic threading, felting, tangling, tracking, and sorting. I work with and in SF as material-semiotic composting, as theory in the mud, as muddle.

In passion and action, detachment and attachment, this is what I call cultivating response-ability; that is also collective knowing and doing, an ecology of practices.

“It matters what ideas we use to think other ideas.”

It matters what thoughts think thoughts. It matters what knowledges know knowledges. It matters what relations relate relations. It matters what worlds world worlds. It matters what stories tell stories.

What is it to surrender the capacity to think?

In that surrender of thinking lay the “banality of evil” of the particular sort that could make the disaster of the Anthropocene, with its ramped-up genocides and speciescides, come true.

Arendt insisted that thought was profoundly different from what we might call disciplinary knowledge or science rooted in evidence, or the sorting of truth and belief or fact and opinion or good and bad.

Arendt witnessed in Eichmann not an incomprehensible monster, but something much more terrifying—she saw commonplace thoughtlessness.
[NOTE: Not monsters but thoughtlessness]

a deeper surrender to what I would call immateriality, inconsequentiality, or, in Arendt’s and also my idiom, thoughtlessness.

what it means to hold open space for another.

Extinction is a protracted slow death that unravels great tissues of ways of going on in the world for many species, including historically situated people.

Mourning is about dwelling with a loss and so coming to appreciate what it means, how the world has changed, and how we must ourselves change and renew our relationships if we are to move forward from here. In this context, genuine mourning should open us into an awareness of our dependence on and relationships with those countless others being driven over the edge of extinction . . . The reality, however, is that there is no avoiding the necessity of the difficult cultural work of reflection and mourning. This work is not opposed to practical action, rather it is the foundation of any sustainable and informed response.

Grief is a path to understanding entangled shared living and dying; human beings must grieve with, because we are in and of this fabric of undoing.

Without sustained remembrance, we cannot learn to live with ghosts and so cannot think. Like the crows and with the crows, living and dead “we are at stake in each other’s company.”

carrier bag theory of storytelling
[NOTE: Carrier bag theory of pigeons]

Think we must; we must think. That means, simply, we must change the story; the story must change.

None of the parties in crisis can call on Providence, History, Science, Progress, or any other god trick outside the common fray to resolve the troubles.

sciences, not Science.

This is neither relativism nor rationalism; it is SF, which Latour would call both sciences and scientifiction and I would call both sciences and speculative fabulation—all of which are political sciences, in our aligned approaches.

the time-space-global thing called Anthropocene. The term seems to have been coined in the early 1980s by University of Michigan ecologist Eugene Stoermer (d. 2012),

Still, if we could only have one word for these SF times, surely it must be the Capitalocene. Species Man did not shape the conditions for the Third Carbon Age or the Nuclear Age.

Note that insofar as the Capitalocene is told in the idiom of fundamentalist Marxism, with all its trappings of Modernity, Progress, and History, that term is subject to the same or fiercer criticisms. The stories of both the Anthropocene and the Capitalocene teeter constantly on the brink of becoming much Too Big. Marx did better than that, as did Darwin. We can inherit their bravery and capacity to tell big-enough stories without determinism, teleology, and plan.

This Chthulucene is neither sacred nor secular; this earthly worlding is thoroughly terran, muddled, and mortal—and at stake now.

technotheocratic geoengineering fixes

Sympoiesis is a simple word; it means “making-with.” Nothing makes itself; nothing is really autopoietic or self-organizing.


A model is a work object; a model is not the same kind of thing as a metaphor or analogy. A model is worked, and it does work.

“We Have Never Been Individuals,”


“an idea of what the female bee looked like to the male bee . . . as interpreted by a plant . . . the only memory of the bee is a painting by a dying flower.”

The practice of the arts of memory enfold all terran critters. That must be part of any possibility for resurgence!

Symchthonic stories are not the tales of heroes; they are the tales of the ongoing.

“Make Kin Not Babies!”

Making—and recognizing—kin is perhaps the hardest and most urgent part.

Kin making is making persons, not necessarily as individuals or as humans. I was moved in college by Shakespeare’s punning between kin and kind—the kindest were not necessarily kin as family; making kin and making kind (as category, care, relatives without ties by birth, lateral relatives, lots of other echoes) stretch the imagination and can change the story.

Marilyn Strathern taught me that “relatives” in British English were originally “logical relations” and only became “family members” in the seventeenth century—this is definitely among the factoids I love. Go outside English, and the wild multiplies.

Kin is an assembling sort of word.

All critters share a common “flesh,” laterally, semiotically, and genealogically.

Cyborgs are kin, whelped in the litter of post–World War II information technologies and globalized digital bodies, politics, and cultures of human and not-human sorts.

they are not hybrids at all. They are, rather, imploded entities, dense material semiotic “things”—articulated string figures of ontologically heterogeneous, historically situated, materially rich, virally proliferating relatings of particular sorts, not all the time everywhere, but here, there, and in between, with consequences.

cyborgs are critters in a queer litter, not the Chief Figure of Our Times.

Conjugating is about yoking together; conjugal love is yoked love; conjugated chemical compounds join together two or more constituents. People conjugate in public spaces; they yoke themselves together transversally and across time and space to make significant things happen.

Marx understood all about how privileged positions block knowledge of the conditions of one’s privilege.

We are all responsible to and for shaping conditions for multispecies flourishing in the face of terrible histories, but not in the same ways. The differences matter—in ecologies, economies, species, lives.

So much of earth history has been told in the thrall of the fantasy of the first beautiful words and weapons, of the first beautiful weapons as words and vice versa. Tool, weapon, word: that is the word made flesh in the image of the sky god. In a tragic story with only one real actor, one real world-maker, the hero, this is the Man-making tale of the hunter on a quest to kill and bring back the terrible bounty. This is the cutting, sharp, combative tale of action that defers the suffering of glutinous, earth-rotted passivity beyond bearing. All others in the prick tale are props, ground, plot space, or prey. They don’t matter; their job is to be in the way, to be overcome, to be the road, the conduit, but not the traveler, not the begetter. The last thing the hero wants to know is that his beautiful words and weapons will be worthless without a bag, a container, a net.

Plants, however, they speculated, “do not communicate” and so have no language. Something else is going on in the vegetative world, perhaps something that should be called art.

Emma Goldman’s understanding of anarchist love and rage make sense in the worlds of ants and acacias. These companion species are a prompt to shaggy dog stories—growls, bites, whelps, games, snufflings, and all. Symbiogenesis is not a synonym for the good, but for becoming-with each other in response-ability.

polite inquiry.

“Think we must!”

Why should Virginia Woolf, or any other woman, or men for that matter, be faithful to such patrilines and their demands for sacrifice? Infidelity seems the least we should demand of ourselves!

Hannah Arendt and Virginia Woolf both understood the high stakes of training the mind and imagination to go visiting, to venture off the beaten path to meet unexpected, non-natal kin, and to strike up conversations, to pose and respond to interesting questions, to propose together something unanticipated, to take up the unasked-for obligations of having met. This is what I have called cultivating response-ability. Visiting is not a heroic practice; making a fuss is not the Revolution; thinking with each other is not Thought. Opening up versions so stories can be ongoing is so mundane, so earth-bound.

That is what “going too far” means, and this curious practice is not safe.

racing pigeons, also called carrier pigeons (in French voyageurs), and their avid fanciers (in French colombophiles, lovers of pigeons).

Pigeon racing is a working-class men’s sport around the world, one made immensely difficult in conditions of urban war (Baghdad, Damascus), racial and economic injustice (New York, Berlin), and displaced labor and play of many kinds across regions (France, Iran, California).

Audrey Watters


7 min read

(For a project, formerly known as “Speaking Openly”)

I often quote the Marxist Antonio Gramsci – “I am a pessimist because of intelligence but an optimist because of will.” I quote Gramsci because, as “ed-tech’s Cassandra,” I’m often accused of being too critical, too negative about the future of education. And admittedly, I do fear that the future might be grim. But I am an optimist. I think that most of us who work in and near education are – we have to be. We believe in the transformative potential of teaching and learning. We believe in shaping and changing minds; as such, we believe in shaping and changing the future. The three other respondents have all laid out fairly optimistic visions of the future of teaching and learning – deliberately so, no doubt – a future that honors individuals, empathy, cultural relevance, social change, and social justice. And if that future is technologically enhanced, it’s enhanced in such a way as to make it more human and humane and less machine-like.

These are all reflections of our pedagogical goals, I think, as progressive educators. But these are also political goals. And I want to pause here to talk a little bit more about what I see as the future of the politics of education and, perhaps just as importantly, the future of the politics of the digital technology industry. A possible future, I should be clear, if we do not tackle these questions politically.

I think the others were right to point out that “learning” is distinct from “education.” But I think we have to talk about “education,” the institution. We have to scrutinize its role in past injustices, its role in inscribing and re-inscribing hierarchies, and we have to demand better. But I’m not sure we can abandon institutions, particularly public institutions, entirely. I say this recognizing that many of the crises we face right now involve our loss of faith in institutions – in the government, in the Church, in markets, in medicine, in science, in schools. How do we rebuild so that the collective and the communal are protected – so that survival and success aren’t left, as I fear they would be without institutions, to the individual and her or his privilege and social capital alone?

When I talk about the digital technology industry, I use the shorthand “Silicon Valley.” It’s not quite an accurate term geographically, but I use it to refer to its ideology – one of radical individualism, libertarianism, neoliberalism, exploitative and unchecked capitalism. This ideology isn’t espoused only by those who work and invest in Silicon Valley, of course. But increasingly – because of the financial and political power and influence of Silicon Valley – this ideology is becoming quite dominant.

We must ask how this will affect education. Disinvestment? The shrinking of the public sector? A move away from the communal to the individual? “Personalization” – one of the buzzwords of education technology? Standardization? Outsourcing? Uber-ification? Dismantling of labor protections? Automation? Algorithms? Financialization and monetization of all aspects of our lives? Surveillance, not only by the state but by corporations?

2015 was a record-setting year for education technology investment. Over $6 billion by some estimates. What was popular among investors? Test prep. Tutoring. Private student loans. Learning management systems. Online “skills training.”

Now, to be fair, that $6 billion is dwarfed by the venture capital that goes into other sub-sectors of tech. And Uber alone raised about $5 billion last year. But this flood of money comes with political power. It comes with a power to reshape – or to try to reshape – all sorts of narratives about what it means to be social, political, workers, students, “users,” citizens. The narratives that Silicon Valley tells about education are that schools are broken, that they are irrelevant, that they are inefficient, that unionized labor prevents innovation, that education can be automated. Successful entrepreneurs do not just form companies or investment firms; they start philanthropies, like the Gates Foundation and now the Chan Zuckerberg Initiative. These organizations have an outsized influence on education policy. They envision a future of teaching and learning that is, to borrow from Liz’s formulation, very much about calculation – about data and algorithms and efficiencies and tracking and analytics. They are profoundly anti-democratic.

This is one of the challenges we face, I think, particularly when we talk about a future of teaching and learning and digital technologies: this question of democracy and open communication and collaboration built on technologies of surveillance and command and control, built on top of pre-existing communication networks, never quite erasing the previous manifestations of power or politics, despite our rather utopian hopes that technologies like the Web just might.

Investor Marc Andreessen famously said a few years ago that “software is eating the world.” Andreessen is an important figure to think about in terms of technology and education – and not simply because his investment portfolio includes companies like the MOOC startup (or, once upon a time, a MOOC startup) Udacity. Andreessen was an undergraduate at the University of Illinois at Urbana-Champaign, where he worked on the Mosaic browser, the first widely adopted graphical browser for the Web. He believed that the browser had commercial possibilities and built Netscape Navigator – which shared no code with the browser built at a public university but shared its functionality. Andreessen became a billionaire with Netscape, a company whose IPO is generally seen as marking the start of the dot-com bubble and as synonymous with young tech entrepreneurs who would reshape the world. “Software is eating the world.” It is eating public education; it is eating higher education, arguably – even though the origins of almost all of the computer industry’s innovations of the past sixty years are intimately tied up with these scholarly institutions.

To echo Maha’s question about whose learning – learning by whom and for whom, in what contexts – I would add a litany of questions about the world that software is purportedly eating: whose software, who benefits, whose world is being re-enacted and recoded and digitized? A world of the global elite? A world of the global north? A world of engineers? A world of white men? A world of machines?

What about the rest of us? Non-machines and non-humans alike?

The future of teaching and learning will continue to be, as the history of teaching and learning shows us, one of political acts, of political practices. These must be acts of resistance, I think, to the stories and the practices of exploitation. As we think about institutions – new ones and old ones – we must demand justice. We must cultivate “response-ability” – I’m using this term as Donna Haraway does – to be able to respond, to be able to recognize our complicity in harmful acts past and present, and to think about transformation that is deeply critical and deeply empathetic to all the world around us. This is a political undertaking, and an incredibly urgent one. It isn’t urgent because, as Andreessen gleefully pronounces, “software is eating the world.” It is urgent because the world is dying, or at least careening toward another global extinction event. Addressing this isn’t simply a question of engineering. It is a question of compassion and teaching and learning and radical pedagogy. We must be optimists, not pessimists, as hard as that can be in the face of global crises. Our world, our survival, demands it.