
Audrey Watters

On What Goes Unsaid

4 min read

Every Friday, I write an article that links to all the education news I’ve seen cross my desk during the previous week. Ostensibly, it’s with an eye to education technology, but you cannot talk about education technology without situating its promotion and adoption in the politics of education more broadly. I’m not sure how useful the “Hack Education Weekly News” is to readers, but I do it anyway, as it’s a crucial part of the research that goes into my annual look at the “Top Ed-Tech Trends.”

With this undertaking, I consume a lot of education and technology journalism. And while I’m interested in the stories that are told – these help me see some of the larger narratives taking shape about the future of education – I’m also fascinated by what remains untold. Which publications, for example, consistently fail to report on which stories?

I think it’s important to recognize that a fair number of education outlets and almost all the education technology and technology press are trade publications. That means that some of the journalistic practices that we expect to see in the media – avoiding financial conflicts of interest, most notably – aren’t really principles that are tightly held by these publications. (Nor are these principles tightly held by other new forms of media, including many blogs.)

The Atlantic’s Alexis Madrigal wrote about trade publications back in 2011 when news broke that Mike Arrington, founder of the tech industry blog TechCrunch, had become a venture capitalist (and would therefore be investing in the very companies that his publication frequently covered):

Many websites are functioning largely as trade magazines that occasionally commit acts of journalism. TechCrunch, and Mashable to an even greater extent, are more like the new American Thresherman and Farm Power or Stone World or Successful Farming than they are the new New York Times. But it’s hard to know when they’re acting like the Times and when they are acting like Plumbing and Mechanical Magazine.

Even the news that they break would generally come out via a press release in due time. People care about what they write, and they beat other people to the information, but the scoops are fundamentally benign. (This company got some money, that company has a new app, another may do something that alters the competitive landscape.) Trade magazines have been doing this kind of thing for as long as there have been trade and magazines.

Madrigal’s concerns here are largely ethical concerns: what are the implications of this sort of new (?) journalistic ethics – particularly as the tech industry becomes increasingly powerful economically, politically, culturally? It’s an ethics, he points out, that argues that any sort of financial conflict of interest is alright as long as there’s transparency – a disclosure note at the bottom of a story, for example.

This sort of transparency is pretty weak. (I try to track this, in part, with a Twitter account that notes failures by ed-tech publications to disclose their financial ties in stories.) Disclosures do not always happen. And when they are included in the footer of a story, that rarely translates to social media, where people only see headlines.

But I’m also interested in what’s undisclosed because of what goes unpublished. Some of this, obviously, is about editorial decision-making. Some of it is about the writers’ interests and knowledge of the sector. No one person and no one publication can be comprehensive. There are always things that get left out. I’m still interested in thinking about why that happens – again, particularly as ed-tech publications are primarily industry mouthpieces.

Audrey Watters

"I Can Change"

3 min read

But I can change, I can change, I can change, I can change
I can change, I can change, I can change
If it helps you fall in love (in love) – LCD Soundsystem

In a speech at the Council of the Great City Schools last week, Bill Gates announced that the Gates Foundation’s “education efforts are evolving.”

“Evolving” – according to his speech at least – means developing curricula, supporting charter schools, and focusing “on locally-driven solutions identified by networks of schools, and support[ing] their efforts to use data-driven continuous learning and evidence-based interventions to improve student achievement.”

“Evolving,” we’re supposed to gather, means “improving.”

I spent some time this summer going through all the foundation’s education-related grants, which I’d contend have always changed over time as Gates and other education reformers and policy makers have altered their focus and their beliefs. Some initiatives have been there from the start, no doubt. “Personalized learning” is one of those, and it was notably absent from Gates’ speech – as either an ongoing or former effort. (Perhaps “personalized learning” is something Gates thinks the Chan Zuckerberg Initiative is going to take care of. Perhaps that is how venture philanthropy is evolving – by adding new tech CEOs to the handful of billionaires dictating education policy.) But there are many, many other projects that have been funded and then abandoned – and not just the high-profile ones like the small schools efforts or inBloom, the $100 million data infrastructure project.

“Evolving.” Now we must watch and see what the Gates Foundation really will pay for in the coming years. What will count as “evolution” in the eyes of the foundation and the press? The Gates Foundation is the most powerful force in shaping the national conversation about education (and funding the direction that conversation takes), and so any “evolution” isn’t simply going to involve the foundation’s grants and investment portfolio. All this is about the future of education and education technology more broadly. What will change?

“Evolving” – most of the press on Gates’ announcement has taken the man at his word. He’s (often) the richest man in the world – I guess that’s what happens with all that wealth and status and philanthropic hustle. We believe him. And a lot of the press has run with the headline about the foundation’s commitment to spend $1.7 billion on education in the next few years – many describing it as an investment in “public schools.” That would, indeed, be a shift, as much of the $15 billion Gates has put into education projects and programs since the organization was founded has gone to companies, not schools (and that’s not counting the Gates grant money that schools have been awarded and then directed to companies too).

“Evolving.” Why make this speech now? Why does Gates want to project a willingness to learn? (I mean, other than that everyone’s supposed to have a “growth mindset” these days.) Does Gates wish to differentiate himself and his organization from others in education reform? (And do others in education reform wish to differentiate themselves from the Trump Administration?) Who cares if you’re “evolving”? The damage is done. “Evolution” doesn’t undo that.

Audrey Watters

The History of the Future of "Everyone Should Learn to Code"

2 min read

I’m working on an article as part of my Spencer Fellowship and as part of the larger research I’m undertaking this year (which involves tracking the networks of influence among technology investors and entrepreneurs interested in education, education technology, and education reform).

Where did this “learn to code” craze come from?

I’m not asking who in Silicon Valley was “first” to teach students programming. Indeed, there’s a much longer history of computing in education that many involved with this latest push for “coding” overlook: the work of those developing and teaching LOGO, for example.

I’m curious about a couple of stories here: the origin story of Codecademy, for starters – an idea the startup founders came up with a week and a half before Demo Day at Y Combinator in 2011. Y Combinator has now supported many, many “learn to code” startups. But the founders of AppJet – who eventually focused on their Etherpad product, later acquired by Google – had a different experience at YC. Back in 2008 or so, they were discouraged from building a programming tutorial. What changed (at YC and beyond) and why?

Codecademy is a particularly interesting startup here because it garnered so much positive press (with a few notable exceptions cough) and so much high-profile hype. Remember Code Year – even then-Mayor Bloomberg tweeted that his new year’s resolution was to learn to code. Codecademy sort of faded from view – there are only so many exciting stories you can write about a Web-based IDE, I suppose. But other startups and organizations maintained the “learn-to-code” drumbeat, particularly Code.org.

What motivated the founding of Code.org in 2013? It came after “the Year of the MOOC” in 2012, which was very much a story about higher education’s failure to focus on high tech skills training. (Sebastian Thrun, for example, would often claim – incorrectly – that colleges do not teach things like mobile app development.)

Who’s funding this narrative that everyone needs to learn to code? (I’ve started tracking some of this as part of my larger research into venture capital’s influence in education technology and education policy. Here’s what Code.org’s funding network looks like: hack-education-data.github.io/code-dot-org.)

Audrey Watters

NYC, Day 54

2 min read

I started writing this and realized I had actual work to do. So it's a fragment (the title of this site) and not a blog post.

I am not sure when it’s really “official.” Am I a New Yorker when I rent an apartment? When I register to vote? When I get my New York driver’s license? I’ve done the first two at least.

This weekend, I returned to the apartment Kin and I called home for over three years to start the process of packing everything up. Much is going into storage – all of Anthony’s artwork, primarily, which we’d just pulled out of storage in Oregon a year ago. We’ve lived on the road for so long; it’s really only been within the last year or so that we started to accumulate possessions again: mostly books and artwork. That’s all getting shipped east. The furniture – all that low-quality IKEA crap – is going up on Craigslist, free to the person who’ll haul it away.

Hermosa Beach was really good to us. But it’s time to move on.

Audrey Watters

Literature Review: The History of the Future of Personalization

5 min read

A literature review for Evidence and Inference, a class I'm sitting in on as part of my Spencer Fellowship.

“Personalized learning” has become something of a buzzword in education circles in recent years – shorthand, in many cases, for the potential for new technologies to reshape how schools operate and how students learn. “Personalized learning,” some insist, will allow students to move at their own pace through instructional materials. Some say it will enable a customization of instruction, so that lessons can be tailored to students’ interests and capabilities.

Despite its popularity in policy discussions, it’s not always clear what the phrase means – does “personalized learning” require technology at all, for example? Or is it akin to older, “progressive” education models that have long encouraged individual inquiry rather than “whole class instruction”?

It’s not clear what it means, and it’s not clear if “it works.” It’s not readily apparent what effect, if any, “personalized learning” has on students’ educational “outcomes” – their achievement, their curiosity and interest, and so on.

But the idea has powerful backers nonetheless – the Gates Foundation and the Chan Zuckerberg Initiative, for starters. The White House, under both Trump and Obama.

There are several schools of thought about “personalized learning,” much of this based on the disciplinary backgrounds of the scholars themselves. There are the educational researchers – “learning scientists” – who investigate whether or not personalized learning is “effective.” In other words, does it raise test scores? There are those who study the sociology of education, some of whom are less interested in questions about “effectiveness” and more interested in the influence – political, ideological – that industry has on educational reforms and education narratives. These researchers also look at how “personalization” works at an institutional level. There are also, of course, those in business schools who suggest that industry should have more influence on how schools are run. And there are those who study the history of education, many of whom would note that the rhetoric of and calls for “personalized learning” in some form or another are at least one hundred years old.

NOTABLE SCHOLARS:

Learning Sciences:

Bloom, Benjamin. “The 2 Sigma Problem: The Search for Methods of Group Instruction as Effective as One-to-One Tutoring.” Educational Researcher. 1984.

Chen, Chih-Ming. “Intelligent web-based learning system with personalized learning path guidance.” Computers and Education. 2007.

Pane, John F., Elizabeth D. Steiner, Matthew D. Baird, Laura S. Hamilton, and Joseph D. Pane. “Observations and Guidance on Implementing Personalized Learning.” RAND Corporation. 2017.

Tseng, Judy C.R., Hui-Chun Chu, Gwo-Jen Hwang, and Chin-Chung Tsai. “Development of an adaptive learning system with two sources of personalization information.” Computers and Education. 2008.

Sociology of Education:

Selwyn, Neil. Education and Technology: Key Issues and Debates. 2016.

---. “Web 2.0 applications as alternative environments for informal learning – a critical review.” CERI-KERIS. 2007.

Williamson, Ben. “Educational reform, enquiry-based learning and the re-professionalisation of teachers.” The Curriculum Journal. 2009.

---. “Governing software: Networks, databases and algorithmic power in the digital governance of public education.” Learning, Media, and Technology. 2015.

History of Education:

Cuban, Larry. Oversold and Underused: Computers in the Classroom. 2001.

---. “Reforming Again, and Again, and Again.” Educational Researcher. 1990.

Education Business:

Christensen, Clayton, Curtis Johnson, and Michael Horn. Disrupting Class, Expanded Edition: How Disruptive Innovation Will Change the Way the World Learns. 2008.

FURTHER QUESTIONS:

It is rather remarkable that a topic as widely ballyhooed as “personalized learning” would have so little research about how well (or not) it works. But then again, that’s probably something that can be said of almost all education technologies: in the aggregate, education technologies have mixed results – some show a negative effect, some show a minimal effect, and some show no effect at all. There’s no “silver bullet,” if you will, that will address the problems that the education system faces.

But this makes the question of “why personalized learning?” all the more interesting.

How did “personalized learning” go from being something touted by progressive educators in the early twentieth century to the latest craze touted by education reformers and technology billionaires-turned-education philanthropists? (There are decades between progressive education of the 1900s and the push for the personal computer in schools. What happened?) Does the phrase mean the same as it did one hundred years ago? How has its meaning changed – and changed beyond simply the addition of computers?

And how does the addition of computers change what one means by “personalized learning”? Have computers changed “progressive education”? How does progressive education reconcile itself to the highly commercial focus of much of today’s educational software? How much of “personalized learning” as imagined and built and sold by tech companies echoes what “personalization” means on their platforms: metrics, marketing, conversion rates, customer satisfaction?

What makes “personalized learning” so appealing – again, historically but more importantly, today? Is there something about the tension of the American education system – its mandate to educate the public in some sort of collective manner? Does “personalization” offer some sort of psychological balm, perhaps, for standardization? (It is noteworthy, no doubt, that calls for “personalization” in the late nineteenth and early twentieth century occurred alongside the push for mass education.) And what makes “personalized learning” so appealing to education reformers? Is it something the technology promises? Is it the technology itself?

Audrey Watters

Initial Thoughts on Conspiracy Theories in Education

4 min read

Someone asked me the other day how I respond to charges that my work involves conspiracy theories about a “billionaire boys’ club” seeking the privatization of education. And it struck me that education reform and education technology have been caught up in aspersions about “evidence” and accusations of “fake news” for some time now.

The phrase “billionaire boys’ club” is one historian Diane Ravitch uses in a chapter of her 2010 book The Death and Life of the Great American School System to describe the network of philanthropic organizations – namely, the Bill and Melinda Gates Foundation, the Walton Foundation, and the Eli and Edythe Broad Foundation – and the policies that these wealthy education reformers have funded and promoted. The phrase “billionaire boys’ club” can be traced farther back still, to a 1980s Ponzi scheme operating among the wealthy students at the Harvard School for Boys. Ravitch never makes reference to this social investing organization in her book, so it’s not clear if she is intentionally invoking the criminal enterprise when describing the efforts of Gates et al. Any connection between ed-reform and that “BBC” would certainly be the makings of a grand conspiracy theory. She doesn’t go there, but Ravitch’s work has often been dismissed as conspiracy theory nonetheless.

What makes something a conspiracy theory? The designation is no longer reserved for hypotheses about secret government activities, alien technologies, or celebrity deaths. It’s become a label used to dismiss all sorts of criticism that one disagrees with.

Sociologist Bruno Latour has argued that there is an unsettling connection between conspiracy theory and criticism:

What’s the real difference between conspiracists and a popularized, that is a teachable version of social critique inspired by a too quick reading of, let’s say, a sociologist as eminent as Pierre Bourdieu?

In both cases, you have to learn to become suspicious of everything people say because of course we all know that they live in the thralls of a complete illusio of their real motives. Then, after disbelief has struck and an explanation is requested for what is really going on, in both cases again it is the same appeal to powerful agents hidden in the dark acting always consistently, continuously, relentlessly. Of course, we in the academy like to use more elevated causes – society, discourse, knowledge-slash-power, fields of forces, empires, capitalism – while conspiracists like to portray a miserable bunch of greedy people with dark intents, but I find something troublingly similar in the structure of the explanation, in the first movement of disbelief and, then, in the wheeling of causal explanations coming out of the deep dark below. What if explanations resorting automatically to power, society, discourse had outlived their usefulness and deteriorated to the point of now feeding the most gullible sort of critique? Maybe I am taking conspiracy theories too seriously, but it worries me to detect, in those mad mixtures of knee-jerk disbelief, punctilious demands for proofs, and free use of powerful explanation from the social neverland many of the weapons of social critique. Of course conspiracy theories are an absurd deformation of our own arguments, but, like weapons smuggled through a fuzzy border to the wrong party, these are our weapons nonetheless. In spite of all the deformations, it is easy to recognize, still burnt in the steel, our trademark: Made in Criticalland.

This is an argument that Kurt Andersen seems to pick up on in his recent book Fantasyland: America has “lost its mind,” in part because of postmodernist theory and what according to Andersen is its penchant for “anything-goes relativism.”

Diane Ravitch is no postmodernist (no matter how you define the term). So perhaps it’s easier to see the charges that she peddles in conspiracy theories as simply part of that longer history of accusing women of “hysteria.” Perhaps that’s what’s meant when someone frames my work that way too.

But I think there’s more to it than that.

Audrey Watters

Notes and Highlights from Bringing Montessori to America

21 min read

Bringing Montessori to America: S. S. McClure, Maria Montessori, and the Campaign to Publicize Montessori Education by Gerald L. Gutek and Patricia A. Gutek

the individual child’s liberty to realize his or her own self-development.

She would distinguish her scientific method of education from what she called the traditional old “ordinary schools” that had their students assimilate “content digested by others.”

In these schools, Montessori said, “students learned by studying much—teachers explained much—the students’ only labor was to accept the content of labor of those before them.”

In contrast, Montessori told her trainees to use the “method of practical science” in their teaching. The new teacher “does not give ready made thoughts but allows the child to make his own.”

Contrary to the individualized learning that would characterize the Montessori Method,

The only individualized part of the school day was the recitation, when a child was called upon to recite a previously memorized passage—able students, standing at attention, correctly responded to the teacher’s questions with answers they had previously memorized from the textbook.

if children did not perform the activities needed in their developmental sequence at the right time, they suffered the consequences of continual and cumulative impairment.

Montessori, who saw his findings as “the first attempts at experimental psychology.”

The child needed to be free to demonstrate what he or she wanted to do. Child freedom or children’s liberty to choose their work became a compelling part of Montessori’s method.

Anne George, the first American trained by Montessori as a directress, informed her readers in Good Housekeeping Magazine that Seguin’s premise that sensory training was the necessary first step in children’s intellectual and moral development profoundly shaped Montessori’s educational theory.11 Montessori would use some of Seguin’s training materials as prototypes for her didactic apparatus.

individual child’s independent self-learning, which she called “auto-education.”

Seguin and Montessori anticipated the contemporary Individualized Education Program (IEP) that is part of the education of children with special needs.

sensory learning

didactic apparatus

These Montessori learning materials would play a key, and sometimes contentious, role in the introduction of the Montessori Method to the United States by McClure and others.

“I wish to transform education into the experimental sciences,” she said, and “make it a science in itself.”

Montessori regarded her directresses as disciples and why she regarded deviations or revisions of her method as heresies.

Montessori directresses were told to accurately record each child’s weight and height weekly. These measurements formed an individualized empirical record, “a biographical chart,” that the team of teacher, pediatrician, and psychologist maintained for each child and shared with parents.

she cautioned that the scientific observation of children, though necessary, was not the same as educating them; it was rather a necessary guide to their education.

clinical observation

For clinical observation to produce valid findings about children’s development and behavior, it needed to be free from unnecessary adult constraints so children could act to achieve their own growth and development.24 The educator could then prepare a structured learning space (the Montessori school) with the materials (the didactic apparatus), opportunities, occasions, and encouragement for children to interact with the environment in an educative way.

In the correct educational environment, the child has physical freedom to move that in turn nourishes “inner liberty.”

the five underlying principles of which would become the Montessori Method:

(1) each child, as an individual, needs to be free to work at her or his self-development; (2) children experience major stages of development for which there are appropriate educational activities; (3) the methods and materials used to train children with mental disabilities could be applied effectively to normal, indeed to all, children, throughout the world; (4) the clinical observation and diagnosis of children was a necessary prelude to planning their education; and (5) instruction needed to be calibrated carefully to the major developmental stage a child was undergoing at a particular time.

Neither the articles in McClure’s Magazine in 1911–1912 nor Montessori’s lecture tour in 1913 emphasized the role the Montessori school might play for working mothers. Neither did they emphasize the method as an educational strategy to improve the conditions of the working poor. Rather, the American interpreters of Montessori framed their message to correspond to the educational aspirations of progressive middle-class parents who read McClure’s Magazine.

She did not want the classroom furniture to limit the children’s freedom of movement as in traditional schools.

“Exercises of Practical Life,” which became a standard part of the Montessori school curriculum for all children.

the Montessori’s principle of child independence was implemented in practice.

Children who were independent could become their own moral agents. Furthermore, the skills of health, hygiene, and manners were the necessary features of a civilized society.

The children, eager to learn new skills, would stay with a task, repeating it, until they had mastered it. Montessori concluded that children did not have to be forced to learn; if permitted to choose between work and play, they would choose work. Montessori concluded that when a “child fixes intense attention” on an activity and “pursues and repeats [it] many times—this is [the] basis for auto-education.”

The apparatuses were self-correcting, in that the child could proceed only if she was using it correctly.

The self-correcting aspect of the apparatus was based on Montessori’s belief that children would acquire self-discipline and self-reliance by becoming aware of their own mistakes and repeating a particular task until they had mastered

Visitors to Montessori’s school were impressed with the sight of children working independently with a didactic apparatus. Some left thinking that the apparatus was the core of Montessori’s method.

Although Montessori was constructing a rather profound philosophy, the articles in McClure’s Magazine did not probe or analyze her philosophy of education. Written to attract the widest possible audience, they focused on the method’s more concrete features: the Montessori school as a structured environment; children’s freedom of movement and choice; the didactic apparatus; and especially the success in teaching very early reading and writing.

Montessori resented its popularization and often attempted to correct the more popular accounts.

Montessori. McClure, particularly McClure’s Magazine, played a key role in creating this climate, which he would use as a launching pad for his concerted campaign to promote Montessori from 1911 through 1914. The publication of an English translation of The Montessori Method in 1912 was also important.

“The principle . . . of the Montessori school is the ideal principle of democracy . . . that human beings reach their highest development (and hence are of most use to society) only when for the growth of their individuality they have the utmost possible liberty which can be granted them without interfering with the rights of others.”

meant freedom from any kind of despotism, including that of parents and teachers, that kept individuals dependent.

“Any attempt to use the Montessori apparatus or system by anyone who does not fully grasp or is not wholly in sympathy with its bed-rock idea, results inevitably in a grotesque, tragic, caricature of the method.”

Throughout the introduction of the Montessori Method in the United States, in books like Fisher’s and in the articles in McClure’s Magazine, didactic apparatus was prominently featured. This fascination with and promotion of the apparatus carried with it a contradiction.

Fisher fell into the contradiction, which would imperil her relationship with Montessori, when she advised her readers, “The first thing to do, if you can manage it, is to secure a set of the Montessori apparatus. It is the result of the ripest thought, ingenuity, and practical experience of a gifted specialist who has concentrated all her forces on the invention of the different devices of her apparatus.”

NOTE: You can see the class differences here -- US vs Italy -- wherein the former found Montessori pitched to affluent parents (in this case who were encouraged to buy the apparatus)

Fisher then entered a region that would cause severe consternation to Montessori. Might Americans adapt or add to the Montessori apparatus? Since Montessori said that her method was still incomplete, Fisher reasoned that others could add to the method. After all, Montessori, herself, had added to and revised the apparatus that Seguin had originally developed. Fisher, to her peril, misread Montessori. Only Montessori would complete her method and others, trained by her, were to replicate, not change it.

Books such as Fisher’s Manual and Tozier’s articles in McClure’s Magazine left Americans with the impression that the didactic apparatus was at the core of the Montessori Method.

Montessori wrote a letter that was published in the London Educational Times Supplement that pointedly disassociated her work from Fisher’s book:

Fisher had broken a fundamental Montessori rule: the method was not to be implemented by following instructions in a simple manual, like Fisher’s; it could be implemented only by directresses that Montessori herself had trained.

Montessori’s injunction that teachers should maintain a biological and anthropological record of each child’s mental and physical development.27

Although not obvious at the time, Montessori and Holmes, like other professors of education, held different conceptions of the scientific method applied to education. Believing her method of education to be scientifically rather than metaphysically based, Montessori meant that it was derived from her clinical observation of children at their work. These observations could be organized into a method much like one used by physicians in diagnosing and treating illness. Successful learning activities in children, like successful treatments in medicine, could be replicated if the teacher was properly trained. Holmes and many other professors of education saw observation as a first step in using the experimental method in education.

group games were highly imaginative and symbolic, the Montessori school’s activities were geared to performing the work of real life.34

A charge raised repeatedly against the Montessori Method was that it failed to encourage children’s imagination and creativity. Froebel’s kindergarten philosophy emphasized encouraging children to learn by freeing their imagination through play, stories, and arts and crafts. Rejecting fantasy for reality, Montessori stressed the need for children to learn the practical skills needed in real life.

While teachers in conventional elementary schools took center stage in their classrooms, Montessori’s directresses were to quietly guide children to teach themselves.47

Montessori’s relationship to her trainees was that of the mother-leader and her students were her disciples. The word disciple is deliberately used here since that is exactly what Montessori expected—directresses who would be devoted to her and to her method.

George, who gained notoriety as the pioneer American Montessori educator, established the country’s first Montessori school at the home of Edward Harden in Tarrytown, New York, in October 1911.50 Her school, supported by Frank A. Vanderlip, exemplified the patronage of Montessori schools by wealthy individuals who supported the method.

Montessori claimed that she had discovered the universal laws of childhood development. George had observed that cultural and socioeconomic differences also had an impact on how children learned.

McClure’s campaign to popularize Montessori was most unusual for an educator and an educational method. Articles about educational methods typically appeared in educational journals, not popular magazines. Written by professional educators and professors, these articles were descriptive, evaluative, and cautious, not rave reviews.

“For the first time, I believe, in the history of educational thought, a new movement has come to the front through the medium of a popular magazine, instead of by means of a scientific treatise by a specialist in education, which would naturally have a limited appeal.

NOTE: Lots of parallels here between how Montessori became popular and how ed-tech did

Tozier deliberately highlighted the principles that would most appeal to middle-class, educated, progressive American parents: auto-education, children’s liberty, the use of didactic apparatus, and the almost spontaneous development of reading and writing. Many parents wanted something better for their children than the rote recitations, drills, rules, regulations, and discipline that they themselves had experienced in late–Victorian era schools.

Presenting Montessori’s alternative to teacher-dominated learning, Tozier told her readers that auto-education meant that the children were their own instructors, with teachers present to guide but not control the learning process.

Children needed to be in a preorganized structured learning environment, a Montessori school, stocked with the apparatus and materials that motivated them to learn. Remembering their own education, the progressive parent saw auto-education as a way in which their children could learn without being prodded or coerced.

For Montessori, the movement to independence not only means that the child has acquired a skill but also develops the moral value of staying with a challenge until it is mastered.

While progressive parents wanted their children to enjoy their promised freedom to learn, they also wanted them to acquire the literacy synonymous with middle-class economic and social success, the mastery of reading and writing.

Tozier’s emphasis on the Montessori Method being based on empirical observations of children fit well with the scientific method’s growing credibility among the educated public.

The claim that Montessori education would improve the human condition resonated well with those readers who supported the Progressive ideology. These Progressives believed that the application of the scientific method to government, society, health care, and education could and would make life easier, more efficient, and better.

Some progressive and kindergarten educators criticized Montessori’s emphasis on the child’s individualized self-learning, which neglected participation and socialization in the group.

“collective education.”

The graded school, heralded as an educational innovation, had replaced the one-room multiage school where instruction tended to be by individual recitations. Group learning, according to educational administrators, was more efficient and effective in that a group, using standardized textbooks, could be taught the same skill or subject at the same time.

By individualizing learning, Montessori limited the group’s educational role.

self-discipline

For the orthodox kindergarten educators who followed Friedrich Froebel’s Idealist philosophy, Montessori had seriously minimized the educational potency of the child’s imagination.

By the time of McClure’s promotion of Montessori education, the kindergarten, a nineteenth-century import from Germany, was well established in many American public-school systems.

Montessori materials, the apparatus, itself, corrected the child who would repeat the task until it was mastered.64

the availability of purchasing Montessori’s didactic apparatus to be used at home without the supervision of a Montessori directress and Montessori’s injunction that only teachers trained by her in the use of the apparatus could correctly use the method.

Montessori materials and didactic apparatus were being manufactured and sold in the United States, as they were in other countries, on a for-profit basis. Included in their purchase was a handbook on how to use the materials. In addition to this pedagogical dichotomy, there was an entrepreneurial commercial issue, which would eventually lead to troubled relationships between Montessori and American business people.

critics decried its commercialism.

The Bells and the Montessori Educational Association played a significant but an often unclear role in the introduction of the Montessori Method in the United States. The centerpiece of the introduction of Montessori education was Maria Montessori’s lecture tour in 1913, which was officially sponsored by the association.

one may well have grave doubts, about how it will go with ‘auto-education’ when Maria Montessori’s personality is removed.”

her view of the school as a laboratory

as a community setting, with broad social implications.

misquoted her.

By 1913, the use of motion pictures to illustrate lectures had become highly popular.

While his statement was important in giving Montessori even greater national attention, the power of the US Commissioner of Education was very limited. In 1913, the commissioner of education headed a bureau within the Department of the Interior. Rather than a policy-making office for education at the national level, the Commissioner and the Bureau acted as a clearing house on educational statistics, enrollment trends, and state expenditures for schools.

one has to wonder what John Dewey thought

how some parents had misinterpreted her idea of liberty.

The House of Childhood, the company that manufactured and sold Montessori’s didactic apparatus in the United States, was a pivotal financial issue with unfortunate consequences for the business relationship between Montessori and McClure. Montessori had authorized similar companies to manufacture her didactic apparatus in Italy, England, Germany, and China, but the United States posed special problems for her.4 The original American franchise holder of the House of Childhood was Carl Byoir, a young man from Iowa who first learned about Montessori education in the pages of McClure’s Magazine. Sensing a profitable financial opportunity because of American parents’ enthusiastic response to Montessori’s method, he acquired the franchise for the House of Childhood from Montessori in 1911.

Two aspects of the Byoir-Montessori relationship were unusual. One, the usually suspicious Montessori was agreeable and open to the young American businessman; two, the beginnings of Montessori education in the United States, unlike most other educational movements, were marked by profit-making commercial motives as well as pedagogical ones.

He was a public relations pioneer who worked for the Hearst organization and served on the Creel Committee on Public Information in World War I.

Because of the extensive publicity in McClure’s Magazine about the Montessori Method, allegations were made that the magazine’s owners profited from the sale of Montessori’s books and educational apparatus. A Montessori enthusiast raised these concerns with the editor of McClure’s Magazine, “I take the liberty of asking you whether anything you could say or do to refute the false idea that has been such a drawback of this wonderful system of education,—that it is only a money making scheme because patents have been taken out and the price of the materials used having been so high that only the rich can possibly obtain them.”

“This Company will have to be renamed and the name “Montessori” must be left out of the new title. Dr. Montessori does not allow the use of her name without her consent. So please give another title to the Company without using the name Montessori.”136

At the very time that Montessori education was being introduced to an American audience, Progressive education was starting to dominate teacher education programs in colleges and universities. As public enthusiasm for her method waned, professional educators, especially Progressive professors of education who adhered to John Dewey’s pragmatic Experimentalism, grew more openly critical of Montessori and her method.

Walter Halsey, a professor at the University of Omaha, decried Montessori’s method as a “fad promoted and advertised by a shrewd commercial spirit” that had been foisted on an ill-informed “novelty loving American public.”1 Halsey was condemning the entrepreneurial efforts to manufacture and sell Montessori didactic apparatus.

Kilpatrick summarily dismissed Montessori’s assertion that her method was based on scientific findings. Kilpatrick found her method was deeply flawed by her limited knowledge of contemporary educational psychology.

professor, the Montessori Method was definitely not innovative, modern, nor experimentally scientific.

The Montessori Educational Association, which had been organized in 1913 by McClure, Mabel and Alexander Graham Bell, and Anne George, was still functioning. McClure, a key mover in the association’s founding, was no longer active but continued as the association’s second vice president and as a member of the Board of Trustees. The Montessori Educational Association, with nearly seven hundred members including a number of prominent individuals, had functioned throughout 1914 and remained a presence on the country’s educational scene in 1915.

The ending of the uneasy relationship between Montessori and the Montessori Educational Association closes the curtain on the first phase of Montessori education in the United States.

She wanted it understood that as the Montessori Method’s originator and patent-holder, the method was legally hers alone. No local society would be allowed to use the name “Montessori” in its title without her expressed authorization.

and in April 1916, the Montessori Educational Association officially voted to dissolve.

Instead of acting as Montessori’s American surrogate, Parkhurst’s attention shifted to developing and promoting her own version of Progressive education, the Dalton Laboratory Plan.

Parkhurst, as a teacher educator, developed a plan for a reorganized school in which pupils, between eight and twelve, worked on a unit in a particular subject in educational laboratories. In this new type of school, pupils “would enjoy more freedom” and “their studies” would be organized into sections, or laboratories, in which “each instructor” would be a specialist.

Although Parkhurst’s aim to reorganize the school “so that it can function like a community whose essential condition is freedom for the individual to develop himself” had a Montessori-like resonance, she positioned herself as a Progressive educator rather than a Montessorian.

1920. In practice, the subjects in the curriculum were divided into units, or “jobs.” The student, who signed a contract to complete a particular job within a month, worked cooperatively with other students who had made similar commitments in an educational space, called a laboratory. Each student worked from a “job-book,” a guide with instructions, suggestions, questions, and activities related to the monthly contract.

Though taught according to Parkhurst’s Progressive method, the standard secondary curriculum—mathematics, history, science, English, geography, and foreign languages—required in most high schools and tested on college entrance examinations— remained in place.62 However, the co-relationship of subjects, such as literature to history and science to mathematics, was emphasized as integrated, not isolated, areas of teaching and learning. Based on her work at the Massachusetts school, Parkhurst renamed her method the Dalton Laboratory Plan.

Parkhurst’s ideas gained considerable popularity in the United Kingdom where they were implemented in several British schools.

Deciding that she would no longer delegate authority for Montessori education to others, Montessori, with her son, Mario, as her agent, established the Association Montessori Internationale (AMI) in 1929 as the official organization to control and supervise all Montessori activities, including training programs, throughout the world.

The Montessori school would be located at the periphery of the public educational system but in the 1950s would gain a strong presence in private early childhood education.

By the mid-1950s, Progressive education, which had eclipsed the first attempt to introduce the Montessori Method in 1910–1920, was itself declining.

Gentile arranged a meeting between Mussolini and Montessori in 1924 at which the Duce expressed an interest in establishing Montessori schools. Mussolini was impressed by a method that instilled discipline and order and in which children learned to read and write at age four. He also wanted to use Montessori’s name and her associations and societies in other countries to promote his Fascist ideology.

In 1926, Montessori was recognized by the Tessera Fascista, the Fascist women’s organization, and made an honorary party member.

Mussolini, whose slogan was “Everything in the State, nothing against the State, nothing outside the State,” was determined to instill the Fascist ideology throughout Italy, including its schools and youth organizations.24 The Fascist regime was also tightening its control of Italy’s schools with all teachers required to take a loyalty oath.25

government responded to Montessori’s intransigence by closing Montessori schools and suppressing Montessori education.26 Maria Montessori left her native Italy as an exile.

Audrey Watters

Twitter and an Infrastructure of Hate

2 min read

This is a comment I left on Sherri Spelic's blog post "Nobody's Version of Dumb"

I think there are massive problems with Twitter. It is, by design, a platform for harassment. Think, for example, of how easy it is to retweet something in order to create a “pile on.” @-mentions pile up. The app becomes unusable. And there is nothing in Twitter’s architecture or in its business model to stop you from being DDOS’d like that. Twitter is a platform for anger. George is right about that.

But that does not mean that Twitter is a homogeneous, “safe” space where we are only exposed to ideas we agree with. A great deal of what happens on Twitter is wildly unsafe because of vicious, vicious disagreement. We all experience that differently, of course, based on identity — race, gender, religion, and so on.

Twitter is also, by design, a platform of brevity. It’s so easy for 140 characters to be insufficient — even when threaded together into longer arguments. It’s so easy too for 140 characters to be taken out of context. Again, by design.

Twitter is also, by design, a platform of celebrity. If you have the blue checkmark, as celebrities and media personalities and whatnot do, you are granted a “quality filter.” I’m not sure what that entails — my god, what constitutes “quality” on Twitter?! Who decides?! But I gather it means you are less likely to see the things that the unverified masses (that you do not follow explicitly) tweet. Celebrities tend not to be part of communities. (They really can’t be, because people can be ridiculous.)

All this makes Twitter a terrible place for community, but humans’ desire for community and communication is much more powerful than that. In the face of all the infrastructure that encourages us to clap back, there is at the very same time (and often in the very same users) a strong incentive on Twitter to care.

Audrey Watters

Advice from TNC

3 min read

We had a substitute teacher last night for the first session of our seminar on Opinion Writing. Professor Jelani Cobb couldn’t make it, so he sent his friend Ta-Nehisi Coates.

I should have taken notes. Instead I sat there starstruck as TNC graciously answered the class’s questions on writing. Those questions were wide-ranging: how long does it take him to write a book; what’s that process like; where do his ideas come from; how has celebrity changed how he works; how does he choose his subject matter; and so on.

Strangely, being in the room with such greatness helped me feel better about my own position as an opinion writer. I’ve been feeling really anxious about my fellowship, feeling like I don’t belong at this prestigious J School. I signed up for Jelani Cobb’s class on opinion writing – even though that’s largely what I do for a living already – because, while many of the other Spencer Fellows sit in on education classes, I felt like I needed to do much more to hone my craft in journalism itself. I’m not a journalist by training, after all.

I do have plenty of ideas and plenty of opinions. But turning those into long form isn’t so easy. And frankly, sometimes I feel guilty that I don’t have “takes” on everything that happens in ed-tech, even though I certainly have thoughts on all of it.

TNC talked a bit with us about the difference between opinions of the sort you toss out in conversations with friends – on- or offline – and those that you develop into an article. He talked too about how he writes in anger – I can relate – but how he doesn’t feel compelled to weigh in with a knee-jerk response but rather builds on that anger until there’s a deeper, richer, more powerful argument. (He’s out today with a new article “The First White President” that really exemplifies this.) There’s a difference between the kind of opinion you tweet, he told us, and the kind of opinion that’s worthy of building out into an opinion piece.

There’s something about the demands not just of social media but, more structurally, of the job of an “op-ed writer” (particularly those columnists at The NYT – you know who I mean) that almost requires people write silly stuff. When you’ve got to come up with two opinion pieces a week, there’s no time for research, no time for contemplation, no time for much more than a very routine and empty 800 words. A waste of time, and TNC insists that he never wants to waste anyone’s time.

I don’t want to waste my own time this year. It’s a huge privilege to have 9 months to think and to write and to not have to worry about my usual hustle. After last night, I’m feeling more confident I can do this, and I’m feeling less pressured to just publish because the Web demands more “content.”

Audrey Watters

Open Pedagogy and Social Justice

4 min read

I was asked to give a "provocation" to the Open Pedagogy and Social Justice track at this year's Digital Pedagogy Lab in Vancouver, BC. Here's (roughly) what I said:

When Rajiv and Robin sent me an email, asking me to pop into the class and say a few words to you, I immediately said “yes,” in part because I think the theme of this track is so important. “Open Pedagogy and Social Justice.” I think it’s important – crucial even – that the theme is explicit about its politics, that it centers “social justice” as part of its project. In fact, to me, that’s the important part: “social justice.” Not the “open” bit.

Too often, I think, open education has come to rely on that adjective “open” to stand in for an assumption about politics, an assumption about good intentions and social change. Open education has acted as though “open” – as a label, as a license – is sufficient, as though the social change fostered by “open” will 1) happen and 2) be progressive.

If we look at history – hell, if we look around us today – we can find plenty of examples (in education and elsewhere) where “open” is not aligned with social justice. MIT researcher Justin Reich, for example, has found that open educational resources – MOOCs, Khan Academy, and so on – can actually expand education inequalities by disproportionately benefitting the affluent. “Open” is not enough – you have to be explicit, as this strand does, and orient your work towards social justice. “Open” does not necessarily address structural inequality at all.

Indeed, “open” without this orientation towards justice might make things worse.

I’m guessing that many of the conversations you’ve had and will have in this track involve definitions – the often competing definitions – of “open.” Does “open” mean openly licensed content or code? And, again, which license is really “open”? (People love to argue about this one.) Does “open” mean “made public”? Does “open” mean shared? Does “open” mean “accessible”? Accessible how? To whom? Does “open” mean editable? Negotiable? Does “open” mean “free”? Does “open” mean “open-ended”? Does “open” mean transparent? Does it mean “open for business”? Who gets to decide? That is, whose stories about “open” get told?

If you’re familiar with my work, you know I spend a lot of time looking at the financial and political networks of education technology. And so this question of open education being so deeply intertwined with business – with venture capital and venture philanthropy – is something I’m quite concerned about. And I think that’s probably one of the greatest challenges that open education faces: can it extricate itself from the forces of education reform that are strikingly neoliberal, imperialist, and exploitative? As public education is under threat – from budget cuts, Betsy DeVos, and tech billionaires alike – will open education resist the dismantling of institutions, or will the movement (I guess it’s a movement) ally itself with libertarians who seek to place all risk and responsibility onto the individual, place all teaching and learning under the rule of “markets”?

And will education – public education, open education – address its own history of white supremacy, exclusion, exploitation?

If I had one big concern personally (which is always politically) – and I realize my time is almost up here – it would be that I see “open” being weaponized by those who are, in my estimation, the very antithesis of social justice. This is the Julian Assange model of “open,” if you will. Weaponized transparency.

What does “open” mean in a world of Wikileaks? I think, in part, it means we need social justice. Indeed, we need to put social justice at the center of our work. And it means, dare I say, we distance ourselves from those for whom “open” is weaponized (or readily weaponizable) against marginalized people.

Audrey Watters

Ed-Tech and Neo-Feudalism

7 min read

I'm posting here my responses to questions from a friend who's working on a story on neo-feudalism. I think there are some interesting ideas here that could be developed more...


One of the reasons I wanted to talk to you was because it occurs to me that this use of the term ‘neofeudalism’ seems to represent an acceptance of (or fear that) the myth of meritocracy has finally failed, and that class structures are being ‘firmed up’, or even formalised. As such it feels a little like an admittance that large-scale social mobility has ended. Considering your research interests - especially in relation to education and technology - what are your thoughts on this?

In some circles, education has long been touted as “the silver bullet.” (I think there’s a famous quotation from The West Wing that makes such a claim. I dunno. I’ve never watched the show.) If we just improve access to a good education – to higher education in particular – then all sorts of other problems will be ameliorated. Poverty, for example. Bigotry. Ignorance.

Education isn’t the fix for inequality. It does not address structural issues like racism. It does not redistribute wealth. Black people are paid less than whites at every education level. A college degree is clearly not enough.

A college degree is, of course, a signaling mechanism. It’s not just about what you know. It’s about where you went. Higher education is very much bound up in our notions of prestige and social hierarchy.

But many in Silicon Valley – and those are the stories I pay closest attention to – claim that one needn’t go to college. They point to Mark Zuckerberg, Bill Gates, Steve Jobs as college dropouts. Those in Silicon Valley like to suggest one can simply take a MOOC (a massive open online course) or get a badge or attend a coding bootcamp instead. As long as one can demonstrate “skills,” one can do anything. It’s the classic meritocracy mythology, and I see the tech sector as one of the myth’s greatest promoters. But if we look more closely at the powerful entrepreneurs and powerful investors and powerful companies in the tech industry, we can see there are powerful networks. And these networks often involve where people went to college: Stanford. MIT. Harvard. Even the famous dropouts – Zuckerberg, Gates, Jobs – had wealth and, of course, their whiteness to connect them to these networks.

The tech industry remains overwhelmingly white and overwhelmingly male, and yet it insists that it’s a meritocracy. This insistence allows Silicon Valley to continue to ignore the structural inequalities at play – who gets hired, who gets funded, what technologies get built, and so on. This has grave implications for the future, no doubt, as computing technologies are increasingly central to all aspects of our lives – professional, personal, political.

Again a very wide, general question - sorry! - but in what ways are you observing a transfer of power and decision making when it comes to education from government to private corporations? And is that leading to a two (or more) tier model for education? How is technology accelerating this?

Schools have always relied on private companies to supply things like desks and chalkboards and textbooks. So in some ways, new computing technologies are no different. But if we think of these technologies as simply upgrades to textbooks, we might be able to recognize the ways in which textbooks have long served to diminish the expertise of the classroom teacher. That is, content expertise resides outside the teacher. It’s in the authors of the textbooks, perhaps. It’s in the publisher itself. With new technologies, we see expertise – content expertise, technical expertise – increasingly moving outside of the school. Schools readily outsource all sorts of administrative and pedagogical functions to companies.

One of the most important transfers of power involves the role of technology entrepreneurs as education philanthropists. The Gates Foundation, for example, has a $44 billion endowment. The Chan Zuckerberg Initiative, the venture philanthropy firm founded by Mark Zuckerberg, was started with $1 billion worth of Facebook shares. Both of these organizations have a vast, and I’d argue incredibly anti-democratic, influence on US education policy. These organizations help shape the narratives about “the future of education” – which of course will be highly mediated and monitored by computer technologies (unless you attend an elite private school as Bill Gates’ children do or Gates and Zuckerberg did). We can see this currently with calls to “personalize learning” – this isn’t so much about the adoption of progressive pedagogical practices, but rather the adoption of data-driven teaching machines.

Our historical concept of feudalism is always tied to relationships to land - and digital platforms and data are often referred to as a new form of real estate. In what ways do you see platforms (and the control/ownership of data) as the new land of neofeudalism? In the same way that serfs worked on land they didn’t own, are we producing data we don’t own in spaces we don’t control? Are we being locked into feudal relationships with those that own and control the online spaces we inhabit and the data we produce?

We do use the language of labor to talk about learning – schoolwork, homework – but students have never really had much control over that process at all. They have – arguably – been able to “own” their work. That is to say, at the end of the school year, they could walk away with a manila envelope containing the worksheets they’d filled out and the essays they’d written and the pictures they’d drawn. But as school work becomes digital, there’s no manila envelope. Often what students do is trapped in a piece of proprietary software. And these software systems don’t just administer and store assignments and grades; they track all sorts of additional data: what time the student used the software and for how long; where the student was located when she used the software; what other applications are on her computer. All this data feeds the narrative of “personalized learning” that Zuckerberg and Gates and other technology investors like to push. But this data also feeds the algorithms and the knowledge base of these software makers. Students see no remuneration for this work. Nor do the schools that buy or license the software. And often, students and schools seem quite unaware that their data is being harvested and mined this way.

We’ve long been threatened in school with the saying “this will go down on your permanent record.” But now, with this massive data collection, it just might. We don’t have a lot of insight into how decisions are made based on this data – how algorithms are designed and implemented, how algorithmic decision-making in education is made. It’s a “black box,” as Frank Pasquale calls it – a “black box society.” Rather than being a silver bullet, this might make our education-related data trail precisely the thing that maintains social inequalities, but in ways that are even more opaque.

Finally: what’s your job title/how would you like to be described?

Writer and scholar who focuses on technology and education. Spencer Fellow at Columbia University School of Journalism. (Gotta take advantage of that prestigious title while I can.)

Audrey Watters

My Twitter Advertisers

11 min read

Twitter has updated its privacy policy, indicating that it plans to use our data more extensively. I requested a list of advertisers whose "audiences" I am apparently a part of. (This is odd since I block ads and block any promoted tweet.)

The list (a 33-page PDF):

@1199seiu @12monkeyssyfy @13hours @1776 @1800contacts @2020companies @20jeans @23andme @365ninja @3drobotics @6fusion @76 @7eleven @a_i @aarp @aarpacademy @aasincco @abc_thecatch @abcsharktank @abercrombie @abetteror @academy @accatrackertm @accenture @accenturejobsfr @accionus @acedge @achievers @ackeeapp @acorn_stairlift @acquirent @act @actstudent @acura @acuracanada @adaptive_sys @adayinriyadh @addesignshow @adeccofrance @adl_national @adska360 @adstest6 @adultswim @adveronline @adviacu @aerie @affinio @ageofish @aha_io @ahmadshafey @ahs_careers @aibgb @aiginsurance @aip_publishing @airtable @ajenglish @ajitjaokar @albertandp @alexandani @alfredstate @all_laundry @alloresto @allstate @allyapp_de @alongsidehr @alterrecrute @alzassociation @alzregistry @amarlakel @amazon @amazonjp @amazonkindle @amberbezahler @amctheatres @amdocs @amediacompany @amercharities @americanexpress @americanfunds @americanxroads @amexopen @amfam @amplify @amtrak @angieslist @angietribecatbs @anthembusiness @anthology @apalon @apartnerships @aperolspritzita @appanniejapan @appexchange @appirio @applebees @applegate @appliedsystems @appointmentplus @arabicfbs @archerfxx @areyouthirstie @argos_online @armtheanimals @asana @asentaes @ashfordu @ashleybridgetco @asklloydsbank @aspcapetins @aspiration @aspokesman @astrazenecaus @ataleunfolds @atlanticnet @atomtickets @atosfr @att @attdeals @attsmallbiz @audi @audible_com @audienseco @audiuk @audubonsociety @autosoftdms @autotrader_com @auviknetworks @avado_finance @avanadefrance @avvo @axios @axon_us @babiesrus @backchnnl @badmoms @balajipalace @baltimoresun @bananarepublic @bankofamerica @bankofireland @bankofscotbiz @bantbreen @barclaycardus @barclaysuk @barcorp_news @baristabar @basecamp @battlefield_es @bauschlomb @baydin @bayesiannetwork @baytobreakers @bayviewfunding @bcbcnj @be_the_book @beef @beenverified @beepi @believeagaingop @belk @belllabs @belvita @bench @benchmadeknives @bentleymotors @berkeleydata @berniesanders @bestfriends @bestselfco @besttimeever @bestvaluecopy @betabrand @betfair @bettercloud @betterworks @bgconoxygen @bhillsmd @bicrazors @bidelastic @bigdatato @biggerbooks @bigstock @billgates @birchbox @bitly @bizfinyc @blackbaud @blackhatmovie @blackrock @bleacherreport @blizzheroes @blizzheroesde @blizzheroesfr @blocknload @blookupnews @blowdrystyle @blueapron @bluediamond @bluehost @blurbbooks @bnbuzz @bnymellon @bobsredmill @bodomelockport @bodyfortress @bofa_news @bofa_tips @bombas @bonobos @bookingcom @bookingcomnews @boombeach @boomerang @boomerangtoons @boostinsider @boostmobile @box_europe @boxhq @bp_america @bpkleo @bpkleo2002 @braindeadcbs @braintree @brilliantearth @britausa @brocade @brookstreetuk @brr_karent @bscacareers @btenergy @buddygit @budlight @budlightca @buildup_io @bukalapak @bulbapp @bullet_news @burtsbees @busbud @bushel @businessontapp @busytreats @buyinsnet @buzzfeednews @buzzfeedpartner @buzzsumo @bvex_emea @c2montreal @cadillac @calabrio @calcasinsurance @callofduty @calpizzakitchen @camletmount @camtasia @canarylearning @candycrushsoda @candysodajp @canonusa @canonusabiz @canonusaimaging @canonusapro @capitalone @capitalonecb @capitalonespark @capphysicians @captainamerica @car2goaustin @car2gocalgary @car2gocolumbus @car2godc @car2gomiami @car2gomontreal @car2goseattle @car2govancouver @carbonite @careactionnow @caredotcom @careersatcrown @careersmw @carlsjr @carolinabio @carolrundle @carphoria @carrierebancair @carsdotcom @castandcrewnews @cavalia @cbre @cdotechnologies @cdwcorp @cedatotech @cellpressnews 
@cengagelearning @century21 @centurylinkent @centurylinkjobs @cfainstitute @champssports @change @chappycpga @charmin @chartmogul @cheapflights @checkmarx @cheesecake @cheezit @chevrolet @chfund @chicagotribune @chickfila @chipotletweets @chipsahoy @choicetechgroup @cholulahotsauce @christicraddick @chromecast @chrysler @chubb @cibc @circlebackinc @cisco @cisco_germany @ciscocollab @ciscofrance @ciscomfg @ciscorussia @ciscosecurity @ciscosp360 @cision @ciszek @citibank @citizensbank @ck12foundation @clashroyalejp @classflow @classk12 @classroomgenie @classy @cleanmaster_jp @clearwaterps @cleclinicmd @clickmeeting @clickviewau @clickviewuk @climatereality @cloudinary @cloudllycom @club4growth @cmt @cnn @coachdotme @cobiasystems @cocacola @cocacolaco @coconala @codecademy @codefights @cogecopeer1 @cognizant @cognizanttalent @colehaan @collisionhq @comcstspotlight @comedycentral @comedydottv @comixology @communitytv @comparably @comprenditech @comthingssas @concur @concurrencyinc @conoco @constitutionctr @converse @convince @coolsculpting @coopuk @coorslight @coorslightca @copromote @cordblood @coreonapp @corespaceinc @cornellmba @corvilinc @couchbase @courvoisierusa @covetfashion @cppinc @cq_chat @craft_ryan @cranecareers @crashplan @cray_inc @creditkarma @crewlabs @cricketnation @criticschoice @crnc @crossroadsgps @crrsinc @crunchyroll @crushpath @cspac @cspenn @cspire @css_hero @ctcorporation @ctgla @cugelman @culturrh @cunyjschool @curesimple @cvspharmacy @cxsocialcare @dailydot @dailydotmedia @dailysignal @dairyqueen @damrap @daniellemin @dapulselabs @darkmushroomau @darylurbanski @dashboardapp @dashhudson @dashlane @datadoghq @datasift @davis_support @dazsi @ddmsllc @deadpoolmovie @dearwhitepeople @defianceworld @defymedia @deichmann_de @dellemc @dellemc_ci @dellemcdssd @dellemcecs @dellemcisilon @dellemcscaleio @dellenterprise @dellevents @dellsmbus @delmonte @democracycolor @demoversion1111 @dennym15 @desk @dfeley365 @diamondcandles @dice_techuk @dicks @difficultonhulu @digipillapp @digitaldealer @digitalocean @dinkoeror @directv @directvnow @discoverglobal @dish @disney_it @disneyaulani @disneystudiosla @divergent @dnbus @dnncorp @docusign @dodge @dollarshaveclub @dominos @domotalk @donotcrack @door2doorhq @doordash @doritoscanada @dotandbo @dotdebug @dots @doubledutch @dove @dowjones @draftkings @dragoncitygame @drewglick @drivemaven @dropbox @dropboxbusiness @dunkindonuts @dynatrace_ruxit @eagletalent @eamaddenmobile @eat24 @ebarproductions @ebay_uk @ebayinccareers @ecampusdotcom @ecco_usa_shoes @eciconsulting @ecommission @econocom_fr @econsultingrh @ecrpubconnect @eddievs @edgeendo @edible @edintfest @edmundoptics @edtrust @edutopia @eehlee @efcollegebreak @effenvodka @efmurphyiii @eiuperspectives @ekhoinc @ellemagazine @emailage @emaze_tweets @emily_is_emily @empirefox @empowertoday @enterprise @envisagelive @envisioninc @envoy @eqdepot @ericgreitens @esade @eset @esteelauder @esurance @ethoswatches @eugenesymphony @eventbrite @eventbriteatl @eventbriteuk @everlifeparis @evernote @evidos @evonomicsmag @exactonline @executrade @exelate @expedia @experis_us @expertspool @ey_performance @ey_tas @eyeem @ezyinsights @facetuneapp @fairmonthotels @falconio @famousbirthdays @famousfootwear @fanbasenet @fandangomiles @fandangonow @fantv @farfaria @farmfreshtoyou @fedex @feliciamupo @fenwickwest @festivalflix @fetc @fiftyshades @fiftythree @filmstruck @first_backer @fisherhousefdtn @fitbit @fitstar @fiverr @fixautousa @flightcentreau @flightdelays @flipboard @floatapp @flonase @fluentconf 
@flylaxairport @fontainebleau @footerfamily @footlocker @forcepointsec @ford @fordfusion @forduk @fortunemagazine @forty3north @fotosearch @foundertees @foursquareguide @fpcnational @frankandoak @freeenterprise @freepeople @freshgrade @frontierbiz @fti_us @ftreports @fuelgoodprotein @fujitsu_global @fujitsu_uk @fullscreen @fundingcircleus @fusion @fusiontv @futureadvisor @futurereadysg @fxbusa @g5games @ga @gadventures @gain @galka_max @gallofamily @gameit_app @gap @gapkids @garagedoorsvc1 @garanti @garantione @gatesfoundation @gatorade @gb_recrute @gdms @gdnhighered @ge_europe @geekwire @geico_jobs @generalelectric @genesisusa @geoffreyac @geoffsdesk @getbridge @geteero @getresponse @getspectrum @getstocks @gett @gett_uk @gettyimages @getzeel @gge4k @gifkeyboard @gigya @gillettevenus @gilt @giltman @giphy @github @gitlab @givingucashback @glade @gluereplyjobs @goairguard @goanimate @gocompare @godaddy @goldengoosepro @goldenvoice @goldieblox @goldmansachs @golfnow @golfshotgps @gomodev @googleanalytics @googlecloud @googlehome @gopro @gosolaramerica @gotodaydotcom @gozaik1 @gozcardstest1 @gozcardstest10 @gozcardstest15 @gozcardstest2 @gozcardstest3 @gozcardstest4 @gozcardstest6 @grammarly @grantamag @greatindoorscbs @greenhouse @gregabbott_tx @greyhoundbus @groundfloortbs @groupon @groupsjr @grubhub @gs10ksmallbiz @gs10kwomen @gsuite @guardiangdp @guardianlife @guruenergy @guvera @gwsphonline @haagendazs_us @hackreactor @hallmark @hamiltonjeweler @handy @hangtime @happify @hardees @harrisjb @harrys @harvardbiz @hayscanada @hbo @hbonow @headspace @healthiergen @heart_of_vegas @hearthstone_de @heineken_us @hello @hellothinkster @henryholt @herobaby @hersheys @highimpactlaw @hillaryclinton @hiltonhonors @hired_hq @history @hlinvest @holidayclaims0 @holidayinn @hollisterco @homedepot @homejoy @homesweethome @hometownquotes @honda @honda_uk @honest @honeymaidsnacks @hootsuite @hortonworks @hostgator @hotdogcollars @hotelsdotcom @hoteltonight @hover @howaboutwe @hpe @hpe_smb @hrc @hrchaostheory @hsbc_ca @hsbc_uk @hsbc_us @hsbcukbusiness @htcvive @htmlwasher @htsi @hubspot @hubspotacademy @hulu @hunterselection @hyattregency @hyatttweets @hyphenapp @hyundai @hyvee @iagdotme @iberostar_eng @ibm @ibmanalytics @ibmbigdata @ibmcloud @ibmcloudant @ibmpolicy @ibottaapp @ice_markets @icebreakernz @iconohash @icontact @ideou @idgtechtalk @ifonly @ifsabutler @ihgrewardsclub @ijm @iloveindique @imaterialise @immoverkauf_de @immunio @imperva @incisivecareers @indeed @indeedau @independent_ie @indignationfilm @inficon @influitive @infusionlounge @ingramcontent @inliving @insideamazon @insidemancnn @insightpool @instasupply @insuremypath @intel @intel_italia @inteliot @intelitcenter @intellabs @inteluk @intercom @interoute @interstatebatts @invescous @invisionapp @ioi_lc_vacature @iopa_solutions @ipswitch @irobot @isexperiment @ishares @isteconnects @istock @iti_jobs @itmedia_online @itsflo @jackbox @jackpotjoyslots @jackthreads @jaguarusa @jamfsoftware @janeallen08 @jason @jason2cd @jasonnazar @jcpenney @jcrew @jeep @jennfarrer @jerrysartarama @jet @jetblue @jetbrains @jh_investments @jhtnacareers @jibjab @jimbeam @jiraservicedesk @jisc @jogo_amor_doce @joinphilu @joniforiowa @jonlee_recruit @jonloomer @jordo37 @josecotto @josecuervo @jossandmain @joyent @joytolive69 @jpmorgan @juicebeauty @justcozapp @justdancegame @justeatuk @justgiving @jwmarriott @jzoudis @kalyptusrecrute @kaneisableinc @kaplandevries @karankhanna @kareomarketing @kayjewelers @kbidonline @keen @kelleybluebook @kenmore @kentucky_strong 
@kernelmag @keypathedu @kfc @kiindellc @kimkardashian @kindlemw @kindsnacks @kingofnerdstbs @kingsprophets @kitkat @kiva @kiwi_qa @kiwicrate @klear_com @knetbooks @knowroaming @kobo @kodakmomentsapp @kohls @kpmg @kpmguk @kraftcheese @kred @krispykreme @ks_saturday @kubothemovie @kyled @kyoppcoalition @lalalab_app @landroverusa @larrykim @laserfiche @latimes @latonas @laundrapp @lays @leadpages @leadsift @leanercreamer @learnvest @lelo_official @lenarachel @lenovoeducation @lenovogov @letgo @letgoturkiye @letote @levelupvillage @lexisnexis @lexus @lexuscanada_fr @librarianstnt @lifeatgozaik @lifeatpandora @lifeextension @lifelock @lifetimetv @lifetouch @liftmaster @lightercapital @limearita @lincolnmotorco @linkedin @linkedinbrasil @linkedineditors @linktv @linkup_expo @lipton @liquidgrids @litter_robot @littlebits @livefyre @livethelooknow @livingsocial @livingspaces @lloydsbankbiz @loadimpact @localheroesuk @logicalisjobs @londonfallen @lotame @lovemyphilly @lovingthefilm @lovoo @lowes @lt_careers @ltirlangi @lucidchart @luminafound @luminessair @lumosity @luxury @lyft @lynda @lyonsmagnus @madeinal @magentobi @magicianssyfy @magnumicecream @mahabis @mahlknecht58 @mailchimp @mailup @mailup_us @makerbot @manpower_us @mapp_digital @marcaria @marchmadness @marketchorus @marketingcloud @marketo @marksandspencer @marriott @marriottrewards @martinomalley @marvelchampions @marvelstudios @mashable @massageenvy @mastercard @mastercardbiz @masterofnone @mastersuites @maxmara @mayankjainceo @mbertoldi1 @mbna_canada @mbusa @mcad @mcdonalds @medaxs @mediatemple @medium @medium_politics @meencanta @meetup @megapolissq @mejorpizza @mel_science @melindagates @mercari_jp @mercuriurval_nl @mercycorps @meritonsa @merrilllynch @meshfire @metlife @metromile @metropcs @meundies @mgspellacy @michael_venuto @michaelcajansen @michaelsstores @michel_augustin @michiganalumni @microsoftcanada @microsoftedu @midclassstrong @milestech @millerlite @mint @miraclewhip @missionrace3 @mitsubishihvac @mitxonedx @mkjigsaw @mleison @mliferewards @mlive @mltamplin @mobilecause @mobileiron @mobymax @molson_canadian @moneyconfhq @moneymorning @moneysupermkt @monster @monster_buzz @monster_uk @monstercareers @monstergozaik11 @monstergozaik13 @monsterjobs_uk @montereyaq @moo @moodle @moovit @morganmovie @morneau_shepell @morningstrfarms @mountaindew @movableink @moz @mrworknl @ms_ignite @msa_testing @msftbusinessuk @mssociety @mtestingads2 @mtv @mtvteenwolf @mujjostore @munchery @murthy_gozaik @museumhack @musicfirst @mybetcom @mycuboulder @myheritage @mymyrtlebeach @mynamenecklace @mytopks @nafme @namecheap @namedotcom @nascaronnbc @nastygal @natgeo @natgeochannel @nationalpro @nationbuilder @naturalbalance @naturevalley @navyfederal @nba @nbatv @nbclilbigshots @nbcnewyork @nbcshadesofblue @ndivinc @nearravi @neptunegametime @nerdist @nespressousa @netflix @netflix_ca @netflixanz @netflixbrasil @netflixjp @netflixlat @networkmonkeyco @neutrogena @new_gozaik @newamericanecon @newpig @newrelic @newsweek_int @newyorklife @nexrep_llc @nfib @nhlonnbcsports @nicelaundry @nicki_briggs_ @nicodermcq @nielsen @nike @nikeaustralia @nikebrasil @nikkdahlberg @nikonusa @ninewest @nissanusa @noble1solutions @nomadgoods @nomadixinc @nookbn @nordstrom @nordstromrack @norton_uk @nortononline @notonthehighst @nra @nrcc @nrfnews @nrm_inc @nrsc @ntt_europe @nutribulletuni @nuvi @nychealthy @nyse @nytfood @nytimes @nytimeswordplay @nytnow @nyusteinhardt @obeverett @offerup @office @office365 @officialkoni @oikos @ojustus78 @olayskin @oldgozaik @oldnavy 
@olivegarden @omahasteaksjobs @on24 @onbondstreet @onekingslane @onthehub @opensociety @opentable @opentextdctm @operadeparis @opposingviews @optimum @optimumoffers @oracledatacloud @orbitz @oreillysecurity @oreo @orlandosentinel @oscarmayer @oscon @osper @otusk12 @ourhealthca @ourridenyc @outback @outdoorvoices @overstock @owntv @oxford_seminars @oyster @pacapparelusa @pampers @pamperslatinous @pandacable @pandorabrands @pandoramusic @panerabread @panoply @papajohns @paperpile @paradisus @parallelsmac @pardot @parseit @partsunknowncnn @pathbrite @pathscale @patriciachica @paypaluk @pb_careers @pcfinancial @peacockstore @pearsonprek_12 @penguinrandom @peoplepattern @pepsi @percolate @perfectaudience @periscopedata @personalcapital @petco @petedge @petsmartcharits @pgacom @pge4me @philipshealthna @phillips66gas @phrma @phunware @piazza @pillpack @pivotal @pizzahut @planetbooking @plangrid @plargentlgs @playerstribune @playfoodstreet @playhearthstone @playkids @playoverwatch @playstationuk @playwell_tek @pluralsight @porsche @portfoliobooks @poshmarkapp @postcardmania @postmates @power_starz @powerimpossible @powertofly @pressroomvip @prezi @priceline @procoretech @proflowers @programmableweb @prologicdesign @propel_jobs @proplan @protegehunters @proxe_pay @prweb @pryan @psdgroup @pssclabs @ptc @puntonewsspain @pureproteinpro @purina @pwc_uk @qaloring @quaker @qualys @quantcast @questarai @quickbooks @quickbooksuk @quip @quizup @quorumreview @qvc @r2rusa @racemovie @racing_uk @rackspace @radeon @rakutenjp @rallysfbay @rankinthomas @rapidsos @rare @ravishastri577 @rbcontentpool @reachnj @reachravens @readypulse @realdonaldtrump @realexpayments @realhomechef @realnameshq @realwotf @reassurancedntl @rebuildingamnow @recurly @redgiantnews @redlobster @redrockapps @reesespbcups @regalmovies @regentu @rei @reservationscom @ressoftware @retailmenot @retentionsci @reuterstv @richstoddart @ridepeloton @rifftrax @rightdrive_cars @rightmove @rightside @ringcentral @riseconfhq @ritzcrackers @robertblakely @robertswesleyan @rocelec_jobs @rohitprologic @rootclaim @rosettastoneuk @royalrevolt @roybluntmo @rsasecurity @ruffles @rupaulsdragrace @rushcard @ruumkidswear @saasler @safeaffordable @sagenamerica @sageuk @salecycle @salesforce @salesforce_nl @salesforcedevs @salesforceiq @salesforcewave @samsclub @samsungbizusa @samsungbusiness @samsungmobile @samsungmobileus @samsunguk @sandralee @sapsmallbiz @sapsports @sarahomecare @sbs @scaddotedu @sce @scenecard @schoolchoicewk @schoolkeep @schoology @schoolsedu @scjohnson @scmp_news @scorementors @scotiabank @scottbaldridge @scottevest @scottforflorida @scottishwidows @scribd @scrowder @scrubbingbubble @sdl @sea_primeteam @seamless @sears @searsdeals @searsoutlet @seatgeek @secretdeodorant @secretly @securlyinc @seekjobs @seesaw @seesotv @seetechsystems @segment @semamembers @sendgrid @sense8 @sensodyne_us @seoclerks @sephora @sethrobot @seventhson @sfchronicle @sg_tweets @shareonetime @shastry007 @shopheroesgame @shopstyle @showtime @shutterfly @sidekick @sierratp @signalconf @simcitybuildit @similarweb @simple @simplymeasured @siostech @siriusdecisions @skybet @slurpee @smartcarusa @smarterschls @smartnews_ja @smexaminer @smiledirectclub @smurdochking @snagit @snap_hr @snapstream @snapwire @sncf_recrute @snow_jp_snow @snuggle_bear @socialbakers @socialmoms @socialstrategi @sofi @softeamgroup @softorino @sokap1 @solarcity @solarwindsmsp @solidfire @solutionstream @songkick @sonos @sonyxperia @sonyxperiafr @soundcloud @sourcelink @southwestair @spaceagent 
@speedtest @spinweb @spireon @splcenter @sportchek @spot_im @spothero @spotify @spotify_latam @spotifyarg @spotifybrands @spotifycanada @spotifychile @spotifycolombia @spotifyjp @spotifymexico @spotifynl @spotifyuk @spredfast @springautfair @sprint @sprintbusiness @sprintlatino @sproutsocial @spwright31 @spycameranews @square @squarespace @stackla @stacksocial @stacye_peterson @stamats @standearth @stanfordbiz @stanslap @staples @star @starbucks @starbuckscanada @starcomww @startafirecom @startupleaguehq @starz @statsocial @steamintech @steelersshop @steelhouse @stellaartois @stepikorg @sticky9hq @stitchfix @stjude @storageguymark @stormfallrob @strategyand @stratoscard @stripe @structure @strvcom @stubhub @studentloanxprt @studyatgold @studyo @subscribetowapo @subway @sumall @supermariorunjp @superwalk @supportwaschool @surgeconfhq @surveycompareuk @susanwbrooks @suzukiireland @swidowsadviser @syd_rh @syfytv @sylvanlearning @symantec @symantec_dach @symantecemea @symantecfr @synergypharma @synertechinc @sysomos @t14haley @tableau @takedaoncology @takelessons @talbotsofficial @talent_io @talent_io_de @talentbinhiring @talentcove @talktalkgroup @tanbooks @tangerinebank @target @targetdeals @targetedvictory @targetstyle @tarot_4all @td_canada @tdameritrade @teachable @teacher2teacher @teacherkit @teachforamerica @teamcoco @teamcsproject @teccanada @techcrunch @techmgrweekly @techraptr @techsmith @techsmithedu @techspaceinc @teconnectivity @tedxcesaloned @tejas @teleflora @telltalegames @tenmarks @tesglobalcorp @tesonline @testmonster2 @texture @the74 @the_bbi @the_tldc @thebhf @thecliodotcom @thecw_legends @thedavegrossman @thedivisiongame @thedjhookup @theduff @theeconomist @theforestisreal @thegrid @thehungergames @theibmmspteam @theironyard @thekevinrennie @thelastshiptnt @theleftovershbo @themoevans @themotleyfool @theopen @theory__ @thepointsguy @thepublicsquare @theretreatuofl @thesandwichbar @theskimm @theskinnypop @thesundaytimes @thesuperscreen @thetonetree @thetoppuzzle @thetrackr @thewinkapp @thingworx @thinkful @thinkpureb2b @thinkwithgoogle @thirdspacetweet @thisiscarrot @thisisglow @thoughtworks @threadless @throne_rush @tiaa @ticket_iq @tictacusa @tidalhifi @tigglykids @tigglylearn @tilt @tingftw @tintri @tippn @tivo @tmobileatwork @tmobilecareers @todaytix @todaytixuk @toddnuckols @tombernardin @toms @tonkawater @toofarmedia @toppsdigital @toptradr @torbooks @torontostar @tortus_ict_fin @torysllp @toshibausa @touchofmodern @toyota @toyotaevents @toyotafanzone @toysrus @tradegovuk @tradeideas @tradestconsult @transamerica @transworldbooks @travelers @travelocity @tresemme @tribalecommerce @tridentgum @tridentsystemsi @tripcase @trippliteca @trivago @trove @truecar @trulia @truthorange @trychangeup @trytheworld @tune_mc @tunecore @turnitin @tutor2u @tvengagement @tweetdeck @twentythree @twilio @twitter @twitteradsita @twitteradsnl @twitteradsnord @twitteradsza @twitterbusiness @twitterdev @twitterisrael @twittermktgbr @twittermktgdach @twittermktges @twittermktgfr @twittermktgid @twittermktgmena @twittermktgsg @twittermktlatam @twittersafety @twittersurveys @twodots @twperfmktg @tyco_is @tysofast @uaw @uber @uber_brasil @ubereats @uberflip @uberfr @ubisoft_spain @uchicagoalumni @ucrushapp @udacity @udemy @ugo @ultabeauty @umuc @unbounce @undelayio @underarmour @uniicard @unileverusa @uniqlousa @united @universitycu @unmetric @unrollme @updesk @upperquadrant @uproxx @ups @upskilled @upstart @urbanairship @urbanoutfitters @urthbox @usaa @usarmyne @usaswimming @usbank @uscellular 
@uschamberaction @usertesting @usmarinecorps @usps @uspsbiz @vailresortsjobs @valvoline @vantagedc @vantiv @varaprasadb1 @vascodatanews @vasg4u @vassili @vectrencareers @vegas @velcrobrand @vendhq @venturebeat @veracode @verizon @verizondeals @verizonfios @verizonlatino @versal @verychic @vevo @vh1savethemusic @viacom @viatortravel @vice @viceland @vicenews @victorinox @videoblocks @videogamevoters @vidretal @vidyard @vimeo @vinecreators @virgin @virginamerica @virtual_inst @visa @visiblevc @visitbritaingcc @vmware @vocus @vodacom @volvocarusa @voya @vsco @waistshaperz @wakeupcalltnt @waldenu @walgreens @walkmeinc @wallapop_us @walmart @walmartcomus @waltdisneyworld @warbyparker @warcraft @warcraft_de @warcraft_es @warcraft_fr @warcraft_ru @washingtonian @wastedive @wateraiduk @watsonanalytics @wayfair @wbhomeent @wealthdragonsuk @wealthfront @wearejoybird @wearepolly @weareteachers @webex @webroot @websummit @wehireleaders @welchs @wellnesspetfood @weloveweather @welt @wendys @westerndigidc @wework @wffbarry @wglenergy @wharton @whbm @whereswhatapp @whihofficial @whitetruffle @whsmith @wienervention @willhillaus @williamssonoma @williewilkov @winc @windex @windowsdev @wipro @wishshopping @wistia @wix @wklawfirm @wonderworkshop @woothemes @wordbraingame @wordery @wordpressdotcom @wordstream @wordswfriends @workcast @workforsky @worldfirstuk @worldfirstus @worldofcocacola @wpallimport @wpengine @wrapbootstrap @wsj @wsjambitioushrs @wutswhat @wwlp22news @xactly @xerox @xfinity @xmedialab @xperttechinc @xtrade @yahoofantasy @yelp @yesware @yikyakapp @your_daily_dish @youralley @zadarastorage @zaggdaily @zapatomundo @zapproved @zarbees @zetta_corp @zhihao @zillow @zingypet @zipcar @zipcaru @zipjetuk @zomato @zomatouk @zoom_us @zoosk @zters @zulily

Worth noting the education- and ed-tech-related accounts...

Audrey Watters

Deleting the Network

3 min read

Bryan Alexander wrote this morning that he was making his “ruthless Facebook purge of spring 2017,” announcing his plans to remove some of his “friends” on Facebook. I don’t have nearly as many connections on Facebook – no doubt because I deleted my account a few years ago and only signed up again for a new one. I’ve been fairly cautious about who I “friend” there, as I am uninterested in having a large Facebook network to manage. I have a Facebook page for Hack Education and a page for myself as a writer. I figure that folks can follow along there if they want updates about what I’m working on.

I do regularly unfollow people on Twitter. A few years ago, I trimmed my follow numbers by about half. I now follow about 800 people, which doesn’t seem like too few or too many. (I also rely on lists of journalists and news organizations rather than following these accounts directly.)

Bryan’s efforts seem to address culling the composition of the network. What I’ve been doing recently – and I’m thinking now about how similar or different this might be from what Bryan is up to – is culling my own history on the network.

I now delete all Facebook and Twitter posts that are older than 90 days*. I also delete all email that’s older than a year. (This past week’s phishing scam using a fake Google Docs app alongside the hacking attack on the Macron campaign demonstrated, I think, how vulnerable all our documents and messages and connections are in email. I mean, I thought we’d have learned it earlier thanks to Wikileaks’ release of the DNC’s emails. But hey.)

I think Bryan is severing ties to certain accounts on Facebook because he wants to see better information and he wants his own posts to be more readily seen. Me, I am deleting information because I am not interested in the retention of data as part of a weaponized information gaze. But by deleting “friends” and by deleting posts, we both are actively and willfully reshaping our social networks. We are making adjustments to the reach and level of activity that will certainly alter our “presence” online – in no small part too because these networks increasingly display information to us algorithmically.


* In order to delete my tweets, I used Tweet Deleter, which did require me to sign up for a premium account in order to delete everything. In order to delete my Facebook activity, I used F___book Post Manager, which is a Chrome add-on. Before deleting anything – Facebook, Twitter, Gmail – I made sure to download a copy of my data.
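For anyone curious about what a tool like Tweet Deleter automates, here is a minimal sketch – assuming Twitter API v1.1 app credentials and the tweepy library, neither of which this post actually uses – of removing tweets older than 90 days:

```python
# A rough sketch, not the actual tool: delete tweets older than 90 days
# via the Twitter v1.1 API, assuming tweepy and placeholder app credentials.
from datetime import datetime, timedelta

import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")   # hypothetical keys
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")    # hypothetical tokens
api = tweepy.API(auth, wait_on_rate_limit=True)

cutoff = datetime.utcnow() - timedelta(days=90)

# Note: user_timeline only reaches back a few thousand tweets, which is part of
# why dedicated services work from a full downloaded archive instead.
for tweet in tweepy.Cursor(api.user_timeline, count=200, include_rts=True).items():
    if tweet.created_at.replace(tzinfo=None) < cutoff:
        api.destroy_status(tweet.id)
```

Either way, downloading a copy of your data first – as noted above – is the crucial step, since deletion is unrecoverable.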

Audrey Watters

Anti-Racist Literacy and Facebook

2 min read

Cross-posted to Facebook

A student asked me last week after I spoke about “higher education in the disinformation age” what I’d suggest she do when she sees dis- and misinformation on Facebook.

In some way, her question was not “how do I fact-check online.” (If she was thinking about fact-checking, she’s already ahead of the game.)

Her question was really “should I say something?” As much of the dis- and misinformation on Facebook is deeply intertwined with bigotry – racism, sexism, homophobia, transphobia, jingoism, anti-Semitism, anti-Muslimism, and so on – I do believe it is crucial to speak up. (But I also recognize that it can be challenging in many circumstances to decide how to respond. How to respond depends, for starters, on the poster and on one’s relationship to her or him.)

A lot of the work on “___ literacy” lately has focused on the skills that everyone needs in order to decipher information online. (For better or for worse. Rolin Moe and Mike Caulfield have pointed out that, at best, it’s mostly for nothing.) But I would argue that this work needs to be explicitly anti-racist as well.

Audrey Watters

Education and Technology: Critical Approaches

1 min read

I have two chapters in Education and Technology: Critical Approaches, and I was asked by the editors to make a quick video to tease out some of the ideas I write about.

Audrey Watters

Federal Money Bought Me This...

2 min read

Cross-posted from Facebook

Tressie McMillan Cottom wrote on Twitter about how she'd benefited from the various federal education programs that Trump wants to cut. The responses are amazing (and her observation, of course, that by cutting these programs the Trump Administration absolutely hopes to target brilliant Black folks like her is dead on).

I can't even begin to write about all the ways in which I've benefited from federal dollars for my education. I mean, I grew up in Wyoming where I had an amazing public education because of the ways in which the state benefits from federal mineral rights payments.

I bet there were a ton of programs I benefited from growing up that I didn't even realize received federal dollars -- band, foreign language class, the Wyoming State Reading Council, my school and public libraries...

I know I learned the alphabet from Sesame Street. I learned compassion from Mister Rogers' Neighborhood. I learned science from 3-2-1 Contact. I learned some Spanish from Villa Alegre. Thank you, PBS.

To this day, I learn daily from NPR.

As a college student, I was the recipient of a Pell Grant, federal work study, and a Stafford student loan. I paid off all my loans even though the Department of Education turned me over to a collection agency. My tax dollars at work. Thanks, Dept of Ed. I still don't want to see you defunded, even if you fucked me over.

My family has benefited from the free and reduced lunch program, from SNAP, and from WIC. My son attended a Title I school. (And thanks to SSI and Social Security benefits, I was able to barely scrape by as a parent of a young student while my husband was sick and after he died. The Social Security Admin claims they overpaid me, and that I owe them money. My tax dollars at work once again. But I'd never wish that any of us had no social safety net. Only callous assholes say such things. And even worse, those who move to enact it.)

Audrey Watters

Distraction Shaming

1 min read

Cross-posted from Facebook

I grow weary of all this talk that certain things are "distractions" from "the real story."

I believe you can hold many ideas, multiple agitations and angers in your head all at once. You can pay attention to many stories. Really. You can.

See, I was widowed on August 29, 2005. And I was able to deal with the grief and horror of losing a husband on that day AND witness the grief and horror of Hurricane Katrina.

Human suffering knows no bounds. But nor should human compassion.

If you're worried about "distraction," maybe work on your own capacity, on your own empathy. Don't scold others who are attending to the world.

Audrey Watters

Rage and Disappointment

1 min read

Cross-posted from Facebook

I'm from a red state, from a red family, tho I know my politics deviated early. So I'm used to different politics and visions for the future.

But right now I am so angry and so disappointed and so disgusted with every single person I know -- including many I love -- for voting for 🍊👹. I don't even know how to proceed with you.

(Most of you have blocked me on FB. That's how *you* proceed.)

I see the white nationalist hatred that 🍊👹 has unleashed and that you've sanctioned, and I'm not sure how I will ever forgive you. How can I ever sit with you and not talk about this? How can I ever break bread with you if I'm supposed to be silent?

This I know:

I will be here for your children -- your queer kids and your trans kids and your immigrant kids and your brown kids and your alter-abled kids and your adopted kids and your daughters and all those whose worlds you want to foreclose because *they're* curious and *you're* a small-minded, fearful asshole...

Audrey Watters

DeVos' Higher Ed Agenda

2 min read

Cross-posted to Facebook

I don’t think DeVos is going to get much done when it comes to any K–12 policy agenda that involves vouchers.

But this administration is going to come hard for higher ed. Our President, after all, ran a scam college. One of his closest education advisors is Jerry Falwell Jr., the head of Liberty University. Trump tweeted recently he wanted to withhold funding for UC Berkeley because there were protests against a white nationalist speaker.

The regulations that the Obama Administration put in place to curb the exploitative behaviors of for-profits will likely be rolled back. Coding bootcamps and MOOCs, the new for-profit higher ed, will become eligible for federal financial aid.

It wouldn’t surprise me if there were attempts to privatize student loans. (DeVos was an investor in SoFi. Peter Thiel is an investor in SoFi and in several other student loan startups. Let’s just note here that Edsurge – that dangerous piece of trash – refuses to discuss these startups because "they’re not ed-tech.")

Privatization – through ed-tech – will be heavily, heavily pushed.

It’s going to be a disaster.

The bright side: screwing over college students is not a smart plan – for governing, for re-election. And I don’t just mean those young folks age 18–24-ish who stereotypically do not vote. I mean all of us with student loan debt. I mean vets. I mean parents. I mean exactly the people who flooded the Senate with calls and faxes and letters. People are drowning in student loan debt, and these rich assholes have made it clear they want us to drown.

Resist.

Audrey Watters

Education Secretaries and a Progressive Path Forward

2 min read

Cross-posted to Facebook

Even if Democrats lose this vote – and it’s likely they will – I hope that they have learned a lesson with Betsy DeVos: people care about education. It doesn’t always figure into debates. It doesn’t always poll as well as “the economy” or “national security.” But people care. Public education matters. It matters on a personal level that white, middle class folks in particular relate to. (This, to my mind, explains why people are so enraged about DeVos’s nomination but not about other possible Cabinet members.)

The Democrats have not done much to distinguish themselves from Republicans when it comes to education policy in recent years. Obama’s education legacy is much the same as Bush’s which was much the same as Clinton’s. (Perhaps going after for-profit higher education – a centerpiece of the Obama higher ed policy – was a notable exception.)

Can Democrats recenter themselves so that they are committed to communities, to equity, to justice, to democratic education? That’s their path forward to win. Not some embrace of punitive charter school education or surveillance-oriented ed-tech.

Audrey Watters

Mutually Assured Destruction Again

3 min read

Cross-posted to Facebook

I grew up in Wyoming in the 1970s and 1980s. I grew up fearing that nuclear war was bound to happen. I felt the word “assured” in that phrase “mutually assured destruction” meant our destruction was inevitable. Nuclear war was inevitable.

I also knew that Wyoming had been selected to house the US’s nuclear arsenal because of its small and dispersed population. We residents of Wyoming were expendable (in case of war, in case of accident).

The first nuclear missiles came to Wyoming before I was born. Warren Air Force Base in Cheyenne, Wyoming, was chosen to house the Atlas ballistic missile squadrons in 1960. The Titan squadrons were in neighboring Colorado. The Atlas missiles were replaced by Minuteman missiles in the 1970s.

That name – a nod to the Minutemen of the Revolutionary War – meant that they could be launched almost instantaneously.

Nuclear holocaust could occur any minute. The world would be destroyed in an instant.

And Wyoming would be a target.

I lived about 180 miles away from the Warren Air Force Base. So I wouldn’t die instantly. I imagined my death would be like those in The Day After who witnessed the mushroom cloud, then suffered from radiation sickness and nuclear fallout and died slowly and painfully. I’d die like Jason Robards’ character. I just knew it.

In 1982, the year before The Day After was broadcast, President Reagan announced 100 new MX missiles would be deployed in Wyoming. Each one of these missiles had ten nuclear warheads; each warhead carried about a third of a megaton of explosive power. (For comparison, the bomb that destroyed Hiroshima had approximately 15 kilotons of explosive power.)

Reagan called the weapon the “Peacekeeper.”

But this wasn’t about peacekeeping at all. It was a mad race towards destruction, one that humans have worked hard in the last forty years or so to move away from.

There are still missiles in Wyoming (about 150 Minuteman missiles), contrary to Nate Silver’s droll little map here. There are no winners in nuclear war. There are certainly no winners in my home state. There never were supposed to be.

Audrey Watters

Adding Your Name to a List

2 min read

A comment, left on Facebook, on a post urging professors to add their names to a Watchlist, created by a conservative website, of liberal professors to keep an eye on.

I feel like this is a terrible and dangerous idea, one that reflects a privilege that many scholars do not have.

Having your name and your work pointed out by conservative lists or stories means being doxxed. It means being threatened. Repeatedly. It means having your family threatened. Your parents threatened. Your children threatened. It means having your students threatened. It means having your school threatened. It means having your livelihood threatened. You are emailed constantly with death and rape threats. The phone rings off the hook. Your mailbox is filled with hate-filled letters and postcards.

I realize that the AAUP is supposed to stand up for faculty in these situations. But what really has it done? What has it done to protect adjuncts?

We have seen, in recent years, conservatives targeting faculty of color. Particularly women of color. Many of these scholars' departments and schools have failed to back them loudly in public. People have lost their jobs. Most schools have absolutely no plan for how they'll respond to this. Most faculty have zero notion of solidarity.

Instead of adding yourself to a list in some feel-good gesture, gather your department together and come up with concrete steps of how to protect one another -- the most vulnerable among you, including grad students -- when a conservative site targets you. What will the department head or the president say to the media? How will school security respond? How will classes be handled? Who will answer your email? Are there options for counseling? Will academic freedom and research actually be protected?

This is no joking matter. It isn't a badge of honor to be on a list like this. It's a fucking nightmare.

Audrey Watters

Revoking Citizenship

2 min read

Cross-posted to Facebook

I don’t believe Trump’s tweets – even when they seem unhinged – are a “distraction.” And that’s even if he very much hopes you talk about the tweet rather than whatever latest scandal is making headlines. I do believe that we can pay attention to more than one thing and be horrified about more than one thing and take action against more than one thing at once.

Take today’s flag-burning tweet, for example.

It’s important – and horrific – because of his obvious disregard for the First Amendment.

But it’s important too because of Trump’s threat to revoke citizenship for those who burn flags. As it stands, citizenship is guaranteed to all who are born in the US under the 14th Amendment. There is no clause therein for the revocation of citizenship.

“The 14th Amendment guarantees he can’t do this,” I’ve seen several people say today in response to Trump’s flag-burning tweet.

But the 14th Amendment is under attack already by Trump. He, along with many on the right, wants to get rid of birthright citizenship, something that, as a matter of practice, the government believes is granted by that very amendment. This is one of many policies the right has sought to adopt, they claim, in order to curb illegal immigration. Trump believes that to change this practice would not require a constitutional amendment – just a law that clarified who was actually eligible for citizenship.

When Trump talks about revoking citizenship, we shouldn’t scoff. He has already expressed interest in pursuing precisely this idea…

Audrey Watters

Feeding Facebook

2 min read

Cross-posted on Facebook, and I can't even believe I have to tell grown-ass people this.

Facebook has become a key vehicle for spreading misinformation. Please do not contribute to this. If nothing else, as we’ve seen this week, misinformation serves to undermine democracy.

BEFORE YOU CLICK “SHARE,” READ THE ARTICLE. Facebook doesn’t want you to leave its site. It gives you a headline and a pretty picture. But click and read the article. Does the article content match the headline? Do you need to share the article with a bit of clarification or explanation? What’s the date on the article? (Friends don’t let friends let Chinua Achebe die again. Let’s have that be a goal for 2017.)

BEFORE YOU CLICK “SHARE,” NOTICE THE SOURCE. Look at the domain name. We’re pretty good at identifying well-known media companies, and we think we understand their biases. I’m not saying “don’t share biased information.” My complaint here isn’t about liberal news sites or conservative ones. It’s about fake news. It’s about propaganda. (RT, remember, is funded by the Russian government.) Many new news sites, created specifically for Facebook-sharing, specialize in misinformation. (See this recent Buzzfeed story, for starters: https://www.buzzfeed.com/craigsilverman/partisan-fb-pages-analysis.) Facebook fuels this. It spreads it. It does so algorithmically. And every time you share one of these stories, you’re fueling this misinformation too. What sorts of other stories does this site have? If there are more than two stories about the Illuminati or aliens, maybe you should think twice about sharing an article from that site. Maybe? If nothing else, find another more reliable site and share an article from there instead.

BEFORE YOU CLICK “SHARE,” DO A CURSORY SEARCH TO SEE IF, INDEED, THE CLAIMS ARE VERIFIABLE. What sort of expertise is being claimed? By whom? Snopes.com is a start. There’s also Factcheck.org. You might want to consider also checking the Southern Poverty Law Center to see if what you’re sharing is connected to a hate group (https://www.splcenter.org/fighting-hate/extremist-files/groups). I can’t even believe I have to tell you this. Jesus, people. Pull your shit together.

Audrey Watters

100 Minutes, Part 2

5 min read

What is the future of teaching and learning? Part 2 of my contribution to EDU8213.

I want to respond to something that Liz said about our focus on computers as calculating machines. She said she preferred to see them as communication machines, and I do think that an emphasis on communication rather than calculation could help us to think about and to really foster those pedagogical practices that recognize and value affect, not just those practices that privilege quantification.

But I’m not sure that seeing computers as not merely calculating machines gets us out of the quandary of our “computational culture.” The ideological underpinnings of computers coincide with this longstanding privileging in Western culture of rationality. For a couple of centuries now, modern societies have been built on the belief that more rationality and more technology and more capital are the solutions to all the problems we face.

This makes it challenging, I think, to talk about “the future of teaching and learning” without seeing “teaching and learning” as a problem to be solved. And specifically as a problem to be solved with more data, more machines, more analytics.

This really stands in stark opposition to affect. Reason and rationality versus emotion – we know that story. The former privileged as the realm of men. Men of science. The latter scorned as the realm of women. Weak. Soft.

A sidenote: it’s so ironic that the women who worked in the field of pre- or proto-computing were called “computers” and “calculators.” But once the work became mechanized, computerized, they were largely ousted from the field and their contributions erased.

In all things, all tasks, all jobs, women are expected to perform affective labor – caring, listening, smiling, reassuring, comforting, supporting. This work is not valued; often it is unpaid. But affective labor has become a core part of the teaching profession – even though it is, no doubt, “inefficient.” It is what we expect – stereotypically, perhaps – teachers to do. (We can debate, I think, if it’s what we reward professors for doing. We can interrogate too whether all students receive care and support; some get “no excuses,” depending on race and class.)

What happens to affective teaching labor when it runs up against machines, against robots, against automation? Politically. Economically. Culturally. I think most of us would admit that even the tasks that education technology purports to now be able to automate – tutoring, testing, grading – are shot through with emotion when done by humans, or at least when done by a person who’s supposed to have a caring, supportive relationship with their students. Grading essays isn’t necessarily burdensome because it’s menial, for example; grading essays is burdensome because it is affective labor; it is emotionally and intellectually exhausting.

This is part of our conundrum, and again I think this is a deep cultural conundrum that we cannot just wave away by calling computers “communication machines”: teaching labor is affective not simply intellectual. Affective labor is not valued. Intellectual labor is valued in research. It is viewed as rational and reasonable. But at both the K–12 and college level, teaching is often seen as menial, routine, and as such replaceable by machine. Intelligent machines will soon handle the task of cultivating human intellect, or so we’re told. And because we already privilege a certain kind of intellect as rational and reasonable, I think culturally we are sort of prepped for intelligent machines handling the tasks of research and decision-making.

Artificial intelligence sees the human mind as a computer. This is a powerful metaphor that underscores the whole field. Intelligence is rational, so they say. It is about calculation. It is mechanical. It is replicable. It is measurable. Think of all the words in artificial intelligence language that are drawn from humans’ mental capacities: memory, learning. The benefit of artificial intelligence, so we’re told, is that it can surpass the capabilities of humans. It can be faster. It can store more data. It can process more data. It is computational.

What does it mean for the future of teaching and learning if – culturally – we are being told that the future of intelligence is machine intelligence?

Where does affect fit into this?

Rather than finding that machines are becoming more intelligent, I fear we will find that humans are becoming more machine-like. But if we bury affect, I do wonder – and one only need look at this US Presidential election for an example – what happens when we have these emotional outbursts. Anxiety. Irrationality.

I think I said in the last recording that I often turn to Antonio Gramsci: “I am a pessimist because of intelligence but an optimist because of will.”

I’ve been thinking a lot lately about irrationality and the Internet, about what seems to be an embrace of conspiracy theories, factlessness, a rejection of expertise. I’m not sure I’ve ever been more pessimistic about the Internet’s potential for participatory democracy or for networked learning. “Don’t read newspapers,” Trump told his supporters recently. “Read the Internet.” As such, the Internet feels like a weapon of war – and war has always relied on calculation, hasn’t it? – a weapon of hate – there’s the affect that culturally we seem to be embracing right now.

Audrey Watters

I'm Not Really With Her, But...

4 min read

Cross-posted to FB...

I had a lengthy conversation with my 23-year-old son today about politics. “Are you registered to vote?” I asked tentatively. (I hate vote-shaming.)

This isn’t the first election where he’s eligible to vote; this is his first election voting.

I know that many pundits like to sneer at “millennials” for some perceived political apathy, for their (supposed) low voter turn-out, for their (supposed) preference for third-party candidates, what have you. Much of this is simply a caricature of “millennials,” a pervasive and perpetual disgust at “kids these days.”

My son and many his age are far from apathetic. They are, however, full of anxiety – about the economy, about their future, about climate change, about violence, about injustice. And my son and many his age are angry. They are angry that they’re set to inherit a world ravaged by war and hate and destruction and shitty jobs marketed to them as “the freelance economy.” They are angry at institutional power. But they are frustrated by extra-institutional power too.

My son is voting for the first time. He’s voting for someone but, like so many of us this year, he’s primarily voting against someone. Like me, he would have preferred Bernie. He’s frightened about the outcome of the election – not just who will win, but the repercussions of the violence and hatred that Donald Trump has legitimized and the effects that this country will have to bear long after November 8 has come and gone.

He’s aware how much of this violence and hatred is racialized and gendered. I’m surprised, quite frankly, to see how much the former in particular has become a focal point for this political awakening. Part of it, no doubt, comes from his experiences as an addict who’s managed to “get away with it” – no criminal record, no jail time.

My son is one of those “young white men without a college degree” that I think the right wing has long believed they can whip up into a populist, nationalist, racist furor. He was pretty frank when I talked to him on the phone today – he thinks that, all over North America and Europe, the right wing still can.

We talked about the role that education and technology play in that. He was much more upset about the latter. “I read some bullshit on Breitbart last week, and then Facebook suggested I ‘like’ the KKK.” We talked about algorithms and filters. We talked about the combination of ignorance and incuriosity that a fair portion of the media – old media, new media, new new media – rely upon.

We talked a lot about Bill Clinton – the first President I ever voted for – and his betrayal of my ideals. He has a vague memory of my calling from a payphone just outside of Seattle on November 30, 1999 to reassure the family that, despite the tear gas and rubber bullets, I’d survived the WTO protest. We talked about the environmental activism that he grew up around and how, long before September 11, the Clinton Administration was ready to criminalize it. We talked about how his dad’s drug convictions shaped his ability to get financial aid.

We talked about the past. We talked about the future.

He’s utterly dispirited, and that is just crushing to me, particularly when I think of how the Obama campaign’s message was “Hope.” I confessed to him my own fears: I check the poll numbers at 538 several times a day. I told him that talking to him and hearing about his commitment to vote made me feel a little better. We’re not really “with her.” But we’re with each other on this one. We’ll check the box by her name, knowing we have to do much much more than vote if we’re going to make progress.

Audrey Watters

Notes and Highlights from Amusing Ourselves to Death

22 min read

Orwell warns that we will be overcome by an externally imposed oppression. But in Huxley’s vision, no Big Brother is required to deprive people of their autonomy, maturity and history. As he saw it, people will come to love their oppression, to adore the technologies that undo their capacities to think.

What Orwell feared were those who would ban books. What Huxley feared was that there would be no reason to ban a book, for there would be no one who wanted to read one.

Orwell feared those who would deprive us of information. Huxley feared those who would give us so much that we would be reduced to passivity and egoism.

Orwell feared that the truth would be concealed from us. Huxley feared the truth would be drowned in a sea of irrelevance.

Orwell feared we would become a captive culture. Huxley feared we would become a trivial culture, preoccupied with some equivalent of the feelies, the orgy porgy, and the centrifugal bumblepuppy.

Our politics, religion, news, athletics, education and commerce have been transformed into congenial adjuncts of show business, largely without protest or even much popular notice. The result is that we are a people on the verge of amusing ourselves to death.

Although the Constitution makes no mention of it, it would appear that fat people are now effectively excluded from running for high political office.

Indeed, we may have reached the point where cosmetics has replaced ideology as the field of expertise over which a politician must have competent control.

When a professor teaches with a sense of humor, people walk away remembering.” She did not say what they remember or of what use their remembering is. But she has a point: It’s great to be an entertainer.

There is no shortage of critics who have observed and recorded the dissolution of public discourse in America and its conversion into the arts of show business. But most of them, I believe, have barely begun to tell the story of the origin and meaning of this descent into a vast triviality.

We are all, as Huxley says someplace, Great Abbreviators, meaning that none of us has the wit to know the whole truth, the time to tell it if we believed we did, or an audience so gullible as to accept it.

For on television, discourse is conducted largely through visual imagery, which is to say that television gives us a conversation in images, not words. The emergence of the image-manager in the political arena and the concomitant decline of the speech writer attest to the fact that television demands a different kind of content from other media. You cannot do political philosophy on television. Its form works against the content.

lacking a technology to advertise them, people could not attend to them, could not include them in their daily business.

Such information simply could not exist as part of the content of culture. This idea—that there is a content called “the news of the day”—was entirely created by the telegraph (and since amplified by newer media), which made it possible to move decontextualized information over vast spaces at incredible speed. The news of the day is a figment of our technological imagination. It is, quite precisely, a media event. We attend to fragments of events from all over the world because we have multiple media whose forms are well suited to fragmented conversation. Cultures without speed-of-light media—let us say, cultures in which smoke signals are the most efficient space-conquering tool available—do not have news of the day. Without a medium to create its form, the news of the day does not exist.

the decline of the Age of Typography and the ascendancy of the Age of Television.

all of this sounds suspiciously like Marshall McLuhan’s aphorism, the medium is the message,

the clearest way to see through a culture is to attend to its tools for conversation.

Each medium, like language itself, makes possible a unique mode of discourse by providing a new orientation for thought, for expression, for sensibility. Which, of course, is what McLuhan meant in saying the medium is the message.

it may lead one to confuse a message with a metaphor.

A message denotes a specific, concrete statement about the world. But the forms of our media, including the symbols through which they permit conversation, do not make such statements. They are rather like metaphors, working by unobtrusive but powerful implication to enforce their special definitions of reality.

time-keepers, and then time-savers, and now time-servers.

Thou shalt not make mechanical representations of time.

Philosophy cannot exist without criticism, and writing makes it possible and convenient to subject thought to a continuous and concentrated scrutiny. Writing freezes speech and in so doing gives birth to the grammarian, the logician, the rhetorician, the historian, the scientist—all those who must hold language before them so that they can see what it means, where it errs, and where it is leading.

a shift from the ear to the eye as an organ of language processing.

from the magic of writing to the magic of electronics.

When Galileo remarked that the language of nature is written in mathematics, he meant it only as a metaphor.

And our languages are our media. Our media are our metaphors. Our metaphors create the content of our culture.

to avoid the possibility that my analysis will be interpreted as standard-brand academic whimpering, a kind of elitist complaint against “junk” on television, I must first explain that my focus is on epistemology, not on aesthetics or literary criticism.

we do not measure a culture by its output of undisguised trivialities but by what it claims as significant. Therein is our problem, for television is at its most trivial and, therefore, most dangerous when its aspirations are high, when it presents itself as a carrier of important cultural conversations.

how media are implicated in our epistemologies.

As Walter Ong points out, in oral cultures proverbs and sayings are not occasional devices: “They are incessant. They form the substance of thought itself.

Testimony is expected to be given orally, on the assumption that the spoken, not the written, word is a truer reflection of the state of mind of a witness.

there is a residual belief in the power of speech, and speech alone, to carry the truth; on the other hand, there is a much stronger belief in the authenticity of writing and, in particular, printing. This second belief has little tolerance for poetry, proverbs, sayings, parables or any other expressions of oral wisdom. The law is what legislators and judges have written. In our culture, lawyers do not have to be wise; they need to be well briefed.

You are mistaken in believing that the form in which an idea is conveyed is irrelevant to its truth.

Truth does not, and never has, come unadorned.

“Seeing is believing” has always had a preeminent status as an epistemological axiom, but “saying is believing,” “reading is believing,” “counting is believing,” “deducing is believing,” and “feeling is believing” are others that have risen or fallen in importance as cultures have undergone media change.

As a culture moves from orality to writing to printing to televising, its ideas of truth move with it. Every philosophy is the philosophy of a stage of life, Nietzsche remarked. To which we might add that every epistemology is the epistemology of a stage of media development. Truth, like time itself, is a product of a conversation man has with himself about and through the techniques of communication he has invented.

Since intelligence is primarily defined as one’s capacity to grasp the truth of things, it follows that what a culture means by intelligence is derived from the character of its important forms of communication.

We have reached, I believe, a critical mass in that electronic media have decisively and irreversibly changed the character of our symbolic environment.

We are now a culture whose information, ideas and epistemology are given form by television, not by the printed word.

every new technology for thinking involves a trade-off.

Media change does not necessarily result in equilibrium. It sometimes creates more than it destroys. Sometimes, it is the other way around.

The invention of the printing press itself is a paradigmatic example. Typography fostered the modern idea of individuality, but it destroyed the medieval sense of community and integration. Typography created prose but made poetry into an exotic and elitist form of expression. Typography made modern science possible but transformed religious sensibility into mere superstition. Typography assisted in the growth of the nation-state but thereby made patriotism into a sordid if not lethal emotion.

although literacy rates are notoriously difficult to assess, there is sufficient evidence (mostly drawn from signatures) that between 1640 and 1700, the literacy rate for men in Massachusetts and Connecticut was somewhere between 89 percent and 95 percent, quite probably the highest concentration of literate males to be found anywhere in the world at that time. (The literacy rate for women in those colonies is estimated to have run as high as 62 percent in the years 1681-1697.)

The only communication event that could produce such collective attention in today’s America is the Superbowl.

The first printing press in America was established in 1638 as an adjunct of Harvard University, which was two years old at the time.

This odd practice is less a reflection of an American’s obstinacy than of his modeling his conversational style on the structure of the printed word.

a kind of printed orality,

“Is the Iliad possible,” he asks rhetorically, “when the printing press and even printing machines exist? Is it not inevitable that with the emergence of the press, the singing and the telling and the muse cease; that is, the conditions necessary for epic poetry disappear?”

Marx understood well that the press was not merely a machine but a structure for discourse, which both rules out and insists upon certain kinds of content and, inevitably, a certain kind of audience.

Not only did Lincoln and Douglas write all their speeches in advance, but they also planned their rebuttals in writing. Even the spontaneous interactions between the speakers were expressed in a sentence structure, sentence length and rhetorical organization which took their form from writing.

the written word, and an oratory based upon it, has a content: a semantic, paraphrasable, propositional content.

Whenever language is the principal medium of communication—especially language controlled by the rigors of print—an idea, a fact, a claim is the inevitable result.

the written word fosters what Walter Ong calls the “analytic management of knowledge.”

To engage the written word means to follow a line of thought, which requires considerable powers of classifying, inference-making and reasoning. It means to uncover lies, confusions, and overgeneralizations, to detect abuses of logic and common sense. It also means to weigh ideas, to compare and contrast assertions, to connect one generalization to another. To accomplish this, one must achieve a certain distance from the words themselves, which is, in fact, encouraged by the isolated and impersonal text. That is why a good reader does not cheer an apt sentence or pause to applaud even an inspired paragraph. Analytic thought is too busy for that, and too detached.

Of words, almost nothing will come to mind. This is the difference between thinking in a word-centered culture and thinking in an image-centered culture.

Women were probably more adept readers than men, and even in the frontier states the principal means of public discourse issued from the printed word. Those who could read had, inevitably, to become part of the conversation.

the Age of Exposition ---> the Age of Show Business.

For telegraphy did something that Morse did not foresee when he prophesied that telegraphy would make “one neighborhood of the whole country.” It destroyed the prevailing definition of information, and in doing so gave a new meaning to public discourse.

“We are in great haste to construct a magnetic telegraph from Maine to Texas; but Maine and Texas, it may be, have nothing important to communicate.... We are eager to tunnel under the Atlantic and bring the old world some weeks nearer to the new; but perchance the first news that will leak through into the broad flapping American ear will be that Princess Adelaide has the whooping cough.”

The telegraph made a three-pronged attack on typography’s definition of discourse, introducing on a large scale irrelevance, impotence, and incoherence.

The telegraph made information into a commodity, a “thing” that could be bought and sold irrespective of its uses or meaning.

The penny newspaper, emerging slightly before telegraphy, in the 1830’s, had already begun the process of elevating irrelevance to the status of news.

It was not long until the fortunes of newspapers came to depend not on the quality or utility of the news they provided, but on how much, from what distances, and at what speed.

As Thoreau implied, telegraphy made relevance irrelevant.

How often does it occur that information provided you on morning radio or television, or in the morning newspaper, causes you to alter your plans for the day, or to take some action you would not otherwise have taken, or provides insight into some problem you are required to solve?

In both oral and typographic cultures, information derives its importance from the possibilities of action.

Prior to the age of telegraphy, the information-action ratio was sufficiently close so that most people had a sense of being able to control some of the contingencies in their lives.

The principal strength of the telegraph was its capacity to move information, not collect it, explain it or analyze it. In this respect, telegraphy was the exact opposite of typography.

burn its contents.

Facts push other facts into and then out of consciousness at speeds that neither permit nor require evaluation.

The telegraph introduced a kind of public conversation whose form had startling characteristics: Its language was the language of headlines—sensational, fragmented, impersonal.

“Knowing” the facts took on a new meaning, for it did not imply that one understood implications, background, or connections. Telegraphic discourse permitted no time for historical perspectives and gave no priority to the qualitative. To the telegraph, intelligence meant knowing of lots of things, not knowing about them.

The photograph also lacks a syntax, which deprives it of a capacity to argue with the world.

As Susan Sontag has observed, a photograph implies “that we know about the world if we accept it as the camera records it.”

It offers no assertions to refute, so it is not refutable.

the crossword puzzle became a popular form of diversion in America at just that point when the telegraph and the photograph had achieved the transformation of news from functional information to decontextualized fact.

The crossword puzzle is one such pseudo-context;

the cocktail party is another; the radio quiz shows of the 1930’s and 1940’s and the modern television game show are still others; and the ultimate, perhaps, is the wildly successful “Trivial Pursuit.”

Why not use them for diversion? for entertainment? to amuse yourself, in a game?

The pseudo-context is the last refuge, so to say, of a culture overwhelmed by irrelevance, incoherence, and impotence.

Theirs was a “language” that denied interconnectedness, proceeded without context, argued the irrelevance of history, explained nothing, and offered fascination in place of complexity and coherence. Theirs was a duet of image and instancy, and together they played the tune of a new kind of public discourse in America.

As a small, ironic example of this point, consider this: In the past few years, we have been learning that the computer is the technology of the future. We are told that our children will fail in school and be left behind in life if they are not “computer literate.” We are told that we cannot run our businesses, or compile our shopping lists, or keep our checkbooks tidy unless we own a computer. Perhaps some of this is true. But the most important fact about computers and what they mean to our lives is that we learn about all of this from television.

Television has achieved the status of “meta-medium”—an instrument that directs not only our knowledge of the world, but our knowledge of ways of knowing as well.

television has achieved the status of “myth,” as Roland Barthes uses the word.

myth a way of understanding the world

A myth is a way of thinking so deeply embedded in our consciousness that it is invisible.

What is television? What kinds of conversations does it permit? What are the intellectual tendencies it encourages? What sort of culture does it produce?

Only those who know nothing of the history of technology believe that a technology is entirely neutral.

Each technology has an agenda of its own.

It is, as I have suggested, a metaphor waiting to unfold.

All of this has occurred simultaneously with the decline of America’s moral and political prestige, worldwide. American television programs are in demand not because America is loved but because American television is loved.

what I am claiming here is not that television is entertaining but that it has made entertainment itself the natural format for the representation of all experience.

The problem is not that television presents us with entertaining subject matter but that all subject matter is presented as entertaining, which is another issue altogether.

Entertainment is the supra-ideology of all discourse on television.

Had Irving Berlin changed one word in the title of his celebrated song, he would have been as prophetic, albeit more terse, as Aldous Huxley. He need only have written, There’s No Business But Show Business.

The viewers also know that no matter how grave any fragment of news may appear (for example, on the day I write a Marine Corps general has declared that nuclear war between the United States and Russia is inevitable), it will shortly be followed by a series of commercials that will, in an instant, defuse the import of the news, in fact render it largely banal.

I should go so far as to say that embedded in the surrealistic frame of a television news show is a theory of anticommunication, featuring a type of discourse that abandons logic, reason, sequence and rules of contradiction. In aesthetics, I believe the name given to this theory is Dadaism; in philosophy, nihilism; in psychiatry, schizophrenia. In the parlance of the theater, it is known as vaudeville.

The result of all this is that Americans are the best entertained and quite likely the least well-informed people in the Western world.

“Television is the soma of Aldous Huxley’s Brave New World.” Big Brother turns out to be Howdy Doody.

America’s newest and highly successful national newspaper, USA Today, is modeled precisely on the format of television.

Radio, of course, is the least likely medium to join in the descent into a Huxleyan world of technological narcotics. It is, after all, particularly well suited to the transmission of rational, complex language.

NOTE: Podcasting? Is this an heir to radio and that print-orality?

Though it may be un-American to say it, not everything is televisible.

to put it more precisely, what is televised is transformed from what it was to something else, which may or may not preserve its former essence.

“Television,” Billy Graham has written, “is the most powerful tool of communication ever devised by man.

This is gross technological naivete. If the delivery is not the same, then the message, quite likely, is not the same. And if the context in which the message is experienced is altogether different from what it was in Jesus’ time, we may assume that its social and psychological meaning is different, as well.

the television screen itself has a strong bias toward a psychology of secularism.

There are, of course, counterarguments to the claim that television degrades religion. Among them is that spectacle is hardly a stranger to religion.

Show business is not entirely without an idea of excellence, but its main business is to please the crowd, and its principal instrument is artifice.

the television commercial has mounted the most serious assault on capitalist ideology since the publication of Das Kapital.

The television commercial is not at all about the character of products to be consumed. It is about the character of the consumers of products.

The television commercial has been the chief instrument in creating the modern methods of presenting political ideas. It has accomplished this in two ways. The first is by requiring its form to be used in political campaigns.

Being a celebrity is quite different from being well known.

Although it may go too far to say that the politician-as-celebrity has, by itself, made political parties irrelevant, there is certainly a conspicuous correlation between the rise of the former and the decline of the latter.

Czeslaw Milosz, winner of the 1980 Nobel Prize for Literature, remarked in his acceptance speech in Stockholm that our age is characterized by a “refusal to remember”; he cited, among other things, the shattering fact that there are now more than one hundred books in print that deny that the Holocaust ever took place.

an anxious age of agitated amnesiacs....

We Americans seem to know everything about the last twenty-four hours but very little of the last sixty centuries or the last sixty years.”

“Sesame Street” encourages children to love school only if school is like “Sesame Street.”

“Sesame Street” undermines what the traditional idea of schooling represents.

Whereas a classroom is a place of social interaction, the space in front of a television set is a private preserve. Whereas in a classroom, one may ask a teacher questions, one can ask nothing of a television screen.

Whereas school is centered on the development of language, television demands attention to images. Whereas attending school is a legal requirement, watching television is an act of choice.

Whereas in school, one fails to attend to the teacher at the risk of punishment, no penalties exist for failing to attend to the television screen. Whereas to behave oneself in school means to observe rules of public decorum, television watching requires no such observances, has no concept of public decorum. Whereas in a classroom, fun is never more than a means to an end, on television it is the end in itself.

every television show is educational. Just as reading a book—any kind of book —promotes a particular orientation toward learning, watching a television show does the same.

“The Little House on the Prairie,” “Cheers” and “The Tonight Show” are as effective as “Sesame Street” in promoting what might be called the television style of learning.

If we are to blame “Sesame Street” for anything, it is for the pretense that it is any ally of the classroom.

it is important to add that whether or not “Sesame Street” teaches children their letters and numbers is entirely irrelevant. We may take as our guide here John Dewey’s observation that the content of a lesson is the least important thing about learning.

As he wrote in Experience and Education: “Perhaps the greatest of all pedagogical fallacies is the notion that a person learns only what he is studying at the time. Collateral learning in the way of formation of enduring attitudes ... may be and often is more important than the spelling lesson or lesson in geography or history.... For these attitudes are fundamentally what count in the future.”

the most important thing one learns is always something about how one learns.

We face the rapid dissolution of the assumptions of an education organized around the slow-moving printed word, and the equally rapid emergence of a new education based on the speed-of-light electronic image.

This is why I think it accurate to call television a curriculum.

As I understand the word, a curriculum is a specially constructed information system whose purpose is to influence, teach, train or cultivate the mind and character of youth. Television, of course, does exactly that, and does it relentlessly. In so doing, it competes successfully with the school curriculum. By which I mean, it damn near obliterates it.

Thou shalt have no prerequisites

Television is a nongraded curriculum and excludes no viewer for any reason, at any time. In other words, in doing away with the idea of sequence and continuity in education, television undermines the idea that sequence and continuity have anything to do with thought itself.

Thou shalt induce no perplexity

the average television viewer could retain only 20 percent of the information contained in a fictional televised news story.

21 percent of television viewers could not recall any news items within one hour of broadcast.

What Huxley teaches is that in the age of advanced technology, spiritual devastation is more likely to come from an enemy with a smiling face than from one whose countenance exudes suspicion and hate.

that only through a deep and unfailing awareness of the structure and effects of information, through a demystification of media, is there any hope of our gaining some measure of control over television, or the computer, or any other medium. How is such media consciousness to be achieved?

we are in a race between education and disaster,

necessity of our understanding the politics and epistemology of media.

Audrey Watters

Unfair Taxes

3 min read

Here's another rant I posted to Facebook today (in lieu of my actual work):

 What counts as “fair share” of taxes is a totally subjective assessment. Most Americans do believe that the wealthy and corporations do not pay their “fair share,” although according to Pew, Democrats find this much more disconcerting than Republicans do. (Corporate taxes today make up around 10% of federal revenues, down from about a third of revenues in the 1950s.)

Sure, people have every right to claim deductions. But we aren’t talking about Trump taking advantage of deductions here, ffs.

If nothing else, we should recognize the way in which tax laws are set up and how the tax structure benefits the affluent much more than the poor – from deductions for having a mortgage to the low rate on capital gains. And it’s the latter in particular that has helped fuel the growing economic inequality in this country. The wealthy – including folks like Trump – are quite skilled at moving their money into categories that take advantage of lower tax rates so that they aren’t being taxed on income (at a high rate) but are taxed on investments (at a lower rate). They can afford to hire lawyers and accountants to do so.

While individual income tax does make up the largest share of government revenues, the fastest growing part of that revenue comes from the payroll tax. And most Americans – all but the wealthiest 20% – pay more in payroll taxes than they do in income taxes.

To Susanne’s point: Trump seemed to indicate last night he pays nothing in taxes. I agree with her that that is wrong. It is grossly unfair. (Indeed, some of the reporting out of The Washington Post suggests that Trump has used his foundation to commit tax fraud in his attempts to avoid paying taxes.) The Clintons, for what it’s worth, paid over $3 million. I’m sure that had they hired “the best people” as Trump does, they could have whittled that down. But they didn’t. Because when you are wealthy, you have an obligation to help fuel prosperity for everyone, not just line your own pockets. And almost all economists agree that raising taxes on the wealthy would have enormous benefits, including addressing the growing inequality that this country faces.

Audrey Watters

Presidential Debate, No. 1

3 min read

I come from a long line of racists. I mean, let’s be honest white folks, we all do.

But all I can think right now is of my dad (RIP Kirk), who called me in tears on November 4, 2008 because he had cast his first vote for a Democrat in a presidential election.

I’m from Wyoming. One of the reddest states. You’re either a Republican there or you might as well be a Commie.

My dad called me that night in tears because he was proud of his vote and, I think, frightened of his vote. He said his dad would be rolling in his grave that his son had voted for a Black president. “What was this country coming to? A better place. A better place,” he kept repeating.

My dad said he couldn’t vote for McCain. He just couldn’t. He couldn’t support McCain if the guy would pick Palin as his running mate. To acquiesce to the kind of people who support Palin, he said, was to surrender everything that had made the Republican Party great; and even more, everything that had made this very flawed country believe in progress. “She doesn’t believe in dinosaurs, for fuck’s sake,” he said.

It wasn’t a vote for Obama. Let’s be clear. My dad was one helluva polite white supremacist, I’ll give him that much.

And that’s the Republican Party I knew as a kid in Wyoming, I suppose. One that believed in free markets and war and whiteness but also dinosaurs.

I don’t recognize much about the GOP today. Oh I do recognize the racism, for sure. I recognize the sexism. But there’s something about Trump, about his smug selfishness, sure, but about his willful dismissal of facts and truths – “We hold these truths to be self-evident” – that would have driven my dad and my dad’s dad to another party at this stage. I’m sure of it. I’m not sure how anyone, quite frankly, could have watched Trump in tonight’s debate and then pronounced “that’s my guy.” My dad and my grandpa – small businessmen, both of them – paid taxes, not because they were “dumb” as Trump suggested tonight, but because that’s what you do as a responsible citizen.

I want to write more about all of this. About this embrace of factlessness and fantasy. About selfishness in lieu of sacrifice. But mostly I want to be able to understand how so many people I grew up with and love can support someone like Trump – someone who I think (and I think my dad would think too) is really poised to unravel everything that “we” – we white folks, we the people, what have you – have worked toward.

Why, it’s almost as though once “we” are confronted with extending rights and dignity to everyone – “all men are created equal” – white folks would rather burn it to the ground than let people of color have access to freedom and justice and happiness.

Audrey Watters

Staying with the Trouble

15 min read

Notes and highlights from Donna Haraway's latest book Staying with the Trouble:

Trouble is an interesting word. It derives from a thirteenth-century French verb meaning “to stir up,” “to make cloudy,” “to disturb.” We—all of us on Terra—live in disturbing times, mixed-up times, troubling and turbid times.

Our task is to make trouble, to stir up potent response to devastating events, as well as to settle troubled waters and rebuild quiet places.

In fact, staying with the trouble requires learning to be truly present, not as a vanishing pivot between awful or edenic pasts and apocalyptic or salvific futures, but as mortal critters entwined in myriad unfinished configurations of places, times, matters, meanings.

Chthulucene
[Notice the spelling, intentionally re-spelling Lovecraft]

Kainos

Nothing in kainos must mean conventional pasts, presents, or futures.

Chthonic ones romp in multicritter humus but have no truck with sky-gazing Homo.

Chthonic ones are not safe; they have no truck with ideologues; they belong to no one;

They make and unmake; they are made and unmade.

Kin is a wild category that all sorts of people do their best to domesticate. Making kin as oddkin rather than, or at least in addition to, godkin and genealogical and biogenetic family troubles important matters, like to whom one is actually responsible.

Anthropocene. Capitalocene.

The first is easy to describe and, I think, dismiss, namely, a comic faith in technofixes, whether secular or religious: technology will somehow come to the rescue of its naughty but very clever children, or what amounts to the same thing, God will come to the rescue of his disobedient but ever hopeful children.

The second response, harder to dismiss, is probably even more destructive: namely, a position that the game is over, it’s too late, there’s no sense trying to make anything any better, or at least no sense having any active trust in each other in working and playing for a resurgent world.

This book argues and tries to perform that, eschewing futurism, staying with the trouble is both more serious and more lively.

Companion species are relentlessly becoming-with.

Pigeons are also “creatures of empire”—that is, animals who went with European colonists and conquerors all over the world, including places where other varieties of their kind were already well established, transforming ecologies and politics for everybody in ways that still ramify through multispecies flesh and contested landscapes.

Building naturalcultural economies and lives for thousands of years, these critters are also infamous for ecological damage and biosocial upheaval.

They are treasured kin and despised pests, subjects of rescue and of invective, bearers of rights and components of the animal-machine, food and neighbor, targets of extermination and of biotechnological breeding and multiplication, companions in work and play and carriers of disease, contested subjects and objects of “modern progress” and “backward tradition.”

Becoming-with people for several thousand years, domestic pigeons (Columba livia domestica) emerged from birds native to western and southern Europe, North Africa, and western and southern Asia. Rock doves came with Europeans to the Americas, entering North America through Port Royal in Nova Scotia in 1606.

Called “rats with wings,” feral pigeons are subjects of vituperation and extermination, but they also become cherished opportunistic companions who are fed and watched avidly all over the world.

Domestic rock doves have worked as spies carrying messages, racing birds, fancy pigeons at fairs and bird markets, food for working families, psychological test subjects, Darwin’s interlocutors on the power of artificial selection, and more.

Pigeons are competent agents—in the double sense of both delegates and actors—who render each other and human beings capable of situated social, ecological, behavioral, and cognitive practices.

My hope is that these knots propose promising patterns for multispecies response-ability inside ongoing trouble.

In Project Sea Hunt in the 1970s and ’80s, the U.S. Coast Guard worked with pigeons, who were better at spotting men and equipment in open water than human beings. Indeed, pigeons were accurate 93 percent of the time, compared to human accuracy in similar problems of 38 percent.

Clearly, the pigeons and Coast Guard personnel had to learn how to communicate with each other, and the pigeons had to learn what their humans were interested in seeing. In nonmimetic ways, people and birds had to invent pedagogical and technological ways to render each other capable in problems novel to all of them.

Not very many kinds of other-than-human critters have convinced human skeptics that the animals recognize themselves in a mirror—a talent made known to scientists by such actions as picking at paint spots or other marks on one’s body that are visible only in a mirror. Pigeons share this capacity with, at least, human children over two years old, rhesus macaques, chimpanzees, magpies, dolphins, and elephants.

Pigeons passed their first mirror tests in the laboratories of B. F. Skinner in 1981.

pigeons did better at self-recognition tests with both mirrors and live video images of themselves than three-year-old human children.

“It would seem that our pigeons do quite a good job of exhibiting an agape type of love toward each other . . . Our pigeons are actually doing the work of real love.”

“The pigeon ‘backpack’ developed for this project consisted of a combined GPS (latitude, longitude, altitude) / GSM (cell phone tower communication) unit and corresponding antennas, a dual automotive CO/NOx pollution sensor, a temperature sensor, a Subscriber Identity Module (SIM) card interface, a microcontroller and standard supporting electronic components.

response-able.

To re-member, to com-memorate, is actively to reprise, revive, retake, recuperate.

They remember; they entice and prolong into the fleshly present what would disappear without the active reciprocity of partners. Homing or racing pigeons and feral pigeons call both their emergent and traditional peoples to response-ability, and vice versa. City dwellers and rural people of different species and modes of living and dying make each other colombophiles talentueux in company with voyageurs fiables.

the municipal pigeon tower certainly cannot undo unequal treaties, conquest, and wetlands destruction; but it is nonetheless a possible thread in a pattern for ongoing, noninnocent, interrogative, multispecies getting on together.

Companion species infect each other all the time. Pigeons are world travelers, and such beings are vectors and carry many more, for good and for ill. Bodily ethical and political obligations are infectious, or they should be. Cum panis, companion species, at table together. Why tell stories like my pigeon tales, when there are only more and more openings and no bottom lines? Because there are quite definite response-abilities that are strengthened in such stories.

As spies, racers, messengers, urban neighbors, iridescent sexual exhibitionists, avian parents, gender assistants for people, scientific subjects and objects, art-engineering environmental reporters, search-and-rescue workers at sea, imperialist invaders, discriminators of painting styles, native species, pets, and more,

Nobody lives everywhere; everybody lives somewhere. Nothing is connected to everything; everything is connected to something.

denizens of the depths, from the abyssal and elemental entities, called chthonic.

Their many appendages make string figures; they entwine me in the poiesis—the making—of speculative fabulation, science fiction, science fact, speculative feminism, soin de ficelle, so far. The tentacular ones make attachments and detachments; they make cuts and knots; they make a difference; they weave paths and consequences but not determinisms; they are both open and knotted in some ways and not others. SF is storytelling and fact telling; it is the patterning of possible worlds and possible times, material-semiotic worlds, gone, here, and yet to come. I work with string figures as a theoretical trope, a way to think-with a host of companions in sympoietic threading, felting, tangling, tracking, and sorting. I work with and in SF as material-semiotic composting, as theory in the mud, as muddle.

In passion and action, detachment and attachment, this is what I call cultivating response-ability; that is also collective knowing and doing, an ecology of practices.

“It matters what ideas we use to think other ideas.”

It matters what thoughts think thoughts. It matters what knowledges know knowledges. It matters what relations relate relations. It matters what worlds world worlds. It matters what stories tell stories.

What is it to surrender the capacity to think?

In that surrender of thinking lay the “banality of evil” of the particular sort that could make the disaster of the Anthropocene, with its ramped-up genocides and speciescides, come true.

Arendt insisted that thought was profoundly different from what we might call disciplinary knowledge or science rooted in evidence, or the sorting of truth and belief or fact and opinion or good and bad.

Arendt witnessed in Eichmann not an incomprehensible monster, but something much more terrifying—she saw commonplace thoughtlessness.
[NOTE: Not monsters but thoughtlessness]

a deeper surrender to what I would call immateriality, inconsequentiality, or, in Arendt’s and also my idiom, thoughtlessness.

what it means to hold open space for another.

Extinction is a protracted slow death that unravels great tissues of ways of going on in the world for many species, including historically situated people.

Mourning is about dwelling with a loss and so coming to appreciate what it means, how the world has changed, and how we must ourselves change and renew our relationships if we are to move forward from here. In this context, genuine mourning should open us into an awareness of our dependence on and relationships with those countless others being driven over the edge of extinction . . . The reality, however, is that there is no avoiding the necessity of the difficult cultural work of reflection and mourning. This work is not opposed to practical action, rather it is the foundation of any sustainable and informed response.

Grief is a path to understanding entangled shared living and dying; human beings must grieve with, because we are in and of this fabric of undoing.

Without sustained remembrance, we cannot learn to live with ghosts and so cannot think. Like the crows and with the crows, living and dead “we are at stake in each other’s company.”

carrier bag theory of storytelling
[NOTE: Carrier bag theory of pigeons]

Think we must; we must think. That means, simply, we must change the story; the story must change.

None of the parties in crisis can call on Providence, History, Science, Progress, or any other god trick outside the common fray to resolve the troubles.

sciences, not Science.

This is neither relativism nor rationalism; it is SF, which Latour would call both sciences and scientifiction and I would call both sciences and speculative fabulation—all of which are political sciences, in our aligned approaches.

the time-space-global thing called Anthropocene. The term seems to have been coined in the early 1980s by University of Michigan ecologist Eugene Stoermer (d. 2012),

Still, if we could only have one word for these SF times, surely it must be the Capitalocene. Species Man did not shape the conditions for the Third Carbon Age or the Nuclear Age.

Note that insofar as the Capitalocene is told in the idiom of fundamentalist Marxism, with all its trappings of Modernity, Progress, and History, that term is subject to the same or fiercer criticisms. The stories of both the Anthropocene and the Capitalocene teeter constantly on the brink of becoming much Too Big. Marx did better than that, as did Darwin. We can inherit their bravery and capacity to tell big-enough stories without determinism, teleology, and plan.

This Chthulucene is neither sacred nor secular; this earthly worlding is thoroughly terran, muddled, and mortal—and at stake now.

technotheocratic geoengineering fixes

Sympoiesis is a simple word; it means “making-with.” Nothing makes itself; nothing is really autopoietic or self-organizing.

holobiont

A model is a work object; a model is not the same kind of thing as a metaphor or analogy. A model is worked, and it does work.

“We Have Never Been Individuals,”

“response-ability”

“an idea of what the female bee looked like to the male bee . . . as interpreted by a plant . . . the only memory of the bee is a painting by a dying flower.”

The practice of the arts of memory enfold all terran critters. That must be part of any possibility for resurgence!

Symchthonic stories are not the tales of heroes; they are the tales of the ongoing.

“Make Kin Not Babies!”

Making—and recognizing—kin is perhaps the hardest and most urgent part.

Kin making is making persons, not necessarily as individuals or as humans. I was moved in college by Shakespeare’s punning between kin and kind—the kindest were not necessarily kin as family; making kin and making kind (as category, care, relatives without ties by birth, lateral relatives, lots of other echoes) stretch the imagination and can change the story.

Marilyn Strathern taught me that “relatives” in British English were originally “logical relations” and only became “family members” in the seventeenth century—this is definitely among the factoids I love. Go outside English, and the wild multiplies.

Kin is an assembling sort of word.

All critters share a common “flesh,” laterally, semiotically, and genealogically.

Cyborgs are kin, whelped in the litter of post–World War II information technologies and globalized digital bodies, politics, and cultures of human and not-human sorts.

they are not hybrids at all. They are, rather, imploded entities, dense material semiotic “things”—articulated string figures of ontologically heterogeneous, historically situated, materially rich, virally proliferating relatings of particular sorts, not all the time everywhere, but here, there, and in between, with consequences.

cyborgs are critters in a queer litter, not the Chief Figure of Our Times.

Conjugating is about yoking together; conjugal love is yoked love; conjugated chemical compounds join together two or more constituents. People conjugate in public spaces; they yoke themselves together transversally and across time and space to make significant things happen.

Marx understood all about how privileged positions block knowledge of the conditions of one’s privilege.

We are all responsible to and for shaping conditions for multispecies flourishing in the face of terrible histories, but not in the same ways. The differences matter—in ecologies, economies, species, lives.

So much of earth history has been told in the thrall of the fantasy of the first beautiful words and weapons, of the first beautiful weapons as words and vice versa. Tool, weapon, word: that is the word made flesh in the image of the sky god. In a tragic story with only one real actor, one real world-maker, the hero, this is the Man-making tale of the hunter on a quest to kill and bring back the terrible bounty. This is the cutting, sharp, combative tale of action that defers the suffering of glutinous, earth-rotted passivity beyond bearing. All others in the prick tale are props, ground, plot space, or prey. They don’t matter; their job is to be in the way, to be overcome, to be the road, the conduit, but not the traveler, not the begetter. The last thing the hero wants to know is that his beautiful words and weapons will be worthless without a bag, a container, a net.

Plants, however, they speculated, “do not communicate” and so have no language. Something else is going on in the vegetative world, perhaps something that should be called art.

Emma Goldman’s understanding of anarchist love and rage make sense in the worlds of ants and acacias. These companion species are a prompt to shaggy dog stories—growls, bites, whelps, games, snufflings, and all. Symbiogenesis is not a synonym for the good, but for becoming-with each other in response-ability.

polite inquiry.

“Think we must!”

Why should Virginia Woolf, or any other woman, or men for that matter, be faithful to such patrilines and their demands for sacrifice? Infidelity seems the least we should demand of ourselves!

Hannah Arendt and Virginia Woolf both understood the high stakes of training the mind and imagination to go visiting, to venture off the beaten path to meet unexpected, non-natal kin, and to strike up conversations, to pose and respond to interesting questions, to propose together something unanticipated, to take up the unasked-for obligations of having met. This is what I have called cultivating response-ability. Visiting is not a heroic practice; making a fuss is not the Revolution; thinking with each other is not Thought. Opening up versions so stories can be ongoing is so mundane, so earth-bound.

That is what “going too far” means, and this curious practice is not safe.

racing pigeons, also called carrier pigeons (in French voyageurs) and with their avid fanciers (in French colombophiles, lovers of pigeons).

Pigeon racing is a working-class men’s sport around the world, one made immensely difficult in conditions of urban war (Baghdad, Damascus), racial and economic injustice (New York, Berlin), and displaced labor and play of many kinds across regions (France, Iran, California).

Audrey Watters

100mins

7 min read

(For a project, formerly known as "Speaking Openly")

I often quote the Marxist Antonio Gramsci – “I am a pessimist because of intelligence but an optimist because of will.” I quote Gramsci because, as “ed-tech’s Cassandra,” I’m often accused of being too critical, too negative about the future of education. And admittedly, I do fear that the future might be grim. But I am an optimist. I think that most of us that work in and near education are – we have to be. We believe in the transformative potential of teaching and learning. We believe in shaping and changing minds; as such, we believe in shaping and changing the future. The three other respondents have all laid out fairly optimistic visions of the future of teaching and learning – deliberately so, no doubt – a future that honors individuals, empathy, cultural relevance, social change, and social justice. And if that future is technologically-enhanced, it’s enhanced in such a way to make it more human and humane and less machine-like.

These are all reflections of our pedagogical goals, I think, as progressive educators. But these are also political goals. And I want to pause here to talk a little bit more about what I see as the future of the politics of education and, perhaps just as importantly, the future of the politics of the digital technology industry. A possible future, I should be clear, if we do not tackle these questions politically.

I think the others were right to point out that “learning” is distinct from “education.” But I think we have to talk about “education,” the institution. We have to scrutinize the role institutions have played in past injustices, their role in inscribing and re-inscribing hierarchies, and we have to demand better. But I’m not sure we can abandon institutions, particularly public institutions, entirely. I say this recognizing that among the many crises we face right now, a lot of them involve our loss of faith in institutions – in the government, in the Church, in markets, in medicine, in science, in schools. How do we rebuild so that the collective and the communal are protected, so that survival and success don’t depend, as I fear they would without institutions, on the individual and her or his privilege and social capital alone?

When I talk about the digital technology industry, I use the shorthand “Silicon Valley.” It’s not quite an accurate term geographically, but I use it to refer to its ideology – one of radical individualism, libertarianism, neoliberalism, exploitative and unchecked capitalism. This ideology isn’t espoused only by those who work and invest in Silicon Valley, of course. But increasingly – because of the financial and political power and influence of Silicon Valley – this ideology is becoming quite dominant.

We must ask how this will affect education. Disinvestment? The shrinking of the public sector? A move away from the communal to the individual? “Personalization” – one of the buzzwords of education technology? Standardization? Outsourcing? Uber-ification? Dismantling of labor protections? Automation? Algorithms? Financialization and monetization of all aspects of our lives? Surveillance, not only by the state but by corporations?

2015 was a record-setting year for education technology investment. Over $6 billion by some estimates. What was popular among investors? Test prep. Tutoring. Private student loans. Learning management systems. Online “skills training.”

Now to be fair, that $6 billion is dwarfed by venture capital that goes into other sub-sectors of tech. And Uber alone raised about $5 billion last year. But this flood of money comes with political power. It comes with a power to reshape – or to try to reshape – all sorts of narratives about what it means to be social, political, workers, students, “users,” citizens. The narratives that Silicon Valley tells about education are that schools are broken, that they are irrelevant, that they are inefficient, that unionized labor prevents innovation, that education can be automated. Successful entrepreneurs do not just form companies or form investment firms; they start philanthropies, like the Gates Foundation and now the Chan-Zuckerberg Initiative. These organizations have an oversized influence on education policy. They envision a future of teaching and learning that is, to borrow from Liz’s formulation, very much about calculation – about data and algorithms and efficiencies and tracking and analytics. They are profoundly anti-democratic.

This is one of the challenges we face, I think, particularly when we talk about a future of teaching and learning and digital technologies: this question of democracy and open communication and collaboration built on technologies of surveillance and command and control, built on top of pre-existing communication networks, never quite erasing the previous manifestations of power or politics, despite our rather utopian hopes that technologies like the Web just might.

Investor Marc Andreessen famously said a few years ago that “software is eating the world.” Andreessen is an important figure to think about in terms of technology and education – and not simply because his investment portfolio includes companies like the MOOC startup (or once-upon-a-time MOOC startup) Udacity. Andreessen was an undergraduate at the University of Illinois Urbana-Champaign, where he worked on the Mosaic browser, the graphical browser that popularized the Web. He believed the browser had commercial possibilities and built Netscape Navigator – which shared no code with the browser built at a public university but shared its functionality. Andreessen became a billionaire with Netscape, a company whose IPO is generally seen as synonymous with the Internet bubble and with young tech entrepreneurs who would reshape the world. “Software is eating the world.” It’s eating public education and, arguably, higher education, even though the origins of almost all the innovations of the past sixty years in the computer industry are intimately tied up with these scholarly institutions.

To echo Maha’s question about whose learning – learning by whom and for whom in what contexts – I would add a litany of questions about the world that software is purportedly eating – whose software, who benefits, whose world is being re-enacted and recoded and digitized? A world of the global elite? A world of the global north? A world of engineers? A world of white men? A world of machines?

What about the rest of us? Non-machines and non-humans alike?

Teaching and learning will continue to be, as their history would show us, political acts, political practices. They must be ones of resistance, I think, to the stories and the practices of exploitation. As we think about institutions – new ones and old ones – we must demand justice. We must cultivate “response-ability” – I’m using this term as Donna Haraway does – to be able to respond, to be able to recognize our complicity in harmful acts past and present, and to think about transformation that is deeply critical and deeply empathetic to all the world around us. This is a political undertaking, and an incredibly urgent one. It isn’t urgent because, as Andreessen gleefully pronounces, “software is eating the world.” It is urgent because the world is dying, or careening at least toward another global extinction event. Addressing this isn’t simply a question of engineering. It is a question of compassion and teaching and learning and radical pedagogy. We must be optimists, not pessimists, as hard as that can be in the face of global crises. Our world, our survival, demands it.

Audrey Watters

Thoughts on Colin Kaepernick

2 min read

(Reposted from Facebook)

Hate the 49ers. Love Colin Kaepernick. Full respect.

1) Among the things that make this country "great"? We aren't mandated to respond a certain way to the flag or to the national anthem or to other nationalist symbols.

2) White folks love to see black folks be "athletic," much of which involves all sorts of long-standing racist performances around the body. The male body. The same body we punish, we laud. The very same body. White folks readily cheer when strong black men's bodies get broke. I can see why Colin would sit down for a celebration of this. I can see why he extends his analysis of broken bodies off the field.

3) There is a long history of black athletes performing some of the most powerful displays of anti-imperialism and anti-white supremacy. I honor this history. I am proud to live in a country of raised black fists, of defiant black athletes but ashamed that it falls upon them to do so. We must recognize the history of white supremacy, built on black bodies.

4) Look at college athletes, particularly in the high-profile men's sports -- football, basketball, baseball. This is the new plantation, as Taylor Branch has argued. Schools are completely implicated in this and by this. Black and brown bodies on display. We deny them a college education, while claiming that they get one for free. We use and abuse their bodies for sport. For. Sport.

5) "I wouldn’t fly the flag on the Fourth of July or any other day. When I see a car with a flag on it, I figure the guy behind the wheel isn't my friend." -- Jackie Robinson. You do not disrespect the flag or the country by refusing to fly or salute the flag. You disrespect it by denying others freedom.

Audrey Watters

Introducing Tressie

4 min read

I was honored to get to introduce Tressie McMillan Cottom this week when she delivered the opening keynote at the Digital Pedagogy Lab Institute. Here's what I said:

When I was first asked to introduce the keynote today, I thought about wearing my Denver Broncos t-shirt to troll Tressie, who I haven’t seen since my team – go Broncos – beat her team, the Carolina Panthers, in the Super Bowl. See, I would have tied it all together though, something about Cam Newton and how society demands certain bodies – Black folks in this case – perform a certain kind of emotional labor alongside physical and intellectual labor, what that looks like not just in post-game interviews, but what that looks like in academia, what that looks like in a public talk.

I was also sorely tempted to tell you an anecdote from that one time we were together on the 17-hour flight from Johannesburg, South Africa, to Atlanta, Georgia, and Magic Mike XXL was one of the in-flight movies. But the set-up is kinda long, and it perhaps requires that you have seen the movie and know a little about “Where Mike Got the Magic.” So I’ll save the story for the cocktail hour.

I actually want to be serious with this intro, because Tressie does some of the most seriously important work of anyone I know. In the last four years, her scholarship has become foundational to my own, as we work to analyze the systems and stories surrounding “skills,” “markets,” “certification,” and “schooling.”

I can tell you the first time I heard of Tressie McMillan Cottom. It was 2012. Tressie had written a response (or two) to an article published in The Chronicle of Higher Education by a right-wing pundit whose name isn’t worth mentioning, and she started a petition to have that pundit dismissed from the publication. I caught wind of this all on Twitter (because thankfully, I’m not in academia anymore and I needn’t subscribe to The Chronicle).

In the Chronicle article in question, this pundit argued for the elimination of Black Studies departments by viciously mocking and attacking the work of three doctoral students. The work of three female doctoral students. The work of three Black women.

Perhaps it’s a familiar story to us now: a publication hires someone it knows is going to say outrageous things. That person writes something outrageous. Outrage ensues. Outrage and virality. The publication then solicits articles, from the offender and the offended, in response – “We encourage you to weigh in!” – an attempt, let’s be honest, to extend not resolve the outrage. As the business model for online publishing increasingly depends on page-views, we get rage clicks. Hate reads.

And Tressie, then a doctoral student herself, named it. She named it for what it is – not just the baiting (link baiting, click baiting, race baiting), but “the institutional logic of racism.” The institutional logic of racism at work on the pages of the premier publication for higher education, one that echoes the institutional logic of racism in higher education.

The Chronicle of Higher Education is just one of the many, many gatekeepers in higher education. It’s the publication that faculty, staff, administrators, and yes graduate students are urged to turn to for the latest on the state of the institution, the disciplines, the politics, the future. It helps identify and shape the important issues, the important characters. The Chronicle, like all gatekeepers, carves out who belongs, whose scholarship – whose lives – matter. These gatekeepers distinguish, designate, and reinforce prestige.

Higher ed is, as Tressie’s work reminds us, a “prestige cartel.” (Her book Lower Ed: The Troubling Rise of For-Profit Colleges in the New Economy will be out in February.) This distinction, this stratification – “high” and “low” – coincides, overlaps with others – “real” and “fake,” “public” and “private,” “open” and “closed,” “Ivy” and the rest of us plebs, and perhaps central to our purposes here at this event, “offline” and “online,” “standardized” and “personalized.” The keywords of the new higher ed-tech economy – “innovative,” “disruptive,” “at scale” versus the old, the traditional, the outmoded, the irrelevant.

I’m honored today to introduce the Digital Pedagogy Lab Institute’s opening keynote, assistant professor of sociology at Virginia Commonwealth University, Dr. Tressie McMillan Cottom – a model public scholar, openly and ferociously engaged in issues of education and justice. My friend…

Audrey Watters

Notes from The Real World of Technology

26 min read

Ursula Franklin passed away several weeks ago. Although I'd been exposed to her work via several (Canadian, feminist) technologists and scientists, I hadn't ever read this book. I have to say: it's the best book I've read on technology in a very, very long time.

technology has built the house in which we all live. The house is continually being extended and remodelled.

Technology, like democracy, includes ideas and practices; it includes myths and various models of reality. And like democracy, technology changes the social and individual relationships between us. It has forced us to examine and redefine our notions of power and of accountability.

technology as practice

Technology is not the sum of the artifacts, of the wheels and gears, of the rails and electronic transmitters. Technology is a system. It entails far more than its individual material components. Technology involves organization, procedures, symbols, new words, equations, and, most of all, a mindset.

Technology also needs to be examined as an agent of power and control, and I will try to show how much modern technology drew from the prepared soil of the structures of traditional institutions, such as the church and the military.

technology’s social impact. I myself am overawed by the way in which technology has acted to reorder and restructure social relations, not only affecting the relations between social groups, but also the relations between nations and individuals, and between all of us and our environment. To a new generation, many of these changed relationships appear so normal, so inevitable, that they are taken as given and are not questioned. Yet one can establish clear historical trends. In order to understand the real world of technology and cope with it, we need to have some knowledge of the past, as well as to give some thought to the future.

Central to any new order that can shape and direct technology and human destiny will be a renewed emphasis on the concept of justice. The viability of technology, like democracy, depends in the end on the practice of justice and on the enforcement of limits to power.

The historical process of defining a group by their agreed practice and by their tools is a powerful one. It not only reinforces geographic or ethnic distributions, it also affects the gendering of work.

The common practice that a particular technology represents, in addition to leading to an identification with culture and gender, can also lead to the “right” of the practitioners to an exclusive practice of the technology.

Another facet of the concept of technology as practice is the fact that the practice can define the content.

Work-related technologies make the actual practice easier.

control- and work-related technologies

holistic technologies and prescriptive technologies

Holistic technologies are normally associated with the notion of craft.

Using holistic technologies does not mean that people do not work together, but the way in which they work together leaves the individual worker in control of a particular process of creating or doing something.

It is the first kind of specialization, by product, that I call holistic technology, and it is important because it leaves the doer in total control of the process. The opposite is specialization by process; this I call prescriptive technology.

“division of labour”

a production method.

The amount of material found, and the knowledge that this constitutes only a small fraction of what was produced, assures us of the presence of a large, coordinated production enterprise. It was only when I considered in detail – as a metallurgist – what such a production enterprise would entail, that the extraordinary social meaning of prescriptive technologies dawned on me. I began to understand what they meant, not just in terms of casting bronze but in terms of discipline and planning, of organization and command.

When work is organized as a sequence of separately executable steps, the control over the work moves to the organizer, the boss or manager.

invention. In political terms, prescriptive technologies are designs for compliance.

Today’s real world of technology is characterized by the dominance of prescriptive technologies. Prescriptive technologies are not restricted to materials production. They are used in administrative and economic activities and in many aspects of governance, and on them rests the real world of technology in which we live. While we should not forget that these prescriptive technologies are often exceedingly effective and efficient, they come with an enormous social mortgage. The mortgage means that we live in a culture of compliance, that we are ever more conditioned to accept orthodoxy as normal, and to accept that there is only one way of doing “it.”

Any tasks that require caring, whether for people or nature, any tasks that require immediate feedback and adjustment, are best done holistically. Such tasks cannot be planned, coordinated, and controlled the way prescriptive tasks must be.

When successful, prescriptive technologies do yield predictable results. They yield products in numbers and qualities that can be set beforehand, and so technology itself becomes an agent of ordering and structuring.

The ordering that prescriptive technologies have caused has now moved from ordering at work and the ordering of work, to the prescriptive ordering of people in a wide variety of social situations.

“the digitalized footprints of social transactions,” since the technology can be set up not only to include and exclude participants, but also to show exactly where any individual has spent his or her time.

prescriptive technologies eliminate the occasions for decision-making and judgement in general and especially for the making of principled decisions. Any goal of the technology is incorporated a priori in the design and is not negotiable.

As methods of materials production, prescriptive technologies have brought into the real world of technology a wealth of important products that have raised living standards and increased well-being. At the same time they have created a culture of compliance.

scale

Underlying the different uses of the concept of scale are two different models or metaphors: one is a growth model, the other a production model.

A production model is different in kind. Here things are not grown but made, and made under conditions that are, at least in principle, entirely controllable.

Production, then, is predictable, while growth is not.

choosing a particular university, following a particular regimen, will turn the student into a specifiable and identifiable product.

If there ever was a growth process, if there ever was a holistic process, a process that cannot be divided into rigid predetermined steps, it is education.

The real world of technology seems to involve an inherent trust in machines and devices (“production is under control”) and a basic apprehension of people (“growth is chancy, one can never be sure of the outcome”).

vernacular reality

extended reality – that body of knowledge and emotions we acquire that is based on the experience of others.

constructed or reconstructed reality. Its manifestations range from what comes to us through works of fiction to the daily barrage of advertising and propaganda. It encompasses descriptions and interpretations of those situations that are considered archetypal rather than representative. These descriptions furnish us with patterns of behaviour. We consider these patterns real, even if we know the situations have been constructed in order to make a particular pattern very clear and evident.

projected reality – the vernacular reality of the future.

today there is no hierarchical relationship between science and technology. Science is not the mother of technology. Science and technology today have parallel or side-by-side relationships; they stimulate and utilize each other. It is more appropriate to regard science and technology as one enterprise with a spectrum of interconnected activity than to think of two fields of endeavour – science as one, and applied science and technology as the other.

Today scientific constructs have become the model of describing reality rather than one of the ways of describing life around us.

As a consequence there has been a very marked decrease in the reliance of people on their own experience and their own senses.

the downgrading of experience and the glorification of expertise is a very significant feature of the real world of technology.

the message-transmission technologies have created a host of pseudorealities based on images that are constructed, staged, selected, and instantaneously transmitted.

Media images seem to have a position of authority that is comparable to the authority that religious teaching used to have.

As a community we should look at what the new technologies of message-forming and -transmitting do to our own real world of technology and democracy. This is why I have a sense of urgency to map the real world of technology, so that we might see how in our social imagination the near is disadvantaged over the far. We should also understand that this does not have to be so.

Viewing or listening to television, radio, or videos is shared experience carried out in private. The printing technologies were the first ones that allowed people to take in separately the same information and then discuss it together. Prior to that, people who wanted to share an experience had to be together in the same place – to see a pageant, to listen to a speech.

there are new, high-impact technologies and these produce largely ephemeral images. The images create a pseudocommunity, the community of those who have seen and heard what they perceive to be the same event that others, who happened not to have watched or listened, missed for good.

Since normally only a fraction of the pseudocommunity become members of the real and active community, the possibility of forming such groups may be greater in the case of broadly based international concerns that are “the far” for most viewers than in the case of specific problems of “the near.”

There is a lot of talk about global crises and “our common future.” However, there is far too little discussion of the structuring of the future which global applications of modern technologies carry in their wake.

Whenever human activities incorporate machines or rigidly prescribed procedures, the modes of human interaction change.

technical arrangements reduce or eliminate reciprocity. Reciprocity is some manner of interactive give and take, a genuine communication among interacting parties.

For example, a face-to-face discussion or a transaction between people needs to be started, carried out, and terminated with a certain amount of reciprocity. Once technical devices are interposed, they allow a physical distance between the parties. The give and take – that is, the reciprocity – is distorted, reduced, or even eliminated.

reciprocity is not feedback. Feedback is a particular technique of systems adjustment. It is designed to improve a specific performance.

Reciprocity, on the other hand, is situationally based.

It is neither designed into the system nor is it predictable. Reciprocal responses may indeed alter initial assumptions. They can lead to negotiations, to give and take, to adjustment, and they may result in new and unforeseen developments.

these technologies have no room for reciprocity. There is no place for response. One may want to speculate for a moment whether this technological exclusion of response plays a part in the increasing public acceptance of the depiction of violence and cruelty.

technologically induced human isolation:

even in the universe of constructed images and pseudorealities there still exists a particular enclave of personal directness and immediacy: the world of the ham-radio operator. It is personal, reciprocal, direct, affordable – all that imaging technology is not – and it has become in many cases a very exceptional early warning system of disasters. It is a dependable and resilient source of genuine communication. I am citing this example so as not to leave the impression that the technological reduction of meaningful human contact and reciprocal response is inherently inevitable.

the growth of prescriptive technologies provided a seed-bed for a culture of compliance.

Technology has been the catalyst for dramatic changes, in the locus of power.

Any task tends to be structured by the available tools.

Tools often redefine a problem.

The real world of technology is a very complex system. And nothing in my survey or its highlights should be interpreted as technological determinism or as a belief in the autonomy of technology per se. What needs to be emphasized is that technologies are developed and used within a particular social, economic, and political context. They arise out of a social structure, they are grafted on to it, and they may reinforce it or destroy it, often in ways that are neither foreseen nor foreseeable. In this complex world neither the option that “everything is possible” nor the option that “everything is preordained” exists.

A change in one facet of technology, for instance the introduction of computers in one sector, changes the practice of technology in all sectors. Such is the nature of systems.

I much prefer to think in terms not of systems but of a web of interactions.

When women writers speak about reweaving the web of life, they mean exactly this kind of pattern change. Not only do they know that such changes can be achieved but, more importantly, they know there are other patterns. The web of technology can indeed be woven differently, but even to discuss such intentional changes of pattern requires an examination of the features of the current pattern and an understanding of the origins and the purpose of the present design.

1740s, a very influential book was published by La Mettrie called L’Homme-machine

the discovery of the body as object and instrument of power led to a host of regimes of control for the efficient operations of these bodies, whether they were the efficiencies of movement, the measured intervals of the organization of physical activities, or the careful analysis and timing of the tasks bodies could perform, usually in unison.

It was into this socially and politically well prepared soil that the seeds of the Industrial Revolution fell. The factory system, with its mechanical devices and machines, only augmented the patterns of control. The machinery did not create them.

To plan with and for technology became the Industrial Revolution’s strongest dream. The totally automated factory – that is, a factory completely without workers – was discussed by Babbage and his contemporaries in the early nineteenth century.

While the eighteenth century exercised control and domination by regarding human bodies as machines, the nineteenth century began to use machines alone as instruments of control.

For the British manufacturers, machines appeared more predictable and controllable than workers. The owners of factories dreamt of a totally controlled work environment, preferably without any workers. If and where workers were still needed, they were to be occupied with tasks that were paced and controlled by machines.

Industrial layout and design was often more a case of planning against undesirable or unpredictable interventions than it was of planning for greater and more predictable output and profit.

a clearly perceived loss of workers’ control and autonomy. It was not resistance to technology per se so much as an opposition to the division of labour and loss of autonomy that motivated the workers’ resistance.

What the Luddites and other groups of the period clearly perceived was the difference between work-related and control-related technologies.

somehow I find no indication that they realized that while production could be carried out with few workers and still run to high outputs, buyers would be needed for these outputs. The realization that though the need for workers decreased, the need for purchasers could increase, did not seem to be part of the discourse on the machinery question. Since then, however, technology and its promoters have had to create a social institution – the consumer – in order to deal with the increasingly tricky problem that machines can produce, but it is usually people who consume.

Technology has changed this notion about the obligations of a government to its citizens. The public infrastructures that made the development and spread of technology possible have become more and more frequently roads to divisible benefits. Thus the public purse has provided the wherewithal from which the private sector derives the divisible benefits, while at the same time the realm from which the indivisible benefits are derived has deteriorated and often remains unprotected.

The global environmental destruction that the world now has to face could not have happened without the evolution of infrastructures on behalf of technology and its divisible benefits, and the corresponding eclipsing of governments’ obligation to safeguard and protect the sources of indivisible benefits. Whether the task of reversing global environmental deterioration can be carried out successfully will depend, to a large extent, on understanding and enforcing the role and obligation of governments to safeguard the world’s indivisible benefits.

Prescriptive technologies are a seed-bed for a culture of compliance.

Many technological systems, when examined for context and overall design, are basically anti-people. People are seen as sources of problems while technology is seen as a source of solutions.

the “technological imperative.”

whatever can be done by technological means, will be done.

the need for a credible long-term enemy.

the changes that technology has brought to the part of citizens in war preparation and warfare. Just as fewer and fewer unskilled workers are needed in a modern technological production system, a country now has little practical need for raw recruits to operate its modern technological destruction system. Abandoning compulsory military service is not so much a sign of peaceful intentions as it is a sign of galloping automation.

Military service from citizens is no longer a prerequisite for war. What is a prerequisite is the compulsory financial service of all citizens, well before any military exchange begins.

Planning, in my sense of the word, originated with prescriptive technologies. As prescriptive technologies have taken over most of the activities in the real world of technology, planning has become society’s major tool for structuring and restructuring, for stating what is doable and what is not. The effects of lives being planned and controlled are very evident in people’s individual reactions to the impingement of planning on them. The real world of technology is full of ingenious individual attempts to sabotage externally imposed plans.

A common denominator of technological planning has always been the wish to adjust parameters to maximize efficiency and effectiveness.

holistic strategies are, more often than not, intended to minimize disaster rather than to maximize gain.

planning as part of the strategy of maximizing gain, and coping as central to schemes for minimizing disaster.

the real world of technology denies the existence and the reality of nature.

the prediction of a senior official at IBM, in an article called “The Banishment of Paperwork.” He confidently forecast the total absence of paperwork in 1984: Computers, within two decades, would have become the sole medium of communication, while all that burdensome paper would have vanished from our desks.

Ivan Illich pointed out in his 1981 essay, Shadow Work, that prescriptive technologies, particularly those in the administrative and social-service sectors, produce the desired results only when clients – for instance, parents, students, or patients – comply faithfully and to the letter with the prescriptions of the system. Thus, advanced applications of prescriptive technologies require compliance not only from workers, but also from those who use the technologies or are being processed by them. Illich stressed the role of individual and group compliance by citizens in this process of making prescriptive technologies work.

as more and more of daily life in the real world of technology is conducted via prescriptive technologies, the logic of technology begins to overpower and displace other types of social logic, such as the logic of compassion or the logic of obligation, the logic of ecological survival or the logic of linkages into nature. Herbert Marcuse, in One Dimensional Man, speaks of this overpowering.

a “mechanical bride,” the term used by Marshall McLuhan to describe the relationship between car and owner.

It is aimed at creating an atmosphere of harmless domesticity around the new technology to ease its acceptance.

If one doesn’t watch the introduction of new technologies and particularly watch the infrastructures that emerge, promises of liberation through technology can become a ticket to enslavement.

The authors of this prognostication evidently assumed that the introduction of the sewing machine would result in more sewing – and easier sewing – by those who had always sewn. They would do the work they had always done in an unchanged setting. Reality turned out to be quite different. With the help of the new machines, sewing came to be done in a factory setting, in sweatshops that exploited the labour of women and particularly the labour of women immigrants. Sewing machines became, in fact, synonymous not with liberation but with exploitation.

What turns the promised liberation into enslavement are not the products of technology per se – the car, the computer, or the sewing machine – but the structures and infrastructures that are put in place to facilitate the use of these products and to develop dependency on them.

To recap: many new technologies and their products have entered the public sphere in a cloud of hope, imagination, and anticipation. In many cases these hopes were to begin with fictional, rather than real; even in the best of circumstances they were vastly exaggerated. Discussion focused largely on individuals, whether users or workers, and promised an easier life with liberation from toil and drudgery. Discourse never seemed to focus on the effects of the use of the same device by a large number of people, nor was there any focus on the organizational and industrial implications of the new technologies, other than in the vaguest of terms.

once a given technology is widely accepted and standardized, the relationship between the products of the technology and the users changes. Users have less scope, they matter less, and their needs are no longer the main concern of the designers. There is, then, a discernable pattern in the social and political growth of a technology that does not depend on the particular technical features of the system in question.

how teaching, research, and practice in most areas of science and technology follow essentially male patterns by being basically hierarchical, authoritarian, competitive, and exclusive.

Major facets of technology are related to prescriptive practices and thus to the development of instruments of power and control.

The great contribution of women to technology lies precisely in their potential to change the technostructures by understanding, critiquing, and changing the very parameters that have kept women away from technology.

What does it say about our society, when human needs for fellowship and warmth are met by devices that provide illusions to the users and profits to the suppliers?

as a response to loneliness, it seems to me deceitful and fraudulent.

the disregard that technical designers can have for the needs of operators. Typists not only got awkward machines, but they – and the telephone operators – also encountered the usual division of work that has become part of mechanization and automation. As the technologies matured and took command, women were left with fragmented and increasingly meaningless work.

The way of doing something can be “holistic” when the doer is in control of the work process. The way of doing something can also be “prescriptive,” when the work – whatever it might be – is divided into specified steps, each carried out by separate individuals. This form of division of labour, historically quite old and not dependent on the use of machines, is a crucial social invention at first practised in the workplace.

I hold that, in fact, we have lost the institution of government in terms of responsibility and accountability to the people. We now have nothing but a bunch of managers, who run the country to make it safe for technology.

I firmly believe that when we find certain aspects of the real world of technology objectionable we should explore our objections in terms of principle, in terms of justice, fairness, and equality. It may be well to express concerns as questions of principle rather than to try to emphasize merely pragmatic explanations – for instance, that objectionable practices may also be inefficient, inappropriate, or polluting. The emphasis on a pragmatic rationale for choice tends to hide the value judgements involved in particular technological stances.

When my colleagues in the field of cold-water engineering speak of “ice-infested waters,” I am tempted to think of “rig-infested oceans.” Language is a fine barometer of values and priorities. As such it deserves careful attention.

Let’s make a checklist to help in the discourse on public decision-making. Should one not ask of any public project or loan whether it: (1) promotes justice; (2) restores reciprocity; (3) confers divisible or indivisible benefits; (4) favours people over machines; (5) whether its strategy maximizes gain or minimizes disaster; (6) whether conservation is favoured over waste; and (7) whether the reversible is favoured over the irreversible?

redemptive technologies

the development and use of redemptive technologies ought to be part of the shaping of a new social contract appropriate for the real world of technology, one that overcomes the present disenfranchisement of people.

“protest and survive.”

“Let us understand, and on the basis of our common understanding, protest.” We must protest until there is change in the structures and practices of the real world of technology, for only then can we hope to survive as a global community.

If such basic changes cannot be accomplished, the house that technology built will be nothing more than an unlivable techno-dump.

many such communications have to be regarded as messages looking for receivers.

I have never liked the term cyberspace because it neither describes a space nor does its current use reflect the concepts of control and systems-design implied in the term cybernetics, after which the term cyberspace was patterned.

I got into real trouble once, when I suggested that the Internet could be looked at as one giant dump: people and organizations dump information in bits and pieces; they also retrieve whatever is of use and interest to them. What is found by the scavengers depends on where they dig, what was dumped, and what is considered useful or relevant enough to be retrieved. There is no pattern in the digging or reassembly, no one path through the dump, no compulsory reference to the source of the bounty. And since the Internet contains information rather than stuff, the same treasures, or junk, can be retrieved again and again.

measured time and experienced time.

The role of asynchronicity in unravelling social and political patterns without apparent replacement with other patterns cannot be overestimated.

Many people have experienced the asynchronous forms of labour and have felt their consequences; the impact often includes the lack of work-related solidarity and self-identification that can have profound social implications.

Women in particular have often treasured the opportunity to work asynchronously – getting a bit of writing done when the kids are asleep, sneaking in a slice of private life into their tightly structured existences. But I see a real difference between supplementing a rigidly patterned structure with asynchronous activities and substituting synchronous functions by asynchronous schemes.

The inhabitants of the City of Bits are still real live human beings, yet nature, of which humans are but a small part, appears to have no autonomous space in the bitsphere. There are no seasonal rhythms, no presence of the land nor the ebb and flow of individual lives, even though these are the synchronous patterns that have shaped culture and community throughout time and, through their patterns, have provided a source of meaning to people for many generations.

the difference between a mechanism and an organism.

the biosphere and the bitsphere

Within the biosphere, human beings have attempted to codify and transmit their understanding of the world around them by ordering their experiences into general schemes and structures. Myths, religion, and science have endeavoured to transmit knowledge and experience so ordered as to convey sequence and consequence as ordering principles. Learning to recognize such ordering principles has been traditionally part of growing up in a given society. Ordering schemes help us to evaluate and interpret new knowledge and experience.

One of the most striking attributes of the bitsphere, on the other hand, is the absence of structure.

Unfortunately, the new technologies have entered the realm of education largely because they were regarded as production improvements, promising better products and faster or bigger production runs, and not because they were deemed to offer enrichment to the soil. Thus it is not surprising that the electronic classroom raises the same types of problems and exhibits the same social and political difficulties that one encounters in the realm of work or governance in the real world of the new technologies.

the displacement of people by devices

When external devices are used to diminish the need for the drill of explicit learning, the occasion for implicit learning may also diminish.

As considerations of efficiency and cost-cutting shift the balance of synchronous and asynchronous classroom activities, the balance of explicit and implicit learning is changing. While the pool of information available to the students may increase, the pool of available understanding may not. This has considerable consequences for social cohesion and peace and deserves careful attention.

how and where, we ask again, is discernment, trust, and collaboration learned, experience and caution passed on, when people no longer work, build, create, and learn together or share sequence and consequence in the course of a common task?

where, if not in school and workplace, is society built and changed?

the practice of democratic governance is in grave question and the advancement of social justice and equality appears stalled in a labyrinth of random transactions. This does not have to be so. The interface of the biosphere and the bitsphere not only poses problems and precipitates crises but it offers new opportunities to advance the common good. It will take the collective thought, moral clarity, and strong political will of many people to move towards this goal rather than away from it.

This is a collective endeavour that no group or conglomerate can do on its own. Most of our social and political institutions are both reluctant and ill-equipped to advance such tasks. Yet if sane and healthy communities are to grow and prevail, much more weight has to be placed on maintaining the non-negotiable ties of all people to the biosphere.

Audrey Watters

Notes from Questionnaire

12 min read

Here are my notes from Evan Kindley's new book Questionnaire. I think this is an incredibly important book for those interested in the histories of educational testing as well as the futures of learning analytics. Hopefully I'll carve out time to write a longer review:

The fact must be faced: for many of us, under the right circumstances, filling out forms is fun.

The word “questionnaire” appears first in French, in its modern sense, in the mid-nineteenth century. Some of the word’s early usages suggest persistent associations with the Catholic practices of catechism and confession, as well as governmental inquisition and interrogation. (In the eighteenth century, the term “questionnaire-juré” described a torturer.)

“intelligencers”

the importance of blank “job-printed” forms to the rise of bureaucracy and the consolidation of the new capitalist economy in the nineteenth and twentieth centuries.

Blank forms, Gitelman argues, are the ultimate bureaucratic objects: bland, impersonal, utilitarian documents designed to help officials process and sort large groups of people.

The history of the questionnaire is the history of attempts to make interacting with such dreary objects more and more fun for more and more people.

people can actually enjoy interrogation by questionnaire,

The history of the questionnaire is thus also a history of psychological manipulation, and of salesmanship: a series of attempts to find the magic words that will open the heart of the public.

“they will be used only as data for general statistical conclusions.”

On Galton:
* a raft of dubious generalizations.
* a definite methodological success.

More than any other single scientific work, English Men of Science established the self-report questionnaire in the United Kingdom as a legitimate instrument for the collection of empirical data.

In his work on heredity, he took the first steps toward solving a major practical problem for the social sciences: how to convince people to overcome their disinclination to provide personal information about themselves.

he exploited financial instincts, offering cash prizes

The emphasis was on the generation of family heirlooms rather than of experimental data.

With this stratagem, Galton invented the baby book, a popular genre that continues to flourish today.

A combination of rationalism, progressivism, and narcissism drove the early development of the questionnaire.

The Victorians loved questionnaires because they pandered to their faith in science, their earnest desire to improve the world around them, and – most important, perhaps – their intense interest in the quotidian details of their own lives.

the mania for anthropometric questionnaires bears a curious similarity to another contemporary trend among the British middle class of the late nineteenth century: the vogue for confession albums, which were a popular parlor game in the 1870s and later.

Like the personal details that circulate on today’s social media, these revelations were not true confessions but symbolic tokens meant to be shared.

“le questionnaire de Proust.”

“the advantage of questionnaires, from a financial point of view, was that not one of the celebrities who agree to submit [answers] expect to be paid.” [Note: this is so very similar to the extraction of data via quizzes today. Lots of pageviews; little to no payout for participants and/or writers]

In 1905, the French psychologists Alfred Binet and Théodore Simon developed a scale to measure the intelligence of children aged three to twelve. Lewis Terman, a psychology professor at Stanford, revised it in 1916 to create the Stanford-Binet Intelligence Scales, which in turn provided the model for the Scholastic Aptitude Test (SAT), the first national standardized intelligence test in the United States, introduced in 1926.

Tests like the Binet-Simon, the Stanford-Binet, and the SAT, by contrast, were used for evaluative purposes, and thus had an immediate impact on the life chances of those who took them.

The immediate aims of the Alpha and Beta examinations were pragmatic: they allowed the Army to identify exceptional individuals who might be suited for officer training, and consign the lowest-scoring recruits to labor battalions and other menial posts. But the project also enabled psychologists to amass an unprecedented amount of anthropometric data on the American population.

The Alpha tests were far from what we would now call “culture-blind”: that is, what they measured was not “intelligence” (whatever that means) so much as familiarity with a specific cultural context.

In scientific terms, as measurements of intelligence or ability, such tests are virtually useless. Nonetheless, the study’s findings were almost immediately weaponized by the anti-immigrant nativist movement.

Post-Traumatic Stress Disorder (PTSD). The severity of the epidemic led the Army to experiment with more rigorous screening of recruits for psychological instability, the governing assumption being that only the mentally weak would “crack up” under the strain of combat.

the Woodworth Psychoneurotic Inventory,

Industrial, or “applied,” psychology came into its own as a field, taking its place alongside Frederick Winslow Taylor’s Scientific Management as a major influence on the culture of capitalist production.

Humm-Wadsworth Temperament Scale

testing was a management science, and, like Taylorism, it was often put to antiunion purposes.

Bernreuter Personality Inventory, the Worthington Personal History Blank, the Thurstone Personality Schedule, the Adams-Lepley Personal Audit, the Allport Ascendance-Submission Reaction Study, the Guilford-Zimmerman Temperament Survey

The Organization Man

Whyte’s book was a national bestseller, and it inaugurated a vicious cultural backlash against mandatory personality testing. Psychological tests at work began to seem like the epitome of totalitarian thought policing, and were thus susceptible to attack from both the left and the right as the 1960s wore on.

The 1964 Civil Rights Act made companies reluctant to use tests that might be shown to have a systematic bias against minorities. In 1966, Senator Sam J. Ervin Jr. of North Carolina convened a hearing on Privacy and the Rights of Federal Employees that specifically targeted personality inventories as an unacceptable invasion of privacy.

the steady drumbeat of scientific skepticism about its basic validity and value. Intelligence testing had been controversial from the beginning: it was opposed especially vociferously by anthropologists

confirmation bias.

In Mischel’s view, then, the fundamental premise of personality assessment – that individuals possess core psychological traits and attributes that remain consistent across different situations, contexts, and life stages – was simply wrong. All previous attempts to “test” for personality were based on a fundamental fallacy about human behavior, and should therefore be thrown out.

In the 1940s, Myers read an article in Reader’s Digest about the Humm-Wadsworth Temperament Scale entitled “Fitting the Worker to the Job.” The MBTI, modeled on the Humm and other industrial “people-sorters” but grounded in Jungian type theory as opposed to the categories of eugenic psychiatry, was conceived as a career-placement tool that would help employers identify the strengths of job candidates and individuals find their proper line of work.

Beginning in 1962, it was carried by the Educational Testing Service, the publishers of the SAT

Of all of the personality tests developed in the twentieth century – and there have been hundreds – the MBTI is the closest to the language of pop psychology and self-help.

“The Indicator’s unfailingly positive tone blends seamlessly … with our society’s emphasis on promoting self-esteem,” the journalist Annie Murphy Paul has noted.

Oxford Capacity Analysis (OCA)

Scientology founder L. Ron Hubbard

In the mid-1950s, publishers of personality tests began to require their customers to be accredited by the American Psychological Association, thus cutting Hubbard off from access to more legitimate scientific instruments. But it also allowed the church to shape the test to its own institutional requirements.

Ultimately, though, the responses given to these particular questions don’t matter very much, as it appears to be impossible to achieve a “good” score on the OCA.

the test was rigged to produce a negative result

Where the Myers-Briggs test flatters and protects those who take it, revealing to them their special psychological gifts, the Oxford Capacity Analysis is designed to tear your personality down, in order to rebuild.

“Your opinion of you,” then, is that you are a problem only Scientology can solve.

With the birth of the scientific opinion poll, Gallup attests, the long search for that Holy Grail of representative democracy – an accurate gauge of popular opinion – was finally reaching an end.

the semantics of interrogation

The whole project of opinion research is predicated on the assumption that people can tell you what they really think.

The rise of the personal questionnaire broadly parallels the rise of women’s literacy, which soared across class divisions in the late nineteenth and early twentieth centuries.

Questionnaires, then, could be mechanisms of psychological control, but also portals to self-reflection, instruments of what the women’s movement of the 1970s would call “consciousness-raising.”

Popenoe founded the American Institute of Family Relations (AIFR), the nation’s first marriage clinic, in Pasadena, California, in 1930.

“the JTA’s statistical assumptions and assessment protocols allowed men much greater deviation [from the norm] than women”: even the math behind the JTA was sexist.

the fundamentally conservative enterprise of marriage counseling made some accidental contributions to women’s liberation nonetheless.

Quizzes played an important role in defining this hypothetical individual.

To take a psychological test is to put your trust in science (or pseudoscience, as the case may be). To take a quiz is to put your trust in an omniscient, benevolent magazine editor. Both of them involve a sort of quasi-religious faith. It’s a type of faith based on familiarity, which can often shade into contempt without undermining the basis of the faith, and it has been essential to the development of passionate online fan bases for quizzes, personality tests, purity tests, and other questionnaire-based forms. Even if you don’t share your answers with anyone, you’ve given up a part of yourself to a higher authority: you have confessed.

It allowed questionnaires, freed from any requirement to be accurate, to be fun.

the questionnaire as the basic building block of their information architecture.

In the mid-1950s, at the apex of his fame as a marriage counselor, Popenoe collaborated with computer scientists at Remington Rand on the world’s first computer dating program.

eHarmony, founded in the year 2000 by Neil Clark Warren, a Christian marriage counselor in Popenoe’s adopted hometown of Pasadena. In its early years, it was affiliated with Focus on the Family’s James Dobson, who got his start in the 1970s as one of Popenoe’s assistants at the AIFR.

All computer dating programs are built on a quasi-eugenic premise: that the fitness of a potential mate can be determined objectively, thus allowing “inappropriate” sexual partners to be screened out.

Without subscribing to their racial theories, they share with Popenoe and Galton a belief that human qualities can be quantified and that, once this data is collected and correlated, a better social order can be engineered.

it is becoming increasingly clear that they do far more with their users’ personal data than use it to set them up. A case in point is OkCupid. While it is far from the most successful dating site in raw numbers, OkCupid has had perhaps the greatest influence on the style of contemporary Internet culture at large.

Data is data, and when enough of it is compiled, patterns of some kind will inevitably emerge.

That raw data is used to match OkCupid’s customers, but it’s also sold (as Paumgarten reported in 2011) to academic social scientists, and probably to other outside parties as well.

none of the people represented in this data set agreed to be part of a study, nor did they sign the informed-consent agreements that are prerequisites for any legitimate research on human subjects in the social sciences.

“siren server”

a powerful computer network with exclusive access to a specific type of data (in this case, answers to personal questions relating to dating preferences) and proprietary sorting algorithms to help make sense of it all.

the quality of the typical answer matters less than the quantity of total answers.

quizzes are still a consistent traffic driver for BuzzFeed

BuzzFeed has denied that it’s selling the user data generated by quizzes, or even collecting it beyond basic metrics like how many people have taken the quiz, whether they share it, and their final results.

As soon as you land on any BuzzFeed page, Barker notes, custom variations to the site’s Google Analytics code allow it to see whether you’ve arrived via Facebook, your age, gender, the country you’re currently in, and how many times you’ve shared BuzzFeed content in the past. In the particular case of quizzes, the site also records each “event” (i.e., each click on the page). “If you click ‘I have never had an eating disorder’” (an actual checklist item from the “How Privileged Are You?” quiz) “they record that click,” Barker writes. This means that, in theory at least, BuzzFeed is in possession of some extraordinarily sensitive information about their users.

they have both the technological capability and a strong economic incentive to do so.

The politics of Big Data are still up for grabs, though it’s difficult to believe that things won’t ultimately tilt in the direction of management rather than labor.

Audrey Watters

Notes from When We Are No More

28 min read

I'm working on a keynote with the phrase "Memory Machines" in the title, as this book has me thinking about the importance of personal memory, cultural memory, knowledge-making, and ed-tech.

Over forty thousand years ago, humans discovered how to cheat death. They transferred their thoughts, feelings, dreams, fears, and hopes to physical materials that did not die. They painted on the walls of caves, carved animal bones, and sculpted stones that carried their mental and spiritual lives into the future. Over generations we have created sophisticated technologies for outsourcing the contents of our minds to ever more durable, compact, and portable objects.

The carrying capacity of our memory systems is falling dramatically behind our capacity to generate information.

Every innovation in information technology, going back to ancient Mesopotamians’ invention of cuneiform tablets, precipitates a period of overproduction, an information inflation that overpowers our ability to manage what we produce.

Having more knowledge than we know what to do with while still eager to acquire more is simply part of the human condition, a product of our native curiosity.

massive machines that create, store, and read our memory for us.

What this mastery looks like and how we achieve it is today’s frontier of knowledge.

Digital memory is ubiquitous yet unimaginably fragile, limitless in scope yet inherently unstable.

the Declaration of Independence in full Track Changes mode

One data-storage company estimates that worldwide, web data are growing at a rate that jumped from 2.7 billion terabytes in 2012 to 8 billion terabytes in 2015. But nobody really knows—or even agrees how we should be counting bits.

The question had always been: “What can we afford to save?”

The question today is: “What can we afford to lose?”

We are replacing books, maps, and audiovisual recordings with computer code that is less stable than human memory itself.

Code is rapidly overwritten or rendered obsolete by new code.

Digital data are completely dependent on machines to render them accessible to human perception.

Two reigning misconceptions stand in the way of a happy ending to our experiment in reimagining memory for an economy of digital abundance. First is the notion that today’s abundance is a new phenomenon, unseen in human history, which began with computers and is driven by technology.

That was the radically transformative idea that the universe and all that exists is no more and no less than the material effect of material causes.

materialism,

Four inflection points in particular precede and enable the scientific advances of the nineteenth century that inaugurated today’s information inflation: (1) the development of writing in Mesopotamia for administrative and business purposes, together with professional management of the collections; (2) the ancient Greeks’ development of libraries as sites for the cultivation of knowledge for its own sake; (3) the Renaissance recovery of Greek and Roman writings and the invention of movable type, which together helped to propel the West into the modern age; and (4) the Enlightenment of the eighteenth century, which refashioned knowledge into an action verb—progress—and expanded the responsibilities of the state to ensure access to information.

when science moved from the Age of Reason to the present Age of Matter

The computer is not an accurate model for the brain.

Memory is the entire repertoire of knowledge an animal acquires in its lifetime for the purpose of survival in an ever-changing world—essentially everything it knows that does not come preprogrammed with its DNA. Given the complexity of the world, memory takes a less-is-more approach.

We keep our mental model of the world up to date by learning new things. Fortunately, our memory is seldom really fixed and unchangeable.

data is not knowledge, and data storage is not memory.

facts are only incidental to memory.

memory is not about the past. It is about the future.

Human memory is unique because from the information stored in our brains we can summon not only things that did or do exist, but also things that might exist. From the contents of our past we can generate visions of the future.

This deep temporal depth perception is unique in Nature.

As we consider memory in the digital age, we will see how our personal memory is enhanced, and at times compromised, by the prodigious capacities and instantaneous gratifications of electronic information.

Collective memory—the full scope of human learning, a shared body of knowledge and know-how to which each of us contributes and from which each of us draws sustenance—is the creation of multiple generations across vastly diverse cultures.

culture, a collective form of memory, we create a shared view of the past that unites us into communities and allows large-scale cooperation among perfect strangers.

We recognize ourselves in the irrational yet compelling desire to breach the limits of time and space, to bear witness to our existence, and to speak to beings distant in time and space.

self-awareness, symbolic thought, and language.

Creation myths usually feature people who are dangerously curious.

in Paradise, there is no curiosity.

Culture is the collective wit by which we live.

Through a curious interbreeding of biblical theology and Greek thought, the West gradually stopped seeing knowledge as a threat to reverence and instead began to cherish it as a form of reverence in itself.

writing remained intrinsic to the process of accounting for goods and services and making public—publishing—that information.

cuneiforms represent a more powerful innovation in the deep history of memory than a technical solution. This innovation made the invention of writing not only possible, but also almost inevitable. It led to the creation of objects as evidence, capable of transcending the frailty of human memory and thwarting the temptation to shade the truth by holding people accountable.

objective witnesses that cannot lie or forget.

to manage economic assets, secure control over a ruler’s possessions, and extol his power.

The proliferation of tablets with valuable information led to the vexing questions of storage, security, preservation, and cataloging—an early instance of a Big Data management problem.

CULTURE IS MORE EFFICIENT THAN BIOLOGY

As a mechanism of adaptation, culture is far more efficient than biology. Genetic material is more or less fixed at the time of conception. The genome does not acquire new information from an animal’s experiences of life. Learning modifies the nervous system of an animal, but not its DNA.

Before globalization, there were thousands of ways of being human, each with its own language, dress, kinship systems, counting methods, and food ways.

Because we are by nature culture-making creatures, distinctions we like to draw between what is natural and what is artificial or man-made are illusory at best.

Any feeling of there being a gap between humans and Nature is itself a by-product of culture.

Extending the reach and longevity of knowledge became a distinct competitive advantage not only over animals, but also over rival Homo sapiens.

The strategic alliance between knowledge and power, record keeping and administering power

Autobiographical memory gives us a sense of who we are and provides continuity as we age.

Memory begins to focus less on learning new things than on integrating all that we have experienced and known to provide a sense of continuity between past and present selves.

This is memory’s task of retrospection, to integrate the knowledge that we have, to impute a sense of cause and effect to the events in our lives, and to offer a sense of meaning.

Culture provides the large-scale framework for memory and meaning. It aids in the creation of new knowledge, but it also acts as a filter that over time determines what is of long-term value from the social perspective.

Natural memory is designed to be labile, flexible, easily modified or written over to suit new environments.

Artificial memory is designed to be stable, fixed and unchanging, slow, and resilient, freeing up mental space for individuals to learn new things.

We are all born into a culture, specific to a time and place, that provides a wealth of ready-made knowledge and know-how for us to use in making our way in the world without delay.

how much of our individuality emerges from our native culture,

shared memory is the midwife of innovation and, paradoxically, accelerates the change in our environment.

Collective memory and the sheer power of knowledge accumulated over millennia both push us ahead and pull us from behind.

In periods of great instability, the past becomes more useful as we increasingly tap into the strategic reserve of humanity’s knowledge. Yet it is at moments like this when the past is most easily lost.

The cultural amnesia induced by their complete loss was not Caesar’s doing, but the work of many generations, Christian and Muslim, who felt no responsibility to care for pagan learning.

the astounding efficiencies of ink-on-paper writing were bought at the price of durability.

in terms of sheer durability, the technology for writing reached a peak five thousand years ago and has been going downhill ever since.

By the fifth century B.C., the Greeks had embarked on a novel enterprise, the concerted cultivation of knowledge for its own sake. In doing so, they made three contributions to the expansion of human memory whose effects are still playing out today. The first is the creation of mnemonic or memory techniques that tap into a profound understanding of how memory relies on emotion and spatialization, thereby predating contemporary neuroscience’s findings by twenty-five hundred years.

The second is the creation of libraries as centers of learning and scholarship, not primarily storage depots for administrative records. And third is recognition of the moral hazards of outsourcing the memory of a living, breathing, thinking, and feeling person to any object whatsoever.

By cultivating knowledge for its own sake, they raised the pursuit of beauty and harmony to a level as high as, or higher than, the pursuit of know-how to solve pragmatic problems.

two fundamental discoveries about how the brain forms memories through emotion and spatial thinking. Prizing the art of rhetoric as a civic virtue—democratic citizenship in action, as it were—the Greeks had to perform feats of memorization and recitation.

memory palaces.

part-for-the-whole substitution is termed “synecdoche,”

For reasons we do not fully understand, memory can be reinforced and amplified by using physical objects, whether it is a memory stone, a series of knots tied on fingers or into elaborate quipu, or merely an extension of the body itself.

We do not know why, let alone how, moderate physical movement stirs up the archives of the mind along with the circulation of the blood.

We may refer to the Internet as cyberspace, but its lack of material substance has distinct disadvantages when it comes to finding our way in its dense forests of data. We understand so very little about how real physical space affects memory and vice versa.

Context is spatial.

Until roughly 2000 A.D., if someone wanted access to information, they had to go to where the books and journals, maps and manuscripts were—the library.

a collection that supported scholarship, embedded in a temple to learning.

The collections were copied and managed by experts, studied and edited by other experts. The scholars were paid for their intellectual labor,

It does not matter how comprehensive and well tended a collection may be. If an item cannot be located on demand because it is out of order, misplaced, or incorrectly cataloged, it effectively does not exist.

The librarians of Alexandria could afford to solve the scroll-management problem by throwing a lot of cheap labor at it. But a better solution was a technically advanced format—the codex.

Until the present age, managing physical objects was the only way we managed knowledge.

an imperial library

As depositories of human memory, libraries became the symbol of man’s attempts to master the world through the gathering of all knowledge. No library was quite as ambitious as the one in Alexandria. It is the essential model for the library in the digital age.

according to the written testimony of Plato, Socrates warned that the invention of writing would lead to ignorance and, ultimately, the death of memory.

Once knowledge is transferred to a piece of paper, then it essentially leaves us and with that, Socrates argues, we no longer feel responsible for remembering it.

For Socrates, remembering is a moral action, touching the very substance of our being.

The art of memory was taught as a species of performance,

The very foundation of memory itself was understood to be emergent and performative—not fixed and forever, but coming into being under specific circumstances.

Expanding the scope of knowledge above and beyond a certain scale makes it impossible to achieve the single thing he thought mattered in life: to know thyself.

Fundamental to today’s anxiety about the future of memory is the lurking awareness that our recording medium of choice, the silicon chip, is vulnerable to decay, accidental deletion, and overwriting.

Without preservation, there is no access.

With every innovation in information technology that produces greater efficiency by further compressing data, librarians and archivists begin a frantic race against time to save the new media, inevitably more ephemeral.

By 1500, a mere four decades after printing presses began operations, between 150 and 200 million books flooded the circulation system of European culture.

He called his prose pieces essais, meaning attempts, tests, or perhaps experiments.

he shows us that sometimes the best way to understand ourselves is to reveal ourselves to others,

No institution failed more spectacularly than the papacy,

the papacy’s religious authority and reputation remained fast among the faithful until they encountered detailed (often printed) reports of moral corruption in the papal court, frequently accompanied by salacious drawings

People became disillusioned with the clergy at all levels. But they did not become less religious. On the contrary, it was an inflammation of religious passion that led to the reformation, rather than the dying off, of Christian faith.

The presses had destroyed the possibility of monopolizing channels of communication.

It took a few generations before people and parties became adept at controlling the presses so they could control the message.

he created his audience by giving them something new, something they did not even know they wanted.

As authorities and institutions fail, we are forced to decide for ourselves which sources are trustworthy and which are not. The question of what to believe becomes, almost imperceptibly, a question of who to believe.

A new genre was invented—the newspaper—to meet the growing appetite for novelty, information, and gossip.

“Enlighten the people generally, and tyranny and oppressions of body and mind will vanish like evil spirits at the dawn of day.”

The first libraries were collections of religious books, packed in the luggage of religious pilgrims from the Old World.

they were read with one goal in mind—personal salvation.

For Jefferson, the goal of reading was not salvation but freedom.

Jefferson was more an acquisitor than an annotator.

America’s innate tension between nostalgia and utopian futurism,

private collections are the vanguard of our collective memory, but their value is realized only when they pass into the public sphere.

institutions are important for functions that must persist over long durations of time. Their job is to slow us down, to add friction to the flow of thought, foster inertia, and carve out from the fleeting moment a place for deliberation.

why collectors are so valuable a part of an information ecosystem. The faster the present moves, the more valuable they become. Collectors historically have acted as the first-line defense against the physical loss of our cultural legacy. They collect and curate the artifacts of knowledge on our behalf. While the motives of individual collectors can vary between the poles of intellectual curiosity and personal vanity, great collectors have some larger purpose they wish to accomplish and to which they dedicate enormous amounts of time and treasure. They are the ones who keep the strategic reserve of memory rich, saving various fossils of extinct cultures and ensuring that the collective memory of mankind does not become a monoculture.

Before the Enlightenment, the idea of a right to access, let alone to a universal collection, would have been nonsensical. Now it is the default expectation.

The technologically advanced, data-rich world we live in today all devolves from one central discovery made in the nineteenth century: The universe was created in deep time, extends infinitely across deep space, and leaves a record of its own history in matter. The material universe is itself an archive and the Earth writes its autobiography in matter.

the enthusiastic pedantry of the graduate student.

Jefferson saw the category of Memory or History encompassing all matters intrinsic to the natural world and the result of natural processes.

“WHEN REASON FAILS, WE USE EXPERIENCE”

What is past has now become prologue.

the power of predictions inherent in materialism that captured the public and scientific minds alike.

In hindsight we see Darwin’s proposal that human beings evolved from primates as singularly traumatizing.

As the universe got bigger, we got smaller.

The disestablishment of the church meant that the pursuit of science and learning was protected behind a cordon sanitaire from sectarian battles.

In a nation where there were many different and often competing religious persuasions, liberating the pursuit of knowledge from religious oversight seemed eminently practical as well as self-evidently moral.

From the perspective of memory, the most consequential effect of embracing materialism is its unquenchable appetite for information in its smallest increments—single data points—and as many of them as possible.

evidence

forensis:

Evidence, in other words, is information available to all without prejudice.

It is not esoteric, subjective, or privileged information. Even if not literally to be introduced at court, information has forensic value when it is reliable, publicly available, and authentic—that is, being what it purports to be, not a false representation.

What counts for evidence is always culturally determined, as is its interpretation.

If philosopher is a term “too wide and lofty” for the likes of the assembled, then “by analogy with artist, we may form scientist” as one who is “a student of the knowledge of the material world collectively.”

NEW MACHINES FOR NEW DATA

Beginning in the 1830s, new technologies appear in rapid succession: image capture (the first daguerreotype was taken in 1839); sound recording (the first recording of a human voice was made in 1860 by Édouard-Léon Scott de Martinville and the first playback machine by Thomas Edison in 1877); and X-rays (discovered by Wilhelm Röntgen in 1895).

The nineteenth century was marked by a series of crises around physical and intellectual control of all the evidence streaming in. Emboldened by the dream of accelerating the rate of human progress and well-being, expanding our control over the natural world, and freeing ourselves from drudge labor, we went to work on documenting the world. We built more infrastructure to manage the documents, supported the growth of highly specialized fields of knowledge to keep pace with the incoming data, and trained cadres of skilled professionals who in turn spontaneously differentiated themselves, like Darwin’s finches developing differently shaped beaks. Technologies and tools coevolve with the ideas and cultural practices of their users. And then the users outgrow them and want more. Science itself cannot advance without ever finer tools of observation, measurement, experimentation, and the communication of these results to other scholars. (The World Wide Web was devised to speed communication of research results among collaborating scientists at distant sites.)

By the 1840s the forensic imagination had already penetrated popular culture. In 1841, Edgar Allan Poe published the first story of detection, a “tale of ratiocination.”

Their zealous, even ascetic, dedication and single-mindedness of purpose became the hallmark of the professional man (and eventually woman). Immersed in the data-dense environment of a crime scene, Sherlock Holmes always brought laser-like focus and purpose.

When Karl Marx described the “alienation of labor,” he was not just talking economic theory, but also of the increasing perception that laborers were losing a sense of autonomy. As we outsource more of the most intimate part of ourselves—our personal memory and identity—to computer code, the fear of losing our autonomy—the alienation of our data, so to speak—increases because in the digital age, only machines can read our memory and know what we know at scale. As we gain mastery over our machines, this anxiety will lessen. But it will never go away, for the trade-offs we make between our appetite for more knowledge and our need for autonomy and control will continue to keep us on the alert for unintended consequences.

Emboldened by the Enlightenment cult of reason, we saw curiosity no longer as a vice, but as a civic virtue.

The first failure is the interruption of long-term memory formation.

The second failure is the loss or disintegration of memory.

what we perceive in any moment is a combination of real-time perception and stored information, our memory of the world. No creature is able to process enough information in real time to react appropriately to events as they transpire.

Living creatures come with preprogrammed memory, the genome, that encodes the history of the species and provides full instructions on how to become an ant if you are born with ant genes, a marmot if with marmot genes, a human if with human genes.

One of the breathtakingly simple advantages of the cuneiform, scroll, or printed page was that the memories inscribed on them were not easily changed, overwritten, or erased. On the contrary, these durable objects acted in exactly the opposite way our brains work. If kept in reasonably good physical shape, the words and images on a piece of paper would not change one whit for hundreds of years, no matter how many times they were read. Digital memory operates much more like biological memory. It is not really fixed and is easily overwritten or updated without leaving much trace of the changes made.

Deep learning and creativity, on the other hand, rely on the transformation of one day’s intake of perceptions to something sustained over time, embedded within a network of existing associations.

As man-made physical objects, all these artifacts of recorded knowledge—maps and photos, books and magazines—exist on the same scale as the humans who created them. The digital does not.

we know the effect of accelerated processing time and of binary thinking in our everyday lives: We have simultaneously more information and fewer means to sort its value.

“It is clear that the brain is much more like a social network than a digital computer.” Memory and learning are investigated now as products of “the graded, analog, distributed character of the brain.” It turns out that the computer is not an accurate metaphor for the brain.

Most of what we learn bypasses our awareness altogether and goes straight to the emotional and instinctual centers of the brain.

From the time of the Enlightenment onward, Western culture has deemed reason a stronger and more prestigious form of intelligence than emotion. Reason thinks slowly, not intuitively, and is effective as an instrument used to exert human will. But it is not the font of empathy and fellow feeling that social life requires. Computers, on the other hand, can reason with stunning speed, but they cannot simulate human decision-making processes with equal speed because they are not emotional. They can learn to simulate our behaviors by assessing the outcomes of past choices to make probabilistic predictions (“people who liked this also liked …”), and often that is good enough.

It is easy to read S.’s life story as a cautionary tale about the temptation to save all data because our capacity for digital storage keeps growing. The quantity of data we amass does not by itself add up to a grand narrative. Good memory is not data storage. Forgetting is necessary for true memory.

We have created a technologically advanced world that operates at a breathless pace, driven by a culture that demands more innovation (“disruption”). But science tells us that this disruptive and accelerated pace is self-defeating because our bodies and minds still operate at the same tempo as they did in ancient Sumer.

We are a culture obsessed with facts—the intrinsic value of a fact. But our brains do not share this reverence for facts.

The past is a plural noun.

memory is not about the past, it is about the future.

Our perception always tends toward prediction: It anticipates what it is seeing.

If the great feat of memory is to construct a model of the world that approximates reality closely enough—however we do it—the genius of imagination lies in using that model to create alternative orders and models of reality. Memory records the world as so.

Imagination transposes it into the key of as if, transforming experience into speculation. That is why to lose one’s memory means losing the future. Because imagination is memory in the future tense.

Imagination in adults is quite different from what we find in children. It is more akin to conjectural thinking, the ability to predict based on incomplete information.

a child’s model of the world is full of enchantment and driven by desires.

It is an intrinsic property of human memory,

As the scientist Richard Feynman said, “Science is imagination in a straitjacket.”

The loss of collective memory is as devastating to cultural identity as the loss of personal memory was to Murdoch. In the wars of the last century, both civil and international, the destruction of cultural memory became a central strategy in subduing civilian populations.

How will society respond flexibly, inventively, optimistically to the increasing pace of change if we lose our imagination?

Outsourcing more and more knowledge to computers will be no better or worse for us personally and collectively than putting ink on paper. What is important in the digital age, as it has been for the print era, is that we maintain an equilibrium between managing our personal memory and assuming responsibility for collective memory.

In the twenty-first century that means building libraries and archives online that are free and open to the public to complement those that are on the ground.

By extracting invaluable information from our use data, they create algorithms that predict our desires and streamline production facilities that offer to fulfill them even before we can articulate them—the “if you like this, you will like that” magic of user-driven algorithms. On the one hand, these shortcuts to gratification work for us because they save us so much time. On the other hand, we end up not with more freedom of choice but less, and the results can be easily gamed without our knowledge.

The trade-off between choice and convenience is always there.

“slow thinking,” as opposed to instinctual reactions, “fast thinking.”

most of our personal digital memory is not under our control.

We view our Facebook pages and LinkedIn profiles as intimate parts of ourselves and our identities, but they are also corporate assets.

The fundamental purpose of recording our memories—to ensure they live on beyond our brief decades on Earth—will be lost in the ephemeral digital landscape if we do not become our own data managers.

The skills to control our personal information over the course of our lives are essential to digital literacy and citizenship.

Sound recording is a recent technology, the first recording made in 1860. Despite its youth, in many ways audio is far more vulnerable to decay and loss than parchment manuscripts that have survived for two thousand years.

The forensic imagination means that there are now almost limitless possibilities for the extraction of information from any piece of matter, no matter how fragile.

The new paradigm of memory is more like growing a garden: Everything that we entrust to digital code needs regular tending, refreshing, and periodic migration to make sure that it is still alive, whether we intend to use it in a year, one hundred years, or maybe never. We simply cannot be sure now what will have value in the future. We need to keep as much as we can as cheaply as possible.

Beyond the problem of sheer scale, there are formidable social, political, and economic challenges to building systems that effectively manage an abundance of data, of machines, and of their human operators. These are not technical matters like storage and artificial intelligence that rest in the hands of computer scientists, engineers, and designers. They are social. Digital infrastructure is not simply hardware and software, the machines that transmit and store data and the code in which data are written. It comprises the entire regime of legal and economic conditions under which these machines run—such as the funding of digital archives as a public good, creating a robust and flexible digital copyright regime, crafting laws that protect privacy for digital data but also enable personal and national security, and an educational system that provides lifelong learning for digital citizens. We need to be competent at running our machines. But much more, we need to understand how to create, share, use, and ultimately preserve digital data responsibly, and how to protect ourselves and others from digital exploitation.

in the digital age, the fundamental mission of libraries and archives to preserve and make knowledge accessible is at risk because there is no effective exemption from copyright law that covers the specific technical preservation needs of digital data.

It is unrealistic to assume that in market capitalism, publishers, movie studios, recording companies, and other commercial enterprises will preserve their content after it loses its economic and commercial value or becomes part of the public domain.

The World Wide Web is not a library. It is a bulletin board.

It will be a challenge to re-create the traditional public library online, because a public library exists in large part to provide access to contemporary copyrighted materials.

Only decades of living with digital memory will reveal how reading on a screen differs from reading on a page, how digital audio recording affects our acoustical sensibilities, and how the inescapable ubiquity of information that chases us rather than the other way around alters our habits of thought.

Our twin aspirations—to be open yet to protect privacy, to embed democratic values in our digital code to support the public good while fostering competition and innovation in the private sector—will clash repeatedly.

And so it is with our artificial memory. The more fragile the medium, the more redundancy we need. Nothing we have invented so far is as fragile as digital data.

The web has the scope of a comprehensive library, but it lacks a library’s rules of access and privacy.

Much web content is inaccessible behind paywalls and passwords. Readers leave vivid trails of where they have been online in their browser history.

To reinvent something like the Library of Congress or Alexandria online, we would begin with an Internet that provides easy access to information, make it searchable by something more comprehensive than Google, and add the crucial back end of a network of Internet Archives to ensure persistence of data. Readers and researchers would have use of cheap, ubiquitous fiber connection, regulated as a utility to ensure equitable access. The reading room of the public library, open to all and allowing privacy of use, would now be complemented by similar spaces on the Internet.

(Today a web page lasts on average for one hundred days before changing or disappearing altogether.)

Search engines, whose business is built from the ground up on the reuse of other people’s data, also stake their future on managing and preserving the data they harvest.

The Greeks saw imagination as a divine dispensation from ignorance, a gift from the goddess of memory.

it encourages the pursuit of curiosity for its own sake and democratizes it.

we will take advantage of outsourcing logical tasks to our machines to free up time for more imaginative pursuits.

Our machines will not grow a moral imagination anytime soon. They must rely on ours.

Today, we see books as natural facts. We do not see them as memory machines with lives of their own, though that is exactly what they are.

Audrey Watters

Notes from The Global Pigeon

8 min read

I haven't finished reading The Global Pigeon yet, but I'm starting to pull together some thoughts on pigeons for a keynote this fall. I'll update this post when I finish the book. The following are passages I've highlighted:

“matter out of place.”

Rather than seeking communion with nature, Carmine absconded to his rooftop with a bit of nature’s raw material and relished his power to sculpt the pigeons, through selective breeding and training, according to his will.

the “social self.”

how cross-species encounters can in fact be a constitutive feature of social life in the city.

the pigeon coop was simultaneously an embodiment of the men’s nostalgia for a lost world and an organizing principle of their present-day social relations and identity.

this ritual led pedestrians to interpret the pigeon flocks as rightful—even celebrated—residents of the squares, and that the decision in both cities to evict the seed vendors, redefine pigeons as “rats with wings,” and ban pigeon feeding was part of a larger political project aimed at tackling a host of “quality of life” issues.

the ways that encounters with pigeons mediated people’s experience of urban spaces.

The pigeon is a felicitous animal for exploring the social significance of cross-species urban entanglements.

pigeons have in effect become naturalized urban citizens.

Their presence on city streets is utterly pedestrian, in both senses of the word.

Commonly referred to as “rats with wings,” a label meant to characterize them as filthy vectors of disease,

[… pigeons, though it was decommissioned after mauling a pedestrian’s Chihuahua. …]

People have not always deemed pigeons “nuisance animals” or sought to reduce their numbers. And it is only in the last century or so that pigeons have come to be considered distinct from doves.

“Rock pigeons,” also called “rock doves” (Columba livia), were first domesticated about 5,000 years ago, and for millennia humans selectively bred them to meet a variety of material needs.

The legacy of these efforts is omnipresent: the gray pigeons with black-barred wings and iridescent neck feathers that occupy city streets worldwide today are rock pigeons’ feral descendants.

Pigeons partly domesticated themselves.

“reproductive magic,”

Their gregariousness and docility also made them fitting symbols of peace (their predators, hawks, stand for aggression).

In feudal times, pigeon meat and guano were deemed so valuable that the right to own a dovecote was restricted to nobility—no wonder, then, that so many of these ornate structures were toppled in the wake of the French Revolution.

Over time, humans bred larger and fatter varieties for food, and stronger and leaner varieties with enhanced “homing” instincts (called homers) that could carry messages over hundreds of miles.

Pigeon fancying was so popular, and the variation in shapes, sizes, and colors that fanciers had produced through centuries of selective breeding was so extraordinary, that Charles Darwin opened On the Origin of Species with an exhaustive genealogy of so-called “toy” pigeon breeds.

“Everybody is interested in pigeons.”

The pigeons that occupy our sidewalks never existed in the wild. They are descendants of escaped domesticated pigeons that were imported to the United States, Europe, and elsewhere centuries ago.

French settlers who introduced the rock dove to North America in the early 1600s, primarily for consumption.

Society, then, has abetted an animal whose niche is one designed to be the exclusive habitat of humans: the sidewalk.

nonnative, feral birds—neither purely wild nor domestic—now confront humans as our own historical detritus

Synanthropes like pigeons challenge the conventional notion that urbanization has insulated people from contact with animals and nature.

biophilia

the social experience of animals.

Animals were central to totemic belief systems not primarily because they embodied nature but because they were useful symbols for expressing the relationship between self and society.

Conceiving of nonhumans as separate from society—the corollary of Nature Lost—precludes a sociological understanding of how they are incorporated into contemporary social life.

Animals become pets or pests through social processes of interaction and classification.

While sociologists recognize that our sense of self is created through interaction, prevailing accounts of the “social self” neglect the role that nonhumans play in its construction.

we most enjoy having nature in our midst when we can exercise our “impulse to reduce—and thereby, order and control” it.

The edges of the city and nature continually rub against, and run over, each other like tectonic plates. The interaction may be smooth, or tremors may result; and the fault line can widen, narrow, and shift as an effect of the encounter. By straddling the fault line, we can begin to understand how the borders and contours of urban experience are shaped through cross-species encounters.

how animals can become part of what sociologist Erving Goffman called the “interaction order”36 of public space, and how social contexts structure whether or not people welcome the presence of the “wild” in city streets.

“hybridity,”

the capacity of pigeons to recognize regular feeders and coax unwitting park visitors into feeding them, behaviors that reflected the birds’ adaptation to the demands and opportunities of city living.

pigeons are “able to learn quickly from their interactions with human feeders” on the street and “use this knowledge to maximize the profitability of the urban environment,” discriminating between friendly feeders and hostile pedestrians and adopting begging strategies that elicit food from strangers.

the social significance of pigeon feeding.

pigeons, like chickens, are naturally inclined to scavenge for food by pecking the ground—not flowers, trees, and shrubs (which is why they walk rather than hop).

However different its intentions, the pigeon still confronts the feeder as an active agent, what philosopher Martin Buber would call a “thou,” “a truly subjective other whose immediate presence is compelling.”

a subtle tension at work, then, when one enters “open regions” such as parks. By putting oneself in a public place, the individual is open to the desirable possibility of engaging in sociable encounters with strangers and open to unwanted social entanglements or even social isolation. We are made vulnerable.

born out of a situational response to solitude in a moment of unstructured time.

While space refers to an area’s physical properties, place refers to its social meanings. Places embody history.

The mythical connotations of pigeon feeding in the square, and the ways that the “socialized” birds provided tourists with spontaneous and novel interactions, endowed the Piazza San Marco birds with a magical quality.

The pigeons’ alleged filth was tied to the filth and congestion that the city saw as choking the square.

By witnessing and participating in the feeding sessions over the next two weeks, I saw that these episodes were a time to eulogize, and briefly reconnect with, a place that no longer existed.

the Pigeon Action Group.

Collective memory haunted the space, fostered by diehard feeders, pigeons still tame enough to land on visitors, and the countless images of pigeons in the square still circulating in the media and literature.

A major part of how pigeons were framed as a problem in both squares was by being hitched to the mayors’ “quality of life” agenda, which sought to sanitize and bring greater order to public space.

1966 New York Times article,

‘a rat with wings.’”

our social and moral evaluations of animals are contingent on where they are found.

humans work to ensure that animals stay in their “proper place.”

Drawing on the anthropological insights of Mary Douglas, Philo claims that animals that “transgress” the “socio-spatial order” that humans have constructed around them become interpreted as “matter out of place.”

Like weeds in the cracks of pavement, pigeons represent chaotic, untamed nature in spaces designated for humans.

Their metaphorical “pollution” of city streets becomes crystallized through their link to humanity’s literal pollution—trash.

Part of our aversion to pigeons, then, stems from cultural insecurities about proximity to dirt and impurity. Mary Douglas argued, “In chasing dirt, in papering, decorating, tidying, we are not governed by an anxiety to escape disease, but are positively re-ordering our environment, making it conform to an idea.”

The pigeons of Trafalgar Square and Piazza San Marco are objectively different than most other street pigeons—they have been tamed.

In the process, they have become fully dependent on people for food and have stopped scavenging. Cultural products in a literal sense, their habits and their large flock sizes are a result of ritual human practices that engender a tradition. Most of the birds cannot survive without interacting with people, leading animal rights activists in both cities to ask the thorny question of whether people owe anything to these animals that they have made into urban scavengers. Once granted a space and a food supply, do these animals deserve a place in the city?

Audrey Watters

The Week in Robots

2 min read

The Army Wants You to Make Its Soldiers Pocket-Sized Drones

Drones Marshaled to Drop Lifesaving Supplies Over Rwandan Terrain

China’s Companies Poised to Take Leap in Developing a Driverless Car

These Aren’t the AIs We’re Looking For: Why We Need to Be Smarter About Intelligent Design

DJI, 3DR, Parrot and GoPro form new drone advocacy group

Terrifyingly Convenient

To Beat Go Champion, Google’s Program Needed a Human Army

Artificial Intelligence & Real Journalism

See this bionic dog from 1959

Why Rwanda Is Going to Get the World’s First Network of Delivery Drones

Restaurants Are Firing Their Robot Waitstaff for Poor Performance

A $2 Billion Chip to Accelerate Artificial Intelligence

Amazon Acquires Image Analysis Startup Orbeus

automate work, not people

Robots Probably Don’t Feel Great About You Grabbing Their Asses Either

Bots won’t stop

Rude Bot Rises

Marvin and The System

I’ll stop calling algorithms racist when you stop anthropomorphizing AI

Meet TacoBot – giving you a peek into the future of ordering tacos.

The Nailbot, A Smartphone Enabled Robot That Prints Custom Designs Directly Onto Fingernails

Investing With a Conscience, but Done by a Robot

Why We Can Wait For Amazon’s Drones

Self-Taught Robot Is Ready to Seize Another Warehouse Job

Are robots taking our jobs?

Robots jumpstart learning in some US districts

Robot Tasks & Creative Brains

You Can’t Make Me Call It A Robopus

Here Come the Marketing Bots

Toyota Joins the Race for Self-Driving Cars with an Invisible Copilot

21st-Century SmelloVision and the Truly Tricky Problem of Artificial Olfaction

SpaceX just landed a rocket on a drone ship for the first time

Facebook’s F8 Conference: Chat Bots Expected to Take Center Stage

Bistrobot

Wanted: Creative Humans to Make AI Personalities Sparkle

Alphabet’s newest robot is a leggy diva that walks with a confident strut

Turing Tests and the Problem of Artificial Olfaction

(Image credits: via Flickr, Twitter)

Audrey Watters

Link-jacking in Ed-Tech

1 min read

I've spent the last month or so researching the blockchain and its potential applications in education. I've spent the last week or so typing up my thoughts. This morning, I published a 5000-word piece -- an introduction to the blockchain, aimed at the ed-tech audience.

A few hours later, Edsurge published this:

Oh sure, I'm linked if you click to "read more." My 5000 words of research are summarized in a sentence or two. But what Edsurge, a site with more readers and more Twitter followers and millions more in VC dollars than Hack Education, has done is effectively redirect traffic to their page and away from mine.

That's some shady, disrespectful shit right there.

One of the things I always appreciate about Stephen Downes' OLDaily is that when he features others' work, he puts a redirect in the headline so that clicking on the link in question takes you to the original content. That strikes me as much more ethical online behavior, one that builds a community of thinkers and writers in ed-tech rather than one that tries to funnel all news and information into one centralized location.

Audrey Watters

The Blockchain in Education: Even More Links

1 min read

I’m spending the weekend (finally) writing up my research on the blockchain and education. The drumbeat of why blockchain will be The Next Big Thing continues, as does my practice of linking to various articles touting its benefits. (You can find previous articles in this series here, here, here, and here.)

Even more links:

Blockchain Technology and Progress in Science

Ethereum, a Virtual Currency, Enables Transactions That Rival Bitcoin’s

Could blockchain technology revolutionize the music industry?

1,000 Bitcoin Wallets Won’t Replace One Financial Revolution

Will the blockchain help unlock the IoT revolution?

Own Your Achievements: Three Ways Blockchain Tech is Disrupting Education

(Image credits)

Audrey Watters

The Week in Robots

3 min read

Image credits: The Kitchen Robot of Tomorrow

Is AlphaGo Really Such a Big Deal?

Recalling Asimov’s Three Laws of Robotics in the Age of Machine Interoceptors…

Trump Deep Nightmare: Google’s Deep Dream AI run against a Donald Trump speech

The Bot Rulebook

The chat bots are coming, and they will take your jobs

How fairy tales could stop killer robots from taking over the world

The biggest mystery in AI right now is the ethics board that Google set up after buying DeepMind

The real cost of robotics

Peer-reviewed online expert system will help you if you’ve been poisoned

Tay Exposes the Fairy Tales We Tell Ourselves About Racists

It’s Game Over for the Robot Intended to Replace Anesthesiologists

Software That Reads Harry Potter Might Perform Some Wizardry

An Artificial Intelligence Comes to Terms With Sentience and Humanity in the Short Film ’27’

Sorry, Shoppers: Delivery Drones Might Not Fly for a While

If All Else Fails, 3D Models and Robots Might Rebuild Palmyra

How We’re Unwittingly Letting Robots Censor the Web

Whisper’s Master Of Content Moderation Is A Machine

The Age of Autonomous Robots Is Upon Us

Automated Anesthesiologist Suffers a Painful Defeat

Robo-Adviser Betterment Gets $100 Million in Venture Capital

Microsoft will unveil an army of AI bots today

After we make peace with robots doing all the work, will our lives have meaning?

Trains Botting: twitterbot posts a new emoji train landscape every 4 hours

Microsoft Says Maverick Chatbot Tay Foreshadows the Future of Computing

How to Fix Microsoft’s Offensive Chatbot Using Tips from Marvin Minsky and Improv Comedy

How Google DeepMind Plans to Solve Intelligence

Microsoft video shows how a blind software engineer uses AI to ‘see’ the world

Why Do So Many Digital Assistants Have Feminine Names?

An A.I. Competed for a Literary Prize, but Humans Still Did the Real Work

Clippy’s Back: The Future of Microsoft Is Chatbots

Microsoft’s neo-nazi chat bot rides again

Conversation as the new UI: Microsoft makes its chatbot pitch at Build 2016

Microsoft Says Chatbots are the Next Big Thing

Could AlphaGo Bluff Its Way through Poker?

Automated drug cabinets have 1400+ critical vulns that will never be patched

AI and Push Learning for Student Guidance and Advisory

Will the Next J.K. Rowling Be a Robot?

Gorgeous Multi-Drone Footage of Colorful Smoke Bombs Painting the Sky

This Scarlett Johansson robot is a crushing end to another week

Audrey Watters

Ed-Tech: Singular or Plural?

2 min read

Phil Hill has a new op-ed at The Chronicle of Higher Education (also re-posted to the E-Literate blog) about the need for ed-tech to support “vastly different types of professors,” not just the “ed-tech enthusiasts” but the “mainstream professors” as well.

Let’s admit it, there can be some real tension when a college is faced with choosing a new learning-management system, or any software used by more than one department.

Since the decision involves the administrators who will support the system – commonly called an LMS – and professors who will use it, who should lead the process? Should staff members just get input from faculty members, or should professors vote on the final decision? Or should professors run the process?

After our weekend at the Indie Ed-Tech event at Davidson, the use of the singular article to describe ed-tech adoption really struck me. An LMS choice. “The system.”

At Davidson, the folks from BYU’s CIO office – who I think are breaking new ed-tech ground in many interesting ways – told us that they support all LMSes, whichever the instructor chooses to adopt. In order to make this less burdensome for students, who now have courses scattered across multiple platforms, they’ve built an interface where notifications and calendaring can be centralized for each student.

This gets at the importance of APIs, no doubt: the ability to have modular, interchangeable, interoperable pieces of software that can be assembled or ignored as needed. It’s certainly a very different approach than adopting one single enterprise system that will, inevitably, fail to meet everyone’s needs, no matter how much you try to support “vastly different types of professors.”
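To make that modularity concrete, here is a minimal sketch in Python. The names and interfaces are invented for illustration only – this is not BYU’s actual implementation – but it shows the general idea: each LMS sits behind the same small interface, and a student’s unified feed is just the merge of whichever sources that student happens to use.

```python
# A sketch of the "interchangeable pieces" idea: every notification source
# (each LMS, hypothetically) exposes the same tiny interface, and the
# student's feed is simply the merge of all of them, soonest-due first.
# All names here are hypothetical, invented for illustration.

from dataclasses import dataclass
from datetime import datetime
from typing import List, Protocol


@dataclass
class Notification:
    source: str        # e.g. which platform it came from
    message: str
    due: datetime


class NotificationSource(Protocol):
    def notifications(self, student_id: str) -> List[Notification]: ...


def unified_feed(sources: List[NotificationSource], student_id: str) -> List[Notification]:
    """Merge every source's notifications into a single feed, soonest first."""
    items = [n for source in sources for n in source.notifications(student_id)]
    return sorted(items, key=lambda n: n.due)
```

The design choice is the point: any platform that can answer "what does this student need to know, and when?" can be plugged in or dropped without touching the rest.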

But BYU’s approach is notable too because it isn’t simply an effort to accommodate “vastly different types of professors”; it’s also about rethinking education technology infrastructure so that it supports students’ diverse needs via a personal API. And to quote Kin,

No single API implementation, tool, service, specification or standard will set into motion the change that is needed. The real API story is about empowering every single individual to take control over their own digital self using APIs....

Indie ed-tech are a plural noun.

(Image credits)

Audrey Watters

How Do Schools Affect Autodidacticism?

3 min read

On the heels of my recent article on lifelong learning – a response to Pew’s recent study on the topic of “professional” and “personal” learning – Will Richardson prompted me to articulate my thoughts more clearly about schools’ effect on autodidacticism.

One way to think about this, of course, is in terms of whether or not schools encourage students’ agency, autonomy, curiosity. (And more precisely which students are encouraged and allowed to be intellectually inquisitive, and how and why.) But I want to come at it a slightly different way…

I want to think about this in terms of the sociological grammar of Will’s query. To put it another way: what are the effects of an institution on an -ism?

I want to start here because I want to be able to tease out how much of what we identify and in turn praise about autodidacticism is often more (or at least as much) about signaling cultural capital than it is about having a capacity for or even an interest in self-instruction.

What do we mean by “self-directed learning”? “Self-made,” “self-taught” – these are such iconic phrases describing a certain type of American individualism, for a certain type of man. These adjectives work in contrast to “self-radicalized,” a phrase used to describe a certain sort of “self-directed learning” that veers dangerously in the “wrong” direction.

So again, the question: what are the effects of an institution on an -ism?

Is autodidacticism about how you get to know something? Or is it about what you know? Is it who you know? Is it how you know? (Or rather how you demonstrate knowing?)

Is autodidacticism an aptitude or an attitude? A behavior? A predilection? A performance?

Is autodidacticism a signal of learnedness?

I don’t write this to suggest that humans are not “natural learners.” But we should unpack what exactly that means, when that word “natural” gets applied. (We should always ask questions when that word "natural" gets applied to social practices.) Learning might be a biological process, sure; but it is also culturally constructed. And it’s culturally constructed within and without and against and inside and outside of institutions – that is, within and without and against and inside and outside of schools.

I find Tressie McMillan Cottom’s framing of the “roaming autodidacts” – the idealized students imagined by education technology – to be particularly useful partially because it highlights the assumptions made about the cultural construction of "good learners." It's an insightful framework too because that modifier “roaming” helps to underscore the ways in which what we perceive as educational self-directed-ness is often intertwined with a certain untethered-ness – the ability to navigate markets, places, spaces with little or limited risk. This, in turn, is a reflection of socioeconomic structures, of course. And it prompts me to ask not only how ed-tech might privilege autodidacticism but how much of autodidacticism is about privilege.

(Image credits)

Audrey Watters

The Week in Robots

3 min read

Peak robot?

The most expressive robot in the world

The rise of greedy robots

A Japanese AI program just wrote a short novel, and it almost won a literary prize

They said it couldn’t be done: Teaching robots good taste

South Korean Government Announces Nearly $1 Billion in AI Funding

Breaking Down The Drone Stack

Artificial Intelligence has crushed all human records in 2048. Here’s how the AI pulled it off

Some Artificial Intelligence Applications Are Making Humans Better People

Robots R’ Us: Funding And Deal Activity To Robotics See New Highs In 2015

Would You Trust A Robot To Rescue You From A Burning Building?

Google Hasn’t Given Up on Robots

Meet the queen of useless robots: ‘The internet is a weird place’

Chatbots for First-Year Student Success

Live stream of autonomous virtual deer wreaking havoc in fictional California town

Japanese robot die always rolls a six

What Is a Robot?

A Lifelike Robot Agrees to Destroy Humans During an Interview at SXSW

Alphabet Dismantling Robotics Effort; Seeks Buyer for Boston Dynamics

Google Is Sharing Its Powerful AI With Everyone in Its Cloud

When Animals Attack … Drones

One Genius’ Lonely Crusade to Teach a Computer Common Sense

Microsoft’s New AI Chatbot Speaks Like a Teen Because Some People Just Want to Watch the World Burn

Microsoft’s ‘teen girl’ AI turns into a Hitler-loving sex robot within 24 hours

Microsoft silences its new A.I. bot Tay, after Twitter users teach it racism

Tay, this weeks “It” bot

It’s Your Fault Microsoft’s Teen AI Turned Into Such a Jerk

Hey Microsoft, the Internet Made My Bot Racist, Too

Why Microsoft Accidentally Unleashed a Neo-Nazi Sexbot

Bad parent

Microsoft Created a Twitter Bot to Learn From Users. It Quickly Became a Racist Jerk.

How Microsoft’s teen Twitter bot turned into a racist nightmare

After Tay’s very public crazy racist Nazi sexbot breakdown, Microsoft’s like, ‘Tay-a culpa, guys’

Mechanomorphs and the politeness of machines

Jerks were able to turn Microsoft’s chatbot into a Nazi because it was a really crappy bot

Learning from Tay’s introduction

How to Make a Bot That Isn’t Racist

The Race Is On to Control Artificial Intelligence, and Tech’s Future

The History and Evolution of the Popular Dance Commonly Known as ‘The Robot’

(Image credits)

Audrey Watters

#IndieEdTech: The Compilation

2 min read

One of the things that I wanted to underscore in my recent keynote at Davidson College as part of its Indie Ed-Tech Data Summit was how very much I appreciate being part of a group of ed-tech thinkers who write publicly, openly on their own domains. And just look at the stories that have emerged in the past week* from some of the event's participants:

Alan Levine: Indie Ed Tech, Colleagues/Friends, APIs, Unexpected Emergent Ideas, and dot dot dot

Jim Groom: The Personal in Indie; And that, my friends, is the Personal API

Adam Croom: Framing Indie EdTech; Indie EdTech Design Sprint; Indie EdTech: Future and Funding

Audrey Watters: Indie Ed-Tech: Revue/Reflections

Tim Owens: Reflections on Indie Ed-Tech

Tim Klapdor: A Journey to discover what is Indie Ed-tech

Tom Woodward: Pushing/Pulling Data – Thinking Computationally? Differently? ?

Olga Belikov: #IndieEdTech Personal APIs & The Current State of Ed-Tech; #IndieEdTech Design Sprint;  #IndieEdTech Keynote Reflections

Kelly Flanagan: Indie Educational Technology

Kristen Eshlehman: Indie Ed-Tech

Andrew Rikard: How Personal APIs Let Students Design Their University

Kin Lane: This is How APIs Will Deliver the Change We Need; Indie Ed-Tech, The University And Personal APIs: Drawing Lines In The Sand To Define Our Digital Self

* March 31: I will continue to update this post as new articles emerge.

(Image credits)

Audrey Watters

Grief and Love and Writing (about Education)

1 min read

One of my favorite sports writers, Jessica Luther, was a guest on a recent episode of The Garbage Time Podcast with Katie Nolan. (You can listen via Soundcloud.) I follow Jessica’s work really closely, particularly her coverage of college sports and sexual violence. Too often, those of us who talk about/write about/work in “ed-tech” fail to place it in the context of institutions or culture, and frankly it’s hard for me to think about higher education without also thinking about some of the issues that sports bring to the forefront: labor, race, gender, loyalty, violence.

On The Garbage Time Podcast, Jessica and Katie talked about the struggles of writing about (or watching) something you love – that is, in their case, sports – when it continues to be beyond disappointing – dangerous, exploitative, wrong. How do you manage your feelings about this when it’s your work, when the thing you love is also this thing that makes you angry, when you have to grieve as you write…

(Image credits)

Audrey Watters

The Sustainability of Startups

2 min read

As my own records about ed-tech funding only go back a year, I’m struggling to answer a question: which ed-tech startups have not raised money in the past couple of years? That’s a prelude to the real question, I suppose: which ed-tech startups might be in danger of running out of money?

So far this year, investment in ed-tech is way, way down. And it’s not just ed-tech either. Across the tech sector, “startup funding is drying up.”

I’ve been a little obsessed the last few days, I admit, with the napkin math – well, mostly speculation – about how much money ed-tech startups are burning through and how quickly. The recommendation, according to venture capitalist Fred Wilson (back in 2011 at least): have at least $10,000 per employee per month in the bank.

Pick a startup. Look up how much money it’s raised (via, say, Crunchbase). Look up how many people are on its team (via, say, the company website). Do the math.

Add to the equation: does the startup have revenue? Could that revenue offset the burn? (How transparent is the startup about its funding and its revenue?)
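For the curious, here is a rough sketch of that napkin math in Python. Every number and the function itself are made up for illustration; the only "real" input is the roughly $10,000-per-employee-per-month rule of thumb mentioned above.

```python
# Napkin math: roughly how many months of runway might a startup have left?
# All figures below are hypothetical -- plug in whatever Crunchbase and the
# company's own site suggest.

def months_of_runway(total_raised, estimated_spent, employees,
                     monthly_revenue=0, burn_per_employee=10_000):
    """Rough runway estimate using the ~$10k/employee/month rule of thumb."""
    cash_left = total_raised - estimated_spent
    monthly_burn = employees * burn_per_employee - monthly_revenue
    if monthly_burn <= 0:
        return float("inf")  # revenue covers the burn, at least on paper
    return cash_left / monthly_burn

# A made-up example: $8M raised, maybe half of it already spent,
# 30 employees, $50k/month in revenue.
print(round(months_of_runway(8_000_000, 4_000_000, 30, 50_000), 1))  # -> 16.0
```

It is speculation, of course – we rarely know how much a startup has actually spent or earns – but it gives a sense of why the funding slowdown matters.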

Also worth asking: how often do you see this company at conferences? Is it a sponsor? Does it have an exhibit booth? The cost of these expenses, after all, can range from the tens to hundreds of thousands of dollars a pop.

Are there other signals that we can look for to gauge the health of a company? Perhaps. Perhaps blog and social media activity (although these can easily be “fluffed”); new feature releases; pivots and re-branding; and/or staffing levels and changes (including reviews from employees via Glassdoor).

(Image credits)

Audrey Watters

Indie Ed-Tech: Revue/Reflections

13 min read

Kin and I spent Friday and Saturday at Davidson College at an “Indie Ed-Tech Data Summit.” I have a ton of thoughts about content, community, process, definitions, outcomes, funding that I thought I’d quickly write down while it’s all fresh in my mind. (Heh. This is already 2000+ words.) Several of these issues are worth writing up in more detail – and I probably will. Eventually.

The Keynote: More Liner Notes

Here’s a link to the transcript of my talk. Here’s a link to the slides on SpeakerDeck. Here’s a link to a soundtrack (with some of the songs that I mentioned in the talk) on Spotify.

There were several major points that I hope my talk made: 1) “open” scholarship matters for knowledge-building (I hope that those of us working in Indie Ed-Tech model this by posting our thoughts on our blogs). 2) students should be given the technological tools to participate in open scholarship and knowledge-building – that is, at the very least, students should own their own domains. 3) controlling your own scholarship – content, IP, data, metadata, privacy levels, security, identity formation and performance, community participation – is crucial. 4) ed-tech industry interests want to shape and control all of these, and that’s a dangerous (or at least very very dull) proposition. We can build something different.

Since Jim Groom coined the term “edu-punk” – co-opted almost as quickly as British punk rock was – musical references have been key metaphors in talking about alternatives to “mainstream” and “corporate” education technology. I tried to extend the metaphor in this keynote, building off a talk that Jim and Adam Croom gave at Stanford last fall, referencing in particular the algorithmic promises made by the music industry. How do you identify the perfect hit song? How do you make the perfectly personalized playlist? The push for the latter – “personalization” – is clearly evident in ed-tech. What will be the results of an algorithmic “personalization”? What will the future of education look like (or to use the music metaphor, “sound” like)?

Speaking of sound, when I set about writing a talk to deliver “live,” I think quite differently about the rhetorical moves I’m going to make than I do when I’m writing something that others will simply read on their own. Arguments in keynotes and public lectures aren’t quite the same – or needn’t be quite the same – as arguments in essays. (Some of this has to do with the performance element; some of this has to do – ugh – with having slides.) The challenge then is when I do publish the transcript of a talk, and it becomes an essay. There are expectations from readers about what an argument in an essay is supposed to do, what it’s supposed to contain, what “proof” looks like (or to use the music metaphor, “sounds” like).

And perhaps that’s why so many men have reached out in the last day or so to explain algorithms to me. I just didn’t give math enough due in my talk apparently. Bonus points for the email that started with “maybe you don’t get how algorithms work…”

The Design Sprint

This was the third or fourth (or more?) time that I’ve attended a hackathon-of-sorts around the issues of Indie Ed-Tech, and this one benefited greatly from the “design sprint” process led by Known’s Erin Richey and Ben Werdmüller. In the past, these events have spent a lot of time talking about the tools and initiatives that we all were building, and I did miss hearing about what other schools were up to – I’m particularly interested in the Personal API work that’s happening at BYU and in the Domain of One’s Own initiatives that are spreading across campuses. But in some way, it was smart to skip that part and to work instead on ideas and possible projects that emerged from the activities Erin and Ben led us through.

I was in a small group with Adam and with Alan Levine who’ve both already blogged about what we did, how we struggled, what we pitched: Alan’s take and Adam’s take. I believe our idea – a Quora, of sorts, for first generation college students – is a pretty good one and fairly do-able at that. As both Alan and Adam write, our group spent a lot of time talking about the importance of mentoring relationships to students, and we spun our wheels a lot because we refused to technologize that. (Note: it probably really sucks to be stuck with me in your design sprint. Apologies, group.)

For me, one of the most important features of the ed-tech thingy we sketched out was that it was not designed for those whom Tressie McMillan Cottom calls “roaming autodidacts.” Nor was it designed to predict and mold the non-autodidact into some algorithmically preferable (and more profitable) version of “college student.”

What Is Indie Ed-Tech?

It’s easier for me to list what Indie Ed-Tech is against, what it is not, what it most certainly aspires to never ever be. (This, of course, brings up the charge that, as a critic, I’m only interested in tearing things down, not interested in building. That’s silly, simplistic bullshit, but I’m used to the accusation by now.)

Ed-tech need not be exploitative. Ed-tech need not be extractive. Ed-tech need not be punitive. Ed-tech need not be surveillance. Ed-tech need not assume that the student is a cheat. Ed-tech need not assume that the student has a deficit. Ed-tech need not assume that learning can be measured or managed. Ed-tech need not scale.

I know that as this Indie Ed-Tech thing moves forward, there will be attempts to define what it is and not just what it is not. (No doubt, many of the things that Tim Klapdor lists in his write-up from the weekend are fundamental – infrastructure plus scholarship, agency, and autonomy.)

But for me at least, I’m okay with leaving things a little open-ended, emergent, messy, and undefined – or at the very least, not so circumscribed it’s quickly or easily co-opted and sold right back to us as the latest ed-tech upgrade or Gates Foundation-funded effort.

What Is a Personal API?

After I gave my keynote on Friday afternoon, Kin led a workshop on APIs. (GitHub repo. Click on the link. Seriously. Kin’s work is amazing, and I never cease to be impressed with how comfortable he makes non-technical people feel about working through highly technical concepts.)

Like Indie Ed-Tech, the shape/design/meaning of “Personal API” remained largely undefined this weekend. Rightly so, perhaps. There’s certainly a tension here between something an institution might provide for students and faculty and something that each of us should want to weave together to suit our own needs. (Note: I said “tension.” These two aren’t necessarily in opposition.)

All attempts to avoid definitions aside, for me, an essential element of Indie Ed-Tech does involve control of one’s “personal cyberinfrastructure.” It’s not simply about controlling one’s domain/data/content as the end-goal; but it is about recognizing how control of these elements is always intertwined with questions/practices of knowledge, identity, and power.
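
To make “control” a bit more concrete, here is a minimal sketch of what one corner of a self-hosted Personal API might look like. The endpoint names, scopes, and tokens are all hypothetical – this isn’t any particular spec – but the point is that the data and the access rules live on infrastructure the person owns:

```python
# A hypothetical, minimal self-hosted Personal API: the owner decides which
# pieces of their "lifebits" are exposed, to whom, and under what scope.
# Endpoint names, scopes, and tokens here are illustrative, not a standard.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# The owner's data lives on infrastructure they control.
PROFILE = {"name": "Jane Scholar", "domain": "janescholar.example"}
WRITING = [{"title": "On Indie Ed-Tech", "url": "https://janescholar.example/indie"}]

# Owner-managed grants: which token may read which scopes.
GRANTS = {
    "token-issued-to-my-university": {"profile"},
    "token-issued-to-a-friend": {"profile", "writing"},
}

def allowed(scope):
    token = request.headers.get("Authorization", "").replace("Bearer ", "")
    return scope in GRANTS.get(token, set())

@app.route("/profile")
def profile():
    if not allowed("profile"):
        abort(403)
    return jsonify(PROFILE)

@app.route("/writing")
def writing():
    if not allowed("writing"):
        abort(403)
    return jsonify(WRITING)

if __name__ == "__main__":
    app.run()
```

In the institution-vs-individual tension above, the interesting question is simply who gets to edit that grants table.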

I once made a joke – one that Kin will never let me live down – that APIs reduce everything to a transaction. I’d like to crack open what we mean – technically, philosophically – by “transaction” so that it isn’t simply an invocation of economics. So that its synonym “interaction” isn’t simply “clicking.” Perhaps we can think of the Personal API in terms of “reciprocity.” Perhaps “exchange.” Perhaps, as Mark Sample used the word “transaction” to describe a comment I made in my keynote, “social contract.” That raises many questions, in turn, again, about information, social structures, and power.

Funding Indie Ed-Tech

I got word today that a decently-funded startup – it has raised over $7 million in the two-and-a-half years of its existence – has run out of money. While the co-founders will, for the time being, work to “keep the lights on,” it’s fired all its staff – some 30-odd employees.

So let’s dismiss from the outset this notion that, in order for ed-tech to be sustainable, it has to be venture-funded. Indeed, by their very definition, tech startups are not sustainable: they are high risk, and although the return on investment might be high if a company is acquired or has an IPO, neither of those are particularly common occurrences. Most startups fail.

On the final night of the Indie Ed-Tech Data Summit, Known’s Ben Werdmüller led a discussion about funding, one that he admitted was an attempt to make a case for a Silicon Valley investment model for Indie Ed-Tech, one patterned perhaps off of Matter, a media-focused startup incubator program that’s invested in Known. It does sound as though the startups that are backed by Matter work closely with the organizations and companies that are limited partners in the fund. These organizations – as investors – shape the types of products that their portfolio companies develop, partially through the design process that the attendees at the Indie Ed-Tech event went through. What would it look like, Ben asked us to consider, if universities became investors in ed-tech startups? What kinds of products would be built?

Of course, universities do have policies and practices in place for profiting from the technologies that are developed on campus. We most often think about this in terms of work done in science labs. How “Indie” (or “open”) could ed-tech be once the universities’ patent attorneys get involved? I don’t know…

An incomplete list of ed-tech products that started at universities and were later spun out into companies: PLATO, Blackboard, WebCT, TurnItIn, edX, HelioCampus, Degree Compass, Learning Catalytics, Cognitive Tutor (later Carnegie Learning), Udacity, Coursera. Not a stellar track record when it comes to “Indie.” As such, I don’t think the proximity to university staff or students is actually the key in differentiating “good ed-tech” from the “bad ed-tech” produced by those living in their Silicon Valley bubble. What matters more, to borrow from Tim Klapdor’s thoughts on Indie Ed-Tech: your vision for autonomy, agency, and scholarship.

How do we achieve a cultural (political, social, economic) shift so that we move towards organizations that value those things? (Kate Bowles’ post today on “Heresy and Kindness” resonates so deeply on this matter. It’s not really a question of “how do we build ‘good ed-tech?’” It’s “how do we strengthen our capacity for criticism and care?”)

Simply put: I don’t think venture capital is the way to go to fund Indie Ed-Tech. Nothing makes a good thing go bad faster than that. That being said, I do recognize that questions about funding are going to become increasingly important, particularly as austerity measures kick in.

(And I should note here, this isn’t simply about public budgets or even school budgets. There’s a pretty substantial funding squeeze happening right now in the tech sector. A lot of startups are about to go away. They’re going to close their doors. They’re going to be picked up (picked off) at a deep discount by the big education companies – the Pearsons and the Blackboards. “A good rule of thumb is multiply the number of people on the team by $10k to get the monthly burn,” VC Fred Wilson wrote back in 2011. You do the math on your favorite ed-tech startup – how much they’ve raised, how much you think they’re earning, how fast they’re burning through what’s in the bank – and you guess how long they’ll be around. Then you decide how much you want to build your course, your learning, your teaching, your scholarship on them.)

UCLA professor Miriam Posner wrote a really great piece last week – “Money and Time” – that explored some of the funding challenges around digital humanities projects. The work of Indie Ed-Tech is probably far more closely aligned with those types of projects than with the TurnItIns and LMSes that investors seem to love. Using industry-created technology tools, Miriam writes, has

…also had material effects on the kind of work we can produce, and the horizons of possibility our work can open. When we choose not to invest in our own infrastructure, we choose not to articulate a different possible version of the world.

In fact, this state of affairs is already very well-documented for edtech. By outsourcing development of key components of educational technology to for-profit vendors, we’ve chosen to invest in the development of software companies that mine our students’ data, encourage us to spy on their work, and lock us into a closed ecosystem of for-profit technology whose philosophy bears very little resemblance to the kinds of teachers we started out wanting to be.

To borrow from Miriam, I think we do need education institutions to “pay up” – to invest in scholarship about and teaching and learning with technology. But should the model for that investment be venture capital? I don’t think so. There are other ways to think about funding; there are other ways to think about investment; there are other ways to build software. There are better ways to build capacity; there are better ways to think about sustainability. There have to be, right?

Thanks to Kristen Eshelman and Adam Croom for organizing the Indie Ed-Tech event. Thanks to Davidson College for being one of the very few institutions that’s ever invited me back. Image credits, with apologies to Daft Punk.

Audrey Watters

iTunes Broke the Alphabet

2 min read

Today’s iTunes update has re-ordered my album collection when I sort alphabetically by artist (the way I typically display my music using that piece of software). And it’s dumb.

I’ve long been annoyed that it decided a-ha was first. Like, why does “a-” come before, say “a.” or “a[space]”? I've spent a lot of time thinking about how I could better display my music. Sort by genre. Sort by song title. But I like to group artists together.
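
For what it’s worth, plain character-code ordering would put a space ahead of a hyphen and a hyphen ahead of a period – so whatever iTunes is doing, it’s layering its own collation rules on top of (or instead of) that. A quick illustration, with the artist names just as examples:

```python
# Raw codepoint order: ' ' (32) < '-' (45) < '.' (46) < letters.
names = ["a-ha", "a.m. band", "a tribe called quest", "abba"]
print(sorted(names))
# ['a tribe called quest', 'a-ha', 'a.m. band', 'abba']
```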

Starting with a-ha was bad. But now, after today's iTunes update, it’s worse.

A Tribe Called Quest is now the first artist when sorting alphabetically by artist. But only two of their songs. The rest of People’s Instinctive Travels and the Paths of Rhythm, and the other album I own by them, are down with the letter T.

Yes, I have long hated that I have been confronted with “Take on Me” as the first song every time I open iTunes. (I owned the record Hunting High and Low, but only ever bothered to buy one song digitally.) What a terrible reminder of one of the best years in music. (That's 1984. Duh.)

Years and years of a-ha first. Years and years of hearing the opening bars of "Take on Me" if I accidentally hit "play." But now, seeing these two songs from A Tribe Called Quest first, and knowing that there are 40 more down with the Ts and that iTunes cannot get alphabetization right, is going to piss me off even more.

(Image credits)

Audrey Watters

Bruce Sterling on Blockchain and Bitcoin

1 min read

“The Blockchain is bulletproof but that doesn’t help you... The Bitcoin community are sinister. They're bad. They misbehave. They’re like 4Chan with extra greed and organized crime." -- Bruce Sterling at SXSW 2016

Audrey Watters

Still More Blockchain and Education Links

1 min read

The research on (and clearly the hype about) blockchain in education continues. Look for some more formal articles/analysis from me in the coming weeks. Other posts in this series:

Still more links:

Searle’s Status Functions and the Educational Block Chain

Blockai uses the blockchain to help artists protect their intellectual property

A typical day in a blockchain-enabled world circa 2030

Would ‘Blockchain’ Tech Work for K–12 Schools?

Credit Transfer, Credit Trading

Blockchain-Based ID App Reimagines Internet Identity

Distributed Identity

How “blockchain” technology could influence education

Can K–12 Schools Take Advantage of Blockchain Tech?

Public-Sector Blockchain Pilot Pushes the Tech Forward

@Telegram, Raspberry Pi, Block Chains and Educational Transactions

What is Blockchain

Feedback from BadgeChain Presentation on Open Badges in Higher Education Call

Bruce Sterling's Closing Remarks, SXSW 2016 

(Image credits)

Audrey Watters

The Week in Robots

3 min read

Remember this date – 12 02 2012 – the day our species lost to AI - but won

Welcome to the cyborg fair

AI is closer than we know

Artificial intelligence and language

Go Grandmaster Lee Sedol Grabs Consolation Win Against Google’s AI

How Google’s AI Viewed the Move No Human Could Understand

Behind a Computer’s Surprise Victory, Hints of Global Economic Upheaval

Minecraft to become AI testbed

Our tech future: the rich own the robots while the poor have ‘job mortgages’

Modeled After Ants, Teams of Tiny Robots Can Move 2-Ton Car

The next time someone tells you robots are going to destroy our jobs, show them this

When You Ask AI For Emergency (Mental Health, Interpersonal Violence) Help

Siri, Other Smartphone ‘Assistants’ May Fall Short in a Crisis: Study

An AI with 30 Years’ Worth of Knowledge Finally Goes to Work

Automation in spring 2016: two glimpses of the future of technology and education

Forget LEGO: Minecraft & open-source AI heralds new era for kids

Microsoft is using Minecraft to develop artificial intelligence tech for the real world

Artificial intelligence is mostly a matter of engineering?

AlphaGo and AI Progress

Musclebound “Bio-bots” Move Around in Response to Light

The Obama Administration’s Drone-Strike Dissembling

Mastering the game of Go with deep neural networks and tree search

Google’s Computer Program Beats Lee Se-dol in Go Tournament

Google’s AI Wins Fifth And Final Game Against Go Genius Lee Sedol

Go Master Salvages a Victory, Showing Ingenuity in the Face of a Formidable AI

Why the Final Game Between AlphaGo and Lee Sedol Is Such a Big Deal for Humanity

AlphaGo is Not the Solution to AI

What Artificial Intelligence Could Mean For Education

To Get Truly Smart, AI Might Need to Play More Video Games

In Two Moves, AlphaGo and Lee Sedol Redefined the Future

Will this new ‘socially assistive robot’ from MIT Media Lab (or its progeny) replace teachers?

If We Don’t Want AI to Be Evil, We Should Teach It to Read

The immortalist: Uploading the mind to a computer

The Terminator and the Washing Machine, A Look at Media Fears About Artificial Intelligence

Machine-Learning Algorithm Identifies Tweets Sent Under the Influence of Alcohol

See this drone that draws

‘Bizarre’ pro-Israel Robot Accused of Harassing Students at Brown University

What we learned in Seoul with AlphaGo

Don’t Forget Humans Created The Computer Program That Can Beat Humans At Go

Pizza Delivery Robots Have Arrived and the World Will Never Be the Same

This Factory Robot Learns a New Job Overnight

Using artificial intelligence in the classroom

Google Puts Boston Dynamics Up for Sale in Robotics Retreat

Conversation at the MIT Media Lab about cybernetics with Paul Pangaro

Why Google Is Selling Off Some of the Coolest Robots Ever Built

Library Robot Coming to Welsh University

Lufthansa Jet and Drone Nearly Collide at LAX

Five Lessons from AlphaGo’s Historic Victory

The iRobot Braava Jet Mopping Robot

(Image credits)

Audrey Watters

Probate, Continued...

2 min read

There's bound to be a ghost at the back of your closet
No matter where you live
There'll always be a few things, maybe several things
That you're going to find really difficult to forgive

There's going to come a day when you feel better
You'll rise up free and easy on that day
And float from branch to branch, lighter than the air
Just when that day is coming, who can say, who can say?

Our mother has been absent
Ever since we founded Rome
But there's going to be a party
When the wolf comes home

We're going to commandeer the local airwaves
To tell the neighbors what's been going on
And they will shake their heads and wag their bony fingers
In all the wrong directions, and by daybreak we'll be gone

I'm going to get myself in fighting trim
Scope out every angle of unfair advantage
I'm going to bribe the officials, I'm going to kill all the judges
It's going to take you people years to recover from all of the damage

Our mother has been absent
Ever since we founded Rome
But there's going to be a party
When the wolf comes home

-- The Mountain Goats

(Image credits)

Audrey Watters

The Week in Robots

2 min read

This new robot skin can glow, stretch – and walk

Taking Baby Steps Toward Software That Reasons Like Humans

Google’s AI Is About to Battle a Go Champion—But This Is No Game

Almost human

The Cybernetics of Learning in Stafford Beer and Gordon Pask

The Memory Trick Making Computers Seem Smarter

Smart Robots Make Strides, but There’s No Need to Flee Just Yet

Don’t Despair if Google’s AI Beats the World’s Best Go Player

Go Master Walloped by Emotionless Challenger, a Google Computer Program

Google’s AI Wins First Game in Historic Match With Go Champion

The Artificially Intelligent Doctor Will Hear You Now

Machine-Learning Algorithm Aims to Identify Terrorists Using the V Signs They Make

Aido is pretty much the robot they promised everyone back in the 1950s

Go Grandmaster Says He’s ‘in Shock’ But Can Still Beat Google’s AI

Computers don’t care about your MFA

The Sadness and Beauty of Watching Google’s AI Play Go

Fukushima robots struggling to withstand radiation levels

iMom short is an eerie look into the scary future of artificial intelligence

The Human-Robot Trust Paradox

Google’s AI Is Battering the Go World Champion in Style

Sebastian Thrun: AI Pioneer Seeks Education Revolution

The Rapid Rise of Federal Surveillance Drones Over America

A Robotic Home That Knows When You’re Hungover

The Potential of Emotional Reading Technology Becoming Available Through Artificial Intelligence

Why is Google’s Go win such a big deal?

Artificial intelligence just reached a milestone that was supposed to be 10 years away

Google’s robotic arms are teaching themselves to do things and it’s terrifying

Public Predictions for the Future of Workforce Automation

General Motors to Buy Cruise Automation in Push for Self-Driving Cars

Google’s AI Takes Historic Match Against Go Champ With Third Straight Win

In the Age of Google DeepMind, Do the Young Go Prodigies of Asia Have a Future?

The Future of Machine Intelligence

Our creative, beautiful, unpredictable machines

(Image credits)

Audrey Watters

Thatcher, Reagan, Clinton

5 min read

The blunt, pathetic reality today is that a little old lady has died, who in the winter of her life had to water roses alone under police supervision. If you behave like there’s no such thing as society, in the end there isn’t. Her death must be sad for the handful of people she was nice to and the rich people who got richer under her stewardship. It isn’t sad for anyone else. There are pangs of nostalgia, yes, because for me she’s all tied up with Hi-De-Hi and Speak and Spell and Blockbusters and “follow the bear”. What is more troubling is my inability to ascertain where my own selfishness ends and her neo-liberal inculcation begins. All of us that grew up under Thatcher were taught that it is good to be selfish, that other people’s pain is not your problem, that pain is in fact a weakness and suffering is deserved and shameful. Perhaps there is resentment because the clemency and respect that are being mawkishly displayed now by some and haughtily demanded of the rest of us at the impending, solemn ceremonial funeral, are values that her government and policies sought to annihilate. – Russell Brand

Today I reread the obituary that Russell Brand wrote for Margaret Thatcher. What with Nancy Reagan’s recent death (and her funeral today), I’ve been thinking a lot about the leaders of the 1980s and the lingering trauma from their rule. Between my British and my American heritage, Thatcher and Reagan both loom large, conjuring memories of the suppression of striking workers and the threats of nuclear war and my family’s grocery store going out of business.

Brand’s essay is at once unforgiving about the suffering that Thatcher’s policies brought and compassionate about Thatcher, the pathetic and lonely (and deceased) little old lady.

I can’t quite bring myself to muster much compassion for Nancy Reagan. Over the last week, my local NPR station here in LA has been round-the-clock with coverage of her life and death – interviews with various Hollywood, Sacramento, and White House confidantes and reminders of where to park to take a shuttle to her public viewing. I’ve sat here, listening in silent fury, at words like “elegant” and “classy.”

I remember the 1980s – the Reagan Era in particular, my Thatcher Era memories are far fewer – almost antithetically to most political pundits’ recollections, it seems. Those on the right in particular laud Reagan as their “great communicator,” as the greatest President. They remember a very different economy, culture, geopolitics, militarization than I do. I remember all the lies about drugs and weapons. I remember Reagan as, at best, incompetent and, at worst, utterly callous. I remember, by the end of the 1980s, that everything had fallen apart, and Ronald and Nancy Reagan and Margaret Thatcher were all macabre Spitting Image puppets laughing at us.

How you narrate the past shapes how you envision the future.

As her comments following Nancy Reagan’s funeral today made clear, I remember the 1980s so differently than does Hillary Clinton. Clinton credited Reagan for opening up a national conversation about AIDS. To the contrary, the Reagans – the President and the First Lady – kept silent for years as people died. The Reagan Administration laughed – literally laughed – about those deaths. Reagan prevented Surgeon General Everett Koop from speaking out about the AIDS crisis. That there was a national conversation was thanks, in part, to the activism of AIDS activists, ACT-UP, Queer Nation. Activists confronted the Reagans’ silence; activists forced the conversation.

It’s not just the re-writing of the Reagan era. I certainly remember the 1990s differently than do many who tout the “good times” of the Clinton Presidency too. By “misspeaking” today, Hillary Clinton reminded us again of her worldview, her position (in and near power) in those decades. Perhaps these words about Reagan really are how she remembers the 1980s – it wouldn’t surprise me if that’s the case. It’s strikingly sad and out-of-touch. (That’s me, trying to be compassionate.)

Substitute one name in the closing paragraph of Brand’s essay for another. I think it still works:

I do not yet know what effect [Margaret Thatcher, Nancy Reagan, Hillary Clinton] has had on me as an individual or on the character of our country as we continue to evolve. As a child she unnerved me but we are not children now and we are free to choose our own ethical codes and leaders that reflect them.

(Image credits)

Audrey Watters

Upcoming Research

2 min read

In addition to working on an article on the history of virtual reality in education and a research project on the future of the blockchain in education, I’ve got these topics on the back burner:

  • The history of E-Rate. Who pushed for this? Who benefited? What might this history teach us about today’s ed-tech policy pushes?
  • The histories of the military and ed-tech. This is, of course, connected with VR and simulation. But I’d like to write something about the history of SCORM, including its most recent manifestation, xAPI.
  • The history/politics of DonorsChoose. I’m curious: What percentage of projects go toward buying tech? How much of an increase in taxes paid by tech companies would it take to pay for this? What does the popularity of DonorsChoose tell us about procurement and funding?
  • The history of the dot edu top level domain. I’ve inquired at Educause about this but no one seems to know the history. (?!?!)
  • An ed-tech-focused history of Montessori.
  • The history of TurnItIn. I'm interested more broadly in ed-tech innovations that get spun out of universities. But in the case of TurnItIn, I'd like to look at technology, cheating fears, and student IP.
  • David Horowitz, Ben Horowitz, Andreessen Horowitz, and the Politics of Software Eating Education.
  • A history of PLATO. This is the section that's really still lacking for Teaching Machines. So honestly, it needs to be first on this list.

(Image credits)

Audrey Watters

Ed-Tech Experts

3 min read

I remember attending a “big data” event a couple of years ago where one panel debated the future of expertise. Thanks to algorithms and “the unreasonable effectiveness of data,” one panelist insisted, we’ll soon see the end of expertise as well as the end of theory.

The argument struck me then and now as utterly wrong, in no small part because it ignores the importance of lived experience and embodied knowledge – features that artificial intelligence cannot boast. It also ignores that expertise isn’t always about knowing; it’s about performing an assuredness of know-how. In other words, it’s about status and privilege and power.


I’d like to keep track of who gets cited in education technology articles as “the expert.”

Who gets asked to contribute as an “analyst”?

Who’s asked to moderate the panel? Who’s asked to be a panelist?

Who gets the byline? Who gets believed?

Who narrates the world?


“Everyone’s an expert on education.” You hear that a lot – often lodged as a complaint that politicians or businesspeople are making policy and products without a deep understanding of the institution, of pedagogy and practice, of cognition, and the like. Most of us have had some experience as a student, true, but we’re often cautioned that we should not confuse that experience with expertise.

But what do we accept as expertise in education? What do we accept as expertise in education technology?

Who do we find believable? Who’s persuasive? What gives us pause? What makes us recognize that we need to be much more critical, much more skeptical about experts’ (and non-experts’) claims?


Briefly stated, the Gell-Mann Amnesia effect is as follows. You open the newspaper to an article on some subject you know well. In Murray’s case, physics. In mine, show business. You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward – reversing cause and effect. I call these the “wet streets cause rain” stories. Paper’s full of them.

In any case, you read with exasperation or amusement the multiple errors in a story, and then turn the page to national or international affairs, and read as if the rest of the newspaper was somehow more accurate about Palestine than the baloney you just read. You turn the page, and forget what you know.

Michael Crichton


Why do you believe what you hear about ed-tech? When do you scrutinize some stories, and then turn the page and are convinced by the accuracy of the next one?

(Image credits)

Audrey Watters

The Alley

4 min read

I live in HB. It’s a small city – LA is made up of tons of these – with an area less than 1.5 square miles. One of its best features, I think, is The Strand, a paved path that runs for 22 miles along the beach from Pacific Palisades to Torrance. It’s a bike path, ostensibly, but it’s used by pedestrians too. Kin and I love it because we can amble along and talk and think and not have to worry about being hit by a car. (“Being hit by a car” is a big concern as a pedestrian in LA.)

Many of Hermosa Beach’s streets are, in fact, alleys, including two right near my apartment building. This is where I walk. Unlike the majority (?) of those in LA, I don’t own a car. I don’t drive. I walk. I walk to the farmer’s market, I walk to the post office, I walk to the restaurants, I walk to the bank, and so on. I choose to walk in the alley some days because it’s a faster route. I choose the alley some days because I want to avoid the main thoroughfare – both its pedestrian and motor traffic. I choose the alley because it’s interesting to walk along the back sides of buildings, to see the side that’s not designed to be seen. No surprise for LA, the alley is designed for cars and trucks – for parking, for deliveries, for sanitation. I’m not sure why it’s a surprise for me when I nearly get mowed down by a vehicle driving down the alley at top speed, treating it like a regular street.


As I was walking down the alley today, I noticed that a restaurant around the corner from us has finally fenced in an open patch of land behind it. Before now, there was just a sign reminding dog owners to pick up after their dogs. It’s a sign I can’t un-see and can’t stop thinking about: “oh god, the back of this restaurant is full of shit and piss.” Needless to say, I never eat there.

But as of today, they’ve erected a fence; I guess it’ll no longer be the place that the neighborhood’s pet-owning apartment dwellers take their dogs. (The neighborhood cats are likely to be undeterred.)

It’s an interesting choice: putting up a fence rather than, say, leveling and paving the area.

(I probably still won’t eat at the restaurant.)


We’re steered down certain routes by design. We’re steered away from others. The alley does not have a sidewalk. The message: I’m not really supposed to walk there. Take the main street. That’s where the entrances to the shops and restaurants are.


As I walked down the alley this afternoon, I couldn’t help but think about technology – its infrastructure and its ideology. (I mean, that's what I do all day: think about technology – its infrastructure and its ideology.) There’s a lot about the traffic and “flow” of technology that steers us down certain routes as well. We’re pushed towards certain interfaces with certain interactions in mind.

The “back end” like the “back alley” isn’t really designed for everyone. It isn't designed for "regular folks." And purposefully so.

Walk on the main street. Shop on the main street. Do not deviate.

Moreover, you’re not really encouraged to ask about who owns the buildings or investigate the political connections of business leaders within a community. You’re not supposed to notice the infrastructure at all.

(Image credits. Image does not depict my alley.)

Audrey Watters

Trend to Watch: Disrupting Procurement

3 min read

For the last couple of years – at the very least since the resurgence in venture-backed ed-tech startups – there’s been a steady drumbeat of complaints that the procurement system at the K–12 level is broken. It’s inefficient. It's overly complicated. It's time-consuming. It’s “dysfunctional.” Broken. I’ve heard the complaint from ed-tech evangelists and ed-tech entrepreneurs. I’ve heard it from their investors, many of whom argue that the challenge of selling to schools is one of the things that makes education a difficult market to crack (and in turn ed-tech startups a poor investment).

There’s a lot that’s wrong with the process, no doubt – for starters, the hefty RFP requirements that almost by design tilt purchasing decisions towards the incumbent players and big companies. The folks who make the decisions about what to buy typically aren’t the people who are using the products in the classroom. (Here’s one history of ed-tech procurement – how Steve Jobs got the Apple II into schools – and how things changed from innovative teachers to district IT making these sorts of decisions.)

I’d add too that there’s not a lot of transparency in the procurement process; nor is it easy to find out afterwards which products schools bought or use – although that’s not something you hear companies moan about, funnily enough. (USC professor Morgan Polikoff’s research on textbook adoption has made this painfully clear. He’s sent FOIA requests to school districts, and in many cases they have been unwilling or unable to share their textbook data. And when they do, oh man, the mess that that data is in.) Although there's a fear of public scrutiny on the part of school administrators, that fear is shared by many entrepreneurs, I'd wager.

In the last few years, lots of consulting firms and organizations have offered their suggested solutions for fixing (what they see as) procurement problems. And Edsurge, for its part, has launched a “concierge” service in which it will help schools identify their tech needs and then buy things based on those needs (and then take a cut of the contracts, of course).

I’m watching Edsurge closely now as it expands from K–12 to higher education technology evangelism; and it wouldn’t surprise me if it offers the same “deal” to universities soon. Interestingly, one of the first stories that it published with its new higher education editorial focus was on a procurement “solution” at UNC. Having watched the arguments about a broken procurement system be repeated incessantly in K–12, it’s not a surprise that the same folks are now going to make them again and again for higher ed.

You know why education technology sucks? Procurement! – we learn from The Chronicle of Higher Education today. Go figure! It’s not really the fault of the companies who make shitty products; it’s the fault of the school system that buys them. And if we just make the process of buying stuff simpler, the products will (magically) get better. Or so the story goes. Keep your ears open for those telling this story this year...

(Image credits: Flickr. This story was cross-posted to 2016trends.hackeducation.com)

Audrey Watters

Which Ed-Tech Startups Haven't Raised Money Lately?

1 min read

Last week, the investment analysts at CB Insights released a report on venture capital trends, noting that the number of ed-tech deals is declining. This is echoed in my own calculations about recent trends in ed-tech funding.

But CB Insights’ report has also made me think about the converse of the questions that I’ve been asking about ed-tech investment – who’s raising money, and how much? That is: which companies are not raising money, and which haven’t in quite some time?

I can’t really think of a way to pursue this line of inquiry programmatically, particularly now that Crunchbase – which has been my go-to resource for funding data – charges for API calls. So I’m going to go through old blog posts on Hack Education, and see if I can come up with a list of companies that haven’t raised any money in the last couple of years. Edmodo, for example, hasn’t seen any investment since 2014; Codecademy hasn’t seen any since 2012.
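
If I did have the data locally, the sketch below is roughly how I’d filter it – assuming a hand-compiled CSV of funding rounds (company, date) rather than live API access; the file name and column names are just placeholders:

```python
# Given a hand-compiled CSV of funding rounds (company,date), list the
# companies whose most recent round is older than a cutoff date. The file
# name and columns are assumptions: whatever ends up in the spreadsheet.
import csv
from datetime import date, datetime

CUTOFF = date(2014, 1, 1)  # "hasn't raised since..."

latest = {}
with open("edtech_funding_rounds.csv", newline="") as f:
    for row in csv.DictReader(f):  # columns: company, date (YYYY-MM-DD)
        d = datetime.strptime(row["date"], "%Y-%m-%d").date()
        if row["company"] not in latest or d > latest[row["company"]]:
            latest[row["company"]] = d

for company, last_round in sorted(latest.items()):
    if last_round < CUTOFF:
        print(company, last_round)
```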

I’ll post my findings to funding.hackeducation.com.

(Image credits)

Audrey Watters

The Blockchain in Education: More Links

1 min read

Update: I’m still researching how/if the blockchain would work in education. I’m incredibly grateful for the help and feedback I’ve received from folks so far.

I plan to publish my work at the end of March, following an “Indie Ed-Tech” event I’m attending at Davidson College in a couple of weeks. I know that conversations about decentralization and identity will be at the forefront there.

Other posts in this series:

More links:

(Image credits)

Audrey Watters

The Week in Robots

2 min read

Image credits: Here there be robots: A medieval map of Mars

Nessie, the Educational Robot

Five Million Robots and Counting: A Developer Grows in Brooklyn

Stupid Tutoring Systems, Intelligent Humans

Google’s Artificial Intelligence Gets Its First Art Show

Start-Up Lessons From the Once-Again Hot Field of A.I.

Report Cites Dangers of Autonomous Weapons

Intelligence Unleashed: How Artificial Intelligence Will Improve Education

Spot the Four-Legged Boston Dynamics Robot Teases a Confused Dog in a Parking Lot

Carrie Fisher Works as Therapist For a Group of Frustrated Robots Who Really Dislike Humans

Atlas The Robot Can Enlist in the US Military Anytime She Wants

A Google Self-Driving Car Got Into a Crash With a Bus (And That’s Okay)

Intelligence Unleashed: An Argument for AI in Education

Daredevil Drone Flies through the Trees Like an Ace

AI & The Future Of Civilization

Computers read 1.8 billion words of fiction to learn how to anticipate human behaviour

Robotic Garage

MIT’s Twitter bot ‘Deep Drumpf’ uses AI to make Twitter great again

A Plan in Case Robots Take the Jobs: Give Everyone a Paycheck

Tony Dyson, the Creator of Star Wars’ R2-D2, Dies at 68

Australia uses drones to spot sharks and rescue their victims

SpaceX’s Rocket Loses Its Battle Against a Robot Boat (Again)

Is it OK to torture a robot?

Audrey Watters

Some Thoughts on "Coding" and "Technical Ghettos"

5 min read

An attempt to string together a bunch of thoughts and a handful of articles from the week:

Monday was Seymour Papert’s 88th birthday. Papert, of course, is one of the most important figures in education technology – the author of Mindstorms (1980) and The Children’s Machine (1993) and the co-creator of the educational programming language LOGO. He's one of my favorite people in the whole world and "the inventor of everything good in education," as Gary Stager puts it.

On Monday, Melinda D. Anderson published an article in The Atlantic asking “Will the Push for Coding Lead to ‘Technical Ghettos’?” That is, will educational inequalities surrounding CS education mean that students of color end up in low-pay, dead-end jobs in the tech industry?

A few days later, Edsurge published its list of ed-tech trends, one of which is computer science education. In its summary of this trend, Edsurge dates the history of teaching computer science in K–12 to 1984, when the College Board first offered the Advanced Placement exam in the subject. No mention of Papert in the timeline. No mention of LOGO (which was developed in 1967).

This is how you rewrite education technology history – and rewrite it to serve a particular narrative.

What does it mean to start your telling of the history of teaching computer science with an organization – the College Board – that reflects (if not perpetuates) some of the structural inequalities in schools? (That is to say: how SAT results are correlated with socioeconomic status; how the racial scoring gap on the SAT is growing; how there are socioeconomic, geographic, and racial differences in access to AP classes; how the racial scoring gap is growing in AP programs.) To start the history of computer science education with the AP centers computer science education on courseware and assessments aimed at college bound students.

Now arguably, this means that you can tell a history of expanding access (thanks to recent industry-led computer science efforts) as more schools are starting to offer more AP. (“Ka-ching,” as Edsurge is wont to write.) But this doesn’t get at issues of educational equity. Not remotely. The AP has not been a vehicle for that.

And arguably, neither has education technology. Its implementation – no surprise – has largely echoed and re-inscribed other structural and pedagogical inequalities in schools: there are class and race-based differences in who gets to use computers for inquiry and who gets to use computers for, say, test prep. I’ve written about this quite a lot, hoping that those who work in and write about ed-tech pay more attention to these inequalities.

And so I read MDA’s piece in The Atlantic with great interest (and I'm a little disappointed to see some of the pushback it's received  – pushback that seems to misconstrue MDA's concerns).

See, it’s not always clear to me what people actually mean when they say “everyone should learn to code,” and this lack of clarity provides a huge opportunity for inequitable practices, I’d contend, in no small part because “coding” is just one piece of understanding computers, computing, and computer science. And as the epithet “code monkey” suggests – a term used to describe an unskilled programmer who must follow someone else’s engineering design or commands – “coding” is not necessarily highly valued. When we look at this alongside some of the tech industry’s other employment practices – the lack of diversity; a tendency to hire from elite private universities or from elite social networks; outsourcing to developing countries; exploitation of the H1-B visa program – I think it’s only fair to ask lots of questions about what shape “learn to code” efforts will take.

Indeed, “learn to code” has become part of a larger trend  – “the employability narrative” – whereby education (K–12 and college) is reduced to job training. The rationale for computer science education is largely about employment or entrepreneurship, it seems. Again, this makes MDA’s questions incredibly relevant in light of the tech sector’s overwhelming whiteness – the overwhelming whiteness of its employees and its entrepreneurs. (And I think we can argue that this is only partly a pipeline problem.) MDA’s questions are also relevant in light of the history of racial and gender bias – “tracking” – in vocational education.

Let me bring this back, full circle, to Seymour Papert, whose work in teaching children about computers is, at its core, about epistemology, not about entrepreneurship or employment. Learning to program a computer, according to Papert, is a rich intellectual pursuit, a powerful way to “think about thinking.” It is, as Gary Stager has written, “a liberal art – a way of having agency over an increasingly complex and technologically sophisticated world.”

From Papert’s book The Children’s Machine: “LOGO was fueled from the beginning by a Robin Hood vision of stealing programming from the technologically privileged (what I would in those early days in the 1960s have called the military-industrial complex) and giving it to children.”

Who are the “technologically privileged” these days? And are “learn to code” efforts part of a social justice vision, a Robin Hood-like act of stealing programming from them? Or rather, which computer science education efforts have equity and agency at their core, and which might be more about conscripting cheap and compliant labor for today’s version of that complex?

(Image credits)

Audrey Watters

Journalism!

3 min read

I get it. I do. I worked briefly as a tech blogger, and I understand that the job often requires you to publish multiple stories a day (it was between 4 and 6 in my case), for which you’re paid piecemeal. There’s not a lot of time to do research. There’s not a lot of time to cultivate or interview sources. There’s not a lot of time to identify independent experts and get them to weigh in with a helpful quote or two. There’s not a lot of time, and there’s not a lot of incentive. Indeed, as tech journalism is largely “access journalism,” you’re better off not asking too many tough questions. So you simply rewrite press releases. Or you repeat the stories that the marketing person and CEO tell you. Or you rewrite stories that other sites have published.

From the last week alone:

Edsurge claimed to have the “exclusive” on a new White House initiative, the “Open eBooks” app. The article claimed these were open educational resources. They’re not, despite that adjective “open.” It wasn’t an exclusive story either, for what it’s worth. (Correction on the OER angle issued. Story still boasts “exclusive.”)

In another story, Edsurge reported that if you tweet with the hashtag , you can submit OER to the Learning Registry. A bit of digging raises a lot of questions about whether or not this is actually possible (or actually happening) and why the company running this campaign, Participate Learning, would require participants to authorize a Twitter app that, among other things, allows the app to update their profile information and harvest their email address. (No correction or update issued.)

In another story, Edsurge claimed that Nearpod now offers the “first publicly available virtual reality (VR) tool for schools.” It isn’t. There’s a long history of VR in education. (No correction or update issued.)

In another story, Edsurge reported that Google had launched a “safe search” engine for kids. It didn’t. The site in question, Kiddle.co, is run by the Russian founder of the site freakingnews.com, if that gives you any idea of how well the content will be curated. For lulz: search for “Moby Dick” on Kiddle.co. (Correction about Google’s non-ownership issued. No details on the actual owner or on the problems with the site.)

It’s not just a problem at Edsurge, of course, although that site has tried to position itself as the leading news source for education technology and so this list of errors from just one week's worth of stories is pretty troubling.

Can we please raise the bar on ed-tech journalism? Please? Can we not adopt the model that tech blogging has taken, churning out an endless stream of emptiness where all products are amazing, where all entrepreneurs are heroic innovators, where every press release is regurgitated without any skeptical or critical eye? I realize that venture capitalists have invested heavily in that model, but it hasn’t helped make entrepreneurs or consumers or, hell, investors themselves any smarter or better informed about tech, now has it?

(Image credits)

Audrey Watters

#TeamLuddite

2 min read

Richard DeMillo, responding to an interview Bryan Alexander conducted with him as part of the latter’s Future Trends Forum:

“It was an interesting (and not entirely accidental) contrast to a similar interview that Bryan had conducted a week before with Audrey Watters, who more or less savaged the very notion that higher education could be doing a better job.”

I hear this reductionism a lot: because I am highly critical of the “Silicon Valley narrative” – investors’ and entrepreneurs’ vision for a more automated, privatized, libertarian/neoliberal education system – I must therefore think that the education system as-is is doing just fine. That’s absurd.

Framed this way, of course, there are only two “sides” to any education-related argument: you must accept the tech industry’s future narrative, or you are somehow against “the future” altogether. You must believe in technology’s teleology or else you stand in opposition to progress. Indeed, the only way that change can possibly occur, so the industry story goes at least, is via technology.

And look at that frequently-used insult: “you’re a Luddite.” It’s become a shorthand for resisting technological change when in fact you could tell the same story as one of resisting corporate exploitation. It isn’t that the workers were opposed to using mechanized looms. They were opposed to being told they couldn’t work on their own looms, that they had to work on looms owned by factories. Their labor was no longer their own. Their work was no longer their own. The machines were no longer their own.

The problem isn’t simply the technology in and of itself – whether you’re for it or against it; it’s that technology is necessarily encased in structures and systems that we need to interrogate. And those who are quick to dismiss criticism of technology as somehow being “against the future” are often those most invested in protecting the structures and systems of ongoing exploitation.

(Image credits)

Audrey Watters

Identity, Catfishing, and Fingerprints

3 min read

I recently rejoined Facebook, having deleted my account a little over a year ago. It’s been interesting to watch the process of rebuilding a network there – that is, whose “friending” spurs others to see my avatar in a list of recommended connections. I’ve only received two friend requests from my home town, none from folks I went to high school with, none from folks I went to graduate school with. I’ve received only one friend request from a family member. Some of this, of course, is due to folks’ usage (or lack thereof) of Facebook. Some of this is, no doubt, a reflection of a lack of interest in re-friending me. (And considering it’s an election year, good riddance.)

But what’s really interesting is that for every one friend request I’ve received from a person I know, I’ve received one friend request from a stranger, from names I do not recognize.

One of the things that prompted me to rejoin Facebook was the struggle that friends Alec Couros and Alan Levine have had with catfishing. I recently heard Alec say that he has to be on Facebook so that he can report imposters and so that he can try to have some semblance of control over his “identity” there. Facebook has done nothing to respond to this problem.

When I set up a new account on Facebook, one friend immediately DM’d me on Twitter to ask if it was really me. She knew that I strongly dislike Facebook, and so she was (rightly) suspicious about a new account there bearing my name and image.

But a lot of people seem quite happy to friend anyone and everyone on Facebook. And I wonder how/if that makes it easier for catfishing accounts to establish themselves as “real”?


One of the promises of the blockchain is that it’ll help address identity issues. Me, I’m still skeptical about a technological “solution” to something that seems a lot more complex than cryptographic verifiability.

But then again, here we all are, performing our identities online, posting bits and bytes about ourselves. How do we protect that? (Secure it. Keep it private. Keep it ours. Mark it as ours/ourselves.) Can we?

I think that the un-editability of the blockchain makes it a non-starter for identity discussions. Identity is, after all, always in flux. And yet, some sort of certainty around identity is going to be demanded – not just by banks or governments but by all the (social) networks with which our practices are increasingly intertwined.

How do we maintain our own “hosted lifebits” without reducing everything to an enforced and inescapable fingerprint (and I do fear that biometrics are going to be proposed as “the solution” for issues of identity)? Can we?

(Image credits)

Audrey Watters

The Week in Robots

2 min read

Now it is Facebook’s turn to be stupid about AI

Exploring the Intersection of Art and Machine Intelligence

Facebook is using AI to map where everybody lives to get the world online

What’s Next in Computing?

AI Helps Facebook’s Internet Drones Find Where the People Are

Twitter Bot

Drones Improving Medical Access

Wikipedia for Robots

IBM’s Watson has new APIs meant to gauge how you’re feeling (and which shoes you have on)

Stanford Lit Lab Trains Neural Network To Identify Suspense in Stories

The Latest Boston Dynamics Creation Escapes the Lab, Roams the Snowy Woods

To Keep America Safe, Embrace Drone Warfare

Atlas, The Next Generation

Watch the Next Generation Atlas Robot Get Bullied By A Mean Human (And Stay On His Feet)

Massive delivery of “just in time” videos; a way to change education and build useful AI

How to Think about Bots

All About Ava: AI and the Mysteries of Human Obedience

A Whimsical Robotic Adjustable Applause Machine That Lets a Person Give Themselves a Hand

The Robots Are Coming for Wall Street

Why journalists interested in drones should be watching an FAA reauthorization bill

Google’s DeepMind AI group unveils health care ambitions

A Robot That Has Fun at Telemarketers’ Expense

What Is a Robot?

Don’t Laugh: Yahoo’s Open Source AI Has a Secret Weapon

U.S. aviation regulator starts rule-making process for public drone flights

On the Personalities of Dead Authors

Facebook AI Research is donating 25 GPU servers to European academies

Boston Dynamics Atlas Robot Demo Turned Into a Parody Horror Video About a Robot Seeking Revenge

Children in Uganda watch a hobbyist’s drone fly for the first time, and totally flip out

(Image credits)

Audrey Watters

The Blockchain in Education: Questions

4 min read

“Is blockchain really a thing?” That’s probably the most common question I hear about a technology that, up until quite recently, was mostly and closely associated with the cryptocurrency Bitcoin. “Do I need to pay attention to blockchain?” many folks working in ed-tech are asking. “Do I really need to understand it?” Or, like other over-hyped and over-promised technologies, will it always be “on the horizon”? Or will it simply fade away?

I haven’t included blockchain or bitcoin in any of the “Top Ed-Tech Trends” series I’ve written. I’m still not sure there’s a “there” there. But with news this week that Sony plans to launch a testing platform powered by blockchain and that IBM plans to offer “blockchain-as-a-service,” I thought it might be time to do some research, write a clear explanation/analysis of what blockchain is, one that isn’t too technical but that doesn’t simply wave away important questions by resorting to buzzwords and jargon – that blockchain is “the most important IT invention of our age,” that it will open up “new possibilities,” “revolutionizing services of all kinds,” and so on.

So buzz and bullshit aside, what – if anything – can blockchain offer education technology? And more generally, how does blockchain work? (And then again, specifically how does it work in an educational setting?) What problems does blockchain solve? What are its benefits? What are its drawbacks? Who’s developing and who’s investing in the technology? To what end?

This is still very much a work-in-progress. But for those interested in reading up on their own, I have posted a list of resources and reading materials here.

I also have a list of questions that, despite spending the last few days learning about cryptocurrencies and “decentralized trust,” I still have about blockchain’s applicability to education:

  1. What happens to student privacy if educational records/transactions are available via a public ledger? Will a student have a say over who has access to their records?
  2. What happens if a student wants to correct that educational record or remove transactions, say, because she wants or needs a “fresh start”? The blockchain is uneditable, correct? (See the sketch after this list.)
  3. Are organizations using a version of the Bitcoin blockchain? Or are they rolling their own? Are there going to be a bunch of separate edu-related blockchains? Will people gravitate to, say, IBM’s blockchain-as-a-service?
  4. What sort of infrastructure is required to run this technology (a wallet, the full blockchain database, a mining node, etc)? I mean, how “decentralized” and “distributed” and “open” is this really? If there end up being multiple, competing blockchain-as-a-service offerings, what will data interoperability look like?
  5. For non-Bitcoin-related blockchain efforts, is there still “mining”? If so, what does that look like? Is there a financial incentive to participate as a “miner”? As a node? Are there transaction fees? If there is no “mining,” is this non-Bitcoin-related blockchain secure? Is this going to require as earth-hatingly much power as the Bitcoin ecosystem does?
  6. When it comes to issues of “trust” and, say, academic certification, who is not trusted here? Is it the problem that folks believe students/employees lie about their credentials? Or is the problem that credential-issuing entities aren’t trustworthy? I mean, why/how would we “trust” the entity issuing blockchained credentials? (What is actually the source of “trust” in our current credentialing system? Spoiler alert: it’s not necessarily accreditation.) How would the trustworthiness of blockchained credential-issuing institutions be measured or verified? If it’s by the number of transactions (e.g., badges issued), doesn’t that encourage diploma milling?
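
To make question 2 concrete, here is a minimal, purely illustrative Python sketch of a hash-chained ledger. It is not Bitcoin’s code, not any vendor’s product, and the record fields are made up; it simply shows why editing an earlier entry is detectable rather than quietly possible. Each block commits to the hash of the block before it, so changing an old record breaks every hash that follows.

    import hashlib
    import json

    def block_hash(block):
        # Hash a block's contents (deterministic JSON so the hash is stable).
        payload = json.dumps(block, sort_keys=True).encode("utf-8")
        return hashlib.sha256(payload).hexdigest()

    def append_record(chain, record):
        # Each new block commits to the previous block's hash.
        previous = chain[-1]["hash"] if chain else "0" * 64
        block = {"index": len(chain), "record": record, "prev_hash": previous}
        block["hash"] = block_hash({k: v for k, v in block.items() if k != "hash"})
        chain.append(block)

    def verify(chain):
        # Recompute every hash; any edit to an earlier record breaks the chain.
        for i, block in enumerate(chain):
            expected = block_hash({k: v for k, v in block.items() if k != "hash"})
            if block["hash"] != expected:
                return False
            if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
                return False
        return True

    ledger = []
    append_record(ledger, {"student": "A. Learner", "credential": "Course 101"})
    append_record(ledger, {"student": "A. Learner", "credential": "Course 102"})
    print(verify(ledger))                      # True
    ledger[0]["record"]["credential"] = "PhD"  # the "fresh start" edit
    print(verify(ledger))                      # False: the tampering is detectable

None of this answers the harder questions above – who runs the nodes, who pays for them, who decides what gets written in the first place – but it does explain why “just fix the record” is not how these systems are designed to work.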

More soon…

Audrey Watters

An Invented History of Ed-Tech, Continued

1 min read

"This is such an exciting new product!" gushes Edsurge writer Blake Montgomery, in an interview with the CEO of Nearpod, a startup that has just pivoted from making an app that lets a teacher control what lessons and messages appear on students' iOS devices to making "virtual reality lessons." In fact, the latter are pretty much PowerPoint slides that can be viewed on any mobile device. With the use of a cardboard viewer, the images are now passed off as a VR experience. But hey, "this is such an exciting new product," says the journalist whose publication shares investors with the startup he's covering.

From the introduction to the article:

This is not the first publicly available VR tool for schools. Not remotely. Not by decades. But I'm sure that if you think you're first, you'll be ever-so-clever in your efforts to convince teachers to adopt this "exciting new product."

I'm working on a "history of VR in education" story currently, in part to counter just this sort of ahistorical bullshit.

Audrey Watters

The Blockchain in Education: Links

2 min read

I’ve started a new research project on the blockchain and what it might offer for education. I’m interested in gaining a better understanding of the technology, and I’d like to be better positioned to weigh in on the discussions in which blockchain is increasingly posited as some revolutionary solution for credentials, assessment, and identity management.

Thanks to some suggestions from The Internet, here’s a list of materials I’m working my way through:

Mastering Bitcoin: Unlocking Digital Cryptocurrencies

A Decentralized System for Education and Assessment

Decentralising Education and the Blockchain

How One School is Using Blockchain to Authenticate Degrees

UK Chief Science Adviser Urges Government to Start Deploying Blockchains for Public Services

How Blockchain Will Transform Business and Society

Certificates, Reputation, and the Blockchain

Forget Bitcoin — What Is the Blockchain and Why Should You Care?

Digital Portfolios + Open Badges + Blockchain = Personal Learning Ledger

Blockchains for Federated Student Data

Is Blockchain the most important IT invention of our age?

Academic Certificates on the Blockchain

Decentralized Public Key Infrastructure

My view on the current situation of Bitcoin and the Blockchain

Peering Deep into Future of Educational Credentialing

Heart of a Gambler

Programmable Blockchains in Context: Ethereum’s Future

Block Chain 2.0: The Renaissance of Money

The dawn of trustworthy computing

An Introduction to the Bitcoin Blockchain

The resolution of the Bitcoin experiment

The Disruption Higher Ed Doesn’t See Coming (and how it could respond, even lead, but probably won’t)

Understanding the blockchain

What is the impact of blockchains on privacy?

The Future of the Web Looks a Lot Like Bitcoin

Ending the bitcoin vs blockchain debate

From ePortfolios to OpenLedgers — via OpenBadges and BlockChains

Bitcoin and Cryptocurrency Technologies 

Blockchain of Thought

I'll continue to update this list as more suggestions come in. I'm specifically looking for two things: 1) an explanation of blockchain technology suitable for tech-savvy non-cryptographers and 2) an explanation of how this technology could be used/useful in education. (And specifically how the tech would work in a non-financial setting, too. No hand-waving away those questions or acting like "it's decentralized!" is sufficient.)

Mostly I'm curious about this: what education problem does blockchain actually solve?

Audrey Watters

The Week in Robots

2 min read

Not Part of Video’s Script: An Arrest for Flying a Drone

The Waze Effect: AI & The Public Commons

The Best AI Still Flunks 8th Grade Science

Even Barbie gets a smart home and a drone in 2016

Intelligent Office Chairs That Automatically Park Themselves Around the Conference Room Table

Precision Robots Quickly Arranging and Packing Batteries on a Fast-Moving Assembly Line

13 jobs that are quickly disappearing thanks to robots

Would you bet against sex robots? AI ‘could leave half of world unemployed’

We’ll Know Soon If Google Can Beat a Super Grandmaster at Go

Robot Art Raises Questions about Human Creativity

The Missing Link of Artificial Intelligence

Republican-Leaning Cities Are At Greater Risk Of Job Automation

New $5 Million X Prize for AI That Gives the Best TED Talk

X Prize And IBM Team Up On A New, $5 Million A.I. Competition

10 ways AI can go wrong: artificial intelligence v artificial stupidity

How STEM is Staying One Step Ahead of the Robots

Manual typewriter + servos = polyfingered robot dictaphone

Inside the Artificial Universe That Creates Itself

Automated Teachers, Augmented Reality And Floating Chairs

Killing people with drones is working out great for America, says ex CIA chief

(Image credits)

Audrey Watters

An Invented History of Ed-Tech

1 min read

(via "The Premium Model in EdTech: Aggregators" on Medium.com)

The history offered in this article -- one aimed at startup entrepreneurs and investors -- is incorrect. The University of Phoenix did not invent e-learning. CALCampus was not the first to offer an online curriculum. To be sure, these are assertions made by various online education marketing content farms. But just because you read it on the Internet or saw it in an infographic doesn't make it true. (Startling, I know.)

What I find fascinating about the claims here is that this particular story posits that education technology and online learning were developed by for-profit higher education. It's a convenient narrative, one that allows today's startups to invoke a particular story about ed-tech innovation: that public education has been resistant to technological change and that the "democratization of education" has come, and will come, from markets.

Audrey Watters

A Few Thoughts on Disclosures

3 min read

There was a dust-up the other night on Academic Twitter, when Matt Gold challenged the editors of Hybrid Pedagogy for failing to adequately disclose that their journal was funded by the learning management system Instructure. In response, Hybrid Pedagogy published a couple of open letters, penned by two of its co-founders, Sean Michael Morris and Jesse Stommel, that (I think) were meant to assuage any concerns.

(Disclosure: I have published articles in both Hybrid Pedagogy and Journal of Interactive Technology and Pedagogy, where Gold serves as reviews editor. More disclosure: I consider Stommel, Morris, and Gold to be colleagues and friends.)

I have a pretty lengthy disclosure page on my website, because I think it’s an ethical imperative to be as clear as possible to readers where one stands. I do not believe that there’s such a thing as objectivity; we are all deeply and impossibly subjective, and our writing necessarily reflects that situatedness. This isn’t simply a matter of finances – although my god, it’s absolutely essential that any financial relationships are disclosed. I’ve included information about my academic background, for example, because my experiences certainly shape the way in which I approach the field of education technology. Everything I write is implicated, and I hope that by disclosing as much as I can about myself that readers can have a better picture of me in/and my work.

Who we are, where we come from shapes what we say. Who we are, where we come from shapes what we leave unsaid.

I’m not a journalist by training, but I do think a lot about what it means to do this work in public as ethically as possible. I believe disclosures should appear on a dedicated page – again, for individual writers as well as for the publication itself – as well as on or even in individual articles. For what it’s worth, I’m a fan of how the tech site Re/Code handles this. Each writer has an ethics statement that appears next to their name on every article they publish. (See, for example, founder Kara Swisher’s disclosures.) This statement appears even when there isn't something in a story that needs to be disclosed. Re/Code stands apart from most technology and ed-tech publications that tend to obscure their relationships to investors or to the companies they cover. (Cough, Edsurge, cough.) When an organization values disclosure and when it requires its writers to pen one, I think that everyone becomes a little bit more aware of the way in which their views are informed by the relationships around them. Again, this isn't simply about money. It's about politics. It's about ideology. Disclosures about how the whole process works can reveal a lot about how the "sausage gets made" in pitching, writing, editing, and publishing. That's incredibly important, particularly for those of us who offer critical analysis about various institutional and organizational tendencies to obscure power relations.

And if nothing else, it’s always better to err on the side of disclosure, to share writers’ and publications’ background information and interests early and often and as fully as possible. That’s far preferable to having something come up that calls one’s credibility and independence into question, as I think Gold’s concerns about Hybrid Pedagogy and its relationship with Instructure serve to underscore.

Audrey Watters

The Week in Robots

2 min read

Man Flies Drone Into Empire State Building

Race against the machine in learning ‘jobs’

Microsoft’s massive Turing test – are AI teachers on the horizon?

Finding fractured reality in Japan’s completely gonzo robot cabaret bar

Andy Rubin Unleashed Android on the World. Now Watch Him Do the Same With AI

The Jolly Roger Telephone Co., A Patient Robot That Wastes the Time of Telemarketers

Golf Robot Makes Hole-In-One at PGA Event

This robotic cockroach can survive almost anything

Welcome to the Age of Robot Animals

From Intelligent Diagnostics To The Next Siri: 10 Early-Stage AI Startups to Watch

How to Debug Marco Rubio, According to a Roboticist

Robot Fights Break Out in New Hampshire, Sparking Fears of a Robot Uprising

The truth about sex robots

A Strong Robot Hand with a Softer Side

Microsoft’s Cortana gets sexually harassed, but she fights back

A Drone’s-Eye View of an Intense Drone Race Through, Around and Over a Crowded Warehouse

A robot butler is replacing humans in some California hotels

Love at First Bot

Teaching an AI to play Mario – sociably

Automate This, Not That

An FPV Racing Drone Quickly Descends Burj Khalifa in Dubai, The World’s Tallest Building

What journalists need to know to fly a drone

The Insane, 80 MPH Drone Racing League Launches With Wrecks Aplenty

Google Car Exposes Regulatory Divide on Computers as Drivers

This robot will mix you a drink based on your weird tweets

Google’s Self-Driving AI Counts as a “Driver,” According to the Feds

The iBoardbot is an Internet-controlled whiteboard robot

A Programmable Smart Mannequin That Robotically Adjusts to a Wide Range of Body Measurements

Artificial Intelligence Offers a Better Way to Diagnose Malaria

Watch this beautiful ballet of drones (also, Disneyland Drones!)

Droneboarding, A New Variation of Snowboarding Using Powerful Drones to Pull Riders Across Snow

(Image credits)

Audrey Watters

Social Media, Algorithms, and Work

2 min read

A year or so ago, I left Facebook. I didn’t just delete the app from my phone; I deleted my whole account. I made a point of emailing close friends and family and letting them know ahead of time that I was planning on ditching out. I gathered email addresses and phone numbers so that I had other ways of contacting folks.

Kin’s stayed on Facebook, of course, and he’s kept me up-to-date on the kinds of things that seem to be posted only there: pictures of kids, birth and divorce announcements, job updates, vacation photos, new pets, and the like. I don’t feel like I’ve missed much – on an interpersonal level.

What I have probably missed is traffic to my stories. Paying attention to social media – for better or worse – has become another part of the “work” I do as a writer. And Facebook drives a lot more traffic than does Twitter, which up ’til now has been my social network of choice.

There are a lot of reasons I left Facebook, most of which involve issues of privacy and algorithmic control. But now that Twitter is introducing the latter, I’m feeling much less loyal to a platform I’ve already thought about abandoning many times over the last few years because of harassment. I mean, if I have to play with social media in order to promote my work and if the reach of my work is now increasingly dictated by algorithms beyond my control or scrutiny, then why not utilize Facebook’s bigger platform?

Audrey Watters

The Week in Robots

2 min read

Can Drone Pilots Be Heroes?

When Class Is Run by a Robot

How One Intelligent Machine Learned to Recognize Human Emotions

Drone Lobbying Heats Up on Capitol Hill

The Rise of the Artificially Intelligent Hedge Fund

An Arduino Controlled Robot Capable of Solving a Rubik’s Cube in Just Over One Second

Microsoft Open Sources Its Artificial Brain to One-Up Google

Next Big Test for AI: Making Sense of the World

Marvin Minsky, Pioneer in Artificial Intelligence, Dies at 88

How Drones Are Helping Wineries Weather California’s Drought

Robot-Proof: How Colleges Can Keep People Relevant in the Workplace

Remembering Minsky

Your Future Self-Driving Car Will Be Way More Hackable

Marvin Minsky’s Marvelous Meat Machine

What Marvin Minsky Still Means for AI

The Drone Racing League, A Professional Sports League Dedicated to First Person View Drone Races

The race for the master algorithm has begun

The Artificial Intelligence That Solved Go

When Your Neighbor’s Drone Pays an Unwelcome Visit

Google’s AI Masters the Game of Go a Decade Earlier Than Expected

In a Huge Breakthrough, Google’s AI Beats a Top Player at the Game of Go

AlphaGo: using machine learning to master the ancient game of Go

Will Machines Eliminate Us?

The AI Goblin and the coming Internet of Trust

Go, Marvin Minsky, and the Chasm that AI Hasn’t Yet Crossed

How DARPA Took On the Twitter Bot Menace with One Hand Behind Its Back

Audrey Watters

Coursera and the MOOC Fees

3 min read

Last week, Coursera announced that it was going to start charging for certificates for more of its classes, particularly those associated with "specializations." Students will still be able to sign up for free to watch videos; but they won't be able to have their assignments graded and they won't be able to get certificates without coughing up some money (between $30 and $100). So much for "democratizing" education.

Inside Higher Ed's Carl Straumsheim sent me an email this morning, asking for my thoughts on the significance of Coursera's move. I'm posting here what I wrote to him:

Significant, yes. But not surprising.

Or rather, I’m not surprised that Coursera has finally made its intentions clear. The move to charge for certificates has been inevitable from the outset — particularly with the amount of venture capital that the company has raised. It's needed to develop a viable business plan (although I’m not fully convinced that this move will be it. I think Coursera is in the most precarious position of the “o.g.” MOOC providers).

It’s always been striking to me that these three — Udacity, Coursera, edX — believed that the certificates (free or paid) they offered would be widely accepted, either by employers or by educational institutions. I’m not sure that it’s been true in either case. As these companies have been so wrapped up in the narrative that college isn’t “worth it,” I think they have overestimated the value of their offerings. It helped, no doubt, when they could connect this value to the ideals of “free” and “open.” Do employers value a Coursera certificate? I’m not sure — particularly if a job applicant doesn’t already have a bachelor’s degree. (This is where Udacity’s close partnership with the tech sector seems to be a little different than Coursera’s reliance on the institutional brand to denote the certificates’ value.)

I think it’s interesting too that it isn’t just the certificate that costs money now; it’s having one’s assessments graded. I know that, at the outset, Coursera was boasting that it could automate this process. But none of the MOOCs have really been able to do so. (Udacity has a huge team of piecemeal graders in India, and that’s one of the jobs it promises with its new job guarantee. LOL.) Coursera’s whole “peer grading” process seems to have fallen by the wayside too. The assessment piece is the most labor-intensive part (particularly since the instruction aspect is just videotaped). I’m not sure partner universities have been too thrilled by having those costs offloaded onto them.

Why, it’s almost as though they didn’t think any of this through and simply believed their own TED-induced hype.

 

Audrey Watters

The Week in Robots

2 min read

Taskmaster robots watch while you work in case you miss a step

Drone swarms will change the face of modern warfare

Apple Buys Artificial-Intelligence Startup Emotient

Could AI Solve the World’s Biggest Problems?

Is 2016 the year you let robots manage your money?

Are Drones the Future of Food Delivery?

You Can Ask A Robot To Review Your Investment Portfolio

They’re Racking up the Miles, but Are Self-Driving Cars Getting Safer?

China’s Baidu Releases Its AI Code

Should We Outsource Emotional Labor to Robots?

Alphabet Shakes Up Its Robotics Division

There’s No Such Thing as a Computer-Authored Work – And It’s a Good Thing, Too

The (un)certainty of US drone strikes

IBM is at it again; more lies about Watson and AI

For Now, Self-Driving Cars Still Need Humans

Nearly half of young people fear jobs will be automated in 10 years

Amazon’s serious about drones: Prime Air UAVs will carry 5-lb. packages 10 miles in 30 minutes

The Underwater Robot That Will Repair Fukushima

U.S. Proposes Spending $4 Billion on Self-Driving Cars

IHMC’s ATLAS Robot Learning to Do Some Chores

Robots Could Make the Supreme Court More Transparent

Don’t Blame Watson for IBM’s Slide

A Robotic Suitcase That Follows Its Owner, Avoids Obstacles and Pairs With a Smartphone App

District Approves Drone Use for Educational Purposes

Obsessing Over AI Is the Wrong Way to Think About the Future

Audrey Watters

Twitter Communities and Belonging

2 min read

This comment was posted on Jon Becker’s blog.

How do we separate the “personal” from the “professional”? I’m not sure it’s ever possible; I don’t think it would be desirable.

There are lots of times I see Tweets on topics I don’t care about: television shows, awards shows, baseball games, presidential debates, and so on. It’s not that hard to ignore them. (I use Tweetbot, for what it’s worth, which does make it easy to mute individuals and mute hashtags.) Me, I’m far more likely to unfollow educators who tweet uncritically about ed-tech than I am to unfollow those who tweet about breakfast or basketball.

What I find interesting about your comments here is that you feel as though you are not a part of a sports-Twitter community in quite the same way as you are the education-Twitter one (and perhaps, by extension, that being more a part of the former diminishes your position in the latter). But do you think these two communities have similar practices, shared values? Do you think “community” looks the same when it comes to talking about these two topics on Twitter? How are they shaped – differently? – by loyalty, by fandom (and I realize that’s just one small part of sports-Twitter and, arguably, one small part of education-Twitter too)? How are they shaped by collaboration and not competition?

I wonder if, even with a sports-focused account, you’d find the same sort of connection via sports-Twitter that you’ve found via education-related circles.

Audrey Watters

FERPA and the Privacy Façade

2 min read

A response to a Jim Groom blog post here.

I think you nailed it when you say that what counts as an "education record" -- and is therefore protected by FERPA -- is really unclear. Here's the Department of Education's completely unhelpful definition: "The term 'education records' is defined as those records that contain information directly related to a student and which are maintained by an educational agency or institution or by a party acting for the agency or institution." I think we recognize that this includes transcripts, grades, and so on. But does an education record include metadata about how frequently a student uses a piece of software? Does it include their IP address?

Schools and their vendors are allowed to collect and share data if it has an "educational purpose" but WTF is that? With our current obsession with educational data, I can see people arguing that every click a student makes on a computer has an "educational purpose."

FERPA purportedly prevents the disclosure of personally identifiable information without consent. Clicking "I agree" to the Terms of Service does count as consent, and I highly doubt students (or their professors) read what they've agreed to. Companies can also claim that the data they're collecting is not personally identifiable. Again, these definitions -- and how they're full of weasel words -- matter.

Audrey Watters

The Year in Review: Other Speaking Engagements

1 min read

In previous years, I've written a review of my years "in numbers," counting the number of keynotes, the number of cities I visited, the number of bands I've seen, etc. I need to write my year in words and tears instead this year. Nonetheless I'll make note of some of those numbers here.

Other speaking engagements:

A panel at API Strategy and Practice. A webinar with Alan Levine on Connected Learning TV. A webinar with Bryan Alexander on "Developments in Higher Education Educational Technology: The Horizon Report in Action." A workshop at ICDE. A workshop at the Digital Pedagogy Lab. A guest appearance on TWIT.tv.

Audrey Watters

The Year in Review: Publishing Elsewhere

1 min read

In previous years, I've written a review of my years "in numbers," counting the number of keynotes, the number of cities I visited, the number of bands I've seen, etc. I need to write my year in words and tears instead this year. Nonetheless I'll make note of some of those numbers here.

Publishing beyond my own domain:

My work appeared in Europa World of Learning, Boundary2, World Innovation Summit for Education blog, The Kernel, Bright, Hybrid Pedagogy, The Journal of Interactive Technology and Pedagogy, Inside Higher Ed, and Tech & Learning.

Audrey Watters

The Year in Review: Keynotes

1 min read

In previous years, I've written a review of my years "in numbers," counting the number of keynotes, the number of cities I visited, the number of bands I've seen, etc. I need to write my year in words and tears instead this year. Nonetheless I'll make note of some of those numbers here.

Keynotes and Presentations:

 

Audrey Watters

The Year in Review: Travel

1 min read

In previous years, I've written a review of my years "in numbers," counting the number of keynotes, the number of cities I visited, the number of bands I've seen, etc. I need to write my year in words and tears instead this year. Nonetheless I'll make note of some of those numbers here.

Where I travelled *:

  • San Francisco
  • Toronto (twice)
  • Sydney
  • Washington DC
  • Berlin
  • Barcelona (thrice)
  • Eugene (twice)
  • Philadelphia
  • Casper
  • Madison
  • Amsterdam
  • Brooklin
  • Sun City
  • Pretoria
  • Olympia
  • Austin

* I only count cities in which I stayed overnight.

Totals:

  • 16 different cities
  • 4 continents

Audrey Watters

The Week in Robots

1 min read

The billion-dollar robot question – how can we make sure they’re safe?

Why Do I Have to Call This App “Julie”?

automation is coming to a job near you

Watson Medical Algorithm

A Master Algorithm Lets Robots Teach Themselves to Perform Complex Tasks

AI Machine Learns to Drive Using Crowdteaching

The kid who unlocked the iPhone just built a self-driving car in his garage

Your Algorithmic Self Meets Super-Intelligent AI

Marcel Hirscher nearly hit by falling drone camera in slalom run

Start-Up Personalizes Books for Children, With Robot as Co-Author

Boston Dynamics’ Robo-Dogs Pulling a Sleigh Is a Terrifying Glimpse of Christmas Future

Ford and Google Could Be Making the Model T of Automated Driving

With a Handshake and Selfie, Another World Leader Surrenders to the Robots

Madeline the Robot Tamer

A Silicon Valley for Drones, in North Dakota

Audrey Watters

My favorite pigeon

1 min read

So if I do pick one pigeon picture for the new Hack Education "look," it's likely this one:

Pigeon

Audrey Watters

Still More Pigeon Pics

1 min read

More pigeons:

Allow Me To Repeat Myself

Day 4 | Mamma pigeon's just a few feet away from the nest. And pappa pigeon is away as well. Wondering how she left the eggs unattended like this. First time parents maybe!

Pigeons

Pigeons

Birds of a Feather

Pigeon portrait

Pigeon - HSK_9944

pigeons, lovejoy fountain

The Evil Pigeon

Common Wood Pigeon

My Friend.

Pigeon One Step

pigeon

Pigeon face

Pigeon

pigeon portrait

Pacific Band tailed pigeon (Patagioenas fasciata)

Pigeon

Pigeons in Love

(Nice photo, but I hate the gendered implications.)

Pigeon

(Unnecessary coloration)

Common Wood Pigeon

Audrey Watters

More Pigeon Pics

1 min read

I'm obsessed... I can't stop looking for pigeon pictures, and thinking about how well pigeons still work as my major metaphor for ed-tech.

pigeon portrait

Pigeon

Victoria Crowned Pigeon

The pigeon man

Blue Crowned Pigeon

(Some of these pigeons would take none of B. F. Skinner's shit)

Plum-face

Sulawesi Green Imperial Pigeon

Common Wood Pigeon

Audrey Watters

End-of-the-Year Projects

1 min read

Now that I've finished my "Top Ed-Tech Trends of 2015" series, I want to turn to some other projects to wrap up the year. I still have to finish recording audio for The Monsters of Education Technology and The Revenge of the Monsters of Education Technology. I need to do some more work on the "Adopt-a" efforts for which Kin has received a grant.

And I need to give all my sites a lick and a polish. I need to make sure that the code for the Jekyll template I'm using is up-to-date. And I want to change the header image. That is, I want to change out one pigeon picture for another.

I've spent the afternoon looking at pigeon pictures on Flickr. (Side note: I want to throttle people who license their photos openly but place text with a copyright notice on the image.)

I can't really find one photo that I like more than the one I use now. So I think I'm going to use a different pigeon photo for each project (which is, in turn, its own GitHub repo).

Here are some of the images I really like:

Untitled

Dirty Damn Bird

Tamduva / Domestic Pigeon

Pigeon

There are not the  pigeons that you are looking for

Clone vs Pigeon 2° try

Photogenic Pigeon

Audrey Watters

The Week in Robots

1 min read

How Elon Musk and Y Combinator Plan to Stop Computers From Taking Over

Tinder robot behaves to palm sweatiness

Do Robots Dream of Electric Arts Council Grants?

Elon Musk’s Billion-Dollar AI Plan Is About Far More Than Saving the World

Alpha 1s, A Programmable Humanoid Robot That Sings, Dances & Does Kung Fu

Getting a Drone for Xmas? You’ll Have to Tell the Feds

Drone Registration Rules Are Announced by F.A.A.

Ride-Along in One of China’s First Self-Driving Cars

What Will It Take to Build a Virtuous AI?

New XPRIZE competition offers $7 million for the best ocean-exploring robots

Now AI Machines Are Learning to Understand Stories

Drone Maker Lily Announces a Product Delay and New Funding

The best Twitter bots of 2015

Can This Man Make AI More Human?

Baidu’s Deep-Learning System Rivals People at Speech Recognition

California D.M.V. Stops Short of Fully Embracing Driverless Cars

SynTouch Is Giving Robots the Ability to Feel Textures Like Humans Do

FAA Finally Admits Names And Home Addresses In Drone Registry Will Be Publicly Available

How Machines Write Poetry

How to 3-D-Print a Hydraulic-Powered Robot

How Twitter Bots Turn Tweeters into Activists

Audrey Watters

A Few (Non-Spoiler) Thoughts on The Force Awakens

3 min read

Kin and I saw the new Star Wars movie last night. With the exception of the first movie back in 1977, I’ve seen all of the films on opening day. (I saw Episode IV at the Cooper Theatre in Denver, but not on opening day. It was a magnificent Cinerama theater that, unfortunately, no longer exists.) I was six. That movie and its sequels provided the storylines for much of my imagination, for much of my childhood play.

It's something I wanted to share with my son. When The Phantom Menace came out, Isaiah was five. Our tickets were for a morning show, and I had to pull him out of school. His kindergarten teacher was so rude, challenging my priorities and decision-making. I remember sitting through Episode 1, so disappointed in the movie but so doubtful too of my own parenting decisions.

I didn’t want to be excited about The Force Awakens. But there’s something about that John Williams soundtrack. There’s something about those characters. I was swept away, and I admit, I was a little emotional as the opening credits came on screen.

The movie is good. It’s not great. But it’s good. I remember that my first reaction to the news that George Lucas had sold the Star Wars franchise to Disney was disgust. But Lucas had been such a terrible steward of the story; how could Disney be any worse, I eventually came to believe. And that’s what I felt last night: JJ Abrams was a better steward and storyteller. He certainly gave us better dialogue, and the characters – new and familiar – were pretty well developed. I cared about them in ways I never cared about anyone in Episodes 1 through 3. Hell, those movies made me dislike Obi-Wan.

I don’t want to write any spoilers, but there’s a scene near the end of Episode VII that made the heart of childhood me explode with joy. It’s something I wish I’d had as a girl growing up with IV, V, and VI. It’s something I wish were in the movies that Isaiah saw as a kid too. As someone who loves science fiction but has come in the last few decades to really resent Star Wars, I am very thankful that what I watched last night gave me back something to love.

Audrey Watters

The Week in Robots

1 min read

Tea Making Robot Teforia Brews Up $5.1 Million In Seed Funding

The Three Laws of Robotics

Google and Facebook Race to Solve the Ancient Game of Go With AI

China Wants to Replace Millions of Workers with Robots

Facebook Open Sources Its AI Hardware as It Races Google

The ethical dilemma of self-driving cars

Baidu’s Self-Driving Car Takes On Beijing Traffic

The Internet Is for Humans, Not Robots

AI breakthroughs that made 2015 a landmark year for computers

Machines Outdo Humans in Identifying Characters

Google’s Verily Is Spinning Off ‘Verb,’ a Secretive Robot-Surgery Startup

Risk to Aircraft From Drones Being Debated

This AI Algorithm Learns Simple Tasks as Fast as We Do

A Learning Advance in Artificial Intelligence Rivals Human Abilities

Facebook Joins Stampede of Tech Giants Giving Away Artificial Intelligence Technology

A Billion-Dollar Effort to Make a Kinder AI

Elon Musk Snags Top Google Researcher for New AI Non-Profit

Audrey Watters

The Week in Robots

1 min read

‘World’s sexiest robot’ causes a frenzy at Beijing tech conference

Penn State imagines a world where robots write the books

Automation is a Job Engine, New Research Says

How Facebook’s AI Researchers Built a Game-Changing Go Engine

Teaching AI to Play Atari Will Help Robots Make Sense of Our World

A Supercharged System to Teach Robots New Tricks in Little Time

Computers learn to create photos of bedrooms and faces on demand

“Sign Dolls”: Robot Sign Wavers

Walk-Man humanoid robot could be future of dangerous work

Re-understanding “Understanding Computers and Cognition”

On Cyber Monday, Friendly Robots Are Helping Smaller Stores Chase Amazon

An Amazon Drone is No Longer Just a Temp Worker in Its Warehouse

Drone Pilots have Bank Accounts and Credit Cards Frozen by Feds for Exposing US Murder

Wikipedia Deploys AI to Expand Its Ranks of Human Editors

Artificial Intelligence Aims to Make Wikipedia Friendlier and Better

Drone Videos Could Help Amazon Sell Prime Subscriptions

Hundred-foot tall, century-old smokestack collapses on an excavator. A drone films it.

Artificial intelligence service gives Wikipedians ‘X-ray specs’ to see through bad edits

Audrey Watters

Gratitude 2015

2 min read

2015 has been a really terrible year for me, and I’m glad that we’re quickly approaching its end. It’s not as though things get magically better when we flip the calendar over to January. But I’m looking forward to 2016 nonetheless. Ugh. Except for that whole presidential election thing.

Despite the awfulness, I have so much to be grateful for: my health, my work, my friends, my family.

I’m thankful for Kin, who has put up with me during all the times I’ve collapsed into tears and fury. I’m thankful for Fred – that we could bravely face a pretty shitty situation in Casper together. I’m thankful for Isaiah, who has proven to be the bravest kid. I'm thankful for my mom's support through all of this.

I’m thankful for all the people from whom I learn every day, particularly Tressie and Jose. I’m thankful for those friends who’ve cried with me. I’m thankful for Chris, who’s the busiest person I know and still has time to be a solid friend. I’m thankful for Kate, who I finally met face-to-face this year and who has held me while we stared into a deep pit of despair.

I’m thankful for the people who read my work. I’m thankful for the people who listen to me speak – particularly those who know I’m going to say something that makes people feel a little uncomfortable and who still welcome me at their events.

I’m thankful that there are people who care, that there are people who take to the streets in protest, that there are those who work tirelessly for social justice.

I’m thankful I live in the sunshine, even when the world seems so dark.

Audrey Watters

The Week in Robots

2 min read

The Doomsday Invention

Our Robotic Children: The Ethics of Creating Intelligent Life

The fraudulent claims made by IBM about Watson and AI. They are not doing “cognitive computing” no matter how many times they say they are.

Man whose drone got too close to LAPD helicopter given three years’ probation

The Machine-Vision Algorithm for Analyzing Children’s Drawings

Microsoft Machine Learning Advances to Sensing Emotions

Google Open-Sourcing TensorFlow Shows AI’s Future Is Data

Drone Registration Expected to Be Simple, F.A.A. Says

Scary robot lumberjack makes deforestation too easy

Artificial Intelligence Draws New Connections for Personalization

How Robots Can Quickly Teach Each Other to Grasp New Objects

The US Military Used Lasers to Shoot Down a Drone in 1973

Silly Robots!

AI Advances Make It Possible to Search, Shop with Images

Drone Maker DJI Adds Technology to Limit Where Its Machines Can Fly

A Robotic Tabletop Makes Simple Structures All by Itself

Our devices are not turning us into unfeeling robots

IBM’s Watson Technology Powering New (Free) Holiday Shopping Trends and Products Forecast App for iOS

DIRO the Bear, A Smart Teddy Bear Robot Toy That Can Listen, Think & Learn

Robotic Cats to Keep Seniors Company

Audrey Watters

The Week in Robots

2 min read

FAA Will Test Drones’ Ability to Steer Themselves Out of Trouble

Cop pulls over Google self-driving car, finds no driver to ticket

Are Robots Threatening Jobs or Are We Taking Them Ourselves Through Self-Service Automation?

Photographers and Filmmakers Using Drones to Reach New Heights

More universities are adding drone programs

The paradox of automation’s “last mile”

Crowdfunded robot dragonfly project in trouble

TensorFlow: smarter machine learning, for everyone

Google Just Open Sourced TensorFlow, Its Artificial Intelligence Engine

Google Offers Free Software in Bid to Gain an Edge in Machine Learning

TensorFlow, Google’s Open Source AI, Signals Big Changes in Hardware Too

Drone video: IHOP parking lot collapses, swallowing cars

What Google’s New Open-Source Software Means for Artificial-Intelligence Research

The Dream Life of Driverless Cars

Gentlemen, Start Your Drones

MIT’s shape-shifting bot can be a phone, lamp or exoskeleton

Robot Makes Sure Stores Don’t Run Out of Doritos

Google’s robot group struggles to fill leadership vacuum as it shoots for ambitious launch before 2020

The hot new job in Silicon Valley is being a robot’s assistant

Artificial intelligence: ‘Homo sapiens will be split into a handful of gods and the rest of us’

Two Pranksters Built a Pizza Rat Robot to Terrify New Yorkers

Audrey Watters

Top Ed-Tech Trends of 2015?

1 min read

In a few weeks, I'll start my annual look back at the year in ed-tech. I still haven't narrowed down the "top trends" to just ten. Nor have I started the process of poring back through each week's news round-up that I write. But as it stands, here's my running list (some of which can obviously be consolidated, changed, expanded):

  1. The Business of Ed-Tech
  2. The Politics of Ed-Tech
  3. Privacy, Data, and Algorithms
  4. Outsourcing and Privatization
  5. Free College?
  6. The Collapse of For-Profit Higher Ed (Or Not)
  7. Online Education (the Hype formerly Known as “MOOC”)
  8. The Common Core State Standards
  9. Opt-Out and Testing
  10. Competency-Based Education
  11. School as Skills Training - “The Employability Narrative”
  12. Annotation (and the Indie Web)
  13. Free Speech and Social Media

Audrey Watters

The Week in Robots

2 min read

Do we love robots because we hate ourselves?

This Surveillance Drone Never Needs to Land

Automation Will Change Jobs More Than Kill Them

Elon Musk Admits Humans Can’t Be Trusted with Tesla’s Autopilot Feature

Toyota Invests $1 Billion in Artificial Intelligence in U.S.

Robot to Attend School for 10-Year-Old Cancer Patient

Soon, Gmail’s AI Could Reply to Your Email for You

Facebook’s Artificial-Intelligence Software Gets a Dash More Common Sense

Facebook Aims Its AI at the Game No Computer Can Crack

Quadcopter attacked by swarm of bees

Skype Founders Build a Robot for Suburban Streets

The MotoBot, An Autonomous Motorcycle-Riding Robot That Claims to Surpass Human Skill

Academic Search Engine Grasps for Meaning

Is artificial intelligence going to wipe us out in 30 years?

Augury’s Gadget Lets Machines Hear When They’re About to Die

MIT PhD Student Builds an Autonomous Drone Capable of Avoiding Obstacles While Flying at 30mph

Self-driving cars involved in more crashes than normal vehicles

Why Japan Is Building a Fleet of ‘Robot Taxis’

Baidu, the ‘Chinese Google,’ Is Teaching AI to Spot Malware

Artificial Nose “Smells” When Food Is About to Go Bad

Magic cards generated by neural networks

Audrey Watters

The Week in Robots

2 min read

Inside the Hunt for the ‘Master Algorithm’ of Artificial Intelligence

Inside the surprisingly sexist world of artificial intelligence (I’m sorry. But why are we surprised?)

‘Killer Robots’ Must Be Stopped, Activists Say

Why We Should Welcome Our New Robot Overlords

Should We Be Excited or Terrified About AI Politicians?

Robot Radiologists Will Soon Analyze Your X-Rays

If We Want Humane AI, It Has to Understand All Humans

Drone carrying cellphones, drugs, hacksaw blades crashes at Oklahoma prison

Walmart Seeks Permit to Do Tests With Drones

A Drone with a Sense of Direction

Robots Can Now Teach Each Other New Tricks

Facebook’s AI Can Caption Photos for the Blind on Its Own

Intel Is Building Artificial Smarts Right Into Its Chips

The 1950s “Miracle Kitchen” of the Future Had Its Own Roomba

Finding Artificial Intelligence Through Storytelling — An Interview with Dr. Roger Schank

Haiku by a Robot

See MIT’s odd new jumping cube robots

Become Horrifying With Baidu’s Creepy, Cool Face-Morphing AI App

The Tricky Challenge of Making Machines That “See”

An Elementary School Class Performs Kraftwerk’s ‘Die Roboter’

Marvin Minsky Reflects on a Life in AI

Audrey Watters

Visualizing the Educational Industrial Complex (Poorly)

4 min read

This snapshot of the “US Educational Industrial Complex” made the rounds via social media over the weekend. The title says “snapshot,” I realize, but I have to point out that it’s missing a lot of information, particularly related to educational technology and venture capital.

It highlights “biggest players” as the Gates Foundation, the Walton Family Foundation and the Broad Foundation. Indeed these philanthropic organizations do grant hundreds of millions of dollars a year to various ed-reform efforts. But reform isn’t simply a matter of philanthropy; it’s a matter for business too.

The image posits several key nodes that aren’t philanthropic orgs: Pearson and Salmon River Capital, for example. I’m not certain why the latter is the venture capital firm highlighted. Its portfolio includes Capella Education Company (parent company to the for-profit Capella University) and Parchment (the latest startup by Blackboard founder Matthew Pittinsky, whose name is misspelled on the image).

The image does link Salmon River Capital to other powerful nodes, notably through Ted Mitchell who now serves as the Undersecretary of Education. But Mitchell was also the CEO of NewSchools Venture Fund – a fund supported in part by the Gates Foundation and I'd contend far more influential in ed-reform than Salmon River Capital. NSVF invests heavily in the charter school and ed-tech startup space. Its portfolio includes ClassDojo, Edsurge, the Edcamp Foundation, Khan Academy, KIPP, and Rocketship Education.

Also missing: Learn Capital, another venture capital firm that invests exclusively in education, whose biggest limited partner is Pearson. Learn Capital’s portfolio includes AltSchool, Coursera, Edsurge, and General Assembly.

Also missing: Deborah Quazzo, the founder and managing partner of GSV Advisors, another venture capital firm that invests heavily in education. She resigned this summer from her position on the Chicago Public Schools Board, following scrutiny into how her investment portfolio profited from doing business with the district. GSV’s portfolio includes 2U, Coursera, Edsurge, Dreambox Learning, and General Assembly.

My point (in part) is that the “educational industrial complex” is incredibly complex, and this particular image does not depict that very well or very thoroughly. Take 2U, for example, mentioned above. Its founders: John Katzman, best known, perhaps, as the founder of the test prep company The Princeton Review. He’s also the founder of a company called Noodle and an investor in ed-tech startups, including Edsurge. 2U’s other founder (and current CEO) is Chip Paucek, who was also once the CEO of Hooked on Phonics. 2U, before IPOing last year, shared investors with Blackboard and Parchment.

I’m fascinated by these relationships. It’s partially why I track on ed-tech startup investment. I want to be able to draw a better map – although in my case I use words and not visualizations.

And even if the data in the Educational Industrial Complex map is partial, I worry that the visualization aspect makes it even more misleading. We can’t tell how much money (how much power) these different nodes contribute to shaping education policy. There is no data for us to view for ourselves: how much does the Gates Foundation invest? How does that compare to the Broad Foundation? How much money does Pearson make from the GED? How does that compare to the SAT? To its Common Core-related exams? Do the lines and the bubbles accurately visualize the money or the power? I’m not convinced that they do.

As I've stated already, it's worth asking: what data is missing here? A lot, I’d argue. (A line between the SAT and Khan Academy and the Common Core, for starters. The world of venture capital and its connections to the Common Core and to the Gates Foundation and to the Department of Education.) What "counts" as "education reform"? Who counts and who's implicated -- and according to this graphic at least, who's let off the hook?

Audrey Watters

The Revenge of the Monsters of Education Technology (Forthcoming)

2 min read

(I think) I’m done with speaking engagements for the year. As I did last year, I’m going to publish my talks as a book. And as it’s a follow-up to last year’s book, I’m calling it The Revenge of the Monsters of Education Technology.

I plan to spend the next few weeks editing the chapters. I’m also going to record audio for it and for The Monsters of Education Technology. The audio version(s) will be bundled with the electronic version. I will make copies available in a variety of formats on my site, but will sell them via Gumroad (and possibly other platforms).

I still need to decide the sub-sections and the order of the chapters. I still need to write an introduction and a conclusion. But my goal is to have this ready to hit "publish" by the end of November – just in time to begin work on my “Top Ed-Tech Trends of 2015” series.

Here are the chapters (so far):

  1. The Algorithmic Future of Education
  2. Technology Imperialism, the Californian Ideology, and the Future of Higher Education
  3. Existing Digitally
  4. And So, Without Ed-Tech Criticism…
  5. Teaching Machines and Turing Machines: The History of the Future of Labor and Learning
  6. Is It Time to Give Up on Computers in Schools?
  7. Learning Networks, Not Teaching Machines
  8. The Golden Lasso of Education Technology
  9. Ed-Tech’s Inequalities
  10. Men (Still) Explain Technology to Me: Gender and Education Technology
  11. The History of the Future of Education

Audrey Watters

A New Hope

1 min read

In May 1999, I pulled my kid out of kindergarten early so we could see Star Wars Episode 1 on opening day. (His teacher was so pissed.)

I was pissed; the movie was terrible. Nonetheless, I just bought tickets for opening day of Episode 7 for the kid and me.

To be honest, I've become more of a Star Trek person. That is, I prefer Gene Roddenberry's fantasy to George Lucas's. Politically. Lucas's films were so racist, so sexist. (I wish I could remember the name of the article we read in film class that compared Star Wars to Birth of a Nation.)

So while I was nervous about J. J. Abrams' revisioning of the Star Trek canon, I'm eager to see him hack at Star Wars'. If the trilogy follows a white woman and a black man, I will happily watch these films. Take my money, Disney.

Wait, who am I kidding? It's likely I'll happily watch them regardless.

Audrey Watters

Hype and Hope: Thoughts on my ICDE Workshop

3 min read

The last couple of workshops that I've given have been fairly technical – GitHub for Beginners sort of thing. But I wasn’t sure how technical I could or should be for the ICDE crowd. Plus the title I’d been assigned was “The History of Education Technology: Hype or Hope.” That’s not a particularly workshop-y title, so I was pretty stumped about what to say. And that was compounded by the fact that I had no idea who the participants would be or what they’d want to do.

I made some slides – available here – but mostly winged it.

The room was small with four circular tables, so I decided small group discussion was the way to go. Most of those in the workshop – I didn’t get a headcount, but I’m guessing 30–35 participants – were from the global South.

I asked the groups to talk about which education technologies they thought were the most hyped (and in which they had the most hope). And I am only jotting down these notes here so that I remember their response, because I think it was quite revealing:

Overwhelmingly, the room agreed that mobile learning was the most hyped ed-tech.

I couldn’t help but think about all the VCs and (ed-) tech entrepreneurs I’ve seen hold up their smartphone and extol the coming “mobile (learning) revolution”: “everyone has a cellphone now.” Well, no. Not everyone. And even when they do, those phones do not necessarily have Internet access. And even when phones have data plans, that still doesn’t mean that mobile learning is adequate or sufficient or a “revolution” or even a good idea.

It was fascinating to hear about African educators’ response to the hype about mobile learning: the challenges of reading educational materials on a phone, for example; the challenges of watching educational videos (either streaming or downloaded), the lack of experience using technologies for school and not merely for entertainment. Some of these concerns are common everywhere, I reckon. I think what struck me was that saying mobile learning was “the most hyped” really got to the core of some of the problems with ed-tech’s promises. I mean, MOOCs and OER were mentioned in the group discussions too – but these all fall under (into?) this larger category of mobile learning, I’d argue.

So fundamentally, I think, ed-tech remains overhyped – from the specific to the general.

Audrey Watters

The Week in Robots

1 min read

Stephen Hawking: robots could give us all material abundance, unless rich people hoard all the wealth

Robot See, Robot Do: How Robots Can Learn New Tasks by Observing

Computer Scientists Wield Artificial Intelligence to Battle Tax Evasion

Anti-surveillance activists send a drone to pamphlet-bomb an NSA complex in Germany

When Robots Come To Pray

Watch Drones Drop Thousands of Moths on Crops

Deep Learning Robot Takes 10 Days to Teach Itself to Grasp

Hacking Wireless Printers With Phones on Drones

F.A.A. Proposes Big Fine for Drone Violations

Thought process: Building an artificial brain

FAA Seeks Record $1.9M Fine Against Drone Company, Claims It Endangered “The Safety Of Our Airspace”

Facebook’s Internet Drone Team Is Collaborating with Google’s Stratospheric Balloons Project

America’s Long History of Hiding Drone Deaths

Can We Shape the Robot Revolution?

A Smart Bomb in Every Garage? Driverless Cars and the Future of Terrorist Attacks

Robot dog can navigate unknown terrain with the help of a flying drone

Betamax and chill but one of you is a robot and the other is a seal

Audrey Watters

Ed-Tech as D&D

3 min read

I’m supposed to run a workshop next week in Sun City, and I really have no idea what I’m going to say or what we’re going to do. The title is something about the hype and hope and history of ed-tech. Not sure how to “workshop” that.

One idea I had, which I am not certain is going to work for an international audience, would be to write imagined futures of education – design fiction for ed-tech. Or specifically ed-tech as D&D.

I often invoke Bruce Sterling’s 2013 talk on “fantasy prototypes and real disruption” as he talks there about dragons – in this case, tech startups’ dragons: part of a

tacit allegiance between the hackerspace favelas of the startups and offshore capital in tax-avoidance money-laundering, building a globalized networked society. …We’re all auto-colonialized by the austerity. That’s your big dragon. That’s your actual dragon. And as long as you are making the rich guys richer you are not disrupting the austerity. You are one of its top facilitators.

And so I think about this a lot: what are education’s dragons? What are education technology’s dragons? How is ed-tech actually a facilitator for education's biggest dragon?

Slaying that dragon – that’s an epic tale, right? It’s fantasy, sure. It is also, as Sterling suggests, the basis for a fantasy prototype, where we can use tales to think about the future that we’re working towards (or the future that we’re working against).

Let’s imagine, in a D&D sort of framework, if we “roll” a character: Charisma. Intelligence. Wisdom. Strength. Grit (LOL), I mean Constitution. Dexterity. Choose a Class. (Oh – you get to choose a Class in D&D.) Choose a Race. (Mhmmm. Yeah. Choose that.) Choose an Alignment. Pick your Gender. (Pick! Roll!) Imagine your backstory.
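
A throwaway sketch, just to make the “rolling” concrete: the attribute names below come from the paragraph above, while the classes and alignments are placeholders I’ve made up for illustration, not anything from an actual rulebook or from the workshop plan itself.

    import random

    ATTRIBUTES = ["Charisma", "Intelligence", "Wisdom", "Strength", "Constitution", "Dexterity"]
    CLASSES = ["Teacher", "Student", "Administrator", "Entrepreneur"]  # placeholders
    ALIGNMENTS = ["Open", "Proprietary", "Neutral"]                    # placeholders

    def roll_attribute():
        # Classic 3d6: sum three six-sided dice.
        return sum(random.randint(1, 6) for _ in range(3))

    def roll_character():
        # Roll every attribute, then pick a class and alignment at random.
        return {
            "attributes": {name: roll_attribute() for name in ATTRIBUTES},
            "class": random.choice(CLASSES),
            "alignment": random.choice(ALIGNMENTS),
            "backstory": "to be imagined by the player",
        }

    print(roll_character())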

Then, how would we, in a D&D framework, move that character through the “dungeons” of education? What are the obstacles, traps, monsters, rewards? (What are the technological obstacles, traps, monsters, rewards?)

Anyway, it’s not a fully fleshed out workshop by any means, but I’d sure love to be a Dungeon Master and eventually walk a group through an adventure like this…

I'd love to stew some more with more folks on ed-tech's dragons. On who gets to be ed-tech's heroes. What are the narratives we write for them? What are the maps someone (the dungeon master -- who is that again?) has ordained for adventurers to walk through? How do we go off map? How do we write a different tale?

How do we slay this dragon?

Audrey Watters

A Proposal for Proponents of Ed Tech (A Comment)

4 min read

I left this comment for "A Proposal for Proponents of Ed Tech" -- it's a reminder that I wish Known were a federated system for commenting, so that if I left a comment on a site I would always have a version here.

It sounds as though you teach at a school where students have a great deal of opportunity. And as the world operates as it does, that already gives your students advantages that "more technology in the classroom" probably won't really change. "Exposure to technology" and all its affordances sound like something that's already part of your students' lives -- at home and in the classroom. And more importantly, "more technology" certainly doesn't mean "more student-centeredness," which I think you allude to already. Student-centeredness is a pedagogical and political decision -- and in many ways too a reflection of the privileged circumstances in which you work and in which your students get to live and learn.

I think "tech for tech's sake" is silly. I can't say "you must use it; the future demands it." To me, arguments for ed-tech full of buzzwords about tech as facilitating "collaboration" and "creation" ring empty too. Technologies can easily reinforce the rules of a rigid and sterile classroom. Just because a teacher tweets doesn't mean they're progressive or on the cutting edge of anything. Just because a student writes an essay in Google Docs doesn't mean she gets a checkmark for successfully incorporating tech into her learning. Most of ed-tech does hand out a checkmark for tool usage; and I think you're right to hesitate.

That being said, I do think there are interesting projects that can be done with computers in a high school history class. And I don't mean typing up essays or looking up sources on the Web. I mean a more in-depth pursuit of how a digital world might reshape how we "know" and how we represent the past -- through things like computation, algorithmic analysis of resources, mapping and GIS, digital archiving, metadata, historical preservation of "born digital" items, changing access to primary sources, "original copies," ownership and IP, oral history ("what counts" as oral now?), "memory" (computer vs human), storytelling (does hyperlinking change linearity, for example?), museum work and aesthetics, identity, privacy, institutional history and information security, changes to scholarship (is the essay still relevant?) and to knowledge generation, history-according-to-wikipedia, and so on. These involve (I think) sophisticated thinking with and about history and tech.

I think the field of history will surely change because of new technologies that change the "work" that historians do. History as a field and a discipline has, of course, always been in flux (and not simply because the College Board decided what was on the AP exam). Are most history professors at college "digital historians"? LOL. No. But hey... Maybe your students can take their jobs :)

For what it's worth, unlike Tom, I do not believe that technology is speeding things up or making what we do traditionally as scholars or teachers de facto irrelevant. I don't think we know what the future will hold. We don't know what technologies our students will use -- we aren't responsible for training them in the use of particular tools (maybe a graduate degree in history does more of this. Maybe.) What a depressing idea to suggest that's what ed-tech is: training in how to tweet.

History gives us an opportunity to understand our past and understand how our storytelling frames the present and the future. It's worth helping students think about how their own individual, personal pasts might be different -- their own memories -- because of the persistence of certain artifacts and because of the fragility of others. That's not a "college-prep history" thing per se. But it is, I think, one of the values of studying history: so we can understand ourselves now.

 

Audrey Watters

Tech Journalism Is Terrible, Part XXI#@!@

1 min read

Headline: Facebook Reaches Deal to Beam the Internet to Africa With a Satellite

Actual story: “Facebook announced on Monday that it had joined with Eutelsat, a French satellite company, to provide a selection of free Internet services across sub-Saharan Africa using a satellite that would start orbiting the globe in the second half of next year.”

That is: A selection of the Internet (or rather, Facebook) will be available to a selection of African nations – 14 out of 54 on the continent.

“‘Facebook’s mission is to connect the world,’ Chris Daniels, vice president of Internet.org, said in a statement,” which The New York Times just takes at face value. Jesus.

Audrey Watters

Minimum Viable Ed-Tech: The VR Edition

5 min read

An excerpt from this week’s newsletter

Highly recommended: tweet something trollish before you get on a plane for 10+ hours. (e.g. this tweet.) How many people will take advantage of your Internet silence to mansplain ed-tech to you?

Anyway, Mattel has a new View-Master that uses Google Cardboard. The history of the future of toys, or something. That Google Cardboard = View-Master should perhaps maybe possibly give you pause about how AMAZING Google Cardboard is. But nope. Hype and revolution. Same as it ever was…

More thoughts…

See, here’s the thing. I realize that Mattel’s new View-Master is appealing for the sake of nostalgia. My grandparents had an early stereoscope at their house, and it was always one of my favorite toys – the slides were fascinating because, unlike the content of the classic red View-Master I had myself, these were not full-color images of Disneyland or Disney movies. They were black-and-white scenes from the early 1900s – I was utterly fascinated by the furniture, the costumes, the poses.

I suppose I spent a fair number of hours with one or other of these pressed to my face. But I would never call the View-Master “VR.” Yes, there’s a distortion that makes the images appear to be three-dimensional. But I’ve always imagined that “VR” meant a more immersive experience than that. The emphasis, if you will, should be on the “reality” not simply on the “virtual.”

Seriously, can you imagine if a teacher said “my students looked at pictures of Verona through the View-Master and now they have a better understanding of Romeo and Juliet”? We’d scoff, wouldn’t we? Yet that’s precisely the crap I’m hearing about Google Cardboard.

Oh, I realize that Google Cardboard seems to have impressed a lot of folks in ed-tech. But that’s a low bar. Look at Google Docs and Google Spreadsheets, for example: their big selling point – besides being free – is that they don’t have all the bells and whistles of the more bloated Microsoft Office. It’s “minimum viable productivity software.” Look at the Google Chromebook. It’s a “minimum viable laptop.” Nothing but the minimum viable for our schools...

Similarly, you could call Google Cardboard “minimum viable virtual reality.” Here are the necessary components, which Google boasts you can assemble yourself for about $20: a piece of cardboard, 45 mm focal length lenses, magnets, Velcro, a rubber band, an optional NFC tag, and an Android phone.

The “virtual reality” offered by Google Cardboard comes via the display of a smartphone, distorted by those 45 mm lenses. The “virtual reality” offered by Google Expeditions, the special field trip lessons created by Google, consists of “panoramas,” according to the Google blog: “360° photo spheres, 3D images and video, ambient sounds – annotated with details, points of interest, and questions that make them easy to integrate into curriculum already used in schools.”

They’re videos, people. They’re photographs. It's Google Earth. The view is just held up to each student’s face rather than projected at the front of the class on a screen. Yes, some special VR apps are being developed for Android. But this is no Oculus Rift, which is rumored to retail for around $350 when it eventually hits the market. There are no gloves for you to manipulate this Cardboard "virtual reality." You just watch. You cannot really interact.

I’ve already written about how I think Google Expeditions will be just another example of how ed-tech furthers inequality. Actual, real field trips are already in decline, particularly for low-income students. And actual, real field trips really do have a lasting educational impact – one that watching a film via a device strapped on your face just can’t rival.

I’ve heard a lot lately that “no one is arguing that virtual field trips will replace field trips.” Yeah. Bullshit. Field trips have already been excised from the school day to make way for other things – more test-prep, more testing via computer, for starters.

But there’s something else that Google Cardboard is going to replace too: these cheap Google Expeditions – and this flawed argument that this counts as “virtual reality” – are likely going to prevent (or at least slow) more immersive VR experiences from ever entering schools too. Why pay for that when you can convince yourself that the 21st century version of the View-Master counts as VR?

Audrey Watters

No Takers

3 min read

A friend of mine just wrapped up a stint as an op-ed writer for a major publication. She’s relieved, she says, to not have to come up with an opinion-a-week. (It was not just a matter of coming up with an opinion – it was coming up with one that editors liked, thought timely and clickworthy, and wanted to publish.)

The pressure to write “hot takes” is ridiculous, and online writers are often expected to churn them out quite rapidly, not just regularly. You’re supposed to respond to the latest Internet outrage (or virality, I suppose) of the day. And there’s little time to sit and stew and figure out what your “take” might be. It has to be served fresh.

(Or, alternately, it can simply be served “dumb.” That’s how I’d characterize David Brooks’ columns, for example. Wouldn’t you?)

I’ve thought a lot lately about changes I want to (need to) make to my work life, in part because of [redacted financial expense]. So some days, I want to write more freelance pieces and travel less. But then again, I really enjoy the types of written pieces that I compose as keynotes. And jesus, I’ve got to finish Teaching Machines.

So I don’t know. I’ve thought about pursuing a position as an op-ed writer somewhere. (“Contributor” or whatever.) Got any leads?

But I do worry about having to have a “hot take” regularly. I mean, I have opinions all the fucking time. But I worry about having to craft that into an interesting argument to the drumbeat of an editorial calendar. I see so much lousy content online that obviously serves just this: it met the deadline. It met the word count. But it said nothing.

Perhaps this all just reflects some of my feelings of self-doubt lately – that I’m not really up-to-snuff when it comes to being a writer. (This is, of course, one of the benefits of speaking. You get immediate encouragement and typically pretty friendly responses from people face-to-face. The Internet, on the other hand, is full of assholes.)

Or perhaps it reflects the in-between-ness of my work: it’s not academic; it’s not pop. It’s grounded in research, but it’s full of opinion. It’s specialist, but a specialty with limited appeal (particularly when you’re critical of the field).

I’ve been working on Hack Education for five years now. It’s the longest I’ve worked at one “job” in my life. (Grad school doesn’t count, does it?) Maybe it’s time for something different? (Fortunately I have two ideas for a “what’s next” book-wise – neither project deals directly with ed-tech. What a relief.)

Audrey Watters

Turkle, Reviled/Revisited

4 min read

There aren’t a lot of women who (get to) write “big ideas” books about technology. I can only name a handful. But one of them, Sherry Turkle, has a new book coming out.

She’s in marketing mode at the moment, with a recent interview with NPR’s Scott Simon and an op-ed in The New York Times. Jonathan Franzen, who’s become quite the joke himself, has reviewed the book for the NYT as well, which is probably just the latest excuse some folks need to make fun of Turkle’s ideas in turn.

I’m frequently struck by how ungenerous the readings of her work often are by those in education technology. Her previous book, Alone Together, has been reduced to a caricature – an anti-technology screed, one that refuses to recognize that any good “connection” can come from computer technologies, one that insists, in a parallel to Nicholas Carr’s claims (or at least a caricature of his work) that “the Internet is making us stupid,” that the Internet is making us anti-social.

I only recently read Alone Together, as I didn’t realize – based on the tweet-length reviews of it I’d seen from educators, I guess – that it addressed artificial intelligence at length. (Most of the responses I’d seen focused on what it purportedly said about social media and social networking.)

Me, I’m really interested in our views of and reactions to robots, particularly the push from certain quarters that we accept their inevitability. I found the book to be quite provocative, especially as she examines the development of “caring machines” – this has significant implications for ed-tech and for teaching machines, as I argued in my keynote this summer at UW Madison. (Turkle points out that companion robots are introduced to children and the elderly – “the most vulnerable” – first. Ed-tech trivia: Some of Sebastian Thrun’s early work was on building robots for nursing homes.)

When Turkle writes that “there is psychological risk in the robotic moment,” I’m reminded that her background is in psychology (not, say, in cultural studies or history or educational technology). This shapes her methodology; this shapes her analysis. The emphasis in that phrase – “there is psychological risk in the robotic moment” – shouldn’t be on “the robotic moment.” That is, I don’t think that Turkle is writing with a nostalgia for a time in the past in which social connections were whole and healthy (as some have suggested). The emphasis should be on “psychological risk.” In other words, I think Turkle has this sense of whole and healthy Self as informed by psychoanalysis, not by history. That Self is challenged by many things – all sorts of traumas, losses, needs, desires, vulnerabilities – and one of the new challenges is computer technology.

I don’t think psychology explains everything. Not even close. (My background, of course, is in literary and cultural studies.) I think history matters – indeed, I’m really interested in the relationship between the history of educational psychology and the history of ed-tech. And I insist that structure matters too – institutions, cultures, societies, systems – and I wonder if psychology focuses on the individual at the expense of all that. (Does this explain, I wonder, why Turkle’s work sometimes feels like it veers towards white middle class anxiety?) I wonder if psychology of tech focuses too much on the individual and the tech.

I’m not really interested in defending Turkle. But it is interesting to me that it’s her books (as opposed to those of male critics of tech) that seem to elicit the most negative responses from those in ed-tech – responses that often seem to put words in her mouth. Hmm.

And I do admit, I’ve got some fondness for Turkle: I first read Turkle’s work as an undergrad in a women’s studies class in the early 1990s. More accurately, I first read Turkle and her (then) husband Seymour Papert’s work as an undergrad in a women’s studies class. I learned of Papert through Turkle, and Papert has shaped my thoughts on ed-tech profoundly. It’s my own nostalgia, I realize, for books of the 1980s – for a different moment – that could be so hopeful about tech.

Audrey Watters

Tech Journalism is Terrible (and To Be Honest, Rarely "Journalism")

6 min read

Freddie deBoer has an article in the latest Full Stop Quarterly about Google’s Deep Dream and the failures of tech journalism to correctly explain the project.

It’s just the latest example of tech journalism’s deficiencies, but it’s a fairly significant one, I’d argue, as it involves a development in artificial intelligence. After all, there’s been a steady drumbeat of reports predicting that “robots are coming to take our jobs.” Yet there’s strikingly little understanding of what AI really can and cannot do. Instead there’s this belief in the stories about emerging technologies – almost a blind faith, really – that computers can do anything. And whether that’s accurate or not (um, it’s not), this is a narrative with profound implications for social behaviors, expectations, and institutions.

But tech journalism can’t help us figure any of this out; it offers us very little guidance towards a critical understanding of “how tech works” or “what it means.” As deBoer writes,

With click-begging headlines, useless metaphors, vague discussion of essential information, and the general ambient woowoo that chokes our tech media, stories about Deep Dream have demonstrated the capacity for aggregation-style internet journalism to mislead.

Most tech journalism has become the PR wing of the tech industry and the champions of techno-solutionism.

In lambasting coverage of Deep Dream, deBoer notes (almost in passing) that “A friend of mine who’s a science journalist tells me that she was informed by Google that they would be doing no interviews about the project. An explanatory blog post from the Google research team is useful, though it (understandably) fails to reveal some of the specifics about the program.” With no PR briefing and with little to copy-and-paste from the company blog post, it’s not that surprising that tech writers turned to metaphorical language to fill in the story, to meet the requisite word count. Tech writers don’t necessarily have backgrounds in tech or science, and if they don’t understand Deep Dream or artificial neural networks themselves, they can’t really explain it to their readers. (Of course, they could turn to academic researchers for insight, but the tech industry has pushed another narrative there, that there’s no innovation happening at universities. So yeah. Why ask a professor to comment?! What would they know?!)
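
For what it’s worth, the core idea behind Deep Dream isn’t actually that hard to convey. Here’s a minimal sketch – my own illustration, not Google’s code or the researchers’ actual method – of the basic “inceptionism” loop: run an image partway through a pretrained convolutional network, then repeatedly nudge the image’s pixels so that a chosen layer’s activations get stronger. The model, the layer choice, and the step size below are all assumptions for illustration, and it assumes a recent version of PyTorch and torchvision:

    import torch
    import torchvision.models as models

    # A pretrained image classifier; any convnet would do for this illustration.
    net = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()

    layer_index = 20                     # an arbitrary mid-level layer (assumption)
    image = torch.rand(1, 3, 224, 224, requires_grad=True)  # noise, or load a photo

    for step in range(100):
        x = image
        for i, layer in enumerate(net):  # run the image only up to the chosen layer
            x = layer(x)
            if i == layer_index:
                break
        loss = x.norm()                  # "how excited" that layer is by the image
        loss.backward()                  # gradients flow back to the pixels themselves
        with torch.no_grad():
            # gradient *ascent*: nudge the image so the layer activates even more strongly
            image += 0.01 * image.grad / (image.grad.abs().mean() + 1e-8)
            image.grad.zero_()
            image.clamp_(0, 1)           # keep pixel values in a displayable range

That’s more or less the whole trick; the “dreamed” images are what you get by running that loop and looking at the result. No woowoo about machine consciousness required.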

Even when companies do provide PR briefings, it hardly means that the coverage is any better. Rarely are assertions made by marketers or CEOs challenged; rarely are other sources outside a company interviewed.

In part, it’s because tech journalism is access journalism (as opposed to, say, investigative journalism). Tech journalists are often given the news – they’re pitched the stories or they’re emailed the press releases. If another publication has a story first, it’s simply re-written, perhaps with someone’s Tweet tacked on for “analysis.” “Churnalism.” The tech blogs, a significant (and venture-backed) part of the tech industry, churn out tens of stories a day; that means their writers frequently pen multiple articles a day – there’s no time for fact-checking; there’s little incentive to do so.

There’s a strong disincentive to be critical. That could mean loss of access. (deBoer cites Gizmodo’s questionable coverage of Deep Dream, but it’s worth remembering what happened back in 2010 when that particular tech blog reported on the next iPhone. As in: this fall’s Apple press event was the first one Gizmodo was invited to in almost 6 years.) It’s not just about loss of access for a publication: critical coverage could also dampen the opportunity to move on from tech blogger to more lucrative jobs like marketing executive (a fairly common career path) or venture capitalist.

It’s not just Deep Dream coverage. Tech journalism serves to drive sales and drive investment for the industry; it serves to drive ad revenue for the publications themselves. And it is utterly shot-through with bullshit. deBoer observes,

TechCrunch’s Jay Samit claims that driving your car will be illegal by 2030, a statement of such grandiose, shit-eating delusion I have to admire it. Look beyond how self-impressed and confident Samit is, and you find a flagrant underestimation of the technological, infrastructural, economic, and legal challenges to self-driving cars, and, in extension, of human agency, of the random chance and luck that drive history, of the stunning political risks that any politicians would endure in enacting such a law, and of the fact that people really, really love their cars. No one can blame Samit for existing in a context where he is rewarded for being ridiculous rather than shamed.

Replace “self-driving cars” with “MOOCs” or “adaptive learning” – all of which tap into the faulty understanding of artificial intelligence and human versus machine learning – and you can see my day-to-day frustration with ed-tech journalism.

I’m often reminded of a story by Alexis Madrigal, then a tech editor for The Atlantic, about Mike Arrington’s move from tech blogger to venture capitalist – a move that even other tech writers at the time thought was pretty damn unethical. Madrigal argues that

The generally accepted sense of journalistic ethics says you shouldn’t have financial conflicts of interest and that this is not negotiable at the individual level. Journalism ethics reside in publications and more broadly within the idea of the fourth estate.

But the specific ethical principles of journalism were only true for certain types of publications, largely newspapers and magazines aka the mainstream media (MSM). Now, we’ve got a whole bunch of new types of publications with readerships rivaling the MSM but that are something different altogether.

Many websites are functioning largely as trade magazines that occasionally commit acts of journalism. (emphasis added)

“What has to happen to our online economy to stop creating the incentives that make bad journalism like this happen?” asks deBoer. But what if the two – the tech industry and journalism – are simply utterly at odds?

Just when we need good tech journalism (and ed-tech journalism) the most – journalism that is rigorous, informed, critical, and ethical, journalism that can help us understand how (and for whom) technologies work – we find that the tech industry has chipped away at the media industry, undermining its economics and its ethics in the process. Instead we're left with trade magazines that can only retell the magical, dreamy promises that the tech industry has whispered to them.

Audrey Watters

What We Don't Know

3 min read

Earlier this year, I was at an API event chatting with @wirehead about my progress on Teaching Machines and about the history of education psychology and education automation, when he made a really sharp observation that’s stuck with me since: “In a couple of hundred years, people will view our ‘science of the mind’ much like we now view ‘the humours’.”

The humours: the ancient and long-standing theory that the human body is filled with (by the Greeks’ account) four substances – blood, yellow bile, black bile, and phlegm, corresponding to air, fire, earth, and water respectively – that, when out of balance, cause disease. We can scoff, I suppose, as “science” now tells us otherwise. But for a long time the humours did inform science; at least, they certainly informed medicine. Those who believed in the humours were not un- or anti-scientific.

And despite “knowing better” now, the humours haven’t disappeared from the way in which we see personality or well-being. If nothing else, we still carry with us today this idea that staying healthy is all about “balance.”

Psychology, cognitive science, and neuroscience – all we think we know about thinking and knowing – will some day sound as silly as the stuff that Hippocrates wrote around 400 BC. Some of it already does. That’s not to say these aren’t “science” – they’re just primitive. Some of what we thought we knew about “the mind” has already been challenged – Freud’s contributions, for example, not to mention the field’s (non-) replicability, its racism.

But none of that really matters, does it? Shouting “it’s not a real science” gets us nowhere. With or without “proof” we seem to be increasingly convinced of and by “the science of the mind.” We know it works, mostly by the bestsellers and the advertisements, and as such we seek its insights.

So whether it’s accurate or not, “the science of the mind” – even as a nascent field of study – has already had profound implications for social behaviors, expectations, and institutions.

This is particularly true in education, which purports to be about “the mind” – intelligence, knowledge, expertise and the practices to extend those things from one generation or one body to the next.

But it’s even more so for education technology, which literally has its roots in the field of psychology. So if psychology is today’s “humours,” what does that mean for the tools that we’re building to implement its core beliefs?

Audrey Watters

500 Words

2 min read

I was supposed to finish Teaching Machines this summer. That didn’t happen.

Instead of a summer of (book, idea) creation, I had a summer of great loss.

I was supposed to finish Teaching Machines in September. That didn’t happen either.

I had forgotten, I suppose, how debilitating grief can be.

I’ve (almost) finished off two of the other writing projects that have lingered on because of summer crises. They’re in editors’ hands now. Completing those was difficult, much more difficult than it should have been.

Any writing right now feels painful. The words just don’t want to come. The thoughts just don’t want to form.

I’m going to ease back into writing by doing the following: writing 500 words a day (give or take) here on this site. I’m writing here and not on audreywatters.com and not on hackeducation.com because this place, I feel, can be a rougher sketchpad for my ideas. Hopefully things I jot down here can be fleshed out into actual essays and talks and articles and book chapters. Hopefully things I jot down here will be worth hitting the "share" button. But there’s no pressure to do any of that.

For now I’ve just got to get back into the practice of thinking and making sentences.

Audrey Watters

The Californian Ideology and The Future of...

3 min read

I submitted the title of the keynote I'll deliver next month at the ICDE conference in South Africa: "Technology Imperialism, the Californian Ideology, and the Future of Higher Education." It's the middle phrase in that series that I've been thinking about this week, in part because I just read Richard Beck's We Believe the Children: A Moral Panic in the 1980s (inspired to do so by this book review by Rebecca Onion).

The book is a cultural analysis of the panic surrounding child sexual abuse (and often Satanism) in the 1980s, and the focus of much of the book is on the McMartin Preschool in Manhattan Beach -- right up the road from where I now live. Among other things, it traces developments in the field of psychology surrounding sexual abuse, repressed memories, and multiple personalities, alongside the explosion of self-help groups particularly in and around California. The book also makes the point that this all put the focus of the (so-called) epidemic of child sexual abuse on the individuals accused, not on structural issues. Moreover, the focus was often on perpetrators outside the family, and as Onion describes in her review, very much a reflection of middle-class anxieties about childcare.

I vaguely remember the McMartin case, which holds the record as the longest criminal trial in US history. I certainly remember the moral panic surrounding Satanism, probably because, as a teen, I was a huge fan of heavy metal. It was quite strange to read Beck's book while sitting here in Hermosa Beach, as there were several places I know well now that make an appearance, including St. Cross Episcopal Church, just a few blocks from where I live. One of the accused teachers in the McMartin case allegedly held a Satanic black mass there (and a preschool that ran out of the church was closed in the midst of all the panic about sex abuse in the area).

Every Thursday at noon (for the past few weeks at least) I have actually gone to St. Cross Episcopal Church for an Al-Anon meeting. This week, I sat in the meeting unable to really concentrate fully on everyone's "shares." I was thinking instead about Beck's book, about this strange history of the South Bay, about the relationship between the "Californian Ideology" and "the mind." I was struck too in the meeting by this relentless focus on the individual and her/his own story -- although I think that "alcoholism" has a strange agency in the Al-Anon framework -- and not, again, on the systems that lead to addiction, abuse, violence. One of the things I've found really helpful about Al-Anon is that you hear echoes of your own story in others'; but that resonance doesn't point towards structural issues. It just points to lots of individuals sharing the same experiences.

I want to think more about this piece -- it's connected to the self-help movement and it's connected to psychology and cognitive science more generally and it's connected to the media and to computer technology: this focus on the individual. It is a cornerstone of the "Californian ideology." And I think it's shaping the future of education in powerful but unexamined ways.

 

Audrey Watters

Top Ed-Tech Trends of 2015?

1 min read

A couple more thoughts on the "top ed-tech trends" that I'm jotting down here (because where else):

  • How might the recent instability of the Chinese economy affect all the ed-tech investment that startups there have received?
  • Trigger warnings, "Quit Lit," social media shaming, etc. -- what do these genres (and by genre, I mean the onslaught of "think pieces" and Vox-splaining) say about higher ed (as represented by The Media)?

Audrey Watters

Education Rankings via Klout. Still Bullshit. The 2015 Edition

13 min read

I didn’t plan to write about Michael Petrilli’s latest list of “Top K–12 Education Policy People on Social Media 2015,” in no small part because I said something last year. Last year was the second in a row I made the list; I didn’t make the list this year. Petrilli’s qualifications for “Top” changed. Ell. Oh. Ell.

To revisit, here’s what I wrote this time last year:

Renouncing My Klout

For the second year in a row, I’m on (Thomas B. Fordham Institute president) Michael Petrilli’s list of “The Top Twitter Feeds in Education Policy.”

Truth be told, it’s only the second year because last year my friend José Vilson asked why there were so few women and people of color on Petrilli’s list and volunteered my name as someone who might be missing. Thus I was added to the list after it was initially published.

And I’m only on the list this year because I didn’t delete my Klout account – one of the metrics Petrilli uses to determine eligibility – soon enough.

Confession: I noticed Petrilli tweet a week or so ago that he was in the middle of prepping this year’s list; and it reminded me that I needed to delete my Klout account. I’ve never cared about my Klout score and I’ve never used the account, but Klout has, without my consent, created an account and a score for me. Thanks, technology industry!

You actually have to log in – even if you’ve never signed up for Klout – to request the company delete your account. I just did this last night for Hack Education’s Twitter account – an account that is, for all intents and purposes, an RSS bot. And I did this for the Klout account linked to @audreywatters. But apparently not in time to disqualify me from Petrilli’s list.

The Fault in our Algorithms

Naming “the top” is a power play, no doubt. But Klout is an incredibly flawed way to rank the “Top Twitter Feeds in Education Policy” in part because the score doesn’t simply reflect Twitter “influence.” (Whatever “influence” might be. More on that below.) The company encourages users to link their Facebook, Google+, LinkedIn, Foursquare, and Instagram accounts as well as their Twitter accounts and uses data from all these services to calculate and to boost Klout scores. (It also uses Wikipedia and Bing search result data to determine the score.)

I don’t have a Facebook or LinkedIn account. There’s no Wikipedia entry for “Audrey Watters.” So my Klout score, I imagine, is lower for it.

I say “I imagine” because it’s not clear how the Klout score is actually derived. The company says it uses “more than 400 signals from eight different networks to update your Klout Score every day” and uses “machine learning models” to make sense of all the social media data it sucks up. For what it’s worth, however, several years ago someone reverse-engineered the Klout score and argued that about 94% of the differences in people’s scores could be accounted for by the number of their Twitter followers. Surely, it’s tweaked the algorithm since then. Surely.

But we don’t know. It’s a black box, the company’s “secret sauce.”

Of course, complaints about Klout aren’t new. Science fiction author John Scalzi has said that he quit Klout because “I suspect the service is in fact a little bit socially evil.” Fellow SF writer Charles Stross has also described Klout as “evil” – and quite possibly illegal (as data collection without consent violates UK privacy laws).

Ideology and Ranking

But even if we did know the algorithm that drives the Klout score, I’d still want to ask questions about the meaning of the measurement and the weight that the number – any ranking system, really – carries. Why, if nothing else, are we so obsessed with ranking?

What purposes does Klout serve? Whose purposes does Klout serve? Why is Michael Petrilli or Forbes or Rick Hess or any of the other popular list-makers interested in a ranking or rating system for those in education?

See, this isn’t simply about “influence”; it’s about ideology.

I’m in the middle of writing a chapter for Teaching Machines that examines the histories of “intelligence” and ed-tech – intelligence testing, artificial intelligence, “intelligent tutoring systems.” Much like “influence,” “intelligence” is something difficult to define let alone quantify. And yet we do.

We can debate, as philosophers have for ages, the meaning of these terms – “intelligence,” “influence.” But more importantly, we should ask: why do these characteristics matter? To whom do they matter? And once there’s a practice in place that has defined these terms and has designed measurement tools to assess them and a scale to rank them, we should ask what purposes these designations serve. I don’t mean what sorts of perks you get with your Klout score or your IQ; I mean for us to consider how these ranking systems might reinscribe hierarchy and inequality, all the while purporting to offer an “objective” tool that reflects ability.

Sorta like “science,” but not.

So yes, I’ve been thinking a lot lately about the power that comes with crafting definitions, with promoting standards, with devising measurement systems – and the role that technology and algorithms will increasingly play here.

Whose interests do these definitions and standards and measurements and algorithms serve? What sorts of (often unexamined) legacies do these practices carry forward?

From the OED:

   psychometry: from the Greek ψῡχο- psycho- + -µετρια measuring – literally “soul-” or “mind-measuring.”

   1. The (alleged) faculty of divining, from physical contact or proximity only, the qualities or properties of an object, or of persons or things that have been in contact with it.

   The first reported use of this word was 1854 – J. R. Buchanan’s “lectures on the neurological system of anthropology” in which he wrote “The influence of Psychometry will be highly valuable ‥. in the selection from candidates for appointments to important offices.”

   2. The measurement of the duration and intensity of mental states or processes.

   The first reported use for this definition was 1879 – Francis Galton, who wrote “Psychometry ‥. means the art of imposing measurement and number upon operations of the mind, as in the practice of determining the reaction-time of different persons.”

As Mark Garrison writes in his book A Measure of Failure: The Political Origins of Standardized Testing, “Standardized testing – or the theory and practice known as ‘psychometrics’ – … is not a form of measurement. Psychometrics is best understood as the development of tools for vertical classification and the production of social value.”

Psychometry claims to measure the mind. Klout claims to measure online influence. But look at the OED. Look at those definitions: influence and intelligence. Psychometry and Klout. I’m fascinated how they seem to dovetail so neatly in today’s education politics and how readily they become a sort of “disciplinary power” that maintains the functioning of schools, economies, and other hierarchical systems. Who “measures up”?

I stand by what I wrote then (which always feels like a bit of a triumph for a blogger). But I want to add a few more thoughts, particularly as Petrilli has changed his “formula” for awarding his “top” honors:

Klout remains a ridiculously flawed metric.

The startup was acquired (for $200 million) by Lithium Technologies last year, and it really hasn’t been heard from much since. Oh sure, it continues to push out PR boasting about how it “measures influence” in social media marketing, but it’s no clearer today about what that actually means. And if someone is going to adopt it as their metric in education, they should probably grok that. As I noted in my blog post last year, there are a lot of signals that purportedly feed Klout’s algorithm, but there’s no transparency on what those are.
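
As an aside, the sort of reverse-engineering mentioned above isn’t mysterious. Here’s a minimal, hypothetical sketch – the numbers are made up, since I obviously don’t have Klout’s data – of how you’d estimate how much of the variance in scores follower counts alone can explain:

    import numpy as np

    # Hypothetical sample of accounts: follower counts and Klout scores.
    # These numbers are invented purely for illustration -- no real Klout data.
    followers = np.array([120, 800, 2500, 15000, 90000, 400000, 2000000])
    scores    = np.array([ 28,  35,   42,    55,    66,     78,      86])

    # Scores like Klout's tend to track the *log* of audience size,
    # so fit a one-variable linear model on log10(followers).
    x = np.log10(followers)
    slope, intercept = np.polyfit(x, scores, 1)
    predicted = slope * x + intercept

    # R^2: the share of the variance in scores that follower count alone explains.
    ss_res = ((scores - predicted) ** 2).sum()
    ss_tot = ((scores - scores.mean()) ** 2).sum()
    print("R^2 =", round(float(1 - ss_res / ss_tot), 2))

On real data, an R² somewhere north of 0.9 would be exactly the claim from that earlier analysis: most of the “secret sauce” reduces to follower counts.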

So here I must push back on some of what Petrilli writes about Klout, things that strike me as underscoring that he doesn't care about the algorithm; he's only obsessed with its output. (Things that, I confess, I find analogous to education reformers' larger obsession with “what the data says,” only insofar as it confirms their political beliefs.) Petrilli doesn't seem to have explored what Klout measures or how – hence his surprise that people with a lot of followers on Twitter don't necessarily have high Klout scores. Here's his explanation:

Perhaps they are powerhouses on Facebook or other social media platforms, or are particularly effective at stirring “engagement” on Twitter (such as getting prominent folks to re-tweet their posts).

“Engagement” is in quotations here, and Petrilli indicates with punctuation that this might be a questionable metric. But then he doesn’t follow up with a question – he follows up with a statement which is really an assumption on his part about what Klout might count as engagement. We simply don’t know. Folks might “engage” with Diane Ravitch’s tweets, but rarely does she engage. Rarely does Arne Duncan’s Twitter account “engage” either, unless a Department of Education employee is sitting at the keyboard fielding questions under his name, often under the guise of some department-sanctioned hashtag-sponsored chat. And do folks respond? LOL yes, he’s the Secretary of Education. Does Klout include any sentiment analysis? We don’t know. (How does being “verified” on Twitter – a service made available to politicians, celebrities, and journalists who work at mainstream news organizations – feed one’s Klout score? Again, we don’t know.) Now Xian F’znger Barrett, on the other hand, engages on Twitter. Oh my does he. There have been times I’ve clicked on one of his tweets, and I see that the thread is over 300 Tweets long of Xian going back and forth over and over and over and over and over with some corporate education reformer. Perhaps that’s “engagement” according to Klout. Certainly that’s Xian’s high tolerance for bullshit in 140 characters, according to Audrey.

Petrilli says that folks should sign up for Klout. Again, he doesn't seem to understand that Klout signs you up without your permission. He says that several people on his list of "top" policy people based on Twitter followers don't have Klout accounts. Funny, I was able to find Klout accounts for most of them. I'm pretty sure that Alfie Kohn (who does not really Tweet) never actually signed up for Klout. (His Klout score nonetheless: 64.) In fact, I'm sure that if someone told Alfie Kohn how to log in and delete his Klout account (Alfie: call me), he'd do just that.

So Petrilli's call to "sign up, folks!" is just silly. And honestly, it's just wrong. Klout signs you up without your consent -- wow, that's a lot like how education data collection works. Funny, right?

Who’s eligible for “the list”?

So, I didn’t make Petrilli’s list this year – not because my Klout score was non-existent. (Dammit.) It’s because he decided that “ed-tech” is not “ed policy,” and as such, I – along with a whole crew of ed-tech folks – was deemed ineligible.

Cool story, bro.

“Who counts” is another gate-keeping method to keep prestige metrics “prestigious” – simply eliminate the riff-raff from the outset. It's likely that the list you create only includes the people you see, and the people you see are colored by prestige and privilege and in-group politics. Again, I've written about this before as it pertains to another education reformer's ranking of education scholars. If you make a list of “the top,” you should be honest that it's nothing more than your viewpoint of “who counts.” It's your top. In Petrilli's case, it's the white-guy-conservative-think-tank top. So golf clap for those who score.

Petrilli, to his credit, did make an effort this year – particularly after being chastised in years past for having no women or people of color on the list – to expand the pool of candidates he rated. That is, he asked on Twitter for nominations, so if you 1) use Twitter, 2) follow him on Twitter, 3) follow people who retweet Petrilli, and 4) don't have Petrilli blocked, you might have seen a call to participate. (Sampling error? Cronyism? I dunno. I'm a folklorist. Not a K–12 education policy expert. Clearly.)

But even with input from his public, Petrilli’s list of 500 overlooks some folks that I think are pretty damn influential when it comes to K–12 education policy. Oh say, Rahm Emanuel – he’s got 119,000 followers on Twitter, for what it’s worth, and can boast a Klout score of 84. What about Scott Walker – 198,000 followers on Twitter and a Klout score of 87. Dude, that means he’s more influential than Arne Duncan! You know who’s even more influential than Scott Walker? John Legend (Klout score: 88; Twitter follower count: 7,260,000).

Sure, you can say “oh no, Scott Walker and John Legend and Rahm Emanuel and Audrey Watters aren't 100% talking about K–12 education policy therefore they aren't influential in K–12 education policy like how I mean you should be focused on and influential in K–12 education policy” – but then I call bullshit on your listing and your choosing and your metrics from the start. Who on Petrilli's list, with the exception of those who simply auto-tweet announcements from their blogs/departments/magazines, tweets only about education policy?

And, what counts as “education policy”? Probably only the messages and the voices that powerful people want to gesture towards including at their table. Gesture. And Klout is a usefully opaque metric for deciding whom that gesture extends to. Then they can throw up their hands and be all like "man... that's what the algorithm said. And the algorithm is science."

Audrey Watters

Top Ed-Tech Trends of 2015?

1 min read

There are still many months before I start my annual round-up of ed-tech trends, but I think about the project year round. Here are the things I'm tracking on -- clearly I've got to narrow this down by December:

  1. The Business of Ed-Tech
  2. The Politics of Ed-Tech
  3. Privacy
  4. Data and Algorithms
  5. Outsourcing and Privatization
  6. Free College?
  7. The Collapse of For-Profit Higher Ed (Or Not)
  8. Online Education (the Hype formerly Known as “MOOC”)
  9. The Common Core State Standards
  10. Opt-Out and Testing
  11. School as Skills Training - "The Employability Narrative"
  12. Annotation (and the Indie Web)

Audrey Watters

My Eulogy for Granny

4 min read

My granny was something of a Letter Writer, as I imagine all of her family will attest. She had to be, what with four children who went away to boarding school, two of whom moved from England to North America, and with grandchildren and great grandchildren scattered around the world.

Letter writing has become a lost art perhaps, now that we can easily and cheaply make international phone calls, now that we can shoot off an email, now that we can just post a status update on Facebook. Letters are different; they feel somehow more thoughtful, more special. Of course, I don’t want to sound too nostalgic for Granny’s letters; some of them were pretty banal: who she’d had tea with, what she’d eaten for Sunday lunch, that sort of thing.

But some of her letters, to borrow a term from J. K. Rowling, were “Howlers.” A Howler, for those unfamiliar with the Harry Potter series, is a magical letter that comes in a red envelope. A Howler expresses great disappointment or anger. Its contents are enchanted to be read aloud – very loudly – in the writer’s voice. The letter gets hotter and hotter upon delivery and eventually bursts into flames. You cannot ignore a Howler as it will still insult the recipient, even if it’s unopened.

Granny’s Howlers didn’t come with a red envelope, which was a pity, as sometimes you weren’t expecting that sort of message from her. I mean, sometimes you knew what was coming – I received my fair share of Howlers from Granny, I admit. Earlier this summer when my brother and I were cleaning out my dad’s house, we found stacks of letters – those that Granny had written Mum, those that she’d written Fred and me. He asked if I wanted to read them again, and I said “My god, no,” sort of lamenting that they hadn’t burst into flames like the magical Howlers did. You don’t need to hear those messages twice.

I don’t have as many memories of Granny being cross in person. Firm, yes. But not howling mad. Perhaps I tried to be on my best behavior when I was with her in person. Perhaps I was too intimidated to do otherwise. My dad, incidentally, never called her “Betty,” but always said “Lady Pretty” – a sign of deference and distance.

The geographical distance that separated us from Granny meant that much of the time we were together was spent – with great intention and planning, no doubt – doing “fun activities.” As we lived so far apart, growing up I saw Granny mostly on summer vacations – we’d visit England or Canada, or she’d come to the States. We went to the Grand Canyon with Granny. We went to Yellowstone with Granny. We went to Windsor Castle and Buckingham Palace and so on.

Memories are shaped in part through photos and stories. I come from a long line of letter writers and scrapbook makers. So to my memory, in my mind at least, Granny was always doing things. Or at least the letters and the scrapbooks sure made it seem that way.

And I learned from Granny as we all should – I mean, she lived to be 98 – to stay active, mentally and physically. Go for very long walks. Do the crossword. Learn the two-letter Scrabble words. Enjoy the garden. Enjoy a glass of wine with dinner, by all means.

Live life. Like me, Granny was widowed young. And I learned from her, implicitly I suppose, that the death of a loved one should not ever stop us from living. Indeed, it provides us an opportunity to examine what we do and don’t do and what we need to do better. (To be clear, I mean: reflect on yourself, for yourself. No writing Howlers.)

Audrey Watters

The Week in Robots

2 min read

Hitchhiking Robot Lasts Just Two Weeks in US Because Humans Are Terrible

Teach Your Robot to Do the Dishes

Military Robots: Armed, but How Dangerous?

Drone footage of gray whales and calves

Egyptian Lingerie and the Robot Future

Shall I Compare Thee To An Algorithm? Turing Test Gets A Creative Twist

Annoyed Fisherman on a Pier Casts His Line at a Drone Flying Overhead Successfully Hooking It

Your Lawyer May Soon Ask This AI-Powered App for Legal Help

Drone drug delivery at prison spurs yard fight

Will we be able to convert robots to Christianity?

Drone Researchers See the Technology Grounded by Federal Safety Rules

The proposed ban on offensive autonomous weapons is unrealistic and dangerous

Tiny Drones That Navigate with Insect Eyes

Google Patents a Way For Robocars to Decide When Not to Drive

The Future of Work: Machines Pulling Ahead, Quickly

Making Robots Talk to Each Other

The Age of the Robot Worker Will Be Worse for Men

Robots more likely to take “male” jobs

The Future of Work: Don’t Blame the Robots

Hitchhiking Robot, Safe in Several Countries, Meets Its End in Philadelphia

HitchBOT Was A Literal Pile Of Trash And Got What It Deserved

Vlogger Claims to Have Surveillance Cam Footage of hitchBOT’s Death

Here’s Video of the Jerk Who Killed hitchBOT

Vloggers Faked a Surveillance Video, But Did They Destroy hitchBOT?

Whoever Found hitchBOT Brought Him to Meet Kevin Smith Last Night

Fiery Drone Resembling the Human Torch Flies Around New York to Promote ‘Fantastic Four’ Film

Will This Adorable Patrol Robot Ever Find What It’s Looking For?

Audrey Watters

The Week in Robots

2 min read

The Guy Who Taught AI to ‘Remember’ Is Launching a Startup

Interview with Alexander Rose, contestant on the ABC series Battlebots

Drone Film Festival

teaBOT, A Robot That Mixes and Brews Custom Blends of Loose Leaf Teas

Facebook Taking Open-Source Software Ethos to Drones

A Facebook Project to Beam Data From Drones Is a Step Closer to Flight

California offers $75,000 bounty to catch drone pilots who slowed wildfire efforts

How Drones Can Help Rural Clinics

Facebook Is About to Test Its Enormous Solar-Powered Drone

Meet Facebook’s Stratospheric Internet Drone

The KQED Series ‘Deep Look’ Explores the Capability of Harvard’s Swarm of 1,024 Autonomous Kilobots

Meet The World’s Smallest Precision-Controlled Drone

Our robot overlords will excel at ping-pong

Carnegie Mellon Wins Fifth International RoboCup

See Lego robot controlled by DIY exosuit

Amazon Lays Out Its Vision for a Sky Thronging with Delivery Drones

‘What Do Machines Sing Of?’, A Robot That Endlessly Performs Hit Ballads From the 1990s While Adding Emotion

A Programming Language For Robot Swarms

Elon Musk and Stephen Hawking Call for Ban on Autonomous Military Robots

The serene robot art of KMNDZ can’t be undone

The Random User, A Robotic Computer Mouse With an Artificial Finger That Clicks Around the Internet Randomly

The automation myth: Robots aren’t taking your jobs— and that’s the problem

Robots Might Take Your Job, But Here’s Why You Shouldn’t Worry

Kentucky man shoots down drone hovering over his backyard

Robot Journalism in Germany

A Behind-The-Scenes Look at What Goes Into Building a BattleBots Robot

Audrey Watters

Future Teacher From the Past

1 min read

The history of the future of teacher evaluation, by Hall Davidson in 1992:

I watched this on Saturday night when I was at Gary Stager's birthday party. Hall was there. So was Roger Wagner (of Roger Wagner Hyperstudio fame) and Sylvia's boss from when she worked at Davidson, making, among other things, Math Blaster. (Dammit, I can't remember his name.) Ah, the history of ed-tech. And oh my, those of us who are still working in the field, decades later. The stories we can tell...

Audrey Watters

You Must Struggle to Truly Remember This Past...

1 min read

You must struggle to truly remember this past in all its nuance, error, and humanity. You must resist the common urge toward the comforting narrative of divine law, toward fairy tales that imply some irrepressible justice. The enslaved were not bricks in your road, and their lives were not chapters in your redemptive history. They were people turned to fuel for the American machine. Enslavement was not destined to end, and it is wrong to claim our present circumstance—no matter how improved—as the redemption for the lives of people who never asked for the posthumous, untouchable glory of dying for their children. Our triumphs can never compensate for this. Perhaps our triumphs are not even the point. Perhaps struggle is all we have because the god of history is an atheist, and nothing about his world is meant to be. So you must wake up every morning knowing that no promise is unbreakable, least of all the promise of waking up at all. This is not despair. These are the preferences of the universe itself: verbs over nouns, actions over states, struggle over hope.

~Ta-Nehisi Coates, Between the World and Me

Audrey Watters