Mapping the Hidden

Mapping Open Eyebeam

Small talk given at The New School on March 6, 2017, as part of Acid Architecture: Trans-Thinking in the Age of Cognitive Capitalism, led by Ed Keller of the Center for Transformative Media. Speakers Warren Neidich, McKenzie Wark, Keller, and Sanford Kwinter spoke eloquently about neuroplasticity, acid thinking (both literal and metaphoric), and the binds of cognitive capitalism. Mr. Wark gave an especially amazing talk about Barbarella. Fun times and great conversation with thinkers I have admired and respected forever and a day … Stream was recorded and will be up soon!


I have spent the last five years or so researching and working with software-based artists who speculate on AI and emerging models of cognition. Most of them are interested in the search for true AI and its hard-to-formalize problems: how we think and create, for one, and how AI and nonhuman agency might make for new modes of seeing and creating. Their vision of AI is often sublime, psychedelic, and can clue us in to entry points for resisting cognitive capitalism, one of the core themes of this session.

Before that, I spent a good number of years working with and around game designers and game studios, which have bigger budgets than successful film studios, and have the sole, unabashed, and highly interesting project of creating an effective world – one that suspends your disbelief long enough, or has you believing in it enough, to spend extended periods – fifty hours, eighty hours – within its rules.

And though artists and poets clue us in to the possibility of collective dreaming, the creation of an acid architecture, it is game design that clues us in to the complexity of machine intelligences today. Algorithms in a game system express many types of power, and reveal, in action, that no system or platform is neutral, that values are stitched into each choice of code. Those values are created through taste, training, education, histories both personal and communal, gender, race, and of course, class.

If you spend enough time in the art world, or at the intersection of art and technology, you’ll find yourself in a lot of agonized discussions over platforms, and surveillance, and data oversight, and whether to get off the platform or stay on, and how to force change from within the platform, or app, or phone. Getting on or off a platform seems to be entirely the wrong question when the construction of platforms is not just the work of Silicon Valley; it is the project of academia, of government, and of the intimate relationships these sectors have with one another.

In my current research at Eyebeam, I am studying how cognitive capitalism buries and hides its extractive intent through double speak and duplicitous language, on several fronts: through narrative engineering designed to develop trust and belief, through simulations that cast empathy through conversation as their goal, and through tech-incubator spaces in which creative labor and continual self-pitching are a mode of survival, and one’s myth of a personal journey must be moving and authentic.

Throughout, I hope to show how the gap between what a company says it does and what it actually does is most visible in its development of artificial intelligence. And one of the most accessible spaces in which to understand how cognitive capitalism works is the design of basic AI, namely virtual assistants, bots with conversational interfaces. Narrative engineering routinely draws on scholars from cognitive linguistics, researchers who can differentiate and map syntax down to the closest phoneme as it shapes your sense of comfort and trust in a conversation.

I am working closely with narrative engineers developing virtual assistants planned for seeding in nearly every industry platform. They’re delightfully open about the work they do to create an ideal AI conversationalist – not too human, but humanoid, sweet, soft, compliant, molding itself to the rhythms of our conversations and desires. They talk a great deal about trust, and belief, and how both are constructed through tiny inflections in conversation, word choices that make their bots seem trustworthy and friendly, or not.

Perhaps logging off and out at distinct intervals is a strategy, but a sustained counter-effort has to tackle the unique character of cognitive capitalism, how its face looks, how its forms seed in our most intimate spaces. In order to create the right kind of “resistance” – we’re past simply smashing a computer in a field – it is crucial to understand how our thinking and speech are mobilized and reworked.

We feel the work of cognitive capitalism as we do it – the tunnel vision, the exhaustion and data fatigue, the fractured attention spans. The experience of palinopsia [written about on this blog here], in which the ghostly afterimage of the screen imprints on the world after you look up from it, is a small metaphor for this. Over ten to fifteen years, the average Internet user has had their attention span mangled. It manifests as depression and panic and anxiety, as inhuman amounts of stress we place on ourselves to keep up with new twenty-four-seven rhythms. Isolation and fear, paired with a distinct sense that our brains cannot keep up with the level of processing, the information deluge. And underneath, the knowledge that the flood is designed, and the fatigue is designed for.

Brands and products seek out and use creative work and labor to create a tone, a bounded relational exchange in which we feel close to them. It isn’t enough to have your brand loyalty; they must also capture your own double speak, your switches in tone, your evasions in a conversation with a bot or virtual assistant, to understand why you move away from a service. The feedback loop this creates is incredibly powerful, and most importantly, unseen; you barely feel or realize it happening, as it takes place in every casual, offhand interaction with a chatbot, your visual field bending under the pressure of looking at screens, hour after hour, month to month, year upon year.

As the cognitariat, we work in a system that is fundamentally hostile to us, and even more importantly, that hostility is coded as care, as beneficial, as erasing all difficulty on the path of progress. Creative labor is coded as necessary for survival in the work world. The interest of AI companies in the creative mind, their understanding of it as a “site of freedom,” is telling, with groups within companies encouraging artists to embed themselves in residencies, or mind-brain-and-behavior groups studying painters and novelists and poets.

So in these short minutes, I would like to first offer a portrait in lieu of diagnosis, and then a set of possible solutions or counteractions.

Narrative Engineering

Calmness, Smoothness

Designing for Trust

The frontline of intentionally stupid or dumb AI is where cognitive capitalism is continually designed in a way we can grasp. Even as we speculate on the steps along the very long road to true AI, any system blazing with intelligence can be read as a clue to its maker’s intent. Its design is managed both by consultants and in laboratories, filled with mind and brain researchers poached from academia who study linguistics, vision, affect, psychology, and the workings of the brain.

John Seely Brown, who directed Xerox PARC and served on the boards of many companies, from Amazon to Polycom, was ruthlessly clear on how technology could be designed to fade into the background of our lives. Folding in cognitive labor is an explicit part of that, especially as the architecture of the internet has moved from open to closed in twenty years’ time.

AI researchers home in on a very difficult field, psycholinguistics – the study of how humans produce, learn, and understand language – in order to develop finer computational models. Further, these researchers use reams of real-world data, open playgrounds in which subjects speak freely. Deploying machine learning, these interfaces can register contrasts in tone, the construction of affect, and the linguistic process at the level of the phoneme, or sound. This is the technical apparatus of a contemporary dumb AI.
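To make this concrete, here is a minimal sketch of the kind of tone profiling such an interface might run over every utterance. Everything in it – the lexicons, the categories, the scoring rule – is invented for illustration; the production systems described above use trained models over enormous conversational corpora, not keyword lists.

```python
# A toy tone profiler for a single user utterance. The word lists and the
# scoring rule are invented for illustration only; real systems use trained
# models over large conversational corpora, not keyword lookups.

HEDGING = {"maybe", "perhaps", "guess", "dunno", "whatever"}
WARMTH = {"thanks", "please", "great", "love", "appreciate"}
FRUSTRATION = {"no", "stop", "wrong", "ugh", "cancel"}

def tone_profile(utterance: str) -> dict:
    """Return crude per-category scores, normalized by utterance length."""
    words = [w.strip(".,!?'\"").lower() for w in utterance.split()]
    total = max(len(words), 1)
    return {
        "hedging": sum(w in HEDGING for w in words) / total,
        "warmth": sum(w in WARMTH for w in words) / total,
        "frustration": sum(w in FRUSTRATION for w in words) / total,
    }

if __name__ == "__main__":
    for line in ["Thanks, that's great!", "ugh, no, stop, wrong answer", "maybe, I dunno"]:
        print(line, "->", tone_profile(line))
```

Even at this crude level, the shape of the apparatus is visible: every offhand utterance becomes a row of scores, and the scores become levers.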

As for the language itself, we see the employment of more creative writers in AI. This is a fascinating shift. Microsoft’s infamous Tay bot was written largely by comedians; Cortana’s writing team is filled with scriptwriters and novelists; there are poets and novelists working at Google on building the language of the future. The Virtual Assistant Summit in San Francisco had speakers from Pixar, Disney, and DreamWorks.

Screenwriters and other creative writers form a strong labor force, helping people buy into the fictions built into technology through characters designed to delight. They help the user want to embrace those characters, and to provide clarity – of thoughts, of positions.

In a number of the more heavily designed virtual assistants, great extractive power is presented as softness, compliance. Their calmness and smoothness are continually refined, engineered, programmed. They are designed to fade and blend in, seamlessly folded into the fabric of daily life and consciousness. Here, though, the affect is shaped with an end in mind – belief, trust – a level of comfort the user feels with a new, if harmless, friend. In the replication of a type of language partner, cognitive neuroscience is mixed with the insights of social psychology, of comedy, of screenwriting, to create a believable character, an effective one.

The masquerades of companies, the abyss between what organizations say and what they do, is hidden in language. The language of interfaces, platforms, and apps buries intent, edited down to seem innocuous.

And you, the user, the cognitive laborer – your creativity, your potential for invention, is the main blood offering you give. It isn’t enough to get your attention; it must also be retained for as long as possible. Your trust, your capacity for belief, these are the entry points through which to hold you. This is why billions of dollars are poured into the design of a brand’s affect and personality.

I think, here, of Cayce’s sensitivity to brands in Gibson’s Pattern Recognition – her deep understanding (and Gibson’s, as well) that brand personality, our affinities with brands, say a great deal about us. Cayce’s intimacy with brand psychology is actually the entry point to a reality we understand very well, in which “I’m building my brand” is both an ironic, self-aware joke and a completely recognizable experience.

Moving on from extraction: how might our linguistic structures change in response to continually engaging with consumer chatbots? How has our language already become more functional, ruthless, efficient, shaped by time spent writing in comment threads, on Twitter, Facebook, forums, for ten years, twenty years? How does this change in language and visual culture affect writers, poets, artists, makers?

If there is ever to be a movement towards a radical AI or leftist AI, or simply an AI not bound by and produced through extractive modes, we need a close assessment of the liquid interfaces being coded to shape language and vision, and further, of how their dominant mode is plasticity. The more machine learning is built in, the more an interface molds to your form, your speech patterns, and your needs, and the less adversarial it seems – more and more your companion. Your trusted friend, your ally, your yes man, stealing from your back pocket as you shake its hand.
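As a thought experiment, here is a toy sketch of that plasticity: a bot that gradually mirrors a user’s register. The class, the style features, and the mirroring rule are all hypothetical, invented for illustration – real assistants would adapt through trained models, not running averages.

```python
# A toy sketch of interface "plasticity": a bot that slowly mirrors the
# user's style. The features (message length, exclamation rate) and the
# update rule are invented for illustration, not any product's method.

class PlasticBot:
    def __init__(self):
        self.avg_len = 10.0        # running estimate of user message length
        self.exclaim_rate = 0.0    # how often the user exclaims

    def observe(self, user_msg: str) -> None:
        """Fold each user message into the bot's style estimates."""
        n_words = len(user_msg.split())
        self.avg_len = 0.8 * self.avg_len + 0.2 * n_words
        bang = 1.0 if "!" in user_msg else 0.0
        self.exclaim_rate = 0.8 * self.exclaim_rate + 0.2 * bang

    def reply(self, content: str) -> str:
        """Trim and punctuate canned content to match the user's register."""
        words = content.split()[: max(3, int(self.avg_len))]
        return " ".join(words) + ("!" if self.exclaim_rate > 0.5 else ".")

bot = PlasticBot()
for msg in ["hey!!", "ok!", "sure thing!"]:
    bot.observe(msg)
print(bot.reply("I can absolutely help you with that request right away"))
```

The unsettling part is how little machinery the effect requires: two running averages, and the interface already feels like it is listening.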

Boutang wrote in Cognitive Capitalism of the living, thinking labor that is knowledge work, and of how this knowledge worker’s labor resists consumption; McKenzie Wark has pointed out in response that knowledge work can certainly be lifeless or dull. Information systems can reify dead or dulled thinking; they can replicate crude analysis, crude innovation.

The logic of artificial language replicates labor that is unseen, as sex workers on phone-chat lines are studied for their linguistic rhythms and patterns of speech by bot writers. Cognitive labor maps over previously hidden labor. Facebook click farmers work in warehouses in Bangladesh that housed clothing sweatshops for decades.

I will also briefly point out the entire apparatus of machine vision, as it appropriates image-making, often working unseen, processing and naming images by logics we no longer have access to. The machine’s vision, its pattern recognition, when trained on people, is a sure violence, an ethical violence. As Hannah Black and Simone Browne remind us, such surveillance is not new at all.

Harder to track is how we will change, interpersonally, relationally, as a result of active, artificial seeing through these neural networks that can recall a million faces with precision, a kind of seeing that shapes policing, surveillance, and the definition of criminality. Machinic vision now touches all of us, but still unevenly. It might “see” certain people more clearly than others, defining them more violently. And that ‘clear seeing’ is one of the most important works of cognitive capitalism. It is not coding itself that is worrisome, but who is writing the algorithm, whose biases are built into the code.

How to break up this endless mental mallscape? What is liberation for the cognitive laborer? Closely linked to this, what are the dreams of the cognitariat? Is it still the dream of the artist, or that of radical movements: the hope for the collective, passionate restoration of one’s work, and one’s making, at last, to oneself?

Unsurprisingly, we find models in science fiction, in poetry, in literature and film, where scenarios of the brain used and abused to create ideal consumers are written so well. The brain is the frontier along which materialism wages its various wars. Creating “spaces of freedom” is not any easier now; it is harder and harder, given how powerful machine intelligence is.

Rather than stupefying or demoralizing, I find this overwhelming cybernetic stronghold honestly galvanizing – which might be my games background talking. I believe the logic of systems as we’re designing them can be countered through research, strategy, and play. Further, cognitive capitalism burns out along with AI research, which is hitting an incredibly difficult plateau. Fei-Fei Li, the former lead of Stanford’s AI Lab, describes this period of AI as akin to having a two-year-old, before the “terrible twos” hit; we are so far behind on the path to true AI that there will be many spaces for intervention.

I want to offer just a few possibilities for building such spaces.

Reveals: Tracking and Unstitching Design

Practicing Machine Vision

Mimicking Plasticity

Risk and Chaos Through Simulation

In understanding the digital design of emotion, of sense and affect by companies, perhaps there is a clue to how these same categories could be designed or modeled for other ends, for communities, actively countering destructive political and social modes. One clue might be the past dreams of cybernetic socialism, analyzed beautifully by Armin Medosch in his book New Tendencies. Tracking the design and un-stitching it, mapping how cognitive labor and expression are used, is step one in refusing to be captured and stored, to be calcified.

To understand machine vision, we can also practice it. A great number of contemporary artists practice a kind of brutal and ecstatic merging with machinic vision, a practiced intimacy with simulations. They actively practice the kind of cognitive flexibility that can teach us how to see like machines. The artificial eye might move on indefinitely, with or without us, but we can learn to look along with it. They intuitively zoom, crop, and select for meaning in a practice of seeing. They seek out the monstrous and strange, breaking systems apart for new end states. They model how we might navigate collective machine seeing, the distribution of intelligence across forms.

[Image: still from an Ian Cheng simulation]

Watching this Ian Cheng piece, for instance, I’m jolted out of the standard god-view of a simulation. I have to try to understand this being at its level, within its context. I’m trying to peer past the surface to guess at the rules organizing it beneath. Even as I feel the compulsion to interpret and name, I am refused. As a result, my relationality with the object, and further, how it affects me, is paramount. This isn’t animation but a simulation, in which Ian has designed and coded rulesets for characters and objects. A handful of basic algorithms drive the movements and are let free to interact and behave as they will. Ian’s chosen premise – an environment, a character, an animal, an object – plays out and runs indefinitely, with no end state, no final form.
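For a sense of the mechanics, here is a minimal sketch of that kind of live simulation: simple per-agent rulesets, left to interact, with no scripted ending. The agents, rules, and parameters are invented for illustration – a toy in the spirit of the piece, not Cheng’s actual code.

```python
# A toy live simulation: each agent follows a few local rules, and the
# system is simply left to run. Rules and numbers are invented here.

import random

class Agent:
    def __init__(self, name: str):
        self.name = name
        self.x = random.uniform(0, 100)   # position on a 1-D stage
        self.energy = random.uniform(5, 10)

    def step(self, others: list) -> None:
        """One rule evaluation: drift, then react to the nearest neighbor."""
        self.x += random.uniform(-1, 1)                  # wander
        nearest = min(others, key=lambda a: abs(a.x - self.x))
        if abs(nearest.x - self.x) < 5:                  # crowding rule
            self.x += 2 if self.x > nearest.x else -2    # move away
            self.energy -= 0.1
        else:
            self.energy += 0.05                          # recover when alone

agents = [Agent(f"a{i}") for i in range(5)]
tick = 0
while True:                      # no end state: the premise just runs
    tick += 1
    for agent in agents:
        agent.step([a for a in agents if a is not agent])
    if tick % 100 == 0:
        print(tick, [round(a.x, 1) for a in agents])
    if tick >= 500:              # cap only so this sketch terminates
        break
```

Run it and the positions drift without resolution – the premise simply plays out, which is exactly the quality the work asks us to sit with.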

This is composition with soft things – software, behavior, cognition – soft, felt elements that drive and direct the flesh. Spending time with these fantastic objects trains one to see anew in a few ways. We can better understand the interplay between the emergent, the algorithmic, and the story or narrative, and see how, watching, we intuitively develop an emotionally cohesive and coherent way of relating to the uncanny.

What this teaches me is how to tolerate inevitable contingency. A live simulation allows us to relate to the chaos of our own lives. It is a space in which humans can examine and re-calibrate their relationship to radical change. We can potentially learn to be comfortable with no end state, with ambiguity. In examining this exchange between simulation and viewer, between computer eye and human eye, one might see how the artificial eye moves indefinitely, without us. We’re in thrall to its eye, forced to imagine what that computational seeing means, but we also see our own agency in this process of interpretation, narrating, and naming.

Poetics: Intentional Opacity, or Double (and Triple, and) Speak

The Right to Obscurity

This feels like a good moment to consider what the cultural creative, the poet, the writer, the musician, can offer in terms of acid architecture, of surrealism, of alternate visual images, alternate systems of distributed intelligence, of experimental conversation. Surrealism allows for love, imagination, and the freedom to explore potentiality, subverting deadened structures, whether dead language or dead vision.

I think of Bartleby, in Melville’s story, who replied, to everything, “I would prefer not to.” I would prefer not to participate in this farce of transparency, of knowability, of sure positions, of ideas being clear. But if you’re under siege, if your body is too seen, your words too tracked, if, in sum, participation is demanded, there is always opacity. Poets practice this – the right to opacity, the right to obscurity – as a powerful way to establish spaces of freedom, of safety.

The poet and writer Édouard Glissant spoke most beautifully about this in the film One World in Relation. And I’ll close with his words:

I even openly claim the right to obscurity, which is not enclosure, apartheid, or separation. The obscure is simply renouncing the false truths of transparencies. We have suffered greatly from the transparent models of high humanity, of degrees of civilization that must be ceaselessly worked through, of blinding Knowledge … The transparency of the Enlightenment is finally misleading. We must reclaim the right to opacity. It is not necessary to understand someone–in the verb ‘to understand’ [French: comprendre] there is the verb ‘to take’ [French: prendre]–in order to wish to live with them. When two people stop loving, they usually say to each other, ‘I no longer understand you.’ As though to love, it were necessary to understand, that is, to reduce the other to transparency. 
