Level of Confidence: Machine Vision Lost in the Void of Ambiguity

Sebastian Logue
Oct 28, 2020

In late 2014, Mexico was engulfed in protests over the mysterious disappearance of forty-three normalista students in Iguala, Mexico. The years since have been filled with governmental and independent inquiries, but definitive answers have yet to surface, despite the recent creation of a new “truth committee” to reopen the investigation. In the midst of the aftermath, six months after the disappearances, Rafael Lozano-Hemmer released his open-source artwork Level of Confidence (Lozano-Hemmer). Since then, Level of Confidence has been shown around the world in over forty separate exhibitions. By reversing the directionality of facial recognition software and machine vision, Level of Confidence illuminates the confounding potential for uncertainty around facts and events in an age prolific with information and data, revitalizing and complicating the cold, supposedly objective gaze of AI image processing.

Owing to its basis in specific events, Level of Confidence is a poignant monument to the disappearance of the normalista students. Through that memorialization, however, it also offers a formal commentary on technology and power, deliberately taking facial recognition, a tool developed and deployed by law enforcement and militaries to identify enemy threats, and turning it toward victims (Paglen). By implementing technology generally associated with aggressive surveillance in a humanitarian capacity, Lozano-Hemmer defangs it, not only exposing facial recognition software in a public setting, where it is rarely confronted, but also using it as a tool of accountability rather than power. The forty-three missing students, victims of widespread corruption in the Mexican state and federal governments, are symbols of the lack of accountability, endemic in Mexican politics, that Level of Confidence seeks to subvert. Lozano-Hemmer takes the forty-three disappeared students as a tragic example in a work that illustrates the lopsided power at play in modern surveillance. In this, Level of Confidence becomes an important work in the discourse around neutralizing the power dynamics of computer vision. It exemplifies a repatriation of technology that is often used against the people, turned around to work for the good of the people, a rebellion against the concept of the surveillance state. As in drone warfare, automatic license plate reading, or your nearest Amazon-enabled supermarket, machine imaging has been developed by and for the most influential corporations and governments in the world, reinforcing existing vectors of visibility, power, and accountability (Paglen). Lozano-Hemmer appropriates facial recognition intentionally as a method of disarming these one-sided dynamics of power. His work is a redirect, pointing computer vision back at the systems of politics and power that abuse it, leveraging the tools of ever more invasive digital surveillance as a pointed critique of Mexico’s lack of accountability for its law enforcement and military entities.

In light of the state-of-the-art technology that Lozano-Hemmer employs in the work, the disappearance of the forty-three students is striking for its evasion of documentation. There is a certain irony in the contrast between the artificial intelligence at the core of Level of Confidence and the grainy, vague security-camera and smartphone footage that exists from the night of the kidnapping (“The 43”). The reality of what occurred that night is still unknown, and that uncertainty clashes harshly with the modern culture of information and data that makes a work like Level of Confidence possible. Even comprehensive accounts like Netflix’s docuseries The 43 can supply only a confusing and disjointed timeline at best, struggling for primary source material to cement enough waypoints to be sure of anything. Though it is alleged that the Mexican national security agency and military hold critical footage that could shed important light on the case, the government denies having any such footage, which means either that it really does not exist or that it is being hidden from the public; both cases reinforce Lozano-Hemmer’s point (“The 43”). This prompts viewers of Level of Confidence to wonder which is better. If more documentation and surveillance had been available, if Iguala looked like London or Beijing, with cameras on every corner, would we know what happened (Lange)? Would accountability be possible, or would the situation be exacerbated by an even more disproportionate right to visibility? Level of Confidence carries a strong political warning about the power that facial recognition technology can exert, but it also asks provocative and complex questions about the contemporary tension between privacy and information. Lozano-Hemmer draws out the contrast between the exceptional information-gathering capabilities of the work’s medium and the absence of information around the work’s subject. Level of Confidence is about vision in a situation where real vision does not exist, about the dangers of excessive or inadequate visibility and society’s inability to strike a balance between the two. The work forces viewers to ask themselves an ultimately inconclusive question about the potential benefits and drawbacks of machine imaging. Level of Confidence problematizes itself insofar as it is a positive application of computer vision while simultaneously indicating a situation where that same technology could have the opposite effect.

Level of Confidence is also a powerful vehicle for empathy, a product of the technology it uses, which matches each viewer with their closest likeness among the victims. By showing each viewer the victim they most closely resemble, Lozano-Hemmer requires the viewer to confront the subject of the work at a personal level, coming, almost literally, face-to-face with the reality of the event. This personalization of the crime humanizes the victims and allows viewers to relate, to imagine themselves as one of the disappeared students. Coming face-to-face with another individual is also a symbolic gesture in Western culture, indicating a respect that contributes to the sense of gravity surrounding the work (de Vries). The face functions as a vital device for social marking in the Western world, and so thrusting the viewer into a head-on encounter customized to them creates an intimate dynamic between the work and the viewer (de Vries). The viewer is shown, to the best extent of the work, their own self staring back at them, a visceral experience that drags the viewer into the work whether they desire it or not. Level of Confidence is not a work to be observed on the wall from the plush position of a museum bench, a contained microcosm to be looked at from a safe distance. No, Level of Confidence envelops the viewer, absorbing them until they are not so much looking at the work as being in and of it. In a virtual culture where human emotion and experience are flattened, Level of Confidence runs against that current, using digital media to pull the viewer into the piece, unwittingly making them a part of it and requiring them to see themselves rendered in the victims.

Yet, even in its unusual ability to evoke real emotion in the viewer, Level of Confidence again encounters the messy relationship between human and machine, empathy and apathy, privacy and surveillance, equality (justice) and power. Though the work does find empathy in showing the viewer themselves in the forty-three victims, the rating the work assigns to the match, the level of confidence, is a foil to that emotion, arguing for a reduction of complex, organic, human features into a single statistic. Level of Confidence creates a poignant experience for the viewer, but the level of confidence reminds the viewer that the work is ultimately the product of a computer algorithm that does not feel what the viewer does. The AI does not see a human face in its fullness, but rather reduces it to a set of geometries, only what is necessary to differentiate one person from another, and nothing more (Lange). The facial recognition software can aid the viewer in their search for meaning, but in the end it cannot form the same understanding and sense of loss that the viewer does. Because of this rift between human and machine, the level of confidence, a simple mathematical metric standing in for the complex emotions of the viewer, seems misplaced and dehumanizing in the context of the tragedy. This, coupled with the fact that the software will inevitably never achieve a one-hundred percent level of confidence since all forty-three students are almost certainly dead, emphasizes that, although the AI used in Level of Confidence recognizes the viewer’s face and engages them in an emotional and moving experience, the facial recognition software is ultimately a machine, incapable of discretion, understanding, or feeling. And this is setting aside the fact that the software is itself a fallible construct, susceptible to mistakes and false positives (Lange).
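
To make that reduction concrete, the sketch below (Python with NumPy, purely illustrative) shows one way a “closest likeness” and its level of confidence could be computed: each face is flattened into a fixed-length feature vector, the viewer’s vector is scored against the forty-three stored vectors, and the best similarity is reported as a percentage. The embedding step, the cosine-similarity scoring, and every name here are assumptions for illustration; the sources cited in this essay do not specify which algorithm the artwork actually uses.

```python
import numpy as np

def best_match(viewer_embedding, gallery):
    """Return the closest gallery face and a 'level of confidence' percentage.

    Both the viewer embedding and the gallery embeddings are assumed to be
    fixed-length feature vectors from some face-recognition model
    (hypothetical here; not the artwork's actual algorithm).
    """
    def cosine(a, b):
        # Cosine similarity: 1.0 means identical direction, 0.0 means unrelated.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Score the viewer against every stored face and keep the highest similarity.
    scores = {name: cosine(viewer_embedding, emb) for name, emb in gallery.items()}
    name, score = max(scores.items(), key=lambda kv: kv[1])

    # Express similarity as a percentage: the figure shown to the viewer.
    # Because the viewer is never actually one of the 43, it never reaches 100%.
    return name, round(score * 100, 1)

# Toy usage: random vectors stand in for real face embeddings.
rng = np.random.default_rng(0)
gallery = {f"student_{i:02d}": rng.normal(size=128) for i in range(1, 44)}
viewer = rng.normal(size=128)
print(best_match(viewer, gallery))
```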

However, maybe this too is the point; the students, or their remains, have yet to be found. The level of confidence then also stands as a reminder of the uncertainty ingrained in the story of the disappearance, not just the program’s probability of a face match. Herein lies the paradoxical difficulty: the level of confidence is a symbol that contradicts itself, a two-faced metaphor that shoots itself in the foot. And perhaps this is Lozano-Hemmer’s endgame. The chaos, the disjointed statements, the discontinuous narrative and timeline: this is the story of the 43. It is a story of loss, and to that extent Level of Confidence evokes empathy in the viewer, but more importantly, it is a story of ambiguity, uncertainty, and misinformation. The computer vision of Level of Confidence strives to make a positive match, the pride of a high-tech military-industrial complex obsessed with goals, numbers, and results encoded in binary. This is the irony at the heart of Level of Confidence. Despite widespread surveillance and the ability to computationally identify human faces, there are still circumstances that exist in a vacuum of certainty. The biometric program used in the work deploys the level of confidence as a ploy to feign discretion, to pretend it has the capacity for a spectrum, for multiple possibilities, but the ultimate failing of the machine is that it can never actually reach the one-hundred percent confidence it purports to excel at. The computer may promise “objective” sight, but if that vision is predicated on quantification, data, and fact that, in the case of the 43, do not exist, how is it any less subjective than human vision? And does the resulting uncertainty fundamentally undermine the program’s ability to rationalize events the way humans do? Very little video footage exists of the night the 43 normalistas disappeared; almost no cell phone GPS data, no call records, a handful of texts. Even in The 43 docuseries, one of the most comprehensive accounts of the event, with interviews from many of the leading reporters and assembled nearly five years after the fact, the producers struggle to scrape together what modern data they can find, repeatedly replaying the same short clips of smartphone footage (“The 43”). With digital technology infiltrating nearly every aspect of modern life, it becomes hard to imagine a scenario devoid of the trace data that the AI relies on to determine certainty (Paglen). So, in the face of such a disappearance, the computer balks. In lieu of hard facts to form patterns from, the machine hunts and searches where there is nothing to be found, attempting to invent its way out of ambiguity with statistics like a level of confidence, while humans can come to terms with the indescribable. Human existence is inherently uncertain, but unlike the machine, we are aware of our ephemerality and, to avoid total paralysis, have developed ways to live with that uncertainty, capable of rationalizing contradictions, paradoxes, and confounding variables. This leaves the level of confidence, the last crude defense of Lozano-Hemmer’s facial recognition program, lost in a territory that is unequivocally human, failing either to comprehend or to fully solve the mystery of the 43. The level of confidence becomes a conflicted, hamstrung measurement, dehumanizing in its cold calculus yet unable to provide the concrete answers that “hard data” promises.
It is a metric that exists poorly in all states, a shoddy jack of all trades and master of none, a model built for an enigmatic realm of inconclusivity that it cannot register. Some scholars, like Benjamin Bratton, argue that artificial intelligence should deliberately avoid taking on and conforming to human value systems and characteristics as a way of ensuring that AI does not absorb human biases and notions (Bratton). This makes sense in theory, but given the inevitability that artificial intelligence will be applied to deeply sensitive human subjects, Level of Confidence shows how algorithms do not differentiate and will eventually stray into affairs for which they are not prepared. Algorithms, with their pretense of objectivity, are rendered useless by the concept of the unaccounted-for (Paglen). Where humans find ways to exist in a state of uncertainty, programs become restless, unable to settle, fumbling for direction in a darkness that cannot be modelled and approximated into submission. The level of confidence, for which Lozano-Hemmer’s piece is titled, becomes the defining factor of the work because it exhibits both the broader and the subject-specific implications of uncertainty in a society dominated by machine intelligence.

The empathic engine of Level of Confidence is also drawn into question when considered in relation to Hito Steyerl’s theories of “neurocurating”. By using facial recognition technology to adapt the work to the viewer, Level of Confidence veers dangerously close to the precipice of neurocurating, pandering, albeit for a good cause, to the viewer’s empathic instincts. Steyerl warns of artworks that use eye-tracking and artificial intelligence algorithms to analyze interest in a work and generate metrics (Steyerl). Although Level of Confidence is open about its use of facial recognition, it still does just this, collecting the viewer’s facial data and creating a metric, the level of confidence, that describes their relationship to the artwork.

Ultimately, Level of Confidence is a conflicted work, pulling itself in a number of directions that are not necessarily parallel. But this seems to be the message. The disappearance of the 43 normalistas has been mired in debate, speculation, and a lack of accountability, with few fixed points in the narrative. Lozano-Hemmer explores that tension and uncertainty by setting large-scale ideas against one another: the power dynamics enabled by computer vision, empathy, and the algorithmic incapacity for uncertainty. Level of Confidence is a monument and a warning. It is a way of honoring the missing and making their plight real for the viewer, and a provocative work that asks the viewer unresolved questions about the role of artificial intelligence in society, both for the public and for the individual. Level of Confidence balances a complex set of interactions between the viewer, the technology, and the subject, the missing forty-three normal school students, each raising its own questions and complicating the other components, creating a messy experiment at the intersection of cutting-edge facial recognition, human emotion and social connotation, absence, and loss.

Bibliography

de Vries, Patricia, and Willem Schinkel. “Algorithmic Anxiety: Masks and Camouflage in Artistic Imaginaries of Facial Recognition Algorithms.” Big Data & Society, Jan. 2019, doi:10.1177/2053951719851532.

“The 43.” Performance by Paco Ignacio Taibo, Netflix, 15 Feb. 2019, www.netflix.com/title/81045551.

Bratton, Benjamin. “For You/For You Not: On Representation and AI.” Size Matters! (De)Growth of the 21st Century Art Museum, edited by Beatrix Ruf, Koenig Books.

Lange, Christy. “Surveillance, Bias and Control in the Age of Facial Recognition Software.” Frieze, 4 June 2018, frieze.com/article/surveillance-bias-and-control-age-facial-recognition-software.

Lozano-Hemmer, Rafael. “Level of Confidence.” Hemmer, 2015, www.lozano-hemmer.com/level_of_confidence.php.

Paglen, Trevor. “Invisible Images (Your Pictures Are Looking at You).” The New Inquiry, 8 Dec. 2016, thenewinquiry.com/invisible-images-your-pictures-are-looking-at-you/.

Steyerl, Hito. Duty Free Art. Verso, 2017.
