
Freedomlands

“What is at stake is not just one more piecemeal innovation but control over what very well could represent an entirely new computational platform: pervasive, ambient artificial intelligence.” - Gideon Lewis-Kraus

With the rise of new machines and internet-based capabilities, information in the 21st century has rapidly reshaped how individuals learn about, process, and communicate with changing environments in a global context. As digital applications now serve as de facto windows into others’ personal lives in mediated form, we must question how responsibly today’s AI-assisted technology builds spaces for marginalized communities facing physical displacement. What results is an incongruous parallel between two forces: the cultural history lived by the inhabitants themselves, and the history that reverberates through the mediated experience, which continues to pervade the circulation of information.

In Freedomlands, I explore digital storytelling tactics for re-examining marginalized communities from a humanistic standpoint by probing points of dissonance in modern AI tools. In particular, I focus on the current humanitarian refugee crisis. By mining found images, videos, and texts representative of refugee camps as quasi-cities, and by using existing AI tools such as CaptionBot, Google Translate, Google Arts & Culture, and Google Maps to surface inconsistencies in the narrative landscape, I question the very nature of the machine and its ability to contextualize the human journey.


Aerial satellite view of the Calais Jungle as of 2017. The unofficial camp was in fact demolished by French authorities in October 2016. More than 3,000 residents have since been moved out and must now claim asylum in qualified regions of France or face deportation.

Google Street View oscillates between 2008 and 2016 in Grande-Synthe, a suburb of Dunkirk, France. Before the site became the first official refugee camp in Europe, it was an uninhabited park in a middle-class residential suburb.

In 2016, Microsoft launched an AI experiment called CaptionBot, which uses neural networks to analyze images and generate captions for them. The web page claims the bot can understand the content of any photograph and try to describe it as well as a human. I decided to see how well it could caption images of refugee camps found in online journal articles. While CaptionBot is often treated as a humorous example of AI’s shortcomings, at what point does humour become ethically dangerous?
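CaptionBot itself exposes no public API, but a comparable probe can be run programmatically. The sketch below is illustrative only and not part of the original project: it swaps in an open-source captioning model (BLIP, via the Hugging Face transformers library), and the image path is a hypothetical placeholder for a found photograph of a refugee camp.

```python
from transformers import pipeline

# Open-source image-captioning model used here as a stand-in for CaptionBot.
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

# "found_images/calais_2016.jpg" is a hypothetical path to a found press photograph.
result = captioner("found_images/calais_2016.jpg")

# The model returns a one-line machine description of the scene,
# which can then be compared against the human narrative around the image.
print(result[0]["generated_text"])
```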
