On the Emergence of General Computation from Artificial Intelligence
We are witnessing a massive paradigm shift toward AI-based general computation. In this post, I argue that the idea of “the stack” as a framework for analysis is dead. In large-scale contemporary models, where second-order reasoning about formal languages is an emergent effect, nothing of critical value is to be gained from the assumption of hierarchical complexity. At the same time, we are entering the age of “humanist” hacking: testing the limits of models like ChatGPT involves, among other things, an understanding of rhetoric, metalanguage, poetry, and performativity.
Ten Years of Image Synthesis
Deep learning models have become so good at generating images that it is by now clear they are here to stay. How did we end up here? This timeline traces some milestones – papers, architectures, models, datasets, experiments – starting from the beginning of the current “AI summer” ten years ago.
There Is No (Real Life) Use Case for Face Super Resolution
Deep learning has opened up a plethora of amazing possibilities in computer vision and beyond. In this post, I argue that none of them absolutely depend on (real world) face datasets. Faces can be nicely aligned, faces are easy to come by, and synthesizing realistic faces is a good benchmark of a model’s generative capabilities. Yet the responsibility that comes with face datasets outweighs all of this. Malicious applications will always be, rightfully, presumed by default.
What Could an Artificial Intelligence Theater Be?
While my current research is concerned with interpretable machine learning, my background is in experimental music theater. In this brief post, I explore some of the intersections of theater and computation in general, and of theater and machine learning in particular, that I suggest enable this trajectory. Based on this exploration, I present some speculative thoughts on potential future developments at the interface of theater and machine learning.
Embrace the Latent Space. Notes on the Curatorial Challenges of an Emerging Media Art Form
In this post, I look at the problem of exhibiting AI art through the lens of alien phenomenology. I argue that “works” of AI art necessarily consist of the entirety of a latent space, of all the images we can produce from such a space: hundreds of thousands of images, interesting images, boring images, mode-collapse images, adversarial images – all of them. Solving the problem of display for AI art would mean finding a practical way of exhibiting entire latent spaces, making tangible the machine’s perspective on the world and thus raising the question of machine creativity.
A Syllogism in Turing’s 1950 Paper
Among the many fascinating and clairvoyant arguments of Turing’s 1950 paper on “Computing Machinery and Intelligence” is a refutation of what Turing calls the “argument from informality of behavior”. This brief post disentangles the syllogism at the core of this particular passage.
Intuition and Epistemology of High-Dimensional Vector Space
In this post, I trace how vector space models generate knowledge, for the particular case of the digital humanities. What is the price of the commensurability that vector space models provide? In other words: if we use a vector space model to compare two or more complex aesthetic objects, what are the implicit epistemological assumptions that make these objects commensurable in the first place?