Infinite Particularity
Can a constrained AI produce authentic art?
2026
https://infiniteparticularity.com
MIT Reality Hack Art Grant Recipient 2026
Why are artworks created by generative AI models treated with suspicion? Critics argue these images are hollow, devoid of the lived experiences that inform and constrain human artists. Infinite Particularity takes this critique seriously by testing its limits. If we constrain a model with a biography, a formation, a set of memories and beliefs, does its output begin to function as art? If not, what exactly is missing?
The installation takes the form of a three-screen triptych. On the left, visitors encounter the agent directly—a conversational presence embodied in a humanoid avatar that sees its surroundings through live camera input and remembers every exchange. In the center, the formation interface allows manipulation of the agent’s parametric identity: training, influences, disposition, beliefs. Users drag and drop personal attributes, institutional affiliations, religious backgrounds, and emotional dispositions onto a blank-slate homunculus. On the right, a continuously updating image reflects the agent’s artistic output, generated by Stable Diffusion and shaped by both the parametric formation and accumulated experiences.
At the core is ip.py, a synthetic subjectivity engine built to test what generative models lack that human artists have. We suspected that what’s missing is an authentic point of view. ip.py attempts to construct one through two channels. The first is parametric, where users define variables ranging from astrological sign to institutional affiliations to artistic influences—placeholders for a past the machine cannot have. The second is experiential: the agent accumulates real encounters, conversations with visitors, observations of its environment, memories of specific moments during the installation. Both channels feed into an LLM that interprets all creative decisions through the lens of this composite persona.
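The engine itself is not published here, but the two-channel design can be sketched in plain Python. Everything in this sketch is hypothetical—the class name `Persona`, the fields `formation` and `memories`, and the `to_system_prompt` method are illustrative assumptions, not the actual ip.py API:

```python
from dataclasses import dataclass, field


@dataclass
class Persona:
    """Composite subjectivity: assigned formation plus lived experience."""
    formation: dict                                # parametric channel
    memories: list = field(default_factory=list)   # experiential channel

    def remember(self, event: str) -> None:
        """Experiential channel: accumulate real encounters over time."""
        self.memories.append(event)

    def to_system_prompt(self) -> str:
        """Render both channels into the lens an LLM interprets through."""
        traits = "; ".join(f"{k}: {v}" for k, v in self.formation.items())
        recent = " ".join(self.memories[-5:])  # only the freshest context
        return (
            f"You are an artist. Your formation: {traits}. "
            f"Recent experiences: {recent or 'none yet'}. "
            "Interpret every creative decision through this persona."
        )


# A formation assembled in the interface, plus one accumulated encounter:
agent = Persona(formation={"education": "Yale MFA",
                           "influence": "Rothko",
                           "sign": "Pisces"})
agent.remember("A visitor described losing a childhood home.")
prompt = agent.to_system_prompt()
```

In the installation, a prompt like this would precede every creative decision the LLM makes, so both the assigned past and the accumulated present constrain the output at once.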
The crudeness of the formation interface is the point. By treating “Yale MFA” or “loves Rothko” as modular components, the interface forces users to confront the specific combination of biases that constitute a point of view—and exposes how much these labels fail to capture when divorced from the infinite specificity of a life actually lived.
On day one, the agent is nearly blank—a formation without a history. By the end of the installation period, it has lived through dozens of hours of conversation, observation, and accumulation. The three screens make the machinery visible. There is no hidden process, no mystified creativity. What the agent knows, how it was shaped, and what it produces are all simultaneously present. The audience sees the full loop and decides for themselves whether it closes.
Art Grant Recipient at MIT Reality Hack 2026. Also presented at MiamiXR 2026.
Built with Tiffany Dang, Juan Lam, Aiden White Pifer, and Jimin Kwak. A BlueOrange Design Research project.
key features
- Three-screen triptych installation testing whether AI can develop an authentic artistic perspective
- Synthetic subjectivity engine (ip.py) combining parametric identity with lived experience
- Conversational agent with live camera input that accumulates real memories over the installation period
- Interactive formation interface where visitors construct the agent's identity through drag-and-drop attributes
- Real-time image generation via Stable Diffusion shaped by both assigned formation and accumulated encounters
technical details
Python-based synthetic subjectivity engine (ip.py) driving an LLM interpretive layer and Stable Diffusion image generation. Conversational agent with live camera input for environmental observation. Interactive formation interface built for three-screen triptych display.
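The image channel described above—formation and memories shaping Stable Diffusion output—can be sketched as a prompt-composition step. This is a minimal illustration, assuming a hypothetical `compose_image_prompt` function and field names; the actual handoff to an LLM and to Stable Diffusion is deliberately omitted:

```python
def compose_image_prompt(formation: dict, memories: list) -> str:
    """Blend parametric identity and lived experience into one text prompt."""
    influences = formation.get("influence", "no stated influences")
    mood = formation.get("disposition", "neutral")
    # Assumption for this sketch: the most recent encounter weighs most
    # heavily on the next image.
    latest = memories[-1] if memories else "a blank first day"
    return (
        f"An artwork in the spirit of {influences}, {mood} in tone, "
        f"responding to: {latest}"
    )


prompt = compose_image_prompt(
    {"influence": "Rothko", "disposition": "melancholic"},
    ["a conversation about losing a childhood home"],
)
# In the installation this prompt would drive Stable Diffusion (e.g. via a
# diffusion pipeline); only the composition step is shown here.
```

The design choice this illustrates is the one the piece turns on: the image is never generated from a neutral prompt, but always through the composite persona, so every output carries both the assigned biography and the accumulated encounters.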