
The Butterfly Effect

The Butterfly Effect is the idea that small things can have an immense impact on a complex system, classically imagined as a butterfly flapping its wings and eventually causing a typhoon. This project explains that piece of chaos theory musically.

Ayesha M. Ali x Emad Rahman
Artwork – Oshii Brownie




Incognito

A visual exploration of cyborg aesthetics, using my own face as the reference for the 3D model and then building a virtual environment around the character.

“Incognito” is a series of portraits of women, created with 3D modeling and digital photography techniques to reflect on this important day that celebrates women and their limitless capacity to achieve, literally, anything they want.

For me, the color red and its energy depict passion, power, love, blood, and, of course, danger. Women are the future, so these artworks of diverse women in futuristic costumes represent empowered women, whether they wear a traditional Abaya or Western attire.




Cosmic Fusion

Imagining alternate space scenarios by fusing cultural references and reinterpreting futuristic human-bot portraits to visualize the near future.




Plutchik/Senti(m)ent

AYESHA M. ALI | MARCEL TOP | BUDHADITYA CHATTOPADHYAY

Plutchik/Senti(m)ent examines and questions the emotion-mapping abilities of AI by producing a virtual, automated fashion show of designer bots that react to human presence and its emergent emotional cues through outfits and sounds.

Concept Note
Plutchik/Senti(m)ent is an AI-driven interactive environment of designer bots that react to human presence and viewership, and to viewers’ complex and emergent emotional cues, through a multitude of handmade outfits and sounds. Drawing on psychologist Robert Plutchik’s perspective on the evolutionary history of human emotions and their derivative and compound states, the project underscores the complex, emergent nature of emotion and asks how a carefully researched data set can help an AI, as a sentient being, detect that emergent nature.

The research includes the psychoacoustic association between sounds and human emotions, locating correspondences between emotional expressions and equivalent sounds, and the data mining of emotional cues expressed on the World Wide Web, e.g. on social media platforms, in the form of emoticons, memes, and other shareable images. On this basis, the project develops an interactive fashion bot that can see (e.g. colours) and hear (e.g. ambient sounds) through two interaction ports, a webcam and a microphone, using data sets of images and sounds tied to the eight primary emotions Plutchik proposed: fear, anger, joy, sadness, trust, disgust, anticipation, and surprise, along with their complex derivatives. The sentient bot recognizes these compound emotional states and responds with changes in outfits and sounds.
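For concreteness: Plutchik pairs adjacent primary emotions into compound “primary dyads”, so joy and trust compound into love, fear and surprise into awe, and so on around the wheel. Below is a minimal sketch of how a bot might represent this; the dyad names follow Plutchik’s published wheel, while the derive_compound helper and the score format are hypothetical illustrations, not the project’s actual code.

# Plutchik's primary dyads: compound states formed by pairs of
# adjacent primary emotions on the wheel.
PRIMARY_DYADS = {
    frozenset({"joy", "trust"}): "love",
    frozenset({"trust", "fear"}): "submission",
    frozenset({"fear", "surprise"}): "awe",
    frozenset({"surprise", "sadness"}): "disapproval",
    frozenset({"sadness", "disgust"}): "remorse",
    frozenset({"disgust", "anger"}): "contempt",
    frozenset({"anger", "anticipation"}): "aggressiveness",
    frozenset({"anticipation", "joy"}): "optimism",
}

def derive_compound(scores: dict[str, float]) -> str:
    """Hypothetical helper: name the compound state from the two
    strongest primary-emotion scores, falling back to the strongest
    primary when the pair is not a recognized dyad."""
    top_two = sorted(scores, key=scores.get, reverse=True)[:2]
    return PRIMARY_DYADS.get(frozenset(top_two), top_two[0])

# Example: strong joy and trust together read as "love".
print(derive_compound({"joy": 0.8, "trust": 0.7, "fear": 0.1,
                       "anger": 0.0, "sadness": 0.0, "disgust": 0.0,
                       "anticipation": 0.2, "surprise": 0.1}))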

Process Notes
We used Google to see which kinds of imagery correspond to which emotions, and trained an emotion recognition model on that basis. The model may not be accurate, but it reflects how we express ourselves online and how emotions come to be defined there.
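The notes do not say which framework or architecture was used. As a hedged sketch, a model of this kind is often built by fine-tuning a pretrained image network on scraped images sorted into one folder per emotion; everything below (PyTorch, ResNet-18, the data/train folder layout, the training schedule) is an assumption for illustration, not the project’s actual pipeline.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Plutchik's eight primary emotions; used here for the output size.
EMOTIONS = ["fear", "anger", "joy", "sadness",
            "trust", "disgust", "anticipation", "surprise"]

# Assumed layout: data/train/<emotion>/*.jpg, one folder per class.
# ImageFolder assigns label indices from the sorted folder names
# (see train_set.classes for the actual index-to-emotion mapping).
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("data/train", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Reuse ImageNet features; retrain only the final classification layer.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, len(EMOTIONS))

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()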

As a methodological process, we started with a question: how do we see ourselves, and how are we all seen collectively? We developed Senti(m)ent, a bot that absorbs vast amounts of visual data and reflects it through the radar of Plutchik’s model of eight human emotions, expressed in attire, costumes, and props that are updated with current data sets. This simulated avatar is constructed out of the machine junk (both physical and digital) of contemporary societies, e.g. mechanical parts, motorbike metal scraps, photographs, and recycled fabrics, overlaid with data sets of thousands of images collaged into a digital fabric.

The project also interrogates whether there can be a new way to document the emotions a post/neo-human can afford, based on ideas of self and others, and how such a futuristic being might express its identity, knowing the limits of emojis. The elements, or template design, of the costume are based on a “generated aesthetic”: an AI reduction model combined with a compression of symbols distilled from the data fed into the algorithm. This sentient being documents sentiment in complex ways, a totem for anyone projecting themselves into this virtual, global, data-generated costume, which is owned by no one yet created through collective effort.

Through this co-owned personality, which interprets the information as part of itself, we wanted to generate a dialogue about how much information we consume individually and collectively, through a dedicated renewal of the connection between technology and human emotions. The project challenges the ways machine learning can be utilized to reinterpret empathy in a technology-driven realm.
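As a sketch of how that radar might drive the avatar at runtime: score each webcam frame against the eight primaries, derive the (possibly compound) state, and swap in the matching costume texture and sound clip. This builds on the two sketches above (EMOTIONS, the fine-tuned model, derive_compound); apply_outfit, play_sound, and the asset paths are hypothetical placeholders, not the project’s actual pipeline.

import cv2
import torch
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Resize((224, 224)),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def classify(model, frame):
    """Return {emotion: score} for one BGR webcam frame."""
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    batch = preprocess(rgb).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(batch), dim=1)[0]
    return dict(zip(EMOTIONS, probs.tolist()))

def run(model):
    model.eval()
    cam = cv2.VideoCapture(0)            # interaction port: webcam
    while True:
        ok, frame = cam.read()
        if not ok:
            break
        scores = classify(model, frame)
        state = derive_compound(scores)  # dyad lookup, sketched earlier
        # Hypothetical asset lookup: one costume texture and one sound
        # clip per emotional state, e.g. outfits/love.png, sounds/love.wav.
        apply_outfit(f"outfits/{state}.png")
        play_sound(f"sounds/{state}.wav")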

Ayesha M. Ali: 3D modeling, costume design, visual concept and artwork execution

Marcel Top: data collection, machine training, video editing, and creative coding

Budhaditya Chattopadhyay: sound, video data sets, audiovisual synchronization and conceptualisation

Bibliography, References and Tech Stack

Asutay, Erkin and Västfjäll, Daniel (2019). “Sound and Emotion”. In Mark Grimshaw-Aagaard, Mads Walther-Hansen, and Martin Knakkergaard (Eds.), The Oxford Handbook of Sound and Imagination, Volume 2. Oxford: Oxford University Press.

Böhme, G. (1993). “Atmosphere as the fundamental concept of a new aesthetics.” Thesis Eleven 36: 113–26.

Bordwell, D. (2009). “Cognitive theory”. In Livingston, P. & Plantinga, C. (eds.), The Routledge companion to philosophy and film (pp. 356-365). London: Routledge.

Chattopadhyay, B. (2014). “Object-Disoriented Sound: Listening in the Post-Digital Condition”. A Peer-reviewed Journal About 3/1. http://www.aprja.net/?p=1839

Frühholz, Sascha, Trost, Wiebke, and Kotz, Sonja A. (2016). “The sound of emotions: Towards a unifying neural network perspective of affective sound processing.” Neuroscience & Biobehavioral Reviews 68: 96–110.

Ihde, D. (2007). Listening and voice: Phenomenologies of sound. New York: SUNY Press.

instaloader/instaloader. (2020). Retrieved 10 November 2020, from https://github.com/instaloader/instaloader

Mortillaro, Marcello (2013). “On the Acoustics of Emotion in Audio: What Speech, Music, and Sound have in Common”. Frontiers in Psychology.

Plutchik, Robert (1980). Emotion: Theory, Research, and Experience, Vol. 1: Theories of Emotion. New York: Academic Press.

Plutchik, Robert (2002). Emotions and Life: Perspectives from Psychology, Biology, and Evolution. Washington, DC: American Psychological Association.

Scheutz, M. (2011). Architectural Roles of Affect and How to Evaluate Them in Artificial Agents. International Journal of Synthetic Emotions, 2(2), 48–65. doi: 10.4018/jse.2011070103

Simmons, D. R. (2006). The association of colours with emotions: A systematic approach [Abstract]. Journal of Vision, 6(6):251, 251a, http://journalofvision.org/6/6/251/, doi:10.1167/6.6.251.

Strong, Jennifer, Hao, Karen, Ryan-Mosley, Tate, and Cillekens, Emma (2020). “AI Reads Human Emotions. Should it?”. MIT Technology Review, October.




Futuristic Headgears

Oshii Brownie’s series of headgear and customized props, specially crafted and perfected for editorial shoots and specific occasions. We work with mixed media, fabrics, textures, and metals to produce one-of-a-kind fashion pieces.