This project explores different types of data art and proposes a form of data art that creatively visualizes the emotions contained in the lyrics of different songs.
Data art, like art itself, is difficult to define clearly. One approach is to identify the intent of the creator: whether they intended their product to be art.
Data art can also overlap substantially with data visualization. The two can be compared in several ways:
Data visualization tends to sacrifice aesthetics for insight, whereas data art tends to do the opposite
Data art might not make for good data visualization, but it can promote what’s called “disinterested analysis”: provoking a non-expert into thought through strong emotional connections
Both data visualization and data art consist of two things: objects and features. For example, the objects could be cities, with their weather attributes as the features
Focusing too heavily on usability can strip data art of its beauty, spontaneity and emotion
Data art is allowed to commit the ‘sins of data visualization’: presenting information unclearly and blurring the readability of data for the sake of aesthetics
Data visualizations provide quantitative insights, whereas data art may provide qualitative insights that are open to interpretation
Data art can be classified based on form, temporality and generators.
Can be hand or machine made, or 3D printed
Can use traditional or contemporary materials
Products exhibited purely in digital space
Can be 2 or 3 dimensional
Unmoving in space and time
Aesthetically pleasing abstract forms: waves, liquids, fractals, etc.
Respond to user movement and inputs
New datapoints are calculated in real-time
Change over time based on predefined values
Not dependent on real-time factors or inputs
Made using mathematical rules defined algorithmically (see the minimal sketch after this list)
Complex algorithms might become computationally taxing
Using huge datasets gathered from the real world
Creativity lies in converting flavourless data into art
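As a minimal illustration of the algorithmic ‘generators’ category (my own example, not drawn from any particular artwork; the function name and constants are invented), a simple mathematical rule such as superposed sine waves already yields an abstract, wave-like form:

```js
// Illustrative generator: a purely mathematical rule (superposed sine
// waves) produces the points of an abstract, wave-like 3D form.
function generateWavePoints(numPoints = 500) {
  const points = [];
  for (let i = 0; i < numPoints; i++) {
    const t = (i / numPoints) * Math.PI * 4; // parameter sweep over two periods
    points.push({
      x: t,
      y: Math.sin(t) + 0.5 * Math.sin(3 * t), // superposition adds visual complexity
      z: 0.3 * Math.cos(2 * t),               // slight depth makes the form 3D
    });
  }
  return points;
}
```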
I propose a method to generate digital 3D models from the lyrical emotions of different songs, and present data art created with this method.
Music as a dataset is readily available and is consumed by the public in massive volumes; it is relatable and appealing. Instead of focusing on music’s quantitative factors like pitch and amplitude, the data art I present focuses on the actual emotion behind what the lyrics say, which, more often than not, is completely different from what the song’s beats and tempo suggest.
Most existing music visualizations are static, considering the entire song at once; some are dynamic shapes that change in real time.
Presents classical scores as static art pieces
Gives hardly any qualitative insight, but is aesthetically pleasing
Visualizes technical aspects of songs as segmented discs
Only understood in relation to other visualizations; not standalone
Data viz for lyrics, melody, harmony and rhythm
Focuses on usability above aesthetics
The aim of this project is to focus less on accuracy and more on the emotions in a song: its mood and meaning. The result should visually capture the artist’s interpretation of the essence of the song, based on the data provided to the algorithm, so that each musical piece yields a different form.
I present data art as a ‘growing’ 3D model that develops based on the underlying emotions of a song. The process of generating this piece is as follows:
For more engaging pieces, it is good to choose songs that are ‘emotional rollercoasters’. Heavily lyrical songs with minimal repetition are also preferred. The following three songs are considered:
Sad core, Hopeful façade
Angry and arousing, Heavily lyrical
Light and positive, Hopeful
Song lyrics, once laid out line by line, yield two inputs: one quantitative and one qualitative. The quantitative input is time: how long the singer takes to deliver that line. The qualitative input is the emotional variable assigned to that line. This project uses an arousal-valence dimensional model, Russell’s Circumplex Model of Affect, which places different emotions on a two-dimensional plot based on their positivity (valence) and arousal values.
Based on the emotions assigned to different lyrics, the total time each emotion appears is noted. Based on each emotion’s location in the model, zones are assigned as seen below. Both arousal and valence are divided into eight zones each, giving 64 possible combinations. This gives three numerical values per emotion: the positivity zone (from -4 to +4), the arousal zone (from -4 to +4) and the running time (in seconds).
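To make the zoning concrete, here is a minimal sketch (my own illustration, not the project’s actual code; it assumes valence and arousal values normalized to [-1, 1], and all names are invented) that maps each value to one of the eight zones and accumulates running time per emotion:

```js
// Hypothetical sketch: map a valence or arousal value in [-1, 1] to a
// zone in {-4 ... -1, +1 ... +4} (eight zones per axis, zero excluded).
function toZone(value) {
  const magnitude = Math.max(Math.ceil(Math.abs(value) * 4), 1); // 1..4
  return value >= 0 ? magnitude : -magnitude;
}

// Each lyric line carries an emotion label, its circumplex coordinates,
// and how long the singer takes to deliver it (the quantitative input).
const lines = [
  { emotion: 'sad',     valence: -0.7, arousal: -0.4, seconds: 6 },
  { emotion: 'hopeful', valence:  0.5, arousal:  0.3, seconds: 4 },
  { emotion: 'sad',     valence: -0.7, arousal: -0.4, seconds: 5 },
];

// Accumulate the total running time of each emotion and note its zones.
const emotions = {};
for (const l of lines) {
  const e = emotions[l.emotion] ?? {
    positivityZone: toZone(l.valence),
    arousalZone: toZone(l.arousal),
    seconds: 0,
  };
  e.seconds += l.seconds;
  emotions[l.emotion] = e;
}
// emotions.sad -> { positivityZone: -3, arousalZone: -2, seconds: 11 }
```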
Using three.js, different emotions are visualized as randomly growing lines emanating from a common origin in 3D space. For each emotion, the drawing algorithm takes as input the positivity zone, the arousal zone and the running time, which drive the parameters below (a sketch follows the list):
Positivity zone: direction of growth, with positive emotions growing upwards, neutral sideways, and negative downwards
Positivity zone: positive emotions are drawn in white, negative ones in black
Arousal zone: controls the jitter of growth; highly arousing emotions have more jitter, while low-arousal emotions grow more smoothly
Running time: more time leads to a longer growth strand for a particular emotion
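The sketch below is a minimal three.js reconstruction of these rules (my own interpretation, not the project’s actual code; growStrand, the step size and the steps-per-second rate are invented for illustration). Each emotion grows a polyline from the origin, with direction set by the positivity zone, color by its sign, jitter by the arousal zone, and length by the running time:

```js
import * as THREE from 'three';

// Hypothetical reconstruction of the drawing rules listed above.
function growStrand({ positivityZone, arousalZone, seconds }) {
  // Direction: positive emotions grow upwards, negative downwards,
  // near-neutral ones drift sideways in a random horizontal direction.
  const up = positivityZone / 4; // -1 .. +1
  const angle = Math.random() * Math.PI * 2;
  const side = 1 - Math.abs(up);
  const dir = new THREE.Vector3(Math.cos(angle) * side, up, Math.sin(angle) * side)
    .normalize()
    .multiplyScalar(0.05);                           // assumed step size

  const jitter = (Math.abs(arousalZone) / 4) * 0.04; // high arousal => more jitter
  const steps = seconds * 10;                        // strand length ~ running time

  const points = [new THREE.Vector3(0, 0, 0)];       // common origin
  for (let i = 0; i < steps; i++) {
    const prev = points[points.length - 1];
    points.push(new THREE.Vector3(
      prev.x + dir.x + (Math.random() - 0.5) * jitter,
      prev.y + dir.y + (Math.random() - 0.5) * jitter,
      prev.z + dir.z + (Math.random() - 0.5) * jitter
    ));
  }

  const color = positivityZone >= 0 ? 0xffffff : 0x000000; // white vs black
  return new THREE.Line(
    new THREE.BufferGeometry().setFromPoints(points),
    new THREE.LineBasicMaterial({ color })
  );
}

// Usage: one strand per emotion, all emanating from the origin.
const scene = new THREE.Scene();
scene.add(growStrand({ positivityZone: -3, arousalZone: 2, seconds: 11 }));
```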
Lyric data from three songs, Space Bound, How Far I’ll Go and Eastside, is used to create three different art pieces.
Positive façade with a deeply negative core
No positive emotions pointing straight upwards
Mostly depressing and angry lyrics
Absence of strong positivity; downward growth is straight down
High jitter; high arousal
Happy, positive yet serious
Many highly arousing and strongly positive lyrics
No deeply negative emotions
The product is a subjective manifestation of the artist’s interpretation of a song’s emotions
An aesthetic piece that promotes disinterested analysis, providing a gist of the song and qualitative insights open to interpretation
Future work includes refining the algorithm and automating the process of emotion assignment.
A detailed project report with references can be accessed from the link below: