Part III – Topic and Lyrical Content Correlation

In part II of this post, we explored a topic model built for the whole black metal lyrics data set (if you don’t know what a topic model is, read this as well; to sum things up, topic modeling is a process that enables discovery of the “meaning” underlying a document, with minimal human intervention). In said post we analyzed 1) the relationships between topics, and 2) the importance of individual words in characterizing them, by means of a force-directed graph, which (let’s face it) is a bit of a bubbly mess.
In order to better illustrate the second point stated above, I decided to build a zoomable treemap. In it, each large box (distinguished from the surrounding boxes by a label and a distinct color) represents a topic, i.e. a set of words that are somehow related and occur in the same context(s). By clicking on a label, the map zooms into it and presents the ten most relevant words within that topic. For example, by clicking on “Coldness”, you’ll see the top 10 terms that compose it (“ice”, “frost”, “snow” and so on). The area of each word is proportional to its importance in characterizing the topic: in our “Coldness” example, “cold” occupies a larger area than the rest, being the most relevant word in this context.
Similarly, the total area of each topic is proportional to its incidence in the black metal lyrics data set. For example, “Fire & Flames” has a larger area than “Mind & Reality” or “Universe & Cosmos”, making it more likely to occur when inferring the topics that characterize a song.

By the way, these topic labels were chosen manually. Unfortunately I couldn’t devise an automated process that would do that for me (if anyone has an idea of how to do this, let me know) so I had to pick meaningful and reasonable (I hope) representative titles for each set of words. In most cases, like the aforementioned “Coldness”, the concept behind the topic is evident. There are, however, a few cases where I had to be a bit more creative because the meaning of the topic is not so obvious (“Urban Horror” comes to mind).

There are also two topics which are quite generic, with terms that could occur in almost any context, so they’re simply labeled “Non-descriptive”.

As mentioned in part II of this post, one goal of this whole mess is to find out which lyrics “embody” a specific topic. Given that the lyrical content of a song is seen by the topic model as a mixture of topics, we’re interested in discovering lyrics that are composed solely (or almost entirely, let’s say more than 90%) of a single topic. Using the topic inferencing capabilities of the Stanford Topic Modeling Toolbox I did just that, selecting at least 3 representative lyrics for 14 of the topics above. They’re displayed in the collapsible tree below.
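The “more than 90% of a single topic” filter boils down to a simple operation on the document-by-topic matrix. Here is a minimal sketch, assuming the matrix is available as a NumPy array (the three rows below are made-up mixtures, not real model output):

```python
import numpy as np

# Hypothetical document-by-topic matrix: one row per lyric, one column
# per topic; each row sums to 1 (a mixture of topic weights).
doc_topic = np.array([
    [0.95, 0.03, 0.02],   # dominated by topic 0
    [0.40, 0.35, 0.25],   # no clear dominant topic
    [0.05, 0.92, 0.03],   # dominated by topic 1
])

# Keep lyrics whose strongest topic accounts for more than 90% of the mix.
dominant = doc_topic.argmax(axis=1)   # index of each lyric's strongest topic
strength = doc_topic.max(axis=1)      # weight of that topic
representative = np.where(strength > 0.9)[0]

for i in representative:
    print(f"lyric {i} embodies topic {dominant[i]} ({strength[i]:.0%})")
```

Rows 0 and 2 survive the cut; the middle lyric, being an even mixture, represents no topic particularly well.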

For the most part the lyrics seem to have a high degree of correlation with the topic assigned to them: for instance Immortal’s “Mountains of Might” fits the “Coldness” topic fairly well (surprise, surprise…) and Vondur’s cover of an Elvis Presley song obviously falls into the heart stuff category. But there is one intriguing result: after reading Woods of Infinity’s “A Love Story”, I was expecting it to have the “Dreams & Stuff from the Heart” topic assigned to it. It falls in the “Fucking” topic instead, so maybe the algorithm detected something (creepy) between the lines.


Credits:

The zoomable treemap was built from Bill White’s Treemap with Title Headers.

The collapsible tree was inspired by this tree and this other tree.

Part II – Topic Discovery in Black Metal Lyrics (All Bands)

In Part I of this post, we examined a topic model built from a subset of the black metal lyrics data set. It was a preliminary experiment with regards to topic discovery, and we only explored the 10 most frequent topics underlying lyrics authored by French bands.

In this second part we will examine the relationship, i.e. similarity/dissimilarity, between topics of a topic model. The model now presented was built using the original data set in its entirety, with Stanford’s Topic Modeling Toolbox (STMT). I chose this tool over Mallet for a number of reasons, the most important being that it allows you to “slice” the topics with respect to particular aspects of the data set, such as time or band (a functionality I’ll explore in the next post).

The pre-processing stage of the data set included removal of typical stop words (such as the, or and and), and not-so-typical ones: STMT allows removal of the most frequent words in the data set (if they’re too common, it’s very likely they’re not that informative). I’ve also removed lyrics with fewer than 5 words and kept only those lyrics with at least an 85% probability of being written in English (this allows the best translations of lyrics not originally written in English to also be included in this analysis). The outcome of this pre-processing step is a data set comprising approximately 52000 distinct lyrics.
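The whole filtering pipeline is straightforward to sketch. Below is a minimal, pure-Python version of the three steps (English-probability cut, minimum length, stop word plus most-frequent-word removal); the corpus and its English probabilities are made up, standing in for what a language detector such as langdetect would return:

```python
from collections import Counter

# Toy corpus: (lyric text, probability the text is English) pairs.
corpus = [
    ("the cold wind of the frozen north", 0.99),
    ("froid vent du nord", 0.12),               # likely not English
    ("hail storm", 0.97),                        # fewer than 5 words
    ("burning flames consume the dying light", 0.95),
]

STOP_WORDS = {"the", "of", "and", "or"}
MIN_WORDS = 5
MIN_ENGLISH_PROB = 0.85

# 1) keep lyrics detected as English with >= 85% probability
# 2) drop lyrics with fewer than 5 words
kept = [text for text, p in corpus
        if p >= MIN_ENGLISH_PROB and len(text.split()) >= MIN_WORDS]

# 3) remove stop words plus the most frequent remaining word in the
#    whole (surviving) data set, STMT-style
tokens = [w for text in kept for w in text.split() if w not in STOP_WORDS]
most_common = {w for w, _ in Counter(tokens).most_common(1)}
cleaned = [[w for w in text.split()
            if w not in STOP_WORDS and w not in most_common]
           for text in kept]
print(cleaned)
```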

Once again, I had to manually set a value for the number of topics (in subsequent experiments I’ll explore the possibility of determining the ideal number automatically) so I picked 30, a nice round number (my guess is as good as yours). Below you’ll find a list of some of them. Remember that a topic is a set of related words (the first and fifth listed here are my personal favourites):

  • space, universe, stars, chaos, void, cosmic, light, infinite
  • hell, evil, satan, demons, souls, god, unholy, infernal
  • fucking, metal, fuck, kill, lust, whore, shit, rape, cunt, bitch
  • cold, winter, wind, winds, ice, snow, frozen, land, mountains, frost
  • human, earth, race, humanity, destruction, mankind, war, plague, destroy
  • fire, burning, burn, flames, flame, burns, ashes, soul, fires
  • dark, ancient, power, soul, shadows, evil, spirit, eternal
  • pain, soul, hatred, mind, suffering, hate, veins, anger, thoughts, madness

[Click here for the whole 30 topic word distribution list] (note that the numbers inside parentheses represent the “weight” of the word in the topic: the higher the weight, the greater its importance in characterizing the topic)

It’s a pretty diverse list of topics. Some are quite generic, and others occur in a very small percentage of lyrics, but for the most part they seem concise and informative. There are also a number of them that could be related to each other: a topic about sea and waves is probably closer to another comprised of words such as wind or sky, than to a topic about pain and hatred.

One question that arises is how to determine this hypothetical relationship between the topics. One of the outputs of the STMT is a document-by-topic matrix, that is, a matrix where each row corresponds to a lyric and each column to a topic: their intersection gives us the “weight” of a topic in a particular lyric.

What’s important to retain here is that each topic can be represented by a column of values (its weights in the data set). If we want to determine the relationship between two topics in the corpus, we can use their representations as vectors of numbers and apply some sort of measure to them, such as the Jensen-Shannon divergence. This metric actually gives us the dissimilarity between two vectors: the higher its value, the higher their degree of unrelatedness.
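SciPy ships this measure out of the box. A small sketch with made-up topic weight vectors (note that SciPy’s `jensenshannon` returns the Jensen-Shannon *distance*, the square root of the divergence, but the ordering is the same: higher means more unrelated):

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

# Two topics represented as normalized weight vectors over the same
# set of lyrics -- made-up numbers for illustration.
topic_a = np.array([0.70, 0.20, 0.10])
topic_b = np.array([0.60, 0.30, 0.10])   # similar to topic_a
topic_c = np.array([0.05, 0.15, 0.80])   # quite different

print(jensenshannon(topic_a, topic_b))   # small value: related topics
print(jensenshannon(topic_a, topic_c))   # larger value: unrelated topics
```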

In the force-directed graph below, each topic is represented as a cluster of its top 7 words, where words with higher weight in the topic are drawn as circles with larger radii than the rest. The higher the divergence between two topics, the further apart they are in the graph (or should be; it’s not perfectly rendered because d3 is really not my strong point, but it will give you an idea of these distances for a few pairs of topics – refresh and reshuffle as much as you like!). In addition, if you mouseover a particular topic, the graph will highlight its links to other topics: the thinner a link is, the more unrelated the topics are.

Another question that comes to mind when visualizing these topics is which lyrics “embody” them best. Remember that lyrics can be seen as a mixture of topics, so one that is mostly composed of a single topic will probably represent it better than another that has only a small percentage (say, 10%) of that same topic. This will be addressed in the next post, so stay tuned.

Part I – Topic Discovery in Black Metal Lyrics (French Bands)

Counting occurrences of single words is not the most informative way of discovering the meaning (or a possible meaning) of a text. This is mainly because both the relationship between words and the context in which they occur are ignored. A more significant result would be discovering sets of correlated terms that express ideas or concepts underlying the text. Topic modeling addresses this issue of topic discovery, and more importantly, does so with (almost) no human supervision.

‘Topic’ is defined here as a set of words that frequently occur together. Quoting from Mallet: “using contextual clues, topic models can connect words with similar meanings and distinguish between uses of words with multiple meanings”. This is especially important in a data set like the black metal lyrics one, given that there are a number of words (such as death, life and blood) that appear in different contexts.

So, how does a topic modeling tool work? According to this excellent introduction to this subject, its main advantage is that it doesn’t need to know anything about the text except the words that are in it.  It assumes that any piece of text is composed of words taken from “baskets” of words, where each basket corresponds to a topic. Then it becomes possible to mathematically decompose a text into the probable baskets from whence the words first came. The tool goes through this process repeatedly until it settles on the most likely distribution of words into topics.

What results from this approach is that a piece of text is seen as a mixture of different topics, with each topic having an associated weight. The higher the weight of a topic, the more important it is in characterizing the text. For the sake of a practical example, let’s say that we have the following topics:

  1. cosmic, space, universe, millions, stars, …
  2. dna, genetic, evolution, millions, years, …
  3. business, financial, millions, funding, budget, …

Notice how the word “millions” shows up in different contexts: you can have a business text talking about millions of dollars or a science text mentioning evolution over millions of years. Taking the following text as a test case for our simple example topic model…

“The Hubble Space Telescope (HST) is a space telescope that was carried into orbit by a Space Shuttle in 1990. Hubble’s Deep Field has recorded some of the most detailed visible-light images ever, allowing a deep view into space and time. Many Hubble observations have led to breakthroughs in astrophysics, such as accurately determining the rate of expansion of the universe […] ESA agreed to provide funding and supply one of the first generation instruments for the telescope […] From its original total cost estimate of about US$400 million, the telescope had by now cost over $2.5 billion to construct.”

…it does seem reasonable that it can be seen as a mixture of topics 1 and 3 (with topic 1 having a higher weight than topic 3):
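The intuition behind that decomposition can be sketched very crudely: count how many words of the text fall into each topic’s “basket” and normalize. Real topic models do this probabilistically, but the flavor is the same. The baskets below are the three example topics above, and the text is a stripped-down paraphrase of the Hubble excerpt:

```python
# Crude stand-in for topic inference: basket-overlap counting.
topics = {
    1: {"cosmic", "space", "universe", "millions", "stars"},
    2: {"dna", "genetic", "evolution", "millions", "years"},
    3: {"business", "financial", "millions", "funding", "budget"},
}

text = ("the hubble space telescope was carried into orbit a deep view "
        "into space and the universe cost millions in funding").split()

# How many words of the text land in each basket?
hits = {k: sum(w in basket for w in text) for k, basket in topics.items()}
total = sum(hits.values())
mixture = {k: hits[k] / total for k in topics}
print(mixture)  # topic 1 dominates, topic 3 comes second
```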

What would a black metal topic model look like? To find out, I’ve made a couple of preliminary experiments using lyrics from French black metal bands (future experiments will explore other subsets of the lyrics corpus, and hopefully build a topic model for the entire data set, if time allows). The model described in this post was generated with Mallet, setting the number of topics to look for to 20, and using its most basic processing techniques: stop word removal, non-alphanumeric characters removal, feature sequences with bigrams, and little else.

For reasons of economy (and also not to bore you to tears) I’ll just list the top 10, that is, the 10 topics with the highest “weight” in characterizing the French lyrics subset (the remaining 10 have very small weights). Each is represented by 9 terms:

  1. life, time, death, eyes, pain, soul, feel, mind, world
  2. night, dark, light, cold, black, darkness, moon, sky, eternal
  3. world, life, human, death, earth, humanity, end, hatred, chaos
  4. blood, body, black, tears, flesh, heart, eyes, love, wind
  5. satan, blood, black, god, hell, evil, lord, christ, master
  6. war, blood, death, fight, fire, black, kill, rise, hell
  7. land, gods, blood, people, proud, men, great, king, ancestors
  8. god, time, void, light, death, reality, stars, matter, infinite
  9. god, lord, fire, divine, holy, light, flesh, man, great
  10. fucking, shit, fuck, make, time, trust, love, suck, dirty

The first one seems to be all over the place: life, time and death can be applied to a ton of subjects, and indeed they seem to characterize to some extent about half of the data set. Also, some terms appear quite often in different contexts (blood, black, death and even god). But there are a couple of interesting ones, such as topics 2, 7 and 10. And because looking at lists of words is tedious, here’s a word cloud that represents them using a horrid sexy color palette. Each topic has a different color, and the larger the font, the more preponderant it is in the subset.

One practical application of a topic model is using it to describe a text. Let’s take, for example, the lyrics for Blut Aus Nord’s “Fathers of the Icy Age” and “ask” our topic model what’s the composition of this particular piece of text. The outcome is:

  • Topic 7 (54.25%): land, gods, blood, people, proud, men, great, king, ancestors
  • Topic 2 (42.34%):  night, dark, light, cold, black, darkness, moon, sky, eternal
  • Other topics – less than 3.41%

We can interpret this song as a mixture of two topics, and in my opinion, the first one (let’s call it “ancient pagan stuff of yore”) seems to be pretty accurate. What about more personal lyrics such as T.A.o.S. “For Psychiatry”? Here’s what we get:

  • Topic 10 (40.95%): fucking, shit, fuck, make, time, trust, love, suck, dirty
  • Topic 1 (26.13%): life, time, death, eyes, pain, soul, feel, mind, world
  • Topic 3 (12.05%): world, life, human, death, earth, humanity, end, hatred, chaos

It’s a bit too generic for my liking, but we’re not that far off the mark. All in all, topic modeling appears to be quite useful for the discovery of concepts in our data set. There are, however, a few drawbacks to this approach. One of them is that the number of topics has to be set manually – in an ideal case the algorithm should figure out the appropriate number by itself. The other is the simplicity of the features; future experiments should focus on improving the lyrics representation with richer features. At any rate, these are promising results that can be further improved.
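On the “number of topics has to be set manually” drawback: a common workaround is to fit models with several candidate values and keep the one with the lowest held-out perplexity. A minimal sketch with scikit-learn’s LDA (a stand-in for Mallet; the corpus and candidate values are toy-sized):

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

lyrics = [
    "cold winter snow ice frost", "winter snow frozen cold wind",
    "fire flames burn burning ashes", "flames fire ashes burn ember",
    "satan hell evil demons unholy", "hell satan unholy evil infernal",
]
counts = CountVectorizer().fit_transform(lyrics)
train, held_out = counts[:4], counts[4:]

# Try several topic counts and keep the best-scoring one.
best_k, best_perp = None, np.inf
for k in (2, 3, 4):
    lda = LatentDirichletAllocation(n_components=k, random_state=0).fit(train)
    perp = lda.perplexity(held_out)   # lower is better
    print(f"k={k}: held-out perplexity {perp:.1f}")
    if perp < best_perp:
        best_k, best_perp = k, perp

print("chosen number of topics:", best_k)
```

This only automates the grid search, not the choice itself; nonparametric models (e.g. hierarchical Dirichlet processes) go further, but that is beyond this sketch.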

Usage of Specific Terms through Time

What about the usage of specific words in black metal lyrics across time? Have common terms – like life or death – been mentioned in a constant fashion through the years, or has their frequency changed dramatically?

The figure below plots the frequencies of a few selected words against time (in years). I’ve chosen death, life and time because they are among the most frequent terms in the whole lyrics data set. As for god and satan, well, if you don’t know why I picked them then that probably means you’re not acquainted with black metal at all, so I’ll refer you to Google or the nearest (decent) record shop to sort that out.

I’ve bundled a few synonyms and hyponyms with each term, taken from WordNet. This means that, for example, the occurrence count for satan also includes the counts of similar terms such as lucifer and devil.
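The bundling step amounts to summing raw counts over each synonym group. Here is a minimal sketch; the groups and counts below are hand-picked stand-ins for what WordNet (via e.g. NLTK’s wordnet corpus) and the real data set would provide:

```python
from collections import Counter

# Synonyms/hyponyms bundled under a head term (hypothetical groups).
BUNDLES = {
    "satan": {"satan", "lucifer", "devil", "beelzebub"},
    "god": {"god", "lord", "almighty", "creator"},
}

# Made-up per-word occurrence counts for a given year.
raw_counts = Counter({"satan": 10, "lucifer": 4, "devil": 7,
                      "god": 15, "lord": 9})

# Aggregate: a Counter returns 0 for absent words, so missing
# synonyms simply contribute nothing.
bundled = {head: sum(raw_counts[w] for w in words)
           for head, words in BUNDLES.items()}
print(bundled)
```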

Looking at the plot we can see that death was at its highest point around 1998 and has been decreasing since then (being surpassed by life in 2006/07), up until 2012. And notice how satan closely follows god across the years. This probably means that most lyrics that mention one of these entities also mention the other.

Part II – Frequent Words in Black Metal Lyrics

In the last post we tried to discover the most common terms used in black metal lyrics. One of the first questions that popped up was whether there are differences between countries regarding the most frequent words. To answer this (on a very small scale) I’ve subsetted the original data set into two smaller sets: one for lyrics penned by Norwegian bands and the other for Iraqi bands. The following bar plot shows the top 15 most frequent words found in the lyrics of Norwegian bands. It does not seem to differ much from the global top 15, presented in our previous post.

Below you’ll find the most frequent words in the lyrics of Iraqi bands. Not only does it look much different from the Norwegian bar plot, it also differs significantly from the global results. I find it very interesting that lies corresponds to 0.9% of the total occurrences. This and the presence of both truth and blasphemy seem to point to some sort of deeper meaning here. Or maybe it’s all just a coincidence because, again, with no contextual analysis we can’t really infer much. At any rate, it’s very likely that the lyrical concerns of Norwegian and Iraqi bands are distinct.

Part I – Frequent Words in Black Metal Lyrics

Ever wondered what patterns, if any, there are to be discovered in black metal lyrics? Well, I did, and started by simply finding out which words occur the most in this data set. After some cleaning and pre-processing, I’ve ended up using lyrics of 76039 songs by 24086 bands, from 116 different countries. Stop words (which can be roughly defined as very common and very uninformative words like the or or) were removed in this pre-processing stage. In the end, a total of 258610 distinct words occur, with the number of occurrences summing up to 5304046.
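The counting itself is the simplest part of the pipeline. A minimal sketch, with three made-up lyrics standing in for the real 76039-song data set:

```python
from collections import Counter

STOP_WORDS = {"the", "or", "and", "a", "of"}

# Toy lyrics in place of the real data set.
lyrics = [
    "the death of the light",
    "blood and death in the darkness",
    "time of death and blood",
]

# Tokenize, drop stop words, count occurrences across all lyrics.
counts = Counter(w for text in lyrics for w in text.lower().split()
                 if w not in STOP_WORDS)
top = counts.most_common(3)
print(top)

# Share of the most common term in all occurrences (the 0.7% figure
# for "death" in the real data set is computed the same way).
share = counts["death"] / sum(counts.values())
print(f"death: {share:.1%} of all occurrences")
```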

The following bar plot shows the top 15 most used words across the whole lyrics data set.

The most common term, death (not at all unexpected), represents 0.7% of the total number of occurrences of all distinct words. Other more or less expected results such as blood or darkness also make an appearance, but it is somewhat intriguing to find time in the top 5. So, what does this all mean? Well, not much (yet): simply counting the number of occurrences of individual words is not a good indicator of “meaning”, because it discards the context in which the words appear as well as the relationships between them, but it does provide very helpful hints.