Classification of Political Statements by Orator with DSX

After all that keyword extraction from political speeches, it occurred to me that it would be interesting to find out whether it’s possible to build a model that predicts the political orator to whom a statement, or even a complete speech, belongs. By statement, I mean a sentence with more than a couple of words drawn from a speech (not including interviews or political debates, for example). I took the 327 speeches by 12 US presidents used in the previous post as the basis of a document data set and added a few dozen speeches by other, non-American, dictatorial political leaders, so as to create a set appropriate for a classification task.

I intended to explore two different routes:

  1. Build a predictive model from a collection of sentences previously classified as being uttered either by a US President or by some other politician (all non-American political leaders of the 20th century, all dictators) during an official speech. This can then be defined as a binary classification problem: either a statement is assigned to a politician of the class “USPresident” or it isn’t. All the sentences (33500 in total) were drawn from the speeches mentioned above.
  2. Build another model from a collection of speeches by a diverse group of orators, such that the model can correctly assign to a previously unseen speech the person associated with it. This can be defined as a multi-class classification problem.

This all sounds reasonable and potentially interesting (and, who knows, even useful), but building predictive models from text-based data is a very cumbersome task because there’s always a multitude of things to decide beforehand, including:

  • How to represent the text? Learning algorithms can’t deal with text in its original, “as-is” format, so there’s a number of preprocessing steps to take in order to transform it into a set of numerical/categorical/ordinal/etc. features that make sense. There are numerous feature types and transformations I could explore here, like representing the text as a weighted vector space model, using word-based features, character-based features, using part-of-speech tags or entities as additional features, building topic models and using the topic probabilities for each document, and so on. The problem is that I do not have enough time (nor patience) to efficiently decide on the most appropriate feature representation for my speech/sentence data sets.
  • The curse of dimensionality: Assuming I’ve managed to find a good text representation, it’s almost certain that the final dimensionality of the data set presented to the learning algorithm will be prohibitively high. Again, there are numerous feature selection methods that could help me ascertain which features are more informative and discard the rest. I don’t really care about trying them all.
  • What learning algorithm is appropriate? Finally, which algorithm should be used for each of these two classification tasks? Again, there are hundreds of them out there, not to mention countless parameters to tune, cross-validation techniques to test, different evaluation measures to optimize, and so on. (A rough sketch of what such a hand-built pipeline could look like follows this list.)
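Purely for illustration, here’s a scikit-learn sketch of the kind of pipeline those three decisions imply: a TF-IDF representation, chi-squared feature selection to tame the dimensionality, and a linear classifier evaluated with cross-validation. This is not what DSX does internally; the file name, column names and parameter values are assumptions.

```python
# A rough sketch of the manual route: representation + selection + learner.
import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

data = pd.read_csv("sentences_train.csv")   # assumed columns: "text", "label"
pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=3)),   # word + bigram features
    ("select", SelectKBest(chi2, k=5000)),                      # keep the most informative features
    ("clf", LogisticRegression(max_iter=1000)),                 # one of many possible learners
])
scores = cross_val_score(pipeline, data["text"], data["label"], cv=5, scoring="accuracy")
print("5-fold CV accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```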

So as to avoid losing too much time with all of this just for the sake of a blog post, I decided to use DSX for two simple reasons: 1) it accepts text in its original format and does all the feature transformation/selection/extraction steps by itself, so I don’t need to worry about that stage, and 2) it tests hundreds of different algorithms and combinations of algorithms to find the best model for the data.

The only pre-processing done to the data sets prior to uploading them as CSV files to DSX was:

  1. Splitting each data set into a training portion, from which to build a predictive model, and a testing portion used to evaluate the model on data unknown to it (and make sure there was no overfitting).
  2. To make things more challenging, I replaced all entity mentions with their entity type. This is because a sentence or speech mentioning specific dates, people and locations can easily be assigned to the correct orator using those entities alone. For example, “It is nearly five months since we were attacked at Pearl Harbor” is obviously something that only FDR could have said. “Pearl Harbor” is a clear hint of the true class of the sentence, so, to make things more difficult for DSX, it gets replaced with the placeholder “LOCATION”. A similar replacement is applied to entities like organizations, dates and persons, with the help of the Stanford CoreNLP toolkit (a rough sketch of this masking step follows the list).
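The actual masking was done with Stanford CoreNLP; the sketch below uses NLTK’s bundled NE chunker as a stand-in, just to show the idea of swapping entity mentions for their types. The labels (GPE, PERSON, ORGANIZATION) differ from Stanford’s, and dates are not covered, so treat this as an approximation of the step, not the exact procedure.

```python
# Stand-in for the Stanford CoreNLP step: mask entity mentions with their type.
# Requires: nltk.download("punkt"), "averaged_perceptron_tagger",
#           "maxent_ne_chunker", "words"
from nltk import word_tokenize, pos_tag, ne_chunk

def mask_entities(sentence):
    tokens = []
    for node in ne_chunk(pos_tag(word_tokenize(sentence))):
        if hasattr(node, "label"):        # an entity subtree, e.g. GPE or PERSON
            tokens.append(node.label())   # keep only the entity type
        else:
            tokens.append(node[0])        # a (word, POS) leaf -> keep the word
    return " ".join(tokens)

print(mask_entities("It is nearly five months since we were attacked at Pearl Harbor"))
# roughly: "... we were attacked at GPE" (exact labels depend on the chunker)
```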

The first model built was the one for the binary version of the data set (i.e., a sentence either belongs to a US president or to a non-American political leader), using a total of 26792 sentences. Out of a total of 8500 examined models, DSX found one generated with the Iterative OLS algorithm to be the best, estimating accuracy (that is, the percentage of sentences correctly assigned to their respective class) to fall between 76% and 88%, and average recall (that is, the averaged percentages of correct assignments for each class) to fall in the range of 78% to 88%. Given that the “NON US PRESIDENT” class is only about two thirds the size of the “US PRESIDENT” class, average recall is a better evaluation measure than plain accuracy for this particular data set.

ForecastThis DSX estimated qualities of the best predictive model for the binary political sentences data set.

To make sure the model was not overfitting the training data, and that the estimates above are correct, I sent DSX a test set of sentences with no labels assigned and compared the returned predictions with the ground truth. It turns out accuracy is around 82% and average recall approximately 80%. This is a great result overall, and it means we’ve managed to build a model that could be useful, for example, for automatic annotation of political statements.
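For completeness, computing those two numbers from the returned predictions takes one line each with scikit-learn; the file and column names below are assumptions.

```python
# Compare DSX's returned predictions with the held-out ground truth.
import pandas as pd
from sklearn.metrics import accuracy_score, recall_score

truth = pd.read_csv("test_ground_truth.csv")["label"]
preds = pd.read_csv("dsx_predictions.csv")["predicted_label"]

print("Accuracy:       %.3f" % accuracy_score(truth, preds))
print("Average recall: %.3f" % recall_score(truth, preds, average="macro"))
```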

And just for the record, here are a few examples of sentences that the model did not get right:

  • Sentences by US Presidents marked as belonging to non-US political leaders (dictators):
    • We have no territory there, nor do we seek any.
    • That is why we have answered this aggression with action.
    • Freedom’s fight is not finished.
  • Sentences by non-US political leaders (dictators) marked as belonging to US presidents:
    • The period of war in [LOCATION] is over.
    • The least that could be said is that there is no tranquillity, that there is no security, that we are on the threshold of an uncontrollable arms race and that the danger of a world war is growing; the danger is growing and it is real.

I doubt a person could do much better just by reading the text, with no additional information.


The second model was built from a training set of 232 speeches, each labeled with the respective orator (11 in total). The classes are very unbalanced (that is, the number of examples for each label varies greatly), and some of them are quite small, which makes average recall the best measure to pay attention to when assessing the quality of the predictions made by the model. The best model DSX found was built with Multiquadric Kernel Regression, and although it has a hard time learning three of the eleven classes (see figure below), it’s actually a lot better than I expected, given the skew of the data and the fact that all entities were removed from the text.

ForecastThis DSX predictive model for political speeches by 11 orators. The best model was built with Multiquadric Kernel Regression.

And what about the model’s performance on the test set? It more or less follows the estimated performance of the trained model: it fails to correctly classify speeches by Hitler (classifying them as belonging to FDR instead) and by Nixon (which are assigned to Lyndon B. Johnson). On the other hand, it correctly classifies all the instances of Reagan, FDR and Stalin, and most of Bill Clinton’s speeches. I’m sure that if I provided a few more examples for each class, the results would greatly improve.
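A confusion matrix makes these per-orator mix-ups explicit (e.g. Hitler predicted as FDR, Nixon as Johnson). A hypothetical sketch, with file and column names assumed:

```python
# Rows are the true orators, columns the predicted ones.
import pandas as pd
from sklearn.metrics import confusion_matrix

truth = pd.read_csv("speeches_test_truth.csv")["orator"]
preds = pd.read_csv("dsx_speech_predictions.csv")["predicted_orator"]

labels = sorted(truth.unique())
cm = pd.DataFrame(confusion_matrix(truth, preds, labels=labels),
                  index=labels, columns=labels)
print(cm)
```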

To conclude: this model, alongside the very good model obtained for the first data set, illustrates how it is possible to quickly obtain predictive models useful for text annotation of political speeches. And all this with minimal effort, given that DSX can evaluate hundreds of different models very quickly, and also handle the feature engineering side of things, prior to the supervised learning step.

*Disclaimer: I work for ForecastThis, so shameless self-promotion trigger warning goes here.

Sources/Tools

  • ForecastThis DSX
  • US Presidential speeches harvested from the Miller Center Speech Archive


Death Row Inmates: Last Words

Word clouds aren’t the best of data visualizations. They’re often too simplistic, representing a small sample of words out of context. I felt, however, that a word cloud would be appropriate to convey the most frequent terms present in the last statements of death row inmates.

That’s because the majority of these statements is made up of just a few sentences, in which the inmates say goodbye to their families. Many apologise to their victims; some protest their innocence until the end. A few simply state that they’re now ready to die. There’s not much variety here, so representing the top terms proportionally to their frequency will not, I think, be an inaccurate representation.

The following word cloud was generated, after stop word removal, from the 518 last statements of Texas death row inmates executed between 1982 and 2014. These statements were harvested from the Texas Department of Criminal Justice website.
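For the curious, a minimal sketch of the word cloud step using the `wordcloud` Python package; the input file name is an assumption, and the library handles the stop word filtering.

```python
from wordcloud import WordCloud, STOPWORDS

with open("last_statements.txt") as f:
    text = f.read()

cloud = WordCloud(stopwords=STOPWORDS, width=800, height=500,
                  background_color="white").generate(text)
cloud.to_file("last_words_cloud.png")
```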

Here are the top 15 words and their counts:

  • love: 634
  • family: 290
  • god: 203
  • life: 149
  • hope: 131
  • lord: 130
  • forgive: 127
  • people: 125
  • peace: 96
  • jesus: 96
  • give: 95
  • death: 92
  • pain: 81
  • strong: 81
  • warden: 77

It’s not at all surprising that “love”, “god” and “family” are at the very top. Here’s a sample of the most common bigrams and trigrams (i.e., sequences of two and three words), with a small counting sketch after the list:

  • “i love you”
  • “i would like”
  • “i am sorry”
  • “i am ready”
  • “i am going”
  • “thank you”
  • “my family”
  • “forgive me”
  • “stay strong”
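The counting behind this list can be done in a few lines of plain Python; a rough sketch follows, where the file layout is assumed and punctuation handling is deliberately naive.

```python
from collections import Counter

def ngrams(tokens, n):
    return zip(*(tokens[i:] for i in range(n)))

counts = Counter()
with open("last_statements.txt") as f:          # assumed: one statement per line
    for line in f:
        tokens = line.lower().split()
        for n in (2, 3):                        # bigrams and trigrams
            counts.update(" ".join(g) for g in ngrams(tokens, n))

for phrase, freq in counts.most_common(10):
    print(phrase, freq)
```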


Around the World with Satan

The following map displays, for each country, the rank of the word “Satan” in black metal lyrics written between 1980 and 2013. This ranking is calculated as the ratio of the total number of times “Satan” occurs to the maximum raw frequency of any term in the country’s lyrics, after stop word removal. The darker the shade of blue, the higher “Satan” ranks among that country’s terms. Filipino bands throw the S-word around a lot more than the rest of the world, at least in comparison with other frequent terms in their lexicon, followed by a number of countries in Latin America.
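In code, the ratio described above boils down to something like the following sketch; the per-country token lists are toy stand-ins for the real, stop-word-filtered corpus.

```python
from collections import Counter

lyrics_by_country = {   # toy stand-in for the real corpus
    "Philippines": ["satan", "satan", "darkness", "satan", "night"],
    "Norway":      ["darkness", "night", "satan", "cold", "night", "night"],
}

def satan_ratio(tokens):
    counts = Counter(tokens)
    _, top_freq = counts.most_common(1)[0]      # raw frequency of the country's top term
    return counts["satan"] / top_freq

for country, ratio in sorted(((c, satan_ratio(t)) for c, t in lyrics_by_country.items()),
                             key=lambda cr: cr[1], reverse=True):
    print(country, round(ratio, 3))
```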


As for the single most frequent word for each country, here’s a selected sample with some amusing entries:

  • Brunei: human
  • Kazakhstan: rape
  • Mongolia: soul
  • Costa Rica: lord
  • Honduras: cold
  • Barbados: ocean
  • Jamaica: pussy
  • Japan: hell

Note that the actual size of each country’s corpus (that is, its total number of terms) has some influence over the computed ratios. Since some countries are a lot more prolific, black-metal-wise, than others, take this analysis with a grain of salt.


Using NLP to build a Black Metal Vocabulary

Black metal has been linked, since its inception, to Satanic or anti-Christian themes. With the proliferation of bands in the 90s (after the Norwegian boom) and the subsequent emergence of sub-genres, other topics such as paganism, metaphysics, depression and even nationalism came to the fore.

In order to discover the terminology used to explore these lyrical themes, I’ve devised a couple of term extraction experiments using the black metal data set. The goal here is to build a black metal vocabulary by discovering salient words and expressions, that is, terms that, when used in BM lyrics, carry more information than when used in a “normal” setting. For instance, the terms “Nazarene” or “Wotan” have a much higher weight in the black metal domain than in the general purpose corpus used for comparison. Once again, note that this does not necessarily mean that these two words occur very frequently in BM lyrics (I’d bet that “Satan” or “death” have a higher number of occurrences); it indicates that, when they do occur, they carry more information within the BM context.

This task was carried out with JATE‘s implementations of the GlossEx and C-value algorithms. The part-of-speech of each term (that is, the “type” of term) was discovered with the StanfordNLP toolkit. The top 50 of each type (with the exception of adverbs) are listed in the table below. For the sake of visualization, I make a distinction between named entities/locations and the other nouns, with the former depicted in the word maps at the end of this post.
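GlossEx and C-value have their own, more elaborate scoring formulas; as a much simpler illustration of the underlying idea (domain salience measured against a reference corpus), here’s a relative-frequency-ratio sketch with toy corpora.

```python
# Not GlossEx or C-value themselves, just the intuition behind them: a term is
# salient when its relative frequency in the domain corpus far exceeds its
# relative frequency in a general reference corpus. Both corpora are toy data.
from collections import Counter

def salience(term, domain_tokens, reference_tokens, eps=1e-9):
    p_dom = Counter(domain_tokens)[term] / len(domain_tokens)
    p_ref = Counter(reference_tokens)[term] / len(reference_tokens) + eps
    return p_dom / p_ref

domain = ["wotan", "nazarene", "death", "satan", "death"]      # toy BM lyrics
reference = ["death", "life", "time", "people", "death"]       # toy general corpus
print(salience("wotan", domain, reference) > salience("death", domain, reference))  # True
```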

I’ve also included, in the last column of the table, the top term combinations. It’s noteworthy how many of these combinations are either negations of something (“no hope”, “no god”, “no life” and so on) or concerned with time (“eternal darkness”, “ancient times”). Such preoccupation with vast stretches of time is also evident in the top adverbs (“eternally”, “forever”, “evermore”), adjectives (“endless”, “eternal”) and even nouns (“aeon” or “eon”).

ADJECTIVES | ADVERBS | VERBS | NOUNS | TERM COMBINATIONS
Endless | Nevermore | Desecrate | Forefather | Life and death
Unhallowed | Eternally | Smolder | Armor | Human race
Luciferian | Tomorrow | Travel | Aeon | No light
Infernal | Infernally | Fuel | Splendor | No hope
Necromantic | Forever | Spiral | Pentagram | Eternal night
Paralyzed | Anymore | Dethrone | Perdition | No god
Pestilent | Mighty | Throne | Specter | Full moon
Unholy | Skyward | Envenom | Misanthrope | No life
Illusive | Evermore | Lay | Cross | Black metal
Untrodden | Earthward | Resound | Magick | Cold wind
Astral | Someday | Mesmerize | Nihil | No place
Misanthropic | Astray | Abominate | Ragnarok | No escape
Unmerciful | Onward | Paralyze | Blasphemer | No return
Cruelest | Verily | Blaspheme | Profanation | Eternal life
Blackest | Deathly | Impale | Misanthropy | No fear
Eternal | Forth | Cremate | Malediction | Flesh and blood
Wintry | Unceasingly | Bleed | Revenant | No matter
Bestial | Weightlessly | Procreate | Damnation | Fallen angel
Reborn | Anew | Enslave | Conjuration | Eternal darkness
Putrid | Demonically | Awake | Undead | No man
Darkest | | Behold | Nothingness | Dark night
Unblessed | | Intoxicate | Armageddon | Lost soul
Colorless | | Devour | Lacerate | No end
Diabolic | | Bury | Wormhole | Ancient time
Demonic | | Demonize | Eon | No remorse
Wrathful | | Forsake | Devourer | No reason
Nebular | | Enshroud | Impaler | No longer
Vampiric | | Writhe | Sulfur | Black cloud
Unchained | | Destroy | Betrayer | Dark forest
Armored | | Entomb | Deceiver | Human flesh
Immortal | | Raze | Bloodlust | Endless night
Hellish | | Flagellate | Reaper | Ancient god
Hellbound | | Unleash | Horde | Mother earth
Unnamable | | Convoke | Blasphemy | Black wing
Prideful | | Crucify | Eternity | Night sky
Colorful | | Fornicate | Defiler | Dark side
Unbaptized | | Torment | Immolation | Eternal sleep
Unforgotten | | Venerate | Soul | Black hole
Satanic | | Beckon | Abomination | Black heart
Morbid | | Defile | Flame | Flesh and bone
Sempiternal | | Distill | Hail | No chance
Mortal | | Immolate | Malignancy | Dark cloud
Honorable | | Welter | Wrath | Final battle
Glooming | | Run | Pestilence | Eternal fire
Willful | | Sanctify | Gallow | No peace
Lustful | | Eviscerate | Disbeliever | No future
Everlasting | | Unchain | Witchery | Black soul
Impure | | Ravage | Satanist | Final breath
Promethean | | Mutilate | Lust | Black night

Most salient entities: many are drawn from the Sumerian and Nordic mythologies. I’ve also included in this bunch groups of animals (“Beasts”, “Locusts”).

Most salient locations. I’ve also included in this bunch non-descript places (“Northland”). Notice how most are concerned with the afterlife (surprisingly, “hell” is not one of them).

It occurred to me that these results could be the starting point of an automatic lyric generator (like the now defunct Scandinavian Black Metal Lyric Generator). Could be a fun project, if time allows (probably not).

Resources:

  • IBM GlossEx
  • Jason Davies’ D3 Word Cloud
  • JATE – Java Automatic Text Extraction
  • StanfordNLP Core

Middle-Earth Entity Recognition in Black Metal Lyrics

The influence of JRR Tolkien on black metal has been pervasive almost since the genre's beginning. One of BM’s most (in)famous outfits, Burzum, took its name from a word invented by the Middle-Earth creator that signifies “darkness” in the Black Speech of Mordor. Other Norwegian acts such as Gorgoroth or Isengard adopted their names from notable Middle-Earth locations. Perhaps the best example is the Austrian duo Summoning, who have incorporated in their releases innumerable lyrical references (well, not innumerable, about 70 actually) to Tolkien’s works.

The references to Middle-Earth mythology abound in both lyrics and band monikers. Using a list of notable characters’ names and geographic locations as the basis for a named entity recognition task, I set out to find which are the most cited in the black metal data set.
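The matcher I actually used was a small Java script; here’s a rough Python analogue of the same dictionary-based lookup, with an abbreviated name list and an assumed lyrics file layout, just to show the idea.

```python
import re
from collections import Counter

# Abbreviated name list; the real one covered notable characters and locations.
middle_earth = ["Mordor", "Morgoth", "Sauron", "Moria", "Saruman", "Carcharoth",
                "Angmar", "Orthanc", "Nargothrond", "Gorthaur", "Black Gate"]
# Longest names first so multi-word entries ("Black Gate") match before their parts.
names = sorted(map(re.escape, middle_earth), key=len, reverse=True)
pattern = re.compile(r"\b(" + "|".join(names) + r")\b", re.IGNORECASE)

counts = Counter()
with open("bm_lyrics.txt") as f:            # assumed: one song's lyrics per line
    for line in f:
        counts.update(m.group(1).title() for m in pattern.finditer(line))

print(counts.most_common(10))
```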

With this list and a small Java NER script implemented for this task, I found 149 bands which have chosen a Middle-Earth location or entity for their name. Angmar is the most popular (6), closely followed by Mordor (5) and Sauron (5). With 4 occurrences each, there’s also Orthanc, Moria, Nargothrond, Carcharoth, Gorthaur and Morgoth.

As for actual lyrical references to these entities, I found a grand total of 736 of them. The ones that have at least two occurrences are depicted in the bubble chart below. It’s not surprising at all to find that the most common references (Mordor, Morgoth, Sauron, Moria, Saruman and Carcharoth) belong to malevolent characters, or dark and dangerous places, of the Tolkienesque lore. The “Black Gate” is also mentioned a lot, but it could have a meaning outside of the Middle-Earth mythology.

Credits

Bubble Chart built from Mike Bostock’s example

Part I – Topic Discovery in Black Metal Lyrics (French Bands)

Counting occurrences of single words is not the most informative way of discovering the meaning (or a possible meaning) of a text. This is mainly because both the relationship between words and the context in which they occur are ignored. A more significant result would be discovering sets of correlated terms that express ideas or concepts underlying the text. Topic modeling addresses this issue of topic discovery, and more importantly, does so with (almost) no human supervision.

‘Topic’ is defined here as a set of words that frequently occur together. Quoting from Mallet: “using contextual clues, topic models can connect words with similar meanings and distinguish between uses of words with multiple meanings”. This is especially important in a data set like the black metal lyrics, given that there are a number of words (such as death, life and blood) that appear in different contexts.

So, how does a topic modeling tool work? According to this excellent introduction to this subject, its main advantage is that it doesn’t need to know anything about the text except the words that are in it.  It assumes that any piece of text is composed of words taken from “baskets” of words, where each basket corresponds to a topic. Then it becomes possible to mathematically decompose a text into the probable baskets from whence the words first came. The tool goes through this process repeatedly until it settles on the most likely distribution of words into topics.

What results from this approach is that a piece of text is seen as a mixture of different topics, with each topic having an associated weight. The higher the weight of a topic, the more important it is for characterizing the text. For the sake of a practical example, let’s say that we have the following topics:

  1. cosmic, space, universe, millions, stars …
  2. dna, genetic, evolution, millions, years ….
  3. business, financial, millions, funding, budget ….

Notice how the word “millions” shows up in different contexts: you can have a business text talking about millions of dollars or a science text mentioning evolution over millions of years. Taking the following text as a test case for our simple example topic model…

“The Hubble Space Telescope (HST) is a space telescope that was carried into orbit by a Space Shuttle in 1990. Hubble’s Deep Field has recorded some of the most detailed visible-light images ever, allowing a deep view into space and time. Many Hubble observations have led to breakthroughs in astrophysics, such as accurately determining the rate of expansion of the universe […] ESA agreed to provide funding and supply one of the first generation instruments for the telescope […] From its original total cost estimate of about US$400 million, the telescope had by now cost over $2.5 billion to construct.”

…it does seem reasonable that it can be seen as a mixture of topics 1 and 3 (with topic 1 having a higher weight than topic 3).

What would a black metal topic model look like? To find out, I’ve made a couple of preliminary experiments using lyrics from French black metal bands (future experiments will explore other subsets of the lyrics corpus, and hopefully build a topic model for the entire data set, if time allows). The model described in this post was generated with Mallet, setting the number of topics to look for to 20, and using its most basic processing techniques: stop word removal, non-alphanumeric characters removal, feature sequences with bigrams, and little else.
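The actual model was trained with Mallet; as a rough Python analogue under the same settings (20 topics, stop word removal, one song per document), a gensim sketch might look like the one below. The file layout and preprocessing details are assumptions.

```python
from gensim import corpora, models
from gensim.parsing.preprocessing import STOPWORDS

with open("french_bm_lyrics.txt") as f:     # assumed: one song's lyrics per line
    texts = [[w for w in line.lower().split() if w.isalpha() and w not in STOPWORDS]
             for line in f]

dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]
lda = models.LdaModel(corpus, num_topics=20, id2word=dictionary,
                      passes=10, random_state=1)

for topic_id, words in lda.print_topics(num_topics=10, num_words=9):
    print(topic_id, words)

# Topic mixture of a single song (cf. the per-song breakdowns further down):
print(lda.get_document_topics(corpus[0]))
```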

For reasons of economy (and also not to bore you to tears) I’ll just list the top 10, that is, the 10 topics that have the highest “weight” in characterizing the French lyrics subset (the remaining 10 have very small weights). Each is represented by 9 terms:

  1. life, time, death, eyes, pain, soul, feel, mind, world
  2. night, dark, light, cold, black, darkness, moon, sky, eternal
  3. world, life, human, death, earth, humanity, end, hatred, chaos
  4. blood, body, black, tears, flesh, heart, eyes, love, wind
  5. satan, blood, black, god, hell, evil, lord, christ, master
  6. war, blood, death, fight, fire, black, kill, rise, hell
  7. land, gods, blood, people, proud, men, great, king, ancestors
  8. god, time, void, light, death, reality, stars, matter, infinite
  9. god, lord, fire, divine, holy, light, flesh, man, great
  10. fucking, shit, fuck, make, time, trust, love, suck, dirty

The first one seems to be all over the place: life, time and death can be applied to a ton of subjects, and indeed they seem to characterize to some extent about half of the data set. Also, some terms appear quite often in different contexts (blood, black, death and even god). But there are a couple of interesting ones, such as topics 2, 7 and 10. And because looking at lists of words is tedious, here’s a word cloud that represents them using a horrid sexy color palette. Each topic has a different color, and the larger the font, the more preponderant it is in the subset.

One practical application of a topic model is using it to describe a text. Let’s take, for example, the lyrics for Blut Aus Nord’s “Fathers of the Icy Age” and “ask” our topic model what’s the composition of this particular piece of text. The outcome is:

  • Topic 7 (54.25%): land, gods, blood, people, proud, men, great, king, ancestors
  • Topic 2 (42.34%):  night, dark, light, cold, black, darkness, moon, sky, eternal
  • Other topics – less than 3.41%

We can interpret this song as a mixture of two topics, and in my opinion, the first one (let’s call it “ancient pagan stuff of yore”) seems to be pretty accurate. What about more personal lyrics such as T.A.o.S. “For Psychiatry”? Here’s what we get:

  • Topic 10 (40.95%): fucking, shit, fuck, make, time, trust, love, suck, dirty
  • Topic 1 (26.13%): life, time, death, eyes, pain, soul, feel, mind, world
  • Topic 3 (12.05%): world, life, human, death, earth, humanity, end, hatred, chaos

It’s a bit too generic for my liking, but we’re not that far off the mark. All in all, topic modeling appears to be quite useful for the discovery of concepts in our data set. There are, however, a few drawbacks to this approach. One of them is that the number of topics has to be set manually; ideally, the algorithm would figure out the appropriate number by itself. The other is the simplicity of the features; future experiments should focus on improving the lyrics representation with richer features. At any rate, these are promising results that can be further improved.

Usage of Specific Terms through Time

What about the usage of specific words in black metal lyrics across time? Have common terms – like life or death – been mentioned in a constant fashion through the years, or has their frequency changed dramatically?

The figure below plots the frequencies of a few selected words against time (in years). I’ve chosen death, life and time because they are among the most frequent terms in the whole lyrics data set. As for god and satan, well, if you don’t know why I picked them then that probably means you’re not acquainted with black metal at all, so I’ll refer you to Google or the nearest (decent) record shop to sort that out.

I’ve bundled a few synonyms and hyponyms with each term, taken from WordNet. This means that, for example, the occurrence count for satan also includes the counts of similar terms such as lucifer and devil.
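The bundling step can be sketched with NLTK’s WordNet interface; this is only an approximation of whatever synonym/hyponym selection was actually applied.

```python
# Requires: nltk.download("wordnet")
from nltk.corpus import wordnet as wn

def related_terms(word):
    terms = set()
    for synset in wn.synsets(word):
        terms.update(l.name().lower() for l in synset.lemmas())     # synonyms
        for hyp in synset.hyponyms():
            terms.update(l.name().lower() for l in hyp.lemmas())    # hyponyms
    return terms

print(sorted(related_terms("satan")))   # includes, e.g., "lucifer" and "devil"
```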

Looking at the plot, we can see that death was at its highest point around 1998 and has been decreasing since then (being surpassed by life in 2006/07), up until 2012. And notice how satan closely follows god across the years. This probably means that most lyrics that mention one of these entities also mention the other.

Part II – Frequent Words in Black Metal Lyrics

In the last post we tried to discover the most common terms used in black metal lyrics. One of the first questions that popped up was whether there are differences between countries regarding the most frequent words. To answer this (on a very small scale) I’ve subsetted the original data set into two smaller sets: one for lyrics penned by Norwegian bands and the other for Iraqi bands. The following bar plot shows the top 15 most frequent words found in the lyrics of Norwegian bands. It does not seem to differ much from the global top 15 presented in our previous post.

Below you’ll find the most frequent words in the lyrics of Iraqi bands. Not only does it look much different from the Norwegian bar plot, it also differs significantly from the global results. I find it very interesting that lies corresponds to 0.9% of the total occurrences. This, and the presence of both truth and blasphemy, seems to point to some sort of deeper meaning here. Or maybe it’s all just a coincidence because, again, with no contextual analysis we can’t really infer much. At any rate, it’s very likely that the lyrical concerns of Norwegian and Iraqi bands are distinct.

Part I – Frequent Words in Black Metal Lyrics

Ever wondered what patterns, if any, there are to be discovered in black metal lyrics? Well, I did, and started by simply finding out which words occur the most in this data set. After some cleaning and pre-processing, I ended up using the lyrics of 76039 songs by 24086 bands, from 116 different countries. Stop words (which can be roughly defined as very common and very uninformative words like the or or) were removed in this pre-processing stage. In the end, a total of 258610 distinct words occur, with the number of occurrences summing up to 5304046.

The following bar plot shows the top 15 most used words across the whole lyrics data set.

The most common term, death (not at all unexpected), represents 0.7% of the total number of occurrences of all distinct words. Other more or less expected results such as blood or darkness also make an appearance, but it is somewhat intriguing to find time in the top 5. So, what does this all mean? Well, not much (yet): simply counting the number of occurrences of individual words is not a good indicator of “meaning”, because it discards the context in which the words appear, as well as the relationships between them, but it does provide very helpful hints.
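For reference, the counting behind the bar plot amounts to little more than the following sketch; the file name and the tiny stop list are placeholder assumptions.

```python
from collections import Counter

with open("bm_lyrics_all.txt") as f:
    tokens = [w for w in f.read().lower().split() if w.isalpha()]

stopwords = {"the", "a", "and", "or", "of", "to", "in", "is", "my", "your", "i", "you"}
tokens = [w for w in tokens if w not in stopwords]

counts = Counter(tokens)
total = sum(counts.values())
for word, freq in counts.most_common(15):
    print("%-10s %6d  %.2f%%" % (word, freq, 100.0 * freq / total))
```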