As a part of the R4DS June Challenge and the “Summer of Data Science” Twitter initiative started by Data Science Renee, I decided to improve my text mining skills by working my way through Tidy Text Mining with R by Julia Silge and David Robinson. I wanted a fun dataset to use as I made my way through the book, so I decided to use every line from The Office. I could write an entire blog post about why I love The Office and why it is such a great show, but I will refrain. The good thing about using this dataset is that I’ve seen every episode (except for seasons 8 and 9) multiple times; needless to say, I know this data very well.
Let’s get started!
library(tidyverse)
library(tidytext)
library(scales)
library(googlesheets)
library(igraph)
library(ggraph)
library(widyr)
library(psych)
library(kableExtra)
library(knitr)
library(plotly)
library(ggcorrplot)
library(reticulate)
library(cleanNLP)
library(packcircles)
library(patchwork)
Getting and Cleaning the Data
Fortunately, someone created a googlesheet sourced from officequotes.net with every line from The Office.
# get key for data sheet
sheet_key <- gs_ls("the-office-lines") %>%
pull(sheet_key)
# register sheet to access it
reg <- sheet_key %>%
gs_key()
# read sheet data into R
raw_data <- reg %>%
gs_read(ws = "scripts")
id | season | episode | scene | line_text | speaker | deleted |
---|---|---|---|---|---|---|
1 | 1 | 1 | 1 | All right Jim. Your quarterlies look very good. How are things at the library? | Michael | FALSE |
2 | 1 | 1 | 1 | Oh, I told you. I couldn’t close it. So… | Jim | FALSE |
3 | 1 | 1 | 1 | So you’ve come to the master for guidance? Is this what you’re saying, grasshopper? | Michael | FALSE |
4 | 1 | 1 | 1 | Actually, you called me in here, but yeah. | Jim | FALSE |
5 | 1 | 1 | 1 | All right. Well, let me show you how it’s done. | Michael | FALSE |
6 | 1 | 1 | 2 | [on the phone] Yes, I’d like to speak to your office manager, please. Yes, hello. This is Michael Scott. I am the Regional Manager of Dunder Mifflin Paper Products. Just wanted to talk to you manager-a-manger. [quick cut scene] All right. Done deal. Thank you very much, sir. You’re a gentleman and a scholar. Oh, I’m sorry. OK. I’m sorry. My mistake. [hangs up] That was a woman I was talking to, so… She had a very low voice. Probably a smoker, so… [Clears throat] So that’s the way it’s done. | Michael | FALSE |
This data, like most data, isn’t perfect, but it’s in pretty good shape. There are a few cleanup steps we need to do:
- Filter out deleted scenes
- Remove text in brackets ([]) and put in a new column called actions
- There are 4,000+ instances of ??? in the data, mainly in the last two seasons. The ??? stands in for characters like …, -, ’, and “. For now I’m just going to replace all instances with ’ since that seems to cover the majority of cases
- Change speaker to lower case since there is some inconsistent capitalization
- Some entries for speakers have actions ([]), which I’ll remove
- Fix misspellings in the speaker field (e.g. Micheal instead of Michael)
mod_data <- raw_data %>%
filter(deleted == "FALSE") %>%
mutate(actions = str_extract_all(line_text, "\\[.*?\\]"),
line_text_mod = str_trim(str_replace_all(line_text, "\\[.*?\\]", ""))) %>%
mutate_at(vars(line_text_mod), funs(str_replace_all(., "\\?\\?\\?", "'"))) %>%
mutate_at(vars(speaker), funs(tolower)) %>%
mutate_at(vars(speaker), funs(str_trim(str_replace_all(., "\\[.*?\\]", "")))) %>%
mutate_at(vars(speaker), funs(str_replace_all(., "micheal|michel|michae$", "michael")))
Exploring the Data
total_episodes <- mod_data %>%
unite(season_ep, season, episode, remove = FALSE) %>%
summarise(num_episodes = n_distinct(season_ep)) %>%
as.integer()
total_episodes
## [1] 186
Searching around on the interwebs indicates that there were 201 episodes of The Office; however, the data I have contains 186 episodes. Wikipedia counts some episodes, like “A Benihana Christmas”, as two, but I’m not sure why. The data from officequotes.net closely matches the episode breakdown on IMDb, with the exception of season 6: officequotes.net counts Niagara parts 1 & 2 as one episode and The Delivery parts 1 & 2 as one episode instead of two. Since I am working with the officequotes.net data, I’m going with the idea that there were 186 episodes total.
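For reference, a quick per-season count (a small sketch using mod_data from above, not part of the original analysis) makes it easy to compare against the IMDb breakdown:
# number of episodes per season according to officequotes.net
episodes_per_season <- mod_data %>%
  group_by(season) %>%
  summarise(num_episodes = n_distinct(episode))
episodes_per_season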
# proportion of episodes each character was in
episode_proportion <- mod_data %>%
unite(season_ep, season, episode, remove = FALSE) %>%
group_by(speaker) %>%
summarise(num_episodes = n_distinct(season_ep)) %>%
mutate(proportion = round((num_episodes / total_episodes) * 100, 1)) %>%
arrange(desc(num_episodes))
total_scenes <- mod_data %>%
unite(season_ep_scene, season, episode, scene, remove = FALSE) %>%
summarise(num_scenes = n_distinct(season_ep_scene)) %>%
as.integer()
# proportion of scenes each character was in
scene_proportion <- mod_data %>%
unite(season_ep_scene, season, episode, scene, remove = FALSE) %>%
group_by(speaker) %>%
summarise(num_scenes = n_distinct(season_ep_scene)) %>%
mutate(proportion = round((num_scenes / total_scenes) * 100, 1)) %>%
arrange(desc(num_scenes))
Dwight was the only character in every episode.
Despite making only one appearance in the last two seasons of the show, Michael was still in the most scenes.
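Both of these claims can be spot-checked from the summaries above, for example:
# characters appearing in 100% of episodes
episode_proportion %>%
  filter(proportion == 100)
# character appearing in the most scenes (scene_proportion is sorted descending)
scene_proportion %>%
  slice(1)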
Determining the Main Characters
For parts of my analysis, I wanted to look at the main characters, but beyond Michael, Dwight, Jim, and Pam, determining who the “main characters” are is a little challenging. There are lots of ancillary characters that lurk in the background or get their own plot lines later in the show. I defined the main characters based on % of lines for the entire series. I included a character as a main character if they had at least 1% of all the lines. Yes, this excludes characters like Nellie and Robert California who played larger roles late in the series, but I wasn’t a big fan of those seasons, so it’s ok.
line_proportion <- mod_data %>%
count(speaker) %>%
mutate(proportion = round((n / sum(n)) * 100, 1)) %>%
arrange(desc(n))
# define main characters based on line proportion
main_characters <- factor(line_proportion %>%
filter(proportion >= 1) %>%
pull(speaker) %>%
fct_inorder()
)
Now that we have the main characters defined, we can look at the percent of lines each character had over the nine seasons of the show.
line_proportion_by_season <- mod_data %>%
group_by(season) %>%
count(speaker) %>%
mutate(proportion = round((n / sum(n)) * 100, 1)) %>%
arrange(season, desc(proportion))
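The plot below references office_colors, a custom color palette that isn’t defined in this excerpt. Any vector with one color per main character will work; as a stand-in:
# placeholder palette: one color per main character
# (the original office_colors palette isn't shown here)
office_colors <- hue_pal()(length(main_characters))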
line_proportion_over_time <- line_proportion_by_season %>%
filter(speaker %in% main_characters) %>%
ggplot(aes(x = season, y = proportion, color = speaker, label = proportion)) +
geom_point(size = 2) +
geom_line() +
scale_x_continuous(breaks = seq(1, 9, 1)) +
theme_minimal() +
theme(legend.position = "none") +
labs(y = "% of lines",
title = "% of Lines by Season") +
theme(plot.title = element_text(hjust = 0.5)) +
facet_wrap(~ factor(str_to_title(speaker), levels = str_to_title(main_characters)), ncol = 3) +
geom_text(vjust = -1.2, size = 3.5) +
ylim(0, 50) +
scale_color_manual(values = office_colors)
line_proportion_over_time
Text Analytics
Word Frequencies
I’ll start by tokenizing the text into words, removing the standard stop words (very common words that only add noise to the analysis), and plotting the most frequent words.
tidy_tokens <- mod_data %>%
select(line = id, line_text_mod, everything(), -line_text, -actions, -deleted) %>%
unnest_tokens(word, line_text_mod, strip_numeric = TRUE) %>%
mutate_at(vars(word), funs(str_replace_all(., "'s$", "")))
tidy_tokens_no_stop <- tidy_tokens %>%
anti_join(stop_words, by = "word")
Looking at the most frequent words revealed words like “yeah”, “hey”, “uh”, “um”, “huh”, “hmm”, and “ah.” I’m going to add these to the stop words and remove them from the analysis.
custom_stop_words <- bind_rows(data_frame(word = c("yeah", "hey", "uh", "um", "huh", "hmm", "ah", "umm", "uhh", "gonna", "na", "ha", "gotta"),
lexicon = c("custom")),
stop_words)
tidy_tokens_no_stop <- tidy_tokens %>%
anti_join(custom_stop_words, by = "word")
After I removed those stop words, I was interested in looking at word frequencies by character.
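The plot code is omitted here, but the underlying counts are a straightforward extension of what we already have (a minimal sketch using tidy_tokens_no_stop):
# top 10 words per main character (plotting code omitted)
word_freq_by_character <- tidy_tokens_no_stop %>%
  filter(speaker %in% main_characters) %>%
  count(speaker, word, sort = TRUE) %>%
  group_by(speaker) %>%
  top_n(10, n) %>%
  ungroup()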
“Michael” is the most frequently used word for almost all of the characters. Given that he is the main character and interacts with everyone, that isn’t too surprising. A lot of characters use the words “time”, “god”, “guy(s)”, “love”, and “office” frequently. The word “party” is used frequently by Angela and Phyllis because they are on the party planning committee.
These word frequencies are interesting, but we see a lot of the same words used by different characters. If we want to understand the words that are unique to each character, we can use tf-idf. The tf-idf is defined as term frequency (tf) multiplied by inverse document frequency (idf). This gives us a measure of how unique a word is to a given character. Calculating tf-idf attempts to find the words that are important (i.e., common) for a given character, but not too common across all characters.
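Concretely, treating each character’s lines as a “document” (this is what bind_tf_idf() computes below, with idf on the natural-log scale):

$$\text{tf-idf}(w, c) = \underbrace{\frac{n_{w,c}}{\sum_{w'} n_{w',c}}}_{\text{term frequency}} \times \underbrace{\ln\left(\frac{\text{number of characters}}{\text{number of characters who use } w}\right)}_{\text{inverse document frequency}}$$

where $n_{w,c}$ is the number of times character $c$ uses word $w$.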
tidy_tokens_tf_idf <- tidy_tokens %>%
count(speaker, word, sort = TRUE) %>%
ungroup() %>%
filter(speaker %in% main_characters) %>%
bind_tf_idf(word, speaker, n)
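The tf-idf plot itself isn’t reproduced here, but the words driving it can be pulled out with something like:
# highest tf-idf words for each main character
tidy_tokens_tf_idf %>%
  group_by(speaker) %>%
  top_n(10, tf_idf) %>%
  ungroup() %>%
  arrange(speaker, desc(tf_idf))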
This is amazing and fun to see! There are so many good character nuances revealed. A lot of characters’ children show up here: “Cece” (Pam), “Astrid” (Jan), “Melissa” (Stanley), “Phillip” (Angela), etc. Several love interests appear as well. Lyrics from Angela’s favorite Christmas song, Little Drummer Boy, bubble to the top, along with her love of cats. Pam’s work as an artist shows up in the words “mural”, “paint”, and “defaced” (the mural was defaced). Kevin’s love of M&Ms comes through, and “ethics” and “ethical” reflect Holly’s work in HR. Overall, this gives us some good insight into each character’s quirks.
Now that we’ve discovered differences between characters, let’s look at similarities. How correlated are the word frequencies between each character of The Office?
frequency_by_character <- tidy_tokens_no_stop %>%
filter(speaker %in% main_characters) %>%
count(speaker, word, sort = TRUE) %>%
group_by(speaker) %>%
mutate(proportion = n / sum(n)) %>%
select(-n) %>%
spread(speaker, proportion)
cor_all <- corr.test(frequency_by_character[, -1], adjust = "none")
cor_plot <- ggcorrplot(cor_all[["r"]],
hc.order = TRUE,
type = "lower",
method = "circle",
colors = c("#E46726", "white", "#6D9EC1"),
lab = TRUE,
lab_size = 2.5)
cor_plot
I was a little surprised to find that the two characters whose words are most correlated are Dwight and Pam. Michael and Jim are a close second.
Jan and Darryl had the least similar vocabularies.
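Those pairs can also be read directly from the correlation matrix rather than off the plot (a quick sketch):
# most and least correlated character pairs
cor_mat <- cor_all[["r"]]
cor_mat[lower.tri(cor_mat, diag = TRUE)] <- NA  # keep one copy of each pair
cor_pairs <- as.data.frame(as.table(cor_mat)) %>%
  filter(!is.na(Freq)) %>%
  arrange(desc(Freq))
head(cor_pairs, 1)  # most similar vocabularies
tail(cor_pairs, 1)  # least similar vocabularies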
Given this info, I wanted to see which words Dwight and Pam shared.
pam_dwight_words <- frequency_by_character %>%
select(word, pam, dwight) %>%
ggplot(aes(x = pam, y = dwight, color = abs(pam - dwight), label = word)) +
geom_abline(color = "gray40", lty = 2) +
geom_jitter(alpha = 0.1, size = 2.5, width = 0.3, height = 0.3) +
scale_x_log10(labels = percent_format()) +
scale_y_log10(labels = percent_format()) +
labs(x = "Pam",
y = "Dwight",
title = "Word Frequency Comparison: Dwight and Pam") +
theme(legend.position = "none")
ggplotly(pam_dwight_words, tooltip = c("word"))