What are the most frequent words in Herman Melville's novel, Moby Dick, and how often do they occur?
In this notebook, we'll scrape the novel Moby Dick from the website Project Gutenberg (which contains a large corpus of books) using the Python package `requests`. Then we'll extract words from this web data using `BeautifulSoup`. Finally, we'll dive into analyzing the distribution of words using the Natural Language Toolkit (`nltk`) and `Counter`.
The data science pipeline we'll build in this notebook can be used to visualize the word frequency distribution of any novel you can find on Project Gutenberg. The natural language processing tools used here apply to much of the data that data scientists encounter, as a vast proportion of the world's data is unstructured and includes a great deal of text.
Let's start by loading in the three main Python packages we are going to use.
# Importing requests, BeautifulSoup, nltk, and Counter
import requests
from bs4 import BeautifulSoup
import nltk
from collections import Counter
%%nose
import sys

def test_example():
    assert ('requests' in sys.modules and
            'bs4' in sys.modules and
            'nltk' in sys.modules and
            'collections' in sys.modules), \
        'The modules requests, BeautifulSoup, nltk, and Counter should be imported.'
To analyze Moby Dick, we need to get the contents of Moby Dick from somewhere. Luckily, the text is freely available online at Project Gutenberg as an HTML file: https://www.gutenberg.org/files/2701/2701-h/2701-h.htm.
Note that HTML stands for Hypertext Markup Language and is the standard markup language for the web.
To fetch the HTML file with Moby Dick we're going to use the `requests` package to make a `GET` request for the website, which means we're getting data from it. This is the same thing you do through a browser when visiting a web page, except that now we're getting the requested page directly into Python instead.
# Getting the Moby Dick HTML
r = requests.get('https://s3.amazonaws.com/assets.datacamp.com/production/project_147/datasets/2701-h.htm')
# Setting the correct text encoding of the HTML page
r.encoding = 'utf-8'
# Extracting the HTML from the request object
html = r.text
# Printing the first 2000 characters in html
print(html[0:2000])
%%nose

def test_r_correct():
    assert r.request.path_url == '/assets.datacamp.com/production/project_147/datasets/2701-h.htm', \
        "r should be a get request for 'https://s3.amazonaws.com/assets.datacamp.com/production/project_147/datasets/2701-h.htm'"

def test_text_read_in_correctly():
    assert len(html) == 1500996, \
        'html should contain the text of the request r.'
This HTML is not quite what we want. However, it does contain what we want: the text of Moby Dick. What we need to do now is wrangle this HTML to extract the text of the novel. For this we'll use the package `BeautifulSoup`.

Firstly, a word on the name of the package: Beautiful Soup? In web development, the term "tag soup" refers to structurally or syntactically incorrect HTML written for a web page. What Beautiful Soup does best is to make tag soup beautiful again and to extract information from it with ease! In fact, the main object created and queried when using this package is called `BeautifulSoup`.
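To see what "making tag soup beautiful" means in practice, here is a tiny, self-contained sketch on a deliberately broken snippet of HTML; the snippet and the names `messy_html` and `messy_soup` are made up for illustration.

# A deliberately messy snippet: unclosed <p> and <i> tags, plus an HTML entity
from bs4 import BeautifulSoup

messy_html = "<p>Call me <i>Ishmael.<p>Some years ago&mdash;never mind how long"
messy_soup = BeautifulSoup(messy_html, "html.parser")
# BeautifulSoup parses the broken markup anyway, and get_text() strips the tags
print(messy_soup.get_text())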
# Creating a BeautifulSoup object from the HTML
soup = BeautifulSoup(html, "html.parser")
# Getting the text out of the soup
text = soup.get_text()
# Printing out text between characters 32000 and 34000
print(text[32000:34000])
%%nose
import bs4

def test_text_correct_type():
    assert isinstance(text, str), \
        'text should be a string.'

def test_soup_correct_type():
    assert isinstance(soup, bs4.BeautifulSoup), \
        'soup should be a BeautifulSoup object.'
We now have the text of the novel! There is some unwanted material at the start and some more at the end. We could remove it, but it is so small relative to the text of Moby Dick that, to a first approximation, it is okay to leave it in.
Now that we have the text of interest, it's time to count how many times each word appears, and for this we'll use `nltk`, the Natural Language Toolkit. We'll start by tokenizing the text, that is, removing everything that isn't a word (whitespace, punctuation, etc.) and then splitting the text into a list of words.
# Creating a tokenizer
tokenizer = nltk.tokenize.RegexpTokenizer(r'\w+')
# Tokenizing the text
tokens = tokenizer.tokenize(text)
# Printing out the first 8 words / tokens
tokens[0:8]
%%nose
import nltk

def test_correct_tokenizer():
    correct_tokenizer = nltk.tokenize.RegexpTokenizer(r'\w+')
    assert isinstance(tokenizer, nltk.tokenize.regexp.RegexpTokenizer), \
        'tokenizer should be created using the function nltk.tokenize.RegexpTokenizer.'

def test_correct_tokens():
    correct_tokenizer = nltk.tokenize.RegexpTokenizer(r'\w+')
    correct_tokens = correct_tokenizer.tokenize(text)
    assert isinstance(tokens, list) and len(tokens) > 150000, \
        'tokens should be a list with the words in text.'
OK! We're nearly there. Note that in the tokens above 'Or' has a capital 'O', while in other places it may not, but both 'Or' and 'or' should be counted as the same word. For this reason, we'll build a list of all the words in Moby Dick in which all capital letters have been made lower case.
# Create a list called words containing all tokens transformed to lower-case
words = []
for word in tokens:
words.append(word.lower())
# Printing out the first 8 words / tokens
words[:8]
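The loop above is equivalent to a one-line list comprehension, which is the more idiomatic Python form; either way the resulting list is identical.

# Equivalent to the loop above: lower-case every token in one expression
words = [token.lower() for token in tokens]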
%%nose
correct_words = [token.lower() for token in tokens]

def test_correct_words():
    assert correct_words == words, \
        'words should contain every element in tokens, but in lower-case.'
It is common practice to remove words that appear frequently in English, such as 'the', 'of' and 'a', because they're not so interesting. Such words are known as stop words. The package `nltk` includes a good list of English stop words that we can use.
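One practical note: the stop word list is part of an NLTK corpus that is downloaded separately from the `nltk` package itself. If it isn't already available in your environment, the cell below will typically fail with a LookupError, in which case you can download the corpus once first (a minimal sketch using NLTK's standard download call):

# One-time download of the stop words corpus (skip if it is already installed)
import nltk
nltk.download('stopwords')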
# Getting the English stop words from nltk
sw = nltk.corpus.stopwords.words('english')
# Printing out the first eight stop words
sw[:8]
%%nose

def test_correct_sw():
    correct_sw = nltk.corpus.stopwords.words('english')
    assert correct_sw == sw, \
        'sw should contain the stop words from nltk.'
We now want to create a new list with all the words in Moby Dick, except those that are stop words (that is, those words listed in `sw`).
# Create a list words_ns containing all words that are in words but not in sw
words_ns = [word for word in words if word not in sw]
# Printing the first 5 words_ns to check that stop words are gone
words_ns[:5]
%%nose

def test_correct_words_ns():
    correct_words_ns = []
    for word in words:
        if word not in sw:
            correct_words_ns.append(word)
    assert correct_words_ns == words_ns, \
        'words_ns should contain all words of Moby Dick but with the stop words removed.'
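A brief aside on performance: `word not in sw` scans the whole stop word list for every token, which is fine at this scale but linear in the list length. A common trick is to convert the stop words to a set first, since set membership checks are constant time on average. The names `sw_set` and `words_ns_fast` below are only illustrative; the filter produces exactly the same list as `words_ns`.

# Optional speed-up: set membership is O(1) on average, list membership is O(n)
sw_set = set(sw)
words_ns_fast = [word for word in words if word not in sw_set]
print(words_ns_fast == words_ns)   # True -- same result, computed faster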
Our original question was:
What are the most frequent words in Herman Melville's novel Moby Dick and how often do they occur?
We are now ready to answer that! Let's answer this question using the `Counter` class we imported earlier.
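As a quick reminder of how `Counter` works, here is a toy example (the list and the name `toy_counts` are made up for illustration): it maps each element to its count, and `most_common(n)` returns the `n` most frequent elements with their counts.

# Toy example: count elements of a small list and ask for the single most common one
from collections import Counter

toy_counts = Counter(['whale', 'sea', 'whale', 'ship', 'whale'])
print(toy_counts['whale'])         # 3
print(toy_counts.most_common(1))   # [('whale', 3)]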
# Initialize a Counter object from our processed list of words
count = Counter(words_ns)
# Store 10 most common words and their counts as top_ten
top_ten = count.most_common(10)
# Print the top ten words and their counts
print(top_ten)
%%nose

def test_correct_count():
    correct_counter = Counter(words_ns)
    assert count == correct_counter, \
        'Did you correctly initialize a `Counter` object with `words_ns`?'

def test_top_ten():
    top_ten_correct = count.most_common(10)
    assert top_ten == top_ten_correct, \
        'Did you correctly store the top ten words and their counts in the variable `top_ten`?'
Nice! Using our variable `top_ten`, we now have an answer to our original question.
The natural language processing skills we used in this notebook are also applicable to much of the data that data scientists encounter, as a vast proportion of the world's data is unstructured and includes a great deal of text.
So, what word turned out to (not surprisingly) be the most common word in Moby Dick?
# What's the most common word in Moby Dick?
most_common_word = 'whale'
%%nose

def test_most_common_word():
    assert most_common_word.lower() == 'whale', \
        "That's not the most common word in Moby Dick."