1 year ago

#380792

Harika

NLTK word_tokenize returning empty

I am trying to tokenize the words and sentences in a text document, but both calls are returning empty. Could you please check and explain why I am seeing this?

Please find the code below (I am not attaching the text document, as it is large at 443 KB):

import nltk
from nltk.tokenize import word_tokenize

nltk.download('punkt')
nltk.download('wordnet')
nltk.download('averaged_perceptron_tagger')

f = open('txt_link.txt', 'r', errors='ignore')
raw_doc = f.read()
raw_doc = raw_doc.lower()  # convert the text to lowercase

sent_tokens = nltk.sent_tokenize(raw_doc)  # split into sentences
word_tokens = nltk.word_tokenize(raw_doc)  # split into words

word_tokens[:2]
sent_tokens[:2]

Output for both: []
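For comparison, here is a minimal sketch of what the two tokenizers normally return on a short inline string (the sample text is made up for illustration, and punkt is assumed to be downloaded):

import nltk
nltk.download('punkt')

sample = "Hello there. How are you today?"  # placeholder text, not the original document
print(nltk.sent_tokenize(sample))  # ['Hello there.', 'How are you today?']
print(nltk.word_tokenize(sample))  # ['Hello', 'there', '.', 'How', 'are', 'you', 'today', '?']

If a snippet like this prints non-empty lists, the tokenizers themselves are working, and it would be worth checking that raw_doc is actually non-empty (for example with print(len(raw_doc))) before tokenizing.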

Thank you

python

nltk

tokenize

0 Answers
