Dear All,
Please find below the steps for Twitter Sentiment Analysis using Python.
1. Import necessary packages
from nltk import NaiveBayesClassifier as nbc
from nltk.tokenize import word_tokenize
from itertools import chain
import csv
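If the punkt tokenizer data is not already installed, word_tokenize will raise a LookupError; it can be downloaded once with:
import nltk
nltk.download('punkt')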
2. Read the input file using the csv reader and build a list of (tweet, label) pairs as the training data
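A minimal sketch of this step, assuming the training tweets sit in a file named training.csv with the tweet text in the first column and the sentiment label in the second (both the filename and the column layout are assumptions):
with open('training.csv', 'r') as csvinput:
    reader = csv.reader(csvinput)
    # build a list of (tweet_text, sentiment_label) pairs
    training_data = [(row[0], row[1]) for row in reader]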
3. Generate a vocabulary
vocabulary = set(chain(*[word_tokenize(i[0].lower()) for i in training_data]))
4. Generate the training feature set
feature_set = [({i:(i in word_tokenize(sentence.lower())) for i in vocabulary},tag) for sentence, tag in training_data]
5. Train the classifier
classifier = nbc.train(feature_set)
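Optionally, the trained classifier can be inspected to see which words contribute most to each label:
classifier.show_most_informative_features(10)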
6. Create a csv writer for the output file
writer = csv.writer(csvoutput, lineterminator='\n')
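Note that csvoutput above is the handle of the results file, which has to be opened before the writer is created; for example, assuming Python 3 and a placeholder filename:
csvoutput = open('output.csv', 'w', newline='')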
7. Generate test input features
featurized_test_sentence = {i:(i in word_tokenize(test_sentence.lower())) for i in vocabulary}
8. Classify and create output data
row.append(classifier.classify(featurized_test_sentence))
all.append(row)
9. Flush output data to an output csv file
writer.writerows(all)
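Putting steps 7 to 9 together, a minimal sketch of the classification loop (test.csv, its column layout, and the name output_rows are assumptions; the fragments above call the same list all):
with open('test.csv', 'r') as csvinput:
    output_rows = []
    for row in csv.reader(csvinput):
        test_sentence = row[0]
        # same word-presence features as used for training
        featurized_test_sentence = {i: (i in word_tokenize(test_sentence.lower())) for i in vocabulary}
        # append the predicted sentiment label to the original row
        row.append(classifier.classify(featurized_test_sentence))
        output_rows.append(row)
writer.writerows(output_rows)
csvoutput.close()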