Python NLTK | tokenize.WordPunctTokenizer()

Last Updated : 30 Sep, 2019

With the help of the nltk.tokenize.WordPunctTokenizer() class, we can split a string of words or sentences into tokens, separating runs of alphabetic characters from runs of non-alphabetic characters, using its tokenize() method.

Syntax : WordPunctTokenizer().tokenize(text)
Return : Return the tokens from a string, split into alphabetic and non-alphabetic sequences.

Example #1 : In this example we can see that, by using the tokenize() method of WordPunctTokenizer, we are able to extract tokens from a stream of alphabetic and non-alphabetic characters.

Python3

# import WordPunctTokenizer class from nltk
from nltk.tokenize import WordPunctTokenizer

# Create a reference variable for class WordPunctTokenizer
tk = WordPunctTokenizer()

# Create a string input
gfg = "GeeksforGeeks...$$&* \nis\t for geeks"

# Use the tokenize method
geek = tk.tokenize(gfg)

print(geek)

Output :
['GeeksforGeeks', '...$$&*', 'is', 'for', 'geeks']

Example #2 :

Python3

# import WordPunctTokenizer class from nltk
from nltk.tokenize import WordPunctTokenizer

# Create a reference variable for class WordPunctTokenizer
tk = WordPunctTokenizer()

# Create a string input
gfg = "The price\t of burger \nin BurgerKing is Rs.36.\n"

# Use the tokenize method
geek = tk.tokenize(gfg)

print(geek)

Output :
['The', 'price', 'of', 'burger', 'in', 'BurgerKing', 'is', 'Rs', '.', '36', '.']

Author : Jitender_1998
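To clarify how the splitting works: in NLTK, WordPunctTokenizer is implemented as a RegexpTokenizer with the pattern \w+|[^\w\s]+, i.e. it matches either a run of word characters or a run of non-word, non-space characters. A minimal stand-in using only Python's standard re module (the function name wordpunct_tokenize here is illustrative, not part of this article's code):

```python
import re

# WordPunctTokenizer's documented pattern: runs of word characters,
# or runs of characters that are neither word characters nor whitespace.
WORDPUNCT = re.compile(r"\w+|[^\w\s]+")

def wordpunct_tokenize(text):
    """Minimal sketch of WordPunctTokenizer().tokenize() using re.findall."""
    return WORDPUNCT.findall(text)

print(wordpunct_tokenize("GeeksforGeeks...$$&* \nis\t for geeks"))
# ['GeeksforGeeks', '...$$&*', 'is', 'for', 'geeks']
print(wordpunct_tokenize("The price\t of burger \nin BurgerKing is Rs.36.\n"))
# ['The', 'price', 'of', 'burger', 'in', 'BurgerKing', 'is', 'Rs', '.', '36', '.']
```

This explains the outputs above: the punctuation cluster "...$$&*" stays together as one token (contiguous non-word characters), while "Rs.36." splits into 'Rs', '.', '36', '.' because the digits interrupt the punctuation run.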