Pre-trained Word Embeddings Using GloVe in NLP Models
Last Updated: 02 Jul, 2025
In modern Natural Language Processing (NLP), understanding and processing human language in a machine-readable format is essential. Since machines interpret numbers, it's important to convert textual data into numerical form. One of the most effective and widely used approaches to achieve this is through word embeddings.
What is GloVe?
GloVe (Global Vectors for Word Representation) is an unsupervised learning algorithm designed to generate dense vector representations of words, also known as embeddings. Its primary objective is to capture semantic relationships between words by analyzing their co-occurrence patterns in a large text corpus.
The GloVe approach is unique in that it combines the strengths of two major families of methods: global matrix factorization techniques (such as LSA), which exploit corpus-wide co-occurrence statistics, and local context window methods (such as word2vec's skip-gram), which learn from nearby words.
At the core of GloVe lies the idea of mapping each word into a continuous vector space where both the magnitude and direction of the vectors reflect meaningful semantic relationships.
For instance, vector arithmetic can capture relationships such as: king - man + woman ≈ queen.
To achieve this, GloVe constructs a word co-occurrence matrix in which each element reflects how often a pair of words appears together within a given context window. It then optimizes the word vectors so that the dot product of any two word vectors (plus per-word bias terms) approximates the logarithm of the corresponding co-occurrence count. This optimization allows GloVe to produce embeddings that encode both syntactic and semantic relationships across the vocabulary.
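To make the first step concrete, here is a minimal sketch of building a weighted co-occurrence matrix from a toy corpus. The corpus, window size and 1/distance weighting are illustrative choices, not the exact settings used to train the released GloVe vectors:

```python
from collections import defaultdict

# Toy tokenized corpus and a symmetric context window of size 2
corpus = ["the", "cat", "sat", "on", "the", "mat"]
window = 2

cooc = defaultdict(float)
for i, word in enumerate(corpus):
    # Look at neighbours within the window on both sides
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if i != j:
            # GloVe weights nearer neighbours more strongly (1/distance)
            cooc[(word, corpus[j])] += 1.0 / abs(i - j)

print(cooc[("the", "cat")])
```

The resulting counts are symmetric: the entry for ("the", "cat") equals the entry for ("cat", "the").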
GloVe Data
The pre-trained GloVe vectors distributed by the algorithm's developers were trained on very large corpora; the popular glove.6B release, for example, was trained on about 6 billion tokens of text and covers a vocabulary of 400,000 words, including general-use characters like commas, braces and semicolons. It is not necessary to train the model from scratch: these pre-trained embeddings can be downloaded and used immediately in a variety of natural language processing (NLP) applications. Users can select a pre-trained GloVe embedding in a dimension like 50-d, 100-d, 200-d or 300-d vectors that best fits their needs in terms of computational resources and task specificity.
Here d stands for dimension: in the 100d file, each word has an equivalent vector of size 100. GloVe files are plain text files that act like a dictionary, with each line holding a word (the key) followed by the components of its dense vector (the value).
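A single line of such a file can be parsed with a simple split. The sample line below is made up for illustration; a real 50-d line has 50 components after the word:

```python
import numpy as np

# One line of a GloVe-style file: the word, then its vector components,
# all separated by single spaces (values here are made up)
line = "the 0.418 0.24968 -0.41242 0.1217"

word, *values = line.split()
vector = np.asarray(values, dtype=np.float32)

print(word, vector.shape)
```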
How GloVe Works
The creation of a word co-occurrence matrix is the fundamental component of GloVe. This matrix provides a quantitative measure of the semantic affinity between words by capturing the frequency with which they appear together in a given context. GloVe then optimises the word vectors by minimising the difference between their dot products and the logarithms of the corresponding co-occurrence counts, producing dense vector representations that capture syntactic and semantic relationships.
Creating Vocabulary Dictionary
Vocabulary is the collection of all unique words present in the training dataset. First, the dataset is tokenized into words and the frequency of each word is counted. The words are then sorted in decreasing order of frequency, so high-frequency words are placed at the beginning of the dictionary.
Dataset = {The peon is ringing the bell}
Vocabulary = {'the': 2, 'peon': 1, 'is': 1, 'ringing': 1, 'bell': 1}
(After lowercasing, 'The' and 'the' count as the same word.)
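The dictionary above can be built in a few lines of Python, lowercasing first so that 'The' and 'the' merge into one entry:

```python
from collections import Counter

# Tokenize the sample sentence (lowercasing so "The" and "the" merge)
dataset = "The peon is ringing the bell"
tokens = dataset.lower().split()

# Count word frequencies and sort in decreasing order of frequency
counts = Counter(tokens)
vocabulary = dict(counts.most_common())
print(vocabulary)
```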
Algorithm for word embedding
- Preprocess the text data.
- Create the dictionary.
- Traverse the GloVe file of the chosen dimension and compare each word with the words in the dictionary.
- If a match occurs, copy the equivalent vector from the GloVe file into embedding_matrix at the corresponding index.
Code Implementation
Here we will see the step-by-step implementation.
1. Importing Libraries
We will import NumPy along with the tokenizer utilities from TensorFlow.
Python
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
import numpy as np
2. Creating Vocabulary
Here we will create a list of words to build a vocabulary.
Python
# Define a sample corpus
texts = ['text', 'the', 'leader', 'prime', 'natural', 'language']
3. Initialize and Fit Tokenizer
- Initializes a tokenizer and fits it on the text corpus.
- Prints the number of unique words and their assigned indices.
Python
# Create and fit the tokenizer
tokenizer = Tokenizer()
tokenizer.fit_on_texts(texts)
# Output the word-index dictionary
print("Number of unique words in dictionary =", len(tokenizer.word_index))
print("Dictionary is =", tokenizer.word_index)
4. Define Function to Create Embedding Matrix
- Defines a function that loads GloVe word vectors from a file.
- Creates an embedding matrix matching tokenizer word indices with GloVe vectors.
Python
def embedding_for_vocab(filepath, word_index, embedding_dim):
    vocab_size = len(word_index) + 1  # +1 for padding token (index 0)
    embedding_matrix_vocab = np.zeros((vocab_size, embedding_dim))

    with open(filepath, encoding="utf8") as f:
        for line in f:
            word, *vector = line.split()
            if word in word_index:
                idx = word_index[word]
                embedding_matrix_vocab[idx] = np.array(vector, dtype=np.float32)[:embedding_dim]

    return embedding_matrix_vocab
5. Download GloVe File
Here we will download and unzip the GloVe file.
Python
# Download the GloVe dataset
!wget http://nlp.stanford.edu/data/glove.6B.zip
# Unzip the file
!unzip -q glove.6B.zip
6. Load GloVe Embeddings and Create Matrix
- Sets embedding dimension and file path.
- Generates the embedding matrix using the vocabulary.
Python
# Set embedding dimension (match this with glove file)
embedding_dim = 50
# Path to GloVe file
glove_path = './glove.6B.50d.txt'
# Generate embedding matrix
embedding_matrix_vocab = embedding_for_vocab(glove_path, tokenizer.word_index, embedding_dim)
7. Access Embedding Vector for a Word
Prints the GloVe vector for the word with index 1 in the tokenizer.
Python
# Print the dense vector for the first word in the tokenizer index
first_word_index = 1 # Tokenizer indexes start from 1
print("Dense vector for word with index 1 =>", embedding_matrix_vocab[first_word_index])
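To look up the vector for a specific word rather than an index, the word_index mapping can be used. The sketch below uses a small hypothetical word_index and matrix as stand-ins for the tokenizer.word_index and embedding_matrix_vocab built above:

```python
import numpy as np

# Hypothetical stand-ins for tokenizer.word_index and embedding_matrix_vocab
word_index = {"language": 1, "natural": 2}
embedding_dim = 4
embedding_matrix_vocab = np.zeros((len(word_index) + 1, embedding_dim))
embedding_matrix_vocab[1] = [0.1, 0.2, 0.3, 0.4]

def vector_for(word):
    # Row 0 is the padding row; unknown words fall back to it (all zeros)
    idx = word_index.get(word, 0)
    return embedding_matrix_vocab[idx]

print(vector_for("language"))
print(vector_for("unseen"))  # all zeros: no GloVe vector was copied in
```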
GloVe Embeddings Applications
GloVe embeddings are a popular option for representing words in text data and have found applications in various natural language processing (NLP) tasks. The following are some typical uses for GloVe embeddings:
- Text Classification: GloVe embeddings can be utilised as features in machine learning models for sentiment analysis, topic classification, spam detection and other applications.
- Named Entity Recognition (NER): By capturing the semantic relationships between words and enhancing the model's capacity to identify entities in text, GloVe embeddings can improve the performance of NER systems.
- Machine Translation: GloVe embeddings can be used to represent words in the source and target languages in machine translation systems which aim to translate text from one language to another, thereby enhancing the quality of the translation.
- Question Answering Systems: To help models understand the context between words and produce more accurate answers, GloVe embeddings are used in question-answering tasks.
- Document Similarity and Clustering: GloVe embeddings enable applications in information retrieval and document organization by measuring the semantic similarity between documents or grouping documents according to their content.
- Word Analogy Tasks: In word analogy tasks, GloVe embeddings frequently yield good results. For instance, the generated vector for "king-man + woman" might resemble the "queen" vector, demonstrating the capacity to recognize semantic relationships.
- Semantic Search: In semantic search applications where retrieving documents or passages according to their semantic relevance to a user's query is the aim, GloVe embeddings are helpful.
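The word-analogy behaviour mentioned above can be demonstrated with cosine similarity. The tiny hand-picked 3-d vectors below are illustrative stand-ins for real GloVe vectors, chosen so the analogy works:

```python
import numpy as np

# Toy 3-d vectors chosen by hand for illustration; real GloVe vectors
# are 50- to 300-dimensional and learned from a large corpus
vectors = {
    "king":  np.array([0.8, 0.9, 0.1]),
    "man":   np.array([0.7, 0.1, 0.1]),
    "woman": np.array([0.7, 0.1, 0.9]),
    "queen": np.array([0.8, 0.9, 0.9]),
    "apple": np.array([0.1, 0.5, 0.2]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# king - man + woman should land closest to queen
target = vectors["king"] - vectors["man"] + vectors["woman"]
best = max((w for w in vectors if w not in {"king", "man", "woman"}),
           key=lambda w: cosine(target, vectors[w]))
print(best)
```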