Full Stack UNIT3
What is NoSQL?
A NoSQL database is a non-relational data management system that does not require a fixed schema. It avoids joins and is easy to scale. The major purpose of using a NoSQL database is for distributed data stores with huge data storage needs. NoSQL is used for big data and real-time web apps. For example, companies like Twitter, Facebook and Google collect terabytes of user data every single day.
NoSQL stands for "Not Only SQL" or "Not SQL." Though a better term would be "NoREL", NoSQL caught on. Carlo Strozzi introduced the NoSQL concept in 1998.
Traditional RDBMS uses SQL syntax to store and retrieve data for further
insights. Instead, a NoSQL database system encompasses a wide range of
database technologies that can store structured, semi-structured, unstructured and
polymorphic data.
Why NoSQL?
The concept of NoSQL databases became popular with Internet giants like
Google, Facebook, Amazon, etc. who deal with huge volumes of data. The system
response time becomes slow when you use RDBMS for massive volumes of data.
To resolve this problem, we could “scale up” our systems by upgrading our
existing hardware. This process is expensive.
The alternative for this issue is to distribute database load on multiple hosts
whenever the load increases. This method is known as “scaling out.”
A NoSQL database is non-relational, so it scales out better than a relational database, as it is designed with web applications in mind.
Features of NoSQL
Non-relational
Schema-free
- NoSQL is schema-free.
Simple API
- Offers easy-to-use interfaces for storage and querying of the data provided.
- APIs allow low-level data manipulation and selection methods.
- Text-based protocols are mostly used, with HTTP REST and JSON.
- Mostly, no standards-based NoSQL query language is used.
- Web-enabled databases run as internet-facing services.
Distributed
- Mostly no synchronous replication between distributed nodes; asynchronous multi-master replication, peer-to-peer and HDFS-style replication are used instead.
- Only eventual consistency is provided.
- Shared-nothing architecture. This enables less coordination and higher distribution.
NoSQL databases are mainly categorized into four types: key-value pair, column-oriented, graph-based and document-oriented. Every category has its unique attributes and limitations. None of these databases is best at solving all problems; users should select the database based on their product needs.
Key-Value Pair Based
Data is stored in key/value pairs. It is designed in such a way as to handle lots of data and a heavy load.
Key-value pair storage databases store data as a hash table where each key is unique, and the value can be JSON, a BLOB (Binary Large Object), a string, etc.
For example, a key-value pair may contain a key like "Website" associated with a value like "Guru99".
It is one of the most basic kinds of NoSQL database. This kind of NoSQL database is used for collections, dictionaries, associative arrays, etc. Key-value stores help the developer store schema-less data. They work best for shopping cart contents.
Redis, Dynamo and Riak are some examples of key-value store databases; Dynamo and Riak are based on Amazon's Dynamo paper.
Column-based
Column-oriented databases work on columns rather than rows. HBase, Cassandra and Hypertable are examples of column-based databases.
Document-Oriented:
Document-Oriented NoSQL DB stores and retrieves data as a key value pair but the
value part is stored as a document. The document is stored in JSON or XML formats.
The value is understood by the DB and can be queried.
In this diagram, on the left you can see rows and columns, and on the right a document database with a structure similar to JSON. For a relational database, you have to know in advance what columns you have, and so on. For a document database, however, you store data like a JSON object. You do not need to define the schema up front, which makes it flexible.
Graph-Based
A graph type database stores entities as well the relations amongst those entities. The
entity is stored as a node with the relationship as edges. An edge gives a relationship
between nodes. Every node and edge has a unique identifier.
Graph-based databases are mostly used for social networks, logistics and spatial data.
Neo4j, Infinite Graph, OrientDB and FlockDB are some popular graph-based databases.
CAP Theorem
The CAP theorem is also called Brewer's theorem. It states that it is impossible for a distributed data store to offer more than two out of the following three guarantees:
1. Consistency
2. Availability
3. Partition Tolerance
Consistency:
The data should remain consistent even after the execution of an operation. This
means once data is written, any future read request should contain that data. For
example, after updating the order status, all the clients should be able to see the same
data.
Availability:
The database should always be available and responsive. It should not have any
downtime.
Partition Tolerance:
Partition Tolerance means that the system should continue to function even if the
communication among the servers is not stable. For example, the servers can be
partitioned into multiple groups which may not communicate with each other. Here, if
part of the database is unavailable, other parts are always unaffected.
Eventual Consistency
The term "eventual consistency" means keeping copies of data on multiple machines to get high availability and scalability. Thus, changes made to any data item on one machine have to be propagated to the other replicas.
NoSQL systems typically follow the BASE model instead:
Basically Available means the DB is available all the time, as per the CAP theorem
Soft state means that even without an input, the system state may change
Eventual consistency means that the system will become consistent over time
Advantages of NoSQL
Disadvantages of NoSQL
- No standardization rules
- Limited query capabilities
- RDBMS databases and tools are comparatively mature
- It does not offer traditional database capabilities, like consistency when multiple transactions are performed simultaneously
- When the volume of data increases, it becomes difficult to maintain unique keys
- Doesn't work as well with relational data
- The learning curve is steep for new developers
- Being open-source options, they are not yet as popular with enterprises
MongoDB system overview
What is MongoDB?
MongoDB is a document-oriented NoSQL database used for high-volume data storage; instead of tables and rows, it stores data in collections of flexible, JSON-like documents.
MongoDB Features
MongoDB Example
1. The _id field is added by MongoDB to uniquely identify the document in the
collection.
2. What you can note is that the Order Data (OrderID, Product, and Quantity )
which in RDBMS will normally be stored in a separate table, while in
MongoDB it is actually stored as an embedded document in the collection
itself. This is one of the key differences in how data is modeled in MongoDB.
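As a rough illustration (the field names here are hypothetical, not taken from the original example), a customer document with embedded order data might look like this:

{
  _id: ObjectId("563479cc8a8a4246bd27d784"),
  CustomerName: "Guru99",
  Orders: [
    { OrderID: 111, Product: "Laptop", Quantity: 2 },
    { OrderID: 112, Product: "Mouse", Quantity: 1 }
  ]
}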
Below are a few of the common terms used in MongoDB
1. _id – This is a field required in every MongoDB document. The _id field
represents a unique value in the MongoDB document. The _id field is like the
document’s primary key. If you create a new document without an _id field,
MongoDB will automatically create the field. So, for example, in the customer table above, MongoDB will add a 24-character hexadecimal ObjectId as the unique identifier of each document in the collection.
Just a quick note on the key difference between the _id field and a normal collection
field. The _id field is used to uniquely identify the documents in a collection and is
automatically added by MongoDB when the collection is created.
Below are a few of the reasons why one should start using MongoDB:
1. Document-oriented – Since MongoDB is a NoSQL type database, instead of
having data in a relational type format, it stores the data in documents. This
makes MongoDB very flexible and adaptable to real-world business situations and requirements.
2. Ad hoc queries – MongoDB supports searching by field, range queries, and
regular expression searches. Queries can be made to return specific fields
within documents.
3. Indexing – Indexes can be created to improve the performance of searches
within MongoDB. Any field in a MongoDB document can be indexed.
4. Replication – MongoDB can provide high availability with replica sets. A
replica set consists of two or more mongo DB instances. Each replica set
member may act in the role of the primary or secondary replica at any time.
The primary replica is the main server which interacts with the client and
performs all the read/write operations. The Secondary replicas maintain a copy
of the data of the primary using built-in replication. When a primary replica
fails, the replica set automatically switches over to the secondary and then it
becomes the primary server.
5. Load balancing – MongoDB uses the concept of sharding to scale
horizontally by splitting data across multiple MongoDB instances. MongoDB
can run over multiple servers, balancing the load and/or duplicating data to
keep the system up and running in case of hardware failure.
As we have seen from the Introduction section, the data in MongoDB has a flexible
schema. Unlike in SQL databases, where you must have a table’s schema declared
before inserting data, MongoDB’s collections do not enforce document structure. This
sort of flexibility is what makes MongoDB so powerful.
1. What are the needs of the application – Look at the business needs of the
application and see what data and the type of data needed for the application.
Based on this, ensure that the structure of the document is decided
accordingly.
2. What are data retrieval patterns – If you foresee a heavy query usage then
consider the use of indexes in your data model to improve the efficiency of
queries.
3. Are frequent inserts, updates and removals happening in the database?
Reconsider the use of indexes or incorporate sharding if required in your data
modeling design to improve the efficiency of your overall MongoDB
environment.
Below are some of the key term differences between MongoDB and RDBMS
Table (RDBMS) vs. Collection (MongoDB): In RDBMS, the table contains the columns and rows used to store the data; in MongoDB, this same structure is known as a collection. The collection contains documents, which in turn contain fields, which in turn are key-value pairs.
Row (RDBMS) vs. Document (MongoDB): In RDBMS, the row represents a single, implicitly structured data item in a table. In MongoDB, the data is stored in documents.
Column (RDBMS) vs. Field (MongoDB): In RDBMS, the column denotes a set of data values. These in MongoDB are known as fields.
Joins (RDBMS) vs. Embedded documents (MongoDB): In RDBMS, data is sometimes spread across various tables, and in order to show a complete view of all data, a join is sometimes formed across tables to get the data. In MongoDB, the data is normally stored in a single collection, but separated by using embedded documents. So there is no concept of joins in MongoDB.
Apart from the terms differences, a few other differences are shown below
1. Relational databases are known for enforcing data integrity. This is not an
explicit requirement in MongoDB.
2. RDBMS requires that data be normalized first so that it can prevent orphan records and duplicates. Normalizing data then requires more tables, which results in more table joins, thus requiring more keys and indexes. As databases start to grow, performance can become an issue. Again, this is not an explicit requirement in MongoDB. MongoDB is flexible and does not need the data to be normalized first.
The MongoDB shell is a great tool for navigating, inspecting, and even manipulating
document data. If you’re running MongoDB on your local machine, firing up the shell
is as simple as typing mongo and hitting enter, which will connect to MongoDB at
localhost on the standard port (27017). If you’re connecting to a MongoDB Atlas
cluster or other remote instance, then add the connection string after the command
mongo .
{
  _id: ObjectId("5effaa5662679b5af2c58829"),
  email: "[email protected]",
  name: { given: "Jesse", family: "Xiao" },
  age: 31,
  addresses: [
    { label: "home",
      street: "101 Elm Street",
      city: "Springfield",
      state: "CA",
      zip: "90000",
      country: "US" },
    { label: "mom",
      street: "555 Main Street",
      city: "Jonestown",
      province: "Ontario",
      country: "CA" }
  ]
}
List Databases
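Listing the available databases in the shell is typically done with the standard show dbs helper:

> show dbs

To peek at a single document in a collection, findOne() returns the first document that matches (an empty filter matches everything):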
> db.users.findOne()
{
  "_id": ObjectId("5ce45d7606444f199acfba1e"),
  "name": {given: "Alex", family: "Smith"},
  "email": "[email protected]",
  "age": 27
}
>
Find a Document by ID
The MongoDB Query Language (MQL) uses the same syntax as documents, making
it intuitive and easy to use for even advanced querying. Let’s look at a few MongoDB
query examples.
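For instance, fetching the user shown above by its primary key could look like this (a sketch reusing the sample ObjectId from the earlier output):

> db.users.find({_id: ObjectId("5ce45d7606444f199acfba1e")})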
> db.users.find().limit(10)
…>
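Queries can also match nested fields. A hypothetical example, assuming documents shaped like the user shown earlier:

> db.users.find({"name.family": "Smith"})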
Note that we enclose “name.family” in quotes, because it has a dot in the middle.
Query Documents by Numeric Ranges
// All posts having "likes" field with numeric value greater than one:
> db.post.find({likes: {$gt: 1}})
// All posts having 0 likes
> db.post.find({likes: 0})
// All posts that do NOT have exactly 1 like
> db.post.find({likes: {$ne: 1}})
Managing Indexes
MongoDB allows you to create indexes, even on nested fields in subdocuments, to
keep queries performing well even as collections grow very large.
Create an Index
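The index on name.given listed below would typically be created with createIndex; a minimal sketch:

> db.user.createIndex({"name.given": 1})

The existing indexes on a collection can then be listed with getIndexes():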
> db.user.getIndexes()
[
{
"v" : 2,
"key" : {
"_id" : 1
},
"name" : "_id_",
"ns" : "my_database.user"
},
{
"v" : 2,
"key" : {
"name.given" : 1
},
"name" : "name.given_1",
"ns" : "my_database.user"
}]
Note that by default, collections always have an index on the _id field, for easy
document retrieval by primary key, so any additional indexes will be listed after that.
Drop an Index
> db.user.dropIndex("name.given_1")
Inventory collection
[
  { "item": "journal", "qty": 25, "size": { "h": 14, "w": 21, "uom": "cm" }, "status": "A" },
  { "item": "notebook", "qty": 50, "size": { "h": 8.5, "w": 11, "uom": "in" }, "status": "A" },
  { "item": "paper", "qty": 100, "size": { "h": 8.5, "w": 11, "uom": "in" }, "status": "D" },
  { "item": "planner", "qty": 75, "size": { "h": 22.85, "w": 30, "uom": "cm" }, "status": "D" },
  { "item": "postcard", "qty": 45, "size": { "h": 10, "w": 15.25, "uom": "cm" }, "status": "A" }
]
Select All Documents in a Collection
db.inventory.find()
Specify OR Conditions
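A sketch of an OR query against the inventory collection above, matching documents whose status is "A" or whose qty is below 30 (the specific values are only illustrative):

db.inventory.find({ $or: [ { status: "A" }, { qty: { $lt: 30 } } ] })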
body-parser
Parse incoming request bodies in a middleware before your handlers, available under the req.body property.
Note: As req.body's shape is based on user-controlled input, all properties and values in this object are untrusted and should be validated before trusting. For example, req.body.foo.toString() may fail in multiple ways; the foo property may not be there or may not be a string, and toString may not be a function and instead a string or other user input.
Installation
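body-parser is distributed as an npm package, so installation is typically:

$ npm install body-parser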
API
The bodyParser object exposes various factories to create middlewares. All middlewares will populate the req.body property with the parsed body when the Content-Type request header matches the type option, or an empty object ({}) if there was no body to parse, the Content-Type was not matched, or an error occurred.
bodyParser.json([options])
Returns middleware that only parses json and only looks at requests where the
Content-Type header matches the type option. This parser accepts any Unicode
encoding of the body and supports automatic inflation of gzip and deflate encodings.
A new body object containing the parsed data is populated on the request object after
the middleware (i.e. req.body).
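A minimal sketch of wiring the JSON parser into an Express app (the limit value here is just an illustration):

const express = require('express')
const bodyParser = require('body-parser')

const app = express()

// parse application/json bodies up to 200kb
app.use(bodyParser.json({ limit: '200kb' }))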
Options
The json function takes an optional options object that may contain any of the
following keys:
inflate
When set to true, then deflated (compressed) bodies will be inflated; when false,
deflated bodies are rejected. Defaults to true.
limit
Controls the maximum request body size. If this is a number, then the value specifies
the number of bytes; if it is a string, the value is passed to the bytes library for parsing.
Defaults to '100kb'.
reviver
The reviver option is passed directly to JSON.parse as the second argument. You can
find more information on this argument in the MDN documentation about
JSON.parse.
strict
When set to true, will only accept arrays and objects; when false will accept anything
JSON.parse accepts. Defaults to true.
type
The type option is used to determine what media type the middleware will parse. This
option can be a string, array of strings, or a function. If not a function, type option is
passed directly to the type-is library and this can be an extension name (like json), a
mime type (like application/json), or a mime type with a wildcard (like */* or */json).
If a function, the type option is called as fn(req) and the request is parsed if it returns a
truthy value. Defaults to application/json.
verify
The verify option, if supplied, is called as verify(req, res, buf, encoding), where buf is
a Buffer of the raw request body and encoding is the encoding of the request. The
parsing can be aborted by throwing an error.
bodyParser.raw([options])
Returns middleware that parses all bodies as a Buffer and only looks at requests
where the Content-Type header matches the type option. This parser supports
automatic inflation of gzip and deflate encodings.
A new body object containing the parsed data is populated on the request object after
the middleware (i.e. req.body). This will be a Buffer object of the body.
Options
The raw function takes an optional options object that may contain any of the
following keys:
inflate
When set to true, then deflated (compressed) bodies will be inflated; when false,
deflated bodies are rejected. Defaults to true.
limit
Controls the maximum request body size. If this is a number, then the value specifies
the number of bytes; if it is a string, the value is passed to the bytes library for parsing.
Defaults to '100kb'.
type
The type option is used to determine what media type the middleware will parse. This
option can be a string, array of strings, or a function. If not a function, type option is
passed directly to the type-is library and this can be an extension name (like bin), a
mime type (like application/octet-stream), or a mime type with a wildcard (like */* or
application/*). If a function, the type option is called as fn(req) and the request is
parsed if it returns a truthy value. Defaults to application/octet-stream.
verify
The verify option, if supplied, is called as verify(req, res, buf, encoding), where buf is
a Buffer of the raw request body and encoding is the encoding of the request. The
parsing can be aborted by throwing an error.
bodyParser.text([options])
Returns middleware that parses all bodies as a string and only looks at requests where
the Content-Type header matches the type option. This parser supports automatic
inflation of gzip and deflate encodings.
A new body string containing the parsed data is populated on the request object after
the middleware (i.e. req.body). This will be a string of the body.
Options
The text function takes an optional options object that may contain any of the
following keys:
defaultCharset
Specify the default character set for the text content if the charset is not specified in
the Content-Type header of the request. Defaults to utf-8.
inflate
When set to true, then deflated (compressed) bodies will be inflated; when false,
deflated bodies are rejected. Defaults to true.
limit
Controls the maximum request body size. If this is a number, then the value specifies
the number of bytes; if it is a string, the value is passed to the bytes library for parsing.
Defaults to '100kb'.
type
The type option is used to determine what media type the middleware will parse. This
option can be a string, array of strings, or a function. If not a function, type option is
passed directly to the type-is library and this can be an extension name (like txt), a
mime type (like text/plain), or a mime type with a wildcard (like */* or text/*). If a
function, the type option is called as fn(req) and the request is parsed if it returns a
truthy value. Defaults to text/plain.
verify
The verify option, if supplied, is called as verify(req, res, buf, encoding), where buf is
a Buffer of the raw request body and encoding is the encoding of the request. The
parsing can be aborted by throwing an error.
bodyParser.urlencoded([options])
Returns middleware that only parses urlencoded bodies and only looks at requests
where the Content-Type header matches the type option. This parser accepts only
UTF-8 encoding of the body and supports automatic inflation of gzip and deflate
encodings.
A new body object containing the parsed data is populated on the request object after
the middleware (i.e. req.body). This object will contain key-value pairs, where the
value can be a string or array (when extended is false), or any type (when extended is
true).
Options
The urlencoded function takes an optional options object that may contain any of the
following keys:
extended
The extended option allows to choose between parsing the URL-encoded data with
the querystring library (when false) or the qs library (when true). The “extended”
syntax allows for rich objects and arrays to be encoded into the URL-encoded format,
allowing for a JSON-like experience with URL-encoded. For more information,
please see the qs library.
Defaults to true, but using the default has been deprecated. Please research into the
difference between qs and querystring and choose the appropriate setting.
inflate
When set to true, then deflated (compressed) bodies will be inflated; when false,
deflated bodies are rejected. Defaults to true.
limit
Controls the maximum request body size. If this is a number, then the value specifies
the number of bytes; if it is a string, the value is passed to the bytes library for parsing.
Defaults to '100kb'.
parameterLimit
The parameterLimit option controls the maximum number of parameters that are
allowed in the URL-encoded data. If a request contains more parameters than this
value, a 413 will be returned to the client. Defaults to 1000.
type
The type option is used to determine what media type the middleware will parse. This
option can be a string, array of strings, or a function. If not a function, type option is
passed directly to the type-is library and this can be an extension name (like
urlencoded), a mime type (like application/x-www-form-urlencoded), or a mime type
with a wildcard (like */x-www-form-urlencoded). If a function, the type option is
called as fn(req) and the request is parsed if it returns a truthy value. Defaults to
application/x-www-form-urlencoded.
verify
The verify option, if supplied, is called as verify(req, res, buf, encoding), where buf is
a Buffer of the raw request body and encoding is the encoding of the request. The
parsing can be aborted by throwing an error.
Errors
The middlewares provided by this module create errors using the http-errors module.
The errors will typically have a status/statusCode property that contains the suggested
HTTP response code, an expose property to determine if the message property should
be displayed to the client, a type property to determine the type of error without
matching against the message, and a body property containing the read body, if
available.
The following are the common errors created, though any error can come through for
various reasons.
This error will occur when the request had a Content-Encoding header that contained
an encoding but the “inflation” option was set to false. The status property is set to
415, the type property is set to 'encoding.unsupported', and the charset property will
be set to the encoding that is unsupported.
This error will occur when the request contained an entity that could not be parsed by
the middleware. The status property is set to 400, the type property is set to
'entity.parse.failed', and the body property is set to the entity value that failed parsing.
This error will occur when the request contained an entity that failed verification by the defined verify option. The status property is set to 403, the type property is set to 'entity.verify.failed', and the body property is set to the entity value that failed verification.
request aborted
This error will occur when the request is aborted by the client before reading the body
has finished. The received property will be set to the number of bytes received before
the request was aborted and the expected property is set to the number of expected
bytes. The status property is set to 400 and type property is set to 'request.aborted'.
This error will occur when the request body’s size is larger than the “limit” option.
The limit property will be set to the byte limit and the length property will be set to
the request body’s length. The status property is set to 413 and the type property is set
to 'entity.too.large'.
This error will occur when the request's length did not match the length from the Content-Length header. This typically occurs when the request is malformed, for example when the Content-Length header was calculated based on characters instead of bytes. The status property is set to 400 and the type property is set to 'request.size.invalid'.
This error will occur when something called the req.setEncoding method prior to this
middleware. This module operates directly on bytes only and you cannot call
req.setEncoding when using this module. The status property is set to 500 and the
type property is set to 'stream.encoding.set'.
This error will occur when the request is no longer readable when this middleware attempts to read it. This typically means something other than a middleware from this module already read the request body and the middleware was also configured to read the same request. The status property is set to 500 and the type property is set to 'stream.not.readable'.
This error will occur when the content of the request exceeds the configured
parameterLimit for the urlencoded parser. The status property is set to 413 and the
type property is set to 'parameters.too.many'.
This error will occur when the request had a charset parameter in the Content-Type
header, but the iconv-lite module does not support it OR the parser does not support it.
The charset is contained in the message as well as in the charset property. The status
property is set to 415, the type property is set to 'charset.unsupported', and the charset
property is set to the charset that is unsupported.
This error will occur when the request had a Content-Encoding header that contained
an unsupported encoding. The encoding is contained in the message as well as in the
encoding property. The status property is set to 415, the type property is set to
'encoding.unsupported', and the encoding property is set to the encoding that is
unsupported.
Examples
This example demonstrates adding a generic JSON and URL-encoded parser as a top-
level middleware, which will parse the bodies of all incoming requests. This is the
simplest setup.
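A minimal sketch of such a top-level setup, following the usual body-parser pattern (the echo handler at the end simply prints back whatever was parsed):

var express = require('express')
var bodyParser = require('body-parser')

var app = express()

// parse application/x-www-form-urlencoded
app.use(bodyParser.urlencoded({ extended: false }))

// parse application/json
app.use(bodyParser.json())

app.use(function (req, res) {
  res.setHeader('Content-Type', 'text/plain')
  res.write('you posted:\n')
  res.end(JSON.stringify(req.body, null, 2))
})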
Express route-specific
This example demonstrates adding body parsers specifically to the routes that need
them. In general, this is the most recommended way to use body-parser with Express.
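A sketch of attaching parsers only to the routes that need them (the route paths and response bodies here are illustrative):

var express = require('express')
var bodyParser = require('body-parser')

var app = express()

// create application/json parser
var jsonParser = bodyParser.json()

// create application/x-www-form-urlencoded parser
var urlencodedParser = bodyParser.urlencoded({ extended: false })

// POST /login gets urlencoded bodies
app.post('/login', urlencodedParser, function (req, res) {
  res.send('welcome, ' + req.body.username)
})

// POST /api/users gets JSON bodies
app.post('/api/users', jsonParser, function (req, res) {
  res.send('created user ' + req.body.name)
})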
All the parsers accept a type option which allows you to change the Content-Type that
the middleware will parse.
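For instance, a JSON parser can be told to also accept vendor JSON media types; a brief sketch:

// parse various custom JSON types as JSON
app.use(bodyParser.json({ type: 'application/*+json' }))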
Almost all modern-day web applications have some sort of data storage system at the backend. For example, if you take the case of a web shopping application, data such as the price of an item would be stored in the database.
The Node.js framework can work with both relational databases (such as Oracle and MS SQL Server) and non-relational databases (such as MongoDB).
Over the years, databases such as MongoDB (NoSQL) and MySQL (relational) have become quite popular for storing data. The ability of these databases to store any type of content, and particularly in any type of format, is what makes them so popular.
Node.js has the ability to work with both MySQL and MongoDB as databases. In
order to use either of these databases, you need to download and use the required
modules using the Node package manager.
For MySQL, the required module is called “mysql” and for using MongoDB the
required module to be installed is “Mongoose.”
With these modules, you can perform the following operations in Node.js
1. Manage the connection pooling – Here is where you can specify the number of
MySQL database connections that should be maintained and saved by Node.js.
2. Create and close a connection to a database. In either case, you can provide a
callback function which can be called whenever the “create” and “close”
connection methods are executed.
3. Queries can be executed to get data from respective databases to retrieve data.
4. Data manipulation, such as inserting data, deleting, and updating data can also
be achieved with these modules.
Documents
[
  { Employeeid: 1, EmployeeName: "Guru99" },
  { Employeeid: 2, EmployeeName: "Joe" },
  { Employeeid: 3, EmployeeName: "Martin" }
]
1. Installing the NPM Modules
You need a driver to access Mongo from within a Node application. There are a
number of Mongo drivers available, but MongoDB is among the most popular. To
install the MongoDB module, run the below command
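A sketch of the install step (the official driver package is mongodb; the examples below also use the mongoose module, which is installed the same way):

npm install mongodb
npm install mongoose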
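The connection code being explained below is not reproduced here, so this is only a minimal sketch consistent with that explanation (it assumes the EmployeeDB database used in the later examples and an older, callback-style Mongoose API):

var mongoose = require('mongoose');

// connection string: protocol (mongodb), server location (localhost) and database name (EmployeeDB)
var url = 'mongodb://localhost/EmployeeDB';

mongoose.connect(url, function (err) {
  if (err) throw err;
  console.log('Connection established');
  // close the connection once we are done
  mongoose.connection.close();
});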
Code Explanation:
1. The first step is to include the mongoose module, which is done through the
require function. Once this module is in place, we can use the necessary
functions available in this module to create connections to the database.
2. Next, we specify our connection string to the database. In the connect string,
there are 3 key values which are passed.
3. The next step is to actually connect to our database. The connect function
takes in our URL and has the facility to specify a callback function. It will be
called when the connection is opened to the database. This gives us the
opportunity to know if the database connection was successful or not.
4. In the function, we are writing the string “Connection established” to the
console to indicate that a successful connection was created.
5. Finally, we are closing the connection using the db.close statement.
If the above code is executed properly, the string "Connection established" will be written to the console, as shown below.
var MongoClient = require('mongodb').MongoClient;
var url = 'mongodb://localhost/EmployeeDB';

// older 2.x driver style, where the connect callback receives the db handle
MongoClient.connect(url, function (err, db) {
  if (err) throw err;
  // cursor pointing at all documents in the Employee collection
  var cursor = db.collection('Employee').find();
  cursor.each(function (err, doc) {
    console.log(doc);
  });
});
Code Explanation:
1. In the first step, we are creating a cursor which points to the records which are
fetched from the MongoDb collection. We also have the facility of specifying the
collection ‘Employee’ from which to fetch the records. The find() function is used to
specify that we want to retrieve all of the documents from the MongoDB collection.
2. We are now iterating through our cursor and for each document in the cursor
we are going to execute a function.
3. Our function is simply going to print the contents of each document to the
console.
It is also possible to fetch a particular record from a database. This can be done by
specifying the search condition in the find() function. For example, suppose you just wanted to fetch the record which has the employee name Guru99; then this statement can be written as follows:
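A sketch of such a filtered query, reusing the cursor pattern from above:

var cursor = db.collection('Employee').find({ EmployeeName: "Guru99" });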
If the above code is executed successfully, the following output will be displayed in
your console.
Output:
You will be able to clearly see that all the documents from the collection are
retrieved. This is possible by using the find() method of the mongoDB
connection (db) and iterating through all of the documents using the cursor.
MongoClient.connect(url, function (err, db) {
  if (err) throw err;
  db.collection('Employee').insertOne({
    Employeeid: 4,
    EmployeeName: "NewEmployee"
  });
});
Code Explanation:
1. Here we are using the insertOne method from the MongoDB library to insert a
document into the Employee collection.
2. We are specifying the document details of what needs to be inserted into the
Employee collection.
If you now check the contents of your MongoDB database, you will find the record
with Employeeid of 4 and EmployeeName of “NewEmployee” inserted into the
Employee collection.
Note: The console will not show any output because the record is being inserted in
the database and no output can be shown here.
To check that the data has been properly inserted in the database, you need to execute
the following commands in MongoDB
1. Use EmployeeDB
2. db.Employee.find({Employeeid :4 })
The first statement ensures that you are connected to the EmployeeDb database. The
second statement searches for the record which has the employee id of 4.
1. Updating documents in a collection – Documents can be updated in a
collection using the updateOne method provided by the MongoDB library.
The below code snippet shows how to update a document in a mongoDB
collection.
MongoClient.connect(url, function (err, db) {
  if (err) throw err;
  db.collection('Employee').updateOne({
    "EmployeeName": "NewEmployee"
  }, {
    $set: {
      "EmployeeName": "Mohan"
    }
  });
});
Code Explanation:
1. Here we are using the “updateOne” method from the MongoDB library, which
is used to update a document in a mongoDB collection.
2. We are specifying the search criteria of which document needs to be updated.
In our case, we want to find the document which has the EmployeeName of
“NewEmployee.”
3. We then want to set the value of the EmployeeName of the document from
“NewEmployee” to “Mohan”.
If you now check the contents of your MongoDB database, you will find the record
with Employeeid of 4 and EmployeeName of “Mohan” updated in the Employee
collection.
To check that the data has been properly updated in the database, you need to execute
the following commands in MongoDB
1. Use EmployeeDB
2. db.Employee.find({Employeeid :4 })
The first statement ensures that you are connected to the EmployeeDb database. The
second statement searches for the record which has the employee id of 4.
MongoClient.connect(url, function (err, db) {
  if (err) throw err;
  db.collection('Employee').deleteOne({
    "EmployeeName": "Mohan"
  });
});
Code Explanation:
1. Here we are using the “deleteOne” method from the MongoDB library, which
is used to delete a document in a mongoDB collection.
2. We are specifying the search criteria of which document needs to be deleted.
In our case, we want to find the document which has the EmployeeName of
“Mohan” and delete this document.
If you now check the contents of your MongoDB database, you will find the record
with Employeeid of 4 and EmployeeName of “Mohan” deleted from the Employee
collection.
To check that the data has been properly updated in the database, you need to execute
the following commands in MongoDB
1. Use EmployeeDB
2. db.Employee.find()
The first statement ensures that you are connected to the EmployeeDB database. The second statement searches for and displays all of the records in the Employee collection. Here you can see whether the record has been deleted or not.
Once you have MySQL up and running on your computer, you can access it by using
Node.js.
To access a MySQL database with Node.js, you need a MySQL driver. This tutorial
will use the "mysql" module, downloaded from NPM.
To download and install the "mysql" module, open the Command Terminal and
execute the following:
Create Connection
demo_db_connection.js
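The connection object itself is created with mysql.createConnection; a sketch, where the host, user and password values are placeholders:

var mysql = require('mysql');

var con = mysql.createConnection({
  host: "localhost",
  user: "yourusername",
  password: "yourpassword"
});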
con.connect(function(err) {
if (err) throw err;
console.log("Connected!");
});
Save the code above in a file called "demo_db_connection.js" and run the file:
Run "demo_db_connection.js"
Connected!
Now you can start querying the database using SQL statements.
Query a Database
Use SQL statements to read from (or write to) a MySQL database. This is also called
"to query" the database.
The connection object created in the example above, has a method for querying the
database:
con.connect (function(err) {
if (err) throw err;
console.log("Connected!");
con.query(sql, function (err, result) {
if (err) throw err;
console.log("Result: " + result);
});
});
The query method takes an SQL statement as a parameter and returns the result.
Creating a Database
con.connect(function(err) {
if (err) throw err;
console.log("Connected!");
con.query("CREATE DATABASE mydb", function (err, result) {
if (err) throw err;
console.log("Database created");
});
});
Save the code above in a file called "demo_create_db.js" and run the file:
Run "demo_create_db.js"
Connected!
Database created
Creating a Table
Make sure you define the name of the database when you create the connection:
Example
con.connect(function(err) {
if (err) throw err;
console.log("Connected!");
var sql = "CREATE TABLE customers (name VARCHAR(255), address
VARCHAR(255))";
con.query(sql, function (err, result) {
if (err) throw err;
console.log("Table created");
});
});
Save the code above in a file called "demo_create_table.js" and run the file:
Run "demo_create_table.js"
Connected!
Table created
Example
con.connect(function(err) {
if (err) throw err;
console.log("Connected!");
var sql = "INSERT INTO customers (name, address) VALUES ('Company Inc',
'Highway 37')";
con.query(sql, function (err, result) {
if (err) throw err;
console.log("1 record inserted");
});
});
Save the code above in a file called "demo_db_insert.js", and run the file:
Run "demo_db_insert.js"
Connected!
1 record inserted
To insert more than one record, make an array containing the values, and insert a
question mark in the sql, which will be replaced by the value array:
INSERT INTO customers (name, address) VALUES ?
Example
con.connect(function(err) {
if (err) throw err;
console.log("Connected!");
var sql = "INSERT INTO customers (name, address) VALUES ?";
var values = [
['John', 'Highway 71'],
['Peter', 'Lowstreet 4'],
['Amy', 'Apple st 652'],
['Hannah', 'Mountain 21'],
['Michael', 'Valley 345'],
['Sandy', 'Ocean blvd 2'],
['Betty', 'Green Grass 1'],
['Richard', 'Sky st 331'],
['Susan', 'One way 98'],
['Vicky', 'Yellow Garden 2'],
['Ben', 'Park Lane 38'],
['William', 'Central st 954'],
['Chuck', 'Main Road 989'],
['Viola', 'Sideway 1633']
];
con.query(sql, [values], function (err, result) {
if (err) throw err;
console.log("Number of records inserted: " + result.affectedRows);
});
});
Save the code above in a file called "demo_db_insert_multple.js", and run the file:
Run "demo_db_insert_multiple.js"
Connected!
Number of records inserted: 14
Example
Select all records from the "customers" table, and display the result object:
con.connect(function(err) {
if (err) throw err;
con.query("SELECT * FROM customers", function (err, result, fields) {
if (err) throw err;
console.log(result);
});
});
Run "demo_db_select.js"
34
C:\Users\Your Name>node
this result:
[
{ id: 1, name: 'John', address: 'Highway 71'},
{ id: 2, name: 'Peter', address: 'Lowstreet 4'},
{ id: 3, name: 'Amy', address: 'Apple st 652'},
{ id: 4, name: 'Hannah', address: 'Mountain 21'},
{ id: 5, name: 'Michael', address: 'Valley 345'},
{ id: 6, name: 'Sandy', address: 'Ocean blvd 2'},
{ id: 7, name: 'Betty', address: 'Green Grass 1'},
{ id: 8, name: 'Richard', address: 'Sky st 331'},
{ id: 9, name: 'Susan', address: 'One way 98'},
{ id: 10, name: 'Vicky', address: 'Yellow Garden 2'},
{ id: 11, name: 'Ben', address: 'Park Lane 38'},
{ id: 12, name: 'William', address: 'Central st 954'},
{ id: 13, name: 'Chuck', address: 'Main Road 989'},
{ id: 14, name: 'Viola', address: 'Sideway 1633'}
]
When selecting records from a table, you can filter the selection by using the
"WHERE" statement:
Example
var con = mysql.createConnection({
  host: "localhost",
  user: "yourusername",
  password: "yourpassword",
  database: "mydb"
});

con.connect(function(err) {
  if (err) throw err;
  con.query("SELECT * FROM customers WHERE address = 'Park Lane 38'", function (err, result) {
    if (err) throw err;
    console.log(result);
  });
});
Save the code above in a file called "demo_db_where.js" and run the file:
Run "demo_db_where.js"
C:\Users\Your Name>node
this result:
[
{ id: 11, name: 'Ben', address: 'Park Lane 38'}
]
Wildcard Characters
You can also select the records that starts, includes, or ends with a given letter
or phrase.
Example
Select records where the address starts with the letter 'S':
var con = mysql.createConnection({
  host: "localhost",
  user: "yourusername",
  password: "yourpassword",
  database: "mydb"
});

con.connect(function(err) {
  if (err) throw err;
  con.query("SELECT * FROM customers WHERE address LIKE 'S%'", function (err, result) {
    if (err) throw err;
    console.log(result);
  });
});
Save the code above in a file called "demo_db_where_s.js" and run the
C:\Users\Your Name>node
36
demo_db_where_s.js Which will give you
this result:
[
{ id: 8, name: 'Richard', address: 'Sky st 331'},
{ id: 14, name: 'Viola', address: 'Sideway 1633'}
]
Use the ORDER BY statement to sort the result in ascending or descending order.
The ORDER BY keyword sorts the result ascending by default. To sort the
result in descending order, use the DESC keyword.
Example
var con = mysql.createConnection({
  host: "localhost",
  user: "yourusername",
  password: "yourpassword",
  database: "mydb"
});

con.connect(function(err) {
  if (err) throw err;
  con.query("SELECT * FROM customers ORDER BY name", function (err, result) {
    if (err) throw err;
    console.log(result);
  });
});
Save the code above in a file called "demo_db_orderby.js" and run the
C:\Users\Your Name>node
this result:
[
{ id: 3, name: 'Amy', address: 'Apple st 652'},
{ id: 11, name: 'Ben', address: 'Park Lane 38'},
{ id: 7, name: 'Betty', address: 'Green Grass 1'},
{ id: 13, name: 'Chuck', address: 'Main Road 989'},
{ id: 4, name: 'Hannah', address: 'Mountain 21'},
{ id: 1, name: 'John', address: 'Highway 71'},
{ id: 5, name: 'Michael', address: 'Valley 345'},
{ id: 2, name: 'Peter', address: 'Lowstreet 4'},
{ id: 8, name: 'Richard', address: 'Sky st 331'},
{ id: 6, name: 'Sandy', address: 'Ocean blvd 2'},
{ id: 9, name: 'Susan', address: 'One way 98'},
{ id: 10, name: 'Vicky', address: 'Yellow Garden 2'},
{ id: 14, name: 'Viola', address: 'Sideway 1633'},
{ id: 12, name: 'William', address: 'Central st 954'}
]
ORDER BY DESC
Example
var con = mysql.createConnection({
  host: "localhost",
  user: "yourusername",
  password: "yourpassword",
  database: "mydb"
});

con.connect(function(err) {
  if (err) throw err;
  con.query("SELECT * FROM customers ORDER BY name DESC", function (err, result) {
    if (err) throw err;
    console.log(result);
  });
});
Save the code above in a file called "demo_db_orderby_desc.js" and run the
C:\Users\Your Name>node
38
demo_db_orderby_desc.js Which will give you
this result:
[
{ id: 12, name: 'William', address: 'Central st 954'},
{ id: 14, name: 'Viola', address: 'Sideway 1633'},
{ id: 10, name: 'Vicky', address: 'Yellow Garden 2'},
{ id: 9, name: 'Susan', address: 'One way 98'},
{ id: 6, name: 'Sandy', address: 'Ocean blvd 2'},
{ id: 8, name: 'Richard', address: 'Sky st 331'},
{ id: 2, name: 'Peter', address: 'Lowstreet 4'},
{ id: 5, name: 'Michael', address: 'Valley 345'},
{ id: 1, name: 'John', address: 'Highway 71'},
{ id: 4, name: 'Hannah', address: 'Mountain 21'},
{ id: 13, name: 'Chuck', address: 'Main Road 989'},
{ id: 7, name: 'Betty', address: 'Green Grass 1'},
{ id: 11, name: 'Ben', address: 'Park Lane 38'},
{ id: 3, name: 'Amy', address: 'Apple st 652'}
]
Delete Record
You can delete records from an existing table by using the "DELETE FROM"
statement:
Example
var con = mysql.createConnection({
  host: "localhost",
  user: "yourusername",
  password: "yourpassword",
  database: "mydb"
});

con.connect(function(err) {
  if (err) throw err;
  var sql = "DELETE FROM customers WHERE address = 'Mountain 21'";
  con.query(sql, function (err, result) {
    if (err) throw err;
    console.log("Number of records deleted: " + result.affectedRows);
  });
});
Notice the WHERE clause in the DELETE syntax: the WHERE clause specifies which record or records should be deleted. If you omit the WHERE clause, all records will be deleted!
Save the code above in a file called "demo_db_delete.js" and run the
C:\Users\Your Name>node
this result:
You can delete an existing table by using the "DROP TABLE" statement:
Example
var con = mysql.createConnection({
  host: "localhost",
  user: "yourusername",
  password: "yourpassword",
  database: "mydb"
});

con.connect(function(err) {
  if (err) throw err;
  var sql = "DROP TABLE customers";
  con.query(sql, function (err, result) {
    if (err) throw err;
    console.log("Table deleted");
  });
});
Save the code above in a file called "demo_db_drop_table.js" and run the
C:\Users\Your Name>node
this result:
Table deleted
Update Table
You can update existing records in a table by using the "UPDATE" statement:
Example
var con = mysql.createConnection({
  host: "localhost",
  user: "yourusername",
  password: "yourpassword",
  database: "mydb"
});

con.connect(function(err) {
  if (err) throw err;
  var sql = "UPDATE customers SET address = 'Canyon 123' WHERE address = 'Valley 345'";
  con.query(sql, function (err, result) {
    if (err) throw err;
    console.log(result.affectedRows + " record(s) updated");
  });
});
Notice the WHERE clause in the UPDATE syntax: the WHERE clause specifies which record or records should be updated. If you omit the WHERE clause, all records will be updated!
Save the code above in a file called "demo_db_update.js" and run the
C:\Users\Your Name>node
this result:
1 record(s) updated
You can limit the number of records returned from the query, by using the
"LIMIT" statement:
Example
var con = mysql.createConnection({
  host: "localhost",
  user: "yourusername",
  password: "yourpassword",
  database: "mydb"
});

con.connect(function(err) {
  if (err) throw err;
  var sql = "SELECT * FROM customers LIMIT 5";
  con.query(sql, function (err, result) {
    if (err) throw err;
    console.log(result);
  });
});
Save the code above in a file called "demo_db_limit.js" and run the
C:\Users\Your Name>node
this result:
[
{ id: 1, name: 'John', address: 'Highway 71'},
{ id: 2, name: 'Peter', address: 'Lowstreet 4'},
{ id: 3, name: 'Amy', address: 'Apple st 652'},
{ id: 4, name: 'Hannah', address: 'Mountain 21'},
{ id: 5, name: 'Michael', address: 'Valley 345'}
]
If you want to return five records, starting from the third record, you can use
the "OFFSET" keyword:
Example
var con = mysql.createConnection({
  host: "localhost",
  user: "yourusername",
  password: "yourpassword",
  database: "mydb"
});

con.connect(function(err) {
  if (err) throw err;
  var sql = "SELECT * FROM customers LIMIT 5 OFFSET 2";
  con.query(sql, function (err, result) {
    if (err) throw err;
    console.log(result);
  });
});
Note: "OFFSET 2", means starting from the third position, not the
Run "demo_db_offset.js"
C:\Users\Your Name>node
this result:
[
{ id: 3, name: 'Amy', address: 'Apple st 652'},
{ id: 4, name: 'Hannah', address: 'Mountain 21'},
{ id: 5, name: 'Michael', address: 'Valley 345'},
{ id: 6, name: 'Sandy', address: 'Ocean blvd 2'},
{ id: 7, name: 'Betty', address: 'Green Grass 1'}
]
Shorter Syntax
You can also write your SQL statement like this "LIMIT 2, 5" which returns the
same as the offset example above:
Example
var con = mysql.createConnection({
  host: "localhost",
  user: "yourusername",
  password: "yourpassword",
  database: "mydb"
});

con.connect(function(err) {
  if (err) throw err;
  var sql = "SELECT * FROM customers LIMIT 2, 5";
  con.query(sql, function (err, result) {
    if (err) throw err;
    console.log(result);
  });
});
Note: The numbers are reversed: "LIMIT 2, 5" is the same as "LIMIT 5 OFFSET 2"
Handling Cookies in NodeJS
A cookie is a small piece of data that a website stores in your browser. It helps the website you have visited know more about you and customize your future experience.
For example;
Cookies save your language preferences. This way, when you visit that
website in the future, the language you used will be remembered.
When a user visits a cookie-enabled website for the first time, the browser will
prompt the user that the web page uses cookies and request the user to accept
cookies to be saved on their computer. Typically, when a user makes a request, the server responds by sending back a cookie (among many other things).
This cookie is going to be stored in the user’s browser. When a user visits the
website or sends another request, that request will be sent back together with the
cookies. The cookie will have certain information about the user that the server can
use to make decisions on any other subsequent requests.
When you first make a login request and the server verifies your credentials, the
server will send your Facebook account content. It will also send cookies to your
browser. The cookies are then stored on your computer and submitted to the server
with every request you make to that website. A cookie will be saved with an
identifier that is unique to that user.
When you revisit Facebook, the saved cookie is sent along with your request, and the server keeps track of your login session, remembering who you are and thus keeping you logged in.
The different types of cookies include:
- Session cookies: store the user's information for a short period. When the current session ends, the session cookie is deleted from the user's computer.
- Third-party cookies: used by websites that show ads on their pages or track website traffic. They grant external parties access to decide the types of ads to show based on the user's previous preferences.
Let’s dive in and see how we can implement cookies using Node.js. We will
create and save a cookie in the browser, update and delete a cookie.
Step - 1 Import the express and cookie-parser packages
To set up a server and save cookies, import the cookie-parser and express modules into your project. This will make the necessary functions and objects accessible.
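A sketch of that step, assuming both packages have already been installed with npm (npm install express cookie-parser):

const express = require('express');
const cookieParser = require('cookie-parser');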
Step - 2 Get your application to use the packages
You need to use the above modules as middleware inside your application, as
shown below.
// setup the express app
const app = express()

// lets you use the cookieParser in your application
app.use(cookieParser());
This will make your application use the cookie parser and Express modules.
This is the port number that the server should listen to when it is running. This will
help us access our server locally. In this example, the server will listen to port
8000, as shown below.
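A minimal sketch of starting the server on that port:

// server listening on port 8000
app.listen(8000, () => {
  console.log('Server is running on port 8000');
});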
Now we have a simple server set. Run node app.js to test if it is working.
And if you access the localhost on port 8000 (http://localhost:8000/), you should
get an HTTP response sent by the server. Now we’re ready to start implementing
cookies.
Setting cookies
Let’s add routes and endpoints that will help us create, update and delete a cookie.
We will set a route that will save a cookie in the browser. In this case, the cookies
will be coming from the server to the client browser. To do this, use the res
object and pass cookie as the method, i.e. res.cookie() as shown below.
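A sketch of such a route; the cookie name and value used here are placeholders:

// a route that saves a cookie in the browser
app.get('/setcookie', (req, res) => {
  res.cookie('name', 'express-app');
  res.send('Cookie has been saved successfully');
});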
To confirm that the cookie was saved, go to your browser’s inspector tool
If the server sends this cookie to the browser, this means we can iterate the
incoming requests through req.cookies and check the existence of a saved cookie.
You can log this cookie to the console or send it back to the browser as a response. Let's do that.
// get the cookie from the incoming request
app.get('/getcookie', (req, res) => {
  // show the saved cookies
  console.log(req.cookies);
  res.send(req.cookies);
});
Again run the server using node app.js to expose the above
route (http://localhost:8000/getcookie) and you can see the response on
the browser.
One precaution that you should always take when setting cookies is security. In
the above example, the cookie can be deemed insecure.
For example, you can access this cookie on a browser console using JavaScript
(document.cookie). This means that this cookie is exposed and can be exploited
through cross-site scripting.
You can see the cookie when you open the browser inspector tool and execute the
following in the console.
document.cookie
The saved cookie values can be seen through the browser console.
As a precaution, you should always try to make your cookies inaccessible on the
client-side using JavaScript.
By default, sameSite was initially set to none (sameSite = None). This allowed
third parties to track users across sites. Currently, it is set to Lax (sameSite = Lax)
meaning a cookie is only set when the domain in the URL of the browser matches
the domain of the cookie, thus eliminating third party’s domains. sameSite can also
be set to Strict (sameSite = Strict). This will restrict cross-site sharing even
between different domains that the same publisher owns.
You can also add the maximum time you want a cookie to be available on
the user browser. When the set time elapses, the cookie will be
automatically deleted from the browser.
In this case, we are accessing the server on localhost, which is a non-HTTPS origin. For the sake of testing the server, you can set secure: false. However, always use the value true when you want cookies to be created only over an HTTPS secure origin.
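Putting these options together, a hedged sketch of setting a hardened cookie (the values are illustrative):

res.cookie('name', 'express-app', {
  httpOnly: true,           // not readable from document.cookie
  secure: false,            // set to true when serving over HTTPS
  sameSite: 'lax',          // limit cross-site sending of the cookie
  maxAge: 1000 * 60 * 60    // delete the cookie after one hour (milliseconds)
});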
Furthermore, you cannot access the cookie using JavaScript, i.e., document.cookie.
Typically, cookies can be deleted from the browser depending on the request that a
user makes. For example, if cookies are used for login purposes, when a user
decides to log out, the request should be accompanied by a delete command.
Here is how we can delete the cookie we set above in this example. Use res.clearCookie(name) to clear a saved cookie by its name.
// delete the saved cookie
app.get('/deletecookie', (req, res) => {
  // clear the cookie we set earlier (the cookie name 'name' is a placeholder)
  res.clearCookie('name');
  res.send('Cookie has been deleted successfully');
});
Open http://localhost:8000/deletecookie, and you will see that the saved cookie
has been deleted.
Authentication
You are already aware of the authentication process, because we all do it daily, whether at work (logging into your computer) or at home (logging into a website). Yet the truth is that most "things" connected to the Internet require you to prove your identity by providing credentials.
Authorization
So, authorization occurs after the system authenticates your identity, granting you
complete access to resources such as information, files, databases, funds, places,
and anything else. That said, authorization affects your capacity to access the
system and the extent to which you can do so.
What is JWT
JSON Web Tokens (JWT) are an open industry standard (RFC 7519) for representing claims securely between two parties. For example, you can use jwt.io to decode, verify, and generate JWTs.
In step 1, we initialized npm with the command npm init -y, which
automatically created a package.json.
We need to create the model, middleware and config directories and their files, for example user.js, auth.js and database.js, using the commands below.
mkdir model middleware config
touch config/database.js middleware/auth.js model/user.js
We can now create the index.js and app.js files in the root directory of our
project with the command.
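For example:

touch app.js index.js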
We will validate user credentials against what we have in our database. So the
whole authentication process is not limited to the database we’ll be using in this
article.
Now, let's create our Node.js server and connect our database by adding the following snippets to app.js, index.js, database.js and .env, in that order.
In our config/database.js:
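The database.js snippet itself is not shown above, so here is a minimal sketch consistent with the app.js below; it assumes the connection string is kept in a MONGO_URI environment variable:

const mongoose = require("mongoose");

const { MONGO_URI } = process.env;

exports.connect = () => {
  // connect to the database and exit the process on failure
  mongoose
    .connect(MONGO_URI)
    .then(() => console.log("Successfully connected to database"))
    .catch((error) => {
      console.log("Database connection failed. Exiting now...");
      console.error(error);
      process.exit(1);
    });
};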
In our app.js:
jwt-project/app.js
require("dotenv").config();require("./config/database").connect();const express =
require("express");const app = express();app.use(express.json());// Logic goes
heremodule.exports = app;
In our index.js:
jwt-project/index.js
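The index.js snippet is not reproduced above; a sketch, assuming the port is configured through an API_PORT environment variable:

const http = require("http");
const app = require("./app");

const server = http.createServer(app);

const { API_PORT } = process.env;
const port = process.env.PORT || API_PORT;

// server listening
server.listen(port, () => {
  console.log(`Server running on port ${port}`);
});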
If you notice, our file needs some environment variables. You can create a new
.env file if you haven’t and add your variables before starting our application.
In our .env.
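A sketch of the variables this setup needs (the values are placeholders):

API_PORT=4001
MONGO_URI=mongodb://localhost/jwt-project
TOKEN_KEY=replace-with-a-random-secret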
To start our server, edit the scripts object in our package.json to look like the
one shown below.
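For example, a start script that runs index.js:

"scripts": {
  "start": "node index.js"
}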
The snippet above has been successfully inserted into app.js, index.js, and
database.js. First, we built our node.js server in index.js and imported the app.js
file with routes configured.
Both the server and the database should be up and running without crashing.
We’ll define our schema for the user details when signing up for the first time and
validate them against the saved credentials when logging in.
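A sketch of model/user.js matching the fields used in the register and login routes below:

const mongoose = require("mongoose");

const userSchema = new mongoose.Schema({
  first_name: { type: String, default: null },
  last_name: { type: String, default: null },
  email: { type: String, unique: true },
  password: { type: String },
  token: { type: String },
});

module.exports = mongoose.model("user", userSchema);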
Now let’s create the routes for register and login, respectively.
In app.js in the root directory, add the following snippet for the registration and
login. app.js
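A sketch of the two route stubs before any logic is added:

app.post("/register", (req, res) => {
  // our register logic goes here
});

app.post("/login", (req, res) => {
  // our login logic goes here
});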
We'll be implementing these two routes in our application. We will be using JWT to sign the credentials and bcrypt to encrypt the password before storing them in our database.
Modify the /register route structure we created earlier to look like shown below.
app.js
// ...app.post("/register", async (req, res) => {// Our register logic starts heretry
{// Get user inputconst { first_name, last_name, email, password } =
req.body; last_name))
// Validate
{ user input if (!(email && password && first_name &&
res.status(400).send("All input is required"); } // check
if user already exist // Validate if user exist in our database const oldUser =
awaitUser.findOne({email});if(oldUser){ res.status(409).send("User Already Exist. return
Please Login");}
//Encrypt user
password encryptedPassword = await bcrypt.hash(password, 10); // Create user in our databas
Modify the /login route structure we created earlier to look like shown below.
// ...app.post("/login", async (req, res) => {// Our login logic starts heretry
{// Get user inputconst { email, password } = req.body;// Validate user
input required");
if (!(email && password)) { res.status(400).send("All input is
} // Validate if user exist in our database const user = await
User.findOne({ email }); user.password))){
if (user && (await bcrypt.compare(password,
// Createtoken const token = process.env.TOKEN_KEY,
jwt.sign( { user_id: user._id, email }, );// save user token
{ expiresIn: "2h", }
user.token = token;// userres.status(200).json(user); }
res.status(400).send("Invalid Credentials"); } catch (err) { console.log(err);}
// Our register logic ends here});// ...
Using Postman to test, we’ll get the response shown below after a successful login.
Step 7 - Create middleware for authentication
We can successfully create and log in a user. Still, we’ll create a route that
requires a user token in the header, which is the JWT token we generated earlier.
Add the following snippet inside
auth.js. middleware/auth.js
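The auth.js snippet is not reproduced above, so here is a hedged sketch of middleware/auth.js; it checks for the token in the x-access-token header mentioned below and verifies it with the same TOKEN_KEY used when signing:

const jwt = require("jsonwebtoken");

const config = process.env;

const verifyToken = (req, res, next) => {
  // the token may arrive in the body, the query string or a header
  const token =
    req.body.token || req.query.token || req.headers["x-access-token"];

  if (!token) {
    return res.status(403).send("A token is required for authentication");
  }
  try {
    const decoded = jwt.verify(token, config.TOKEN_KEY);
    req.user = decoded;
  } catch (err) {
    return res.status(401).send("Invalid Token");
  }
  return next();
};

module.exports = verifyToken;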
Now let’s create the /welcome route and update app.js with the following
snippet to test the middleware.
app.js
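A sketch of the protected route, wired up with the middleware above:

const auth = require("./middleware/auth");

app.post("/welcome", auth, (req, res) => {
  res.status(200).send("Welcome to the protected route!");
});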
See the result below when we try to access the /welcome route we just created
without passing a token in the header with the x-access-token key.
We can now add a token in the header with the key x-access-token and re-send the request.