Anti-Jamming Techniques in Wireless Networks

Topics covered

  • Swing components
  • data throughput
  • packet loss
  • source routing
  • system testing
  • jamming attacks
  • sliding-mode-based control
  • policy-based routing
  • Java technology
  • use case diagram

Introduction:

The simplest methods to defend a network against jamming attacks comprise physical-layer
solutions such as spread spectrum or beamforming, forcing the jammers to expend greater
resources to reach the same goal. However, recent work has demonstrated that intelligent jammers
can incorporate cross-layer protocol information into jamming attacks, reducing resource
expenditure by several orders of magnitude by targeting certain link-layer and MAC
implementations as well as link-layer error detection and correction protocols. Hence, more
sophisticated anti-jamming methods and defensive measures must be incorporated into higher-layer
protocols, for example channel surfing or routing around jammed regions of the network.

The majority of anti-jamming techniques make use of diversity. For example, anti-jamming
protocols may employ multiple frequency bands, different MAC channels, or multiple routing
paths. Such diversity techniques help to curb the effects of the jamming attack by requiring the
jammer to act on multiple resources simultaneously. In this paper, we consider anti-jamming
diversity based on the use of multiple routing paths. Using multiple-path variants of source
routing protocols such as Dynamic Source Routing (DSR) or Ad-Hoc On-Demand Distance
Vector (AODV), for example the MP-DSR protocol, each source node can request several
routing paths to the destination node for concurrent use. To make effective use of this routing
diversity, however, each source node must be able to make an intelligent allocation of traffic
across the available paths while considering the potential effect of jamming on the resulting data
throughput.

In order to characterize the effect of jamming on throughput, each source must collect
information on the impact of the jamming attack in various parts of the network. However, the
extent of jamming at each network node depends on a number of unknown parameters, including
the strategy used by the individual jammers and the relative location of the jammers with respect
to each transmitter-receiver pair. Hence, the impact of jamming is probabilistic from the
perspective of the network, and the characterization of the jamming impact is further
complicated by the fact that the jammers' strategies may be dynamic and the jammers themselves
may be mobile.
In order to capture the non-deterministic and dynamic effects of the jamming attack, we
model the packet error rate at each network node as a random process. At a given time, the
randomness in the packet error rate is due to the uncertainty in the jamming parameters, while
the time-variability in the packet error rate is due to the jamming dynamics and mobility. Since
the effect of jamming at each node is probabilistic, the end-to-end throughput achieved by each
source-destination pair will also be non-deterministic and, hence, must be studied using a
stochastic framework. In this article, we thus investigate the ability of network nodes to
characterize the jamming impact and the ability of multiple source nodes to compensate for
jamming in the allocation of traffic across multiple routing paths. Our contributions to this
problem are as follows: We formulate the problem of allocating traffic across multiple routing
paths in the presence of jamming as a lossy network flow optimization problem. We map the
optimization problem to that of asset allocation using portfolio selection theory. We formulate
the centralized traffic allocation problem for multiple source nodes as a convex optimization
problem.
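As a rough illustration of the allocation step described above, the following sketch weights each path by a mean-variance (portfolio-style) score. The class, the field names, and the scoring rule are illustrative assumptions, not the paper's actual formulation:

```java
// Minimal sketch: portfolio-style traffic allocation across routing paths.
// Each path has an estimated mean packet success rate and a variance that
// captures uncertainty about the jammer. Each path's weight is its
// risk-adjusted score, normalized so the weights sum to 1.
import java.util.Arrays;

public class TrafficAllocator {
    /**
     * @param meanSuccess  estimated per-path packet success rates in [0,1]
     * @param variance     estimated variance of each success rate
     * @param riskAversion trade-off between expected throughput and risk
     * @return fraction of traffic to place on each path (sums to 1)
     */
    public static double[] allocate(double[] meanSuccess, double[] variance,
                                    double riskAversion) {
        double[] score = new double[meanSuccess.length];
        double total = 0.0;
        for (int i = 0; i < meanSuccess.length; i++) {
            // Mean-variance (Markowitz-style) score; clamped at zero so a
            // heavily jammed path receives no traffic.
            score[i] = Math.max(0.0, meanSuccess[i] - riskAversion * variance[i]);
            total += score[i];
        }
        if (total == 0.0) {                         // all paths look jammed:
            Arrays.fill(score, 1.0 / score.length); // fall back to a uniform split
            return score;
        }
        for (int i = 0; i < score.length; i++) score[i] /= total;
        return score;
    }

    public static void main(String[] args) {
        double[] w = allocate(new double[]{0.9, 0.6, 0.3},
                              new double[]{0.01, 0.05, 0.20}, 1.0);
        System.out.println(Arrays.toString(w));
    }
}
```

With the sample inputs, the lightly jammed first path receives the largest traffic share and the heavily jammed third path receives the smallest.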

PROJECT OVERVIEW

This project considers the problem of joint congestion control and scheduling in
wireless networks with quality-of-service (QoS) guarantees. Unlike the per-destination
queuing of existing works, which is not scalable, this paper considers per-link
queuing at each node, which significantly reduces the number of queues per node.
Under per-link queuing, we formulate the joint congestion control and scheduling
problem as a network utility maximization (NUM) problem and use a dual decomposition
method to separate the NUM problem into two subproblems, i.e., a congestion control
problem and a scheduling problem. Then, we develop a sliding-mode-based (SM)
distributed congestion control scheme and prove its convergence and optimality properties.
Unlike the existing schemes, our congestion control scheme is capable of
providing multiclass QoS under the general scenario of multipath and multihop; in
addition, it is robust against network anomalies, e.g., link failures, because it can achieve
multipath load balancing.
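The NUM problem referred to above can be written generically as follows (standard textbook notation; the paper's exact constraint set for per-link queuing and QoS may differ):

```latex
% Generic network utility maximization (NUM):
%   x_s  : rate of source s        U_s : concave utility of source s
%   c_l  : capacity of link l      L(s): set of links used by source s
\max_{x_s \ge 0} \; \sum_{s} U_s(x_s)
\quad \text{subject to} \quad
\sum_{s:\, l \in L(s)} x_s \le c_l \quad \forall\, l.
```

Dualizing the capacity constraints introduces a price $\lambda_l$ per link; each source then independently maximizes $U_s(x_s) - x_s \sum_{l \in L(s)} \lambda_l$, which is what separates the congestion control subproblem from the scheduling subproblem.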
CHAPTER 2

SYSTEM ANALYSIS

2.1 FEASIBILITY STUDY

The project meets its performance metrics, as established by the following feasibility study.

Technical Feasibility

This is concerned with specifying equipment and software that will successfully satisfy
the user requirements. The technical needs of the system may vary considerably, but might
include:

• The facility to produce outputs in a given time.

The project processes its tasks in milliseconds, thereby minimizing time complexity.

• Response time under certain conditions.

The project responds effectively under every condition; in the worst case, it applies a hybrid virtual potential field to solve the problem.

• Ability to process a certain volume of transactions at a particular speed.

The transaction speed of the system depends on the hardware of the network connection in use.

• Facility to communicate data to distant locations.

The project deals with Wi-Fi nodes, which can communicate with distant nodes.

In examining technical feasibility, the configuration of the system is given more importance than the
actual make of the hardware. The configuration gives a complete picture of the project.

Economic Feasibility

No extra hardware is needed for this project; only some alterations in software are
required, and since it works within the existing protocol, it incurs no additional cost.

Operational Feasibility

This is mainly related to human, organizational, and political aspects. The points to be considered
are:

• What changes will be brought with the system?

Only small changes are brought to the system, and these run automatically
without affecting any other aspect.

• What organizational structure is disturbed?

The project causes no change or disturbance to the structure of the organization.

• What new skills will be required? Do the existing staff members have these skills? If not, can
they be trained in due course of time?

The concepts in the project run automatically, so no new skills are required.

This feasibility study is carried out by a small group of people who are familiar with information
system techniques and are skilled in the system analysis and design process.
Proposed projects are beneficial only if they can be turned into information systems that will meet
the operating requirements of the organization. This test of feasibility asks whether the system will
work when it is developed and installed.

2.2 EXISTING SYSTEM

 The joint congestion control and scheduling problem can be formulated as a network utility
maximization (NUM) problem.
 Under the setting that each node has no buffer, the NUM problem can be solved
in a distributed manner by iteratively updating the link price, which is the sum of
the per-hop prices.
 This mechanism requires that each link calculate and feed back the per-hop
prices to the source, inducing a lot of overhead. This overhead is unbearable
for relay nodes in a large-scale wireless network; thus there is no effective congestion
control.
 In wireless networks, distributed algorithms are desirable due to the unavailability
of global information. Generally, the QoS constraints make it difficult to solve the
congestion control problem in a distributed manner.
 Directly projecting the non-QoS-constrained solution onto the QoS-constrained
region usually results in degraded performance.
 The existing system avoids this problem by reducing the rate of source-node packets injected into
the network. Since the packet rate is reduced, the throughput is decreased, so
efficient data transfer is lost. Most existing work on congestion control
in WSNs has focused only on traffic control.

DISADVANTAGES

1. In some conditions, complete end-to-end paths rarely or never exist between
sources and destinations within the MANET, due to high node mobility or low
node density. These networks may experience frequent partitioning, with the
disconnections lasting for long periods.
2. Poor performance of TCP in ICNs.
3. Problems with TCP’s reliable data transfer.

2.3 PROPOSED SYSTEM


 The proposed system avoids the above issues without reducing the packet rate,
by splitting files into many small packets and scattering them along multiple
paths consisting of idle and under-loaded nodes, thereby reducing congestion.
 In such a context, a network should be designed to provide quality of service
(QoS) to inelastic traffic such as video and audio and simultaneously provide a high
data rate for elastic traffic, e.g., e-mail and web traffic.
 Therefore, how to perform congestion control and scheduling efficiently is crucial
to achieving these key features in a wireless network.
 In this project, we first consider congestion control and scheduling under
constant wireless channels for simplicity, and then extend the results to time-varying
wireless channels and evaluate the performance.
 Future wireless networks are expected to consist of a variety of heterogeneous
components and will aim to support interactive applications among distributed
users.

ADVANTAGE OF PROPOSED SYSTEM

In the proposed system, each split packet is sent through a different free route,
yielding the following benefits:

• Faster transfer of data

• Very low traffic

• Very low packet loss

• Efficient use of bandwidth

• Very low congestion
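The splitting-and-scattering idea above can be sketched as follows; the packet size, the round-robin path assignment, and all names are illustrative choices rather than the project's actual implementation:

```java
// Minimal sketch of the packet-splitting step: a file's bytes are cut into
// fixed-size packets and dealt out round-robin across the available free
// paths. The 4-byte packet size in main() is only for demonstration.
import java.util.ArrayList;
import java.util.List;

public class PacketSplitter {
    /** Splits data into chunks of at most packetSize bytes. */
    public static List<byte[]> split(byte[] data, int packetSize) {
        List<byte[]> packets = new ArrayList<>();
        for (int off = 0; off < data.length; off += packetSize) {
            int len = Math.min(packetSize, data.length - off);
            byte[] packet = new byte[len];
            System.arraycopy(data, off, packet, 0, len);
            packets.add(packet);
        }
        return packets;
    }

    /** Assigns packet i to path i mod numPaths (round-robin scattering). */
    public static int pathFor(int packetIndex, int numPaths) {
        return packetIndex % numPaths;
    }

    public static void main(String[] args) {
        byte[] file = new byte[10];            // stand-in for file contents
        List<byte[]> packets = split(file, 4); // yields chunks of 4, 4, 2 bytes
        System.out.println(packets.size() + " packets");
    }
}
```

In a real deployment, path assignment would of course depend on the measured load of each route rather than a plain round-robin.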

CHAPTER 3

SYSTEM CONFIGURATION

3.1 HARDWARE REQUIREMENTS


 Processor : Intel I3 Core Processor
 Ram : 4 GB (or) Higher
 Hard disk : 1TB
3.2 SOFTWARE REQUIREMENTS

 Web Server : Apache Tomcat Latest Version

 Server-side Technologies : Java, Java Server Pages

 Client-side Technologies : Hyper Text Markup Language, Cascading Style


Sheets, Java Script, AJAX

 Database Server : MS SQL

 Operating System : Windows (or) Linux (or) Mac any version

ABOUT THE SOFTWARE:

Java Technology
Java technology is both a programming language and a platform.

The Java Programming Language


The Java programming language is a high-level language that can be characterized by all
of the following buzzwords:

 Simple
 Architecture neutral
 Object oriented
 Portable
 Distributed
 High performance
 Interpreted
 Multithreaded
 Robust
 Dynamic
 Secure

With most programming languages, you either compile or interpret a program so that you
can run it on your computer. The Java programming language is unusual in that a program is
both compiled and interpreted. With the compiler, you first translate a program into an
intermediate language called Java bytecodes, the platform-independent codes interpreted by
the interpreter on the Java platform. The interpreter parses and runs each Java bytecode
instruction on the computer. Compilation happens just once; interpretation occurs each time the
program is executed. The following figure illustrates how this works.

Figure: The Java working model
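The compile-once, interpret-everywhere cycle can be tried with a minimal program (the file, class, and method names here are arbitrary examples):

```java
// Greeter.java -- compiled once with `javac Greeter.java` into Greeter.class
// (bytecodes), then run on any JVM with `java Greeter`.
public class Greeter {
    // Kept as a separate method so the behavior is easy to exercise directly.
    public static String greeting(String name) {
        return "Hello, " + name + "!";
    }

    public static void main(String[] args) {
        System.out.println(greeting("world")); // prints: Hello, world!
    }
}
```

The same Greeter.class file runs unchanged on any platform with a Java VM, which is the point of the bytecode intermediate step.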


You can think of Java bytecodes as the machine code instructions for the Java Virtual
Machine (Java VM). Every Java interpreter, whether it’s a development tool or a Web browser
that can run applets, is an implementation of the Java VM. Java bytecodes help make “write
once, run anywhere” possible. You can compile your program into bytecodes on any platform
that has a Java compiler. The bytecodes can then be run on any implementation of the Java VM.
That means that as long as a computer has a Java VM, the same program written in the Java
programming language can run on Windows 2000, a Solaris workstation, or on an iMac.

Figure: Java platform independence


The Java Platform
A platform is the hardware or software environment in which a program runs.
We’ve already mentioned some of the most popular platforms like Windows 2000,
Linux, Solaris, and MacOS. Most platforms can be described as a combination of the
operating system and hardware. The Java platform differs from most other platforms in
that it’s a software-only platform that runs on top of other hardware-based platforms.

The Java platform has two components:


 The Java Virtual Machine (Java VM)
 The Java Application Programming Interface (Java API)
You’ve already been introduced to the Java VM. It’s the base for the Java platform
and is ported onto various hardware-based platforms.
The Java API is a large collection of ready-made software components that provide
many useful capabilities, such as graphical user interface (GUI) widgets. The Java API is
grouped into libraries of related classes and interfaces; these libraries are known as
packages. The next section, What Can Java Technology Do?, highlights what
functionality some of the packages in the Java API provide.
The following figure depicts a program that’s running on the Java platform. As the
figure shows, the Java API and the virtual machine insulate the program from the
hardware.

Native code is code that, once compiled, runs on a specific hardware platform. As a
platform-independent environment, the Java platform can be a bit slower than native code.
However, smart compilers, well-tuned interpreters, and just-in-time bytecode compilers
can bring performance close to that of native code without threatening portability.
What Can Java Technology Do?
The most common types of programs written in the Java programming language are
applets and applications. If you’ve surfed the Web, you’re probably already familiar with
applets. An applet is a program that adheres to certain conventions that allow it to run
within a Java-enabled browser.

However, the Java programming language is not just for writing cute, entertaining applets
for the Web. The general-purpose, high-level Java programming language is also a
powerful software platform. Using the generous API, you can write many types of
programs.
An application is a standalone program that runs directly on the Java platform. A special
kind of application known as a server serves and supports clients on a network. Examples
of servers are Web servers, proxy servers, mail servers, and print servers. Another
specialized program is a servlet. A servlet can almost be thought of as an applet that runs
on the server side. Java Servlets are a popular choice for building interactive web
applications, replacing the use of CGI scripts. Servlets are similar to applets in that they
are runtime extensions of applications. Instead of working in browsers, though, servlets
run within Java Web servers, configuring or tailoring the server.
How does the API support all these kinds of programs? It does so with packages of
software components that provide a wide range of functionality. Every full
implementation of the Java platform gives you the following features:
 The essentials: Objects, strings, threads, numbers, input and output, data
structures, system properties, date and time, and so on.
 Applets: The set of conventions used by applets.
 Networking: URLs, TCP (Transmission Control Protocol), UDP (User Datagram
Protocol) sockets, and IP (Internet Protocol) addresses.
 Internationalization: Help for writing programs that can be localized for users
worldwide. Programs can automatically adapt to specific locales and be displayed
in the appropriate language.
 Security: Both low level and high level, including electronic signatures, public
and private key management, access control, and certificates.
 Software components: Known as JavaBeans™, these can plug into existing component
architectures.
 Object serialization: Allows lightweight persistence and communication via
Remote Method Invocation (RMI).
 Java Database Connectivity (JDBC™): Provides uniform access to a wide range
of relational databases.
The Java platform also has APIs for 2D and 3D graphics, accessibility, servers,
collaboration, telephony, speech, animation, and more. The following figure depicts what
is included in the Java 2 SDK.
How Will Java Technology Change My Life?
We can’t promise you fame, fortune, or even a job if you learn the Java programming
language. Still, it is likely to make your programs better, and it requires less effort than
other languages. We believe that Java technology will help you do the following:
 Get started quickly: Although the Java programming language is a powerful
object-oriented language, it’s easy to learn, especially for programmers already
familiar with C or C++.
 Write less code: Comparisons of program metrics (class counts, method counts,
and so on) suggest that a program written in the Java programming language can
be four times smaller than the same program in C++.
 Write better code: The Java programming language encourages good coding
practices, and its garbage collection helps you avoid memory leaks. Its object
orientation, its JavaBeans component architecture, and its wide-ranging, easily
extendible API let you reuse other people’s tested code and introduce fewer bugs.
 Develop programs more quickly: Your development time may be as much as
twice as fast versus writing the same program in C++. Why? You write fewer
lines of code and it is a simpler programming language than C++.
 Avoid platform dependencies with 100% Pure Java: You can keep your program
portable by avoiding the use of libraries written in other languages. The 100%
Pure Java™ Product Certification Program has a repository of historical process
manuals, white papers, brochures, and similar materials online.
 Write once, run anywhere: Because 100% Pure Java programs are compiled into
machine-independent bytecodes, they run consistently on any Java platform.
 Distribute software more easily: You can upgrade applets easily from a central server.
Applets take advantage of the feature of allowing new classes to be loaded “on the fly,”
without recompiling the entire program.

JAVA SWINGS

Introduction to swing:

Swing contains all the components. It’s a big library, but it’s designed to have appropriate
complexity for the task at hand: if something is simple, you don’t have to write much code, but
as you try to do more, your code becomes increasingly complex. This means an easy entry point,
but you’ve got the power if you need it. Swing has great depth. This section does not attempt to
be comprehensive, but instead introduces the power and simplicity of Swing to get you started
using the library. Please be aware that what you see here is intended to be simple. If you need to
do more, Swing can probably give you what you want if you’re willing to do the research
by hunting through the online documentation from Sun.

Benefits of swing:

Swing components are Beans, so they can be used in any development environment that supports
Beans. Swing provides a full set of UI components. For speed, all the components are
lightweight, and Swing is written entirely in Java for portability. Swing could be said to exhibit
“orthogonality of use”; that is, once you pick up the general ideas about the library, you can apply
them everywhere, primarily because of the Beans naming conventions.

Keyboard navigation is automatic: you can use a Swing application without the mouse, and you
don’t have to do any extra programming. Scrolling support is effortless; you simply wrap your
component in a JScrollPane as you add it to your form. Other features, such as tool tips, typically
require a single line of code to implement. Swing also supports something called “pluggable look
and feel,” which means that the appearance of the UI can be dynamically changed to suit the
expectations of users working under different platforms and operating systems. It’s even possible
to invent your own look and feel.

SWING COMPONENT CLASSES

1. JFrame

The components added to the frame are referred to as its contents; these are managed by the
contentPane. To add a component to a JFrame, we must use its contentPane. A JFrame is a
window with a border, title, and buttons. When a JFrame is set visible, an event-dispatching thread is
started. JFrame objects store several objects, including a Container object known as the content
pane. To add a component to a JFrame, add it to the content pane.

JFrame Features

It is a window with a title, border, (optional) menu bar, and user-specified components. It can be
moved, resized, and iconified. It is not a subclass of JComponent.
It delegates the responsibility of managing user-specified components to a content pane, an instance of
JPanel.

Centering JFrames

By default, a JFrame is displayed in the upper-left corner of the screen. To display a frame at a
specified location, you can use the setLocation(x, y) method of the JFrame class. This method
places the upper-left corner of the frame at location (x, y).

The Swing API keeps improving with abstractions such as the setDefaultCloseOperation method
of the JFrame class.

Creating a JFrame Window

Step 1: Construct an object of the JFrame class.

Step 2: Set the size of the JFrame.

Step 3: Set the title of the JFrame to appear in the title bar (the title bar will be blank if no title is set).

Step 4: Set the default close operation. When the user clicks the close button, the program stops
running.

Step 5: Make the JFrame visible.
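Steps 1 through 5 above can be sketched in code as follows; the title, size, and label text are arbitrary, and the frame is only shown when a display is available:

```java
// Sketch of steps 1-5 for creating a JFrame. The content (a JLabel on a
// JPanel) and the title/size values are illustrative choices.
import java.awt.GraphicsEnvironment;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.JPanel;

public class FrameDemo {
    /** Builds the frame's content; separated out so it works without a display. */
    public static JPanel buildContent() {
        JPanel panel = new JPanel();
        panel.add(new JLabel("Hello, Swing"));
        return panel;
    }

    public static void main(String[] args) {
        if (GraphicsEnvironment.isHeadless()) return;         // no display: skip GUI
        JFrame frame = new JFrame();                          // Step 1: construct
        frame.setSize(400, 300);                              // Step 2: set size
        frame.setTitle("Frame Demo");                         // Step 3: set title
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); // Step 4: close behavior
        frame.setContentPane(buildContent());
        frame.setVisible(true);                               // Step 5: show it
    }
}
```

Splitting the content construction out of main() also makes the GUI logic testable without opening a window.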

How to position a JFrame on screen?

frame.setLocationRelativeTo(null);

2. JWindow

JWindow is Swing’s version of Window and is descended directly from that class. Like
Window, it uses BorderLayout by default. Almost all Swing components are lightweight except
JApplet, JFrame, JDialog, and JWindow.

3. JLabel

JLabel, descended from JComponent, is used to create text labels.

A JLabel object provides text instructions or information on a GUI: it displays a single line of
read-only text, an image, or both text and image.
We use a Swing JLabel when we need a user interface component that displays a message or an
image. JLabels:

 provide text instructions on a GUI
 contain read-only text
 rarely have their contents changed by programs
 are instances of class JLabel (a subclass of JComponent)
4. JTextField

JTextField allows editing/displaying of a single line of text. New features include the ability to
justify the text left, right, or center, and to set the text’s font. When the user types data into the
field and presses the Enter key, an action event occurs. If the program registers an event listener, the
listener processes the event and can use the data in the text field at the time of the event. JTextField
is an input area where the user can type in characters. If you want to let the
user enter multiple lines of text, you cannot use a JTextField unless you create several of them.
The solution is to use JTextArea, which enables the user to enter multiple lines of text.

5. JPasswordField

With JPasswordField (a direct subclass of JTextField) you can suppress the display of input. Each
character entered can be replaced by an echo character. This allows confidential input, for
passwords for example. By default, the echo character is the asterisk, *. When the user types
data into the field and presses the Enter key, an action event occurs. If the program registers an
event listener, the listener processes the event and can use the data in the text field at the time of
the event. If you need to provide an editable text field that doesn’t show the
characters the user types, use the JPasswordField class.

6. JTextArea

JTextArea allows editing of multiple lines of text. JTextArea can be used in conjunction with
class JScrollPane to achieve scrolling. The underlying JScrollPane can be forced to always or
never show either the vertical or horizontal scrollbar.
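Wrapping a text area in a JScrollPane, as described above, can be sketched like this (the component names, sizes, and scrollbar policies are illustrative choices):

```java
// Wraps a multi-line text area in a scroll pane whose vertical scrollbar is
// always shown and whose horizontal scrollbar never is.
import javax.swing.JScrollPane;
import javax.swing.JTextArea;

public class ScrollDemo {
    public static JScrollPane wrap(JTextArea area) {
        return new JScrollPane(area,
                JScrollPane.VERTICAL_SCROLLBAR_ALWAYS,
                JScrollPane.HORIZONTAL_SCROLLBAR_NEVER);
    }

    public static void main(String[] args) {
        JTextArea area = new JTextArea(5, 20);   // 5 rows, 20 columns
        area.setText("multiple lines\nof text");
        JScrollPane pane = wrap(area);
        System.out.println(pane.getVerticalScrollBarPolicy()
                == JScrollPane.VERTICAL_SCROLLBAR_ALWAYS); // prints: true
    }
}
```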
7. JButton

The abstract class AbstractButton extends class JComponent and provides a foundation for a
family of button classes, including JButton. A button is a component the user clicks to trigger a
specific action.

There are several types of buttons in Java, all are subclasses of AbstractButton.

 command buttons: created with class JButton; generate an ActionEvent.

 toggle buttons: have on/off or true/false values.
 check boxes: a group of buttons; generate an ItemEvent.
 radio buttons: a group of buttons in which only one can be selected; generate an
ItemEvent.

8. JRadioButton

JRadioButton is similar to JCheckBox, except for the default icon of each class. A set of radio
buttons can be associated as a group in which only one button at a time can be selected.

9. JCheckBox

JCheckBox is not a member of a checkbox group. A checkbox can be selected and deselected,
and it also displays its current state.

10. JComboBox

JComboBox is like a drop-down box: you can click a drop-down arrow and select an option
from a list. It generates an ItemEvent. For example, when the component has focus, pressing a key
that corresponds to the first character of some entry’s name selects that entry. A vertical scrollbar
is used for longer lists.
CHAPTER 4

SYSTEM DESIGN

4.1 NORMALIZATION:

The basic objective of normalization is to reduce redundancy, which means that
information is stored only once. Storing information several times leads to wastage of
storage space and an increase in the total size of the data stored.
If a database is not properly designed, it can give rise to modification anomalies. Modification
anomalies arise when data is added to, changed in, or deleted from a database table. Similarly, in
traditional databases as well as improperly designed relational databases, data redundancy can be
a problem. These anomalies can be eliminated by normalizing the database.

Normalization is the process of breaking down a table into smaller tables, so that each
table deals with a single theme. There are three kinds of modification anomalies,
corresponding to the first, second, and third normal forms; third normal form (3NF) is considered
sufficient for most practical purposes. Normalization should be undertaken only after a thorough
analysis and complete understanding of its implications.

FIRST NORMAL FORM (1NF):

This form is also called a “flat file”. Each column should contain data for a
single attribute, and no two rows may be identical. To bring a table to First Normal Form,
repeating groups of fields should be identified and moved to another table.

SECOND NORMAL FORM (2NF):

A relation is said to be in 2NF if it is in 1NF and every non-key attribute is functionally
dependent on the key attributes. A ‘functional dependency’ is a relationship among attributes:
one attribute is said to be functionally dependent on another if the value of the first attribute
depends on the value of the second. In the given description, flight number and halt code
form the composite key.

THIRD NORMAL FORM (3NF) :

Third Normal Form normalization is needed when not all attributes in a relation tuple are
functionally dependent only on the key attribute. A transitive dependency is one in which one
attribute depends on a second, which in turn depends on a third, and so on.
INPUT FORMAT

Input                  Format
Network Wi-Fi nodes    Alphanumeric
Available paths        Alphanumeric
File                   Any file format (txt, doc, jpg, etc.)
Data Flow Diagram

Level 0: The overall process comprises node prediction and packet sending, optimization of
routing, and efficient file transfer and receipt.

Level 1 (node prediction and packet sending): a file is selected from the storage device,
converted into an array of bytes, and sent into the network after node prediction.

Level 2 (optimization of routing): packets are scheduled, sent through free paths, and
forwarded as they traverse the network.

Level 3 (efficient file transfer and receipt): the packets are received as arrays of bytes
and saved to the storage device.
USE CASE DIAGRAM


A use case in software engineering and systems engineering is a description of
steps or actions between a user or "actor" and a software system which lead the
user towards something useful. The user or actor might be a person or something
more abstract, such as an external software system or manual process. Use cases
are a software modeling technique that helps developers determine which features
to implement and how to gracefully resolve errors.
Sender use cases:

 File selection
 Packet splitting
 Byte conversion
 Dynamic routing
 Sending

Receiver use cases:

 Receiving file
 File conversion
 File joining
 Saving file
SEQUENCE DIAGRAM

A sequence diagram in the Unified Modeling Language (UML) is a kind of
interaction diagram that shows how processes operate with one another and in
what order. It is a construct of a Message Sequence Chart.

The diagram shows the interaction between Alice (the sender), the network nodes, and Bob
(the receiver):

1. Alice selects a file.
2. Alice splits the file.
3. Alice sends a request to the network nodes.
4. The network nodes respond.
5. Alice identifies the free paths.
6. Alice schedules the packets according to the free paths.
7. Alice sends the file.
8. The network nodes forward the packets.
9. The packets are forwarded toward the receiver.
10. Bob receives the packets.
11. Bob joins the file.
12. Bob sends an acknowledgement.
CLASS DIAGRAM
A class diagram is a type of static structure diagram that describes the structure of
a system by showing the system's classes, their attributes, and the relationships
between the classes.

 Class: A class is a description of a set of objects that share the same attributes,
operations, relationships, and semantics. A class implements one or more
interfaces.
 Interface: An interface is a collection of operations that specify a service of a class or
component. An interface describes the externally visible behavior of that element.
An interface might represent the complete behavior of a class or component.
 Collaboration: A collaboration defines an interaction and is a society of roles and
other elements that work together to provide some cooperative behavior.
Collaborations therefore have structural as well as behavioral dimensions. These
collaborations represent the implementation of patterns that make up a system.
 Dependency: A dependency is a semantic relationship between two things in which
a change to one thing may affect the semantics of the other thing.
 Generalization: A generalization is a specialization/generalization relationship in
which objects of the specialized element (child) are substitutable for objects of the
generalized element (parent).

 Association: An association is a structural relationship that describes a set of
links, a link being a connection among objects. Aggregation is a special kind of
association, representing a structural relationship between a whole and its parts.
The classes in the diagram are:

 Source: attributes Ipaddr, Portno, Filepath; operations convert(), send()
 Receiver: attributes Ipaddr, Portno, source; operations receive(), convert(), save()
 Wireless nodes: attributes Ipaddr, Portno, Dest; operations send(), receive()
Collaboration Diagram:

A collaboration defines an interaction and is a society of roles and other elements
that work together to provide some cooperative behavior. Collaborations therefore have
structural as well as behavioral dimensions. These collaborations represent the
implementation of patterns that make up a system.

The collaboration diagram shows the messages exchanged between Alice (the
sender), the network nodes, and Bob (the receiver):

1: Select a file
2: Splitting file
3: Request
4: Response
5: Identifying free path
6: Scheduling according to free path
7: Sending file
8: Forwarding
9: Forwarding
10: Receiving
11: Joining file
12: Acknowledgement
Activity diagram

The activity diagram traces the flow of a file transfer at the sender:

1. Select a file and convert it into an array of bytes.
2. Split the byte array into small packets and calculate the traffic.
3. Send a request to the adjacent nodes using the three-way handshake protocol.
4. Find the transmission time and schedule the file on the un-trafficked route.
5. Send the file.
6. If the current node is the destination, receive the packets, join them, and
save the file; otherwise the intermediate node forwards the packets.
ER Diagram

The ER diagram relates the intermediate node to its neighbor nodes: an
intermediate node (attributes: IP address, file name) participates in a
many-to-one Forwarding relationship with its neighbor nodes (attributes: file
name, path, destination IP), together with Send and Receiving relationships for
the transfer itself.

CHAPTER 5
SYSTEM DESCRIPTION

Modules

There are three modules in this project; they are:

• Path prediction and packet splitting

• Scheduling and packet sending

• Packet receiving and joining


Module Description

Path prediction and packet splitting

In this module, the path for the transfer of the file is detected
and the data is split into a number of packets depending on the size of the data.
In lightly loaded networks, the surface of the “bowl” is smooth, and hence our
algorithm acts just like shortest-path routing.
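The splitting (and the later rejoining) performed by this module can be sketched as follows; the class and method names here are illustrative, not taken from the project source:

```java
import java.util.ArrayList;
import java.util.List;

public class PacketSplitter {
    // Splits a file's bytes into fixed-size packets; the last
    // packet may be shorter than the chosen packet size.
    public static List<byte[]> split(byte[] data, int packetSize) {
        List<byte[]> packets = new ArrayList<>();
        for (int offset = 0; offset < data.length; offset += packetSize) {
            int len = Math.min(packetSize, data.length - offset);
            byte[] packet = new byte[len];
            System.arraycopy(data, offset, packet, 0, len);
            packets.add(packet);
        }
        return packets;
    }

    // Rejoins packets in order to recover the original byte array.
    public static byte[] join(List<byte[]> packets) {
        int total = 0;
        for (byte[] p : packets) total += p.length;
        byte[] out = new byte[total];
        int pos = 0;
        for (byte[] p : packets) {
            System.arraycopy(p, 0, out, pos, p.length);
            pos += p.length;
        }
        return out;
    }
}
```

For example, a 10-byte file split with a packet size of 4 yields three packets of 4, 4 and 2 bytes, and join() restores the original array.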

Scheduling and packet sending

This module schedules the order in which packets are sent. The header carries an
8-bit field for depth and another 8-bit field for queue length. We assume that all
nodes in the network are homogeneous and have the same buffer size, so TADR can
compute the normalized queue length. The potential Vm is not sent directly because
storing a floating-point number costs more space than storing two integers.
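The two 8-bit header fields described here can be sketched as one packed 16-bit value; the class and method names below are hypothetical, chosen only to illustrate the encoding:

```java
public class PotentialField {
    // Packs the 8-bit depth and the 8-bit normalized queue length
    // into a single 16-bit value, as the text suggests is cheaper
    // than carrying the potential as a floating-point number.
    public static int pack(int depth, int queueLen) {
        return ((depth & 0xFF) << 8) | (queueLen & 0xFF);
    }

    public static int depth(int packed)    { return (packed >>> 8) & 0xFF; }
    public static int queueLen(int packed) { return packed & 0xFF; }
}
```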
Packet receiving and joining

A typical routing loop is caused by a local minimum of the potential, which is a
hollow in our bowl model. At the beginning, nodes around this minimum may send
their packets to it, so the hollow fills up after some time. Once the potential of
this node rises above that of every node around it, the node sends the packets back.
ALGORITHM USED IN THIS PROJECT:
AES is based on a design principle known as a substitution-permutation network, and is fast in
both software and hardware.[7] Unlike its predecessor DES, AES does not use a Feistel network.
AES is a variant of Rijndael which has a fixed block size of 128 bits, and a key size of 128, 192,
or 256 bits. By contrast, the Rijndael specification per se is specified with block and key sizes
that may be any multiple of 32 bits, both with a minimum of 128 and a maximum of 256 bits.

AES operates on a 4×4 column-major order matrix of bytes, termed the state, although some
versions of Rijndael have a larger block size and have additional columns in the state. Most AES
calculations are done in a special finite field.

The key size used for an AES cipher specifies the number of repetitions of
transformation rounds that convert the input, called the plaintext, into the final
output, called the ciphertext. The number of rounds is as follows:

 10 cycles of repetition for 128-bit keys.
 12 cycles of repetition for 192-bit keys.
 14 cycles of repetition for 256-bit keys.

Each round consists of several processing steps, including one that depends on the encryption
key itself. A set of reverse rounds are applied to transform ciphertext back into the original
plaintext using the same encryption key.

High-level description of the algorithm

1. KeyExpansion—round keys are derived from the cipher key using Rijndael's key
schedule.
2. Initial Round
1. AddRoundKey—each byte of the state is combined with the round key using
bitwise xor.
3. Rounds
1. SubBytes—a non-linear substitution step where each byte is replaced with another
according to a lookup table.
2. ShiftRows—a transposition step where each row of the state is shifted cyclically a
certain number of steps.
3. MixColumns—a mixing operation which operates on the columns of the state,
combining the four bytes in each column.
4. AddRoundKey
4. Final Round (no MixColumns)
1. SubBytes
2. ShiftRows
3. AddRoundKey
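
The round structure above is what the standard Java Cryptography Architecture (JCA) implements internally, so a Java project can exercise AES without coding the rounds by hand. A minimal round-trip sketch on a single raw 16-byte block follows; the class name is illustrative, and "AES/ECB/NoPadding" is used only to show one block transformation (real traffic should use an authenticated mode such as GCM):

```java
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class AesDemo {
    // Applies AES-128 in the given mode to exactly one 16-byte block.
    public static byte[] crypt(int mode, byte[] key, byte[] block) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/ECB/NoPadding");
        cipher.init(mode, new SecretKeySpec(key, "AES"));
        return cipher.doFinal(block);
    }

    public static void main(String[] args) throws Exception {
        byte[] key = "0123456789abcdef".getBytes(StandardCharsets.US_ASCII); // 128-bit key
        byte[] plain = "exactly 16 bytes".getBytes(StandardCharsets.US_ASCII);
        byte[] ct = crypt(Cipher.ENCRYPT_MODE, key, plain);
        byte[] back = crypt(Cipher.DECRYPT_MODE, key, ct);
        System.out.println(Arrays.equals(plain, back)); // prints true
    }
}
```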

The SubBytes step

In the SubBytes step, each byte a_ij in the state is replaced with its entry in a
fixed 8-bit lookup table S: b_ij = S(a_ij).

In the SubBytes step, each byte in the state matrix is replaced using an 8-bit
substitution box, the Rijndael S-box. This operation provides the non-linearity in
the cipher. The S-box used is derived from the multiplicative inverse over GF(2^8),
known to have good non-linearity properties. To avoid attacks based on simple
algebraic properties, the S-box is constructed by combining the inverse function
with an invertible affine transformation. The S-box is also chosen to avoid any
fixed points (and so is a derangement), and also any opposite fixed points.

The ShiftRows step


In the ShiftRows step, bytes in each row of the state are shifted cyclically to the left. The number
of places each byte is shifted differs for each row.

The ShiftRows step operates on the rows of the state; it cyclically shifts the bytes in each row by
a certain offset. For AES, the first row is left unchanged. Each byte of the second row is shifted
one to the left. Similarly, the third and fourth rows are shifted by offsets of two and three
respectively. For blocks of sizes 128 bits and 192 bits, the shifting pattern is the same. Row n is
shifted left circular by n-1 bytes. In this way, each column of the output state of the ShiftRows
step is composed of bytes from each column of the input state. (Rijndael variants with a larger
block size have slightly different offsets). For a 256-bit block, the first row is unchanged and the
shifting for the second, third and fourth row is 1 byte, 3 bytes and 4 bytes respectively—this
change only applies for the Rijndael cipher when used with a 256-bit block, as AES does not use
256-bit blocks. The importance of this step is to ensure that the columns do not
remain independent of one another; if they did, AES would degenerate into four
independent block ciphers.
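For a 4×4 state held as state[row][column], the row shifts described above reduce to a single modular index. A minimal sketch (class name illustrative):

```java
public class ShiftRows {
    // Cyclically shifts row r of the 4x4 AES state left by r bytes
    // (row 0 unchanged, row 1 by one, row 2 by two, row 3 by three).
    public static byte[][] shiftRows(byte[][] state) {
        byte[][] out = new byte[4][4];
        for (int r = 0; r < 4; r++)
            for (int c = 0; c < 4; c++)
                out[r][c] = state[r][(c + r) % 4];
        return out;
    }
}
```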

The MixColumns step

In the MixColumns step, each column of the state is multiplied with a fixed polynomial c(x).

In the MixColumns step, the four bytes of each column of the state are combined using an
invertible linear transformation. The MixColumns function takes four bytes as input and outputs
four bytes, where each input byte affects all four output bytes. Together with ShiftRows,
MixColumns provides diffusion in the cipher.

During this operation, each column is multiplied by the fixed matrix:

    | 2 3 1 1 |
    | 1 2 3 1 |
    | 1 1 2 3 |
    | 3 1 1 2 |

where the multiplication is carried out in GF(2^8).
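The MixColumns transformation can be sketched directly using the "xtime" trick for multiplication by 2 in GF(2^8); this is a minimal illustration, not the project's implementation:

```java
public class MixColumns {
    // Multiplication by x (i.e. by 2) in GF(2^8) with the AES
    // reduction polynomial x^8 + x^4 + x^3 + x + 1 (0x11B).
    static int xtime(int b) {
        b <<= 1;
        if ((b & 0x100) != 0) b ^= 0x11B;
        return b & 0xFF;
    }

    // Mixes one column [a0,a1,a2,a3] with the fixed circulant
    // coefficients {2,3,1,1} of the AES MixColumns step.
    public static int[] mixColumn(int[] a) {
        int[] out = new int[4];
        for (int i = 0; i < 4; i++) {
            out[i] = xtime(a[i])                                 // 2 * a[i]
                   ^ xtime(a[(i + 1) % 4]) ^ a[(i + 1) % 4]      // 3 * a[i+1]
                   ^ a[(i + 2) % 4]                              // 1 * a[i+2]
                   ^ a[(i + 3) % 4];                             // 1 * a[i+3]
        }
        return out;
    }
}
```

On the well-known test column db 13 53 45 this produces 8e 4d a1 bc.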
Routing Algorithms

Non-Hierarchical Routing
In this type of routing, interconnected networks are viewed as a single network, where bridges,
routers and gateways are just additional nodes.

 Every node keeps information about every other node in the network
 In case of adaptive routing, the routing calculations are done and updated for all the
nodes.

The above two are also the disadvantages of non-hierarchical routing, since the table sizes and
the routing calculations become too large as the networks get bigger. So this type of routing is
feasible only for small networks.

Hierarchical Routing
This is essentially a 'Divide and Conquer' strategy. The network is divided into different regions
and a router for a particular region knows only about its own domain and other routers. Thus, the
network is viewed at two levels:

1. The Sub-network level, where each node in a region has information about its peers in the
same region and about the region's interface with other regions. Different regions may
have different 'local' routing algorithms. Each local algorithm handles the traffic between
nodes of the same region and also directs the outgoing packets to the appropriate
interface.
2. The Network Level, where each region is considered as a single node connected to its
interface nodes. The routing algorithms at this level handle the routing of packets
between two interface nodes, and is isolated from intra-regional transfer.

Networks can be organized in hierarchies of many levels; e.g. local networks of a city at one
level, the cities of a country at a level above it, and finally the network of all nations.

In Hierarchical routing, the interfaces need to store information about:


 All nodes in its region which are at one level below it.
 Its peer interfaces.
 At least one interface at a level above it, for outgoing packages.

Advantages of Hierarchical Routing :

 Smaller sizes of routing tables.


 Substantially lesser calculations and updates of routing tables.

Disadvantage :

 Once the hierarchy is imposed on the network, it is strictly followed and the
possibility of direct paths is ignored. This may lead to sub-optimal routing.

Source Routing
Source routing is similar in concept to virtual circuit routing. It is implemented as follows:

 Initially, a path between nodes wishing to communicate is found out, either by flooding
or by any other suitable method.
 This route is then specified in the header of each packet routed between these two nodes.
A route may also be specified partially, or in terms of some intermediate hops.

Advantages:

 Bridges do not need to look up their routing tables since the path is already specified in
the packet itself.
 The throughput of the bridges is higher, and this may lead to better utilization of
bandwidth, once a route is established.

Disadvantages:

 Establishing the route at first needs an expensive search method like flooding.
 To cope with the dynamic relocation of nodes in a network, frequent table updates are
required; otherwise packets would be sent in the wrong direction. This too is expensive.
Policy Based Routing
In this type of routing, certain restrictions are put on the type of packets accepted and sent.
For example, the IIT-K router may decide to handle traffic pertaining to its own departments
only, and reject packets from other routes. This kind of routing is used for links with very low
capacity or for security purposes.

Shortest Path Routing


Here, the central question dealt with is 'How to determine the optimal path for routing ?' Various
algorithms are used to determine the optimal routes with respect to some predetermined criteria.
A network is represented as a graph, with its terminals as nodes and the links as edges. A 'length'
is associated with each edge, which represents the cost of using the link for transmission:
the lower the cost, the more suitable the link. The cost is determined by the criteria to be
optimized. Some of the important ways of determining the cost are:

 Minimum number of hops: If each link is given a unit cost, the shortest path is the one
with minimum number of hops. Such a route is easily obtained by a breadth first search
method. This is easy to implement but ignores load, link capacity etc.
 Transmission and Propagation Delays: If the cost is fixed as a function of transmission
and propagation delays, it will reflect the link capacities and the geographical distances.
However these costs are essentially static and do not consider the varying load
conditions.
 Queuing Delays: If the cost of a link is determined through its queuing delays, it takes
care of the varying load conditions, but not of the propagation delays.

Ideally, the cost parameter should consider all the above mentioned factors, and it should be
updated periodically to reflect the changes in the loading conditions. However, if the routes are
changed according to the load, the load changes again. This feedback effect between routing and
load can lead to undesirable oscillations and sudden swings.
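Given such a cost-weighted graph, the optimal route is classically computed with Dijkstra's algorithm. A minimal sketch follows; the adjacency-matrix representation and the use of negative entries to mark missing links are assumptions of this illustration:

```java
import java.util.Arrays;

public class ShortestPath {
    // Dijkstra's algorithm over an adjacency matrix of link costs;
    // cost[i][j] < 0 means there is no direct link between i and j.
    // Returns the minimum cost from src to every node.
    public static int[] dijkstra(int[][] cost, int src) {
        int n = cost.length;
        int[] dist = new int[n];
        boolean[] done = new boolean[n];
        Arrays.fill(dist, Integer.MAX_VALUE);
        dist[src] = 0;
        for (int it = 0; it < n; it++) {
            // Pick the nearest not-yet-finalized reachable node.
            int u = -1;
            for (int v = 0; v < n; v++)
                if (!done[v] && dist[v] != Integer.MAX_VALUE
                        && (u < 0 || dist[v] < dist[u])) u = v;
            if (u < 0) break;          // remaining nodes unreachable
            done[u] = true;
            // Relax all links leaving u.
            for (int v = 0; v < n; v++)
                if (cost[u][v] >= 0 && dist[u] + cost[u][v] < dist[v])
                    dist[v] = dist[u] + cost[u][v];
        }
        return dist;
    }
}
```

With unit costs on every link this degenerates to the minimum-hop metric described above; with delay-based costs it reflects link capacities instead.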

TESTING AND IMPLEMENTATION


SYSTEM TESTING
The purpose of testing is to discover errors. Testing is the process of trying to discover
every conceivable fault or weakness in a work product. It provides a way to check the
functionality of components, sub-assemblies, assemblies and/or a finished product. It is the
process of exercising software with the intent of ensuring that the software system meets its
requirements and user expectations and does not fail in an unacceptable manner. There are
various types of test, and each type addresses a specific testing requirement.

TYPES OF TESTS

Unit testing
Unit testing involves the design of test cases that validate that the internal program logic is
functioning properly, and that program inputs produce valid outputs. All decision branches and
internal code flow should be validated. It is the testing of individual software units of the
application; it is done after the completion of an individual unit and before integration. This
is structural testing that relies on knowledge of the unit's construction and is invasive. Unit
tests perform basic tests at the component level and test a specific business process,
application, and/or system configuration. Unit tests ensure that each unique path of a business
process performs accurately to the documented specifications and contains clearly defined
inputs and expected results.

Integration testing
Integration tests are designed to test integrated software components to determine if they
actually run as one program. Testing is event driven and is more concerned with the basic
outcome of screens or fields. Integration tests demonstrate that although the components were
individually satisfactory, as shown by successful unit testing, the combination of components is
correct and consistent. Integration testing is specifically aimed at exposing the problems that
arise from the combination of components.
Functional test
Functional tests provide systematic demonstrations that functions tested are available as
specified by the business and technical requirements, system documentation, and user manuals.

Functional testing is centered on the following items:

Valid Input : identified classes of valid input must be accepted.

Invalid Input : identified classes of invalid input must be rejected.

Functions : identified functions must be exercised.

Output : identified classes of application outputs must be exercised.

Systems/Procedures: interfacing systems or procedures must be invoked.

Organization and preparation of functional tests is focused on requirements, key functions, or
special test cases. In addition, systematic coverage of business process flows, data fields,
predefined processes, and successive processes must be considered for testing. Before
functional testing is complete, additional tests are identified and the effective value of
current tests is determined.

System Test
System testing ensures that the entire integrated software system meets requirements. It tests a
configuration to ensure known and predictable results. An example of system testing is the
configuration-oriented system integration test. System testing is based on process descriptions
and flows, emphasizing pre-driven process links and integration points.

White Box Testing

White Box Testing is testing in which the software tester has knowledge of the inner workings,
structure and language of the software, or at least its purpose. It is used to test areas that
cannot be reached from a black-box level.

Black Box Testing


Black Box Testing is testing the software without any knowledge of the inner workings,
structure or language of the module being tested. Black-box tests, like most other kinds of
tests, must be written from a definitive source document, such as a specification or
requirements document. It is testing in which the software under test is treated as a black
box: you cannot “see” into it. The test provides inputs and responds to outputs without
considering how the software works.

6.1 Unit Testing:

Unit testing is usually conducted as part of a combined code and unit test phase of the
software lifecycle, although it is not uncommon for coding and unit testing to be conducted as
two distinct phases.

Test strategy and approach


Field testing will be performed manually and functional tests will be written in detail.
Test objectives
 All field entries must work properly.
 Pages must be activated from the identified link.
 The entry screen, messages and responses must not be delayed.
Features to be tested
 Verify that the entries are of the correct format
 No duplicate entries should be allowed
 All links should take the user to the correct page.
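
The "correct format" objective can be expressed as a small self-checking unit test. The validator below is hypothetical (the report does not show the real entry format); it uses a dotted IPv4 address to match the Ipaddr fields used elsewhere in the design:

```java
public class EntryValidatorTest {
    // Hypothetical field validator: accepts only a well-formed
    // dotted IPv4 address with each octet in the range 0..255.
    public static boolean isValidIp(String s) {
        String[] parts = s.split("\\.", -1);
        if (parts.length != 4) return false;
        for (String p : parts) {
            if (p.isEmpty() || p.length() > 3) return false;
            for (char c : p.toCharArray())
                if (c < '0' || c > '9') return false;   // non-digit rejected
            if (Integer.parseInt(p) > 255) return false; // octet out of range
        }
        return true;
    }

    public static void main(String[] args) {
        // Each case plays the role of one unit-test assertion.
        System.out.println(isValidIp("192.168.1.10")); // prints true
        System.out.println(isValidIp("300.1.1.1"));    // prints false
        System.out.println(isValidIp("a.b.c.d"));      // prints false
    }
}
```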

6.2 Integration Testing


Software integration testing is the incremental integration testing of two or more
integrated software components on a single platform to produce failures caused by interface
defects.

The task of the integration test is to check that components or software applications, e.g.
components in a software system or, one step up, software applications at the company level,
interact without error.

Test Results: All the test cases mentioned above passed successfully. No defects encountered.
6.3 Acceptance Testing
User Acceptance Testing is a critical phase of any project and requires significant
participation by the end user. It also ensures that the system meets the functional requirements.

Test Results: All the test cases mentioned above passed successfully. No defects encountered.

Implementation

SDLC is the acronym for Software Development Life Cycle. It is also called the software
development process. The software development life cycle (SDLC) is a framework defining
tasks performed at each step in the software development process. ISO/IEC 12207 is an
international standard for software life-cycle processes. It aims to be the standard that defines all
the tasks required for developing and maintaining software.
What is SDLC?
SDLC is a process followed for a software project, within a software organization. It consists of
a detailed plan describing how to develop, maintain, replace and alter or enhance specific
software. The life cycle defines a methodology for improving the quality of software and the
overall development process.

A typical Software Development life cycle consists of the following stages:

Stage 1: Planning and Requirement Analysis: Requirement analysis is the most important and
fundamental stage in SDLC. It is performed by the senior members of the team with inputs from
the customer, the sales department, market surveys and domain experts in the industry. This
information is then used to plan the basic project approach and to conduct product feasibility
study in the economical, operational, and technical areas.

Planning for the quality assurance requirements and identification of the risks associated with the
project is also done in the planning stage. The outcome of the technical feasibility study is to
define the various technical approaches that can be followed to implement the project
successfully with minimum risks.

Stage 2: Defining Requirements: Once the requirement analysis is done the next step is to
clearly define and document the product requirements and get them approved from the customer
or the market analysts. This is done through ‘SRS’ – Software Requirement Specification
document which consists of all the product requirements to be designed and developed during
the project life cycle.

Stage 3: Designing the product architecture: SRS is the reference for product architects to
come out with the best architecture for the product to be developed. Based on the requirements
specified in SRS, usually more than one design approach for the product architecture is proposed
and documented in a DDS - Design Document Specification. This DDS is reviewed by all the
important stakeholders, and based on various parameters such as risk assessment, product
robustness, design modularity, budget and time constraints, the best design approach is
selected for the product.

A design approach clearly defines all the architectural modules of the product along with its
communication and data flow representation with the external and third party modules (if any).
The internal design of all the modules of the proposed architecture should be clearly defined with
the minutest of the details in DDS.

Stage 4: Building or Developing the Product : In this stage of SDLC the actual development
starts and the product is built. The programming code is generated as per DDS during this stage.
If the design is performed in a detailed and organized manner, code generation can be
accomplished without much hassle.

Developers have to follow the coding guidelines defined by their organization, and programming
tools like compilers, interpreters and debuggers are used to generate the code. Different
high-level programming languages such as C, C++, Pascal, Java, and PHP are used for coding. The
programming language is chosen with respect to the type of software being developed.

Stage 5: Testing the Product : This stage is usually a subset of all the stages as in the modern
SDLC models, the testing activities are mostly involved in all the stages of SDLC. However this
stage refers to the testing-only stage of the product, where product defects are reported, tracked,
fixed and retested, until the product reaches the quality standards defined in the SRS.

Stage 6: Deployment in the Market and Maintenance : Once the product is tested and ready
to be deployed, it is released formally in the appropriate market. Sometimes product deployment
happens in stages as per the organization's business strategy. The product may first be released
in a limited segment and tested in the real business environment (UAT- User acceptance testing).

Then based on the feedback, the product may be released as it is or with suggested enhancements
in the target market segment. After the product is released in the market, its maintenance is
done for the existing customer base.

SDLC Models

There are various software development life cycle models defined and designed which are
followed during the software development process. These models are also referred to as "Software
Development Process Models". Each process model follows a series of steps unique to its type,
in order to ensure success in process of software development. Following are the most important
and popular SDLC models followed in the industry:

 Waterfall Model
 Iterative Model
 Spiral Model
 V-Model
 Big Bang Model

The other related methodologies are Agile Model, RAD Model – Rapid Application
Development and Prototyping Models.

Waterfall Model
The Waterfall Model was the first process model to be introduced. It is also referred to as a
linear-sequential life cycle model. It is very simple to understand and use. In a waterfall
model, each phase must be completed before the next phase can begin, and there is no
overlapping between phases.

The waterfall model is the earliest SDLC approach used for software development. It illustrates
the software development process in a linear sequential flow; hence it is also referred to as a
linear-sequential life cycle model. This means that any phase in the development process begins
only if the previous phase is complete. In the waterfall model, phases do not overlap.

Waterfall Model design

The waterfall approach was the first SDLC model to be used widely in software engineering to
ensure the success of a project. In the waterfall approach, the whole process of software
development is divided into separate phases, and the outcome of one phase typically acts as the
input for the next phase sequentially.

Following is a diagrammatic representation of different phases of waterfall model.


The sequential phases in Waterfall model are:

 Requirement Gathering and Analysis: All possible requirements of the system to be
developed are captured in this phase and documented in a requirement specification document.

 System Design: The requirement specifications from first phase are studied in this phase and
system design is prepared. System Design helps in specifying hardware and system requirements
and also helps in defining overall system architecture.

 Implementation: With inputs from system design, the system is first developed in small
programs called units, which are integrated in the next phase. Each unit is developed and tested
for its functionality which is referred to as Unit Testing.

 Integration and Testing: All the units developed in the implementation phase are integrated
into a system after testing of each unit. Post integration the entire system is tested for any faults
and failures.

 Deployment of system: Once the functional and non functional testing is done, the product is
deployed in the customer environment or released into the market.

 Maintenance: There are some issues which come up in the client environment. To fix those
issues patches are released. Also to enhance the product some better versions are released.
Maintenance is done to deliver these changes in the customer environment.

All these phases are cascaded to each other in which progress is seen as flowing steadily
downwards (like a waterfall) through the phases. The next phase is started only after the defined
set of goals are achieved for previous phase and it is signed off, so the name "Waterfall Model".
In this model phases do not overlap.

Waterfall Model Application

Every software product is different and requires a suitable SDLC approach to be followed based
on the internal and external factors. Some situations where the use of the waterfall model is
most appropriate are:
 Requirements are very well documented, clear and fixed

 Product definition is stable

 Technology is understood and is not dynamic

 There are no ambiguous requirements

 Ample resources with required expertise are available to support the product

 The project is short

Iterative Model

In the iterative model, the process starts with a simple implementation of a small set of
the software requirements and iteratively enhances the evolving versions until the complete
system is implemented and ready to be deployed.

An iterative life cycle model does not attempt to start with a full specification of requirements.
Instead, development begins by specifying and implementing just part of the software, which is
then reviewed in order to identify further requirements. This process is then repeated, producing
a new version of the software at the end of each iteration of the model.

Iterative Model design

Iterative process starts with a simple implementation of a subset of the software requirements
and iteratively enhances the evolving versions until the full system is implemented. At each
iteration, design modifications are made and new functional capabilities are added. The basic
idea behind this method is to develop a system through repeated cycles (iterative) and in smaller
portions at a time (incremental).

Following is the pictorial representation of Iterative and Incremental model:


Iterative and Incremental development is a combination of both iterative design or iterative
method and incremental build model for development. "During software development, more than
one iteration of the software development cycle may be in progress at the same time." and "This
process may be described as an "evolutionary acquisition" or "incremental build" approach."

In incremental model the whole requirement is divided into various builds. During each iteration,
the development module goes through the requirements, design, implementation and testing
phases. Each subsequent release of the module adds function to the previous release. The process
continues till the complete system is ready as per the requirement.

The key to successful use of an iterative software development lifecycle is rigorous validation of
requirements, and verification & testing of each version of the software against those
requirements within each cycle of the model. As the software evolves through successive cycles,
tests have to be repeated and extended to verify each version of the software.

Iterative Model Application

Like other SDLC models, Iterative and incremental development has some specific applications
in the software industry. This model is most often used in the following scenarios:

 Requirements of the complete system are clearly defined and understood.

 Major requirements must be defined; however, some functionalities or requested


enhancements may evolve with time.

 There is a time-to-market constraint.


 A new technology is being used and is being learnt by the development team while working
on the project.

 Resources with needed skill set are not available and are planned to be used on contract basis
for specific iterations.

 There are some high risk features and goals which may change in the future.

Spiral Model

The spiral model combines the idea of iterative development with the systematic,
controlled aspects of the waterfall model.

Spiral model is a combination of iterative development process model and sequential linear
development model i.e. waterfall model with very high emphasis on risk analysis. It allows for
incremental releases of the product, or incremental refinement through each iteration around the
spiral.

Spiral Model design

The spiral model has four phases. A software project repeatedly passes through these phases in
iterations called Spirals.

 Identification

This phase starts with gathering the business requirements in the baseline spiral. In the
subsequent spirals as the product matures, identification of system requirements, subsystem
requirements and unit requirements are all done in this phase.

This also includes understanding the system requirements by continuous communication


between the customer and the system analyst. At the end of the spiral the product is deployed in
the identified market.
 Design

Design phase starts with the conceptual design in the baseline spiral and involves
architectural design, logical design of modules, physical product design and final design in the
subsequent spirals.

 Construct or Build

Construct phase refers to production of the actual software product at every spiral. In the
baseline spiral, when the product is just conceived and the design is being developed, a POC
(Proof of Concept) is developed in this phase to get customer feedback.

Then in the subsequent spirals with higher clarity on requirements and design details a working
model of the software called build is produced with a version number. These builds are sent to
customer for feedback.

 Evaluation and Risk Analysis

Risk Analysis includes identifying, estimating, and monitoring technical feasibility and
management risks, such as schedule slippage and cost overrun. After testing the build, at the end
of first iteration, the customer evaluates the software and provides feedback.
Based on the customer evaluation, software development process enters into the next iteration
and subsequently follows the linear approach to implement the feedback suggested by the
customer. The process of iterations along the spiral continues throughout the life of the software.

Spiral Model Application

Spiral Model is very widely used in the software industry as it is in sync with the natural
development process of any product i.e. learning with maturity and also involves minimum risk
for the customer as well as the development firms. Following are the typical uses of Spiral
model:

 When there is a budget constraint and risk evaluation is important


 For medium to high-risk projects
 Long-term project commitment because of potential changes to economic priorities as the
requirements change with time
 Customer is not sure of their requirements which is usually the case
 Requirements are complex and need evaluation to get clarity
 New product line which should be released in phases to get enough customer feedback
 Significant changes are expected in the product during the development cycle

V -Model

The V-model is an SDLC model in which execution of the processes happens in a sequential
manner in a V-shape. It is also known as the Verification and Validation model.

The V-Model is an extension of the waterfall model and is based on the association of a testing
phase with each corresponding development stage. This means that for every single phase in the
development cycle there is a directly associated testing phase. This is a highly disciplined model,
and the next phase starts only after completion of the previous phase.

V-Model design

Under the V-Model, the testing phase corresponding to each development phase is planned in
parallel. So there are Verification phases on one side of the ‘V’ and Validation phases on the
other side. The Coding phase joins the two sides of the V-Model.
The figure below illustrates the different phases in the V-Model of the SDLC.

Verification Phases

Following are the Verification phases in V-Model:

 Business Requirement Analysis:

This is the first phase in the development cycle, where the product requirements are
understood from the customer's perspective. This phase involves detailed communication with the
customer to understand their expectations and exact requirements. This is a very important
activity and needs to be managed well, as most customers are not sure about what exactly they
need. Acceptance test design planning is done at this stage, as business requirements can be
used as an input for acceptance testing.

 System Design:

Once you have clear and detailed product requirements, it is time to design the
complete system. System design comprises understanding and detailing the complete
hardware and communication setup for the product under development. The system test plan is
developed based on the system design. Doing this at an earlier stage leaves more time for actual
test execution later.
 Architectural Design:

Architectural specifications are understood and designed in this phase. Usually more than
one technical approach is proposed, and the final decision is taken based on technical and
financial feasibility. The system design is broken down further into modules, each taking up
different functionality. This is also referred to as High Level Design (HLD).

The data transfer and communication between the internal modules and with the outside world
(other systems) is clearly understood and defined in this stage. With this information, integration
tests can be designed and documented during this stage.

 Module Design:

In this phase the detailed internal design for all the system modules is specified; this is
referred to as Low Level Design (LLD). It is important that the design is compatible with the
other modules in the system architecture and with the other external systems. Unit tests are an
essential part of any development process and help eliminate the maximum number of faults and
errors at a very early stage. Unit tests can be designed at this stage based on the internal module
designs.

Coding Phase

The actual coding of the system modules designed in the design phases is taken up in the
Coding phase. The most suitable programming language is decided based on the system and
architectural requirements. The coding is performed based on coding guidelines and
standards. The code goes through numerous code reviews and is optimized for best performance
before the final build is checked into the repository.

Validation Phases

Following are the Validation phases in V-Model:

 Unit Testing

Unit tests designed in the module design phase are executed on the code during this
validation phase. Unit testing is testing at the code level and helps eliminate bugs at an early
stage, though not all defects can be uncovered by unit testing.
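As a minimal illustration of the unit-testing idea described above, the sketch below tests a hypothetical checksum() method in isolation using plain Java. The module, its name, and the test values are illustrative assumptions, not part of this project's code.

```java
// Minimal unit-testing sketch: a hypothetical checksum module and a
// plain-Java test for it (no test framework, for brevity).
public class ChecksumTest {
    // Module under test: simple additive checksum of a byte array
    // (illustrative only, not a real error-detection code).
    static int checksum(byte[] data) {
        int sum = 0;
        for (byte b : data) {
            sum = (sum + (b & 0xFF)) % 256; // treat bytes as unsigned, wrap at 256
        }
        return sum;
    }

    // Unit test: exercises the module in isolation, at code level.
    public static void main(String[] args) {
        if (checksum(new byte[] {}) != 0) throw new AssertionError("empty input");
        if (checksum(new byte[] {1, 2, 3}) != 6) throw new AssertionError("small input");
        if (checksum(new byte[] {(byte) 200, (byte) 100}) != 44) throw new AssertionError("wraparound");
        System.out.println("all unit tests passed");
    }
}
```

Such a test pins down the module's behavior before it is integrated with the rest of the system, which is why it can be designed as early as the module design phase.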
 Integration Testing

Integration testing is associated with the architectural design phase. Integration tests are
performed to test the coexistence and communication of the internal modules within the system.
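The same idea can be sketched at the integration level: the hypothetical packetize() and assemble() modules below are exercised together to verify their communication contract. Both modules and their names are illustrative assumptions, not the project's actual components.

```java
import java.util.ArrayList;
import java.util.List;

// Integration-testing sketch: two hypothetical modules (a packetizer that
// splits data into chunks and an assembler that rebuilds it) are tested
// together rather than in isolation.
public class IntegrationTest {
    // Module A: splits a message into fixed-size chunks.
    static List<String> packetize(String msg, int size) {
        List<String> packets = new ArrayList<>();
        for (int i = 0; i < msg.length(); i += size) {
            packets.add(msg.substring(i, Math.min(i + size, msg.length())));
        }
        return packets;
    }

    // Module B: reassembles the chunks back into the original message.
    static String assemble(List<String> packets) {
        StringBuilder sb = new StringBuilder();
        for (String p : packets) sb.append(p);
        return sb.toString();
    }

    // Integration test: verifies the two modules cooperate end to end.
    public static void main(String[] args) {
        String original = "hello wireless world";
        String rebuilt = assemble(packetize(original, 4));
        if (!rebuilt.equals(original)) throw new AssertionError("round trip failed");
        System.out.println("integration test passed");
    }
}
```

Where the unit test checked one module's internals, this test checks the data handed across the module boundary, which is exactly what the architectural (HLD) phase defines.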

 System Testing

System testing is directly associated with the System design phase. System tests check
the entire system functionality and the communication of the system under development with
external systems. Most of the software and hardware compatibility issues can be uncovered
during system test execution.

 Acceptance Testing

Acceptance testing is associated with the business requirement analysis phase and
involves testing the product in the user environment. Acceptance tests uncover compatibility
issues with the other systems available in the user environment. They also discover
non-functional issues, such as load and performance defects, in the actual user environment.

V-Model Application

The application of the V-Model is almost the same as that of the waterfall model, as both
models are sequential. Requirements have to be very clear before the project starts, because it is
usually expensive to go back and make changes. This model is used in the medical development
field, as it is a strictly disciplined domain. The following are suitable scenarios in which to use
the V-Model:

 Requirements are well defined, clearly documented and fixed.


 Product definition is stable.
 Technology is not dynamic and is well understood by the project team.
 There are no ambiguous or undefined requirements
 The project is short.

CONCLUSION AND FUTURE SCOPE


We studied the problem of traffic allocation in multiple-path routing algorithms in the presence
of jammers whose effect can only be characterized statistically. We have presented methods for
each network node to probabilistically characterize the local impact of a dynamic jamming attack
and for data sources to incorporate this information into the routing algorithm. We formulated
multiple-path traffic allocation in multi-source networks as a lossy network flow optimization
problem using an objective function based on portfolio selection theory from finance. We
showed that this centralized optimization problem can be solved using a distributed algorithm
based on decomposition in network utility maximization (NUM). We presented simulation
results to illustrate the impact of jamming dynamics and mobility on network throughput and to
demonstrate the efficacy of our traffic allocation algorithm. We have thus shown that multiple
path source routing algorithms can optimize the throughput performance by effectively
incorporating the empirical jamming impact into the allocation of traffic to the set of paths.
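As a rough illustration of the portfolio-selection idea, the sketch below allocates traffic fractions across paths by penalizing each path's estimated mean packet-success rate with its variance. The scoring rule, the risk-aversion constant, and the method name are simplified assumptions, not the exact NUM formulation of the work summarized above.

```java
import java.util.Arrays;

// Hedged sketch: mean-variance (portfolio-style) traffic allocation
// across multiple routing paths under statistically characterized jamming.
public class TrafficAllocator {
    // mu[i]:  estimated mean packet-success rate of path i (from local jamming statistics)
    // var[i]: estimated variance of that rate (jamming dynamics / mobility)
    // a:      risk-aversion weight trading expected throughput against variability
    static double[] allocate(double[] mu, double[] var, double a) {
        double[] w = new double[mu.length];
        double total = 0.0;
        for (int i = 0; i < mu.length; i++) {
            w[i] = Math.max(0.0, mu[i] - a * var[i]); // penalize unstable (heavily jammed) paths
            total += w[i];
        }
        if (total == 0.0) { // every path looks bad: fall back to a uniform split
            Arrays.fill(w, 1.0 / w.length);
            return w;
        }
        for (int i = 0; i < w.length; i++) {
            w[i] /= total; // normalize so the traffic fractions sum to 1
        }
        return w;
    }

    public static void main(String[] args) {
        double[] mu  = {0.9, 0.6, 0.8};     // per-path mean success rates
        double[] var = {0.01, 0.20, 0.05};  // per-path variance under jamming
        System.out.println(Arrays.toString(allocate(mu, var, 2.0)));
    }
}
```

In this toy scoring, a path with a high mean but volatile success rate can receive less traffic than a steadier path, which is the qualitative behavior the portfolio-selection objective is meant to capture.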

BIBLIOGRAPHY
1. T.X. Brown, J.E. James, and A. Sethi, “Jamming and Sensing of Encrypted Wireless Ad Hoc
Networks,” Proc. ACM Int’l Symp. Mobile Ad Hoc Networking and Computing (MobiHoc), pp.
120-130, 2006.
2. M. Cagalj, S. Capkun, and J.-P. Hubaux, “Wormhole-Based Anti-Jamming Techniques in
Sensor Networks,” IEEE Trans. Mobile Computing, vol. 6, no. 1, pp. 100-114, Jan. 2007.
3. A. Chan, X. Liu, G. Noubir, and B. Thapa, “Control Channel Jamming: Resilience and
Identification of Traitors,” Proc. IEEE Int’l Symp. Information Theory (ISIT), 2007.
4. T. Dempsey, G. Sahin, Y. Morton, and C. Hopper, “Intelligent Sensing and Classification in
Ad Hoc Networks: A Case Study,” IEEE Aerospace and Electronic Systems Magazine, vol. 24,
no. 8, pp. 23-30, Aug. 2009.
5. Y. Desmedt, “Broadcast Anti-Jamming Systems,” Computer Networks, vol. 35, nos. 2/3, pp.
223-236, Feb. 2001.
6. K. Gaj and P. Chodowiec, “FPGA and ASIC Implementations of AES,” Cryptographic
Engineering, pp. 235-294, Springer, 2009.
7. O. Goldreich, Foundations of Cryptography: Basic Applications. Cambridge Univ. Press,
2004.
8. B. Greenstein, D. McCoy, J. Pang, T. Kohno, S. Seshan, and D. Wetherall, “Improving
Wireless Privacy with an Identifier-Free Link Layer Protocol,” Proc. Int’l Conf. Mobile Systems,
Applications, and Services (MobiSys), 2008.
9. IEEE 802.11 Standard, [Link]
10. A. Juels and J. Brainard, “Client Puzzles: A Cryptographic Countermeasure against
Connection Depletion Attacks,” Proc. Network and Distributed System Security Symp. (NDSS),
pp. 151-165, 1999.

FORMS AND REPORTS

Home page

This is the home page, where the user browses for the source file for the file transfer process.
Receiver

This form shows the receiver path and its status.


This receiver form shows the receiver running and waiting to receive a file.
Select file

This form shows the first step: selecting the file from a folder for the file transaction.
