Computer network
A computer network, often simply referred to as a network, is a collection of computers and devices interconnected by communications channels that facilitate communication and allow the sharing of resources and information among the connected devices. Computer networks are the core of modern communication.
Properties
Computer networks:

Facilitate communications. Using a network, people can communicate efficiently and easily via email, instant messaging, chat rooms, telephone, video telephone calls, and video conferencing.

Permit sharing of files, data, and other types of information. In a network environment, authorized users may access data and information stored on other computers on the network. The capability of providing access to data and information on shared storage devices is an important feature of many networks.

Share network and computing resources. In a networked environment, each computer on a network may access and use resources provided by devices on the network, such as printing a document on a shared network printer. Distributed computing uses computing resources across a network to accomplish tasks.

May be insecure. A computer network may be used by computer hackers to deploy computer viruses or computer worms on devices connected to the network, or to prevent these devices from normally accessing the network (denial of service).

May interfere with other technologies. Power line communication strongly disturbs certain forms of radio communication, e.g., amateur radio.[5] It may also interfere with last mile access technologies such as ADSL and VDSL.[6]

May be difficult to set up. A complex computer network may be difficult to set up. It may also be very costly to set up an effective computer network in a large organization or company.
TYPES OF NETWORKS

1 BASED ON TRANSMISSION MEDIA

WIRED:
A wired network connection can be thought of as a jack into which we plug a network cable, establishing a wired connection. Network jacks are similar to phone jacks except for their size, and are typically yellow or orange in color. Wired networks allow fast and easy access to the Internet, and modern wired networks are full duplex and switched. There are many types of wired Internet connections used to establish wired networks, including B-ISDN, DSL, ADSL, ADSL2+, SDSL, VDSL and cable. B-ISDN stands for broadband integrated services digital network, while DSL stands for digital subscriber line.

WIRELESS:

A wireless network refers to any type of computer network that is not connected by cables of any kind. It is a method by which telecommunications networks and enterprise (business) installations avoid the costly process of introducing cables into a building, or a connection between various equipment locations.[1] Wireless telecommunications networks are generally implemented and administered using a transmission system based on radio waves. This implementation takes place at the physical level (layer) of the network structure.[2]
2 BASED ON NETWORK SIZE:
LAN - Local Area Network

A local area network (LAN) is a computer network that connects computers and devices in a limited geographical area such as a home, school, computer laboratory or office building.[1] The defining characteristics of LANs, in contrast to wide area networks (WANs), include their usually higher data-transfer rates, smaller geographic area, and lack of a need for leased telecommunication lines. A LAN connects network devices over a relatively short distance. A networked office building, school, or home usually contains a single LAN, though sometimes one building will contain a few small LANs (perhaps one per room), and occasionally a LAN will span a group of nearby buildings. In TCP/IP networking, a LAN is often but not always implemented as a single IP subnet. In addition to operating in a limited space, LANs are also typically owned, controlled, and managed by a single person or organization. They also tend to use certain connectivity technologies, primarily Ethernet and Token Ring.
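The point about a LAN often being a single IP subnet can be illustrated with Python's standard ipaddress module; a minimal sketch (the addresses below are made-up private-range examples):

```python
import ipaddress

def same_lan_subnet(host_a: str, host_b: str, prefix: str) -> bool:
    """Return True if both host addresses fall inside the given IP subnet."""
    subnet = ipaddress.ip_network(prefix)
    return (ipaddress.ip_address(host_a) in subnet
            and ipaddress.ip_address(host_b) in subnet)

# Two hosts on a typical home LAN subnet (192.168.1.0/24):
print(same_lan_subnet("192.168.1.10", "192.168.1.20", "192.168.1.0/24"))  # True
# A host on a different private network is outside the LAN's subnet:
print(same_lan_subnet("192.168.1.10", "10.0.0.5", "192.168.1.0/24"))      # False
```

Hosts on the same subnet can reach each other directly over the LAN; traffic to addresses outside the subnet must go through a router.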
WAN - Wide Area Network
As the term implies, a WAN spans a large physical distance. The Internet is the largest WAN, spanning the Earth. A WAN is a geographically dispersed collection of LANs. A network device called a router connects LANs to a WAN. In IP networking, the router maintains both a LAN address and a WAN address. A WAN differs from a LAN in several important ways. Most WANs (like the Internet) are not owned by any one organization but rather exist under collective or distributed ownership and management. WANs tend to use technologies like ATM, Frame Relay and X.25 for connectivity over longer distances.

MAN - Metropolitan Area Network

A metropolitan area network (MAN) is a computer network that usually spans a city or a large campus; its geographic scope falls between a WAN and a LAN. A MAN usually interconnects a number of local area networks (LANs) using a high-capacity backbone technology, such as fiber-optic links, and provides up-link services to wide area networks (WANs) and the Internet. A MAN is optimized for a larger geographical area than a LAN, ranging from several blocks of buildings to entire cities, and can depend on communications channels of moderate-to-high data rates. A MAN might be owned and operated by a single organization, but it usually will be used by many individuals and organizations; MANs might also be owned and operated as public utilities. They often provide a means for internetworking local networks, giving LANs in a metropolitan region connectivity to wider networks such as the Internet.
3 BASED ON TOPOLOGY:
Network topology is the layout pattern of interconnections of the various elements (links, nodes, etc.) of a computer[1][2] or biological network.[3] Network topologies may be physical or logical. Physical topology refers to the physical design of a network, including the devices, their locations and the cable installation. Logical topology refers to how data is actually transferred in a network, as opposed to its physical design.
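The difference between physical layouts can be made concrete by modeling topologies as graphs. This sketch (node numbering is arbitrary, chosen for illustration) builds adjacency lists for two common topologies, a star and a ring:

```python
def star_topology(n: int) -> dict:
    """Star: node 0 is the hub; every other node links only to the hub."""
    links = {0: list(range(1, n))}
    for i in range(1, n):
        links[i] = [0]
    return links

def ring_topology(n: int) -> dict:
    """Ring: each node links to its two neighbours around the ring."""
    return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

print(star_topology(4))  # {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
print(ring_topology(4))  # {0: [3, 1], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
```

In the star, the hub is a single point of failure; in the ring, cutting one link still leaves every pair of nodes connected the other way around.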
4 PEER-TO-PEER NETWORK:
P2P networking has generated tremendous interest worldwide among both Internet surfers and computer networking professionals. P2P software systems like Kazaa and Napster rank amongst the most popular software applications ever. Numerous businesses and Web sites have promoted "peer to peer" technology as the future of Internet networking. Although they have actually existed for many years, P2P technologies promise to radically change the future of networking. P2P file sharing software has also created much controversy over legality and "fair use." In general, experts disagree on various details of P2P and precisely how it will evolve in the future.
Traditional Peer to Peer Networks
The P2P acronym technically stands for peer to peer. Webopedia defines P2P as "A type of network in which each workstation has equivalent capabilities and responsibilities. This differs from client/server architectures, in which some computers are dedicated to serving the others." This definition captures the traditional meaning of peer to peer networking. Computers in a peer to peer network are typically situated physically near to each other and run similar networking protocols and software. Before home networking became popular, only small businesses and schools built peer to peer networks.
Home Peer to Peer Networks
Most home computer networks today are peer to peer networks. Residential users configure their computers in peer workgroups to allow sharing of files, printers and other resources equally among all of the devices. Although one computer may act as a file server or fax server at any given time, other home computers often have equivalent capability to handle those responsibilities. Both wired and wireless home networks qualify as peer to peer environments. Some may argue that the installation of a network router or similar centerpiece device means that the network is no longer peer to peer. From the networking point of view, this is inaccurate. A router simply joins the home network to the Internet; it does not by itself change how resources within the network are shared.
5 CLIENT SERVER NETWORKS:
The term client-server refers to a popular model for computer networking that utilizes client and server devices, each designed for specific purposes. The client-server model can be used on the Internet as well as on local area networks (LANs). Examples of client-server systems on the Internet include Web browsers and Web servers, FTP clients and servers, and DNS.
Client and Server Devices
Client/server networking grew in popularity many years ago as personal computers (PCs) became the common alternative to older mainframe computers. Client devices are typically PCs with network software applications installed that request and receive information over the network. Mobile devices as well as desktop computers can both function as clients. A server device typically stores files and databases including more complex applications like Web sites. Server devices often feature higher-powered central processors, more memory, and larger disk drives than clients.
Client-Server Applications
The client-server model distinguishes between applications as well as devices. Network clients make requests to a server by sending messages, and servers respond to their clients by acting on each request and returning results. One server generally supports numerous clients, and multiple servers can be networked together in a pool to handle the increased processing load as the number of clients grows. A client computer and a server computer are usually two separate devices, each customized for their designed purpose. For example, a Web client works best with a large screen display, while a Web server does not need any display at all and can be located anywhere in the world. However, in some cases a given device can function both as a client and a server for the same application. Likewise, a device that is a server for one application can simultaneously act as a client to other servers, for different applications. Some of the most popular applications on the Internet follow the client-server model, including email, FTP and Web services. Each of these clients features a user interface (either graphic- or text-based) and a client application that allows the user to connect to servers. In the case of email and FTP, users enter a computer name (or sometimes an IP address) into the interface to set up connections to the server.
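The request/response exchange described above can be sketched with a minimal TCP client and server. This is a toy echo service on the loopback interface, not any particular real application:

```python
import socket
import threading

# Server side: bind to the loopback interface and listen for one client.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))       # port 0 lets the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

def serve_one() -> None:
    """Act on a single client request and return a result."""
    conn, _ = srv.accept()
    with conn:
        request = conn.recv(1024).decode()
        conn.sendall(("echo: " + request).encode())

server = threading.Thread(target=serve_one)
server.start()

# Client side: send a request message and read the server's response.
with socket.socket() as cli:
    cli.connect(("127.0.0.1", port))
    cli.sendall(b"hello")
    reply = cli.recv(1024).decode()

server.join()
srv.close()
print(reply)  # echo: hello
```

Note the asymmetry: the server passively waits and acts on requests, while the client initiates the connection, which is exactly the role split the model describes.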
Client-Server vs Peer-to-Peer and Other Models
The client-server model was originally developed to allow more users to share access to database applications. Compared to the mainframe approach, client-server offers improved scalability because connections can be made as needed rather than being fixed. The client-server model also supports modular applications that can make the job of creating software easier. In so-called "two-tier" and "three-tier" types of client-server systems, software applications are separated into modular pieces, and each piece is installed on clients or servers specialized for that subsystem. Client-server is just one approach to managing network applications. The primary alternative, peer-to-peer networking, models all devices as having equivalent capability rather than specialized client or server roles. Compared to client-server, peer-to-peer networks offer some advantages, such as more flexibility in growing the system to handle a large number of clients. Client-server networks generally offer advantages in keeping data secure.
6 DECENTRALISED PEER-TO-PEER NETWORKS:
Decentralization
Peer-to-peer systems seem to go hand-in-hand with decentralized systems. In a fully decentralized system, not only is every host an equal participant, but there are no hosts with special facilitating or administrative roles. In practice, building fully decentralized systems can be difficult, and many peer-to-peer applications take hybrid approaches to
solving problems. As we have already seen, DNS is peer-to-peer in protocol design but with a built-in sense of hierarchy. There are many other examples of systems that are peer-to-peer at the core and yet have some semi-centralized organization in application, such as Usenet, instant messaging, and Napster. Usenet is an instructive example of the evolution of a decentralized system. Usenet propagation is symmetric: hosts share traffic. But because of the high cost of keeping a full news feed, in practice there is a backbone of hosts that carry all of the traffic and serve it to a large number of "leaf nodes" whose role is mostly to receive articles. Within Usenet, there was a natural trend toward making traffic propagation hierarchical, even though the underlying protocols do not demand it. This form of "soft centralization" may prove to be economic for many peer-to-peer systems with high-cost data transmission. Many other current peer-to-peer applications present a decentralized face while relying on a central facilitator to coordinate operations. To a user of an instant messaging system, the application appears peer-to-peer, sending data directly to the friend being messaged. But all major instant messaging systems have some sort of server on the back end that facilitates nodes talking to each other. The server maintains an association between the user's name and his or her current IP address, buffers messages in case the user is offline, and routes messages to users behind firewalls. Some systems (such as ICQ) allow direct client-to-client communication when possible but have a server as a fallback. A fully decentralized approach to instant messaging would not work on today's Internet, but there are scaling advantages to allowing client-to-client communication when possible. Napster is another example of a hybrid system. Napster's file sharing is decentralized: one Napster client downloads a file directly from another Napster client's machine. 
But the directory of files is centralized, with the Napster servers answering search queries and brokering client connections. This hybrid approach seems to scale well: the directory can be made efficient and uses low bandwidth, and the file sharing can happen on the edges of the network. In practice, some applications might work better with a fully centralized design, not using any peer-to-peer technology at all. One example is a search on a large, relatively static database. Current web search engines are able to serve up to one billion pages all from a single place. Search algorithms have been highly optimized for centralized operation; there appears to be little benefit to spreading the search operation out on a peer-to-peer network (database generation, however, is another matter). Also, applications that require centralized information sharing for accountability or correctness are hard to spread out on a decentralized network. For example, an auction site needs to guarantee that the best price wins; that can be difficult if the bidding process has been spread across many locations. Decentralization engenders a whole new area of network-related failures: unreliability, incorrect data synchronization, etc. Peer-to-peer designers need to balance the power of peer-to-peer models against the complications and limitations of decentralized systems.
7 CENTRALISED PEER-TO-PEER NETWORK:
In centralized peer-to-peer systems, a central server is used for indexing functions and to bootstrap the entire system. Although this has similarities with a structured architecture, the connections between peers are not determined by any algorithm.
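A central index of the kind just described can be sketched as follows. The class and peer names are hypothetical; real systems add registration protocols, timeouts, and failover:

```python
class CentralIndex:
    """Centralized-P2P index: peers register the files they hold with a
    central server; searches hit the index, but transfers happen
    directly between peers."""
    def __init__(self):
        self.index = {}          # filename -> set of peer ids holding it

    def register(self, peer: str, files: list) -> None:
        """A joining peer announces the files it can serve."""
        for f in files:
            self.index.setdefault(f, set()).add(peer)

    def search(self, filename: str) -> set:
        """Return the peers known to hold the file (empty set if none)."""
        return self.index.get(filename, set())

index = CentralIndex()
index.register("peer-1", ["song.mp3", "talk.ogg"])
index.register("peer-2", ["song.mp3"])
print(sorted(index.search("song.mp3")))   # ['peer-1', 'peer-2']
print(index.search("missing.mp3"))        # set()
```

The server only answers "who has this file?"; the actual download is negotiated peer to peer, which is why the index is both the system's efficiency advantage and its single point of failure.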
8 Structured systems
Structured P2P networks employ a globally consistent protocol to ensure that any node can efficiently route a search to some peer that has the desired file, even if the file is extremely rare. Such a guarantee necessitates a more structured pattern of overlay links. By far the most common type of structured P2P network is the distributed hash table (DHT), in which a variant of consistent hashing is used to assign ownership of each file to a particular peer, in a way analogous to a traditional hash table's assignment of each key to a particular array slot.
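The consistent-hashing idea behind DHT ownership can be sketched as follows. This is a simplified single-point-per-peer ring; real DHTs such as Chord add virtual nodes and routing tables:

```python
import hashlib
from bisect import bisect_right

def _hash(key: str) -> int:
    """Map a string (peer id or file name) onto a point on the hash ring."""
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

class ConsistentHashRing:
    """Assign each file key to the first peer whose point on the ring
    follows the key's hash, wrapping around at the end -- analogous to
    a hash table assigning each key to an array slot."""
    def __init__(self, peers):
        self.ring = sorted((_hash(p), p) for p in peers)
        self.points = [h for h, _ in self.ring]

    def owner(self, key: str) -> str:
        idx = bisect_right(self.points, _hash(key)) % len(self.points)
        return self.ring[idx][1]

ring = ConsistentHashRing(["peer-a", "peer-b", "peer-c"])
print(ring.owner("song.mp3"))   # deterministic: always the same peer
```

Because every node computes the same owner for a given key, a query can be routed straight toward that peer instead of being flooded, which is what makes rare files findable in structured networks.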
9 Unstructured systems
An unstructured P2P network is formed when the overlay links are established arbitrarily. Such networks can be easily constructed, as a new peer that wants to join the network can copy the existing links of another node and then form its own links over time. In an unstructured P2P network, if a peer wants to find a desired piece of data, the query has to be flooded through the network to reach as many peers as possible that share the data. The main disadvantage of such networks is that queries may not always be resolved. Popular content is likely to be available at several peers, and any peer searching for it is likely to find it. But if a peer is looking for rare data shared by only a few other peers, it is highly unlikely that the search will be successful. Since there is no correlation between a peer and the content it manages, there is no guarantee that flooding will find a peer that has the desired data. Flooding also causes a high amount of signaling traffic in the network, so such networks typically have very poor search efficiency. Many of the popular P2P networks are unstructured.

In pure P2P networks, peers act as equals, merging the roles of client and server. In such networks, there is no central server managing the network, nor is there a central router. Some examples of pure P2P application layer networks designed for peer-to-peer file sharing are gnutella (pre v0.4) and Freenet.

There also exist hybrid P2P systems, which distribute their clients into two groups: client nodes and overlay nodes. Typically, each client is able to act according to the momentary need of the network and can become part of the respective overlay network used to coordinate the P2P structure. This division between normal and 'better' nodes is done in order to address the scaling problems of early pure P2P networks. Examples of such networks are modern implementations of gnutella (after v0.4) and Gnutella2.
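The flooding search described above can be sketched as a breadth-first traversal with a hop limit (TTL). The overlay and node names below are made up for illustration:

```python
def flood_search(links: dict, start: str, holders: set, ttl: int) -> set:
    """Flood a query outward from `start`, one hop per TTL unit.
    Returns the set of reached peers that hold the data."""
    visited = {start}
    frontier = [start]
    found = set()
    for _ in range(ttl):
        next_frontier = []
        for peer in frontier:
            for neighbour in links[peer]:
                if neighbour not in visited:
                    visited.add(neighbour)
                    next_frontier.append(neighbour)
                    if neighbour in holders:
                        found.add(neighbour)
        frontier = next_frontier
    return found

# A small arbitrary overlay (links were formed ad hoc, as in the text):
overlay = {
    "A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"],
    "D": ["B", "C", "E"], "E": ["D"],
}
print(flood_search(overlay, "A", holders={"D"}, ttl=2))  # {'D'} -- within reach
print(flood_search(overlay, "A", holders={"E"}, ttl=2))  # set() -- rare data beyond TTL
```

The second call shows the failure mode the text describes: data held only by a distant peer is simply never reached before the TTL expires, even though it exists in the network.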
Another type of hybrid P2P network uses central server(s) or bootstrapping mechanisms on the one hand, and P2P data transfers on the other. These networks are in general called 'centralized networks' because of their inability to work without their central server(s). An example of such a network is the eDonkey network (often also called eD2k).

10 Napster

Napster, founded by Shawn Fanning and his uncle John Fanning while the former was attending Northeastern University in Boston, is an online music store and a Best Buy company. It was originally founded as a pioneering peer-to-peer file sharing Internet service that emphasized sharing audio files, typically digitally encoded music in MP3 format. The original company ran into legal difficulties over copyright infringement, ceased operations and was eventually acquired by Roxio and later by Best Buy. For more information about the current service, see Napster (pay service). Later companies and projects successfully followed its P2P file sharing example, such as Gnutella, Freenet and many others. Others, such as Grokster, Madster and the original eDonkey network, encountered problems similar to Napster's.

11 Gnutella

Gnutella (often pronounced with a silent g) is a large peer-to-peer network which, at the time of its creation, was the first decentralized peer-to-peer network of its kind, leading to other, later networks adopting the model.[1] It celebrated a decade of existence on March 14, 2010 and has a user base in the millions for peer-to-peer file sharing. In June 2005, gnutella's population was 1.81 million computers,[2] increasing to over three million nodes by January 2006.[3] In late 2007, it was the most popular file sharing network on the Internet, with an estimated market share of more than 40%.[4] The first client was developed by Justin Frankel and Tom Pepper of Nullsoft in early 2000, soon after the company's acquisition by AOL. On March 14, the program was made available for download on Nullsoft's servers.

The gnutella network is a fully distributed alternative to such semi-centralized systems as FastTrack (KaZaA) and the original Napster. Initial popularity of the network was spurred on by Napster's threatened legal demise in early 2001. This growing surge in popularity revealed the limits of the initial protocol's scalability. In early 2001, variations on the protocol (first implemented in proprietary and closed source clients) allowed an improvement in scalability. Instead of treating every user as client and server, some users were now treated as "ultrapeers", routing search requests and responses for users connected to them.

12 Freenet
Freenet is a decentralized, censorship-resistant distributed data store originally designed by Ian Clarke.[5] It is different from most other peer-to-peer applications, both in how users interact with it and in the security it offers. It separates the underlying network structure and protocol from how users interact with the network; as a result, there are a variety of ways to access content on the Freenet network. According to Clarke, Freenet aims to provide freedom of speech through a peer-to-peer network with strong protection of anonymity; as part of supporting its users' freedom, Freenet is free and open source software.[6] Freenet works by pooling the contributed bandwidth and storage space of member computers to allow users to anonymously publish or retrieve various kinds of information. Freenet has been under continuous development since 2000.
13 The Advantages and Disadvantages of Peer-to-Peer Networks and Client/Server Networks
Here are some advantages and disadvantages of peer-to-peer networks.

Advantages (Why):
- Peer-to-peer networks are easy and simple to set up and only require a hub or a switch to connect all the computers together.
- You can access any file on another computer, as long as it is set to a shared folder.
- The requirements for a peer-to-peer network are a 10BASE-T Ethernet cable and an Ethernet hub/switch. This is rather cheap compared to having a server.
- The architecture of the layout (how it connects) is simple.
- If one computer fails to work, all the other computers connected to it continue to work.

Disadvantages (Why):
- If you have not connected the computers together properly, there can be problems accessing certain files.
- It doesn't always work well if you have many computers; it works better with 2-8 computers.
- Security is not good: you can set passwords for files that you don't want people to access, but apart from that the security is pretty poor.
Here are some advantages and disadvantages of client/server networks.

Advantages (Why):
- A client/server network can be scaled up to many services that can also be used by multiple users.
- A client/server network defines the roles and responsibilities of a computing system. This means that the server can update all the computers connected to it; an example of this would be software updates or hardware updates.
- All the data is stored on the servers, which generally have far greater security controls than most clients. Servers can better control access and resources, to guarantee that only those clients with the appropriate permissions may access and change data.
- The security is a lot more advanced than in a peer-to-peer network. You can have a password on your own profile so that no one can access everything whenever they want, and the level of access can vary across different organisations.
- Many mature client-server technologies are already available which were designed to ensure security, 'friendliness' of the user interface, and ease of use.
- It functions with multiple different clients of different capabilities.

Disadvantages (Why):
- When the server goes down or crashes, all the computers connected to it become unavailable to use.
- When everyone tries to do the same thing, it takes a little while for the server to complete certain tasks. An example of this would be everyone logging into their profile in an organisation or a college at the same time.
- More expensive than a peer-to-peer network; you have to pay start-up costs.
- When you expand the server, it starts to slow down due to the limited bit rate per second.
14 Applications
There are numerous applications of peer-to-peer networks. The most commonly known is content distribution.
Content delivery
Many file sharing networks, such as gnutella, G2 and the eDonkey network, popularized peer-to-peer technologies. From 2004 on, such networks have formed the largest contributor of network traffic on the Internet. Other examples include:

- Peer-to-peer content delivery networks (P2P-CDN): Giraffic, Kontiki, Ignite, RedSwoosh.
- Peer-to-peer content services, e.g. caches for improved performance such as Correli Caches.[12]
- Software publication and distribution (Linux, several games) via file sharing networks.
- Streaming media: P2PTV and PDTP. Applications include TVUPlayer, Joost, CoolStreaming, Cybersky-TV, PPLive, LiveStation, Giraffic and Didiom. Spotify uses a peer-to-peer network along with streaming servers to stream music to its desktop music player.
- Peercasting for multicasting streams: see PeerCast, IceShare, FreeCast, Rawflow.
- Pennsylvania State University, MIT and Simon Fraser University are carrying on a project called LionShare, designed to facilitate file sharing among educational institutions globally.
- Osiris (Serverless Portal System) allows its users to create anonymous and autonomous web portals distributed via a P2P network.
Exchange of physical goods, services, or space
Peer-to-peer renting/sharing web platforms such as Rentalic enable people to find and reserve goods, services, or space on the virtual platform, but carry out the actual P2P transaction in the physical world (for example: emailing a local footwear vendor to reserve for you that comfy pair of slippers which you've always had your eyes on).
Networking
- Domain Name System, for Internet information retrieval. See Comparison of DNS server software.
- Cloud computing.
- Dalesa, a peer-to-peer web cache for LANs (based on IP multicasting).
Science
In bioinformatics, drug candidate identification. The first such program was begun in 2001 by the Centre for Computational Drug Discovery at the University of Oxford in cooperation with the National Foundation for Cancer Research. There are now several similar programs running under the United Devices Cancer Research Project. Other examples include the sciencenet P2P search engine and BOINC.
Search
YaCy, a free distributed search engine, built on principles of peer-to-peer networks.
Communications networks
- Skype, one of the most widely used Internet phone applications, uses P2P technology.
- VoIP (using application layer protocols such as SIP).
- Instant messaging and online chat.
- Completely decentralized networks of peers: Usenet (1979) and WWIVnet (1987).
General
Research projects include the Chord project, the PAST storage utility, P-Grid, and the CoopNet content distribution system. JXTA is a framework for peer applications; see Collanos Workplace (teamwork software) and Sixearch.
Miscellaneous
The U.S. Department of Defense has started research on P2P networks as part of its modern network warfare strategy.[13] In May 2003, Dr. Tether, Director of the Defense Advanced Research Projects Agency, testified that the U.S. military is using P2P networks. Kato et al.'s studies indicate that over 200 companies, with approximately $400 million USD, are investing in P2P networks. Besides file sharing, companies are also interested in distributed computing and content distribution. Other examples include wireless community networks such as Netsukuku. An earlier generation of peer-to-peer systems were called "metacomputing" or were classed as "middleware"; these include Legion and Globus. Bitcoin is a peer-to-peer based digital currency.
Historical perspective
Tim Berners-Lee's vision for the World Wide Web was close to a P2P network in that it assumed each user of the web would be an active editor and contributor, creating and linking content to form an interlinked "web" of links.[citation needed] This contrasts to the current broadcasting-like structure of the web.[citation needed] Some networks and channels such as Napster, OpenNAP and IRC serving channels use a client-server structure for some tasks (e.g., searching) and a P2P structure for others. Networks such as gnutella or Freenet use a P2P structure for nearly all tasks, with the exception of finding peers to connect to when first setting up.
P2P architecture embodies one of the key technical concepts of the Internet, described in the first Internet Request for Comments, RFC 1, "Host Software" dated April 7, 1969. More recently, the concept has achieved recognition in the general public in the context of the absence of central indexing servers in architectures used for exchanging multimedia files.
Network neutrality controversy
Peer-to-peer applications present one of the core issues in the network neutrality controversy. In October 2007, Comcast, one of the largest broadband Internet providers in the USA, started blocking P2P applications such as BitTorrent. Their rationale was that P2P is mostly used to share illegal content, and their infrastructure is not designed for continuous, high-bandwidth traffic. Critics point out that P2P networking has legitimate uses, and that this is another way that large providers are trying to control use and content on the Internet, and direct people towards a client-server-based application architecture. The client-server model provides financial barriers-to-entry to small publishers and individuals, and is quite inefficient for sharing large files.[citation needed]