May 5, 2012

Internet Protocol Suite

The Internet Protocol suite, usually referred to as "TCP/IP," is the full set of internetworking protocols that operate at the network, transport, and application layers. Strictly speaking, TCP/IP names two specific protocols, TCP and IP, while "Internet Protocol suite" refers to the entire family of protocols developed by the Internet community; in practice, most people simply say "TCP/IP" when they mean the whole suite. Its layers correspond roughly to those of the OSI model.
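
To make the layering concrete, here is a minimal Python sketch (an illustration, not part of any specification) that sends a hand-written HTTP request over a plain TCP socket: IP routes the packets at the network layer, TCP provides the reliable byte stream at the transport layer, and HTTP rides on top at the application layer. The host name example.com is only a placeholder.

```python
# Minimal sketch: a hand-written HTTP request over a TCP socket.
# IP (network layer) routes the packets, TCP (transport layer) delivers a
# reliable byte stream, and HTTP (application layer) is carried on top.
# "example.com" is just a placeholder host for illustration.
import socket

with socket.create_connection(("example.com", 80), timeout=5) as sock:
    # Application layer: a hand-written HTTP/1.1 request.
    sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")

    # Transport layer: TCP delivers the response bytes in order.
    response = b""
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            break
        response += chunk

print(response.split(b"\r\n", 1)[0].decode())  # e.g. "HTTP/1.1 200 OK"
```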

May 4, 2012

Peer-to-peer

Peer-to-peer (abbreviated P2P) refers to a computer network in which each computer can act as a client or a server for the other computers in the network, allowing shared access to files and peripherals without the need for a central server. P2P networks can be set up in a home, in a business, or over the Internet. In every case, each computer in the network must run the same or a compatible program to connect to the other computers and access the files and other resources they hold. P2P networks can be used to share any content in digital format, such as audio, video, or data.
P2P is a distributed application architecture that partitions tasks or workloads among peers, which are equally privileged participants in the application; each computer in the network is referred to as a node. The owner of each computer on a P2P network sets aside a portion of its resources, such as processing power, disk storage, or network bandwidth, to be made directly available to other network participants, without central coordination by servers or stable hosts. In this model, peers are both suppliers and consumers of resources, in contrast to the traditional client–server model, where only servers supply (send) and clients consume (receive).
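
As a minimal sketch of this dual role, the Python snippet below runs two peers on one machine; each peer listens for incoming connections like a server and can also dial out to the other like a client. The port numbers and the trivial ping/pong exchange are assumptions made up for illustration, not part of any real P2P protocol.

```python
# Minimal sketch of a peer that plays both roles at once: it listens for
# incoming connections (server role) and can connect out to other peers
# (client role). Ports 9001/9002 and the "ping"/"pong" exchange are
# illustrative assumptions only.
import socket
import threading
import time

def serve(port):
    """Server role: accept connections from other peers and answer them."""
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("127.0.0.1", port))
    listener.listen()
    while True:
        conn, _ = listener.accept()
        with conn:
            if conn.recv(1024) == b"ping":
                conn.sendall(b"pong")

def ping(peer_port):
    """Client role: connect to another peer and request something from it."""
    with socket.create_connection(("127.0.0.1", peer_port)) as conn:
        conn.sendall(b"ping")
        return conn.recv(1024)

# Two peers on one machine: each listens on its own port...
threading.Thread(target=serve, args=(9001,), daemon=True).start()
threading.Thread(target=serve, args=(9002,), daemon=True).start()
time.sleep(0.2)                # give the listeners a moment to start

# ...and either one can act as a client toward the other.
print(ping(9002))              # b'pong'
print(ping(9001))              # b'pong'
```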

May 1, 2012

Client / Server Model

The client/server model is a distributed application structure that partitions tasks or workloads between the providers of a resource or service, called servers, and the service requesters, called clients. Clients and servers often communicate over a computer network on separate hardware, but both may reside on the same system. A server machine is a host that runs one or more server programs which share their resources with clients. A client does not share any of its resources; it requests a server's content or service. Clients therefore initiate communication sessions with servers, which await incoming requests.
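
Here is a minimal sketch of the model using only Python's standard library: the server shares a single resource and waits for requests, while the client initiates the session and consumes the response. The port number and the greeting text are illustrative assumptions.

```python
# Minimal client/server sketch: the server awaits requests and shares a
# resource; the client initiates the session and only consumes.
# Port 8000 and the greeting text are illustrative assumptions.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class GreetingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Server role: respond to any GET request with the shared resource.
        body = b"hello from the server"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# The server runs in the background, awaiting incoming requests.
server = HTTPServer(("127.0.0.1", 8000), GreetingHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client initiates the communication session and consumes the response.
with urllib.request.urlopen("http://127.0.0.1:8000/") as resp:
    print(resp.read().decode())    # hello from the server

server.shutdown()
```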

Apr 29, 2012

Internet Engineering Task Force (IETF)

The Internet Engineering Task Force (IETF) develops and promotes Internet standards, cooperating closely with the W3C and ISO/IEC standards bodies and dealing in particular with standards of the TCP/IP and Internet protocol suite. It is an open standards organization, with no formal membership or membership requirements. All participants and managers are volunteers, though their work is usually funded by their employers or sponsors; for instance, the current chairperson is funded by VeriSign and the U.S. government's National Security Agency.

Organization

The IETF is organized into a large number of working groups and informal discussion groups (BoFs), each dealing with a specific topic. A group is intended to complete work on that topic and then disband. Each working group has an appointed chairperson (or sometimes several co-chairs), along with a charter that describes its focus and what it is expected to produce, and when. Participation is open to anyone who wants to take part, and discussions are held on an open mailing list or at IETF meetings. Consensus on the mailing list is the final arbiter of decision-making.

Apr 28, 2012

Web Crawler

A Web crawler is a computer program that browses the World Wide Web in a methodical, automated manner. Other terms for Web crawlers are ants, automatic indexers, bots, Web spiders, Web robots, or (especially in the FOAF community) Web scutters; the process itself is called Web crawling or spidering. Many sites, in particular search engines, use spidering as a means of providing up-to-date data. Web crawlers are mainly used to create a copy of all the visited pages for later processing by a search engine, which indexes the downloaded pages to provide fast searches. Crawlers can also be used to automate maintenance tasks on a Web site, such as checking links or validating HTML code, and to gather specific types of information from Web pages, such as harvesting e-mail addresses (usually for sending spam).
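
The sketch below shows the basic crawl loop in Python using only the standard library: fetch a page, extract its links, and queue the unseen ones breadth-first, up to a small page limit. The seed URL is a placeholder, and a real crawler would also respect robots.txt, throttle its requests, and handle errors and duplicate content far more carefully.

```python
# Minimal crawler sketch: breadth-first fetch-and-extract-links loop.
# The seed URL is a placeholder; politeness (robots.txt, rate limiting)
# is deliberately omitted to keep the example short.
import urllib.request
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collect the href targets of <a> tags on one page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed, max_pages=10):
    seen, queue, fetched = {seed}, deque([seed]), 0
    while queue and fetched < max_pages:
        url = queue.popleft()
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                html = resp.read().decode("utf-8", errors="replace")
        except OSError:
            continue                       # skip pages that fail to download
        fetched += 1
        print("fetched", url)
        parser = LinkExtractor()
        parser.feed(html)
        for link in parser.links:
            absolute = urljoin(url, link)  # resolve relative links
            if absolute.startswith("http") and absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)

crawl("https://example.com/")              # seed URL is a placeholder
```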
