
May 13, 2012

Diskless Node

A diskless node (or diskless workstation) is a workstation or personal computer without disk drives, which employs network booting to load its operating system from a server. (A computer may also be said to act as a diskless node if its disks are unused and network booting is used.)
Diskless nodes (or computers acting as such) are sometimes known as network computers or hybrid clients. "Hybrid client" may simply mean a diskless node, or it may be used in a more particular sense to mean a diskless node that runs some, but not all, applications remotely, as in the thin client computing architecture.

May 5, 2012

Internet Protocol Suite

The Internet Protocol suite, usually referred to as "TCP/IP," is the full set of internetworking protocols that operate in the network layer, the transport layer, and the application layer. While "TCP/IP" names two individual protocols, TCP and IP, the Internet Protocol suite refers to the entire set of protocols developed by the Internet community. Still, most people just say "TCP/IP" when they are referring to the Internet Protocol suite. The protocols in the suite are commonly compared layer by layer against the seven-layer OSI reference model.

May 4, 2012

Peer-to-peer

Peer-to-peer (abbreviated P2P) refers to a computer network in which each computer can act as a client or a server for the others, allowing shared access to files and peripherals without the need for a central server. P2P networks can be set up in the home, in a business, or over the Internet. Each network type requires all computers in the network to use the same or a compatible program to connect to each other and access files and other resources found on the other computers. P2P networks can be used for sharing content such as audio, video, data, or anything else in digital format.
P2P is a distributed application architecture that partitions tasks or workloads among peers. Peers are equally privileged participants in the application, and each computer in the network is referred to as a node. The owner of each computer on a P2P network sets aside a portion of its resources (such as processing power, disk storage, or network bandwidth) to be made directly available to other network participants, without the need for central coordination by servers or stable hosts. In this model, peers are both suppliers and consumers of resources, in contrast to the traditional client/server model, where only servers supply (send) and clients consume (receive).
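As a concrete illustration, here is a minimal sketch in Python of two peers on localhost; each peer listens for incoming requests while also querying the other, so every node plays both roles. The port numbers and messages are invented for the example.

import socket
import threading
import time

def serve(port):
    # The "server" half of a peer: accept one request and answer it.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", port))
        srv.listen()
        conn, _ = srv.accept()
        with conn:
            print("peer on", port, "got:", conn.recv(1024).decode())
            conn.sendall(b"hello back from port %d" % port)

def request(port, message):
    # The "client" half of a peer: initiate a session with another node.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect(("127.0.0.1", port))
        cli.sendall(message)
        return cli.recv(1024)

# Two peers, each listening on its own port -- no central server involved.
threading.Thread(target=serve, args=(9001,), daemon=True).start()
threading.Thread(target=serve, args=(9002,), daemon=True).start()
time.sleep(0.2)  # give both listeners a moment to start

print(request(9001, b"hi from the 9002 peer"))
print(request(9002, b"hi from the 9001 peer"))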

May 1, 2012

Client / Server Model

The client/server model is a distributed application structure that partitions tasks or workloads between the providers of a resource or service, called servers, and service requesters, called clients. Often clients and servers communicate over a computer network on separate hardware, but both client and server may reside in the same system. A server machine is a host that runs one or more server programs which share their resources with clients. A client does not share any of its resources, but requests a server's content or service function. Clients therefore initiate communication sessions with servers, which await incoming requests.
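The asymmetry is easy to see in code. Below is a minimal sketch using Python's standard socket module: the server shares a single resource (here, the current time) and only ever waits for requests, while the client initiates the session and consumes the response. The port number is arbitrary.

import socket
import threading
import time

def server(port=8900):
    # The server shares its resource and awaits incoming requests;
    # it never initiates contact with a client.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", port))
        srv.listen()
        while True:
            conn, _ = srv.accept()                   # block until a client connects
            with conn:
                conn.recv(1024)                      # read the client's request
                conn.sendall(time.ctime().encode())  # serve the resource

threading.Thread(target=server, daemon=True).start()
time.sleep(0.2)  # give the listener a moment to start

# The client initiates the communication session.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(("127.0.0.1", 8900))
    cli.sendall(b"what time is it?")
    print(cli.recv(1024).decode())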

Apr 29, 2012

Internet Engineering Task Force (IETF)

The Internet Engineering Task Force (IETF) develops and promotes Internet standards, cooperating closely with the W3C and ISO/IEC standards bodies and dealing in particular with the standards of the TCP/IP and Internet protocol suite. It is an open standards organization, with no formal membership or membership requirements. All participants and managers are volunteers, though their work is usually funded by their employers or sponsors; for instance, the current chairperson is funded by VeriSign and the U.S. government's National Security Agency.

Organization

The IETF is organized into a large number of working groups and informal "birds of a feather" (BoF) discussion groups, each dealing with a specific topic. Each group is intended to complete work on that topic and then disband. Each working group has an appointed chairperson (or sometimes several co-chairs), along with a charter that describes its focus and what it is expected to produce, and when. Working groups are open to all who want to participate and hold discussions on an open mailing list or at IETF meetings. Consensus on the mailing list is the final arbiter of decision-making.

Apr 28, 2012

Web Crawler

A Web crawler is a computer program that browses the World Wide Web in a methodical, automated manner; the process itself is called Web crawling or spidering. Other terms for Web crawlers are ants, automatic indexers, bots, Web spiders, Web robots, or, especially in the FOAF community, Web scutters. Many sites, in particular search engines, use spidering as a means of providing up-to-date data. Web crawlers are mainly used to create a copy of all the visited pages for later processing by a search engine, which will index the downloaded pages to provide fast searches. Crawlers can also be used to automate maintenance tasks on a Web site, such as checking links or validating HTML code, and to gather specific types of information from Web pages, such as harvesting e-mail addresses (usually for sending spam).
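A toy crawler makes the fetch-parse-follow loop concrete. The sketch below uses only Python's standard library; the seed URL is illustrative, and a real crawler would also honor robots.txt, rate-limit its requests, and restrict which hosts it visits.

from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    # Collect the href of every <a> tag encountered on a page.
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed, max_pages=5):
    # Breadth-first crawl: fetch a page, queue its links, repeat.
    queue, seen = [seed], set()
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except Exception:
            continue  # unreachable or non-decodable page: skip it
        parser = LinkExtractor()
        parser.feed(html)
        # Resolve relative links against the page they appeared on.
        queue.extend(urljoin(url, link) for link in parser.links)
        print("visited:", url)
    return seen

crawl("https://example.com")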

Apr 12, 2012

Subnet Mask Tutorial

The subnet mask plays an important role in computer networking: it is used to determine the subnetwork an IP address belongs to. It achieves this by masking the part of the IP address that will be used to create the subnetworks, while leaving unmasked the portion of the address that will be used for host addresses.

Networks based on TCP/IP use subnet masking to split an IP address into two parts: the first part divides the network into logical subnetworks, and the second part assigns computers, otherwise known as hosts, to those subnetworks. The subnet mask and IP address are interdependent; you look at the IP address in relation to the subnet mask to determine how many subnetworks, and how many hosts per subnetwork, there will be. We will focus solely on class C addresses, as these are the class that readers of this article are most likely to encounter.
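To make this concrete, the sketch below uses Python's standard ipaddress module to split the illustrative class C network 192.168.1.0/24 into eight subnetworks with a 255.255.255.224 (/27) mask: three extra mask bits give 2**3 = 8 subnets, each with 2**5 - 2 = 30 usable host addresses. It also shows that ANDing any address with the mask yields the address of its subnetwork.

import ipaddress

# Split a class C network with a /27 (255.255.255.224) subnet mask.
network = ipaddress.ip_network("192.168.1.0/24")
for subnet in network.subnets(new_prefix=27):
    hosts = list(subnet.hosts())
    print(subnet, "hosts:", hosts[0], "-", hosts[-1], f"({len(hosts)} usable)")

# The mask decides which bits identify the subnetwork: ANDing an
# address with the mask recovers the subnetwork's network address.
addr = int(ipaddress.ip_address("192.168.1.77"))
mask = int(ipaddress.ip_address("255.255.255.224"))
print(ipaddress.ip_address(addr & mask))  # -> 192.168.1.64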

The Importance Of Network Security

Since the rise in popularity of the Internet, we have started to use our computers for a much wider range of tasks than ever before. At home, we buy our groceries, do our banking, buy birthday presents, send communications via email, and write our life stories on social networking sites; at work, our businesses provide e-commerce via websites, staff send and receive emails, and phone calls and video conferencing are carried over the network using IP-based services. All of this is done online, and it would present a serious security threat were it not for the various security measures at our disposal. I would like to cover some basic examples of how network security helps to keep us safe online, both at home and in the workplace.

Apr 10, 2012

IP Address

An Internet Protocol address (IP address) is a numerical label assigned to each device (e.g., computer, printer) participating in a computer network that uses the Internet Protocol for communication. An IP address serves two principal functions: host or network interface identification and location addressing. Its role has been characterized as follows: "A name indicates what we seek. An address indicates where it is. A route indicates how to get there". The designers of the Internet Protocol defined an IP address as a 32-bit number and this system, known as Internet Protocol Version 4 (IPv4), is still in use today. However, due to the enormous growth of the Internet and the predicted depletion of available addresses, a new addressing system (IPv6), using 128 bits for the address, was developed in 1995, standardized as RFC 2460 in 1998, and its deployment has been ongoing since the mid-2000s.
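The "32-bit number" view is easy to demonstrate. This short Python sketch converts the IPv4 documentation address 192.0.2.1 between dotted-quad notation and its underlying integer, and shows that an IPv6 documentation address is simply a much larger, 128-bit number.

import ipaddress
import socket
import struct

# An IPv4 address is a 32-bit number; dotted-quad notation is just a
# human-readable rendering of its four bytes.
addr = "192.0.2.1"
as_int = int(ipaddress.ip_address(addr))
print(as_int)        # -> 3221225985
print(bin(as_int))   # the underlying 32 bits

# The same conversion via the lower-level socket/struct modules:
packed = socket.inet_aton(addr)          # the four raw bytes
print(struct.unpack("!I", packed)[0])    # -> 3221225985 again

# IPv6 widens the address to 128 bits.
print(int(ipaddress.ip_address("2001:db8::1")))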

Uniform Resource Locator (URL)

In computing, a uniform resource locator (URL) is a specific character string that constitutes a reference to an Internet resource.
A URL is technically a type of uniform resource identifier (URI) but in many technical documents and verbal discussions URL is often used as a synonym for URI.
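The parts a URL encodes are easy to see by taking one apart. Here is a small Python sketch using the standard urllib.parse module on an invented URL:

from urllib.parse import urlparse

url = "https://example.com:8080/docs/index.html?lang=en#section2"
parts = urlparse(url)

print(parts.scheme)    # 'https' -- the protocol used to reach the resource
print(parts.hostname)  # 'example.com'
print(parts.port)      # 8080
print(parts.path)      # '/docs/index.html'
print(parts.query)     # 'lang=en'
print(parts.fragment)  # 'section2'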

Apr 5, 2012

HyperText Markup Language (HTML)

HyperText Markup Language (HTML) is the main markup language for web pages, and HTML elements are the basic building blocks of web pages. HTML is written in the form of HTML elements consisting of tags enclosed in angle brackets (like <html>) within the web page content. HTML tags most commonly come in pairs like <h1> and </h1>, although some tags, known as empty elements, are unpaired, for example <img>. The first tag in a pair is the start tag and the second is the end tag (they are also called opening and closing tags). In between these tags web designers can add text, further tags, comments, and other types of text-based content. The purpose of a web browser is to read HTML documents and compose them into visible or audible web pages. The browser does not display the HTML tags, but uses them to interpret the content of the page.
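One way to see this structure is to walk a page the way a parser does. The sketch below feeds a tiny invented page through Python's standard html.parser module, printing each start tag, end tag, and the content in between; a browser performs the same walk but uses the tags to decide how to render the content rather than displaying them.

from html.parser import HTMLParser

# <h1>...</h1> is a paired element, <img> is an empty (unpaired)
# element, and plain text sits between start and end tags.
page = '<html><h1>Hello</h1><img src="logo.png"><p>Some text.</p></html>'

class TagPrinter(HTMLParser):
    def handle_starttag(self, tag, attrs):
        print("start tag:", tag, attrs)

    def handle_endtag(self, tag):
        print("end tag:  ", tag)

    def handle_data(self, data):
        print("content:  ", data)

TagPrinter().feed(page)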

Apr 4, 2012

World Wide Web (WWW)

The World Wide Web (abbreviated as WWW or W3, and commonly known as the Web) is a system of interlinked hypertext documents accessed via the Internet. With a web browser, one can view web pages that may contain text, images, videos, and other multimedia, and navigate between them via hyperlinks.
Using concepts from his earlier hypertext systems like ENQUIRE, British engineer and computer scientist Sir Tim Berners-Lee, now Director of the World Wide Web Consortium (W3C), wrote a proposal in March 1989 for what would eventually become the World Wide Web. At CERN, a European research organization near Geneva situated on Swiss and French soil, Berners-Lee and Belgian computer scientist Robert Cailliau proposed in 1990 to use hypertext "... to link and access information of various kinds as a web of nodes in which the user can browse at will", and they publicly introduced the project in December of that year.