World Wide Web
- "The Web" and "WWW" redirect here. For other uses, see Web and WWW (disambiguation). For the world's first browser, see WorldWideWeb.
The World Wide Web ("WWW" or simply the "Web") is a global collection of documents hosted on computers and available to the public. These documents include text files, images, videos, sound files and many other types of information. Individual piece of informations on the Web referred to as resources, and each is identified by a short, unique, global identifier called Uniform Resource Identifier (URI) so that each can be found, accessed and cross referenced in the simplest possible way.
The term is often mistakenly used as a synonym for the Internet itself, but the Web is only a collection of documents, while the Internet is the global network that connects a vast number of smaller networks. The Web is accessed over the Internet, as are other services that are not part of the Web, such as e-mail, instant messaging and Voice over IP.[1]
Basic terms
The World Wide Web is the combination of four basic ideas:
- Hypertext: a format of information that allows a reader, in a computer environment, to move from one part of a document to another, or from one document to another, through internal connections among those documents (called "hyperlinks");
- Resource Identifiers: unique identifiers used to locate a particular resource (computer file, document or other resource) on the network;
- The Client-server model of computing: a system in which client software or a client computer makes requests of server software or a server computer that provides the client with resources or services, such as data or files; and
- Markup language: characters or codes embedded in text which indicate structure, semantic meaning, or advice on presentation.
Web pages are often arranged in collections of related material called "websites." The act of following hyperlinks from one website to another is referred to as "browsing" or sometimes as "surfing" the Web.
The phrase "surfing the Internet" was first popularized in print by Jean Armour Polly, a librarian, in an article called Surfing the INTERNET, published in the University of Minnesota Wilson Library Bulletin in June, 1992. Although Polly may have developed the phrase independently, slightly earlier uses of similar terms have been found on the Usenet from 1991 and 1992, and some recollections claim it was also used verbally in the hacker community for a couple years before that. Polly is famous as "NetMom" in the history of the Internet.
Although the English word worldwide is normally written as one word (without a space or hyphen), the proper name World Wide Web and abbreviation WWW are now well-established even in formal English. The earliest references to the Web called it the WorldWideWeb (an example of computer programmers' fondness for CamelCase) or the World-Wide Web (with a hyphen, this version of the name is the closest to normal English usage).
Ironically, the abbreviation "WWW" contains two or three times as many syllables (depending on accent) as the full term "World Wide Web". There are multiple pronunciations of "www".
How the Web works
Web pages, and other files (such as images and videos), on the World Wide Web are stored on many different web servers located all around the world. These web servers are all connected to the Internet. The user accesses the Web through the Internet using a user agent program, such as a web browser (a type of user agent that renders and displays the requested web page to the user). The user can navigate to different pages on the Web either by typing the Uniform Resource Identifier (URI)[2], also referred to as the "web address" (for example http://pilot.citizendium.org/wiki/World_Wide_Web), into the web browser, or by following a hyperlink from another web page. The user agent uses this URI to determine which web server to ask for which resource, and using which communication protocol. Unless a problem has occurred, the server sends back the requested resource.
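To make this concrete, here is a minimal sketch, in TypeScript, of what a user agent does with a URI. It assumes a runtime with the standard fetch API (such as Node 18 or newer), and the error handling is deliberately simplified:

```typescript
// A minimal sketch of what a user agent does: resolve a URI,
// send an HTTP GET request, and read back the resource.
async function fetchResource(uri: string): Promise<string> {
  const response = await fetch(uri); // DNS lookup, TCP connect, HTTP GET
  if (!response.ok) {
    // The server reported a problem (e.g. 404 Not Found).
    throw new Error(`HTTP ${response.status} for ${uri}`);
  }
  return response.text(); // the resource body, e.g. an HTML document
}

// Usage: retrieve the page, then hand it to a rendering step.
fetchResource("http://pilot.citizendium.org/wiki/World_Wide_Web")
  .then((html) => console.log(html.length, "characters of HTML received"));
```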
Caching
If the user returns to a page fairly soon, it is likely that the data will not be retrieved again from the source web server as described above. By default, browsers cache web resources on the local hard drive. The browser then sends an HTTP request that asks for the data only if it has been updated since the last download. If it has not, the cached version is reused in the rendering step.
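The conditional request can be sketched as follows. The If-Modified-Since header and the 304 Not Modified status are standard HTTP, but the cached values shown here are hypothetical stand-ins for a real browser cache:

```typescript
// Sketch of a conditional GET: revalidate a cached copy using the
// If-Modified-Since header. The lastModified value would come from a
// previous response's Last-Modified header (hypothetical cache here).
async function revalidate(uri: string, lastModified: string, cachedBody: string) {
  const response = await fetch(uri, {
    headers: { "If-Modified-Since": lastModified },
  });
  if (response.status === 304) {
    // 304 Not Modified: the server sent no body; reuse the cached copy.
    return cachedBody;
  }
  return response.text(); // resource changed; use (and re-cache) the new body
}
```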
This is particularly valuable in reducing the amount of web traffic on the Internet. The decision about expiry is made independently for each resource (image, stylesheet, JavaScript file, etc., as well as for the HTML itself). Thus, even on sites with highly dynamic content, many of the basic resources are supplied only once per session or less. It is worthwhile for any website designer to collect all the CSS and JavaScript into a few site-wide files, so that they can be downloaded into users' caches, reducing page download times and demands on the server.
There are other components of the Internet that can cache web content. The most common in practice are the caching proxies built into corporate and academic firewalls, which cache web resources requested by one user for the benefit of all.
Apart from the facilities built into web servers that can ascertain when physical files have been updated, designers of dynamically generated web pages can control the HTTP headers sent back to requesting users, so that pages that should not be cached, such as Internet banking and news pages, are not.
This also helps in understanding the difference between the HTTP GET and POST verbs: data requested with GET may be cached if other conditions are met, whereas data obtained after POSTing information to the server usually will not be.
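As a rough illustration of a server controlling its own cacheability through headers, the sketch below uses Node's built-in http module; the paths and page bodies are hypothetical:

```typescript
import { createServer } from "node:http";

// Sketch of a server controlling caching via response headers.
// A static-style resource is marked cacheable; a dynamic page (think
// banking or news) is marked "no-store" so no cache will retain it.
createServer((req, res) => {
  if (req.url === "/account") {
    res.setHeader("Cache-Control", "no-store"); // never cache this page
    res.end("<html><body>Your balance (dynamic)</body></html>");
  } else {
    res.setHeader("Cache-Control", "max-age=3600"); // cacheable for an hour
    res.end("<html><body>Rarely changing content</body></html>");
  }
}).listen(8080);
```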
Web Addresses
On the Internet, a site is commonly identified to users by a domain name (such as www.google.com), while computers usually use Internet Protocol (IP) addresses (such as 209.85.135.104). To avoid forcing users to remember complex strings of numbers, browsers use the Domain Name System (DNS), which acts as a phonebook, matching each domain name with an IP address.
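This lookup step can be performed explicitly. The following sketch uses Node's built-in dns module (run as an ES module, for top-level await) to do what the browser does behind the scenes:

```typescript
import { lookup } from "node:dns/promises";

// Sketch: resolve a human-readable domain name to the numeric IP
// address that computers use, as a browser does before each request.
const { address, family } = await lookup("www.google.com");
console.log(`www.google.com resolves to ${address} (IPv${family})`);
```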
A web address consists of several parts. An example URI might be:[3]
http://www.w3.org:80/Consortium/activities.html#HTMLActivity
- http:// tells the browser to use the HTTP protocol.
- www.w3.org refers to the specific server. The browser would look this up with DNS.
- :80 refers to the network port. The default HTTP port is 80, so it generally isn't included in an address.
- /Consortium/ is the path on the server. As on desktop computers, most websites are organized in a folder-based hierarchy.
- activities.html refers to the specific web page.
- #HTMLActivity is called a "fragment". It directs the browser to a particular section within the page.
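The same decomposition can be performed programmatically with the standard WHATWG URL class; a small sketch, using the example address above (note that the class silently drops :80 because it is the HTTP default):

```typescript
// Sketch: decomposing the example address with the standard URL class.
const uri = new URL("http://www.w3.org:80/Consortium/activities.html#HTMLActivity");

console.log(uri.protocol); // "http:"  -> use the HTTP protocol
console.log(uri.hostname); // "www.w3.org"  -> the server, looked up via DNS
console.log(uri.port);     // ""  -> :80 is the HTTP default, so it is dropped
console.log(uri.pathname); // "/Consortium/activities.html"  -> path and page
console.log(uri.hash);     // "#HTMLActivity"  -> fragment within the page
```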
Web addresses frequently start with "www". The first part of the web address indicates the specific service being accessed. In the case of news.google.com, this means "Google.com's website, specifically the servers that handle 'news'." The reason that "www" is so common is that, customarily, different services or protocols are handled under different hostnames. For instance, public FTP was traditionally done from "ftp.name.org", while Gopher would have been handled by "gopher.name.org". This convention predated the World Wide Web, so organizations began calling their web servers "www" servers. Some sites still require the "www", while others don't.
Origins
- See also: History of the Internet
The underlying ideas of the Web can be traced as far back as 1980, when, at CERN in Switzerland, Tim Berners-Lee built ENQUIRE, a system that contained many of the core ideas of the modern Web.[4] In 1990, Berners-Lee created the first web server and also wrote the first web browser, called WorldWideWeb. The Web made its debut as a publicly available service on August 6, 1991.[5][4]
The crucial underlying concept of hypertext originated with older projects from the 1960s, such as Ted Nelson's Project Xanadu and Douglas Engelbart's oN-Line System (NLS).[6]
On April 30, 1993, CERN announced that the World Wide Web would be free to anyone.[7] This came two months after the announcement that Gopher, an older distributed document protocol, was no longer free to use.[8]
The World Wide Web, however, only gained critical mass after the 1993 release of the graphical Mosaic web browser, developed by Marc Andreessen at the National Center for Supercomputing Applications (NCSA). Prior to the release of Mosaic, graphics were not commonly mixed with text in web pages, and the Web's popularity was less than that of older protocols in use over the Internet, such as the Gopher protocol and Wide Area Information Server (WAIS). Mosaic's graphical user interface allowed the Web to become by far the most popular Internet service.
Web standards
At its core, the Web is made up of three standards:
- the Uniform Resource Identifier (URI), which is a universal system for referencing resources on the Web, such as Web pages;
- the HyperText Transfer Protocol (HTTP), which specifies how the browser and server communicate with each other; and
- the HyperText Markup Language (HTML), used to define the structure and content of hypertext documents.
Berners-Lee now heads the World Wide Web Consortium (W3C), which develops and maintains these and other standards that enable computers on the Web to effectively store and communicate different forms of information.
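The interplay of these three standards can be made visible by speaking HTTP by hand. The sketch below, using Node's built-in net module, opens a TCP connection and writes an HTTP/1.1 request for the resource from the URI example above (whether www.w3.org still serves this path over plain HTTP is not guaranteed; the point is the shape of the exchange):

```typescript
import { connect } from "node:net";

// Sketch: the three standards in a single exchange. The URI names the
// resource, HTTP is the protocol spoken over the socket, and the HTML
// document arrives in the response body.
const socket = connect(80, "www.w3.org", () => {
  socket.write(
    "GET /Consortium/activities.html HTTP/1.1\r\n" + // request line (HTTP)
      "Host: www.w3.org\r\n" +                       // host taken from the URI
      "Connection: close\r\n\r\n"                    // end of request headers
  );
});
socket.on("data", (chunk) => process.stdout.write(chunk)); // status, headers, HTML
```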
Java and JavaScript
Another significant advance in the technology was Sun Microsystems' Java platform. It initially enabled web servers to embed small programs (called applets) directly into the information being served, and these applets would run on the end-user's computer, allowing faster and richer user interaction. Eventually, it came to be more widely used as a tool for generating complex server-side content on request. Java never gained as much acceptance as Sun had hoped as a platform for client-side applets, for a variety of reasons, including lack of integration with other content (applets were confined to small boxes within the rendered page) and the poor performance (particularly the start-up delays) of Java VMs on the PC hardware of the time.
JavaScript, in contrast, is a scripting language that was developed specifically for web pages; its standardised version is ECMAScript. Although its name is similar to Java, JavaScript was developed by Netscape, not Sun Microsystems, and has almost nothing to do with Java, except that both derive their syntax from the C programming language. Like Java, JavaScript is object-oriented, but like C++ and unlike Java, it allows a mixture of object-oriented and procedural code. In conjunction with the Document Object Model, JavaScript has become a much more powerful language than its creators originally envisioned. Its usage is sometimes described by the term Dynamic HTML (DHTML), to emphasise a shift away from static HTML pages.
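As a small illustration of DHTML, the sketch below (written as TypeScript) updates part of a rendered page through the Document Object Model; the "clock" element ID is a hypothetical example:

```typescript
// Sketch of Dynamic HTML: script code manipulating the rendered page
// through the Document Object Model, without any reload.
// The "clock" element ID is hypothetical.
const clock = document.getElementById("clock");
if (clock) {
  // Rewrite one part of the page once per second.
  setInterval(() => {
    clock.textContent = new Date().toLocaleTimeString();
  }, 1000);
}
```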
Ajax (Asynchronous JavaScript And XML) is a JavaScript-based technique that may have a significant effect on the development of the World Wide Web. By providing a method whereby only part of a page need be updated when required, rather than the whole, Ajax makes such updates much faster and more efficient. Ajax is seen as an important aspect of Web 2.0. Examples of Ajax techniques currently in use can be seen in Gmail and Google Maps.
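A minimal sketch of the Ajax pattern follows; the endpoint and element ID are hypothetical, and the modern fetch API stands in for the older XMLHttpRequest object:

```typescript
// Sketch of the Ajax pattern: request just the data that changed and
// update one region of the page in place, instead of reloading it all.
// "/api/inbox-count" and "inbox-badge" are hypothetical names.
async function refreshInboxCount(): Promise<void> {
  const response = await fetch("/api/inbox-count"); // asynchronous request
  const count = await response.text();              // partial data, not a whole page
  const badge = document.getElementById("inbox-badge");
  if (badge) badge.textContent = count;             // update only this element
}
setInterval(refreshInboxCount, 30_000); // poll every 30 seconds
```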
Sociological implications
The Web, as it stands today, has allowed global interpersonal exchange on a scale unprecedented in human history. People separated by vast distances, or even large amounts of time, can use the Web to exchange — or even mutually develop — their most intimate and extensive thoughts, or alternatively their most casual attitudes and spirits. Emotional experiences, political ideas, cultural customs, musical idioms, business advice, artwork, photographs and literature can all be shared and disseminated digitally with less individual investment than ever before in human history. Although the existence and use of the Web relies upon material technology, which comes with its own disadvantages, its information does not use physical resources in the way that libraries or the printing press have. Therefore, propagation of information via the Web (via the Internet, in turn) is not constrained by movement of physical volumes, or by manual or material copying of information. And by virtue of being digital, the information of the Web can be searched more easily and efficiently than any library or physical volume, and vastly more quickly than a person could retrieve information about the world by way of physical travel or by way of mail, telephone, telegraph, or any other communicative medium.
The Web is the most far-reaching and extensive medium of personal exchange to appear on Earth. It has probably allowed many of its users to interact with many more groups of people, dispersed around the planet in time and space, than is possible when limited by physical contact or even when limited by every other existing medium of communication combined.
Because the Web is global in scale, some have suggested that it will nurture mutual understanding on a global scale. With such massive potential for social exchange, the Web has the potential to nurture empathy and symbiosis, but it also has the potential to incite belligerence on a global scale, or even to empower demagogues and repressive regimes in ways that were historically impossible to achieve.
Content
The Web is available to individuals outside the mass media. In order to "publish" a web page, one does not have to go through a publisher or other media institution, and potential readers can be found in all corners of the globe. The increased opportunity to publish is plainly observable in the countless personal pages, as well as pages by families, small shops, and so on, facilitated by the emergence of free web hosting services. Posting a small web page can be free, and even larger sites are inexpensive in comparison to traditional media.
Unlike books and documents, hypertext does not have a linear order from beginning to end, and it is not broken down into a hierarchy of chapters, sections and subsections. This allows readers to easily find more on a topic, move to related topics, or skip sections that do not interest them.
Many different kinds of information are now available on the Web, and it has become easier for those who wish to learn about other societies, their cultures and their peoples. When travelling in a foreign country or a remote town, one may be able to find information about the place on the Web, especially if the place is in one of the developed countries. Local newspapers, government publications, and other materials are easier to access, so the variety of information obtainable with the same effort may be said to have increased for users of the Internet.
Although some websites are available in multiple languages, many are in the local language only. Also, not all software supports all special characters or right-to-left scripts. These factors challenge the notion that the World Wide Web will bring unity to the world.
Statistics
According to a 2001 study [1], there were more than 550 billion documents on the Web, mostly in the "invisible Web". A 2002 survey of 2,024 million web pages [2] determined that by far the most web content was in English (56.4%); next were pages in German (7.7%), French (5.6%) and Japanese (4.9%). A more recent study [3], which used web searches in 75 different languages to sample the Web, determined that there were over 11.5 billion web pages in the publicly indexable Web as of January 2005.
Speed issues
Frustration over congestion in the Internet infrastructure, and the high latency that results in slow browsing, has led to an alternative name for the World Wide Web: the World Wide Wait. Speeding up the Internet is an ongoing discussion over the use of peering and QoS technologies. Other suggested solutions to reduce the World Wide Wait can be found at the W3C.
Standard guidelines for ideal web response times are (Nielsen 1999, page 42):
- 0.1 second (one tenth of a second). Ideal response time. The user doesn't sense any interruption.
- 1 second. Highest acceptable response time. Download times above 1 second interrupt the user experience.
- 10 seconds. Unacceptable response time. The user experience is interrupted and the user is likely to leave the site or system.
These numbers are useful for planning server capacity.
Link rot
Link rot occurs when web links break because the resources they point to have moved or ceased to exist. The ephemeral nature of the Web has prompted many efforts to archive it. The Internet Archive is one of the best-known such efforts; it has been archiving the Web since 1996.
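Link rot can also be detected mechanically. A sketch, assuming a runtime with the fetch API and run as an ES module (for top-level await), probes each link with an HTTP HEAD request; the link list is hypothetical:

```typescript
// Sketch: detect link rot by sending a HEAD request to each link and
// flagging responses that indicate the resource is gone.
// The list of links is hypothetical.
const links = ["http://www.w3.org/", "http://example.org/moved-away"];

for (const link of links) {
  try {
    const response = await fetch(link, { method: "HEAD" });
    if (!response.ok) console.log(`rotten (${response.status}): ${link}`);
  } catch {
    console.log(`unreachable: ${link}`); // DNS failure, refused connection, etc.
  }
}
```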
Academic conferences
The major academic event covering the WWW is the World Wide Web series of conferences, promoted by the International World Wide Web Conference Committee (IW3C2), which maintains a list with links to all conferences in the series.
Standards
The following is a cursory list of the documents that define the World Wide Web's three core standards:
- Uniform Resource Locators (URL)
- HyperText Transfer Protocol (HTTP)
  - RFC 1945, HTTP/1.0 specification (May 1996)
  - RFC 2616, HTTP/1.1 specification (June 1999)
  - RFC 2617, HTTP Authentication
  - HTTP/1.1 specification errata
- Hypertext Markup Language (HTML)
Notes
- ↑ Halsall, pp. 359, 568
- ↑ For historical reasons, the URI (Uniform Resource Identifier) is often referred to as the URL (Uniform Resource Locator); however, this is not always the correct term to use. See URL#A popular synonym for "URI" for more information.
- ↑ RFC 2616. Uniform Resource Identifier (URI) Schemes. The Internet Society (1999). Retrieved on 2007-01-17.
- ↑ 4.0 4.1 Berners-Lee, Tim (1993/1994). A Brief History of the Web. World Wide Web Consortium. Retrieved on 2007-01-19.
- ↑ Berners-Lee, Tim (1991-08-06). WorldWideWeb: Summary. Newsgroup post to alt.hypertext.
- ↑ Sturrock, Charles P.; Begle, Edwin F. (1995). Computerization and Networking of Materials Databases, Fourth Volume. ASTM International, 154. ISBN 0803120265.
- ↑ CERN (1993-04-30). Statement concerning CERN W3 software release into public domain. Press release. Retrieved on 2007-01-19.
- ↑ University of Minnesota announcement (February 1993) of licensing fees for its Gopher server software.
References
- Berners-Lee, Tim; Bray, Tim; Connolly, Dan; Cotton, Paul; Fielding, Roy; Jeckle, Mario; Lilley, Chris; Mendelsohn, Noah; Orchard, David; Walsh, Norman; Williams, Stuart (December 15, 2004). Architecture of the World Wide Web, Volume One. Version 20041215. W3C.
- Halsall, Fred [1985] (2005). Computer Networking and the Internet, fifth edition. Pearson Education. ISBN 0-321-26358-8.
- Fielding, R.; Gettys, J.; Mogul, J.; Frystyk, H.; Masinter, L.; Leach, P.; Berners-Lee, T. (June 1999). Hypertext Transfer Protocol — HTTP/1.1. Request For Comments 2616. Information Sciences Institute.
- Polo, Luciano (2003). World Wide Web Technology Architecture: A Conceptual Analysis. New Devices. Retrieved on July 31, 2005.
External links
- Open Directory — Computers: Internet: Web Design and Development
- WWW-Virtual Library: History of the Internet & W3
- Early archive of the first web site
- Internet Statistics: Growth and Usage of the Web and the Internet
- The History of the Web
- Webology
- The World Wide Web Virtual Library: Web Site Tools from the World Wide Web Virtual Library
- A comprehensive history with people, concepts and many interesting quotations