A web site is a collection of Web pages, images, videos or other digital assets that is

hosted on one or more web servers, usually accessible via the Internet.

A Web page is a document, typically written in (X)HTML, that is almost always

accessible via HTTP, a protocol that transfers information from the Web server to

display in the user's Web browser.

All publicly accessible websites are seen collectively as constituting the "World Wide
Web".


The pages of a website can usually be accessed from a common root URL called the

homepage, and usually reside on the same physical server. The URLs of the pages

organize them into a hierarchy, although the hyperlinks between them control how the

reader perceives the overall structure and how the traffic flows between the different

parts of the site.

Some websites require a subscription to access some or all of their content. Examples

of subscription sites include many business sites, parts of many news sites, academic

journal sites, gaming sites, message boards, Web-based e-mail services, social

networking websites, and sites providing real-time stock market data. Because they

require authentication to view the content, they are technically Intranet sites.

The World Wide Web was created in 1990 by CERN engineer Tim Berners-Lee.[1] On

30 April 1993, CERN announced that the World Wide Web would be free to anyone.[2]

Before the introduction of HTML and HTTP other protocols such as file transfer protocol

and the gopher protocol were used to retrieve individual files from a server. These

protocols offered a simple directory structure which the user navigated to choose files

to download. Documents were most often presented as plain text files without

formatting or were encoded in word processor formats.


Organized by function, a website may be:

* a personal website
* a commercial website
* a government website
* a non-profit organization website

It could be the work of an individual, a business or other organization, and is typically

dedicated to some particular topic or purpose. Any website can contain a hyperlink to

any other website, so the distinction between individual sites, as perceived by the

user, may sometimes be blurred.

Websites are written in, or dynamically converted to, HTML (HyperText Markup
Language) and are accessed using a software interface classified as a user agent.

Web pages can be viewed or otherwise accessed from a range of computer-based and

Internet-enabled devices of various sizes, including desktop computers, laptops, PDAs

and cell phones.

A website is hosted on a computer system known as a web server, also called an HTTP

server, and these terms can also refer to the software that runs on these systems and

that retrieves and delivers the Web pages in response to requests from the website

users. Apache is the most commonly used Web server software (according to Netcraft

statistics) and Microsoft's Internet Information Server (IIS) is also commonly used.

Website styles

Static Website

A Static Website is one that has web pages stored on the server in the same form as

the user will view them. It is primarily coded in HTML (HyperText Markup Language).

A static website is also called a Classic website, a 5-page website or a Brochure

website because it simply presents pre-defined information to the user. It may include

information about a company and its products and services via text, photos, Flash

animation, audio/video and interactive menus and navigation.

This type of website usually displays the same information to all visitors, thus the

information is static. Similar to handing out a printed brochure to customers or clients,

a static website will generally provide consistent, standard information for an extended

period of time. Although the website owner may make updates periodically, it is a

manual process to edit the text, photos and other content and may require basic

website design skills and software.

In summary, visitors are not able to control what information they receive via a static

website, and must instead settle for whatever content the website owner has decided

to offer at that time.

They are edited using four broad categories of software:

* Text editors, such as Notepad or TextEdit, where the HTML is manipulated directly

within the editor program
* WYSIWYG offline editors, such as Microsoft FrontPage and Adobe Dreamweaver

(previously Macromedia Dreamweaver), where the site is edited using a GUI

and the underlying HTML is generated automatically by the editor software
* WYSIWYG online editors, where any media-rich online presentation such as
websites, widgets, intros, and blogs is created on a Flash-based platform.
* Template-based editors, such as Rapidweaver and iWeb, which allow users to

quickly create and upload websites to a web server without having to know anything

about HTML, as they just pick a suitable template from a palette and add pictures and

text to it in a DTP-like fashion without ever having to see any HTML code.


World Wide Web

The World Wide Web (commonly shortened to the Web) is a system of interlinked

hypertext documents accessed via the Internet. With a Web browser, one can view

Web pages that may contain text, images, videos, and other multimedia and navigate

between them using hyperlinks. Using concepts from earlier hypertext systems, the

World Wide Web was begun in 1989 by English scientist Tim Berners-Lee, working at

the European Organization for Nuclear Research (CERN) in Geneva, Switzerland. In

1990, he proposed building a "web of nodes" storing "hypertext pages" viewed by

"browsers" on a network,[1] and released that web in 1992. Connected by the existing

Internet, other websites were created, around the world, adding international

standards for domain names and the HTML language. Since then, Berners-Lee has played

an active role in guiding the development of Web standards (such as the markup

languages in which Web pages are composed), and in recent years has advocated his

vision of a Semantic Web.

The World Wide Web enabled the spread of information over the Internet through an

easy-to-use and flexible format. It thus played an important role in popularising use of

the Internet,[2] to the extent that the World Wide Web has become a synonym for

Internet, with the two being conflated in popular use.[3]

How it works

Viewing a Web page on the World Wide Web normally begins either by typing the URL

of the page into a Web browser, or by following a hyperlink to that page or resource.

The Web browser then initiates a series of communication messages, behind the

scenes, in order to fetch and display it.

First, the server-name portion of the URL is resolved into an IP address using the

global, distributed Internet database known as the domain name system, or DNS. This

IP address is necessary to contact and send data packets to the Web server.
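This resolution step can be sketched with Python's standard library, which uses the same system resolver a browser would consult (the helper name `resolve` is illustrative, not a real API):

```python
import socket

# Resolve a host name to an IP address, as a browser must do before it
# can contact the Web server.
def resolve(hostname):
    # getaddrinfo consults the system resolver (DNS, local hosts file)
    # and returns (family, type, proto, canonname, sockaddr) tuples.
    infos = socket.getaddrinfo(hostname, 80, type=socket.SOCK_STREAM)
    # The IP address is the first element of the sockaddr tuple.
    return infos[0][4][0]

print(resolve("localhost"))
```

Real browsers also cache resolved addresses so that repeat visits skip this lookup.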

The browser then requests the resource by sending an HTTP request to the Web server

at that particular address. In the case of a typical Web page, the HTML text of the

page is requested first and parsed immediately by the Web browser, which will then

make additional requests for images and any other files that form a part of the page.
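The shape of that initial request can be illustrated with a short sketch using only the standard library (`build_get_request` is a hypothetical helper, not a real library call):

```python
from urllib.parse import urlsplit

# Build the plain-text HTTP/1.1 request a browser sends for a page.
def build_get_request(url):
    parts = urlsplit(url)
    path = parts.path or "/"
    if parts.query:
        path += "?" + parts.query
    # Request line, the Host header (mandatory in HTTP/1.1), and a blank
    # line terminating the header section.
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: {parts.hostname}\r\n"
            f"Connection: close\r\n\r\n")

print(build_get_request("http://www.example.com/path/file.html"))
```

Each image or stylesheet referenced by the returned HTML triggers another request of this same form.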

Statistics measuring a website's popularity are usually based on the number of 'page

views' or associated server 'hits', or file requests, which take place.

Having received the required files from the Web server, the browser then renders the

page onto the screen as specified by its HTML, CSS, and other Web languages. Any

images and other resources are incorporated to produce the on-screen Web page that

the user sees.

Most Web pages will themselves contain hyperlinks to other related pages and perhaps

to downloads, source documents, definitions and other Web resources. Such a

collection of useful, related resources, interconnected via hypertext links, is what was

dubbed a "web" of information. Making it available on the Internet created what Tim

Berners-Lee first called the WorldWideWeb (a term written in CamelCase, subsequently

discarded) in November 1990.[1]

Berners-Lee has said that the most important feature of the World Wide Web is "Error

404", which tells the user that a file does not exist. Without this feature, he said, the

web would have ground to a halt long ago.

Berners-Lee has also expressed regret over the format of the URL. Currently it is

divided into two parts - the route to the server which is divided by dots, and the file

path separated by slashes. The server route starts with the least significant element

and ends with the most significant, then the file path reverses this, moving from high

to low. Berners-Lee would have liked to see this rationalised. So an address which is

currently (e.g.) " /document/pictures/illustration.jpg" would

become http:/uk/co/examplesite/documents/pictures/illustration.jpg. In this format the

server no longer has any special place in the address, which is simply one coherent

hierarchical path.
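The rationalised order Berners-Lee describes amounts to reversing the dot-separated server name. A hypothetical transformation, sketched in Python:

```python
# Rewrite an address into the "most significant first" order described
# above: reverse the dot-separated host segments, then append the file
# path unchanged (a hypothetical transformation, not a real standard).
def rationalise(host, path):
    reversed_host = "/".join(reversed(host.split(".")))
    return "http:/" + reversed_host + path

print(rationalise("examplesite.co.uk", "/documents/pictures/illustration.jpg"))
# → http:/uk/co/examplesite/documents/pictures/illustration.jpg
```

With this scheme the server name is no longer a special component; it is just the upper levels of one hierarchical path.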


History of the World Wide Web

This NeXT Computer used by Sir Tim Berners-Lee at CERN became the first Web server.

The underlying ideas of the Web can be traced as far back as 1980, when, at CERN in

Switzerland, Sir Tim Berners-Lee built ENQUIRE (a reference to Enquire Within Upon

Everything, a book he recalled from his youth). While it was rather different from the

system in use today, it contained many of the same core ideas (and even some of the

ideas of Berners-Lee's next project after the World Wide Web, the Semantic Web).

In March 1989, Berners-Lee wrote a proposal[4] which referenced ENQUIRE and

described a more elaborate information management system. With help from Robert

Cailliau, he published a more formal proposal (on November 12, 1990) to build a

"Hypertext project" called "WorldWideWeb" (one word, also "W3")[1] as a "web of

nodes" with "hypertext documents" to store data. That data would be viewed in

"hypertext pages" (webpages) by various "browsers" (line-mode or full-screen) on the

computer network, using an "access protocol" connecting the "Internet and DECnet

protocol worlds".[1]

The proposal had been modeled after EBT's (Electronic Book Technology, a spin-off

from the Institute for Research in Information and Scholarship at Brown University)

Dynatext SGML reader that CERN had licensed. The Dynatext system, although

technically advanced (a key player in the extension of SGML ISO 8879:1986 to

Hypermedia within HyTime), was considered too expensive, with an inappropriate
licensing policy for general HEP (High Energy Physics) community use: a fee applied for
each document and each time a document was altered.

A NeXT Computer was used by Berners-Lee as the world's first Web server and also to

write the first Web browser, WorldWideWeb, in 1990. By Christmas 1990, Berners-Lee

had built all the tools necessary for a working Web:[5] the first Web browser (which

was a Web editor as well), the first Web server, and the first Web pages[6] which

described the project itself.

On August 6, 1991, he posted a short summary of the World Wide Web project on the

alt.hypertext newsgroup.[7] This date also marked the debut of the Web as a publicly

available service on the Internet.

The first server outside Europe was set up at SLAC in December 1991.[8]

The crucial underlying concept of hypertext originated with older projects from the

1960s, such as the Hypertext Editing System (HES) at Brown University (developed by,

among others, Ted Nelson and Andries van Dam), Ted Nelson's Project Xanadu, and Douglas

Engelbart's oN-Line System (NLS). Both Nelson and Engelbart were in turn inspired by

Vannevar Bush's microfilm-based "memex," which was described in the 1945 essay "As

We May Think".

Berners-Lee's breakthrough was to marry hypertext to the Internet. In his book

Weaving The Web, he explains that he had repeatedly suggested that a marriage

between the two technologies was possible to members of both technical communities,

but when no one took up his invitation, he finally tackled the project himself. In the

process, he developed a system of globally unique identifiers for resources on the Web

and elsewhere: the Uniform Resource Identifier.

The World Wide Web had a number of differences from other hypertext systems that

were then available. The Web required only unidirectional links rather than bidirectional

ones. This made it possible for someone to link to another resource without action by

the owner of that resource. It also significantly reduced the difficulty of implementing

Web servers and browsers (in comparison to earlier systems), but in turn presented the

chronic problem of link rot. Unlike predecessors such as HyperCard, the World Wide

Web was non-proprietary, making it possible to develop servers and clients

independently and to add extensions without licensing restrictions.

On April 30, 1993, CERN announced[9] that the World Wide Web would be free to

anyone, with no fees due. Coming two months after the announcement that the Gopher

protocol was no longer free to use, this produced a rapid shift away from Gopher and

towards the Web. An early popular Web browser was ViolaWWW, which was based

upon HyperCard.

Scholars generally agree, however, that the turning point for the World Wide Web

began with the introduction[10] of the Mosaic Web browser[11] in 1993, a graphical

browser developed by a team at the National Center for Supercomputing Applications at

the University of Illinois at Urbana-Champaign (NCSA-UIUC), led by Marc Andreessen.

Funding for Mosaic came from the High-Performance Computing and Communications

Initiative, a funding program initiated by the High Performance Computing and

Communication Act of 1991, one of several computing developments initiated by

Senator Al Gore.[12] Prior to the release of Mosaic, graphics were not commonly mixed

with text in Web pages, and the Web's popularity was less than that of older protocols in use over the

Internet, such as Gopher and Wide Area Information Servers (WAIS). Mosaic's graphical

user interface allowed the Web to become, by far, the most popular Internet protocol.

The World Wide Web Consortium (W3C) was founded by Tim Berners-Lee after he left

the European Organization for Nuclear Research (CERN) in October 1994. It was

founded at the Massachusetts Institute of Technology Laboratory for Computer Science

(MIT/LCS) with support from the Defense Advanced Research Projects Agency

(DARPA)—which had pioneered the Internet—and the European Commission.

Web standards

Many formal standards and other technical specifications define the operation of

different aspects of the World Wide Web, the Internet, and computer information

exchange. Many of the documents are the work of the World Wide Web Consortium

(W3C), headed by Berners-Lee, but some are produced by the Internet Engineering

Task Force (IETF) and other organizations.

Usually, when Web standards are discussed, the following publications are seen as
foundational:


* Recommendations for markup languages, especially HTML and XHTML, from the

W3C. These define the structure and interpretation of hypertext documents.
* Recommendations for stylesheets, especially CSS, from the W3C.
* Standards for ECMAScript (usually in the form of JavaScript), from Ecma

* Recommendations for the Document Object Model, from W3C.

Additional publications provide definitions of other essential technologies for the World

Wide Web, including, but not limited to, the following:

* Uniform Resource Identifier (URI), which is a universal system for referencing

resources on the Internet, such as hypertext documents and images. URIs, often called

URLs, are defined by the IETF's RFC 3986 / STD 66: Uniform Resource Identifier (URI):

Generic Syntax, as well as its predecessors and numerous URI scheme-defining RFCs;
* HyperText Transfer Protocol (HTTP), especially as defined by RFC 2616: HTTP/1.1

and RFC 2617: HTTP Authentication, which specify how the browser and server

authenticate each other.
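RFC 3986's generic syntax is what standard URI parsers implement. For instance, Python's `urllib.parse` splits a URI into exactly those components:

```python
from urllib.parse import urlsplit

# Split a URI into the generic components named by RFC 3986:
# scheme://authority/path?query#fragment
parts = urlsplit("http://www.example.com:8080/docs/index.html?lang=en#top")
print(parts.scheme, parts.netloc, parts.path, parts.query, parts.fragment)
# → http www.example.com:8080 /docs/index.html lang=en top
```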


Privacy

Computer users, who save time and money, and who gain conveniences and

entertainment, may or may not have surrendered the right to privacy in exchange for

using a number of technologies including the Web.[13] Worldwide, more than a half

billion people have used a social network service,[14] and of Americans who grew up

with the Web, half created an online profile[15] and are part of a generational shift

that could be changing norms.[16][17] Among services paid for by advertising, Yahoo!

could collect the most data about users of commercial websites, about 2,500 bits of

information per month about each typical user of its site and its affiliated advertising

network sites. Yahoo! was followed by MySpace with about half that potential and then

by AOL-TimeWarner, Google, Facebook, Microsoft, and eBay.[18]

Privacy representatives from 60 countries have resolved to ask for laws to complement

industry self-regulation, for education for children and other minors who use the Web,

and for default protections for users of social networks.[19] They also believe data

protection for personally identifiable information benefits business more than the sale

of that information.[19] Users can opt-in to features in browsers from companies such

as Apple, Google, Microsoft (beta) and Mozilla (beta) to clear their personal histories

locally and block some cookies and advertising networks[20] but they are still tracked

in websites' server logs.[citation needed] Berners-Lee and colleagues see hope in

accountability and appropriate use achieved by extending the Web's architecture to

policy awareness, perhaps with audit logging, reasoners and appliances.[21]


Security

The Web has become criminals' preferred pathway for spreading malware. Cybercrime

carried out on the Web can include identity theft, fraud, espionage and intelligence

gathering.[22] Web-based vulnerabilities now outnumber traditional computer security

concerns,[23] and as measured by Google, about one in ten Web pages may contain

malicious code.[24] Most Web-based attacks take place on legitimate websites, and

most, as measured by Sophos, are hosted in the United States, China and Russia.[25]

The most common of all malware threats are SQL injection attacks against websites.[26]

Through HTML and URIs the Web was vulnerable to attacks like cross-site scripting

(XSS) that came with the introduction of JavaScript[27] and were exacerbated to some

degree by Web 2.0 and Ajax web design that favors the use of scripts.[28] Today by

one estimate, 70% of all websites are open to XSS attacks on their users.[29]

Proposed solutions vary to extremes. Large security vendors like McAfee already design

governance and compliance suites to meet post-9/11 regulations,[30] and some, like

Finjan, have recommended active real-time inspection of code and all content regardless

of its source.[22] Some have argued that for enterprise to see security as a business

opportunity rather than a cost center,[31] "ubiquitous, always-on digital rights

management" enforced in the infrastructure by a handful of organizations must replace

the hundreds of companies that today secure data and networks.[32] Jonathan Zittrain

has said users sharing responsibility for computing safety is far preferable to locking

down the Internet.[33]

Web accessibility

Many countries regulate web accessibility as a requirement for web sites.


Java

A significant advance in Web technology was Sun Microsystems' Java platform. It

enables Web pages to embed small programs (called applets) directly into the view.

These applets run on the end-user's computer, providing a richer user interface than

simple Web pages. Java client-side applets never gained the popularity that Sun had

hoped for a variety of reasons, including lack of integration with other content (applets

were confined to small boxes within the rendered page) and the fact that many

computers at the time were supplied to end users without a suitably installed Java

Virtual Machine, and so required a download by the user before applets would appear.

Adobe Flash now performs many of the functions that were originally envisioned for

Java applets, including the playing of video content, animation, and some rich GUI

features. Java itself has become more widely used as a platform and language for

server-side and other programming.


JavaScript

JavaScript, on the other hand, is a scripting language that was initially developed for

use within Web pages. The standardized version is ECMAScript. While its name is

similar to Java, JavaScript was developed by Netscape and has very little to do with

Java, although the syntax of both languages is derived from the C programming

language. In conjunction with a Web page's Document Object Model (DOM), JavaScript

has become a much more powerful technology than its creators originally

envisioned.[citation needed] The manipulation of a page's DOM after the page is

delivered to the client has been called Dynamic HTML (DHTML), to emphasize a shift

away from static HTML displays.

In simple cases, all the optional information and actions available on a

JavaScript-enhanced Web page will have been downloaded when the page was first

delivered. Ajax ("Asynchronous JavaScript and XML") is a group of interrelated web

development techniques used for creating interactive web applications that provide a

method whereby parts within a Web page may be updated, using new information

obtained over the network at a later time in response to user actions. This allows the

page to be more responsive, interactive and interesting, without the user having to

wait for whole-page reloads. Ajax is seen as an important aspect of what is being

called Web 2.0. Examples of Ajax techniques currently in use can be seen in Gmail,

Google Maps, and other dynamic Web applications.

Publishing Web pages

Web page production is available to individuals outside the mass media. In order to

publish a Web page, one does not have to go through a publisher or other media

institution, and potential readers could be found in all corners of the globe.

Many different kinds of information are available on the Web, and for those who wish

to know other societies, cultures, and peoples, it has become easier.

The increased opportunity to publish materials is observable in the countless personal

and social networking pages, as well as sites by families, small shops, etc., facilitated

by the emergence of free Web hosting services.


Statistics

According to a 2001 study, there were far more than 550 billion documents on

the Web, mostly in the invisible Web, or deep Web.[34] A 2002 survey of 2,024 million

Web pages[35] determined that by far the most Web content was in English: 56.4%;

next were pages in German (7.7%), French (5.6%), and Japanese (4.9%). A more

recent study, which used Web searches in 75 different languages to sample the Web,

determined that there were over 11.5 billion Web pages in the publicly indexable Web

as of the end of January 2005.[36] As of June 2008, the indexable web contains at

least 63 billion pages.[37] On July 25, 2008, Google software engineers Jesse Alpert

and Nissan Hajaj announced that Google Search had discovered one trillion unique
URLs.[38]

Over 100.1 million websites operated as of March 2008.[39] Of these 74% were

commercial or other sites operating in the .com generic top-level domain.[39]

Speed issues

Frustration over congestion issues in the Internet infrastructure and the high latency

that results in slow browsing has led to an alternative, pejorative name for the World

Wide Web: the World Wide Wait.[citation needed] Speeding up the Internet is an

ongoing discussion over the use of peering and QoS technologies. Other solutions to

reduce the World Wide Wait can be found at the W3C.

Standard guidelines for ideal Web response times are:[40]

* 0.1 second (one tenth of a second). Ideal response time. The user doesn't sense

any interruption.
* 1 second. Highest acceptable response time. Download times above 1 second

interrupt the user experience.
* 10 seconds. Unacceptable response time. The user experience is interrupted and

the user is likely to leave the site or system.

These numbers are useful for planning server capacity.


Caching

If a user revisits a Web page after only a short interval, the page data may not need to

be re-obtained from the source Web server. Almost all Web browsers cache

recently-obtained data, usually on the local hard drive. HTTP requests sent by a

browser will usually only ask for data that has changed since the last download. If the

locally cached data is still current, it will be reused.

Caching helps reduce the amount of Web traffic on the Internet. The decision about

expiration is made independently for each downloaded file, whether image, stylesheet,

JavaScript, HTML, or whatever other content the site may provide. Thus even on sites

with highly dynamic content, many of the basic resources only need to be refreshed

occasionally. Web site designers find it worthwhile to collate resources such as CSS

data and JavaScript into a few site-wide files so that they can be cached efficiently.

This helps reduce page download times and lowers demands on the Web server.

There are other components of the Internet that can cache Web content. Corporate and

academic firewalls often cache Web resources requested by one user for the benefit of

all. (See also Caching proxy server.) Some search engines, such as Google or Yahoo!,

also store cached content from websites.

Apart from the facilities built into Web servers that can determine when files have

been updated and so need to be re-sent, designers of dynamically-generated Web

pages can control the HTTP headers sent back to requesting users, so that transient or

sensitive pages are not cached. Internet banking and news sites frequently use this


Data requested with an HTTP 'GET' is likely to be cached if other conditions are met;

data obtained in response to a 'POST' is assumed to depend on the data that was

POSTed and so is not cached.
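The conditional request described above can be sketched as server-side logic: the browser sends an If-Modified-Since header, and the server answers 304 Not Modified when the cached copy is still current (`respond` is a hypothetical helper; real servers implement this inside their request pipeline):

```python
from email.utils import parsedate_to_datetime, format_datetime
from datetime import datetime, timezone

# Decide between a full response and "304 Not Modified", based on the
# resource's last-modified time and the client's If-Modified-Since header.
def respond(last_modified, if_modified_since=None):
    if if_modified_since is not None:
        cached = parsedate_to_datetime(if_modified_since)
        if last_modified <= cached:
            return 304, b""                 # client's cached copy is fresh
    return 200, b"<html>...</html>"         # send the full resource

modified = datetime(2008, 6, 1, tzinfo=timezone.utc)
print(respond(modified, format_datetime(datetime(2008, 7, 1, tzinfo=timezone.utc))))
```

A 304 response carries no body, which is exactly the traffic saving that caching provides.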

Link rot and Web archival

Main article: Link rot

Over time, many Web resources pointed to by hyperlinks disappear, relocate, or are

replaced with different content. This phenomenon is referred to in some circles as "link

rot" and the hyperlinks affected by it are often called "dead links".

The ephemeral nature of the Web has prompted many efforts to archive Web sites. The

Internet Archive is one of the most well-known efforts; it has been active since 1996.

Academic conferences

The major academic event covering the Web is the World Wide Web Conference,

promoted by IW3C2.

WWW prefix in Web addresses

The letters "www" are commonly found at the beginning of Web addresses because of

the long-standing practice of naming Internet hosts (servers) according to the services

they provide. So for example, the host name for a Web server is often "www"; for an

FTP server, "ftp"; and for a USENET news server, "news" or "nntp" (after the news

protocol NNTP). These host names appear as DNS subdomain names, as in


This use of such prefixes is not required by any technical standard; indeed, the first

Web server was at "",[41] and even today many Web sites exist without

a "www" prefix. The "www" prefix has no meaning in the way the main Web site is

shown. The "www" prefix is simply one choice for a Web site's host name.

However, some website addresses require the www. prefix, and if typed without one,
won't work; there are also some which must be typed without the prefix. Sites that do
not have host headers properly set up are the cause of this: some hosting companies
do not set up a www or @ A record in the web server configuration and/or at the DNS
server level.

Some Web browsers will automatically try adding "www." to the beginning, and

possibly ".com" to the end, of typed URLs if no host is found without them. All major

web browsers will also prefix "www." and append ".com" to the

address bar contents if the Control and Enter keys are pressed simultaneously. For

example, entering "example" in the address bar and then pressing either Enter or

Control+Enter will usually resolve to "www.example.com", depending on the

exact browser version and its settings.

Web server

1. HTTP: every web server program operates by accepting HTTP requests from the

client, and providing an HTTP response to the client. The HTTP response usually

consists of an HTML document, but can also be a raw file, an image, or some other

type of document (defined by MIME types). If an error is found in the client request or

while trying to serve it, a web server has to send an error response which may include

some custom HTML or text messages to better explain the problem to end users.
2. Logging: web servers usually also have the capability of logging detailed

information about client requests and server responses to log files; this allows the

webmaster to collect statistics by running log analyzers on these files.
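A toy version of this request/response cycle can be written with Python's built-in http.server module (a sketch, not production software; the handler class and page body are invented for illustration):

```python
import http.server
import threading
import urllib.request

# Answer GET requests with a small HTML document; the base class writes
# one log line per request to stderr, a minimal form of the logging
# capability described above.
class Hello(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body>hello</body></html>"
        self.send_response(200)                        # status line
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)                          # the HTML document

server = http.server.HTTPServer(("127.0.0.1", 0), Hello)  # port 0: any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Act as the client: send an HTTP request and read the response.
with urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/") as r:
    status, body = r.status, r.read()
print(status, body)
```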

In practice many web servers implement the following features also:

1. Authentication, optional authorization request (request of user name and

password) before allowing access to some or all kinds of resources.
2. Handling of static content (file content recorded in server's filesystem(s)) and

dynamic content by supporting one or more related interfaces (SSI, CGI, SCGI, FastCGI,

JSP, PHP, ASP, ASP.NET, Server API such as NSAPI, ISAPI, etc.).
3. HTTPS support (by SSL or TLS) to allow secure (encrypted) connections to the

server on the standard port 443 instead of usual port 80.
4. Content compression (e.g. by gzip encoding) to reduce the size of the responses

(to lower bandwidth usage, etc.).
5. Virtual hosting to serve many web sites using one IP address.
6. Large file support to be able to serve files whose size is greater than 2 GB on a

32-bit OS.
7. Bandwidth throttling to limit the speed of responses in order to not saturate the

network and to be able to serve more clients.

Origin of returned content

The origin of the content sent by the server is called:

* static if it comes from an existing file lying on a filesystem;
* dynamic if it is dynamically generated by some other program or script or

application programming interface (API) called by the web server.

Serving static content is usually much faster (from 2 to 100 times) than serving

dynamic content, especially if the latter involves data pulled from a database.

Path translation

Web servers are able to map the path component of a Uniform Resource Locator (URL)
into:


* a local file system resource (for static requests);
* an internal or external program name (for dynamic requests).

For a static request the URL path specified by the client is relative to the Web server's

root directory.

Consider the following URL as it would be requested by a client:

http://www.example.com/path/file.html

The client's web browser will translate it into a connection to www.example.com with

the following HTTP 1.1 request:

GET /path/file.html HTTP/1.1

The web server on www.example.com will append the given path to the path of its

root directory. On Unix machines, this is commonly /var/www. The result is the local

file system resource:

/var/www/path/file.html
The web server will then read the file, if it exists, and send a response to the client's

web browser. The response will describe the content of the file and contain the file

itself.
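The translation step can be sketched in Python. The /var/www root matches the example above; the traversal check is an added assumption, since the text only describes the append step:

```python
import os.path

# Map a URL path onto the server's document root, refusing any path that
# would escape the root via ".." segments (the guard is an assumption;
# real servers vary in how they enforce this).
def translate(url_path, root="/var/www"):
    # Normalise "." and ".." segments after joining under the root.
    candidate = os.path.normpath(os.path.join(root, url_path.lstrip("/")))
    if candidate != root and not candidate.startswith(root + os.sep):
        raise ValueError("path escapes document root")
    return candidate

print(translate("/path/file.html"))   # → /var/www/path/file.html
```

The server then reads the resulting file, if it exists, and returns it to the client.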

Load limits

A web server (program) has defined load limits: it can handle only a limited number of
concurrent client connections (usually between 2 and 60,000, by default between 500
and 1,000) per IP address (and TCP port), and it can serve only a certain maximum
number of requests per second, depending on:

* its own settings;
* the HTTP request type;
* content origin (static or dynamic);
* whether the served content is cached or not;
* the hardware and software limits of the OS on which it is running.

When a web server is near to or over its limits, it becomes overloaded and may become
unresponsive.
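The concurrent-connection limit described above can be mimicked with a counting semaphore. This is a toy sketch (the class name is invented; real servers enforce the limit when accepting sockets, and usually queue rather than refuse outright):

```python
import threading

class ConnectionLimiter:
    """Toy model of a web server's concurrent-connection limit."""

    def __init__(self, max_concurrent: int):
        self._slots = threading.Semaphore(max_concurrent)

    def try_acquire(self) -> bool:
        # Non-blocking: False means the server is at its limit.
        return self._slots.acquire(blocking=False)

    def release(self) -> None:
        self._slots.release()

limiter = ConnectionLimiter(max_concurrent=2)
results = [limiter.try_acquire() for _ in range(3)]
print(results)                # [True, True, False]: the third connection is refused
limiter.release()             # one client disconnects...
print(limiter.try_acquire())  # True: ...freeing a slot for the next client
```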


Overload causes
A daily graph of a web server's load, indicating a spike in the load early in the day.

At any time web servers can be overloaded because of:

* too much legitimate web traffic (i.e. thousands or even millions of clients hitting
the web site in a short interval of time, e.g. the Slashdot effect);
* DDoS (Distributed Denial of Service) attacks;
* computer worms, which sometimes cause abnormal traffic because of millions of
infected computers (not coordinated among themselves);
* XSS viruses, which can cause high traffic because of millions of infected browsers
and/or web servers;
* Internet web robot traffic that is not filtered/limited on large web sites with very
few resources (bandwidth, etc.);
* Internet (network) slowdowns, so that client requests are served more slowly and
the number of connections increases so much that server limits are reached;
* partial unavailability of web servers (computers); this can happen because of
required or urgent maintenance or upgrades, hardware or software failures, back-end
(e.g. database) failures, etc.; in these cases the remaining web servers receive too
much traffic and become overloaded.

Overload symptoms

The symptoms of an overloaded web server are:

* requests are served with (possibly long) delays (from 1 second to a few hundred
seconds);
* 500, 502, 503, or 504 HTTP errors are returned to clients (sometimes unrelated
404 or even 408 errors may also be returned);
* TCP connections are refused or reset (interrupted) before any content is sent to
clients;
* in very rare cases, only partial contents are sent (but this behavior may well be
considered a bug, even if it usually depends on unavailable system resources).

Anti-overload techniques

To partially overcome the load limits above and to prevent overload, most popular web
sites use common techniques such as:

* managing network traffic, by using:
  o firewalls to block unwanted traffic coming from bad IP sources or having bad
patterns;
  o HTTP traffic managers to drop, redirect, or rewrite requests having bad HTTP
patterns;
  o bandwidth management and traffic shaping, in order to smooth down peaks in
network usage;
* deploying web cache techniques;
* using different domain names to serve different (static and dynamic) content from
separate web servers;
* using different domain names and/or computers to separate big files from small
and medium-sized files; the idea is to be able to fully cache small and medium-sized
files and to efficiently serve big or huge (over 10 - 1000 MB) files by using different
settings;
* using many web servers (programs) per computer, each one bound to its own
network card and IP address;
* using many web servers (computers) that are grouped together so that they act or
are seen as one big web server; see also: load balancer;
* adding more hardware resources (i.e. RAM, disks) to each computer;
* tuning OS parameters for hardware capabilities and usage;
* using more efficient computer programs for web servers;
* using other workarounds, especially if dynamic content is involved.
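Bandwidth management and traffic shaping, listed above, are often implemented with a token bucket: requests are admitted only while tokens remain, and tokens refill at a fixed rate, which smooths out traffic peaks. The sketch below is hypothetical (class and method names are invented), with the clock passed in explicitly to keep it deterministic:

```python
class TokenBucket:
    """Token-bucket throttle: one simple way to smooth traffic peaks."""

    def __init__(self, rate_per_sec: float, capacity: float):
        self.rate = rate_per_sec    # tokens added per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.last = 0.0

    def allow(self, now: float) -> bool:
        # Refill according to elapsed time, capped at the bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate_per_sec=1.0, capacity=2.0)
print([bucket.allow(0.0), bucket.allow(0.0), bucket.allow(0.0)])  # [True, True, False]
print(bucket.allow(1.0))  # True: one token was refilled after a second
```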

Historical notes
The world's first web server.

In 1989 Tim Berners-Lee proposed to his employer CERN (European Organization for

Nuclear Research) a new project, which had the goal of easing the exchange of

information between scientists by using a hypertext system. As a result of the

implementation of this project, in 1990 Berners-Lee wrote two programs:

* a browser called WorldWideWeb;
* the world's first web server, later known as CERN HTTPd, which ran on NeXTSTEP.

Between 1991 and 1994 the simplicity and effectiveness of early technologies used to

surf and exchange data through the World Wide Web helped to port them to many

different operating systems and spread their use among lots of different social groups

of people, first in scientific organizations, then in universities and finally in industry.

In 1994 Tim Berners-Lee founded the World Wide Web Consortium (W3C) to regulate the
further development of the many technologies involved (HTTP, HTML, etc.) through a
standardization process.

The years since have seen exponential growth in the number of web sites and servers.

Market structure

Given below is a list of top Web server software vendors published in a Netcraft survey

in September 2008.
Vendor      Product    Web Sites Hosted    Percent
Apache      Apache         91,068,713      50.24%
Microsoft   IIS            62,364,634      34.40%
Google      GWS            10,072,687       5.56%
lighttpd    lighttpd        3,095,928       1.71%
nginx       nginx           2,562,554       1.41%
Oversee     Oversee         1,938,953       1.07%
Others      -              10,174,366       5.61%
Total       -             181,277,835     100.00%


Modern programming

Quality requirements

Whatever the approach to software development may be, the final program must
satisfy some fundamental properties. The following five are among the most important:

* Efficiency/Performance: the amount of system resources a program consumes
(processor time, memory space, slow devices, network bandwidth, and to some extent
even user interaction); the less a program consumes, the better.
* Reliability: how often the results of a program are correct. This depends on
prevention of error propagation resulting from data conversion and prevention of errors
resulting from buffer overflows, underflows, and division by zero.
* Robustness: how well a program anticipates situations of data-type conflict and
other incompatibilities that result in run-time errors and program halts. The focus is
mainly on user interaction and the handling of exceptions.
* Usability: the clarity and intuitiveness of a program's output can make or break its
success. This involves a wide range of textual and graphical elements that make a
program easy and comfortable to use.
* Portability: the range of computer hardware and operating system platforms on
which the source code of a program can be compiled/interpreted and run. This depends
mainly on the range of platform-specific compilers for the language of the source code
rather than on anything having to do with the program directly.

Algorithmic complexity

The academic field and the engineering practice of computer programming are both

largely concerned with discovering and implementing the most efficient algorithms for a

given class of problem. For this purpose, algorithms are classified into orders using

so-called Big O notation, O(n), which expresses resource use, such as execution time

or memory consumption, in terms of the size of an input. Expert programmers are

familiar with a variety of well-established algorithms and their respective complexities

and use this knowledge to choose algorithms that are best suited to the circumstances.
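For example, searching n sorted items takes up to n comparisons with a linear scan but only about log2(n) with binary search. The sketch below (illustrative helper functions, using the classic floor(log2 n) + 1 worst-case bound for binary search) shows how the gap widens as inputs grow:

```python
import math

# Worst-case comparison counts for searching n sorted items: a linear scan is
# O(n), while binary search halves the range on each probe, so it is O(log n).
def linear_worst_case(n: int) -> int:
    return n  # may have to inspect every element

def binary_worst_case(n: int) -> int:
    return math.floor(math.log2(n)) + 1  # one probe per halving

for n in (16, 1_000, 1_000_000):
    print(n, linear_worst_case(n), binary_worst_case(n))
# for 1,000,000 items: 1,000,000 comparisons versus just 20
```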


The first step in most formal software development projects is requirements analysis,
followed by modeling, implementation, and failure elimination (debugging). Many
differing approaches exist for each of those tasks. One approach popular for
requirements analysis is Use Case analysis.

Popular modeling techniques include Object-Oriented Analysis and Design (OOAD) and

Model-Driven Architecture (MDA). The Unified Modeling Language (UML) is a notation

used for both OOAD and MDA.

A similar technique used for database design is Entity-Relationship Modeling (ER
Modeling).
Implementation techniques include imperative languages (object-oriented or

procedural), functional languages, and logic languages.

Measuring language usage

It is very difficult to determine which modern programming languages are most
popular. Some languages are very popular for particular kinds of applications (e.g.,

COBOL is still strong in the corporate data center, often on large mainframes, FORTRAN

in engineering applications, and C in embedded applications), while some languages

are regularly used to write many different kinds of applications.

Methods of measuring language popularity include: counting the number of job

advertisements that mention the language[7], the number of books teaching the

language that are sold (this overestimates the importance of newer languages), and

estimates of the number of existing lines of code written in the language (this

underestimates the number of users of business languages such as COBOL).

A bug which was debugged in 1947.

Debugging is a very important task in the software development process, because an

erroneous program can have significant consequences for its users. Some languages are

more prone to some kinds of faults because their specification does not require

compilers to perform as much checking as other languages. Use of a static analysis tool

can help detect some possible problems.

Debugging is often done with IDEs like Visual Studio, NetBeans, and Eclipse.

Standalone debuggers like gdb are also used, and these often provide less of a visual

environment, usually using a command line.

Programming languages

Main articles: Programming language and List of programming languages

Different programming languages support different styles of programming (called

programming paradigms). The choice of language used is subject to many

considerations, such as company policy, suitability to task, availability of third-party

packages, or individual preference. Ideally, the programming language best suited for

the task at hand will be selected. Trade-offs from this ideal involve finding enough

programmers who know the language to build a team, the availability of compilers for

that language, and the efficiency with which programs written in a given language
execute.

Allen Downey, in his book How To Think Like A Computer Scientist, writes:

The details look different in different languages, but a few basic instructions appear
in just about every language:

* input: get data from the keyboard, a file, or some other device.
* output: display data on the screen or send data to a file or other device.
* math: perform basic mathematical operations like addition and multiplication.
* conditional execution: check for certain conditions and execute the appropriate
sequence of statements.
* repetition: perform some action repeatedly, usually with some variation.
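All five of Downey's basic instructions fit in a few lines. In this sketch (Python used purely for illustration) the "input" arrives as a parameter rather than from the keyboard, so the example stays self-contained:

```python
def summarize(numbers):          # input: data arrives as a parameter here
    total = 0
    for n in numbers:            # repetition: visit every element
        total = total + n        # math: basic addition
    if total > 100:              # conditional execution: pick a branch
        label = "large"
    else:
        label = "small"
    return total, label

print(summarize([2, 3, 5]))      # output: (10, 'small')
```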

Many computer languages provide a mechanism to call functions provided by libraries.
Provided the functions in a library follow the appropriate runtime conventions (e.g.,
method of passing arguments), then these functions may be written in any other
language.

Computer programming

Computer programming (often shortened to programming or coding) is the process of

writing, testing, debugging/troubleshooting, and maintaining the source code of

computer programs. This source code is written in a programming language. The code

may be a modification of an existing source or something completely new. The purpose

of programming is to create a program that exhibits a certain desired behavior

(customization). The process of writing source code often requires expertise in many

different subjects, including knowledge of the application domain, specialized

algorithms and formal logic.

Within software engineering, programming (the implementation) is regarded as one

phase in a software development process.

There is an ongoing debate on the extent to which the writing of programs is an art, a

craft or an engineering discipline.[1] Good programming is generally considered to be

the measured application of all three, with the goal of producing an efficient and

evolvable software solution (the criteria for "efficient" and "evolvable" vary

considerably). The discipline differs from many other technical professions in that

programmers generally do not need to be licensed or pass any standardized (or

governmentally regulated) certification tests in order to call themselves "programmers"

or even "software engineers." However, representing oneself as a "Professional

Software Engineer" without a license from an accredited institution is illegal in many

parts of the world.

Another ongoing debate is the extent to which the programming language used in

writing computer programs affects the form that the final program takes. This debate is

analogous to that surrounding the Sapir-Whorf hypothesis [2] in linguistics, that

postulates that a particular language's nature influences the habitual thought of its

speakers. Different language patterns yield different patterns of thought. This idea

challenges the possibility of representing the world perfectly with language, because it

acknowledges that the mechanisms of any language condition the thoughts of its

speaker community.

Said another way, programming is the craft of transforming requirements into

something that a computer can execute.

History of programming

See also: History of programming languages

Wired plug board for an IBM 402 Accounting Machine.

The concept of devices that operate following a pre-defined set of instructions traces

back to Greek Mythology, notably Hephaestus and his mechanical servants[3]. The

Antikythera mechanism was a calculator utilizing gears of various sizes and

configuration to determine its operation. The earliest known programmable machines

(machines whose behavior can be controlled and predicted with a set of instructions)

were Al-Jazari's programmable Automata in 1206.[4] One of Al-Jazari's robots was

originally a boat with four automatic musicians that floated on a lake to entertain

guests at royal drinking parties. Programming this mechanism's behavior meant placing

pegs and cams into a wooden drum at specific locations. These would then bump into

little levers that operate a percussion instrument. The output of this device was a

small drummer playing various rhythms and drum patterns.[5][6] Another sophisticated

programmable machine by Al-Jazari was the castle clock, notable for its concept of

variables which the operator could manipulate as necessary (i.e. the length of day and

night). The Jacquard Loom, which Joseph Marie Jacquard developed in 1801, uses a

series of pasteboard cards with holes punched in them. The hole pattern represented

the pattern that the loom had to follow in weaving cloth. The loom could produce

entirely different weaves using different sets of cards. Charles Babbage adopted the

use of punched cards around 1830 to control his Analytical Engine. The synthesis of

numerical calculation, predetermined operation and output, along with a way to

organize and input instructions in a manner relatively easy for humans to conceive and

produce, led to the modern development of computer programming.

Development of computer programming accelerated through the Industrial Revolution.

The punch card innovation was later refined by Herman Hollerith who, in 1896, founded

the Tabulating Machine Company (which became IBM). He invented the Hollerith

punched card, the card reader, and the key punch machine. These inventions were the

foundation of the modern information processing industry. The addition of a plug-board

to his 1906 Type I Tabulator allowed it to do different jobs without having to be

physically rebuilt. By the late 1940s there were a variety of plug-board programmable

machines, called unit record equipment, to perform data processing tasks (card

reading). Early computer programmers used plug-boards for the variety of complex

calculations requested of the newly invented machines.
Data and instructions could be stored on external punch cards, which were kept in order

and arranged in program decks.

The invention of the Von Neumann architecture allowed computer programs to be

stored in computer memory. Early programs had to be painstakingly crafted using the

instructions of the particular machine, often in binary notation. Every model of

computer would be likely to need different instructions to do the same task. Later

assembly languages were developed that let the programmer specify each instruction in

a text format, entering abbreviations for each operation code instead of a number and

specifying addresses in symbolic form (e.g. ADD X, TOTAL). In 1954 Fortran, the first

higher level programming language, was invented. This allowed programmers to specify

calculations by entering a formula directly (e.g. Y = X*2 + 5*X + 9). The program text,

or source, was converted into machine instructions using a special program called a

compiler. Many other languages were developed, including ones for commercial

programming, such as COBOL. Programs were mostly still entered using punch cards or

paper tape. (See computer programming in the punch card era). By the late 1960s, data

storage devices and computer terminals became inexpensive enough that programs could

be created by typing directly into the computers. Text editors were developed that

allowed changes and corrections to be made much more easily than with punch cards.

As time has progressed, computers have made giant leaps in the area of processing

power. This has brought about newer programming languages that are more abstracted

from the underlying hardware. Although these more abstracted languages require

additional overhead, in most cases the huge increase in speed of modern computers

has brought about little performance decrease compared to earlier counterparts. The

benefits of these more abstracted languages are that they allow both an easier learning

curve for people less familiar with the older lower-level programming languages, and

they also allow a more experienced programmer to develop simple applications quickly.

Despite these benefits, large complicated programs, and programs that are more

dependent on speed still require the faster and relatively lower-level languages with

today's hardware. (The same concerns were raised about the original Fortran language.)

Throughout the second half of the twentieth century, programming was an attractive

career in most developed countries. Some forms of programming have been increasingly

subject to offshore outsourcing (importing software and services from other countries,

usually at a lower wage), making programming career decisions in developed countries

more complicated, while increasing economic opportunities in less developed areas. It

is unclear how far this trend will continue and how deeply it will impact programmer

wages and opportunities.

Dynamic web page production

The production of server-side dynamic web pages is one of the main applications of
server-side scripting languages.

One important alternative, typically within an MVC framework, is the use of web
template systems. Any programming language that is not web-specific can be used to
drive template engines and web templates.
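As a minimal illustration of a web template system, Python's standard string.Template keeps the page layout separate from the data substituted into it (a deliberately tiny stand-in for a full template engine):

```python
from string import Template

# The layout is authored once, independent of the data later substituted in.
page = Template("<html><body><h1>$title</h1><p>Hello, $name!</p></body></html>")

html = page.substitute(title="Welcome", name="Ada")
print(html)  # <html><body><h1>Welcome</h1><p>Hello, Ada!</p></body></html>
```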

Server-side scripting

Server-side scripting is a web server technology in which a user's request is fulfilled by

running a script directly on the web server to generate dynamic HTML pages. It is

usually used to provide interactive web sites that interface to databases or other data

stores. This is different from client-side scripting where scripts are run by the viewing

web browser, usually in JavaScript. The primary advantage to server-side scripting is

the ability to highly customize the response based on the user's requirements, access

rights, or queries into data stores.

When the server serves data in a commonly used manner, for example according to the

HTTP or FTP protocols, users may have their choice of a number of client programs

(most modern web browsers can request and receive data using both of those

protocols). In the case of more specialized applications, programmers may write their
own server, client, and communications protocol, which can only be used with one
another.

Programs that run on a user's local computer without ever sending or receiving data

over a network are not considered clients, and so the operations of such programs

would not be considered client-side operations.


In the "old" days of the web, server-side scripting was almost exclusively performed by

using a combination of C programs, Perl scripts and shell scripts using the Common

Gateway Interface (CGI). Those scripts were executed by the operating system, and the
results were simply served back by the web server. Nowadays,

these and other on-line scripting languages such as ASP and PHP can often be executed

directly by the web server itself or by extension modules (e.g. mod_perl or mod_php) to

the web server. Either form of scripting (i.e., CGI or direct execution) can be used to

build up complex multi-page sites, but direct execution usually results in lower

overhead due to the lack of calls to external interpreters.

Dynamic websites are also sometimes powered by custom web application servers, for

example the Python "Base HTTP Server" library, although some may not consider this to

be server-side scripting.
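The Python library mentioned above survives as http.server in modern Python. A minimal dynamic handler might look like the following sketch: it binds to a free local port, serves a single request whose body is generated at request time (which is what makes it dynamic), and fetches it back to show the result:

```python
import datetime
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class DynamicHandler(BaseHTTPRequestHandler):
    """Build the page at request time, so each response can differ."""

    def do_GET(self):
        body = f"<html><body>It is now {datetime.datetime.now()}</body></html>".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), DynamicHandler)  # port 0 = any free port
threading.Thread(target=server.handle_request, daemon=True).start()

with urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/") as resp:
    page = resp.read().decode()
print(page.startswith("<html><body>It is now"))  # True
server.server_close()
```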



Client-side scripting

Client-side scripting generally refers to the class of computer programs on the web that

are executed client-side, by the user's web browser, instead of server-side (on the web

server). This type of computer programming is an important part of the Dynamic HTML

(DHTML) concept, enabling web pages to be scripted; that is, to have different and

changing content depending on user input, environmental conditions (such as the time

of day), or other variables.

Web authors write client-side scripts in languages such as JavaScript (Client-side

JavaScript) and VBScript.

Client-side scripts are often embedded within an HTML document, but they may also be

contained in a separate file, which is referenced by the document (or documents) that

use it. Upon request, the necessary files are sent to the user's computer by the web

server (or servers) on which they reside. The user's web browser executes the script,

then displays the document, including any visible output from the script. Client-side

scripts may also contain instructions for the browser to follow if the user interacts with

the document in a certain way, e.g., clicks a certain button. These instructions can be

followed without further communication with the server, though they may require such
communication in some cases.

By viewing the file that contains the script, users may be able to see its source code.

Many web authors learn how to write client-side scripts partly by examining the source

code for other authors' scripts.

In contrast, server-side scripts, written in languages such as Perl, PHP, and server-side

VBScript, are executed by the web server when the user requests a document. They

produce output in a format understandable by web browsers (usually HTML), which is

then sent to the user's computer. The user cannot see the script's source code (unless

the author publishes the code separately), and may not even be aware that a script

was executed. The documents produced by server-side scripts may, of course, contain

client-side scripts.

Client-side scripts have greater access to the information and functions available on

the user's browser, whereas server-side scripts have greater access to the information

and functions available on the server. Server-side scripts require that their language's

interpreter is installed on the server, and produce the same output regardless of the

client's browser, operating system, or other system details. Client-side scripts do not

require additional software on the server (making them popular with authors who lack

administrative access to their servers); however, they do require that the user's web

browser understands the scripting language in which they are written. It is therefore

impractical for an author to write scripts in a language that is not supported by the web

browsers used by a majority of his or her audience.

Due to security restrictions, client-side scripts may not be allowed to access the user's

computer beyond the browser application. Techniques like ActiveX controls can be used

to sidestep this restriction.

Unfortunately, even languages that are supported by a wide variety of browsers may

not be implemented in precisely the same way across all browsers and operating

systems. Authors are well-advised to review the behavior of their client-side scripts on

a variety of platforms before they put them into use.


Content development (web)

Web content development is the process of researching, writing, gathering, organizing,

and editing information for publication on web sites. Web site content may consist of

prose, graphics, pictures, recordings, movies or other media assets that could be

distributed by a hypertext transfer protocol server, and viewed by a web browser.

Content developers and web developers

When the World Wide Web began, web developers either generated content

themselves, or took existing documents and coded them into hypertext markup

language (HTML). In time, the field of web site development came to encompass many

technologies, so it became difficult for web site developers to maintain so many

different skills. Content developers are specialized web site developers who have

mastered content generation skills. They can integrate content into new or existing

web sites, but they may not have skills such as script language programming, database

programming, graphic design and copywriting.

Content developers may also be search engine optimization specialists, or Internet

marketing professionals. This is because content is often called 'king'. High-quality, unique

content is what search engines are looking for and content development specialists

therefore have a very important role to play in the search engine optimization process.

One issue currently plaguing the world of web content development is keyword-stuffed
content, which is prepared solely for the purpose of manipulating a search engine. This

is giving a bad name to genuine web content writing professionals. The effect is content
written to appeal to machines (algorithms) rather than to people or communities.

Search engine optimization specialists commonly submit content to Article Directories

to build their website's authority on any given topic. Most Article Directories allow

visitors to republish submitted content with the agreement that all links are

maintained. This has become a method of Search Engine Optimization for many

websites today. If written according to SEO copywriting rules, the submitted content

will bring benefits to the publisher (free SEO-friendly content for a webpage) as well as

to the author (a hyperlink pointing to his/her website, placed on an SEO-friendly
page).

Web design

Web page design is a process of conceptualization, planning, modeling, and execution

of electronic media content delivery via the Internet in the form of technologies (such as

markup languages) suitable for interpretation and display by a web browser or other

web-based graphical user interfaces (GUIs).

The intent of web design is to create a web site (a collection of electronic files residing

on one or more web servers) that presents content (including interactive features or

interfaces) to the end user in the form of web pages once requested. Such elements as

text, forms, and bit-mapped images (GIFs, JPEGs, PNGs) can be placed on the page

using HTML, XHTML, or XML tags. Displaying more complex media (vector graphics,

animations, videos, sounds) usually requires plug-ins such as Flash, QuickTime, Java

run-time environment, etc. Plug-ins are also embedded into web pages by using HTML

or XHTML tags.

Improvements in the various browsers' compliance with W3C standards prompted a

widespread acceptance of XHTML and XML in conjunction with Cascading Style Sheets

(CSS) to position and manipulate web page elements. The latest standards and

proposals aim at leading to the various browsers' ability to deliver a wide variety of

media and accessibility options to the client possibly without employing plug-ins.

Typically web pages are classified as static or dynamic.

* Static pages don't change content and layout with every request unless a human
(web master or programmer) manually updates the page.
* Dynamic pages adapt their content and/or appearance depending on the
end-user's input or interaction or on changes in the computing environment (user, time,
database modifications, etc.). Content can be changed on the client side (end-user's
computer) by using client-side scripting languages (JavaScript, JScript, ActionScript,
media players and PDF reader plug-ins, etc.) to alter DOM elements (DHTML). Dynamic
content is often compiled on the server utilizing server-side scripting languages (PHP,
ASP, Perl, ColdFusion, JSP, Python, etc.). Both approaches are usually used in complex
applications.

With growing specialization within communication design and information technology

fields, there is a strong tendency to draw a clear line between web design specifically

for web pages and web development for the overall logistics of all web-based services.
Web Site Design

A web site is a collection of information about a particular topic or subject. Designing a

web site is defined as the arrangement and creation of web pages that in turn make up

a web site. A web page consists of information for which the web site is developed. A

web site might be compared to a book, where each page of the book is a web page.

There are many aspects (design concerns) in this process, and due to the rapid

development of the Internet, new aspects may emerge. For non-commercial web sites,

the goals may vary depending on the desired exposure and response. For typical

commercial web sites, the basic aspects of design are:

* The content: the substance, and information on the site should be relevant to the

site and should target the area of the public that the website is concerned with.
* The usability: the site should be user-friendly, with the interface and navigation

simple and reliable.
* The appearance: the graphics and text should include a single style that flows

throughout, to show consistency. The style should be professional, appealing and relevant.

* The visibility: the site must also be easy to find via most, if not all, major search

engines and advertisement media.

A web site typically consists of text and images. The first page of a web site is known

as the Home page or Index. Some web sites use what is commonly called a Splash

Page. Splash pages might include a welcome message, language or region selection, or

disclaimer. Each web page within a web site is an HTML file which has its own URL.

After the web pages are created, they are typically linked together using a navigation

menu composed of hyperlinks. Faster browsing speeds have led to shorter attention

spans and more demanding online visitors and this has resulted in less use of Splash

Pages, particularly where commercial web sites are concerned.
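Such a navigation menu is usually no more than a list of hyperlinks repeated on every page; a sketch, with page names invented:

```html
<ul id="navigation">
  <li><a href="index.html">Home</a></li>
  <li><a href="products.html">Products</a></li>
  <li><a href="about.html">About Us</a></li>
  <li><a href="contact.html">Contact</a></li>
</ul>
```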

Once a web site is completed, it must be published or uploaded in order to be viewable

to the public over the internet. This may be done using an FTP client. Once published,

the web master may use a variety of techniques to increase the traffic, or hits, that the

web site receives. This may include submitting the web site to a search engine such as

Google or Yahoo, exchanging links with other web sites, creating affiliations with

similar web sites, etc.
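Publishing with an FTP client can be sketched with Python's standard `ftplib`; the host, credentials, and directory names are placeholders, and the sketch ignores the need to create remote directories first:

```python
import os
from ftplib import FTP

def remote_paths(local_root, files):
    """Map each local file path to its path on the remote server."""
    return {f: "/" + os.path.relpath(f, local_root).replace(os.sep, "/")
            for f in files}

def upload_site(host, user, password, local_root):
    """Upload every file under local_root via FTP (simplified sketch)."""
    files = [os.path.join(dirpath, name)
             for dirpath, _, names in os.walk(local_root)
             for name in names]
    with FTP(host) as ftp:  # connects on construction
        ftp.login(user, password)
        for local, remote in remote_paths(local_root, files).items():
            with open(local, "rb") as fh:
                ftp.storbinary("STOR " + remote, fh)
```

In practice a graphical FTP client does the same walk-and-store work; the point is only that publishing is a file transfer from the local copy of the site to the web server.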

Multidisciplinary requirements

Web site design crosses multiple disciplines of information systems, information

technology and communication design. The web site is an information system whose

components are sometimes classified as front-end and back-end. The observable

content (e.g. page layout, user interface, graphics, text, audio) is known as the

front-end. The back-end comprises the organization and efficiency of the source code,

invisible scripted functions, and the server-side components that process the output

from the front-end. Depending on the size of a Web development project, it may be

carried out by a multi-skilled individual (sometimes called a web master), or a project

manager may oversee collaborative design between group members with specialized skills.



Issues

As in any collaborative design, there are conflicts between the differing goals and methods of web site design. The following are a few of the ongoing ones.

Lack of collaboration in design

In the early stages of the web, there wasn't as much collaboration between web

design and larger advertising campaigns, customer transactions, social networking,

intranets and extranets as there is now. Web pages were mainly static online

brochures disconnected from the larger projects.

Many web pages are still disconnected from larger projects. Special design

considerations are necessary for use within these larger projects. These design

considerations are often overlooked, especially where there is a lack of leadership, a lack of understanding of why integration matters and of the technical knowledge of how to achieve it, or a lack of concern for the larger project that would facilitate collaboration. This often

results in unhealthy competition or compromise between departments, and less than

optimal use of web pages.

Liquid versus fixed layouts

On the web the designer has no control over several factors, including the size of the

browser window, the web browser used, the input devices used (mouse, touch screen,

voice command, text, cell phone number pad, etc.) and the size and characteristics of

available fonts.

Some designers choose to control the appearance of the elements on the screen by

using specific width designations. This control may be achieved through the use of an HTML table-based design or a more semantic div-based design through the use of CSS.

Whenever the text, images, and layout of a design do not change as the browser

changes, this is referred to as a fixed width design. Proponents of fixed width design

prefer precise control over the layout of a site and the precision placement of objects

on the page. Other designers choose a liquid design. A liquid design is one where the

design moves to flow content into the whole screen, or a portion of the screen, no

matter what the size of the browser window. Proponents of liquid design prefer greater

compatibility and using the screen space available. Liquid design can be achieved by

setting the width of text blocks and page modules to a percentage of the page, or by

avoiding specifying the width for these elements altogether, allowing them to expand

or contract naturally in accordance with the width of the browser.

Both liquid and fixed design developers must make decisions about how the design

should degrade on higher and lower screen resolutions. Sometimes the pragmatic

choice is made to flow the design between a minimum and a maximum width. This

allows the designer to avoid coding for the browser choices making up The Long Tail,

while still using all available screen space. Depending on the purpose of the content, a

web designer may decide to use either fixed or liquid layouts on a case-by-case basis.
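The approaches above can be sketched in CSS; the selector names and pixel values are invented for illustration:

```css
/* Fixed width: the layout ignores the size of the browser window */
#page-fixed   { width: 960px; }

/* Liquid: the layout is a percentage of whatever window it gets */
#page-liquid  { width: 85%; }

/* Pragmatic middle ground: flow between a minimum and a maximum width */
#page-bounded { width: 85%; min-width: 600px; max-width: 1200px; }
```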

Similar to liquid layout is the optional fit to window feature with Adobe Flash content.

This is a fixed layout that optimally scales the content of the page without changing

the arrangement or text wrapping when the browser is resized.


Flash

Adobe Flash (formerly Macromedia Flash) is a proprietary, robust graphics animation or

application development program used to create and deliver dynamic content, media

(such as sound and video), and interactive applications over the web via the browser.

Many graphic artists use Flash because it gives them exact control over every part of

the design, and anything can be animated and generally "jazzed up". Some application

designers enjoy Flash because it lets them create applications that do not have to be

refreshed or go to a new web page every time an action occurs. Flash can use

embedded fonts instead of the standard fonts installed on most computers. There are

many sites which forgo HTML entirely for Flash. Other sites may use Flash content

combined with HTML as conservatively as gifs or jpegs would be used, but with smaller

vector file sizes and the option of faster loading animations. Flash may also be used to

protect content from unauthorized duplication or searching. Alternatively, small,

dynamic Flash objects may be used to replace standard HTML elements (such as

headers or menu links) with advanced typography not possible via regular HTML or CSS

(see Scalable Inman Flash Replacement).

Flash is not a standard produced by a vendor-neutral standards organization like most

of the core protocols and formats on the Internet. Flash is much more self-contained

than the open HTML format as it does not integrate with web browser UI features. For

example, the browser's "Back" button cannot be used to go to a previous screen in the same Flash file; instead it returns to the previous HTML page, which may hold a different Flash file. The

browser's "Reload" button will not reset just a portion of a Flash file, but instead restarts the entire Flash file as loaded when the HTML page was entered, similar

to any online video. Such features would instead be included in the interface of the

Flash file if needed.

Flash requires a proprietary media-playing plugin to be seen. According to a study,[2]

98% of US Web users have the Flash Player installed.[3] The percentage has remained

fairly constant over the years; for example, a study conducted by NPD Research in 2002

showed that 97.8% of US Web users had the Flash player installed. Numbers vary

depending on the detection scheme and research demographics.[4]

Flash detractors claim that Flash websites tend to be poorly designed, and often use

confusing and non-standard user-interfaces, such as the inability to scale according to

the size of the web browser, or its incompatibility with common browser features such

as the back button. Up until recently, search engines have been unable to index Flash

objects, which has prevented sites from having their contents easily found. This is

because many search engine crawlers rely on text to index websites. It is possible to

specify alternate content to be displayed for browsers that do not support Flash. Using

alternate content also helps search engines to understand the page, and can result in

much better visibility for the page. However, the vast majority of Flash websites are

not disability accessible (for screen readers, for example) or Section 508 compliant. An

additional issue is that sites which serve alternate content to search engines that differs from what is shown to their human visitors are usually judged to be spamming search engines and are

automatically banned.

The most recent incarnation of Flash's scripting language (called "ActionScript", which is

an ECMA language similar to JavaScript) incorporates long-awaited usability features,

such as respecting the browser's font size and allowing blind users to use screen

readers. ActionScript 2.0 is an object-oriented language, allowing the use of CSS, XML,

and the design of class-based web applications.

CSS versus tables for layout

When Netscape Navigator 4 dominated the browser market, the popular way for designers to lay out a Web page was to use tables. Often even simple

designs for a page would require dozens of tables nested in each other. Many web

templates in Dreamweaver and other WYSIWYG editors still use this technique today.

Navigator 4 didn't support CSS to a useful degree, so it simply wasn't used.

After the browser wars subsided, and the dominant browsers such as Internet Explorer

became more W3C compliant, designers started turning toward CSS as an alternate

means of laying out their pages. CSS proponents say that tables should be used only

for tabular data, not for layout. Using CSS instead of tables also returns HTML to a

semantic markup, which helps bots and search engines understand what's going on in a

web page. All modern Web browsers support CSS with different degrees of limitations.

However, one of the main points against CSS is that by relying on it exclusively, control

is essentially relinquished as each browser has its own quirks which result in a slightly

different page display. This is especially a problem as not every browser supports the

same subset of CSS rules. For designers who are used to table-based layouts,

developing Web sites in CSS often becomes a matter of trying to replicate what can be

done with tables, leading some to find CSS design rather cumbersome due to lack of

familiarity. For example, at one time it was rather difficult to produce certain design

elements, such as vertical positioning, and full-length footers in a design using

absolute positions. With the abundance of CSS resources available online today,

though, designing with reasonable adherence to standards involves little more than

applying CSS 2.1 or CSS 3 to properly structured markup.
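The difference can be sketched with a simple two-column page, first misusing a table for layout and then using semantic markup positioned by CSS (the ids and widths are invented):

```html
<!-- Table-based layout: structural markup misused for presentation -->
<table>
  <tr>
    <td>navigation</td>
    <td>main content</td>
  </tr>
</table>

<!-- CSS-based layout: the markup stays semantic, presentation moves to CSS -->
<style>
  #nav     { float: left; width: 20%; }
  #content { margin-left: 22%; }
</style>
<div id="nav">navigation</div>
<div id="content">main content</div>
```

Both render two columns, but in the second version a search engine or screen reader sees only a navigation block and a content block, with no layout machinery in between.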

These days most modern browsers have solved most of these quirks in CSS rendering

and this has made many different CSS layouts possible. However, some people

continue to use old browsers, and designers need to keep this in mind, and allow for

graceful degrading of pages in older browsers. Most notable among these old browsers

are Internet Explorer 5 and 5.5, which, according to some web designers, are becoming

the new Netscape Navigator 4 — a block that holds the World Wide Web back from

converting to CSS design. However, the W3 Consortium has made CSS in combination

with XHTML the standard for web design.

Form versus Function

Some web developers have a graphic arts background and may pay more attention to

how a page looks than considering other issues such as how visitors are going to find

the page via a search engine. Some might rely more on advertising than search engines

to attract visitors to the site. On the other side of the issue, search engine

optimization consultants (SEOs) are concerned with how well a web site works

technically and textually: how much traffic it generates via search engines, and how

many sales it makes, assuming looks don't contribute to the sales. As a result, the

designers and SEOs often end up in disputes where the designer wants more 'pretty'

graphics, and the SEO wants lots of 'ugly' keyword-rich text, bullet lists, and text

links[citation needed]. One could argue that this is a false dichotomy due to the

possibility that a web design may integrate the two disciplines for a collaborative and

synergistic solution[citation needed]. Because some graphics serve communication

purposes in addition to aesthetics, how well a site works may depend on the graphic

designer's visual communication ideas as well as the SEO considerations.

Another problem when using a lot of graphics on a page is that download times can be

greatly lengthened, often irritating the user. This has become less of a problem as the

internet has evolved with high-speed connections and the wider use of vector graphics. Increasing bandwidth is an engineering challenge, while minimizing graphics and graphic file sizes is an artistic one; the challenge is ongoing, as increased bandwidth invites increased amounts of content.

Accessible Web design

Main article: Web accessibility

To be accessible, web pages and sites must conform to certain accessibility principles.

These can be grouped into the following main areas:

* use semantic markup that provides a meaningful structure to the document (i.e.

web page)
* Semantic markup also refers to semantically organizing the web page structure

and publishing web services description accordingly so that they can be recognized by

other web services on different web pages. Standards for the semantic web are set by the W3C
* use a valid markup language that conforms to a published DTD or Schema
* provide text equivalents for any non-text components (e.g. images, multimedia)
* use hyperlinks that make sense when read out of context (e.g. avoid "Click here")

* don't use frames
* use CSS rather than HTML Tables for layout.
* author the page so that when the source code is read line-by-line by user agents

(such as screen readers) it remains intelligible. (Using tables for layout will often

result in information that is not.)

However, W3C permits an exception where tables for layout either make sense when

linearized or an alternate version (perhaps linearized) is made available.
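A few of the principles above, sketched as markup (the file names and text are invented):

```html
<!-- Semantic markup: real headings, not styled paragraphs -->
<h1>Annual Report</h1>
<h2>Financial Summary</h2>

<!-- A text equivalent for a non-text component -->
<img src="revenue-chart.gif" alt="Bar chart of revenue by quarter" />

<!-- A hyperlink that makes sense when read out of context -->
<a href="annual-report.pdf">Download the annual report (PDF)</a>
```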

Website accessibility is also changing as it is impacted by Content Management

Systems that allow changes to be made to webpages without the need for programming knowledge.

Website Planning

Before creating and uploading a website, it is important to take the time to plan

exactly what is needed in the website. Thoroughly considering the audience or target

market, as well as defining the purpose and deciding what content will be developed

are extremely important.


It is essential to define the purpose of the website as one of the first steps in the

planning process. A purpose statement should show focus based on what the website

will accomplish and what the users will get from it. A clearly defined purpose will help

the rest of the planning process as the audience is identified and the content of the

site is developed. Setting short and long term goals for the website will help make the

purpose clear and plan for the future when expansion, modification, and improvement

will take place. Goal-setting practices and measurable objectives should be identified to

track the progress of the site and determine success.


Defining the audience is a key step in the website planning process. The audience is

the group of people who are expected to visit your website – the market being

targeted. These people will be viewing the website for a specific reason and it is

important to know exactly what they are looking for when they visit the site. A clearly

defined purpose or goal of the site as well as an understanding of what visitors want to

do or feel when they come to your site will help to identify the target audience. Upon

considering who is most likely to need or use the content, a list of characteristics common to the users can be drawn up, such as:

* Audience Characteristics
* Information Preferences
* Computer Specifications
* Web Experience

Taking into account the characteristics of the audience will allow an effective website

to be created that will deliver the desired content to the target audience.


Content evaluation and organization requires that the purpose of the website be clearly

defined. Collecting a list of the necessary content then organizing it according to the

audience's needs is a key step in website planning. In the process of gathering the

content being offered, any items that do not support the defined purpose or accomplish

target audience objectives should be removed. It is a good idea to test the content

and purpose on a focus group and compare the offerings to the audience needs. The

next step is to organize the basic information structure by categorizing the content and

organizing it according to user needs. Each category should be named with a concise

and descriptive title that will become a link on the website. Planning for the site's

content ensures that the wants or needs of the target audience and the purpose of the

site will be fulfilled.

Compatibility and restrictions

Because of the market share of modern browsers (depending on your target market),

the compatibility of your website with the viewers is restricted. For instance, a website

that is designed for the majority of websurfers will be limited to the use of valid XHTML

1.0 Strict or older, Cascading Style Sheets Level 1, and 1024x768 display resolution.

This is because Internet Explorer is not fully W3C standards compliant with the

modularity of XHTML 1.1 and the majority of CSS beyond Level 1. A target market of more alternative-browser users (e.g. Firefox, Safari and Opera) allows for more W3C

compliance and thus a greater range of options for a web designer.

Another restriction on webpage design is the use of different Image file formats. The

majority of users can support GIF, JPEG, and PNG (with restrictions). Again Internet

Explorer is the major restriction here, not fully supporting PNG's advanced transparency

features, resulting in the GIF format still being the most widely used graphic file format

for transparent images.

Many website incompatibilities go unnoticed by the designer and unreported by the

users. The only way to be certain a website will work on a particular platform is to test

it on that platform.

Planning documentation

Documentation is used to visually plan the site while taking into account the purpose,

audience and content, to design the site structure, content and interactions that are

most suitable for the website. Documentation may be considered a prototype for the

website – a model which allows the website layout to be reviewed, resulting in

suggested changes, improvements and/or enhancements. This review process increases

the likelihood of success of the website.

First, the content is categorized and the information structure is formulated. The

information structure is used to develop a document or visual diagram called a site

map. This creates a visual of how the web pages will be interconnected, which helps in

deciding what content will be placed on what pages. There are three main ways of

diagramming the website structure:

* Linear Website Diagrams will allow the users to move in a predetermined sequence;

* Hierarchical structures (of Tree Design Website Diagrams) provide more than one

path for users to take to their destination;
* Branch Design Website Diagrams allow for many interconnections between web

pages such as hyperlinks within sentences.
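Whatever the diagram type, a site map is just a graph of pages and links; a sketch of a small hierarchical site as an adjacency list, with a check that no page is orphaned (the page names are made up):

```python
from collections import deque

# Hypothetical hierarchical site map: each page lists the pages it links to.
site_map = {
    "home": ["about", "products", "contact"],
    "about": ["home"],
    "products": ["widgets", "gadgets", "home"],
    "widgets": ["products"],
    "gadgets": ["products"],
    "contact": ["home"],
}

def reachable(site, start="home"):
    """Return the set of pages reachable from the start page."""
    seen, queue = {start}, deque([start])
    while queue:
        page = queue.popleft()
        for link in site.get(page, []):
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return seen

# Every page should be reachable from the home page, or content is orphaned.
assert reachable(site_map) == set(site_map)
```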

In addition to planning the structure, the layout and interface of individual pages may

be planned using a storyboard. In the process of storyboarding, a record is made of the

description, purpose and title of each page in the site, and they are linked together

according to the most effective and logical diagram type. Depending on the number of

pages required for the website, documentation methods may include using pieces of

paper and drawing lines to connect them, or creating the storyboard using computer software.


Some or all of the individual pages may be designed in greater detail as a website

wireframe, a mock up model or comprehensive layout of what the page will actually

look like. This is often done in a graphic program, or layout design program. The

wireframe has no working functionality and serves only for planning, though it can be used when selling ideas to clients or other web design companies.