Pierre Omidyar

Pierre Morad Omidyar

On 4 September 1995, the 28-year-old software developer and entrepreneur Pierre Omidyar launched the famous eBay auction site as an experiment in how a level playing field would affect the efficiency of a marketplace. As of 2022, eBay is still one of the world’s leading marketplace and eCommerce platforms, with yearly revenue of over $10 billion.

Pierre Morad Omidyar was born on 21 June 1967 in Paris, France, to Iranian immigrant parents (Cyrus Omidyar, a physician, and Elahé Mir-Djalali Omidyar, a linguist), both of whom had been sent by their parents to attend university there. In 1973 Pierre moved to the US (Maryland), where his father Cyrus began his residency as a surgeon at Johns Hopkins University Medical Center. Omidyar graduated with a degree in computer science from Tufts University in 1988. Shortly after, he went to work for Claris, an Apple Computer subsidiary, where he helped write the vector-based drawing application MacDraw.

In 1991 Pierre started his career as an entrepreneur, co-founding Ink Development, a pen-based computing startup that was later rebranded as an e-commerce company and renamed eShop.

On a long holiday weekend sometime in the middle of 1995, Pierre sat down in his living room in San Jose, California, to write the original computer code for what eventually became an internet superbrand: the auction site eBay. Initially, he wanted to call his site EchoBay, but the name had already been registered, so Omidyar made up the word eBay on the fly.

The site www.ebay.com was launched on Labor Day, 1995, under the more prosaic title of AuctionWeb, and was hosted on a site Omidyar had created for information on the Ebola virus. It began with the listing of a single broken laser pointer. Though Pierre had intended the listing as a test more than a serious offer to sell at auction, he was shocked when the item soon sold for $14.83.

AuctionWeb was later renamed eBay. The service, meant to be a marketplace where individuals could buy and sell goods and services, was free at first, but it started charging fees to cover internet service provider costs and soon began making a profit.

What is the profitable business model of eBay?

It was built on the idea of an online person-to-person trading community on the Internet, using the World Wide Web. Buyers and sellers are brought together in a fully automated way: sellers list items for sale, buyers bid on items of interest, and all eBay users can browse the listings. Items are arranged by topic, and each type of auction has its own category.

With its web interface, eBay has both streamlined and globalized traditional person-to-person trading, which had previously been conducted through garage sales, collectibles shows, flea markets, and the like. The interface makes exploration easy for buyers and enables sellers to list an item for sale within minutes of registering.

Browsing and bidding on auctions are free of charge, but sellers are charged three kinds of fees:
• When an item is listed on eBay, a nonrefundable Insertion Fee is charged, ranging between 30 cents and $3.30 depending on the seller’s opening bid on the item.
• A fee is charged for additional listing options to promote the item, such as highlighted or bold listing.
• A Final Value (final sale price) fee is charged at the end of the seller’s auction. This fee generally ranges from 1.25% to 5% of the final sale price.
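The two-part fee model above can be sketched in a few lines of Python. The text gives only the ranges (insertion fee $0.30–$3.30, final value fee 1.25%–5%), so the bracket boundaries below are purely illustrative assumptions, not eBay’s historical price points:

```python
# Illustrative sketch of eBay's early seller-fee model described above.
# The tier boundaries are hypothetical; only the overall ranges come
# from the text (insertion fee $0.30-$3.30, final value fee 1.25%-5%).

def insertion_fee(opening_bid):
    """Nonrefundable listing fee, tiered by the seller's opening bid."""
    if opening_bid < 10.00:
        return 0.30
    elif opening_bid < 25.00:
        return 1.10
    elif opening_bid < 50.00:
        return 2.20
    else:
        return 3.30

def final_value_fee(sale_price):
    """Commission on the final sale price, on a sliding scale."""
    if sale_price < 25.00:
        return sale_price * 0.05
    # lower marginal rate on the portion above $25
    return 25.00 * 0.05 + (sale_price - 25.00) * 0.0125

def seller_cost(opening_bid, sale_price):
    """Total the seller pays eBay for one successful auction."""
    return insertion_fee(opening_bid) + final_value_fee(sale_price)
```

For the famous broken laser pointer (say a hypothetical $5 opening bid, $14.83 final price), this sketch would charge the seller $0.30 up front plus about $0.74 at the close of the auction.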

eBay notifies the buyer and seller via e-mail at the end of the auction if a bid exceeds the seller’s minimum price, and the seller and buyer finish the transaction independently of eBay. The binding contract of the auction is between the winning bidder and the seller only.

This appeared to be an excellent business model.

By 1996 the company was large enough to require professional management, and Stanford MBA Jeffrey Skoll came aboard an already profitable ship. Meg Whitman, a Harvard graduate, soon followed as president and CEO, along with a strong business team, under whose leadership eBay grew rapidly, branching out from collectibles into nearly every type of market. eBay’s vision for success transitioned from one of commerce (buying and selling things) to one of connecting people around the world.

With exponential growth and strong branding, eBay thrived, eclipsing many of the other upstart auction sites of the dot-com bubble. By the time eBay went public in 1998, both Omidyar and Skoll were billionaires. In 2009 the net worth of the company reached $5.5 billion, and in 2021 Forbes ranked Omidyar as the 24th-richest person in the world, with an estimated net worth of $21.8 billion. Over one million people worldwide now rely on their eBay sales as part of their income. As of 2022, with 185 million active buyers and 19 million sellers worldwide, eBay is one of the world’s leading marketplace and eCommerce platforms.

Omidyar served as chairman of eBay from 1998 to 2015. In 2020, he stepped down from the board of the company as part of a broader overhaul. He has, however, stayed active in the company, retaining the title of director emeritus.

Jeff Bezos

Jeffrey (Jeff) Preston Bezos

Jeffrey Preston Bezos, the founder of the famous Amazon.com, was born on 12 January 1964 in Albuquerque, New Mexico, when his mother, Jackie, was still in her teens. Her marriage to his father lasted a little more than a year. She remarried when Jeffrey was five, and he took the surname of his stepfather, Miguel Bezos.

In 1971 the family moved to Houston, Texas, where Jeffrey attended elementary school, showing intense and varied scientific interests. He rigged an electric alarm to keep his younger siblings out of his room and protect his privacy, and converted his parents’ garage into a laboratory for his science projects.

Later the family moved to Miami, Florida, where Bezos attended high school. While there, he attended the student science training program at the University of Florida, which helped him receive a Silver Knight Award in 1982. He entered Princeton University planning to study physics, but soon returned to his love of computers and graduated summa cum laude, Phi Beta Kappa, with a degree in computer science and electrical engineering.

After graduating from Princeton, Bezos worked on Wall Street in the computer science field: first at a company called Fitel, helping build a network for international trade; then at Bankers Trust, where he became a vice president; and later at D. E. Shaw & Co.

In 1994, Bezos decided to take part in the Internet gold rush, developing the idea of selling books to a mass audience through the Internet. According to a possibly apocryphal legend, Bezos decided to found Amazon after making a cross-country drive with his wife from New York to Seattle, writing up the Amazon business plan on the way, and then set up the original company in his garage.

Bezos decided to name the company Amazon after the world’s largest river and reserved the domain name Amazon.com. The company was incorporated in the state of Washington and began service in July 1995. The initial Web site was text-heavy and gray; it wasn’t pretty, and it didn’t even list book publication dates and other key information. But that didn’t concern Madrona Venture Group’s Tom Alberg, who invested $100,000 in Amazon in 1995.

By the fourth month in business, the company was selling more than 100 books a day.

Bezos succeeded in creating more than a bookstore: he created an online community. The site was revolutionary early on for allowing average consumers to create online product reviews. It drew not only people who wanted to buy books, but also those who wanted to research them before buying.

The company began as an online bookstore, but gradually incorporated a number of products and services into its shopping model, either through development or acquisition.

In 1997, Amazon added music CDs and movie videos to the Web site, which many considered to be a wise move designed to complement the company’s expansive book collection. Soon Amazon added five more product categories—toys, electronics, software, video games, and home improvement.

Time magazine with Jeff Bezos

In 1999, Time magazine put Bezos on its cover and named him Person of the Year, recognizing the company’s success in popularizing online shopping.

Amazon’s initial business plan was unusual: the company did not expect a profit for four to five years, and the strategy was effective. In 1996, its first full fiscal year in business, Amazon generated $15.7 million in sales, a figure that would increase by 800 percent the following year. In May 1997 Amazon.com issued its initial public offering of stock. The company successfully survived the dot-com bubble and remains profitable now. Revenues increased thanks to product diversification and an international presence: $3.9 billion in 2002, $5.3 billion in 2003, $6.9 billion in 2004, $8.5 billion in 2005, and $10.7 billion in 2006.

In 2007 Amazon launched the Kindle, its remarkable series of e-book readers, and in 2011 it entered the tablet business with the Kindle Fire.

The site amazon.com attracted over 900 million visitors annually by 2011, and in 2012 the company had over 56,000 employees. Amazon’s annual revenue for 2021 was $469.8 billion, a 21.7% increase over 2020, with about 1,468,000 employees.

In 2000, Bezos founded a human spaceflight startup company called Blue Origin. He is known for his attention to business process details, wanting to know everything from contract minutiae to how he is quoted in all Amazon press releases.

David Filo and Jerry Yang

David Filo and Jerry Yang

At the beginning of 1994, two Ph.D. candidates in Electrical Engineering at Stanford University, Jerry Chih-Yuan Yang (born 6 November 1968, in Taipei, Taiwan) and David Robert Filo (born 20 April 1966, in Wisconsin), were looking for a single place to find useful Web sites and a way to keep track of their personal interests on the Internet. As they couldn’t find such a tool, they decided to create their own. Thus the now ubiquitous web portal and global brand Yahoo! began as a student hobby and evolved into a site that has changed how people communicate with each other and find and access information.

Filo and Yang started realizing their project in a campus trailer in February 1994, and before long they were spending more time on their home-brewed lists of favorite links than on their doctoral dissertations. Eventually, Jerry and David’s lists became too long and unwieldy, so they broke them out into categories. When the categories became too full, they developed subcategories; thus the core concept behind Yahoo was born.

The Web site started out as Jerry and David’s Guide to the World Wide Web but eventually received a new moniker with the help of a dictionary. Filo and Yang selected the name Yahoo because they liked the general definition of the word (which comes from Gulliver’s Travels by Jonathan Swift, where the Yahoos are a race of brutish creatures): rude, unsophisticated, uncouth. Later the name Yahoo was popularized as an acronym for Yet Another Hierarchical Officious Oracle.

Yahoo! first resided on Yang’s student workstation, Akebono (at the URL akebono.stanford.edu/yahoo), while the software was lodged on Filo’s computer, Konishiki; both machines were named after legendary sumo wrestlers.

To their surprise, Jerry and David soon found they were not alone in wanting a single place to find useful Web sites. Before long, hundreds of people were accessing their guide from well beyond the Stanford trailer. Word spread from friends to what quickly became a significant, loyal audience throughout the closely-knit Internet community. Yahoo! celebrated its first million-hit day in the fall of 1994, translating to almost 100,000 unique visitors.

The Yahoo! domain was created on 18 January 1995. Due to the torrent of traffic and enthusiastic reception Yahoo! was receiving, the founders knew they had a potential business on their hands. In March 1995, the pair incorporated the business and met with dozens of Silicon Valley venture capitalists, looking for financing. They eventually came across Michael Moritz of Sequoia Capital, the well-regarded firm whose most successful investments included Apple Computer, Atari, Oracle, and Cisco Systems. Sequoia Capital agreed to fund Yahoo! in April 1995 with an initial investment of nearly $2 million.

Like many other web search engines, Yahoo started as a web directory, but soon diversified into a web portal and a search engine.

Realizing their new company had the potential to grow quickly, the founders began to shop for a management team. They hired Tim Koogle, a veteran of Motorola, as chief executive officer and Jeffrey Mallett, founder of Novell’s WordPerfect consumer division, as chief operating officer. After securing a second round of funding in the fall of 1995, Yahoo held its initial public offering in April 1996, raising $33.8 million with a total of 49 employees.

The earliest known archived Yahoo! website dates from October 1996.

At its peak, Yahoo! Inc. was a leading global Internet communications, commerce, and media company, offering a comprehensive branded network of services to more than 350 million individuals each month worldwide. It provided internet communication services (such as Yahoo! Messenger and Yahoo! Mail), social networking services and user-generated content (such as My Web, Yahoo! Personals, Yahoo! 360°, Delicious, Flickr, and Yahoo! Buzz), and media content and news (such as Yahoo! Sports, Yahoo! Finance, Yahoo! Music, Yahoo! Movies, Yahoo! News, Yahoo! Answers, and Yahoo! Games). Headquartered in Sunnyvale, California, Yahoo! had offices in Europe, Asia, Latin America, Australia, Canada, and the United States.

In June 2017, Verizon Communications Inc. completed the acquisition of Yahoo. David Filo and Jerry Yang had both become billionaires long before. As of 2022, Filo’s net worth is $3.2 billion, while Yang’s is $2.6 billion.

Matthew Gray

Matthew Gray

The brilliant idea of the World Wide Web was conceived in the spring of 1989 by Tim Berners-Lee, a physicist at CERN, but it didn’t gain widespread popular use until the remarkable NCSA Mosaic web browser was introduced at the beginning of 1993.

In the spring of 1993, just months after the release of Mosaic, Matthew Gray, who studied physics at the Massachusetts Institute of Technology (MIT) and was one of the three members of the Student Information Processing Board (SIPB) who set up the site www.mit.edu, decided to write a program called the World Wide Web Wanderer to systematically traverse the Web and collect sites. Wanderer was first functional in the spring of 1993 and became the first automated Web agent (spider, or web crawler). It certainly did not reach every site on the Web, but it was run with a consistent methodology, hopefully yielding consistent data on the growth of the Web.

Matthew was initially motivated primarily by the desire to discover new sites, as the Web was still a relatively small place: in early 1993 the total number of websites in the world was about 100, and by June 1995, even with the phenomenal growth of the Internet, one in every 270 machines on the Internet was a Web server. As the Web started to grow rapidly after 1993, the focus quickly changed to charting that growth. The first report, compiled from the data collected by Wanderer (see the table below), covers the period from June 1993 to June 1995.

Results Summary
Month/Year    Nr. of Web sites    % of .com sites    Hosts per Web server
06/93              130                 1.5                 13,000
12/93              623                 4.6                  3,475
06/94            2,738                13.5                  1,095
12/94           10,022                18.3                    451
06/95           23,500                31.3                    270
01/96          100,000                50.0                     94

Wanderer was written in Perl, and while crawling the Web it generated an index called Wandex, the first web database. Initially, the Wanderer counted only Web servers, but shortly after its introduction it started to capture URLs as it went along.
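The core loop of a Wanderer-style robot is simple: fetch a page, extract its links, record what was found, and follow the links. The real Wanderer was written in Perl; the sketch below is a minimal Python rendition of the idea, with the fetch function injected so it can run against a toy, in-memory "web" (the example URLs are invented):

```python
# A minimal sketch of a Wanderer-style web robot: traverse the Web
# breadth-first from a start URL and build a "Wandex"-like index of
# every page visited and the links found on it.
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collects the href targets of all <a> tags on one page."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, fetch, limit=100):
    """Breadth-first traversal from start_url.

    `fetch` is a callable url -> HTML string, injected so the crawler
    can be exercised without a network.  Returns a mapping from each
    visited URL to the absolute links found on it.
    """
    wandex = {}
    queue = [start_url]
    while queue and len(wandex) < limit:
        url = queue.pop(0)
        if url in wandex:
            continue                     # already visited
        parser = LinkExtractor()
        parser.feed(fetch(url))
        wandex[url] = [urljoin(url, link) for link in parser.links]
        queue.extend(wandex[url])
    return wandex

# A tiny in-memory "web" standing in for real HTTP fetches:
pages = {
    "http://a.example/": '<a href="/b">b</a> <a href="http://c.example/">c</a>',
    "http://a.example/b": "no links here",
    "http://c.example/": "",
}
wandex = crawl("http://a.example/", lambda url: pages.get(url, ""))
```

Counting distinct servers, as the early Wanderer did, is then just a matter of taking the set of hostnames (via `urllib.parse.urlparse`) over the visited URLs. The `limit` parameter is also the fix for the problem described below: an unbounded robot can hammer the same servers indefinitely.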

Matthew Gray’s Wanderer created quite a controversy at the time, partially because early versions of the program ran rampant through the Web and caused a noticeable network performance degradation. This degradation occurred because it would access the same page hundreds of times a day. The Wanderer soon amended its ways, but the controversy over whether spiders were good or bad for the Internet remained for some time.

Wanderer certainly was not the Internet’s first search engine (that was Alan Emtage’s Archie), but it was the first web robot, and with its Wandex index it clearly had the potential to become the first general-purpose Web search engine, years before Yahoo and Google. Matthew Gray, however, does not make this claim and has always stated that this was not its purpose. In any case, Wanderer inspired a number of programmers to follow up on the idea of web robots.

From 2001 to 2006, Matthew Gray was CTO of Newbury Networks, Inc., a provider of wireless location technology. As of 2022, he has spent more than 15 years as a software engineer and engineering director at Google.

Marc Andreessen and Eric Bina

Eric Bina (left) and Marc Andreessen (right)

NCSA Mosaic of Marc Andreessen (born 9 July 1971) and Eric Bina (born 25 October 1964) was neither the first web browser (the first was the WorldWideWeb of Berners-Lee) nor the first graphical web browser (it was preceded by the lesser-known Erwise and ViolaWWW), but it was the web browser credited with popularizing the World Wide Web. Its clean, easily understood user interface, reliability, Windows port, and simple installation all contributed to making it the application that opened up the Web to the general public.

In 1992 Marc Andreessen was a student in Computer Science and a part-time assistant at the NCSA (National Center for Supercomputing Applications) at the University of Illinois. His position at NCSA allowed him to become quite familiar with the Internet and the World Wide Web, which was beginning to take off.

NCSA Mosaic beta version

There were several web browsers available then, but they ran on Unix machines, which were rather expensive. This meant that the Web was mostly used by academics and engineers who had access to such machines. The user interfaces of the available browsers also tended to be unfriendly, which further hindered the spread of the WWW. So Marc decided to develop a browser that was easier to use and more graphically rich.

Later in 1992, Andreessen recruited his colleague from NCSA and the University of Illinois, Eric Bina (who had received a Master’s in Computer Science from the University of Illinois in 1988), to help with his project. The two worked tirelessly; Bina remembers that they would work three to four days straight, then crash for about a day. They called their new browser Mosaic. It was much more sophisticated graphically than other browsers of the time. Like them, it was designed to display HTML documents, but it included new formatting tags like center.

The most important feature was the image (img) tag, which allowed images to be embedded in web pages. Earlier browsers allowed the viewing of pictures, but only as separate files; NCSA Mosaic made it possible for images and text to appear on the same page. It also featured a graphical interface with clickable buttons that let users navigate easily, and controls that let them scroll through text with ease. Another innovation was its form of hyperlinks: in earlier browsers, hypertext links had reference numbers that the user typed in to navigate to the linked document, whereas Mosaic let the user simply click on a link to retrieve it.
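The page model described above, inline images and clickable links mixed with text, can be illustrated with a small Python sketch that tokenizes a snippet of early-1990s-style HTML into the flat "render list" a browser would lay out (the class name and sample page are illustrative, not from Mosaic):

```python
# Sketch of what Mosaic's page model made possible: inline images and
# clickable hyperlinks interleaved with text on the same page.
from html.parser import HTMLParser

class PageTokenizer(HTMLParser):
    """Flattens a page into (kind, ...) tokens in document order."""
    def __init__(self):
        super().__init__()
        self.render_list = []      # what a browser would lay out, in order
        self._link_target = None   # href of the <a> we are inside, if any
    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img":
            # Mosaic's innovation: the image is part of the page flow
            self.render_list.append(("image", attrs.get("src")))
        elif tag == "a":
            self._link_target = attrs.get("href")
    def handle_endtag(self, tag):
        if tag == "a":
            self._link_target = None
    def handle_data(self, data):
        text = data.strip()
        if not text:
            return
        if self._link_target:
            # a clickable hyperlink: the text shown plus its target
            self.render_list.append(("link", text, self._link_target))
        else:
            self.render_list.append(("text", text))

page = ('<p>Welcome to <a href="http://info.cern.ch/">the Web</a>!'
        '<img src="logo.gif"></p>')
tok = PageTokenizer()
tok.feed(page)
```

After feeding the sample page, `tok.render_list` holds plain text, one link token carrying its target URL, and one inline image token, exactly the three kinds of content Mosaic could mix on a single page.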

NCSA Mosaic for Mac

NCSA Mosaic was also a client for earlier protocols such as FTP, NNTP, and Gopher.

In January 1993, Mosaic was posted for free download on NCSA’s servers and became immediately popular: more than 5,000 copies were downloaded each month, and within weeks tens of thousands of people had the program. The original version was for Unix, but Andreessen and Bina quickly put together a team to develop PC and Mac versions, which were released in the late spring of the same year. With Mosaic now available for more popular platforms, its popularity skyrocketed. More users meant a bigger Web audience; the bigger audience spurred the creation of new content, which in turn further increased the audience on the Web, and so on. As the number of Web users grew, Mosaic remained the browser of choice, so its distribution grew accordingly.

NCSA Mosaic for Windows

By December 1993, Mosaic’s growth was so great that it made the front page of the New York Times business section. The article concluded that Mosaic was perhaps “an application program so different and so obviously useful that it can create a new industry from scratch”. NCSA administrators were quoted in the article, but there was no mention of either Andreessen or Bina. Marc realized that when he was through with his studies, NCSA would take over Mosaic for itself. So when he graduated in December 1993, he left and moved to Silicon Valley in California.

Later Andreessen and Jim Clark, the founder of Silicon Graphics, incorporated Mosaic Communications Corporation and developed the famous Netscape browser and server products.

NCSA Mosaic won multiple technology awards, including being named 1993 Product of the Year by InfoWorld magazine and 1994 Technology of the Year by Industry Week magazine.

NCSA discontinued support for Mosaic in 1997, shifting its focus to other research and development projects.

Alan Emtage

Alan Emtage

The Internet’s first search engine, the Archie system, was created in 1989 by Alan Emtage, a student at McGill University in Montreal, Canada. Emtage (born 27 November 1964, in Barbados) conceived the first version of Archie as a pre-Web internet search engine for locating material in public FTP archives.

A native of Barbados, Alan attended high school at Harrison College from 1975 to 1983 (in 1981 he became the owner of a Sinclair ZX81 with 1K of memory), where he graduated at the top of his class, winning the Barbados Scholarship. Alan was always crazy about computers; while a student at Harrison College he tossed around a number of other career choices, including meteorology and organic chemistry, but chose computer science.

In 1983 Alan entered McGill University in Montreal, Canada, to study for a Bachelor’s degree in computer science. In 1987 he continued his study for a Master’s degree, which he obtained in 1991. He was part of the team that brought the first Internet link to eastern Canada (and only the second link in the country) in 1986.

In 1989, while a student working as a systems administrator for the School of Computer Science, Alan conceived and implemented the original version of the Archie search engine, the world’s first Internet search engine and the start of a line that leads directly to today’s giants Yahoo and Google. (The name Archie stands for “archive” without the “v”, not the kid from the comics.)

Working as a systems administrator, Alan was responsible for locating software for the students and staff of the faculty. The necessity for searching for information became the mother of invention.

He decided to develop a set of programs that would go out and look through the repositories of software (public anonymous File Transfer Protocol (FTP) sites) and build, essentially, an index of the available software: a searchable database of filenames. One thing led to another, and when word got out that he had an index available, people started writing in and asking if he could search the index on their behalf.

As a result, rather than doing it himself, Alan wrote software that allowed people to come in and search the index themselves. That was the beginning.
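Archie’s core idea, collecting the file listings of anonymous FTP sites into one searchable database of filenames, can be sketched in a few lines. The hostnames and paths below are invented for illustration; the real Archie fetched recursive directory listings over FTP on a schedule:

```python
# A minimal sketch of Archie's core idea: merge the file listings of
# many anonymous FTP sites into one database of filenames, then answer
# substring searches against it.  (Hypothetical sites and paths.)

def build_index(site_listings):
    """site_listings: {hostname: [paths]} -> list of (filename, host, path)."""
    index = []
    for host, paths in site_listings.items():
        for path in paths:
            filename = path.rsplit("/", 1)[-1]
            index.append((filename, host, path))
    return index

def search(index, term):
    """Case-insensitive substring match on filenames: where can I
    download something whose name contains `term`?"""
    term = term.lower()
    return [(host, path) for filename, host, path in index
            if term in filename.lower()]

listings = {
    "ftp.example.edu": ["/pub/gnu/emacs-18.59.tar.Z", "/pub/tex/dvips.tar.Z"],
    "archive.example.ca": ["/mirrors/gnu/Emacs-18.57.tar.Z"],
}
index = build_index(listings)
```

A query like `search(index, "emacs")` then returns every (host, path) pair holding a matching file, which is exactly the answer an Archie user wanted: not the file’s contents, just where to FTP it from.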

It seems that the administration of the university was the last to find out about what Alan had done. As Alan remembered: “We had no permission from the school to provide this service; and as a matter of fact, the head of our department found out about it for the first time by going to a conference. Somebody went up to him and said they really wanted to congratulate him for providing this service and he graciously smiled, said ‘You’re welcome’ and went back to McGill and said ‘What the hell is all of this? I have no idea what they’re talking about’.
“That was a once-in-a-lifetime opportunity. It was largely being in the right place at the right time with the right idea. There were other people who had similar ideas and were working on similar projects, I just happened to get there first.”

Archie is considered the original search engine, and many of the techniques that Emtage and the people who worked with him on Archie came up with are basically the same techniques that Google, Yahoo!, and all the other search engines use.

Later Alan and his colleagues developed various versions that allowed them to split up the service so that it would be available at other universities rather than taxing the facility at McGill.

In 1992, Emtage and the computer scientist Peter Deutsch formed Bunyip Information Systems, the world’s first company expressly founded for and dedicated to providing Internet information services; its licensed commercial version of the Archie search engine was used by millions of people worldwide.

Emtage was a founding member of the Internet Society and went on to create and chair several Working Groups at the Internet Engineering Task Force, the standard-setting body for the Internet. Working with other pioneers such as Tim Berners-Lee, Marc Andreessen, Mark McCahill (creator of Gopher), and Jon Postel, Emtage co-chaired the Uniform Resource Identifier (URI) Working Group which created and codified the standard for Uniform Resource Locators (URLs).

Emtage is currently Chief Technical Officer at Mediapolis, Inc., a web engineering company in New York City. Besides computers, traveling and photography are his passions. He has been skydiving in Mexico, hang-gliding in Brazil, diving in Fiji, hot-air ballooning in Egypt, and white-water rafting in the Arctic Circle.

Jarkko Oikarinen

Nothing endures but change.
Heraclitus (540 BC-480 BC)

Jarkko Oikarinen

It was already mentioned on this site that the first chat program in the world, EMISARI, was designed in 1971 by Murray Turoff. EMISARI, however, was used mainly for government and educational purposes and never became popular. The program that gave birth to the modern, extremely popular chat movement was the Internet Relay Chat (IRC) of Jarkko “WiZ” Oikarinen.

During the summer of 1988, Jarkko Oikarinen (born 16 August 1967, in Kuusamo, Finland), a second-year student in the Department of Electrical Engineering at the University of Oulu, Finland, was working at the university’s Department of Information Processing Science, where he administered the department’s Sun-3 Unix server “tolsun.oulu.fi”, which ran a public-access BBS (bulletin board system) called OuluBox.

Server administration didn’t take all his time, so Jarkko started writing a communication program meant to make OuluBox a little more usable. Partly inspired by Jyrki Kuoppala’s “rmsg” program for sending messages to people on other machines, and partly by Bitnet Relay Chat, Oikarinen decided to improve the existing multi-user chat program on OuluBox, called MultiUser Talk (MUT), which had a bad habit of not working properly and was itself based on the basic talk program then available on Unix computers. He called the resulting program IRC (for Internet Relay Chat) and first deployed it at the end of August 1988.

When IRC started occasionally having more than 10 users (the first IRC server was the above-mentioned tolsun.oulu.fi), Jarkko asked some friends at Tampere University of Technology and Helsinki University of Technology to start running IRC servers to distribute the load. Some other universities soon followed. Markku Järvinen made the IRC client program more usable by including support for Emacs editor commands, and before long IRC was in use across Finland on the Finnish network FUNET, and then on the Scandinavian network NORDUNET.

In 1989 Oikarinen managed to get an account on the legendary machine “ai.ai.mit.edu” at MIT, from which he recruited the first IRC users outside Scandinavia and arranged to start the first IRC server outside Scandinavia. Two other IRC servers soon followed: “orion.cair.du.edu” at the University of Denver and “jacobcs.cs.orst.edu” at Oregon State University. Their administrators emailed Jarkko and obtained connections to the Finnish IRC network to create transatlantic links, and the number of IRC servers began to grow quickly across both North America and Europe.

IRC became well known to the general public around the world in 1991, when its use skyrocketed as a lot of users logged on to get up-to-date information on Iraq’s invasion of Kuwait, through a functional IRC link into the country that stayed operational for a week after radio and television broadcasts were cut off.

The Internet Relay Chat Protocol was defined in May 1993, in RFC 1459, by Jarkko Oikarinen and Darren Reed. It was mainly described as a protocol for group communication in discussion forums called channels, but it also allows one-to-one communication via private messages, as well as chat and data transfers via Direct Client-to-Client.
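The wire format RFC 1459 defines is line-oriented and simple: each message is one line of the form `[:prefix] COMMAND params [:trailing]`, where the optional trailing parameter (after " :") may contain spaces. A minimal Python sketch of that parsing rule, with an example PRIVMSG as a server might relay it (channel name and message text invented):

```python
# A sketch of the RFC 1459 wire format: one message per line,
# "[:prefix] COMMAND params [:trailing]".

def parse_message(line):
    """Split a raw IRC line into (prefix, command, params)."""
    prefix = None
    if line.startswith(":"):            # optional origin prefix
        prefix, line = line[1:].split(" ", 1)
    if " :" in line:                    # trailing param may contain spaces
        line, trailing = line.split(" :", 1)
        params = line.split() + [trailing]
    else:
        params = line.split()
    return prefix, params[0], params[1:]

# A message to a channel, as a server would relay it to clients:
raw = ":WiZ!jto@tolsun.oulu.fi PRIVMSG #report :Hello from Oulu"
prefix, command, params = parse_message(raw)
```

Here the prefix identifies the sender (nick!user@host), the command is PRIVMSG, and the parameters are the target channel and the spoken text; the same parser handles server commands like PING or NICK, which carry no prefix.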

As of the end of 2009, the top 100 IRC networks served more than half a million users at a time, with hundreds of thousands of channels, operating on a total of some 1,500 servers worldwide. Since 2016, a new standardization effort has been under way in a working group called IRCv3, which focuses on more advanced client features such as instant notifications, better history support, and improved security. As of June 2021, there are 481 different IRC networks known to be operating.

Tim Berners-Lee

For every problem there is a solution that is simple, clean, and wrong.
Henry Louis Mencken

Tim Berners-Lee

Tim Berners-Lee used to say: “I just had to take the hypertext idea and connect it to the TCP and DNS ideas and—ta-da!—the World Wide Web.” As simple as it may seem, how did this “simple” invention happen?

In March 1989, Timothy John “Tim” Berners-Lee (born 8 June 1955 in London), a physicist and computer nerd at CERN (the European Particle Physics Laboratory in Geneva, Switzerland), submitted to his boss a proposal for an information management system, the prototype of the now ubiquitous World Wide Web. The boss was not very impressed. “Vague, but exciting” were the words he wrote on the proposal, thus unofficially allowing Berners-Lee to continue his work on the WWW (actually, the name World Wide Web was only settled on the next year; in 1989 Berners-Lee called his system Mesh).

Berners-Lee already had experience with hypertext systems, including his own. During his first stay at CERN, in 1980, inspired by the MEMEX of Vannevar Bush, Project Xanadu of Ted Nelson, and NLS of Douglas Engelbart, he proposed a project for a new document system based on the concept of hypertext, designed to facilitate sharing and updating information among researchers. The system was called ENQUIRE and was written in the Pascal programming language on a NORD-10 (a 16-bit minicomputer from Norsk Data, running the SINTRAN III operating system); later the program was ported to the PC and then to VMS (see the original proposal for ENQUIRE).

The inspiration for Berners-Lee came from the frustrating fact that there was a lot of different data on different computers, but it was not connected at all. Because people at CERN came from universities all over the world, they brought with them all types of computers. Not just Unix, Mac, and PC: there were all kinds of big mainframe computers and medium-sized computers running all sorts of software. One had to log on to different computers to get at the data, and sometimes even had to learn a different program on each computer. So finding out how things worked was a really difficult task.

The first web server
The first web server, CERN HTTPd, running on a NeXTcube workstation

Berners-Lee wanted to write some programs to take information from one system and convert it so that it could be inserted into another system, and he had to do this more than once. The big question was: “Can’t we convert every information system so that it looks like part of some imaginary information system that everyone can read?” And that became the WWW.

In 1990, with the help of his colleague from CERN—Robert Cailliau, Berners-Lee produced a revision of the system, which was accepted by his manager. Berners-Lee coded the first Web browser, which also functioned as an editor (the name of the program was WorldWideWeb, running on the NeXTSTEP operating system), and the first Web server, CERN HTTPd (short for HyperText Transfer Protocol daemon), both running on a NeXTcube workstation (see the nearby images).
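
The protocol spoken between that first browser and CERN HTTPd was extremely simple: an HTTP/0.9 client sent a single request line, `GET /path`, and the server replied with raw HTML and closed the connection. The Python sketch below mimics that exchange over the loopback interface (the port number and page content are, of course, invented for illustration):

```python
import socket
import threading
import time

def serve_once(port):
    # Toy server in the spirit of HTTP/0.9: read the one-line request,
    # answer with raw HTML (no status line, no headers), then close.
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    conn, _ = srv.accept()
    request = conn.recv(1024).decode()        # e.g. "GET /hello\r\n"
    path = request.split()[1]
    conn.sendall(f"<h1>You asked for {path}</h1>".encode())
    conn.close()
    srv.close()

def fetch(port, path):
    # HTTP/0.9 client: send one request line, read until the server closes.
    sock = socket.create_connection(("127.0.0.1", port))
    sock.sendall(f"GET {path}\r\n".encode())
    chunks = []
    while True:
        data = sock.recv(4096)
        if not data:
            break
        chunks.append(data)
    sock.close()
    return b"".join(chunks).decode()

threading.Thread(target=serve_once, args=(8090,), daemon=True).start()
time.sleep(0.2)                               # give the server time to bind
page = fetch(8090, "/hello")
print(page)
```

The absence of status codes and headers is the point: in 1990–1991 the whole protocol fit in a few lines, which made it easy to implement everywhere.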

The first Web site in the world (with the DNS name info.cern.ch) was put online on 6 August 1991.

The first web browser
The first web browser, WorldWideWeb

In the 1990s the WWW gradually became the prevalent technology on the Internet and a global information medium. Today, the Web and the Internet allow connectivity from literally everywhere on earth—even ships at sea and in outer space. In 2011 the number of websites exceeded 300 million. Currently (November 2022), there are around 1.14 billion websites in the world (although only some 17% of them are active, and the other 83% are inactive).

At the beginning of the century, new ideas for sharing and exchanging content ad hoc, such as weblogs and RSS, rapidly gained acceptance on the Web. This new model for information exchange (called Web 2.0) primarily features DIY (Do It Yourself, i.e. built without the aid of experts or professionals) user-edited and user-generated websites, as well as various other content like video and audio media (YouTube), microblogging (Twitter), etc.

What is the future of the WWW? What will the next version, Web 3.0, look like?

Tim Berners-Lee’s vision of the future Web as a universal medium for data, information, and knowledge exchange is connected with the term Semantic Web. In 1999 he wrote: “I have a dream for the Web in which computers become capable of analyzing all the data on the Web—the content, links, and transactions between people and computers. A Semantic Web, which should make this possible, has yet to emerge, but when it does, the day-to-day mechanisms of trade, bureaucracy, and our daily lives will be handled by machines talking to machines. The intelligent agents people have touted for ages will finally materialize.” And later, in 2006, he added: “People keep asking what Web 3.0 is. I think maybe when you’ve got an overlay of scalable vector graphics—everything rippling and folding and looking misty—on Web 2.0 and access to a semantic Web integrated across a huge space of data, you’ll have access to an unbelievable data resource.”

Berners-Lee says internet access should be a human right, as common as electricity. Now he is working on a new data-sharing standard called Solid that could help deliver on the initial vision, and a company, Inrupt, to help commercialize this vision. He cautions that this new Web 3.0 vision for giving back control of our data differs wildly from current Web3 efforts built on less efficient blockchains. Core features of Solid include support for the following:
• Global single sign-on.
• Global access control.
• Universal API centered around people instead of apps.

Vint Cerf and Bob Kahn

Vinton Gray “Vint” Cerf

The most popular network protocol in the world, the TCP/IP protocol suite, was designed in the first half of the 1970s by two DARPA scientists—Vint Cerf and Bob Kahn, the two people most often called the fathers of the Internet.

Vinton Gray “Vint” Cerf (born 23 June 1943 in New Haven, Connecticut) obtained his B.S. in Math and Computer Science at Stanford University in 1965 and went to IBM, where he worked for some two years as a systems engineer, supporting QUIKTRAN—a system to make time-shared computing more economical and widely available for scientists, engineers, and businessmen.

In 1967 he left IBM to attend graduate school at the University of California, Los Angeles (UCLA), where he earned his master’s (in 1970) and Ph.D. (in 1972) in Computer Science. During his graduate student years, he studied under Professor Gerald Estrin and worked in Leonard Kleinrock’s data packet networking group that connected the first two nodes of the ARPANET, the predecessor of the Internet. He worked as a Principal Programmer, participating in a number of projects, including the ARPANet Network Measurement Center, a video graphics project involving a computer-controlled 16 mm camera, and the development of ARPANet host protocol specifications.

While at UCLA, he also met Bob Kahn, who was working on the ARPANet hardware architecture at BBN (Bolt Beranek and Newman).

Robert Elliot Kahn

Robert Elliot Kahn (born 23 December 1938) received a B.E.E. degree from the City College of New York in 1960, and M.A. and Ph.D. degrees from Princeton University in 1962 and 1964, respectively.

After graduation, he received a position on the Technical Staff at Bell Labs and then became an Assistant Professor of Electrical Engineering at MIT. He took a leave of absence from MIT to join Bolt Beranek and Newman, where he was responsible for the system design of the Arpanet, the first packet-switched network, and was involved in the building of the Interface Message Processor.

In 1972, Kahn was hired by Larry Roberts at the IPTO to work on networking technologies, and in October he gave a demonstration of an ARPANet network connecting 40 different computers at the International Computer Communication Conference. The demonstration made the network widely known for the first time to people from around the world and convinced communication engineers that packet switching was a real technology.

At the IPTO, Kahn worked on an existing project to establish a satellite packet network and initiated a project to establish a ground-based radio packet network. These experiences convinced him of the need for the development of an open-architecture network model, where any network could communicate with any other independent of individual hardware and software configuration. Kahn, therefore, set four goals for the design of what would become the Transmission Control Protocol (TCP):
• Network connectivity. Any network could connect to another network through a gateway.
• Distribution. There would be no central network administration or control.
• Error recovery. Lost packets would be retransmitted.
• Black box design. No internal changes would have to be made to a network to connect it to other networks.
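
The error-recovery goal can be illustrated with a toy stop-and-wait scheme: the sender keeps retransmitting a packet until it gets through. The Python sketch below simulates this over a randomly lossy channel (the loss rate, payloads, and acknowledgement model are invented for illustration — real TCP uses sequence numbers, timers, and sliding windows):

```python
import random

def send_reliably(packets, loss_rate=0.3, seed=7):
    # Stop-and-wait sketch: retransmit each packet until the (simulated)
    # lossy channel lets it through and it can be acknowledged.
    rng = random.Random(seed)
    delivered, attempts = [], 0
    for payload in packets:
        while True:
            attempts += 1
            if rng.random() < loss_rate:   # packet lost in transit
                continue                   # timeout fires -> retransmit
            delivered.append(payload)      # got through; ack assumed
            break
    return delivered, attempts

data = ["SYN", "hello", "world", "FIN"]
received, tries = send_reliably(data)
```

However many packets are lost, everything eventually arrives, in order — at the cost of extra transmissions. That trade-off is exactly what Kahn’s “error recovery” goal accepts.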

In the spring of 1973, Vinton Cerf joined Kahn on the project. They started by conducting research on reliable data communications across packet radio networks, factored in lessons learned from the Networking Control Protocol, and then created the next-generation Transmission Control Protocol (TCP), the standard protocol used on the Internet today.

In the early versions of this technology, there was only one core protocol, which was named TCP. In fact, these letters didn’t even stand for what they do today (Transmission Control Protocol), but for Transmission Control Program. The first version of this predecessor of modern TCP was written in 1973, then revised and formally documented in RFC 675, Specification of Internet Transmission Control Program, from December 1974.

What is the current status of the Internet Protocol Suite (commonly known as TCP/IP)?

It is the set of communications protocols used for the Internet and other similar networks. It is named after two of the most important protocols in it: the Transmission Control Protocol (TCP) and the Internet Protocol (IP), which were the first two networking protocols defined in this standard. Today’s IP networking represents a synthesis of several developments that began to evolve in the 1960s and 1970s, namely the Internet and LANs (Local Area Networks), which emerged in the mid- to late-1980s, together with the advent of the World Wide Web in the early 1990s.

The design of the network included the recognition that it should provide only the functions of efficiently transmitting and routing traffic between end nodes and that all other intelligence should be located at the edge of the network, in the end nodes. Using such a simple design, it became possible to connect almost any network to the ARPANET, irrespective of its local characteristics. One popular saying has it that TCP/IP, the eventual product of Cerf and Kahn’s work, would run over “two tin cans and a string.”

A computer or device called a router (a name changed from a gateway to avoid confusion with other types of gateways) is provided with an interface to each network, and forwards packets back and forth between them. Requirements for routers are defined in RFC 1812.
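
Forwarding in such a router boils down to a longest-prefix-match lookup: of all routing-table entries whose prefix contains the destination address, the most specific (longest) one wins. A minimal Python sketch, using the standard `ipaddress` module (the prefixes and gateway names here are invented):

```python
import ipaddress

# Hypothetical routing table: prefix -> next hop (all names invented).
ROUTES = {
    "10.0.0.0/8": "gateway-A",
    "10.1.0.0/16": "gateway-B",
    "0.0.0.0/0": "default-gateway",
}

def next_hop(destination):
    # Longest-prefix match: among all prefixes that contain the
    # destination address, forward to the most specific one.
    addr = ipaddress.ip_address(destination)
    matching = [ipaddress.ip_network(p) for p in ROUTES
                if addr in ipaddress.ip_network(p)]
    best = max(matching, key=lambda net: net.prefixlen)
    return ROUTES[str(best)]

print(next_hop("10.1.2.3"))   # the /16 beats the /8
print(next_hop("8.8.8.8"))    # only the default route matches
```

Real routers do this lookup in hardware for every packet, but the selection rule is the same.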

DARPA then contracted with BBN Technologies, Stanford University, and the University College London to develop operational versions of the protocol on different hardware platforms. Four versions were developed: TCP v1, TCP v2, a split into TCP v3 and IP v3 in the spring of 1978, and then stability with TCP/IP v4—the standard protocol still in use on the Internet today.

Cerf (left) and Kahn being awarded the Presidential Medal Of Freedom by Former President Bush in 2005
Cerf (left) and Kahn being awarded the Presidential Medal Of Freedom by Former President Bush in 2005

In 1975, a two-network TCP/IP communications test was performed between Stanford and University College London (UCL). In November 1977, a three-network TCP/IP test was conducted between sites in the US, UK, and Norway. Several other TCP/IP prototypes were developed at multiple research centers between 1978 and 1983. The migration of the ARPANET to TCP/IP was officially completed on 1 January 1983, when the new protocols were permanently activated.

In March 1982, the US Department of Defense declared TCP/IP as the standard for all military computer networking. In 1985, the Internet Architecture Board held a three-day workshop on TCP/IP for the computer industry, attended by 250 vendor representatives, promoting the protocol and leading to its increasing commercial use.

The Internet Protocol Suite, like many protocol suites, may be viewed as a set of layers. Each layer solves a set of problems involving the transmission of data and provides a well-defined service to the upper layer protocols based on using services from some lower layers. Upper layers are logically closer to the user and deal with more abstract data, relying on lower-layer protocols to translate data into forms that can eventually be physically transmitted.

The TCP/IP model consists of four layers, as it is described in RFC 1122. From lowest to highest, these are—the Link Layer, the Internet Layer, the Transport Layer, and the Application Layer. It should be noted that this model was not intended to be a rigid reference model into which new protocols have to fit in order to be accepted as a standard.
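
The layering can be pictured as successive encapsulation: each layer wraps the payload handed down from the layer above with its own header. A schematic Python sketch of the four RFC 1122 layers (the header fields and addresses are purely illustrative — real headers are binary structures, not text):

```python
def encapsulate(message):
    # Each layer wraps the data from the layer above with its own "header".
    app = f"HTTP|{message}"                          # Application layer
    transport = f"TCP|dst_port=80|{app}"             # Transport layer
    internet = f"IP|dst=192.0.2.1|{transport}"       # Internet layer
    link = f"ETH|dst=aa:bb:cc:dd:ee:ff|{internet}"   # Link layer
    return link

frame = encapsulate("GET /index.html")
print(frame)
```

On the receiving host the process runs in reverse: each layer strips its own header and hands the payload up, which is why the layers can evolve independently.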

Almost all operating systems in use today, including all consumer-targeted systems, include a TCP/IP implementation.

Bob Thomas

The most important feature of a computer virus is its ability to self-replicate (in a sense, every self-replicating program can be called a virus). The idea of self-replicating programs can be traced back as early as 1949, when the mathematician John von Neumann envisioned specialized computers, or self-replicating automata, that could build copies of themselves and pass on their programming to their progeny.
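
Von Neumann’s idea survives today in miniature as the quine: a program whose entire output is its own source code, the textual analogue of a self-reproducing automaton. A classic Python example:

```python
# A quine: running this program prints exactly this program.
# The string holds a template of the whole source; %r re-inserts the
# string into itself, quotes and escapes included.
src = 'src = %r\nprint(src %% src)'
print(src % src)
```

Feeding the output back into the interpreter produces the same output again, indefinitely — replication without any copying machinery outside the program itself.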

If a computer virus has the ability to self-replicate over a computer network, e.g. the Internet, it is called a worm. It is not known who created the first self-replicating program in the world, but it is clear that the first worm (the so-called Creeper worm) was created by the BBN engineer Robert (Bob) H. Thomas, probably in 1971.

The company BBN Technologies (originally Bolt, Beranek and Newman) was a high-technology company, based in Cambridge, Massachusetts, which played an extremely important role in the development of packet switching networks (including the ARPANET and the Internet).

A number of well-known computer luminaries have worked at BBN, including Robert Kahn, Joseph Licklider, Marvin Minsky, Ray Tomlinson, John McCarthy, etc. Among them was the researcher Robert H. (Bob) Thomas, working in a small group of programmers who were developing a time-sharing system called TENEX, which ran on the Digital PDP-10 (see the lower image).

The first PDP-10 model (KA10) in a large configuration: disk drives (lower left) and printer (lower right) in the foreground, CPU and DECtapes right center, memory cabinets to its left, and a swapping disk and controller to their left, then data channels and 9-track tapes to its right. The Teletype console is sitting on the floor near the control panel. Just above the control panel and below the bottom DECtape drive is the paper-tape reader/punch.

Let’s clarify: the Creeper wasn’t a real virus, not only because the notion of a computer virus didn’t exist in the 1970s, but also because it was actually an experimental self-replicating program, intended not to do damage but to demonstrate a mobile application.

Creeper was written in PDP-10 assembly, ran on the old TENEX operating system (TENEX is the OS that saw the first email programs, SNDMSG and READMAIL, in addition to the use of the “@” symbol in email addresses), and used the ARPANET (predecessor of the current Internet) to infect DEC PDP-10 computers running TENEX. Creeper caused infected systems to display the message “I’M THE CREEPER : CATCH ME IF YOU CAN.”

The Creeper would start to print a file, but then stop, find another TENEX system, open a connection, pick itself up and transfer to the other machine (along with its external state, files, etc.), and then start running on the new machine, displaying its message. The program rarely, if ever, actually replicated itself; rather, it jumped from one system to another, attempting to remove itself from previous systems as it propagated forward. Thus Creeper didn’t install multiple instances of itself on several targets; it just moseyed around the network. (The techniques developed in Creeper were later used in McROSS, the Multi-computer Route Oriented Simulation System, an air traffic simulator, to allow parts of the simulation to move across the network.)
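
Creeper’s source does not survive publicly, but the move-don’t-copy behavior described above can be caricatured in a few lines of Python. Everything here is hypothetical — hosts are modeled as plain sets, and the hops are just a loop — but it shows the key property: at most one instance ever exists on the network at a time.

```python
def creeper_hop(network):
    # Hop from host to host, erasing the local copy before each move,
    # so only a single instance is ever running anywhere.
    banner = "I'M THE CREEPER : CATCH ME IF YOU CAN"
    messages = []
    for host in network:
        host.add("creeper")                             # arrive and start running
        messages.append(banner)                         # announce itself
        assert sum("creeper" in h for h in network) == 1
        host.discard("creeper")                         # clean up, then move on
    return messages

hosts = [set(), set(), set()]
messages = creeper_hop(hosts)
```

Contrast this with a true worm, which would `add` itself to the next host without the final `discard` — leaving a growing population of copies behind.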

It is uncertain how much damage (if any) the Creeper actually caused. Most sources say the worm was little more than an annoyance. Some sources claim that Creeper replicated so many times that it crowded out other programs, but the extent of the damage is unspecified. In any case, it immediately revealed the key problem with such worm programs: how to control them.

The Creeper program led to further work, including a version by a colleague of Thomas, Ray Tomlinson, that not only moved through the net but also replicated itself at times. To counter this enhanced Creeper, in 1972 the Reaper program was created, which moved through the net, replicating itself, and tried to find copies of Creeper and log them out. Thus, if Creeper was the first virus, then Reaper was the first anti-virus software.


Note from the author (Georgi Dalakov):
After composing this article in February 2010, I reached out to Mr. Ray Tomlinson, the creator of Reaper, with an appeal for comment. He was kind enough to provide one, as follows:
Your description agrees with my recollection, though I think it was somewhat later than 1970 and I don’t recall some of the details you give, such as printing a file as evidence of its presence on a particular machine (though it must have done something to indicate its progress). I do recall making the modifications you indicate and thinking of it as the escalation of an arms race.
There was a server (or daemon or background process) (RSEXEC, I think it was called) running on the individual machines that supported this activity. That is, the creeper application was not exploiting a deficiency of the operating system. The research effort was intended to develop mechanisms for bringing applications to other machines with intention of moving the application to the most efficient computer for its task. For example, it might be preferable to move the application to the machine having the data (as opposed to bringing the data to the applications). Another use would be to bring the application to a machine that might have spare cycles because it is located in a different timezone where local users are not yet awake. The CREEPER application was a demonstration of such a mobile application.