Boolean logical operators for complex searches
It is a good idea to read the tutorial below before attempting any complex
combination of Boolean operators; otherwise, badly formed queries are likely
to end in frustration at the search engines. When the combination is done
correctly, the search engine will return highly relevant documents in response.
Geeks for Geeks
Introduction to Boolean Logic
https://www.geeksforgeeks.org/introduction-to-boolean-logic/
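As an illustration only, not tied to any particular search engine, the short
Python sketch below treats each Boolean operator as a set operation over the
sets of documents that contain each keyword. The keywords and document names
are invented for the example.
# A minimal sketch: AND, OR and NOT expressed as Python set operations
# over a toy index of keyword -> documents containing that keyword.
index = {
    "dixie":   {"doc1", "doc2", "doc5"},
    "history": {"doc2", "doc3", "doc5"},
    "recipes": {"doc4", "doc5"},
}

# dixie AND history -> intersection of the two sets
print(index["dixie"] & index["history"])            # doc2, doc5

# dixie OR recipes -> union of the two sets
print(index["dixie"] | index["recipes"])            # doc1, doc2, doc4, doc5

# dixie AND NOT recipes -> set difference
print(index["dixie"] - index["recipes"])            # doc1, doc2

# (dixie OR history) AND NOT recipes -> grouping changes the result
print((index["dixie"] | index["history"]) - index["recipes"])   # doc1, doc2, doc3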
Cameras in the World Wide Web
A view of the world in real time. A road in Cincinnati or Chicago, a street
in New York or Nashville, showing what those who are there at this very
instant are also seeing. Only a suggestion for newcomers about this feature:
Please do not be a voyeur.
Earth Cam
Cameras in many geographic locations
http://search.earthcam.com/
Cinematography
Originally recorded on chemical film or on analogue magnetic video and later
converted to digital video, or recorded directly in digital form, thousands of
motion pictures are available: short reels as well as full-length features,
silent or with sound, in black and white or in colour. A treasure for lovers
of the good old films, or for those who prefer images or sounds of more recent
release.
British Pathe
Cinematographic documentaries as far back as 1896
http://www.britishpathe.com/
Tubidy Mobile
Part of the You Tube collection, in formats for mobile devices
http://tubidy.mobi/
Content safety
The Internet, and in particular the World Wide Web, is not always the clean
place that it ought to be, where reliable information can be found for free,
as in a huge, worldwide public library. Regrettably, there are certain
individuals who publish materials of dubious reputation and unsavoury taste.
Like me, for instance.
To protect the innocent against such unqualifiable elements as myself, there
are some institutions whose task is to label the content of published
documents as "safe", "maybe safe, maybe not", or "definitely unsafe".
Documents are submitted to those institutions by their own authors (the
persons or organisations who created the documents) or by their current
maintainers. The labelling is based mainly on concepts of morality and on the
language used, although different labelling institutions apply different
criteria. The idea is aimed primarily at children, or at other persons who do
not tolerate immorality or harsh language.
The author of a document thus labelled is expected to insert a PICS label
(Platform for Internet Content Selection) in the head of the document, where
search engines or user agents are expected to recognise the label and act
accordingly. The PICS label cannot be inserted in the body of the document,
although a warning text may also be shown in the body. Depending on the label,
and on the features and configuration of the operating system or the user
agent, the document will be shown with a warning, or it will be refused
altogether. Some administrators go too far, refusing essentially innocent
documents through their systems just because the PICS label merely suggests a
possibly 'controversial' topic.
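A minimal Python sketch, assuming a local HTML file named sample.html, of how
a user agent or an administrator might look for such a label in the head of a
document before deciding what to do with it. The decision logic shown is an
invented example, not the behaviour of any real filtering product.
# Scan an HTML document for a PICS label carried by a meta element of the
# form <meta http-equiv="PICS-Label" content="...">.
from html.parser import HTMLParser

class PICSLabelFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.labels = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("http-equiv", "").lower() == "pics-label":
            self.labels.append(attrs.get("content", ""))

finder = PICSLabelFinder()
with open("sample.html", encoding="utf-8") as f:   # sample.html is an example
    finder.feed(f.read())

if finder.labels:
    print("PICS label found:", finder.labels[0])
else:
    print("No PICS label: treat the document according to local policy.")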
Counters of visitors
A counter of visitors counts how many times an Internet document has been
opened for reading (that is, how many times a copy of the document has been
requested from its host server). A counter may also show some statistical
information, for example the origin of the request (redirected by a hyper
link or not), the date and time when the document or each of its pages was
opened (therefore showing which pages have been read and the time spent on
each), which out-bound hyper links the visitor has followed, if any, and many
other details, such as the computers used by those visitors, their operating
systems, their user agents, the colour depth and screen resolution of their
video boards, the country of origin of the request, the human language
preferred by the requesting computer, and plenty of other technical data.
All those facts can be used to redirect the visitor to an appropriate copy of
the document (in a different human language, for instance), or they can make
the Web Master aware of the general preferences of visitors, and thus help him
manage the content of the document accordingly. Automatic redirection should
not be used alone; internal links activated by manual operation should always
appear on each page of the document. The reason is that a human operator may
prefer to read in a certain language, German for example, but at the moment he
is using a computer located in Italy, or with Italian configured as the
preferred language. He should have an easy means of reaching the German page
without needing to re-configure the computer (or to move the computer and
himself to Germany !!!), which may prove difficult or impossible to do.
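A minimal Python sketch of the automatic half of that arrangement: it inspects
the Accept-Language header sent by the visitor's user agent and chooses a copy
of the document. The page names and supported languages are invented for the
example; the manual links to every language version must still appear on each
page, as explained above.
# Choose a language version of a document from the Accept-Language header.
AVAILABLE = {"en": "index_en.html", "de": "index_de.html", "it": "index_it.html"}

def choose_page(accept_language, default="en"):
    # Accept-Language looks like "it-IT,it;q=0.9,en;q=0.8"
    for part in accept_language.split(","):
        lang = part.split(";")[0].strip().lower()
        primary = lang.split("-")[0]
        if primary in AVAILABLE:
            return AVAILABLE[primary]
    return AVAILABLE[default]

# A visitor whose computer prefers Italian is sent to the Italian copy, but the
# page itself must still offer plain links to the German and English copies.
print(choose_page("it-IT,it;q=0.9,en;q=0.8"))   # index_it.html
print(choose_page("ja,ko;q=0.8"))               # index_en.html (fallback)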
The company below, Extreme Tracking, provides a wealth of technical detail
that makes sophisticated Web Masters happy.
Extreme Tracking
Counter of visitors with full statistics
http://extremetracking.com/
For redirection of a document request to another Uniform Resource Locator,
please see the appropriate section further below in this page:
Redirection of Uniform Resource Locator
Courses through the Internet
Courses taken through a computer connected to the Internet differ from courses
taken in a classroom only in the absence of the physical presence of teacher
and students; in everything else they are pretty much the same: there are
subjects programmed for study, a method for teaching them, a schedule,
tests and examinations, et cetera.
The advantage is precisely the freedom of not needing to go physically to
school. Teacher and students "meet" through the computer at scheduled times,
while being free to study at their own pace at other times. However, they
MUST keep a discipline of study. The institution will not tolerate students
coming and going at their whim; it will promptly eliminate rascals from the
on-line course. Therefore, persons lacking the necessary time or resources
(a reliable Internet connection) should abstain from starting the course.
Massachusetts Institute of Technology
Courses through the Internet
http://ocw.mit.edu/
Duplicated documents in the Internet
Documents published in the Internet or in any of its parts (like the World
Wide Web) belong to their authors, who are free to keep them, modify them, or
remove them, just as in any other distribution medium. Of course, other
authors are justified in quoting them, and it may even be justified to
reproduce parts of them, but in those cases it is correct to cite the
source and give credit to those who took the trouble of writing the texts.
The aim of the Internet is to create a source of information that is free for
everyone to use, not to plagiarise the efforts of creative writers and give
false credit to shameless individuals who are too lazy or too stupid to
make any creations of their own.
Copy Scape
Duplicated or plagiarised documents in the Internet
http://www.copyscape.com/
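A minimal Python sketch of the idea behind such services, using only the
standard library: it measures how similar two texts are, which can flag a
suspected copy. The two sample texts are invented for the example; a real
service compares a document against pages gathered from the whole Web.
# Compare two texts and report a similarity ratio between 0.0 and 1.0.
from difflib import SequenceMatcher

original = "The aim of the Internet is to create a source of information that is free for everyone to use."
suspect  = "The aim of the Internet is to create a source of information free for everybody to use."

ratio = SequenceMatcher(None, original, suspect).ratio()
print(f"Similarity: {ratio:.2f}")   # a value close to 1.0 suggests probable copying

if ratio > 0.8:
    print("Likely duplicate: check the source and the credit given.")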
Games in the Internet
There are thousands of games available in the different protocols of Internet,
mostly in the World Wide Web. Games for all tastes: action, adventure, board
(such as chess or draughts), cards, puzzles, tests of knowledge in different
fields, and many others. Some can be played on-line against the host computer
or against human opponents, others require a user agent capable of executing
Java, Java Script or other languages for dynamic content, still others can be
downloaded and played off-line against the local computer, or using the local
computer screen as a reference for two or more human players. All computer
platforms and operating systems have games available for them in greater or
lesser amount. Even the oldest or most limited device can execute some games.
Of the vast repertoire in existence, only two games have been chosen here.
The first is the oldest and most international game of intelligence, the King
of Games: chess. The hyper-link below is useful for playing on-line.
Chess
Readers are invited to visit the Chess Page of CSS Dixieland, which includes
other hyper-links related to chess, chess history, advice, and sample games:
cssdixieland_chess.html
Dungeons
There is a kind of adventure games often known as 'dungeons', after the name
of the earliest of its kind, 'Dungeons and Dragons', a game played on a table,
or for main frame computers, that Wizards of the Coast released in the late
1960's and in the 1970's. Another game of a similar concept called 'Rogue'
was created by Glenn Wichman and Michael Toy in 1985, developed by Kenneth
Arnold and Michael Toy in 1986 for BSD, and subsequently ported to other Unics
systems by Timothy Stoehr. Rogue is a game that outputs text and simple graphics,
as opposed to the text-only output that was predominant at that time in adventure
games. The original BSD Curses screen-handling library was put together by Kenneth
Arnold, mainly to support graphic games, and the development of Rogue popularised
the use of the Curses library for various other purposes, not only for games. Since
then, Curses has become one of the most important application libraries in Unics.
Other 'dungeon' games such as Omega, Larn, Angband, Moria, Hack or Nethack,
all took off from the inspiration provided by Rogue. The game Diablo for
Microsoft Windows, though much more intensive in graphics, has a very similar
play logic. There are also several 'dungeon' games for mobile devices, as well
as collective games for several players, connected through a computer network.
The game known as 'Hack' was created by Jay Fenlason, helped by some other
programmers. Hack was further developed by Andries Brouwer, and ported to IBM
Personal Computer or compatibles by Don Kneller, with the name of 'PC-Hack'.
From the latter game Net-Hack was created by Mike Stephenson, and from this
several other games have evolved. Net-Hack is one of the oldest games still
available on the Internet today, having been ported to many platforms until
version 3.4.3 of 2005, but currently most platforms have been abandoned. Thus,
from DOS to Unics, almost any computer HAD a Net-Hack port for it, but this is
not so with the most recent versions of Net-Hack (see the notes after the
hyper-links).
Net-Hack is a graphic game of adventure that can be played either on-line or
off-line. This outstanding and complex game puts the player in a labyrinth of
many levels of depth, full of monsters, magic objects, weapons, armour, and
countless other items. The goal is to find a magic amulet and escape from the
labyrinth. The player starts by choosing one of several characters, and must
care for his hero while advancing towards his goal. He will find all sorts of
strange things that may be either beneficial or harmful. He must know what
decisions to take. The game is turn-based, like chess, without time limit.
The Net-Hack Development Team, led by Mister Mike Stephenson, is very
supportive and will answer all kinds of questions related to the game. Players
of Nethack can read the instructions and dive directly into the game, or they
can also consult Web documents or Usenet groups with a dedicated community of
players who offer plenty of good advice. Sometimes the advice may disclose
'secrets' that the player may prefer to discover by himself. This kind of
advice is often known as 'spoiler'. Net-Hack Web document, maintained by
Mister Kenneth Lorber:
Net Hack
The most complex graphic adventure 'dungeon' game for a single player, on-line or off-line
http://www.nethack.org/
There are two ways to contact the Development Team of Net Hack. One way is
the Contact page at the above Web document:
Net Hack Contact page
For comments, suggestions, or reports of problems with Net Hack on any of the
computer platforms to which the game has been ported
http://www.nethack.org/common/contact.html
The other way is by electronic post. Preferable for programmers who wish to
send a patch to the Net Hack source code, or for other major collaborations:
devteam AT nethack DOT org
The string "AT" must be substituted by the character "@" and the string "DOT"
by the character "." without surrounding spaces and in single horizontal line.
This is to thwart automatic harvesting of links (often for spamming purposes).
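For those who prefer to let the computer do the substitution, a tiny Python
sketch of the transformation just described:
obfuscated = "devteam AT nethack DOT org"
address = obfuscated.replace(" AT ", "@").replace(" DOT ", ".")
print(address)   # prints the de-obfuscated electronic post address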
Notes on the most recent versions of Net-Hack:
Until version 3.4.3 of 2005 Net-Hack was a very portable game. It could be
played in a wide array of computer architectures, with different processors
and operating systems, such as Amiga, Atari, DOS, Be OS, VMS, Unics, Windows
or Macintosh. Unfortunately, there was a fake release called 'Net-Hack 3.5',
unauthorised by the official Dev Team of Mister Mike Stephenson. The release
of the spurious '3.5' forced the Dev Team to prepare an official release as
'Net-Hack 3.6', and make it available in 2015 from the official Net-Hack Web
document of Mister Kenneth Lorber, hyper linked above. However, version 3.6
was put together in a rather hasty way, after ten years without any official
release, and only ports to Apple Macintosh and Microsoft Windows are for now
available. DOS, Unics, and all the other platforms, have been forsaken. The
source code of Net-Hack is freely available, and there is one page in the Web
document where persons interested in porting the game to other platforms can
announce their releases, but so far, only Macintosh and Windows ports exist
in the official Web document hyper linked above.
Mister Lorber lamely 'justifies' the decision of the Dev Team by declaring
that the other ports 'are not used' at present. Quite wrong, and an offence
to the enthusiasts of those other platforms. There are computer lovers who
treasure their old machines, still in working condition, or who use their old
operating systems. Relics such as CP/M in Intel 8086 (or even in Intel 8080)
are known to have some devoted users even today. They have Internet presence.
Luckily, the vast world of Computing is not limited to the ubiquitous Windows,
or to the Macintosh made 'for the rest of them', for the mass of ignorants who
know next to nothing about computers. There are experienced computer operators
who prefer to work with DOS, Unics, or other platforms, for advanced tasks
such as programming or for other serious purposes, and who WILL NOT surrender
to the market pressure of popular hardware and software vendors. It is a pity
that the Dev Team had not understood this when labelling those platforms as
'not used'. Only old versions of Net-Hack are now left for 'the rest of us'.
Fortunately, there are some operating systems, or some of their distributions,
that have gone to the pains of building Net-Hack from the sources,
making it available as an executable. In some cases this executable presents
small images known as 'graphic tiles', which represent the hero, the monsters,
or the other objects that appear in the game. In other cases the executable
presents only textual characters. The game is the same with graphic tiles or
with textual characters. More important are the options defined at the time
of translating from source to executable, or the options defined by the
system administrator, or the options defined by the player himself. There
are MANY such options for Net-Hack. The player is advised to read the manual
and make his own decisions, or to experiment with the many possibilities.
Groups of interest in the Internet
Finding a group of people interested in a particular subject may yield better
results than querying a search engine about that subject, because a group
of enthusiasts will often be eager to welcome other persons interested in the
subject that they know and love, and therefore the members of the group
will usually be ready to help the newcomer, answering questions and doubts
with plenty of relevant information. It may take some time to find the
right persons, though, and some diplomacy will be needed in order not to
offend sensitive individuals. But the results almost always pay off handsomely.
Google Groups
Groups with interest on specific subjects
http://groups.google.com/
Besides Google Groups, there are many other Internet resources with a similar
purpose: to put individuals who share a common interest in contact with one
another. This can be done in a forum or bulletin board, where people can write
ideas or comments, and read what others have written. It can also be done by
subscription to an electronic mailing list, where participants receive and
send information. The Usenet Newsgroups Protocol is entirely devoted to
this kind of interaction. In fact, part of Google Groups is taken from Usenet
and transferred to the World Wide Web, since the two protocols are not compatible.
Warning: in many of those resources, particularly in Usenet, there are almost
always individuals who post irrelevant content, some of them even offensive
content. Most groups (but not all of them) have one or more moderators, whose
task is to edit or remove inappropriate content. Before writing to any group
it is recommended to read the rules for that group. They may appear as "Terms
of Service", "Conditions of Use", "Frequently Asked Questions", or similar
names. It is also good to read for some time before writing anything, so as
to perceive the relevance and tone that predominate in that group. In case of
doubt, it is advisable to request help from a moderator or from other members.
Hackers and crackers
The term "hacker" began to be used inside the Massachusetts Institute of
Technology during the early 1960's. Originally it was applied to computing
enthusiasts who were extremely knowledgeable on the inner workings of a
computer, and who applied this expertise for naughty but inoffensive actions.
With the demise of the "time sharing system" and the spread of microcomputers
during the late 1970's and the 1980's, the label of "hacker" was increasingly
applied by the public media to individuals who stealthily performed mischiefs
of different kinds with computers, ranging from the merely humorous to the
downright criminal. From the modifying of Web documents of some company or of
some government agency without their permission, to the collecting of secret
personal or corporate information for doubtful purposes. Therefore, the word
"hacker" became almost synonymous with "computer intruder", although that was
not properly its original meaning. "Cracker" is a more correct term when
applied to a computer intruder.
In the section of Reference and Dictionaries further below there is a hyper
link to Wikipedia, an encyclopaedia that can be edited by its readers.
Wikipedia has a wealth of information on almost any imaginable topic, like few resources
in the Internet have. One of those topics regards hackers. Some interconnected
pages of Wikipedia explain this concept in detail, and offer many hyper links
to expand knowledge on this exciting subject. The hyper link below also
offers excellent lines of research into the whole topic, including its
historical and technical aspects. A word of warning for would-be "hackers":
many governments have increasingly enacted laws against certain forms of
computer crime. What may be taken as a joke without any consequences, may
also be taken as an illegal activity liable to prosecution. Besides, the
technical sophistication that is needed for "hacking sport" is very high
(depending on the kind of hacking activity and against whom it is directed).
At any rate, doing it seriously is not a week-end pastime, it is a full-time business.
Hack in the Box
A "Jack in the Box" with detailed information on hackers
http://www.hackinthebox.org/
Hosts (servers) of Internet documents
A Gopher or Web document, or any other kind of data set, has to be stored
on some computer in the world. If that document or data set must be made
available through the Internet, through a Bulletin Board Service, or through
any other kind of network, then the storing computer must be connected to the
Internet or to that other network, ideally 24 hours a day and 7 days a week,
except for maintenance. The computer that stores the document or data set is
called a 'host' or a 'server'. A number of host servers accept Web documents
for free, and at least one host server (Super Dimension Fortress) accepts
Gopher documents for free or for a symbolic contribution. This hosting may be
done in exchange for advertisements that are inserted in the hosted document,
or in exchange for posting information in a forum or bulletin board, or for
some other forms of collaboration or contribution, but such participations
may be only suggested or encouraged, not always required. Other host servers
are paid, and obviously they do not insert any advertising and do not require
any extra efforts from the Gopher or Web Master.
Some hosts are reliable, some are not (they may be off-line too often). Some
hosts will only show documents coded in static HTML, others will correctly
show dynamic documents that use proprietary, copy-righted scripting languages,
such as ASP of Microsoft Corporation or Java of Sun Microsystems. Some hosts
will only allow small documents below 50 Kilobytes or even less, others allow
mastodontic documents ranging into several Gigabytes (the maximum limit for
each data set is usually below 500 Megabytes). Some hosts will be very
restrictive when it comes to allotting band-width, other hosts are extremely
generous in giving lots of band-width. Some have plenty of resources or may
install those resources upon request of a relevant number of their users,
others offer only the most basic resources. Because all those characteristics
are never found together in the same host, it is for the Gopher or Web Master
who owns the document to decide which host is the right one to choose. What
works well for a certain Master or a certain document may not work for another.
Free host servers for World Wide Web documents
The next hyper link lists many host servers that accept storing Web
documents without payment. There is plenty of relevant information about
those hosts, including favourable or unfavourable comments written by some of
their users. Readers of CSS Dixieland are invited to send to those lists
their own comments about the hosts that they use.
Free Web Hosts
Servers where Web documents can be hosted
http://www.free-webhosts.com/
The next hyper link has a different purpose from the other. The forte of Web
Page Test is to test the availability of a Web document from the server where
that document is hosted, measuring retrieval time of the document from the
host. This helps a Web Master in checking efficiency of server and routing.
Unfortunately, Web Page Test does not directly work for Gopher servers.
Web Page Test
Tests the availability of a document from its host
http://www.webpagetest.org/
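A minimal Python sketch of the same kind of measurement, using only the
standard library: it times how long a host takes to deliver a document. The
URL is an example; a real test such as Web Page Test repeats the measurement
and also looks at name resolution, connection time and rendering.
# Time the retrieval of one document from its host.
import time
import urllib.request

url = "http://www.example.com/"            # example URL
start = time.perf_counter()
with urllib.request.urlopen(url, timeout=30) as response:
    body = response.read()
elapsed = time.perf_counter() - start

print(f"Retrieved {len(body)} bytes from {url} in {elapsed:.2f} seconds")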
Free host servers for Gopher documents
As said above, Super Dimension Fortress is one of the VERY FEW, or perhaps the
only free Gopher host that still remains as of 2023. All the others are gone with
the wind. Information is given in the CSS Dixieland page devoted to the Gopher
Protocol. The page can be visited by activating the following internal link:
cssdixieland_gopher.html
Hyper links pointing to Internet documents
Hyper links shown in a Gopher or Web document that point to other documents
are called "outbound hyper links". It is easy to see them, just by looking at
the document that lists them (or looking at its source code, in case that any
hyper links might be hidden under very small text or under text of the same
colour as the background). Not so easy is the opposite direction, "inbound
hyper links", that is, hyper links that point to a given document from a
number of unknown documents. This matters, because a hyper link pointing to
another document is a positive judgment about the value, originality or
exclusivity of the target document. In past years a resource named Widex
existed for this purpose, but unfortunately it is now completely gone.
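A minimal Python sketch of listing the outbound hyper links of a Web document
by reading its source code, which also reveals links hidden under tiny text or
text of the same colour as the background. The file name page.html is an
example; inbound links cannot be found this way, as explained above.
# List every href of every anchor element found in the source code.
from html.parser import HTMLParser

class OutboundLinkLister(HTMLParser):
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    print(value)

with open("page.html", encoding="utf-8") as f:
    OutboundLinkLister().feed(f.read())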
Identification of domains
We may wish to identify the person or institution possessing a given domain
of the Internet, if we find an interesting document in that domain, or for
another reason. Some countries keep strict records of the owner of every
domain under their jurisdiction, while other countries are looser about domain
identification, so there is no guarantee that the owner of a domain will
always be identified, but the list offered by the hyper link below is
probably the best place to start that kind of search.
Better Who Is
Names and Internet Protocol numbers
http://www.betterwhois.com/
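For the curious, a minimal Python sketch of a raw WHOIS query over the wire:
the WHOIS protocol is simply a line of text sent to port 43 of a WHOIS server.
The server name and the domain below are examples; in practice one is often
referred from a general server to the registry responsible for the domain.
# Send a WHOIS query and print whatever the server returns.
import socket

def whois(domain, server="whois.iana.org", port=43):
    with socket.create_connection((server, port), timeout=30) as sock:
        sock.sendall((domain + "\r\n").encode("ascii"))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

print(whois("example.com"))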
Number of views of Internet documents
It is good to know how our document is ranking in the Internet; it greatly
boosts the ego of a Web Master. It may also be interesting, although perhaps
not so personally boosting, to know the ranking of documents belonging to
other Masters. Alexa used to provide that service, but it stopped doing so
years ago.
Operating systems: Introduction
There was here a long explanation on the concept of operating system, giving
historical and technical details that were really beyond the scope of the
introductory purpose of this section. Readers seriously interested in the
subject have plenty of information in Internet, or in paper books. Of the
many operating systems that exist or have existed, two genealogies have been
chosen by CSS Dixieland as deserving a complete view: the DOS genealogy (in
particular Free-DOS) and the Unics genealogy (such as BSD, Minix, Linux,
Solaris, and others). For those operating systems, please see the pages on
Free-DOS and on Unics, accessible through the internal links below. For general
information on operating systems, including DOS and Unics, continue reading this page.
cssdixieland_freedos.html
cssdixieland_unics.html
Operating systems: Comparison
There is below a good list of operating systems to choose from, all of them
free. Some of them are open source for all or part of the system, at least
for the kernel or kernels, and with licence to modify the system by rewriting
and recompiling the sources in order to suit the needs or preferences of
technically sophisticated persons. Note that permission to modify the system
does not automatically imply permission to redistribute the modified system.
Keep in mind, however, that some systems may be incomplete or unreliable, or
they may only work with obsolete or uncommon hardware, or they may have a too
limited or too specialised availability of software. Read the comments written
by other people about a certain system. If necessary, post Your own specific
questions in a relevant board. At any rate, give the unknown system a try in
an empty, formatted hard disk or another storage volume, or else
boot-strap the system from a floppy disk, compact disk, or another removable
volume, before thinking of partitioning the hard disk or of any other
actions that might compromise the future operability of Your computer.
Free Operating Systems
Detailed comparison and information on many free systems
http://www.freeos.com/compare/
Absolutely THE OLDEST Unics open source system in common use today is Minix
(meaning 'Minimal Unix'), created in 1987 by Mister Andrew Tanenbaum of Vrije
Universiteit at Amsterdam. Minix has a microkernel, not a monolithic kernel
as is the case in most other Unics systems. Minix versions 1.x of 1987 and
2.x of 1997 were partly intended as teaching tools, showing how an operating
system could be built, but Minix version 3.x of 2006 is a fully functional
system, with important enhancements made in 2009. Minix version 3.3 of
2014 is distributed as a 'live' (a boot-strappable) compact disk, that is,
the system can be used with no more than a compact disk in the computer.
Even without a hard disk, a Universal Serial Bus storage device, or another
boot-strappable volume, a fully functional command line is available directly
from the compact disk. Minix includes software made by BSD, by the Free
Software Foundation, and by other third parties, with the corresponding
licences. Minix is not part of BSD, but many executables made for BSD work
well in Minix and vice-versa. The Minix Web document includes an editable wiki.
Minix
The oldest open source Unics operating system, distributed as a 'live' compact disk
http://www.minix3.org/
From the technical point of view (albeit not from the legal point of view) BSD
is one of the many variants of Unics. The acronym "BSD" means "Berkeley
Software Distribution". Most current BSD systems derive from the BSD releases
of Unics developed by the Computer Systems Research Group at the University of
California at Berkeley between 1975 and 1993, and from the 386 BSD operating
system, which was the first port of BSD to the Intel 80386 processor. In the
following years, modifications from the 4.4 BSD Lite system (the last
release from the CSRG at Berkeley) were integrated into other BSD systems.
Operating systems: For old or limited computers
Not every computer can load mastodontic operating systems like the 64-bit
Windows Vista, which takes several Gigabytes of space. Luckily enough, those
mastodonts are not in the least necessary. In the hands of a knowledgeable
operator, many things can be done with a much more modest system, like for
instance the systems included here. Some of them take less than a hundred
Kilobytes of memory, including the kernel and a few of the most necessary
application programmes. This is the case of MS-DOS 1.x, first marketed in 1981
by the same company that later produced Windows (Microsoft Corporation), twenty
years before they fancied the dubious idea of building mastodontic systems.
All Boot Disks
Microsoft operating systems ready for boot-strapping. Several versions of MS-DOS
are collected here
http://www.allbootdisks.com/
The system of choice for using the maximum resources of an old or limited
computer, even with a small Random Access Memory, a small video memory, and
without hard disc (using the system as floppy disc-only or compact disc-only
system), is without any doubt Free-DOS, the system for the computer enthusiast
who really knows. Free-DOS is the most complete DOS system that has existed.
Free-DOS was created in 1994 as an open source alternative to the MS-DOS
system of Microsoft Corporation. The first stable version of Free-DOS was
released in 2006, after many beta test versions. Free-DOS works only with the
architecture of the IBM Personal Computer or compatible, using Intel x86
processor or compatible. Besides the hyper link to Free-DOS given below, this
document of CSS Dixieland has a page entirely dedicated to Free-DOS, with
detailed technical information on many aspects of this wonderful system:
cssdixieland_freedos.html
External hyper link to the Free-DOS Project. The official distribution is a
boot-strappable compact disk. Persons wishing to boot-strap from a floppy disk
should look for the Odin distribution from the Free-DOS project, or for the
Fuzoma distribution of educational software (a hyper link to Fuzoma is given
in the Free-DOS page of CSS Dixieland, above).
Free-DOS
Command line operating system of 16 bits
http://www.freedos.org/
Operating systems: Programmes for CP/M
In 1973 there were arithmetic calculators and some other electronic devices
that used microprocessors, but the smallest general-purpose computers were
the so-called 'minicomputers' of the PDP series, built by Digital Equipment
Corporation. The 'minicomputers' were the size of a wardrobe (but 'mini' when
compared to the huge mainframe computers), they cost thousands of Dollars,
and they required operation and maintenance by experts. At about that time
one of the first microcomputers appeared, the Scelbi-8 H, built around the
8-bit Intel 8008 microprocessor. Only a small number of units of the
Scelbi-8 H were ever sold, but it gave ideas to other pioneers, among whom
were prominent the members of the 'Homebrew Computer Club', an innovative
group of sophisticated computer enthusiasts led by Mister Lee Felsenstein.
Also in 1973, Mister Gary Kildall (who at the time worked at Intel) developed
an operating system of 8 bits that he initially called 'Control Programme for
Microprocessors', abbreviated as CP/M. He initially offered CP/M to Intel,
but the corporation showed only a limited interest. Intel was interested in
selling its microprocessors to factories that needed to control industrial
processes and assembly lines, and it had only a very secondary interest in
selling microprocessors for small electronic devices. The company did not see
the tremendous potential of the future personal computers, at a time when the
only computers in existence were big, costly, and complicated machines. After
three years trying to convince Intel, Mister Kildall formed his own company,
the 'Intergalactic Digital Research', which marketed CP/M in 1976 with the
name of 'Control Programme for Microcomputers' or 'Control Programme Monitor'.
In 1974 another landmark was planted by Mister Jonathan Titus, the 'Personal
Minicomputer Mark-8', a microcomputer of 8 bits also based on Intel 8008. It
had 2 Kilobytes of Random Access Memory, extensible to 16 Kilobytes. It only
accepted machine code (in numbering base of two), and it was sold just as
paper plans, without physical parts. Only enthusiasts of Electronics had the
skills for building a computer themselves, and working with it in machine
code, but it represented a step ahead. The real breakthrough came in the
following year, with the Altair microcomputer.
In June 1975 Mister Edward Ted Roberts of Micro Instrumentation and
Telemetry Systems released the Altair 8800, microcomputer of 8 bits based on
Intel 8080. It had only 256 bytes of Random Access Memory and 64 Kilobytes of
storage, input by manual switches and output by a panel of lights. Units began
to be sold in 1975 at a price of 300 Dollars as a kit with all its physical
parts and assembling instructions, or else sold as a finished microcomputer
at the price of 400 Dollars. Initially it accepted only code in numbering base
of two for all of its input or output operations. Later it added elements to
reach 7 Kilobytes of Random Access Memory and accepted Basic programming
language, adapted to Altair 8800 by Messieurs Paul Allen and William Bill
Gates of Micro-Soft Corporation. Another team incorporated as peripherals a
reader of perforated paper tape and a keyboard. This modified microcomputer
was sold at 500 Dollars.
Also in 1975 the University of California in San Diego released the P-System,
an operating system of 8 bits, which was marketed by Softech Microsystems.
It had several versions.
Finally, as mentioned above, Mister Gary Kildall formed Intergalactic Digital
Research and in 1976 released the CP/M operating system. In a short time and
until the early 1980's most microcomputers had the CP/M system or were
programmable in Basic, resident in Read Only Memory. Some microcomputers had
the P-System, and a few top of the line had one of the variants of Unics.
DR-DOS was an improvement on CP/M made by Digital Research. Being initially a
strong competitor against PC-DOS and MS-DOS, it was made open source after the
demise of Digital Research.
Then the sad story comes. Around 1980 Mister Tim Paterson of Seattle
Computer Products had built an operating system that he called QDOS, or
'Quick and Dirty Operating System'. It was of 16 bits, and strongly modelled
on the 8-bit CP/M of Digital Research. It happened
then that the 'Big Blue' (affectionate name given to IBM, the International
Business Machines Corporation) had decided to enter into the microcomputer
business. The old multi-national corporation was building the 'Personal
Computer', based on Intel microprocessor, and it needed a suitable operating
system for it. The original 'Personal Computer' of IBM had very limited
hardware; it could not execute Unics at all.
Thus, the Big Blue began looking for a solution. The company tried to deal for
the P-System, but the University was not interested. Senior IBM executives
also contacted Digital Research, but Mister Kildall did not even attend them,
he would not negotiate under the condition of absolute secrecy imposed by the
IBM representatives. Then they desperately turned to Microsoft. Messieurs
Paul Allen and William Bill Gates had already written Basic interpreters for
Altair, for Apple II and other computers, and for CP/M operating system. They
had a deal with Mister Kildall, under which Digital Research concentrated on
the operating system and Microsoft on application programmes.
However, when Mister Gates learned that Mister Kildall had not even received
the dignified old gentlemen of the Big Blue, Mister Gates declared that the
agreement between Digital Research and Microsoft had come to an end. Thus, in
1980 Microsoft bought the QDOS Quick and Dirty Operating System from Seattle
Computer, changed its name to 86-DOS (86 Disk Operating System, for use with
Intel 8086 microprocessor), introduced a few modifications, and licenced it
to IBM as PC-DOS for the Personal Computer. Microsoft kept the right to
sell the system separately as MS-DOS. Foul play on the part of Microsoft.
This company later produced some outstanding professional software (such as
the Quick Basic translator and QBasic interpreter), but it is fair to
recognise that its trajectory has not always been of exemplary honesty.
When Digital Research learned what Seattle Computer and Microsoft had done,
and saw the gullibility of the Big Blue, it threatened to sue all of them for
infringement and plagiarism. IBM was alarmed at the prospect, and this time it
negotiated with Digital Research without forcing non-disclosure agreement.
Mister Kildall finally came to terms, and agreed to have CP/M as alternative
operating system for the Personal Computer. This famous microcomputer was
publicly announced in August 1981, and the first units sold at the end of
that year. The series became immediately successful, initially based on the
Intel 8088, originally incorporating the PC-DOS operating system, with the
option of a 16-bit version of CP/M, and programmable in Basic.
Regrettably, CP/M was priced much higher than PC-DOS, and most customers saw
no reason to choose CP/M. An IBM Personal Computer with PC-DOS was sold at
an extra of sixty Dollars, but an identical microcomputer with CP/M cost an
extra of two hundred and forty Dollars ! Thus, CP/M gradually lost market
share, while PC-DOS and MS-DOS became predominant. Digital Research made
strong exertions to compete against Microsoft. The most significant effort
was the release of DR-DOS (Digital Research DOS), which the corporation
developed from its own CP/M, although much modified. To end the sad
story: with the demise of Mister Kildall, who died in 1994, and of Digital
Research, CP/M was completely abandoned except by a group of enthusiasts,
active as of 2021 and hyper-linked below. DR-DOS was taken by Novell and
then by Caldera, and made open source. An unforgettable relic of Computing
History.
Working with CP/M is to breathe the fragrance of the good old times, back in
the second half of the 1970's, those heroic times when computers were still a
rarity, and the chosen few who worked with computers REALLY KNEW how to work
with them. There was no other option, since the operating systems were all
based on text written at a command line prompt, and no graphic icons existed.
The thing would have been too cryptic for many people of today, who lack the
minimal idea of how a computer works. They just 'click' the button of a table
mouse or another pointing device on one or another of several funny icons in
a 'desk top'. There are usually no parameters given by command line. Instead,
there are 'pull down menus' or other visual aids for choosing various options.
The user can be a perfect idiot. In fact, the more idiotic those stupid,
ignorant users become, the better for commercial fool catchers, advertisers
and vendors, who extort advantageous business from the ubiquitous donkeys.
By comparison, CP/M is a software treasure. Still a few enthusiasts
cling to it, and even improve it beyond the point at which it was inherited as
a bequest of the legendary Digital Research. Below there are some hyper links
to the most complete lists of programmes available for the CP/M operating
system. The system itself can be downloaded and executed, often from a floppy
disk, mainly in one of three ways:
-In one of the computer models for which CP/M was originally intended, using
the Intel 8080 of 1974 or compatible processors. Those computers are very rare
collector's pieces nowadays, therefore not easily available. The CP/M
versions for those computers are from the second half of the 1970's. When
CP/M 86 was released in the early 1980's, for Intel 8086 of 1978 or compatible
processors, then the earlier versions of CP/M were retroactively called
"CP/M 80" (meaning "Intel 8080" or compatible processors).
-In a compatible computer of more recent production. For example in an IBM
Personal Computer or compatible, with Intel 8086 or compatible processors. As
it has been said above, versions of CP/M called "CP/M 86" were made for those
platforms in the early 1980's, and computers of that kind are the most common
until today, therefore this is probably the most obvious solution for working
with a real CP/M system at the present time. CP/M can be boot-strapped from a
partition (slice) in hard disk, or from an entire hard disk, but CP/M is so
small that it can be just boot-strapped from floppy disk.
-In a computer while running another operating system. In this case CP/M is
not a real operating system as in either of the two cases above, it is simply
an executable programme that emulates approximately the behaviour of a CP/M
system. Different emulators of CP/M exist, at least for Unics systems (BSD,
GNU Hurd, Linux, Minix...), for DOS systems (Free-DOS, DR-DOS, PC-DOS, MS-DOS,
PTS-DOS, ROM-DOS...), for DOS boxes under Windows, or for Apple Macintosh.
Possibly for other operating systems as well. Besides the emulator itself,
programmes are often included for converting from the CP/M data set system to
another data set system, both ways. That conversion is necessary for working
with floppy disks formatted for CP/M or containing CP/M data sets in them,
without having a real CP/M system available. The CP/M sector size is of 128
bytes, while that of DOS is usually of 512 bytes, and this is one of the
reasons for incompatibility between the CP/M system and the FAT 12 (File
Allocation Table) system that is always used in DOS-formatted floppy disks
(although DOS can format floppy disks for 128 or 256 bytes of sector size if
necessary, but that formatting is now uncommon).
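A minimal Python sketch of the difference in record size mentioned above:
reading a raw disk image in 128-byte CP/M records instead of 512-byte DOS
sectors. The image file name is an example; real conversion tools also need
to know the disk geometry and the layout of the CP/M directory.
# Show how many 128-byte CP/M records fit in one 512-byte DOS sector.
CPM_RECORD = 128
DOS_SECTOR = 512

with open("cpm_disk.img", "rb") as image:        # example file name
    first_dos_sector = image.read(DOS_SECTOR)

records = [first_dos_sector[i:i + CPM_RECORD]
           for i in range(0, len(first_dos_sector), CPM_RECORD)]
print(f"{len(records)} CP/M records fit in one DOS sector")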
Zimmers
Programmes for CP/M operating system
http://www.zimmers.net/anonftp/pub/cpm/
Retrotechnology, Mister Herbert Johnson
Resources for CP/M and other software made by Digital Research
http://retrotechnology.com/dri/
CP/M and Zilog Z-80, Mister Gaby Chaudry
Resources for CP/M system and for Zilog Z-80 processor
http://www.cpm.z80.de/
Operating systems: Programmes for DOS
If not so deliciously old flavoured as CP/M, we may at least recognise that
DOS systems offer more resources, if only because they were developed some
years later and also because they lasted longer in the market (CP/M system
predominated only five years, from 1976 to about 1981, while DOS systems
predominated fifteen years, from 1981 to about 1996). The difference is
striking in the number of programmes made for CP/M and those made for the
various sub-systems and versions of DOS. Working with CP/M is done today
mainly for the sake of treasuring a classic jewel. This is not to detract
from the system at all, because it allows those who know how to do good
things with it to do them, but the truth is that CP/M stopped development too
early, and as a consequence it does not offer many resources today. DOS
systems, on the other hand, are a
REAL PROPOSAL for doing serious work with a computer even today. A look at the
hyper links listed below will convince many sceptics that DOS systems can
perfectly be considered efficient and reliable for many purposes, although
for some purposes, such as accessing the World Wide Web or Universal Serial
Bus, they somewhat lack updated software or the software is scarce.
General collections of programmes or resources
The first hyper-link concentrates solely upon DOS systems. The next offers
software for DOS as well, but it really has non-DOS systems as the main focus.
Note, however, that Windows 1.0, 2.0, 286, 3.0, 386, 3.1, 3.11, 3.1 For
Workgroups, 3.11 For Workgroups, and 3.2 (for the Chinese market), produced
from 1985 to 1993, are simply graphic interfaces executed on top of MS-DOS,
or of another DOS system. Windows as the true system began only with Windows
95, in 1995. It means that a computer with versions of Windows produced in
1993 or earlier, can work with a stand-alone DOS system or with those Windows
versions, coexisting without problems. Therefore any software that had been
developed for DOS will be perfectly executable by the MS-DOS originally
included with early Windows versions, which is not always the case with
Windows 95 or later, whose DOS boxes contain MS-DOS version 7 or
MS-DOS version 8, both of them with fewer resources than the last stand-alone
version, MS-DOS 6.22 of April 1994.
Vetusware
Programmes for DOS and for other systems
http://vetusware.com/
Dendarii
Resources for DOS operating systems
http://www.dendarii.co.uk/DOS/
Angel Fire
Resources for DOS operating systems
http://www.angelfire.com/mo/dossource/
The Bensley Collection
The Bensley Collection of old programmes for DOS was offered for free as a
single data set of huge size, almost 45 Megabytes, compacted by the Zip
algorithm and named PDS.ZIP, which contained over 1 500 directories with
almost 11 000 data sets, occupying about 150 Megabytes.
The Bensley Collection was the biggest availability of executables for DOS
that existed in a single place of the Internet. Most of the collection was from
the 'Personal Computer Software Interest Group' in Sunnyvale, California,
a society of microcomputer enthusiasts who were very active in the 1980's. The
collection had well-explained tutorials for DOS systems and for Basic, Pascal
and some other languages, software for programming in Assembly, Basic, Pascal,
Forth, Fortran or others, utilities for many different purposes, varied games,
and plenty of other programmes for all tasks imaginable, from the elementary
to the advanced. Even the most experienced could find much of his interest.
Unfortunately, like with other old software, the collection does not exist anymore.
Part of the programmes were distributed under the concept of 'Shareware', initially
developed by Mister Bob Wallace of Quicksoft, who programmed a full-featured text
editor called PC-Write. Programmes under Shareware can be freely distributed under
certain conditions imposed by the author, and a small payment is expected from those
persons who like the programmes and use them regularly. Other programmes exist that
are under the 'Freeware' concept, initially developed by Mister Andrew Fluegelman.
The term 'Freeware' is trademark of The Headlands Press, but after the mysterious death
of Mister Fluegelman, the property of the term has not been legally enforced, and it is used
now by other persons. Freeware means that the programmes can be used for free, and
distributed or modified by anyone, only acknowledging the name of the original programmer.
Whether Shareware or Freeware, the programmes may have a legal owner, but there
is a final legal group of programmes called 'Public Domain', where no one
legally owns them (although they may have a historically recognised author).
This may be so for one of two reasons: either because their author or legal
holder at some point chose to release the programmes to the Public Domain, or
because a number of years has passed and they have naturally fallen into the
Public Domain. The number of years depends on the legal jurisdiction, on the kind of
intellectual property, such as software, patented invention, literature, sound
or image (music, graphic art, photography, cinematography, et cetera), and on
either the life of the author or else the year when the work was released or
was published for the first time, or when it was legally registered.
Image
Image formats have gradually become more standard than they were in the 1980's
or the early 1990's, first because many of the old formats have for the most
part fallen out of common use, and second because there has been work on
unified specifications of existing formats (such as JPG), and on the creation
by international organisations of standard, non-proprietary formats (such as
PNG, by the World Wide Web Consortium). The first programme below is a small
image viewer, originally intended for tiny portable computers (palm tops),
but which can perfectly be used by any computer executing a DOS system. The
second programme is an image viewer in more than forty formats. It can modify
some characteristics of the image, and convert between some of those formats.
Lx Pic, Mister Stefan Peichl
Image viewer for DOS, version 7.3 of 2002
http://hplx.pgdn.de/
Pict View, Mister Jan Patera
Image viewer and format converter for DOS, version 1.94 of 2000
http://www.pictview.com/
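A minimal Python sketch of the kind of format conversion that such viewers
offer, written with the Pillow library (an assumption: it must be installed
separately with "pip install Pillow"). The file names are examples.
# Open an image, report its format, and save a copy in the W3C's PNG format.
from PIL import Image

with Image.open("drawing.gif") as picture:       # example input file
    print(picture.format, picture.size, picture.mode)
    picture.save("drawing.png", "PNG")           # non-proprietary output format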
Drawing, painting
As it has been noted when speaking of image formats, most drawing or painting
programmes made for DOS systems in the 1980's or early 1990's absolutely
lacked any concept of unified standards. Some of them could produce relatively
complex illustrations, or even animations (such as the DANCAD 3D programme
stored in the huge Bensley Collection), but those images had to be viewed
exclusively with a copy of the programme with which they had been created.
Such a fastidious situation meant that, if an image was to be viewed on
another computer, a copy of the creating programme also had to be sent along.
That solution was perhaps possible for communication amongst a few computers,
but it is definitely not a proposal for a vast network such as the Internet.
The now gone Bensley Collection was a really huge bazaar of old programmes for
DOS. It contained no fewer than several dozen drawing, painting or other
graphic illustration programmes, from simple to complex, nearly all of them
released in the 1980's, and NONE OF THEM capable of producing even a single
standard format. Only one programme could work with an old version of GIF (the
Graphics Interchange Format of Compuserve), but the operation of that programme
was so cumbersome as to make it almost useless. The Pict View image viewer given
above can open images in over forty formats, but in spite of that versatility, IT CANNOT
OPEN ANY IMAGES produced by any of those programmes (save the already mentioned
GIF). A fastidious situation that makes those old programmes useful only for a group of
related computers or a local area network, where all the machines possess a copy of
the drawing or painting programme. Those exclusive formats make their images
practically unexportable to distant computers.
They cannot efficiently be published in the hyper text protocols of the
Internet, such as Gopher or the World Wide Web, since that would require
providing a link to a server whence the drawing or painting programme could be
downloaded, plus instructions on how to execute that programme. In practice
that is unfeasible, because the person wishing to view the image would have to
quit the Gopher or HTML user agent, or open a second shell of the command
interpreter, just to view the image, and then return to the Gopher or HTML
session. Sure, the programmers of the 1980's or early 1990's did not have in
mind a World Wide Web that still did not exist, when not even the Gopher
protocol existed yet
(the Web began in 1990 inside CERN at Geneva and was made public in 1993,
Gopher in 1991). If those programmers had the Internet in mind at all, they
could only think of the File Transfer Protocol for transferring images AND
their viewing programmes. In most cases they did not even consider that
possibility. Most software distributions were done hand to hand, by physical
post through the Mail, by electronic post, or by bulletin board services.
Sound
Sound has not had the problem of multitude of incompatible formats that image
has had, probably because until relatively recent times most computers lacked
sound, except for the primitive internal computer speaker. The internal
computer speaker is in fact intended just as a beeper for warning the human
operator of actions performed by him or by the machine, but since its sound
can be adjusted by software for frequency, duration and interval (intensity
is adjusted by hardware), then it follows that the internal computer speaker
can be used to produce music. The Basic programming language can be used for
that purpose, and there were also specific programmes for DOS systems that could
be used to compose music playable through the internal computer speaker. A
technically primitive music by modern taste, but exportable to almost every
computer. The Bensley Collection had some of those music-making programmes.
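A minimal Python sketch of producing notes by frequency and duration, in the
same spirit as those old Basic programmes. It uses the winsound module of the
Python standard library, which drives the beeper only under Microsoft Windows;
on other systems a different mechanism would be needed.
# Play a short ascending scale on the internal speaker (Windows only).
import winsound

# (frequency in Hertz, duration in milliseconds)
scale = [(262, 300), (294, 300), (330, 300), (349, 300),
         (392, 300), (440, 300), (494, 300), (523, 600)]

for frequency, duration in scale:
    winsound.Beep(frequency, duration)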
HTML, Hyper Text Mark-up Language
Out of the many Web user agents collected at Evolt below, at least six were
specifically created or modified for DOS. Two of them are graphic user agents:
Arachne and Web Spyder. The other four are text-only user agents (images or
sounds can be displayed by means of a separate programme): Bobcat, DOS Lynx,
Minuet and Net Tamer.
Bobcat or Net Tamer can be executed even with old computers having the 16-bit
Intel 8086 processor of 1978, or compatible, although with that old processor
they will be slow for normal operation. DOS Lynx is a conversion for DOS of
the Lynx text-only user agent that had originally been made at the University
of Kansas for the Unics and the VMS operating systems, but DOS Lynx needs the
32-bit Intel 80386 processor of 1985, or compatible; it will not work at all
with an older processor. DOS Lynx needs a packet driver, even for browsing
local documents, and oddly enough no network packet driver has been included
in the distribution of DOS Lynx. Only a fake packet driver called Nullpkt has
been included for local browsing. Besides this limitation, DOS Lynx needs
almost 600 Kilobytes of Low Conventional Memory, which may imply the need of
boot-strapping the DOS system with less than 40 Kilobytes, out of the 640
Kilobytes of maximum executable programme size in the Megabyte of Low
Conventional Memory that can be directly accessed by DOS in Real Mode.
Begun in 1996, Bobcat can be considered a simplification of DOS Lynx. Bobcat
works best with HTTP (the Hyper Text Transfer Protocol), but it also acts as a
client for FTP (File Transfer Protocol), Gopher (University of Minnesota),
Telnet (Tele Network), and WAIS (Wide Area Information Servers). E-07 is the
latest version of Bobcat. Because Net Tamer, Bobcat or DOS Lynx browse only
text and textual hyper links and do not waste any time displaying images or
sounds, they are perfect browsers for old or limited computers, or for
networks with slow connections, when textual content is more important than
graphics, audio or video. If necessary, images or sounds can still be shown
separately, opening their hyper links and using, for instance, Lx Pic, Pict
View, or a similar programme, linked above. Minuet (the Minnesota Internet
Users Essential Tool) is an integrated user agent made for DOS at the
University of Minnesota.
Evolt
Collection of HTML user agents
http://browsers.evolt.org/
Archivers, compressors
Compression is used to diminish storage space, or to handle a shorter
sequence of bytes when transmitting data from one computer to another,
especially through a network. A compressed programme or document CANNOT be
used directly, it first needs to be expanded to its original full size. Some
programmes for compression, such as Double Space or Drive Space (included in
some versions of MS-DOS), silently perform this expansion when a compressed
data set is called from the command line prompt, and silently compress it
again when execution of the data set has finished. Other programmes for
compression need to be specifically called to perform their compressing or
expanding. The rate of compression depends both on the programme used for
compression and on the data set that is to be compressed. Entire volumes can
in theory be compressed: real (physical) volumes, such as floppy discs or hard
discs, as well as logical (virtual) volumes in Random Access Memory.
Not all data sets can be compressed, however, and even those that can be, may
under some circumstances be rendered useless or may even crash the computer
system, after being expanded and being then called for execution. In these
cases, the expanded data set is not identical to its own original before the
compression. Therefore it is not recommended to rely solely on a storage of
compressed data set, it is a good idea to keep an original full size copy of
every important data set stored under compression. A data set compressed with
a certain programme can generally only be expanded with that programme, or with
another programme that understands the same compression format, otherwise it is
lost. There is a diversity of programmes for compression that can be used
with CP/M, DOS, or other operating systems.
Squeeze is one of the oldest compression algorithms, UPX one of the newest,
and plenty of others lie in between. Info Zip, listed below, is a monster
with a down-load size of over 60 Megabytes !!! It sounds ridiculous, to have
a programme for diminishing the size of others, when the programme used for
that purpose is in itself many times bigger than even the operating system
(if working with DOS, for instance). Having been already warned against the
risks of compression, a computer operator must exercise sound judgment on
what to compress, by which programme, and how to use it. And ALWAYS keep an
original full size copy of every important compressed data set, or accept the
risk of losing it.
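As an illustration of explicit compression and expansion, and assuming that
the ZIP.EXE and UNZIP.EXE executables of Info Zip for DOS are present in the
path (the data set names are invented for the example), an archive could be
created, listed and expanded with commands such as:
ZIP LETTERS.ZIP LETTER1.TXT LETTER2.TXT [ENTER]
UNZIP -l LETTERS.ZIP [ENTER]
UNZIP LETTERS.ZIP [ENTER]
The first command creates the compressed archive LETTERS.ZIP, or adds the two
data sets to it if it already exists, leaving the original data sets intact.
The second command merely lists the contents of the archive, without expanding
anything. The third command expands the data sets into the current directory,
keeping the archive itself untouched.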
Logical volume in Random Access Memory
The original programmes for MS-DOS (and their equivalents for DR-DOS or for
other DOS systems) called HIMEM.SYS (extended memory manager), EMM386.EXE
(expanded memory manager) and RAMDRIVE.SYS (logical volume in Random Access
Memory) are all limited to a maximum of 32 or of 64 Megabytes, although in
fact most computers of the XXI century feature much more Random Access Memory.
Such programmes were written in the 1980's or early 1990's, a time when most
microcomputers had no more storage space than 64 Megabytes per volume (in a
hard disc, for instance), therefore it was thought unnecessary to complicate
the software by giving access to bigger volumes. Many of those computers in
fact lacked hard disc, they operated with floppy discs only.
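For reference, the three original programmes are typically loaded from the
CONFIG.SYS of MS-DOS with lines similar to the following (the directory C:\DOS
is only an assumption about where the system has been installed):
DEVICE=C:\DOS\HIMEM.SYS
DEVICE=C:\DOS\EMM386.EXE RAM
DEVICE=C:\DOS\RAMDRIVE.SYS 4096 /E
The number given to RAMDRIVE.SYS is the size of the logical volume in
Kilobytes (here 4 Megabytes), and the /E switch places the volume in Extended
Memory. It is precisely these size limits that the set of programmes described
below tries to overcome.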
The set of programmes written by Mister Marko Kohtala (latest version released
in 2005) modifies or substitutes the three original DOS programmes mentioned
above. An obvious purpose of such a modification is the creation of a logical
volume. A logical (also called virtual) volume is of excellent utility for
holding temporary data sets, but those data sets will be completely lost when
the computer is turned off, restarted or reset, therefore a logical-virtual
volume should not be used for permanent storage. This set is compatible with
MS-DOS
versions 3 to 7, DR-DOS versions 5 to 6, or Free-DOS version 1, and includes
mainly these programmes:
XMSSIZE.EXE modifies HIMEM.SYS Extended Memory Manager for over 64 Megabytes.
Operable with a processor at least Intel 80286 or compatible.
Three device drivers modify HIMEM.SYS or EMM386.EXE for over 32 Megabytes:
SRDXMS.SYS, XMS 2.0 Extended Memory, modifies HIMEM.SYS.
SRDEMS3.SYS, LIM/EMS 3.2 Expanded Memory, modifies EMM386.EXE.
SRDEMS.SYS, LIM/EMS 4.0 Expanded Memory, modifies EMM386.EXE.
LIM/EMS is the Lotus - Intel - Microsoft specification for Expanded Memory.
Extended or Expanded Memory allows programmes to access available Random
Access Memory beyond the limit of 640 Kilobytes of maximum executable
programme size in the Megabyte of Low Conventional Memory that can be
directly accessed by DOS in Real Mode.
SRDISK.EXE works with one or more of the above device drivers to create a
logical volume of up to 4 Gigabytes, if having enough memory for it. It may
substitute RAMDRIVE.SYS or it may coexist with it.
Resizable RAM Disc, Mister Marko Kohtala
Set of programmes to create logical volume in Random Access Memory
http://sourceforge.net/projects/srdisk
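A minimal sketch of how the set might be put to use, assuming that its data
sets have been expanded into a directory called C:\SRDISK (the directory name
and the size of 8192 Kilobytes are only illustrations, and the exact parameters
and switches must be checked in the documentation included with the
distribution). A line such as this one loads the device driver from CONFIG.SYS:
DEVICE=C:\SRDISK\SRDXMS.SYS
And a command such as this one, given from the command line prompt, creates or
resizes the logical volume to the requested size in Kilobytes:
SRDISK 8192 [ENTER]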
Free-DOS includes the executable programme SHSURDRV.EXE that can create
several logical volumes of fixed size in Random Access Memory, each volume
being up to 4 Gigabytes of size, if having memory for it. Other alternatives
are VDISK.SYS (Virtual Disk, which like RAMDRIVE.SYS must be installed via
FDCONFIG.SYS or CONFIG.SYS and cannot be re-sized or removed), XMSDSK (which
can be installed or removed from the command line, not from FDCONFIG.SYS or
CONFIG.SYS), TDSK (Turbo Disk, which can create volumes of up to 64 Megabytes
of size, if having memory for it), and a few others.
The comparison below was extracted by Clarence Verge for Arachne, from the
documentation of TDSK Turbo Disk:
                               RAMDRIVE.SYS           VDISK.SYS     TURBODSK
                               (MS-DOS, WINDOWS 3.x)  (DR-DOS 6.0)  Version 2.1
Maximum size                   32 Mb                  32 Mb         64 Mb
Low memory support             Yes                    Yes           Yes
XMS memory support             Yes                    No            Yes
EMS memory support             Yes                    Yes           Yes
Extended memory by INT 15h     No                     Yes           No
Entries at top level directory 4-1024                 4-512         1-65534
Bytes of sector size           128-1024               128-512       32-2048
Dynamic memory allocation      No                     No            Yes
Programmable cluster size      No                     No            Yes
32-bit bus used by 386 or 486  No                     No            Yes
Low memory used by MS-DOS 5.0  1184-1232              2096-2608     432-608
MS-DOS 5.0 transfer in Kb/second, sector 512 bytes, RAM vol in XMS:
386-25 MHz (no cache)          17105                  6838          17095
486-25 MHz (8 Kb cache)        10278                  7370          10278
Operating systems: Long names in DOS
DOS operating systems have a serious drawback for naming data sets. All names
must follow a set of conventions known as "DOS naming conventions":
-Spaces or other special characters are not allowed. Control characters of
7 bit ASCII (characters 0 to 31) are all excluded, and even some of the other
characters of 7 bit ASCII (characters 32 to 127) are excluded too. Characters
of 8 bit ASCII (characters 128 to 255) are all of them excluded as well.
-The data set name cannot exceed eight characters, and in old DOS versions
the first character must be a letter, not a cipher. New DOS versions (such
as Free-DOS) permit a cipher as the first character. For instance, the name
4DOS.COM is acceptable in Free-DOS, but it would not have been acceptable in
old DOS systems.
-The data set type (also called suffix or extension), if being present, is
limited to three characters. It must follow the data set name and it must be
separated from it by one dot, without surrounding spaces.
-For the whole naming of maximum eleven characters separated by dot (data set
name, dot, data set type), there is no difference between upper case (capital
letters) or lower case (small letters). Old DOS systems only used capital
letters. New DOS systems (for instance Free-DOS) are case retentive but not
case sensitive. This behaviour can be modified, within limits, by using the
appropriate Long Name software.
A name breaking any of the above conventions is not a valid DOS name. In some
cases DOS will try to support the name by adding a tilde and a number at the
end of the name part, and by truncating the type part. For example, the name:
References.html
Would be changed in a new DOS system (but not using Long Name software) to:
REFERE~1.HTM
It may appear as 'refere~1.htm' in a listing done by the DIR command, but in
reality it has been converted to upper case. In some other cases DOS will not
even work with an awkward name. It may be valid in Unics or other systems, but
not at all in DOS. An example of a completely unacceptable name would be:
C++.html
That name is valid in Unics, but the two plus signs would make it impossible
to rename, to move, or even to erase that data set from DOS. Attention must
be paid when copying such names to DOS, since they are invalid inside DOS.
Fortunately, there are a few programmes that allow to work with long names in
DOS systems. One of them is a driver called DOSLFN.COM, which will make some
DOS programmes able to work with long names, with upper or lower case, with
spaces, and with some other special characters. Not all DOS programmes will
be able to work with long names, even with the driver loaded, but some of the
most important programmes will be capable of using or also of producing data
sets with long names.
For example EDIT.COM, which is a text editor included with the last versions
of MS-DOS, will be made ready to create or open data sets with long names,
spaces, and upper-lower case sensitivity, and to save the data sets keeping
those modified names. There was an older EDIT.COM that simply opened the
QBASIC.EXE Basic interpreter in text edition mode. The two EDIT.COM text
editors cannot exist with that name in the same directory. The older text
editor can also be called by invoking QBASIC.EXE with the EDIT switch:
QBASIC /EDIT
Free-DOS includes by default the Free-COM command line interpreter (stored
with the name of COMMAND.COM, though it can be changed). Free-COM has a DIR
command with /LFN switch that will list any long names and their equivalent
short names in any volume or directory, but only after having loaded the
DOSLFN.COM driver into Random Access Memory. Once loaded, DOSLFN.COM gives
long names in the newer EDIT.COM or in the commands COPY CON:, COPY, DIR /LFN,
RENAME, TYPE or TYPE redirected. Load DOSLFN.COM only if necessary, because
it silently converts names to lower case, which is against DOS conventions.
The text editor EDIT.COM of MS-DOS can also work with long names, after having
loaded DOSLFN.COM into Random Access Memory. Unfortunately, the standard text
editor EDIT.EXE of Free-DOS cannot work with long names even with the driver
loaded. That limitation also holds true for several other executables included
in Free-DOS, such as 4DOS.COM command interpreter. They may work with long
names in a true VFAT system, but not in a FAT 12, FAT 16 or FAT 32 system
with a long name driver, as it must be the case inside Free-DOS.
Example using Free-DOS with its default Free-COM command interpreter. First
load DOSLFN.COM into RAM with the following command (the optional c+ switch
gives access to compact disc, if having compact disc installed):
DOSLFN -c+ [ENTER]
Assuming the compact disc to be at the D: volume drive, this command lists
all its directories with long and short names, pausing at every screenful:
DIR /LFN /P /S D: [ENTER]
Finally, unload DOSLFN.COM by the command:
DOSLFN -u [ENTER]
DOSLFN.COM has many switches, these are some of them:
-c+ compact disc
-d disable
-s status
-t+ tunnel effect
-u unload
-? help
It can be observed that the switch separator is the hyphen (-) common in Unics
systems, and not the slash (/) common in DOS systems. Two last examples for
handling long names, with volume drive C: as the current one and A: as floppy.
The first example is risky because between the second line and the end of the
fourth line the data set may be lost, if the C: volume happen to fail. The
example assumes the existence in the floppy of a short name data set called
COMPUT~1.HTM, which we want to rename as Computing_History.html
COPY A:\COMPUT~1.HTM [ENTER]
ERASE A:\COMPUT~1.HTM [ENTER]
DOSLFN [ENTER]
COPY C:\COMPUT~1.HTM A:\Computing_History.html [ENTER]
DOSLFN -u [ENTER]
ERASE C:\COMPUT~1.HTM [ENTER]
The second example is not risky, but it requires free space in floppy disk.
It also assumes the existence in the floppy of a short name data set called
COMPUT~1.HTM, which we want to rename as Computing_History.html
DOSLFN [ENTER]
COPY A:\COMPUT~1.HTM A:\Computing_History.html [ENTER]
DOSLFN -u [ENTER]
ERASE A:\COMPUT~1.HTM [ENTER]
Warning regarding executable programmes used to access compact disc:
MSCDEX.EXE of MS-DOS is incompatible with DOSLFN.COM, but SHSUCDX.COM included
in Free-DOS, or available separately, is perfectly compatible. However, if
using SHSUCDX.COM and DOSLFN.COM, then SHSUCDX.COM must be loaded before
DOSLFN.COM, otherwise it would not be possible to work with long names in the
compact disk (although it will still be possible for floppy or hard disk).
DOSLFN.COM is in compressed form using the ZIP algorithm, therefore an
expanding programme of that algorithm is necessary for making the driver
workable, for example UNZIP.EXE for DOS (included in the main distribution of
Free-DOS, or available separately). The compressed driver is listed by the
name of DOSLFN.ZIP
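Assuming UNZIP.EXE for DOS to be present in the path, and DOSLFN.ZIP to be in
the current directory, the driver can be expanded with the command:
UNZIP DOSLFN.ZIP [ENTER]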
There are also in the Internet two other sets of programmes for converting
DOS short names to long names. One is a set of eight executables made by
Mister Ortwin Glueck, the other is a set of five executables made by Mister
Fubi Dan. They are not drivers, thus they cannot modify the behaviour of
other programmes. The software of Mister Glueck is of small storage size, but
unfortunately it is slow and unreliable. The software of Mister Dan is more
efficient and reliable, though also of bigger storage size. Those programmes
may be acceptable for a computer with only floppy disks, without hard disk,
but for a computer having a hard disk, the DOSLFN.COM driver of Herr Henrik
Haftmann is definitely a better proposal.
Redirection of Uniform Resource Locator
Redirection of a document or another data set means to retrieve a copy of
that document or data set from the server computer where it is hosted, and
to show it from the Uniform Resource Locator of another computer. Redirection
is done for various reasons in the case of an HTML document publicly available
in the Internet. Some of those reasons may be:
-The original Uniform Resource Locator is too long or too short, or it is not
well descriptive of the HTML document, or it contains special characters that
are not allowed by protocol conventions.
-The document often changes host, therefore it also changes the original
Uniform Resource Locator, which means that hyper links pointing to that
original Locator may quickly become "dead links" (error 404, document not
found), or may become "switched links" (they find a document that has
replaced the old one).
-The document belongs to someone who has other documents hosted in different
computers, perhaps in different countries and with a diversity of Uniform
Resource Locators, and the owner of those related documents wants to have
some standard in the Uniform Resource Locators for the entire collection.
Other reasons are possible, but the most common is the first mentioned: the
original Uniform Resource Locator is too long and points to a sub-directory.
The redirection service usually offers a top level domain to replace that
long path, with a short path that contains no intermediate slashes between
the HTTP:// heading and the final slash. In most cases the document is not
hosted by the redirection service, it must be retrieved from an active host.
Redirection Services
Redirection services may be paid or may be for free. Paid services tend to
include other services in their package, while free services tend to insert
advertising in the redirected documents, but not always. In the late 1990's
and early 2000's some Internet authorities of national governments began
delegating concession of Internet address names to companies that distribute
those names as they see fit. They want to get worldwide projection for their
Internet connections, and that is why they offer top level domains for little
or no money, even without inserting advertisements in the documents retrieved
through that domain.
They impose conditions, of course. They always keep the legal property of the
domain, and can terminate the redirection service in case of a document being
inactive for too long (without receiving visitors), or of a document that
fail to abide by their Terms of Service, for example "link farm" documents
that consist only of links, without any substantial content, or documents
that repeatedly overload their servers, or documents that promote certain
"objectionable" activities, such as computer cracking, or other situations.
Austria, Iceland, Nauru, Tokelau and Tonga are some of the nations that offer
redirection to their top level domains. Their Internet country codes are
respectively .at .is .nr .tk .to
Automatic redirection by Java Script or PHP
Automatic redirection can be done by a variety of techniques. The hyper link
below, Tech Patterns, provides programmes in Java Script or in PHP language
that a Web Master may use for redirecting a document request to another
Uniform Resource Locator. Redirection can be done to another copy of the
document, for example:
-A copy optimised for the hardware or software characteristics of the computer
making the request, such as certain screen resolution and colour depth,
certain operating system, or certain user agent: a visual and graphical user
agent, a visual but non-graphical user agent (text only), an aural user agent
(text to sound), or a tactile user agent (text to Braille).
-A copy optimised for user agents that can display frames and have the feature
enabled (the "noframes" tag can be used for other user agents), or that can
execute Java Script or another scripting language and have it enabled, or that
can execute Cascading Style Sheets or another style sheet language and have it
enabled. Not all style sheet languages support cascading.
-A copy optimised for high speed Internet connection (copy rich in images or
also in sounds, including animations in high resolution or long videos), or
for low speed connections (copy composed mainly of text, perhaps with a few
still images in low resolution).
-A copy optimised for the medium in which the document will be displayed, such
as a handheld device (small monochrome screen, bitmap, limited bandwidth), a
paged printing device (a printer, or a computer screen in print preview mode),
a projector (on wall), a non-paged computer screen, a fixed-pitch character
grid (a teletype, a terminal, a limited display), or a television-type device
(low resolution, limited scrollability). It is not always possible to detect
the medium automatically. Besides, the document may be requested by a medium
but displayed by another.
-A copy in a certain human language, based on the language configuration of
the operating system (or based on the geographic location of the Internet
Protocol number, although that is not a safe approach, because the preferred
language may not be the language that predominates in the geographic area),
or a copy in a certain character set, character font or character size, or a
copy in black and white, or in limited colour, or in rich colour.
It is important to keep in mind that the human operator of the requesting
computer may be using a shared or public computer, over which he has no easy
way to control configuration, or it is unadvisable or forbidden to him to
change configuration. It is also important to consider that automatic
redirection may fail due to a diversity of causes. Therefore, to offer hyper
links for manual redirection is ALWAYS necessary, for accessibility.
Manual hyper links can also point to copies optimised for older versions of
Hyper Text Mark-up Language or of Hyper Text Transfer Protocol, or to a copy
in an entirely different protocol, for instance to a Gopher document. Some of
the mentioned redirections can be done automatically, but not all of them.
The list of all possible combinations is enormous; it is for the Web or
Gopher Master to decide on the right ones for a particular document.
Java Script must be enabled at the client side for the Java Script programme
to work. Likewise, PHP must be enabled at the host-server side for the PHP
programme to work. The two may be enabled, without conflict between them.
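As a minimal sketch of redirection at the client side (an illustration, not
taken from Tech Patterns), the following lines of Java Script could be placed
between the opening and closing script tags in the head of the full page. They
assume that a reduced copy of the page has been uploaded with the invented
name lowres.html next to the full copy:
if (screen.width > 0 && screen.width < 640) {
    window.location.replace("lowres.html");
}
The method window.location.replace loads the reduced copy without leaving the
full copy in the navigation history of the user agent. If Java Script be
disabled, nothing happens and the full copy remains, hence the importance of
also offering a manual hyper link to the reduced copy.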
The documents provided by Tech Patterns as well as the notes included with
the programmes must be perused in detail, possibly adapting the programmes
to the specific needs of the Web Master.
Tech Patterns
Automatic redirection by Java Script or PHP
http://techpatterns.com/
Detailed explanation on automatic redirection
Wikipedia, encyclopaedia that can be edited by its readers, has a detailed
explanation on automatic redirection that goes well beyond what has been said
here, and includes references to other information resources. A Gopher or Web
Master is strongly advised to read about the advantages and disadvantages of the
different possibilities, before taking his decision. The detection of the
characteristics of the requesting computer must be made efficiently, in order
to redirect the request to the appropriate page or the appropriate copy of
the document.
Wikipedia, encyclopaedia that can be edited by its readers
Detailed explanation on automatic redirection
http://en.wikipedia.org/wiki/URL_redirection
For counters with full statistics, please see the section on Counters of
visitors:
Counters of visitors
Reference and dictionaries
What would be the life of intellectuals like ourself without the existence of
dictionaries and books of reference ? We should have to take pains in the
creation of our own personal lists of reference, as we do anyway. It is most
evident, however, that having a dictionary, thesaurus, encyclopaedia or other
such books available, greatly helps our creative efforts. A collection of
some of the best books of reference that exist in the Internet is hyper
linked in the next lines.
Dictionary
One of the most complete for the English language
http://www.dictionary.com/
Bilingual Dictionaries
Vast collection for many languages
http://www.dict.cc/
Martin Dale Center
The biggest reference desk on Science
http://www.martindalecenter.com/
Net Lingo
Dictionary of lexicon related to the Internet
http://www.netlingo.com/
Web Reference
Exhaustive source of information
http://www.webreference.com/
Wikipedia
Encyclopaedia that can be edited by its readers
http://www.wikipedia.org/
Metapedia
Encyclopaedia that can be edited by its readers
http://www.metapedia.org/
Resources for Masters
There are here all kinds of programmes, written in diverse programming
languages, for all imaginable ways in which a resourceful Gopher or Web
Master could improve his document according to his needs or preferences:
counters, statistics, visitors books, forms, games, calculators... literally
thousands of programmes are available. Creative programmers wishing to make
their creations available to the public are also invited to send them to one
or more of the collections listed here.
Big Web Master
Programmes and resources for Masters
http://www.bigwebmaster.com/
CGI Resources
Programmes and resources for Masters
http://cgi.resourceindex.com/
Dynamic Drive
Programmes and resources for Masters
http://www.dynamicdrive.com/
Free Ware Java
Programmes and resources for Masters
http://www.freewarejava.com/
Hot Scripts
Programmes and resources for Masters
http://www.hotscripts.com/
Java Script Kit
Programmes and resources for Masters
http://javascriptkit.com/cutpastejava.shtml
Net-Ads
Programmes and resources for Masters
http://www.net-ads.com/
PHP Resource Index
Programmes and resources for Masters
http://php.resourceindex.com/
Prospector
Programmes and resources for Masters
http://www.prospector.cz/
The Free Site
Programmes and resources for Masters
http://www.thefreesite.com/
321 Web Master
Programmes and resources for Masters
http://www.321webmaster.com/
Rings of documents
A ring of Web documents is a "navigation bar" that is prominently shown in
all the documents that belong to that ring, in order to facilitate the
connection of a user agent from any of those documents to any other. Such
documents usually deal on a common subject of interest. There are rings for
all possible subjects that could be imagined, from the ordinary to the
bizarre. Some rings possess thousands of Web documents, other rings possess
very few documents. A document may belong to more than one ring, but it
should be relevant to the subjects of those rings.
Roboti (crawlers, spiders)
There is in the next hyper link a list of all known search engine roboti,
also known as search engine crawlers or search engine spiders. The word
'roboti' is the Czech plural of 'robot', derived from 'robota', forced labour.
The word was popularised by the Science Fiction play 'R.U.R. (Rossum's
Universal Robots)', written by Karel Capek and first published in 1920. The
hyper link below also has an explanation of
the ways in which search engine roboti work and of the rules that they follow.
This will be of help to Gopher or Web Masters wishing to understand search
engine rankings, to researchers who use search engines for their work, or to
programmers planning to create their own search robot.
The Web Robot Pages
Information on search engine roboti
http://www.robotstxt.org/
Search: Engine, directory, portal. Meta tags
The different protocols of the Internet form the biggest library that has
ever existed in History, calculated to contain well over thirty thousand
million pages in 2016. An Internet document may be composed of only one page
or of many pages, and a page may contain only a few lines or thousands of
lines. It is clear that some method of classification or search has to be
devised for that vast amount of information, otherwise the time for finding a
particular subject would be enormous. Even for those of us who love to read,
30 000 000 000 pages is a little excessive. It would break the patience of
even a saint to read all of them.
Some of the methods to look for information are:
-Search engine, meta search, directory, or portal (any of them may be general,
specialised, or regional).
-Software, institutions, or professionals specialised on customised searching.
-NNTP Network News Transfer Protocol (Usenet news groups), IRC Internet Relay Chat,
or other protocols often used for questions and answers on specific topics of
interest.
-Groups in the Internet, or social networks: Google Groups, Orkut, et cetera.
-Rings of documents or pages focused on a certain topic or related topics.
-Documents on a topic or topics, containing hyper links to other documents.
-Documents of reference, dictionaries, encyclopaedias, thesauri, or similar.
The choice of a method will depend on a variety of factors, because each of
them has its characteristic advantages and disadvantages. An institution or
professional specialised on searching may yield the best results (particularly
an academic institution or a knowledgeable person), but to hire such services
is too expensive for most people. An alternative may be a specialised search
engine, a directory, or a portal. Scientific search engines are listed below
in this page. There is also specialised software. Some meta search engines
are excellent.
If having patience enough, then groups, rings or also documents on specific
topics will not only provide previously written information, in fact many of
their members will even answer direct questions or doubts presented by a
serious person contacting them. Most people love to talk about what they know
or what they like. A warning must be given, however: the Internet is open to
anyone, intellectuals and ignorants alike. This means that it is necessary to
exercise discrimination between what is valuable information, and what is
just garbage. On sensitive subjects, it is always advisable to consult more
than one source and carefully compare the answers. It is also important to
keep an ethical code and an elegant etiquette when contacting people or when
publishing information, avoiding for example to disclose confidential affairs
or to offend those who sincerely try to help, although they may not really
know much about the subject on which they talk.
Concepts of search engine, directory and portal
Search engine
A crawler robot, also called spider robot, is a computer programme that
follows hyper links from a page to another, inside the same document or in
different documents. Its searching algorithm reads a part or the whole source
code of a page located at a given Uniform Resource Locator. Important parts
are the protocol and mark-up language (Gopher, HTTP-HTML, or another), the
script language, if any (Java Script or another script language), the style
sheet language, if any (CSS or another style sheet language), the Document
Type Definition (SGML, XML, et cetera), the HTML version, the title, the meta
tags, other information that may appear in the head, and part or all of the main
text that appears in the body.
If approving the document, or page of a document, then the searching
algorithm sends the information to its electronic data base. The storing
algorithm of the data base classifies the document or page in one or more
subject categories and assigns a rank value, considering meta tags, correct
code, frequency of words, outbound and inbound hyper links, frames, and other
characteristics. Different search engines have different rules for searching,
classification and ranking, and the rules are often secret, for preventing
potential abuse and for offering relevant results to queries made to the
search engine. After thus storing a copy of all or part of the page, with
hyper link pointing to its Uniform Resource Locator, then the page is
regularly visited by the robot and the information is updated in the data
base. Between robot visits, the data base stores a cached copy of the page.
Directory
In concept it is similar to a search engine, but documents are reviewed by
humans for approval, for classification by subject, and for value ranking.
This means that documents linked by a directory tend to be of higher quality,
or they often are more relevant to the subject researched, than documents
linked by a search engine. On the other hand, the number of documents linked
by a search engine is often much bigger than the number of documents linked
by a directory, and the newest documents may also be found by a search engine
robot before being submitted to a directory and reviewed.
Portal
Similar to a search engine, or more often similar to a directory, but all
hyper links are visibly shown in subject categories rather than located by
means of a search box. Portals also tend to link to highly relevant documents
to the subject of research, because portals are often operated by humans and
not by search robots, or at least a human will review the document found by
the robot, but for this very reason the number of documents linked by a portal
is almost always smaller than the number of documents linked by a directory,
and ALWAYS much smaller than the number of documents linked by a search
engine. The advantage of the portal is clearly the ease of search.
The three concepts of search engine, directory and portal can be variously
combined, and the hyper links that they list can also be manually found by
their human operators as well as suggested by Gopher or Web Masters, or by
other persons.
Meta tags (instructions for search engines)
The Web Master has usually a good deal of control over the process of search
engine crawling and classification, depending on the particular search engine.
For instance, there are meta tags intended for suggesting the frequency with
which the up-dating by the robot should be performed:
meta name="revisit" content="30 days"
meta name="revisit-after" content="30 days"
Those examples suggest to the robot that it come back after 30 days, more or
less. It
is better to suggest a rather longer time than what may be necessary for the
regular updating made by the Web Master, because suggesting a time too short
may have as a consequence that the robot will repeatedly find an unmodified
page, and such a page will be penalised in the ranking. If no suggestion be
made by meta tags, most robots will by default visit the page again every
thirty days or so.
Between an update and the next, a search query will look for key words as
they appeared in the page the last time that the robot visited it, not as the
page may be currently written. However, pressing the hyper link pointing to
that page will direct the user agent to the current version of the page.
Sometimes it may be found that the page does not exist any more in that
Uniform Resource Locator, due to its having been moved to another location,
having been renamed, or else having been deleted. After two or three tries
by the robot, the data base of the search engine will eliminate the entry for
that page.
The Web Master can also suggest which robots are allowed to visit the page
and which ones are not, if the page should be indexed in the data base or not,
and if the outbound hyper links should be followed by the robot to other pages
in the document or to other documents. This can be done by a meta tag like:
meta name="robots" content="all, index, follow"
In the example above all robots are welcome. They are told to index the page
where this meta tag is placed, and to follow whichever hyper links they might
find. Meta tags must always be placed inside the head of each page in the
document, and for pages that include script languages or other data inside
their heads, the meta tags should appear inside the first two thousand lines,
always starting with the symbol "lesser than" and finishing with the symbol
"greater than". To see a full example of meta tags, the source code of this
page can be extracted by asking the user agent to "show source code", thus
the meta tags placed inside the head of the page can be carefully studied.
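As an illustration, a minimal head might contain lines similar to the ones
below, each of them surrounded by the symbols "lesser than" and "greater than"
(omitted here, as in the examples further above). The title, description and
key words are of course invented for the illustration:
title Computing History /title
meta name="description" content="A short history of computing machines"
meta name="keywords" content="computing, history, DOS, HTML"
meta name="robots" content="all, index, follow"
meta name="revisit-after" content="30 days"
The title is an element with an opening and a closing tag, while the meta tags
need no closing tag.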
Meta tags are OF FUNDAMENTAL IMPORTANCE. They tell search engine robots and
data bases what the characteristics are for a particular document. This
information can also be read by humans, since it is placed at the head of
the document's source code. The hyper link below lists all meta tags that
exist and explains them in detail, for a Web Master to choose which ones to
insert in his document, or for a reader to know what they mean when they
appear in the source code. Meta tags are very useful, but not mandatory.
Vancouver Web pages
Complete list of meta tags
http://vancouver-webpages.com/META/
There is also the possibility of suggesting that robots visit only certain
pages of the document, but not others. This is done by a page written in
7-bit ASCII plain text, called robots.txt and put AT THE TOP LEVEL of the
domain name. In most cases, the robot will not see a robots.txt page located
inside directories. This is a problem for Web Masters who use a subdomain for
their document, typically in a free hosting service, and the Master has no
access to the top level of that domain. In such a case, the only feasible
solution is to use meta tags.
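A minimal sketch of a robots.txt page, which would ask all robots to stay out
of a directory invented here under the name /private/, while leaving the rest
of the document open to them:
User-agent: *
Disallow: /private/
An empty Disallow line (the word Disallow followed by a colon and nothing
else) allows everything, while "Disallow: /" excludes the whole document. As
with the robots meta tag, well behaved robots honour these rules, but they
are only suggestions, not an enforcement.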
The name robots.txt must be in lower case. This is another problem, because
the DOS convention always gives names in upper case, thus the page will be
stored as ROBOTS.TXT inside a DOS system, or as ROBOTS~1.TXT by the automatic
DOS conversion from the page name in lower case, to its truncated name in
upper case. A solution may be to rename the page as robots.txt in lower case,
using a few programmes available for that purpose, for example the DOSLFN.COM
long name driver for DOS that is listed in the section "Operating systems:
Long names in DOS" that appears further above in this page.
Operating systems: Long names in DOS
The fact is that the Internet itself, as well as most of its protocols and
languages such as Gopher or HTTP-HTML, were for the most part developed by
persons who used Unics work stations, or who used big main frame computers.
Although those pioneers tried to make the Internet "evolvable" (in the words
of Mister Berners-Lee), and accessible to every possible computer or device
in existence regardless of its hardware or software, it is very human to
mirror oneself, to project one's own working methods, means or preferences
onto other people, and to have oneself in mind as a standard when developing
something, for example a computer programme.
So big was the importance of Unics in the Internet of those years (the 1980's
and early 1990's), and so small the importance of all other operating systems
put together, that some people wrongly assumed the Internet as being "a part
of Unics". Big blunder. The Internet is not a part of Unics, or of any other
operating system. The Internet is a re-routable network, or rather a "network
of networks". Any hardware or software can in theory be used, if accomplishing
the minimum requirements that the Internet imposes for its own operation.
There is some provision for DOS systems in HTTP-HTML, because a page can in
theory be named with a type (extension) of .HTM (three characters in upper
case), or with a type of .html (four characters in lower case). In practice,
though, certain host servers expect the initial page of the document to be
named as index.html or default.html or home.html (four characters in lower
case). It was so with Yahoo Geocities and Web Ring Web Space, two servers
where this document of CSS Dixieland was hosted time ago. It was not so with
Heliohost, where CSS Dixieland was also formerly hosted, because Heliohost
accepted the name INDEX.HTM for the initial page. It may also be possible to
point to the initial page by means of an .htaccess text document, located at
the top level directory. The other pages of the Web document can always be
named with a type of .HTM, and the whole name can be in upper case, as it is
usual in DOS when not using long name software.
Search: Directories edited by human reviewers
Thousands of volunteers review the documents submitted to these directories,
of which one of the most important is or was Directory Mozilla (Open Directory
Project), but there are others. The documents linked by a directory tend to
be of higher relevance to any subject, and also of higher quality, than the
documents linked by a search engine, although on the other hand, the number
of documents linked by a search engine tends to be much bigger than the
documents linked by a directory.
Search: Engines by subject, language or location
We may be looking for a very specific subject, or we may wish to see which
documents exist written in a certain language, or to see only documents of
local scope in a certain part of the world. For any of those purposes, there
are specialised search engines. Some major search engines of general purpose
also provide resources that will help to filter that information, but the
results may be better in a specialised search engine than in a general one.
Or they may not, depending on the subject researched and on the manner of
performing the search (knowing how to use Boolean Enunciators correctly is a
distinct advantage for any search). For scientific subjects there are highly
specialised search engines, listed in another section further below in this
page.
Search Engine Colossus
Search engines ordered by geographic location
http://www.searchenginecolossus.com/
Search Engine Links
Collection of search engines ordered by various attributes
http://www.searchenginelinks.co.uk/
Technical recommendations
What would the Internet be without technical standards ? The answer is very
simple: IT WOULD NOT BE. Computer makers would adhere only to their own
proprietary rules, and computers of different brands could not communicate.
To avert such a chaotic exclusivism, international organisations were founded
for providing a common standard that, although not at all mandatory, is
usually honoured by makers of hardware and of software.
The three hyper links further below are all pointing to the World Wide Web
Consortium, international organisation that regulates the part of the
Internet whose transmissions are made by HTTP Hyper Text Transfer Protocol,
whose predominant mark-up language is HTML Hyper Text Mark-up Language, whose
recommended style sheet language is CSS Cascading Style Sheets, and whose
official image format is PNG Portable Network Graphics.
HTML has been since 1990 a dialect of SGML, the Standard Generalised Mark-up
Language. However, the World Wide Web Consortium approved in October 2014 the
official specification of HTML version 5, which is not based on SGML anymore.
Instead, HTML 5 tries to combine features of SGML with others of XML Extensible
Mark-up Language. HTML 5 does not further develop frames, and it consolidates
separation of presentational mark-up by means of CSS, Cascading Style Sheets,
or of another style sheet language. Not all style sheet languages support
cascading.
Other parts of the Internet possess their own regulatory organisations, such
as the Internet Engineering Task Force, which provided specifications for
Hyper Text Mark-up Language before the World Wide Web Consortium had been
created. Gopher was initially developed by the University of Minnesota in
1991, but years later that university abandoned its own protocol. Gopher is
today maintained by enthusiasts of the protocol.
World Wide Web Consortium
List of technical reports and recommendations
http://www.w3.org/TR/
World Wide Web Consortium
Validator for Hyper Text Mark-up Language
http://validator.w3.org/
World Wide Web Consortium
Validator for Cascading Style Sheets
http://jigsaw.w3.org/css-validator/
Time keepers
Gregorian Calendar
The Gregorian Calendar was developed by a team of astronomers and approved
by the Roman Pontifex Gregorius XIII. It became official in the year 1582
in all
Catholic nations. Most North European nations gradually accepted it in the
XVIII century, Russia and the Balkans in the XX century (as an historical
detail, the Russian Revolution of 1917 happened in October of the Julian
Calendar, but in November of the Gregorian Calendar). The Gregorian Calendar
is based on the Julian Calendar, developed by the astronomer Sosigenes and
approved by the Roman dictator Julius Caesar in the I century Before Christ.
The Julian Calendar is still used, more or less officially, in some Balkan
nations and by Orthodox Christian churches. It is fully official only in
Mount Athos, an Orthodox monastic masculine republic, today an autonomous
nomo of Greece. The Julian Calendar is based on the old Roman Calendar
attributed to the Roman King Numa Pompilius, before the Roman Republic.
It can be observed that all those three calendars, the Calendar of Numa, the
Julian and the Gregorian, were approved by powerful Roman rulers. With the
Eternal City of Rome as the centre of European affairs, as it was for almost
two thousand years, this official Roman sanction was necessary, otherwise those
calendars would have enjoyed only a restricted geographic or chronologic
acceptance, as in fact happened to the well thought-out but short-lived French
Republican Calendar, with its romantic names of months (Brumaire, Vendemiaire,
Thermidor, Ventose, Pluviose...), its days of ten hours, hours of a hundred
minutes, and minutes of a hundred seconds. That beautiful French Calendar was
officially used only in France during the First Republic. Some time after the
coup d'etat of 18 Brumaire, year VIII of the French Republic (9th November
1799 in the Gregorian Calendar), Napoleon Bonaparte, who already had in his hands
the command of France (he had eliminated the Directory and had established the
Consulate), decided to wipe out the French Calendar and return to the Gregorian.
The Gregorian Calendar is today commonly used all over the World, even in
nations where other calendars are also used (Julian, Islamic, Hebrew, Asiatic
calendars and others). The Gregorian Calendar is complicated, with months of
unequal duration, and inexact (almost one day of delay every three thousand
years). Such an error is too big for scientific purposes, hence that the time
counting used in Astronomy be not the Gregorian Calendar, but the Astronomic
Date, also called Astronomic Period or Julian Period. The name 'Julian Period'
refers to Julius Caesar Scaliger, father of its creator Joseph Justus Scaliger.
It is not related to the Julian Calendar at all: the name 'Julian Calendar'
refers to Julius Caesar. That the two names include the word 'Julian' is just
a coincidence.
Astronomic Date
The Astronomic Date, also called Astronomic Period or Julian Period, was
developed by the scholar Joseph Justus Scaliger. It must not be confused with
the Julian Calendar, developed by the astronomer Sosigenes and approved by
Julius Caesar. The Astronomic Date is universally used by astronomers today,
because it considerably simplifies calculations in astronomic research. An
astronomic date is composed of seven ciphers as integers representing days,
and it may have decimal fractions representing decimal parts of days. Because
the Astronomic Date arbitrarily begins at a time when even the Sumerians had
not developed the Science of Astronomy yet, the number thus formed will always
be positive for historic times, with the possibility of being a negative number
only for dates prior to all known History.
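For readers who prefer to study the calculation itself, here is a minimal
sketch in Java Script (an illustration only, not the programme mentioned in
the next paragraph). It relies on the fact that the zero point of the Java
Script clock, the 1st of January 1970 at 00:00 Greenwich Mean Time, corresponds
to the Astronomic Date 2440587.5, and it assumes dates given in the Gregorian
Calendar:
function gregorianToAstronomicDate(year, month, day, hourGMT, minuteGMT) {
    // Milliseconds elapsed since the 1st of January 1970 at 00:00 GMT
    var millis = Date.UTC(year, month - 1, day, hourGMT || 0, minuteGMT || 0, 0);
    // One day has 86 400 000 milliseconds, and the epoch is 2440587.5
    return millis / 86400000 + 2440587.5;
}
// Example: the 1st of January 2000 at 12:00 GMT gives 2451545.0
// gregorianToAstronomicDate(2000, 1, 1, 12, 0);
For dates before the adoption of the Gregorian Calendar in 1582 the sketch
keeps counting in the proleptic Gregorian Calendar, so its results will differ
from dates expressed in the Julian Calendar.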
The programme below converts a given Gregorian Date and Greenwich Mean Time
to the corresponding Astronomic Date (Astronomic Period, Julian Period). It
requires a user agent capable of executing Java Script and with it enabled.