The Computer
In basic terms, a
computer is an electronic device that processes data, converting it into information that is useful to people. Any computer—regardless of its type—is controlled by programmed instructions, which give the machine a purpose and tell
it what to do.
A
computer is an electronic machine, operating under the control of instructions
stored in its own memory that can accept data, manipulate the data according to
specified rules, produce results, and store the results for future use.
Computers process data to create information. Data is a collection of raw
unprocessed facts, figures, and symbols. Information is data that is organized,
meaningful, and useful. To process data into information, a computer uses
hardware and software. Hardware is the electric, electronic, and mechanical
equipment that makes up a computer. Software is the series of instructions that
tells the hardware how to perform tasks.
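The distinction above can be sketched in a few lines of code: raw, unprocessed figures go in, and organized, meaningful information comes out. The readings and field names below are invented purely for illustration.

```python
# Raw data: a collection of unprocessed facts and figures.
raw_data = [72, 68, 75, 71, 69]  # e.g., daily temperature readings

def process(readings):
    """Organize raw readings into meaningful, useful information."""
    return {
        "count": len(readings),
        "average": sum(readings) / len(readings),
        "high": max(readings),
        "low": min(readings),
    }

# Information: the same facts, now organized and meaningful.
information = process(raw_data)
print(information)  # → {'count': 5, 'average': 71.0, 'high': 75, 'low': 68}
```

The software here is the `process` function (the instructions); the hardware would be whatever machine runs it.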
Computers have touched
every part of our lives: the way we work, the way we learn, the way we live,
even the way we play. It is almost impossible to go through a single day
without encountering a computer, a device dependent on a computer, information
produced by a computer, or a word that was introduced or whose meaning has
changed with the advent of computers. Because of the significance of computers
in today’s world, it is important to be computer literate. Being computer
literate means you have knowledge and understanding
of computers and their
uses.
It
is difficult to think of a field in which computers are not used. In addition
to general-purpose computers, special-purpose computers are used in everything
from automobiles to electric razors. Consider how computers have influenced our
daily lives, both positively and negatively. (“To err is human, but to really
foul things up requires a computer.”
Anonymous, from a BBC Radio broadcast.) List ways in which computers are
being used today. What is the most common use? What is the most unusual use? As
a result of the expanding use of computers, in 1986 Florida became the first
state to demand computer literacy of all students by grade 12.
Although
computers are thought of as a relatively recent innovation, the term computer
has a long history. Prior to 1940, “computer” was a job title that referred to
anyone performing calculations.
Consider
how data is different from information. Data is processed into information.
Clifford Stoll – lecturer, computer security expert, and author of Silicon
Snake Oil: Second Thoughts on the Information Superhighway – notes a wide gap
between data and information. Stoll insists that information has a pedigree, or
lineage. Its source is known, whether by a respected professor or a seventh
grader. “The Internet has great gobs of data,” Stoll maintains, “and little,
little information.”
The
first three operations in the information processing cycle — input, process,
and output — are performed to process data into information, while the fourth
operation — storage — refers to a computer’s electronic reservoir capability.
Think about how we perform each phase in the information processing cycle in
the “human computer” (i.e., the human brain) while completing a common task,
such as learning a telephone number.
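The four phases of the cycle can be sketched for the telephone-number example; the digits and the contact name below are made up for illustration.

```python
# A minimal sketch of the information processing cycle:
# input -> process -> output -> storage.
def input_phase():
    return "5551234567"          # input: raw digits are accepted

def process_phase(digits):
    # process: the raw digits are organized into a meaningful form
    return f"({digits[:3]}) {digits[3:6]}-{digits[6:]}"

storage = {}                      # storage: results held for future use

number = input_phase()
formatted = process_phase(number)
print(formatted)                  # output: the information is conveyed
storage["Alice"] = formatted      # "Alice" is an invented contact name
```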
The History of Computers
The
history of computer
development is often described in terms of the different generations of
computing devices. Each
generation of computer is characterized by a major technological development
that fundamentally changed the way computers operate, resulting in increasingly
smaller, cheaper, more powerful, and more efficient and reliable devices. Read
about each generation and the developments that led to the current devices that
we use today.
The
first computers used vacuum tubes for circuitry and magnetic drums for memory, and were often enormous,
taking up entire rooms. They were very expensive to operate and in addition to
using a great deal of electricity, generated a lot of heat, which was often the
cause of malfunctions.
First
generation computers relied on machine language, the
lowest-level programming language understood by computers, to perform
operations, and they could only solve one problem at a time. Input was based on
punched cards and paper tape, and output was displayed on printouts.
The
UNIVAC and ENIAC
computers are examples of first-generation computing devices. The UNIVAC was
the first commercially produced computer; its first unit was delivered to the
U.S. Census Bureau in 1951.
Transistors
replaced vacuum tubes and ushered in the second generation of computers. The
transistor was invented in 1947 but did not see widespread use in computers
until the late 1950s. The transistor was far superior to the vacuum tube,
allowing computers to become smaller, faster, cheaper, more energy-efficient
and more reliable than their first-generation predecessors. Though the
transistor still generated a great deal of heat that subjected the computer to
damage, it was a vast improvement over the vacuum tube. Second-generation
computers still relied on punched cards for input and printouts for output.
Second-generation
computers moved from cryptic binary machine language to symbolic,
or assembly,
languages, which allowed programmers to specify instructions in words. High-level
programming languages
were also being developed at this time, such as early versions of COBOL and FORTRAN. These
were also the first computers that stored their instructions in their memory,
which moved from a magnetic drum to magnetic core technology.
The
first computers of this generation were developed for the atomic energy
industry.
The
development of the integrated circuit was
the hallmark of the third generation of computers. Transistors were
miniaturized and placed on silicon chips, called semiconductors, which
drastically increased the speed and efficiency of computers.
Instead
of punched cards and printouts, users interacted with third generation
computers through keyboards and monitors and interfaced with an operating system, which
allowed the device to run many different applications at one time with a central
program that monitored the memory. Computers for the first time became
accessible to a mass audience because they were smaller and cheaper than their
predecessors.
The
microprocessor
brought the fourth generation of computers, as thousands of integrated circuits
were built onto a single silicon chip. What in the first generation filled an
entire room could now fit in the palm of the hand. The Intel 4004 chip,
developed in 1971, located all the components of the computer—from the central processing unit and
memory to input/output controls—on a single chip.
In
1981 IBM
introduced its first computer for the home user, and in 1984 Apple
introduced the Macintosh. Microprocessors also moved out of the realm of
desktop computers and into many areas of life as more and more everyday
products began to use microprocessors.
As
these small computers became more powerful, they could be linked together to
form networks, which eventually led to the development of the Internet. Fourth
generation computers also saw the development of GUIs, the mouse and handheld
devices.
Fifth
generation computing devices, based on artificial
intelligence,
are still in development, though there are some applications, such as voice recognition, that
are being used today. The use of parallel processing and
superconductors is helping to make artificial intelligence a reality. Quantum computation and
molecular and nanotechnology will
radically change the face of computers in years to come. The goal of
fifth-generation computing is to develop devices that respond to natural language input
and are capable of learning and self-organization.
TYPES OF COMPUTERS
Computers can be classified based on
their principles of operation or on their configuration. By configuration, we
mean the size, speed of computation, and storage capacity of a computer.
Types of Computers based on Principles of Operation
There are
three different types of computers according to the principles of operation.
Those three types of computers are:
Analog Computers
Digital Computers
Hybrid Computers
Analog Computers
Analog
Computer is a computing device that works on continuous range of values. The
results given by the analog computers will only be approximate since they deal
with quantities that vary continuously. It generally deals with physical
variables such as voltage, pressure, temperature, speed, etc.
Digital Computers
On the other
hand, a digital computer operates on digital data such as numbers. It uses the
binary number system, in which there are only two digits, 0 and 1. Each digit
is called a bit.
The digital
computer is designed using digital circuits in which there are two levels for
an input or output signal. These two levels are known as logic 0 and logic 1.
Digital Computers can give more accurate and faster results.
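The binary representation described above can be seen directly in code; the value 13 is chosen arbitrarily for illustration.

```python
# Every value in a digital computer is a pattern of bits (0s and 1s),
# corresponding to the two logic levels of its circuits.
value = 13
bits = bin(value)        # '0b1101': the bit pattern for 13
back = int("1101", 2)    # 13: the same pattern read back as a number

# Each bit position is a power of two: 1101 = 8 + 4 + 0 + 1 = 13
print(bits, back)
```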
Digital
computer is well suited for solving complex problems in engineering and
technology. Hence digital computers have an increasing use in the field of
design, research and data processing.
Based on the
purpose, Digital computers can be further classified as,
General
Purpose Computers
Special
Purpose Computers
Special
purpose computer is one that is built for a specific application. General
purpose computers are used for any type of applications. They can store
different programs and do the jobs as per the instructions specified on those
programs. Most of the computers that we see today, are general purpose
computers.
Hybrid Computers
A hybrid
computer combines the desirable features of analog and digital computers. It is
mostly used for automatic operations of complicated physical processes and
machines. Now-a-days analog-to-digital and digital-to-analog converters are
used for transforming the data into suitable form for either type of
computation.
For example,
in a hospital's ICU, analog devices might measure the patient's temperature,
blood pressure, and other vital signs. These analog measurements might then be
converted into numbers and supplied to the digital components in the system.
These components monitor the patient's vital signs and send signals if any
abnormal readings are detected. Hybrid computers are mainly used for
specialized tasks.
Types of Computers based on Configuration
There are
four different types of computers when we classify them based on their
performance and capacity. The four types are
Mainframe
Computers
Mini
Computers
Micro
Computers
Super Computers
When we talk
about types of computers, the first type that comes to our mind would be Super
computers. They are the best in terms of processing capacity and also
the most expensive ones. These computers can process billions of
instructions per second. Normally, they will be used for applications which
require intensive numerical computations such as stock analysis, weather
forecasting etc. Other uses of supercomputers are scientific simulations,
(animated) graphics, fluid dynamic calculations, nuclear energy research,
electronic design, and analysis of geological data (e.g. in petrochemical
prospecting). Perhaps the best known super computer manufacturer is Cray
Research. Some of the "traditional" companies which produce
super computers are Cray, IBM and Hewlett-Packard.
As of July
2009, the IBM Roadrunner, located at Los Alamos National Laboratory, is the
fastest super computer in the world.
Mainframe Computers
Mainframe
computers can also process data at very high speeds, i.e., hundreds of millions
of instructions per second, and they are also quite expensive. Normally, they
are used in banking, airlines, railways, etc. for their applications.
Mini Computers
Mini
computers rank below mainframe computers in terms of speed and storage
capacity. They are also less expensive than mainframe computers. Some of the
features of mainframes are not available in mini computers. Hence, their
performance also will be lower than that of mainframes.
Micro Computers
The
six major categories of computers are personal computers, handheld computers,
Internet appliances, mid-range servers, mainframes, and supercomputers. These
categories are based on differences in size, speed, processing capabilities,
and price. A personal computer can perform all of its input, processing,
output, and storage activities by itself. Personal computers include desktop
computers and notebook computers. A desktop computer is designed so the system
unit, input devices, output devices, and any other devices fit entirely on or
under a desk or table. Variations of desktop computers include tower models
(computers with tall and narrow system units that can sit vertically on the
floor), all-in-one computers (less expensive computers that combine the monitor
and system unit into a single device), and workstations (more expensive and
powerful computers designed for work that requires intense calculation and
graphics capabilities).
A
notebook computer is
a portable personal computer small enough to fit on your lap. Notebook and
desktop computers are used at home or in the office to perform application
software-related tasks or to access the Internet. A handheld computer is a
small computer that fits in your hand. Handheld computers can perform specific,
industry-related functions, or can be general-purpose.
A PDA (personal digital assistant) is a
handheld computer that provides personal organizer functions, such as a
calendar, appointment book, and notepad. An Internet appliance is a computer
with limited functionality whose main purpose is to connect to the Internet
from home. A mid-range server is more powerful and larger than a workstation
computer. Users typically access a mid-range server through a personal computer
or a terminal, which is a device with a monitor and a keyboard that usually has
no stand-alone processing power.
A
mainframe is
a large, expensive, very powerful computer that can handle hundreds or
thousands of connected users simultaneously. A supercomputer is the fastest,
most powerful, and most expensive category of computer.
The
invention of the microprocessor (a single-chip CPU) gave birth to the much
cheaper micro computers.
They are further classified into
Desktop
Computers
Laptop
Computers
Handheld
Computers(PDAs)
Desktop Computers
Today, desktop computers are the most popular computer systems. These desktop computers are also known as personal computers, or simply PCs. They are usually easier to use and more affordable. They are normally intended for individual users for word processing and other applications.
Laptop Computers
Laptop computers are portable computers. They are lightweight computers with a thin screen. They are also called notebook computers because of their small size. They can operate on batteries and hence are very popular with travelers. The screen folds down onto the keyboard when the computer is not in use.
Handheld Computers
Handheld computers, or Personal Digital Assistants (PDAs), are pen-based and battery-powered. They are small and can be carried anywhere. They use a pen-like stylus and accept handwritten input directly on the screen. They are not as powerful as desktops or laptops, but they are used for scheduling appointments, storing addresses, and playing games. They have touch screens, which we use with a finger or a stylus.
Computer Components
Computer
hardware components include input devices, output devices, a system unit,
storage devices, and communications devices. An input device is any hardware
component that allows a user to enter data and instructions into a computer.
Six commonly used input devices are the keyboard, mouse, microphone, scanner,
digital camera, and PC camera. An output device is any hardware component
that can convey information to a user. Three commonly used output devices are
a printer, a monitor, and speakers.
The
system unit is a box-like case made from metal or plastic that protects the
internal electronic components of the computer from damage. The system unit
contains the central processing unit and memory. The central processing unit
(CPU) is the electronic device that interprets and carries out the basic
instructions that operate the computer. Memory is a temporary holding place
for data and instructions.
A
storage device records and retrieves data to and from a storage medium. Common
storage devices include a floppy disk drive, a Zip® drive, a hard disk
drive, a CD-ROM drive, a CD-RW drive, a DVD-ROM drive, and a DVD+RW drive. A
communications device enables computer users to communicate and exchange
items such as data, instructions, and information with another computer. A
modem is a communications device that enables computers to communicate
usually via telephone lines or cable.
A
computer is a powerful tool because it is able to perform the information
processing cycle operations (input, process, output, and storage) with
amazing speed, reliability, and accuracy; store huge amounts of data and
information; and communicate with other computers. Computers allow users to
generate correct information quickly, hold the information so it is available
at any time, and share the information with other computer users.
Computers come in all
types and sizes. There are primarily two main sizes of computers: the desktop
computer and the portable computer. The portable computer comes in various
sizes and is referred to as a laptop, notebook, or hand-held computer. These
names generally denote different sizes, the laptop being the largest and the
hand-held the smallest. This document will mainly talk about the desktop
computer, although portable computer issues are also discussed in various
areas.
Differentiate between storage and memory
Memory,
which is composed of one or more chips on the motherboard, is a temporary
holding place for data and instructions during processing. The contents of volatile
memory, such as RAM, are lost when the power to the computer is turned off.
The contents of nonvolatile memory, such as ROM, are not lost when power is
removed from the computer. Storage holds items such as data, instructions,
and information for future use; that is, storage holds these items while they
are not being processed. Storage is nonvolatile, which means the items in
storage are retained even when power is removed from the computer. Compared
to memory, the access time (the time it takes to locate a single item) for
storage is slow.
Identify various types of storage media and storage
devices
A
storage medium (media is the plural) is the physical material on which items
are kept. A storage device is the computer hardware that records and
retrieves items to and from a storage medium. Storage devices can function as
sources of input and output. When storage devices transfer items from a
storage medium into memory – a process called reading – they function as
sources of input. When storage devices transfer items from memory to a
storage medium – a process called writing – they function as sources of
output. Types of storage media include floppy disks, hard disks, compact
discs, tape, PC Cards, microfilm, and microfiche
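The reading/writing distinction above can be sketched with a file standing in for the storage medium; the file contents and use of a temporary file are illustrative only.

```python
import os
import tempfile

# Create a temporary file to act as the "storage medium".
fd, path = tempfile.mkstemp()
os.close(fd)

# Writing: items move from memory to the storage medium
# (the storage device functions as a source of output).
with open(path, "w") as f:
    f.write("data for future use")

# Reading: items move from the storage medium into memory
# (the storage device functions as a source of input).
with open(path) as f:
    contents = f.read()

print(contents)
os.remove(path)  # clean up the temporary medium
```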
A CD-ROM, or compact disc
read-only memory, is a compact disc that uses the same laser technology as
audio CDs. For a computer to read items stored on a CD-ROM, you insert the
disc into a CD-ROM drive or CD-ROM player. When viewing animation or video,
the speed of a CD-ROM drive, or data transfer rate, is important. A higher
data transfer rate results in smoother playback of images and sounds.
Most standard CDs are
single-session because manufacturers record (write) all items to the disc at
one time. Variations of standard CD-ROMs, such as PhotoCD, CD-R (compact
disc-recordable), and CD-RW (compact disc-rewritable), are multisession,
which means additional data, instructions, and information can be written at
a later time. A PhotoCD is a compact disc that contains digital photographic
images. A CD-R (compact disc-recordable) is a multisession compact disc onto
which you can record your own items. A CD-RW (compact disc-rewritable) is an
erasable disc you can write on multiple times.
A DVD-ROM (digital video
disc-ROM) is an extremely high-capacity compact disc capable of storing from 4.7
GB to 17 GB. In order to read a DVD-ROM, you must have a DVD-ROM drive. You
also can obtain recordable and rewritable versions of DVD. A DVD-R
(DVD-recordable) allows you to write on it once and read (play) it many
times. With the new rewritable DVD, called a DVD+RW, you can erase and record
on the disc multiple times.
Storage Media and Devices
The first computer storage medium
was a punched card. Herman Hollerith’s punched card tabulating machine helped
complete the 1890 census in just 2½ years (compared to 8 years for the 1880
census) at a savings of more than $5 million. Hollerith later founded the
Tabulating Machine Company, which eventually became known as International
Business Machines (IBM). An understanding of storage terms is very important
for purchasers, and users, of storage devices.
· 1 Kilobyte (KB) = 1 thousand bytes
· 1 Megabyte (MB) = 1 million bytes
· 1 Gigabyte (GB) = 1 billion bytes
· 1 Terabyte (TB) = 1 trillion bytes
· 1 Petabyte (PB) = 1 quadrillion bytes
1 KB stores approximately ½ page
of text. Depending on speed and size, rough costs for RAM are about $40 to
$50 per megabyte, while hard disk storage costs are around $0.20 per
megabyte.
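The units in the list above can be put to work in a small helper; this is a sketch using the decimal definitions the text gives (1 KB = 1 thousand bytes, and so on), and the example sizes are illustrative.

```python
# Decimal storage units, largest first, as defined in the text.
UNITS = [("TB", 10**12), ("GB", 10**9), ("MB", 10**6), ("KB", 10**3)]

def human_size(num_bytes):
    """Express a byte count in the largest whole unit that fits."""
    for name, size in UNITS:
        if num_bytes >= size:
            return f"{num_bytes / size:g} {name}"
    return f"{num_bytes} bytes"

print(human_size(4_700_000_000))  # → 4.7 GB (roughly one DVD side)
print(human_size(1_440_000))      # → 1.44 MB (roughly one floppy disk)
```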
Computers are made of a few basic components, described below.
The physical elements
of a computer, its hardware, are generally divided into the central processing
unit (CPU), main memory (or random-access
memory, RAM), and peripherals. The last class encompasses all sorts
of input and output (I/O) devices: keyboard, display monitor, printer, disk
drives, network connections, scanners, and more.
The CPU and RAM are integrated circuits (ICs)—small silicon wafers,
or chips, that contain thousands or millions of transistors that function as
electrical switches. In 1965 Gordon Moore, one of the founders of Intel,
stated what has become known as Moore's law:
the number of transistors on a chip doubles about every 18 months. (See figure.) Moore suggested
that financial constraints would soon cause his law to break down, but it has
been remarkably accurate for far longer than he first envisioned. It now
appears that technical constraints may finally invalidate Moore's law, since
sometime between 2010 and 2020 transistors would have to consist of only a few
atoms each, at which point the laws of quantum physics imply that they would
cease to function reliably.
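Moore's law as stated above lends itself to a back-of-the-envelope calculation; the 30-year span below is just an example figure.

```python
# Transistor counts double roughly every 18 months (Moore's law).
def doublings(years, period_months=18):
    """Number of doubling periods in the given span of years."""
    return (years * 12) // period_months

def growth_factor(years):
    """Overall growth in transistor count over the span."""
    return 2 ** doublings(years)

# Over 30 years there are 20 doubling periods: a factor of about a million.
print(doublings(30), growth_factor(30))  # → 20 1048576
```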
The CPU provides the
circuits that implement the computer's instruction set—its machine language. It
is composed of an arithmetic-logic
unit (ALU) and control circuits. The ALU
carries out basic arithmetic and logic operations, and the control section
determines the sequence of operations, including branch instructions that
transfer control from one part of a program to another. Although the main memory
was once considered part of the CPU, today it is regarded as separate. The
boundaries shift, however, and CPU chips now also contain some high-speed cache
memory where data and instructions are temporarily stored for fast access.
The ALU has circuits that
add, subtract, multiply, and divide two arithmetic values, as well as circuits
for logic operations such as AND and OR (where a 1 is interpreted as true and a
0 as false, so that, for instance, 1 AND 0 = 0; see Boolean
algebra).
The ALU has several to more than a hundred registers that temporarily hold
results of its computations for further arithmetic operations or for transfer
to main memory.
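The ALU operations described above map directly onto Python's bitwise and arithmetic operators; the operand values are arbitrary examples.

```python
# Logic operations on single bits, where 1 is true and 0 is false.
print(1 & 0)  # AND: 1 AND 0 = 0, as in the text
print(1 | 0)  # OR:  1 OR 0 = 1

# Arithmetic on two values, as the ALU's add/subtract/multiply/divide
# circuits would compute them.
a, b = 6, 3
print(a + b, a - b, a * b, a // b)  # → 9 3 18 2
```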
The circuits in the
CPU control section provide branch instructions, which
make elementary decisions about what instruction to execute next. For example,
a branch instruction might be “If the result of the last ALU operation is
negative, jump to location A in the program; otherwise, continue with the
following instruction.” Such instructions allow “if-then-else” decisions in a
program and execution of a sequence of instructions, such as a “while-loop”
that repeatedly does some set of instructions while some condition is met. A
related instruction is the subroutine call, which
transfers execution to a subprogram and then, after the subprogram finishes,
returns to the main program where it left off.
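The subroutine call described above can be sketched with a list acting as the call stack; the "addresses" are invented numbers standing in for program locations.

```python
# A sketch of subroutine call and return: the call remembers where to
# resume, transfers execution, and the return picks up where it left off.
call_stack = []

def call(return_address, subroutine_address):
    call_stack.append(return_address)   # save the place to resume
    return subroutine_address           # transfer execution to the subprogram

def ret():
    return call_stack.pop()             # resume the main program

pc = call(return_address=5, subroutine_address=20)
print(pc)   # → 20: execution has jumped into the subprogram
pc = ret()
print(pc)   # → 5: execution resumes in the main program
```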
In a stored-program
computer, programs and data in memory are
indistinguishable. Both are bit patterns—strings of 0s and 1s—that may be
interpreted either as data or as program instructions, and both are fetched
from memory by the CPU. The CPU has a program counter that holds the memory
address (location) of the next instruction to be executed. The basic operation
of the CPU is the “fetch-decode-execute” cycle: fetch the instruction at the
address held in the program counter, decode it to determine the operation and
its operands, execute the operation, and update the program counter to point
to the next instruction.
At the end of these
steps the cycle is ready to repeat, and it continues until a special halt
instruction stops execution.
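The cycle can be sketched as a tiny interpreter; the instruction set (LOAD/ADD/JNEG/HALT) is invented for illustration and is not any real machine language.

```python
# A minimal fetch-decode-execute loop with one accumulator register,
# a program counter, a branch instruction, and a halt instruction.
def run(program):
    acc = 0      # accumulator register
    pc = 0       # program counter: address of the next instruction
    while True:
        op, arg = program[pc]   # fetch the instruction at pc
        pc += 1                 # advance to the following instruction
        if op == "LOAD":        # decode, then execute:
            acc = arg
        elif op == "ADD":
            acc += arg
        elif op == "JNEG":      # branch: jump if the last result is negative
            if acc < 0:
                pc = arg
        elif op == "HALT":      # the special halt instruction stops the cycle
            return acc

program = [
    ("LOAD", 5),
    ("ADD", -8),
    ("JNEG", 4),   # acc is -3, so jump to address 4
    ("HALT", None),
    ("ADD", 10),   # reached via the branch
    ("HALT", None),
]
print(run(program))  # → 7
```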
Steps of this cycle
and all internal CPU operations are regulated by a clock that oscillates at a
high frequency (now typically measured in gigahertz, or billions of cycles per
second). Another factor that affects performance is the “word” size—the number
of bits that are fetched at once from memory and on which CPU instructions
operate. Digital words now consist of 32 or 64 bits,
though sizes from 8 to 128 bits are seen.
Processing instructions
one at a time, or serially, often creates a bottleneck because many program
instructions may be ready and waiting for execution. Since the early 1980s, CPU
design has followed a style originally called reduced-instruction-set computing
(RISC). This design minimizes the transfer of data
between memory and CPU (all ALU operations are done only on data in CPU
registers) and calls for simple instructions that can execute very quickly. As
the number of transistors on a chip has grown, the RISC design requires a
relatively small portion of the CPU chip to be devoted to the basic instruction
set. The remainder of the chip can then be used to speed CPU operations by
providing circuits that let several instructions execute simultaneously, or in
parallel.
There are two major
kinds of instruction-level parallelism (ILP) in the
CPU, both first used in early supercomputers. One is the pipeline,
which allows the fetch-decode-execute cycle to have several instructions under
way at once. While one instruction is being executed, another can obtain its
operands, a third can be decoded, and a fourth can be fetched from memory. If
each of these operations requires the same time, a new instruction can enter
the pipeline at each phase and (for example) five instructions can be completed
in the time that it would take to complete one without a pipeline. The other
sort of ILP is to have multiple execution units in the CPU—duplicate arithmetic
circuits, in particular, as well as specialized circuits for graphics
instructions or for floating-point calculations
(arithmetic operations involving noninteger numbers, such as 3.27). With this “superscalar” design, several instructions can execute at
once.
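The pipeline arithmetic implied above is worth making explicit: with s equal-length stages, n instructions finish in s + n - 1 cycles instead of s * n, so the speedup approaches s for long instruction streams. A quick sketch:

```python
# Cycle counts for a pipeline with equal-length stages.
def cycles_without_pipeline(n, stages=5):
    return n * stages               # each instruction runs all stages alone

def cycles_with_pipeline(n, stages=5):
    return stages + n - 1           # one new instruction enters each cycle

n = 100
print(cycles_without_pipeline(n))   # → 500
print(cycles_with_pipeline(n))      # → 104
# Speedup approaches the number of stages as n grows:
print(cycles_without_pipeline(n) / cycles_with_pipeline(n))
```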
Both forms of ILP
face complications. A branch instruction might render preloaded instructions in
the pipeline useless if they entered it before the branch jumped to a new part
of the program. Also, superscalar execution must determine whether an
arithmetic operation depends on the result of another operation, since they
cannot be executed simultaneously. CPUs now have additional circuits to predict
whether a branch will be taken and to analyze instructional dependencies. These
have become highly sophisticated and can frequently rearrange instructions to
execute more of them in parallel.
The earliest forms of
computer main memory were mercury delay lines, which were tubes of mercury that
stored data as ultrasonic waves, and cathode-ray tubes, which stored data as
charges on the tubes' screens. The magnetic drum, invented about 1948, used an
iron oxide coating on a rotating drum to store data and programs as magnetic
patterns.
In a binary computer
any bistable device (something that can be placed in either of two states) can
represent the two possible bit values of 0 and 1 and can thus serve as computer
memory. Magnetic-core memory, the first relatively cheap
RAM device, appeared in 1952. It was composed of tiny, doughnut-shaped ferrite magnets threaded on
the intersection points of a two-dimensional wire grid. These wires carried
currents to change the direction of each core's magnetization, while a third
wire threaded through the doughnut detected its magnetic orientation.
The first integrated circuit (IC) memory chip appeared in
1971. IC memory stores a bit in a transistor-capacitor combination. The capacitor holds a charge to represent a 1 and no
charge for a 0; the transistor switches it between these two states. Because a
capacitor charge gradually decays, IC memory is dynamic RAM
(DRAM), which must have its stored values refreshed periodically (every 20
milliseconds or so). There is also static RAM
(SRAM), which does not have to be refreshed. Although faster than DRAM, SRAM
uses more transistors and is thus more costly; it is used primarily for CPU
internal registers and cache memory.
In addition to main
memory, computers generally have special video memory
(VRAM) to hold graphical images, called bit-maps, for the computer display. This memory
is often dual-ported—a new image can be stored in it at the same time that its
current data is being read and displayed.
It takes time to
specify an address in a memory chip, and, since memory is slower than a CPU,
there is an advantage to memory that can transfer a series of words rapidly
once the first address is specified. One such design is known as synchronous DRAM (SDRAM), which became widely used by
2001.
Nonetheless, data
transfer through the “bus”—the set of wires that connect the CPU to memory and
peripheral devices—is a bottleneck. For that reason, CPU chips now contain cache memory—a small amount of fast SRAM. The
cache holds copies of data from blocks of main memory. A well-designed cache
allows up to 85–90 percent of memory references to be done from it in typical
programs, giving a several-fold speedup in data access.
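The "several-fold speedup" claim follows from a simple weighted average; the latencies below are invented round numbers chosen only to make the arithmetic visible, not measured figures.

```python
# Average access time with a cache: a weighted mix of the fast cache
# latency and the slower main-memory latency.
def average_access_time(hit_rate, cache_ns=2, memory_ns=20):
    return hit_rate * cache_ns + (1 - hit_rate) * memory_ns

no_cache = 20.0                          # every reference goes to memory
with_cache = average_access_time(0.9)    # 90 percent of references hit
print(with_cache)                        # → 3.8
print(no_cache / with_cache)             # roughly a five-fold speedup
```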
The time between two
memory reads or writes (cycle time) was about 17 microseconds (millionths of
a second) for early core memory and about 1 microsecond for core in the early
1970s. The first DRAM had a cycle time of about half a microsecond, or 500
nanoseconds (billionths of a second), and today it is 20 nanoseconds or less.
An equally important measure is the cost per bit of memory. The first DRAM
stored 128 bytes (1 byte = 8 bits) and cost about $10, or $80,000 per megabyte
(millions of bytes). In 2001 DRAM could be purchased for less than $0.25 per
megabyte. This vast decline in cost made possible graphical
user interfaces
(GUIs), the display fonts that word processors use, and the manipulation and
visualization of large masses of data by scientific computers.
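The cost decline quoted above can be turned into an average annual rate; the 30-year span (first DRAM in the early 1970s to 2001) is an assumption for this back-of-the-envelope check.

```python
# Compound average annual decline from about $80,000 per megabyte for the
# first DRAM to under $0.25 per megabyte in 2001.
early_cost, later_cost, years = 80_000.0, 0.25, 30
annual_decline = 1 - (later_cost / early_cost) ** (1 / years)
print(f"{annual_decline:.0%} average annual decline")
```

The result comes out in the mid-30-percent range per year, consistent with the roughly 30 percent annual decline the text later cites for disk prices.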
Secondary memory on a
computer is storage for data and programs not in use at the moment. In addition
to punched cards and paper tape, early computers also used magnetic tape for
secondary storage. Tape is cheap, either on large reels or in small cassettes,
but has the disadvantage that it must be read or written sequentially from one
end to the other.
IBM introduced the first magnetic disk,
the RAMAC,
in 1955; it held 5 megabytes and rented for $3,200 per month. Magnetic disks
are platters coated with iron oxide, like tape and drums. An arm with a tiny
wire coil, the read/write (R/W) head, moves radially over the disk, which is
divided into concentric tracks composed of small arcs, or sectors, of data.
Magnetized regions of the disk generate small currents in the coil as it
passes, thereby allowing it to “read” a sector; similarly, a small current in
the coil will induce a local magnetic change in the disk, thereby “writing” to
a sector. The disk rotates rapidly (up to 15,000 rotations per minute), and so
the R/W head can rapidly reach any sector on the disk.
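The track/sector layout described above maps directly to how disk capacity is computed and how a (cylinder, head, sector) address translates to a linear block number. The geometry figures below are illustrative assumptions, not those of any particular drive.

```python
# Disk capacity follows from the geometry; CHS addressing maps a
# (cylinder, head, sector) triple to a linear block number.
# All geometry values here are assumed for illustration.

TRACKS = 1024        # concentric tracks per surface
HEADS = 2            # one R/W head per platter surface
SECTORS = 63         # sectors (small arcs) per track
SECTOR_BYTES = 512   # bytes per sector

capacity = TRACKS * HEADS * SECTORS * SECTOR_BYTES
print(capacity // (1024 * 1024), "MiB")   # prints 63 MiB

def chs_to_lba(cylinder, head, sector):
    """Classic CHS-to-LBA translation; sectors are numbered from 1."""
    return (cylinder * HEADS + head) * SECTORS + (sector - 1)

print(chs_to_lba(0, 0, 1))   # first sector of the disk -> block 0
```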
Early disks had large
removable platters. In the 1970s IBM introduced sealed disks with fixed
platters known as Winchester disks—perhaps because the
first ones had two 30-megabyte platters, suggesting the Winchester 30-30 rifle.
Not only was the sealed disk protected against dirt, the R/W head could also
“fly” on a thin air film, very close to the platter. By putting the head closer
to the platter, the region of oxide film that represented a single bit could be
much smaller, thus increasing storage capacity. This basic technology is still
used. (See figure.)
Refinements have
included putting multiple platters—10 or more—in a single disk drive, with a pair
of R/W heads for the two surfaces of each platter in order to increase storage
and data transfer rates. Even greater gains have resulted from improving
control of the radial motion of the disk arm from track to track, resulting in
denser distribution of data on the disk. By 2002 such densities had reached
over 8,000 tracks per centimetre (20,000 tracks per inch), and a platter the
diameter of a coin could hold over a gigabyte of data. In 2002 an 80-gigabyte
disk cost about $200—only one ten-millionth of the 1955 cost and representing
an annual decline of nearly 30 percent, similar to the decline in the price of
main memory.
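The "annual decline of nearly 30 percent" can be checked directly from the cost ratio the text gives. A quick sketch of the compound-rate arithmetic:

```python
# Check the claimed ~30 percent annual decline in disk cost:
# the 2002 cost is one ten-millionth of the 1955 cost, 47 years apart.

ratio = 1 / 10_000_000   # 2002 cost relative to 1955 cost
years = 2002 - 1955      # 47 years

annual_factor = ratio ** (1 / years)      # cost multiplier per year
decline_pct = (1 - annual_factor) * 100
print(round(decline_pct, 1))              # prints 29.0
```

A factor-of-ten-million drop over 47 years works out to about a 29 percent decline per year, consistent with the text's figure.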
Optical storage devices—CD-ROM (compact
disc,
read-only memory) and DVD-ROM
(digital videodisc, or versatile disc)—appeared in the mid-1980s and '90s. They
both represent bits as tiny pits in plastic, organized in a long spiral like a
phonograph record, written and read with lasers. (See figure.) A CD-ROM can hold
2 gigabytes of data, but the inclusion of error-correcting codes (to correct
for dust, small defects, and scratches) reduces the usable data to 650
megabytes. DVDs are denser, have smaller pits, and can hold 17 gigabytes with
error correction.
Optical storage
devices are slower than magnetic disks, but they are well suited for making
master copies of software or for multimedia (audio and video) files that are
read sequentially. There are also writable and rewritable CD-ROMs (CD-R and
CD-RW) and DVD-ROMs (DVD-R and DVD-RW) that can be used like magnetic tapes for
inexpensive archiving and sharing of data.
The decreasing cost
of memory continues to make new uses possible. A single CD-ROM can store 100
million words, more than twice as many words as are contained in the printed Encyclopædia
Britannica. A DVD can hold a feature-length motion picture. Nevertheless,
even larger and faster storage systems, such as three-dimensional optical
media, are being developed for handling data for computer simulations of
nuclear reactions, astronomical data, and medical data, including X-ray images.
Such applications typically require many terabytes (1 terabyte = 1,000
gigabytes) of storage, which can lead to further complications in indexing and
retrieval.
Computer peripherals
are devices used to input information and instructions into a computer for storage
or processing and to output the processed data. In addition, devices that
enable the transmission and reception of data between computers are often
classified as peripherals.
A plethora of devices
falls into the category of input peripheral. Typical examples include
keyboards, mice, trackballs, pointing sticks, joysticks, digital tablets, touch
pads, and scanners.
Keyboards contain mechanical
or electromechanical switches that change the flow of current through the
keyboard when depressed. A microprocessor embedded in the keyboard interprets
these changes and sends a signal to the computer. In addition to letter and
number keys, most keyboards also include “function” and “control” keys that
modify input or send special commands to the computer.
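The keyboard microprocessor's signal is, in effect, a scan code that the computer translates into a character, with control and shift keys modifying the result. A minimal sketch; the scan-code numbers are assumed, not those of a real keyboard.

```python
# The keyboard reports a scan code per key event; the computer maps it
# to a character, with modifier keys changing the mapping.
# Scan-code values below are assumed for illustration.

SCANCODES = {30: "a", 31: "s", 32: "d"}

def translate(code, shift=False):
    """Map a scan code to a character, honoring the shift modifier."""
    ch = SCANCODES.get(code, "?")   # "?" for unknown codes
    return ch.upper() if shift else ch

print(translate(30))               # prints a
print(translate(30, shift=True))   # prints A
```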
Mechanical mice and trackballs operate
alike, using a rubber or rubber-coated ball that turns two shafts connected to
a pair of encoders that measure the horizontal and vertical components of a
user's movement, which are then translated into cursor movement on a computer
monitor. Optical mice employ a light beam and camera lens to translate motion
of the mouse into cursor movement. (See figure.)
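Whether the encoders are mechanical or optical, the mouse ultimately reports small movement deltas that the computer accumulates into an on-screen cursor position, clamped to the screen edges. A sketch with an assumed screen size:

```python
# Accumulate (dx, dy) movement reports from a mouse into a cursor
# position, clamped to the screen. Screen size is assumed.

WIDTH, HEIGHT = 1024, 768

def move_cursor(pos, dx, dy):
    """Apply one movement report, keeping the cursor on screen."""
    x = min(max(pos[0] + dx, 0), WIDTH - 1)
    y = min(max(pos[1] + dy, 0), HEIGHT - 1)
    return (x, y)

pos = (512, 384)                      # start at screen centre
for dx, dy in [(5, -3), (12, 0), (-600, 0)]:
    pos = move_cursor(pos, dx, dy)
print(pos)  # prints (0, 381): the large leftward move is clamped at the edge
```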
Pointing
sticks, which are popular on many laptop systems, employ a technique that uses
a pressure-sensitive resistor. As a user applies pressure to the stick, the
resistor increases the flow of electricity, thereby signaling that movement has
taken place. Most joysticks operate in a similar manner.
Digital
tablets and touch pads are similar in purpose and functionality. In both cases,
input is taken from a flat pad that contains electrical sensors that detect the
presence of either a special tablet pen or a user's finger, respectively.
A scanner
is somewhat akin to a photocopier. A light source illuminates the object to be
scanned, and the varying amounts of reflected light are captured and measured
by an analog-to-digital converter attached to light-sensitive diodes. The diodes generate
a pattern of binary digits that are stored in the computer as a graphical
image.
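The analog-to-digital conversion step can be sketched as simple quantization: each diode's reflected-light reading is mapped to one of a fixed number of levels. The 8-bit depth and 0.0-1.0 signal range below are assumptions for illustration.

```python
# A scanner's A/D converter quantizes each analog light reading into
# a fixed number of discrete levels (here, 8 bits = 256 levels).
# The 0.0-1.0 input range is an assumed normalization.

LEVELS = 256  # 8-bit converter

def quantize(reading):
    """Map an analog reading in [0.0, 1.0] to an integer 0-255."""
    r = min(max(reading, 0.0), 1.0)          # clamp to valid range
    return min(int(r * LEVELS), LEVELS - 1)  # 1.0 maps to the top level

row = [0.0, 0.25, 0.5, 0.99, 1.0]            # readings along one scan line
print([quantize(r) for r in row])            # prints [0, 64, 128, 253, 255]
```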
Printers are a common example
of output devices. New multifunction peripherals that integrate printing,
scanning, and copying into a single device are also popular. Computer monitors
are sometimes treated as peripherals. High-fidelity sound systems are another
example of output devices often classified as computer peripherals.
Manufacturers have announced devices that provide tactile feedback to the
user—“force feedback” joysticks, for example. This highlights the complexity of
classifying peripherals—a joystick with force feedback is truly both an input
and an output peripheral.
Early printers often
used a process known as impact printing,
in which a small number of pins were driven into a desired pattern by an
electromagnetic print head. As each pin was driven forward, it struck an inked
ribbon and transferred a single dot the size of the pinhead to the paper.
Multiple dots combined into a matrix to form characters and graphics, hence the
name dot matrix. Another early print technology,
daisy-wheel printers, made impressions of whole characters with a single blow
of an electromagnetic print head, similar to an electric typewriter. Laser printers
have replaced such printers in most commercial settings. Laser printers employ
a focused beam of light (see figure) to etch patterns of
positively charged particles on the surface of a cylindrical drum made of
negatively charged organic, photosensitive material. As the drum rotates,
negatively charged toner particles adhere to the patterns etched by the laser
and are transferred to the paper. Another, less expensive printing technology
developed for the home and small businesses is inkjet printing
(see figure). The majority of
inkjet printers operate by ejecting extremely tiny droplets of ink to form
characters in a matrix of dots—much like dot matrix printers.
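Forming characters from a matrix of dots is easy to visualize in code. The 5x7 bitmap for "H" below is hand-made for illustration, not a real printer font.

```python
# Dot matrix printing builds each character from a grid of dots.
# Each "X" in this assumed 5x7 bitmap stands for one pin strike
# transferring an inked dot to the paper.

H = [
    "X...X",
    "X...X",
    "X...X",
    "XXXXX",
    "X...X",
    "X...X",
    "X...X",
]

for row in H:
    print("".join("#" if dot == "X" else " " for dot in row))
```

Printed row by row, the grid of individual dots merges into a recognizable letter, just as the pins' dots do on paper.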
Computer display
devices have been in use almost as long as computers themselves. Early computer
displays employed the same cathode-ray tubes (CRTs) used in television and
radar systems. The fundamental principle behind CRT displays is the emission of
a controlled stream of electrons that strike light-emitting phosphors coating
the inside of the screen. The screen itself is divided into multiple scan lines,
each of which contains a number of pixels—the rough equivalent of dots in a dot
matrix printer. The resolution of a monitor is determined by its pixel size.
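Because the display is a grid of pixels, the memory needed to hold one full screen image follows directly from the resolution and the colour depth. The figures below are common values, used here as assumptions.

```python
# Framebuffer size = pixels per scan line x scan lines x bytes per pixel.
# Resolution and colour depth below are assumed, typical values.

width, height = 1024, 768      # pixels per scan line x number of scan lines
bits_per_pixel = 24            # 8 bits each for red, green, and blue

framebuffer_bytes = width * height * bits_per_pixel // 8
print(framebuffer_bytes // 1024, "KiB")   # prints 2304 KiB (2.25 MiB)
```

This is one concrete reason the falling cost of memory, discussed earlier, made graphical displays practical.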
More recent liquid
crystal displays
(LCDs) rely on liquid crystal cells that realign incoming polarized light. The
realigned beams pass through a filter that permits only those beams with a
particular alignment to pass. By controlling the liquid crystal cells with
electrical charges, various colors or shades are made to appear on the screen.
The most familiar
example of a communication device is the common telephone modem (from modulator/demodulator). Modems
modulate, or transform, a computer's digital message into an analog signal for
transmission over standard telephone networks, and they demodulate the analog
signal back into a digital message on reception. In practice, telephone network
components limit analog data transmission to about 48 kilobits per second.
Standard cable modems operate in a similar manner over
cable television networks, which have a total transmission capacity of 30 to 40
megabits per second over each local neighbourhood “loop.” (Like Ethernet cards,
cable modems are actually local area network devices, rather than true modems,
and transmission performance deteriorates as more users share the loop.) Asymmetric digital
subscriber line (ADSL) modems can be used for
transmitting digital signals over a local dedicated telephone line, provided
there is a telephone office nearby—in theory, within 5,500 metres (18,000 feet)
but in practice about a third of that distance. ADSL is asymmetric because
transmission rates differ to and from the subscriber: 8 megabits per second
“downstream” to the subscriber and 1.5 megabits per second “upstream” from the
subscriber to the service provider. In addition to devices for transmitting
over telephone and cable wires, wireless communication devices exist for
transmitting infrared, radio-wave, and microwave signals.
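The transmission rates quoted above translate directly into transfer times. A quick sketch comparing them for one file; the 5-megabyte file size is an illustrative assumption.

```python
# Transfer time = file size in bits / link rate in bits per second.
# Rates are those quoted in the text; the file size is assumed.

FILE_BITS = 5 * 1_000_000 * 8   # a 5-megabyte file, expressed in bits

links = {
    "telephone modem (48 kbps)":  48_000,
    "ADSL downstream (8 Mbps)":   8_000_000,
    "cable modem loop (30 Mbps)": 30_000_000,
}

for name, bps in links.items():
    print(f"{name}: {FILE_BITS / bps:.1f} s")
```

The same file that ties up a telephone modem for nearly fourteen minutes moves in about five seconds over ADSL, which is why broadband links displaced dial-up for large transfers.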
Major Types of Software
Software is the means by which computer systems speak with
computer users. Software forms the heart of computer systems. What are the
major types of software? Read on to find out.
Software,
by definition, is the collection of computer programs, procedures and
documentation that performs different tasks on a computer system. The term
'software' was first used by John Tukey in 1958. At the very basic level,
computer software consists of a machine language that comprises groups of
binary values, which specify processor instructions. The processor instructions
change the state of computer hardware in a predefined sequence. Briefly,
computer software is the language in which a computer speaks. There are
different types of computer software. What are their major types? Let us see.
Main Types of Software
Application
software consists of programs designed to perform specific tasks for users.
Application software can be used as a productivity/business tool; to assist
with graphics and multimedia projects; to support home, personal, and
educational activities; and to facilitate communications. Specific application
software products, called software packages, are available from software
vendors. Although application software also is available as shareware,
freeware, and public-domain software, these usually have fewer capabilities
than retail software packages.
Many
application programs are designed to run with a specific operating system. When
shopping for an application software package, buyers must make sure they have a
compatible operating system. A software package designed to be used with the
Macintosh operating system may not work with the Windows operating system. The
operating system version also is important. An application designed for Windows
XP may not work with Windows 3.1. Yet, because most operating systems are
downward compatible, software written for earlier versions of an operating system
(such as Windows 98) usually can be used with recent versions of the operating
system (such as Windows XP).
System
software consists of programs that control the operations of a computer and its
devices. System software serves as the interface between a user, the
application software, and the computer’s hardware. One type of system software
is the operating system. Before application software can be run, the operating
system, which contains instructions that coordinate the activities among
computer hardware devices, must be loaded from the hard disk into the
computer’s memory.
The
user interface controls how you enter data or instructions and how information
displays on the computer screen. Many of today’s software programs have a
graphical user interface. A graphical user interface (GUI) combines text,
graphics, and other visual images to make software easier to use.
- productivity/business software
applications
- graphic design/multimedia software
applications
- home/personal/educational software
applications
- communications software applications
People
use productivity software to become more effective and efficient while
performing daily activities. Word processing software allows users to create
and manipulate documents that contain text and graphics. With word processing
software, you can insert clip art into a document; change margins; find and
replace text; use a spelling checker to check spelling; place a header and
footer at the top and the bottom of a page; and vary font (character design),
font size (character scale), and font style (character appearance).
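The "find and replace" feature mentioned above amounts to scanning the document text for a pattern and substituting new text. A one-line sketch using Python's built-in string method, with made-up sample text:

```python
# Find-and-replace: scan the document for a pattern and substitute
# replacement text everywhere it occurs. Sample text is assumed.

document = "The computer processes data. The computer stores results."
edited = document.replace("computer", "machine")
print(edited)  # prints: The machine processes data. The machine stores results.
```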
With
spreadsheet software, data is organized in rows and columns, which collectively
are called a worksheet. The intersection of a row and column, called a cell,
can contain a label (text), a value (number), or a formula or function that
performs calculations on the data and displays the result.
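The relationship between cells, values, and formulas can be sketched in a few lines. This is a deliberate simplification supporting only a SUM formula over named cells, not a real spreadsheet engine.

```python
# A minimal worksheet: each cell holds a value or a formula that
# refers to other cells. Only "=SUM(...)" is supported in this sketch.

cells = {
    "A1": 10,
    "A2": 32,
    "A3": "=SUM(A1,A2)",   # a formula cell
}

def evaluate(name):
    """Return a cell's value, computing formulas recursively."""
    v = cells[name]
    if isinstance(v, str) and v.startswith("=SUM("):
        refs = v[len("=SUM("):-1].split(",")
        return sum(evaluate(r.strip()) for r in refs)
    return v

print(evaluate("A3"))  # prints 42
```

Because formulas are evaluated on demand, changing A1 or A2 automatically changes the displayed result of A3, which is the defining behaviour of a spreadsheet.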
Database
software allows you to create and manage a database. A database is a collection
of data organized to allow access, retrieval, and use of that data. A query is
used to retrieve data according to specified criteria, which are restrictions
the data must meet.
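A query with criteria can be sketched as filtering an in-memory collection of records. The sample data and field names below are assumptions for illustration.

```python
# A query returns only the records meeting the specified criteria.
# Records and field names are made up for this sketch.

records = [
    {"name": "Ada",   "dept": "Engineering", "salary": 95000},
    {"name": "Grace", "dept": "Engineering", "salary": 105000},
    {"name": "Alan",  "dept": "Research",    "salary": 90000},
]

def query(rows, **criteria):
    """Return rows whose fields match every criterion exactly."""
    return [r for r in rows
            if all(r.get(k) == v for k, v in criteria.items())]

for r in query(records, dept="Engineering"):
    print(r["name"])   # prints Ada, then Grace
```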
Presentation
graphics software is used to create presentations that communicate ideas,
messages, and other information to a group through a slide show. You can use a
clip gallery to enhance your presentation with clip art images, pictures, video
clips, and audio clips.
A
personal information manager (PIM) is software that includes an appointment
calendar to schedule activities, an address book to maintain names and
addresses, and a notepad to record ideas, reminders, and important information.
A software suite is a collection of individual applications sold as a single
package.
Project
management software allows you to plan, schedule, track, and analyze the
progress of a project. Accounting software helps companies record and report
their financial transactions.
Power
users often use software that allows them to work with graphics and multimedia.
Computer-aided design (CAD) software assists in creating engineering,
architectural, and scientific designs. Desktop publishing (DTP) software is
used to design and produce sophisticated documents. DTP is developed
specifically to support page layout, which is the process of arranging text and
graphics in a document. Paint software is used to draw graphical images with
various on-screen tools. Image editing software provides the capability to
modify existing images. Video editing software and audio editing software can
be used to modify video and audio segments.
Multimedia
authoring software is used to create electronic interactive presentations that
can include text, images, video, audio, and animation. Web page authoring
software is designed to create Web pages and to organize, manage, and maintain
Web sites.
Many
software applications are designed specifically for use at home or for personal
or educational use. Integrated software combines several productivity software
applications that share a similar interface and common features into a single
package. Personal finance software is an accounting program that helps pay
bills, balance a checkbook, track income and expenses, follow investments, and
evaluate financial plans. Legal software assists in the creation of legal
documents and provides legal advice. Tax preparation software guides users
through the process of filing federal taxes. Personal DTP software helps
develop conventional documents by asking questions, presenting predefined
layouts, and supplying standard text.
Photo-editing
software is used to edit digital photographs. A clip art/image gallery is a
collection of clip art and photographs that can be used in all types of
documents. Home design/landscaping software assists with planning or
remodeling. Educational software teaches a particular skill and exists for
about any subject. Reference software provides valuable and thorough
information for all individuals. Entertainment software includes interactive
games, videos, and other programs designed to support a hobby or provide
amusement.
One
of the main reasons people use computers is to communicate and share
information. E-mail software is used to create, send, receive, forward, store,
print, and delete e-mail (electronic mail). A Web browser is a software
application used to access and view Web pages. A chat client is software that
allows you to connect to a chat room, which permits users to chat via the
computer. A newsreader is a software program used to participate in a
newsgroup, which is an online area on the Web where users conduct written
discussion about a particular subject. An instant messenger is a software
program installed to use instant messaging (IM), a real-time communications
service that notifies you when one or more people are online and then allows
you to exchange messages or files. Groupware is a software application that
helps groups of people on a network work together and share information. A
videoconference is a meeting between two or more geographically separated
people who use a network or the Internet to transmit audio and video data.
Programming Software: This is one of the most commonly known and widely used types of computer software. It comes in the form of tools that assist a programmer in writing computer programs, which are sets of logical instructions that make a computer system perform certain tasks. These tools include text editors, compilers, interpreters, and debuggers. Compilers translate source code written in a programming language into the language a computer understands (usually a binary form); linkers then combine the objects a compiler generates and convert them into executable programs. Debuggers are used to check code for bugs and remove them, running the source wholly or partially under the debugging tool's control. Interpreters, by contrast, execute programs directly: they run the source code (or a precompiled form) step by step, or translate it into an intermediate language just before execution.
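The compiler/interpreter distinction above is easiest to see with a tiny interpreter that executes a program step by step. This sketch evaluates an assumed toy postfix (reverse Polish) arithmetic language, not any real programming language.

```python
# A minimal interpreter: execute a toy postfix-arithmetic "program"
# one token at a time, using a stack. The language is invented for
# this sketch.

def interpret(source):
    """Evaluate space-separated postfix arithmetic, e.g. '2 3 + 4 *'."""
    stack = []
    for token in source.split():
        if token in "+-*/":
            b, a = stack.pop(), stack.pop()     # operands, in order
            stack.append({"+": a + b, "-": a - b,
                          "*": a * b, "/": a / b}[token])
        else:
            stack.append(float(token))          # a number literal
    return stack.pop()

print(interpret("2 3 + 4 *"))  # (2 + 3) * 4 -> prints 20.0
```

A compiler for the same language would instead translate the whole program into machine instructions once, before any of it runs.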
System Software: It helps in running the computer hardware and the computer system itself. System software refers to operating systems, device drivers, servers, windowing systems, and utilities. System software helps an application programmer abstract away from hardware, memory, and the other internal complexities of a computer. An operating system provides users with a platform to execute high-level programs. Firmware and the BIOS provide the means to operate the hardware.
Application Software: It enables end users to accomplish specific tasks. Business software, databases, and educational software are some forms of application software. Word processors, which are dedicated to specialized user tasks, are another example of application software.
Malware: Malware refers to any malicious software and is a broad category of software that threatens computer security. Adware, spyware, computer viruses, worms, trojan horses, and scareware are all malware. Computer viruses are malicious programs that replicate themselves and spread from one computer to another over a network or the Internet. Computer worms do the same; the difference is that viruses need a host program to attach to in order to spread, while worms do not. Trojan horses, by contrast, do not replicate themselves; they disguise themselves as legitimate software and can steal information once installed. Spyware can monitor user activity on a computer and steal user information without the user's knowledge.
Adware: Adware is software by means of which advertisements are played and downloaded to a computer. Programmers design adware as a tool to generate revenue. Adware programs extract user information, such as the websites a user visits frequently and the pages he or she likes; advertisements that appear as pop-ups on the screen are the result of adware tracking this activity. Adware is not always directly harmful to computer security, but because the data it collects is used to invite user clicks on advertisements, it raises privacy concerns.
There are other types of computer software, such as inventory management software, ERP, utility software, and accounting software, that find applications in specific information and data management systems. Let's take a look at some of them.
Inventory Management Software: This type of software helps an organization in tracking its goods and materials on the basis of quality as well as quantity. Warehouse inventory management functions encompass the internal warehouse movements and storage. Inventory software helps a company in organizing inventory and optimizing the flow of goods in the organization, thus leading to improved customer service.
Utility Software: Also known as service routine, utility software helps in the management of computer hardware and application software. It performs a small range of tasks. Disk defragmenters, systems utilities and virus scanners are some of the typical examples of utility software.
Data Backup and Recovery Software: Ideal data backup and recovery software provides functionality beyond the simple copying of data files. Such software often lets users specify what is to be backed up and when. Backup and recovery software preserves the original organization of files and allows easy retrieval of the backed-up data.
Types of Software and their licensing
A software license determines the way in which software can be accessed and used. Depending on the license, end users have the right to copy, modify, or redistribute the software. While some software has to be bought, some is available for free on the Internet. Some licenses allow you to use, copy, and distribute the software, while others allow only one of those operations. For some software the source code is made available to end users, while for others it is not. Here we will see the ways in which different types of software are distributed to users.
There
are two categories of computer software: system software and application
software. System software consists of the programs that control the operations
of a computer and its devices. Two types of system software are the operating
system and utility programs. An operating system (OS) coordinates all
activities among hardware devices and contains instructions that allow you to
run application software. A utility program performs specific tasks, usually
related to managing a computer, its devices, or its programs. You interact with
software through its user interface.
Application
software consists of programs that perform specific tasks for users. Popular
application software includes word processing software, spreadsheet software,
database software, and presentation graphics software. Application software can
be packaged software (copyrighted software that meets the needs of a variety of
users), custom software (tailor-made software developed at a user’s request),
freeware (copyrighted software provided at no cost), and public-domain software (free software donated for public use, with no copyright restrictions).
What is a Network?
First of all, when was the Internet created?
The Internet was activated
in 1969 as a network of university mainframe computers. The idea of the Internet was first described by J.C.R. Licklider, but the actual building of it
did not occur until 1969 with the creation of ARPANET. The first two nodes to
be connected were at UCLA's Network Measurement Center and the Stanford
Research Institute.
Leonard Kleinrock of MIT theorized the
concept of packet switching that would enable computers to communicate
effectively via a network. The ARPANET was first demonstrated at the
International Computer Communication Conference in 1972, which is also the
first year electronic mail was introduced. The idea of the Internet is based on
the concept of many independent networks joined together, including
packet-based radio networks and satellite networks.
A
network consists of two or more computers connected together, and they can
communicate and share resources (e.g. information). In the early 1970s, other computer networks were added and TCP, or
transmission control protocol, on which the modern Internet is based was
developed by scientist Vinton Cerf. In 1989, Tim Berners-Lee, a British programmer working at CERN in Switzerland, proposed the World Wide Web as a place to store information as well as communicate. In 1993, students at the University of Illinois released Mosaic, the first widely used graphical Web browser; a year earlier, Congress had allowed the Web to be used for commercial purposes.
A
computer network, often simply referred to as a network, is a collection of
hardware components and computers interconnected by communication channels that
allow sharing of resources and information. When at least one process in one device is able to send data to, and receive data from, at least one process residing in a remote device, the two devices are said to be in a network. Put simply, two or more computers interconnected through a communication medium for information interchange constitute a computer network.
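The definition above, one process sending and receiving data to and from another, can be demonstrated in miniature. In this sketch both endpoints live in one script and stand in for two networked machines; a real network would connect sockets on separate hosts.

```python
# The essence of a network: two endpoints exchanging data.
# socketpair() creates two already-connected endpoints in one process,
# standing in here for two machines on a network.

import socket

a, b = socket.socketpair()      # two connected endpoints
a.sendall(b"hello, network")    # endpoint A transmits a message
data = b.recv(1024)             # endpoint B receives it
print(data.decode())            # prints: hello, network
a.close()
b.close()
```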
Networks
may be classified according to a wide variety of characteristics such as the
medium used to transport the data, communications protocol used, scale,
topology, and organizational scope.
Communications
protocols define the rules and data formats for exchanging information in a
computer network, and provide the basis for network programming. Well-known
communications protocols are Ethernet, a hardware and link layer standard that
is ubiquitous in local area networks, and the internet protocol suite, which
defines a set of protocols for internetworking, i.e. for data communication
between multiple networks, as well as host-to-host data transfer, and
application-specific data transmission formats.
A
network is a collection of computers and devices connected together via
communications devices, such as a modem, and communications media, such as
cables, telephone lines, cellular radio, and satellites. Networks allow users
to share resources, such as hardware devices, software, data, and
information. Most business computers are networked, either by a local area
network (LAN) in a limited geographic area or by a wide area network (WAN) in a
large geographical area.
The
world’s largest network is the Internet, which is a worldwide collection of
networks that links together millions of businesses, government agencies,
educational institutions, and individuals. Users connect to the Internet to
send messages, access information, shop for goods and services, meet or
converse with other users, and access sources of entertainment and leisure.
Most users connect to the Internet through an Internet service provider (ISP)
or an online service provider (OSP). The World Wide Web is a popular segment of
the Internet that contains billions of documents called Web pages. These
documents can contain text, graphics, sound, video, and built-in connections,
or links, to other Web pages stored on computers throughout the world.
Types
Of Network
Networks
are often classified by their physical or organizational extent or their
purpose. Usage, trust level, and access rights differ between these types of
networks.
Personal area network
A
personal area network (PAN) is a computer network used for communication among
computer and different information technological devices close to one person.
Some examples of devices that are used in a PAN are personal computers,
printers, fax machines, telephones, PDAs, scanners, and even video game
consoles. A PAN may include wired and wireless devices. The reach of a PAN
typically extends to 10 meters. A wired PAN is usually constructed with USB
and FireWire connections, while technologies such as Bluetooth and infrared
communication typically form a wireless PAN.
Local area network
A
local area network (LAN) is a network that connects computers and devices in a
limited geographical area such as home, school, computer laboratory, office
building, or closely positioned group of buildings. Each computer or device on
the network is a node. Current wired LANs are most likely to be based on
Ethernet technology, although newer standards like ITU-T G.hn also provide a way to create a wired LAN using existing home wiring (coaxial cables, phone lines, and power lines).
(Figure omitted: a typical library network, in a branching tree topology with controlled access to resources.)
In such a network, all interconnected routing devices must understand the network layer (layer 3), because they handle multiple subnets. The devices inside the library, which have only 10/100 Mbit/s Ethernet connections to user devices and a Gigabit Ethernet connection to the central router, could be called "layer 3 switches" because they have only Ethernet interfaces yet must understand IP. It would be more correct to call them access routers, with the router at the top acting as a distribution router that connects to the Internet and to academic networks' customer access routers.
The
defining characteristics of LANs, in contrast to WANs (Wide Area Networks),
include their higher data transfer rates, smaller geographic range, and no need
for leased telecommunication lines. Current Ethernet and other IEEE 802.3 LAN technologies operate at data transfer rates of up to 10 Gbit/s, and IEEE projects are investigating the standardization of 40 and 100 Gbit/s. LANs can be connected to a wide area network using routers.
A
home area network (HAN) is a residential LAN which is used for communication
between digital devices typically deployed in the home, usually a small number
of personal computers and accessories, such as printers and mobile computing
devices. An important function is the sharing of Internet access, often a
broadband service through a cable TV or Digital Subscriber Line (DSL) provider.
Storage area network
A
storage area network (SAN) is a dedicated network that provides access to
consolidated, block level data storage. SANs are primarily used to make storage
devices, such as disk arrays, tape libraries, and optical jukeboxes, accessible
to servers so that the devices appear like locally attached devices to the
operating system. A SAN typically has its own network of storage devices that
are generally not accessible through the local area network by other devices.
The cost and complexity of SANs dropped in the early 2000s to levels allowing
wider adoption across both enterprise and small to medium sized business
environments.
Campus area network
A
campus area network (CAN) is a computer network made up of an interconnection
of LANs within a limited geographical area. The networking equipment (switches,
routers) and transmission media (optical fiber, copper plant, Cat5 cabling, etc.) are almost entirely owned by the campus tenant or owner (an enterprise, university, government, etc.).
In
the case of a university campus network, the network is likely to
link a variety of campus buildings including, for example, academic colleges or
departments, the university library, and student residence halls.
Backbone network
A
backbone network is part of a computer network infrastructure that
interconnects various pieces of network, providing a path for the exchange of
information between different LANs or subnetworks. A backbone can tie together
diverse networks in the same building, in different buildings in a campus
environment, or over wide areas. Normally, the backbone's capacity is greater
than that of the networks connected to it.
A large corporation with many locations may have a backbone network that ties all of them together; for example, a server cluster may need to be accessed by departments of the company located at different geographical sites. The equipment that ties these departments together constitutes the network backbone. Network performance and congestion management are critical considerations when designing a network backbone.
A
specific case of a backbone network is the Internet backbone, which is the set
of wide-area network connections and core routers that interconnect all
networks connected to the Internet.
Metropolitan area network
A
metropolitan area network (MAN) is a large computer network that usually spans
a city or a large campus.
Wide area network
A
wide area network (WAN) is a computer network that covers a large geographic
area, such as a city or country, or even spans intercontinental distances,
using a communications channel that combines many types of media such as
telephone lines, cables, and airwaves. A WAN often uses transmission
facilities provided
by common carriers, such as telephone companies. WAN technologies generally
function at the lower three layers of the OSI reference model: the physical
layer, the data link layer, and the network layer.
Enterprise private network
An
enterprise private network is a network built by an enterprise to interconnect
various company sites, e.g., production sites, head offices, remote offices,
shops, in order to share computer resources.
Virtual private network
A
virtual private network (VPN) is a computer network in which some of the links
between nodes are carried by open connections or virtual circuits in some
larger network (e.g., the Internet) instead of by physical wires. The data link
layer protocols of the virtual network are said to be tunneled through the
larger network when this is the case. One common application is secure
communications through the public Internet, but a VPN need not have explicit
security features, such as authentication or content encryption. VPNs, for
example, can be used to separate the traffic of different user communities over
an underlying network with strong security features.
VPN
may have best-effort performance, or may have a defined service level agreement
(SLA) between the VPN customer and the VPN service provider. Generally, a VPN
has a topology more complex than point-to-point.
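The tunneling idea behind a VPN can be sketched as nesting one packet inside another: the private packet, with its private destination address, becomes the payload of an outer packet addressed across the public network. The sketch below is a toy illustration only; the packet format, addresses, and function names are invented, and a real VPN would also handle encryption and real protocol headers.

```python
# Toy illustration of VPN tunneling: a private (inner) packet is wrapped
# inside an outer packet for transit across a public network, then
# unwrapped at the tunnel endpoint. Formats here are illustrative only.

def encapsulate(inner_payload: bytes, inner_dst: str, outer_dst: str) -> dict:
    """Wrap a private packet inside an outer packet for transit."""
    inner_packet = {"dst": inner_dst, "payload": inner_payload}
    return {"dst": outer_dst, "payload": inner_packet}  # the tunneled packet

def decapsulate(outer_packet: dict) -> dict:
    """At the tunnel endpoint, unwrap to recover the private packet."""
    return outer_packet["payload"]

tunneled = encapsulate(b"hello", inner_dst="10.0.0.5", outer_dst="203.0.113.7")
recovered = decapsulate(tunneled)
print(recovered["dst"])      # 10.0.0.5
print(recovered["payload"])  # b'hello'
```

The public network only ever sees the outer destination address (203.0.113.7 here); the private address travels hidden inside the payload, which is why the inner protocol is said to be tunneled through the larger network.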
Internetwork
An
internetwork is the connection of multiple computer networks via a common
routing technology using routers. The Internet is an aggregation of many
connected internetworks spanning the Earth.
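The "common routing technology" at the heart of an internetwork can be illustrated with a small sketch. A router keeps a table of network prefixes and forwards each packet toward the most specific matching entry (longest-prefix match). The routing table and interface names below are invented for illustration, using Python's standard ipaddress module.

```python
import ipaddress

# Sketch of how a router forwards between networks: choose the route whose
# prefix matches the destination most specifically (longest-prefix match).
# This routing table is invented for illustration.
routing_table = {
    ipaddress.ip_network("10.0.0.0/8"): "interface-A",
    ipaddress.ip_network("10.1.0.0/16"): "interface-B",
    ipaddress.ip_network("0.0.0.0/0"): "default-gateway",  # route of last resort
}

def next_hop(dst: str) -> str:
    """Return the outgoing interface for a destination IP address."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routing_table if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return routing_table[best]

print(next_hop("10.1.2.3"))  # interface-B (the more specific /16 beats the /8)
print(next_hop("10.9.9.9"))  # interface-A
print(next_hop("8.8.8.8"))   # default-gateway
```

The catch-all 0.0.0.0/0 entry is what lets a small network hand off all unknown destinations to its provider, which is how traffic eventually reaches the wider Internet.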
Intranets and extranets
Intranets
and extranets are parts or extensions of a computer network, usually a LAN.
An intranet is a set of networks, using the
Internet Protocol and IP-based tools such as web browsers and file transfer
applications, that is under the control of a single administrative entity. That
administrative entity closes the intranet to all but specific, authorized
users. Most commonly, an intranet is the internal network of an organization. A
large intranet will typically have at least one web server to provide users
with organizational information.
An extranet is a network that is limited in
scope to a single organization or entity but that also has limited connections
to the networks of one or more other organizations or entities, which are
usually, but not necessarily, trusted. For example, a company's customers may
be given access to some part of its intranet, while at the same time the
customers may not be considered trusted from a security standpoint.
Technically, an extranet may also be categorized as a CAN, MAN, WAN, or other
type of network, although an extranet cannot consist of a single LAN; it must
have at least one connection with an external network.
Internet
The
Internet is a global system of interconnected governmental, academic,
corporate, public, and private computer networks. It is based on the networking
technologies of the Internet Protocol Suite. It is the successor of the
Advanced Research Projects Agency Network (ARPANET) developed by DARPA of the
United States Department of Defense. The Internet is also the communications
backbone underlying the World Wide Web (WWW).
Participants
in the Internet use a diverse array of several hundred documented, and often
standardized, protocols compatible with the Internet Protocol Suite
and an addressing system (IP addresses) administered by the Internet Assigned
Numbers Authority and address registries. Service providers and large
enterprises exchange information about the reachability of their address spaces
through the Border Gateway Protocol (BGP), forming a redundant worldwide mesh
of transmission paths.
The
Internet is a worldwide collection of networks that links millions of
businesses, government offices, educational institutions, and individuals. Data
is transferred over the Internet using servers, which are computers that manage
network resources and provide centralized storage areas, and clients, which are
computers that can access the contents of the storage areas. The data travels
over communications lines. Each computer or device on a communications line has
a numeric address called an IP (Internet protocol) address, the text version of
which is called a domain name. Every time you specify a domain name, a DNS
(domain name system) server translates the domain name into its associated IP
address, so data can route to the correct computer.
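The translation a DNS server performs can be pictured as a lookup in a distributed name-to-address table. The toy version below stands in for that table with an ordinary dictionary; the domain names and IP addresses are made up for illustration, and a real resolver would query a hierarchy of DNS servers rather than a local dictionary.

```python
# Simulated DNS lookup: a toy name-to-address table standing in for the
# distributed DNS system. These records are invented for illustration.
dns_records = {
    "www.example.com": "93.184.216.34",
    "mail.example.com": "192.0.2.10",
}

def resolve(domain: str) -> str:
    """Translate a domain name into its IP address, as a DNS server would."""
    try:
        return dns_records[domain]
    except KeyError:
        # Real DNS servers signal this with an NXDOMAIN response.
        raise LookupError(f"no record for {domain}")

print(resolve("www.example.com"))  # 93.184.216.34
```

In practice a Python program would not keep its own table but would delegate the lookup to the operating system's resolver, for example via the standard library's socket.getaddrinfo function.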
You
can access the Internet through an Internet service provider, an online service
provider, or a wireless service provider. An Internet service provider (ISP)
provides temporary Internet connections to individuals and companies. An online
service provider (OSP) also supplies Internet access, in addition to a variety
of special content and services. A wireless service provider (WSP) provides
wireless Internet access to users with wireless modems or Web-enabled handheld
computers or devices.
Employees
and students often connect to the Internet through a business or school network
that connects to a service provider. For home or small business users, dial-up
access provides an easy and inexpensive way to connect to the Internet. With
dial-up access, you use a computer, a modem, and a regular telephone line to
dial into an ISP or OSP. Some home and small business users opt for newer,
high-speed technologies. DSL (digital subscriber line) provides high-speed
connections over a regular copper telephone line. A cable modem provides
high-speed Internet connections through a cable television network.
A search engine is a software program you can use to find Web sites, Web pages, and Internet files. To find a Web page or pages, you enter a relevant word or phrase, called search text or keywords, in the search engine’s text box. Many search engines then use a program called a spider to read pages on Web sites and create a list of pages that contain the keywords. Any Web page that is listed as the result of the search is called a hit. Each hit is a link that can be clicked to display the associated Web site or Web page.
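The indexing step described above can be sketched in a few lines: the spider's output is an inverted index that maps each keyword to the set of pages containing it, and a search is then just a lookup in that index. The page URLs and contents below are invented for illustration, and real search engines add ranking, stemming, and crawling logic omitted here.

```python
# Minimal sketch of search-engine indexing: build an inverted index mapping
# each keyword to the pages that contain it, then return "hits" for a query.
# Page names and contents are invented for illustration.
pages = {
    "page1.html": "computer networks connect devices",
    "page2.html": "a search engine indexes web pages",
    "page3.html": "computer hardware and software",
}

# Build the inverted index: keyword -> set of page URLs.
index: dict[str, set[str]] = {}
for url, text in pages.items():
    for word in text.lower().split():
        index.setdefault(word, set()).add(url)

def search(keyword: str) -> set[str]:
    """Return the set of hits (pages) containing the keyword."""
    return index.get(keyword.lower(), set())

print(sorted(search("computer")))  # ['page1.html', 'page3.html']
```

Because the index is built ahead of time, answering a query never requires rereading the pages themselves, which is what makes searching millions of pages fast.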
There
are six basic types of Web pages. An advocacy Web page contains content that
describes a cause, opinion, or idea. A business/marketing Web page contains
content that promotes or sells products or services. An informational Web page
contains factual information. A news Web page contains newsworthy material including
stories and articles relating to current events, life, money, sports, and the
weather. A portal Web page offers a variety of Internet services from a single,
convenient location. A personal Web page is maintained by a private individual
who normally is not associated with any organization.
Pull
technology is a method of obtaining information that relies on a client such as
your computer to request a Web page from a server. On the other hand, Web casting, also called push technology, is a method of obtaining information
in which a server automatically downloads content to your computer at regular
intervals or whenever updates are made to the site. Web casting saves time by
delivering information at regular intervals and allows users to view Web
content when they are offline, that is, when they are not connected to the
Internet.
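The contrast between pull and push can be made concrete with a toy sketch: under pull, the client calls the server whenever it wants data; under push, the client registers once and the server delivers every subsequent update. The Server class and its methods below are invented for illustration, with no real network transport.

```python
# Toy contrast between pull and push (Web casting) delivery.
# The "server" is just an in-memory object invented for illustration.
class Server:
    def __init__(self):
        self.content = "v1"
        self.subscribers = []          # callbacks registered for push delivery

    def get(self):                     # pull: client requests data when it wants it
        return self.content

    def subscribe(self, callback):     # push: client registers once
        self.subscribers.append(callback)

    def publish(self, new_content):    # push: server delivers each update itself
        self.content = new_content
        for callback in self.subscribers:
            callback(new_content)

server = Server()
print(server.get())                    # pull: client asked, got "v1"

received = []
server.subscribe(received.append)      # push: register once
server.publish("v2")
print(received)                        # ['v2'] arrived without a client request
```

The design difference is who initiates the transfer: pull puts the client in control of timing, while push moves that decision to the server, which is why webcast content can accumulate for later offline viewing.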