The history of the central processing unit (CPU)


The central processing unit (CPU) is the computer’s brain. It handles the assignment and processing of tasks, in addition to functions that make a computer run.

There’s no way to overstate the importance of the CPU to computing. Virtually all computer systems contain, at the least, some type of basic CPU. Whether they’re used in personal computers (PCs), laptops, tablets, smartphones or even in supercomputers whose performance is measured in floating-point operations per second (FLOPS), CPUs are the one piece of equipment that no computer can do without. No matter what technological advancements occur, the truth remains: if you remove the CPU, you simply no longer have a computer.

In addition to managing computer activity, CPUs help enable and stabilize the push-and-pull relationship that exists between data storage and memory. The CPU serves as the intermediary, interacting with primary storage (or main memory) when it needs to access data held in random-access memory (RAM). Read-only memory (ROM), on the other hand, is built for permanent and typically long-term data storage.

CPU components

Modern CPUs in electronic computers usually contain the following components (a short code sketch after the list shows how to inspect a few of these properties on a running machine):


  • Control unit: Contains the circuitry that directs the computer system by issuing electrical pulses and instructing the rest of the system to carry out computer instructions.

  • Arithmetic/logic unit (ALU): Executes all arithmetic and logical operations, including math equations and logic-based comparisons that are tied to specific computer actions.

  • Memory unit: Manages memory usage and the flow of data between RAM and the CPU. Also supervises the handling of the cache memory.

  • Cache: Contains areas of memory built into a CPU’s processor chip to reach data retrieval speeds even faster than RAM can achieve.

  • Registers: Provide small, high-speed, built-in storage locations for data that must be handled regularly and immediately.

  • Clock: Manages the CPU’s circuitry by transmitting electrical pulses. The delivery rate of those pulses is referred to as clock speed, measured in hertz (Hz), megahertz (MHz) or gigahertz (GHz).

  • Instruction register and pointer: Holds the location of the next instruction set to be executed by the CPU.

  • Buses: Ensure proper data transfer and data flow between the components of a computer system.
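
A minimal sketch of how some of these properties can be inspected from software. It uses Python’s standard library plus the third-party psutil package (an assumption; install it with pip install psutil). Registers, the ALU and the control unit are internal to the chip and are not visible this way.

```python
# Minimal sketch: query core counts and clock speed as reported by the OS.
# Assumes the third-party psutil package is installed (pip install psutil).
import os
import platform

import psutil

logical_cores = os.cpu_count()                    # logical cores visible to the OS
physical_cores = psutil.cpu_count(logical=False)  # physical cores (may be fewer)
freq = psutil.cpu_freq()                          # clock speed in MHz, or None

print(f"Processor:      {platform.processor() or 'unknown'}")
print(f"Logical cores:  {logical_cores}")
print(f"Physical cores: {physical_cores}")
if freq is not None:
    print(f"Clock speed:    {freq.current:.0f} MHz (max {freq.max:.0f} MHz)")
```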

How do CPUs work?

CPUs function by using a type of repeated command cycle that is administered by the control unit in association with the computer clock, which provides synchronization assistance.

The work a CPU does occurs according to an established cycle, called the CPU instruction cycle. The instruction cycle repeats over and over: the basic computing instructions are run as many times per second as that computer’s processing power permits.

The basic computing instructions include the following (a minimal sketch of the full loop follows the list):


  • Fetch: Fetches occur anytime data is retrieved from memory.

  • Decode: The decoder within the CPU translates binary instructions into electrical signals that engage with other parts of the CPU.

  • Execute: Execution occurs when computers interpret and carry out a computer program’s set of instructions.
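
Below is a minimal, hypothetical sketch of the fetch-decode-execute loop. The three-instruction “machine language” (LOAD, ADD, PRINT) is invented purely for illustration and is not a real instruction set.

```python
# Toy fetch-decode-execute loop with an invented three-instruction set.
memory = [
    ("LOAD", 7),      # put the value 7 into the accumulator
    ("ADD", 5),       # add 5 to the accumulator (the ALU's job)
    ("PRINT", None),  # output the accumulator
]

accumulator = 0       # a register holding intermediate results
program_counter = 0   # the instruction pointer

while program_counter < len(memory):
    # Fetch: retrieve the next instruction from memory.
    opcode, operand = memory[program_counter]
    program_counter += 1

    # Decode + execute: route the instruction to the right operation.
    # (A real decoder produces control signals rather than if-branches.)
    if opcode == "LOAD":
        accumulator = operand
    elif opcode == "ADD":
        accumulator += operand
    elif opcode == "PRINT":
        print(accumulator)  # prints 12
```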

With some basic tinkering, the computer clock within a CPU can be manipulated to keep time faster than it normally elapses. Some users do this to run their computer at higher speeds. However, this practice (“overclocking”) is not advisable since it can cause computer parts to wear out earlier than normal and can even violate CPU manufacturer warranties.

Processing styles are also subject to tweaking. One way to adjust them is by implementing instruction pipelining, which seeks to instill instruction-level parallelism in a single processor. The goal of pipelining is to keep each part of the processor engaged by splitting up incoming computer instructions and spreading them out evenly among processor units. Instructions are broken down into smaller sets of instructions, or steps.
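
The sketch below (illustrative only) shows how an idealized three-stage pipeline overlaps work: while one instruction executes, the next is being decoded and a third is being fetched.

```python
# Idealized three-stage pipeline: instruction i enters stage s at cycle i + s.
STAGES = ["Fetch", "Decode", "Execute"]
instructions = ["I1", "I2", "I3", "I4"]

total_cycles = len(instructions) + len(STAGES) - 1
for cycle in range(total_cycles):
    active = []
    for i, name in enumerate(instructions):
        stage = cycle - i
        if 0 <= stage < len(STAGES):
            active.append(f"{name}:{STAGES[stage]}")
    print(f"cycle {cycle + 1}: " + ", ".join(active))

# Example output line -- three instructions in flight at once:
#   cycle 3: I1:Execute, I2:Decode, I3:Fetch
```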

Another method for achieving instruction-level parallelism inside a single processor is to use a CPU called a superscalar processor. Whereas scalar processors can execute a maximum of one instruction per clock cycle, a superscalar processor can dispatch multiple instructions per clock cycle, sending them to several of the processor’s execution units and thereby boosting throughput.
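
Here is a minimal back-of-the-envelope sketch contrasting scalar and superscalar dispatch; the issue width of four is a made-up figure used only for illustration.

```python
# Idealized dispatch model: ignores data dependencies, stalls and branches.
def cycles_needed(num_instructions: int, issue_width: int) -> int:
    """Clock cycles needed to dispatch all instructions."""
    return -(-num_instructions // issue_width)  # ceiling division

program_length = 12
print("scalar (1 per cycle):     ", cycles_needed(program_length, 1))  # 12
print("superscalar (4 per cycle):", cycles_needed(program_length, 4))  # 3
```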

Who invented the CPU?

Breakthrough technologies often have more than one parent. The more complex and earth-shaking the technology, the more individuals are usually responsible for its birth.

In the case of the CPU—one of history’s most important inventions—we’re really talking about who invented the computer itself.

Anthropologists use the term “independent invention” to describe situations where different individuals, who may be located countries away from each other and in relative isolation, each come up with similar or complementary ideas or inventions without knowing about similar experiments taking place.

In the case of the CPU (or computer), independent invention has occurred repeatedly, leading to different evolutionary shifts during CPU history.

Twin giants of computing

While this article can’t honor all the early pioneers of computing, there are two people whose lives and work need to be illuminated. Both had a direct connection to computing and the CPU:

Grace Hopper: Saluting “Grandma COBOL”

American Grace Brewster Hopper (1906-1992) weighed a mere 105 pounds when she enlisted in the US Navy—15 pounds under the required minimum. And in one of US naval history’s wisest decisions, the Navy granted an exemption and took her anyway.

What Grace Hopper lacked in physical size, she made up for with energy and versatile brilliance. She was a polymath of the first order: a gifted mathematician with a Ph.D. from Yale University in mathematics and mathematical physics, a noted professor of mathematics at Vassar College, a pioneering computer scientist credited with writing a computer language and authoring the first computer manual, and a naval commander (at a time when women rarely rose above administrative roles in the military).

Because of her work on leading computer projects of her time, such as the development of the UNIVAC computer after WWII, Hopper always seemed to be in the thick of the action, always at the right place at the right time. She personally witnessed much of modern computing history. She popularized the term “computer bug” after describing an actual moth that had become caught within a piece of computing equipment. (The original moth remains on display at the Smithsonian Institution’s National Museum of American History in Washington, DC.)

During her experience working on the UNIVAC project (and later running the UNIVAC project for the Remington Rand Corporation), Hopper became frustrated that there was not a simpler programming language that could be used. So, she set about writing her own programming language, which famously came to be known as COBOL (an acronym for COmmon Business-Oriented Language).

Robert Noyce: The Mayor of Silicon Valley

Robert Noyce was a mover and shaker in the classic business sense—a person who could make amazing activity start happening just by showing up.

American Robert Noyce (1927-1990) was a whiz-kid inventor as a boy. He later channeled his intellectual curiosity into his undergraduate collegiate work, especially after being shown two of the original transistors created by Bell Laboratories. By age 26, Noyce had earned a Ph.D. in physics from the Massachusetts Institute of Technology (MIT).

In 1959, he followed up on Jack Kilby’s 1958 invention of the first hybrid integrated circuit by making substantial tweaks to the original design. Noyce’s improvements led to a new kind of integrated circuit: the monolithic integrated circuit (also called the microchip), which was formulated using silicon. Soon the silicon chip became a revelation, changing industries and shaping society in new ways.

Noyce co-founded two hugely successful corporations during his business career: Fairchild Semiconductor Corporation (1957) and Intel (1968). He was the first CEO of Intel, which is still known globally for manufacturing processing chips.

His partner in both endeavors was Gordon Moore, who became famous for a prediction about the semiconductor industry that proved so reliable it has been treated almost like a natural law. Called “Moore’s Law,” it posited that the number of transistors used within an integrated circuit reliably doubles about every two years.
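
The arithmetic behind that prediction is simple to sketch, as below. The 1971 baseline of roughly 2,300 transistors matches the Intel 4004 discussed next; the later figures are naive projections for illustration, not measured data.

```python
# Naive Moore's Law projection: double the transistor count every two years.
def projected_transistors(base_count: int, base_year: int, year: int) -> int:
    doublings = (year - base_year) / 2
    return int(base_count * 2 ** doublings)

for year in (1971, 1981, 1991, 2001):
    print(year, projected_transistors(2_300, 1971, year))
# 1971 -> 2,300; 1981 -> ~73,600; 1991 -> ~2.4 million; 2001 -> ~75 million
```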

While Noyce oversaw Intel, the company produced the Intel 4004, now recognized as the chip that launched the microprocessor revolution of the 1970s. The creation of the Intel 4004 involved a three-way collaboration among Intel’s Ted Hoff, Stanley Mazor and Federico Faggin, and it became the first microprocessor ever offered commercially.

Late in his tenure, the company also produced the Intel 8080—the company’s second 8-bit microprocessor, which first appeared in April 1974. A few years later, the manufacturer rolled out the Intel 8086, a 16-bit microprocessor.

During his illustrious career, Robert Noyce amassed 12 patents for various creations and was honored by three different US presidents for his work on integrated circuits and the massive global impact they had.

ENIAC: Marching off to war

It seems overly dramatic, but in 1943, the fate of the world truly was hanging in the balance. The outcome of World War II (1939-1945) was still very much undecided, and both Allied and Axis forces were eagerly scouting any kind of technological advantage to gain leverage over the enemy.

Computing devices were still in their infancy when the US government commissioned a project that was, in its own way, as monumental as the Manhattan Project. It hired a group of engineers from the Moore School of Electrical Engineering at the University of Pennsylvania, with a mission to build an electronic computer capable of calculating yardage amounts for artillery-range tables.

The project was led by John Mauchly and J. Presper Eckert, Jr. at the military’s request. Work began on the project in early 1943 and didn’t end until 3 years later.

The creation produced by the project—dubbed ENIAC, which stood for “Electronic Numerical Integrator and Computer”—was a massive installation requiring 1,500 sq. ft. of floor space, not to mention 17,000 glass vacuum tubes, 70,000 resistors, 10,000 capacitors, 6,000 switches and 1,500 relays. In 2024 currency, the project would have cost USD 6.7 million.

It could process up to 5,000 equations per second (depending on the equation), an amazing quantity as seen from that historical vantage point. The ENIAC was so large that people could stand inside the CPU and program the machine by rewiring the connections between its functional units.

ENIAC was used by the US Army during the rest of WWII. But when that conflict ended, the Cold War began and ENIAC was given new marching orders. This time it would perform calculations that would help enable the building of a bomb with more than a thousand times the explosive force of the atomic weapons that ended WWII: the hydrogen bomb.

UNIVAC: Getting back to business

Following WWII, the two leaders of the ENIAC project decided to set up shop and bring computing to American business. The newly formed Eckert-Mauchly Computer Corporation (EMCC) set out to prepare its flagship product—a smaller and cheaper version of the ENIAC, with various improvements such as added tape drives, a keyboard and a converter device that accepted punched cards.

Though sleeker than the ENIAC, the UNIVAC that was unveiled to the public in 1951 was still mammoth, weighing over 8 tons and drawing 125 kW of power. And it was still expensive: around USD 11.6 million in today’s money.

For its CPU, it contained the UNIVAC 1103, which was developed at the same time as the rest of the project. The UNIVAC 1103 used glass vacuum tubes, making the CPU large, unwieldy and slow.

The original batch of UNIVAC 1s was limited to a run of 11 machines, meaning that only the biggest, best-funded and best-connected companies or government agencies could gain access to a UNIVAC. Nearly half of those went to US defense agencies, like the US Air Force and the Central Intelligence Agency (CIA). The very first model was purchased by the U.S. Census Bureau.

CBS News had one of the machines and famously used it to correctly predict the outcome of the 1952 US presidential election, against long-shot odds. It was a bold publicity stunt that introduced the American public to the wonders that computers could do.

Transistors: Going big by going small

As computing increasingly became realized and celebrated, its main weakness was clear. CPUs had an ongoing issue with the vacuum tubes being used. It was really a mechanical issue: glass vacuum tubes were extremely delicate and prone to routine breakage.

The problem was so pronounced that the manufacturer went to great lengths to provide a workaround solution for its many agitated customers, whose computers stopped dead without working tubes.

The manufacturer of the tubes regularly tested tubes at the factory, subjecting them to different amounts of factory use and abuse, before selecting the “toughest” tubes out of those batches to be held in reserve and at the ready for emergency customer requests.

The other problem with the vacuum tubes in CPUs involved the size of the computing machine itself. The tubes were bulky, and designers were craving a way to get the processing power of the tube from a much smaller device.

By 1953, a research student at the University of Manchester showed you could construct a completely transistor-based computer.

Original transistors were hard to work with, in large part because they were crafted from germanium, a substance which was tricky to purify and had to be kept within a precise temperature range.

Bell Laboratory scientists started experimenting with other substances in 1954, including silicon. The Bell scientists (Mohamed Atalla and Dawon Kahng) kept refining their use of silicon and by 1960 had hit upon a formula for the metal-oxide-semiconductor field-effect transistor (MOSFET, or MOS transistor), the modern transistor, which the Computer History Museum has celebrated as “the most widely manufactured device in history.” In 2018 it was estimated that 13 sextillion MOS transistors had been manufactured.

The advent of the microprocessor

The quest for miniaturization continued until computer scientists created a CPU so small that it could be contained within a small integrated circuit chip, called the microprocessor.

Microprocessors are designated by the number of cores they support. A CPU core is the “brain within the brain,” serving as the physical processing unit within a CPU. Microprocessors can contain multiple cores. A physical core is a complete processing unit built into the chip; several physical cores can occupy a single socket and tap into the same computing environment.

Here are some of the other main terms used in relation to microprocessors (a short sketch after the list shows how work can be spread across multiple cores):


  • Single-core processors: Single-core processors contain a single processing unit. They are typically marked by slower performance, run on a single thread and perform the CPU instruction cycle one instruction at a time.

  • Dual-core processors: Dual-core processors are equipped with two processing units contained within one integrated circuit. Both cores can run at the same time, which can nearly double performance.

  • Quad-core processors: Quad-core processors contain four processing units within a single integrated circuit. All cores can run simultaneously, multiplying performance further.

  • Multi-core processors: Multi-core processors are integrated circuits equipped with two or more processor cores, so they can deliver strong performance and optimized power consumption.
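
The sketch below (not tied to any particular processor) shows how independent pieces of a CPU-bound job can be spread across the available cores; the workload and chunk sizes are invented for illustration.

```python
# Spread a CPU-bound task across cores using a process pool.
import os
from concurrent.futures import ProcessPoolExecutor

def count_primes(limit: int) -> int:
    """Deliberately CPU-bound: count the primes below `limit`."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    chunks = [50_000] * 8  # eight independent pieces of work
    # One worker process per available core; each core runs its own
    # instruction cycle on a separate chunk at the same time.
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        results = list(pool.map(count_primes, chunks))
    print(sum(results))
```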

Leading CPU manufacturers

Several companies now create CPU products across different brand lines. However, this market niche has changed dramatically. It formerly attracted numerous players, including plenty of mainstream manufacturers (e.g., Motorola), but now there are really just two main players: Intel and AMD.

Intel and AMD processors both follow the x86 instruction set architecture (ISA), a Complex Instruction Set Computer (CISC) design, whereas Arm-based processors use a Reduced Instruction Set Computer (RISC) architecture.


  • Advanced Micro Devices (AMD): AMD sells processors and microprocessors through two product types: CPUs and APUs (accelerated processing units). In this case, APUs are simply CPUs that have been equipped with proprietary Radeon graphics. AMD’s Ryzen processors are high-speed, high-performance microprocessors aimed at the video-game market. The Athlon line was formerly considered AMD’s high-end offering, but AMD now positions it as a general-purpose alternative.

  • Arm: Arm doesn’t actually manufacture equipment, but licenses its valued processor designs and other proprietary technologies to companies that do make equipment. Apple, for example, no longer uses Intel chips in Mac CPUs, but makes its own customized processors based on Arm designs. Other companies are following suit.

  • Intel: Intel sells processors and microprocessors through four product lines. Its premium line is Intel Core, including processor models like the Core i3. Intel’s Xeon processors are marketed toward offices and businesses. Intel’s Celeron and Intel Pentium lines (represented by models like the Pentium 4 single-core CPUs) are considered slower and less powerful than the Core line.

Understanding the dependable role of CPUs

When considering CPUs, we can think about the various components that CPUs contain and use. We can also contemplate how CPU design has moved from its early super-sized experiments to its modern period of miniaturization.

But despite any transformations to its dimensions or appearance, the CPU remains steadfastly itself, still on the job—because it’s so good at its particular job. You know you can trust it to work correctly, each time out.

Smart computing depends upon having proper equipment you can rely upon. IBM builds its servers strong, to withstand any problems the modern workplace can throw at them. Find the IBM servers you need to get the results your organization relies upon.

Explore IBM servers
