Basic Workings


About this page

Most people haven't got the faintest idea how a computer really works inside. Now in a sense, you don't “need” to know how a computer works in order to use one, any more than you “need” to know how an internal combustion engine works in order to drive a car. However, if — like me — you are curious about such things, then this is the page for you. I'm not going to go into the level of minute technical detail that you'd need to actually design a brand new computer from scratch. (Who on earth needs to know all that? Apart from the people who actually design computers, that is!) But I will attempt to explain it in terms simple enough for a non-expert to understand.

What is a computer?

These days, computers are everywhere. Your washing machine probably has a computer in it. And perhaps your watch. We use computers to play music from CDs, and watch movies from DVDs. We use computers to play computer games. We use computers to communicate by email and even VoIP. With all this going on, it's easy to forget what computers were originally created for: computing stuff!

Devices to help humans do calculations have a very long history. Perhaps you've heard of an abacus. (These date almost to prehistoric times.) Or a slide-rule. During the industrial revolution, mad Victorians came up with bizarre, fantastical designs for mechanical calculating machines. (And various other oddities, by the way.) There were machines designed to tabulate logarithms, and machines for computing the time and height of the tides on each day of the year, and so forth.

In more recent times, electronics engineers worked on “analogue computers” — devices which could do simple arithmetic using electricity. Useful for, say, controlling the speed of a turbine by connecting pressure monitors to a small analogue computer which then controls the inlet valves. Not much use for anything else.

The issue with most of these machines is that each one is designed to do just one calculation. The calculation to be done is actually designed into the machine itself, and to do a different calculation you would have to completely redesign the machine. (In fact it would probably be easier to just start all over again from scratch and design a complete new machine.)

The crucial, defining characteristic of a modern computer, in the sense the word is understood today, is that it is programmable. The computer executes a program — a sequence of instructions which tell the machine what to do. That means that given the right program, the same machine can perform any possible calculation. Crucially, this program is stored electronically, and that means that it is trivial to change one program for another, thus making the same machine do a completely different calculation. No need to take it apart and rewire it!

(Of course, the fact that programs can easily be swapped around doesn't necessarily mean that it's easy to write a correctly functioning program in the first place!)

Anatomy of a Computer

There are actually several workable designs for a computer, some slightly (or radically) different from what I'm about to describe. But the description below is by far the most common design.

Overview

Conceptually, the components of a computer can be divided into three parts.

Right in the center, we have the most important part of all: the processor. It's also known as the central processing unit (CPU). This does all the interesting stuff — it runs the programs, it does all the calculations, it controls all the other components, and so forth.

Next is the memory, also referred to as primary storage. As its name suggests, the purpose of memory is simply to store stuff. That includes the program currently being executed, the data that program is working on, final results, intermediate results, etc.

Last but not least, we have the input/output devices. (Input/output is frequently abbreviated “I/O”.) An input device is any device that brings data into the computer: the keyboard, the mouse, and so forth. An output device is anything that takes data out of the computer. The video screen is the main one, but also the sound system (if there is one), the printer, etc.

Note that, for example, the computer's power supply unit (PSU) is not an input device, since it doesn't bring any data into the computer (just electricity). Similarly, the cooling fans don't come under any of the three categories above. The categories are a conceptual grouping, whereas power supplies and cooling are mere practicalities. (For example, if you somehow made a computer that runs on light instead of electricity, it wouldn't have a PSU, and probably wouldn't have any cooling fans. But it would still have a processor, memory and some I/O devices.)

I/O Devices

Some surprising things are classed as I/O devices. For example, a disk drive is an I/O device. When you save a file onto a disk and take the disk out of the computer, you are taking data “out of” the computer. More obviously, when you put a disk in and load some file off it, you are taking data “into” the computer. The disk drive is the device that moves this data, hence it is classed as an I/O device.

In fact, the disk drive is a special kind of I/O device — it's called a secondary storage device. (Recall that “primary storage” means the computer's memory.) The disks themselves are classed as secondary storage. Any device that stores computer data outside of the main memory is a secondary storage device: disk drives, CD drives, DVD drives, etc. (Note that if, say, a CD drive can only read CDs and can't write to them, then technically it is an input device, not an input/output device. But the CD itself is still classed as “secondary storage”.)

Somewhat confusingly, a computer's hard drive is counted as a secondary storage device, and hence an I/O device. That is despite the fact that it is (usually) physically inside the computer's chassis and can't (easily) be taken out of the computer. It still takes data “out of” and “into” the main computer, even if physically it's inside the same box.

In particular, some people seem to refer to the memory and the hard drive together as just “memory”. This is quite incorrect. While both store data and both are inside the physical casing, they are separate components with very different and important characteristics. For one thing, the memory is a purely electronic device with absolutely no moving parts, while the hard drive is more like a souped-up disk drive with the disk fixed inside it so it can't be removed. Most significantly, the memory is many thousands of times faster to access than the hard drive, and when the power is turned off the memory goes completely blank! (On the other hand, the contents of the hard drive survive permanently until recorded over.)

In a similar way to the above, any device that connects a computer to other computers would be considered an I/O device. For example, a modem or a network interface card. (Clearly such a device can transmit information from and receive information into the computer.) Depending on your point of view, a device that enables other devices to connect to a computer could be considered an I/O device. (For example, if you have a USB printer, is the printer an output device? Or is the USB controller it connects to an I/O device? Both viewpoints are conceptually valid.)

Memory

The computer's memory is (conceptually) a very simple device. You can imagine it as a huge rack of little pigeon holes. Each one has a unique (roughly sequential) identifying number known as a memory address. Inside each such pigeon hole is a single (small) item of data. (The technical term for each “pigeon hole” is memory register, but many people use “memory location” or even “memory address” as a synonym.) To access any item of data, you just need to know what address it's stored at.

More specifically, each memory register holds an 8-digit code — but each of the digits can only be “1” or “0”. That gives exactly 256 possible code numbers. In other words, each memory register has 256 possible values.
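
If you happen to read a little C, here's a tiny sketch of that idea. (Purely illustrative; real memory is hardware, not an array in a program.)

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* A toy "memory": 16 registers, each holding one 8-bit value. */
        uint8_t memory[16] = {0};

        memory[5] = 0x6E;   /* store the code 01101110 at address 5 */

        /* To read an item of data back, all you need is its address. */
        printf("Address 5 holds the code %d\n", memory[5]);

        /* An 8-bit register has exactly 2 to the power 8 = 256 possible values. */
        printf("Possible values per register: %d\n", 1 << 8);
        return 0;
    }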

Into the circuits

To understand what's happening here, let's briefly drop down into the world of electronics. For each conceptual “memory register”, somewhere in the computer there is a set of 8 little electronic circuits. Each such circuit can be individually turned on or off. If a particular circuit is turned on, that means “1”. If it's turned off, that means “0”. So in this way, we can use electricity to represent one of two possible numbers.

Into the codes

Thus far, I've explained that each memory register holds an 8-digit code, such as (say) 01101110. At this point, you might well ask “so what does 01101110 actually mean?”

Somewhat perplexingly, the answer is that it could mean absolutely anything! Now that doesn't sound like much of an answer. But it's very important to understand. Next time you try to open some file and the computer complains that it doesn't know what to do with it, what's really happening is that the computer has a file with a bunch of codes in it, and has no idea what they're supposed to mean.

This is absolutely critical, so I will say it one more time: you cannot tell what a code means just by looking at it! You have to know what it's supposed to be first.
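
To make that concrete, here is a little C sketch (my own illustration, not part of the explanation above) showing one and the same 8-bit code being read in two different ways:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint8_t code = 0x6E;   /* the code 01101110 again */

        /* The same bits, interpreted two different ways: */
        printf("Read as a plain number, it is %d\n", code);        /* 110 */
        printf("Read as an ASCII character, it is '%c'\n", code);  /* 'n' */

        /* And if it sat in the middle of a program, it might be an op-code! */
        return 0;
    }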

Binary, Bits & Bytes

I could write an entire book on all the different ways you can use 1s and 0s to encode information. Suffice it to say that the most basic way is a system called binary. Now people often use that word to describe anything composed only of 1s and 0s. But strictly speaking, binary is a way of using 1s and 0s to encode numbers. (In the simplest case, positive whole numbers.)

Most other encodings are at least partly based on binary. For example, if you have a way to turn whole numbers into 1s and 0s, you can use that to encode other kinds of numbers too. (Negative ones, or fractional ones.) And if you want to represent things that aren't numbers, you can number them all, and then encode the numbers using binary.

Eventually maybe I'll get round to writing a long intricate document describing the multitude of ways you can turn information into 1s and 0s. For now, all you really need to understand is that a given sequence of digits can be interpreted in many different ways. You've got to know which way it's meant to be interpreted to make any sense out of it. (And that's really what all those different file types are about, for example.)
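
If you're curious, here is one small C sketch (an illustration of plain binary only, nothing fancier) that prints the 1s and 0s of a positive whole number:

    #include <stdio.h>

    int main(void)
    {
        unsigned int n = 110;   /* the number we want to encode */

        /* Print the 8 binary digits of n, most significant bit first.
           Each digit is found by testing one bit of the number. */
        for (int bit = 7; bit >= 0; bit--)
            putchar(((n >> bit) & 1) ? '1' : '0');
        putchar('\n');          /* prints 01101110 */

        return 0;
    }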

Before we leave this subject, I want to introduce some terms.

Because binary is the basic means for encoding information into 1s and 0s, they are often referred to as binary digits or bits. So a “bit” simply means a “1” or a “0”, and is thus the smallest unit of computer storage.

Next up in the line is a byte. (Note the weird spelling. Yes, that's how you spell it.) 1 byte = 8 bits. (You'll notice that a memory register holds exactly 1 byte. There's no fundamental reason why it has to be that size; it's just a popular convention.) An archaic term you probably won't hear any more is nybble, which is half a byte (i.e., 4 bits). Again, note the strange spelling.

Moving up, we have a kilobyte (KB). Usually, 1 KB = 1,024 bytes. Why one thousand and twenty-four? Because it's a round figure in binary. Similarly, 1 megabyte (1 MB) = 1,024 KB, and 1 gigabyte (1 GB) = 1,024 MB. Even larger still — and currently very rare — 1 terabyte (1 TB) = 1,024 GB. (That's about 100 DVDs' worth of data!)
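
Here's a quick C sketch (just an illustration of the arithmetic) showing why these are all round figures in binary: each unit is simply 2 multiplied by itself a further ten times.

    #include <stdio.h>

    int main(void)
    {
        /* 1,024 is "round" in binary: it is exactly 2 to the power 10. */
        unsigned long long kb = 1ULL << 10;   /* 1,024 bytes */
        unsigned long long mb = 1ULL << 20;   /* 1,024 KB    */
        unsigned long long gb = 1ULL << 30;   /* 1,024 MB    */
        unsigned long long tb = 1ULL << 40;   /* 1,024 GB    */

        printf("1 KB = %llu bytes\n", kb);
        printf("1 MB = %llu bytes\n", mb);
        printf("1 GB = %llu bytes\n", gb);
        printf("1 TB = %llu bytes\n", tb);
        return 0;
    }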

Travellers beware: the prefixes (“kilo”, “mega”, “giga”, etc.) come from the SI units, but there they represent multiples of 1,000 rather than 1,024. Usually 1 KB = 1,024 bytes, but just occasionally people use 1 KB = 1,000 bytes, which is obviously a different measurement. The standards bodies have actually ruled that 1 KB should only ever mean 1,000 bytes, and that “1 KiB” should be used for 1,024 bytes. This usage is not widespread yet.

The Processor

The purpose of I/O devices is essentially to get data into and out of the computer. The purpose of memory is essentially to store that data. That leaves the processor to do everything else, which means this is where all the really interesting stuff happens!

The processor executes programs. A program is just a sequence of instructions telling the processor what to do. Each design of processor has a particular instruction set — that is, it understands a certain set of instructions. Each possible instruction is given a unique number called an operation code or (much more commonly) op-code. The op-codes, and any data that goes with them, are laid out in an area of memory, and the processor fetches and executes them in sequence.

The Instruction Set

So what kinds of instruction does the processor understand? You might expect to find instructions such as “open this file”, “print this document” or “draw a circle on the screen”.

Actually, all of the above are far more complicated than anything the processor knows how to do. Actual processor instructions can typically be divided into a few small groups: instructions for moving data around (between memory, processor registers and I/O devices), instructions for doing very simple arithmetic, and instructions for jumping to a different part of the program (possibly only if some condition holds).

That's more or less it. Not a lot to work with, eh?

People often seem to think of computers as highly sophisticated and complex arrangements of electronics. They aren't. They are a bunch of circuits thrown together in such a way as to be just barely able to do trivial calculations. All the rest of the sophistication is in the software, not the hardware.
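
To give a feel for how far you can get with such a meagre instruction set, here is a heavily simplified C sketch of a made-up processor. The op-codes and their meanings are entirely invented for this illustration; no real processor works exactly like this, but the fetch-and-execute rhythm is the same.

    #include <stdio.h>
    #include <stdint.h>

    /* Made-up op-codes for a toy machine. Real instruction sets differ. */
    enum { OP_HALT = 0, OP_LOAD = 1, OP_ADD = 2, OP_PRINT = 3 };

    int main(void)
    {
        /* A tiny "program" laid out in memory: op-codes mixed with their data. */
        uint8_t memory[] = { OP_LOAD, 40, OP_ADD, 2, OP_PRINT, OP_HALT };

        uint8_t accumulator = 0;   /* a single processor register */
        size_t  pc = 0;            /* the program counter         */

        for (;;) {
            uint8_t opcode = memory[pc++];    /* fetch the next instruction */
            switch (opcode) {                 /* decode and execute it      */
            case OP_LOAD:  accumulator  = memory[pc++]; break;
            case OP_ADD:   accumulator += memory[pc++]; break;
            case OP_PRINT: printf("%d\n", accumulator); break;
            case OP_HALT:  return 0;
            }
        }
    }

Running this prints 42: the toy program loads 40 into the accumulator, adds 2, prints the result, and halts.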

Registers

A processor contains within it a number of internal registers or processor registers. These are essentially similar to memory registers, except that there are only a few of them, and each has a specific purpose. They are usually given names rather than numbers, and they come in various sizes depending on the design of the processor.

Jumps, Loops & Conditionals

In particular, all processors have a register known as the program counter. (Intel calls this the instruction pointer just to be different.) This register holds the memory address of the next instruction to be executed. Among the group of instructions for moving data around, there is usually at least one that moves new data into the program counter. This has the effect of causing the processor to start executing a different part of the program. The technical term for this is a jump.

More importantly, there is virtually always a group of instructions that execute a jump, but only if some particular condition holds. (Typical “conditions” are whether the most recent calculation produced a negative/positive/zero result, or what mode the processor is currently in.) This is fundamentally what allows a program to do something different each time it is run.

By making the processor jump backwards, it is possible to get a sequence of instructions to be executed several times. This is called a loop. The jump instruction used is usually a conditional jump (otherwise an endless loop would result). Why would you want to repeat instructions several times? Well, remember that each time around, the contents of the processor's registers (and possibly memory too) will be different.
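
Here's the same idea expressed in C. (A sketch only; when this is compiled, the loop becomes exactly the kind of conditional backwards jump just described.)

    #include <stdio.h>

    int main(void)
    {
        /* The same instructions run ten times, but with a different value in
           the counter each time round. Under the surface this becomes a
           conditional jump backwards to the top of the loop. */
        for (int counter = 0; counter < 10; counter++)
            printf("Pass number %d\n", counter);

        return 0;
    }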

Arithmetic

Early processors only had instructions to add numbers together. To perform subtraction, you make one of the numbers negative and then add them together. Newer processors usually feature the ability to add, subtract, multiply and divide with a single instruction. (Although only for sufficiently small numbers.)

You may hear the term floating-point unit (FPU) bandied about. This is a special unit that handles calculations on floating-point numbers. (In simple terms, fractional values.) On older processors, the FPU was an optional add-on chip. (And one that drastically improved performance for software that does a lot of floating-point calculations.) On modern processors, it's almost always built in.

Remember that computers calculate in binary, whereas humans do everything in decimal. Just converting from one to the other requires several dozen instructions. Also, even when performing quite simple calculations (which you might think would take 1 instruction) you have to check for numbers that are too large/too small to fit in a register, and so on.
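
As a rough illustration (a sketch in C, with the conversion done “by hand” rather than left to the library), here is what turning a binary value into decimal digits for display involves:

    #include <stdio.h>

    int main(void)
    {
        unsigned int n = 110;   /* a value held in binary inside the machine */
        char digits[10];
        int  count = 0;

        /* Converting to decimal text means repeatedly dividing by ten and
           keeping the remainders. Each pass costs several instructions. */
        do {
            digits[count++] = (char)('0' + (n % 10));
            n /= 10;
        } while (n != 0);

        while (count > 0)       /* the digits come out backwards, so reverse */
            putchar(digits[--count]);
        putchar('\n');          /* prints 110 */

        return 0;
    }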

The usual method of storing signed numbers (numbers that can be negative or positive) is to store positive numbers as usual, and to use large positive numbers to stand for negative values. (This scheme is known as two's complement.) Thus, if you add two large enough positive numbers together, the result can appear to be negative. This is called a sign overflow, and is just one of a long list of errors you have to check for and correct by hand. The processor itself does not do this; it must be programmed to do so.
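
Here is a small C sketch of a sign overflow in action. (8-bit signed values are chosen purely for illustration.)

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* Two perfectly reasonable positive numbers in 8-bit registers... */
        int8_t a = 100;
        int8_t b = 100;

        /* ...whose sum no longer fits in 8 bits. The result wraps around and
           appears negative. The hardware does not stop you; the program has
           to check for this itself. */
        int8_t sum = (int8_t)(a + b);
        printf("100 + 100 stored in 8 bits comes out as %d\n", sum);  /* -56 */

        return 0;
    }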

Getting work done

An Example

As I have explained, all the processor really knows how to do is move data around, do very simple arithmetic on it, and decide which part of the program to execute next based on the result. All of the sophisticated stuff that computers can do is due to very clever programming. In particular, even the most trivial tasks that you can do with a computer actually require a monumental number of processor instructions to accomplish.

Allow me to describe a small example. You're sitting at your computer, and you press the letter “J” on your keyboard. Instantly, the letter “j” appears on the screen. This certainly sounds like an utterly trivial operation. But in fact, each time you press a key, a huge amount of activity is performed inside the computer.

  1. When you press the “J” key, this causes an interrupt signal to be sent to the processor. On a standard PC, the keyboard uses IRQ #1. (I have no idea what it is on, say, an Apple Mac.) Basically this means that the keyboard controller chip sends a little electric pulse along one of the connections to the processor.
  2. When the processor receives this interrupt signal, it finishes the instruction it's in the middle of executing, and then “remembers its place” by saving it into a special area of memory called the stack. Then it goes and executes a special mini-program called an interrupt handler. There is a different handler for each possible interrupt signal, so in this case the processor finds the handler for IRQ #1 and executes that.
  3. The handler for IRQ #1 — the keyboard interrupt handler — accesses the keyboard controller and makes a note of which keys are pressed/not pressed. In this case, the handler notes that scancode #37 is present when it wasn't there on the last keyboard scan, indicating that that key has just been pressed. It notes this, and then the handler exits.
  4. When the handler exits, the processor “picks up its place” by looking it up on the stack, and carries on what it was doing before as if nothing happened.
  5. When you let go of the “J” key, the same sequence repeats all over again. This time the keyboard interrupt handler notes that scancode #37 is no longer present, when it was present before, so that's a key-up event instead of a key-down event.
  6. Next, the keymap is consulted to find out, for your particular model of keyboard, what key “scancode #37” actually corresponds to. If your keyboard is anything like mine, that's the “J” key. Since neither of the Shift keys (scancode #44 and #57) were pressed, we have a lowercase J. This information is added to the keyboard buffer.
  7. Whatever window is currently selected will have registered itself with the operating system saying “please send me any keyboard events”. Next time the window event loop runs, the event loop will “notice” the new keypress event, and direct that to the currently selected window.
  8. At some point in time, the program that the window belongs to will be “woken up” by the operating system, and it will see the new keyboard event and process it. It will probably do this by sending a message to the operating system requesting that the letter “j” be printed at such-and-such coordinates in the current typeface and colour.
  9. The operating system will respond to this request by looking at the font table for the current typeface and finding the glyph for “j”. This glyph must then be drawn onto the screen at the right position.
  10. Somewhere in the computer's memory is the frame buffer. This is a huge sea of code numbers. For each pixel that makes up the video display, there is a code saying what colour that pixel should be. For each individual pixel in the font glyph, the operating system asks a mini-program called a device driver (supplied with the graphics card) to set that pixel to the currently selected text colour.

I wish to emphasise that the above description is massively simplified! What “really” happens is way more complicated even than this — but hopefully you get the idea. Even the tiniest thing your computer does is actually wildly complicated under the surface.
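
If it helps, here is a very rough C sketch of steps 3 and 5 from the list above: the keyboard handler compares the current scan with the previous one to spot key-down and key-up events. It's purely illustrative; real interrupt handlers are hardware- and operating-system-specific, and the scancode number is just the one used in the example.

    #include <stdio.h>
    #include <stdbool.h>

    #define NUM_SCANCODES 128

    /* What the keyboard looked like on the previous scan. */
    static bool previous_scan[NUM_SCANCODES];

    /* A toy "keyboard interrupt handler". */
    static void keyboard_handler(const bool current_scan[NUM_SCANCODES])
    {
        for (int code = 0; code < NUM_SCANCODES; code++) {
            if (current_scan[code] && !previous_scan[code])
                printf("key-down event, scancode #%d\n", code);
            if (!current_scan[code] && previous_scan[code])
                printf("key-up event, scancode #%d\n", code);
            previous_scan[code] = current_scan[code];
        }
    }

    int main(void)
    {
        bool scan[NUM_SCANCODES] = { false };

        scan[37] = true;          /* the "J" key goes down... */
        keyboard_handler(scan);

        scan[37] = false;         /* ...and comes back up     */
        keyboard_handler(scan);
        return 0;
    }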

Programming Languages

Machine Code

Clearly, programming a computer by writing the individual commands to move codes around one at a time is an agonisingly slow, complicated and tedious way to make software. (Let's not even talk about how difficult it is to sort out what's going on if there's a mistake in the program and it doesn't function correctly.)

The program that is actually fed into the processor and executed is called machine code. As I've explained, it's a mixture of op-codes representing instructions, and other associated data. Basically, a machine code program is a vast sea of codes.

In the early days of computing, computers were astronomically expensive machines, and only a few people in the world had access to them. Computer programming was the domain of super-experts with multiple PhDs to their names. Programs were designed and written out using nothing but pencil and paper. The program was probably finished before they actually completed building the computer to run it!

As time went on, computers became more widespread, so more software had to be written. Computers became more powerful, so it was possible to write more complicated software. And it became possible to type your program and all the designs for it actually on the computer itself, thus allowing you to store the designs on tape and so forth.

Assembly Language

Eventually it seems people grew tired of writing machine code directly, and they came up with assembly language. An assembly language program is (vaguely) human-readable text saying what commands to execute, and on what data. This text is then fed into a small program called an assembler, which basically replaces each command name with the corresponding command code, thus producing a normal machine code program that can be run in the normal way. (Most assemblers can also do things like convert decimal to binary and a few other simple things.)

Assembly language is very much easier to read than machine code. For example, “COPY #3B, X” is cryptic, but vastly easier to understand than just F7 BA 3B. Still, it doesn't solve the problem of it taking many pages of program to get even the tiniest task done. And it doesn't solve the problem of needing to be a super-expert to program a computer.
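
At its heart, an assembler is just a lookup table from command names to command codes. Here is a deliberately tiny C sketch of that idea; the mnemonics and op-codes are invented for illustration (only COPY = F7 is borrowed from the example above), so don't mistake this for any real processor's instruction set.

    #include <stdio.h>
    #include <string.h>

    /* A toy "assembler": replace each command name with its op-code. */
    struct mnemonic { const char *name; unsigned char opcode; };

    static const struct mnemonic table[] = {
        { "COPY", 0xF7 },
        { "ADD",  0xA3 },
        { "JUMP", 0x4C },
    };

    int main(void)
    {
        const char *program[] = { "COPY", "ADD", "JUMP" };

        for (size_t i = 0; i < sizeof program / sizeof program[0]; i++)
            for (size_t j = 0; j < sizeof table / sizeof table[0]; j++)
                if (strcmp(program[i], table[j].name) == 0)
                    printf("%-4s -> %02X\n", program[i], table[j].opcode);

        return 0;
    }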

The problem of program size was partly solved by writing libraries of little mini-programs to perform common tasks. These mini-programs are variously called subroutines, functions or procedures. They help a little with reducing the need to repeat common sequences of commands.

(Next time you see a DLL file, that's what's in it — lots of little mini-programs for performing some task or other.)

The High-Level Languages

Even so, much more was needed. The next breakthrough came with the invention of high-level programming languages. (Machine code and assembly language were duly dubbed low-level programming languages.) One of the very popular early high-level programming languages was a thing called the Beginner's All-purpose Symbolic Instruction Code — otherwise known as BASIC.

Typical BASIC instructions are things like “load file XYZ” or “calculate 17% of 82”. The BASIC program is then fed into another program which turns what is essentially lines of text into something the machine understands and can execute. (I.e., machine code.)

There are in fact two ways this conversion can be done. One way is to feed the whole program into a compiler, which reads the BASIC text (the source code) and converts the whole thing into a finished machine code program (the target code or object code) which can then be run as normal. The other way is to feed the program into an interpreter, which executes the commands in the source code one at a time as it goes along.

Interpreters are much easier to make, but in general much less efficient. In particular, if the program contains a loop (which almost all programs do), all the instructions in that loop get re-interpreted every single time the loop runs. That is wasted computer power. A compiler, on the other hand, only has to compile each instruction once. Also, when you run an interpreted program, you are really running two programs at once — the real program, and the interpreter program.

Whether compiled or interpreted, high-level languages offer several vast and compelling advantages: programs become far shorter, they are much easier to read and write, and you no longer need to be a super-expert to write them.

The above are all fairly obvious. Less obvious, but arguably even more important: because the source code isn't tied to the instruction set of any particular processor, the same high-level program can (in principle) be compiled or interpreted to run on completely different types of computer. This is known as portability.

In today's world, there are very few different types of computer; most computers are PCs, and a few are Apple Macs. But even so, portability is a great advantage.

And beyond…

While BASIC is certainly a huge improvement on assembly language, there are yet further improvements that can be made. The next development was the structured programming languages, which (as the name suggests) make programs more organised and manageable. Even they were eventually more or less superseded by the object-oriented programming languages. And even this is not yet the end of the story; it may yet be that functional programming languages will ultimately inherit the earth.

In short, a vast range of languages and even categories of languages have come into existence over the years. The important point is that the computer itself still understands only pure machine code; everything else must be translated by one route or another. (Assembled, interpreted, compiled, or some combination thereof.)

Operating Systems

The Hardware Problem

Back in the days when BASIC was popular, people owned home computers such as the Commodore 64. (Also known as the “C64”.) Significantly, every single C64 ever sold is identical. If a program works on one, it will work on all of them.

Today it is very difficult to buy two PCs that are exactly the same. The CPU could be made by either Intel or AMD. (Though to a large extent that's not important.) The motherboard could be made by any of Gigabyte, Jetway, Asus, Iwill… The graphics chip might be designed by ATi, or nVidia, or S3, Matrox, Intel, VIA, SiS… As for the sound, there is an almost endless list of possible makers.

The point is, each of these devices may do the same job, but their designs are totally different. More importantly, the machine code instructions required to draw a picture on one graphics card are wildly different to the ones needed to control a different brand (or even model).

Clearly, it would be utterly hopeless if your wordprocessor only works with ATi graphics cards, but your spreadsheet only works with nVidia graphics cards. Similarly, imagine if you had to buy one brand of printer for each program you want to print from! This is exactly the kind of thing that would happen if the wordprocessor itself had to manually control the graphics card or the printer. But it does not.

Virtually all computers these days run some kind of operating system. As the name implies, the operating system is software whose only job is to operate the computer. In the case of most home PCs, that's Microsoft Windows — or possibly Linux. (Or maybe even FreeBSD, OpenBSD, or something else.) If you own an Apple Mac, it's probably Mac OS X or similar.

All the software you run that isn't the operating system — games, word processors, spreadsheets, databases, etc. — is application software.

Device Drivers

The main job of an operating system is to operate the computer's hardware on behalf of the application(s). In many cases, the OS does that using device drivers. For example, when you buy a graphics card, it comes with a device driver. When Windows (or whatever) wants to display something using the graphics card, it actually sends a message to the device driver software, and the device driver actually does it.

In this way, one piece of software (your operating system) is able to operate any hardware you connect to your computer. You just need to feed it the right driver software and it's good to go. So, when you try to print something, your application software talks to your operating system software, and your operating system software talks to your device driver software, and the device driver software makes the thing actually print (all being well).

Other Functions

The operating system does far more than simply act as a go-between for applications and drivers, though. For example, a device driver allows the operating system to control secondary storage devices, but it is the operating system itself that decides how files and folders are stored on that secondary storage. When an application wants the contents of a file, it simply asks the operating system for the file by name. The OS figures out where that file is, and tells the device driver to fetch the appropriate data.
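
In C, for instance, the application's side of this conversation looks like the sketch below. (The file name is just a stand-in; the point is that the application never needs to know where on the disk the data actually lives.)

    #include <stdio.h>

    int main(void)
    {
        /* The application only supplies a name. The operating system works
           out where the file actually lives and asks the appropriate device
           driver to fetch the data. */
        FILE *f = fopen("example.txt", "r");
        if (f == NULL) {
            printf("The operating system could not open the file.\n");
            return 1;
        }

        char buffer[64];
        size_t got = fread(buffer, 1, sizeof buffer, f);
        printf("Read %zu bytes without knowing anything about the disk.\n", got);

        fclose(f);
        return 0;
    }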

On top of that, on many operating systems, files and other items can be given security settings that control who can or cannot access them. The OS enforces this too. If a user does not have permission to access a given file, the OS will tell the application “you can't have that”, even though it knows perfectly well where the data is and how to get it.

Similarly, the OS is usually responsible for anything to do with networking.

In addition to this, most operating systems allow the user to run more than one application at once. This is a seriously complicated trick to pull off. The OS is responsible for sharing out the available memory, processor power, screen space, etc. between applications. It has to do this while making sure that applications can't accidentally interfere with each other. (Additionally, there is usually some facility for allowing applications to communicate if they want to.)

On top of all that, many operating systems also provide something called virtual memory. In the good old days, if your computer didn't have enough RAM to be able to do something, that's it. You can't do it. Today, if your computer doesn't have enough RAM, the OS provides “virtual” memory. That is, it makes a portion of your hard drive act like it's memory. But, of course, it isn't — and so this trick requires some serious fancy footwork.

