In 1945 John von Neumann described the electronic digital computer in his famous First Draft of a Report on the EDVAC. The architecture he described is still used today. It included a shared “store” for both instructions and data. A shared store makes efficient use of memory, which was an expensive resource at the time. However, it allows poorly written programs to halt unexpectedly when the CPU’s program counter (often called the “instruction pointer”) lands on a word of data instead of a valid instruction. Many CPUs halt, or raise an exception, when they attempt to execute an undefined instruction; to a user, the computer has “frozen” or “locked up”.
Since today’s memory is cheap, why not break the shared store into two stores — one for instructions and the other for data? That would prevent the CPU’s program counter from ever pointing into data, eliminating one source of computer crashes. This segregated-memory architecture might be appropriate for mission-critical applications involving human life. One downside is that it would consume more power than the von Neumann model. How many crashes would it prevent? 10 percent? 90 percent? I don’t know.
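The failure mode, and how a split store avoids it, can be sketched with a toy simulator. The opcodes below are invented for illustration (no real instruction set is implied), and real hardware is far more involved, but the contrast between the two memory layouts comes through:

```python
# Toy sketch: a runaway program counter in a shared (von Neumann) store
# executes data as if it were code; a split (Harvard-style) store cannot.
# The opcode values are hypothetical, invented purely for this example.

VALID_OPCODES = {0x01: "LOAD", 0x02: "ADD", 0x03: "STORE"}  # made-up ISA

def run_shared_store(memory, start=0):
    """Step a program counter through one shared store of words.
    Returns a 'crash' message if the PC lands on a non-instruction word."""
    pc = start
    while pc < len(memory):
        word = memory[pc]
        if word not in VALID_OPCODES:
            return f"crash: undefined instruction {word:#04x} at address {pc}"
        pc += 1  # buggy program: no halt instruction, so the PC runs on
    return "ran off the end of memory"

def run_harvard(instructions, data, start=0):
    """Same loop, but the PC can only index the instruction store;
    the data store is unreachable as code by construction."""
    pc = start
    while pc < len(instructions):
        word = instructions[pc]
        if word not in VALID_OPCODES:
            return f"crash: undefined instruction {word:#04x} at address {pc}"
        pc += 1
    return "halted cleanly at end of instruction store"

# Shared store: three instructions followed by two data words.
shared = [0x01, 0x02, 0x03, 0x2A, 0xFF]
print(run_shared_store(shared))   # the PC runs into the data word 0x2A

# Harvard layout: the same program and data, in separate stores.
print(run_harvard([0x01, 0x02, 0x03], [0x2A, 0xFF]))
```

In the shared store the buggy program crashes when the counter reaches the data word 0x2A; in the split layout the same bug merely runs the counter off the end of the instruction store, which the hardware can detect and handle cleanly.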
Of course, programmers will continue to write buggy code that will result in other kinds of crashes.
Update, 3 December 2012: I’ve learned that a computer with separate stores for data and instructions is said to use the Harvard Architecture.