Monday, 4 August 2014

Moore’s Law IS NOT Ending Soon

There is a whole lot of talk and a lot of assumptions about the direction that technology is heading. One of the most irritating is: "Moore’s Law will be ending soon."

Moore’s Law comes from a prediction made by Intel co-founder Gordon E. Moore in 1965: that the number of transistors on a single integrated circuit would double every year. Moore revised this in 1975 to a doubling every two years, and the commonly quoted figure of every 18 months came later still. Going by a strict interpretation of Moore’s Law, it probably has ended. But many people have avoided pinning Moore’s Law down to a strict definition. One reason for this is that, even though the number of transistors on a single integrated circuit has not gone up much lately, we have many other ways of improving computer performance.
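To get a feel for how quickly a two-year doubling compounds, here is a small sketch of the idealised curve. The starting point (the Intel 4004 of 1971, with about 2,300 transistors) is my own choice of reference chip, not something from the post, and real chips don’t track the curve exactly:

```python
# Idealised Moore's Law: transistor count doubles every `doubling_years`.
# Baseline is the Intel 4004 (1971, ~2,300 transistors) -- an illustrative
# reference point, not an exact fit to any real product line.
def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    return base_count * 2 ** ((year - base_year) / doubling_years)

print(round(transistors(2014)))  # roughly 6.8 billion under the idealised curve
```

Forty-three years at a doubling every two years is about 21.5 doublings, which turns 2,300 transistors into billions; a figure that is at least in the same ballpark as the biggest chips of today.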

The latest processors have multiple cores, enabling true multithreading. A computer can now do many things simultaneously, rather than just giving the illusion that it was doing multiple things at once (multitasking). Say a computer has a huge calculation to do. Theoretically, it can split that calculation into, say, 12 chunks and complete it 12 times faster. In practice it isn’t quite like that, but as software developers make programs that take full advantage of multithreading, our computers can become a hell of a lot more powerful without an increasing number of transistors on a single integrated circuit.
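The split-into-chunks idea above can be sketched in a few lines. This is a minimal illustration of the pattern, with names of my own invention; note that in CPython the interpreter lock means pure-Python threads won’t actually give the 12x speed-up, which is itself a nice example of why "in practice it isn’t quite like that":

```python
# Sketch: split one big calculation into chunks and hand them to a pool
# of workers. Function names here are illustrative, not from the post.
from concurrent.futures import ThreadPoolExecutor

def chunk_sum(lo, hi):
    """One worker's share of the job: sum of squares over [lo, hi)."""
    return sum(n * n for n in range(lo, hi))

def parallel_sum_squares(n, workers=12):
    # Split [0, n) into `workers` roughly equal chunks.
    step = -(-n // workers)  # ceiling division
    bounds = [(i, min(i + step, n)) for i in range(0, n, step)]
    # Caveat: CPython's GIL means threads won't run pure-Python code in
    # parallel; a process pool or a compiled language gets true parallelism.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(lambda b: chunk_sum(*b), bounds))

print(parallel_sum_squares(1_000_000))
```

The answer is the same whether the work is done in one chunk or twelve; the chunking only changes how many workers can attack the problem at once.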

Another thing that will make computers more powerful in the coming years is the development of more efficient software. Throughout the early 2000s, upgrading your computer often didn’t make it feel any faster, even though it had more grunt. That’s because the software of that era, especially on the Windows platform, was becoming increasingly bloated. It probably took longer to boot a computer in 2005 than in 1995.

Since then, the trend has been in the opposite direction. This is because of the overwhelming abundance of embedded computing devices such as mobile phones and single board computers such as the Raspberry Pi and the BeagleBone Black. These devices are about as powerful as the desktops we had in the early 2000s. People aren’t going to upgrade to a smartphone that takes five minutes to boot up; an old Nokia brick would be a far better phone. This has forced developers to pursue lightweight operating systems such as Linux and UNIX.

Apple OS X is built on top of UNIX, mixing open source and closed source components. Android, the most popular mobile operating system, is a flavour of Linux. These lightweight operating systems are very friendly to developers, as they are generally open source and work very well with the huge amount of UNIX / Linux infrastructure that is already out there.

Many developers have started developing multiplatform applications. This has two benefits: you don’t need to learn different programs that do the same thing on different platforms, and developers have to account for systems that don’t have a lot of processing power. The result is slick programs that run well on any platform, and even better on desktop PCs.

Improvements in computer systems architecture could also greatly increase the performance of computers. Once upon a time there was soldering on only one side of a circuit board. Chips sat in rows of holes drilled into the board. Now many components are surface mounted, greatly reducing their size and space requirements while allowing circuitry on both sides of the board. Instead of rows of holes, CPU sockets are now an extremely accurate forest of electrodes. Circuit board manufacturers have mastered the two dimensions, and I believe they will eventually go up into the third. Instead of looking like an extremely detailed miniature suburbia, the insides of our computers will look more like the CBD of a large city.

Further into the future we might start using optical components inside our computers. The speed of light is a lot faster than the speed at which electrons move through a circuit. To begin with, this new breed of computer would be a hybrid of optical and electronic computing, then maybe a completely optical computer. After that, who knows? Quantum computers, maybe?

I think we still have a whole lot of latent power that we can bring out of our current technologies. Computers will continue getting faster; the rate might slow or speed up, but overall I believe that Moore’s Law will hold into the distant future, provided something catastrophic doesn’t happen to the human race. Over this time, the processing of and access to information will continue to get cheaper.

Extremely cheap information technology, coupled with distributed sources of energy such as solar and wind, will allow developing nations to bootstrap themselves up to our technology level and maybe even overtake us. Children in these places will have access to an essentially free education, learning at their own pace and learning what they want. They will not have legislative baggage such as copyright and patent law, or an old, slow and very human bureaucracy.

The future will be an exciting place!
