Futurist Ray Kurzweil predicts that humans will attain technological singularity (the point at which humans merge with machines) in 2045.
He believes that we’re approaching a moment when computers will not just become intelligent, but will outstrip human intelligence.
Computers are getting faster; everybody knows that. What is less widely appreciated is that computers are getting faster faster: the rate at which they improve is itself increasing.
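The claim above can be illustrated with a toy calculation. Assuming a Moore's-law-style model in which speed doubles every period (the doubling model is an assumption for illustration, not something the text specifies), the absolute gain in each period keeps growing, which is what "getting faster faster" means:

```python
# Toy model: speed doubles every period (a hypothetical assumption).
speeds = [1.0]
for _ in range(5):
    speeds.append(speeds[-1] * 2)  # exponential growth

# The gain per period is the difference between consecutive speeds.
gains = [b - a for a, b in zip(speeds, speeds[1:])]

print(speeds)  # [1.0, 2.0, 4.0, 8.0, 16.0, 32.0]
print(gains)   # [1.0, 2.0, 4.0, 8.0, 16.0] -- each gain exceeds the last
```

Under exponential growth, not only does speed increase, but the size of each period's improvement increases as well.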
If computing power keeps growing at this accelerating pace, there could conceivably come a moment when computers are capable of something comparable to human intelligence.
From that point on, there’s no reason to think computers would stop getting more powerful. They’d keep on developing until they were far more intelligent than we are.
Their rate of development would also continue to increase, because they’d take over their own development from their slower-thinking human creators. Imagine a computer scientist that was itself a super-intelligent computer.
Since the intellectual capabilities of such a super-intelligent entity would be difficult for us to comprehend or assess, the technological singularity is seen as an “intellectual event horizon,” beyond which the future becomes difficult to see or predict.