Here is an interesting article on SSD vs. HD pricing: Flash Memory vs. Hard Disk Drives - Which Will Win?
In the 1990s, in my advanced computer architecture course, I demonstrated that the price trend of hard drives was going to intersect the price trend of DRAM at around 2004 to 2005. Of course, a solid-state disk (SSD) can't be made out of DRAM without a robust power source; FLASH has fixed that problem. Even though FLASH is more expensive per bit than DRAM, taking away the need for power makes it a viable alternative to disk, and the crossover point wouldn't have moved that far out, since FLASH's price trend is approximately the same as DRAM's.
So why aren't we all using FLASH drives, with disks relegated to museums?
Obviously, disk manufacturers also have smart people working for them and didn't want to be put out of business. If (as seems very likely) they also spotted the trend, they had two options: get into SSDs early (a few did) or push the improvement rate of hard drives to match DRAM's -- which is what happened in the late 1990s.
The other interesting observation in the article is that there is a floor price for hard drives: the mechanism has a bare minimum cost that doesn't shrink as you make the drive smaller. The graph illustrating this point is a bit misleading, though: disks are not a fixed price per unit irrespective of capacity. The authors were representing the scenario in which a 64GB drive was the smallest one available, which is not fully representative; for clarity, they should have drawn the portion of the graph to the left of the 64GB point as an extrapolation (e.g., as a dotted line). In practice, very small drives are more expensive to make per bit than very big ones, an effect masked to some extent by economies of scale for smaller drives. But the floor price observation itself is correct.
Why is this floor price observation useful? Because below a certain capacity, FLASH can be cheaper than the smallest disk you can buy. Obviously, it will also hold less, but if that's all you need, why spend more on a bigger, slower technology?
If these trends persist, we may reach a point where FLASH is big enough for most mobile devices at a price below the cheapest disk. Currently, the cheapest disk is around $50; a 64GB FLASH drive -- as offered as an option on the MacBook Air -- costs about 20 times that. However, if you are happy with up to about 3GB of FLASH, you come out ahead on cost. That would be enough for a minimal bootable system, but once you start adding applications and user data with all the cruft (not to mention useful stuff like large data files) those entail, you can burn through 3GB very fast. (My iMac has 2GB in its /Applications directory tree alone and 77GB in /Users, and it's not as if I am making movies on a regular basis.)
If the current rate of improvement continues, with FLASH prices halving roughly every three years, and the minimum disk price doesn't drop, then by the same argument you'd be able to afford 6GB of FLASH in three years. However, minimum disk prices are dropping too, and the minimum space you need is a moving target. The only way that would change substantially is with a new mindset in systems and application development. Even then, users with large-scale data (movies, big graphics files, databases) would still need disk storage a long way into the future.
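The break-even arithmetic above can be sketched in a few lines. This is just a toy model using the figures quoted in this post (the ~$50 disk floor price, a 64GB FLASH drive at roughly 20 times that, and the three-year halving trend), not market data:

```python
# Break-even FLASH capacity against the hard-disk floor price.
# Assumed inputs, taken from the figures quoted above:
#   - cheapest disk: ~$50 regardless of capacity (the "floor price")
#   - 64GB of FLASH: ~20x the cheapest disk, i.e. ~$1000
#   - FLASH price per GB halving roughly every three years

DISK_FLOOR_USD = 50.0
FLASH_USD_PER_GB = (20 * DISK_FLOOR_USD) / 64  # ~$15.6 per GB today

def breakeven_gb(years_from_now=0.0, halving_period=3.0):
    """Largest FLASH capacity that still undercuts the disk floor price."""
    price_per_gb = FLASH_USD_PER_GB * 0.5 ** (years_from_now / halving_period)
    return DISK_FLOOR_USD / price_per_gb

print(round(breakeven_gb(0), 1))  # today: 3.2 GB
print(round(breakeven_gb(3), 1))  # three years out: 6.4 GB
```

Which recovers the ~3GB figure today and ~6GB three years out -- assuming, again, that the disk floor price stays put.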
So where does this all take us?
First, trends do not continue indefinitely, as we saw with the change in price trend for disks. A breakthrough in how SSDs are constructed could tilt the balance away from disks.
More likely, though, is the emergence of a new computing platform with a smaller memory footprint, but which can use external storage efficiently. It is this detail that is interesting about the MacBook Air -- it's a gentle step towards distributed storage, if only to dispense with the need for an optical drive. A more exciting idea would be the development of a global secure distributed file store, which takes away the need to store all your files on your own platform.
This already exists in the form of the Google File System (GFS), though unfortunately it is not available for general use outside the services Google provides. On the research side, the Carnegie Mellon Andrew File System (AFS) and its successor Coda were nice ideas. Although the original AFS and Coda projects don't appear to be going anywhere, versions such as OpenAFS continue to be developed. The Hadoop Distributed Filesystem looks interesting too.
So in the long term, I would expect computing to look a bit more like Google's services. Computation would go wherever it was most efficiently performed, and data would be stored wherever was most convenient and efficient, if necessary fragmented and replicated for greater redundancy and bandwidth. Your own personal device would not contain all your data -- only that which was critical for system performance, and cached copies of data for immediate use.
How big a step is the MacBook Air towards this? Pretty tiny, actually, but it got me thinking. Good on Apple.