
The Silicon Eye: Microchip Swashbucklers and the Future of High-Tech Innovation

Technology insider George Gilder delivers a "compelling" ( Wired ) look under the hood at a genius-fueled startup. Thanks to the digital technology revolution, cameras are everywhere―PDAs, phones, anywhere you can put an imaging chip and a lens. Battling to usurp this two-billion-dollar market is a Silicon Valley company, Foveon, whose technology not only produces a superior image but also may become the eye in artificially intelligent machines. Behind Foveon are two legendary figures who made the personal computer possible: Carver Mead of Caltech, one of the founding fathers of information technology, and Federico Faggin, inventor of the CPU―the chip that runs every computer.

George Gilder has covered the wizards of high tech for twenty-five years and has an insider's knowledge of Silicon Valley and the unpredictable mix of genius, drive, and luck that can turn a startup into a Fortune 500 company. The Silicon Eye is a rollicking narrative of some of the smartest―and most colorful―people on earth and their race to transform an entire industry.

13 illustrations

320 pages, Paperback

First published April 25, 2005

About the author

George Gilder

72 books, 295 followers

Community Reviews

5 stars: 9 (26%)
4 stars: 13 (38%)
3 stars: 8 (23%)
2 stars: 3 (8%)
1 star: 1 (2%)
Keliani
54 reviews, 9 followers
April 17, 2013
Strongly recommended for tech students and the like. Actually, for everybody interested in the origins of our recent electronic devices.

This book explains how, from the hands of Carver Mead, with his interest in "listening to the technology," and Federico Faggin, father of the microprocessor and a formidable businessman, several companies were established in Silicon Valley with the purpose of applying neuroscience to the development of new technologies and the creation of consumer electronics that recreated, as well as they could, the behavior of analog human functions.

I loved this book from beginning to end. It's a nice personal tale, and it describes all the ups and downs of mixing electronic/mechanical/electrical engineering with business administration: the process of turning a regular brainstorming session at the Caltech lab into a new product that could improve existing technology. It makes one wonder how these fields collide today, and whether we will ever surpass Moore's Law, or whether we already have.

The floating gates, the microprocessor, the silicon retina, the digital camera, the touchpad, the TI calculators, all the software needed to control these tools, and all the people who influenced their creation. It doesn't offer instant praise to Intel or IBM as the one and only recognized giant corporations; it highlights the entrepreneurs, venture capitalists and amateur engineers who quietly made it happen, probably without even knowing all this technology would come to this point.

The greatest parts are the chapters focused on Michelle "Misha" Mahowald. It was pleasant to see a woman included in the hard work, and it's remarkable how the interest she and Carver shared in biology shaped the way their team oriented all of its future research toward representing human brain functions and replicating them to make electronics more intuitive, reliable and efficient.
Mike
511 reviews, 139 followers
November 26, 2010
This book brings me back a few years to when I was still designing equipment for the semiconductor industry. I still work with high-end imagers ("eyes" if you will), but for another world.

I really liked this book and would give it a "4.5" if such a score existed. There are a few minor editing errors that a normal reader wouldn't catch, but they seem silly and unfortunate given the otherwise high level of accuracy and depth of the book. Gilder has done a superb job with this book, and while I know (and he alludes to it in "The Silicon Eye") that some of his knowledge comes from having researched and written earlier books, I remain impressed. I don't recall off the top of my head having read any other books by the author, and that's too bad. In fact, I will be looking up these prior books soon.

For most readers the people and topic may be entirely new, but that should not cause you to skip this book. It is non-fiction, a history of how ideas and technology can be melded together. Sure, it is about a niche technology, but because everyone has eyes it may be more approachable than a book on microprocessor or integrated-circuit design and fabrication.

For myself, I already knew many of the more obscure names in the tale, but that only made its wealth of detail and writing more valuable to me. I recall having heard of the key company name several years ago, but haven't looked it up since starting this book. The other thing I learned is that many of the other titles in the "Enterprise" series look to be of interest to me.

So, if you are open to reading great non-fiction and aren't turned off by the idea of high-technology (there is very little description of actual technology or processes), then pick up this book. I think you will be pleased and intrigued by the people and their world.


9 reviews, 3 followers
July 21, 2015
My friend recommended a sci-fi book to me, in which cheap cameras take over the world, and privacy becomes obsolete. I spotted this for free at a book fair, and figured it was the "cameras take over the world" book.

Nope. Silicon Eye is just a history book about a bunch of old guys designing digital cameras. Booooring.
youzicha
26 reviews, 5 followers
Read
March 31, 2022
This is a startup hagiography: if you have read WIRED magazine you will be familiar with the genre. In this case the company is Synaptics, founded by Caltech professor and VLSI pioneer Carver Mead. In the 1980s his research focus was on biology-inspired neural networks implemented as analog VLSI circuits, so Synaptics tapped into the mid-80s AI hype.

Analog computation can be several orders of magnitude more efficient than digital (particularly in signal processing applications, if one can avoid analog-to-digital and digital-to-analog converters), but it is much harder to design, so it makes sense to look to biological models for inspiration. The eponymous "silicon retina" designed by Mead's PhD student Misha Mahowald modelled the way the eye encodes light intensity based on how much brighter a given spot in the scene is than its surroundings: the retina has horizontal neuron connections which inhibit the sensitivity of a cell based on the activity of its neighbors, as well as vertical feedback connections which inhibit it based on the average of a larger area. For an analog circuit designer this is interesting because it points to a way of dealing with noise. If the eye encoded light intensity directly, the darker areas would be overwhelmed by noise, but in Mahowald's circuit (and, metaphorically, in a real eye) each photo-detecting element is directly connected to an amplifier, so the subsequent signal paths can use the full range of voltages.
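To make the center-surround idea concrete, here is a minimal numerical sketch in Python. It is not Mahowald's circuit; the box-average "surround", kernel size and constants are purely illustrative assumptions. It only shows how reporting contrast against a local average, rather than absolute intensity, makes the output nearly independent of overall scene brightness.

```python
# Minimal sketch (illustrative assumptions, not Mahowald's actual design):
# each "cell" reports how much brighter it is than a local average of its
# neighbors, instead of its absolute intensity.
import numpy as np

def retina_response(image, surround_radius=2):
    """Relative-brightness map: log intensity minus a local box average."""
    log_i = np.log(image)                      # photoreceptor-like compression
    padded = np.pad(log_i, surround_radius, mode="edge")
    surround = np.zeros_like(log_i)
    k = 2 * surround_radius + 1
    for dy in range(k):                        # box average stands in for the
        for dx in range(k):                    # inhibitory "horizontal" network
            surround += padded[dy:dy + log_i.shape[0], dx:dx + log_i.shape[1]]
    surround /= k * k
    return log_i - surround                    # output is contrast, not level

# A dim scene and the same scene 100x brighter give nearly the same output,
# which is the dynamic-range point the review makes.
scene = 0.1 + np.random.rand(8, 8)
print(np.allclose(retina_response(scene), retina_response(100 * scene)))
```

In the analog version described in the book, the local average comes from a resistive network rather than a software loop, which is where the efficiency argument comes from.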

I thought the neuromorphic computing concept was interesting as a "could have been". Unfortunately it was poorly timed, because conventional computers were improving exponentially while the pace of basic research is slow. Mahowald started her project around 1982 and finished her dissertation in 1992. The company Synaptics that Mead co-founded started around 1986, but already in 1987 his co-founder Federico Faggin had to tell their investors that "it's going to take a long time, this neural stuff. Not a lot of money, but a lot of time". They spent the next few years in "research mode", basically acting as a part of Mead's Caltech research group and hiring a few people who could equally well have been his PhD students and done the same style of work. Then they finally made a concerted push to collaborate with other companies and develop commercial applications, and in 1991-92 they developed a scanner/OCR device to read account numbers printed on checks. Technically, the chip they designed is quite remarkable compared to what modern machine learning is like: it implemented a support vector machine as an analog network of transistors, with the weights programmable by the amount of charge on disconnected transistor gates. However, it became clear almost immediately that such analog OCR technology had no chance of being commercially successful, because it had become cheap to implement the same thing in software with more flexibility and less risk of hardware problems in the field. Synaptics eventually did become a successful company by pivoting to making trackpads for laptops, which feels a bit anticlimactic.
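As a rough software analogue of the weight-storage idea (and of why software ate this niche), here is a toy sketch. It is a perceptron-style linear classifier, not the SVM-like analog network the chip actually implemented, and the sizes, learning rule and data are made up for illustration; the point is only that "weights as charge on floating gates" maps onto an ordinary weight vector once you do it in software.

```python
# Toy sketch: weights stored as "charge" values, nudged in small increments
# the way charge would be injected onto or removed from a floating gate.
# Perceptron-style rule and random data are illustrative, not the chip's design.
import numpy as np

rng = np.random.default_rng(0)
charge = np.zeros(16)                    # "charge" on 16 floating gates = weights

def classify(x):
    return 1 if charge @ x > 0 else 0    # analog summing node feeding a threshold

def program(x, label, step=0.05):
    """Nudge the stored charge toward the correct classification."""
    global charge
    charge += step * (label - classify(x)) * x

# Trivially separable toy problem: the label is the sign of the first input.
for _ in range(300):
    x = rng.normal(size=16)
    program(x, 1 if x[0] > 0 else 0)

tests = rng.normal(size=(200, 16))
accuracy = np.mean([classify(x) == (1 if x[0] > 0 else 0) for x in tests])
print(f"toy accuracy: {accuracy:.2f}")
```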

Nowadays large neural network models are getting very expensive and it seems we could really use an efficiency boost. But I fear we have missed our chance, because of another trend which is also visible in the book: silicon manufacturing has become inaccessible. In the 1980s and 90s, the U.S. was still the main country manufacturing integrated circuits, and as late as the mid-90s Synaptics could partner with a large foundry, National Semiconductor, which had the labs to manufacture all their experimental devices. For universities, there was the DARPA-funded MOSIS service to make a small production run of anything you asked them to, either for a fee for experimental designs, or for free for undergraduate class projects. Nowadays you see essentially no research and no startups about novel forms of integrated circuits. The problem, of course, is not that we have forgotten the secret of silicon production, but that we have become too good at it, so no new design can compete with the extremely optimized existing products.

This is something I have been thinking about lately. After all the experimentation of the 1970s and 80s, in the year 1990 a kind of phase transition suddenly froze a big segment of the tech stack into place, from bottom to top:

•People used to predict that silicon would be replaced by gallium arsenide, which has better performance. In 1988 Seymour Cray decided that the time was finally ripe to make the jump for his new Cray-3 machine (planned to be available in 1991). But the project foundered, and computers have been made from silicon ever since.
•In the 60s and 70s we went through a dozen logic families. Then in the second half of the 80s CMOS became dominant, and we're still using it.
•There used to be a lot of new exciting concepts at the gate level, like the neuromorphic computing described above, but also e.g. clockless circuits. Wikipedia tells me that Caltech made a clockless microprocessor in 1988, which clearly was too close to the new decade to get anywhere.
•The 1980s had diversity in instruction sets, from Lisp machines on one end of the spectrum to the RISC revolution on the other. Now only x86 and ARM remain.
•Rob Pike singled out 1990 as the year people stopped writing new operating systems.
•Japan's Fifth Generation Computer project sums up the entire trend: it was launched to grand fanfare in 1982 and then ignominiously shut down in 1992 when it became clear that there would be no fifth generation.

Thirty years later we are still using the descendants of whatever computing paradigms happened to be most cost effective in 1989, because we have invested too much to abandon them, but presumably not all these choices are still optimal.

The second half of the book illustrates this problem. Mahowald's "silicon retina" had used transistors as photodetectors, and Mead together with a small group of researchers at Synaptics continued to pursue this technology as an alternative to CCD photodetectors. (I guess a prerogative of being a co-founder is that you get to work on your pet projects.) In 1996 they founded a new spin-off company called Foveon to develop this, with investments from Synaptics and National Semiconductor.

An image sensor using CMOS-process transistors can free-ride on all the investment in improving microprocessor production, so eventually it should get much cheaper and higher-resolution than CCDs. (This much is true; nowadays digital cameras have all switched over to CMOS.) But Foveon struggled to find a market niche. In 1997 they created a 9-megapixel sensor, which only captured greyscale images but had much higher resolution than commercially available competitors. They made a small run of expensive studio cameras (priced around $70k) that used a prism to split light onto three sensors to capture high-resolution images with very good color reproduction. Then in 2000, they made a new invention: by stacking three transistors on top of each other, one can take advantage of the wavelength-dependent absorption of light in silicon to capture color information. Unlike other mass-market cameras, which put a color filter on top of the sensor, the new Foveon sensor could capture three color channels in every pixel, effectively doubling the resolution. Foveon eventually partnered with a small camera manufacturer, Sigma, to put out a line of digital SLR cameras based on the new technology.
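A back-of-the-envelope sketch of the physics being exploited: shorter wavelengths are absorbed nearer the silicon surface, so three stacked collection depths see three different mixtures of red, green and blue, which a small matrix inversion can then unmix. The absorption lengths and layer depths below are rough order-of-magnitude guesses for illustration, not Foveon's actual geometry.

```python
# Rough sketch of depth-dependent color separation in silicon
# (illustrative numbers, not Foveon's real design).
import numpy as np

absorption_length_um = {"red": 3.0, "green": 1.0, "blue": 0.2}  # order of magnitude
layer_bounds_um = [0.0, 0.3, 1.0, 3.0]        # three stacked collection layers

def fraction_absorbed(colour, top, bottom):
    """Beer-Lambert: fraction of incident light absorbed between two depths."""
    L = absorption_length_um[colour]
    return np.exp(-top / L) - np.exp(-bottom / L)

# Rows: layers (top to bottom); columns: incident colours (R, G, B).
mix = np.array([[fraction_absorbed(c, layer_bounds_um[i], layer_bounds_um[i + 1])
                 for c in ("red", "green", "blue")] for i in range(3)])
print("layer response to pure R/G/B:\n", mix.round(2))

# Each layer sees all three colours, but the mixtures differ enough to invert;
# that inversion is how a full colour value is recovered at every pixel.
true_rgb = np.array([0.8, 0.4, 0.1])
layer_signals = mix @ true_rgb
print("recovered:", np.linalg.solve(mix, layer_signals).round(2))
```

The contrast with a color-filter (Bayer) sensor is that there each pixel location records only one of the three channels, and the other two must be interpolated from neighbors.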

To cut a long story short, this did not work. Camera sensor manufacturers like Sony and Panasonic had enough capital to outperform the Foveon sensor by scaling down their existing pixels while still using color filters. By the time this book was written (2005) you could see the writing on the wall, which provides an interesting tension with the genre requirements: after 20 chapters about how every person in Foveon is a childhood prodigy and scientific genius and business visionary, it suddenly switches track to apologetically note how difficult it is to disrupt a mature industry.

The book still ends on a happy note. The final chapter tells us that if Foveon succeeds in selling their cameras, that is merely the first step towards the technological singularity. Every room will be full of dirt-cheap camera sensors, backed by human-like neural networks, and connected by fiber-optic cables to envelop the entire planet in super-human artificial intelligence. The back of the book has a blurb by Ray Kurzweil, of all people. Isn't it interesting that the key singularity texts (Vinge etc) were written in the early 1990s, just after the computer tech progression had started to settle down?
Asif Khan
15 reviews, 1 follower
May 13, 2017
A mix of a biographical account of Carver Mead and a class of his students, the "beginning" of neuromorphic electronics, and the journey of Foveon, a silicon imager. The story ends in 2005; there have been many more developments regarding Foveon since then. Very good read; as a researcher, I think everyone who intends to work in the field of brain-like computing should read this book.
5 reviews
April 14, 2025
This started off interesting, talking through the beginnings of a camera chip that revolutionized the industry and the mechanics of how it worked. Then it kinda devolved into talking through tech that never really made it.
5 reviews
Read
September 22, 2010
George Gilder introduces a detective-story narrative to explain to us the development of the eye for artificial-intelligence machines.
Jim Mccormick
28 reviews, 2 followers
January 9, 2013
Fascinating story. The last couple of chapters are mostly marketing of a technology that doesn't seem to have much traction.
