A lot of things are similar, but some are really very different. The most fundamental problem is that an image sensor combines both analog and digital design elements, and they are very hard to combine on one chip. The analog part is what interacts with the incoming light, and that is where some very fundamental laws of optics interfere with the drive toward miniaturization. On top of that, some of the same process characteristics that make digital circuitry excellent tend to really fubar the analog side. That is one of the reasons we are seeing stacked chips everywhere now: high-quality analog and high-quality digital require different processes.
The situation with microbolometers is even more distinct, since they work differently from every other kind of image sensor out there. There are several excellent efforts under way to firmly decouple the ROIC from the limitations of the sensor layer in microbolometer imagers. Hopefully, they will bear fruit in the foreseeable future.
My field of expertise is more in electro-optics than in classical optomechanical systems. I used to work on cooled image sensors for space programs, then worked on early microbolometer imaging systems and on means of calibrating and characterizing them, then spent some time developing CMOS image sensors, dabbled with movie cameras, and eventually transitioned to working on test systems for all of the above and a bunch of other electro-optical devices.
Ilya