Phase detect AF on mirrorless
#1
Hi



Phase detect AF on mirrorless cameras seems to be gaining traction, except on µFT. Since I am mostly interested in µFT, it has sort of bypassed me so far.



Does anyone have an idea how that works? I understand some pixels are devoted to AF. Are these still used for the image as well, or are there holes in the image that need to be interpolated away, similar to the interpolation used to map out dead pixels?



Any good diagrams out there on how this works?



Thanks

Joachim
enjoy
#2
Joachim,



From memory, Fuji was the first to introduce it, on its F300EXR back in 2010. I might be wrong, but it's a good starting point perhaps:



http://www.dpreview.com/news/2010/8/5/fujifilmpd



Doesn't seem trivial to master when considering how poorly Canon has done with its implementation so far.



I'm hoping Olympus will get to it in its next gen. It would tick one of the last checkboxes for subject tracking.
#3
[quote name='Sylvain' timestamp='1359798105' post='21707']

Joachim,



From memory, Fuji was the first to introduce it, on its F300EXR back in 2010. I might be wrong, but it's a good starting point perhaps:



http://www.dpreview.com/news/2010/8/5/fujifilmpd



Doesn't seem trivial to master when considering how poorly Canon has done with its implementation so far.



I'm hoping Olympus will get to it in its next gen. It would tick one of the last checkboxes for subject tracking.

[/quote]

Hi,

Very interesting article, it's surely the way forward for mirrorless and maybe even DSLRs!



Dave's clichés
#4
[quote name='joachim' timestamp='1359793754' post='21705']

Hi



Phase detect AF on mirrorless cameras seems to be gaining traction, except on µFT. Since I am mostly interested in µFT, it has sort of bypassed me so far.



Does anyone have an idea how that works? I understand some pixels are devoted to AF. Are these still used for the image as well, or are there holes in the image that need to be interpolated away, similar to the interpolation used to map out dead pixels?



Any good diagrams out there on how this works?



Thanks

Joachim

[/quote]



Yes, these are interesting questions that I posed elsewhere in a different context as well. What happens with these pixels: are they mapped out, or, if not, does the half-exposed pixel still carry usable information? Simplistically, the random distribution of the points across Canon's sensor suggests they are simply mapped out.

The reason the issue came up elsewhere was the question of why Canon didn't allow the high-magnification zoom during video that the previous model had. Well, the only difference is the sensor. If you zoom in during video to "pixel peeping" levels, you may suddenly notice your "bad" pixels, and you may not have enough processing power or speed during video to map them out.



The Nikon implementation apparently has rows of phase detect pixels on the sensor, more in line with the phase detectors behind the mirror of a DSLR. Again, they must be mapped out somehow.
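To make the dead-pixel analogy from the first post concrete, here is a minimal sketch of what "mapping out" could look like. The function, the neighbour choice and the Bayer-spacing assumption are my own illustration, not any manufacturer's published pipeline:

```python
import numpy as np

def interpolate_pdaf_sites(raw, pdaf_mask):
    """Replace each PDAF site with the mean of its nearest same-row,
    same-colour neighbours, much like mapping out a dead pixel.
    raw: 2-D array of sensor values; pdaf_mask: True where a pixel is a PDAF site."""
    out = raw.astype(float).copy()
    for r, c in zip(*np.nonzero(pdaf_mask)):
        # two columns away keeps us on the same colour in a Bayer pattern
        neighbours = [raw[r, cc] for cc in (c - 2, c + 2)
                      if 0 <= cc < raw.shape[1] and not pdaf_mask[r, cc]]
        if neighbours:
            out[r, c] = np.mean(neighbours)
    return out

# tiny demo: one pretend PDAF site in a 5x5 patch
raw = np.arange(25, dtype=float).reshape(5, 5)
mask = np.zeros((5, 5), dtype=bool)
mask[2, 2] = True
print(interpolate_pdaf_sites(raw, mask)[2, 2])   # (raw[2, 0] + raw[2, 4]) / 2 = 12.0
```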



I'll try to find the web site that shows the sensors.



Maybe the reason the Canon implementation doesn't work well is the single, randomly placed dots. That sounds like it's not going to give high precision, just some approximation, which is then supplemented with contrast detect. This fits with how it's described as working.
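As a sketch of how such a hybrid scheme could be wired up (the lens model, step sizes and function names here are invented for illustration, not Canon's actual algorithm): the phase-detect reading gives a single coarse jump with a direction, and contrast detect then hill-climbs around it.

```python
def hybrid_autofocus(estimate_defocus_pd, contrast_at, start_pos, fine_range=5):
    """Hypothetical hybrid AF loop.
    estimate_defocus_pd(pos) -> approximate focus error in lens steps (sign = direction)
    contrast_at(pos)         -> contrast score of the image at a given lens position"""
    # 1. Phase detect: one measurement says roughly how far to move and which way.
    coarse_pos = start_pos + estimate_defocus_pd(start_pos)

    # 2. Contrast detect: a small search around the coarse position to nail the peak.
    best_pos = max(range(coarse_pos - fine_range, coarse_pos + fine_range + 1),
                   key=contrast_at)
    return best_pos

# toy usage with a made-up lens whose true focus is at position 40
true_focus = 40
pd = lambda pos: int(0.9 * (true_focus - pos))   # PD estimate is only approximate
cd = lambda pos: -abs(true_focus - pos)          # contrast peaks at the true focus
print(hybrid_autofocus(pd, cd, start_pos=10))    # -> 40
```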





Edit: found the teardown web site now.

http://www.chipworks.com/blog/recenttear...-t4i-dslr/



shows how the pixels look.
#5
The article about Fuji states that they get occluded illumination but that they are indeed exposed, so could it be that they apply a local signal boost to these pixels? And do they get a color filter?
#6
[quote name='Sylvain' timestamp='1359852569' post='21717']

The article about Fuji states that they get occluded illumination but that they are indeed exposed, so could it be that they apply a local signal boost to these pixels? And do they get a color filter?

[/quote]





Ah right, he does say: "And, he says, they don't simply go to waste when taking pictures: 'sometimes they are used to compose image data and sometimes not, depending on the situation.'" I guess you can use the information if the pixel is in focus, but if a pixel is not in focus, you don't know what the other half would have looked like, so maybe those are the ones that are mapped out. In the Fuji design you have pairs of sensels, so presumably if a pair has the same signal it counts as matched.
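A toy version of that pairing idea, purely illustrative (the signal model and sign convention are made up): take the line of left-masked sensels and the line of right-masked sensels, slide one against the other, and the shift with the best match is the phase difference; a shift of zero means the pair matches, i.e. the subject is in focus.

```python
import numpy as np

def phase_shift(left, right, max_shift=8):
    """Find the pixel shift that best aligns the left-masked and right-masked
    sensel signals. A result of 0 means the two half-images already match (in focus);
    the sign and size of the shift say which way and how far the focus is off."""
    best_shift, best_err = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        l = left[max(0, s): len(left) + min(0, s)]
        r = right[max(0, -s): len(right) + min(0, -s)]
        err = np.mean((l - r) ** 2)          # mean squared mismatch after shifting
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift

# toy scene: a single edge, defocused so the two half-views are displaced by 3 pixels
scene = np.concatenate([np.zeros(20), np.ones(20)])
left, right = scene[:-3], scene[3:]
print(phase_shift(left, right))   # -> 3
```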
#7
Hi Everyone,



Thanks for writing in. It's a bit clearer now what it is, but not entirely clear; the companies are not really disclosing how it works. Judging by the remark in the interview, "we use several tens of thousands of pixels in the center area of a CCD", I assume Fuji is simply interpolating them out quite often.



Has anyone seen any sort of performance comparison showing how much the PD helps over a CD system?



Best wishes

Joachim
enjoy
  

