You want a simple answer to a complex question. And first you have to separate the issue of optic resolution from colour perception.
Speaking crudely, colour perception takes place in V4, a higher cortical region of the brain. It is not until a long way up the processing hierarchy that the responses of neurons correlate with perceived hue rather than raw wavelength information.
By this stage the brain is in fact discounting actual wavelength information, as illustrated by the Land effect - http://en.wikipedia.org/wiki/Color_constancy
Your eyes see one thing, and your brain corrects so you see the hue that is "really there".
Resolution is a second issue - at what point does vision fuse/discriminate these "pixels of information". And again the answer is as much psychological as mechanical. See for example phi effects - http://en.wikipedia.org/wiki/Phi_phenomenon
Visual experience is surely the single most complicated process in the known universe. In your first five minutes of studying psychophysics, you should learn that the eye is just like a camera. Then you should spend the rest of your career understanding all the ways it is in fact not.
But essentially, the blending of wavelength information is not a matter of shedding information (mechanical blurring due to lossy resolution) but of synthesising a rich experience from a surprisingly narrow sampling of the available information.
And it is a beautiful case of less is more.
Well, the simplest case is single cone vision – monochromacy – which gives us 200 shades of gray. We can distinguish that many luminance levels.
And two is better. Dichromacy, employing a long-wave and a short-wave cone, swells our visual experience geometrically to about 10,000 distinguishable shades.
But three gives us a vast range of easily discriminated hues. Trichromacy, the addition of a third cone type and with it a red-green opponent channel, multiplies the total number of distinguishable shades to several million.
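The geometric growth above can be sketched with a toy calculation. This is only an illustrative model, not a claim about actual retinal coding: it assumes each independent channel resolves roughly the same number of just-noticeable levels (the figure of ~100 levels per channel is an assumption chosen to make the arithmetic visible), so adding a channel multiplies rather than adds to the number of distinguishable shades.

```python
# Toy model of why extra cone channels multiply, not add, to
# the number of distinguishable shades.
# ASSUMPTION: ~100 just-noticeable levels per independent channel
# (an illustrative round number, not a measured value).
LEVELS_PER_CHANNEL = 100

for n_channels, name in [(1, "monochromacy"), (2, "dichromacy"), (3, "trichromacy")]:
    shades = LEVELS_PER_CHANNEL ** n_channels
    print(f"{name}: ~{shades:,} distinguishable shades")
```

With that assumption the model lands in the same ballpark as the figures above: hundreds of shades for one channel, on the order of 10,000 for two, and around a million for three. The real psychophysical counts differ because the channels are not fully independent, but the multiplicative shape of the growth is the point.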