
AI in machine vision. Why image quality matters more than ever.

AI in machine vision is not a new topic. Anyone who, like us, has been developing machine vision solutions for many years knows these situations all too well.

Standing at the production line, staring at a dozen parameters, tweaking two values in the third decimal place until well past midnight, always hoping that this time it will finally run stable.

[Photo: the founders of phil-vision, Gregor Philipiak and Patrick Gailer]

Those days seem to be largely over, at least in many applications.

Is that a good thing? We would say yes.

Even though today we often cannot transparently trace which image features a model weights, and how strongly, in a given case.

AI can do a lot. But it cannot perform magic.

Why image data quality determines AI performance

Despite all the progress, an old principle still applies:
Garbage in, garbage out.

An AI model can only be as good as the image data it receives. This inevitably raises the question:

How do I optimize my image acquisition to create the foundation for stable, reliable results in the first place?

The foundation: Illumination, optics and camera as a system

What really matters is the selection and interaction of the components. Illumination, optics, and camera cannot be considered independently of each other.

At the beginning, we therefore ask a few fundamental questions:

  • How do the features relevant for inspection appear, and how large are they?
  • How are these features spatially distributed?
  • What optical properties do the features and the carrier material have, for example glossy, matte, transparent, or diffusely reflective?
  • Is there a single type of feature or multiple ones?
  • Is the carrier material homogeneous, or does it vary significantly, possibly even from image to image?

The answers to these questions form the basis for every further technical decision.

Feature size defines resolution, optics, and camera setup

In this first article of the series, we deliberately focus on one aspect:

The size of the relevant features and what this means in very concrete terms for resolution, optics, and camera selection.

Feature size determines the required local resolution. And with that, it directly defines the camera and lens setup.

Focal length, magnification, and working distance are not secondary decisions, but central design parameters.
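The chain from feature size to optical design can be sketched as a simple calculation. The formulas below use the standard thin-lens approximation; the pixel pitch and focal length are illustrative assumptions, not recommendations for a specific setup.

```python
# Sketch: how feature size drives resolution, magnification, and
# working distance (thin-lens approximation). All concrete numbers
# below are illustrative assumptions.

def required_object_resolution(feature_um: float, pixels_per_feature: float = 3.0) -> float:
    """Object-side resolution (µm per pixel) from the smallest relevant feature."""
    return feature_um / pixels_per_feature

def magnification(pixel_pitch_um: float, object_res_um: float) -> float:
    """Optical magnification needed to map the object resolution onto the sensor."""
    return pixel_pitch_um / object_res_um

def working_distance_mm(focal_length_mm: float, mag: float) -> float:
    """Approximate object distance for a thin lens: d = f * (1 + 1/m)."""
    return focal_length_mm * (1.0 + 1.0 / mag)

res = required_object_resolution(10.0)   # 10 µm feature -> ~3.33 µm per pixel
m = magnification(3.45, res)             # assuming a 3.45 µm sensor pixel pitch
d = working_distance_mm(50.0, m)         # assuming a 50 mm lens
print(f"{res:.2f} µm/px, magnification {m:.3f}, object distance ≈ {d:.0f} mm")
```

Changing any one input, say, the smallest feature size, propagates through the whole chain, which is exactly why focal length, magnification, and working distance are central design parameters rather than afterthoughts.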

Why resolution is not a camera specification

Practical example - detecting small scratches:

If small scratches on a car’s headlight are to be detected, with a minimum width of around 10 µm, it will not be possible to reliably inspect the entire headlight using a single 5 MP camera image.

For robust solutions, a simple rule of thumb applies:
Depending on the feature characteristics, at least two to three pixels should cover the smallest relevant structure.

In this example, that would correspond to an object resolution of approximately 3.3 µm per pixel. Depending on part size and cycle time, this points more towards a line-scan setup or a tiled, multi-image strategy rather than a classic single-shot approach.
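A quick feasibility check makes the point concrete: at roughly 3.3 µm per pixel, how many single-shot images would a 5 MP camera need to cover the whole part? The headlight dimensions, sensor format, and overlap factor below are illustrative assumptions.

```python
import math

# Rough feasibility check for the scratch example: how many camera
# positions (tiles) would a tiled single-shot approach need?
# Part dimensions, sensor format, and overlap are assumed values.

def tiles_needed(part_mm: float, sensor_px: int, object_res_um: float,
                 overlap: float = 0.1) -> int:
    """Number of image tiles along one axis, with fractional tile overlap."""
    fov_mm = sensor_px * object_res_um / 1000.0   # field of view of one shot
    effective_mm = fov_mm * (1.0 - overlap)       # usable width after overlap
    return math.ceil(part_mm / effective_mm)

object_res = 10.0 / 3.0                           # ~3.33 µm/px from the rule of thumb
nx = tiles_needed(300.0, 2448, object_res)        # assumed 300 mm part width, 5 MP sensor (2448 x 2048)
ny = tiles_needed(150.0, 2048, object_res)        # assumed 150 mm part height
print(nx * ny, "images for full coverage")
```

With these assumptions the tile count runs into the hundreds, which is why cycle time quickly pushes such inspections towards a line-scan setup or a carefully planned multi-image strategy.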

Outlook: Additional parameters for robust AI vision systems

This still leaves several exciting parameters to be addressed in future articles.

  • The distribution and contrast of features.
  • The optical properties of both features and carrier material.
  • And finally, the question of how to achieve true process reliability when dealing with inhomogeneous or changing surfaces.

In practice, these are often the factors that determine whether an AI model works somehow, or whether it runs stably and reliably in the long term.

Stable AI results do not start with training, but with image acquisition.

If you would like to systematically review or optimize your image acquisition,
we are happy to support you from physical design through to a robust solution.