Classification: one label per region

A classifier looks at a cropped region and answers a single question: which bucket is this? Pass or fail. Empty or full. Red, green, or blue. It's the simplest, fastest inspection type, and the right default unless you need to know where a defect is.

The setup for a classifier is built on a simple hierarchy: Types own Classes, and Types are stamped onto images as ROIs.

Think of it like a rubber stamp system. The Inspection Type is the stamp itself (the design). The ROIs are the marks you make on the page (the image). Every mark inherits the same design, the same class list, and the same trained model.

The three concepts

Before you can train an AI model, the camera needs to know where to look, what the possible outcomes are, and how each crop should be evaluated. Those three questions map to three concepts, and they nest in a very specific way.

01 / Concept

Inspection Type

A named bucket. Holds one shared dataset, one shared list of classes, and one trained AI model.

02 / Concept

Classes

The vocabulary of possible outcomes for this inspection. Defined once on the type.

03 / Concept

Region of Interest (ROI)

A rectangle drawn on the image. Many ROIs can share one type and, therefore, one dataset and one model.

The Mental Model: a type owns the classes and the ROIs

An Inspection Type is a bucket. It holds one shared dataset, one shared list of classes, and one trained AI model. You then stamp that bucket onto the image in multiple places; those stamps are the ROIs. Every ROI of the same type inherits the same classes and is evaluated by the same model.

[Diagram: one Inspection Type bucket (1 dataset · 1 model · N ROIs). Classes (class_a, class_b, class_c) are defined once on the type: add or edit a class there and every ROI on the type updates, a single source of truth. ROIs (roi_1 … roi_5) are placed on the image; each is a tight crop at a specific location, evaluated by the same model, and all share one dataset.]
  • Inspection Type is the bucket.
  • Classes are the outcome vocabulary.
  • ROI is a location on the image.
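The direction of ownership in those three bullets can be sketched as a tiny data model. The names and fields below are hypothetical (the camera doesn't expose internals like this); the point is which object owns what:

```python
from dataclasses import dataclass, field

@dataclass
class InspectionType:
    """The bucket: owns the class list (and, on the camera, the dataset and model)."""
    name: str
    classes: list[str] = field(default_factory=list)

@dataclass
class ROI:
    """A rectangle on the image. It points at a type; it owns nothing else."""
    name: str
    inspection_type: InspectionType
    x: int
    y: int
    width: int
    height: int

    @property
    def label_options(self) -> list[str]:
        # ROIs never carry their own classes; they read the type's list.
        return self.inspection_type.classes

screws = InspectionType("Screws", classes=["present", "absent", "damaged"])
screw_1 = ROI("Screw_1", screws, x=40, y=60, width=60, height=60)
print(screw_1.label_options)  # ['present', 'absent', 'damaged']
```

Note that `ROI` has no `classes` field at all; that is the whole design.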

For a classifier: one label per ROI, picked from the class list

A classifier takes each ROI crop and asks a single question: which class does this look like? Present or absent. Pass or fail. Good, scratched, or cracked. The output per ROI is one class name plus a confidence score, a clean categorical answer the rest of your pipeline can act on.

Live example: PCB screw-presence check

Consider a PCB with six screw locations. Four screws are present, one is missing, one is damaged. You'd configure it like this:

[Diagram: OV-MCU · v2 board with six screw ROIs: Screw_1 present, Screw_2 absent, Screw_3 present, Screw_4 present, Screw_5 present, Screw_6 damaged.]
  • Type: Screws (6 ROIs, classifier)
  • Classes: present, absent, damaged
  • ROIs: Screw_1 through Screw_6, each labeled with one of the three classes

Output per ROI is a label plus confidence:

ROI        Label     Confidence
Screw_1    present   0.98
Screw_2    absent    0.94
Screw_3    present   0.97
Screw_4    present   0.96
Screw_5    present   0.95
Screw_6    damaged   0.82
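Downstream, that table is usually reduced to a single verdict. The function below is a hypothetical illustration of such a rule (on the camera, pass/fail logic lives in your IO Block rules, not code you write):

```python
results = {
    "Screw_1": ("present", 0.98),
    "Screw_2": ("absent", 0.94),
    "Screw_3": ("present", 0.97),
    "Screw_4": ("present", 0.96),
    "Screw_5": ("present", 0.95),
    "Screw_6": ("damaged", 0.82),
}

def verdict(results, ok_label="present", min_confidence=0.5):
    """Pass only if every ROI is confidently the OK class."""
    return all(
        label == ok_label and conf >= min_confidence
        for label, conf in results.values()
    )

print(verdict(results))  # False: Screw_2 is absent and Screw_6 is damaged
```

Because the classifier hands you clean categorical output, the rule stays a one-liner.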

Three things to internalize from this example:

  1. One label per ROI. The classifier assigns exactly one class to each crop: a dropdown pick, not a painting task. The output is class_name plus a confidence score.
  2. The type is the owner. All six ROIs share the Screws dataset. Capturing one image gives you six new training samples, and one trained model decides all six.
  3. ROIs are just locations. Draw tight (under 512 × 512 px). Use Duplicate to stamp Screw_1 → Screw_2 → Screw_3; each inherits the class list automatically.

OV80i supports both classifiers and segmenters

On the OV80i, a single recipe can mix multiple model types. Use a classifier for verdicts and known categories, then add a segmenter on top for pixel-level measurements. See Understanding Segmenter for the other half of the story.

Deep dive: how classes behave

Classes live on the type, not the ROI

This is the single most important thing to internalize. Classes are a property of the Inspection Type, which means adding or removing one changes the label options for every ROI that uses that type, automatically.

[Diagram: adding a new class, stripped_head, to the Screws type (present, absent, damaged, stripped_head). The new option appears in the label dropdown on Screw_1, Screw_2, Screw_3, and every other ROI of this type.]
  1. Define once. Click + Add class in the Classes panel. Give it a name (e.g. stripped_head) and a color.
  2. Propagates instantly. The new class appears in the dropdown on every ROI of that type. No per-ROI configuration.
  3. Relabel as needed. Existing training images keep their labels; you can revisit any image and reclassify to the new class.
  4. Keep it tight. Start with the smallest set of classes that captures your decisions. Two classes (pass / fail) often outperform five fuzzy ones.
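The fan-out in steps 1 and 2 is worth seeing concretely. A minimal sketch of that behavior, using plain dicts rather than the camera's actual internals:

```python
# The type holds the single source of truth for label options.
type_classes = {"Screws": ["present", "absent", "damaged"]}

# ROIs store only a reference to their type, never a class list of their own.
rois = {name: "Screws" for name in ("Screw_1", "Screw_2", "Screw_3")}

def label_options(roi_name: str) -> list[str]:
    return type_classes[rois[roi_name]]

# 1. Define once: add the new class on the type.
type_classes["Screws"].append("stripped_head")

# 2. Propagates instantly: every ROI's dropdown now includes it.
assert all("stripped_head" in label_options(r) for r in rois)
```

There is no per-ROI update step anywhere, which is exactly why there is no per-ROI configuration to forget.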

Deep dive: how ROIs behave

The Golden Rule of ROIs

Smaller regions win. Make each ROI just big enough to contain the feature. Smaller ROIs mean less training data, faster iteration, and more accurate AI decisions: the feature dominates the crop instead of getting lost in the background, and nothing gets downscaled.

Small, specific, and numerous

An ROI tells the camera where to crop. The tighter the crop, the clearer the signal the model gets. Because ROIs share a type, adding more of them multiplies your training data without multiplying your work.

[Diagram: a good, tight 60 × 60 px crop where the feature dominates, versus a too-large ROI where the screw is a speck and gets downscaled. Duplicate pattern: name the first ROI, click Duplicate, and Screw_1 → Screw_2 → Screw_3 … auto-increment, all sharing the same type, so the same classes, dataset, and model. 10 ROIs × 1 capture = 10 training samples.]
  1. Keep crops under 512 × 512 px. Anything larger is downsized to fit the model input, and detail is permanently lost.
  2. Tight is better. A small ROI around a single feature gives the model a clear signal and needs less training data to converge.
  3. Many ROIs, one type. 10 screws → 10 ROIs on the same Screws type. One capture becomes ten training samples, and one model handles all ten at inference.
  4. Use Duplicate. Name the first ROI meaningfully (Screw_Top_Left). Duplicate auto-increments names so you're not retyping.
  5. Need full coverage? Don't draw one giant ROI; tile a grid of small ones instead. Each preserves full resolution.
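For point 5, the tiling arithmetic is simple enough to sketch. The helper below is illustrative only (you would draw the ROIs in the UI); it splits a large area into the fewest tiles that each stay under the 512 × 512 px limit:

```python
import math

def tile_grid(width, height, max_size=512):
    """Split a region into the fewest tiles with every side <= max_size."""
    cols = math.ceil(width / max_size)
    rows = math.ceil(height / max_size)
    tile_w = math.ceil(width / cols)
    tile_h = math.ceil(height / rows)
    tiles = []
    for r in range(rows):
        for c in range(cols):
            x, y = c * tile_w, r * tile_h
            # Clamp the last column/row so tiles never overrun the region.
            tiles.append((x, y, min(tile_w, width - x), min(tile_h, height - y)))
    return tiles

# A 1200 x 800 px board area becomes a 3 x 2 grid of 400 x 400 px tiles,
# each kept at full resolution instead of one downscaled giant ROI.
print(len(tile_grid(1200, 800)))  # 6
```

Each tile would then be one ROI on the same type, so they all feed one dataset and one model.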

Data flow: every ROI takes its own trip through the model

At runtime the camera crops each ROI out of the full image, feeds it to the trained model individually, and records which class won along with a confidence score. The result is one label per ROI, every capture.

[Diagram: 1 · Capture: full frame plus 6 ROIs → 2 · Crop individually: Screw_1 … Screw_6, max 512 × 512 px each → 3 · Classifier model: the trained Screws model picks 1 class plus confidence → 4 · Output: one label and confidence per ROI; result FAIL.]
  1. Capture the full frame with all ROIs marked.
  2. Crop individually so each ROI becomes its own small image.
  3. Classifier model picks one class plus a confidence score for each crop.
  4. Output is a table of ROI → label → confidence. Pass/fail logic on top of that table is up to your IO Block rules.
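The four steps above can be sketched end to end. Everything here is a stand-in (a toy frame as a nested list, a dummy model in place of the trained one); the shape of the flow is the point:

```python
def run_inspection(frame, rois, classify):
    """Steps 2-4: crop each ROI out of the full frame, classify each crop
    individually, and collect one (label, confidence) pair per ROI."""
    table = {}
    for name, (x, y, w, h) in rois.items():
        crop = [row[x:x + w] for row in frame[y:y + h]]  # step 2: tight crop
        table[name] = classify(crop)                     # step 3: 1 class + confidence
    return table                                         # step 4: ROI -> (label, conf)

# Toy stand-ins: a 4 x 4 "image" and a dummy model.
frame = [[0] * 4 for _ in range(4)]
rois = {"Screw_1": (0, 0, 2, 2), "Screw_2": (2, 2, 2, 2)}
dummy_model = lambda crop: ("present", 0.99)
print(run_inspection(frame, rois, dummy_model))
```

Each ROI really does take its own trip: the model never sees the full frame, only the crops.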

Setup recap

A quick checklist before you train. If each of these is true, your classifier will have a solid foundation.

  • One Inspection Type per decision. Don't mix "screws" and "labels" in the same type; give each its own so each gets its own classes, dataset, and model.
  • Classes defined at the type level. Every ROI gets the same dropdown. If an option doesn't apply to every ROI, it probably belongs on a different type.
  • ROIs drawn tight and named descriptively. Screw_Top_Left beats New ROI. Keep every ROI just big enough for its feature, and under 512 × 512 px.
  • Alignment works first. If the part shifts or rotates, the aligner moves your ROIs with it. Tight ROIs only work when alignment is solid.
  • 3 to 5 training images per class to start. Train, find the failures, add targeted data, retrain. Two to four iterations is typical.
  • Every label double-checked. One mislabel in five training images is 20% corruption. Click View All ROIs before each train.

What's next