
First Recipe Creation

This deep dive explains what a Recipe is, outlines the differences between Classification and Segmentation, and provides step-by-step guidance on creating a Recipe. It also includes a detailed walkthrough of Imaging Setup configuration, Template Image capture and Alignment setup, ROI optimization, data collection and AI training, as well as image augmentation configuration.


Learning Objectives

By the end of this deep dive, you will understand:

  • what a recipe is
  • the difference between classification and segmentation, and when to use each
  • how to create a recipe
  • how to configure Imaging Setup
  • how to capture a Template Image and configure the Aligner
  • what ROIs (Regions of Interest) are and how to optimize them
  • data collection for AI training
  • recipe Testing and Validation

What is a Recipe?

  • A configured set of instructions that tells the camera how to inspect a specific part or product.
  • Defines camera settings, including exposure, focus, and lighting parameters for consistent image capture.
  • Includes processing logic such as ROI definitions, Aligner, classification, or segmentation classes.
  • Stores input/output configurations to integrate with automation systems for pass/fail or advanced signals.
  • Can be saved and reused to ensure consistent inspections across shifts, lines, or facilities.

Classification vs. Segmentation

Definitions

  • Classification: Identifying the type of object in the ROI
  • Segmentation: Locating and analyzing regions in the image/ROI
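The difference is easiest to see in the shape of the output: classification produces one decision for the whole ROI, while segmentation produces a decision at every pixel. A minimal sketch (the class names and the tiny pixel grid are invented for illustration):

```python
# Classification: one label for the entire ROI.
class_scores = {"good": 0.85, "scratch": 0.10, "dent": 0.05}
predicted_class = max(class_scores, key=class_scores.get)

# Segmentation: one label per pixel (a tiny 4x4 ROI; 0 = background, 1 = defect).
mask = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
defect_pixels = sum(sum(row) for row in mask)

print(predicted_class)  # a single verdict for the ROI
print(defect_pixels)    # defect located and counted at pixel level
```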

Examples

| Image Classification | Image Segmentation | Image Classification | Image Segmentation |
| --- | --- | --- | --- |
| What is a sheep? | Which pixels belong to which object? | Is this pizza acceptable or defective? | Where is each pepperoni? |
| Sheep classified | Sheep segmented | Pizza classified | Pizza segmented |

Key Comparison

| | Classification | Segmentation |
| --- | --- | --- |
| Speed | Depends on Imaging Setup and complexity; generally efficient and fast with simple setups. | Can be as fast as or even faster than classification when optimized, especially with streamlined models. |
| Accuracy | Good for overall pass/fail or part-type identification. | Higher accuracy for precise defect localization. |
| Complexity | Simple to set up and maintain; fewer parameters. | Complex; needs more data, labeling, and tuning. |
| Data Requirement | Low; needs fewer labeled images. | Moderate; requires many images with detailed, pixel-accurate annotations. |
| Use Cases | Part presence, orientation, basic quality checks, part inserted/not inserted, etc. | Surface defects, fine-feature inspection, multi-defect detection, counting, measurement, etc. |

Exporting and Importing Recipes

Use the Export Recipe button next to a Recipe to export an individual Recipe.

Export Recipe button

Use the Export button at the top of the screen to export multiple Recipes at once.

Export multiple Recipes button

Use the Import button at the top of the screen to import Recipes.

Import Recipe button

note

Remember: Each recipe supports only one inspection type at a time, either segmentation or classification. Choose the correct type before beginning your setup.

Imaging Setup

Focus

Focus settings in Imaging Setup

  • What it is: Adjusts the sharpness of the captured image.
  • How to use it: Slide until edges and details in the image look crisp and clear.
tip

Use a target object with clear edges (like a ruler or calibration card) when focusing.

Image Rotation

Image Rotation settings in Imaging Setup

  • What it is: Rotates the image (0° or 180°).
  • When to use it: If the camera is mounted at an angle but you want the image displayed the other way in the interface.
note

If you need to rotate the image by 90°, rotate the camera.

Exposure (ms)

Exposure settings in Imaging Setup

  • What it is: How long the sensor is exposed to light during image capture.
  • Effect:
    • Higher exposure → brighter images, but risk of motion blur.
    • Lower exposure → less light, but sharper images in fast-moving applications.
| Underexposed | Correctly Exposed | Overexposed |
| --- | --- | --- |
| Example of underexposure | Example of correct exposure | Example of overexposure |
tip

Exposure is logarithmic, and higher exposure means more latency (because more time is required for image capture).

Gain

Gain settings in Imaging Setup

  • What it is: Digitally brightens the image (like ISO on a camera).
  • Effect:
    • Higher gain → brighter image, but adds noise (grainy look).
    • Lower gain → cleaner image, but needs good lighting.
| High Gain | Low Gain |
| --- | --- |
| Example of high gain | Example of low gain |
| Brighter and noisier | Darker and less noisy |
tip

Only increase gain if adjusting exposure or lighting is not possible.
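As a rough sketch of why gain trades brightness for noise: gain multiplies every pixel value, so any sensor noise riding on the signal is multiplied by the same factor. The pixel values below are invented for illustration:

```python
def apply_gain(pixels, gain):
    """Digitally brighten 8-bit pixel values, clipping at the maximum of 255."""
    return [min(255, round(p * gain)) for p in pixels]

# A dim signal of about 40 counts carrying roughly +/-3 counts of sensor noise.
dark_row = [37, 40, 43, 40]
bright_row = apply_gain(dark_row, gain=4.0)

print(bright_row)  # the +/-3-count ripple has become +/-12 counts of visible grain
```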

Auto White Balance

Auto White Balance settings in Imaging Setup

  • What it is: Automatically adjusts color balance so whites appear white.
  • When to use it:
    • Ideal for environments with variable or shifting lighting conditions.
    • For stable setups, manual white balance provides more consistent and repeatable results.
note

To manually adjust white balance:

  • Turn ON the Auto White Balance toggle.
  • Place a white sheet of paper under the camera or in front of the lens.
  • Turn the toggle OFF to lock in the white balance setting.

Gamma

Gamma settings in Imaging Setup

  • What it is: Adjusts the brightness of mid-tones without affecting dark or bright areas too much.
  • Effect: Helpful for revealing details in shadows or reducing overly bright highlights.
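A small sketch of why gamma moves mid-tones while leaving the extremes nearly untouched, assuming the common power-law formulation (out = 255 · (in/255)^(1/γ)); the product's exact curve may differ:

```python
def apply_gamma(pixel, gamma):
    """Gamma-correct an 8-bit pixel value: out = 255 * (in / 255) ** (1 / gamma)."""
    return round(255 * (pixel / 255) ** (1 / gamma))

# gamma > 1 lifts mid-tones; pure black (0) and pure white (255) do not move.
for p in (0, 64, 128, 255):
    print(p, "->", apply_gamma(p, gamma=2.0))
```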

Lens Correction

Lens Correction settings in Imaging Setup

  • What it is: Corrects distortion from wide-angle lenses.
  • When to enable: If edges of the image look curved or distorted, toggle this ON for accuracy in alignment tasks.

LED Strobe Mode

LED Strobe Mode settings in Imaging Setup

  • What it is: Controls when the camera’s built-in LED light triggers.
  • Options:
    • Off: The LED stays on continuously.
    • On: The LED flashes only during capture, reducing reflections.

LED Light Pattern

LED Light Pattern settings in Imaging Setup

  • What it is: Selects which LEDs light up (e.g., all on, all off, left and right, top and bottom).
  • Use case: Adjust based on your lighting setup for optimal part illumination.
tip

Use directional patterns to reduce glare or reflections by turning off the LEDs that shine directly at reflective surfaces, while keeping angled light sources active for better visibility.

LED Light Intensity

LED Light Intensity settings in Imaging Setup

  • What it is: Adjusts how bright the LED lighting is.
  • Best practice: Start low and gradually increase to avoid glare or reflections.

Photometric Control

Photometric Control settings in Imaging Setup

  • What it is: Captures multiple images (typically four) with different directional lighting (left, right, top, and bottom) and then combines them into a single enhanced image.
  • Purpose: This technique reduces shadows and highlights subtle surface features by providing even, consistent illumination across the part.
  • When to use: Ideal for complex parts, highly reflective surfaces, or parts with uneven textures where standard single-light images may miss critical details.

Trigger Settings

Trigger Settings in Imaging Setup

Manual Trigger

  • What it is: Captures images when you press the button on the HMI screen.
  • Best for: Testing, setup, or manual inspections.

Hardware Trigger

  • What it is: Uses an electrical signal (e.g., from a sensor) to trigger the camera.
  • Best for: Automated lines where a sensor detects part presence.

PLC Trigger

  • What it is: Trigger signals are sent through industrial controllers (PLCs) for synchronized operation with other machines.
  • Best for: Fully automated systems requiring precise timing.

Aligner Trigger

  • What it is: Automatically triggers when the system detects part alignment in the field of view.
  • Best for: Applications where parts need consistent positioning before capture or when there are no other reliable triggers present.

Interval Trigger

  • What it is: Fires the camera at set time intervals.
  • Best for: Continuous processes or monitoring moving lines without part detection sensors.

Template Image and Alignment

Skip Aligner

Skip Aligner settings in Template Image and Alignment

  • What it is: Turns off the alignment step during inspection.
  • When to use: If the part is always in the same position and orientation in the image.

Template Regions

Template Regions settings in Template Image and Alignment

  • What it is: Defines the area(s) of the template image used for alignment.
    • Rectangle: Draw a rectangular region of interest.
    • Circle: Draw a circular region of interest.
    • Ignore Template Region: Exclude certain areas from alignment to avoid distracting patterns or irrelevant features.
  • Best use: Helps the system focus only on the most distinctive part features for accurate alignment.

Rotation Range

Rotation Range settings in Template Image and Alignment

  • What it is: Sets how much rotation (in degrees) the system will tolerate when matching the part to the template.
  • Example: Setting ±20° allows the part to rotate slightly but still be detected.
  • When to adjust: Increase if parts tend to rotate during production; decrease for highly consistent orientations.

Sensitivity

Sensitivity settings in Template Image and Alignment

  • What it is: Controls how finely the system looks for a match between the live image and the template.
  • Effect:
    • Higher sensitivity → detects more subtle details, useful for complex parts.
    • Lower sensitivity → reduces false matches but may miss fine features.

Confidence Threshold

Confidence Threshold settings in Template Image and Alignment

  • What it is: Sets the minimum confidence score required for the system to accept a detection.
  • Effect:
    • Higher threshold → fewer false positives but might miss borderline matches.
    • Lower threshold → more detections, but with increased risk of false positives.
tip

Start moderate and adjust based on test results.
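The threshold's effect can be sketched as a simple filter over match scores. The detections and confidence values below are invented for illustration:

```python
def filter_detections(detections, threshold):
    """Keep only detections whose confidence score meets the threshold."""
    return [d for d in detections if d["confidence"] >= threshold]

# Hypothetical match scores reported by the aligner.
detections = [
    {"part": "A", "confidence": 0.92},
    {"part": "B", "confidence": 0.61},
    {"part": "C", "confidence": 0.35},
]

print(len(filter_detections(detections, 0.5)))  # moderate threshold keeps 2
print(len(filter_detections(detections, 0.9)))  # strict threshold keeps only 1
```

Raising the threshold from 0.5 to 0.9 drops the borderline match, which is exactly the fewer-false-positives vs. missed-detections trade-off described above.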

Scale Invariant

Scale Invariant settings in Template Image and Alignment

  • What it is: Allows the system to detect parts that are slightly larger or smaller than the original template image.
  • When to enable: If part size may vary slightly due to positioning, distance changes, or manufacturing tolerances.

Live Preview Legend

Live Preview in Template Image and Alignment

1. A configurable bounding box that defines the specific region of the camera’s field of view (FOV) to monitor during triggering.

  • Purpose: Ensures the camera focuses only on the relevant area, ignoring unnecessary background regions.
  • Best use:
    • For moving objects, to guarantee the part stays fully within the detection area.
    • To optimize processing speed by reducing the amount of image data analyzed.

2. A visual red dot showing the center point of all defined ROIs (Regions of Interest) in the image.

  • Purpose: Helps you align and position the search region relative to the part or camera view.

3. A green line indicates that the object’s edge has been detected.

tip

If you see the line change to red, try increasing the ROI size, adjusting the ROI, or increasing the Sensitivity.

Example of edge detection

ROI (Region of Interest) Definition and Optimization

Inspection Types

Inspection Type settings in Inspection Setup

  • What it is: Defines the type of inspection being performed and groups similar ROIs (Regions of Interest).
  • Example: “Holes” for checking the presence, size, or quality of holes in a part.
  • Key features:
    • Add Inspection Type: Create new categories for different inspection requirements.
    • # of ROIs: Shows how many ROIs are currently assigned to that inspection type.

Transformation

Transformation settings in Inspection Setup

  • What it is: Adjusts the position and geometry of selected ROIs for precise alignment and placement.
  • Fields and their purpose:
    • Height/Width: Changes the size of the ROI.
    • X / Y: Moves the ROI’s position along horizontal (X) and vertical (Y) axes.
    • Angle: Rotates the ROI around its center.
  • Best use: Speeds up setup when you have repetitive patterns, like multiple identical holes.
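The Angle field rotates the ROI about its own center, which can be sketched with basic 2-D geometry. This is an illustration of the concept, not the product's internal implementation; the coordinates are made up:

```python
import math

def transform_roi(x, y, width, height, angle_deg):
    """Return the four corner points of an ROI rotated about its center."""
    cx, cy = x + width / 2, y + height / 2  # center of rotation
    a = math.radians(angle_deg)
    corners = [(x, y), (x + width, y), (x + width, y + height), (x, y + height)]
    rotated = []
    for px, py in corners:
        dx, dy = px - cx, py - cy  # offset from center
        rotated.append((
            round(cx + dx * math.cos(a) - dy * math.sin(a), 2),
            round(cy + dx * math.sin(a) + dy * math.cos(a), 2),
        ))
    return rotated

# A 40x20 ROI at (100, 50), rotated 90 degrees around its center.
print(transform_roi(100, 50, 40, 20, 90))
```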

Inspection Regions

Inspection Regions settings in Inspection Setup

  • What it is: A list of all ROIs defined in the template image.
  • Features:
    • Add Inspection Region: Create a new ROI manually.
    • Ignore Regions: Exclude specific regions from processing.
    • Edit: Save, delete, or cancel changes.
    • Lock Icon: Indicates locked ROIs that cannot be moved without unlocking.

Live Preview Mode

Live Preview Mode in Inspection Setup

  • What it is: Shows real-time feedback after adjusting or adding ROIs.
  • Use case: Great for fine-tuning ROI positions and sizes during setup.

Test Button

Test button in Inspection Setup

  • What it is: Runs backtesting on previously captured images to verify changes.
  • Use case: To compare current results with previous settings for accuracy and consistency.

Data Collection and AI Training

Define different inspection classes and label each ROI based on its designated inspection type (see the example below).

Example of defined inspection classes and labeled ROIs

Use the Annotation Tools to label/annotate the image. Use the Brush Class drop-down menu to select the class to annotate. The current limit is up to 10 classes per recipe for segmentation.

Example of annotated classes

Importance of good data

Examples of good and bad data

  • Garbage In, Garbage Out: AI models can only be as good as the data you feed them. Poor-quality or inconsistent data leads to inaccurate results.

  • Diversity Matters: Collect data that represents all real-world variations: different shifts, lighting conditions, part positions, and surface conditions.

  • Quality Over Quantity: A smaller, clean, well-labeled dataset will often perform better than a large but noisy or inconsistent dataset.

Annotation Basics

  • Classification: Tag entire images or ROIs as a specific class (e.g., “Good”, “Damaged”).
  • Segmentation: Brush over, outline, or highlight specific areas of interest with pixel-level accuracy (e.g., scratch location on a surface).
  • Consistency: Use consistent rules and definitions for labeling to avoid confusion during training.

Example of good annotations

Common Pitfalls

  • Insufficient Data: Too few samples will lead to underfitting, causing poor real-world performance.
  • Imbalanced Classes: Overrepresentation of one class (e.g., many “good” parts but few defective ones) skews the model.
  • Poor Labeling: Incorrect, inconsistent, or rushed labeling leads to significant accuracy drops.
  • Ignoring Environment Changes: Not updating the dataset when lighting, part orientation, or surface conditions change leads to drift in accuracy.
  • Not Validating Data: Skipping quality checks before training often results in wasted time and rework.

Data Augmentation

Image augmentations artificially modify your training images to improve the model's robustness. They simulate real-world variations like brightness shifts, rotations, or noise so the model performs well in different conditions.

Color Augmentations

Color Augmentation settings

Brightness

  • What it is: Adjusts how light or dark the image appears.
  • Use case: To handle slight changes in lighting during production.
tip

Use ±0.1 for stable setups; increase if lighting varies more.

Contrast

  • What it is: Changes the difference between light and dark areas.
  • Use case: Useful for parts with texture or varied surfaces to help the model adapt to visual differences.

Hue

  • What it is: Shifts the color tones slightly.
  • Use case: Good for setups where lighting color (e.g., LED temperature) might shift over time.

Saturation

  • What it is: Adjusts the intensity of colors.
  • Use case: Helps handle variations in illumination that make images appear duller or more vibrant.

Geometric Augmentations

Geometric Augmentation settings

Rotation Range

  • What it is: Rotates the image randomly within the set range (e.g., ±20°).
  • Use case: For parts that may come in slightly rotated positions.
tip

Avoid excessive rotation for parts that are usually fixed in orientation.

Flip

  • What it is: Flips the image horizontally, vertically, or both.
  • Use case: Helpful for symmetrical parts or when orientation may flip during handling.

Lighting & Color Simulation

Lighting & Color Simulation settings

Planckian

  • What it is: Simulates variations in color temperature (e.g., warm or cool lighting).
  • Use case: Handles different shifts or work cells with varying light sources.

Gaussian Noise

  • What it is: Adds subtle noise to the image.
  • Use case: Improves robustness if your production environment has small visual noise or camera sensor artifacts.

Motion Simulation

Motion Simulation settings

Motion Blur

  • What it is: Simulates slight blurring as if the part moved during capture.
  • Use case: Critical for high-speed lines where motion blur may occur.

Probability (prob)

Probability settings

  • What it is: Sets the likelihood of applying each augmentation during training.
  • Example: 0.50 = 50% chance of applying that change to any given training image.
tip

Start at 0.5 for most augmentations and adjust based on real-world variability.
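Conceptually, the probability value gates each augmentation per image, per training pass. A sketch with an invented horizontal-flip augmentation (the real system applies this internally during training):

```python
import random

def maybe_apply(image, augmentation, prob):
    """Apply an augmentation with probability `prob`; otherwise return the image unchanged."""
    return augmentation(image) if random.random() < prob else image

def flip_horizontal(image):
    """Mirror each row of a tiny image represented as nested lists."""
    return [row[::-1] for row in image]

random.seed(42)
image = [[1, 2], [3, 4]]

# At prob=0.5, roughly half of all training passes see the flipped version.
samples = [maybe_apply(image, flip_horizontal, prob=0.5) for _ in range(1000)]
flipped_share = sum(s == flip_horizontal(image) for s in samples) / len(samples)
print(flipped_share)  # close to 0.5
```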

Training Parameters (Segmentation)

Training parameters (also called hyperparameters) are the settings that control how a machine learning model learns from data.

Learning Rate

Learning Rate settings

  • Definition: Controls how quickly the model updates its internal weights during training.
  • Value (0.003): The higher the learning rate, the faster the model learns, but too high may cause instability or poor accuracy.
  • Slider Range: From 10^-4 (very slow) to 10^-1 (very fast).
tip

Usually, a value between 0.001–0.01 is a good starting point for segmentation tasks.
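The stability trade-off can be demonstrated on a toy problem. This is a sketch using plain gradient descent on f(x) = x², not the product's actual optimizer:

```python
def minimize(lr, steps=50, start=5.0):
    """Plain gradient descent on f(x) = x**2 (gradient 2*x); the minimum is at x = 0."""
    x = start
    for _ in range(steps):
        x -= lr * 2 * x  # the update step scales with the learning rate
    return x

print(abs(minimize(lr=0.01)))  # slow but steady progress toward 0
print(abs(minimize(lr=0.3)))   # fast, clean convergence
print(abs(minimize(lr=1.1)))   # too high: every step overshoots and the value diverges
```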

ROI (Region of Interest) size

Override ROI Size settings

  • Definition: Defines the size (width × height) of the image area used during training.
  • Unchecked: By default, the model automatically determines ROI based on your data.
  • When Checked: You can manually set the width and height if you need consistent input dimensions (for example, all images cropped to 256×256 pixels).
tip

Use a fixed size (e.g., 256×256) when your dataset has images of different sizes and you want consistent input for better stability, reproducibility, or to match a known model architecture.

Let it automatically choose when your data already has a uniform resolution or when you want the system to optimize for the best region of interest based on your dataset’s characteristics.

Number of Iterations (Epochs)

Number of Iterations (Epochs) settings

  • Definition: One epoch = one full pass through the entire training dataset.
  • Value (100): The model will train for 100 complete passes.
tip

Increasing this number usually improves accuracy up to a point but takes longer.

Rule of thumb: Monitor the training and validation loss during training. If the validation loss stops decreasing while training loss keeps dropping, it’s a sign the model is overfitting and you should stop training earlier.
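The rule of thumb above can be sketched as a simple early-stopping check; the loss values are invented for illustration:

```python
def early_stop_epoch(val_losses, patience=3):
    """Return the epoch to stop at: when validation loss has not improved for `patience` epochs."""
    best, best_epoch = float("inf"), 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, best_epoch = loss, epoch
        elif epoch - best_epoch >= patience:
            return epoch  # no improvement for `patience` epochs: stop here
    return len(val_losses) - 1

# Validation loss falls, then creeps back up while training loss keeps dropping.
val_losses = [0.9, 0.6, 0.45, 0.40, 0.42, 0.44, 0.47, 0.50]
print(early_stop_epoch(val_losses))  # stops at epoch 6 instead of running all 100 epochs
```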

Architecture

Architecture settings

  • Definition: Selects the size and complexity of the neural network.
  • Small: Trains faster and is often enough for most datasets. Ideal for quick experimentation or smaller datasets.
  • Larger models can capture more detail but may overfit on small datasets, while smaller models are more efficient and generalize better when data is limited.
tip

Start with Small; it’s often sufficient and helps you iterate faster before scaling up.

External GPU

External GPU IP Address settings

Contact Support to know more about External GPU.

Training Parameters (Classification)

Training parameters (also called hyperparameters) are the settings that control how a machine learning model learns from data.

Learning Rate

Learning Rate settings

  • Definition: Controls how quickly the model updates its internal weights during training.
  • Value (0.003): The higher the learning rate, the faster the model learns, but too high may cause instability or poor accuracy.
  • Slider Range: From 10^-4 (very slow) to 10^-1 (very fast).
tip

Usually, a value between 0.001–0.01 is a good starting point for classification tasks.

Validation Percent

Validation Percent settings

  • Definition: Defines what portion of your dataset will be set aside for validation (testing during training).
  • Purpose: Validation data helps monitor how well the model performs on unseen examples, preventing overfitting.
  • Range: 0–50%.
tip

Common choices are 10–20%.

If set to 0%, all data is used for training, which may improve training accuracy but makes it harder to detect overfitting.
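Conceptually, the split works like this (a sketch; the file names are made up and the product performs this step internally):

```python
import random

def split_dataset(images, validation_percent, seed=0):
    """Shuffle a dataset and split it into training and validation sets."""
    shuffled = images[:]
    random.Random(seed).shuffle(shuffled)  # shuffle so the split is representative
    n_val = int(len(shuffled) * validation_percent / 100)
    return shuffled[n_val:], shuffled[:n_val]

images = [f"img_{i:03d}.png" for i in range(100)]
train, val = split_dataset(images, validation_percent=20)
print(len(train), len(val))  # 80 20
```

The validation images are held out: the model never trains on them, so its score on them approximates real-world performance on unseen parts.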

ROI (Region of Interest) size

Override ROI Size settings

  • Definition: Defines the size (width × height) of the image area used during training.
  • Unchecked: By default, the model automatically determines ROI based on your data.
  • When Checked: You can manually set the width and height if you need consistent input dimensions (for example, all images cropped to 256×256 pixels).
tip

Use a fixed size (e.g., 256×256) when your dataset has images of different sizes and you want consistent input for better stability, reproducibility, or to match a known model architecture.

Let it automatically choose when your data already has a uniform resolution or when you want the system to optimize for the best region of interest based on your dataset’s characteristics.

Number of Iterations (Epochs)

Number of Iterations (Epochs) settings

  • Definition: One epoch = one full pass through the entire training dataset.
  • Value (100): The model will train for 100 complete passes.
tip

Increasing this number usually improves accuracy up to a point but takes longer.

Rule of thumb: Monitor the training and validation loss during training. If the validation loss stops decreasing while training loss keeps dropping, it’s a sign the model is overfitting and you should stop training earlier.

Architecture

Architecture settings

  • Definition: Selects the size and complexity of the neural network.
  • Small: Trains faster and is often enough for most datasets. Ideal for quick experimentation or smaller datasets.
tip

Start with Small; it’s often sufficient and helps you iterate faster before scaling up.

| Architecture | Description | Recommended Use |
| --- | --- | --- |
| ConvNeXt-Pico | Ultra-light model optimized for speed and low memory use. | Great for quick experiments or limited hardware. |
| ConvNeXt-Nano | Slightly larger than Pico; better accuracy with minimal added cost. | Good balance for small–medium datasets. |
| ConvNeXt-Tiny | Offers improved accuracy while still efficient. | Suitable for moderate datasets and longer training runs. |
| ConvNeXt-Small | Most capable variant in this list; higher capacity and accuracy. | Use for large datasets or when maximum performance is needed. |

External GPU

External GPU IP Address settings

Contact Support to know more about External GPU.