    How to Improve the Performance of Visual Anomaly Detection Models

By Awais · January 8, 2026

    1. Introduction: Why this article was created.
    2. Anomaly detection: Quick overview.
    3. Image size: Is a larger input size worth it?
    4. Center crop: Focus on the object.
    5. Background removal: Remove all you don’t need.
    6. Early stopping: Use a validation set.
    7. Conclusion

    1. Introduction

Authors in academia often use performance-boosting methods to make a proposed model stand out with more impressive results than competing models. For example, a larger input size helps detect small defects, and removing part of the background reduces false positives.

In academic work, this practice is questionable because it makes comparisons across models less fair, and the tuned settings might not transfer to other datasets. Applied carefully, however, the same methods can improve performance in practical applications. In this article, we review several of the most effective ones and explain how to use them to achieve better results while avoiding the potential downsides.

    2. Anomaly detection

Anomaly detection models are often called “unsupervised”, but the name can be misleading: most of them require exactly one class for training, namely normal images without defects. Training on a single class still requires the data to be labelled into classes, which differs from the usual definition of unsupervised learning.

Based on the normal images used during training, the model learns what “normality” looks like and should be able to identify deviations from it as images with defects. These defects are often small and hard to see, even for professional inspectors on a production line. The example below shows a drop of welding paste on one of the contacts, which is difficult to spot without the ground-truth mask on the right showing the defect location.
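The one-class idea above can be sketched in a few lines: fit simple statistics on normal samples only, then score new samples by how far they deviate. The scalar features and the threshold of 3 are made up for illustration; real models work on deep feature maps, not single numbers.

```python
# Minimal sketch of one-class anomaly scoring: learn statistics of
# normal samples, flag large deviations. Values are hypothetical.
from statistics import mean, stdev

def fit_normal_model(normal_features):
    """Learn what 'normality' looks like from defect-free samples only."""
    return mean(normal_features), stdev(normal_features)

def anomaly_score(x, mu, sigma):
    """Deviation from the learned normal statistics (a z-score here)."""
    return abs(x - mu) / sigma

normal = [0.98, 1.02, 1.00, 0.99, 1.01]   # features from normal images
mu, sigma = fit_normal_model(normal)

print(anomaly_score(1.00, mu, sigma) < 3)  # typical sample: not flagged
print(anomaly_score(1.50, mu, sigma) > 3)  # large deviation: flagged
```

Note that only normal data enters `fit_normal_model`; defective samples are never seen during training, which is exactly why the labelling-into-classes caveat above matters.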

    For more details on visual industrial anomaly detection, see this post or this survey.

    Image taken from the VisA dataset (CC-BY-4.0) and processed using the Anomalib library

    3. Image size

If images in your dataset have small defects (less than roughly 0.2% of the image area; this number is arbitrary and depends on the model and other factors) that the model cannot detect, try increasing the input size. It often helps to detect such defects by making them large enough for the model to see.

When large defects (10% of the image or more; this number is also arbitrary) are present, you should be more careful with model selection. Some models, like PatchCore, show better results across defect sizes with a larger input size, while others, like RD4AD, might degrade significantly on larger defects, as described in our benchmark paper (Tab. 5 and 14). The best practice is to test how the selected model performs on the different defect types you have.

Another important consideration when using a larger input size is inference speed and memory constraints. As shown in the MVTec AD 2 paper, Fig. 6, inference time and memory usage increase significantly for almost all tested models at larger input sizes.
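A quick back-of-the-envelope check helps make the 0.2% rule of thumb concrete: how many pixels does such a defect actually occupy at common input sizes? The defect fraction below is the arbitrary number from the text, not a recommendation.

```python
# How many pixels a small defect occupies at different input sizes.
# Doubling the side length quadruples the defect's pixel count,
# which is why upscaling helps the model "see" small defects.
def defect_pixels(defect_frac, side):
    """Approximate pixel count of a defect covering `defect_frac`
    of a square image resized to `side` x `side`."""
    return defect_frac * side * side

for side in (256, 512, 1024):
    px = defect_pixels(0.002, side)  # 0.2% of the image area
    print(f"{side}x{side}: ~{px:.0f} defect pixels")
```

The same quadratic scaling applies to compute and memory, which is the trade-off the MVTec AD 2 paper measures.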

    4. Center crop

    If you have data with an object at the center of an image, and the rest can be cropped safely, go for it. As shown in the image below, cropping closer to the inspected part helps to avoid false positives. An important side effect is that the relative size of the inspected part also increases; as described earlier, this might help you to obtain better results for small defects or increase inference speed by allowing you to make the image smaller.

    Image taken from the VisA dataset (CC-BY-4.0) and edited by the author
    Potential false positive circled in red

    Warning: Most popular datasets present a case in which the main object can be safely center-cropped, as shown in Fig. 2 here, or in the image above. For this reason, many original implementations of state-of-the-art methods include center crop augmentation. Using a center crop may be problematic in real-world applications with defects near the image edges; in that case, ensure that such cropping is disabled.
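As a minimal sketch of the operation itself, a center crop just slices a symmetric border off the image, mirroring what transforms like torchvision's `CenterCrop` do. The toy nested-list "image" below is for illustration only; as the warning above says, disable this step when defects can appear near the edges.

```python
# Center crop on a nested-list "image": cut equal borders from all
# sides so the inspected object fills more of the frame.
def center_crop(image, crop_h, crop_w):
    h, w = len(image), len(image[0])
    top = (h - crop_h) // 2
    left = (w - crop_w) // 2
    return [row[left:left + crop_w] for row in image[top:top + crop_h]]

img = [[r * 10 + c for c in range(6)] for r in range(6)]  # 6x6 toy image
cropped = center_crop(img, 4, 4)
print(len(cropped), len(cropped[0]))  # 4 4
print(cropped[0][0])                  # 11: one row and column in from the border
```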

    5. Background removal

Remove the background to reduce false positives even further. As with center cropping, make sure that anomalies or defects in the removed area cannot affect the quality of the produced part. Even if some part of the object has never shown defects in the past, do not remove it on that basis alone: defects can emerge there in the future, and you do not want to miss them.

    Image taken from the VisA dataset (CC-BY-4.0) and edited by the author
    Potential false positive circled in red
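One simple way to apply this idea is to suppress anomaly scores in the irrelevant region rather than edit the input image: multiply the per-pixel anomaly map by a binary object mask. The tiny map and mask below are hypothetical; in practice the mask would come from a fixed region of interest or a segmentation step.

```python
# Suppress anomaly scores outside the inspected object by masking
# the per-pixel anomaly map. Background scores become zero, so they
# can no longer trigger false positives.
def mask_background(anomaly_map, object_mask):
    """Keep scores only where object_mask is 1 (the inspected part)."""
    return [
        [score * keep for score, keep in zip(map_row, mask_row)]
        for map_row, mask_row in zip(anomaly_map, object_mask)
    ]

amap = [[0.9, 0.1],
        [0.2, 0.8]]
mask = [[1, 0],     # right column is background
        [1, 0]]
print(mask_background(amap, mask))  # [[0.9, 0.0], [0.2, 0.0]]
```

Masking the output instead of the input keeps the model's receptive field intact while still discarding background responses; either way, the caveat above applies to whatever region the mask removes.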

    6. Early stopping

Most anomaly detection models train for a fixed number of epochs, often tuned for popular datasets. Trying early stopping on your own data can avoid overfitting or speed up training with fewer epochs. Early stopping is sometimes misused by stopping on test-set performance, which makes reported results unrealistically good. Applied to a separate validation set, however, it can still yield a substantial improvement, as shown in Tab. 9 here.

    Warning: Some original implementations of state-of-the-art models may use early stopping on the test set or report the best results across all epochs based on test set performance. Look at the code before running it to ensure that you won’t have a model overfitting the test set with overly optimistic results.
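A minimal early-stopping loop driven only by a held-out validation metric might look as follows; the loss sequence and patience value are made up for illustration. Frameworks such as PyTorch Lightning provide an equivalent `EarlyStopping` callback, but the logic is the same.

```python
# Early stopping on a validation metric (never the test set): stop
# once the metric has not improved for `patience` consecutive epochs,
# and keep the checkpoint from the best epoch.
def train_with_early_stopping(val_losses, patience=2):
    """Return the epoch index with the best validation loss, stopping
    after `patience` epochs without improvement."""
    best_loss, best_epoch, since_improved = float("inf"), 0, 0
    for epoch, loss in enumerate(val_losses):
        if loss < best_loss:
            best_loss, best_epoch, since_improved = loss, epoch, 0
        else:
            since_improved += 1
            if since_improved >= patience:
                break
    return best_epoch

# Validation loss improves, then plateaus: training stops after epoch 4
# and the model from epoch 2 is kept.
print(train_with_early_stopping([1.0, 0.8, 0.7, 0.75, 0.72]))  # 2
```

The key point from the warning above: `val_losses` must come from a validation split that is disjoint from the test set, otherwise the reported numbers overfit the benchmark.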

    7. Conclusion

    • Increase image size
      • DO: check if the selected model is capable of detecting different defect sizes; make sure that the inference speed is sufficient
      • DON’T: miss large defects
    • Center crop
      • DO: make sure that the inspected object is fully in the image after cropping
      • DON’T: miss defects in the removed area
    • Remove background
      • DO: make sure that the area you are removing is irrelevant for inspection
      • DON’T: miss defects in the background
    • Early stopping
      • DO: use validation set
      • DON’T: overfit test set

    Make sure that applying these methods or their combination won’t cause missed defects. Some of them can backfire even if applied to a different publicly available dataset. In a real-world scenario, this might result in defective parts being delivered to a customer.

    If used carefully, however, they can noticeably improve the performance of anomaly detection models in practical applications by leveraging knowledge of your data and defects.

    Follow the author on LinkedIn for more on industrial visual anomaly detection.

    References

    1. A. Baitieva, Y. Bouaouni, A. Briot, D. Ameln, S. Khalfaoui, and S. Akcay. Beyond Academic Benchmarks: Critical Analysis and Best Practices for Visual Industrial Anomaly Detection (2025), CVPR Workshop on Visual Anomaly and Novelty Detection (VAND)
    2. Y. Zou, J. Jeong, L. Pemula, D. Zhang, and O. Dabeer, SPot-the-Difference Self-Supervised Pre-training for Anomaly Detection and Segmentation (2022), ECCV
    3. S. Akcay, D. Ameln, A. Vaidya, B. Lakshmanan, N. Ahuja, and U. Genc, Anomalib (2022), ICIP
    4. J. Liu, G. Xie, J. Wang, S. Li, C. Wang, F. Zheng, and Y. Jin, Deep Industrial Image Anomaly Detection: A Survey (2024), Machine Intelligence Research
    5. L. Heckler-Kram, J. Neudeck, U. Scheler, R. König, and C. Steger, The MVTec AD 2 Dataset: Advanced Scenarios for Unsupervised Anomaly Detection (2025), arXiv preprint
6. K. Roth, L. Pemula, J. Zepeda, B. Schölkopf, T. Brox, and P. Gehler, Towards Total Recall in Industrial Anomaly Detection (2022), CVPR