    How organizations can mitigate shadow AI without stifling innovation

By Awais | January 27, 2026

    Shadow AI is on the rise, and it’s causing problems for organizations.

A recent study by MIT revealed that over 90% of employees use personal AI tools, yet only 40% of organizations manage official usage.

Additionally, IBM's recent report found that 97% of organizations had experienced AI-related cybersecurity incidents, yet most still lack governance.


Steve Povolny, Senior Director of Security Research & Competitive Intelligence at Exabeam.

A great game of tug-of-war has broken out in the cybersecurity landscape: should organizations limit the use of shadow AI, potentially stifling the creativity and opportunity that come with it, or should they let it run wild and accept the risk of exploitation that follows?

    Could there be a middle ground that seeks to strike a balance between innovation, visibility, compliance, and enforcement?

    Shadow AI: What is it, and what’s the problem?

A big problem is the uncertainty surrounding shadow AI itself. Shadow AI has no universal definition, but it generally refers to employees using AI tools or services that the company is not aware of to perform business functions.

    One reason it’s so difficult to limit shadow AI is because of how easy it is to implement, not just across industries, but in everyday life.


    People will always find the easiest way to perform a task. If there is a way that they can adopt technology to do their job more efficiently, they will, even if it isn’t approved by their organization.

    The issue is that the creative nature of AI makes it difficult to control. Between the individual and the prompt, there is a lot of grey area for risks to arise.

Organizations have no way of knowing whether sensitive information is being fed in beyond what the user discloses, nor can they confirm that the information the AI generates is accurate, or even real.



    The dangers of rushed adoption

    The ability to adopt new and exciting tech will always come before the ability to understand and control it, and AI displays this on an unprecedented scale.

The exponential growth and spread of AI across the modern cyber landscape has given individuals greater control over their own creative expression than at any point in history, and with it comes an inarguable opportunity.

However, on the flip side, organizations have implemented and adopted AI without truly understanding it. As a result, the potential for organizational breaches has skyrocketed, and the work and analysis that security teams must conduct to mitigate these breaches has become overwhelming: the response to danger can no longer keep up with the rate of AI growth.

    We have to look at how we manage third-party risk, as well as indemnification and contracts. The reason for this is that oftentimes, while an organization may own the AI agent, it’s developed using another company’s software.

Therein lies the question of how much those third parties are willing to help when a problem arises, depending on how much stake they have in the agent or in the potential fallout.

    AI Agents: The key to unlocking creative freedom

    Additionally, we need a way to create greater visibility into the actions of AI agents. In the past, this has come from measures like network logs, endpoint logs, and data loss prevention strategies. We need to understand the system’s inputs and outputs, which identities were involved, and what the context of the situation was when issues began to arise.
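As a minimal sketch of what that visibility could look like, the hypothetical wrapper below builds a structured audit record capturing the inputs, outputs, identity, and context described above, in a form that could feed the same pipeline that already ingests network and endpoint logs. The field names are illustrative assumptions, not a standard schema; a real deployment would align them with its SIEM's event format.

```python
import json
import time
import uuid

def log_ai_interaction(user_id, agent_name, prompt, response, context=None):
    """Build one structured audit record for an AI agent interaction.

    All field names here are hypothetical; adapt them to whatever
    event schema your log pipeline already uses.
    """
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "user_id": user_id,        # which identity was involved
        "agent": agent_name,       # which AI tool was used
        "prompt": prompt,          # system input
        "response": response,      # system output
        "context": context or {},  # e.g. source app, network zone
    }
    return json.dumps(record)

# A security team could ship these records alongside existing
# network and endpoint logs for correlation and alerting.
entry = log_ai_interaction(
    "jdoe", "summarizer-v1",
    "Summarize the Q3 report", "Q3 revenue rose...",
    context={"source": "browser-extension"},
)
```

Emitting JSON rather than free text keeps the records queryable, so analysts can ask which identities used which agent when an issue first arose.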

On the response side, we need to determine how to quickly identify when there's a problem. However, response actions need to be updated to address the problems that modern AI agents pose. An AI governance group should be established that is responsible for retraining AI agents to complete their programmed tasks without creating risk.

    This would allow individuals to utilize the creative freedom and convenience that comes from AI, while also protecting organizations from risk of attacks and allowing security teams to rely on the agents to do their tasks without needing to constantly supervise them. Trustworthy, reinforced AI agents make for a more efficient security defense system.

There needs to be an additional response action, which doesn't exist today, where we retrain, disable, or force relearning of AI agents. There should be a counterpart within the SOC for incident response, and business owners will be responsible for building this structure. Right now, we are at CMMI level one for this process, maybe even zero.
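A toy sketch of what such a response action might look like, under assumed names and thresholds: a registry the SOC could use to move a misbehaving agent into the states the text names (retraining or disabled) based on an assessed risk score. Nothing here reflects an existing product; it only illustrates the shape of the missing capability.

```python
from enum import Enum

class AgentState(Enum):
    ACTIVE = "active"
    DISABLED = "disabled"
    RETRAINING = "retraining"

class AgentRegistry:
    """Hypothetical registry of AI agents and their response states."""

    def __init__(self):
        self._agents = {}

    def register(self, agent_id):
        self._agents[agent_id] = AgentState.ACTIVE

    def respond(self, agent_id, risk_score):
        # Illustrative policy (thresholds are assumptions): high risk
        # disables the agent outright; moderate risk forces retraining.
        if risk_score >= 0.9:
            self._agents[agent_id] = AgentState.DISABLED
        elif risk_score >= 0.5:
            self._agents[agent_id] = AgentState.RETRAINING
        # Below 0.5: no action, the agent keeps its current state.
        return self._agents[agent_id]

registry = AgentRegistry()
registry.register("triage-bot")
state = registry.respond("triage-bot", 0.95)  # agent is now disabled
```

Making the state transition explicit is what would let a SOC counterpart treat "retrain this agent" as an incident-response playbook step rather than an ad hoc decision.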

    Insider threat analysts will be heavily dependent on these adjustments. If we can build a structure and develop a process for handling information overload that shadow AI has created, insider threat analysts will be better suited to handle threats before they become devastating to organizations.

    Having a clear and easily enforceable AI usage policy, with known and vetted tools, and a process to review, test, and implement new AI agents or tools with engineering and security reviews is the only way to achieve an appropriate level of risk mitigation. It is vital that this process be made simple and transparent. Otherwise, employees will always look for ways to circumvent it.
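One way such a policy could be made enforceable in code, as a minimal sketch: an allowlist of vetted tools, each cleared up to a maximum data classification. The tool names, classification tiers, and policy fields are all assumptions for illustration; in practice the allowlist would live in a reviewed config file, not in source.

```python
# Hypothetical allowlist of vetted AI tools, each with the most
# sensitive data class it has been cleared to handle.
APPROVED_TOOLS = {
    "internal-copilot": {"max_data_class": "confidential"},
    "public-chatbot": {"max_data_class": "public"},
}

# Classification tiers ordered from least to most sensitive.
DATA_CLASSES = ["public", "internal", "confidential"]

def is_usage_allowed(tool, data_class):
    """Return True if `tool` is vetted for data of `data_class`."""
    policy = APPROVED_TOOLS.get(tool)
    if policy is None:
        # Unvetted tool: deny, and route to the review process
        # rather than silently blocking forever.
        return False
    return (DATA_CLASSES.index(data_class)
            <= DATA_CLASSES.index(policy["max_data_class"]))

assert is_usage_allowed("internal-copilot", "confidential")
assert not is_usage_allowed("public-chatbot", "internal")
assert not is_usage_allowed("random-browser-plugin", "public")
```

The point of the "deny but route to review" branch is exactly the transparency the text calls for: employees who hit the wall see a path to getting a tool vetted instead of an incentive to circumvent the policy.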

    The path forward for AI usage requires understanding. Organizations can’t control what they don’t comprehend, and too many have prioritized rapid deployment over visibility and governance. If we can strike a balance between innovation and security, organizations can maximize their safety from outside threats while allowing their employees the freedom to innovate and change the world.
