Iris Networks Technology Spotlight

Securing the benefits of Generative AI

Employees are increasingly turning to Generative AI tools like ChatGPT, Claude and Gemini to summarize meeting notes, create presentations, debug code and draft client emails, speeding up time-consuming tasks. The productivity upside is clear: one recent report put the saving at around a day per week. However, these potential efficiency gains are tempered by concerns about leaking confidential data, which poses not only a compliance problem but a fundamental business risk.

AI tools don’t forget. Most popular AI tools save prompts and conversation histories, and may use that data to train future models. This means your confidential data, including source code, PII, internal reports or product roadmaps, could unwittingly be exposed to other users, foreign governments or public AI models, and could surface in future breaches of these third parties.

So how can companies take advantage of AI safely? The answer isn’t banning AI; all that achieves is pushing users underground to devices where you have no control at all. It’s building a real policy and enforcing it with the right guardrails.

The key steps are:

Define what not to share
It isn’t enough to just say “don’t share sensitive data”. The categories of data that must never be entered into a GenAI tool (for example source code, customer PII, internal reports and product roadmaps) should be clearly documented in your GenAI usage policy.

Approve safe GenAI tools
List the tools that meet your security and privacy requirements, such as OpenAI Enterprise or Google Gemini Advanced. Be equally clear about what’s not approved: for example, AI tools hosted in high-risk regions, personal accounts, or tools that train on user inputs by default.

Monitor and enforce GenAI usage in real time
Most companies have written policies but no enforcement. To close the gap, deploy tooling that works where GenAI tools are accessed (usually in the browser). Real-time monitoring and inline intervention at the prompt level are essential to prevent accidental data leaks before they happen.
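To make the idea of inline, prompt-level intervention concrete, here is a minimal sketch of a pre-submission prompt scan. This is not Harmonic's implementation (Harmonic uses trained language models rather than pattern matching); the `SENSITIVE_PATTERNS` names and regexes below are hypothetical, purely illustrative examples of the interception point: check the prompt for sensitive content before it leaves the browser, and warn the user instead of submitting.

```python
import re

# Hypothetical, illustrative patterns only. Real enforcement tooling would use
# trained models for nuance; the key idea is scanning the prompt BEFORE it is
# submitted to the GenAI tool.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "api key (hypothetical format)": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the categories of sensitive data detected in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def submit_or_block(prompt: str) -> bool:
    """Allow submission only if the prompt is clean; otherwise warn the user."""
    findings = scan_prompt(prompt)
    if findings:
        print(f"Blocked: prompt appears to contain {', '.join(findings)}")
        return False
    return True
```

The point of the sketch is the placement of the check: intervention happens inline, at the moment of typing or pasting, rather than in an after-the-fact audit log.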


Iris Networks works with Harmonic to help customers safely realise the benefits of GenAI and empower their workforce. Harmonic is designed to complement existing DLP and SASE provision: it sits in the browser, right where prompts are being typed and files are being uploaded. It understands context, flags risky behaviour, and nudges employees before data leaves the enterprise boundary.

Because Harmonic is powered by language models it understands nuance, leading to significant reductions (c.95%) in false positives compared with traditional DLP, and with latency under 200 milliseconds it is imperceptible to users. For one Harmonic customer this has meant a 72% reduction in data leakage while boosting GenAI adoption by 300%, with the resulting gains in staff efficiency and satisfaction.
Harmonic Security

Secure AI Adoption In Enterprise

Harmonic has pioneered an intelligent data protection service that enables large and small enterprises to accelerate their adoption of GenAI tools without the risk of sensitive data loss. It doesn’t use traditional DLP/labelling approaches. Harmonic’s powerful, pre-trained language models are trained specifically to detect sensitive business content in motion, as it is about to leave the business via a GenAI prompt. It does this fast enough to intercept the prompt and help the user amend it, mitigating any sensitive data loss.

What Is GenAI?

Generative AI refers to deep-learning models that can generate high-quality text, images, and other content based on the data they were trained on.

Harmonic Protection


Improve User Awareness

Harmonic alerts users at the point of data loss, helping them understand the risk.

Report on the adoption of GenAI

Track GenAI adoption with Harmonic’s built-in insights page.

Prevent Data Leakage

Gain visibility into what data is leaking through the cracks when using GenAI.

Identify Shadow AI

Understand and monitor employees’ use of GenAI-enabled SaaS.

Identify What GenAI Is In Use

Discover and report on which GenAI apps and SaaS plans are being used within your business.

Why Choose Harmonic

Harmonic is for organisations that want to improve visibility into who is using GenAI within their organisation and, more importantly, what data is being leaked. Speak with the Iris Networks team for an introduction to Harmonic Security.
Iris Networks are a registered company in England & Wales.

Registered Address: Glebe Business Park, Lunts Heath Road, Widnes, Cheshire, WA8 5SQ

Company No. 07872150
Contact
  • 01925 357770
  • Iris Networks Ltd
    Suite 308,
    The Base,
    Dallam Lane,
    Warrington,
    Cheshire,
    WA2 7NG
© Iris Networks 2025 – VAT Reg : GB127 0977 04 – Company Reg: 7872150