Employees are increasingly turning to generative AI tools like ChatGPT, Claude and Gemini to summarize meeting notes, create presentations, debug code and draft client emails, speeding up time-consuming tasks. The productivity upside is clear: one recent report equates it to saving a day per week. However, these impressive potential efficiency gains are tempered by concerns about leaking confidential data, which is not only a compliance issue but a fundamental business risk.
AI tools don’t forget. Most popular AI tools save prompts and conversation histories, and may use that data to train future models. This means your confidential data, including source code, PII, internal reports or product roadmaps, could unwittingly be surfaced to other users, foreign governments or public AI models, and be exposed in future breaches of these third parties.
So how can companies take advantage of AI safely? The answer isn’t banning AI. All a ban will achieve is pushing users underground to personal devices where you have no control at all. The answer is building a real policy and enforcing it with the right guardrails.
The key steps are:
Define what not to share
Spell out the specific categories of data that must never be entered into GenAI tools, such as source code, customer PII, internal reports and product roadmaps, and document them clearly in your GenAI usage policy. It isn’t enough to just say “don’t share sensitive data”.
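To make this concrete, a policy can pair each prohibited category with an example detector rather than leaving “sensitive data” undefined. A minimal sketch follows; the category names and regex patterns are purely illustrative assumptions, not an exhaustive or production-grade ruleset, and not taken from any particular product:

```python
import re

# Illustrative policy: concrete categories a GenAI usage policy might prohibit.
# Category names and patterns are hypothetical examples only.
PROHIBITED_CATEGORIES = {
    "customer_pii": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),        # email addresses
    "credentials": re.compile(r"(?i)\b(api[_-]?key|secret|token)\b"),  # likely secrets
    "source_code": re.compile(r"\b(def |class |import |#include)\b"),  # code fragments
    "financials": re.compile(r"(?i)\b(revenue|forecast|unreleased)\b"),# internal figures
}

def violated_categories(prompt: str) -> list[str]:
    """Return the policy categories a prompt appears to violate."""
    return [name for name, pattern in PROHIBITED_CATEGORIES.items()
            if pattern.search(prompt)]
```

A prompt like “my api_key is 123” would be flagged under “credentials”, giving auditors and employees a shared, testable definition of what the policy actually prohibits.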
Approve safe GenAI tools
List tools that meet your security and privacy requirements, such as OpenAI Enterprise or Google Gemini Advanced. Equally, be clear about what’s not approved: for example, AI tools hosted in high-risk regions, personal accounts, or tools that train on user inputs by default.
Monitor and enforce GenAI usage in real time
Most companies have written policies but no enforcement. To close the gap, deploy tooling that works where GenAI tools are actually accessed, which is usually the browser. Real-time monitoring and inline intervention at the prompt level are essential to prevent accidental data leaks before they happen.
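To illustrate what prompt-level intervention means in practice, here is a minimal sketch of a gate that scans a prompt for common leak patterns, redacts matches, and reports findings so the user can be warned before anything is sent. It is a hypothetical example under stated assumptions: the patterns are simplistic stand-ins, and real products (including Harmonic) use context-aware models rather than bare regexes:

```python
import re

# Hypothetical leak patterns for illustration only.
LEAK_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def gate_prompt(prompt: str) -> tuple[str, list[str]]:
    """Redact known leak patterns and report which were found.

    Returns (safe_prompt, findings). An empty findings list means the
    prompt can be sent unchanged; otherwise the caller should warn the
    user and forward only the redacted version, or block it entirely.
    """
    findings = []
    safe = prompt
    for name, pattern in LEAK_PATTERNS.items():
        if pattern.search(safe):
            findings.append(name)
            safe = pattern.sub(f"[{name.upper()} REDACTED]", safe)
    return safe, findings
```

For example, gating “Email jane.doe@example.com the Q3 numbers” would return the redacted prompt “Email [EMAIL REDACTED] the Q3 numbers” together with the finding “email”, which is the moment an inline tool can nudge the employee before the data leaves the enterprise boundary.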
Iris Networks works with Harmonic to help their customers safely realize the benefits of GenAI and empower their workforce. Harmonic is designed to complement existing DLP and SASE provision: it sits in the browser, right where prompts are being typed and files are being uploaded. It understands context, flags risky behavior, and nudges employees before the data leaves the enterprise boundary.
Harmonic is powered by language models, which gives it the nuance to cut false positives significantly (around 95%) compared to traditional DLP, and with latency under 200 milliseconds it is imperceptible to users. For one Harmonic customer this has led to a 72% reduction in data leakage while boosting GenAI adoption by 300%, with the resulting gains in staff efficiency and satisfaction.