Ensuring AI comes out of the shadows

Whilst companies are right to encourage their teams to find innovative uses of generative artificial intelligence (GenAI) to streamline workflows, many employees are using the technology in ways that are not sanctioned by their employers. This phenomenon, known as shadow AI, is a growing concern.

It is a problem that is not going to go away any time soon. A recent study from Deloitte found that just under a quarter (23 percent) of those who have used GenAI at work believe their manager would approve of how they have used it. This is not the time to take chances. After all, the unsanctioned use of AI could expose an organisation to serious legal, financial, or reputational risk.

The risks associated with employees placing sensitive data into GenAI tools are real. Nearly one-third of employees in a survey completed late last year admitted to placing sensitive data into public GenAI tools. Unsurprisingly, 39 percent of respondents in the same study cited the potential leak of sensitive data as a top risk to their organisation’s use of public GenAI tools.

A step change in adoption

The step change in AI adoption came with the launch of ChatGPT. From that point on, it was no longer just a tool for technologists but a tool for all. It was a collective aha moment. Now, the technology has become almost as ubiquitous in our everyday lives as brushing our teeth in the morning.

Use of GenAI is growing incredibly fast. In the past 12 months, we have seen organisations across nearly every industry deriving business value from it. In fact, in a recent McKinsey Global Survey, two-thirds (65 percent) of respondents reported that their organisations are now regularly using the technology, nearly double the percentage from ten months earlier. Respondents’ expectations for GenAI’s impact were highly positive, with three-quarters predicting that it would lead to significant or disruptive change in their industries in the years ahead.

The need to apply zero trust principles

Most of the enterprise GenAI solutions being built are designed to leverage data that is already available. It is, after all, the lowest-hanging fruit. These solutions are typically centred on customer service because that is where the data is.

However, with much of this data being sensitive in nature, it is important that organisations take no chances. It is time for a shift in thinking. It is time to look at GenAI solutions as machines that move data. The top priority, therefore, should be how the data is controlled, both going into the system and when it comes out the other side.
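As a loose illustration of that "machine that moves data" framing, the sketch below shows a hypothetical gateway that screens prompts before they reach a model and scrubs the response on the way back. The patterns, the redact helper, and the call_model parameter are assumptions made for illustration, not any particular product's API.

import re

# Illustrative patterns only; a real deployment would rely on proper
# data-classification tooling rather than a handful of regexes.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    # Replace anything that looks like sensitive data with a placeholder.
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

def guarded_prompt(user_prompt: str, call_model) -> str:
    # Control the data going into the system and coming out the other side.
    safe_prompt = redact(user_prompt)      # inbound control point
    response = call_model(safe_prompt)     # hypothetical model call
    return redact(response)                # outbound control point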

Businesses need to apply zero trust principles to this data layer. A zero trust model operates on the principle of rigorous verification: never assuming trust, but confirming every access attempt and transaction. This shift away from implicit trust is crucial. Embedding zero trust principles throughout generative architectures offers a proactive path to accountability, safety, and control.
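A minimal sketch of what "confirming every access attempt" could look like at the data layer, assuming a simple in-memory policy and a hypothetical load_data callable; real zero trust deployments rely on dedicated policy engines and identity providers rather than anything this simple.

from dataclasses import dataclass

@dataclass(frozen=True)
class AccessRequest:
    user_id: str
    resource: str
    action: str   # e.g. "read", "send", "share"

def is_authorised(request: AccessRequest, policy: dict) -> bool:
    # Verify this specific request; no session-level or implicit trust.
    allowed = policy.get((request.user_id, request.resource), set())
    return request.action in allowed

def fetch_for_genai(request: AccessRequest, policy: dict, load_data):
    # Every retrieval that feeds the model is confirmed individually.
    if not is_authorised(request, policy):
        raise PermissionError(
            f"{request.user_id} denied {request.action} on {request.resource}"
        )
    return load_data(request.resource)   # hypothetical data loader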

The democratisation of data leverage

Part of the issue we are seeing with GenAI is that the technology has thus far outpaced efforts to secure it. Whilst some organisations are cognisant of the risks, the knowledge has not yet percolated out. In many ways, AI has been the democratisation of data leverage. Before GenAI, a business had to have technology sitting in front of a database to get at the data held within, and you needed to understand how to use it. Now, the only barriers to leveraging data are knowing the alphabet and how to copy and paste. The upshot is that the likelihood of that data going outside the business markedly increases.

It comes back to the people within the organisation. A business can take steps to secure its technology, and it can take steps to secure its data, but there are always human beings in the loop. Training and education help, but we as a species remain incredibly flawed.

As long as GenAI is a tool that staff can use to help them reach their goals, they will take advantage of it, whether you want them to or not. Because of this, people will always remain the most open vector for data leakage.

Shining a light on the problem

The use of AI, whether in or out of the shadows, is not going to go away. Nor should it. AI is great for automating tasks, handling big data, facilitating decision-making, reducing human error, and furthering our understanding of the world around us. However, education in best practices and the responsible use of AI is needed.

Least-privilege access, always-on monitoring, and "never trust, always verify" have been in place at the technology layer for some time. Now, though, it is important to bring these principles down to the data itself. Thankfully, help is at hand. With a Private Content Network, organisations can protect their sensitive content more effectively in this era of AI. The best examples of the technology provide content-defined zero trust controls, featuring least-privilege access defined at the content layer and next-generation DRM capabilities that block downloads from AI ingestion. They also employ AI themselves to detect anomalous activity – for example, sudden spikes in accesses, edits, sends, and shares of sensitive content. This helps shine a light on any unsanctioned activity going on in the shadows so that a business can remain compliant.
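To make the anomaly-detection idea concrete, here is a deliberately simple sketch that flags a sudden spike in shares of sensitive content against a rolling baseline. The function name, the seven-day baseline, and the threshold are illustrative assumptions, not a description of how any specific Private Content Network works.

from statistics import mean, pstdev

def flag_spike(daily_counts: list, threshold: float = 3.0) -> bool:
    # daily_counts: accesses/edits/sends/shares per day, oldest first,
    # with today's count last. A simple z-score check against the baseline;
    # real products would use far richer behavioural models.
    *history, today = daily_counts
    if len(history) < 7:
        return False                      # not enough baseline data yet
    baseline = mean(history)
    spread = pstdev(history) or 1.0
    return (today - baseline) / spread > threshold

# Example: a sudden jump in shares of sensitive content gets flagged.
shares_per_day = [4, 6, 5, 7, 5, 6, 4, 5, 38]
print(flag_spike(shares_per_day))         # True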

Tim Freestone

Tim Freestone joined Kiteworks in 2021 and brings over 15 years of experience in marketing and marketing leadership, including demand generation, brand strategy, and process and organisational optimisation. Tim was previously Vice President of Marketing at Contrast Security, a scale-up application security company. Before Contrast, Tim was the Vice President of Corporate Marketing at Fortinet, a multi-billion-dollar, next-generation firewall and cloud security company. Tim holds a Bachelor’s degree in Political Science and Communication Studies from The University of Montana.
