
Veritas outlines the best practices for CISOs looking to balance predictive text technology with security

Christos Tulumba, Chief Information Security Officer at Veritas Technologies, has penned a comprehensive op-ed designed to provide CISOs with a guide to balancing predictive text technology with security and privacy.

Artificial Intelligence (AI) has taken the world by storm, and Generative Pre-trained Transformer (GPT) models are creating waves, transforming the way we communicate, the way we work and even the way we think.

In GPT models, text prediction analyses the context of a given input and generates a sequence of words or text that is likely to follow based on patterns learned from vast amounts of training data.

It uses a neural network to predict the next word or phrase, making its predictions based on the most probable continuation of the input text. Its power lies in its ability to predict – almost like the way each of us might predict what a friend might say next in a conversation – taking inputs and processing them through a well-structured model to generate outputs that resonate with human understanding.
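The mechanics can be illustrated with a deliberately tiny sketch: a bigram model that, given a word, returns the most frequent word to follow it in a toy corpus. Real GPT models use deep neural networks over subword tokens and billions of parameters, but the underlying idea, choosing the most probable continuation of the input, is the same.

```python
from collections import Counter, defaultdict

# Toy corpus; a real model is trained on vast amounts of text.
corpus = (
    "the model predicts the next word the model generates "
    "the model completes the input"
).split()

# Count how often each word follows each preceding word.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the most probable word to follow `word`, or None."""
    followers = counts[word]
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # -> model
```

Scaling this idea from word-pair counts to a neural network over long contexts is, in essence, what gives GPT models their fluency.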

In the business world, organisations that continue to explore and harness the potential of these technologies are poised to revolutionise their industries and remain agile in an ever-evolving global marketplace.

However, amidst the rising popularity and undeniable advantages, organisations must ensure that the agility and efficiency these technologies bring don't overshadow security and privacy.

As the Chief Information Security Officer (CISO) at Veritas Technologies, I have keenly observed the remarkable evolution of AI. Before diving into the technology, it’s imperative to educate our teams and stakeholders on predictive text and its risks and rewards.

As information security leaders, we need to ensure that everyone understands the perception vs. reality of AI’s security and privacy concerns.

Perception 1: GPT may compromise data privacy due to its training on sensitive information.

Reality: GPT models are trained on large datasets, including publicly available text from the internet. However, the models themselves do not retain specific details of the training data. The responsibility lies in the hands of organisations and researchers to ensure appropriate data anonymisation and privacy protection measures are in place during the training and deployment of GPT models.

Perception 2: GPT poses significant security risks and can be easily exploited by attackers.

Reality: While it is true that GPT-based models can be misused for malicious purposes, such as generating convincing phishing emails or automated cyberattacks, the risks can be mitigated with proper security measures and controls. CISOs can implement strategies like data sanitisation, access controls, and continuous monitoring to minimise potential security risks.
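As a rough illustration of what data sanitisation can mean in practice, the hypothetical `sanitise_prompt` helper below redacts obvious PII patterns before text reaches an external model. The pattern names and coverage are illustrative assumptions only; a production control would use a dedicated DLP tool rather than a pair of regular expressions.

```python
import re

# Illustrative PII patterns: email addresses and payment-card-like
# digit runs. Real sanitisation needs far broader coverage.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def sanitise_prompt(text: str) -> str:
    """Replace matches of each PII pattern with a redaction label."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(sanitise_prompt("Contact jane@example.com re card 4111 1111 1111 1111"))
```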

Perception 3: Predictive text models store and retain user data indefinitely.

Reality: Predictive text models typically do not retain specific user data beyond the immediate context of generating responses. The focus is on the model’s architecture and parameters rather than preserving individual user information.

However, it is crucial for CISOs to assess and validate the data retention and deletion policies of the specific models and platforms being utilised to ensure compliance with privacy regulations and best practices.

Perception 4: Predictive text models can compromise sensitive or confidential information.

Reality: Predictive text models can generate text based on patterns and examples in the training data. If the training data contains sensitive or confidential information, there is a risk that the model could generate outputs that inadvertently disclose or hint at such information.

CISOs must carefully consider the nature of the training data and implement appropriate data anonymisation techniques to minimise the exposure of sensitive information.

Perception 5: Predictive text models are a potential target for data exfiltration.

Reality: The models themselves typically do not store or retain sensitive data. However, CISOs should still be mindful of potential vulnerabilities in the infrastructure supporting the models, such as the storage systems or APIs used for inference.

Adequate security controls, such as encryption, network segregation, and intrusion detection, should be in place to protect against data exfiltration attempts targeting the underlying infrastructure.

As we navigate the complex waters of AI and machine learning, it’s critical to understand how we balance the use of generative AI with security and privacy, and this requires a proactive approach from CISOs. Here are several strategies to achieve this delicate equilibrium:

1. Evaluate data privacy and protection and implement guidelines: CISOs should carefully assess the data used to train the model and ensure compliance with relevant industry regulations and pertinent company policies.

It is crucial to anonymise or pseudonymise sensitive information and apply stringent access controls to protect user data from unauthorised access or misuse. It’s also important to work with legal and HR teams to establish compliance guidelines for the use of predictive text technology.
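Pseudonymisation can be as simple as mapping each identifier to a stable keyed-hash token, as in this sketch. The key name and handling are assumptions; in practice the key would come from a secrets manager, never from source code.

```python
import hashlib
import hmac

# Assumed placeholder -- in production, load from a secrets manager.
SECRET_KEY = b"replace-with-key-from-secrets-manager"

def pseudonymise(identifier: str) -> str:
    """Map an identifier to a stable token via a keyed hash (HMAC).

    The same input always yields the same token, so records remain
    linkable for analytics, but the original value cannot be
    recovered without the key.
    """
    digest = hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]  # short, stable token

token = pseudonymise("jane@example.com")
```

Because the mapping is keyed rather than a plain hash, an attacker who obtains the tokens cannot simply hash candidate identifiers to reverse them.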

2. Secure model training and deployment: Securing the infrastructure used for training and deploying predictive text models is critical. Implement robust security controls, including encryption, secure protocols, and access management, to safeguard the underlying systems and prevent unauthorised modifications or tampering with the models. Implementing a Secure Development Lifecycle (SDL) for predictive text projects is key.

3. Strengthen user awareness and gain consent: Educate users about the implications and capabilities of predictive text technology. Clearly communicate the purposes of data collection and usage, and potential risks involved, empowering users to make informed decisions about opting in or out of predictive text features.

4. Collaborate with vendors and researchers: Engage in partnerships with technology vendors and researchers to stay updated on the latest advancements, security patches, and best practices related to predictive text technology. Active collaboration helps address emerging security and privacy challenges more effectively.

5. Develop incident response and continuous monitoring muscle: Build an incident response plan specific to predictive text technology. Monitor system logs, user feedback, and anomaly detection mechanisms to promptly identify any potential security incidents or privacy breaches. Establish processes to mitigate and respond to such incidents, ensuring minimal impact on users and data.
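A minimal version of such a monitoring check, assuming per-user request counts have already been extracted from service logs; the three-standard-deviation threshold is an illustrative choice, not a recommendation.

```python
from statistics import mean, stdev

# Historical per-user daily request counts form the baseline
# (assumed values for illustration).
baseline = [12, 9, 11, 10, 13, 8, 12, 11]
threshold = mean(baseline) + 3 * stdev(baseline)

# Today's per-user counts, e.g. aggregated from API logs.
today = {"alice": 12, "bob": 10, "mallory": 240}

# Flag any user whose volume is far above the historical norm.
anomalous = [user for user, n in today.items() if n > threshold]
print(anomalous)  # -> ['mallory']
```

In a real deployment this kind of baseline check would run inside a SIEM over streaming logs, but the principle is the same: establish normal behaviour, then alert on significant deviations.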

While predictive text technology holds great promise, we must deploy it responsibly as custodians of data security and privacy in our organisations. Ensuring compliance with international data privacy laws is paramount.

Considered use of emerging technologies like AI has the power to change lives – it can transform consumer experiences, help governments make more informed decisions, accelerate scientific discovery, improve the delivery of more personalised healthcare services, and so much more.

As CISOs, our job is to remain vigilant and steer our organisations through the waves of innovation with security at the helm.
