SJA Exclusive: Navigating the future of generative AI
- October 4, 2023
Victoria Rees
As the number of generative AI tools continues to proliferate, companies must weigh the risks and rewards of using the technology, says Laurent Duperval, a Montreal-based writer with more than 20 years of experience covering the IT and cybersecurity industries.
A new framework
When it comes to generative AI (GAI), there is no going back.
The genie is out of the bottle and companies must now grapple with a number of big questions.
For example, what guardrails should be put in place for employees looking to take advantage of AI’s tremendous potential?
Do the risks associated with the emerging technology outweigh the benefits? Is there a way for humans and machines to co-exist in a mutually beneficial relationship?
GAI is different from what many people think of when it comes to AI.
Instead of the human-like robots often portrayed in movies and the media, generative AI is a form of machine learning that can produce content – including audio, code, images, text, simulations and videos – more quickly than humans can on their own, which makes its use enticing.
According to an April 2023 Gartner report, 82% of organizations are currently planning to issue some guidance on the use of generative AI tools, like ChatGPT.
However, standing in the way is an insufficient framework for implementation.
This has increased the urgency for guidance, training and education that can greatly reduce the fear, anxiety and perceived risks associated with generative AI.
“We can’t underreact or overreact,” said Frank Post, CISO at the Ontario Pension Board.
“This is truly the beginnings of humans and machines working in a much tighter collaboration than we ever imagined possible.”
Unstoppable force versus an immovable object
Doing nothing is also not an option as the number of generative AI miscues rapidly grows.
For example, in June 2023, a New York lawyer was sanctioned for submitting fictitious cases to the court in a well-documented misapplication of ChatGPT.
The lawyer attempted to use the AI-powered chatbot to build a legal brief that referenced past court cases.
However, instead of pulling from actual case law, the AI tool fabricated a number of cases.
The lawyer failed to verify that information and submitted his filing, to his great embarrassment and professional detriment.
In a separate instance in April 2023, Samsung barred the use of ChatGPT throughout the company after determining that employees had submitted some of its proprietary information to the chatbot.
This happened because anything submitted to a generative AI tool may be retained and used to train the system's large language model (LLM), and could then surface in responses to other users.
Furthermore, once that information has been shared, it cannot practically be recalled.
An initial risk assessment may force some companies to follow suit with an outright ban of generative AI on company devices.
Conversely, other companies may encourage some experimentation, which could lead to the development of generative AI applications that deliver huge benefits to investors, employees and customers alike.
“You can’t easily stop the use of AI tools,” said Dr James Norrie, Founder and CEO of cyberconIQ – a company focused on the merging of psychology and technology to measure and manage cybersecurity and online risk.
“If you suppress the use of GAI, it will go underground. People won’t admit they use it and they will do it from home, exposing you to greater security and privacy risks.
“Why not get ahead of this curve before that happens?”
A better approach is to encourage the safe exploration of business use cases for GAI that might benefit your organization and its customers.
For example, some companies are now implementing GAI tools to enhance customer service, streamline order processing and support rapid internal knowledge sharing among project teams.
Regardless, most organizations will be confronted with generative AI at some point and would therefore benefit from considering some principles ahead of any possible adoption.
Guiding principles for corporate use of AI
Implementing appropriate guidelines allows companies to use the power of generative AI while reducing the risk of being affected by its negative aspects.
While no set standard will work for all companies, guidelines should adhere to three principles.
Principle 1: Be AI-safe and secure
When you submit a question to tools like ChatGPT, Google Bard and Claude AI, that information may be stored and used to train the underlying models further.
Once businesses send information to these tools, they effectively hand over that data to an external entity and lose control over its use. That has consequences.
“If you’re in healthcare, finance or any other regulated environment, there are severe implications for misuse of the information you’re in charge of,” added Post.
“Those types of organizations should not jump in until they have been properly trained and have guardrails put in place.”
LLMs can also open the door to intellectual property theft, because people unwittingly give them proprietary information such as trade secrets, company financial data and personally identifiable information from clients and customers.
Safety, security and privacy form the first guiding principle: employees should not input anything into a GAI tool that they would not be willing to share outside the organization.
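One way teams can begin to operationalize this principle is with a pre-submission filter that catches obvious secrets before a prompt ever leaves the network. The sketch below is illustrative only: the redact_prompt helper and the patterns it checks are hypothetical stand-ins, and a real deployment would rely on an approved data loss prevention or classification service rather than a handful of regular expressions.

```python
import re

# Hypothetical patterns for data that should never reach an external GAI tool.
# A production guardrail would use an approved DLP/classification service.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Replace likely-sensitive substrings before the prompt is sent externally."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt, findings

if __name__ == "__main__":
    clean, flags = redact_prompt("Summarize the deal terms for jane.doe@example.com")
    print(flags)  # ['email']
    print(clean)  # prompt with the email address replaced
```

Whether flagged prompts are redacted, blocked outright or routed to a human reviewer is a design choice that depends on each organization's risk appetite.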
Principle 2: Demonstrate data provenance integrity and accountability
GAI tools have a fundamental flaw: LLMs are known to fabricate information.
This behavior is commonly called "hallucination" and has been widely reported since the earliest GAI releases. Currently, there is no reliable solution.
“Because GAI tools and LLMs don’t have the ability to distinguish fact from fiction, everything produced by those tools needs to be validated,” remarked Post.
“Whatever information you think you’re learning from a GAI tool, verify it with other sources.”
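The New York court filing described earlier shows why this validation step cannot be skipped. As one hedged illustration, assuming a workflow in which the model is instructed to wrap every source it cites in brackets, a simple post-processing check can route any citation that does not match a verified reference set to a human reviewer. The source names and pattern below are purely hypothetical.

```python
import re

# Illustrative only: a stand-in for a real case-law or document-of-record database.
VERIFIED_SOURCES = {
    "Smith v. Jones, 2019",
    "Internal Policy DOC-104",
}

# Assumes the prompt asked the model to wrap each cited source in brackets.
CITATION_PATTERN = re.compile(r"\[(.+?)\]")

def unverified_citations(model_output: str) -> list[str]:
    """Return citations in the output that cannot be matched to a known source."""
    cited = CITATION_PATTERN.findall(model_output)
    return [c for c in cited if c not in VERIFIED_SOURCES]

draft = "The precedent [Smith v. Jones, 2019] and [Brown v. Green, 2021] apply."
for citation in unverified_citations(draft):
    print(f"Needs human verification: {citation}")
```

A check like this does not establish that a citation is accurate; it only separates what can be matched to a trusted source from what must be verified by a person.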
Secondly, in many situations, it can be unethical to portray AI-generated output as if it were done by a human.
For example, if a website visitor uses a chatbot interface, they should know whether they’re interacting with a conversational AI tool or a human being.
Finally, there are accountability concerns to consider.
US courts have, so far, ruled that AI-generated content is not protected by copyright.
Organizations that use AI-generated content as a central portion of their business might not be protected if a competitor takes that content and uses it without permission.
Many AI companies are facing copyright infringement lawsuits because plaintiffs claim the tools illegally used their intellectual property for training.
The ripple effects of these lawsuits on businesses using generative AI are unknown but are rapidly emerging.
Principle 3: Understand information warfare and disinformation
Companies should assume that adversaries, and even some competitors, will use AI with malicious intent.
“Not all information online is true,” commented Norrie, whose company compiled the aforementioned guiding principles and has released a set of free AI training resources for organizations or individuals looking for direction.
“AI, too, can be trained to produce incorrect answers.
“Cyber-crime gangs are already using AI in social engineering to advance their criminal agenda and this will lead to an increase in the volume and sophistication of cyber-attacks.”
Companies that use generative AI without controlling the input data must be aware of this risk and act accordingly.
Even with full control of the data that goes into the LLM, the hallucination problem remains, making fact-checking an essential defense mechanism.
It is also critical to improve the cyber judgment and intuition of employees as AI-enabled attacks grow in sophistication.
A collaboration between bytes and brains
The guiding principles above are meant to raise awareness of the current state of AI tools.
Humans will need to learn to work with AI, not rebel against it.
“It’s a bytes and brains collaboration,” concluded Norrie.
“We must figure out the machine instead of letting the machine figure us out.
“It is best to establish your AI guidelines while you’re developing your own knowledge and understanding of how you plan to govern and regulate its use.”