Startups and Gen AI: Should Founders Trust ChatGPT?

The mass adoption of Gen AI like ChatGPT has fundamentally changed our relationship to the internet – but should we trust it with everything?

ChatGPT can be weird. Actually, all generative artificial intelligence (gen AI) seems out of its mind sometimes.

Like other companies – including Twitter/X, Google, and DeepSeek – OpenAI’s flagship gen AI is a large language model (LLM), which is trained on vast amounts of data to generate coherent and contextually relevant content based on prompts. Designed to simulate the human decision-making process, the results have been very, very mixed.

There was the time ChatGPT’s advice to stop eating salt resulted in a man’s diagnosis with a rare condition. Or maybe your AI meal planner suggested a recipe for chlorine gas. Then there was the time at the SearchGPT launch when it's results were littered with misinformation. Hell, gen AI still can't recognize its own work.

But there is one embarrassing AI failure that stands out: after ChatGPT-5 was released, it was jailbroken within 24 hours by NeuralTrust researchers. While the song parodies are fun, we’re only just learning about the risks associated with creating videos of baby fruit eating fruit.

Now, industry leaders and entrepreneurs are taking notice, with concerns ranging from AI hallucinations to cybersecurity to intellectual property theft. The mass adoption of LLMs has fundamentally changed how we interact with the internet –  for better and worse. 

But with so many people, founders included, leaning heavily on the algorithm, it’s essential to not only critique the dependability of ChatGPT, but question how safe it is to use gen AI for the inner workings of a business?

The Next Gen of Cyberthreats

It’s never easy being the underdog, especially in recent years. A 2023 survey by the World Economic Forum showed that ransomware attacks increased by nearly 300%, with over 50% of these attacks specifically targeting small businesses.​ 

Gen AI has introduced increased enterprise risk by lowering the entry barrier for malicious actors. Some of the adversarial risks startups face include AI-generated malware, human-realistic phishing schemes, and impersonation attacks. 

With increasing levels of vulnerability across the board, This underscores the increasing vulnerability of small enterprises to AI-enhanced cyber threats.

In June 2024, Deloitte’s 2024 survey, the Global Future of Cyber survey, discovered that managing risks and regulatory compliance were the top two concerns among global industry respondents with scaling gen AI strategies.

The analysis also found that 77% of respondents are concerned “to a large extent” about how gen AI will impact their cybersecurity strategies. It goes on to say that, “the challenges of this tech is “intersectional, cutting across questions of data provenance, security, and how to navigate a still-maturing marketplace.”

Human error can be especially damaging across development processes. One wrong copy and paste into ChatGPT can result in exposuring of trade secrets or confidential data. Without a vetting process of third-party models, risks to data were higher across the board.

There are also concerns that AI might inadvertently amplify existing vulnerabilities – such as misconfigured code, “increasing the risk of data breaches, malware infections, and reputational damage. The concern is grounded in the lack of transparency into third-party foundation models, which may introduce unknown vulnerabilities.”

And we’re just getting started.

The Four Gen AI Horsemen

The AI apocalypse probably (maybe) won’t happen. But while we wait on superintelligence, its younger sibling is a massive threat against anyone attached to the internet. There are four types of attacks that have caused the most devastation.  

Hallucinations

Gen AI models predict outputs from training data patterns, but when they hallucinate, the results may appear plausible yet be incorrect. Such inaccuracies can cause faulty decisions, damaged reputations, regulatory penalties, and lost opportunities.18 Like hallucinations, misinformation can be perpetrated through gen AI models, which may lead to loss of trust, financial loss, and negative impact on business decisions.

Prompt Injections 

Gen AI incorporates the use of prompts or instructions to operate. But this feature is also a bug. Prompt injections have become the leading security threat for LLMs, with “attackers designing prompts to deceive gen AI systems into revealing secure data, spreading misinformation or tricking the prompt into performing a malicious action or access the model via a backdoor with a hidden trigger.”

Evasion Attacks

In AI systems, an evasion attack occurs when the gen AI is deliberately tricked by conflicting samples, or “adversarial examples,” resulting in the incorrect output. Gen AI systems are often vulnerable to these attacks at a greater scale than traditional AI. 

Data Poisoning

External models trained on public data introduce a host of unknowns. Data poisoningis one such risk: a deliberate attack on an AI system where an adversary alters the model’s training dataset. Data poisoning techniques may include altering data to become deceptive, incorrect, or misleading. The risk of data poisoning increases with retrieval augmented generation systems.

Lurking in the AI Shadows

Shadow AI” is created or used by employees within an organization that lack the necessary approvals or oversight. The major problem comes when you consider that these tools can access, store, and share corporate data.

The rise of this AI really falls down to society’s enabling policies, increasing work demands, and of course, the pandemic. As AI platforms were made widely available to consumers, suddenly, any knowledgeable person was able to create their own AI. 

But the productivity activating “digital steroid” appears to outweigh any risks, at least for employees – with some businesses running entire divisions on shadow apps. “Departments jump on unsanctioned AI solutions because the immediate benefits are too tempting to ignore,” Vineet Arora, CTO at WinWire, explained to VentureBeat.

The majority of shadow AI apps rely on OpenAI’s ChatGPT and Google Gemini. Since 2023, ChatGPT has allowed users to create customized bots in minutes. A recent Software AG survey that found 75% of knowledge workers already use AI tools and 46% saying they won’t give them up even if prohibited by their employer. 

In the same interview, Itamar Golan, CEO and cofounder of Prompt Security, said his company sees around 50 new shadow apps a day, with about 40% of models defaulting to train on any data it is fed, “meaning your intellectual property can become part of their models.”

There isn’t an inherent malicious intention behind every shadow AI. But this tech can quickly erode a startup’s security perimeters while the founder is totally clueless.

You Need Tech. But Do You Need AI?

While this tech has come far, it is still lacking. According to The Brookings Institution:

“Today’s LLMs show no signs of the exponential improvements characteristic of 2022 and 2023. OpenAI’s GPT-5 project ran into performance troubles and had to be downgraded to GPT-4.5, representing only a “modest” improvement when it was released earlier this year. It made up answers about 37% of the time, which is an improvement over the company’s faster, less expensive GPT-4o model, released last year, which hallucinated nearly 60% of the time. But OpenAI’s latest reasoning systems hallucinate at a higher rate than the company’s previous systems.”

Despite this, it is a growing consensus that AI will work alongside humans in the future. There’s likely always going to be a disembodied voice asking if you need help editing that email. ChatGPT can never be put back in the bottle – the good and bad. Implementing AI into a business really falls down to what the owner is hoping to achieve.

But AI is expanding what is already a $2 trillion opportunity for cybersecurity providers, and a 2024 McKinsey & Company report is betting that making AI safer is the next gold rush. 

According to their research, customers say today’s cybersecurity solutions often fall short of meeting demands in terms of automation, pricing, services, and other capabilities. But organizations spent approximately $200 billion on cybersecurity in 2024, up from $140 billion in 2020. While the vended cybersecurity market is expected to grow 12.4 percent annually from 2024 to 2027 – historical levels of growth as organizations look to quell threats. 

Let’s all hope investors pull out their pick axes and gold pans soon.