The hidden dangers of using generative AI in your business

Opinions expressed by Entrepreneur contributors are their own.

AI, although established as a discipline of computer science for several decades, became a buzzword in 2022 with the arrival of generative artificial intelligence. Despite the maturity of AI as a scientific discipline, large language models are deeply immature.


Entrepreneurs, especially those without a technical background, eagerly adopt LLMs and generative AI as enablers for their business projects. While it makes sense to use technological progress to improve the efficiency of business processes, with artificial intelligence it should be done carefully.

Many business leaders are now driven by hype and external pressure. From startup founders seeking funding to corporate strategists launching new programs, the instinct is to integrate the latest AI tools as soon as possible. That race toward integration overlooks critical flaws that lie beneath the surface of generative AI systems.

1. Large language models and generative AI are deeply flawed

Simply put, they do not understand what they are doing, and although you can try to steer them, they often lose the thread.

These systems do not think. They predict. Every sentence an LLM produces is generated by probabilistic token estimation based on statistical patterns in the data it was trained on. It cannot tell truth from falsehood, logic from error, or signal from noise. Its answers can sound authoritative yet be completely wrong, especially when it operates outside its training data.
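To make that concrete, here is a minimal Python sketch of the sampling step described above, using an invented five-word vocabulary and made-up scores; no real model or vendor API is involved. The point is only that each "next word" is a weighted draw from a probability distribution, not a reasoned conclusion.

```python
import numpy as np

# Hypothetical vocabulary and raw model scores (logits), invented
# purely for illustration of probabilistic next-token selection.
vocab = ["the", "revenue", "will", "grow", "shrink"]
logits = np.array([0.2, 1.5, 0.7, 2.1, 1.9])

def softmax(x):
    e = np.exp(x - x.max())  # subtract max for numerical stability
    return e / e.sum()

probs = softmax(logits)                      # scores -> probabilities
next_token = np.random.choice(vocab, p=probs)  # weighted random draw
print(dict(zip(vocab, probs.round(3))), "->", next_token)
```

Run it twice and you may get different words; nothing in the mechanism checks whether "grow" or "shrink" is true.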

2. Lack of accountability

Incremental software development is a well-documented approach in which developers can trace requirements and retain full control over the current state of the system.

This allows them to identify the root causes of logical errors and take corrective action while keeping the system consistent. LLMs also evolve incrementally, but nobody knows what caused the change to their last state, or what their current state even is.

Modern software engineering is built on transparency and traceability. Every function, module and dependency is observable and accountable. When something fails, logs, tests and documentation lead the developer to a resolution. None of this applies to generative artificial intelligence.

LLM model weights are refined through opaque processes akin to black-box optimization. No one, not even the developers behind these models, can point to the specific training input that caused a new behavior. That makes them impossible to debug. It also means they can degrade unpredictably or change performance after retraining cycles, with no audit trail available.

For a company that depends on precision, predictability and compliance, this lack of accountability should raise red flags. You cannot version-control an LLM's internal logic. You can only watch it change.

3. Zero-day attacks

Zero-day attacks are identifiable in traditional software and systems, and developers can patch the vulnerability because they know what they built and can understand how the flawed routine is being exploited.

With LLMs, every day is a zero-day, and nobody may even know it, because nobody knows the state of the system.

Security in traditional computing assumes that threats can be detected, diagnosed and patched. The attack vector may be novel, but there is a response. Not so with generative artificial intelligence.

Since there is no deterministic codebase behind most of their logic, there is no way to pinpoint the root cause of an exploit. You only learn there is a problem when it surfaces in production. And by then, the reputational or regulatory damage may already be done.

Given these serious problems, entrepreneurs should take the following precautionary steps:

1. Use generative AIs in sandbox mode:

The first and most important step is to use generative AIs only in sandbox mode and never integrate them into business processes.

That means never connecting them to internal systems through their APIs.

The term “integration” implies trust. You trust that the component you integrate will behave consistently, preserve your business logic and not damage the system. That level of trust is misplaced for generative AI tools. Using APIs to pipe LLM output directly into databases, operational pipelines or communications is not just risky; it is reckless. It opens the door to data leaks, functional errors and automated decisions based on misread context.

Instead, treat LLMs as external, isolated engines. Use them in sandbox environments where their outputs can be assessed before any person or system acts on them.
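One minimal sketch of that pattern, assuming a hypothetical call_llm stub in place of whatever model endpoint you actually use: the model's output is appended to a review file and marked pending, and nothing downstream ever executes it.

```python
import json
from datetime import datetime, timezone

def call_llm(prompt: str) -> str:
    """Stand-in for your model endpoint; hypothetical for this sketch."""
    return "draft answer from the model"

def sandboxed_query(prompt: str, queue_path: str = "review_queue.jsonl") -> None:
    """Route model output to a review queue instead of any live system."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "output": call_llm(prompt),
        "status": "pending_human_review",  # nothing runs on this automatically
    }
    with open(queue_path, "a") as f:
        f.write(json.dumps(record) + "\n")

sandboxed_query("Summarize the refund policy draft.")
```

The design choice is the boundary itself: the sandbox writes to a file a human will read, never to a database an application will act on.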

2. Use human supervision:

With the tool sandboxed, assign a human supervisor to prompt the machine, vet the output and relay it back into internal operations. You must prevent machine-to-machine interaction between LLMs and internal systems.

Automation sounds efficient, until it isn't. When LLMs generate outputs that flow straight into other machines or processes, you create blind pipelines. There is no one to say, “This doesn't look right.” Without human oversight, even a single hallucination can cascade into financial losses, legal trouble or misinformation.

A human-in-the-loop model is not a bottleneck; it is a safeguard.
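Continuing the hypothetical review queue above, the gate can be as simple as this sketch: a person sees the prompt and the output, and only an explicit approval releases the record to internal operations. The function names are illustrative, not a prescribed design.

```python
def human_review(record: dict) -> bool:
    """Show the supervisor what the model said and ask for a verdict."""
    print("PROMPT:", record["prompt"])
    print("OUTPUT:", record["output"])
    return input("Approve for internal use? [y/N] ").strip().lower() == "y"

def forward_to_operations(record: dict) -> None:
    """Your internal handoff; the LLM never calls this itself."""
    print("Released to operations:", record["output"])

def release(record: dict) -> None:
    if human_review(record):
        record["status"] = "approved"
        forward_to_operations(record)
    else:
        record["status"] = "rejected"  # the pipeline simply stops here

release({"prompt": "Summarize the refund policy draft.",
         "output": "draft answer from the model"})
```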

3. Never feed AIs your business information, and don't assume they'll solve business problems:

Treat them as dumb and potentially dangerous machines. Use human experts as requirements engineers to define the business architecture and the solution. Then use a prompt engineer to ask the AI specific implementation questions, function by function, without revealing the overall purpose.

These tools are not strategic advisers. They do not understand your business domain, your goals or the nuances of the problem space. What they generate is language pattern matching, not intent-driven solutions.

Business logic must be defined by people, grounded in purpose, context and judgment. Use AI only as a utility, never to design strategy or make decisions. Treat AI like a calculator for text: useful for pieces of work, but never accountable.
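As one illustration of the function-by-function approach, the sketch below (reusing the hypothetical call_llm stub from earlier) asks the model only narrow, context-free implementation questions, while the business architecture stays entirely with the humans.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for your model endpoint."""
    return "model's answer"

# Each question concerns one function in isolation; none of them
# reveals what the overall system is for or how the pieces connect.
narrow_questions = [
    "Write a Python function that validates an IBAN checksum.",
    "Write a Python function that retries an HTTP GET up to 3 times.",
]

answers = {q: call_llm(q) for q in narrow_questions}
for question, answer in answers.items():
    # Every answer is still reviewed by a human before any use.
    print(question, "->", answer)
```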

To sum up, generative artificial intelligence is not yet ready for deep integration with business infrastructure. Its models are immature, their behavior is opaque, and their risks are poorly understood. Entrepreneurs must tune out the hype and adopt a defensive posture. The cost of misuse is not just inefficiency; it is irreversibility.
