Control and automation engineers are under pressure to keep plants safe, compliant and efficient while supporting ambitious digital transformation programmes. Generative AI has appeared in the middle of this, promising to draft specifications, write code and simplify communication.
It is easy to experience large language models (LLMs) as a kind of magic. You type a question, get a fluent answer and move on. But engineers know that relying on black boxes is dangerous. To use LLMs responsibly and effectively, you need at least a basic understanding of how they work and where they can fail. With that foundation, generative AI becomes a powerful assistant; without it, you are effectively ‘prompting and praying’ – exposing yourself to errors and professional embarrassment.
Generative AI is not magic; it is probabilistic
An LLM is not a rules engine or a database. It is a statistical model that predicts the next likely word based on patterns it has learned from vast amounts of text. This has critical implications:
• Outputs are non-deterministic: The same prompt can yield different responses; this behaviour is intrinsic to how the model samples its output.
• Fluent text is not guaranteed truth: An answer can look authoritative but still be technically wrong, incomplete or misaligned with your standards.
• Models do not ‘understand’ your plant: They replay patterns in language; they do not reason about your specific equipment, constraints or safety culture unless you provide that context.
• Hallucinations are a real risk: When gaps exist, models can invent standards references, parameter values or features that do not exist.
For engineers used to deterministic PLC logic and validated safety systems, this means LLM outputs must always be treated as drafts to be checked, and not as final, authoritative answers.
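To make this concrete, the toy Python sketch below is not a real LLM; it simply samples the next word from a small hand-made probability table. That is enough to show why the same prompt can produce different, equally fluent continuations from one run to the next.

```python
import random

# Toy "language model": for each context word, a hand-made probability table
# over possible next words. A real LLM learns patterns like these from vast
# amounts of text, but the sampling principle is the same.
NEXT_WORD_PROBS = {
    "the": {"pump": 0.4, "valve": 0.35, "operator": 0.25},
    "pump": {"starts": 0.5, "trips": 0.3, "is running": 0.2},
    "valve": {"opens": 0.6, "fails to close": 0.4},
    "operator": {"acknowledges the alarm": 0.7, "overrides the interlock": 0.3},
}

def generate(prompt_word, max_steps=3):
    """Sample one word at a time, weighted by the probabilities above."""
    words = [prompt_word]
    for _ in range(max_steps):
        options = NEXT_WORD_PROBS.get(words[-1])
        if not options:
            break
        next_word = random.choices(list(options), weights=list(options.values()))[0]
        words.append(next_word)
    return " ".join(words)

# The same "prompt" produces different, equally fluent continuations each run.
for _ in range(3):
    print(generate("the"))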
Where LLMs fit in engineering work
Much of an engineer’s work is knowledge work. It involves:
• Researching technologies and standards.
• Drafting and reviewing specifications and functional descriptions.
• Writing project proposals and status reports.
• Documenting interfaces between SCADA, historians, MES and ERP.
• Communicating decisions to technical and non-technical stakeholders.
In these areas, generative AI can:
• Accelerate research: Provide structured overviews of a topic or technology, which you then verify against official sources.
• Improve communication: Refine emails, reports and presentations for clarity and tone, without changing the underlying technical intent.
• Support early-stage design thinking: Suggest alternative architectures or solution options, including pros and cons, for you to evaluate.
• Reduce documentation drudgery: Help structure standard documents such as the URS, test protocols or commissioning reports based on your inputs.
In all cases, your judgement remains central. The AI is a drafting and structuring tool, not a substitute for engineering responsibility.
From ‘prompt and pray’ to managing a smart intern
The gap between useful and dangerous AI use comes down to how you frame and supervise it. If you issue a vague prompt such as “Write a SCADA design spec” and paste the result into a client document, you are prompting and praying. You may inherit generic or incorrect assumptions, invented details and misaligned standards.
A better mental model is to treat the LLM as a smart intern that is:
• Fast, well-read and tireless.
• Lacking context about your plant and your standards.
• In need of clear instructions, constraints and review.
To manage this effectively, engineers need several key skills:
• Precise problem framing: Provide industry, system context, purpose of the document and intended audience.
• Explicit guardrails: State constraints, required standards context, and where recommendations need verification.
• Structured outputs: Ask first for outlines, lists of considerations or tables before fleshing out detail.
• Iterative refinement: Treat the first answer as a starting point. Challenge it, ask for alternatives and deepen sections that matter.
• Systematic validation: Always cross-check critical details against standards, plant documentation and vendor manuals.
Used this way, the model amplifies your effectiveness while staying within the bounds of professional practice.
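As a simple illustration of precise framing, explicit guardrails and structured outputs, the Python sketch below assembles a structured prompt. The plant context, constraints and formatting requirements shown are hypothetical placeholders; you would substitute your own system details and standards references.

```python
# Illustrative only: a structured prompt with role, context, constraints and
# output format. The plant details and standards references are placeholders.
prompt = """
Act as a senior control systems engineer.

Context:
- Brownfield water treatment plant; the existing SCADA is being replaced.
- Audience: plant engineering manager and IT/OT team (mixed technical levels).

Task:
- Draft an outline (headings and one-line descriptions only) for a SCADA
  upgrade functional design specification.

Constraints:
- Do not invent standards clause numbers; flag where a standard must be
  confirmed against the official text.
- List any assumptions separately so they can be reviewed.

Output format:
- Numbered headings, maximum two levels deep.
""".strip()

print(prompt)  # Paste into your approved LLM tool, then review the result.
```

Asking for an outline first, rather than a finished document, keeps the review burden manageable and exposes wrong assumptions early.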
The risk of ignoring LLMs
Concerns about privacy, IP and reliability are valid, and organisations need clear policies. But simply opting out carries its own risks. Engineers who learn to use LLMs well will:
• Produce higher-quality documentation and communication faster.
• Explore more design options in the same amount of time.
• Free up time for deeper analysis and plant optimisation.
Those who ignore these tools may find themselves less productive and less competitive than peers who have mastered them. The choice is not between ‘full automation’ and ‘no AI’; it is between informed use and falling behind.
Foundation skills for AI-literate engineers
You do not need to become a data scientist, but you do need some basic AI literacy:
• Conceptual understanding of LLMs: Knowing they are probabilistic models trained on text, not deterministic calculators or live plant databases.
• Awareness of limitations and risks: Recognising hallucinations, lack of site-specific knowledge and sensitivity to prompt wording.
• Prompt engineering basics: Defining the role (e.g., “Act as a senior control systems engineer…”), providing clear objectives, constraints and examples, and specifying the required format and level of detail.
• Workflow judgement: Understanding when AI is suitable (drafting, summarising, exploring options) and when human-only work is essential (safety calculations, final design approvals, regulatory submissions).
• Policy and ethics awareness: Knowing what information you may and may not share, and how your organisation expects AI to be used.
These skills align naturally with engineering thinking: defining problems clearly, understanding system behaviour and managing risk.
Practical steps to build AI capability
To turn theory into capability, engineers and technical managers can:
• Experiment on low-risk tasks: Use an LLM this week to help draft an internal memo, meeting summary or non-critical procedure. Evaluate where it helped and where it failed.
• Invest time in learning: Read accessible material on LLMs, watch explainer videos and complete short online courses or vendor training focused on practical usage.
• Integrate AI into routine work: Ask an LLM to proofread technical emails or documents. Then use it to suggest alternative wording for functional descriptions or to summarise long reports.
• Create internal AI interest groups: Share real examples of how colleagues have used AI, including pitfalls, so that the organisation learns collectively.
• Help shape sensible policies: Work with IT/OT and leadership to define boundaries that protect IP and safety while still enabling everyday use.
• Explore small ‘vibe coding’ projects: If you can script or use low-code tools, let an LLM help you write small utilities that automate documentation or basic data handling. This deepens understanding and delivers immediate value.
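As one hypothetical example of such a utility, the short Python sketch below counts alarm occurrences per tag from a CSV export. The file name and column name are assumptions; adapt them to your own historian or SCADA export format.

```python
import csv
from collections import Counter

# Hypothetical utility: count alarm occurrences per tag from a CSV export.
# Assumes a file "alarms.csv" with at least a "tag" column; adjust the file
# name and column names to match your own export.
def summarise_alarms(path="alarms.csv", top_n=10):
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row["tag"]] += 1
    for tag, n in counts.most_common(top_n):
        print(f"{tag}: {n} alarms")

if __name__ == "__main__":
    summarise_alarms()
```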
AI literacy as a core engineering skill
Generative AI will not replace the need for deep process knowledge, sound engineering judgement or rigorous design. But it is rapidly becoming one of the key enablers of high-impact engineering work.
LLM skills are no longer optional; they are emerging as a core element of professional competence. Engineers who understand how these models work, who can frame effective prompts and who know how to validate outputs will be able to deliver more value, more consistently. Those who rely on ‘prompt and pray’, or who ignore the tools altogether, will face growing risks and diminishing relevance.
The message is clear: get curious, get educated and get hands-on. Treat LLMs as smart interns under your supervision, not as infallible experts. Used with understanding and discipline, generative AI can become a powerful ally in designing, operating and improving the industrial systems that matter.
Gavin Halse

Gavin Halse, an experienced chemical process engineer, has been an integral part of the manufacturing industry since the 1980s. In 1999, he embarked on a new journey as an entrepreneur, establishing a software business that still caters to a global clientele in the mining, energy, oil and gas, and process manufacturing sectors.
Gavin’s passion lies in harnessing the power of IT to drive performance in industrial settings. As an independent consultant, he offers his expertise to manufacturing and software companies, guiding them in leveraging IT to achieve their business objectives. He has contributed his specialised expertise to industries around the world, reflecting his commitment to innovation and excellence in the field of manufacturing IT.
For more information contact Gavin Halse, TechnicalLeaders, [email protected], www.technicalleaders.com, www.linkedin.com/in/gavinhalse
© Technews Publishing (Pty) Ltd | All Rights Reserved