October 30–31, 2025, Virtual Conference
Rakesh More, 1304 East Algonquin Road, Apt 2P, Schaumburg, IL 60173 USA
Artificial intelligence (AI) is becoming central to insurance operations, especially in property and casualty (P&C) claims processing. AI can speed up workflows and improve efficiency, but it also introduces risk: large language models (LLMs) may generate false or misleading information, commonly called hallucinations. These errors can harm customers, cause financial losses, and erode trust in insurance systems. Current safety tools, such as Llama Guard, focus on filtering harmful or toxic content, but they neither ensure factual accuracy nor address insurance-specific needs. This paper examines these gaps and proposes enhancements to NVIDIA NeMo Guardrails that build stronger, domain-specific safeguards. The approach includes defining rules for factual correctness, validating policy details, and blocking unsupported responses. We evaluate these enhancements through experiments with insurance-related queries and measure improvements in accuracy and safety. The results show that customized guardrails significantly reduce misinformation and improve reliability. By integrating these measures, insurers can deploy AI systems that are safer, more accurate, and better aligned with regulatory and customer expectations.
AI safety, insurance automation, guardrails, NeMo, P&C claims, AI hallucination
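To make the guardrail customization described above concrete, here is a minimal sketch using the public NeMo Guardrails API (RailsConfig, LLMRails, register_action). The Colang flow, the check_claim_facts action, and the model choice are illustrative assumptions, not the authors' actual configuration.

```python
# Minimal sketch: a domain-specific rail that blocks unverified claim answers.
# The flow and the check_claim_facts action are illustrative assumptions.
from nemoguardrails import LLMRails, RailsConfig

COLANG = """
define user ask claim question
  "Is water damage covered under my policy?"
  "What is the deductible for my auto claim?"

define bot refuse unverified answer
  "I can't confirm that from your policy documents, so I won't guess. A claims adjuster will follow up."

define flow handle claim question
  user ask claim question
  $verified = execute check_claim_facts
  if not $verified
    bot refuse unverified answer
"""

YAML = """
models:
  - type: main
    engine: openai
    model: gpt-4o-mini
"""

async def check_claim_facts() -> bool:
    """Hypothetical fact check: validate the draft answer against policy records."""
    # A real implementation would query a policy database or a retrieval index.
    return False  # conservative default: treat unverifiable answers as unsupported

config = RailsConfig.from_content(colang_content=COLANG, yaml_content=YAML)
rails = LLMRails(config)
rails.register_action(check_claim_facts, name="check_claim_facts")

response = rails.generate(messages=[
    {"role": "user", "content": "Is flood damage covered under my HO-3 policy?"}
])
print(response["content"])
```

The conservative default (returning False until the answer is validated) means any response that cannot be grounded in policy records is blocked, which mirrors the paper's stated goal of preventing unsupported responses.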
MD Raziul Hasan Nayon, S M Shezan Ahmed, Md Tariq Uz Zaman, MST Ashima Alom Shova, Fatin Khan, Zhengzhou University, China
The rapid integration of Large Language Models (LLMs) into software engineering is transforming how code is written, maintained, and optimized. This article traces the capabilities, limitations, and future directions of LLM-assisted software development. The authors present a detailed survey of LLM use across the development lifecycle, including code generation, bug detection, automated testing, documentation, and natural-language-to-code translation, highlighting the resulting gains in productivity, quality, and accessibility. Comparative analyses show that LLMs outperform traditional rule-based approaches, while stakeholder perspectives from developers, project managers, and executives reveal both enthusiasm for efficiency gains and concern over technical reliability, data privacy, and over-reliance on automation. The main challenges facing LLMs today, including hallucinations, outdated knowledge, limited context windows, and heavy dependence on prompt engineering, are examined alongside proposed solutions. The article then considers progress in multimodal and context-aware systems, autonomous software agents, continuous learning, and human-AI co-creation platforms. If its potential is fully realized, LLM-assisted development could usher in a new era of collaborative, efficient, and innovative software engineering.
Large Language Models (LLMs), automated software development, code generation, bug detection and repair, automated testing, natural language to code translation, prompt engineering, context-aware systems, human-AI collaboration, software engineering automation
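As a concrete illustration of one lifecycle stage the survey covers, the sketch below shows LLM-assisted bug detection through the OpenAI Python SDK. The model name, prompts, and buggy snippet are assumptions made for illustration and are not drawn from the article.

```python
# Minimal sketch: asking an LLM to review a snippet for bugs.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SNIPPET = """
def mean(values):
    total = 0
    for v in values:
        total += v
    return total / len(values)   # bug: raises ZeroDivisionError on an empty list
"""

completion = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any chat-capable model works
    messages=[
        {"role": "system", "content": "You are a code reviewer. Report bugs and suggest a fix."},
        {"role": "user", "content": f"Review this Python function:\n{SNIPPET}"},
    ],
)
print(completion.choices[0].message.content)
```

The same request pattern generalizes to the other stages the survey enumerates (test generation, documentation, natural-language-to-code translation) by changing only the system and user prompts, which is also why the survey flags prompt engineering as a key dependency.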