Setting the standard
To ease surveyors into the world of AI, RICS published a professional conduct standard in September 2025(5).
This will become effective in March 2026 and is designed to ensure AI augments surveying practice while reducing the risk of complacent or irresponsible use.
The aim is to push quantity surveyors up the value chain – their expertise is ever more important in the world of AI, and they should remain the ultimate voice of authority when extracting meaning from data and using it to inform decision-making.
The RICS standard stresses the inherent risk of error and bias in AI systems, stemming from mistakes or bias in the algorithms they use.
It recommends surveyors apply professional judgement (i.e. knowledge, skills, experience and professional scepticism) to assess the reliability of AI outputs, stipulating that RICS members and regulated firms must document this in writing.
Written decisions should cover the assumptions made, key areas of concern about reliability (including the reliability of underlying datasets), the reasons for those concerns, whether the concerns can be mitigated, and the impact of each concern on the reliability of the output.
This record should be accompanied by a statement confirming whether the output can reasonably be used for its intended purpose.
RICS’s standard has arrived at a crucial time, when the urgency to use AI to leapfrog competitors on the global stage is seeping into the everyday practices of businesses across sectors.
Surveyors are not immune to this pressure, but their duty to ensure construction projects stay within budget demands a more disciplined approach to applying AI.
Effective use of AI systems ultimately relies on high-quality prompts (i.e. supplying accurate data and asking the right questions) and on the professional determining how sensible the outputs are.
Governance and training go hand in hand with this.
AI systems are developing quickly – agentic AI (software that uses the advanced natural language processing techniques of LLMs to solve complex problems) is progressing ever closer to simulating human behaviour and thought processes(6).
It’s imperative that cost professionals’ comprehension and skills keep pace with the rate of change and that human accountability is maintained. Regular updates to industry and business-level governance will support this.
Smarter AI systems will be able to present information with increasing confidence and conviction, but relying solely on these outputs for cost plans, benchmarking or valuation assessments comes with huge financial and reputational risks.
Professional indemnity insurance policies don’t automatically provide cover for errors made by AI where liability is unclear, and therefore cannot be relied on as a backup plan for overreliance on AI or misplaced judgement.
That’s not to say we shouldn’t embrace AI systems – there’s huge potential for them to solve some of the biggest cost challenges facing construction.
For example, a report from contractor Mace suggests AI has a place in breaking the cycle of late, over-budget projects and restoring the principles of clear direction, trust, the right incentives, accountability and timely decision-making in major programmes(7).
It gives an example of using AI to tackle optimism bias and improve reference class forecasting.
Optimism bias is a common source of cost headaches in construction projects, particularly in big national infrastructure schemes, the fallout from which continues to dampen investment sentiment in the UK.
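To make the idea concrete, the sketch below shows one simple form of reference class forecasting: a base estimate is uplifted to a chosen percentile of the cost-overrun ratios observed in a reference class of comparable past projects – the kind of adjustment an AI system could help automate at scale. The figures, percentile choice and helper function are illustrative assumptions, not taken from the Mace report.

```python
# Illustrative sketch of reference class forecasting: adjust a base cost
# estimate using the distribution of cost overruns observed in a reference
# class of comparable past projects. All figures below are hypothetical.

import statistics

def percentile(sorted_values, p):
    """Return the p-th percentile (0-100) using linear interpolation."""
    if not sorted_values:
        raise ValueError("reference class is empty")
    k = (len(sorted_values) - 1) * p / 100
    lower = int(k)
    upper = min(lower + 1, len(sorted_values) - 1)
    return sorted_values[lower] + (sorted_values[upper] - sorted_values[lower]) * (k - lower)

# Hypothetical reference class: outturn cost / estimated cost for past projects.
overrun_ratios = sorted([1.02, 1.10, 1.15, 1.18, 1.25, 1.32, 1.40, 1.55])

base_estimate = 250_000_000   # base cost estimate in GBP (hypothetical)
acceptable_exceedance = 20    # accept a 20% chance of exceeding the budget

# Uplift the estimate to the 80th percentile of the reference class,
# counteracting optimism bias in the bottom-up estimate.
uplift = percentile(overrun_ratios, 100 - acceptable_exceedance)
adjusted_estimate = base_estimate * uplift

print(f"Median overrun ratio: {statistics.median(overrun_ratios):.2f}")
print(f"P80 uplift factor:    {uplift:.2f}")
print(f"Adjusted estimate:    £{adjusted_estimate:,.0f}")
```

In practice the hard part is assembling a relevant, well-governed reference class – which is exactly where professional judgement over the underlying data remains essential.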
Used with pragmatism and governance, AI is likely to be transformative in enabling construction professionals to overcome such hurdles and improve cost management outcomes.
AI is fundamentally not here to replace professional judgement – it’s here to make it stronger.