- Review existing codebases to determine where AI-assisted tools can be effectively applied, taking into account architecture, language mix, and code complexity
- Explore and benchmark AI solutions that support different stages of the engineering lifecycle, such as automated documentation, test creation, and code quality analysis
- Run practical experiments using AI tools on selected components to assess:
  - Output quality and accuracy
  - Test coverage and completeness
  - Time savings compared to traditional methods
- Work closely with engineering teams to:
  - Identify suitable candidates for pilot initiatives
  - Validate AI-generated outputs
  - Feed insights into broader engineering practices
- Contribute to the rollout of AI-enabled engineering practices, including:
  - Defining success criteria
  - Establishing guardrails and review processes
  - Highlighting cost-benefit trade-offs
- Investigate how AI capabilities can be embedded into existing development workflows and delivery pipelines
- Evaluate the organization's readiness to adopt AI-driven practices, and communicate findings to both technical and non-technical stakeholders
What We’re Looking For
Core Experience
- Solid background in software engineering (typically 6+ years)
- Practical exposure to AI-powered development tools in a professional setting
- Experience assessing and trialing new technologies in structured environments, with clear evaluation criteria
- Familiarity with applying AI to:
  - Generate or enhance technical documentation
  - Support automated testing efforts
- Strong understanding of software delivery practices and where automation can add value without compromising quality
- Working knowledge of unit testing approaches and frameworks across common stacks (e.g., .NET or JavaScript ecosystems)
- Ability to balance technical possibilities with real-world constraints such as team maturity and operating models
- Confident communicator, able to translate technical findings into meaningful insights for stakeholders
- Awareness of considerations around data usage, security, and intellectual property when leveraging AI tools
Additional Advantage
- Exposure to large language model (LLM) use cases in engineering, such as code interpretation or rules extraction
- Experience working with modern front-end frameworks or full-stack environments
- Familiarity with tools that assist in automated code review or quality checks
- Background in enterprise-grade platforms or ecosystems (e.g., Microsoft stack)
- Understanding of prompt design and optimisation for technical use cases
- Experience with behaviour-driven or acceptance-driven testing approaches
- Awareness of techniques that improve AI context handling (e.g., retrieval-based approaches)
- Exposure to vendor/tool evaluation from a security or compliance standpoint
- Contributions to internal or external knowledge sharing on AI in engineering
Reg No. R1768414
BeathChapman Pte Ltd
Licence no. 16S8112