Technical / Engineering
Feb 17, 2026
Conversational AI Engineer
Manila, Philippines
Full-time
In Office
About Our Client
Our client is a global industry leader in the BPO and customer-experience sector, with major delivery centers across the USA, Mexico, and the Philippines. Serving numerous Fortune 500 and enterprise customers, they consistently outperform competitors on service quality, operational delivery, and client satisfaction.
With 13,000+ employees, 24/7 operations, and a collaborative executive team, the company continues to expand rapidly. Its culture balances client excellence with employee engagement, empowering leaders and teams to perform at their best while fostering innovation and accountability.
The Opportunity
The Conversational AI Engineer will design, build, and deploy production-grade conversational AI systems that enhance customer interactions across a large-scale global enterprise.
This is a hands-on engineering role focused on real-world delivery. You will architect and operationalize AI agents powered by large language models, Retrieval-Augmented Generation (RAG), and agentic frameworks that support customer service, automation, and operational efficiency at scale.
You will work closely with engineering, data, and operations teams to ensure solutions are reliable, measurable, secure, and aligned to enterprise standards.
The Ideal Profile
You are a strong Python engineer with proven experience delivering AI systems into production. You understand that conversational AI is not just about prompts — it is about architecture, observability, cost control, model validation, and lifecycle management.
You have hands-on experience with LLM-powered agents and RAG systems operating in live environments. You are comfortable working across cloud ecosystems and implementing disciplined MLOps practices that ensure performance and reliability.
Working Model & Eligibility
Flexible working model. Hybrid or remote considered depending on location.
US work authorization required. No visa sponsorship available.
What You’ll Lead
Conversational AI & LLM Systems
Design and deploy conversational AI solutions leveraging GPT-4/5, Gemini, and related large language models.
Build and operationalize AI agents using RAG and agentic RAG/RAT architectures in production environments.
Ensure performance optimization, response accuracy, latency control, and cost efficiency across deployed systems.
Machine Learning & Model Development
Build and validate machine learning models aligned to conversational use cases.
Establish rigorous evaluation methodologies to measure model quality, accuracy, and business impact.
Deploy models into scalable cloud environments with defined monitoring thresholds and SLAs.
MLOps & Production Engineering
Design and manage ML pipelines including data ingestion, preprocessing, training, validation, and deployment.
Implement versioning, monitoring, drift detection, and rollback strategies to ensure stability and resilience.
Leverage modern development environments including Git, Anaconda, pip, and Docker to maintain reproducible builds.
Deploy and manage AI solutions across Azure AI and Google ecosystems.
Enterprise Collaboration
Partner with CX, operations, and IT teams to translate business needs into production AI solutions.
Document architecture, workflows, and model governance to ensure long-term sustainability and compliance.
What You’ll Bring
Proven experience delivering production-grade conversational AI systems.
Hands-on experience with AI agents, RAG, and Agentic RAG/RAT in live environments — not prototypes.
Experience working with large language models such as GPT-4/5 and Gemini.
Strong Python engineering skills with demonstrated production delivery.
Hands-on experience with Azure AI, Google AI platforms, and cloud-native services.
Experience building, validating, and deploying machine learning models.
Strong MLOps capabilities including pipeline orchestration, versioning, monitoring, drift detection, and rollback.
Proficiency in Git, Anaconda, pip, Docker, and modern cloud development environments.
Bachelor’s degree in Computer Science, Engineering, Data Science, or related field. Advanced degree preferred.
How Success Will Be Measured
Production Deployment: Stable, scalable conversational AI systems delivered on time and meeting defined SLAs.
Model Quality: Measurable improvements in response accuracy, containment rate, or automation performance.
Operational Stability: Effective monitoring and drift detection minimizing production incidents.
Business Impact: Demonstrated efficiency gains, improved CX metrics, or reduced manual handling attributable to AI systems.