10Pearls

Senior Software Consultant - LLMOps, LangChain, Azure OpenAI - Afternoon Shift

Karachi, Lahore, Islamabad, Pakistan - Full Time

Company Overview
10Pearls is an award-winning, end-to-end digital innovation company that helps businesses imagine and build the future. We are proud that 10Pearls was named winner of the Best Tech Work Culture Timmy Award in Washington, DC by Tech in Motion, recognized on the Inc. 5000 list of fastest-growing companies, and ranked the #1 Most Diverse Midsize Company in Greater Washington.

We partner with businesses to help them transform, scale, and accelerate by adopting digital and exponential technologies. Our work ranges from creating highly usable, secure digital experiences and mobile and software products to helping businesses modernize through cloud adoption and development, and the digitalization of their business processes. Our clientele is highly diverse, including Global 1000 enterprises, mid-market businesses, and high-growth start-ups.

But those are just the facts. What makes us unique is that we have true heart and soul. We maintain a strong focus on a double bottom line and actively support and engage with the communities where we live and work to make the world a better place. In a nutshell, we believe in doing well while doing good, and we know how to balance the two.

Role
10Pearls is seeking a Senior AI Engineer to design, build, and scale production-grade Generative AI systems. This role is ideal for an engineer who has moved beyond experimentation and has deep expertise in designing and operating production-ready RAG pipelines using modern cloud AI platforms.
You will work at the intersection of LLMOps, Retrieval-Augmented Generation (RAG), Azure AI Foundry, orchestration frameworks, and cloud AI infrastructure, building intelligent workflows that power customer-facing products across web, messaging, and enterprise platforms. This is a hands-on engineering role with direct ownership of AI system design, performance, reliability, and evolution.

Responsibilities
• Design and implement LLM-powered applications using frameworks such as LangChain and LangGraph
• Architect and build production-grade RAG pipelines, including ingestion, chunking strategies, embedding optimization, retrieval tuning, and grounding
• Leverage Azure AI Foundry and Azure OpenAI services to build scalable, enterprise-ready AI solutions
• Integrate and optimize Azure AI Search / vector-based retrieval systems for high-performance semantic search
• Develop robust prompt engineering strategies, including prompt templates, versioning, and evaluation frameworks
• Implement LLMOps pipelines for deployment, monitoring, experimentation, and continuous improvement
• Optimize latency, accuracy, scalability, and cost efficiency across LLM workflows
• Utilize Redis or similar caching mechanisms to manage conversational memory and improve response performance
• Collaborate closely with product managers, backend engineers, and frontend teams to deliver end-to-end AI-driven features
• Monitor system performance in production and proactively address quality, safety, and reliability issues
• Apply best practices in scalability, security, observability, and fault tolerance for AI systems


Requirements
• Bachelor’s or Master’s degree in Computer Science, AI, Software Engineering, or a related field
• 4–8 years of experience in AI/ML engineering, backend engineering, or applied data systems
• Strong hands-on experience building LLM-based / Generative AI applications in production environments
• Proven expertise in designing and deploying production-ready RAG architectures
• Strong experience with Azure AI Foundry, Azure OpenAI, and Azure AI Search
• Excellent proficiency in Python, including building scalable APIs and services
• Deep understanding of vector embeddings, semantic search, and retrieval optimization techniques
• Experience with LangChain, LangGraph, or similar orchestration frameworks
• Exposure to LLMOps / MLOps practices (deployment, monitoring, evaluation, versioning)
• Strong problem-solving skills with ability to evaluate system trade-offs at scale


Nice to Have
• Experience designing multi-agent or multi-step AI workflows
• Hands-on experience with vector databases and advanced retrieval tuning
• Experience optimizing LLM cost vs. performance trade-offs in production
• Familiarity with CI/CD pipelines, cloud infrastructure, and observability tools
• Experience building enterprise-grade or customer-facing AI products

 
Apply: Senior Software Consultant - LLMOps, LangChain, Azure OpenAI - Afternoon Shift
* Required fields
First name*
Last name*
Email address*
Location *
Phone number*
Resume*

Attach resume as .pdf, .doc, .docx, .odt, .txt, or .rtf (limit 5MB) or paste resume

What’s your highest level of education completed?*
College or University*
GPA
LinkedIn profile URL*
Total years of experience in AI/ML or backend development?*
Do you have hands-on experience with LLMs (Azure OpenAI / OpenAI APIs)? (Yes/No - Specify Years of Experience)*
Have you built RAG pipelines or AI workflows? Briefly describe one LLM/RAG project you’ve built (tools used + your role)*
Experience with LangChain, LangGraph, or similar frameworks? (Yes/No - specify)*
Strong in Python development? (Yes/No - Specify Years of Experience)*
Have you worked on prompt engineering / prompt versioning? (Yes/No - Specify Years of Experience)*
Experience with Azure AI services (Azure OpenAI, AI Search)? (Yes/No)*
Have you deployed AI solutions in production environments? (Yes/No - explain)*
Experience with caching tools like Redis? (Yes/No)*
Please mention your current salary*
Please mention your expected salary and notice period*
Do you have hands-on experience with Azure AI Foundry / Azure OpenAI?*
Experience with Azure AI Search or vector-based retrieval systems? (Yes/No)*
Experience with LLMOps (deployment, monitoring, prompt/version management)? (Yes/No)*