• Architected an agentic enterprise AI coding assistant (“LSEG AI Assistant”) for VS Code and JupyterLab; engineered a LangGraph-driven orchestration layer that interfaces with an internal Model Context Protocol (MCP) server, allowing the LLM agent to securely execute internal tools and fetch backend data context.
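The orchestration idea behind this bullet can be sketched in a few lines: an agent loop that routes model tool calls to an MCP-style server until the model produces a final answer. All names here (`runAgent`, `McpClient`, the message shapes) are hypothetical stand-ins, not LangGraph's or the internal MCP server's actual APIs:

```typescript
// Minimal sketch of an agent orchestration loop dispatching model tool calls
// to an MCP-style tool server. Names and shapes are illustrative assumptions;
// the real assistant uses LangGraph and an internal MCP server.

type ToolCall = { name: string; args: Record<string, unknown> };
type ModelTurn = { toolCall?: ToolCall; answer?: string };

interface McpClient {
  invoke(name: string, args: Record<string, unknown>): Promise<string>;
}

async function runAgent(
  prompt: string,
  model: (history: string[]) => Promise<ModelTurn>,
  mcp: McpClient,
  maxSteps = 5,
): Promise<string> {
  const history = [prompt];
  for (let step = 0; step < maxSteps; step++) {
    const turn = await model(history);
    if (turn.answer !== undefined) return turn.answer; // terminal state
    if (turn.toolCall) {
      // Execute the requested tool server-side and feed the result back
      // into the conversation so the model can ground its next turn.
      const result = await mcp.invoke(turn.toolCall.name, turn.toolCall.args);
      history.push(`tool:${turn.toolCall.name} -> ${result}`);
    }
  }
  return "max steps reached";
}
```

Keeping tool execution behind the MCP client is what lets the agent call internal tools without the extension embedding backend credentials.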
• Designed a Retrieval-Augmented Generation (RAG) pipeline built on vector embeddings and evaluated Parameter-Efficient Fine-Tuning (PEFT/LoRA) strategies to ground LLM suggestions in proprietary financial codebases, significantly reducing model hallucinations.
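The retrieval step of such a pipeline reduces to ranking snippets by embedding similarity and prepending the top-k matches to the prompt. A minimal sketch, assuming precomputed embeddings (the embedding model and snippet store are stand-ins):

```typescript
// Hedged sketch of RAG retrieval: rank code snippets by cosine similarity
// of their embeddings, then build a grounded prompt from the top-k hits.

type Doc = { text: string; embedding: number[] };

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function retrieve(query: number[], docs: Doc[], k: number): Doc[] {
  // Sort a copy descending by similarity to the query embedding.
  return [...docs]
    .sort((x, y) => cosine(query, y.embedding) - cosine(query, x.embedding))
    .slice(0, k);
}

function groundedPrompt(question: string, context: Doc[]): string {
  // Anchoring the model in retrieved snippets is what curbs hallucinated APIs.
  return `Context:\n${context.map(d => d.text).join("\n")}\n\nQuestion: ${question}`;
}
```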
• Built complex, state-driven frontend applications using advanced React patterns (custom hooks, Context API) and strict TypeScript; delivered reusable component libraries that bridge web and IDE Webview environments.
• Engineered responsive, data-intensive user interfaces for real-time financial data streams; tuned the browser rendering pipeline, DOM reconciliation, and bundle sizes to keep interaction latency under 100 ms.
• Spearheaded the DevEx architecture for LSEG’s quantitative analytics platform by designing a scalable TypeScript monorepo, reducing technical debt and accelerating cross-platform feature delivery.
• Optimized LLM inference and system performance by implementing a resilient multi-tier caching strategy (8-hour persisted ICC data, in-memory package cache), reducing redundant API calls.
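The multi-tier strategy can be sketched as a cache that checks an in-memory map first, falls back to a persisted store with an 8-hour TTL, and only then hits the origin. The class and store here are illustrative assumptions, not the product's actual code:

```typescript
// Two-tier cache sketch: memory tier, then persisted tier (TTL-bounded),
// then the origin fetch. Hits are promoted back into the memory tier.

type Entry<T> = { value: T; expires: number };

class TieredCache<T> {
  private memory = new Map<string, Entry<T>>();

  constructor(
    private persisted: Map<string, Entry<T>>, // stand-in for disk-persisted storage
    private ttlMs: number,
    private now: () => number = Date.now,
  ) {}

  async get(key: string, fetchOrigin: () => Promise<T>): Promise<T> {
    const hit = this.memory.get(key) ?? this.persisted.get(key);
    if (hit && hit.expires > this.now()) {
      this.memory.set(key, hit); // promote persisted hits to the memory tier
      return hit.value;
    }
    const value = await fetchOrigin(); // miss or expired: one backend call
    const entry = { value, expires: this.now() + this.ttlMs };
    this.memory.set(key, entry);
    this.persisted.set(key, entry);
    return value;
  }
}
```

The persisted tier is what survives editor restarts, so an 8-hour TTL means at most a few origin fetches per working day per key.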
• Owned end-to-end security and release engineering; designed OAuth 2.0 + PKCE identity flows within IDE constraints, established GitLab CI/CD pipelines with SAST scanning, and monitored AI system health via Application Insights.
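The PKCE piece of that identity flow exists precisely because an IDE extension cannot keep a client secret: the client proves possession of a random `code_verifier` by sending its S256 hash as the `code_challenge`. A self-contained sketch using Node's crypto (endpoints and client IDs would come from the identity provider):

```typescript
import { createHash, randomBytes } from "crypto";

// PKCE (RFC 7636) helper sketch: generate a code_verifier and its S256
// code_challenge, both base64url-encoded without padding.

function base64url(buf: Buffer): string {
  return buf
    .toString("base64")
    .replace(/\+/g, "-")
    .replace(/\//g, "_")
    .replace(/=+$/, "");
}

function createPkcePair(): { verifier: string; challenge: string } {
  const verifier = base64url(randomBytes(32)); // random, kept by the client
  const challenge = base64url(createHash("sha256").update(verifier).digest()); // sent in the authorize request
  return { verifier, challenge };
}
```

The authorization request carries `code_challenge` (+ `code_challenge_method=S256`); the token exchange later presents the plain `code_verifier`, which the server hashes and compares.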