Staff AI Application Engineer, Enterprise AI
GE Healthcare
Job Description
Role Overview
GE HealthCare is accelerating its transformation through a series of strategic “AI Big Bets” in Commercial excellence, Logistics optimization, Inventory management, and Manufacturing innovation. The Enterprise AI team, part of the Chief Data and Analytics Office, is at the forefront of delivering robust, enterprise-grade AI and ML solutions that drive measurable business impact at scale.
As the Staff AI Application Engineer, you will develop and deliver innovative GenAI and Agentic AI solutions that generate actionable business insights and transform key areas within GE HealthCare, including Finance, Commercial, Supply Chain, Quality, Operational Excellence and Lean, and Manufacturing. We are seeking a highly skilled and motivated AI Application Engineer to join our dynamic team and play a pivotal role in shaping and executing our AI strategy. You'll collaborate across a unified, cross-functional delivery organization, partnering with experts in data engineering, ML engineering, analytics, and GenAI development to solve complex business challenges and deliver scalable solutions.
Core Responsibilities
- Design and develop AI-powered applications, integrating machine learning and generative models into enterprise-grade software products and internal tools. Own the full software development lifecycle (SDLC), including unit, integration, and end-to-end testing.
- Frontend: Develop modern, intuitive interfaces for AI applications (React/Next.js, TypeScript, or equivalent) with a strong focus on usability, accessibility, and AI explainability.
- Backend: Implement scalable and secure back-end services (FastAPI, Flask, or Node.js) to expose AI capabilities (LLMs, RAG pipelines, AI agents) through standardized APIs.
- Translate data science prototypes and GenAI models (LLMs, diffusion models, transformers) into scalable applications or services with intuitive user interfaces and reliable back-end infrastructure.
- Collaborate with insight leaders and business stakeholders on requirements gathering, project documentation, and development planning.
- Partner with MLOps and GenAIOps teams to deploy, monitor, and continuously improve AI applications within standardized CI/CD pipelines.
- Design and implement integrations using REST, GraphQL, and gRPC; work with cloud-based AI APIs (Azure, AWS, GCP) and enterprise data sources.
- Integrate cloud-native AI services (AWS Bedrock, Azure OpenAI) and open-source frameworks (LangChain, LangGraph) into enterprise environments.
- Monitor application performance and user adoption, iterating on models and workflows to enhance usability and business impact.
- Optimize application performance, infrastructure efficiency, and LLM utilization.
- Document architectures, APIs, and deployment processes to ensure transparency, reusability, and maintainability.
Experience Requirements
- Education: Master’s or PhD degree (or equivalent experience) in Computer Science, Software Engineering, Artificial Intelligence, or related STEM field.
- Experience: 3–5 years of hands-on experience developing and deploying AI-powered or data-driven applications in enterprise environments.
- Advanced proficiency in Python, plus strong working knowledge of TypeScript/JavaScript and at least one modern web framework (React, Next.js, FastAPI, Flask).
- Proven track record implementing end-to-end AI systems, integrating ML/LLM models into scalable microservices or enterprise applications.
- Strong experience in ML/GenAI frameworks (TensorFlow, PyTorch, LangChain, AutoGen, Semantic Kernel) and cloud-native AI platforms (AWS Bedrock, Azure OpenAI).
- Working knowledge of cloud environments (AWS, Azure, or GCP), with deep experience in Docker, Kubernetes, and CI/CD automation for AI workloads.
- Demonstrated experience with RAG pipelines, vector databases, and document retrieval frameworks.
- Solid understanding of LLMOps / GenAIOps integration patterns, model evaluation, and prompt optimization workflows.
- Strong collaboration skills and the ability to communicate effectively within cross-functional teams.
- Ability to mentor junior engineers, perform code reviews, and contribute to architectural decisions.
- Strong problem-solving, debugging, and analytical skills, with clear and persuasive communication to technical and business audiences.