About Kernel Labs by Kuriko IWAI - ML Engineering & Research
Deep dives into LLM deployment, CI/CD for ML, and Reinforcement Learning through 6+ production-grade projects and system architectures.
Explore:
- AI Engineering Masterclass: Build eight AI systems to master LLM techniques.
- Research & Blogs
- Theory & Foundation: Loss landscapes, optimization convergence, statistical models.
- MLOps: Enterprise-grade engineering of ML lineage.
- Learning Scenario: Technical breakdown of specialized learning schemas.
- LLM Engineering: Transformer, tokenization strategies, fine-tuning, and inference optimization.
- Agentic AI: Vector DB embedding strategies, RAG, and agentic decision logic.
- Labs: Experiments with ML systems, including walkthrough tutorials and code snippets.
- Solutions: ML system and data pipeline engineering, and AI audit services.
Hosted by Kuriko IWAI

Kuriko IWAI is a seasoned Applied Machine Learning Engineer with a proven track record in building Agentic AI SDKs (60K+ downloads) and high-concurrency ML architectures. She bridges business and technology by deploying ROI-driven ML systems and large-scale inference pipelines across AWS, Azure, and GCP platforms.
As founder of Kernel Labs, she has architected production-grade ML systems from scratch, including distributed LLM agent networks and Bayesian demand modeling solutions. Her expertise spans PyTorch, TensorFlow, HuggingFace, and MLOps tools like DVC and Airflow, with hands-on experience at leading companies including Indeed, Meta, The Walt Disney Company, Kearney, and Mineski Global, where she has driven scalable AI implementations, infrastructure optimizations, and product-driven growth strategies.
She holds an MBA from INSEAD (2018).
Tech Stack
- AI/ML: PyTorch, TensorFlow, Keras, Scikit-learn, HuggingFace, LangChain, CrewAI
- MLOps Tools: DVC, Prefect, Airflow, Spark, Git, Docker
- Data Science: Pandas, NumPy, Matplotlib, Excel, Tableau, Power BI, Google Analytics
- Cloud: AWS (Lambda, SageMaker, Bedrock), Azure, Google Cloud Platform (GCP)
- Programming Languages: Python, JavaScript, R, SQL, Java
Engineering Solutions
- End-to-end ML System Development: I architect and deploy end-to-end ML systems for your downstream services.
- Infrastructure Architecture: I design reliable ML systems to ensure your models perform consistently in production.
- ETL Pipeline Engineering: I design data pipelines to cleanse, structure, and optimize raw data for model training.
- Reliability & Security Audit: I audit your AI pipeline using evaluation frameworks and quantify system faithfulness.
- Technical IP & Content: I create technical content that turns complex ML concepts into actionable insights.
Featured
Background
I am an Applied Machine Learning Engineer, specializing in the architecture and deployment of end-to-end ML systems, distributed agentic frameworks, and high-concurrency inference pipelines.
The Trajectory: From Strategy to Systems
My career began in high-growth product leadership, where I served as Director of the Digital Division at a Series-A gaming startup. While I successfully scaled infrastructure to 3M users and generated $M ARR, I identified a critical industry failure point: the black box of vendor proposals. So I pivoted to software engineering to own the how and why of the stack.
This transition evolved into a focus on Applied ML, where I have since:
- Architected Agentic Ecosystems: Engineered a Python SDK for distributed LLM networks that achieved 60K+ downloads (Top 25% globally) and implemented DAG-based orchestration to enhance workflow reliability.
- Built 0 → 1 ML Systems: As a Senior MLE/Founder at Kernel Labs, I deployed production-grade systems, from Bayesian demand modeling to LoRA multi-adapter orchestration, reducing model deployment latency by 30%.
- Scaled Large-Scale Inference: Currently at Indeed, I lead a mission-critical sprint architecting agentic workflows that process job descriptions daily for automated skill extraction and matching.
Technical Focus
I prioritize speed, impact, and high-availability design, bridging the gap between experimental notebooks and production-ready services. My toolkit is centered on Python and PyTorch, backed by a deep MLOps stack (DVC, Airflow, SageMaker) and a history of securing significant cloud infrastructure grants (e.g., a $200K AWS technical grant).
Beyond the codebase, I enjoy sharing technical content. I regularly publish deep dives that synthesize complex ML concepts and have delivered AI Engineering Masterclasses on building scalable LLM infrastructure.


