| Technical Profile | | |
|---|---|---|
| ML/AI Research | Computer Vision • NLP • Time-Series • Transfer Learning | PyTorch, Transformers, CLIP, Diffusion Models |
| Performance | CUDA • ONNX • Compute Shaders • Quantization | Kernel fusion, memory bandwidth optimization |
| Languages | Python • C • Rust • Odin | ML model development lifecycle, FFI bindings, SIMD-optimized computation, bare metal when needed |
| Infrastructure | Docker • AWS • MLflow • DuckDB | Cloud deployment, reproducible environments, experiment tracking, ETL |
| Trajectory | | |
|---|---|---|
| 2025–Present | AI Principal Engineer | EAGLYS, Tokyo, Japan |
| 2025 | Project AI Lead | |
| 2022–2025 | AI Research Engineer | |
| 2021–2022 | ML Engineer | Nomura Research Institute, Jakarta, Indonesia |
| 2021–Present | Independent Consultant | Self-employed |
| 2016–2021 | Integrated MS-PhD, Industrial Information Systems Engineering | HUFS, Yongin, South Korea |
| Technical Philosophy | |
|---|---|
| Core Principle | Complexity is debt. Every abstraction must earn its place. Every dependency is a liability. |
| Methodology | Measure first, optimize on evidence. Start from fundamentals, not frameworks. |
| Performance | Most slowness comes from broken defaults. Fix those first, then go deep. |
| Avoid | Premature abstraction • Optimization without profiling • Accepting "that's just how it is" |