Chris Schmidt

Machine Learning Engineer, HII Mission Technologies

Five years of professional experience across data engineering, software development, and ML engineering. Currently supporting the F-35 Joint Program Office's data modernization initiative at HII Mission Technologies on AWS GovCloud.

MS in Applied Mathematics (Towson University, 2021). Pursuing an MSE in AI Engineering at Johns Hopkins University (expected December 2028). The projects here are personal learning work and JHU coursework.

🎓 MSE AI Engineering — Johns Hopkins University
📐 MS Applied Mathematics
⚙️ Inference Optimization & Deployment
🧠 NLP, LLMs & Deep Learning
📊 Applied ML Research

Currently exploring

Real-time ML pipelines on streaming telemetry (built)
Unsupervised anomaly detection without labeled data (built)
LLM interpretability: from detection to explanation (building)
Evaluation harness design for agentic LLM outputs (researching)

Selected Projects

Highlights from my ML/AI project portfolio — each structured as a case study with problem context, approach, and measurable outcomes.

Production Inference Optimization Study

Apr 2026 · Featured

ONNX export, INT8 quantization, and adaptive batching applied to a production sentence-transformer. ~9× throughput improvement on CPU with a 74% model size reduction, with accuracy validated against the original model.

~9× throughput (602 req/s)
74% model size reduction
2.9ms p50 latency
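The adaptive-batching idea behind the throughput gain can be sketched in a few lines: hold incoming requests until either a maximum batch size is reached or a short wait deadline expires, then flush the batch to the model. This is an illustrative stdlib-only sketch, not the study's implementation; `adaptive_batches`, `max_batch`, and `max_wait_s` are hypothetical names.

```python
import time
from queue import Queue, Empty

def adaptive_batches(requests: Queue, max_batch: int = 32, max_wait_s: float = 0.005):
    """Yield batches: flush when max_batch items arrive or max_wait_s elapses."""
    while True:
        try:
            first = requests.get(timeout=0.2)  # block briefly for the first item
        except Empty:
            return  # queue drained; stop (a server would keep looping)
        batch = [first]
        deadline = time.monotonic() + max_wait_s
        while len(batch) < max_batch:
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                break
            try:
                batch.append(requests.get(timeout=remaining))
            except Empty:
                break
        yield batch

# usage: 70 queued requests should flush as batches of 32, 32, and 6
q = Queue()
for i in range(70):
    q.put(f"req-{i}")
batches = list(adaptive_batches(q, max_batch=32, max_wait_s=0.005))
```

The trade-off is the usual one: a larger `max_wait_s` improves batch occupancy (and throughput) at the cost of added tail latency for the first request in each batch.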

AeroIntel — Real-Time Aviation Intelligence Dashboard

Apr 2026 · Featured

A real-time anomaly detection pipeline over live ADS-B telemetry with no labeled training data. Fuses two commercial feeds, applies Kalman filtering to maintain state across sparse position updates, detects orbital and holding patterns with DBSCAN, and scores deviations with IsolationForest. Claude explains flagged aircraft in plain language. Deployed continuously on Fly.io with CI/CD.

Full ML pipeline: Kalman + DBSCAN + IsolationForest, no labeled data required
11,000+ aircraft processed per 60-second inference cycle
4 real anomaly explanations from live military flight data
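The Kalman-filtering step above keeps a coherent position/velocity estimate even when ADS-B position reports drop out. A minimal 1-D constant-velocity sketch of that idea (not AeroIntel's code; the state layout and noise values are illustrative assumptions):

```python
import numpy as np

def kalman_track(observations, dt=1.0, q=0.01, r=25.0):
    """
    1-D constant-velocity Kalman filter. `observations` is a list of
    (position, is_observed) pairs; on gaps (is_observed=False) the filter
    only predicts, carrying state across sparse position updates.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition for (pos, vel)
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.eye(2)                       # process noise
    R = np.array([[r]])                     # measurement noise
    x = np.zeros((2, 1))                    # initial state estimate
    P = np.eye(2) * 1000.0                  # high initial uncertainty
    estimates = []
    for z, observed in observations:
        x = F @ x                           # predict
        P = F @ P @ F.T + Q
        if observed:
            y = np.array([[z]]) - H @ x     # innovation
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
            x = x + K @ y                   # update
            P = (np.eye(2) - K @ H) @ P
        estimates.append(float(x[0, 0]))
    return estimates

# usage: steady motion at 10 units/step with two dropped updates mid-track
obs = [(i * 10.0, i not in (4, 5)) for i in range(10)]
est = kalman_track(obs)
```

During the two dropped updates the filter coasts on its velocity estimate, which is exactly what makes downstream clustering (DBSCAN) and scoring (IsolationForest) usable on sparse feeds.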

Optimizer Deep Dive: GD to Adam

Spring 2026 · Featured

Batch gradient descent, SGD with momentum, and Adam implemented in pure NumPy, plus a neural network with backpropagation trained on MNIST. Includes loss-landscape visualization and initialization-sensitivity experiments. 29 tests.

29 passing tests
~93% MNIST accuracy (pure NumPy)
3 optimizers from scratch
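The core of a from-scratch Adam is a single update rule with bias-corrected moment estimates. A minimal NumPy sketch of that rule (illustrative of the technique, not this project's exact code; `adam_step` and the state dict are assumed names):

```python
import numpy as np

def adam_step(params, grads, state, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: EMA of gradients (m) and squared gradients (v),
    bias-corrected, then a scaled step."""
    t = state["t"] + 1
    m = beta1 * state["m"] + (1 - beta1) * grads
    v = beta2 * state["v"] + (1 - beta2) * grads**2
    m_hat = m / (1 - beta1**t)              # bias correction for m
    v_hat = v / (1 - beta2**t)              # bias correction for v
    params = params - lr * m_hat / (np.sqrt(v_hat) + eps)
    return params, {"m": m, "v": v, "t": t}

# usage: minimize f(w) = (w - 3)^2 starting from w = 0
w = np.array([0.0])
state = {"m": np.zeros(1), "v": np.zeros(1), "t": 0}
for _ in range(2000):
    grad = 2 * (w - 3.0)
    w, state = adam_step(w, grad, state, lr=0.05)
```

The bias correction matters early in training: without it, `m` and `v` start near zero and the first steps are artificially small.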

Parameter-Efficient Fine-Tuning Study (LoRA)

Apr 2026 · Featured

Systematic LoRA rank ablation on GPT-2 for dialogue summarization. Trains adapters across ranks {2, 4, 8, 16} and alpha values {8, 16, 32} on SAMSum, evaluates with ROUGE and BERTScore, and shows diminishing returns beyond rank 8. CPU-only and fully reproducible.

5 LoRA configurations ablated
12 passing tests
0.65% trainable params (rank 8)
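The mechanism behind those numbers is that LoRA freezes a pretrained weight W and learns only a low-rank update scaled by alpha/r. A toy NumPy sketch of the forward pass (not the project's GPT-2 code; dimensions are illustrative, and the per-matrix parameter fraction here differs from the 0.65% figure, which is relative to all GPT-2 parameters):

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16, r=8):
    """Frozen weight W plus a scaled low-rank update (alpha/r) * B @ A."""
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

rng = np.random.default_rng(0)
d_in, d_out, r = 64, 64, 8
W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, zero-init
x = rng.normal(size=(4, d_in))
y = lora_forward(x, W, A, B)
# with B zero-initialized, the adapter starts as a no-op: y == x @ W.T

# trainable fraction for this single matrix at rank 8
frac = (A.size + B.size) / W.size
```

Zero-initializing B is the standard LoRA trick: training starts exactly at the pretrained model, and the adapter only gradually departs from it.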