GPT-Research studies how the deployment architecture of AI systems can address safety structurally. On-device inference eliminates external data transmission. Local compute keeps control with the user. These are architectural properties, not policy additions.


"On-device inference means data never leaves the hardware it runs on. That is not a policy decision. It is an architectural one. It makes entire categories of risk structurally impossible."

Lehuy Hoang

Research Scientist at GPT-Research



GPT-Research publishes ongoing work on architectural approaches to AI safety: on-device deployment, verifiable training methodology, and reproducible model behavior.


AI capability continues to grow. The question is where inference runs and who controls the output. GPT-Research is focused on making on-device the answer.


We are pushing the boundaries of on-device AI.