Zürich AI | Agentic Loops
ACP Engineering shares what it takes to make LLMs useful in production: standardized codebases, typed data layers, and enterprise environments that are actually ready for agents.
"When models become the commodity."
Details
ACP Engineering uses LLMs extensively to write production code with a small team. What makes that work is not the models but the groundwork laid before them: a portfolio of Python projects sharing the same tooling, structure, tests, CI, and linters, the same conditions that already made the codebase easy for humans to navigate.
That foundation is what powers the current build of ACP-M, ACP’s industrial IoT platform for manufacturing customers, and an early agentic-loop prototype now running internally. The next step is enabling customers to automate their own operations the same way. Doing that requires the same kind of preparation on their side: turning fragmented operational data into knowledge graphs, unified namespaces, and typed semantic layers, so that agents act on information that is short, focused, and reliable.
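The "typed semantic layer" idea can be sketched in a few lines of Python: raw, inconsistently named operational records are normalized into a validated typed model, which is then rendered as the short, focused context an agent consumes. The names here (`MachineReading`, `parse_reading`) are illustrative, not part of ACP-M.

```python
from dataclasses import dataclass
from datetime import datetime

# Raw telemetry as it often arrives from the shop floor: untyped,
# stringly valued, inconsistently named (illustrative sample).
raw = {"machine": "press-07", "temp_c": "81.4", "ts": "2026-05-26T17:30:00"}

@dataclass(frozen=True)
class MachineReading:
    """One validated data point in the typed layer (hypothetical schema)."""
    machine_id: str
    temperature_c: float
    timestamp: datetime

def parse_reading(record: dict) -> MachineReading:
    """Normalize a raw record into the typed model, failing loudly on bad data."""
    return MachineReading(
        machine_id=str(record["machine"]),
        temperature_c=float(record["temp_c"]),
        timestamp=datetime.fromisoformat(record["ts"]),
    )

def to_agent_context(r: MachineReading) -> str:
    """Render a short, focused line for an agent instead of the raw payload."""
    return f"{r.machine_id}: {r.temperature_c:.1f} °C at {r.timestamp:%H:%M}"

print(to_agent_context(parse_reading(raw)))
```

The point is not the dataclass itself but the contract: once every source is forced through a step like `parse_reading`, agents downstream only ever see validated, typed, compact information.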
Once that environment exists, the model becomes a commodity. What matters is enterprise processes and domain knowledge, not which model happens to be in fashion that quarter. Attendees will leave with a concrete way to assess whether their own environment is ready for agents at all, and what it takes to get there.
AI ships your features faster than ever, but it also ships bugs faster than ever. Code got cheap, maintenance didn’t: customer complaints stack up, Sentry alerts overflow, and triaging the pile is still as expensive as it ever was. Victor Chibotaru will show how LogicStar finds bugs across your Sentry, Jira and codebase, separates signal from noise, and helps your team fix issues that matter.
- Talk: When Models Become the Commodity
- Talk: Shipping Faster, Drowning Sooner
- Date: Tuesday, May 26, 2026, 17:30–19:30 CEST
- Venue: Technopark Winterthur AG