AI Security & Governance at FHNW
As part of the seminar “The Trust Factor: AI for Business Transformation”, Dr. Swantje Westpfahl welcomed guest lecturer Nils Lamb, founder of Askora, for an engaging session exploring the theoretical foundations and practical applications of trust and security in artificial intelligence systems. The lecture, held on November 6th, belongs to the module “AI Security & Governance”.
Students discussed key questions including:
- How can the security of AI systems be assessed?
- What governance structures are needed to ensure responsible AI use?
- How can trust be built when AI interacts directly with customers?
Several themes emerged as central to these discussions: security-by-design as a trust-enabling factor, and the use of guardrails and data minimisation as key security and privacy measures for building digital trust.

Lecture Highlights
A key highlight of the session was the evaluation of a real-world AI Voice Agent developed by Askora — a system that automates customer communication and lead qualification in the sales process. This practical example illustrated how AI technologies are developed, deployed, and governed across business contexts, from startups to large enterprises.
Having previously explored ethical concerns, algorithmic bias, and the importance of regulations and best-practice frameworks, students were able to see how these concepts translate into practice, both in large companies such as Harman, where Nils Lamb serves as CISO, and in startups building AI solutions from the ground up.
The guest lecture offered students the chance to connect theoretical knowledge with real industry practice. The interactive discussion reflected their curiosity and critical thinking, making the session a highly rewarding experience for everyone involved.