
Exploring AI Ethics
Trustworthy AI system development and the alignment of technology with human values are central to AI ethics. This includes addressing bias, transparency, accountability, and safety in AI systems.
News & Projects
Exploring the core principles that guide trustworthy AI system development.
An interdisciplinary team designs and drafts future social-tech concepts and studies for companies in order to generate human-centered experiences and sustainable social impact.
The Trustworthy Artificial Intelligence Implementation (TAII) Framework Canvas is available on Miroverse.
The TAII Framework is listed at the European AI Alliance and at the OECD.AI Catalogue of Tools & Metrics for Trustworthy AI.
Download the TAII Framework Infographic.
TAII Framework® Book
The Trustworthy Artificial Intelligence Implementation (TAII) Framework generates a meta-perspective on ethics within the AI system developer’s ecosystem by designing for social impact. Information and book order
Steven Umbrello
Managing Director at the Institute for Ethics and Emerging Technologies
“Josef Baker-Brunnbauer’s new book on the application of a practical framework for safe and trustworthy AI is a must-read for anyone working in the field of AI and machine learning. The author expertly guides readers through the complexities of building AI systems that are not only effective, but also safe and trustworthy. With clear and concise explanations, practical examples, and a wealth of insights, this book is an invaluable resource for anyone looking to stay ahead of the curve in the rapidly-evolving field of AI.”
Claretha Hughes
Robonomics, The Journal of the Automated Economy, Vol. 4, May 2023
“Teaching and training ethics is already a difficult task. Adding AI to the ethics discussion further complicates decision making for managers, but this book provides clear examples and urgency for it to be done. For practitioners and researchers who seek to help with organizational development and implementation of AI and AI ethics, this book can be a valuable asset. The scholarly studies cited, and the historical knowledge provide a rich empirical landscape from which to build a foundation for other empirical studies on TAII system implementation.”


Articles That Refer to the TAII Framework (selection)
Li, B., Qi, P., Liu, B., Di, S., Liu, J., Pei, J., … & Zhou, B. (2023). Trustworthy AI: From principles to practices. ACM Computing Surveys, 55(9), 1-46.
Díaz-Rodríguez, N., Del Ser, J., Coeckelbergh, M., De Prado, M. L., Herrera-Viedma, E., & Herrera, F. (2023). Connecting the dots in trustworthy Artificial Intelligence: From AI principles, ethics, and key requirements to responsible AI systems and regulation. Information Fusion, 99, 101896.
Ivanov, S., Soliman, M., Tuomi, A., Alkathiri, N. A., & Al-Alawi, A. N. (2024). Drivers of generative AI adoption in higher education through the lens of the Theory of Planned Behaviour. Technology in Society, 77, 102521.
Belanche, D., Belk, R. W., Casaló, L. V., & Flavián, C. (2024). The dark side of artificial intelligence in services. The Service Industries Journal, 44(3-4), 149-172.
Ivanov, S., & Webster, C. (2024). Automated decision-making: Hoteliers’ perceptions. Technology in Society, 76, 102430.
Ivanov, S., & Umbrello, S. (2021). The ethics of artificial intelligence and robotization in tourism and hospitality – A conceptual framework and research agenda. Journal of Smart Tourism, 1(4), 9-18.
Herrera-Poyatos, A., Del Ser, J., de Prado, M. L., Wang, F. Y., Herrera-Viedma, E., & Herrera, F. (2025). Responsible Artificial Intelligence Systems: A Roadmap to Society’s Trust through Trustworthy AI, Auditability, Accountability, and Governance. arXiv preprint arXiv:2503.04739.
Ronanki, K., Cabrero-Daniel, B., Horkoff, J., & Berger, C. (2023, July). RE-centric Recommendations for the Development of Trustworthy(er) Autonomous Systems. In Proceedings of the First International Symposium on Trustworthy Autonomous Systems (pp. 1-8).
Mentzas, G., Fikardos, M., Lepenioti, K., & Apostolou, D. (2024). Exploring the landscape of trustworthy artificial intelligence: status and challenges. Intelligent Decision Technologies, 18(2), 837-854.
Corrêa, N. K., Santos, J. W., Galvão, C., Pasetti, M., Schiavon, D., Naqvi, F., … & Oliveira, N. D. (2025). Crossing the principle–practice gap in AI ethics with ethical problem-solving. AI and Ethics, 5(2), 1271-1288.
McFadden, B. R., Reynolds, M., & Inglis, T. J. (2023). Developing machine learning systems worthy of trust for infection science: a requirement for future implementation into clinical practice. Frontiers in Digital Health, 5, 1260602.
Utomo, S., John, A., Rouniyar, A., Hsu, H. C., & Hsiung, P. A. (2022, September). Federated trustworthy AI architecture for smart cities. In 2022 IEEE International Smart Cities Conference (ISC2) (pp. 1-7). IEEE.
Pratap, A., Sardana, N., Utomo, S., Ayeelyan, J., Karthikeyan, P., & Hsiung, P. A. (2022). A synergic approach of deep learning towards digital additive manufacturing: A review. Algorithms, 15(12), 466.
Bai, Q., Ma, J., & Xu, T. (2024). AI Deep Learning Generative Models for Drug Discovery. In Applications of Generative AI (pp. 461-475). Cham: Springer International Publishing.
Ajayi, O. O., Adebayo, A. S., & Chukwurah, N. (2024). Ethical AI and Autonomous Systems: A Review of Current Practices and a Framework for Responsible Integration.
Ronanki, K. (2023, May). Towards an AI-centric Requirements Engineering Framework for Trustworthy AI. In 2023 IEEE/ACM 45th International Conference on Software Engineering: Companion Proceedings (ICSE-Companion) (pp. 278-280). IEEE.
Lorenz, S., Stinehour, S., Chennamaneni, A., Subhani, A. B., & Nadim, M. (2025). A Case Study of 3rd Party Hardware: The Weakest Link in Google’s Trustworthy Artificial Intelligence Implementation. IEEE Access.
Herrera, F. (2023, October). Toward Responsible Artificial Intelligence Systems: Safety and Trustworthiness. In International Conference on Engineering of Computer-Based Systems (pp. 7-11). Cham: Springer Nature Switzerland.
Nkwo, M., Ikwunne, T., Adejoro, C., & Anuyah, O. (2025). Exploring Usability of AI Systems in the Global South—Toward Responsible Human-Centered AI for Sustainable Cities and Communities. In Usability for the World: Building Better Cities and Communities (pp. 125-165). Cham: Springer Nature Switzerland.
Agbese, M. (2022, November). Implementing Artificial Intelligence Ethics in Trustworthy System Development – Making AI Ethics a Business Case. In International Conference on Product-Focused Software Process Improvement (pp. 656-661). Cham: Springer International Publishing.
Ivanov, S. (2024). The economics of generative AI. In Applications of Generative AI (pp. 491-502). Cham: Springer International Publishing.
Herrera, F., Herrera, A., Del Ser, J., Herrera-Viedma, E., & de Prado, M. L. (2025). Trustworthy Artificial Intelligence: Nature, Requirements, Regulation, and Emerging Discussions. In Transactions of ADIA Lab: Interdisciplinary Advances in Data and Computational Science (pp. 317-342).
AI Literacy by Law
To comply with Article 4 of the EU AI Act, companies need to take measures to increase AI literacy among their staff.
The following points are outcomes of using the TAII Framework for trustworthy AI system development. By taking these steps, companies can ensure they meet the AI literacy requirements set out in Article 4 of the EU AI Act.
Assessment of current AI literacy levels:
Evaluate the existing AI literacy within the organisation, including the technical knowledge, experience, education, and training of staff.
Implementation of training programs:
Develop and provide regular training programs that cover both the technical aspects of AI and the ethical considerations. This training should be tailored to address the specific roles and responsibilities of employees.
Development of internal guidelines and standards:
Establish clear guidelines and standards for AI use within the company. These should outline best practices, ethical principles, and compliance requirements.
Continuous improvement:
The TAII Framework is iterative and covers the whole AI system life cycle, allowing for ongoing refinement and improvement of ethical considerations.
Encouragement of interdisciplinary communication:
Foster communication and collaboration between different departments such as IT, ethics, and legal. This helps in developing a comprehensive understanding of trustworthy AI systems.
Documentation of compliance efforts:
Keep detailed records of all AI literacy measures implemented, including training programs, guidelines, and any other initiatives. This documentation will be crucial for demonstrating compliance during regulatory inquiries; a minimal record-keeping sketch follows this list.
Stay updated with evolving standards:
Regularly review updates from regulatory bodies and industry best practices to ensure ongoing compliance.
Embed AI literacy across the organisation:
Ensure that AI literacy is not limited to technical teams but is integrated across all relevant aspects of the business.
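To make the documentation step concrete, the sketch below shows one possible way to keep an AI literacy training log as structured, exportable records. It is purely illustrative: neither the TAII Framework nor the EU AI Act prescribes any particular tooling or schema, and all field names, values, and the file name used here are hypothetical.

```python
# Hypothetical sketch of an AI literacy training log for Article 4 documentation.
# The schema is an illustrative assumption, not prescribed by the TAII Framework
# or the EU AI Act.
from dataclasses import dataclass, asdict
from datetime import date
import json

@dataclass
class TrainingRecord:
    employee_role: str   # role addressed, e.g. "data scientist" or "legal counsel"
    topic: str           # e.g. "bias and fairness", "Article 4 obligations"
    session_date: date   # when the training took place
    provider: str        # internal team or external trainer
    outcome: str         # e.g. "completed", "assessment passed"

# Example entries (fictional data for illustration only).
records = [
    TrainingRecord("data scientist", "bias and fairness in model development",
                   date(2025, 3, 12), "internal ethics board", "completed"),
    TrainingRecord("legal counsel", "EU AI Act Article 4 obligations",
                   date(2025, 4, 2), "external trainer", "assessment passed"),
]

# Persist the log as JSON so it can be produced during a regulatory inquiry.
with open("ai_literacy_log.json", "w") as f:
    json.dump(
        [{**asdict(r), "session_date": r.session_date.isoformat()} for r in records],
        f, indent=2)
```

Any comparable format (a spreadsheet or an HR system export, for instance) would serve the same purpose; what matters is that the records are complete, consistent, and retrievable on request.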
