As enterprises build and deploy artificial intelligence systems, it's important to understand the ethical considerations of our work. Ethics are not a separate business objective bolted on after an AI system has been deployed. They are part of business performance. Only by embedding ethical principles into AI applications and processes can we build systems that people can trust.
As AI advances, and humans and AI systems increasingly work together, it is essential that we trust the output of these systems to inform our decisions. Alongside policy considerations and business efforts, science has a central role to play: developing and applying tools to wire AI systems for trust. To encourage the adoption of AI, we must ensure it does not take on and amplify our biases, and knowing how an AI system arrives at an outcome is key to trust, particularly for enterprise AI.
To help the community engender trust in AI, IBM Research has open-sourced AI Fairness 360 (http://aif360.mybluemix.net), a comprehensive toolkit of metrics and algorithms to check for and mitigate unwanted bias in AI. IBM also launched its Trust & Transparency service as part of AIOpenScale (https://www.ibm.com/cloud/ai-openscale). This service explains how AI decisions are being made, and automatically detects and mitigates bias to produce fair, trusted outcomes.
In this meetup we will explore the 'dangers of AI': bias, lack of explainability, and robustness issues. We will then explore the AI Fairness 360 toolkit and IBM's Trust & Transparency service. Hands-on examples will be available.
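To give a flavour of the hands-on part, here is a minimal, self-contained sketch in plain Python (not the AIF360 API itself) of one group-fairness metric the toolkit implements, statistical parity difference; the function name and toy data are illustrative.

```python
def statistical_parity_difference(outcomes, groups, privileged):
    """Statistical parity difference: P(favorable | unprivileged group)
    minus P(favorable | privileged group). A value near 0 suggests parity.

    outcomes:   list of 0/1 decisions (1 = favorable)
    groups:     group label for each record
    privileged: the group value considered privileged
    """
    priv = [o for o, g in zip(outcomes, groups) if g == privileged]
    unpriv = [o for o, g in zip(outcomes, groups) if g != privileged]
    rate = lambda xs: sum(xs) / len(xs)
    return rate(unpriv) - rate(priv)

# Hypothetical toy data: group 'A' is treated as privileged.
outcomes = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ['A', 'A', 'A', 'A', 'B', 'B', 'B', 'B']
print(statistical_parity_difference(outcomes, groups, privileged='A'))  # -0.5
```

Here the unprivileged group receives the favorable outcome at a 25% rate versus 75% for the privileged group, yielding -0.5; AIF360 computes this and many other metrics over full datasets with protected attributes.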
Speaker: Stefan Van Den Borre, Technical Professional - Watson Data Platform, IBM.
Doors open at 17:30 hrs.
The session starts at 18:00 hrs.