The very term brings to mind largely dystopian scenarios in a country like India. It is perceived, at best, as yet another means to privilege the affluent and powerful and exclude the rest of the population. At worst, it is, of course, the Terminator!
At this moment, countries across Asia, India among them, are on an uncertain journey where ‘growth’ and ‘development’ as defined by Western institutions are the goal. The result is a state of ‘unbelonging’: we no longer belong to our own traditional value systems, because they don’t seem cool any longer, yet we don’t fully belong to the Western ecosystem either. Into this state of uprootedness, technology has arrived to disrupt further, and the disruption has been both good and bad. While technology like the mobile phone has created a far more inclusive infrastructure for accessing information, there is also technology, such as that used by high-end hospitals, that is out of reach even for many middle-class citizens and is further widening the divide between the ‘techno-privileged’ and the ‘techno-marginalized’.
And then there is the scenario of industries suffering large job losses due to automation. The narrative of losing jobs to machines is entering popular culture in a big way right now.
In the midst of this world of science fiction becoming reality, what does a user experience designer envision the role of AI to be? How can AI serve the greater good? Having participated in panel discussions at the AI for Good conference organized by the United Nations in Geneva in 2017, I am sure that the future with AI does NOT have to be dystopian.
1. What is the shared vision of a ‘good’ and desired future for humanity, say in five years’ time? If such a vision is in place, can AI be designed to help the human race achieve it? With no consensus about what is good and desirable, the design of AI will depend on corporate and political goals and on fragmented, siloed objectives. These, even if well intentioned, may not reinforce each other, since they will not be coordinated into a coherent plan. Take the case of algorithmic biases.
Much is now being said about the challenge of algorithmic bias in current AI systems. In other words, are the unwanted biases that cause discrimination and conflict in the real world being carried forward by the creators of the algorithms that drive these systems?
The Gender Shades project at MIT, for example, uses an intersectional approach to test AI systems and ascertain their levels of bias. Could we, therefore, use a universal values framework, such as the one created by Schwartz, or the United Nations’ Sustainable Development Goals, as the non-negotiable fundamental objectives to be achieved by AI systems?
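The core of such an intersectional audit can be sketched in a few lines of Python: measure a system’s accuracy separately for each intersectional subgroup and report the gap between the best- and worst-served groups. This is only a minimal illustration in the spirit of the Gender Shades methodology; the subgroup labels and records below are hypothetical, not drawn from the actual study.

```python
# Minimal sketch of an intersectional bias audit: compute a classifier's
# accuracy per intersectional subgroup, then the spread between the
# best- and worst-served subgroups. Illustrative data only.
from collections import defaultdict

def subgroup_accuracies(records):
    """records: iterable of (subgroup, predicted, actual) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for subgroup, predicted, actual in records:
        total[subgroup] += 1
        if predicted == actual:
            correct[subgroup] += 1
    return {g: correct[g] / total[g] for g in total}

def accuracy_gap(accuracies):
    """Difference between the highest and lowest subgroup accuracy."""
    return max(accuracies.values()) - min(accuracies.values())

# Hypothetical audit records: (subgroup, predicted label, true label)
records = [
    ("darker-skinned female", "male", "female"),
    ("darker-skinned female", "female", "female"),
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
]

acc = subgroup_accuracies(records)
gap = accuracy_gap(acc)
```

A large gap signals that aggregate accuracy is hiding discrimination against a specific subgroup, which is exactly what an intersectional breakdown is designed to expose.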
2. AI must be inclusive and accessible to all. For many emerging countries, can we design and put in place community-based access to AI? In countries where individual access may be difficult AND the cultural DNA is more collective than individualist, community-based access to emergent infrastructure has helped tremendously. We should explore this for AI too.
3. Are there new principles of design likely to emerge for technologies such as AI, which are designed not just to manipulate data but to actually learn from users? It is clear that designers and data scientists will have to learn to work together, given the critical role both will play in a machine-learning and data-heavy future.
Fabien Girardin, in his paper ‘When User Experience Designers Partner with Data Scientists’, writes: ‘In particular, we are witnessing a new practice that requires a tight partnership between designers and data scientists, as systems with feedback loops can only be imagined, built, and improved with a holistic view of how users’ experiences are affected by interactions between data, algorithms, and interfaces.’
He also lists an interesting set of objectives for user experience design when working with these new technologies: design for uncertainty, design for peace of mind, design for time well spent, design for fairness, design for conversation, and so on.
Institutions such as the United Nations (AI for Good Global Summit – 2017 Report) and the Future of Life Institute have made a beginning in ensuring that an agreed-upon set of inclusive global objectives and ethics is always taken into consideration in the design of AI systems.
With these initiatives in place, we will, hopefully, succeed in changing the world into a better place for all, instead of the brave new one that Huxley so eloquently warned us about.