Artificial intelligence is becoming ever more entrenched in our societies. It threatens to displace even highly skilled workers and is increasingly infiltrating our private lives. But a new center for AI research at ETH Zurich is keen to position AI as supporting people rather than replacing them.
Smart speakers, deepfake videos, and ads tailored to our interests: AI is ubiquitous, and everyday life without it is becoming hard to imagine. It increasingly shapes our daily routines. But what does it mean for our society? Machines are becoming capable of performing tasks that were previously thought to be the exclusive domain of humans. Today, many new technological elements are coming together and accelerating this progress.
Different technological developments are always feeding off each other. The cell phone, for example, has changed our society forever. Some people say that in twenty years' time artificial intelligence will be as significant as the discovery of electricity, and it will be hard to imagine what life was like without it.

What makes the new AI center special is that it conducts research in an interdisciplinary manner. It is not just about applying current AI methods to complex social problems like climate change. Some problems require rethinking the very foundations of AI, and we want to work on that. At our center, we don't want to move in just one direction. For us, there must be a mutual influence between the application areas in which we want to make changes and the fundamentals of AI. In other words: AI changes depending on the application, and this change in AI in turn opens up new application areas.

Today's consumers are exposed to a large amount of digital information and applications. At the same time, it is difficult for them to understand the technology behind it and the purposes for which it is used. How do you deal with this reality?

Change is always accompanied by a certain amount of doubt and uncertainty. It is important to understand that technological advances will change our society. We want to help shape this change and show that we are doing so responsibly. Particularly in AI, we need to emphasize ethical work practices and demonstrate by example how this can be done. Work in other fields already tends to rest on established ethical principles. But there are further questions as well, for example regarding algorithms: Are these algorithms fair? How do we judge that? Are they inclusive? A lot of clarification is still needed on this topic.
Right now, there is a torrent of different directives. It is important that we engage in this dialogue and achieve clarity. To this end, we want to invite different groups to participate. But we don't just want to discuss the issue, we want to take positions as well. Another ethical question concerns who has access to certain AI technologies, for example in military applications.

To what extent will the center handle such issues? Will you advise companies on how to use and understand these technologies?

One of the most important things we want people to know is that technology in and of itself is not neutral. Whoever designs something new also bears a certain responsibility. This is why it is important for us to help shape this dialogue and to let our European values influence the development of AI applications. We want to set a good example, stand up for our values and our culture, and allow them to shape the evolution of our projects. This is a very effective approach, because we train people who live these values and carry them with them when they go out into the world.