2nd of September 2024
Written by Louis de Diesbach
Technology seems to be everywhere, and the question is no longer whether we will adopt it but rather how it will evolve and be regulated. This question is now on everyone's mind, from research centers to businesses to governments. Yet as technology continues to change rapidly, reactions to it, and perceptions of it, vary widely. This is where culture plays a crucial role in our relationship with technology.
Recently, the Boston Consulting Group (BCG) published a study revealing that nearly 80% of workers are familiar with ChatGPT. Most people are aware of the benefits of artificial intelligence—whether in efficiency gains, medical advancements, or artistic innovation. However, concerns around risks (data security, environmental impact, ethical issues) remain significant. While around 43% of those surveyed expressed enthusiasm about AI, about 29% reported feeling somewhat anxious.
Particularly interesting are the geographical and cultural disparities highlighted in this study. Countries of the Global South, including China, India, the Middle East, and Brazil, tend to be more optimistic and less anxious about generative AI than countries of the Global North. In France, for example, trust in generative AI struggles to reach 40%, while in India and Brazil it comfortably surpasses 50%.
These differing attitudes toward AI reminded me of another major study: MIT's "Moral Machine Experiment," published in 2018. Built around hypothetical scenarios involving autonomous vehicles, the project aimed to understand the moral choices people make when faced with ethical dilemmas. For instance, if an autonomous car must choose between saving a child and saving an elderly person, which option would be deemed morally acceptable?
The findings revealed marked differences in preferences across regions. Broadly, the world divides into three groups: the "West" (North America and much of Europe), the "East" (Buddhist, Confucian, and Muslim countries), and the "South" (Latin America, France, and other French-influenced countries). While these groups generally agree on basic principles, such as saving the young over the elderly, the weight they give to those principles differs significantly. These differences underscore that our perception of technology, and the ethics we attach to it, are deeply rooted in cultural context.
These results show that our relationship with technology cannot be homogenized. The same technology does not elicit the same responses everywhere. Tech giants' algorithms, which are often uniform worldwide, would benefit from adjustments that reflect cultural differences. Stefan Zweig was right when he warned, nearly a century ago, of the dangers of a world made uniform. This homogenization risks erasing cultural nuances and making technology unsuitable or even intrusive in certain contexts.
For businesses, understanding these differences is essential, whether to motivate employees, tailor products to customers' expectations, or adapt communications. Likewise, governments should consider how technophile or technophobic their populations are when designing regulations. In the business world, a chatbot that adapts its approach to the cultural profile of each user could offer a more relevant and respectful experience.
Adopting an international vision for technology while respecting the cultural specifics of each region is a major challenge. For a company, a research center, or a government alike, taking cultural particularities into account is crucial to establishing trust with users and citizens.
Technology has the potential to transform our lives in remarkable ways, but its success will depend on our ability to embrace the diversity of perspectives and expectations that surround it. Ultimately, technology should not be a tool of global standardization but a bridge between cultures, capable of adapting to each individual’s values and sensitivities.
Written by Louis de Diesbach
Tech ethicist and consultant