“Data is the new oil.” Repeated hundreds of times, this quote was popularized by a 2017 article in The Economist, and it quickly became the most famous analogy for helping the general public understand the importance of data in the new technology era. The quote is simple, but it reflects the reality of data-driven value creation. Data by itself is a powerful asset, but if you “refine” it you can generate exponential value for your organization: smarter decisions, new business models, etc.
However, data as a valuable asset is not a new thing. Companies like Facebook or Google saw it way before. Where you saw a powerful internet search service, Google saw a mine of data. While you were discovering Facebook and adopting it as your main social network, Zuckerberg was already dreaming about the value of our personal information, interactions and even feelings. The consequence: their current business models are mainly based on targeted advertising. And that gave us another famous quote: “If you are not paying for it, you become the product.”
Let’s face it: this is not necessarily wrong. Ok, we didn’t see it coming, and they could have explained things more clearly to their users, but at the end of the day these giants were ideating new businesses and they found the magic fuel to scale up and attract more investors. Data privacy regulations were different then, and they are heavily evolving to protect the end-user (thank you, GDPR), and there is an increasing level of understanding of the potential risks and benefits. Uncle Ben was right: “With great power comes great responsibility.”
A pessimistic reader could think that this is the end of the world. Skynet owning our data, AI-enabled killer robots… The reality is that we are not there yet (and hopefully never will be), but there are already serious concerns about the usage of data for unethical purposes: Cambridge Analytica and Facebook, racist algorithms, Amazon and its sexist AI-enabled recruiting tool… not a beautiful panorama, you may think. My goal, however, is to provide you with some pieces of information that will brighten your day and give you a wider view of the topic.
There are already private and public initiatives around the world trying to establish clear frameworks, guidelines and principles for a responsible usage of data. There are companies investing in AI-enabled social good. There are multidisciplinary groups of people (researchers, citizens, industry professionals, etc.) combining backgrounds to define what is acceptable and what is not. There are even data professionals quitting their jobs when they consider that the output of their AI projects is not aligned with their personal values.
To summarize, we are starting to learn from past mistakes and to detect unethical, biased or unfair cases. This is a growing field and a novelty in this technology era. For the first time, we are thinking not only about respecting the law but also about ethical and social impact; we see not only technology and business but also human implications… and this is very good news, my dear friend.
As you can see, there is a lot of space for discussion and plenty of new concepts to assimilate. My teaching experience tells me that students love to have a pragmatic view of what is going on, so they can learn these concepts and include them in their critical thinking process. And companies are starting to invest part of their budgets in AI and data ethics training for both executives and employees.
This is just the beginning and there is a lot to do. My humble contribution will be based on my research and content development for the analysis of the ethical impact of AI and data projects at both corporate strategy and project/product management levels.
Don’t hesitate to connect for a discussion via https://www.linkedin.com/in/adriangs86
Note: This article was originally published via Concordia University’s website in 2020 https://www.concordia.ca/cunews/offices/provost/cce/2020/01/23/data-ai-and-the-hopeful-rise-of-ethics.html?c=/cce/about/news