HOW DID YOUR BACKGROUND IN TECHNOLOGY AND YOUR PERSONAL EXPERIENCE WITH YOUR DAUGHTER’S ILLNESS SHAPE YOUR CURRENT PHILOSOPHY?
I've always been involved in technology. I've had a few companies that helped large organizations disrupt themselves and create new products and services. But the real change for me came when my daughter was born. She was very ill, and the doctor said she would not make it beyond her first birthday.
I started to look into biology. I started to look at the human body from a more holistic perspective.
When I studied cells, I discovered a lot of parallels between how cells use biological systems to build organisms and how people use technology to build organizations—we follow exactly the same path. That makes it more predictable to see where things are going, how technology will evolve, and how it will impact society and the human experience.
YOU SPEAK ABOUT TECHNOLOGY PROGRESSING IN WAVES AND MIMICKING BIOLOGICAL PATTERNS. CAN YOU EXPLAIN THIS CONCEPT?
Biology has had seven waves over two billion years, and human beings have had seven waves over hundreds of thousands of years. Technology follows a similar pattern. Every new wave makes technology more accessible.
It has progressed from the internet to the smartphone and now to conversational AI interfaces like ChatGPT.
As every wave progresses, it becomes more intuitive for us to interact with technology and to connect with other people through that technology.
We are now in the sixth wave in technology, which is parallel with the neocortex of the brain in biology. In this wave, we use a 2D interface on our smartphone and laptop. The upcoming seventh wave will correspond to the prefrontal cortex and focus on holographic computing, mixed reality, spatial interfaces, and immersive technology.
HOW DO YOU SEE ORGANIZATIONS HANDLING THE NEXT TECHNOLOGICAL WAVE?
Today, 80%-90% of organizations are hierarchical, centralized organizations.
And I think all the problems that we have in our society are a result of that.
We need to move toward a different kind of mindset where we don’t use technology for profit but use technology to connect and align with a sense of purpose.
I believe that we will move away from the old, centralized structures—government as we know it, healthcare as we know it, education as we know it will change. These new organizations will evolve into swarm-like organizations, where we use artificial intelligence to create fluid swarms of communities, which are aligned around a common purpose. That purpose can be overall well-being or clean water or renewable energy—whatever you want. That’s how I think technology should be used.
In this type of organization, we will use AI to ask: What is our purpose? What are our strengths? What are our weaknesses? And then the algorithm will look for people around you who are complementary to you.
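To make the matching idea above concrete, here is a minimal sketch of one way such an algorithm could pair people whose strengths cover your weaknesses. Everything here is an illustrative assumption: the skill names, the 0-to-1 ratings, and the scoring rule are invented for this example, not a description of any real system.

```python
# Hypothetical sketch: rank candidates by how well their strengths
# complement your weaknesses. Skills are rated 0.0 (weak) to 1.0 (strong).

def complementarity(me: dict[str, float], other: dict[str, float]) -> float:
    """Score how well `other` is strong exactly where `me` is weak."""
    shared = me.keys() & other.keys()
    return sum((1.0 - me[s]) * other[s] for s in shared)

def best_matches(me: dict[str, float],
                 candidates: dict[str, dict[str, float]],
                 top_n: int = 2) -> list[str]:
    """Return the top-N candidate names by complementarity to `me`."""
    ranked = sorted(candidates.items(),
                    key=lambda kv: complementarity(me, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:top_n]]

# Invented example data.
me = {"design": 0.9, "engineering": 0.2, "fundraising": 0.3}
candidates = {
    "alice": {"design": 0.8, "engineering": 0.9, "fundraising": 0.1},
    "bob":   {"design": 0.1, "engineering": 0.3, "fundraising": 0.9},
    "carol": {"design": 0.9, "engineering": 0.1, "fundraising": 0.2},
}
print(best_matches(me, candidates))  # → ['bob', 'alice']
```

A real system would weight far more than skills (values, availability, purpose alignment), but the core idea — scoring others against your gaps rather than your similarities — is what distinguishes this from ordinary similarity matching.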
HOW DO YOU SEE TECHNOLOGY ENHANCING RATHER THAN REPLACING THESE HUMAN ELEMENTS?
I see the human body as a roadmap for the future of society. So all the answers that we are looking for in the outside world are inside of us. The purpose of biology is connecting and empowering cells so they can do what they were here for. Technology has the same kind of purpose: it should unite people and empower people so that they can collaborate in larger wholes.
The big problem is that a lot of companies use technology for a different kind of purpose. They try to make us addicted to our social media because it’s very profitable. I think the challenge that we are in right now is finding a balance between the human part of the story and the digital part.
HOW SHOULD ORGANIZATIONS APPROACH ETHICS AND IMPLEMENTATION?
AI ethics is a complex topic that is mostly uncharted territory because we are now starting to put ethics into programming code. I recommend that organizations start to make someone responsible for ethics and morality in AI.
I believe the future CEO is not a chief executive officer but a chief ethical officer who is responsible for the ethical implications of employees, customers, and the supply chain.
The second part is to create rules of engagement for your organization. How do we want to engage with AI? What is important for us? What kind of human values do we prioritize, and how do we protect our values? If you have these rules of engagement in place, then your organization will better react to problems and utilize AI in a way that’s consistent with your values.
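One way to read "rules of engagement" is as explicit, auditable checks that run before an AI-driven action goes ahead. The sketch below is purely illustrative — the rule names, the `Action` fields, and the allowed purposes are invented here to show the shape of the idea, not any real policy framework.

```python
# Illustrative sketch: encoding an organization's "rules of engagement"
# for AI as explicit checks, so violations are visible and auditable.

from dataclasses import dataclass

@dataclass
class Action:
    uses_personal_data: bool   # does this action touch personal data?
    human_reviewed: bool       # has a person signed off on it?
    purpose: str               # declared purpose of the action

# Invented example: purposes the organization has aligned around.
ALLOWED_PURPOSES = {"well-being", "clean water", "renewable energy"}

def violates_rules(action: Action) -> list[str]:
    """Return the list of engagement rules this action breaks (empty if none)."""
    violations = []
    if action.uses_personal_data and not action.human_reviewed:
        violations.append("personal data requires human review")
    if action.purpose not in ALLOWED_PURPOSES:
        violations.append(f"purpose '{action.purpose}' not aligned")
    return violations

print(violates_rules(Action(True, False, "advertising")))
```

The point is not the specific rules but that writing them down as code forces the organization to answer the questions above — what do we prioritize, and how do we protect it — before a problem arrives rather than after.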