We all know that data collection has been far from transparent in our society. Massive corporations have been collecting data, developing technologies, and deploying them in the shadows for years. They have one main concern: the bottom line. At the same time, they have been extracting data from the population without compensation, and often without disclosing how that data is being used.
Not only are they acting this way toward the general public, but they are also stomping on any potential competition. Big Tech has exabytes of data and incredible processing power, which they are using to develop highly centralized and, in many cases, unregulated AI technologies. All of this power helps them prevent many great companies from emerging, especially those that could use AI to benefit the greater good instead of selling our data to rake in profits.
It’s time for that to change!
Synesis One is creating a transparent ecosystem that is focused on human-centered AI, ethics, data safety, community governance, and most importantly — you! We believe that you should be rewarded for your data, as it’s the most important asset of our time. It’s long been overdue for the public to be involved in the process, so that’s exactly what we’re doing.
One of the main focuses of Synesis One is to help develop human-centered AI, which enables us to live alongside AI rather than be replaced by it. If it were left up to the same shadowy companies we talked about earlier, technology would replace any meaningful notion of human intelligence, creativity, and responsibility.
Human-centered AI helps us monitor the technology so that it doesn’t inherit many of our own biases, which can often happen unintentionally. By remaining transparent and keeping the public in the loop, it enables checks and balances in the system while creating inclusive outcomes.
Back in 2020, Google fired Dr. Timnit Gebru, who was Co-leader of the company’s Ethical AI team. The reason for her firing? She refused to retract a research paper that was critical of Google’s approach to natural language processing (NLP).
In other words, Google fired Dr. Timnit Gebru for doing exactly what the Co-leader of the Ethical AI team should be doing.
Ethics is crucial to minimizing harm, and it is one of the most important aspects of the development of AI. We must have a strong ethical grounding if we are to create technology with the same.
This is why Synesis One will include a diverse Ethics Board, with members drawn from academia, non-profits, and the Synesis One community. The Foundation proposes members, who are then voted on and confirmed by the community, and the Ethics Board will focus on ensuring the safe and ethical use of AI data.
Members of Synesis One’s Ethics Board will not be punished for doing this kind of work, but rather encouraged to find any potential problems with the technology.
Data Process and Safety
As we further break down our ecosystem, here is a key term that will be helpful for this section:

Ontologies are morsels of information in the form of phrases, sentences, or expressions.
It’s important that players know the data process so that they can feel safe and secure while contributing to the AI training. When an ontology is formulated, the original natural language expression is recorded on a public blockchain. The original data, a string of text in natural language, is stored on a separate centralized server in the cloud, and the exact storage location of this data is encrypted. Together, these measures keep the original data effectively unalterable.
As for the corresponding metadata, this is collected separately before being grouped and stored in the AI-specific ontology database, which is kept off-chain.
This data storage approach acts as a safety measure in case the metadata is compromised, as it prevents the original data from being altered while also enabling the reconstruction of canonicals. Since the data processing takes place on a centralized server, AI systems can achieve a much higher processing speed than what is currently possible with a blockchain.
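To make the on-chain/off-chain split concrete, here is a minimal sketch of the pattern described above: hash the original expression, anchor the hash on a public ledger, and keep the text and its metadata in separate off-chain stores. The class and function names (`OnChainLedger`, `OffChainStore`, `submit_ontology`) are illustrative assumptions, not Synesis One's actual APIs, and encryption of the storage location is omitted for brevity.

```python
import hashlib

class OnChainLedger:
    """Append-only record of content hashes (stands in for a public blockchain)."""
    def __init__(self):
        self.entries = []

    def record(self, content_hash: str) -> int:
        self.entries.append(content_hash)
        return len(self.entries) - 1  # index of the "transaction"

class OffChainStore:
    """Centralized cloud storage: original text and metadata kept separately."""
    def __init__(self):
        self.data = {}       # original natural-language strings
        self.metadata = {}   # AI-specific ontology metadata, stored apart

    def put(self, key: str, text: str, meta: dict) -> None:
        self.data[key] = text
        self.metadata[key] = meta

def submit_ontology(text: str, meta: dict,
                    ledger: OnChainLedger, store: OffChainStore) -> str:
    """Hash the expression, anchor the hash on-chain, store data off-chain."""
    key = hashlib.sha256(text.encode("utf-8")).hexdigest()
    ledger.record(key)
    store.put(key, text, meta)
    return key

def verify(key: str, store: OffChainStore, ledger: OnChainLedger) -> bool:
    """Any tampering with the off-chain copy breaks the match with the on-chain hash."""
    recomputed = hashlib.sha256(store.data[key].encode("utf-8")).hexdigest()
    return key == recomputed and key in ledger.entries

ledger, store = OnChainLedger(), OffChainStore()
key = submit_ontology("the quick brown fox", {"lang": "en"}, ledger, store)
assert verify(key, store, ledger)      # untouched data checks out
store.data[key] = "altered text"       # simulate a compromised server
assert not verify(key, store, ledger)  # the mismatch is detected
```

The point of the sketch is the safety property from the paragraph above: the heavy data lives on a fast centralized server, while the lightweight on-chain hash makes any alteration of that data detectable.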
Democratization and Distributed Data Ownership
One of the fundamental values of Synesis One is that democratization and distributed data ownership depend on one another. This is why we will work with leading universities, institutions, and non-profits to constantly improve the Synesis One ecosystem, always pushing forward to create a more transparent and democratic environment.
Through the development of human-centered AI, a diverse Ethics Board, a safe and secure data process, and democratized, distributed data ownership, Synesis One is determined to let the public play our natural language reasoning (NLR) games, and AI companies use our technology, all with confidence in a trustworthy and transparent process.
By following these values, we will put ourselves on the path to creating AI to benefit all of society!