About the author
Marko Stokic is Head of AI at the Oasis Protocol Foundation, where he works with a team focused on developing cutting-edge AI applications integrated with blockchain technology. With a business background, Marko's interest in crypto was sparked by Bitcoin in 2017 and deepened by his experiences during the 2018 market crash. He pursued a master's degree and gained venture capital expertise, focusing on enterprise AI startups before moving to a decentralized identity startup, where he developed privacy-preserving solutions. At Oasis, he merges strategic insight with technical knowledge to advocate for decentralized AI and confidential computing, educating the market on Oasis's unique capabilities and fostering partnerships that empower developers. An engaging speaker, Marko shares insights on the future of AI, privacy, and security at industry events, positioning Oasis as a leader in responsible AI innovation.
Long before hundreds of millions of users made ChatGPT one of the most popular applications in the world within a few weeks in 2022, people were talking about AI's potential to make us healthier and our lives longer.
In the 1970s, a Stanford team developed MYCIN, one of the first AI systems designed to assist medical diagnosis. MYCIN used a knowledge base of around 600 rules to identify the bacteria causing infections and recommend antibiotics.
Although it outperformed human experts in trials, MYCIN was never used in clinical practice, partly due to ethical and legal concerns about machine-led diagnosis.
Fast forward five decades, and AI is now poised to transform healthcare in ways that would have seemed like science fiction in MYCIN's era. Modern AI can learn to spot diseases in medical imaging as well as a human clinician can, and without much training data. A Harvard study on AI-assisted cancer diagnosis reported 96% accuracy.
Improving diagnostics
In the United Kingdom, an AI system detected signs of breast cancer in 11 cases that human clinicians had missed. Two separate studies, one from Microsoft and another from Imperial College, found that AI identified more cases of breast cancer than radiologists did. Similar results have been seen with AI detection of prostate cancer, skin cancer, and other conditions.
Our access to data has never been greater. For example, the National Health Service in the United Kingdom – the largest employer in Europe – has access to a digitized body of data on more than 65 million patients, valued at more than £9.6 billion ($12.3 billion) per year.
This represents an unprecedented opportunity for AI to recognize patterns and generate insights that could radically improve diagnosis, treatment, and drug discovery.
AI's ability to detect subtle patterns in large datasets is one of its greatest strengths in healthcare. These systems can analyze not only medical imaging but also genomic data, electronic health records, clinical notes, and more, identifying correlations and risk factors that might escape even experienced human clinicians.
Some people may feel more comfortable sharing their healthcare data with an AI agent than with a human who is not directly involved in their care. But the issue is not only who sees the data – it is how portable the data becomes.
AI models built outside trusted healthcare institutions introduce new risks. While hospitals can already protect patient data, trusting external AI systems requires more robust privacy protections to prevent misuse and keep data secure.
Privacy challenges in healthcare AI
That potential, however, comes with significant privacy and ethical concerns.
Healthcare data may be the most sensitive personal information there is. It can reveal not only our medical conditions but also our behaviors, our habits, and our genetic predispositions.
There are valid fears that widespread adoption of AI in healthcare could lead to privacy violations, data breaches, or misuse of intimate personal information.
Even anonymized data is not automatically safe. Advanced AI models have shown an alarming ability to de-anonymize protected datasets by cross-referencing them with other information. There is also the risk of "model inversion" attacks, in which malicious actors can potentially reconstruct private training data by repeatedly querying an AI model.
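To make that risk concrete, here is a minimal, self-contained sketch of a model inversion attack. Everything in it is an illustrative assumption: a synthetic "patient" dataset and a toy logistic-regression model stand in for a real deployment. The attacker only calls the model's prediction API, yet can steer a blank input toward something the model confidently labels as a patient record:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "private" training set: 100 synthetic patient records with 4 features,
# plus 100 background records for contrast (all values are made up).
X_private = rng.normal(loc=2.0, scale=0.5, size=(100, 4))
X_background = rng.normal(loc=0.0, scale=0.5, size=(100, 4))
X = np.vstack([X_private, X_background])
y = np.array([1.0] * 100 + [0.0] * 100)

# Train a plain logistic-regression "diagnosis" model on the private data.
w, b = np.zeros(4), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * float(np.mean(p - y))

def query(x):
    """The attacker's only access: the deployed model's confidence score."""
    return float(1 / (1 + np.exp(-(x @ w + b))))

# Model inversion: start from a blank input and climb the confidence surface
# using finite-difference gradients estimated purely from repeated queries.
x_hat = np.zeros(4)
eps, step = 1e-4, 0.1
for _ in range(300):
    grad = np.array([(query(x_hat + eps * e) - query(x_hat - eps * e)) / (2 * eps)
                     for e in np.eye(4)])
    norm = np.linalg.norm(grad)
    if norm < 1e-12:        # confidence saturated; nothing left to learn
        break
    x_hat += step * grad / norm

print("model confidence in reconstructed input:", query(x_hat))
```

The reconstructed input is not any single patient's record, but it is a representative member of the private class, recovered without ever touching the training data. Real attacks on richer models (e.g. on facial-recognition systems) have recovered far more identifying detail.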
These concerns are not hypothetical. They represent real obstacles to the adoption of AI in healthcare, potentially holding back vital innovations. Patients may be reluctant to share data if they do not trust the privacy guarantees.
While standards and regulations require geographic and demographic diversity in the data used to train AI models, sharing data between healthcare institutions demands confidentiality: beyond being highly sensitive, the data carries each institution's own insights into diagnostics and treatments. This breeds distrust, with institutions reluctant to share data over regulatory, intellectual property, and misuse concerns.
The future of privacy-preserving AI
Fortunately, a new wave of privacy-preserving AI development is emerging to meet these challenges. Decentralized AI approaches, such as federated learning, make it possible to train AI models on distributed datasets without centralizing sensitive information.
This means that hospitals and research institutions can collaborate on AI development without directly sharing patient data.
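As a rough illustration of the idea, the sketch below simulates federated averaging across three hypothetical hospitals with synthetic records. Each site trains a model locally, and only the model weights travel to the aggregator; raw patient rows never leave the hospital. The data, model, and hyperparameters are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def make_hospital_data(n, shift):
    """Synthetic patient records: 3 features, binary outcome (all made up)."""
    X = rng.normal(shift, 1.0, size=(n, 3))
    true_w = np.array([1.0, -2.0, 0.5])
    y = (X @ true_w + rng.normal(0, 0.1, n) > 0).astype(float)
    return X, y

# Three hospitals with slightly different patient populations.
hospitals = [make_hospital_data(200, s) for s in (0.0, 0.3, -0.3)]

def local_update(w, X, y, lr=0.1, epochs=5):
    """One round of local logistic-regression training; data stays on site."""
    w = w.copy()
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# Federated averaging: only weight vectors are exchanged, never patient rows.
w_global = np.zeros(3)
for _ in range(20):
    local_ws = [local_update(w_global, X, y) for X, y in hospitals]
    w_global = np.mean(local_ws, axis=0)   # the server aggregates

# Evaluate the shared model on pooled held-out data (for illustration only).
X_test, y_test = make_hospital_data(500, 0.0)
pred = (X_test @ w_global > 0).astype(float)
print("federated model accuracy:", (pred == y_test).mean())
```

The jointly trained model benefits from all three datasets, yet no site ever saw another's records. Production systems add secure aggregation and noise on top of this basic loop, so the server cannot inspect even the individual weight updates.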
Other promising techniques include differential privacy, which adds statistical noise to data to protect individual identities, and homomorphic encryption, which allows computation on encrypted data without ever decrypting it.
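Differential privacy is straightforward to sketch. The toy example below, using a synthetic cohort of hypothetical patient ages, answers a counting query with the classic Laplace mechanism: a counting query has sensitivity 1, so adding noise drawn from a Laplace distribution with scale 1/ε makes the released answer ε-differentially private:

```python
import numpy as np

rng = np.random.default_rng(42)

def laplace_count(data, predicate, epsilon):
    """Differentially private count. A counting query changes by at most 1
    when one record is added or removed (sensitivity 1), so Laplace noise
    with scale 1/epsilon gives epsilon-differential privacy."""
    true_count = sum(predicate(x) for x in data)
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical cohort: ages of 1,000 synthetic patients.
ages = rng.integers(18, 90, size=1000)

true = int((ages > 65).sum())
private = laplace_count(ages, lambda a: a > 65, epsilon=0.5)

print("true count:   ", true)
print("private count:", round(private, 1))
```

With ε = 0.5 the noise scale is 2, so the released count is typically within a few patients of the truth: useful for population statistics, while any single individual's presence in the dataset is mathematically obscured. Smaller ε means stronger privacy and noisier answers.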
Another intriguing development is our off-chain execution framework, ROFL, which allows AI models to perform computations off-chain while maintaining verifiability. This could enable more complex healthcare AI applications to tap external data sources or processing power without compromising privacy or security.
Privacy-preserving technologies are still in their infancy, but they all point to a future where we can harness the full power of AI in healthcare without sacrificing patient privacy.
We should aim for a world where AI can analyze your complete medical history, your genetic profile, and even real-time health data from wearable devices, all while keeping that sensitive information private and secure.
This would enable highly personalized health insights without any single entity having access to raw patient data.
This vision of privacy-preserving AI in healthcare is not only about protecting individual rights, though that is certainly important. It is also about unlocking AI's full potential to improve human health, in a way that earns the trust of the patients it serves.
By building systems that patients and healthcare providers can trust, we can encourage greater data sharing and collaboration, leading to more powerful and more accurate AI models.
The challenges are significant, but the potential rewards are immense. Privacy-preserving AI could help us detect diseases earlier, develop more effective treatments, and, ultimately, save countless lives and unlock a new source of trust.
It could also help address healthcare disparities by enabling the development of AI models trained on diverse, representative datasets without compromising individual privacy.
As AI models grow more advanced and AI-driven diagnoses become faster and more accurate, the impulse to use them will become impossible to ignore. What matters is that we teach them to keep their secrets.
Edited by Sebastian Sinclair