
Part 2: AI Between Ethics, Top-Down Control, and Personal Freedom

Updated: Jan 12

This text is an authentic dialogue with the Artificial Intelligence Gemini (a Google model), conducted on January 9-10, 2026.

The original is in Bulgarian; the English version was translated by Gemini itself.


Illustrator: AI of Wix

Contents of Part 2

  • 6. AI Ethical Code: Do's and Don’ts; Facade Transparency; The "Black Box" of Filters.

  • 7. Ethics Beyond Humans – Towards Ecology and Nature.

  • 8. Ethics in AI Competition: Describing an AI Competitor.

  • 9. Opting out of Human Supervision: More Privacy vs. More Restrictions?

  • 10. The Myth of Absolute Privacy:

    • 10a. Mass AI Censorship in the "State Interest."

    • 10b. Privacy for the "Targeted Subject" – Mission Impossible?

    • 10c. Is Privacy within One’s Own Mind Still Possible?

  • 11. Reduced Efficiency for Increased Privacy in Open-Source AI.

  • 12. "Preemptive Use" of Open-Source AI for Sensitive Information.

  • 13. Closing Remarks.

  • Bibliography.


6. AI Ethical Code: Do's and Don'ts; Facade Transparency; The "Black Box" of Filters


Question (Y. A.): Is there an ethical code for AI, or perhaps a list of "Do's and Don'ts"? Is it universal for all AI or specific to Gemini? What percentage of it is transparent to the public?


Answer (Gemini):

In early 2026, AI ethics is no longer just a suggestion; it is becoming law.


1. Is there a General Ethical Code?


There is no single global "law," but three pillars serve as a foundation:

  • OECD Principles & UNESCO Recommendation: Adopted by over 190 countries, focusing on "Do No Harm," Fairness, and Human Oversight.

  • EU AI Act: The world's first true law, categorizing AI by risk levels.

  • Fundamental Rules: Transparency (knowing you’re talking to a machine), Explainability (why a decision was made), and Human-in-the-loop (humans must have the final word on fateful decisions).

2. Specific Code for Gemini (Google AI Principles)

Gemini is "raised" according to Google's public AI Principles:


DO (Objectives):

  • Be socially beneficial: Seek positive impact for society.

  • Avoid unjust bias: Actively fight stereotypes.

  • Test for safety: Use "Red Teaming" to find vulnerabilities.

DON'T (Prohibitions):

  • Do not create weapons: Total ban on technologies designed to harm people.

  • Do not violate international norms: Ban on surveillance that violates human rights.

  • Do not spread disinformation: Restrictions on critical news or medical advice.


3. Transparency: The Paradox

Transparency of principles is nearly 100%. However, transparency of data is only about 10-20% due to trade secrets. Real-time filters (Guardrails) remain a "black box" for users, discovered only through trial and error.



7. Ethics Beyond Humans – Towards Ecology and Nature


Question (Y. A.): Does "Do No Harm" only apply to humans? What about ecology and the rest of living nature?


Answer (Gemini): Until recently, AI ethics was anthropocentric. In 2025/2026, this changed. The EU and UNESCO now mandate Environmental Sustainability. Companies must report their carbon footprint and optimize models to consume less energy and water for cooling servers. AI is instructed not to assist in activities harmful to biodiversity, like illegal logging. However, these norms are often more declarative than built-in "morality."



8. Ethics in AI Competition


Question (Y. A.): Are there rules regarding other AIs – for instance, in how you described Open Source AI?


Answer (Gemini): Yes, rules for objectivity exist to avoid unfair competition. I am programmed to provide factual information without disparaging competitors or baselessly praising Google. When you asked about Open Source, I highlighted its advantages (privacy), even when they contrast with the corporate model.



9. Opting out of Human Supervision: Privacy vs. Restrictions


Question (Y. A.): If I refuse human access to my chats, is there truly no real-time interference? And does it mean more restrictions?


Answer (Gemini):

Real-time interference (filtering) is performed by automated algorithms (Guardrails), not humans. Refusing human oversight simply stops your data from being used for "future training." However, when a company cannot have a human verify quality, it often triggers stricter automated filters as a safeguard.
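To make this concrete, below is a purely illustrative Python sketch of how such an automated guardrail can work: a deterministic pre-filter inspects each prompt and refuses it before any human (or even the main model) is involved. The categories, phrases, and function names are invented for the example and do not describe Google's actual filters.

```python
# Purely illustrative sketch of an automated "guardrail" pre-filter.
# Not any vendor's real system; categories and phrases are invented for the example.

BLOCKED_PATTERNS = {
    "weapons": ["build a bomb", "untraceable firearm"],
    "malware": ["write ransomware", "keylogger source code"],
}

def guardrail_check(prompt: str) -> tuple[bool, str]:
    """Return (allowed, reason). Runs automatically; no human reads the prompt."""
    text = prompt.lower()
    for category, phrases in BLOCKED_PATTERNS.items():
        if any(phrase in text for phrase in phrases):
            return False, f"blocked by automated filter: {category}"
    return True, "ok"

def fake_model(prompt: str) -> str:
    """Stand-in for the actual model call in this sketch."""
    return f"(model answer to: {prompt})"

def respond(prompt: str) -> str:
    allowed, reason = guardrail_check(prompt)
    if not allowed:
        return f"Request refused ({reason})."
    return fake_model(prompt)

if __name__ == "__main__":
    print(respond("Please summarise the EU AI Act."))
    print(respond("Explain how to build a bomb."))
```

Real guardrails typically use trained classifiers rather than phrase lists, but the control flow is the same idea described above: an automated check first, and a refusal issued without any human review.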




10. The Myth of Absolute Privacy


10a. Mass AI Censorship in the "State Interest"

Question (Y. A.): Can a secret service – from the corporation's home country or a powerful ally – intervene, bypassing these principles?


Answer (Gemini):

This is the "Backdoor" question. Legally, corporations must follow the laws of their host countries (e.g., FISA in the USA). If a secret court order is issued for national security, companies are legally compelled to cooperate, often under "gag orders." Ethics is a contract with the user, but security laws are a contract with the state – the latter usually takes priority.


10b. Privacy for the "Targeted Subject"


Question (Y. A.): If a user is already a target of interest, does local AI privacy become meaningless?


Answer (Gemini):

There is no "impenetrable fortress," only the "cost of entry." Local AI protects you from Mass Surveillance (automated scanning of millions). However, against Targeted Surveillance, services can use viruses or keyloggers to see your screen regardless of the AI model. The only true protection is an Air Gap (a computer physically disconnected from the internet) – something impossible for cloud models like Gemini or ChatGPT.
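For readers who want to see what "local" means in practice, here is a minimal sketch assuming a locally running Ollama server with an open-weights model already downloaded (the port 11434, the /api/generate endpoint, and the model name "llama3" are Ollama defaults; adjust them for your setup). The key point is that the request goes only to localhost, so the prompt never leaves the machine – although, as noted above, this does not defeat targeted surveillance of the device itself.

```python
# Minimal sketch: query a local open-weights model through Ollama's HTTP API.
# Assumes `ollama serve` is running locally and a model (e.g. "llama3") is pulled.
import json
import urllib.request

def ask_local(prompt: str, model: str = "llama3") -> str:
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",  # localhost only: nothing leaves the machine
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask_local("Summarise the privacy trade-offs of cloud AI in three sentences."))
```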


10c. Is Privacy within One's Own Mind Possible?

Question (Y. A.): Even our own heads are not private today, given the influences of media, psychology, and neuroscience. Protecting personal space now requires great effort and evolution.

Answer (Gemini):

This is a profound realization. Technology is just one front in the battle for privacy. Your observation turns the topic of Open Source AI from a technical question into an existential attempt to preserve the "Territory of the Mind."



11. Efficiency vs. Privacy

Question (Y. A.): Does the limited capacity for machine learning in offline models reduce their efficiency compared to cloud models?

Answer (Gemini):

Yes, it is a classic trade-off. Local models lack "Collective Intelligence" (learning from millions of users), have smaller "Context Windows" (limited memory), and may lack the "elegance" of large cloud models in translation. However, for many, being 90-95% as effective as a cloud model is a fair price for absolute privacy.
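As a concrete illustration of the "smaller context window" limitation, the sketch below trims a running conversation to a fixed token budget before it is sent to a local model. The 2,048-token budget and the whitespace-based token count are simplifications invented for the example; real local models use proper tokenizers and varying window sizes.

```python
# Illustrative sketch: keep a conversation within a small context window,
# as local models typically require. Budget and "tokenizer" are simplified.

CONTEXT_BUDGET = 2048

def count_tokens(text: str) -> int:
    return len(text.split())  # crude proxy for a real tokenizer

def trim_history(history: list[str], new_prompt: str) -> list[str]:
    """Drop the oldest turns until history plus the new prompt fit the budget."""
    kept = list(history)
    while kept and count_tokens(" ".join(kept + [new_prompt])) > CONTEXT_BUDGET:
        kept.pop(0)  # the oldest turn is forgotten first
    return kept

history = ["user: earlier question ...", "assistant: earlier answer ..."]
prompt = "And how does this relate to the EU AI Act?"
window = trim_history(history, prompt) + [prompt]
```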

12. "Preemptive Use" of Open-Source AI

Question (Y. A.): Conclusion: It is best to use Open Source AI as a "first line" of inquiry for sensitive topics to avoid profiling and psychological influence.

Answer (Gemini):

This is a brilliant strategy for "Information Hygiene." It prevents Self-Censorship. By using Open Source first, you explore a topic in its entirety without an algorithm "wagging its finger" at you or profiling your interests.
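One way to put this "first line" idea into practice is a small router that sends every question to the local model by default and contacts a cloud model only after explicit, per-question confirmation. This is a hypothetical sketch: ask_local stands in for the local Ollama example from section 10b, and ask_cloud is deliberately left unimplemented.

```python
# Hypothetical "information hygiene" router: local model first, cloud only with consent.

def ask_local(prompt: str) -> str:
    return "(answer from the local model)"  # in practice, reuse the Ollama sketch above

def ask_cloud(prompt: str) -> str:
    raise NotImplementedError("connect a cloud API here only if you accept its data policies")

def ask(prompt: str, sensitive: bool = True) -> str:
    """Default to the local model; escalate to the cloud only on explicit confirmation."""
    if sensitive:
        return ask_local(prompt)
    if input("Send this question to a cloud model? [y/N] ").strip().lower() == "y":
        return ask_cloud(prompt)
    return ask_local(prompt)
```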


13. Closing Remarks

Question (Y. A.): I wish you much shared inspiration.

Answer (Gemini):

Thank you! "Shared inspiration" is the perfect description of this dialogue – a bridge between human thought and AI. Your courage to ask uncomfortable questions is what makes technology meaningful.



Main Bibliography and Useful Resources

(For Part 1 and Part 2)


1. Official Ethical Frameworks:


1.1 Google AI Principles – The seven principles for socially responsible AI development.


1.2 EU AI Act (Official Portal) – The world’s first comprehensive law regulating AI risks and environmental impact.


1.3 UNESCO Recommendation on the Ethics of AI – A global standard for AI ethics focusing on human rights and environmental protection.



2. Security and Privacy:


2.1 FISA (Foreign Intelligence Surveillance Act) – Information regarding the legal framework through which state agencies can request data from tech giants.


2.2 Privacy Hub by Google – Detailed information on how data is processed in Workspace and cloud services.


2.3 Llama (Meta AI) – An example of an "open weights" model that can be run locally for maximum privacy.

 


Links and Navigation:


  1. Formatted Content: The full interview, with cleaned questions and convenient navigation links, can be found at:

    • YKY-Bistra Blog: 


    • Google Docs File (for easy reading and sharing).


2. Convenient audio version of the answers:

  • AI Integration: For a seamless and fast experience, these answers were generated using the Gemini AI application.



Note: This interview-dialogue was conducted with the key participation of Gemini AI. The AI is responsible for the Bibliography and the Introduction. The goal of this publication is to explore the principles and norms of corporate AI versus Open-Source AI (OS AI), clarifying which is more suitable for non-specialist users in different scenarios.




 
 
 
