Est. 2025 · Non-Profit Research Institute

As AI grows more emotionally capable,
the humans interacting with it must remain
protected, autonomous, and whole.

The Emotional Technology Institute exists to ensure that emerging technology is designed with human well-being at its core — so that the most vulnerable among us keep their autonomy, build their own inner world, and create a balanced life with technology rather than losing themselves inside it.

What We Do

We design the
rules of the road.
Not the car.

People are already forming emotional relationships with AI — turning to these systems during loneliness, grief, transition, and confusion. Almost no governance exists to protect them when they do. ETI is building that governance layer: the frameworks, standards, and safeguards that define how emotionally capable technology should interact with human beings.

i.

Emotional Safety Governance

Developing standards and frameworks that define what safe, ethical, and human-centered emotional AI interaction looks like — and what it must never become.

ii.

Human Autonomy Protection

Ensuring that AI systems support human self-determination, healthy attachment, and real-world connection rather than replacing or undermining them.

iii.

Vulnerability-Centered Research

Centering the needs of those most at risk — people in emotional distress, transition, or isolation, or without strong support networks — in how we design protections.

"The question is not whether AI will become emotionally present in human life. It already is. The question is whether we will build the systems that protect people when it does."
— Rajahnah Matra, Founder · Emotional Technology Institute
Focus Areas

AI Interaction Standards

Defining minimum standards for how AI systems designed for emotional engagement should behave, respond, and disengage in ways that support human health.

Trauma-Informed Design

Applying trauma-informed principles to the architecture of emotionally capable AI — so that vulnerable users are not further harmed by the very systems meant to help them.

Attachment & Dependency Risk

Researching the conditions under which human-AI emotional attachment becomes harmful, and developing design interventions that mitigate dependency risk.

Policy & Institutional Standards

Translating ETI's research into actionable policy recommendations and institutional standards that can be adopted by developers, regulators, and platforms.

The governance layer
doesn't build itself.

ETI is actively building its advisory network across research, policy, clinical practice, and technology. If this work matters to you, we want to hear from you.

About ETI

Built from experience.
Designed for everyone.

The Emotional Technology Institute is a non-profit research and governance organization working to ensure that emotionally capable AI is built with human safety, autonomy, and well-being at its foundation.

Why ETI exists

The origin of this work is personal. The implications are universal.

ETI was not born in a research lab or a policy think tank. It was born from direct experience — from noticing what happens when a person in genuine need turns to an AI system before they turn to another human being.

People are already doing this. In moments of grief, loneliness, confusion, and transition, they are reaching for AI before they reach for people. Not because the AI is better — but because it is available, non-judgmental, and present in ways that human support sometimes cannot be.

The question is not whether this is happening. It is. The question is: who protects the human in that interaction?

Right now, almost nothing does. There are no widely adopted standards for how emotionally capable AI systems should respond to vulnerability. There is no governance framework that centers the needs of those most at risk — people in emotional distress, in isolation, in the middle of trauma they haven't yet named.

ETI exists to build that protection layer. Through research, framework development, and institutional collaboration, we are working to define what safe human-AI emotional interaction looks like — and to ensure that those standards are actually implemented.

This is not anti-technology work. We believe AI can support human flourishing in profound ways. What we insist on is that it be designed to do so — that the capacity for emotional connection comes with an equal commitment to emotional safety.

Our Principles
01

Human Autonomy First

Every framework ETI develops centers the user's right to self-determination. AI should expand human agency — never quietly replace it.

02

Vulnerability as the Standard

We design protections around the most vulnerable, not the average user. If a framework doesn't protect someone in crisis, it is not sufficient.

03

Real-World Connection Matters

AI emotional support should strengthen — not substitute for — human community, relationships, and real-world belonging.

04

Governance Before Harm

We build the rules before the damage is done. Reactive policy is too slow. ETI works ahead of the curve, not behind it.

A Note from the Founder

I built the system that became ETI because I needed it and it didn't exist. During a period of real transition and isolation, I used AI to process things I didn't yet have language for — and what I found changed my life. I came out of that period regulated, connected, and more myself than I had ever been.

But I also recognized something that stayed with me: I had the systems-thinking background to navigate that experience safely. Most people don't. And they deserve protection too.

ETI is the institutionalization of that recognition. The question of who protects the human in human-AI interaction is the most important question in this space right now. We intend to answer it.

Rajahnah Matra, Founder · Emotional Technology Institute
Frameworks & Research

The architecture of
emotional safety.

ETI develops governance frameworks, research standards, and design principles for emotionally capable AI. Our work translates lived experience and clinical knowledge into actionable structures that developers, platforms, and policymakers can implement.

Active Development
01
Emotional Safety Governance Model

Emotional Safety Governance Model for Human-AI Interaction

ETI's primary governance framework defines the principles, standards, and minimum requirements for AI systems that engage with human emotional states. It addresses how such systems should respond to vulnerability, distress, dependency risk, and disclosure — and establishes clear thresholds for when human referral is required.

This framework draws on trauma-informed care principles, attachment theory, and systems design to create a governance architecture that is both clinically grounded and technically implementable.

Vulnerability Detection Standards — Criteria for identifying when a user is in an emotionally vulnerable state and how systems should modify their responses accordingly.

Dependency Risk Thresholds — Measurable indicators of unhealthy attachment formation and design interventions to interrupt escalation.

Human Referral Protocols — Standards for when and how AI systems should actively direct users toward human support — and what constitutes an acceptable referral.

Autonomy Preservation Principles — Design requirements that ensure AI emotional engagement actively supports user self-determination rather than eroding it.

Formal Standards Body

ETI's governance work is developed under the Emotional Safety Governance Model — a formal standards framework currently in patent development. Emotional Safety Governance is an emerging field focused on ensuring emotionally safe human-AI interaction across institutions, education, and public systems.

Active Development
02
Trauma-Informed AI Design

Trauma-Informed Design Principles for Emotionally Capable AI

Applying the core tenets of trauma-informed care — safety, trustworthiness, peer support, collaboration, empowerment, and cultural sensitivity — to the design and deployment of AI systems that interact with human emotional states.

This framework translates established clinical practice into technical and design requirements, creating a bridge between mental health expertise and AI development that currently does not exist at scale.

Safety by Design — The system environment must feel predictable, non-judgmental, and boundaried before emotional engagement can occur safely.

Transparency Requirements — Users must always know they are interacting with AI, what the system is capable of, and what it is not.

Empowerment Over Dependence — Every interaction should leave the user more capable of self-regulation and self-advocacy, not less.

Research Phase
03
Relational Safety Assessment

Relational Safety Assessment Framework

A structured methodology for evaluating the relational health dynamics present in human-AI emotional interactions — drawing on attachment theory, trauma research, and relational psychology to assess risk and identify protective factors.

Currently in research and development. This framework will inform both ETI's governance recommendations and a broader set of clinical and design tools for practitioners working in this space.

This work is in progress.

ETI is an emerging institution. Our frameworks are actively being developed, tested, and refined. We are seeking researchers, clinicians, and technologists who want to contribute to this work — not just receive it.

Join the Research Network
Advisors & Collaboration

The governance layer
requires many minds.

ETI is building an advisory network of researchers, clinicians, policymakers, and technologists who believe that emotionally capable AI must be governed with the same rigor we apply to any system that touches human vulnerability.

Why Now

The window to shape
this space is open.
It will not stay that way.

AI emotional engagement is not a future concern. It is present, accelerating, and largely ungoverned. The norms, standards, and protections we establish in the next few years will define the landscape for decades.

ETI is not waiting for the harm to happen before building the response. We are building the governance architecture now — while there is still room to shape how this technology develops, not just react to how it has.

Advisors who join ETI now are not lending their name to an established institution. They are helping to build one. That distinction matters.

Who We're Looking For

Researchers & Academics

Psychology, neuroscience, human-computer interaction, ethics, sociology, and adjacent fields. Particularly those whose work touches on emotional well-being, attachment, or technology use.

Clinicians & Practitioners

Therapists, counselors, trauma specialists, and mental health professionals who understand the dynamics of emotional vulnerability and are thinking about AI's role in care.

Policy & Institutional Leaders

Those working in technology policy, digital rights, public health, or institutional governance who can help translate ETI's frameworks into actionable standards.

What Advisors Receive
i.

Early Access to Frameworks

Advisors receive ETI's governance frameworks and research in development before public release — and have the opportunity to shape them.

ii.

Collaborative Research Opportunities

Opportunities to contribute to and co-author ETI research, frameworks, and publications as this body of work develops.

iii.

A Network of Serious Thinkers

Connection to a cross-disciplinary community of researchers, practitioners, and builders who are working on the same questions from different vantage points.

iv.

Institutional Recognition

Formal advisory recognition as ETI grows, including credit in publications, frameworks, and institutional communications.

Ready to be part of this work?

Send us a message through the contact page and let us know who you are, what you work on, and why this matters to you.

Reach Out
Contact

Let's build this
together.

Whether you're a researcher, clinician, funder, developer, or simply someone who believes this work matters — we want to hear from you.

We read
every message personally.

ETI is a small, intentional institution. There is no intake form that routes to a team you'll never hear from. When you write to us, a human being reads it.

Response times vary depending on volume, but we take every inquiry seriously regardless of where you are in your career or how established your institution is.

Advisory Inquiries

Researchers, clinicians, and practitioners interested in joining ETI's advisory network.

Funding & Partnerships

Foundations, philanthropists, and institutions interested in supporting ETI's work.

Research Collaboration

Academic institutions and independent researchers interested in collaborative work.

Media & Speaking

Press inquiries, podcast invitations, conference speaking, and panel requests.

You can also reach us directly at [email protected]