Get Involved

Join the AISUCI community and make an impact.

Technical Intro Fellowship

Spring 2026 · Open

An 8-week reading group on technical AI safety. Participants meet weekly in small sections on Thursdays from 5–7pm in Humanities Hall, with dinner provided. No work is expected outside of weekly meetings.

Topics include:

  • AI risk and the current trajectory of AI development
  • Scalable oversight
  • Mechanistic interpretability
  • Robustness and unlearning

Open to undergraduate and graduate students. Participants receive a completion certificate and early access to membership opportunities.

We run the fellowship every quarter.

Membership

Rolling Admissions

Being a member of the AISUCI community comes with both opportunities and responsibilities. Membership includes:

  • Free Claude Pro / Claude Code subscription
  • Compute and research tools
  • Weekly member meetings to read and discuss alignment research
  • Small group discussions with alignment researchers and professors
  • Connections with top orgs like Redwood Research, the U.S. AI Safety Institute, and METR
  • Opportunities for AI safety community workshops & retreats
  • A community of talented students interested in reducing risks from advanced AI

Members generally contribute by running or participating in workshops, discussions, socials, hackathons, and more. While we are a UCI-recognized student group, membership is not restricted to UCI students — independent researchers and students from other universities are welcome.

If you aren't very familiar with AI safety, we recommend applying to the Technical Intro Fellowship above; we typically give fellowship alumni priority in the membership application process.

Membership admissions are rolling, but the board typically makes decisions monthly. If we are slow to respond, please don't hesitate to email us at .

Join the Board

We're always looking for motivated people to help run AISUCI. Board members organize fellowships, events, workshops, and outreach, and help steer the direction of the group.

We value people who are genuinely excited about AI safety. There's no single mold, but we look for responsible, agentic, and high-context individuals who have demonstrated commitment to our mission.

The best first step is a conversation. Book a coffee chat and tell us what you're interested in.