We analyze the norms, policies, and technical architectures that shape speech, data, and identity online, and develop normative frameworks for ethical platform design and governance.
We develop systematic frameworks and practical guidelines for responsible data use across AI, platforms, and public institutions, with attention to power, consent, and social impact.
We address questions of algorithmic accountability, bias and fairness, transparency, and the distribution of power in AI development, with a focus on both technical standards and policy frameworks.
We investigate technology-induced psychological harms, the accessibility and effectiveness of digital mental health interventions, and the ethical boundaries between treatment and enhancement.