
For: "AI Commercialization Risks & Disclosures" team at the Social Science Research Council (SSRC) 

Location: Brooklyn, New York (remote or in-office).

Start: ASAP
Position: Technical Intern

Work Period: Ongoing / flexible (part-time or full-time available)

Pay: $18-$22 per hour

Immediate Manager: Dr. Ilan Strauss

Co-team leader: Tim O'Reilly

We are looking to hire two Computer Science (CS) students to help us build LLM-related governance and information tools that increase LLM openness and accountability across business metrics, including model output quality, data provenance, advertising and paid content, payments to third-party providers, and more. Possible tools you would explore building include:

  • Adapting existing LLM leaderboard tools and approaches to track our own AI product leadership metrics;
  • An LLM with RAG (retrieval-augmented generation) functionality that lets the public assess AI business disclosures across LLMs (perhaps through a simple dashboard built with Claude);
  • A tool that makes API access to certain platforms (e.g., YouTube) easier for researchers, or a cross-platform API tool;
  • Ideas of your own!
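
To give a flavor of the second tool idea, the retrieval step behind a RAG-style disclosure explorer can be sketched in a few lines. Everything below is illustrative: the disclosure snippets and keys are placeholders, and a real tool would replace word-overlap scoring with embedding search and pass the retrieved text to an LLM for the final answer.

```python
from collections import Counter

# Placeholder corpus: keys and text are invented for illustration only.
DISCLOSURES = {
    "model-quality": "We report model output quality using internal evaluation benchmarks.",
    "data-provenance": "Training data provenance is documented for licensed and public sources.",
    "advertising": "Paid content and advertising placements are labeled in model outputs.",
}

def score(query: str, text: str) -> int:
    """Rank by overlap of lowercase word tokens between query and document."""
    q, d = Counter(query.lower().split()), Counter(text.lower().split())
    return sum(min(q[w], d[w]) for w in q)

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the keys of the top-k disclosure snippets for a query."""
    ranked = sorted(DISCLOSURES.items(), key=lambda kv: score(query, kv[1]), reverse=True)
    return [key for key, _ in ranked[:k]]
```

A query like `retrieve("data provenance sources")` surfaces the provenance snippet; the point of the internship would be scaling this pattern to real disclosures across many LLM providers.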

The ideal intern should:

  • be able to teach themselves new things
  • have initiative with ideas of their own
  • code and build things in their spare time
  • have a keen interest in LLMs and the unfolding competitive market dynamics around them
  • believe in the benefits of open technology and want to use technology to improve society
  • be highly proficient in core programming languages 
  • have knowledge of web scraping and cloud computing tools (advantageous but not required)
     

These tools will support our research, with a view to informing experts, policymakers, and company CEOs. Your work will have an impact!


About the "AI Commercialization Risks and Disclosures" project at the SSRC

Led by noted technologist Tim O’Reilly and economist Dr. Ilan Strauss, the AI Commercialization Risks research project at the SSRC focuses on the harms to societal safety and equity that may arise from AI’s unfettered commercialization. It aims to ensure that economic incentives don’t lead AI corporations to ignore technological or societal risks in pursuit of market dominance or profit maximization, thereby harming their users and third-party firms.

Current AI governance frameworks focus on risks inherent within the capabilities of AI models themselves. This downplays the significant risks that originate from the commercial incentives acting on AI companies to develop and deploy AI in specific ways as they compete in the marketplace. These include the business incentives to “move fast and break things”, abuse their trusted position as custodians of consumer preferences, or exploit their ecosystem when they are dominant. Today, the majority of AI resources are concentrated at the corporate level, making corporate governance and disclosures a critical entry point for the broader governance of AI’s development.

Through high-quality research, collaboration, and policy engagements, we aim to enhance transparency around the business practices connected to AI’s development. By doing so, we intend to make AI markets more competitive and fair, improve commercial incentives, and govern these evolving markets effectively. This will involve conducting interviews and research to understand AI companies’ business practices and the metrics they use to measure and manage the systems they build in order to reach their business objectives. Our goal is to learn from those who are acting responsibly and use their best practices to shape standards for AI auditing and regulation that are informed by the commercial realities of AI markets.