OpenAI has announced a commitment to integrating public feedback into the development of its future AI models. To that end, the company is forming a new team of researchers and engineers, called the Collective Alignment team.
The team will focus on building a system to gather public opinions and values and incorporate them into the behavior of OpenAI’s products and services. The company writes in a blog post: “We’ll continue to work with external advisors and grant teams, including running pilots to incorporate … prototypes into steering our models. We’re recruiting … research engineers from diverse technical backgrounds to help build this work with us.”
The formation of the Collective Alignment team expands on a grant initiative OpenAI launched last May, which aimed to establish a “democratic process” for determining the rules that govern AI systems.
Through that program, OpenAI offered financial support to individuals, teams, and organizations to build prototypes and conduct studies on questions of AI guardrails and governance.
In its latest blog post, OpenAI reviewed the diverse projects undertaken by its grant recipients, which ranged from video chat interfaces to platforms for community-driven audits of AI models to approaches for mapping beliefs onto dimensions that can be used to fine-tune model behavior. OpenAI has now made all the code from these projects publicly available, along with concise summaries of each proposal and the key insights that emerged.
OpenAI is currently facing heightened attention from regulators, notably an investigation in the U.K. into its ties with Microsoft, a key partner and investor. The company has also been taking steps to reduce its regulatory exposure in the EU around data privacy, relying on a Dublin-based subsidiary to curb the ability of certain privacy authorities in the bloc to act unilaterally on concerns.
In what appears to be a move to appease regulators, OpenAI announced yesterday that it is working with various organizations on strategies to prevent its technology from being misused to manipulate or improperly influence elections.
The startup is improving its methods for making images created with its tools identifiable as AI-generated, including techniques that can recognize generated content even after the images have been modified.
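OpenAI has not published the details of these techniques, but one family of approaches that can survive later edits is the invisible watermark: a low-amplitude pseudorandom signature mixed into the pixel data at generation time and later detected by statistical correlation rather than by metadata. The sketch below is a minimal illustration of that general idea, not OpenAI’s implementation; every name and parameter in it (KEY, ALPHA, embed, detect, the detection threshold) is hypothetical.

```python
import numpy as np

KEY = 42      # hypothetical secret key shared by embedder and detector
ALPHA = 2.0   # hypothetical watermark strength, in pixel-intensity units

def watermark_pattern(shape, key=KEY):
    """Derive a reproducible +/-1 pattern from the secret key."""
    rng = np.random.default_rng(key)
    return rng.choice([-1.0, 1.0], size=shape)

def embed(image, key=KEY, alpha=ALPHA):
    """Mix a faint pseudorandom signature into the pixel data."""
    return np.clip(image + alpha * watermark_pattern(image.shape, key), 0, 255)

def detect(image, key=KEY, threshold=1.0):
    """Correlate the image against the key's pattern. The signature
    contributes roughly +alpha to the score, while unmarked images
    score near zero, so a score above the threshold implies the
    watermark is present, even after mild edits such as added noise."""
    pattern = watermark_pattern(image.shape, key)
    score = float(((image - image.mean()) * pattern).mean())
    return score > threshold, score

# Toy usage: embed, simulate an edit (added noise), and still detect.
rng = np.random.default_rng(0)
original = rng.uniform(0, 255, size=(256, 256))
marked = embed(original)
edited = np.clip(marked + rng.normal(0, 4, size=marked.shape), 0, 255)
print(detect(edited))    # (True, score near ALPHA)
print(detect(original))  # (False, score near 0)
```

Production systems are far more sophisticated, and provenance standards such as C2PA take a different route, attaching signed metadata instead of altering pixels; the point of the toy example is only that detection can be made statistical, so it does not break the moment an image is re-encoded or retouched.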