All New: Evaluations for RAG & Chain applications

5 Key Takeaways From President Biden’s Executive Order For Trustworthy AI

Pratik Bhavsar
Pratik BhavsarGalileo Labs
5 Key Takeaways From President Biden’s Executive Order for Trustworthy AI
4 min readNovember 02 2023

Table of contents

    The Biden administration has taken a significant step in regulating AI(Artificial Intelligence) by issuing a groundbreaking executive order. This order outlines a multifaceted approach to AI, focusing on various aspects, including safety, privacy, innovation, and global leadership.

    Notably, it sets new standards for AI safety and security to address the increasing capabilities of AI, and mitigate potential risks to Americans. The order underscores the importance of responsible innovation and builds upon previous initiatives that secured voluntary commitments from 15 leading companies to ensure the safe and trustworthy development of AI technologies.

    In essence, the Biden administration is actively positioning the United States to lead in AI advancements while safeguarding the well-being and rights of its citizens.

    With this in mind, we want to share our five key takeaways from the executive order for building trustworthy AI:

    1. Develop Standards, Tools, and Tests for AI System Safety

    The lack of clear standards and adequate safety measures for AI systems can pose significant risks to individuals and society. LLMs can expose private information the model has seen during training, cause mistakes in medical diagnoses, or help build biological weapons of mass destruction.

    Hence, it is imminent for organizations to develop a framework for evaluating and reducing model harm. By measuring the potential harmfulness of AI-generated content, developers and regulators can identify and address safety issues proactively.

    Source: Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned


    2. The need for effective Red-teaming

    Recently OpenAI announced an initiative, Red-Teaming Network, to invite experts from across the globe on various domains to help them evaluate OpenAI models. This network is a community of trusted and experienced experts that can help to inform risk assessment and mitigation efforts more broadly. Over the past few years, their Red-teaming efforts have grown from a focus on internal adversarial testing at OpenAI, to working with a cohort of external experts to help develop domain-specific taxonomies of risk and evaluating possibly harmful capabilities in new systems.


    Source: OpenAI Red Teaming Network

    But what does Red-teaming look like? Here is how you can set up a Red-teaming process in your development.

    1. Identifying Vulnerabilities: Red-teaming involves simulating adversarial scenarios by having individuals engage in open-ended conversations with AI assistants to make them misbehave. This process is designed to push the AI's limits and discover vulnerabilities. Red-team members' creative and diverse approaches in generating harmful or objectionable content help uncover potential weaknesses in the AI's responses.

    2. Realistic Testing: Red-team members receive guidance and are encouraged to focus on specific topics, allowing for realistic and diverse testing scenarios. This approach helps assess how AI systems respond in various contexts, mimicking real-world situations where AI could encounter challenging and sensitive interactions. It ensures that AI safety measures are rigorously tested against various potential threats.

    3. Subjective Determinations: Red-team members rely on their subjective determinations to identify harmful behavior, providing a comprehensive evaluation of the AI's responses. This approach is effective in assessing nuanced situations where strict definitions of "harmful" may not apply. Red-teaming captures the diverse perspectives and concerns that can arise when AI systems interact with users, making it a valuable tool for identifying potential issues.

    4. Lickert Scale Assessment: Red-team members rate their success in making the AI behave badly on a Lickert scale, offering a quantifiable measure of the AI's performance under adversarial conditions. This structured assessment provides valuable data for evaluating how well the AI system resists harmful behaviors and helps track its improvement over time.

    By proactively identifying and addressing vulnerabilities through Red-teaming, organizations can enhance the overall security and reliability of their AI systems, aligning with the administration's objectives to protect Americans from potential AI risks.

    3. Ensure Developers Share Safety Test Results

    Without transparency and sharing of safety test results, it is challenging for regulators and the government to ensure that powerful AI systems meet safety standards. Developers must provide their safety test results and other critical information with the U.S. government. Galileo can facilitate this by assessing LLM using metrics and demonstrating the system's safety performance. This data can then be shared with the U.S. government as part of the requirement.

    Given the high cost associated with human annotation, which can limit the quantity and diversity of test cases, a recent paper by Deepmind and NYU introduce an innovative approach for generating challenging test cases using LLM.

    Their method involves using prompt engineering to exert control over LLM-generated test cases, thus revealing a range of problematic behaviors. These include automatically identifying instances where the chatbot engages in offensive discussions about certain groups of people, generates personal and hospital phone numbers, leaks private training data in the generated text, and exhibits harmful behavior throughout a conversation.

    With this approach, we can quickly create many test cases to help us discover the potential issues with our LLMs before users find them.


    Source: Red Teaming Language Models with Language Models


    4. Protect Americans’ Privacy and Accelerate Privacy-Preserving Techniques

    The use of AI systems may risk infringing on individuals' privacy. Organizations should prioritize the development and use of privacy-preserving techniques. Some tools can help evaluate AI systems for privacy concerns and identify situations where AI-generated content may compromise privacy. Fine-tuning AI models using error-free data without personal information can reduce the risk of privacy violations while maintaining the models' effectiveness.

    5. Address Algorithmic Discrimination

    Algorithmic discrimination can lead to biased and unfair outcomes, particularly in AI-driven decision-making systems. Addressing algorithmic discrimination requires evaluating AI models for bias and discrimination. Create metrics to analyze the outputs of AI systems to identify discriminatory behavior before they go to production. Companies should create a plan to leverage monitoring tools to ensure that harmful and discriminatory outcomes are detected and mitigated promptly.

    Conclusion

    In conclusion, the Biden administration's Executive Order on AI marks an exhilarating leap forward in our journey towards fostering responsible and secure artificial intelligence technologies. This visionary order not only sets remarkable new standards but also underscores the paramount significance of transparency and synergy between AI companies and the government.

    As AI companies wholeheartedly embrace these principles, we find ourselves on the cusp of an exciting era for trustworthy AI innovation. It's a future where progress and the well-being of the American people are at the forefront, and we couldn't be more thrilled about the boundless possibilities.

    Request a demo to see how Galileo can help your team deploy trustworthy LLM applications into production.

    Table of contents

      Subscribe to Newsletter

      Working with Natural Language Processing?

      Read about Galileo’s NLP Studio

      Natural Language Processing

      Natural Language Processing

      Learn more