With just months left in the Biden administration, Arati Prabhakar reflects on AI risks, the importance of immigration reform for STEM talent, and the success of the CHIPS Act. She highlights the urgent need for trust in technology, while addressing the concerning rise in skepticism towards science. Prabhakar’s remarks point to the ongoing challenges and future paths for AI safety in a changing political landscape.
As the Biden administration nears its end, Arati Prabhakar, the head of the White House Office of Science and Technology Policy, prepares to step away. She has played a pivotal role in shaping AI safety standards. Prabhakar was the first person to demonstrate ChatGPT to President Biden, and in 2023, she helped the president sign an executive order on AI aimed at increasing the safety and transparency of these technologies, albeit on a voluntary basis.
The incoming Trump administration hasn’t clearly outlined its approach to AI. Trump has criticized the current executive order as stifling innovation, leaving the tech landscape fraught with uncertainty. Will there be a push to roll back these safety guidelines? Meanwhile, Elon Musk, who has often warned of AI’s potential dangers, has called for some regulatory measures focused on AI safety.
I sat down with Prabhakar to discuss the challenges ahead and the directions AI policy may take under the new leadership. We covered a range of topics, including the pervasive risks of AI, the future of immigration policies affecting tech talent, and the implications of the CHIPS Act.
Prabhakar didn’t hold back on the risks posed by AI advancements, particularly mentioning how deepfakes and image-based abuse have escalated. “We worked with the Gender Policy Council to demand immediate action from the industry, and while some companies are responding, we still have a long way to go,” she noted. It seems clear the conversations around AI’s misuse have gone from theoretical to alarming reality.
When asked if certain AI risks proved overstated, she reflected, “Initially, there was significant concern about AI being used for biological weapons, but studies showed the risk wasn’t as formidable compared to traditional bad actor exploits. That perspective shifted as people started assessing AI initiatives alongside other existing vulnerabilities.”
There’s an inherent skepticism surrounding tech use in national security and law enforcement, especially regarding privacy concerns and biases. Prabhakar emphasized that trust is critical: “If individuals fear these AI tools are unsafe or biased, the potential benefits won’t materialize.” She pointed out the contrasts in facial recognition technology applications: misuse can lead to false arrests, while trusted practices can enhance security at airport check-ins. “We need to aim for responsible application of technology,” she added.
The recent veto of an AI safety bill in California didn’t catch her off guard. She explained that the bill attempted to set safety protocols for AI that were, in her view, unrealistic given the lack of concrete methods for assessing AI safety. The scientific community, she added, needs more publicly funded research to tackle these deep questions of trust and safety in AI.
Shifting focus, we dug into the immigration landscape and its importance for attracting global talent essential for the STEM fields. Prabhakar reflected on her own journey from India and acknowledged that while the U.S. immigration narrative has evolved over decades, hurdles remain. “We’re not just competing for talent with other nations; strategic needs mean weaving in our interests with security concerns,” she noted. This reflects the broader conversations happening around the election and national talent acquisition.
As for the CHIPS Act, which aims to boost semiconductor manufacturing in the U.S., she expressed optimism. “We’re seeing significant investments, not just from Intel but also TSMC, Samsung, and others, which is a positive sign for diversifying our semiconductor supply chain.” The future of chip production does look brighter, though she keeps a cautious eye on how the industry evolves within the American landscape.
Finally, Prabhakar touched on the waning trust in institutions, including science itself. The rise of anti-science sentiments and confusing public health narratives complicate this relationship. She stressed, “The disconnect between public trust in research and actual health outcomes is alarming. If we’re investing in science, we need to see tangible improvements in people’s lives to sustain that trust. When that disconnect exists, it breeds skepticism and conspiracy theories that can lead to dangerous consequences.”
As Arati Prabhakar prepares to leave her role, her insights shed light on significant issues surrounding AI technology, immigration, and the CHIPS Act. The dialogue reveals a complex landscape marked by both challenges and hopeful advancements. The future of AI governance, and of trust in science, depends on the next administration’s actions and on a broader societal embrace of scientific evidence over skepticism. Prabhakar underscores that responsible regulation and public engagement in these areas will be critical moving forward.
Original Source: www.technologyreview.com