The US Department of Energy (DOE) has long stood out as one of the most science-, technology-, and innovation-focused US federal agencies. It should come as little surprise, then, that the DOE continues to invest in transformative technologies such as artificial intelligence and machine learning.
The DOE established the Artificial Intelligence and Technology Office (AITO) to help transform the DOE into a world-leading Artificial Intelligence (AI) enterprise by accelerating the research, development, delivery, and adoption of AI. Pamela Isom, the new Director of the AITO, will be presenting at the February 2022 AI in Government event to share how the office is maximizing the impacts of AI through strategic coordination, planning, and customer service excellence. In this interview article, Ms. Isom goes into greater detail about how the DOE is leveraging data and transformative technologies to help advance the agency’s core missions.
What are some innovative ways you’re leveraging data and AI to benefit your agency?
Pamela Isom: The responsibility of coordinating cross-cutting AI initiatives and strategically planning department-wide AI outcomes is critical to securing our infrastructure and maximizing mission impacts. In 2022, my team is focused on innovative AI governance where responsible and trustworthy AI outcomes are the standard. We do need more human-centric integration in the AI lifecycle, and a federated catalog of algorithms and data sets so that it is easier to track the impacts of our AI investments, which we are pursuing.
The AI risk management playbook (AIRMP) is an applied innovation that we anticipate deploying to the public, if all goes according to plan, in 2023. AIRMP captures risk scenarios and provides prescriptive guidance to mitigate those risks so that AI decisions are responsible and trustworthy. The playbook even takes into consideration mitigations that are relevant to edge devices like unmanned systems and personal devices. Edge AI systems allow teams, such as our emergency responders, to act quickly on data right where it’s captured. Edge AI does introduce adversarial threats and vulnerabilities, however, and AIRMP provides guidance for mitigating them.
Speaking of innovation, the AI team kicked off the year 2022 with an industry focus group session on the convergence of AI and immersive technologies, paying close attention to extended reality (XR) because of the significant growth in this space now and in the future. Immersive experiences are valuable for training and precision modeling of critical situations, such as autonomous vehicle scenarios, where synthetic data is sometimes safer and less invasive than real-time data. In partnership with other program offices, my team is pursuing the use of AI and mixed reality to establish an AI training curriculum for the workforce and for talent management across communities.
How, if at all, are you leveraging automation to help on your journey to AI?
Pamela Isom: We apply automation to key business processes. We initiated a pilot to streamline loan processing and answer some key questions that customers typically ask so that processors can focus on more strategic assignments. We are applying both conversational AI and robotic process automation to address operational tasks. We are taking advantage of out-of-the-box capabilities in cloud environments as an entry point for automation platforms and technologies, but we are also known for our supercomputers, which we utilize for the most complex workloads where it makes sense. Some stakeholders prefer commercial off-the-shelf products, but given the advancements in data science, we find that a hybrid approach is the most suitable way to address our needs at this time.
How do you identify which problem area(s) to start with for your automation and cognitive technology projects?
Pamela Isom: Two expressions come to mind. First and foremost is ‘focus on mission’ and second is ‘listen’. The application of innovations for mission achievement is an imperative. For example, AI algorithms could be leveraged to ensure that grid transmissions are resilient and that clean energy accounting is fairly applied across communities. We conduct AI research, development, and demonstrations, and practice reuse and audits to maximize the efficiency of such AI solutions. We listen to the needs, desires, and pain points of stakeholders. We maintain an inventory of AI investments that we review and update at least annually through our artificial intelligence exchange (AIX) system. We also conduct focus sessions with industry and academia to hear individual perspectives, exchange opinions, and capture industry insights on targeted AI topics. In essence, we assess current and target state, identify gaps, and through our AI strategy, prioritize, orchestrate, and partake in the delivery of programs that move us forward with automation and cognitive technology projects.
What are some of the unique opportunities the public sector has when it comes to data and AI?
Pamela Isom: Strategic partnerships with the private sector, academia, and international teams are great opportunities for the public sector. Agencies have an opportunity to get out front and create AI regulations for asset development, sharing, and modern-day privacy practices. Directives such as Improving the Nation’s Cybersecurity and Transforming Federal Customer Experience and Service Delivery to Rebuild Trust in Government all count on ethical, responsible, trustworthy solutions like AI that respect our civil rights and liberties. Together, through strategic partnerships, we can research and discover the most diverse scenarios and compose solutions that protect data while enabling wider access. There has to be a national platform for research and collaboration, and that is why the National AI Research Resource Task Force, of which my team is a member, is so very important. The public sector cannot meet regulatory requirements alone – it requires industry, academia, and international collaboration.
What are some use cases you can share where you successfully have applied AI?
Pamela Isom: The AI team applies machine learning text analysis and clustering, along with natural language processing advancements, to assist with strategic analysis of the Department’s AI project and use case inventory. The use cases range from next-generation domain-aware AI methods research to strengthen our national security, to clean energy projects that identify materials that must be utilized to address the climate crisis. We can identify themes based on inventoried data and align stakeholders from across the department with common synergies so that we maximize economies of scale, reduce waste, inform, and drive more cross-cutting AI activities. We continuously evolve our inventory data, and today we can identify where the AI investments are and whether opportunities exist to improve customer experiences. Without applied AI, my team and department stakeholders would have to sift through vast amounts of data, and it would be nearly impossible to draw timely AI portfolio inferences that are necessary for strategic decision making.
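The theme-identification workflow described above can be illustrated with a minimal sketch: compute TF-IDF weights over use-case descriptions and greedily group descriptions whose texts are similar. This is not DOE's actual pipeline; the example descriptions, the similarity threshold, and the grouping heuristic are all assumptions for illustration (a production system would likely use a library such as scikit-learn over a curated inventory).

```python
# Hypothetical sketch: grouping AI use-case descriptions by text similarity.
# The inventory entries below are invented examples, not real DOE data.
import math
from collections import Counter

USE_CASES = [
    "domain-aware AI methods for national security threat detection",
    "machine learning for grid transmission resilience and security",
    "materials discovery for clean energy and climate applications",
    "clean energy materials screening with neural networks",
]

def tfidf(docs):
    """Return one sparse TF-IDF vector (dict: token -> weight) per document."""
    tokenized = [d.lower().split() for d in docs]
    df = Counter(t for doc in tokenized for t in set(doc))  # document frequency
    n = len(docs)
    vectors = []
    for doc in tokenized:
        tf = Counter(doc)
        vectors.append({t: tf[t] * math.log((1 + n) / (1 + df[t])) for t in tf})
    return vectors

def cosine(a, b):
    """Cosine similarity between two sparse vectors."""
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def group_by_similarity(docs, threshold=0.1):
    """Greedy single-pass grouping: attach each document to the first group
    whose representative is similar enough, otherwise start a new group."""
    vecs = tfidf(docs)
    groups = []  # each group is a list of document indices
    for i, v in enumerate(vecs):
        for g in groups:
            if cosine(v, vecs[g[0]]) >= threshold:
                g.append(i)
                break
        else:
            groups.append([i])
    return groups

groups = group_by_similarity(USE_CASES)
for g in groups:
    print([USE_CASES[i] for i in g])
```

The greedy threshold pass stands in for a proper clustering algorithm (k-means, topic modeling); the point is only that vectorized text lets thematically related use cases surface together, e.g. the two clean-energy materials entries above.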
Keeping an eye on mission, our research into the subsurface area is making a profound contribution to carbon capture and storage. One example is the Science-informed Machine Learning for Accelerating Real-Time Decisions in Subsurface Applications (SMART) Initiative, which is transforming our interactions within and understanding of the subsurface, and significantly improving the efficiency and effectiveness of field-scale carbon storage and unconventional oil and gas operations. SMART is a multi-organizational effort funded by DOE’s Carbon Storage and Upstream Oil and Gas Program with three focus areas: real-time visualization, virtual learning, and forecasting.
Can you share some of the challenges when it comes to AI and ML in the public sector?
Pamela Isom: Ownership of the AI is a challenge that we are working through. The plethora of data presents an ever-increasing need for AI to navigate and predict with accuracy. Data annotation standards for the verticals, e.g., energy, are not readily accessible. There is an opportunity to evolve machine learning before applying more advanced unsupervised learning to address mission-critical use cases. There is also a significant opportunity to extend AI talent management outside of the Department. As we did with cyber, there needs to be more focus on data science and AI growth for the nation; we have no choice in the matter.
How do analytics, automation, and AI work together at your agency?
Pamela Isom: While analytics may be a starting or entry point for AI, we apply all three (analytics, automation, and AI) to provide the greatest impacts of responsible recommendations and credible decision-making. There are opportunities to improve some fundamentals so that AI operations (AIOps) advances DevSecOps concepts with integrated AI assurances, and through the capabilities (analytics, automation, and AI) there are significant opportunities to enhance inter-agency collaboration for shared decision making. I will admit that I am seeing more of that cohesiveness today, but opportunities remain.
How are you navigating privacy, trust, and security concerns around the use of AI?
Pamela Isom: These are critical elements of the AI risk management playbook (AIRMP) that was released internally in 2021. AIRMP guides stakeholders through privacy, trust, and security matters (from an adversarial perspective) and informs users of potential vulnerabilities introduced with AI. We want others, including the National Institute of Standards and Technology (NIST), to benefit from and contribute to this effort.
What are you doing to develop an AI ready workforce?
Pamela Isom: We partner with the national laboratories and teach AI to DOE stakeholders twice a year. In 2022 we want to take the training to another level with the introduction of immersive learning, as mentioned.
I have a personal goal to help communities that are impacted by the automation aspects of AI. One area of concern is jobs, which is also a focus of the Secretary of Energy and the Administration. We need citizens to sustain and grow in their jobs, not lose them because of AI advancements. Workers need to know how to work in tandem with robots, for instance, and how to augment the explainability aspects of AI so that inferences are validated and communicated properly. This capability is along the lines of the softer but critical skills that facilitate consumer confidence while creating unique opportunities for skills development. School teachers, for instance, should be included in algorithmic training and, at a minimum, testing to assist in the generation of fair, unbiased outputs. They need assurances that AI inferences will not adversely impact student behaviors or put lives at risk upon adoption. Explainable AI is promising in this respect. These examples represent a fraction of the skills and talent development potential that could save lives.
What AI technologies are you most looking forward to in the coming years?
Pamela Isom: I am excited about 2022 and the forward-leaning activities that are surfacing relative to next-generation AI. I am very much looking forward to advancements in AI so that the reliance on data is not as profound and, rather, AI figures out what data it needs on its own to address problems. I am leaning on tools and technologies that provide explanations of solutions and the rationale behind predictions. The Department is taking a stronger leadership role in AI by improving the coordination of strategy, planning, and implementation of programs. The national laboratories and the AI incubator initiative sponsored by Lawrence Livermore are among the many examples of the innovation enablement that’s happening. When it comes to risk mitigation, we want to ensure that AI doesn’t introduce energy and resource inefficiencies that could counter decarbonization efforts, and we are passionate about delivering responsible, ethical AI for the good of the mission, the Nation, and in particular our children.
Pamela Isom will be presenting at the February 2022 AI in Government event, where she will address how the DOE is maximizing the impacts of AI through strategic coordination, planning, and customer service excellence, including AI ethics, AI principles, and AI risk management playbook highlights.