Generative AI and Specialized Computing Infrastructure Acquisition Resource Guide
Introduction
Executive Order 14110
Executive Order 14110 Section 10.1(h) directs GSA to facilitate agencies’ access to commercial Generative AI capabilities.
“Within 180 days of the date of this order, to facilitate agencies’ access to commercial AI capabilities, the Administrator of General Services, in coordination with the Director of OMB, and in collaboration with the Secretary of Defense, the Secretary of Homeland Security, the Director of National Intelligence, the Administrator of the National Aeronautics and Space Administration, and the head of any other agency identified by the Administrator of General Services, shall take steps consistent with applicable law to facilitate access to Federal Government-wide acquisition solutions for specified types of AI services and products, such as through the creation of a resource guide or other tools to assist the acquisition workforce. Specified types of AI capabilities shall include generative AI and specialized computing infrastructure.”
This resource guide focuses on Generative AI, not all of AI
Artificial Intelligence or “AI” is one of the most profound technological shifts in a generation or more. At its roots, AI is software. That means it follows many of the same acquisition policies and rules as other software and IT.
Contracting officials should consider cybersecurity, supply chain risk management, data governance and other standards and guidelines when procuring Generative AI just as they would with other IT procurements.
AI is a very large space and Generative AI is just one type of AI. This resource guide focuses on Generative AI acquisition in particular, not AI acquisition as a whole.
Look for definitions for both Artificial Intelligence and Generative Artificial Intelligence in Section 1.1.
Assisting the acquisition workforce
This resource guide is primarily meant to assist the acquisition workforce in navigating the complexities of acquiring Generative AI technologies in collaboration with relevant agency officials from other domains.
This resource guide is focused on civilian agencies’ Generative AI needs. Defense, National Security and the Intelligence Community have very specific needs, and procurement solutions have been tailored to meet their requirements.
Generative AI can be difficult to understand both for technologists and non-technologists. While those in the acquisition community don’t need to become experts in AI, a basic understanding of certain Generative AI terms, common issues and ways to derisk acquisitions will help you make better decisions about what to buy and how.
As a member of the acquisition workforce, it is critical to work with technical subject matter experts (SMEs) like:
- Software Engineers
- Data Scientists
- Security Specialists
- Privacy Specialists
- Program Staff who can identify the problems they are trying to solve, validate users’ needs, safeguard data and establish product goals or requirements
- Agency Officials responsible for security, privacy and other relevant disciplines to meet a range of agency responsibilities in this context.
This resource guide also simplifies complex technical IT and AI concepts to make them easier for non-technologists to understand. In some cases, this resource guide oversimplifies concepts for the sake of clarity. Seek support from AI, IT and data professionals when making decisions.
Because Generative AI is relatively new and evolving quickly, it’s early to designate “best practices.” Think of the information in this resource guide as prompts to consider and frame your thinking and approach rather than directive recommendations for what to do. This content is non-binding. The recommendations presented do not supersede, modify, or direct an interpretation of existing requirements mandated by law or governmentwide policy.
This is version 1.0
This resource guide isn't static. It’s meant to evolve as technology advances while aligning with relevant laws, policies and frameworks.
Links in this resource guide were verified just prior to the site’s launch. If you do happen to find a bad link, please report it using the Send Feedback tool.
Help make this resource guide better
Please contribute your insights and help refine this resource guide by submitting pull requests on GitHub and submitting feedback using the Send Feedback tool.
Companies or product mentions are not endorsements
This resource guide may reference actual companies and/or offerings to define, illustrate or explain a concept. Do not consider these references to be recommendations or disparagements. We do not endorse any company or its offerings.
Generative AI content disclosure
The team used Generative AI tools to write first rough drafts of a few sections. This early content was used as an input, not a replacement, and was reviewed and revised multiple times by Subject Matter Experts (SMEs) in technology, acquisition and policy. The team gathered additional feedback from SMEs through multiple channels and made further edits based on that feedback. All content in this resource guide has been reviewed and revised by SMEs for quality.
All use of Generative AI was done in alignment with the principle of responsible AI innovation and to help the team learn about the capabilities of this technology. This attribution highlights our commitment to evaluating new technologies with the right safeguards in place while maintaining transparency.
1. Generative AI 101
1.1 Key Definitions
Artificial Intelligence
15 U.S.C. 9401(3) defines “Artificial Intelligence” as:
“a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.”
More simply, it refers to the science and engineering of making intelligent machines.
Generative Artificial Intelligence, Generative AI and GenAI
Executive Order 14110 defines “Generative Artificial Intelligence” as:
“the class of AI models that emulate the structure and characteristics of input data in order to generate derived synthetic content. This can include images, videos, audio, text, and other digital content.”
Fundamentally, this is software that can generate images, video, audio, text and computer code when prompted by a user (a human or another machine).
This resource guide uses Generative Artificial Intelligence, Generative AI and GenAI to mean the same thing.
Cloud
The National Institute of Standards & Technology (NIST) defines “Cloud Computing” as:
“a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.”
In simple terms, the “cloud” is the interconnected network of servers in data centers where information, data, and software are stored and accessed.
For further reading…
In 2023, NIST released a living glossary called “The Language of Trustworthy AI,” which it maintains as a companion resource to its AI Risk Management Framework (AI RMF).
The glossary is designed to be flexible and offer multiple meanings of commonly used terms you may encounter.
1.2 A Basic Overview of Generative AI
Here is a simplified crash course in Generative Artificial Intelligence or “Generative AI.”
Predictive Text is a Simple Example of Artificial Intelligence
We’ve all seen the basic version of artificial intelligence in computers and smartphones.
When we are writing an email or composing a text message, the software suggests the next word we might want to type. That “typeahead” feature is an example of basic artificial intelligence in action.
Predictive text relies on the most fundamental building block of Generative AI tools, called “foundation models.” Simply put, these models are pieces of software that identify patterns in large quantities of data to predict the next item in a sequence.
Predictive text uses one of these foundation models to predict your next word based on what most people have written and what you’ve written in other sentences.
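The prediction step described above can be illustrated with a toy bigram model. This is a deliberate oversimplification of real foundation models, which use far more data and far more sophisticated statistics, but it shows the core idea: count which word most often follows each word, then suggest the most frequent one.

```python
from collections import Counter, defaultdict

# A tiny "training corpus" of example sentences. Real foundation
# models are trained on trillions of pieces of text, not three lines.
corpus = [
    "thank you for your help",
    "thank you for the update",
    "thank you for your time",
]

# Count how often each word follows each other word.
next_word_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, following in zip(words, words[1:]):
        next_word_counts[current][following] += 1

def predict_next(word):
    """Suggest the word that most frequently followed `word` in training."""
    if word not in next_word_counts:
        return None
    return next_word_counts[word].most_common(1)[0][0]

print(predict_next("thank"))  # you
print(predict_next("for"))    # your
```

Notice that the model can only predict continuations it has seen in its training data, which is one reason the quality and scope of training data matter so much.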
The Rise of Large Language Models
We can now go much farther than simple predictive text.
That’s because in the last few years we’ve created more sophisticated artificial intelligence models with immensely more data and more powerful computers. These new models have learned how we construct sentences, paragraphs and longer pieces of writing.
The creators of these models gathered trillions of pieces of written language, from encyclopedia entries and literature to social media posts and song lyrics and even the spoken word data and imagery of videos shared online. They used all that data to “train” their models.
The model creators wrote computer code and “algorithms” that looked at every word and phrase and analyzed how they connect to one another: What do those words mean? Which words tend to show up near others? What words are used in which contexts?
The algorithms looked for patterns and statistics in the interrelationships of words and phrases. And the software kept track of the results.
All of that together created a powerful kind of foundation model called a “Large Language Model” or LLM.
Artificial Intelligence that Generates Content
The programmers also figured out how to build those LLMs to get really good at responding to simple human questions and prompts. The LLMs could interpret the language a person input, analyze what it should do and provide a response.
The programmers built chat interfaces similar to text messages that allow someone to write a simple prompt or question and have the tool respond.
The tool wouldn’t just search all the trillions of pieces of written language information in its memory and serve up potential answers. It would use its pattern analysis of language to compose new phrases and sentences and give an answer. The artificial intelligence system would generate content; hence, Generative AI.
Another benefit of the chat interface has been that someone could ask the system to improve the output in specific ways like:
- make the tone more friendly
- use fewer bullets
- say more about this specific issue referencing the events of the last few months
In this way, humans can keep shaping and refining the output from Generative AI systems. And after a few rounds of refinement, we can get something very close to what we want.
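The refinement loop described above can be sketched in a few lines of code. Here `generate()` is a stub standing in for a Generative AI system; it is an assumption for illustration only, since each real product exposes its own interface. The point of the sketch is that the conversation accumulates the original prompt plus every refinement, so each new output is shaped by all earlier instructions.

```python
def generate(conversation):
    """Stub standing in for a Generative AI model. A real system would
    return newly generated text conditioned on the whole conversation."""
    return f"[draft revised to satisfy {len(conversation)} instruction(s)]"

# Start with an initial prompt, then refine the output in rounds.
conversation = ["Draft a one-paragraph summary of our new policy."]
output = generate(conversation)

for refinement in [
    "Make the tone more friendly.",
    "Use fewer bullets.",
    "Say more about accessibility.",
]:
    conversation.append(refinement)
    output = generate(conversation)

print(output)  # [draft revised to satisfy 4 instruction(s)]
```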
(Note: Large Language Models nowadays are often called Generative Pre-Trained Transformers or GPTs. Pairing a “Chat” function with a “GPT” is what gave the popular Generative AI product its name. While GPTs and LLMs are technically different, you can think of them as about the same for the purpose of Generative AI acquisition.)
Beyond Text
From there companies and researchers started creating other kinds of foundation models and teaching them to learn how to analyze and find patterns in different inputs.
People started feeding the new models photographs and paintings, songs and speeches, videos and animation, computer code and numerical data. The models found patterns in these kinds of content as well.
With a similar chat-based interface it became pretty easy to now ask for more than a written text response. Now these foundation models could generate images, music, movies, and smartphone apps.
As time goes on and models get better, the ability of these artificial intelligence tools to generate more sophisticated content will only increase.
1.3 Potential Benefits of Generative AI
Generative AI offers a range of potential benefits including the ones below. This is not an exhaustive list. New benefits are being discovered continually. These are also benefits that exist today but have limitations and need careful attention and monitoring.
Create and Refine Content
Probably the biggest value that Generative AI tools offer is the ability to create text, pictures, sounds, videos and even computer code.
A Generative AI tool will mine the information available in its training data or on the internet, much like a search engine, then combine it with its own pattern-making to construct sentences, paragraphs, graphics, and other outputs.
Users then have the ability to shape, refine, and adjust the outputs to make them what users want and need. This allows a user to make rough prototypes, first drafts, and initial passes at content very quickly then spend time editing, curating, adjusting, and refining rather than having to create from a blank page.
Easy, Chat-Based Interface for Prompts
Generative AI systems typically respond to questions and requests that a person types into a chat or text box. The Generative AI tools then create text, pictures, sounds, or videos to respond to those “prompts.”
The person can then refine and improve the output by typing more specific directions or guidance into the prompt box, and the Generative AI tool can iterate on what it created to better suit what the person wants.
Interestingly, there is a growing area of expertise called “prompt engineering,” which describes the skill of crafting the right words, context, examples, and guidance to type into chat interfaces to iterate and get the right output from different Generative AI systems.
Customize How They Work
Generative AI tools often allow both programmers and end users to alter them. People can train a model using different data so it can learn the patterns of specific kinds of information. This can be especially useful if an organization wants to use the Generative AI tool for a specific purpose.
For example, an agency could train a model using its own agency policies, data, regulations, and actions with the goal that the model would provide more agency-specific outputs.
Models can also be “fine tuned,” meaning you can change how the model works, adjusting what it should treat as more important or less important. You can even change what it outputs to make it more relevant, such as following a particular style of writing or imagery.
And if no existing model seems to come close you can create your own model then train it with custom data, fine tune how it works, and tailor the outputs to meet your specific needs.
1.4. Potential Risks of Generative AI
Generative AI poses a range of potential risks including the ones below. This is not an exhaustive list. New risks are being discovered continually.
Misinformation and Disinformation
Generative AI can create highly realistic but false content, such as “deepfakes” – images, audio, video or text which make it look like someone said or did something they didn’t. This can be used to spread falsehoods that polarize people, destabilize communities, and even incite crimes.
Errors and Inaccuracies
Generative AI models can produce incorrect or nonsensical outputs, often because of limitations in their training data or algorithms. These errors can result in misinformation, incorrect decisions, and misinterpretations, which in turn can lead to flawed policy decisions or misinformed public statements.
Bias and Discrimination
AI systems, if they have been trained on discriminatory or one-sided data, can lead to unfair or discriminatory outcomes in public services.
Prompt Manipulation
There is a risk of Generative AI systems providing biased or misleading outputs, especially when users deliberately input incorrect or slanted information. Malicious actors could exploit this to generate harmful content, spread disinformation, or extract sensitive data.
Misalignment with Laws and Policies
The use of Generative AI must be carefully evaluated to ensure compliance with applicable laws, policies and procedures around transparency, accountability, and other aspects of ethical use.
Security and Cybersecurity Threats
Generative AI can be exploited for cyber-attacks, including sophisticated phishing and malware that evades detection, and is vulnerable to adversarial attacks aimed at manipulating outputs. Insecure AI systems risk data theft and sabotage, undermining public trust and national security.
Privacy Risks
The volume and type of data involved in the use of Generative AI raises concerns about privacy and the use and protection of personal information when training these technologies, inputting data and outputting content.
Intellectual Property and Plagiarism
Generative AI raises complex issues regarding copyright infringement and originality. This poses significant challenges for determining the ownership and originality of AI-generated works.
Environmental Impact
Similar to cloud computing, training and operating large AI models requires substantial computational resources, leading to significant electricity usage. Additionally, the production and disposal of the specialized computing hardware (like GPUs and CPUs) necessary for AI development involve resource-intensive processes and contribute to electronic waste. This environmental impact is compounded by the infrastructure needs of data centers, including land usage and cooling systems.
Model Training Delay
Since there is a time lag between when the model is trained and when the model is used there is a risk that the model doesn't reflect current norms, regulations, or realities, which could cause the AI tool to generate irrelevant or inappropriate outputs.
Ownership
Generative AI tools create data based on the inputs they have been trained on and new data input by users. Aside from the intellectual property issues mentioned above, there are significant concerns over who owns the inputted and outputted data. If a user inputs documents for the Generative AI tool to use, is that data now owned by the company that owns the Generative AI tool, or does the user still own it? If the Generative AI created lines of computer code, is that code now owned by the user or the company that owns the Generative AI tool? Is the Generative AI tool allowed to “learn” from the data it created and use it to create code for some other user?
1.5 Typical Generative AI Tools
Many kinds of Generative AI tools exist right now in the market that an agency may want to buy or get access to. Here are a few examples and how they might be used by an agency. These are illustrative products that might be tailored to meet an agency’s specific goals.
Generative AI-Enabled ChatBots
Applications that are capable of answering a series of related questions or prompts in a conversational tone. These might be useful to respond to the public’s inquiries on policies, services, programs and products.
Text Summarizers
Tools that can take in reports, legal documents, case files and other documents, analyze the information and produce summaries. These might be useful to reduce the time to review documents and allow an agency user to focus on the most relevant content.
Data Analyzers
Software that can crawl through large amounts of numerical data, organize and structure it, find patterns and anomalies, create visualizations and make projections based on what it finds. These might be useful to save effort and enable agency staff to do higher level analysis.
Explainer and Tutoring Tools
Modules that can generate natural language explanations and teach users through a process, an application form or a reporting requirement. These might be useful to reduce frustration with processes and increase the completeness and accuracy of document submissions.
Text to Image Generators
Programs that let an end user quickly produce images and visuals by inputting text prompts that describe the picture they want. These might be useful to enhance the creation of images, diagrams, illustrations, and graphics for use in training materials and guides.
Coding Assistants
Applications that can write new computer code, update and improve legacy code, translate code from one language to another, find and use appropriate code libraries, detect errors in code and recommend changes. These might be useful to reduce the time programmers spend developing or modernizing code.
Language Translators
Tools to analyze text in multiple languages, recognize differences in written and spoken language, understand context and properly interpret words to have the right meaning. These might be useful to translate materials into multiple languages.
Search and Recommendation
Add-on functions that improve search results as users build on and refine their queries and by interpreting more from what users ask. These might be useful to help online systems find the pieces of information that a user didn’t know how to ask for but really wants.
1.6 For further reading
- Argonne Leadership Computing Facility: An Intro to AI Driven Science on Supercomputers: A Student Training Series
- Defense Acquisition University (account needed): CENG 003, Artificial Intelligence (AI) Foundations for the DoD
- AI Community of Practice: Learn about the governmentwide AI Community of Practice
- IT Modernization Center of Excellence: IT Modernization CoE - AI Guide for Government
- Federal Acquisition Institute (account needed): Demystifying AI: A Learning Playlist for Federal Acquisition
2. Generative AI in the Government
If implemented responsibly, AI can improve operations and deliver efficiencies across the Federal Government. When implemented with appropriate safeguards and risk management practices Generative AI tools can help automate and refine data processing, improve analysis and speed up decision-making while reducing human errors. This can enable better policy making, operational planning, service delivery and communications. More specific and up-to-date information on government use of Generative AI is available to help agencies understand such possibilities.
The value of Generative AI tools is expanding as more individuals and companies experiment with Large Language Models, AI-enabled chatbots and content generators. Some of these tools have potential applications for the government to increase efficiencies, lower operating costs, and prevent and detect criminal activity against the government. Agencies vary in their current capabilities to support these emerging project areas.
Automated Documentation Drafting
Using custom GPTs, Generative AI tools can assist in creating initial drafts of required documents for caseworkers, contracting officials, and other government professionals for repetitive or common tasks.
Fraud Detection in Financial Transactions
Utilizing pattern recognition and trend analysis of financial data, Generative AI can identify fraudulent activities in federal financial systems by flagging inconsistencies in the data sets for further human review and decision.
Synthetic Data Generation
Generative AI can create sample data sets to test software for functionality and scalability, potentially reducing the development timeline for new products or modification and personalization of commercial software by eliminating or reducing the need to provide and/or clean actual government data.
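As a simple illustration of the idea, synthetic test records can even be produced programmatically; the sketch below uses plain random generation rather than Generative AI (an assumption for brevity, since real Generative AI tools produce far richer, more realistic data), but it shows how fake records can stand in for actual government data during software testing.

```python
import random

random.seed(42)  # fixed seed so the sample data set is reproducible

# Hypothetical value pools; none of these correspond to real people.
FIRST_NAMES = ["Alex", "Jordan", "Morgan", "Taylor"]
STATES = ["CA", "NY", "TX", "VA"]

def synthetic_record(case_id):
    """Build one fake case record containing no real personal data."""
    return {
        "case_id": f"CASE-{case_id:05d}",
        "applicant": random.choice(FIRST_NAMES),
        "state": random.choice(STATES),
        "amount": round(random.uniform(100, 5000), 2),
    }

# Generate a small data set for functionality and scalability testing.
records = [synthetic_record(i) for i in range(1, 101)]
print(len(records), records[0]["case_id"])  # 100 CASE-00001
```

Because the records are synthetic, they can be shared with vendors and test environments without the cleaning and safeguarding steps that actual agency data would require.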
Cybersecurity Threat Detection
Using a generative AI model trained on historical cyber data like network traffic and user interactions, cyber professionals can identify patterns and trends allowing them to anticipate and respond to cyber threats against federal IT systems.
Human Resource Management
Generative AI can assist in summarizing resumes, personalizing recruitment activities, drafting job descriptions, interview guides and training materials.
Modernizing and Managing Software Code
Generative AI can help agency software developers draft new code, translate and transform legacy code, and identify potentially outdated or compromised code libraries in existing products.
Energy Consumption Analysis
By analyzing historical weather trends and utility usage, Generative AI can help predict scenarios and optimize energy usage in federal buildings.
AI-Enabled Training Simulations
Generative AI can be used to develop virtual training environments and create interactive scenarios based on organizational policies, doctrine and norms. This allows for more realistic interactions before an employee is put into a new role or when a new policy is created.
Customizing Citizen Engagement
AI can personalize communication with citizens based on their interactions with government services. Through the use of intelligent chatbots, agencies can provide a level of automated customer service for government services, more quickly and effectively resolving inquiries from the public.
3. Generative AI Procurement
Overview
Select Components of a Generative AI Acquisition
Acquiring Generative AI solutions can look very similar to other IT acquisitions but there are additional aspects that could be considered. Below are some considerations and questions that could be helpful for the acquisition workforce to think about in collaboration with relevant agency officials from other domains when acquiring Generative AI.
If you are not familiar with purchasing software or have not conducted a software acquisition recently, consider reviewing the information at the TechFAR Hub or seeking out some additional training from the providers listed on the site or your agency.
Here are select components of a Generative AI acquisition. These are starting points and don’t represent the only practices or approaches for acquiring Generative AI. This also is not a strict sequence as much as it is a collection of focus areas that should be considered as part of any Generative AI acquisition.
Additionally, your agency may have specific requirements, policies and guidelines that you must follow as part of any Generative AI acquisition.
3.1 Integrated Product Team
3.2 Agency Goals and Needs
3.3 Solutions Scoping and Testing
3.4 Data Management and Protection
3.5 Acquisition Strategy
3.6 Market Assessment
3.7 Cost Estimation
3.8 Performance Monitoring and Evaluation
3.1 Integrated Product Team
Generative AI procurement is not just a procurement or IT department task. Many agencies have consolidated their AI decision-making under a Chief Artificial Intelligence Officer (CAIO). Some agencies, commissions and departmental components may assign this responsibility under their CIO or IT support organization.
The CAIO plays a vital role in any Generative AI project. CAIOs can be strong partners for acquisition teams looking to help program staff get a Generative AI project off the ground and running smoothly.
As a start, your agency’s CAIO and program officials can provide insight into the maturity of the agency’s IT and AI capabilities, policies, and skills. Your agency may already be using Generative AI tools and may be familiar with how to best work with them.
Consider creating an Integrated Product Team (IPT) drawing from across the agency or sub-agency. Integrated Product Teams are successful because each role brings different perspectives and knowledge. Given the complex and cross-cutting nature of many Generative AI decisions a cross-agency IPT involving agency officials with expertise in the specific program, technology, data, and other relevant domains (e.g., security, privacy, legal, finance, acquisition, etc.) can help work through challenges and identify opportunities. Having multiple voices and ideas can make working through thorny challenges more effective.
Led by a technical program manager, the IPT consists of program staff who can voice needs and advocate for the agency’s mission, as well as AI practitioners, software engineers, data engineers, security experts, privacy officials and acquisition professionals. Finance, legal, accessibility and other representatives can be brought in along the way.
An IPT can set the project objectives and constraints, research and vet solutions, and identify potential risks and consequences. The team can identify and review agency related AI policies and guidance. In the case of Generative AI, the IPT can also serve as the means for continued monitoring and evaluation of the capability as described in Section 3.8 Performance Monitoring and Evaluation.
In coordination with agency officials with relevant expertise and responsibilities, the IPT can address these types of issues:
- Integration and interoperability of solutions with existing systems and data
- Data rights and ownership
- Data protections
- Responsible use of data and tools
- Intellectual property provisions
- End-user licensing agreements
- Appropriations implications for different types of pricing
- Security measures and privacy implications to start a pilot or scale a solution
- Performance testing, monitoring and control
- Accessibility considerations
Refer to OMB Management Memo M-24-10 Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence for more details about specific requirements.
Many governmentwide communities can help you charter an IPT. The IT Buyers Community of Practice unifies agency contracting officers, program managers, software asset managers, information security officers as well as industry partners and many other active stakeholders in the Federal IT Marketplace to increase access to data, tools, and information.
3.2 Agency Goals and Needs
There are many different types of Generative AI products and services, each with their strengths and weaknesses. Don’t focus solely on one particular solution at the risk of overlooking better options. Instead, begin by understanding what program staff want to do and achieve, then find the right solutions and suppliers that will best help them. Articulating the problem statement even before defining requirements can help ensure that the IPT is focused on the right needs.
Program staff and leaders can be a great source for identifying and refining problem statements and how they link back to the agency’s mission. Technical leaders including the agency Chief Information Officer (CIO), Chief Artificial Intelligence Officer (CAIO), Chief Data Officer (CDO), Chief Information Security Officer (CISO) and Senior Agency Official for Privacy (SAOP) offer important perspectives about what current systems and tools can do and how they can be used. CAIOs in particular, where agencies have selected one to oversee AI efforts including Generative AI work, are a vital voice to have at the table when scoping agency goals and needs.
Some potential questions to ask include:
- What specific problems does the agency want to solve?
- What opportunities does Generative AI present that the agency wants to pursue?
- Are other agencies seeking to solve similar problems or use Generative AI for similar purposes that you can learn from?
- What are the gaps in current technology tools that Generative AI could fill?
- What are the desired outcomes of using Generative AI tools?
Sometimes just by posing these questions an agency can determine that it already has tools (and even non-AI tools) that can be deployed quickly or that the problems themselves aren’t worth the investment to solve.
Many agencies are in a space of needing or wanting to experiment or “do something” with Generative AI solutions to understand what the technology can do. Technical experimentation is healthy and important for agencies. If your agency is just trying to get exposure to Generative AI tools and their capabilities, it could be helpful to start with smaller problems - perhaps ones that are not mission critical - but ultimately ones that can’t be easily solved with current non-Generative AI tools.
For ideas and inspiration on what problems and projects might be good to work on, see Section 1 Generative AI 101 and Section 2 Generative AI in the Government of this resource guide. Agency use cases are also published on AI.gov and may also be on your agency website.
3. Generative AI Procurement
3.3 Solutions Scoping and Testing
It’s often difficult to know which Generative AI tool will be best for your agency’s purposes. That’s why it’s helpful to try them out first. In coordination with agency officials with relevant expertise and responsibilities, the Integrated Product Team (IPT) can see which tools have the right combinations of functions and features that work with the data the agency has and help get to the goals the agency wants to achieve.
3.3.1 Categories of Generative AI Solutions
By knowing what the agency’s priority problems are, and focusing on how Generative AI tools might address those needs, the IPT can scout potential Generative AI solutions. To jumpstart an agency’s reporting needs, perhaps tools that can analyze and generate first-draft text can help. If the agency is having a hard time keeping up with public inquiries, perhaps it needs Generative AI-enabled chatbots that can retrieve specific information the agency has made available to the LLM.
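The chatbot pattern described above is often implemented with retrieval augmentation: search the agency’s uploaded content first, then include the best match in the model prompt. The sketch below uses a crude keyword score in place of a real retrieval engine; all documents, function names and text are illustrative.

```python
# Minimal sketch of "retrieval-augmented" prompting: the agency's uploaded
# documents are searched first, and the best match is added to the prompt so
# the model answers from agency content. All names here are illustrative.

def score(query: str, document: str) -> int:
    """Count how many query words appear in the document (a crude relevance score)."""
    doc_words = document.lower().split()
    return sum(1 for word in query.lower().split() if word in doc_words)

def retrieve(query: str, documents: list[str]) -> str:
    """Return the uploaded document that best matches the query."""
    return max(documents, key=lambda doc: score(query, doc))

def build_prompt(query: str, documents: list[str]) -> str:
    """Combine the retrieved context and the user's question into one prompt."""
    context = retrieve(query, documents)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

uploaded = [
    "Form 1234 renewals must be filed within 30 days of expiration.",
    "The agency help desk is open weekdays from 8 a.m. to 5 p.m.",
]
prompt = build_prompt("When is the help desk open?", uploaded)
```

A production system would replace the keyword score with vector embeddings and a real index, but the flow of retrieve-then-prompt is the same.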
The IPT can identify potential categories of solutions, the functions and features needed to address the agency’s priority problems, and initial performance requirements.
For a starter list of Generative AI solutions categories, consider the typical Generative AI tools shown in Section 1.5 Typical Generative AI Tools of this resource guide.
3.3.2 Cloud Platform Sandboxes
No two Generative AI systems work exactly the same. What a supplier says their product can do might be close to what your agency wants or needs but may require tweaks to really make it work. At the same time, Generative AI systems are also evolving rapidly. Functions that don’t exist now might be available later. OMB Memorandum M-24-10 provides a helpful reference about what sort of AI-enabling infrastructure and/or testing might best inform this process.
Testing solutions before doing a full-scale purchase can help ensure the Generative AI products meet your specific agency’s needs. The IPT can then translate their experiences testing solutions into requirements and use cases for an acquisition plan and contract documents that will meet their goals.
“Sandboxes” are test beds that agencies can use to experiment with Generative AI tools. Think of them as small-scale versions of the tech you are exploring. These sandboxes allow users to try different tools and identify which ones work best for your agency’s needs. They can also help more accurately estimate the amount of effort and cost it might take to complete the project.
A subset of the IPT can bring their expertise in data, workflows and security to scope the sandbox and begin running tests with data and different Generative AI tools, subject to legal and policy restrictions and with appropriate safeguards.
In many cases, the IPT can create a sandbox on a cloud platform that allows them to upload their data and try different Generative AI modules to see what kinds of results each gives. Because most Generative AI tools are available through cloud platforms this is an easy and relatively quick way to get started.
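Trying different Generative AI modules against the same data can be as simple as running identical prompts through each candidate and collecting the outputs side by side for the IPT to review. In this sketch the two stub functions stand in for real platform model calls, which would vary by vendor.

```python
# Sketch of a sandbox comparison harness: run the same prompts through several
# candidate Generative AI "modules" and collect the results side by side.
# The two stub functions below stand in for real platform model calls.

def module_a(prompt: str) -> str:
    return f"[A] summary of: {prompt}"

def module_b(prompt: str) -> str:
    return f"[B] detailed answer to: {prompt}"

def compare(prompts: list[str], modules: dict) -> dict:
    """Return {prompt: {module_name: output}} for manual review by the IPT."""
    return {p: {name: fn(p) for name, fn in modules.items()} for p in prompts}

results = compare(
    ["Summarize this grant report."],
    {"module_a": module_a, "module_b": module_b},
)
```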
Consider what data would be appropriate to use in a testing environment and what restrictions may apply to certain kinds of data (e.g., personally identifiable information, confidential business information, etc.) including certain restrictions on publicly available data. Working with agency officials with relevant expertise and responsibilities to address a system’s privacy and data security risks and incorporate appropriate safeguards (e.g., through an ATO process or FedRAMP certification) is an important part of this process.
Multiple Sandboxes
Some IPTs may even decide to set up multiple sandboxes on a few different cloud platforms so they can test which modules work best for the agency’s needs and which platforms offer the most flexibility, lowest cost and best return on investment. Because each cloud platform and module works differently, be cautious and conduct individual security and privacy reviews for each sandbox before putting data into each sandbox.
Setting Up Sandboxes
Typically, setting up and using these sandboxes can happen under the Simplified Acquisition Threshold (SAT) since they are used for small-scale experiments. The IPT can create sandboxes using current contracts with a Value Added Reseller (VAR). The IPT may have to go through extra steps to access third-party products through the sandbox, and may need to update or include new terms and conditions with the vendor. Consider creating a concept of operations (CONOPS) or charter with your agency’s CIO, CAIO and CISO organizations to operate these sandboxes. Sandboxes that do not use certain types of data (for example, government data) and are not connected to the agency network may allow for flexibility to experiment with new software quickly.
Performance Metrics and Evaluation
To get the most useful feedback about a Generative AI tool, test its performance in conditions that mirror how the tool will be deployed in the real world. The IPT can create metrics and evaluation criteria for testing and assessing the performance of solutions at any dollar threshold. If the tool will be used by the public or non-technical staff, the IPT may want to also conduct basic user experience, usability and accessibility testing of the tool across a range of devices (desktops, laptops, tablets and smartphones).
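One way to make evaluation criteria concrete is to score tool outputs against the key facts each answer should contain. This is a minimal sketch; the test cases, outputs and the keyword-coverage metric are illustrative, not a standard benchmark.

```python
# Sketch of simple evaluation metrics for sandbox testing: score a tool's
# outputs against expected key facts. All cases and outputs are illustrative.

def keyword_coverage(output: str, expected_keywords: list[str]) -> float:
    """Fraction of expected key facts that appear in the tool's output."""
    hits = sum(1 for kw in expected_keywords if kw.lower() in output.lower())
    return hits / len(expected_keywords)

def evaluate(cases: list[tuple[str, list[str]]]) -> float:
    """Average keyword coverage across all test cases."""
    scores = [keyword_coverage(out, kws) for out, kws in cases]
    return sum(scores) / len(scores)

cases = [
    ("The permit fee is $50 and processing takes 10 days.", ["$50", "10 days"]),
    ("Applications are accepted online only.", ["online"]),
]
average_score = evaluate(cases)
```

In practice the IPT would pair automated scores like this with human review, since keyword matching cannot judge tone, accuracy of reasoning or hallucinations.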
3.3.3 Non-Cloud Platform Sandboxes
Some tools may not be available through a traditional cloud platform. You might need to work with the Generative AI tool vendor to use a trial version or to create a custom sandbox for your agency. As with other testbeds, always make sure to check with your agency’s CIO, CISO, CAIO and OGC to ensure that the tools comply with all relevant security and privacy laws and policies.
The IPT may opt to test “self-hosted” tools that sit on the agency’s own IT infrastructure and use “open source” models that are widely available rather than closed, proprietary systems. These options can offer the agency greater control over the Generative AI tools and may save costs paid to external providers, but may require more internal resources to manage. Because such open source software is developed and distributed by many different players, there are opportunities to insert “backdoors” through which data can be accessed or siphoned off without the user’s knowledge, so security scans and monitoring are critical.
3.3.4 Specifying Generative AI Solutions
With some sense of what combinations of platform and Generative AI tools work for the data the agency has and the outcomes it wants, the IPT can determine what the agency will need to acquire.
The agency may require access to a new cloud platform or to change the terms of use for an existing platform. The IPT may decide it needs to have access to several different Generative AI tools. The experiments they ran in the sandbox may show the need for several accounts to scale up operations. Engineers may decide that they want to be able to integrate Generative AI functions into their own systems and need access to software interfaces that allow them to exchange data with other applications, commonly known as Application Programming Interfaces (APIs).
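As a rough illustration of the API integration mentioned above, the sketch below constructs (but does not send) an HTTP request to a Generative AI service. The endpoint URL, payload fields and authentication scheme are hypothetical placeholders; real vendor APIs differ and their documentation governs.

```python
# Illustrative sketch of how engineers might call a Generative AI tool's API
# from their own systems. The endpoint, fields and auth scheme are
# hypothetical; no network call is made here -- the request is only built.

import json
import urllib.request

def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Assemble a JSON POST request for a hypothetical generation endpoint."""
    payload = json.dumps({"prompt": prompt, "max_tokens": 200}).encode("utf-8")
    return urllib.request.Request(
        "https://api.example.gov/v1/generate",  # hypothetical endpoint
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # placeholder auth scheme
        },
        method="POST",
    )

req = build_request("Draft a status report outline.", api_key="TEST-KEY")
body = json.loads(req.data.decode("utf-8"))
```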
Determine the products and services that the IPT needs to succeed, the quantities, the performance requirements and any other specifications needed to write a clear statement about what the agency is interested in purchasing. If the IPT has specific requirements that only certain products will meet, start to develop the appropriate justification and associated authority based on the documented need to limit competition supported by the results of the IPT’s initial experiments.
3.3.5 Advanced Generative AI Options
Sandbox tests and other considerations may show that the IPT requires more sophisticated Generative AI capabilities. They may want to:
- Fine-tune a Pre-Trained Model: LLMs rely on algorithms where different connections and information are given more or less “weight” or significance. Fine-tuning is the process of changing those weights so different information is treated as more or less important.
- Customize a Pre-Trained Model: LLMs are trained on a certain set of initial data. An LLM can be customized further with more up-to-date information or information relevant to a specific field, which can make it significantly more useful; it can even be linked to searchable databases.
- Train and Fine-Tune a Model: A team may decide to take the basic architecture and algorithms of an LLM, train it on completely different data and weight the information in a very specific way. This can give a specialized result meant for a particular purpose.
- Build a Model: A team may want to develop everything from the ground up, crafting their own algorithms, determining their own weights for what information is more important and training the model with their own data. This is a significant lift but may be necessary for very specific uses such as high-security applications.
Typically these kinds of projects require greater effort, expertise, data security and privacy monitoring and oversight so IPTs should think deeply about the investment and whether the agency truly requires it.
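To make the “weights” language above concrete, the toy sketch below fine-tunes a single-weight linear model with gradient steps on new data. Real LLM fine-tuning adjusts billions of weights with specialized tooling, but the core idea of nudging pre-trained weights toward task-specific data is the same.

```python
# Toy illustration of what "fine-tuning" means at its core: start from a
# pre-trained model's weights and nudge them with gradient steps on new,
# task-specific data. A one-weight linear model stands in for an LLM here.

def fine_tune(weight: float, data: list[tuple[float, float]],
              learning_rate: float = 0.01, steps: int = 200) -> float:
    """Adjust an existing weight so predictions better fit the new data."""
    for _ in range(steps):
        for x, y in data:
            error = weight * x - y               # prediction error on new data
            weight -= learning_rate * error * x  # gradient step on the weight
    return weight

pretrained_weight = 1.0               # the "pre-trained" starting point
task_data = [(1.0, 3.0), (2.0, 6.0)]  # new data consistent with weight = 3
tuned_weight = fine_tune(pretrained_weight, task_data)
```

The tuned weight converges near 3.0, illustrating how training data pulls existing weights toward a new behavior without rebuilding the model from scratch.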
3.4 Data Management and Protection
Data is at the heart of all Generative AI tools. The foundation models are trained on billions to trillions of pieces of data. Users can put new data in and the models can learn from that data. Generative AI tools produce text, images and a whole host of other data. Properly protecting and managing that data as required by applicable law and policy is a critical consideration for any Generative AI acquisition.
It’s critical to know what data was used to build and train the system initially and how the training data relates to the data you will use the Generative AI tool on, including potential implications for bias. It’s also important to know whether the data you input will be used to train and improve the system or whether it will be deleted after the query is complete. You’ll need to know how the data you input might appear in outputs to other users when the system responds to their prompts. And you will also want to understand how you can transfer or use your data in other systems. The IPT can then identify, document and monitor where there are unknowns around data outputs, usage and ownership. The agency’s Chief Data Officer (CDO) may be able to answer specific questions and concerns about data rights and ownership.
3.4.1 Data Security
Using data responsibly and protecting data from unauthorized access are high priorities for data from both inside and outside of the government. Working with agency officials responsible for security, privacy and other areas of data governance is important for understanding what laws, regulations and policies apply to any data that will be involved in agency use of Generative AI; for assessing associated risks; and for implementing appropriate safeguards.
Protections can help prevent unauthorized access. If there is any chance that information your agency officials consider non-public would be used in a Generative AI tool, then those tools should only run on systems that have the right protections and safeguards in place. Even with publicly available information or information approved for unlimited distribution, there may be applicable laws, regulations, policies and standards.
3.4.2 Data Inventory
Listing the data that will be involved in the Generative AI tools the agency plans to use can help manage the data. Major sources of data to manage include but aren’t limited to:
- Generative AI Training, Testing and Validation Data - This is the information that the manufacturer of the Generative AI tool used to create the LLM and see if it worked well.
- Generative AI Fine-Tuning Data - This is the information that either the manufacturer of the Generative AI tool or the users of the tool (including your agency) use to customize the LLM to better serve specific purposes.
- Agency Inputted Data - This is the information that the agency staff feeds into the model so the LLM can summarize it, find patterns and insights or transform it and the prompts that agencies put in to get answers from the LLM.
- Generative AI Outputted Data - This is the information that the LLM creates based on the training, testing and agency inputted data.
- Generative AI Code and Models - This is the set of algorithms and programming that make up the model itself.
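An inventory like the one above can be captured in a simple structured record so the IPT can track each data source’s category, owner and handling restrictions in one place. The field names and entries below are illustrative, not a prescribed schema.

```python
# Sketch of a simple data inventory record for the categories above.
# Field names and entries are illustrative placeholders.

from dataclasses import dataclass

@dataclass
class DataInventoryItem:
    name: str
    category: str       # e.g. "training", "fine-tuning", "input", "output", "model"
    owner: str          # who controls the data: vendor, agency or shared
    contains_pii: bool  # drives which safeguards and reviews apply

inventory = [
    DataInventoryItem("Vendor base training corpus", "training", "vendor", False),
    DataInventoryItem("Agency case summaries", "input", "agency", True),
]

# Items flagged as containing PII can be routed to privacy review.
needs_privacy_review = [item.name for item in inventory if item.contains_pii]
```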
3.4.3 Understanding How Data is Used
Understanding and identifying training and testing data, input data and output data are critical to managing a successful Generative AI project or program. Both users and owners of the tool and data have equities in how they are managed and used.
Generative AI’s need for training data raises questions about where, when and how that data was collected. Model cards and similar resources can help explain the origin and nature of the training data and quantify the risk of bias, discrimination or similar harm to people. Understanding the data, its biases and its current applications and usage will help to identify whether a particular application or model is more appropriate to meet needs.
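One lightweight way to use model cards is to check whether a supplier has documented minimum transparency fields before a deeper review. The field names below are assumptions for illustration only, not a standard model card schema.

```python
# Sketch of checking a supplier's model card for minimum transparency about
# training data. The required field names are illustrative assumptions.

REQUIRED_FIELDS = ["training_data_sources", "collection_dates", "known_biases"]

def missing_fields(model_card: dict) -> list[str]:
    """Return the transparency fields the supplier's model card does not cover."""
    return [f for f in REQUIRED_FIELDS if not model_card.get(f)]

card = {
    "training_data_sources": ["public web text", "licensed news archives"],
    "collection_dates": "2019-2023",
    # "known_biases" not documented by the supplier in this example
}
gaps = missing_fields(card)
```

Gaps found this way become follow-up questions for the supplier rather than automatic disqualifiers.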
Asking suppliers to share information about their products helps acquisition staff know that there is transparency and accountability in the procurement. At the same time, it helps the IPT, CAIO and other relevant agency officials monitor compliance later.
Here are initial questions the IPT can ask to help get a better sense of what data are associated with the agency’s use of Generative AI, how to work with relevant agency officials to assess and mitigate risks, how commercial products were developed, what actions suppliers have taken and how to ensure government data rights are protected.
This list of questions is not exhaustive, nor will all the questions be applicable in every situation.
- What steps have you taken to ensure the data are suitable for the intended use case?
- What steps have you taken to evaluate the provenance and quality of the data?
- What is the source of the data (e.g., agency data, social media, news, legal documents, scientific reports, etc.)?
- Have you worked with relevant agency officials to evaluate the security risks, privacy risks, and other data governance issues associated with the data (including the training data), the model, and the intended use case? Have you worked with relevant agency officials to implement appropriate safeguards?
- What data were used for training and testing? Have you considered, for example, the size of the data set and the demographics associated with the data set? Have any inherent biases been identified in the data? What biases are likely present in the data?
- Was specific data used to “fine tune” the model? If so, what were the characteristics of those data? What steps have you taken to ensure these data are suitable for the intended use case?
- For training/testing data and agency inputted data, what was the context in which the data set was gathered or generated? What was its initial intended purpose(s)?
- For agency inputted data, are agency inputs stored by vendors? If so, what are the limitations on how vendors use that data?
- For Generative AI outputted data, who owns the data after they are generated/modified?
- Will Generative AI outputted data be used to train another AI model you are using?
- Could the intended use case have an impact on rights or safety?
- Does the agency have the ability to assess, monitor, and control outputs?
- Can the outputs be explained?
- Are there any risks of data leakage?
- Who owns the model once the contract ends?
- Does the model have any sort of history of bias or ethical challenges that have not been corrected?
- Could use of the model inadvertently synthesize information in a way that makes the output more sensitive than the input?
- What kinds of protections or submodels are being formed with data that could inadvertently lead to adverse outcomes for government customers?
3.4.4 Potential Data-Related Risks
This resource guide lists some but not all of the potential risks of Generative AI in Section 1.4 Potential Risks of Generative AI including Misinformation and Disinformation, Privacy, Bias and Discrimination and Security/Cybersecurity. Clarifying ethical challenges related to data and completing a risk assessment are potential ways to identify risks for data and determine how to monitor and mitigate them.
Clarifying ethical challenges
Documents like the Blueprint for an AI Bill of Rights and the Tradewinds Responsible AI Toolkit are published ethical frameworks that help explain what issues to prioritize.
From the Responsible AI website:
“The Responsible Artificial Intelligence (RAI) Toolkit provides a centralized process that identifies, tracks, and improves alignment of AI projects to RAI best practices and the DoD AI Ethical Principles, while capitalizing on opportunities for innovation. The RAI Toolkit provides an intuitive flow guiding the user through tailorable and modular assessments, tools, and artifacts throughout the AI product lifecycle. The process enables traceability and assurance of responsible AI practice, development, and use.”
Completing a risk assessment
For each of the areas listed above, capturing both what might go wrong and the potential harm, damage or loss that the agency would suffer if that happened can be helpful for determining how data are used.
The Responsible AI Toolkit also provides an example guide on risk:
“The Responsible Artificial Intelligence (RAI) Defense AI Guide on Risk (DAGR) is intended to provide DoD AI stakeholders with a voluntary guide to promote holistic risk captures and improved trustworthiness, effectiveness, responsibility, risk mitigation, and operations.”
The NIST Risk Management Framework (RMF) helps to better manage risks to individuals, organizations, and society associated with artificial intelligence (AI) by incorporating trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems. The accompanying NIST AI RMF Playbook provides suggested actions for achieving the outcomes laid out in the AI Risk Management Framework (AI RMF). Suggestions are aligned to each sub-category within the four AI RMF functions (Govern, Map, Measure, Manage). The Playbook is neither a checklist nor set of steps to be followed in its entirety. Playbook suggestions are voluntary. Organizations may utilize this information by borrowing as many – or as few – suggestions as apply to their industry use case or interests.
If any data was used for training, tuning, validating or fine-tuning the model, IPTs are encouraged to retain that data so that issues like data drift can be observed and contested later.
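Retaining the tuning and validation data makes simple drift checks possible later, for example by comparing the distribution of a numeric feature at training time against recent inputs. The threshold and data below are illustrative; real drift monitoring would track many features and use more robust statistics.

```python
# Sketch of a simple data drift check made possible by retaining tuning data:
# compare the mean of a numeric feature at training time against the same
# feature in recent inputs. The threshold and figures are illustrative.

from statistics import mean, stdev

def drift_detected(train: list[float], recent: list[float],
                   threshold: float = 2.0) -> bool:
    """Flag drift when recent values shift more than `threshold` training
    standard deviations away from the training mean."""
    shift = abs(mean(recent) - mean(train))
    return shift > threshold * stdev(train)

training_sample = [10.0, 11.0, 9.0, 10.5, 9.5]   # retained tuning data
recent_inputs = [18.0, 19.5, 20.0]               # inputs seen in production
has_drift = drift_detected(training_sample, recent_inputs)
```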
3.5 Acquisition Strategy
FAR 34.004 defines the acquisition strategy as “the program manager’s overall plan for satisfying the mission needs in the most effective, economical, and timely manner” and FAR Subpart 7.1 identifies the components of an acquisition plan.
With the IPT having documented the agency’s goals, tested Generative AI solutions in sandboxes, developed requirements and worked with relevant agency officials to define how data will be managed, you can map out what you intend to buy and how you intend to buy it. You can specify the performance metrics of the Generative AI products, tools, functions, modules or add-ons that will best serve the agency based on the sandbox tests the IPT completed. There are multiple ways to answer the question of “how to buy it.”
3.5.1 The fastest acquisition may be no new acquisition.
Generative AI tools are often software delivered via the web, or “Software as a Service” (SaaS). Like other SaaS tools, most Generative AI tools are easily available through cloud platforms, other software packages and publicly available websites.
Generative AI tools may already be available to agency staff. They might have them in tools they already use every day. Generative AI tools may be accessible through existing government cloud platforms. And they may be available through professional service and system integrator contracts agencies already have in place.
That means your agency might be able to leverage existing enterprise wide solutions to access Generative AI solutions instead of purchasing a new product.
In that way, the fastest acquisition for Generative AI tools may be no new acquisition, or as simple as using an existing cloud platform contract to access Generative AI tools. Verify the additional requirements are within scope for the existing contract.
The fastest or easiest solutions may not be the best for the agency in the short and/or long term. Work with the IPT in the market research phase to ensure adequate competition is anticipated to avoid vendor lock-in.
Before embarking on a large scale or involved process for scoping and conducting a new acquisition for Generative AI tools, see if there is a simpler route. Work with your agency’s Chief Information Officer, Chief Artificial Intelligence Officer, Chief Data Officer, Chief Information Security Officer, Senior Agency Official for Privacy and other relevant officials to determine what you already have in place and whether you can just use an existing solution or contract.
Here are a few options:
From public web tools you can access
Many providers offer basic Generative AI tools and services from a freely available public page or a search page. If your agency staff’s Generative AI needs are basic, consider using one of these basic, free tools. This might only be acceptable in very few use cases. Also note that the use of free public tools will likely limit the data you can use and the amount of information available to address some of the risks and concerns identified in previous sections, so carefully consider these limitations before use. Review the terms of service and any fiscal law implications with your agency’s legal team before using these tools to ensure compliance.
In software you have
Lots of software packages have basic Generative AI functionality built in. Before you start a new acquisition, check with your agency’s IT group. You may already have access to the right Generative AI tools within your existing software. Expanding access to or leveraging built in capabilities may require a review of the existing agency Authority to Operate (ATO) and the boundaries for the current operating state.
Via a cloud platform you use
Many cloud platforms that offer large scale data storage and compute power provide access to Generative AI tools as modules from right within their platforms. Agency staff might be able to use their current cloud platform accounts to pay for and use those modules if it is within the scope of the underlying, pre-existing contract.
Considerations when Accessing Generative AI Tools via a Cloud Platform
- Coordinate with the agency Chief Information Officer (CIO), Chief Artificial Intelligence Officer (CAIO), Chief Data Officer (CDO), Chief Information Security Officer (CISO), Senior Agency Official for Privacy (SAOP) and Office of General Counsel (OGC) on appropriate risk management processes. Executive Order 14110 stipulates that AI use cases need to be risk managed by contract authorities or agency IT. If there is not yet a formalized process, document your approvals from the relevant agency officials.
- Consider creating a concept of operations (CONOPS) or charter for these tools in collaboration with your agency’s CIO, CAIO, CDO, CISO, SAOP, and OGC organizations.
- Identify which cloud platforms your agency has access to, which office(s) own the contracts and what the terms and conditions are. In many cases, the contract and contracting officer are assigned to the CIO organization.
- Prioritize privacy and security including for experiments. Choose cloud platforms and modules that have received FedRAMP authorizations and incorporate appropriate safeguards to mitigate risks. Understand how your data may be used to train the model or improve outputs, with and without your consent.
- Develop a cost estimate based on your requirements. One way to do that is to use publicly available price data from the cloud service providers you are considering. If the contract is with a Value Added Reseller, there may be management costs for the vendor needed to stand up the environment.
- Ensure you are getting access to the right version of the platforms. Many cloud platforms have different versions for commercial and government each with special security options and pricing models. You may also need to request permission to access Generative AI modules if there are only test versions available for federal environments.
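The cost-estimate consideration above can start as simple arithmetic over publicly posted per-token prices. Every price, rate and usage figure in this sketch is a hypothetical placeholder; substitute the provider’s actual published pricing and your agency’s expected usage.

```python
# Sketch of a rough cost estimate built from publicly posted per-token prices.
# All prices, rates and usage figures below are hypothetical placeholders.

def monthly_cost(users: int, queries_per_user_per_day: int,
                 tokens_per_query: int, price_per_1k_tokens: float,
                 workdays: int = 22, reseller_fee_pct: float = 0.0) -> float:
    """Estimate a monthly bill, optionally adding a reseller management fee."""
    tokens = users * queries_per_user_per_day * tokens_per_query * workdays
    base = tokens / 1000 * price_per_1k_tokens
    return base * (1 + reseller_fee_pct)

estimate = monthly_cost(users=200, queries_per_user_per_day=10,
                        tokens_per_query=1500, price_per_1k_tokens=0.002,
                        reseller_fee_pct=0.05)
```

Running several scenarios (low, expected and high usage) gives a range that is more useful for budgeting than a single point estimate.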
From a professional services provider or IT system integrator on contract
A growing number of consultants are offering Generative AI access through their existing professional services contracts. IT system integrators (SIs) have also built Generative AI tools and interfaces to others’ tools. Agency staff can sometimes access such tools through a custom portal or from software that the consultant or SI has deployed. Depending on their contract and/or task order, a consultant or SI might already be able to provide or build a custom solution providing access to Generative AI tools. Anything outside the contract provisions and clauses may incur additional costs and requirements.
With any of the above options, keep in mind that even though you might not do a new acquisition you may still need to modify the terms and conditions of an existing contract, upgrade subscriptions or exercise an option.
And of course, especially for what looks to be “free,” check with your agency’s Chief Information Officer, Chief Artificial Intelligence Officer, Chief Data Officer, Chief Information Security Officer, Senior Agency Official for Privacy, and other relevant officials to ensure the tools meet all appropriate regulations, standards and policies.
3.5.2 Acquisition Channels for Generative AI
If you need to do an acquisition to access Generative AI tools a few options exist. The acquisition method or vehicle you choose may influence which of these options is most appropriate. As with any acquisition, existing providers should be a consideration and not a requirement that impedes competition.
Acquire access to Generative AI tools or functions within software you already have.
Typically the acquisition process for getting these Generative AI features will look like paying more for a higher subscription or to move up to the next feature tier that has the more expensive functions. If you procure that software through a Value Added Reseller (VAR) there may be a management fee in addition to the cost of Generative AI tools access.
Note that many software providers are adding Generative AI capabilities to existing software. Reviewing any changes to the functionality or the terms and conditions will help you identify whether these capabilities are present in new software updates and deliveries.
Acquire access to Generative AI tools directly from the manufacturer.
The acquisition process will probably look like paying the manufacturer according to their pricing structures. The manufacturer may use a Value Added Reseller (VAR) who may charge a management fee in addition to the cost of Generative AI tools access.
Acquire access to a cloud platform and Generative AI tools through that platform.
The acquisition process should involve procuring access to the cloud platform and then paying for using the Generative AI tools. Many of the cloud platforms use resellers who may charge a management fee in addition to the cost of Generative AI tools access.
Acquire access to Generative AI tools through a system integrator.
System integrators all have different pricing models but you can expect to find many that bill as professional services by the hour, week or month. They may charge separately for Generative AI tools access.
Many Generative AI tool manufacturers use multiple channels for their products. They sell their tools directly, they use resellers and they make their tools available through cloud platforms and system integrators. Prices often vary among the channels. In some cases you have to pay if you go directly to the manufacturer but can get limited use of products for free through partners.
Look carefully at the different acquisition options and determine what best meets your agency’s needs. In accordance with governmentwide and agency regulations and policies, consider as part of your requirements development which options offer:
- The most trackable, transparent and cost-effective pricing models
- The flexibility and scalability for changing users, usage or use cases
- The terms and conditions for security, privacy, and other data management issues
- The ease for agency staff to access and use the tool
- The speed of acquiring access once the solicitation is released
Custom Solutions
In certain cases, the IPT may decide it needs more than just access to Generative AI tools and needs a custom solution that is tailored or built specifically for their use. Such acquisitions may end up looking like other purpose-built software purchases in pricing, terms and conditions and process.
3.5.3 Acquisition Vehicle Selection
You can choose from many options of acquisition vehicles, including governmentwide acquisition contracts (GWACs). This resource guide includes a list of potential acquisition methods and vehicles available for civilian agencies. Some vehicles will be a better fit for your agency’s particular needs than others. In addition to the steps outlined in FAR Part 8, FAR Part 10, FAR Part 12 and FAR Part 13, below are a few questions to ask as you evaluate which acquisition vehicle might be best for your needs:
-
Does your agency have an anticipated contract type?
Your agency may prefer or need to use a certain type of contract. There are multiple contract types outlined in FAR Part 16 such as fixed price, cost reimbursement and time and materials. Not every acquisition vehicle can accommodate all contract types. It is important to research whether the anticipated contract type aligns with a specific vehicle.
-
Is the acquisition vehicle designated Best-in-Class?
Your agency may benefit from using acquisition vehicles that have been designated Best-in-Class (BIC) by the Office of Management and Budget (OMB). BIC vehicles maximize the government’s shared purchasing power, allowing agencies to leverage volume discounts; help agencies operate more efficiently by reducing administrative costs and contract duplication; and expand the sharing of government-wide buying data, leading to better-informed decisions.
-
Does your agency have specific socio-economic requirements or targets?
Your agency may plan to set-aside a contract requirement to target certain socioeconomic categories such as Service Disabled Veteran-Owned Small Businesses (SDVOSBs) or those in the 8(a) Business Development program. Some vehicles provide easy ways to access such suppliers.
-
Does your agency have budget/resource constraints?
Your agency may need to keep to a strict dollar limit for a purchase. Some vehicles have negotiated better prices or fees in exchange for the buying agency taking on more of the burden of the acquisition from reporting to compliance checks.
-
Does your agency have preferred or approved vehicles?
Your agency may have identified particular contracts as part of its strategic sourcing efforts that it prefers to use. Some vehicles may be designated as preferred or even mandatory vehicles because they meet your agency priorities and not using these vehicles would require a waiver.
-
Does your agency have its own capacity to verify compliance?
Your agency may prefer to not worry about reviewing or verifying whether suppliers are in compliance with relevant policies and regulations. Some vehicles have pre-negotiated terms and conditions that ensure suppliers comply with such rules.
- Are the products or services you need available on the vehicle you plan to select?
Your agency may have very specific requirements for the solutions it is trying to procure. Some vehicles may have the right suppliers or products available while others do not.
See Section 4 Acquisition Methods and Vehicles in this resource guide for a list of relevant acquisition methods and vehicles.
3. Generative AI Procurement
3.6 Market Assessment
3.6.1 Market Research
The marketplace for Generative AI technologies and vendors is changing every day. Large Generative AI companies, universities, small businesses and new upstarts are all announcing new tools, capabilities and features constantly. Generative AI is such a popular topic that there are multiple articles on these advancements every day. You can stay informed about the latest AI technologies and vendors by monitoring tech and Federal IT news websites.
It can still be difficult to know which resellers, vendors and manufacturers to look at for a particular acquisition. Those on government contracts are a good starting place. Here are a few other resources:
- Search tools on acquisition program websites like GSA’s eLibrary and Cloud Information Center provide easy ways to find suppliers.
- The Federal Acquisition Service (FAS) also runs a free Market Research as a Service (MRAS) offering for agencies to craft targeted asks of industry that can then inform those agencies’ Generative AI acquisitions.
- The Governmentwide IT Vendor Management Office (ITVMO) is the government’s trusted advisor and advocate for smarter, faster IT buying.
- The Tradewinds Solutions Marketplace offers ways to broadcast and manage AI opportunities to industry, interface with an ecosystem of partners and companies that can deliver AI technologies, and peruse a digital repository of pitch videos that address challenges in the AI/ML, digital, and data analytics space.
- This resource guide contains a data dashboard for recent obligations around Generative AI.
3.6.2. FedRAMP Authorization
As many Generative AI products are cloud-based offerings, complying with Federal security rules around cloud services is essential.
Products that have gone through the FedRAMP process have undergone a significant review of security documentation, testing and validation, which can accelerate CIO approval for a product to be incorporated into an agency network. The FedRAMP Marketplace is a searchable and sortable database of cloud service offerings that have achieved a FedRAMP designation.
FedRAMP also published a draft Emerging Technology Prioritization Framework. This document describes the operational framework for how FedRAMP will prioritize certain cloud service offerings that utilize specific emerging technologies.
The prioritization process will be integrated into existing and future FedRAMP authorization paths. The prioritization framework will not create additional authorization pathways and will maintain the same rigorous and thorough authorization requirements.
3.6.3 Vendor Responsibility
Acquisition professionals are used to conducting responsibility determinations on suppliers. Generative AI is an emerging field with new players arising all the time. Knowing which vendors can deliver on their promised results is difficult, which is a big reason to test solutions before acquiring them using techniques like the sandboxes discussed in Section 3.3, Solutions Scoping and Testing, of this resource guide.
Standard due diligence on companies is still important. Looking at their track record as a government supplier, their financial health and their past performance are all good indications (but not guarantees) of how they might perform on an acquisition you run. At the same time, because this field is emerging, some companies doing sophisticated and powerful work have been in business for less than 10 years. Aside from the older, larger and established IT players, many of the Generative AI companies aren’t registered government suppliers yet.
Because many Generative AI solutions may be accessed through a cloud provider, a partner, a systems integrator or a reseller, you may need to do some extra work to understand both the vendor’s and manufacturer’s ability to deliver. Ask about what access the team may have to the Generative AI tool manufacturer even though the contract may be with another entity.
Reviewing the commercial terms and conditions for software and data use helps ensure data use and the products themselves meet the government requirements for security, privacy, and data ownership. The Governmentwide ITVMO increases agency knowledge on commonly used and emerging vendors, especially small businesses, that offer IT services and products in the Federal IT Marketplace.
3.7 Cost Estimation
Given the large amount of IT budget that often has to go towards ongoing operation and maintenance, costs can be a significant impediment to Generative AI acquisition.
Generative AI systems are software systems. Suppliers bill for them mostly like other “Software as a Service” (SaaS) tools, with some slight variations. You may need to pay for monthly access to the Generative AI system, plus the time you use it, plus the amount of data you input or output.
Costs for using Generative AI tools can add up quickly and unpredictably.
Setting up systems to accurately forecast and check actual costs against the budget when using Generative AI tools helps address concerns about growing consumption costs.
There are also potential added costs of monitoring and reporting for these systems. Rights and safety measures can drive operations and maintenance costs higher. The efficiencies gained by using Generative AI tools may need to be significant just to break-even.
Understand your agency’s users, use cases, desired outcomes and internal policies to help develop accurate estimates.
Having an understanding of fiscal law principles associated with purchasing SaaS and Cloud will help you to appropriately structure and manage your Generative AI project.
3.7.1 Generative AI tools are sold like software.
Again, most commercial Generative AI tools are considered “software” so they tend to be sold the way most software is sold and priced. Agencies pay fees for software in three main ways: subscriptions, usage and feature tiers.
Subscriptions
Agency pays per month or year for each account that renews at the end of the term until canceled, regardless of how much you use the model. Think of this like the fee you might pay for a video streaming service: you can watch as much as you want for a flat fee per month or per year.
Usage
The agency identifies and obligates funding for requirements and is billed for the amount of data inputted or outputted, the time or resources used on a particular platform or the quality of output. Think of this like the fee you might pay for a rideshare: you pay for the length of the trip, the time the trip takes and a little more if a lot of people are trying to get rides at the same time.
Feature Tiers
Agency pays once per the group of functions and features regardless of consumption/usage or number of users. Think of this like the one-time fee you might pay for a membership: you pay one fee to the club which entitles you to lifetime benefits and if you pay a higher fee you get more benefits like access to exclusive sales.
Generative AI tools typically charge some combination of these fees. Here are a few examples based on real-life cases:
- A reseller charges an agency $20 per month per user for email, word processing, spreadsheet and presentation programs and apps. They charge an additional $20 per month for Generative AI features and functions in these same programs and apps.
- A company offering the ability to “train” an AI model charges an agency a base rate per month for an account, plus $10 per million tokens, where each token represents a word or word fragment of “training data” that the agency uploads.
- A cloud platform provider charges a certain amount to get access to a Generative AI model but then charges the agency for every minute they use the model or for every time they access a database that the model connects to.
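The arithmetic behind blended fee structures like those in the examples above can be sketched in a few lines. All of the rates below (per-seat fees, a per-million-token rate, a per-minute platform rate) are hypothetical placeholders for illustration, not actual vendor pricing:

```python
# Hypothetical rates for illustration only -- not actual vendor pricing.
SEAT_FEE = 20.00             # $/user/month for the base productivity suite
AI_ADDON_FEE = 20.00         # $/user/month for Generative AI features
TOKEN_RATE = 10.00           # $ per million tokens of uploaded training data
PLATFORM_MINUTE_RATE = 0.05  # $ per minute of model use on a cloud platform

def monthly_cost(users: int, training_tokens: int, platform_minutes: int) -> float:
    """Estimate one month of blended subscription plus usage charges."""
    subscription = users * (SEAT_FEE + AI_ADDON_FEE)
    usage = (training_tokens / 1_000_000) * TOKEN_RATE
    platform = platform_minutes * PLATFORM_MINUTE_RATE
    return round(subscription + usage + platform, 2)

# 50 users, 3 million training tokens and 2,000 platform minutes:
# 50 * $40 + 3 * $10 + 2,000 * $0.05 = $2,130.00
print(monthly_cost(50, 3_000_000, 2_000))
```

A real estimate would swap in the rates quoted by the supplier and run the numbers for low, expected and high usage scenarios.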
3.7.2 Estimating the Costs of Generative AI Solutions
To estimate the costs of Generative AI solutions, start by understanding the fees that the supplier would charge then ask questions to quantify accounts, users, time and usage.
| Subscriptions | Usage | Feature Tiers |
| --- | --- | --- |
| Per month or year for each account that renews at the end of the term until canceled. | Per the amount of data inputted or outputted, the time or resources used on a particular platform or the quality of output. | Per the group of functions and features regardless of consumption/usage or number of users. |

Key questions for each pricing model:

Subscriptions
- How many unique accounts does the agency program need?
- How long will we need accounts for (a few months, a year or more)?
- What other base subscriptions are required to access the service?
- Do we need a service or support layer to help us use the tools?

Usage
- How much data will we input?
- How much data will we output?
- What quality of output do we want?
- How much time (per day and per month) will the AI service be used?
- How many different tools will use the AI service?
- Do we need a service or support layer to help us use the tools?

Feature Tiers
- What features do we need to use and which tiers support those features?
- What are the limits (number of accounts, total usage, etc.) for using those AI tools at that tier of service?
- Can we customize the features we buy in each tier or are they preset?
- Do we need a service or support layer to help us use the tools?
3.7.3 Developing estimates for using Generative AI tools
Subscription pricing is generally straightforward: estimate the number of users and the costs. Because Generative AI tools are new and companies are still figuring out their own costs and revenues, these rates may change. Though you should always try to negotiate discounts from listed prices, companies are rarely offering reduced prices for these tools at this time, so listed prices are realistic for estimating purposes.
When accessing generative AI tools through a usage-based model, account for any type of access costs in addition to actual usage. For example, if you are accessing a generative AI tool through your Cloud provider’s architecture and you use a Value Added Reseller (VAR) for that architecture, your estimate will need to include any costs the VAR charges in addition to your usage costs. It is also important to understand and account for any additional access charges your Cloud provider may add on top of their usage costs and the usage costs for the particular generative AI tool.
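A rough sketch of that layering, using purely illustrative numbers (a flat CSP access charge and a VAR fee modeled as a percentage markup on pass-through costs; actual VAR and CSP fee structures vary by contract):

```python
def layered_usage_cost(model_usage: float, csp_access_fee: float,
                       var_markup_pct: float) -> float:
    """Total cost when a Generative AI tool is reached through a CSP and a VAR.

    model_usage    -- the tool's own usage charges for the period ($)
    csp_access_fee -- any flat access charge the cloud provider adds ($)
    var_markup_pct -- the reseller's fee as a percent of pass-through costs
    """
    pass_through = model_usage + csp_access_fee
    var_fee = pass_through * (var_markup_pct / 100)
    return round(pass_through + var_fee, 2)

# $1,000 of model usage, a $150 CSP access charge and a 5% VAR markup:
# (1,000 + 150) * 1.05 = $1,207.50
print(layered_usage_cost(1000.00, 150.00, 5.0))
```

The point of the sketch is that an estimate built only on the model's listed usage rates will understate the total; each intermediary layer needs its own line item.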
Similar to subscription pricing, estimating costs for software feature tiers is generally straightforward but offerings within tiers and pricing may change more frequently as companies develop and integrate generative AI models and capabilities into their products.
Four of the largest Cloud Service Providers (CSPs) provide cost calculators to assist in estimating a budget and developing a cost estimate. Other CSPs may provide similar tools that can serve as a basis for your estimate. Successfully using these tools requires the estimator to know the potential workload types, the processing speeds and memory needed, and the number of users. Consult with technical experts in your organization, use more than one of the tools when building your estimate and coordinate with your legal team on how to appropriately obligate funding.
Setting up an agency "test bed" or "sandbox" for a particular Generative AI model and constructing an experiment is one way to understand and start to estimate potential usage costs.
When constructing technical architectures, the IPT will have visibility into, and options for, which components it uses and the costs associated with each component. Ensuring that CORs and end users communicate often during development and operation of the tools will help manage your spend.
Cost control measures such as not-to-exceed caps on usage time or costs for Generative AI tools can help ensure that teams stay within allotted budgets. The IPT can also put technical checks and reminders in place to help ensure costs don’t balloon. Leaving one operation running on a cloud platform overnight and not shutting it down can eat up an entire program’s monthly budget because it is making continuous, unnecessary use of a server. Keeping these errors to a minimum is critical for keeping costs under control.
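A not-to-exceed check like the one described above can be sketched as a simple run-rate projection. The thresholds here are illustrative; in practice, agencies would more likely rely on their cloud provider's native budgeting and alerting services:

```python
def check_budget(spend_to_date: float, day_of_month: int, days_in_month: int,
                 monthly_cap: float, warn_ratio: float = 0.8) -> str:
    """Project month-end spend from the run rate so far and compare to the cap."""
    projected = spend_to_date / day_of_month * days_in_month
    if projected > monthly_cap:
        return f"STOP: projected ${projected:,.2f} exceeds cap ${monthly_cap:,.2f}"
    if projected > monthly_cap * warn_ratio:
        return f"WARN: projected ${projected:,.2f} nearing cap ${monthly_cap:,.2f}"
    return f"OK: projected ${projected:,.2f} within cap ${monthly_cap:,.2f}"

# $6,000 spent by day 10 of a 30-day month against a $15,000 cap
# projects to $18,000 at month end, so the check fires:
print(check_budget(6000, 10, 30, 15000))
```

A check like this, run daily, is exactly what would have caught the overnight-server example: the run rate spikes long before the monthly budget is actually exhausted.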
An overview of pricing for general Cloud services can be found on the pricing resource page of the Cloud Information Center.
Agencies may find they need to contract for specific services to develop, tune and maintain their models. Contracting officials should reach out to their CAIO's office to identify the most relevant labor categories or skill sets for the particular requirements if this is needed. Use tools such as GSA’s CALC to help develop estimates if you have an understanding of appropriate labor categories and skill levels.
3.8 Performance Monitoring and Evaluation
When set to train on inputted data, Generative AI tools are constantly learning and changing. Once a tool is implemented, the IPT must closely monitor how it is used and what it outputs. Insufficient contractual provisions can hinder an agency's ability to monitor and evaluate. Consider contract provisions requiring vendors to submit deliverables that help monitor and control for risks when using Generative AI tools. Additionally, the NIST AI Risk Management Framework and other resources may be helpful.
3.8.1 Maintaining the IPT
Continuing the Generative AI IPT for the full lifecycle of using the tool brings multiple benefits.
Training
The IPT can develop or collect appropriate training materials, make them available and deliver them to end users. Part of this training should cover the changing nature of generative AI tools to help users recognize when the product is not functioning correctly.
Incident Detection and Response
Build in reporting and redress procedures for when something goes wrong working in collaboration with your agency’s teams that address risk management. This is particularly important when Generative AI tools and their outputs may be used in a manner that impacts benefits, rights or norms. Robust bias detection and mitigation strategies help to stay ahead of problems. Audit and oversight mechanisms can flag when incidents occur and when model performance is degrading. All of these will help identify when models may need to be refreshed, modified, or taken out of production.
Strategic Objectives
Regularly reviewing and adjusting Generative AI strategies helps ensure they remain effective and aligned with your agency's goals.
3.8.2 Engage regularly with vendors
The inputs and outputs of Generative AI tools may change the functionality and performance of the product. Successfully managing a Generative AI acquisition requires frequent communication with vendors.
Meeting Performance and Ethical Standards
Before contract award (even at the solicitation stage and/or built into evaluation criteria), meeting with vendors can offer the chance to clearly define how they will meet performance and ethical standards and mitigate risks associated with the tool. Clarify the roles and responsibilities of vendors, particularly if a use case has an impact on rights, entitlements or access to benefit programs.
Resolving Disputes
Agree on areas of uncertainty that exist with the generative AI tool and how to navigate issues or disputes between the government and the vendor providing the model. Document these agreements and identify a cadence to review and update them ensuring your legal support is included in the process. If performance issues persist, consider exploring contractual options with the Contracting Officer and legal support such as a cure notice, show cause, or termination.
Updating Products
Identify procedures to measure and support trust and assurance for products. This should include testing and system update cycles and identifying new capabilities incorporated into the products. Also regularly review any changes in commercial terms and conditions around data and data usage. Consider doing this through a Service Level Agreement (SLA) or other mechanism that the IPT can use to monitor vendor performance.
Protecting Legal Rights
Where systems may support decisions on rights-impacting programs, make sure their outputs can be contested in accordance with the laws, statutes and regulations governing the program.
Controlling Expenses and Costs
Monitor costs against the cost estimates you created during the acquisition. Make sure invoices contain sufficient detail to monitor and control expenditures, particularly usage or consumption costs. Regularly communicate with end users and the technical team (for example, the Contracting Officer’s Representative or COR), and review invoices to make sure they align with the contract, with how the Generative AI tools are being used and with the FAR.
4. Acquisition Methods and Vehicles
There is currently not a governmentwide Generative AI-only acquisition vehicle. That said, the federal government has a variety of procurement options to help agencies purchase Generative AI solutions.
An agency may have its own specific Blanket Purchase Agreement (BPA) or other contractual agreement that permits them to procure Generative AI solutions. If such a BPA exists for your agency, consider using it.
Below is a list of some procurement options that you may wish to consider as you are planning your acquisition. When it comes time to buy Generative AI solutions, look at governmentwide Best-in-Class (BIC) acquisition vehicles as opportunities for best pricing and terms and conditions. Balance BIC use along with small business considerations.
While open market buys of Generative AI are possible and many agencies have done them, they can involve higher risks from financial to compliance. When an agency can meet its needs using governmentwide contracts that mitigate risks and leverage the government’s buying power, this approach is preferred.
Visit the dashboard to gather data regarding what vehicles and strategies agencies have used in the past.
GSA Multiple Award Schedules (MAS) IT
The Multiple Award Schedules (MAS) IT (what used to be called “Schedule 70”) provides easy ways to purchase over 7.5 million innovative commercial products, services, and solutions from thousands of pre-qualified vendors, including ones offering Generative AI solutions. MAS IT is managed by General Services Administration (GSA).
Because MAS IT has a large vendor base, it is best for commercial items and/or if you need to meet socioeconomic small business goals. MAS IT vendors have also gone through an initial series of compliance checks and have pre-negotiated pricing, reducing the workload on agency contracting officials. If you have a well-defined requirement and market research indicates that a commercial Generative AI tool can fulfill your need, the MAS IT vehicle may be a good fit for your procurement.
Government Wide Acquisition Vehicles
Government-Wide Acquisition Contracts (GWACs) provide access to a wide range of IT solutions, including AI technologies. These contracts streamline the procurement process and offer competitive pricing. To use GWACs, Contracting Officers can request delegated procurement authority or work with groups like GSA’s Assisted Acquisition Services (AAS) and NIH NITAAC’s Assisted Acquisitions team. Working with assisted acquisition teams may incur costs in addition to contract access fees.
These vehicles offer a commitment to meet customer needs, easy access to solutions, cost savings, socio-economic goals, streamlined acquisitions, and compliance with federal regulatory guidance (e.g., FedRAMP). Below are the major government-wide contract vehicles that would be most useful for Generative AI purchases:
- 8(a) STARS III
8(a) STARS III is a small business set-aside GWAC that provides flexible access to IT services and customized IT services-based solutions from a large, diverse pool of 8(a) industry partners. Use this vehicle when you want to make an award to a Small Disadvantaged Business (SDB) or an 8(a) firm. 8(a) STARS III is considered a Best in Class acquisition vehicle and is managed by the General Services Administration (GSA).
- Alliant 2
Alliant 2 is made up of large businesses with extensive experience in major IT contracts and several teaming arrangements and is primarily designed for complex agency requirements. Alliant is often used for requirements where there is a need to rapidly scale to meet the government's needs. Alliant 2 is managed by the General Services Administration (GSA). Alliant 2 is considered a Best in Class acquisition vehicle.
- CIO CS, SP3 & SP3 Small Business
CIO SP3 and CIO SP3 Small Business provide technology-related products and services across ten task areas and about 137 labor categories. CIO CS provides access to IT commodities delivered either on-premise or through a managed service construct. CIO SP3 and CIO SP3 Small Business are both considered Best in Class acquisition vehicles. This family of contracts is managed by the National Institutes of Health (NIH) through NITAAC (NIH Information Technology Acquisition and Assessment Center).
- GSA EIS
Enterprise Infrastructure Solutions (EIS) is a comprehensive solution-based vehicle to address all aspects of federal agency IT, telecommunications, and infrastructure requirements. Under EIS, generative AI solutions may be provided in support of or as part of your network and infrastructure operations. EIS is considered a Best in Class acquisition vehicle. EIS is managed by the General Services Administration (GSA).
- NASA SEWP
NASA SEWP (Solutions for Enterprise-Wide Procurement) provides the latest in commercial Information and Communications Technology (ICT) and Audio-Visual (AV) products and services for all Federal Agencies and their approved contractors. This contract is managed by the National Aeronautics and Space Administration (NASA). SEWP offers streamlined buying, while buying agencies retain responsibility for compliance. SEWP is considered a Best in Class acquisition vehicle.
- GSA VETS 2
VETS 2 is the only GWAC set aside exclusively for Service-Disabled, Veteran-Owned Small Businesses (SDVOSB). It’s designed to meet diverse agency IT services requirements, including new and emerging technologies like Generative AI. Use VETS 2 when you specifically want to award to an SDVOSB. VETS 2 is considered a Best in Class acquisition vehicle and is managed by the General Services Administration (GSA).
Department or Agency-Wide Vehicles
Some agencies have their own contracting vehicles designed to meet unique requirements, which can be an efficient way to procure Generative AI solutions tailored to specific missions. Here are a few examples:
- Army CHESS
CHESS (Computer Hardware, Enterprise Software and Solutions) is the Army’s designated primary source for commercial IT. CHESS provides a no-fee, flexible procurement strategy through which an Army user may procure commercial-off-the-shelf (COTS) IT hardware, software, and services via an e-commerce (IT e-mart) based process. CHESS offers simple, straightforward contract vehicles through its online Army e-commerce ordering system, the IT e-mart. These contracts provide continuous vendor competition for best value and consolidation of requirements to maximize cost avoidance and leverage the Army’s buying power. This contract is for use by the US Army.
- EAGLE NextGen
Enterprise Acquisition Gateway for Leading Edge Solutions Next Generation is not a single contract vehicle, but a suite of contract vehicles, including a portfolio of IT services contract vehicles with specialized, targeted DHS scope. EAGLE Next Gen balances the use of existing Government-wide Acquisition Contracts (GWACs) in conjunction with DHS IT services contract vehicles. DHS tailored requirements from General Services Administration's Alliant 2, 8(a) STARS III, and VETS 2, as well as the National Institutes of Health's CIO-SP3 and CIO-SP3 Small Business into its Category Management and Strategic Sourcing portfolio to meet unique DHS mission and IT needs.
- Tradewinds
Tradewinds is the Department of Defense’s Platform for learning about and procuring technology; it offers “AcqBot” (a Generative AI enabled chatbot) and a Solutions Marketplace to support AI procurements. Tradewinds has several contract vehicles that can be utilized for procurement, including TryAI Commercial Solutions Opening (CSO), Test and Evaluation Services Blanket Purchase Agreement (BPA), Data Readiness for Artificial Intelligence Development (DRAID) Basic Ordering Agreement (BOA), and AI Talent. Tradewinds is particularly useful for agencies intending to quickly procure and test commercial AI solutions, acquire demonstrations of innovative commercial items, or seek AI talent.
- Pilot IRS
Pilot IRS is an iterative procurement technique focused on outcomes that allows the Internal Revenue Service (IRS) to test new technologies on faster timelines. If a solution fails to meet expectations, then it will not proceed to the next phase of funding. This methodology creates an agile approach to identify, test, and deploy solutions that support the mission, regardless of whether the solution, technology, or service currently resides within the IRS or the federal government. The Pilot IRS program allows for testing to be done in phases, concurrently with other vendors, and can easily be scaled and ultimately deployed in incremental phases. Pilot IRS could potentially be used for Generative AI projects. Please check with Pilot IRS staff for more specific information.
Cooperative Research and Development Agreements
Generally, a Cooperative Research and Development Agreement (CRADA) authorizes federal labs to enter into agreements with other federal agencies, state/local government, industry, non-profits, and universities for licensing agreements for lab developed inventions or intellectual property to commercialize products or processes originating in federal labs. In some cases, partnerships may be made directly with commercial R&D organizations. This approach may be most suitable if a government lab and their commercial partners either are developing or have developed a Generative AI tool that fits the needs of your agency. While no government funds can directly be provided under this approach, collaborating with private sector entities can provide access to cutting-edge Generative AI technologies and expertise, offering a path to innovative solutions. For more information, please review 15 USC 3710a: Cooperative research and development agreements.
Interagency Agreement
Interagency agreements are authorized by the Economy Act, which allows agencies to enter into agreements to obtain supplies or services from another agency. This approach may be most practical when another agency already has a Generative AI solution or solution set that meets your needs and can make that solution available to additional agencies. For more information, please review FAR Subpart 17.502-2, which provides for executing Economy Act transactions, and 31 U.S.C. 1535: Agency Agreements.
Other Transaction Authority (OTA)
Some agencies have the authority to enter into a non-FAR based legal agreement called an Other Transaction Authority (OTA) that leverages commercial buying practices. Typically, OTAs are used to procure state-of-the-art technology solutions from non-traditional vendors. OTAs generally can be used for research and development activities, prototyping and, in certain cases, production of prototypes. For Generative AI solutions, this may be a good way to help a non-traditional vendor modify a commercial product to government specifications, reach smaller or niche companies with specific generative AI expertise, or more quickly develop an agency-specific Generative AI capability. Your agency must have specific statutory authority to use OTA arrangements.
Commercial Solutions Opening (CSO)
CSOs are similar to Broad Agency Announcements (BAAs) which are used for Research and Development (R&D); however, CSOs are set up for innovative commercial solutions. Typically a phased acquisition approach is used where a problem statement is provided by the government through a Statement of Objectives (SOO) and vendors submit white papers on their approach to solve the problem. After a technical evaluation and based on funding requirements, prototypes and simulations are funded and potentially follow-on contracts. Consider using a CSO to generate new ideas and concepts and obtain an innovative solution using generative AI when shorter evaluation timelines and rapid selection of innovative commercial solutions are necessary and when proposals with varying technical/scientific approaches are anticipated. Your agency must have specific statutory authority to use CSO arrangements.
Prize Competitions
One way that agencies can begin to explore harnessing Generative AI is through prize based competitions. Challenge.gov is a central hub for prize competitions across the U.S. federal government. Through this platform and others, federal agencies can engage public solvers in prize competitions and crowdsourcing campaigns to identify innovative solutions to important issues. A prize competition may be appropriate if you are hoping to engage a diverse set of solvers, and if you would prefer to reward research outcomes rather than fund research inputs. You may wish to consider using a Prize Competition if you can:
- clearly explain the problem to be solved,
- outline the parameters of a successful outcome, and
- determine how to measure the submissions and declare a winner.
5. Specialized Computing Infrastructure for Generative AI
5.1 Overview
Generative AI tools, like all software, run on computers. Because Generative AI tools are so calculation-heavy, they require computers that are larger, more sophisticated and more powerful. These computers can look like massive “supercomputers” similar to those located at government national laboratories. They can also look like very powerful “server” computers that sit as small clusters in an agency’s IT center or as groups of hundreds of servers sitting in large data centers.
If your agency is accessing Generative AI tools through a cloud-based platform, there may be fewer concerns about computing power. But if your agency is attempting to create its own models and needs its own dedicated computers, or has security needs that require it to own and operate its own computers, it may need to set up specialized computing infrastructure for Generative AI.
Please note: Creating secure, stable and useful specialized computing infrastructure requires significant effort and expenses. Think carefully before making such investments. Consider reaching out to other agencies to see if they have infrastructure that your own agency can leverage or if the software models you intend to build have already been built elsewhere and you can access them.
Components
Specialized Computing Infrastructure can be thought of as high-performance computers, powerful chips, software, networks and resources made specifically for building, training, fine-tuning, testing, using and maintaining artificial intelligence applications. Computing infrastructure can be on-premise, cloud-based or a combination of both.
Specialized Computing Infrastructure tends to have:
- Advanced Computer Chips that can run multiple, complex calculations simultaneously and coordinate multiple functions at once.
- Scalable Processing Power to quickly access additional chips if even more computational power is needed.
- Enhanced Data Storage and Memory to save, access and manage large sets of information.
- Fast Networking to transfer data at high-speeds among systems with minimal errors and little delay or “latency.”
- Sophisticated Software Tools that engineers can use to write computer code, program algorithms and adjust models.
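A rough back-of-envelope calculation shows why the networking bullet matters at AI data volumes. The figures below are illustrative assumptions for the example, not benchmarks or requirements.

```python
# Rough illustration of why network bandwidth matters for AI workloads.
# Dataset sizes and link speeds are assumptions, not benchmarks.

def transfer_hours(dataset_tb: float, bandwidth_gbps: float) -> float:
    """Hours to move a dataset at a given sustained link speed."""
    bits = dataset_tb * 1e12 * 8            # terabytes -> bits
    seconds = bits / (bandwidth_gbps * 1e9)  # bits / (bits per second)
    return seconds / 3600

# Moving a 100 TB training set over a sustained 10 Gbps link takes
# roughly 22 hours; a 100 Gbps link cuts that to about 2.2 hours.
print(round(transfer_hours(100, 10), 1))
```

Even this simplified view (it ignores protocol overhead and congestion) makes clear why high-bandwidth, low-latency networking is a core component rather than an afterthought.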
Chips and Computers for Generative AI
The computer chips that Generative AI models rely on to run calculations are powerful and expensive. Only a few companies design them and only a few companies manufacture them. Businesses looking to increase their specialized computing infrastructure are spending millions of dollars on the chips and computers that they can use to run their own Generative AI applications.
Computer manufacturers have started releasing AI chip-enabled servers, workstation PCs, desktops and laptops. The hardware is optimized with specialized GPU chips and larger amounts of RAM to run models directly on the device without having to send the queries to a data center or through a cloud provider.
These computers may be a good fit for Generative AI tools that run off of smaller models or smaller data sets.
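The line between "smaller" and "larger" models is often simply device memory. As a hedged illustration (the parameter counts and precisions below are assumptions chosen for the example, not product figures), the memory needed just to hold a model's weights can be approximated as parameters multiplied by bytes per parameter:

```python
# Approximate the memory needed just to hold a model's weights.
# Parameter counts and precisions below are illustrative assumptions.

def weight_memory_gb(parameters_billions: float, bytes_per_parameter: float) -> float:
    """Gigabytes for weights alone (excludes activations, caches, overhead)."""
    # billions of parameters * bytes each, expressed directly in GB
    return parameters_billions * bytes_per_parameter

# A 7-billion-parameter model at 16-bit precision (2 bytes/parameter)
# needs roughly 14 GB for weights; quantized to 4-bit (0.5 bytes/parameter)
# the same model needs roughly 3.5 GB, which is why smaller or quantized
# models can fit on a single workstation GPU.
full_precision = weight_memory_gb(7, 2)   # 14.0 GB
quantized = weight_memory_gb(7, 0.5)      # 3.5 GB
```

Actual requirements run higher once activations and working memory are included, but this estimate is a useful first screen when judging whether a model can run on-device.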
5.2 Needs Assessment
Building specialized computing infrastructure for Generative AI can be a long-term undertaking that involves significant up-front investment before delivering results. A thorough assessment of the organization's Generative AI needs, and of the specialized computing infrastructure required to support them over the long term, could include questions such as:
- What level of processing power, data storage, connectivity and software are necessary to support the agency’s Generative AI initiatives?
- What specific functions can cloud providers or specialized vendors offer and what can they not offer that an agency has to build itself?
- What federal regulations, policies, standards and other relevant requirements (security, privacy, etc.) can only be met with an agency building its own infrastructure given the data and Generative AI tools used?
- What costs could grow rapidly unless the agency built its own specialized computing infrastructure?
- What time savings and quality improvements could be realized if the agency built its own specialized computing infrastructure?
- What risks would be best mitigated by the agency having its own specialized computing infrastructure?
- What capabilities would we need to be able to configure, operate and manage the specialized computing infrastructure we’ve identified?
5.3 Cost Estimates
A civilian agency might want to acquire servers enabled with AI chips to run focused tests. A few agencies with more specific needs and demands may need to invest more heavily in their own specialized computing infrastructure for Generative AI. Specialized computing infrastructure for Generative AI is similar to other high-performance computing environments in many respects.
The most critical decision in determining cost categories is whether the infrastructure project leans more toward "software" or "hardware": that is, whether it will use dedicated cloud-based processing or rely on multiple dedicated servers with AI chips, along with the associated storage, networking and software, physically on premises in a government-owned or -operated data center.
5.3.1 Initial questions to help estimate costs
- Will your specialized computing infrastructure for generative AI be internal facing or public facing? The more potential users, data storage needs and traffic, the greater the costs.
- Will your infrastructure be developed, modified, or does it already exist? Facilities can be expanded or modified; brand new construction or major renovation will add to the costs.
- Who will own, maintain and be responsible for the infrastructure? Maintenance teams, physical security, supplies and additional discrete operating costs add to the overhead.
- Where will your infrastructure be located (physically and virtually)? If an agency doesn’t have the expertise, land and budget to build the facilities a Cloud based approach may be more in line with the longer term vision for the infrastructure.
- How will your data be accessed and from where? Individuals accessing and working with Generative AI tech from different locations will drive up ancillary hardware, software, networking and security costs.
- Will the data be transportable between hardware or IaaS providers if there is a need to change either? Understanding the portability of your data will help in longer term planning for future work.
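One way to frame several of the questions above is a simple break-even comparison between recurring cloud spend and an on-premises build. The dollar figures in this sketch are placeholders chosen for illustration, not market data or cost guidance:

```python
# Hedged illustration: when does an on-premises build pay off versus
# recurring cloud spend? All dollar figures are placeholder assumptions.

def months_to_break_even(onprem_upfront: float, onprem_monthly: float,
                         cloud_monthly: float) -> float:
    """Months until cumulative on-prem cost drops below cumulative cloud cost.
    Returns infinity if the cloud option is always cheaper."""
    savings_per_month = cloud_monthly - onprem_monthly
    if savings_per_month <= 0:
        return float("inf")
    return onprem_upfront / savings_per_month

# Example: a $1.2M build with $50k/month operations, versus $150k/month
# for an equivalent cloud footprint, breaks even at 12 months.
print(months_to_break_even(1_200_000, 50_000, 150_000))
```

Real comparisons must also weigh hardware refresh cycles, staffing, facilities and security costs, but even this crude model helps structure the conversation with technical and program counterparts.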
5.3.2 Common Cost Categories
Contracting officials should closely consult with their technical and program counterparts to identify and understand the requirements as well as key cost drivers for Generative AI-specialized computing infrastructure acquisitions. Some common cost categories applicable to specialized computing infrastructure include:
Subject Matter Experts and Support Contractors
- Labor and skills to design, implement, secure and operate Generative AI systems
- Trainers and guides
- Training materials, documentation, and certification requirements
Hardware
- High-performance GPUs for core calculations and compute
- CPUs to manage the overall system and perform less data intensive tasks
- Large, fast memory for handling complex models and datasets
- Modular, scalable units for accommodating future growth
- Fast, scalable storage solutions for data sets, models, and backups
- High-bandwidth networking for data transfer
Software Licensing
- Operating systems tailored for high-performance computing
- Generative AI frameworks and libraries
- Software security and performance updates
- Integrated development environments (IDEs), compilers, and debugging tools
Cloud Services and Management
- Subscriptions for cloud computing platforms offering scalable AI infrastructure
- Management fees when accessing cloud services through a value-added reseller (VAR)
Facilities / On-premises Infrastructure
- Configuration and construction costs for new hardware installation
- Higher capacity power/energy systems
- Additional or specialized cooling systems
- Maintenance, support, repairs, replacements and updates
Security and Other Safeguards
- Zero Trust Architecture (ZTA) security measures to protect against cyber threats
- Encryption and access controls to secure sensitive data
- Network security to protect data in transit and at rest from unauthorized access
- Audits to identify potential risks (technical, operational, compliance)
- Risk mitigation strategies, which may involve additional investments in security measures, privacy safeguards, or compliance certifications.
Workforce
- Recruitment and retention of AI employees (including special pay and incentives)
5.4 Compliance
When acquiring specialized computing infrastructure for Generative AI applications, federal agencies must navigate a unique set of considerations beyond standard procurement practices. Adherence to executive orders, federal regulations, laws, and policies ensures that Gen AI infrastructure aligns with national security interests, data protection requirements, and ethical standards.
Technical leaders, CIOs, CISOs, SAOPs, and other senior leaders and executives are responsible for ensuring agency networks and infrastructure are in compliance. High level overviews of these responsibilities can be found in the CIO Handbook and the CISO Handbook. Look for more specific guidance from relevant agency officials as part of your planning process.
Acquisition professionals also need to be aware of rules and policies related to cybersecurity, privacy, and supply chain management when purchasing specialized computing infrastructure. A few recent ones include:
Section 889
FAR Modifications to Part 4.2102(a)(2) implemented Section 889 of the FY19 National Defense Authorization Act (NDAA). This clause prohibits the purchase of certain networking and telecommunications equipment. Networking and telecom equipment is essential to any specialized computing infrastructure. Any agency looking to purchase specialized computing infrastructure to support Generative AI should ensure that they avoid these prohibited products. More info can be found on Acquisition.gov’s Section 889 page.
Section 5949
In 2022, the FY23 NDAA was signed into law. Section 5949 includes future prohibitions (to take effect in December 2027) on purchasing or renewing certain Semiconductor Products and Services. With advanced AI chips, GPUs and other semiconductors at the heart of any Generative AI specialized computing infrastructure, understanding where such components are made is critical for any such acquisition. A FAR case is pending, but the statutory language is available on Congress' information page on the NDAA.
Trade Agreements Act (TAA)
The Trade Agreements Act restricts covered acquisitions to U.S.-made or designated country end products. Because advanced AI chips, GPUs and other semiconductors are manufactured in a limited number of countries, understanding where such components are made is critical for any specialized computing infrastructure acquisition. FAR Part 52.225-5 contains the appropriate contract clause and list of designated countries.
DFARS 239.7602-2 or HSAR 3052.204-72
Given the sensitive nature of data used in AI applications, considerations around data storage and processing locations are paramount. Federal regulations require that certain data be stored within the United States or in specific jurisdictions to comply with security and privacy standards.
NIST Special Publication (SP) 800-53
NIST SP 800-53 provides a catalog of security and privacy controls employed within systems and organizations, and a methodology for selecting and assessing those controls within an effective risk management framework. Vendors providing specialized infrastructure must be in compliance with the controls and testing outlined in the standard.
Executive Orders
- 14028, Improving the Nation’s Cybersecurity
- 14105, Addressing US Investments in Certain National Security Technologies and Products in Countries of Concern
- 14093, Prohibition on Use by the United States Government of Commercial Spyware that Poses Risks to National Security
- 13873, Securing the Information and Communications Technology and Services Supply Chain
OMB Memoranda
- M-24-04, FY24 Guidance on Federal Information Security and Privacy Management Requirements
- M-23-18, Administration Cybersecurity Priorities for the FY2025 Budget
- M-23-16, Update to Memorandum M-22-18 Enhancing the Security of the Software Supply Chain through Secure Software Development Practices
- M-22-09, Moving the US Government Towards Zero Trust Cybersecurity Principles
- M-21-30, Protecting Critical Software through Enhanced Security Measures
A full list of OMB Memoranda and a complete listing of Executive Orders are available for more information.
6. Data Dashboard
Executive Order 14110 directs agencies to promote transparency and encourage data-informed decision making.
Data was derived from over 30 million contract actions in the Federal Procurement Data System and is structured to help spot trends and patterns.
It enables government employees to visualize and analyze the landscape of federal Generative AI and Specialized Computing Infrastructure acquisitions and fosters insights into market dynamics, spending patterns, and acquisition strategies.
7. Agency Support and Resources
A wealth of resources, tools, and support is available from various agencies and platforms to support AI acquisition.
Executive Office of the President
Current Departmental AI Strategies (April 2024)
Department of Justice
Artificial Intelligence Strategy for the U.S. Department of Justice
Department of Defense
Data, Analytics, and Artificial Intelligence Adoption Strategy
Department of Energy
DOE AI Risk Management Playbook
Department of Homeland Security
Department of Health and Human Services
HHS Artificial Intelligence (AI) Strategy
Department of State
Department of State Artificial Intelligence (AI) Strategy
Department of Veterans Affairs
Joint Guidance on Deploying Systems Securely
Acquisition Resources
IT Vendor Management Office
Home | IT Vendor Management Office (gsa.gov)
TechFAR Hub
TechFAR Hub
Tradewinds Responsible AI Toolkit
RAI Tools List
Periodic Table of Acquisition Innovations
Periodic Table of Acquisition Innovations (PTAI)
Section508.gov
Section 508
Ways to contribute
Send feedback
We welcome feedback on the guide and questions about procuring Generative AI. Engage with us using our feedback form or the other options below.
Contribute code
Engage here on the ITVMO's GitHub, submit pull requests and share your experiences.
Email us
Send an email to genai@gsa.gov to connect directly with our team.